Depression and Anxiety as Predictors of Perceived Quality of Life in Breast Cancer Survivors

INTRODUCTION
Breast cancer is one of over a hundred types of cancer1. Breast cancer occurs mainly in women, but men can also be affected2. According to the National Cancer Institute, a cancer survivor is an individual who survives from the time of diagnosis until he or she regains the balance of his or her life; caregivers, family members and friends are also affected by the survivorship experience3. The survival rate of women with breast cancer is very low in Pakistan4. Depression and anxiety are common psychological consequences of cancer5. Research has indicated a significant negative effect of depression on Quality of Life (QoL) in cancer patients6. Extended, abnormal and severe anxiety reactions affect patients' normal functioning and social life7. Breast cancer has a great effect on a woman's QoL and self-esteem. Ferrans defines QoL as an individual's sense of well-being that originates from satisfaction or dissatisfaction with the aspects of life that are important to him or her8. There is extensive literature suggesting a crucial role of psychological comorbidities in influencing patients' QoL. One study suggested that chemotherapy and tamoxifen are significant predictors of poorer QoL9. Another study concluded that lower income and education, unemployment and poor health care services were related to QoL10. Age also affects QoL, as older breast cancer survivors (BCS) reported lower QoL, and a higher level of education appeared to reduce depression11. Depression and anxiety have been studied in BCS as significant predictors of quality of life and as major psychological conditions12. The literature indicates that BCS in Pakistan experience elevated levels of anxiety and depression, and that poor QoL is related to depression in BCS13. The objective of the study was to investigate the role of depression and anxiety in influencing perceived QoL in BCS, in order to understand the psychological consequences of a life-threatening illness. The study also aimed to examine the role of socio-demographic factors in influencing depression and anxiety levels among BCS. The study further explored whether there is a reciprocal/causal relationship between depression, anxiety and QoL, since studying QoL in BCS may help to identify the long-term impacts of breast cancer and to improve the QoL of BCS in the future.

MATERIALS & METHODS
The present study followed a cross-sectional, within-group research design. The sample comprised 52 female BCS from INMOL and Jinnah Hospital, Lahore, recruited after fulfilling the required formal ethical procedures. Purposive sampling was used to recruit participants according to the following inclusion/exclusion criteria: participants on regular follow-up at public sector hospitals, at least one year post successful chemotherapy, recruited as referrals from medical professionals. No age range was set, in order to include a diverse population. Patients with complications were excluded after being screened by the doctors.
Patients with minimum basic schooling equivalent to primary school level, who could read and write, were included in the study. BCS with any recurrence of cancer or any other type of cancer were excluded. The study was conducted over a period of two months, from 1 March 2014 to 7 May 2014. The individual assessment was conducted in one session, in the absence of family members, when BCS visited the outpatient department of the hospital for their follow-up. The measurements were self-report questionnaires, which were also translated into Urdu for those who could not speak or comprehend English. Demographic and medical information was taken from the participants' medical reports. All formalities and ethical considerations were fulfilled, including the participant consent form and information sheet. The 14-item Hospital Anxiety and Depression Scale (HADS), developed by Zigmond and Snaith, was used to measure depression and anxiety. It has two subscales, depression (7 items, α = 0.70) and anxiety (7 items, α = 0.74). All items are rated on a 4-point scale; some items are scored from 0 to 3 and others from 3 to 0 (reverse-scored). For each subscale, scores range from 0 to 21, with higher scores indicating a higher level of distress. The total score is obtained by adding the two subscales; high scores indicate high depression and anxiety14. In the present study the reliability of the HADS was .72. The Quality of Life Index (QLI) Cancer Version III by Ferrans and Powers was used; it measures importance of and satisfaction with various areas of life. All items are rated on a 6-point scale (1-6)8. Scores range from 0 to 30 and are calculated by weighting each satisfaction item by its corresponding importance item15. A total score and four subscale scores are calculated. The reliability of the QLI in BCS has been reported as .95 and .9716. In the present study the reliability of the QLI was .89. All procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2008. Informed consent was obtained from all patients included in the study.
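To make the instrument scoring described above concrete, the following is a minimal sketch in Python. The item values are illustrative, the reverse-scored HADS item positions are assumptions (the paper does not list them), and the 3.5 and 15 centring constants follow the published QLI scoring procedure rather than anything stated here.

```python
# Minimal sketch of the HADS and QLI scoring described above.
# Assumptions (not given in the paper): which HADS items are reverse-scored,
# the illustrative item values, and the 3.5/15 centring constants of the
# published QLI scoring procedure.

def hads_subscale(items, reverse=()):
    """Sum 7 HADS items (each 0-3); reverse-scored item positions are flipped (3 - value)."""
    return sum(3 - v if i in reverse else v for i, v in enumerate(items))

def qli_score(satisfaction, importance):
    """Weight each satisfaction item (1-6) by its importance item (1-6); rescale to 0-30."""
    weighted = [(s - 3.5) * w for s, w in zip(satisfaction, importance)]
    return sum(weighted) / len(weighted) + 15

anxiety = hads_subscale([2, 1, 3, 0, 2, 1, 2], reverse=(1, 4))   # 0-21 per subscale
depression = hads_subscale([1, 1, 2, 0, 1, 0, 1], reverse=(2,))
total_distress = anxiety + depression                             # 0-42 overall
qol = qli_score([5, 4, 6, 3] * 5, [6, 5, 4, 6] * 5)               # 20 illustrative item pairs
print(anxiety, depression, total_distress, round(qol, 2))
```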
RESULTS
Descriptive statistics of the demographic characteristics of BCS showed that the mean age of participants was 47.23 years (SD = 9.21). Regarding marital status, 76% of participants were married; 26% of BCS had three children, 48% were educated to primary level and 98% were unemployed. Fifty-seven percent of participants belonged to urban areas, 65% lived in a nuclear family system and 55% were not satisfied with their financial condition. The descriptive statistics of medical factors indicated that the mean age of onset of breast cancer was 43.46 years (SD = 9.37) and the mean age at breast cancer diagnosis was 43.67 years (SD = 9.39). Eighty-eight percent of BCS had a duration of illness and treatment of 1-4 years; 38% had stage 1 and 34% had stage 2 breast cancer. Eighty-eight percent of survivors visited the hospital for follow-up every 1-6 months, and 73% of survivors came for regular follow-up during the 1-2 years immediately after the completion of chemotherapy. Thirty percent of BCS had hypertension and 15% had both hypertension and diabetes as comorbidities. Fifty-eight percent of BCS used tamoxifen during the survivorship period.
The levels of depression and anxiety among breast cancer survivors are presented in Table 1. The results in Table 2 indicated significant negative correlations of depression and anxiety with overall QoL and with the health and functioning (HF), psychological/spiritual (PSP), social and economic (SOC) and family (FAM) domains, which means that depressed and anxious BCS tended to be less satisfied with their QoL, or vice versa. There was a significant positive relationship between depression and anxiety, indicating that anxious BCS are more likely to be depressed, as depression and anxiety are often comorbid. The significant positive correlation between overall QoL and its domains (HF, SOC, PSP, FAM) showed that BCS tended to be more satisfied with their overall QoL if they were satisfied with their HF, SOC, PSP and FAM life. The results of the hierarchical multiple regression analyses are presented in Table 3. The analysis (Table 3) indicated that number of children and level of education were the strongest significant predictors of depression among all demographic variables, suggesting that a low level of education and fewer children make BCS more depressed. Marital status, monthly income, number of dependents and financial satisfaction did not emerge as significant predictors of depression. The analysis also indicated that overall QoL and the HF, SOC, PSP and FAM domains did not predict depression or anxiety among BCS. Table 4 shows the multiple linear regression analysis carried out to explore whether depression and anxiety predict quality of life. These results (Table 4) revealed that depression and anxiety emerged as significant predictors of overall QoL, HF and PSP, meaning that lower levels of anxiety and depression are associated with higher QoL. Depression and anxiety did not emerge as significant predictors of SOC and FAM well-being.

DISCUSSION
The current research was conducted to investigate depression and anxiety as predictors of QoL and to explore whether there is a reciprocal association among depression, anxiety and QoL. The findings suggested that depressed and anxious BCS tended to be less satisfied with their QoL, or vice versa. These findings are consistent with earlier research13,17-19 suggesting that anxiety and depression are negatively related to QoL among BCS. The present study indicated that BCS, one year after chemotherapy, experienced mild to moderate depression, normal to mild anxiety, and improved QoL. These results are similar to earlier findings17. One reason for these consistent findings may be the time since chemotherapy: the immediate psychological consequences of chemotherapy (especially depression and anxiety) can diminish with the passage of time. Many factors (e.g. physical activity, exercise, social support) can contribute to reducing depression and anxiety levels over time. A previous study concluded that BCS who increased their physical activity level after diagnosis had better QoL and less intense depression20. A meta-analysis concluded that exercise leads to a small improvement in depressive symptoms21. Another study concluded that BCS with better social support had improved depressive symptoms and better QoL22. The findings of the present research indicated that number of children and education level were the strongest significant predictors of depression among BCS. These findings are similar to earlier research11,23-25.
The current study showed that age, marital status, occupational status, monthly income and financial satisfaction did not predict depression among BCS. The results further revealed that there was no demographic predictor of anxiety among BCS. These results are similar to previous research26. The current research indicated that QoL did not predict anxiety and depression; these results are inconsistent with the literature, as one study concluded that QoL has negative outcomes such as anxiety and depression in BCS27. The present study indicated that anxiety and depression significantly predicted overall QoL and HF and PSP well-being. These findings are in line with earlier work suggesting that anxiety and depression can be considered indicators of QoL28. Therefore, the present study could not establish a bidirectional relationship between anxiety, depression and QoL. According to the Wilson and Cleary model, there is a reciprocal relationship between anxiety, depression and QoL, but it is difficult to say which is the predictor and which the outcome; there is ongoing debate on this issue29.

CONCLUSION
The study found that anxiety and depression are significant psychological issues that need to be screened and identified for formal assessment and management in any life-threatening chronic condition. Anxiety and depression can have a considerable negative influence on the overall QoL of BCS and significantly affect their psychological well-being. Efforts need to focus on adding quality, and not just quantity, to life.
Mapping epigenetic modifications on chicken lampbrush chromosomes

The epigenetic regulation of the genome is crucial for implementation of the genetic program of ontogenesis through establishing and maintaining differential gene expression. Mapping various epigenetic modifications onto the genome is therefore relevant for studying the regulation of gene expression. Giant transcriptionally active lampbrush chromosomes are an established tool for high-resolution physical mapping of the genome and its epigenetic modifications. This study is aimed at characterizing the epigenetic status of compact chromatin domains (chromomeres) of chicken lampbrush macrochromosomes. The distribution of three epigenetic modifications – 5-methylcytosine, histone H3 trimethylated at lysine 9 and hyperacetylated histone H4 – along the axes of chicken lampbrush chromosomes 1–4, Z and W was analyzed in detail. Enrichment of chromatin domains with the investigated epigenetic modifications was indicated on the cytological chromomere-loop maps of the corresponding chicken lampbrush chromosomes. Heterogeneity in the distribution of 5-methylcytosine and histone H3 trimethylated at lysine 9 along the chromosome axes was revealed. Using certain chromomeres of chicken lampbrush chromosomes 1, 3, 4 and W as examples, we demonstrated that a combination of immunofluorescent staining and fluorescence in situ hybridization makes it possible to relate the epigenetic status of individual chromomeres to their DNA sequence context.

Background
Lampbrush chromosomes are highly extended, transcriptionally active chromosomes that appear at the diplotene stage of meiotic prophase I in growing oocytes of all vertebrate taxa except mammals. Lampbrush chromosomes have a prominent chromomere-loop organization [1][2][3]. Condensed chromatin is accumulated in chromomeres, deoxyribonucleoprotein granules about 1 μm in size, connected in a chain by thin axes of chromatin [4,5]. The average length of the genomic segment packed into a single lampbrush chromomere is estimated as 1.5-2 Mb for chicken [6] and 5-10 Mb for urodele amphibians [5]. Transcriptionally active chromatin is organized in paired lateral loops extending from the chromomeres. Being highly decondensed and enriched with morphological markers, lampbrush chromosomes represent a promising tool for high-resolution physical gene mapping [6]. Individual genes or tandem repeat families can be mapped precisely to certain lampbrush chromomeres or lateral loops by fluorescence in situ hybridization (FISH) according to DNA/DNA and/or DNA/RNA hybridization protocols [7]. Moreover, lampbrush chromosomes allow investigation of the pattern of various epigenetic modifications in both completely decondensed (lateral loops) and condensed (chromomeres) chromatin domains. The general distribution of certain epigenetic modifications on avian and amphibian lampbrush chromosomes has been described previously. Notably, lampbrush chromosomes lack linker histone H1 [8]. An essential marker of transcriptionally active chromatin, hyperacetylated histone H4 (H4Ac), was revealed in the axes of lateral loops, in the areas of contact of lateral loops with chromomeres, and in the chromomere core [3,9]. 5-methylcytosine (5mC) was found to be accumulated in chromomeres and in untranscribed regions of lateral loops [2,10,11].
Other markers of transcriptionally inactive chromatin (histone H3 trimethylated at lysine 9 (H3K9me3) or lysine 27 (H3K27me3)) accumulate in regions of constitutive heterochromatin such as pericentromeric and subtelomeric chromomeres, polymorphic segments of chromosomes [12] and the sex chromosome W [11]. All chromatin modifications studied on lampbrush chromosomes demonstrate a more or less heterogeneous distribution along the chromosome axes, with the exception of histones H4 and H2A phosphorylated at serine 1, which quite homogeneously enrich the majority of lampbrush chromomeres [11]. To date there are no maps illustrating the distribution of epigenetic modifications along the axes of lampbrush chromosomes. The aim of this study was to develop working chromosome maps reflecting the enrichment of 5mC, H3K9me3 and H4Ac in certain lampbrush chromomeres. Lampbrush chromosomes of the domestic chicken (Gallus gallus domesticus) were used as a model for studying the epigenetic status of chromomeres. The genome of the domestic chicken has been almost completely deciphered [13]. Moreover, cytological chromomere-loop maps reflecting the number and size of chromomeres, the intensity of DAPI staining of chromomeres, the average length of lateral loops in a region, and the positions of centromeres and marker structures have been designed for all chicken macrochromosomes [6,14,15] and the largest microchromosomes in the lampbrush form [16]. A number of tandem repeats and BAC clones have been mapped on chicken lampbrush macrochromosomes [6, 14-20]. It is important to note that individual chromomeres can be microdissected to generate chromomere-specific FISH probes. Furthermore, the obtained DNA samples are suitable for sequencing, enabling the genomic position of a chromomere to be defined and its DNA context analyzed [21,22]. Here we mapped the pattern of chromomeric distribution of three epigenetic modifications (5mC, H3K9me3 and H4Ac) onto the corresponding cytological chromomere-loop maps of G. g. domesticus lampbrush chromosomes (GGA) 1-4, Z and W. In addition, we demonstrated that the obtained maps can be applied to relate the DNA sequences of individual lampbrush chromomeres to their epigenetic status.

Results
In general, immunofluorescent staining revealed predominant enrichment of all three studied epigenetic modifications (H4Ac, H3K9me3 and 5mC) in chromomeres brightly stained with DAPI (hereinafter referred to as DAPI-positive chromomeres) (Figs. 1, 2, 3, 4, 5 and 6). H4Ac demonstrated a punctate distribution pattern on lateral loops and in the areas of contact of lateral loops with chromomeres, as expected for a marker of open chromatin. In lampbrush chromosome axes, H4Ac was enriched on the surface of certain chromomeres, predominantly DAPI-positive ones. In contrast, H3K9me3 was nearly undetectable in the axes of lateral loops but was enriched in lampbrush chromomeres. Anti-H3K9me3 antibodies showed a heterogeneous staining pattern along lampbrush chromosome axes, with the brightest labeling in the q-terminus of chromosome Z (Fig. 5 b-b′′′) and all chromomeres of GGAW (Fig. 6 b-b′′′). Immunostaining with antibodies against 5mC revealed its enrichment in the majority of DAPI-positive chromomeres and a minor content along lateral loop axes as well as at the loop bases. The distribution pattern of 5mC mostly matched that of H3K9me3; vivid examples are the chromomeres of GGAW (Fig. 6 b-c′′′).
A few chromomeres and/or clusters of chromomeres faintly stained with DAPI (hereinafter referred to as DAPI-negative chromomeres) demonstrated enrichment with one, two or all three epigenetic modifications. The staining pattern was reproducible, reflecting the association of the histone modifications and DNA methylation with defined genomic regions during the lampbrush stage of oogenesis.

Cytological maps and description of the distribution of H4Ac, H3K9me3 and 5mC in chromomeres of chicken lampbrush chromosomes 1-4, Z and W
The distribution of H4Ac, H3K9me3 and 5mC along the axes of chicken lampbrush chromosomes 1-4, Z and W was indicated on the corresponding cytological chromomere-loop maps reflecting the DAPI-staining pattern and the average length of lateral loops [6,15]. In the following descriptions, certain chromosomal regions, clusters and individual chromomeres are specified by numbers, marked both in the microphotographs and on the maps.

GGA4
The majority of DAPI-positive chromomeres of GGA4 demonstrated bright labeling with anti-H4Ac, anti-H3K9me3 and anti-5mC (Fig. 4). These include DAPI-positive chromomeres of the p-arm (1), the cluster of chromomeres surrounding the centromere (2) and terminal clusters of DAPI-positive chromomeres in region 4 of the q-arm with short lateral loops. The exceptions were two clusters of DAPI-positive chromomeres in region 4, which were less brightly stained with anti-H3K9me3 (Fig. 4 b-b′′′). The majority of DAPI-negative chromomeres forming an extended cluster on the q-arm (3) were almost depleted of H4Ac and H3K9me3 (Fig. 4 a-b′′′); certain chromomeres in the distal half of this region were enriched with 5mC (Fig. 4 c-c′′). Bright anti-5mC labeling was also revealed in the subterminal cluster of DAPI-negative chromomeres of the q-arm (Fig. 4 b-c′′).

GGAZ
A pair of prominent DAPI-positive chromomeres on the distal part of the GGAZ p-arm (1) was brightly labeled with antibodies against H4Ac and H3K9me3; 5mC was revealed only in the distal chromomere of the pair (Fig. 5 a-a′′′, c-c′′). DAPI-negative chromomeres at the centromere region, as well as a proximal DAPI-positive chromomere of the q-arm (5) and a large cluster of DAPI-positive chromomeres (6), were enriched with all three epigenetic modifications (Fig. 5). A cluster of DAPI-negative chromomeres between chromomeres 5 and 6 was enriched only with H4Ac (Fig. 5 a-a′′′). The terminal region of the q-arm occupied by the Z-macrosatellite (7) was highly enriched with H3K9me3 and 5mC (Fig. 5 b-c′′); certain DAPI-negative chromomeres of this region were also enriched with H4Ac (Fig. 5 a-a′′′).

Epigenetic status of individual chromomeres
To relate the epigenetic status of chromatin to the DNA sequence context of lampbrush chromomeres, we combined immunofluorescence and FISH with seven DNA probes obtained by mechanical microdissection of individual chromomeres (Fig. 7, Supplementary Fig. 2, 3). The DNA sequences of the microdissected samples had previously been deciphered, mapped to particular genomic regions and bioinformatically analysed [21,22]. The first chromomere analyzed was a large (about 5 Mb in size), DAPI-positive, marker chromomere on the q-arm of GGA1, for which DNA probe #16-16 was obtained [21]. Immuno-FISH with the DNA probe #16-16 revealed that this chromomere combines conflicting epigenetic modifications: it was highly enriched with both H4Ac (Fig. 7 a-e) and H3K9me3 (Fig. 7 f-j) and, according to the established map of 5mC distribution, contained highly methylated DNA (Fig. 1 c-c′′).
According to the genome context analysis, this chromomere is gene-poor and enriched with repetitive sequences, including simple repeats and dispersed retrotransposons such as the CR1 repeat of the LINE family [21]. Juxtaposition of the DNA sequences of chromomere #16-16 with large-scale chromatin compartments (A/B-compartments), identified by the Hi-C technique in the interphase nucleus of chicken embryonic fibroblasts [27], demonstrated that they belong to the B-compartment [22]. A combination of H3K9me3 and 5mC enrichment with H4Ac depletion was revealed in two individual chromomeres analyzed by immuno-FISH: a large DAPI-positive chromomere (about 5 Mb) bearing a marker lumpy loop on GGA3q (LL3) (DNA probes #16-6 and #16-4) [21] and a small DAPI-negative chromomere (2.4 Mb in size) on GGA4q (DNA probe #17) [22] (Supplementary Fig. 2, 3 f-j). The two DNA probes #16-6 and #16-4, obtained by microdissection of the LL3-bearing chromomere on GGA3q, were hybridized after immunodetection of H4Ac and H3K9me3, respectively. Both DNA probes hybridized to the LL3-bearing chromomere and to two to three neighboring chromomeres (Supplementary Fig. 2). The LL3-bearing chromomere demonstrated depletion of H4Ac (Supplementary Fig. 2 a-e) and enrichment with H3K9me3 (Supplementary Fig. 2 f-j). On the map of 5mC distribution, the LL3-bearing chromomere is also marked as highly methylated (Fig. 3 c-c′′). FISH with the DNA probe #17 to GGA4q after immunodetection of H3K9me3 revealed moderate (but not bright) labeling of the hybridized chromomere (Supplementary Fig. 3 f-i, f′′′-i′′′). The position of this chromomere on the maps corresponds to a 5mC-rich chromomere that is not enriched with H4Ac (Fig. 4 a′′′, c′′). DNA sequences from the microdissected samples #16-6 and #17 were shown to be enriched with repeats, and 70-75% of them correspond to the B-compartment in the chicken interphase nucleus [21,22]. Thus the gene-poor and repeat-rich chromomeres studied here (GGA1 sample #16-16, GGA3 sample #16-6, GGA4 sample #17) have different epigenetic statuses: closed in chromomeres #16-6 and #17 and mixed in #16-16. Interestingly, samples #16-6 and #17 demonstrate a lower density of interspersed repeats in comparison with sample #16-16. Individual chromomeres identified by three DNA probes (#3, #6 and #18) to GGA4 demonstrated different combinations of epigenetic modifications. The DNA probe #3 hybridized with a group of three to four neighboring DAPI-negative chromomeres near the terminus of GGA4q. The distal chromomere of the group demonstrated bright labeling with anti-H4Ac, whereas the proximal chromomeres were faintly labeled (Supplementary Fig. 3 a-e, a′′-d′′). All hybridized chromomeres map to a highly methylated cluster that is not enriched with H3K9me3 (Fig. 4 b′′′, c′′). DNA sequences of sample #3 occupy a 2.7 Mb region of the GGA4 sequence assembly and demonstrate uneven gene enrichment, which is higher toward the distal part of the region; 70% of the sample #3 DNA sequences were attributed to the A-compartment in the chicken interphase nucleus [22]. Thus the bright anti-H4Ac labeling of the distal chromomere hybridizing with DNA probe #3 correlates with gene enrichment in the genomic region occupied by the DNA sequences of sample #3. The DNA probe #6 to a small DAPI-positive chromomere (1.5 Mb in size) in the terminal part of GGA4q hybridized to a chromomere faintly labeled with H3K9me3 (Supplementary Fig. 3 f-j, f′-i′′). On the maps of epigenetic modification distribution, this chromomere was marked as enriched with both H4Ac and 5mC (Fig. 4 a′′′, c′′).
The DNA probe #18 to a DAPI-negative chromomere (2.4 Mb in size) from the proximal part of GGA4q [22] hybridized to a single loose chromomere, which could also be stretched into three tiny chromomeres (Supplementary Fig. 3 a-e, a′-d′). Chromomeres hybridized with the DNA probe #18 were not enriched with H4Ac, although the bases of the lateral loops extending from these chromomeres were H4Ac-rich (Supplementary Fig. 3 a, d, a′, d′). According to our maps, chromomeres in this region are not enriched with H3K9me3 but contain highly methylated DNA (Fig. 4 a′′′, c′′). According to the DNA sequence analysis, samples #6 and #18 correspond to regular chromomeres with a mixed genomic context (comprising both gene-rich/repeat-poor and gene-poor/repeat-rich DNA) [22].

Discussion
Here we described in detail the distribution of H4Ac, H3K9me3 and 5mC along the axes of chicken lampbrush chromosomes GGA1-4, Z and W. The brightest chromomeres were mapped onto cytological chromomere-loop maps reflecting the DAPI-staining pattern. One of the most interesting findings is that in chicken lampbrush macrochromosomes the majority of chromomeres brightly stained with DAPI carry modifications of both transcriptionally repressed (5mC and H3K9me3) and active (H4Ac) chromatin. We argue that anti-H4Ac reveals transcriptionally active microloops on the surface of chromomeres. Such small loops forming rosette structures were observed by electron microscopy of Miller spreads of chicken lampbrush chromosomes, as well as after simultaneous immunodetection of H4Ac and the elongating form of RNA polymerase II [3]. We suggest that chromomeres bearing conflicting epigenetic landmarks comprise DNA sequences that generally should be repressed but remain transcriptionally active during the lampbrush stage. Examples of chromomeres combining conflicting histone modifications are the regions surrounding the centromeres in all studied chicken chromosomes except GGA3, and the terminal region of GGAZq occupied by a cluster of the Z-macrosatellite. Using immuno-FISH with the chromomere-specific DNA probes obtained by microdissection [21], we revealed that the marker chromomere #16-16 on GGA1q also combines such conflicting epigenetic modifications. Enrichment of chromomere #16-16 with H4Ac correlates with its high density of retrotransposons, suggesting their potential transcription during the lampbrush stage. Some chromomeres and chromosomal regions are enriched only with the markers of repressed chromatin (5mC and H3K9me3) and depleted of H4Ac. Such an epigenetic status is typical for chromomeres containing certain tandem repeats, for instance the terminal chromomeres of GGA1p/q and GGA2q containing the PO41 repeat; chromomeres containing non-transcribed clusters of the CNM repeat on GGA3q and GGAW (chromomere 4); and chromomeres containing EcoRI- and XhoI-repeats on GGAW (chromomeres 1, 3 and 5). A similar combination of epigenetic modifications was found in the repeat-rich chromomeres #16-6 on GGA3q and #17 on GGA4q.
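The open/closed/mixed classification used above can be made concrete with a small data-structure sketch. The representation below is our own illustrative encoding (field names and classification labels are assumptions, not part of the published maps); the three example chromomeres and their enriched marks are taken from the text.

```python
# Minimal sketch: encoding per-chromomere epigenetic status as described in the text.
# The records use examples from the paper (#16-16, #16-6, #17); the field names
# and classification labels are illustrative assumptions.

chromomeres = {
    "GGA1_#16-16": {"dapi": "positive", "marks": {"H4Ac", "H3K9me3", "5mC"}},
    "GGA3_#16-6":  {"dapi": "positive", "marks": {"H3K9me3", "5mC"}},
    "GGA4_#17":    {"dapi": "negative", "marks": {"5mC"}},
}

ACTIVE_MARKS = {"H4Ac"}
REPRESSIVE_MARKS = {"H3K9me3", "5mC"}

def chromatin_state(marks):
    """Classify a chromomere by the marks it is enriched with."""
    has_active = bool(marks & ACTIVE_MARKS)
    has_repressive = bool(marks & REPRESSIVE_MARKS)
    if has_active and has_repressive:
        return "mixed"     # conflicting marks, e.g. chromomere #16-16
    if has_repressive:
        return "closed"    # e.g. chromomeres #16-6 and #17
    if has_active:
        return "open"
    return "unmarked"

for name, rec in chromomeres.items():
    print(name, rec["dapi"], chromatin_state(rec["marks"]))
```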
Another interesting finding is that the cytosine methylation pattern on chicken lampbrush chromosomes differs significantly from that on mitotic metaphase chromosomes. In the chicken mitotic metaphase karyotype, hypermethylated regions were generally restricted to constitutive heterochromatin [28], whereas in lampbrush chromosomes numerous chromosomal regions were enriched with 5mC. For instance, in metaphase GGA1 only the pericentromeric region is highly methylated [28], while in lampbrush GGA1 clusters of highly methylated chromomeres are also found in many other regions. In the completely heterochromatic metaphase chromosome W, hypermethylated DNA is restricted to the subtelomeric region [28], whereas at the lampbrush stage the DNA of all chromomeres of chromosome W is highly methylated. These observations suggest that meiotic diplotene chromosomes demonstrate a specific DNA methylation pattern different from that of mitotic metaphase chromosomes. On the other hand, the difference in methylation pattern may be due to the dissimilar DNA denaturing conditions used for meiotic lampbrush and mitotic metaphase chromosomes.

Conclusions
Here we described in detail the epigenetic landscape of chicken meiotic chromosomes 1-4, Z and W at the lampbrush stage. On the basis of the established cytological chromomere-loop maps, we developed maps reflecting the distribution of the epigenetic modifications 5mC, H3K9me3 and H4Ac on GGA1-4, GGAZ and GGAW. The developed maps can be used to establish a correlation between epigenetically different chromatin domains and their transcriptional activity, level of compaction and 3D organization in the interphase nucleus. We demonstrated that a combination of immunofluorescent staining and fluorescence in situ hybridization makes it possible to relate the epigenetic status of individual chromomeres to their DNA sequence context. As a probe, one can use DNA from dissected material, probes to repetitive elements, cloned DNA fragments or region-specific oligonucleotide paints.

Fig. 7 (legend). FISH with the DNA probe to an individual marker chromomere on the GGA1 q-arm after immunodetection of H4Ac and H3K9me3. a: immunodetection of H4Ac on GGA1; f: immunodetection of H3K9me3 on a fragment of the GGA1 q-arm; b, g: DAPI staining; c, h: FISH with the DNA probe #16-16 to the individual marker chromomere of GGA1; d, i: merged a-c and f-h images, respectively (immunostaining red, DAPI blue, FISH signals green); a′-d′, f′-i′: enlarged areas of panels a-d and f-i framed on d and f. Arrows indicate FISH signals. Scale bars: a-d, f-i, 20 μm; a′-d′, f′-i′, 10 μm. e, j: maps of H4Ac and H3K9me3 distribution on GGA1, respectively (DAPI-positive chromomeres, white circles; DAPI-negative chromomeres, black circles; DAPI-positive chromomeres enriched with H4Ac or H3K9me3, green circles; DAPI-negative chromomeres enriched with H4Ac or H3K9me3, orange circles). Arrows indicate the position of the chromomere hybridized with the DNA probe #16-16; the dashed line (j) indicates the border of the GGA1 fragment (f-i). CEN, centromere position; TBL, telomere bow-like loops.

Lampbrush chromosome preparations
Chicken lampbrush chromosomes were prepared according to a previously described procedure [29,30] with minor modifications. Oocytes with a diameter of 1 to 2.5 mm were dissected from the ovary and placed in a cooled "5:1" medium (83 mM KCl, 17 mM NaCl, 6.5 mM Na2HPO4, 3.5 mM KH2PO4, 1 mM MgCl2, 1 mM DTT, pH 7.2). Oocytes and nuclei were manipulated under a Leica MZ16 stereomicroscope. To release the nucleus, the oocyte membrane was broken with tungsten needles. The isolated nucleus was washed in a hypotonic "1/4" medium ("5:1" medium diluted 4 times, containing 0.1% formaldehyde and 1 mM MgCl2) and transferred to a slide-mounted chamber filled with "1/4" medium. The nuclear envelope was removed with thin tungsten needles.
Preparations were centrifuged for 20 min at 4000 rpm and +4°C, fixed in 2% formaldehyde in PBS for 20 min, dehydrated in an ethanol series (50, 70%) and left in 70% ethanol overnight at +4°C. The animal studies received approval from the Ethical Committee of Saint Petersburg State University (#131-03-2, 14.03.2016).

Immunofluorescent staining
Immunostaining of chicken lampbrush chromosomes was carried out as previously described [12] with the following primary antibodies: rabbit polyclonal antibodies against H4Ac (06-866, Millipore), rabbit polyclonal antibodies against H3K9me3 (8898, Abcam), and mouse antibodies against 5mC (ab51552, Abcam). The slides were rehydrated in a series of ethanol (50, 30%) and then in PBS for 5 min. To reveal the distribution of 5mC, chromosomes were denatured in 2 M HCl for 60-90 min followed by three 5-min washes in PBS. Preparations were then blocked with 0.5% blocking reagent (Calbiochem) in PBS, or with 1% BSA in PBS in the case of 5mC. Primary and secondary antibodies were diluted in the blocking solution according to the manufacturer's recommendations. The following secondary antibodies were used: Alexa 488-conjugated goat anti-rabbit IgG (Molecular Probes) and Cy3-conjugated goat anti-mouse IgG (Jackson ImmunoResearch Laboratories). All incubations were performed in a humidity chamber at RT for 60 min. After incubation with antibodies, preparations were washed three times in PBS with 0.02% Tween 20 for 5 min at RT. Finally, the slides were dehydrated in an ethanol series (50, 70, 96%) for 5 min, air dried and mounted in antifade solution containing 1 μg/ml DAPI (4′,6-diamidino-2-phenylindole).

Fluorescence in situ hybridization
Fluorescence in situ hybridization (FISH) was performed on selected lampbrush chromosome preparations after immunostaining according to a DNA/(DNA + RNA) hybridization protocol [7]. For PCR-generated DNA probes, the hybridization mixture contained 20 ng/μl DNA probe, 50% formamide, 10% dextran sulfate, 2× SSC (0.3 M NaCl, 30 mM Na3C6H5O7), 1 μg/μl salmon sperm DNA and deionized water; for the oligonucleotide probe, the formamide concentration was decreased to 42%. Chromosomes and DNA probes were denatured simultaneously for 5 min at 78°C, followed by overnight hybridization in a humidity chamber at 37°C for PCR-generated DNA probes or at room temperature for the oligonucleotide probe. After hybridization with PCR-generated DNA probes, the slides were washed in two changes of 0.2× SSC at 60°C; after FISH with the oligonucleotide probe, they were washed in two changes of 2× SSC at 45°C or in three changes of 2× SSC at 37°C. DNA probes labeled with biotin and digoxigenin were detected with Cy3- or Alexa 488-conjugated avidin (Jackson ImmunoResearch Laboratories) and Cy3-conjugated anti-digoxigenin antibody (Jackson ImmunoResearch Laboratories), respectively. Biotinylated anti-avidin (Vector Laboratories) and Cy3-conjugated goat anti-mouse antibody (Jackson ImmunoResearch Laboratories) were used to amplify the hybridization signals of the corresponding DNA probes. After FISH, slides were mounted in antifade solution containing DAPI (1 μg/ml).
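The hybridization mixture composition above lends itself to a small recipe calculator. The sketch below is a minimal illustration that reproduces only the target concentrations stated in the text for PCR-generated probes; the stock concentrations used are assumptions, not values given in the paper.

```python
# Minimal sketch: volumes for the FISH hybridization mix described above
# (20 ng/ul probe, 50% formamide, 10% dextran sulfate, 2x SSC, 1 ug/ul salmon sperm DNA).
# The stock concentrations below are illustrative assumptions.

STOCKS = {                       # stock concentration of each component
    "probe":           200.0,    # ng/ul
    "formamide":       100.0,    # %
    "dextran_sulfate":  50.0,    # %
    "SSC":              20.0,    # x
    "salmon_sperm":     10.0,    # ug/ul
}

TARGETS = {                      # final concentration in the mix, from the text
    "probe":            20.0,
    "formamide":        50.0,
    "dextran_sulfate":  10.0,
    "SSC":               2.0,
    "salmon_sperm":      1.0,
}

def hybridization_mix(final_volume_ul: float) -> dict:
    """Return the volume (ul) of each stock plus water for one hybridization mix."""
    volumes = {name: TARGETS[name] / STOCKS[name] * final_volume_ul for name in TARGETS}
    water = final_volume_ul - sum(volumes.values())
    if water < 0:
        raise ValueError("Stocks are too dilute for this final volume")
    volumes["water"] = water
    return volumes

print(hybridization_mix(20.0))   # e.g. a 20 ul mix per slide
```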
Fluorescent microscopy
The slides were analyzed using Leica DM4000 and/or DM6000 fluorescence microscopes (Leica Microsystems) equipped with a monochrome high-sensitivity digital CCD camera with a resolution of 1.3 megapixels and the appropriate set of filter cubes. The morphology of lampbrush chromosomes was analyzed in phase contrast mode.

Image analysis and mapping of epigenetic modifications
The intensity of the immunofluorescence signals on the obtained microphotographs was evaluated visually, as for the maps reflecting the DAPI staining pattern on chicken lampbrush chromosomes [6]. The most brightly stained chromomeres, consistently demonstrating bright fluorescence, were mapped onto the established cytological chromomere-loop maps of lampbrush macrochromosomes GGA1, GGA2, GGA3, GGA4 and GGAZW reflecting a pattern of
Return to sport after anterior cruciate ligament reconstruction: a qualitative analysis

Abstract
Introduction: Return to sport is a desired outcome in individuals submitted to anterior cruciate ligament reconstruction (ACLR). Objective: To understand the factors that affect return to pre-injury level sport after ACLR from the patient's perspective. Methods: The sample consisted of 29 individuals submitted to ACLR who participated in sport before the ligament injury. This is a narrative analysis with a qualitative approach, using a semi-structured interview as the methodological resource. Standardized instruments were also applied to evaluate psychological readiness to return to sport, via the Anterior Cruciate Ligament – Return to Sport after Injury Scale (ACL-RSI); self-perceived knee function, using the International Knee Documentation Committee (IKDC) subjective questionnaire; and the frequency of participation in sports, with the Marx scale. Results: Analysis of the interviews produced three main themes related to post-ACLR return to sport: self-discipline, fear of reinjury and social support. In the quantitative analysis, the average scores obtained were 59.17 (± 23.22) on the ACL-RSI scale, 78.16 (± 19.03) for the IKDC questionnaire, and 9.62 (± 4.73) and 7.86 (± 5.44) for the Marx scale before and after surgery, respectively. Conclusion: Psychological factors influence the decision to return to sport post-ACLR. Physiotherapists should therefore be aware of the psychological aspects and expectations of patients, and other health professionals may be needed to help prepare these individuals to return to their preinjury sports level and achieve more satisfactory outcomes after ACLR.

Introduction
Sport is no longer viewed simply as a leisure or competitive activity; in recent years it has also become an important strategy for social inclusion and for mitigating problems related to health and education.1 Thus, sport plays a key role in preventing chronic degenerative diseases and antisocial behavior.2 However, it can also lead to injury resulting from mechanical trauma or joint and muscle overuse.3 Anterior cruciate ligament (ACL) rupture is the most prevalent injury in sports that require abrupt changes of direction at high speed or sudden decelerations with high axial loads on the knee.4 After injury, the decision about whether to undergo ligament reconstruction or conservative treatment is influenced by different factors, such as the extent of the injury, degree of instability, physical activity level and the patient's functional needs.5 Many individuals with this injury opt for ACL reconstruction, as opposed to conservative rehabilitation, with the goal of returning to sport.6 Return to preinjury level sport is the desired outcome of patients who undergo physiotherapy to treat these injuries.5 However, a systematic review with meta-analysis demonstrated that only 65% of those submitted to this surgical procedure are able to return to their preinjury level of sport.7 It should be noted that the authors of the meta-analysis did not establish mandatory postoperative physiotherapy as an inclusion criterion. Thus, it is possible that some participants in the studies analyzed had not undergone physiotherapy, representing a limitation for the return-to-sport outcome. Nevertheless, the fact that a considerable portion of patients do not obtain the desired outcome after surgery is important, given the functional aspects of social participation. Patients' perception of their functional status is
one of the psychological factors that seem to influence engagement in sport and can be evaluated using standardized questionnaires developed for this purpose.11 However, studies that investigate the psychological aspects surrounding the return to sport, such as self-reported knee function after surgery, primarily use methods that do not provide an in-depth assessment of these issues.9-11 Likewise, aspects such as lifestyle, employment status and social support have not been explored in detail in the Brazilian population submitted to knee ligament reconstruction. As such, the aim of this study was to identify the barriers, facilitators and meaning of the return to sport from the perspective of patients who undergo ACLR.

Discussion
The aim of this study was to identify the barriers, facilitators and meaning of the return to sport from the perspective of patients who undergo ACLR. The interviews demonstrated that psychological factors influenced the outcome of post-ACLR return to sport, with enjoyment, self-discipline and fear of reinjury cited most often. Fear was identified as a relevant factor in the decisions of both groups (those who returned to sport and those who did not). Social support also played an important role in patients' decisions about returning to sport after surgery. The first theme identified in the interview analysis was self-discipline, considered a significant factor in the decision to return to sport by many of the interviewees. Self-discipline and persistence in pursuing this outcome are evident in the interviews of several patients who returned to sport, as shown in their statements. Despite the fear of reinjury, especially in the early stages of returning to sport, all patients who did so described their struggle to overcome this obstacle. This strategy for dealing with obstacles is known as coping, or active coping.24 An example of this is evident in the following statement: "There is a fear of going through it all over again, of needing more surgery or getting the same injury, but my desire to be active again was greater than the fear." (P8) In a study similar to our investigation, but conducted in Canada, fear was also prominent in the interview statements of both groups, but predominated among those who did not return to sport,25 with 64% of interviewees not returning to their preinjury level. This differs from our findings, where returning to sport was more prevalent. Athletes experience considerable stress when undergoing ACLR, and dealing with this adversity seems to be the most important coping strategy used, as observed by Dias and Fonseca.

Conclusion
In this study, the positive outcome of returning to sport surpassed the small number of individuals who did not return, establishing self-discipline and enjoyment of sport as decisive factors in this outcome. Thus, the results indicate that physiotherapists must be aware of psychological factors and patients' goals, and that other health professionals may be needed to help prepare these individuals to return to their preinjury sports level and achieve more satisfactory outcomes after ACLR.
Another noteworthy aspect regarding the interviews, which does not fall within the themes identified here, was weight gain as a decisive factor in the decision not to return to sport, reported by two patients. Several factors have been proposed to explain successful return to sport after ACLR.8 In recent years, psychological factors have been investigated as possible variables that may help or hinder individuals submitted to ACLR in returning to their preinjury sport level.9 The negative emotions experienced by athletes after injury hamper their rehabilitation, making psychological, social and contextual factors critical to successful rehabilitation.10 Thus, personal psychological factors can also influence this clinical outcome in terms of the individual's return to their preinjury activity level. The interviews were conducted in settings chosen to prevent third-party interference. Patients filled out a form before the interview to provide demographic and clinical data for sample characterization. Data on physiotherapy quality and treatment plan were not collected, because this was not the focus of the proposed qualitative approach. As such, the quality of the physiotherapy treatment received might be one of the factors mentioned by participants, depending on their ability to critically assess and identify barriers and facilitators in the return or not to sport after surgery. Interviewing was halted once new information rarely emerged, confirming saturation. The interviews were recorded and then transcribed in full, with prior authorization from participants, who signed a consent form. Participants were assigned a number (P1, P2, etc.) to protect their identity. The study was approved by the Research Ethics Committee of Minas Gerais State University (protocol number: 2.239.953). The script used for the interview contained ten questions that addressed: 1) what influenced the return or not to sport; 2) how the interviewee felt about the possibility of knee reinjury; 3) coping with the injury/surgery; 4) history of previous severe injuries; 5) the influence of family support; 6) outside pressure to return to sport; 7) the influence of financial status on rehabilitation; 8) advice and guidance from professionals on returning to sport; 9) the role of health professionals monitoring rehabilitation; 10) the rehabilitation process from surgery to the return or not to sport. Three standardized data collection instruments were also used, in order to achieve better sample characterization regarding relevant aspects involved in the return to sport. The instruments were applied before the interview, after participants had filled out the form providing demographic and clinical data. The Anterior Cruciate Ligament – Return to Sport after Injury Scale (ACL-RSI) was used to assess psychological readiness to return to sport after ligament reconstruction.13 It contains 12 items divided into three subscales (emotions, confidence and risk appraisal), with each item graded from 0 to 10. The scores for each item are added and the total converted into a percentage, with the result ranging from 0 to 100. The instrument demonstrates adequate validity, reliability and internal consistency.13,14 The International Knee Documentation Committee (IKDC) subjective questionnaire was applied to evaluate patients' perception of their knee function; it consists of 18 items related to symptoms, daily activities, sports and knee function, with the result converted to a scale from 0 to 100, whereby the higher the score, the better the knee function. The validity, reliability and internal
consistency of the IKDC have been tested and confirmed as adequate.15-17 The Marx scale analyzed the frequency of sports activities before the injury and after ligament reconstruction, at the time of the interview. It was developed to measure how often individuals perform physical activities that are difficult for those with knee pathologies. Scores range from 0 to 16, and activities are divided into four categories: running, changing direction, deceleration and pivoting. The higher the score, the more frequent the individual's participation in sports.18

Data analysis
The quantitative data obtained from the standardized instruments were analyzed by descriptive statistics, via mean and standard deviation. The qualitative data were assessed by content analysis, whereby a set of criteria is used as a guide to identify topics or themes that can be considered a unit of meaning in the text analyzed.19 To establish the discussion, interview data, information from the scientific literature and interpretations of the statements within the themes were triangulated.20

The support received is linked to different mental and physical health outcomes that affect how patients perceive stressful situations and their emotional and psychological well-being. According to Gokeler et al.,10 a person's level of social support modulates the psychological stress that accompanies ACL injury, reconstruction surgery and rehabilitation. This is consistent with patient accounts that highlight the positive influence of social support from family members or friends on their decision to return to sport:
"It was a challenge in my life, but with support from my family and friends, thank God I was able to return." (P9)
"In times like those you really need support, someone to help and encourage you, I think it makes a big difference." (P10)
"My whole life, I made so many friends through soccer and that definitely influenced my decision to go back." (P11)
This motivational factor may be linked to different definitions of social support that emphasize different aspects of interpersonal relationships. In general, social support is defined as any information, spoken or not, and/or material assistance and protection offered by other people and/or groups to those with whom they have regular contact, resulting in emotional effects and/or positive behavior.28
The themes identified here are related to post-ACLR return to sport from the perspective of patients, who do not have the scientific knowledge to critically analyze some factors that may be linked to the outcome analyzed, such as the surgical techniques used or the quality of the physiotherapy received after surgery. This could explain why psychosocial factors were more frequently cited by participants than physical and functional aspects.
... development and improved performance in activities of daily living. Physical exercise and sport promote continuous learning in practitioners when they find meaning in the activity. Some accounts obtained in the present study illustrate this point: "[...] think it's about willpower, about really wanting it, because when you want it, you go after it. It's mind over body; if you want it, if you really enjoy something, then you have to go after it; it's about overcoming ourselves."

Table 1 - Participants' (n = 29) scores on the instruments used. Note: Marx scale preinjury and post-surgery; IKDC = International Knee Documentation Committee; ACL-RSI = Anterior Cruciate Ligament – Return to Sport after Injury Scale; SD = standard deviation.
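As a supplement to the instrument descriptions in the Methods and the scores summarized in Table 1, here is a minimal sketch of the ACL-RSI conversion described above (sum of 12 items, each 0-10, rescaled to 0-100) and of the mean ± SD descriptive statistics; all item and score values shown are hypothetical.

```python
# Minimal sketch of the ACL-RSI conversion and descriptive statistics described above.
# Item and score values are hypothetical; only the sum-to-percentage conversion stated
# in the text is implemented (the official scoring manual may differ in detail).
from statistics import mean, stdev

def acl_rsi_percent(items):
    """12 items, each 0-10; the sum is converted to a 0-100 percentage."""
    assert len(items) == 12
    return sum(items) / 120 * 100

# One illustrative respondent
print(acl_rsi_percent([6, 5, 7, 4, 6, 5, 6, 7, 5, 6, 4, 6]))

# Descriptive statistics (mean +/- SD) as reported for the instruments in Table 1
scores = [59.2, 43.3, 81.7, 35.0, 70.8]   # hypothetical ACL-RSI scores
print(f"{mean(scores):.2f} (+/- {stdev(scores):.2f})")
```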
Modulation of Heme/Substrate Binding Cleft of Neuronal Nitric-oxide Synthase (nNOS) Regulates Binding of Hsp90 and Hsp70 Proteins and nNOS Ubiquitination

Background: Hsp90 and Hsp70 act together as machinery for protein quality control. Results: Both chaperones sense conformational changes (opening/closing) in ligand binding clefts. Conclusion: Hsp90 inhibits substrate ubiquitination and degradation, whereas Hsp70 promotes ubiquitination and degradation. Significance: The opposing effects of the two chaperones can account for the triage of damaged and aberrant proteins.

Like other nitric-oxide synthase (NOS) enzymes, neuronal NOS (nNOS) turnover and activity are regulated by the Hsp90/Hsp70-based chaperone machinery, which regulates signaling proteins by modulating ligand binding clefts (Pratt, W. B., Morishima, Y., and Osawa, Y. (2008) J. Biol. Chem. 283, 22885-22889). We have previously shown that nNOS turnover is due to Hsp70/CHIP-dependent ubiquitination and proteasomal degradation. In this work, we use an intracellular cross-linking approach to study both chaperone binding and nNOS ubiquitination in intact HEK293 cells. Treatment of cells with NG-nitro-L-arginine, a slowly reversible competitive inhibitor that stabilizes nNOS, decreases both nNOS ubiquitination and binding of Hsp90, Hsp70, and CHIP. Treatment with the calcium ionophore A23187, which increases Ca2+-calmodulin binding to nNOS, increases nNOS ubiquitination and binding of Hsp90, Hsp70, and CHIP in a manner that is specific for changes in the heme/substrate binding cleft. Both Hsp90 and Hsp70 are bound to the expressed nNOS oxygenase domain, which contains the heme/substrate binding cleft, but not to the reductase domain, and binding is increased to an expressed fragment containing both the oxygenase domain and the calmodulin binding site. Overexpression of Hsp70 promotes nNOS ubiquitination and decreases nNOS protein, and overexpression of Hsp90 inhibits nNOS ubiquitination and increases nNOS protein, showing the opposing effects of the two chaperones as they participate in nNOS quality control in the cell. These observations support the notion that changes in the state of the heme/substrate binding cleft affect chaperone binding and thus nNOS ubiquitination.

The Hsp90/Hsp70-based chaperone machinery regulates signaling proteins by modulating ligand binding clefts (reviewed in Ref. 2), and these proteins constantly undergo cycles of Hsp90 heterocomplex assembly and disassembly in the cytoplasm and nucleoplasm (1). Two types of cycling with Hsp90 occur. The classical Hsp90 "client" proteins, such as steroid receptors and many protein kinases, form Hsp90 heterocomplexes that are stable enough to be isolated and analyzed biochemically. We call this "stable cycling" with Hsp90, and the turnover of these proteins is stringently regulated by the chaperone (2). Formation of heterocomplexes with Hsp90 inhibits client protein turnover, and treatment with an Hsp90 inhibitor, such as geldanamycin, uniformly triggers client protein degradation (3). Other signaling proteins, such as the nitric-oxide synthase (NOS) enzymes, form Hsp90 heterocomplexes that rapidly disassemble, such that no (or only trace amounts of) Hsp90 heterocomplexes are recovered from cell lysates. We call this "dynamic cycling," and the turnover of these proteins is not as affected by Hsp90 inhibitors as that of the classical client proteins (2).
Degradation of both types of Hsp90-regulated signaling proteins occurs via the ubiquitin-proteasome pathway, which in this case is initiated by Hsp70-dependent E3 ubiquitin ligases, such as CHIP (4) and parkin (5). Ligand binding clefts are hydrophobic clefts that must be open to allow access of ligands, such as steroids or ATP, to their binding sites within the interior of the protein. In the absence of the chaperone machinery, ligand binding clefts are dynamic, shifting to varying extents between the closed and open states. When clefts open, hydrophobic residues of the interior of the protein are exposed to solvent, and continued opening may progress to protein unfolding. Therefore, the extent to which the ligand binding cleft is open determines ligand access and thus protein function, but clefts are inherent sites of conformational instability. We have proposed that the stability of the open state of the cleft is modulated by the Hsp90/Hsp70-based chaperone machinery (2,6), and in this study, we further develop that model. The NOS enzymes, including endothelial NOS (eNOS), neuronal NOS (nNOS), and inducible NOS (iNOS), are signaling proteins whose activity is enhanced by Hsp90 (7-14). These enzymes are cytochrome P450-like hemoproteins that catalyze the conversion of L-arginine to nitric oxide and citrulline by a process that requires NADPH and molecular oxygen (15). NOS enzymes are bidomain in structure, with an oxygenase domain, which contains the binding sites for heme, substrate, and tetrahydrobiopterin, and a reductase domain, which contains the binding sites for FMN, FAD, and NADPH. NOS enzymes are highly regulated, requiring homodimerization and binding of Ca2+-calmodulin (CaM) for activity, and several signaling pathways initiate nNOS and eNOS activity by raising the intracellular Ca2+ concentration. Studies with purified proteins show that CaM and Hsp90 each increase the binding of the other to both eNOS and nNOS (11,12,14,16). Another mechanism for regulation is the selective ubiquitin-dependent proteasomal degradation of dysfunctional NOS (reviewed in Ref. 17). Because nNOS is cytosolic and because metabolism-based inactivators, such as NG-amino-L-arginine and the antihypertensive drug guanabenz, cause covalent alteration of the heme/substrate binding cleft that triggers nNOS ubiquitination and proteasomal degradation (18,19), we have found the enzyme to be a good model for studying how the state of the ligand binding cleft affects ubiquitination. For example, guanabenz treatment leads to the oxidation of tetrahydrobiopterin with formation of a pterin-depleted cleft that triggers nNOS ubiquitination and proteasomal degradation (18,20,21). A number of other inhibitors, such as NG-amino-L-arginine, cross-link heme to the enzyme (17,22), a modification that was shown in a myoglobin model to cause opening of the heme/substrate binding cleft (23), yielding a more unfolded state of the protein (24) that triggers ubiquitination. Both geldanamycin and NG-amino-L-arginine promote nNOS ubiquitination by a purified CHIP-dependent ubiquitinating system (25). As with eNOS and iNOS, treatment of cells with an Hsp90 inhibitor leads to nNOS degradation via the ubiquitin-proteasome pathway (8,18). We have shown that both CHIP and parkin can function as E3 ligases for nNOS ubiquitination (5,26) but that CHIP accounts for all of the nNOS ubiquitinating activity (27) in the reticulocyte lysate ubiquitination system of Hershko et al. (28).
Ubiquitination of nNOS by a purified, CHIP-dependent ubiquitinating system (25,26) and by the reticulocyte lysate system (27,29) is dependent upon Hsp70. In contrast to Hsp70, which stimulates nNOS ubiquitination when added to the purified CHIP-dependent system, Hsp90 inhibits ubiquitination (25). Like the stimulation of nNOS activity by Hsp90, inhibition of nNOS ubiquitination by Hsp90 is calmodulin-dependent (25), suggesting that both activation and stabilization result from the same interaction of Hsp90 with the enzyme. In this work, we will use the ligand N G -nitro-L-arginine (NNA), a slowly reversible, competitive inhibitor of nNOS that stabilizes the enzyme (30), to modulate the binding of Hsp90 and Hsp70 to nNOS in HEK293 cells. We also use the calcium ionophore A23187 to modulate chaperone binding through CaM binding. Inasmuch as CaM enhances electron flux from flavin bound to the reductase domain to heme bound within the cleft (31), CaM binding is likely to affect the state of the cleft. To obtain mechanism-based inactivation of nNOS, cells must be treated with calcium ionophore to activate the enzyme. As we show here, treatment with calcium ionophore markedly changes the binding of Hsp90 and Hsp70 to nNOS in HEK293 cells, and because of this, we have not been able to tease out the effects of N G -amino-L-arginine and guanabenz on chaperone binding. Because Hsp90 cycles dynamically with the holo-nNOS homodimer, we have used an intracellular cross-linker to trap nNOS-chaperone complexes that can then be immunoprecipitated from cell lysates with anti-nNOS. We show that treatment of HEK293 cells with the calcium ionophore A23187 increases nNOS ubiquitination and the amount of Hsp90, Hsp70, and CHIP in nNOS heterocomplexes in a manner that is specific for changes in the heme/substrate binding cleft. In contrast, the stabilizing ligand NNA decreases both nNOS ubiquitination and the recovery of Hsp90, Hsp70, and CHIP in nNOS heterocomplexes. Both Hsp90 and Hsp70 are bound to the expressed nNOS oxygenase domain but not to the reductase domain, and binding is increased to an expressed fragment containing both the oxygenase domain and the CaM binding site. Overexpression of Hsp70 promotes and overexpression of Hsp90 inhibits nNOS ubiquitination, with Hsp70 decreasing and Hsp90 increasing nNOS protein levels in HEK293 cells. This confirms that the two chaperones have opposing effects as they participate together in nNOS quality control. Taken together, the demonstration that ligand-dependent changes to the heme/substrate binding cleft regulate chaperone binding in the cell and that the chaperones bind to the oxygenase domain, which contains the heme/substrate binding cleft, strongly support the general model where the Hsp90/Hsp70-based chaperone machinery regulates signaling proteins by modulating ligand binding clefts (2). Methods Cell Culture and Transient Transfection-HEK293 cells stably transfected with rat nNOS were cultured in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum (HyClone), 20 mM Hepes, pH 7.4, and G418 (0.5 mg/ml) as described previously (32). Twenty hours before harvesting, the cells were cultured in DMEM containing 0.1 mM arginine (low arginine DMEM). The HEK293T cells used for transient transfection were obtained from the American Type Culture Collection and cultured in minimum essential medium supplemented with 10% fetal bovine serum. 
Transient transfections of 293T cells were carried out with the use of a standard calcium phosphate method in 10-cm dishes. A solution (100 µl) of 2.5 M CaCl2 in 10 mM Hepes, pH 7.2, and the desired amount of plasmid DNA was diluted to 1.0 ml with TE buffer (10 mM Tris-HCl, 1 mM EDTA, pH 7.3), and 1 volume of this 2× Ca/DNA solution was added dropwise to an equal volume of Hepes-buffered saline solution (275 mM NaCl, 10 mM KCl, 1.5 mM Na2HPO4, 10 mM dextrose, 40 mM Hepes, pH 7.2). The two solutions were mixed and added to the cell culture medium after 15 min of incubation. Cells at 70-80% confluence were transfected with cDNAs for His-HA-ubiquitin (3 µg), CHIP (1 µg), nNOS (3 µg), and Hsp70 (3 µg) or Hsp90 (3 µg), with the total amount of cDNA being kept constant with vector plasmid. The transfection efficiency of HEK293T cells was in the 60-80% range. In this protocol, 10 µM lactacystin is present for 18 h before cell lysis at 48 h. For experiments depleting Hsp90 or Hsp70, HEK293 cells stably transfected with nNOS were seeded at 3 × 10^5 cells/well in 6-well plates 1 day before the addition of siRNA in Lipofectamine RNAiMAX transfection reagent from Invitrogen. Forty-eight hours later, cells were harvested in SDS sample buffer, boiled, and electrophoresed. Plasmids-The rat nNOS cDNA was subcloned from PVL1393 (8) into the EcoRI and NotI sites of pcDNA3.1+. Expression plasmids for the isolated oxygenase domain (Oxy-(1-720)), the oxygenase domain containing the CaM binding site (OxyCaM-(1-756)), and the isolated reductase domain (Red-(746-1429)) of nNOS were created by inserting the corresponding PCR fragments into the mammalian expression vector pcDNA3.1+. The HA tag was attached to the C terminus of the Oxy and OxyCaM domains (designated Oxy-HA and OxyCaM-HA, respectively), and an HA tag was attached to the N terminus of the reductase domain (designated HA-Red). The coding sequences of human Hsp70 and Hsp90 were amplified by PCR and subcloned into pcDNA4/HisMax for expression in mammalian cells. The inserts of all recombinant plasmids were sequenced to ensure accuracy. Chemical Cross-linking and Immunoprecipitations-nNOS-expressing HEK293 cells were treated with 200 µM NNA or 4 µM calcium ionophore (A23187) for 30 min at 37°C. After incubation, cells were washed twice with room temperature phosphate-buffered saline (PBS) and then resuspended in cross-linking buffer containing 1.5 mM DSP and protease inhibitor in PBS. The cells were incubated at room temperature for 30 min with rotation, and the cross-linking was arrested by the addition of Tris, pH 7.5, at a final concentration of 20 mM for 15 min at 4°C. Cells were subsequently resuspended in 0.1 ml of buffer A (0.3% Triton X-100, 0.7% n-octylglucoside in PBS) and sonicated twice for 25 s with a 30-s interval. Whole-cell lysates were obtained by centrifugation at 14,000 × g for 20 min at 4°C to remove cellular debris. Equal amounts of protein (~400 µg) from HEK293 cell lysates were incubated with 20 µl of anti-nNOS antibody preconjugated with 140 µl of protein A-Sepharose in a total volume of 300 µl of buffer B (0.1% Triton X-100, 0.2% n-octylglucoside in PBS) for 4 h at 4°C with agitation. Immunoprecipitates were washed two times with buffer B and three times with PBS by resuspension and centrifugation (3,000 × g, 2 min each).
Fifty microliters of SDS sample buffer (5% SDS, 20% glycerol, 6 mg/ml dithiothreitol, and 0.02% bromphenol blue in 125 mM Tris-HCl, pH 6.8) were added to each sample tube, and the tubes were heated at 100°C for 5 min to extract the bound target proteins. In studies where HA was immunoadsorbed, 25 µl of anti-HA antibody-conjugated agarose replaced the anti-nNOS IgG and protein A-Sepharose. For ubiquitination studies, the cell pellet was directly homogenized in HS buffer (10 mM Hepes, pH 7.4, 0.32 M sucrose, 2 mM EDTA, 6 mM phenylmethylsulfonyl fluoride, 10 µg/ml leupeptin, 2 µg/ml aprotinin, 10 µg/ml trypsin inhibitor, 15 mM sodium vanadate, 1% Nonidet P-40, and 5 mM N-ethylmaleimide) without cross-linking. SDS-Polyacrylamide Gel Electrophoresis and Western Blotting-After removing the protein A-Sepharose by a brief centrifugation at room temperature, the supernatant was resolved by SDS-PAGE under reducing conditions. Western blotting was performed by transfer to a PVDF membrane and probing with anti-Hsp90, Hsp70, CHIP, or nNOS antibody as indicated. Immunoreactive bands were visualized with the use of enhanced chemiluminescence reagent (ECL) and X-Omat film. The mono-ubiquitinated conjugate is the predominant ubiquitinated nNOS species detected in HEK293 cells (19). Thus, the mono-ubiquitinated nNOS bands were scanned, and the relative densities were determined with the ImageJ software (rsb.info.nih.gov/ij/). nNOS bands were exposed for a short time to visualize non-ubiquitinated nNOS and for a long time to visualize the mono-ubiquitinated nNOS bands. To correct for any minor differences in nNOS protein in different immunoprecipitates, mono-ubiquitinated nNOS bands were normalized according to the short time exposures of the non-ubiquitinated nNOS. Relative densities for at least three experiments are presented in bar graphs as the percentage of the condition with the greatest ubiquitination ± S.E. Significance of difference was determined by one-way analysis of variance (Tukey's multiple comparison test). Statistical probability is expressed as *, p < 0.05, **, p < 0.01, ***, p < 0.001 in Figs. 2-8. Intracellular Cross-linking Is Required for Detection of nNOS-Chaperone Complexes-In the experiment of Fig. 1, HEK293 cells stably expressing nNOS were either untreated or treated for 30 min with the nNOS stabilizer NNA. Cells for each condition were then exposed to DSP or vehicle, and cell lysates were prepared and immunoprecipitated for nNOS. Hsp90 and Hsp70 were detected in nNOS complexes immunoadsorbed from cells treated with the cross-linker (lanes 1 and 2) but not in immunoprecipitates from cells not exposed to DSP (lanes 3 and 4). Thus, in HEK293 cells, the intracellular cross-linker is required to visualize the chaperones interacting with nNOS. It also appears that treatment of cells with NNA decreased the amount of the chaperones and CHIP co-immunoprecipitating with nNOS. Modulation of Chaperone Binding by Treatment with NNA or Calcium Ionophore-Fig. 2 shows that treatment of cells with the nNOS stabilizer NNA (lane 3) does indeed decrease the amount of Hsp90, Hsp70, and CHIP co-immunoprecipitating with nNOS, and NNA also decreases the level of nNOS ubiquitination when compared with untreated cells (lane 2). The chaperones are not present in immunoprecipitates prepared from HEK293 cells that are not expressing nNOS (lane 1), confirming their presence in an nNOS-specific heterocomplex.
The ubiquitination observed in untreated cells (lane 2) reflects a basal level of nNOS autoinactivation occurring in the absence of a rise in intracellular Ca2+ concentration. When the Ca2+ concentration is increased by treating cells with the calcium ionophore A23187 (lane 4), both the level of nNOS ubiquitination and the amounts of Hsp90, Hsp70, and CHIP co-immunoprecipitating with nNOS increase. The higher level of ubiquitination likely reflects increased autoinactivation occurring when nNOS enzymatic activity is stimulated by CaM binding. The increased ubiquitination may reflect increased binding of Hsp70 and CHIP to nNOS that is unfolding in response to autoinactivation by active oxygen species generated within the heme/substrate binding cleft. The increased binding of Hsp90 in the presence of the ionophore is probably due to the fact that CaM and Hsp90 increase the binding of each other to nNOS (14,16). To determine whether the ionophore effects on chaperone binding reflect changes within the heme/substrate binding cleft, we used the slowly reversible inhibitor L-NNA or its inactive isomer D-NNA to demonstrate stereospecific modulation of the ionophore effect. As shown in Fig. 3, L-NNA (lane 3) protects nNOS from the increases in ubiquitination and chaperone binding that are seen with the ionophore (lane 2), whereas D-NNA (lane 4) does not protect. This stereospecific protection by ligand binding within the heme/substrate binding cleft supports the notion that the state of the cleft affects chaperone binding. Binding of Chaperones to Expressed nNOS Domains-The heme/substrate binding cleft is located in the oxygenase domain, and the experiments shown in Fig. 4 show that the oxygenase domain alone is sufficient for binding of Hsp90 and Hsp70. In a further experiment, the expressed constructs were precipitated with anti-nNOS. Hsp90 and Hsp70 co-immunoprecipitated with all three, and as in Fig. 4A, more of each chaperone appeared to be present with the OxyCaM than with the Oxy domain alone. To correct for possible differences in the recovery of domains, the bands from several experiments like that of Fig. 4A were scanned, normalized by the HA band, and plotted in the bar graphs of Fig. 4C. Although the oxygenase domain is sufficient for binding of Hsp90 and Hsp70, the presence of the CaM binding segment increases recovery. This suggests that the affinity of chaperone binding may be higher when the CaM binding segment is present. This would be consistent with reports that purified CaM and Hsp90 increase binding of each other to eNOS and nNOS (11,12,14,16). Opposing Effects of Hsp90 and Hsp70 on nNOS Ubiquitination in HEK Cells-We have shown previously that nNOS ubiquitination by a purified CHIP-containing ubiquitinating system is promoted by Hsp70 and inhibited by Hsp90 (25). To determine whether the two chaperone proteins have opposing actions on nNOS ubiquitination in the cell, each of them was overexpressed in HEK293T cells along with nNOS in a protocol we have used previously to demonstrate CHIP-dependent nNOS ubiquitination under transient transfection conditions (26). Fig. 5A shows the levels of Hsp90 and Hsp70 in lysates of HEK293T cells 48 h after transient transfection with cDNA for each chaperone in addition to cDNAs for nNOS, CHIP, and His-HA-ubiquitin. Fig. 5B shows the effect of overexpression of Hsp70 (lane 2) or Hsp90 (lane 3) on nNOS mono-ubiquitination. The data from scans of three experiments are shown in the bar graphs.
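The densitometric workflow behind these bar graphs, described above under "SDS-Polyacrylamide Gel Electrophoresis and Western Blotting," can be sketched as follows. The numbers and condition labels are hypothetical placeholders, not measured values; the sketch only illustrates the stated procedure: divide each mono-ubiquitinated nNOS density by the matching short-exposure nNOS density, express the ratios as a percentage of the condition with the greatest ubiquitination, and test significance by one-way ANOVA with Tukey's multiple comparison test.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical ImageJ band densities, three replicate experiments per condition.
mono_ub = {"vector": [210.0, 190.0, 205.0],   # long-exposure mono-Ub nNOS bands
           "Hsp70":  [480.0, 455.0, 500.0],
           "Hsp90":  [120.0, 105.0, 130.0]}
nnos    = {"vector": [900.0, 880.0, 910.0],   # short-exposure non-ubiquitinated nNOS
           "Hsp70":  [870.0, 905.0, 890.0],
           "Hsp90":  [915.0, 860.0, 900.0]}

# Normalize the mono-Ub signal to the nNOS signal in the same immunoprecipitate.
ratios = {c: np.array(mono_ub[c]) / np.array(nnos[c]) for c in mono_ub}

# Express each replicate as a percentage of the condition with the greatest ubiquitination.
peak = max(r.mean() for r in ratios.values())
percent = {c: 100.0 * r / peak for c, r in ratios.items()}

# One-way ANOVA followed by Tukey's multiple comparison test.
groups = list(percent)
print(f_oneway(*(percent[c] for c in groups)))
values = np.concatenate([percent[c] for c in groups])
labels = np.repeat(groups, [len(percent[c]) for c in groups])
print(pairwise_tukeyhsd(values, labels))
```

A real analysis would simply substitute the measured ImageJ densities for the placeholder numbers above.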
The region of the gel above the mono-ubiquitinated nNOS was blotted with anti-HA antibody to show the polyubiquitinated nNOS, which underwent the same changes with Hsp70 or Hsp90 overexpression. As was found in the purified ubiquitinating system (25), overexpression of Hsp70 promoted and overexpression of Hsp90 inhibited nNOS ubiquitination in HEK cells. To determine whether the opposing effects of Hsp90 and Hsp70 on nNOS ubiquitination are reflected by the appropriate opposing effects on nNOS protein level, HEK293T cells were transfected with nNOS cDNA and increasing amounts of Hsp90 or Hsp70 cDNA. Fig. 6 shows the levels of nNOS in lysates of HEK293T cells 48 h after transient transfection with cDNA for Hsp90 (Fig. 6A) or Hsp70 (Fig. 6B) in addition to cDNAs for nNOS and CHIP. Overexpression of Hsp90 yields an increase in nNOS protein, and overexpression of Hsp70 yields a decrease in nNOS protein, consistent with the inhibition and promotion of ubiquitination, respectively, shown in Fig. 5B. In Fig. 6C, HEK293 cells stably expressing nNOS were transfected with siRNA for human Hsp90α/β, Hsc70, or Hsp70. Decreasing the level of Hsp90 (lane 3) caused a decrease in nNOS, consistent with a decrease in the stabilizing effect of Hsp90 shown in Fig. 6A. HEK293 cells stably expressing nNOS are stressed, probably as a result of the generation of active oxygen species by the enzyme, and they have very high levels of Hsp70 with respect to Hsc70 (Fig. 6C). Decreasing the level of Hsc70 (Fig. 6C, lane 4) does not affect the level of nNOS or of Hsp70. In these cells, the Hsp70 level can be decreased only by about one-half (lane 5), which is not sufficient to affect the level of nNOS. Transfection with siRNA for both Hsc70 and Hsp70 still leaves half of the Hsp70, and again, the level of nNOS is unaffected (lane 6). In that overexpression of the chaperones affected nNOS mono-ubiquitination in Fig. 5B, we asked whether inhibition of each chaperone had the opposite effect on nNOS ubiquitination. Radicicol is an inhibitor that binds to the N-terminal domain of Hsp90, and treatment with radicicol depletes cells of classical Hsp90 clients, such as Raf-1 and mutant p53 (33). In Fig. 7A, HEK293 cells stably transfected with nNOS were treated with 20 µM radicicol, a concentration of the inhibitor we have previously shown to prevent heme binding and activation of apo-nNOS in insect cells (34). As shown in Fig. 7A, treatment with radicicol (lane 3) increases nNOS mono-ubiquitination when compared with the level seen without treatment (lane 1). When cells are treated with both the calcium ionophore and radicicol (lane 4), ubiquitination is greater than that seen with the ionophore alone (lane 2). In Fig. 7B, cells were treated with pifithrin-µ, a small molecule inhibitor of Hsp70 (35). Pifithrin-µ inhibited nNOS ubiquitination in the absence (cf. lanes 1 and 3) or presence (cf. lanes 2 and 4) of calcium ionophore. This is consistent with cell-free observations where ubiquitination of nNOS by a purified, CHIP-dependent ubiquitinating system (25,26) and by the reticulocyte lysate system (27,29) is dependent upon Hsp70. As shown in Fig. 7C, treatment with the inhibitors did not affect the level of total Hsp90 or Hsp70 in the HEK cells. Treatment of cells with the inhibitors radicicol and geldanamycin has been shown to reduce Hsp90 binding to a wide variety of Hsp90 target proteins (1), including all three NOS enzymes (10,12,36).
In contrast, pifithrin-µ has not been widely studied, although it has been shown to interact specifically with Hsp70 and not to interact with Hsp90, and treatment of cells with pifithrin-µ has been shown to reduce Hsp70 binding to p53 (35). In Fig. 7D, nNOS was immunoprecipitated from cytosol prepared from HEK293 cells treated with radicicol or pifithrin-µ, and nNOS-associated Hsp90 and Hsp70 were visualized by immunoblotting. Radicicol reduces the binding of Hsp90 but not Hsp70, and pifithrin-µ reduces the binding of Hsp70 but not Hsp90. In Fig. 8, the effects of radicicol and pifithrin-µ on nNOS ubiquitination were compared in the absence and presence of overexpression of Hsp90 or Hsp70. When Hsp90 is overexpressed, radicicol yields roughly the same increase in ubiquitination relative to the untreated control as in the absence of overexpression (Fig. 8A). Similarly, pifithrin-µ yields roughly the same inhibition of ubiquitination in the presence and absence of Hsp70 overexpression (Fig. 8B). Because Hsp90 and Hsp70 are highly abundant proteins, their overexpression only doubles the amount of each chaperone (Fig. 5A). Thus, in each case, we only reduce the ratio of inhibitor to chaperone by about one-half when each chaperone is overexpressed, yielding roughly the same change in ubiquitination with and without overexpression. Taken together, the data of Figs. 5, 7, and 8 support the notion that Hsp90 and Hsp70 have opposing effects, with Hsp90 inhibiting and Hsp70 promoting nNOS ubiquitination. DISCUSSION The oxygenase domain of nNOS contains the heme/substrate binding cleft and is the domain that interacts with Hsp90 (Fig. 4). Both pulldown and peptide competition experiments suggest that a region (amino acids 300-400) in the oxygenase domain of endothelial NOS is important for Hsp90 binding (37,38). Here, we show that the oxygenase domain of nNOS is also the site of its interaction with Hsp70. The finding that Hsp90 and Hsp70 bind to domains containing ligand binding clefts also pertains to signaling proteins that are classic Hsp90 clients. For example, both chaperones interact with the ligand binding domains of the steroid receptors (reviewed in Ref. 39). Also, Hsp90 has been shown to interact with the catalytic domains, which contain the ATP binding clefts, of protein kinase clients, such as v-Raf (40) and ErbB-2 (41). Although it has been demonstrated that Hsp90 interacts with domains containing ligand binding clefts, there is no common specific binding motif defining a surface for interaction, and our notion is that the chaperone binds at the opening where hydrophobic ligand binding clefts merge with the protein surface (1,2). Such cleft openings are a topological feature of virtually all proteins in native conformation, raising the possibility that Hsp90 interacts with a much broader range of proteins than the limited subset of stable cycling, stringently regulated proteins that have been the focus of interest as Hsp90 clients. The use of cross-linking techniques, such as that employed here with nNOS, may identify multiple proteins that cycle dynamically with Hsp90 and are regulated in less dramatic fashion than the classic Hsp90 clients. It is not known what feature of a ligand binding cleft (e.g. dynamics of opening/closing?) determines whether a protein will undergo stable or dynamic cycling with Hsp90.
However, there are several examples where mutations within the ligand binding domain of steroid receptors and catalytic domains of protein kinases convert these classic Hsp90 clients to the dynamic cycling that is seen with the NOS enzymes (2). Binding of the slowly reversible inhibitor NNA within the heme/substrate binding site of nNOS decreases binding of Hsp90 and Hsp70 to nNOS and decreases nNOS ubiquitination (Fig. 2). The ability of ligand binding to modulate Hsp90 binding was originally reported for steroid receptors (39), and steroid-dependent dissociation of Hsp90 is often presented in textbook models as the first step in steroid hormone action. It is now realized that binding of steroid within the cleft promotes a temperature-dependent collapse of the cleft to the closed state, converting the receptor from stable Hsp90 cycling to dynamic Hsp90 cycling (2). A study of Hsp90 binding to iNOS suggests that the binding of heme to the apo-iNOS monomer may drive a similar conversion from stable to dynamic cycling (42). Heme binding to apo-NOS drives its homodimerization to the active holo-NOS enzyme, and heme insertion into apo-nNOS (34) and apo-iNOS (42) requires Hsp90. Stuehr and co-workers (42) have shown that apo-iNOS forms stable complexes with Hsp90, whereas heme-bound holo-iNOS does not, consistent with conversion from stable to dynamic cycling. Similarly, binding of NNA may favor a more closed conformation of the heme/substrate binding cleft of holo-nNOS to favor even more dynamic cycling with Hsp90 and decreased capture of the nNOS-Hsp90 heterocomplex upon cross-linking. CaM binding is required for nNOS to be active, and CaM binding may favor a more open state of the ligand binding cleft that cycles less dynamically with Hsp90, increasing capture of the nNOS-Hsp90 heterocomplex upon cross-linking (Fig. 2). To our knowledge, there have been no studies of ligand effects on Hsp70 recovery with steroid receptors, but NNA binding to nNOS decreases the recovery of both Hsp70 and CHIP (Fig. 2). Again, NNA binding may favor a more closed state of the heme/substrate binding cleft, reducing exposure of hydrophobic amino acids of the cleft interior that favor Hsp70 binding. Decreased binding of the Hsp70-dependent E3 ligase CHIP likely accounts for the decreased ubiquitination of nNOS seen with NNA (Fig. 2). Inasmuch as ubiquitination is the initial step leading to proteasomal degradation, this could account for the ability of NNA to stabilize nNOS (30). In a broader sense, the possibility that ligands affect Hsp70 binding by modulating cleft conformation could contribute in a major way to explaining how substrates and inhibitors stabilize enzymes in general. We have proposed that the Hsp90/Hsp70-based chaperone machinery may be the major mechanism for quality control of damaged proteins via the ubiquitin-proteasome pathway (2,6,25). This model evolved from the observation that ubiquitination of purified nNOS by a purified ubiquitinating system is promoted by Hsp70 and inhibited by Hsp90 (25). We envision that as proteins undergo toxic or oxidative damage, ligand binding clefts open to expose hydrophobic residues as the initial step in unfolding. When Hsp90 can no longer cycle with the protein to inhibit ubiquitination, E3 ligases interacting with the substrate-bound Hsp70 target ubiquitincharged E2 enzyme to the nascently unfolding substrate (6). In Figs. 
5, 7, and 8, we provide strong evidence that these two essential components of the chaperone machinery have opposing effects on nNOS ubiquitination in the cell, with Hsp70 promoting and Hsp90 inhibiting ubiquitination. The opposing effects of the two chaperones on substrates in the cell are required for a general model in which the Hsp90/Hsp70-based chaperone machinery makes the triage decision in protein quality control via the ubiquitin-proteasome pathway. In this respect, we note a report that treatment of cells with the irreversible tyrosine kinase inhibitor CI-1033 induces ErbB-2 ubiquitination and proteasomal degradation (43). Like nNOS (26), CHIP serves as an E3 ligase for ErbB-2, and both CHIP and Hsp70 are co-immunoadsorbed with ErbB-2 from cells treated with the Hsp90 inhibitor geldanamycin (44). In the study by Citri et al. (43), it was noted that treatment with CI-1033 reduced ErbB-2 [...]. [Displaced fragment of the Fig. 7 legend: relative densities of mono-ubiquitinated nNOS (mono-Ub) bands are expressed as means ± S.E. for four experiments (***, p < 0.001; **, p < 0.01); C, the inhibitors do not affect the levels of Hsp90 or Hsp70 in total lysates of cells treated with radicicol or pifithrin-µ (PIF); D, effects of the inhibitors on chaperone binding to nNOS, assessed by DSP cross-linking, immunoprecipitation with non-immune (NI) or anti-nNOS (I) antibody, and Western blotting for nNOS, Hsp90, and Hsp70.]
2018-04-03T02:53:36.727Z
2011-11-28T00:00:00.000
{ "year": 2011, "sha1": "060ddedf1ace81b72aa5d916fd444efce0ead04e", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/287/2/1556.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "83f734b40f082a4e83ad315ba3985b949536ec39", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
14052316
pes2o/s2orc
v3-fos-license
Graded mapping cone theorem, multisecants and syzygies Let X be a reduced closed subscheme in P^n. As a slight generalization of property N_p due to Green-Lazarsfeld, we can say that X satisfies property N_{2,p} scheme-theoretically if there is an ideal I generating the ideal sheaf I_{X/P^n} such that I is generated by quadrics and there are only linear syzygies up to the p-th step (cf. [EGHP1], [EGHP2], [V]). Recently, many algebraic and geometric results have been proved for projective varieties satisfying property N_{2,p} (cf. [CKP], [EGHP1], [EGHP2], [KP]). In this case, the Castelnuovo regularity and normality can be obtained by the blowing-up method as reg(X) ≤ e + 1, where e is the codimension of a smooth variety X (cf. [BEL]). On the other hand, projection methods have been very useful and powerful in bounding Castelnuovo regularity, normality and other classical invariants in geometry (cf. [BE], [K], [KP], [L], [R]). In this paper, we first prove the graded mapping cone theorem on partial eliminations as a general algebraic tool and give some applications. Then, we bound the length of a zero-dimensional intersection of X and a linear space L in terms of graded Betti numbers and deduce a relation between X and its projections with respect to the geometry and syzygies in the case of projective schemes satisfying property N_{2,p} scheme-theoretically. In addition, we give not only interesting information on the regularity of fibers and multiple loci for the case of N_{d,p}, d ≥ 2, but also geometric structures for projections according to moving the center. Introduction Let X be a non-degenerate reduced closed subscheme in P(V) with the saturated ideal I_X = ⊕_{m≥0} H^0(I_{X/P^n}(m)), where V is an (n+1)-dimensional vector space over an algebraically closed field k of characteristic zero and R = k[x_0, . . . , x_n] is the coordinate ring of P(V). As Eisenbud et al. in [9] introduced the notion N_{d,p} for some d ≥ 2, we say that X (or I_X) satisfies the property N_{d,p} if Tor^R_i(R/I_X, k) is concentrated in degree < d + i for all i ≤ p, which is equivalent to the condition that the truncated ideal (I_X)_{≥d} is generated by homogeneous forms of degree d and has a linear resolution until the first p steps. The case d = 2 has been of particular interest, and there have been many classical conjectures and known results for highly positive embeddings and the canonical embedding of a smooth variety X. The property N_{2,p} is the same as the property N_p defined by Green and Lazarsfeld if X is projectively normal. The property N_{2,1} means that I_X is generated by quadrics, and the property N_{2,2} means that there are only linear relations on the quadrics in addition to the property N_{2,1}. In [9] the authors have exhibited some cases in which there is an interesting connection between the minimal free resolution of I_X and the minimal free resolution of I_{X∩L,L}, where L is a linear subspace of P(V). They have shown that if X satisfies N_{2,p} and dim X ∩ L ≤ 1 for some linear space L of dimension ≤ p, then I_{X∩L,L} is 2-regular. They also gave the conditions under which the syzygies of X restrict to the syzygies of the intersection.
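As a quick illustration (our rephrasing of the definition just given, not a statement taken from [9]), property N_{d,p} amounts to saying that the first p steps of the minimal free resolution of R/(I_X)_{≥d} have the purely d-linear shape displayed below; the twists appearing in steps beyond p are unconstrained.

```latex
% Shape of the first p steps of a minimal free resolution witnessing N_{d,p}:
% all generators of (I_X)_{\ge d} sit in degree d, and only linear syzygies
% occur up to the p-th step.
\[
  \cdots \longrightarrow R(-d-p+1)^{\beta_{p}} \longrightarrow \cdots
  \longrightarrow R(-d-1)^{\beta_{2}} \longrightarrow R(-d)^{\beta_{1}}
  \longrightarrow R \longrightarrow R/(I_X)_{\ge d} \longrightarrow 0 .
\]
% For d = 2 and X non-degenerate this says that I_X is generated by beta_1
% quadrics and that the syzygies among them are linear, through step p.
```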
As a slight generalization, we can say that X ⊂ P(V ) satisfies property N 2,p scheme-theoretically if there is an ideal I generating the ideal sheaf I X/P n such that I is generated by quadrics and there are only linear syzygies up to p-th step (cf. [9], [21]). For example, if I X satisfies property N 2,p then a general hyperplane section X ∩ H satisfies property N 2,p schemetheoretically because I X +(H) (H) is generated by quadrics in R/(H) and has only linear syzygies up to first p-th steps even if I X +(H) (H) is not saturated in general. If X is smooth and is cut out by quadrics scheme-theoretically, then the Castelnuovo regularity and normality are easily obtained by the blowing-up method and Kawamata-Viehweg vanishing theorem. In particular, reg(X) ≤ e + 1 where e is the codimension of X (cf. [2]). On the other hand, projection methods have been very useful and powerful in bounding the Castelnuovo regularity, higher order normality and other classical invariants in geometry(cf. [4], [14], [15], [16], [20]). Consider a linear projection π Λ : X → Y ⊂ P n−t = P(W ), W ⊂ V from the center Λ = P t−1 such that Λ ∩ X = ∅. What can we say about a connection between the minimal free resolution of I X and the minimal free resolution of I Y ? We are mainly interested in homological, cohomological, geometric and local properties of projections as the center moves in an ambient space. Note that for graded ideals or modules which are not saturated, the Koszul techniques of Green and Lazarsfeld are not so much adaptable to understand their syzygies. Our basic idea is to compare the graded Betti numbers of R/I X as a Rmodule and those of R/I X as a Sym(W )-module via the graded mapping cone theorem and to interpret its geometric meanings. The paper is organized as follows: in Section 2, we prove the graded mapping cone theorem on partial eliminations and give some applications. When applying this theorem to the projection π Λ : X ։ Y ⊂ P n−t from the center Λ = P t−1 such that Λ ∩ X = ∅, we prove that every fiber of arbitrary projection π Λ : X → Y is (d − 1)-normal if X satisfies the property N d,p scheme-theoretically and dim Λ < p and recover some results in [9]: a linear section X ∩ L is d-regular if dim(X ∩ L) = 0 and dim L ≤ p. In particular, a projective variety satisfying property N 2,p scheme-theoretically has no (p + 2)-secant p-plane. As a generalization, we bound the possible maximal length of X ∩ L in terms of the graded Betti numbers (see Theorem 2.11). In Section 3, we study the effects of the property N 2,p scheme-theoretically on the Castelnuovo normality and defining equations of projected varieties (see Theorem 3.1 and Theorem 3.11). Using the partial elimination ideal theory due to M. Green [12], we give some information on the multiple loci for birational projections (Theorem 3.9) for the case of the property N d,2 . Moreover, we obtain the relation between the regularity of the i-th partial elimination ideals for all i ≥ 1 and syzygies of projections even though the Castelnuovo normality is very delicate and difficult to control under projections. In Section 4, we deal with some properties of projections, e.g. the number of quadratic equations and the depth of projected varieties according to moving the center (e.g., Proposition 4.1). In particular, for birational projections, we show that the singular locus of the projected variety is a linear subspace for p ≥ 2. From this fact, we give some interesting non-normal examples and applications. 
Acknowledgements It is our pleasure to thank the referee for valuable comments and further directions. The second author would like to thank Korea Institute of Advanced Study(KIAS) for supports and hospitality during his stay there for a sabbatical year. Graded mapping cone theorem and applications The mapping cone under projection and its related long exact sequence is our starting point to understand algebraic and geometric structures of projections. k · x i : vector spaces over k. • M : a graded R-module (which is also a graded S 1 -module). • K S 1 • (M ) : the graded Koszul complex of M as follows: where ∂ is the differential of Koszul complex K S 1 • (M ). Finally, the mapping cone (C • (ϕ), ∂ ϕ ) becomes a complex over S 1 and we have the exact sequence of complexes From the exact sequence (2.1), we have a long exact sequence in homology: i+j −→ and the connecting homomorphism δ is the multiplicative map induced by ϕ. In the following Lemma 2.1, we claim that Tor R (M, k) can be obtained by the homology of the mapping cone. Lemma 2.1. Let M be a graded R-module. Then we have the following natural isomorphism: Proof. Let K R • (M ) be the Koszul complex of a graded R-module M . Then the graded component in degree i Hence we see that the Koszul complex K R i (M ) has the following canonical decomposition in each graded component: Using the decomposition (2.3), we can verify that the following diagram is commutative: From the long exact sequence (2.2) and Lemma 2.1, we obtain the following useful Theorem. . , x n ] be polynomial rings. For a graded R-module M , we have the following long exact sequence: whose connecting homomorphism δ is the multiplicative map × x 0 . Proof. It is clear from (2.2) and Lemma 2.1. Note that Theorem 2.2 gives us an useful information about syzygies of outer projections (i.e. isomorphic or birational projections) of projective varieties. As a first step, we obtain the following important Corollary. Corollary 2.3. Let I ⊂ R be a homogeneous ideal such that R/I is a finitely generated S 1 -module. Assume that I admits d-linear resolution as a R-module up to p-th step for p ≥ 2. Then, for 1 ≤ i ≤ p − 1, (a) the minimal free resolution of R/I as a graded S 1 -module is Proof. (a) First, consider the exact sequence Since Tor R 1 (R/I, k) j = 0 for all j = d and Tor R 0 (R/I, k) j = 0 for all j = 0, we obtain that β R 0,0 = β S 1 0,j = 1 for all 0 ≤ j ≤ d − 1 and β S 1 0,j = 0 for all j / ∈ {0, 1, . . . , d − 1} from the finiteness of R/I as a S 1 -module. Note that Tor R i (R/I, k) i+j = 0 for 1 ≤ i ≤ p and j = d − 1 by assumption that I is d-linear up to p-th step. Applying Theorem 2.2 for M = R/I, we have an isomorphism induced by δ = × x 0 Hence we conclude that Tor S 1 i−1 (R/I, k) (i−1)+j = 0 for 2 ≤ i ≤ p and j = d − 1 since R/I is finitely generated as an S 1 -module, which means that Then, by induction on p, we get the desired result. From now on, we consider a projection π Λ : X → Y t = π Λ (X) ⊂ P(W ) where dim Λ = t − 1 ≥ 0, Λ ∩ X = φ. Then, the following basic sequence is also exact as finitely generated S t -modules as Lemma 2.5 shows. Furthermore, it would be very useful to compare their graded Betti tables by the graded mapping cone theorem. Lemma 2.5. Let I be a homogeneous ideal defining X scheme-theoretically in P n . Then R/I and E = ℓ∈Z H 0 (X, O X (ℓ)) are finitely generated S tmodules. x n ] is a homogeneous polynomial of degree m i with the power of x i less than m i . Hence R/I is generated by monomials of the form Remark 2.6. 
For an inner projection of X from the center q ∈ X, let Y 1 = π q (X) be the Zariski-closure of π q (X) in P n−1 . Then R/I X is not a finitely generated S 1 -module. The following theorem is a generalization of Corollary 2.3, which is related to the existence of multisecant plane (cf. Theorem 2.11). Theorem 2.7. Suppose that X satisfies property N d,p scheme-theoretically with an ideal I. Consider the linear projection π Λ :X → P(W ), Sym(W ) = S t from the center Λ such that Λ∩X = φ, Λ = P(U ) = P t−1 . Then, R = R/I ≥d has the simplest syzygies up to (p − t)-th step as S t -module for 1 ≤ t ≤ p, In particular, if d = 2 then the minimal free resolutions of R/I is . . , x n ] be a polynomial ring for 0 ≤ t ≤ p and let be the minimal free resolution of R as a S t -module. We will give a proof by induction on t ≥ 1. For t = 1, the result (2.5) follows directly from Corollary 2.3. For t > 1, by induction hypothesis, we can assume that for 1 ≤ α ≤ p − (t − 1) Using an exact sequece by mapping cone theorem for α ≥ 1 we can also show by similar argument used in Corollary 2.3 that is actually the set of all generators. This can be done by the dimension counting. Let us prove this by induction on t. In the case of t = 1, the result easily follows from Corollary 2.3 (a). If we have t > 1 then, by induction hypothesis, we see that for all and we have the following sequence from the mapping cone construction as we wished. Definition 2.8 ([15]). Consider three vector spaces and suppose that t = codim(W, V ) and α = codim(W, H 0 (O X (1))). We say that R/I X (resp. E) satisfies property N S p if R/I X (resp. E) have the simplest minimal free resolutions until p-th step as graded S t -modules; On the other hand, we have the similar result for E = ℓ∈Z H 0 (X, O X (ℓ)) as the following proposition shows. Proposition 2.9. In the same situation as in the Theorem 2.7, suppose E(or R/I X ) satisfies property N S p as R-module for some p ≥ 2. Then E (or R/I X ) also satisfies property N S p−t as S t -module under the projection morphism Proof. When t = 1, we can similarly show that E satisfies property N S p−1 as an S 1 module by using Theorem 2.2 for M = E and the vanishing As a consequence, E has the following simplest resolution; , we inductively check that if E satisfies property N S p−i as an S i -module, then E also satisfies property N S p−i−1 as an S i+1 -module by the same argument as in the Theorem 2.7. For R/I X , the proof is exactly same. Remark 2.10. For isomorphic projections, the above Proposition 2.9 is in fact a simple algebraic reproof of Theorem 2 in [6], Theorem 1.2 in [15], and for birational projections, see a part of Theorem 3.1 in [19]. Indeed, for any regular projection From the following commutative diagram: it was shown (cf. [6], [15], [19]) that α i,j is injective by induction for all 1 ≤ i ≤ p − t and j ≥ 2. Thus, E satisfies property N S p−t as S t -module. The following theorem gives us a geometric meaning of property N d,p with respect to multisecant planes. Note that part (b) was also proved in Theorem 1.1 in [9] with a different method. In particular, for a projective variety satisfying property N 2,p , there is no (p + 2)-secant p-plane. (c) Suppose that X satisfies N 2,p and N 3,p+1 scheme-theoretically for p ≥ 0. If there is a l-secant (p + 1)-plane then Proof. Choose an ideal I defining X with N d,p scheme-theoretically. For a proof of (a), consider the minimal free resolution of R/(I) ≥d in Proposition 2.7. 
(b), namely, is a vector space of homogeneous forms of degree i generated by U . By sheafifying this exact sequence and tensoring O P n−t (d − 1), we have the surjective morphism of sheaves For all y ∈ Y t , we have the following surjective commutative diagram ( * ) by Nakayama's lemma: Therefore, as a finite scheme, π Λ −1 (y) is (d − 1)-normal for all y ∈ Y t . For a proof of (b), suppose that reg(X ∩ L) > d for some linear section X ∩ L as a finite scheme where L = P k 0 for some 1 ≤ k 0 ≤ p. Then we can take a linear subspace Λ 1 ⊂ L of dimension k 0 − 1 disjoint from X ∩ L. Then X ∩ L is a fiber of projection π Λ 1 : X → P n−k 0 −1 at π Λ 1 (L). However, this is a contradiction by (a). Then it follows from Theorem 2.7 that the minimal free presentation of R/I as a S p -module is of the form: Now consider the following long exact sequence for each j = 0, 1, 2, (2.9) Sp 0 (R/I, k) j −→ 0. By the property N 3,p+1 of R/I, we can easily verify that the minimal free resolution of R/I as a S p+1 -module is of the form for some α in Z ≥0 . Then it follows from 2.9 and j = 2 that dim k Tor On the other hand, we have the following surjections from the fact that for 1 ≤ i ≤ p + 1, (cf, Proposition 2.9) Tor S i p+1−i (R/I, k) p+1−i+3 = 0: So, we have the inequality This completes the proof of (c) by sheafification of the sequence (2.10). Remark 2.12. For p = 0, the bound in (c) is clearly sharp because X is cut out by at most cubic equations. For p = 1, we can check that α = 1. In addition, there is a unique conic passing through general 5 points, there is no 5-secant 2-plane to X which is cut out by quadrics. In particular, if β R p+1,2 = 0 then we know that X has no (p + 3)-secant (p + 1)-plane because of the property N 2,p+1 . Therefore, it would also be interesting to check whether the upper bound 2p + 3 in (c) is optimal if p + 1 < β R p+1,2 for some p ≥ 2. Example 2.13. (a) Let S ℓ (C) be the ℓ-th higher secant variety of the rational normal curve C in P n . Then the defining ideal of S ℓ (C) is generated by maximal minors of 1-generic matrix of linear forms in S = C[x 0 , . . . , x n ] of size ℓ + 1. Then, it follows that S ℓ (C) is aCM of degree n−ℓ+1 ℓ having (ℓ + 1)-linear resolution which is given by Eagon-Northcott complex. Let Λ = P n−2ℓ be a general linear space and consider a linear projection from the center Λ. Then the length of a general fiber is the degree of S ℓ (C) which is equal to the dimension of the space of ℓ forms on the linear span of the fiber. So, the bound in Theorem 2.11 (a) is sharp. For example, if X has 3-linear resolution up to p-th step and a linear space L is a ℓ-secant p-plane, then ℓ ≤ p+2 2 , which is sharp as the example S 2 (C) in P 5 shows. (b) Let C be an elliptic normal curve of degree d in P d−1 which satisfies property N d−3 but fails to satisfy property N d−2 with β R d−2,2 = 1. Since deg(C) = d = d−1+min{d−2, β R p+1,2 (R/I)}, the bound in Theorem 2.11,(c) is also sharp for the case β R d−2,2 (R/I) < d − 2. (c) Let C = ν 10 (P 1 ) be a rational normal curve in P 10 . Let S ℓ (C) be the ℓ-th higher secant variety of dim S ℓ (C) = min{2ℓ − 1, 10}. Then, C Sec(C) = S 2 (C) S 3 (C) · · · S 6 (C) = P 10 . Remark 2.14. In the process of proving Theorem 2.11, we know that the global property N d,p scheme-theoretically gives local information on the length of fibers in any linear projection from the center Λ of dimension ≤ p − 1. 
The commutative diagram ( * ) in the proof can also be understood geometrically as follows: where σ : Bl Λ P n → P n is a blow-up of P n along Λ and is a vector bundle over P n−t . We have a natural morphism of sheaves . We actually showed from the property N d,p that the following morphism is surjective for all y ∈ Y t . Similar constructions were used in bounding regularity of smooth surfaces and threefolds in [14] and [16]. Effects of property N 2,p on projections and multiple loci For a projective variety X ⊂ P n , property N 2,p is a natural generalization of property N p . Note that a smooth variety X ⊂ P n satisfying property N 2,p , p ≥ 1 scheme-theoretically has reg(X) ≤ e+ 1 where e = codim(X, P n ) and so X is m-normal for all m ≥ e (cf. [2]). The main theorems in this section show that property N 2,p plays an important role to control the normality and defining equations of projected varieties under isomorphic and birational projections up to (p − 1)-th step. Theorem 3.1. (Isomorphic projections for N 2,p case) Suppose that X ⊂ P n satisfy property N 2,p scheme-theoretically for some p ≥ 2. Consider an isomorphic projection π Λ : X → Y t ⊂ P n−t for some 1 ≤ t ≤ p − 1. Then we have the following: and reg(Y t ) ≤ max{reg(X), t + 2}; (b) In particular, if I X satisfies N 2,p then I Yt is also cut out by equations of degree at most t + 2 and further satisfies property N t+2,p−t . Proof. Let R = k[x 0 , x 1 . . . , x n ] and S t = k[x t , x t+1 , . . . , x n ] be the coordinate rings of P n and P n−t respectively. Choose an ideal I defining X with N 2,p scheme-theoretically. Then, by Theorem 2.7, we have the minimal free resolution of R/I as a graded S t -module: Note that π Λ * (O X ) ≃ O Yt and (R/I) d = H 0 (O Yt (d)) for all d >> 0. Therefore, by sheafifying the resolution of R/I, we have the following familiar two diagrams by using Snake Lemma( [13], [15]); and in the first syzygies of R/I, we have the following diagram: Proof. First of all, we can control the Castelnuovo-regularity of N (cf. [13], [15], [17]) by using Eagon-Northcott complex associated to the exact sequence As a consequence, reg(N) ≤ t + 2. Thus, from the leftmost column and first row of (3.2), we have the following isomorphisms for all m ≥ t + 1: On the other hand, by taking global sections and using simple linear syzygies of R/I as S t -module, we have the following two commutative diagrams: Since im H 0 * ( ϕ 0 ) = R/I X and ⊕ ℓ∈Z H 0 (O Yt (ℓ)) = ⊕ ℓ∈Z H 0 (O X (ℓ)), we get H 1 * (L) = H 1 * (I X/P n ). Therefore, our claim and (a) are proved. Now, let's return to the proof of (b) in Theorem 3.1. In this case, note that I = I X and H 1 * (K) = 0. Consider the following diagram for all ℓ ≥ 1: Note that surjectivity of the first row is given by reg(N) ≤ t + 2 and surjectivity of two vertical columns are given by the fact H 1 * (K) = 0. Thus, the second row is also surjective and consequently Y t is cut out by equations of degree at most (t + 2). For the syzygies of I Yt , consider the exact sequence by taking global sections is the first syzygy module of R/I X , H 0 * (K) has the following resolution: On the other hand, we know the following equivalence; Thus, from the long exact sequence: we get Tor St i (I Yt , k) i+j = 0 for 0 ≤ i ≤ p − t − 1 and j ≥ t + 3 and Y t satisfies property N 2+t,p−t . In the complete embedding of X ⊂ P(H 0 (O X (1))), property N 2,p is the same as property N p . In this case, we have the following Corollary which is already given in Theorem 1.2 in [15] and Corollary 3 in [6]. 
Corollary 3.3. Let X ⊂ P(H 0 (O X (1))) = P n be a reduced non-degenerate projective variety with property N p for some p ≥ 2. Consider an isomorphic projection π Λ : X → Y t ⊂ P(W ) = P n−t , t = codim(W, H 0 (O X (1))), 1 ≤ t ≤ p − 1. The projected variety Y t ⊂ P(W ) satisfies the following: (a) Y t is m-normal for all m ≥ t + 1; (b) Y t is cut out by equations of degree at most (t+2) and further satisfies property N 2+t,p−t ; (c) reg(Y t ) ≤ max{reg(X), t + 2}. Proof. This is clear from Theorem 3.1 with n 0 (X) = 1. For a different proof using vector bundle technique in the restricted Euler sequence, see [6] and [15]. Remark 3.4. A. Alzati and F. Russo gave a necessary and sufficient condition for the isomorphic projection of a m-normal variety to be also m-normal. As an application, they showed that for a variety X ⊂ P n satisfying property N 2 , one point isomorphic projection of X in P n−1 is k-normal for all k ≥ 2 (Theorem 3.2 and Corollary 3.3 in [1]). So, Theorem 3.1 is a generalization to nonlinearly normal embedding on normality, defining equations and their syzygies. On the other hand, for a point q ∈ Sec(X)∪Tan(X) we can also consider a birational projection and syzygies of the projected varieties. To begin with, let us explain the basic situation and information on the partial elimination ideals under outer projections. For q = (1, 0, · · · , 0, 0) / ∈ X, consider an outer projection π q : X → Y 1 ⊂ P n−1 = Proj(S 1 ), S 1 = k[x 1 , x 2 , . . . , x n ]. Suppose the ideal I define X scheme-theoretically. For the degree lexicographic order, if f ∈ I has leading term in(f ) = x d 0 0 · · · x dn n , we set d 0 (f ) = d 0 , the leading power of x 0 in f . Then it is well known that K 0 (I) = m≥0 f ∈ I m | d 0 (f ) = 0 = I ∩ S 1 and defines Y 1 schemetheoretically. More generally, let us give the definition and basic properties of partial elimination ideals, which was introduced by M. Green in [12]. Definition 3.6 ( [12]). Let I ⊂ R be a homogeneous ideal and let If f ∈K i (I), we may write uniquely f = x i 0f + g where d 0 (g) < i. Now we define K i (I) by the image ofK i (I) in S 1 under the map f →f and we call K i (I) the i-th partial elimination ideal of I. Note that K 0 (I) = I ∩ S 1 and there is a short exact sequence as graded S 1 -modules In addition, we have the filtration on partial elimination ideals of I: K 0 (I) ⊂ K 1 (I) ⊂ K 2 (I) ⊂ · · · ⊂ K i (I) ⊂ · · · ⊂ S 1 . Proposition 3.7 ([12] ). Set theoretically, the i-th partial elimination ideal Lemma 3.8. Let X ⊂ P n be a reduced non-degenerate projective variety satisfying property N 2,p , p ≥ 2 scheme-theoretically. Consider a projection be the nonempty secant locus of one-point projection. Then, (a) Σ q (X) is a quadric hypersurface in a linear subspace L and q ∈ L; Proof. Since X satisfies N 2,p , p ≥ 2, there is no 4-secant 2-plane to X by Theorem 2.11.(b). Let Z 1 := {y ∈ Y 1 |π q −1 (y) has length ≥ 2 } and choose two points y 1 , y 2 in Z 1 . Consider the line ℓ = y 1 , y 2 in P n−1 . If y 1 , y 2 ∩ Y 1 is finite, then we have 4-secant plane q, y 1 , y 2 which is a contradiction. So, Sec(Z 1 ) = Z 1 and finally, we conclude that Z 1 is a linear space. Since π q : Σ q (X) ։ Z 1 ⊂ Y 1 is a 2:1 morphism, Σ q (X) is a quadric hypersurface in L = Z 1 , q . For a proof of (c), if dim Σ q (X) is positive, then clearly, q ∈ Tan Σ q (X) ⊂ Tan(X). So, we are done. We have also the following generalization of the Lemma 3.8 for N d,2 , d ≥ 2 by using the mapping cone theorem and partial elimination ideals.. 
Proposition 3.9. Let X ⊂ P n be a non-degenerate reduced projective scheme and the ideal I define X scheme-theoretically with property N d,2 . For any projection π q : X → Y = π q (X) ⊂ P n−1 from a point q ∈ P n \ X, Z i = {y ∈ π q (X) | π −1 q (y) has length ≥ i + 1 } satisfies the following properties for d − 2 ≤ i ≤ d. (a) K d−1 (I) is generated by at most linear forms. Thus Z d−1 is either empty or a linear space; (b) K d−2 (I) is generated by at most cubic forms. Thus Z d−2 is either empty or cut out by at most cubic equations set-theoretically. Proof. (a): Since the ideal I satisfies property N d,2 , there exists the following exact sequence (not necessarily minimal) by Proposition 2.7: Furthermore, we can easily verify that ker ϕ 0 isK d−1 (I) and thus we have the following exact sequence: Now consider the following commutative diagram with K 0 (I) = I ∩ S 1 : Since I satisfies the property N d,2 , it follows from the middle row and left column sequences in the diagram (3.5) thatK d−1 (I) andK d−1 (I)/K 0 (I) are generated by at most degree d elements. On the other hands, we have a short exact sequence from (3.3) : Hence, K d−1 (I) is generated by at most linear forms and further 1-regular. So, Z d−1 is either empty or a linear space by Proposition 3.7. (b): Since K d−1 (I) is 1-regular, we have Tor S 1 1 (K d−1 (I)(−d + 1), k) j = 0 for all j ≥ d + 2 and it follows from (3.6) thatK d−2 (I)/K 0 (I) is generated by at most degree d + 1 elements. Similarly, consider again the following short exact sequence as S 1 -modules Hence, K d−2 (I X ) is generated by at most cubic forms and therefore we complete the proof of (b). Corollary 3.10. In the same situation as in Proposition 3.9, assume that Proof. It is cleat that Z d−1 (X), q ∩ X is a hypersurface in a linear space Z d−1 (X), q . Since there is no d + 1-secant line through q, we are done. As shown in Lemma 3.8, the fact that Z 1 is a linear space is crucial in the proof of the following theorem. Theorem 3.11. (Birational projections for N 2,p case) Let X ⊂ P n be a reduced non-degenerate projective variety satisfying property N 2,p scheme-theoretically for p ≥ 2. Consider a projection π q : X → Y 1 ⊂ P n−1 where q ∈ Sec(X) ∪ Tan(X) \ X. Then we have the following: Proof. We may assume that q = (1, 0, , , 0) ∈ Sec(X) ∪ Tan(X) \ X. Let R = k[x 0 , x 1 . . . , x n ] be a coordinate ring of P n , S 1 = k[x 1 , x 2 , . . . , x n ] be a coordinate ring of P n−1 . Let the ideal I define X with the condition N 2,p scheme-theoretically. Then, it is easily checked that K 0 (I) also defines Y 1 scheme-theoretically. By Theorem 2.7, we have the minimal free resolution of R/I as a graded S 1 -module: Furthermore, we have the following diagram: Note that ϕ 0 (f, g) = f + g · x 0 and thus, K 1 (I) is the first partial elimination ideal of I associated to the projection π q . SinceK 1 (I) has the following minimal free resolution as a graded S 1 -module: we know that K 1 (I) is generated by linear forms and reg(K 1 (I)(−1)) = 2, coker α = S 1 /K 1 (I)(−1). On the other hand, consider the following exact sequence By Lemma 3.8, since cokerα has the support Z 1 which is a linear space in P n−1 and π q : Σ q (X) ։ Z 1 is 2 : 1, we have Therefore, H 0 * (coker α) = S 1 /I Z 1 (−1). Then, by taking global sections from the above sequence (3.7), we have the following commutative diagram as S 1modules with exact rows and columns: The reason why the left column is exact is that K 1 (I) = I Z 1 for any ideal I defining X scheme-theoretically. 
Thus, H 1 * (I Y 1 ) ≃ H 1 * (I X ) and so, X is m-normal if and only if Y 1 is m-normal. So we complete the proof of (a) and (b). Remark 3.12. For a complete embedding of X ⊂ P(H 0 (O(1)) satisfying property N p , Lemma 3.8 and Theorem 3.11 was proved in [19] with different method. However, the point is that we can also deal with non-complete embeddings of X in P n satisfying property N 2,p by virtue of the graded mapping cone theorem without using Green-Lazarsfeld's vector bundle technique on restricted Euler sequence on X. Moving the center and the structure of projected varieties In the previous sections, we proved the uniform properties of higher normality and syzygies of projections when the given variety X satisfies property N 2,p , p ≥ 2 scheme-theoretically. However, according to moving the center, we have a lot of interesting varieties with different structures in algebra, geometry and syzygies. As an example, for a rational normal curve C = ν d (P 1 ) in P d , consider the following filtration on the ℓ-th higher secant variety S ℓ (C) of dimension min{2ℓ − 1, d}: Then we have ( [6], [18]) (a) π q (C) ⊂ P d−1 satisfies property N 2,d−2 for q ∈ C, (b) π q (C) ⊂ P d−1 is a rational curve with one node satisfying property N 2,d−3 for q ∈ Sec(C) \ C, (c) π q (C) ⊂ P d−1 satisfies property N 2,ℓ−3 for q ∈ S ℓ (C) \ S ℓ−1 (C). Note that all projected curves are m-normal for all m ≥ 2 and thus 3-regular. Note that for varieties of next to minimal degree, the arithmetic properties of projected varieties by moving the center were investigated in [3] for the first time. The following proposition show that the number of quadratic equations, Hilbert functions, the depth of projected varieties and Betti tables depend on the dimension of the secant locus Σ q (X) and the position of the center of projection. For a complete embedding X ⊂ P(H 0 (L)), the same result is given in [19]. Let s = dim Σ q (X) and if the secant locus Σ q (X) = ∅, then s = −1. Remark 4.2. Similary, under the same assumption, we have which shows that Hilbert functions of projected varieties depend only on the dimension of the singular locus. But, the Betti tables are much more delicate and we get the following additional information: for each 1 ≤ i ≤ p − 1, In particular, for i = 1, Hence, we see that if n−s−1 is positive then there exist cubic generators of I Y 1 . Then H i (O X (ℓ)) = 0 for all ℓ << 0 and i < δ(X) by vanishing theorem of Enriques-Severi-Zariski-Serre. In the proof of proposition 4.1, for s = 0 we have an interesting example Y 1 such that Y 1 has only one isolated nonnormal singular point and in fact, H 1 (O Y 1 (ℓ)) = 0 for all ℓ ≤ 0. As examples, suppose that a projective variety X has no lines and plane conics in P n with the condition N 2,p , p ≥ 2 (e.g., the Veronese variety υ d (P n ), d ≥ 3 or its isomorphic projections). Then, the singular locus of any simple projection is either empty or only one point because the secant locus is a quadric hypersurface in some linear subspace. As the center of projection q ∈ P 8 moves toward S 1,1,4 , we will see that the number of cubic generators decreases and the number of quadric generators increases in the following: (a) Let q ∈ P 8 \ Sec(S 1,1,4 ) and any isomorphic projection Y ⊂ P 7 has the following resolution with depth(Y ) = 1 · · · → S(−4) 40 ⊕ S(−3) 8 → S(−3) 10 ⊕ S(−2) 6 → I Y → 0. (b) Suppose q ∈ Sec(S 1,1,4 ) \ Tan (S 1,1,4 ). 
Then s = 0 and I Y has the following resolution with depth(Y ) = 2: (c) For a point q ∈ Tan(S 1,1,4 ) \ S 1,1,4 , Y has two different types of resolutions: First, consider a linear span P 3 = ⟨ℓ 1 , F ⟩ where ℓ 1 is a line embedded by P(O P 1 (1)) ֒→ P(O P 1 (1) ⊕ O P 1 (1) ⊕ O P 1 (4)) ⊂ P 8 and F is any fiber of the morphism ϕ : S 1,1,4 → P 1 . For a general point q ∈ P 3 = ⟨ℓ 1 , F ⟩, Y has a singular locus P 1 , only one cubic generator and the following minimal resolution of length 5: Second, take a general point q ∈ P 3 where the quadric hypersurface P(O P 1 (1) ⊕ O P 1 (1)) ⊂ P 3 is a subvariety of S 1,1,4 ⊂ P 8 . Then the projected variety Y clearly has the singular locus P 2 , depth(Y ) = 4 and the following resolution: As mentioned in [9], property N 2,p is rigid: if X is a reduced subscheme in P n with the condition N 2,p , p = codim(X, P n ), then X is 2-regular. By the rigidity, for a projective reduced scheme X satisfying property N 2,p , p = codim(X, P n ), any outer projection π q (X) does not satisfy the property N 2,p−1 . On the other hand, for a projective variety X ⊂ P n with the condition N 2,p , p ≥ 2 and q ∉ X, we obtained that π q (X) satisfies at least property N 3,p−1 by Theorem 3.1 and Theorem 3.11. However, we raise the following question for inner projections: Question 4.5. Let X be a projective reduced scheme in P n satisfying property N 2,p , p ≥ 1 which is not necessarily linearly normal. Consider the inner projection from a linear subvariety L of X and Y = π L (X \ L) in P n−t−1 , where dim L = t < p. In contrast with the outer projections, is it true that Y satisfies N 2,p−t−1 for a linear space L ⊂ X? For example, for a nondegenerate smooth variety X in P(H 0 (L)) with property N p , Y satisfies N p−1 for a point q ∈ X \ Trisec(X), where Trisec(X) is the union of all proper trisecant lines or lines in X (see [5] for details). Note that the graded mapping cone theorem cannot be directly applied to this case because R/I X is infinitely generated as an S t -module.
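Since the arguments in this and the preceding section use the partial elimination ideals K i (I) throughout while only citing Proposition 2.7 for their construction, the following display recalls what is, to our understanding, the standard definition going back to Green. The notation is chosen to match the statements above, but the precise formulation is a hedged recollection of the usual construction, not a quotation of Proposition 2.7.

```latex
% Hedged recap of partial elimination ideals for the projection from
% q = (1,0,\dots,0); R = k[x_0,\dots,x_n], S_1 = k[x_1,\dots,x_n].
% This is the standard construction (following Green); it is assumed, not quoted, here.
\[
  \widetilde{K}_i(I) \;=\;
  \bigl\{\, \bar f \in S_1 \;\big|\;
     \exists\, f \in I \ \text{with}\
     f = \bar f\, x_0^{\,k} + (\text{terms of lower $x_0$-degree}),\ k \le i \,\bigr\}
  \;\cup\; \{0\},
\]
\[
  K_i(I) \;=\; \text{the ideal of } S_1 \text{ generated by } \widetilde{K}_i(I),
  \qquad
  K_0(I) \;=\; I \cap S_1 .
\]
% With this convention one obtains an ascending chain
%   K_0(I) \subseteq K_1(I) \subseteq K_2(I) \subseteq \cdots,
% and, set-theoretically, V(K_i(I)) is the locus
%   Z_i = \{ y \in \pi_q(X) : \operatorname{length}(\pi_q^{-1}(y)) \ge i+1 \}
% used in Proposition 3.9 and Theorem 3.11.
```

In particular, K 0 (I) cuts out the image π q (X) itself, which is the case i = 0 of the description of Z i used above.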
2009-07-09T06:41:57.000Z
2008-04-23T00:00:00.000
{ "year": 2011, "sha1": "9646899363d15c7709e6919315d746841e7870dc", "oa_license": "elsevier-specific: oa user license", "oa_url": "https://doi.org/10.1016/j.jalgebra.2010.07.030", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "455903d12c359894cf7556d6e97f230c8ff231e9", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
259187501
pes2o/s2orc
v3-fos-license
Towards Better Certified Segmentation via Diffusion Models The robustness of image segmentation has been an important research topic in the past few years as segmentation models have reached production-level accuracy. However, like classification models, segmentation models can be vulnerable to adversarial perturbations, which hinders their use in critical-decision systems like healthcare or autonomous driving. Recently, randomized smoothing has been proposed to certify segmentation predictions by adding Gaussian noise to the input to obtain theoretical guarantees. However, this method exhibits a trade-off between the amount of added noise and the level of certification achieved. In this paper, we address the problem of certifying segmentation prediction using a combination of randomized smoothing and diffusion models. Our experiments show that combining randomized smoothing and diffusion models significantly improves certified robustness, with results indicating a mean improvement of 21 points in accuracy compared to previous state-of-the-art methods on Pascal-Context and Cityscapes public datasets. Our method is independent of the selected segmentation model and does not need any additional specialized training procedure. INTRODUCTION Neural networks have been known to be vulnerable to adversarial perturbations (Szegedy et al., 2013;Madry et al., 2018;Goodfellow et al., 2014;Carlini and Wagner, 2017), i.e., imperceptible variations of natural examples, crafted to deliberately mislead the models. In recent years, significant efforts have been made to develop certified defenses that guarantee a specified level of robustness against adversarial inputs within a certain radius. (e.g., 1-Lipschitz Networks (Trockman and Kolter, 2021;Meunier et al., 2022;Araujo et al., 2023), bound propagation (Gowal et al., 2018;Huang et al., 2021), randomized smoothing (Li et al., 2019a;Cohen et al., 2019;Salman et al., 2019)). Although most defenses focus on classification tasks, in this paper, we focus on certifying segmentation models and argue that certified segmentation is an even more pressing issue as these models are already used in critical systems such as healthcare and autonomous vehicles. Randomized smoothing has emerged as the leading technique for certified robustness due to its scalability and model-agnostic properties. It consists in applying a convolution between a base classifier and a Gaussian distribution, enabling the method to handle large input sizes (e.g., ImageNet, Pascal-Context, Cityscapes), while providing state-of-the-art certified accuracy. However, this technique exhibits a trade-off between adding enough noise for certification and preserving the input's semantic information for accurate predictions. In fact, several impossibility results from an information-theory perspective have been introduced (Kumar et al., 2020;Blum et al., 2020; and inherently limit randomized smoothing from providing large certified radii. Nevertheless, recent works, both theoretical (Ettedgui et al., 2022;Mohapatra et al., 2020) and empirical Carlini et al., 2023), have explored potential solutions to this trade-off. To address the issue of removed information due to noise injection, several works, in the context of classification tasks, have proposed methods to denoise the input after the noise injection step Carlini et al., 2023). While trained their own denoiser models on Gaussian noise for the specific task of certified robustness, Carlini et al. (2023) extended the work of Salman et al. 
(2020) by using off-the-shelf Denoising Diffusion Probabilistic Models (Sohl-Dickstein et al., 2015;Ho et al., 2020;, a form of generative models that takes a random Gaussian noise and generates a real-world image. In this paper, we build upon previous work on certified robustness to improve certified segmentation extending the work of Fischer et al. (2021) and Carlini et al. (2023). We present a comprehensive set of experiments on PASCAL-Context (Mottaghi et al., 2014) and Cityscapes (Cordts et al., 2016) datasets and successfully achieve state-of-the-art results on certified robustness for segmentation tasks. Our results show that combining randomized smoothing and diffusion models significantly improves certified robustness, with a mean increase of 21 points in accuracy and 14 points in mIoU when compared to previous methods. Our main contributions are summarized as follows: • First, we build upon the work of Fischer et al. (2021) and Carlini et al. (2023) and propose for the first time, a certified segmentation approach leveraging diffusion models. Through a series of experiments, we demonstrate that incorporating a denoiser in conjunction with a segmentation model that has been trained with noise injection presents certain trade-offs in the certified accuracy achieved, depending on the variance of the noise. • Second, we further improve certified accuracy by combining off-the-shelf diffusion and state-of-the-art segmentation models allowing us to reach state-of-the-art results for certified segmentation. • Third, we propose an in-depth analysis through a series of experiments on the use of noise during training as well as the generalization of denoising diffusion models with respect to image resolution and data distribution. RELATED WORK Adversarial Attacks & Certified Defenses. Since the discovery of adversarial examples (Szegedy et al., 2013), a wealth of work focused on devising attacks (Goodfellow et al., 2014;Kurakin et al., 2018;Carlini and Wagner, 2017;Hein, 2020, 2021) and defenses (Goodfellow et al., 2014;Madry et al., 2018;Pinot et al., 2019;Araujo et al., 2020Araujo et al., , 2021, leading to an ongoing back-and-forth battle. Most of these defenses relied on smoothing the local neighborhood around each point, resulting in very small gradients on which attacks were based. However, it has become apparent that many of the empirical defenses that have been created could be circumvented with stronger attacks (Athalye et al., 2018). This false sense of security and the persistent cat-and-mouse game called for certified defenses that provide provable robustness guarantees. In recent years, mainly two types of certified defenses have been proposed. The first approach provides robustness guarantees based on the Lipschitz constant of the networks and their margin (i.e., the difference between the highest and second highest logits). This connection was introduced by Tsuzuku et al. (2018) and opened an important research direction in the design and training of 1-Lipschitz neural networks (Miyato et al., 2018;Farnia et al., 2019;Li et al., 2019b;Trockman and Kolter, 2021;Singla and Feizi, 2021;Yu et al., 2022;Meunier et al., 2022;Prach and Lampert, 2022;Xu et al., 2022;Araujo et al., 2023). Although this approach offers fast certificate computation, it suffers from important drawbacks. 
Indeed, due to the strict constraint on the networks and reduced expressivity, 1-Lipschitz neural networks offer a reduced natural and certified accuracy and do not scale to large datasets (e.g., ImageNet, Pascal-Context, Cityscapes). The second approach, randomized smoothing, introduced by Cohen et al. (2019), consists in convolving the function with a Gaussian probability distribution during the inference phase. The desirable property of a smooth classifier is ensuring that the prediction is constant within an ℓ 2 ball around any input. Diffusion models. Diffusion probabilistic models have been introduced by Sohl-Dickstein et al. (2015), and further refined by Ho et al. (2020) and later work. The goal was to design a generative Markov chain that transforms a known distribution (e.g., Gaussian) into a target (data) distribution using a diffusion process. However, instead of using a Markov chain to evaluate the model, they defined the probabilistic model as the endpoint of the Markov chain. Subsequently, this methodology was refined and applied for producing high-quality samples, such as images, as demonstrated by Ho et al. (2020) and later work. The results indicated that this type of model can generate better images in comparison to other methods and also demonstrated a connection with denoising. Recently, diffusion probabilistic models have been applied successfully in the context of certified robustness for classification tasks, where a diffusion model is used as a first step to denoise inputs for randomized smoothing (Carlini et al., 2023). Certified Segmentation. Deep neural networks trained for segmentation tasks have been shown to be vulnerable to adversarial attacks (Xie et al., 2017;Arnab et al., 2018;Xiang et al., 2019;He et al., 2019;Kang et al., 2020). In this context, Fischer et al. (2021) use the work of Cohen et al. (2019) and propose a method to certify segmentation with randomized smoothing for norm-bounded perturbations. Other lines of work investigate certified robustness for structured outputs: for example, Kumar and Goldstein (2021) proposed a procedure based on randomized smoothing to find the minimum enclosing ball in the output space, and Yatsura et al. (2023) introduced a method called demasked smoothing to defend against adversarial patch attacks for semantic segmentation tasks. In this paper, we build upon the work of Fischer et al. (2021) and Carlini et al. (2023) and introduce, for the first time, a randomized smoothing approach with a denoising step in the context of certified semantic segmentation. BACKGROUND In this section, we review the necessary background on randomized smoothing and on certified segmentation. ADVERSARIAL ATTACKS & RANDOMIZED SMOOTHING FOR CLASSIFICATION We first introduce adversarial attacks and randomized smoothing in the setting in which they were introduced, i.e., for a classification task. We will generalize them to the segmentation task in the next paragraph. Let X ⊂ R d and Y = {1, . . . , K} be the input space and target space respectively, with K denoting the number of classes. Let us denote a classifier f : X → Y (e.g., a neural network) such that, for a given input-label couple (x, y) ∈ X × Y, we say the classifier f correctly classifies x if f (x) = y. An adversarial attack is a small norm-bounded perturbation δ ∈ R d with ∥δ∥ 2 ≤ ϵ such that f (x + δ) ≠ y (Equation 1). Randomized smoothing, introduced in Cohen et al. (2019), considers a smooth version g of the classifier f , defined as g(x) = arg max k∈Y P η∼N (0,σ 2 I) (f (x + η) = k) (Equation 2). To compute the probability in Equation 2, Cohen et al.
(2019) proposed a Monte-Carlo approach where the prediction is computed from a small number of samples, i.e., n 0 , with a majority vote, and a lower bound on the certified radius is computed with a higher number of samples, i.e., n. A benefit of using the smooth classifier g is obtaining a certified radius of robustness for each data point, thus determining a certified level of accuracy within a specified attack 'budget' ϵ. More formally, Cohen et al. introduced the following theorem: Theorem 1 (From Cohen et al. (2019)). Suppose y ∈ Y, let p y = P η∼N (0,σ 2 I) (f (x + η) = y), and let p y be the lower bound of p y computed via Monte-Carlo sampling. Let p ¬y = 1 − p y ; then, if p y > p ¬y , g(x + δ) = y for all δ satisfying ∥δ∥ 2 ≤ R with R := σΦ −1 (p y ), where Φ is the cumulative distribution function of the standard Gaussian distribution. To properly approximate the probability p y with a confidence interval, Cohen et al. (2019) proposed a procedure which samples n realizations of η ∼ N (0, σ 2 I) and computes f (x + η). From these n realizations, a vector of counts for each class in Y is computed, and these counts are then used to estimate the probability p y and the radius R with confidence 1 − α, with α ∈ [0, 1]. If the confidence level is not reached (for example, when the number of samples is not enough), the procedure will abstain. Algorithm 1 (Predict & Certify by Fischer et al. (2021)) lists the functions SegCertify(g, σ, x, n, n 0 , δ, α) and ComputeTimestep(σ). To perform image segmentation, each pixel in an image is assigned a segmentation class. This can be seen as a type of classification task, but instead of predicting the content of the entire image, the goal is to predict the class of each individual pixel. In this setting, the target space corresponds to regions/categories to segment (e.g., cars, roads, pedestrians, etc.), and the classifier f : R d → Y d outputs a class for each pixel and classifies each component individually. It is relatively straightforward to extend the certification algorithm proposed by Cohen et al. (2019) to the segmentation task. Nevertheless, Fischer et al. (2021) identified two primary challenges with the method. First, given that the certified radius of a particular region will be the minimum radius over the entire region, the algorithm may report an extremely low certified radius based on only a few bad pixels. Second, since Cohen et al.'s certification algorithm is applied to each pixel separately, and the certification is only valid with a probability of 1 − α, considering the entire region and applying the union bound could significantly reduce the overall confidence in the certificate. To address the first challenge and limit the impact of bad pixels on the overall result, Fischer et al. (2021) proposed a simple solution which consists in defining a threshold τ ∈ [1/2, 1] and, instead of checking p y > 1/2, checking p y > τ . To account for the multiple testing problem, i.e., low confidence due to the union bound over the entire region, Fischer et al. (2021) introduce the FwerControl function used in Algorithm 1, which is based on the Holm-Bonferroni method (Holm, 1979) and performs multiple-testing correction. Conceptually, the idea is to control the type I error (rejecting the null hypothesis when it is actually true) while reducing type II errors (not rejecting the null hypothesis when it is false).
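To make the per-pixel procedure concrete, the sketch below mimics its main steps in Python. It is an illustration under stated assumptions rather than the authors' implementation: the segmentation callable seg_model, the use of SciPy's binomial test for the per-pixel bound, and the textbook Holm-Bonferroni loop standing in for the FwerControl routine are all our own choices.

```python
import numpy as np
from scipy.stats import binomtest, norm

def certify_segmentation(seg_model, x, sigma, n, tau=0.75, alpha=0.001):
    """Per-pixel randomized-smoothing certificate in the spirit of SEGCERTIFY.

    seg_model: callable mapping a noisy image to an integer label map (H, W).
    Returns the smoothed prediction with an ABSTAIN marker (-1) on pixels whose
    majority class cannot be certified at level tau after Holm correction.
    """
    ABSTAIN = -1
    votes = []                                   # n label maps of shape (H, W)
    for _ in range(n):
        noisy = x + sigma * np.random.randn(*x.shape)
        votes.append(seg_model(noisy))
    votes = np.stack(votes)                      # (n, H, W), integer labels

    # Majority class and its vote count for every pixel.
    H, W = votes.shape[1:]
    flat = votes.reshape(n, -1)
    n_classes = int(flat.max()) + 1
    counts = np.apply_along_axis(lambda c: np.bincount(c, minlength=n_classes), 0, flat)
    top_class = counts.argmax(axis=0)
    top_count = counts.max(axis=0)

    # One-sided p-value per pixel for H0: p_top <= tau (slow but clear).
    pvals = np.array([binomtest(int(k), n, tau, alternative='greater').pvalue
                      for k in top_count])

    # Holm-Bonferroni step-down correction over all pixels.
    order = np.argsort(pvals)
    m = len(pvals)
    certified = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (m - rank):
            certified[idx] = True
        else:
            break                                # step-down: stop at the first failure

    prediction = np.where(certified, top_class, ABSTAIN).reshape(H, W)
    radius = sigma * norm.ppf(tau)               # common radius for non-abstained pixels
    return prediction, radius
```

On non-abstained pixels the returned radius is σΦ −1 (τ ), which is consistent with the radius formula of Theorem 1 with p y replaced by the threshold τ, and explains why a single radius is reported per noise level.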
Now that we have reviewed randomized smoothing for classification and segmentation tasks, we will present how it is possible to improve upon the current state-of-the-art with diffusion models. CERTIFIED SEGMENTATION VIA DIFFUSION MODELS To prevent a distribution shift when using randomized smoothing for inference, it is common practice to train networks with noise injection (Cohen et al., 2019). However, from an information theory perspective, randomized smoothing has inherent trade-offs and limitations. While adding noise during training can enhance the certified accuracy of models compared to those trained without noise, it may also lower the model's natural accuracy, as the variance of the noise decreases the information present in the input. These limitations have led to a series of no-go results for randomized smoothing (Blum et al., 2020;Hayes, 2020;Kumar et al., 2020;Mohapatra et al., 2021;Wu et al., 2021;Ettedgui et al., 2022), suggesting that achieving high certified accuracy may be challenging due to the significant variance that must be introduced in the input. Consequently, the destruction of information due to the noise can be severe enough to leave a useless classifier. To address this important limitation of randomized smoothing, Salman et al. (2020) have investigated denoising the input before giving it to the classifier. The idea is to use trained neural networks to reconstruct the information of the image removed by the noise. This process has two main advantages: it mitigates the no-go results of randomized smoothing, since the destroyed information is "reconstructed" by the denoiser, and it does not involve training the classifier with noise, mitigating the reduced natural accuracy of training with noise injection. Of course, in this new setting, the quality of the denoiser will matter. Salman et al. (2020) were able to boost the certified accuracy by up to 33% on the ImageNet dataset with respect to previous state-of-the-art defenses. Denoising diffusion probabilistic models, introduced by Sohl-Dickstein et al. (2015) and further improved by Ho et al. (2020), are a class of generative models and have been shown to beat Generative Adversarial Networks (GANs) (Goodfellow et al., 2020) on image synthesis. Conceptually, the training of these models consists in adding noise at each step of the diffusion process until purely random noise is reached. The reverse process then starts from random noise and generates a new image from the data distribution. Carlini et al. (2023) propose a procedure to use these models for denoising instead of generating new images. The idea is to start the reverse process with a noisy image instead of Gaussian noise, in order for the DPM to output an image from the initial data distribution that resembles the original image. As explained by Carlini et al. (2023), to use the DPM in the context of randomized smoothing, one needs to convert the noise added for randomized smoothing, i.e., x rs = x + τ with τ ∼ N (0, σ 2 I), to a specific step t in the diffusion process by matching noise levels, i.e., by choosing t such that σ 2 = (1 − ᾱ t )/ᾱ t with ᾱ t = ∏ s≤t (1 − β s ), where β t denotes a constant from the timestamp t that controls the amount of noise added to the image during the diffusion process. For more details on how to compute the timestamp t, one can refer to Section 3 of Carlini et al. (2023). We provide in Algorithm 2 an updated version of the algorithm to compute the samples for the Predict & Certify function of Fischer et al. (2021). Pipeline.
Our pipeline starts by passing the image through a Denoising Diffusion Probabilistic Model (DDPM) and then calling a semantic segmentation model for prediction. For both components, we use an off-the-shelf model made publicly available. To denoise images, we use a publicly released class-unconditional DDPM. This 552M-parameter denoiser has been trained on ImageNet and performs very well on images from both Cityscapes and Pascal-Context. For segmentation, we use two model architectures with different training strategies. First, we test High-Resolution Networks (HRNet) from Wang et al. (2020), trained in two different ways: the non-robust HRNet has been trained with natural images, and the base model is an HRNet trained with a Gaussian noise of σ = 0.25. The second architecture we use is the Vision Transformer Adapter for Dense Predictions (ViT) from Chen et al. (2023), which was trained only on natural images. We use the 568M-parameter model trained on Pascal-Context and the 571M-parameter model trained on Cityscapes. Both models were reported to provide state-of-the-art accuracy and mean intersection over union (mIoU) on the task of semantic segmentation. Our code is provided at: https://github.com/othmanela/certified_segmentation EXPERIMENTS We evaluate our method on a set of experiments with multiple approaches. First, we compare our technique with SEGCERTIFY, the state-of-the-art introduced by Fischer et al. (2021). Then, we set new state-of-the-art results using off-the-shelf models. We name our method DENOISECERTIFY. Table 2: Performance of SEGCERTIFY and DENOISECERTIFY (ours) on an off-the-shelf HRNet model trained without Gaussian noise. Scale corresponds to the image sizing scale used as input to the segmentation model (e.g., a scale of 0.5 on Cityscapes will resize the images to 512 × 1024). Accuracy, mean intersection over union (mIoU) and percentage of abstentions (%⊘) are certified given a noise level σ and radius R. All results are provided with Holm correction. Datasets. All of our experiments are performed on the task of semantic image segmentation on the Pascal-Context and Cityscapes datasets, two very common datasets for this task. Pascal-Context (Mottaghi et al., 2014) consists of an extension of the Pascal-VOC (Everingham et al., 2015) dataset with all of the image pixels annotated. There are 60 classes (59 foreground and 1 background). Typical evaluation strategies use either all of the 60 classes or the 59 foreground classes only. We evaluate here on the 59 foreground classes in order to have a fair comparison with SEGCERTIFY. The Cityscapes dataset (Cordts et al., 2016) contains high-resolution 1024 × 2048 images of diverse street scenes from 50 different cities. The images are annotated in 30 classes, but only 19 of them are used for evaluation. Similar to SEGCERTIFY, we evaluate our method on the same set of 100 images from both datasets. We use every 5th image on the Cityscapes dataset and every 51st on Pascal. DENOISECERTIFY on models trained with noise. We start by comparing DENOISECERTIFY with SEGCERTIFY. The state-of-the-art certification results proposed by the latter were obtained with an HRNet trained with a Gaussian noise of σ = 0.25. A comparison of both methods is provided in the first two sections of Table 1. We notice that DENOISECERTIFY outperforms SEGCERTIFY for all sigmas except 0.25. In fact, for σ = 0.5 the accuracy jumps from 0.34 to 0.55 and the mIoU from 0.06 to 0.21, which corresponds to increases of 61% and 250%, respectively.
This gives us an idea of the power of denoising diffusion models when used to certify segmentation models. Since our pipeline contains an added denoising step, we note an increase in the reported runtime in seconds. On the largest images of the dataset (1024 × 2048), the runtime increases from 92.69 to 131.42 seconds with HRNet, which is a minor increase per image. We did not perform any optimization on the code to make our pipeline faster. With more engineering, the runtime can be optimized further. Also, we believe that the gain in performance easily outweighs the increase in runtime. For σ = 1.0, it appears from Table 1 that SEGCERTIFY has a lower number of abstentions than DENOISECERTIFY. However, looking at the segmentations, it appears that SEGCERTIFY predicts a large number of pixels in the image with the wrong class. An example is provided in the last row of Figure 1. For σ = 0.25, SEGCERTIFY outperforms our technique, but this may be due to two main reasons. First, the model we are using was trained with a Gaussian noise of 0.25. Thus, it is performing best when provided with images with the same level of noise. In the next section, we show that given the right model, DENOISECERTIFY outperforms SEGCERTIFY. Second, one of the limitations of the off-the-shelf denoiser is that it requires rescaling the images to 256 × 256. Therefore, scaling them back to their original size may decrease the image quality, especially when using high-resolution images like Cityscapes. We perform experiments with multiple scales and report them in the subsequent section. DENOISECERTIFY on non-robust models. Here we use an HRNet model that was trained on natural images only, without any introduction of Gaussian noise. We compare our performance with SEGCERTIFY and report our results in Table 2. Focusing on a scale of 1, we notice that SEGCERTIFY achieves a poor performance. In fact, as σ increases to values > 0.25, the mIoU and accuracy end up becoming 0. This is an expected result since models trained on natural images are very sensitive to Gaussian noise. However, those types of models perfectly suit our methodology: since we denoise images, we are able to use off-the-shelf segmentation models and achieve a much better prediction. This introduces a paradigm shift, as we no longer require training robust deep learning models that need highly engineered strategies and that also degrade the natural accuracy significantly. As reported in Table 1, when comparing the first two rows, the non-robust HRNet accuracy drops from 0.97 to 0.91, and the decrease is more significant for the mIoU, going from 0.81 to 0.57. Therefore, our technique allows us to limit the drop in performance that traditional models used to suffer from while keeping strong certification guarantees. Impact of image scales on performance. One of the limitations of using off-the-shelf models is having to comply with their restrictions. The unconditional DDPM we are using only takes as input images of size 256 × 256. We thus have to downscale the input images and upscale them back to their original size for prediction. As stated above, this is the main limitation of our method. But, since semantic segmentation models can be invoked with multiple scales, we can use them to predict at a given scale and then upsample the output probabilities back to the original size of the image. This also has the advantage of providing faster predictions.
As an example, at a scale of 0.5 for Cityscapes, we downsample the images to 256 × 256 in order to call the DDPM; the denoised image is then reshaped to 512 × 1024 and serves as input to the segmentation model. The output probabilities of the segmentation model are then upsampled to their original size (1024 × 2048) to be compared with the ground truth. We always perform the certification on the original size of the image in order to follow the same strategy as SEGCERTIFY and perform a fair comparison. The performance of both methods with multiple scales is reported in Table 2. Examples of denoised and upscaled images are provided in Figure 3. For Cityscapes, we notice that the smaller the scale, the better the performance. In fact, the accuracy jumps from 0.58 to 0.74 and 0.78 for scale values of 1, 0.5, and 0.25, respectively. DENOISECERTIFY performs best for a scale of 0.25, corresponding to a Cityscapes image size of 256 × 512, which is very close to the output of the DDPM. Therefore, the rescaling does not impact the details and overall quality of the denoised image. The same happens for the Pascal-Context dataset: the best performance is obtained for a scale of 0.5, which corresponds to images of size 240 × 240, again very close to the 256 × 256 DDPM output. Training DDPMs on images of higher resolution would be another way to circumvent this limitation. Also, with the improvement of powerful techniques that rely on the denoising backbone, our approach would still be able to leverage the resources made available. We believe that having a denoiser that is also able to upscale images to very high resolutions would allow us to improve our results even further. DENOISECERTIFY on state-of-the-art segmentation models. So far we have discussed how DENOISECERTIFY performs on both a robust and a non-robust HRNet model. We have empirically shown that it achieves the best results on standard deep learning-based segmentation models. Going a step further, we can leverage the power of Vision Transformers, which have been reported to be more robust to attacks (Mao et al., 2022) but also give state-of-the-art results on semantic segmentation tasks (Chen et al., 2023). In this section, we use the ViT Adapter trained with natural images and report the results in the last section of Table 1. When comparing with the results of Table 2 on the same scale of 1, we notice that the ViT model provides a considerable increase. For the lowest σ = 0.25, the accuracy and mIoU are respectively boosted to 0.77 and 0.41, compared to 0.58 and 0.26 previously. This empirically proves the points of Mao et al. (2022) and would even encourage us to make the assumption that we would be able to obtain higher certification results with stronger transformer models. Overall, DENOISECERTIFY combined with a ViT achieves state-of-the-art semantic segmentation certification results for Pascal-Context and Cityscapes. Generalization of Denoising Diffusion Models. As stated above, we use an off-the-shelf denoising diffusion model that was trained on ImageNet. We set the number of channels to 256 and apply a linear scheduler with 1000 steps. Qualitative results are provided in Figure 2. We clearly notice that the DDPM is able to denoise the image with the highest level of noise (σ = 1.0) while keeping all of the information of the picture. Therefore, it is important to state that diffusion models generalize well to datasets they were not trained on. Pascal-Context and Cityscapes are the first two examples.
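Returning to the denoise-then-segment call described under "Impact of image scales on performance", a single vote of the pipeline can be sketched as follows. This is a schematic reconstruction under stated assumptions, not the released code: ddpm, alphas_cumprod, and seg_model are placeholder names, the σ-to-timestep matching uses the standard DDPM relation σ 2 = (1 − ᾱ t )/ᾱ t following Carlini et al. (2023), and the effect of bilinear resizing on the noise statistics is deliberately ignored here, although a faithful implementation has to account for it.

```python
import torch
import torch.nn.functional as F

def denoise_then_segment(x, sigma, ddpm, alphas_cumprod, seg_model, seg_scale=0.5):
    """One denoise-and-predict vote of the pipeline (schematic, not the released code).

    x: clean image tensor (1, 3, H, W) scaled to [-1, 1]; sigma: smoothing noise level.
    ddpm(x_t, t) is assumed to return a one-shot estimate of the clean image x_0.
    alphas_cumprod: 1-D tensor of cumulative products of (1 - beta_t) for t = 1..T.
    """
    H, W = x.shape[-2:]

    # 1) Map the smoothing level sigma to a diffusion timestep t*:
    #    sigma^2 = (1 - abar_t)/abar_t  <=>  abar_t = 1 / (1 + sigma^2).
    target = 1.0 / (1.0 + sigma ** 2)
    t_star = int(torch.argmin(torch.abs(alphas_cumprod - target)))
    abar = alphas_cumprod[t_star]

    # 2) Add the randomized-smoothing noise and rescale to the DDPM input size.
    #    Scaling by sqrt(abar) mimics the forward process x_t = sqrt(abar)*x0 + sqrt(1-abar)*eps.
    noisy = x + sigma * torch.randn_like(x)
    x_t = torch.sqrt(abar) * F.interpolate(noisy, size=(256, 256), mode="bilinear")

    # 3) Single-step denoising: ask the DDPM for its x_0 estimate directly,
    #    instead of iterating the reverse chain down to t = 0.
    with torch.no_grad():
        denoised = ddpm(x_t, t_star)

    # 4) Segment at a reduced scale, then upsample the class scores to (H, W)
    #    so that certification is performed at the original resolution.
    seg_size = (int(H * seg_scale), int(W * seg_scale))
    seg_in = F.interpolate(denoised, size=seg_size, mode="bilinear")
    with torch.no_grad():
        logits = seg_model(seg_in)                  # (1, C, h, w) class scores
    logits = F.interpolate(logits, size=(H, W), mode="bilinear")
    return logits.argmax(dim=1)                     # one per-pixel vote (1, H, W)
```

Calling this function n times and feeding the votes to the certification routine sketched earlier reproduces, in spirit, the single-step variant that Table 3 compares against the multi-step one.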
Future work will involve testing DDPMs on images from other distributions (e.g., the medical domain). Multistep Denoising Diffusion Model. Denoising diffusion probabilistic models have been introduced as a class of generative models that beat GANs on image synthesis . Starting with Gaussian noise, each step of the DDPM consists in denoising an input image at timestep t to a marginally less noisy image at timestep t − 1. The complete diffusion process is an iterative procedure starting from t * until t = 0. Programmatically, each call to the denoiser d at timestep t performs two actions; it predicts the completely denoised image and returns the average between the estimated denoised image and the noisy image of timestep t − 1. We conduct experiments on the two possible denoising strategies. The top section of Table 3 reports the results of a single-step denoised image prediction from the class unconditional DDPM. The bottom section of Table 3 on the other hand reports results of a multiple-step denoising strategy going from t * until t = 0 iteratively on the same class unconditional DDPM. Both use the ViT as the segmentation model. From the presented results, it is clear that the single-step denoiser performs better than the multi-step one in terms of accuracy, mIoU, percentage of abstentions, and runtime. This shows that denoising the image in a single shot is better than repeatedly denoising it multiple times. Intuitively, since the DDPMs are generative models at heart, they will tend to behave as such when denoising an image multiple times. Therefore, the output image at t = 0 may have lost a lot of its original information or may even end up from a different distribution. Qualitative results from Figure 2 support this claim as we can clearly notice that elements of the image were removed in the multi-step approach (the flower pot, as well as the reader disappeared, and the shape of the furniture changed). Another advantage of single-step denoising is the runtime efficiency. Instead of having to call the denoiser multiple times passing the outputted image at each timestep t, the denoiser is only called once (e.g., For σ = 1.0 a denoiser with linear scheduling will be called 258 times compared to a single time with the first scheme). This represents a nonnegligible advantage of the single-shot denoising since we are using multiple calls to the denoiser for each image in order to obtain the certificate. We deduce that denoising diffusion models are powerful but should be used accordingly. In our case, we would like to leverage the denoising properties of the DDPM more than their generative properties. Thus, a single-step denoising strategy should be adopted. CONCLUSION We present the first work on certified semantic segmentation that leverages denoising diffusion probabilistic models and vision transformers. We conduct a comprehensive set of experiments on Pascal-Context (Mottaghi et al., 2014) and Cityscapes (Cordts et al., 2016) datasets and show that our method achieves state-of-the-art results on certified robustness for semantic segmentation tasks. We were able to achieve significant improvements in accuracy and mIoU using off-the-shelf models that are not trained or fine-tuned for robustness. This work provides a new direction for certified image segmentation with off-the-shelf models. However, an interesting direction would be to explore task-specific training. For instance, in the context of certified segmentation, Salman et al. (2019) improved upon the work of Cohen et al. 
(2019) by training classifiers with noise injection and adversarial training. It would be straightforward to extend this approach to certified segmentation with our DENOISECERTIFY procedure by adversarially training a classifier or a diffusion model. Although computationally expensive, this method may lead to further improvements. Moreover, we have seen that the diffusion model is able to generalize to the Pascal-Context and Cityscapes datasets. A promising future direction would be to investigate the generalization of this model for denoising medical images and provide certified segmentation for critical healthcare applications.
2023-06-19T01:15:54.633Z
2023-06-16T00:00:00.000
{ "year": 2023, "sha1": "7a75192e153180d3e3f4a34686ea7b7d3543a3f1", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "7a75192e153180d3e3f4a34686ea7b7d3543a3f1", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
232159639
pes2o/s2orc
v3-fos-license
Obesity is associated with lower bacterial vaginosis prevalence in menopausal but not pre-menopausal women in a retrospective analysis of the Women’s Interagency HIV Study The vaginal microbiota is known to impact women’s health, but the biological factors that influence the composition of the microbiota are not fully understood. We previously observed that levels of glycogen in the lumen of the vagina were higher in women that had a high body mass index (BMI). Vaginal glycogen is thought to impact the composition of the vaginal microbiota. We therefore sought to determine if BMI was associated having or not having bacterial vaginosis (BV), as determined by the Amsel criteria. We also hypothesized that increased blood glucose levels could lead to the previously-observed higher vaginal glycogen levels and therefore investigated if hemoglobin A1c levels were associated with BV. We analyzed data from the Women’s Interagency HIV Study using multiple multivariable (GEE) logistic regression models to assess the relationship between BMI, BV and blood glucose. Women with a BMI >30 kg/m2 (obese) had a lower rate (multivariable adjusted OR 0.87 (0.79–0.97), p = 0.009) of BV compared to the reference group (BMI 18.5–24.9 kg/m2). There was a significantly lower rate of BV in post-menopausal obese women compared to the post-menopausal reference group, but not in pre-menopausal women. HIV- post-menopausal obese women had a significantly lower rate of BV, but this was not seen in HIV+ post-menopausal obese women. Pre-menopausal women with a higher hemoglobin A1c (≥6.5%) had a significantly lower rate (multivariable adjusted OR 0.66 (0.49–0.91), p = 0.010) of BV compared to pre-menopausal women with normal hemoglobin A1c levels (<5.7%), but there was no difference in post-menopausal women. This study shows an inverse association of BMI with BV in post-menopausal women and hemoglobin A1c with BV in pre-menopausal women. Further studies are needed to confirm these relationships in other cohorts across different reproductive stages and to identify underlying mechanisms for these observed associations. Introduction The vaginal microbiota plays an important role in susceptibility to HIV and other sexually transmitted infections (i.e. Neisseria gonorrhoeae, Chlamydia trachomatis and others), pelvic inflammatory disease and premature delivery [1][2][3]. A vaginal microbiota that consists predominantly of bacteria in the genus Lactobacillus is associated with protection from these conditions and is characterized by a low vaginal pH from Lactobacillus-mediated lactic acid production. Conversely, bacterial vaginosis (BV), a condition where anaerobic non-Lactobacillus spp. bacteria dominate the vaginal microbiota, is associated with a higher vaginal pH and greater susceptibility to adverse outcomes. Several behaviors including sexual activity, smoking and vaginal douching affect the makeup of the vaginal microbiota [1,4]. Menopause is also associated with decreased levels of vaginal Lactobacillus spp. [5,6]. However, the main biological influences that prevent BV and lead to more beneficial vaginal bacteria types are poorly understood. We previously found, among HIV-seronegative women enrolled in the Women's Interagency HIV Study (WIHS), that levels of glycogen in the lumen of the vagina were higher in women that had a high BMI [7]. Glycogen released by the vaginal epithelium is thought to support growth of beneficial Lactobacillus spp. such as L. crispatus leading to decreased BV [7][8][9][10]. 
The conditions that influence the levels of vaginal glycogen are unknown, but in skeletal muscle, higher levels of blood glucose, as occurs with carbohydrate loading by athletes or in diabetes, have been associated with increased muscle glycogen [11,12]. It is therefore plausible that increased blood glucose could also lead to increased vaginal glycogen levels which could be a mechanism for an association between BMI and increased genital glycogen. Since hemoglobin A1c (HbA1c) is routinely measured in the WIHS and reflects average blood glucose levels over the preceding 3 months [13], it could be measured to test the hypothesis that increased blood glucose impacts the genital microbiota. The current study was undertaken to determine if BMI was associated with BV in women in the WIHS, an ongoing longitudinal cohort study of HIV-seropositive and demographically similar HIV-seronegative women initiated in 1994 across several US cities. Analyzing and comparing the effects of BMI on BV in both HIV+ and HIV-women is important since HIV infection and its treatments have effects on metabolism and body weight [14]. Additionally, HIV infection itself has been suggested to affect the makeup of the genital microbiota [15][16][17][18], with several studies finding an increased rate of BV in HIV+ women. Hemoglobin A1c values, available for a subset of the women, were also analyzed to determine if there was a significant relationship with BV. Elucidating the relationships between physiologic parameters and the vaginal microbiota could help identify ways to intervene to promote healthy vaginal microbiota, prevent BV and improve women's reproductive health across reproductive stages. Materials and methods The WIHS cohort consists of US women living with or at risk for HIV. Study methods have previously been described in detail [19][20][21]. Women in the cohort have twice-yearly visits/ evaluations. For the purposes of this study, participant visits from October 1, 1994 through September 30, 2016 were included in this analysis. Participant visits in which the participant reported vaginal douching, use of vaginal medication, vaginal sex 48 hours before the visit, pregnancy, breastfeeding, and/or were less than 12 weeks postpartum were excluded from analyses because these factors potentially influence vaginal microbiota. In total, 10,184 person visits (15.3%) were excluded, consisting of; 1337 for douching in previous 48 hours; 1167 for vaginal meds use in previous 48 hours; 7584 for vaginal sex in previous 48 hours; 842 for current pregnancy; 230 for breastfeeding; and 166 for 12 weeks postpartum. The parent WIHS study and this data analysis conformed to the procedures for informed written consent approved by institutional review boards (IRB) at all sponsoring organizations and to human-experimentation guidelines set forth by the United States Department of Health and Human Services, and finally reviewed and approved by the Cook County Health Review Board. Bacterial vaginosis Bacterial vaginosis was identified by the presence of vaginal fluid pH >4.5 and at least 2 of the 3 other Amsel criteria: wet mount clue cells, amine odor of vaginal fluid when KOH is added (whiff test), and presence of a white/gray homogenous vaginal discharge. This is a modification of the standard Amsel criteria. 
However, for the first eight visits, occurring from October 1, 1994 through September 30, 1998, bacterial vaginosis was identified as having all three of the following criteria: the presence of a vaginal fluid pH >4.5; wet mount clue cells; and amine odor of vaginal fluid when KOH is added. There were 10,047 pre-1998 visits included in this study. Menopausal status Person visits were retrospectively categorized using available data as menopausal (surgical or natural) if a woman reported bilateral oophorectomy or hysterectomy and/or no menses for >12 months with no subsequent resumption of menses during follow-up observations, or premenopausal if a woman reported any menses within the past 6 months or subsequently resumed menses after a period of amenorrhea. Covariates Covariates including potential confounders were selected a priori based on previous literature and included age at visit, education level (<high school, completed high school, or >high school), alcohol use (>7 drinks per week or ≤7 drinks per week), tobacco use (current or none), drug use defined as use of crack, cocaine, and/or heroin (current or none), marijuana use (current or none), sexual activity and condom use (no vaginal sex since last visit, condom use always with one or more partners, sometimes or never condom use with one partner, or sometimes or never condom use with more than one partner), WIHS site (Bronx, Brooklyn, Washington D.C., Los Angeles, San Francisco, Chicago, Chapel Hill, Atlanta, Miami, Birmingham, or Jackson), parity (≥ one pregnancy or no pregnancies), recent yeast infection (self-reported infection since last visit or no recent infection), hormonal contraceptive use (current or none), absolute neutrophil count, hypertension (current vs previous or none), and chronic kidney disease (estimated glomerular filtration rate: ≥90, 60-89, 30-59, 15-29, and <15). Race (black, other, white) and ethnicity (Hispanic, not Hispanic) were included as a composite variable (black, Hispanic, other, white); those that self-reported Hispanic ethnicity were categorized as Hispanic regardless of self-identified race. HIV serostatus was included as a covariate and analyses were stratified by HIV status. For HIV+ only multivariable models, CD4 count (≥200, <200) was also included. All available HbA1c (≥6.5%, 5.7-6.4%, or <5.7%) measurements were also included in analysis (the WIHS measured this parameter in all women at a subset of visits). The frequency of genital infections was self-reported as being told by a healthcare provider that the participant had an infection since the last visit: gonorrhea 0.21%, syphilis 0.18%, chlamydia 0.43%, pelvic inflammatory disease 0.31%, herpes 3.23%, trichomoniasis 1.44%, and yeast 9.05%. Because yeast infection was relatively frequent, this was included in statistical modelling. Statistical analysis Bivariate analyses using chi-square and Mann-Whitney tests assessed the relationship between BV at a visit and sociodemographic and clinical variables, by menopausal status and HIV serostatus. To account for repeated measures within participants, generalized estimating equation (GEE) adjusted odds ratios and 95% confidence intervals were obtained.
Multiple multivariable (GEE) logistic regression models considered i) the association of BV and BMI (underweight, normal, overweight, and obese); ii) the association of BV and BMI stratified by menopausal status; iii) among the subset of participant visits with hemoglobin A1c data, the association of BV and BMI and BV and hemoglobin A1c, stratified by menopausal status; and iv) the previous model was also run separately for women living with HIV and HIV uninfected participants. The multivariable models adjusted for confounders including age at visit, HIV serostatus, race/ethnicity, education, alcohol use, cigarette use, drug use, marijuana use, sexual activity and condom use, parity, recent vaginal yeast infection, hormonal contraception, absolute neutrophil count, hypertension, and enrollment site. All analyses were performed using Statistical Analysis Software (SAS), version 9.4 (SAS Institute, Inc., Cary, NC). Results We identified 4,637 women (3,451 HIV+ and 1,186 HIV-) assessed in WIHS for BV, contributing to 56,537 visits; BV was found at 6,770 (12.0%) of those visits. Among women living with HIV, BV was found at 5,435 visits (11.6%) and at 1,335 visits (13.7%) among HIV uninfected women. The average age at time of enrollment of the women analyzed in this study was 37 years (range: 18-73). Other demographic, behavioral and physiologic data are shown in Table 1 for the index visit of the women in this study as well as the number of visits. The reference group (women with normal BMI, 18.5-24.9) had BV at 12.6% of visits. Adjusting for repeated measures, obese women (BMI >30) had a significantly lower likelihood of BV than the reference group (Table 2), even after adjusting for the covariates (OR 0.87 (0.79-0.97), p = 0.009). Overweight women (BMI 25.0-29.9) had lower rates of BV when compared to women with a normal BMI (18.5-24.9) in GEE univariable but not in multivariable adjusted analysis. Underweight women (BMI <18.5) did not have a different rate of BV than normal weight women. We further examined whether the relationship between BMI and BV differed by menopausal status. In multivariable-adjusted analysis of post-menopausal women, the rate of BV was significantly lower in obese and overweight women than in women of normal weight (Table 3). However, among pre-menopausal women, there was no statistically significant difference in BV rate between obese women and normal weight women when adjusting for the covariates. Underweight women, whether pre- or post-menopausal, did not have significantly different rates of BV than the reference group in adjusted analyses. To investigate the hypothesis that increased blood glucose levels in women might be associated with a lower prevalence of BV, we examined available hemoglobin A1c levels. Hemoglobin A1c values were collected in all WIHS participants over a subset of visits, and were available for women at 16,306 of the WIHS visits (3,543 subjects: 2,615 HIV+, 928 HIV-). In this subgroup, obese women also had a lower rate of BV in multivariable adjusted analysis (not shown). Post-menopausal obese women had a significantly lower rate of BV compared to women with a normal BMI in adjusted analysis while pre-menopausal women did not (Table 4). In adjusted analysis of pre-menopausal women, those with higher hemoglobin A1c (≥6.5%) levels had a significantly lower rate (OR 0.66 (0.49-0.91), p = 0.01) of BV than women with normal levels of hemoglobin A1c (<5.7%), but this association was not found among post-menopausal women (Table 4).
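For readers who want to see what the repeated-measures models described under Statistical analysis look like in practice, here is a minimal sketch of a visit-level logistic GEE. The column names, the exchangeable working correlation, and the use of Python's statsmodels instead of SAS 9.4 are illustrative assumptions, not the study's actual code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_bv_gee(visits: pd.DataFrame) -> pd.DataFrame:
    """Visit-level GEE logistic model for BV, clustered on participant id.

    Expects one row per visit with columns such as: bv (0/1), bmi_cat, age,
    hiv, race_eth, site, participant_id (hypothetical names; the real WIHS
    variable names differ).
    """
    model = smf.gee(
        "bv ~ C(bmi_cat, Treatment(reference='normal')) + age + C(hiv)"
        " + C(race_eth) + C(site)",                 # add the remaining covariates as needed
        groups="participant_id",
        data=visits,
        family=sm.families.Binomial(),              # logistic link for a binary outcome
        cov_struct=sm.cov_struct.Exchangeable(),    # working correlation for repeated visits
    )
    result = model.fit()

    # Adjusted odds ratios with 95% confidence intervals.
    ors = np.exp(result.params)
    ci = np.exp(result.conf_int())
    return pd.DataFrame({"OR": ors, "2.5%": ci[0], "97.5%": ci[1]})
```

With a binary outcome and an exchangeable working correlation, the exponentiated coefficients are covariate-adjusted odds ratios of the kind reported in the tables of this section.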
In all of the analyses described above, the models controlled for HIV serostatus. We further examined the effect of HIV serostatus on the relationship between BMI, menopausal status, hemoglobin A1c, and BV. When stratified by menopausal status, post-menopausal obese women had a lower rate of BV in HIV seronegative, but not among HIV seropositive women when compared to normal weight women (Table 5). In multivariable-adjusted analyses, among pre-menopausal HIV seropositive women, those with poor glycemic control (hemoglobin A1c: �6.5%) also had significantly lower rates of BV than women with normal levels of hemoglobin A1c. This relationship was not seen in adjusted analyses among HIV-seronegative women. Discussion This study found that, overall, participants in the WIHS that were obese (BMI>30) had a lower rate of BV than those with a normal BMI. This relationship between BMI and BV was only observed in post-menopausal women. HIV-seronegative post-menopausal obese women had a significantly lower rate of BV. Pre-menopausal, but not post-menopausal, women with elevated hemoglobin A1c had significantly lower BV than the reference group. Several other studies assessed the relationship between BMI and BV with substantial differences from this study both in the characteristics of the population studied and the findings. However, none of those studies assessed the relationships between BMI and BV either in the context of menopause status or HIV status. Lokken et al. [22] studied a cohort of female sex workers in Mombasa Kenya and found that obese women had a lower rate of BV compared with the control group. One similarity between that study and the current WIHS study is that both cohorts included HIV-seropositive and HIV-seronegative women. However, there are several important differences in the studies. First, the racial and ethnic makeup of the two cohorts is substantially different. Second, the Mombasa study only assessed women 46 years old or younger (their stated proxy for menopause) while the WIHS cohort includes women that, over the years of the study, range in age from 18-83 yrs (the average age of post-menopausal women that had BV in the WIHS study was greater than 50 yrs). Additionally, menopause is assessed using menstrual cycle criteria in the WIHS cohort [23]. While it was not analyzed in the Lokken et al. study, it is possible that a portion of the significant inverse relationship they observed between BMI and BV could have been influenced by the age of women, with older obese women having lower rates of BV, as was found in this WIHS study. Another difference between the studies was that the Mombasa study used Nugent criteria for assessing BV for their main conclusions while this WIHS study used Amsel criteria for assessing BV. However, the Mombasa study also performed the Amsel in parallel to the Nugent and interestingly, the Amsel method also revealed that obese women had a lower rate of BV. The Mombasa study controlled for HIV serostatus during analysis but did not stratify or report whether HIV serostatus affected the relationship between BMI and BV. Conversely, Brookheart et al. [24] found that obese women had a higher rate of BV when analyzing Nugent scores from 5,918 US women in the Contraceptive CHOICE Project. Our study did not observe a higher rate of BV in obese pre-menopausal women. A substantial difference from the current WIHS study was that recruitment for CHOICE targeted women ages 14-45 years old and mean age was 25.3 years old for all participants. 
Also, the CHOICE group were mainly HIV-seronegative. It is not clear why the CHOICE study found a higher rate of BV in obese women while the current WIHS study and the study by Lokken et al. did not observe this relationship in pre-menopausal women. Kancheva et al. [25] found in a small study of copper IUDs in Thai women with a median age of 39, that those with a BMI <20 had a higher prevalence of BV. This WIHS study also observed a significantly higher rate of BV in pre-menopausal women with a BMI <18.5 (Table 4), although in adjusted analysis this was not significant. Mastrobattista et al. [26] evaluated whether BMI would affect the outcome of treatment of BV. However, that study was performed in pregnant women (excluded from our analysis), and no effect of BMI on treatment was observed. Given the many dramatic changes in physiology caused by pregnancy, it would not be surprising if pregnancy can affect any relationships between BMI and BV. Previous studies provide evidence that vaginal glycogen can lead to growth of vaginal lactobacilli [7, 9, 10]. We did not measure vaginal glycogen levels in the women in this WIHS study. However, based on our previous study where we observed that obese women had higher levels of vaginal glycogen [7], we predicted that obese women might have a lower rate of BV. Further, since ingestion of higher levels of carbohydrates and higher levels of blood glucose are associated with higher muscle glycogen [11,12], we predicted that there would be a significant negative relationship between BV and hemoglobin A1c. However, while obese post-menopausal women were less likely to have BV, post-menopausal women with elevated hemoglobin A1c did not have a lower rate of BV suggesting that in obese post-menopausal women, the lower BV levels were not caused by increased glycogen due to higher glucose. Interestingly, pre-menopausal women with elevated hemoglobin A1c had significantly lower BV than the reference group of hemoglobin A1c <5.7% (Table 4) possibly indicating that increased blood glucose had an effect through glycogen although this apparently was not related to BMI. To our knowledge there are no studies of the effect of chronic diabetes on the rate of BV. Two studies examined women with gestational diabetes mellitus (GDM) and found no difference in the rate of BV from controls with no GDM [27,28]. African American women have been observed to have generally higher levels of diabetes and obesity and it will therefore be important for further studies to determine if ethnic or racial backgrounds affect the relationship between diabetes, obesity and BV. This study provides evidence that two very common conditions, obesity and diabetes, can affect the vaginal microbiota. The rate of obesity in women worldwide can be as high as 10-15%, and in the US, the rate is much higher [29]. Within some HIV-infected cohorts, aging is associated with increased obesity [30]. Additionally, HIV-infected African Americans have higher rates of obesity [31]. The prevalence of diabetes is approximately 10% in the US [32]. Diabetes is highly prevalent in HIV-infected persons (both men and women) [30]. Our current study found that while HIV-post-menopausal obese women had a significantly lower rate of BV, this relationship was not seen in HIV+ post-menopausal obese women. The immune system is significantly impacted during HIV infection which could affect the relationship between obesity and BV and may help explain this difference. 
There are few if any studies that address the impact of HIV infection on BV in post-menopausal women. However, several studies found that HIV infection is associated with changes in the vaginal microbiota in pre-menopausal women [15,16]. In contrast, Massad et al. [18] found that behavioral factors such as hygienic and sexual practices and smoking affected development of BV, while HIV infection itself had little impact on BV rates. While this study found an association between obesity and the rate of BV, this relationship did not appear to be due to increased blood glucose as measured by hemoglobin A1c. Our previous study showed, however, that vaginal glycogen was increased in obese women, and it is therefore possible that the lower rate of BV we observed in obese women in the WIHS was due to higher levels of vaginal glycogen. While estrogen has been posited to affect vaginal glycogen, in a previous study we did not find any association between vaginal glycogen and estrogen [10]. A separate study in menopausal women also did not find a significant relationship between vaginal glycogen and serum estrogens [33]. However, the relationships between glycogen, obesity and estrogen are highly complex. For example, in menopausal women, estrogen replacement therapy impacts body fat distribution and may reduce obesity [34]. Also, ovariectomy in animal models can lead to increased body weight [35]. Therefore, further research is needed to determine the underlying causes of the association between obesity, increased hemoglobin A1c and bacterial vaginosis. There were several limitations to this study. We found early-onset menopause (onset at age <50 years, prolonged amenorrhea, and no later resumption of menses) in 1,874 (3.32%) of participant visits. At the index visit and using this definition, 18% of women were in menopause, and by the end of the study 33.2% of participants were menopausal. Based on our definition of menopause in this study, we are possibly mislabeling as menopausal a small subset of women who have prolonged but eventually reversible amenorrhea that was simply not captured. It is also unclear whether these women with no later resumption of menses are all truly post-menopausal, or if there is another or unknown etiology for the prolonged amenorrhea. Additionally, some data were excluded (see methods) and this could have biased results. Further, this study determined BV using the Amsel criteria instead of Nugent's Gram stain criteria [36], which have been suggested to be more rigorous in the identification of BV. The Amsel criteria are considered to be highly specific, so most identified cases are likely true cases. However, the Amsel criteria are not highly sensitive and it is possible that cases of BV were missed; rates of BV were higher in a previous single-site WIHS study where BV rates based on Nugent scoring were 33% [37]. Additionally, there was a change in the Amsel criteria used to define BV in 1998, which may have impacted some of the findings. Interestingly, in a cross-sectional analysis of a single visit of data from the WIHS where data for both tests were available, BV by Amsel criteria and BV by Nugent score (≥7 out of 10) were strongly associated (odds ratio = 45.7, 95% confidence interval: 21.4, 97.3) (ED unpublished observation) (Nugent data was not collected for all WIHS visits). Despite these weaknesses, there were also several strengths to the study.
The WIHS is a large longitudinal cohort study conducted over several decades, so there is a relatively large amount of data available spanning several reproductive stages. Also, the WIHS has sites throughout the contiguous US, providing geographic representation as well as the ability to control for areas with higher rates of STI acquisition. In summary, in the WIHS cohort, obese post-menopausal women had a significantly lower BV rate compared to post-menopausal women with a normal BMI. To our knowledge, this is the first study to report the relationship between BV and BMI in post-menopausal women. This relationship between BMI and BV was not seen in pre-menopausal women, and it did not appear to be related to blood glucose levels in post-menopausal women. However, in pre-menopausal women with a higher hemoglobin A1c (≥6.5%), there was a significantly lower BV rate. These results suggest that the role of glycemia and vaginal glycogen in pre- and post-menopausal women, and their relationship with BV, should be further explored.
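For completeness, the 2×2 agreement calculation quoted in the limitations above (an odds ratio with a Woolf 95% confidence interval for BV by Amsel criteria versus BV by Nugent score) can be sketched as follows; the cell counts are invented for illustration and are not WIHS data.

```python
# Illustrative only: odds ratio and Woolf 95% CI from a 2x2 agreement table of the
# kind quoted above. The cell counts are made up, not WIHS data.
import numpy as np

# Rows: Amsel BV yes/no; columns: Nugent >=7 yes/no.
a, b = 120, 30   # Amsel+ / Nugent+, Amsel+ / Nugent-
c, d = 25, 800   # Amsel- / Nugent+, Amsel- / Nugent-

or_hat = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)          # Woolf standard error of log OR
lo, hi = np.exp(np.log(or_hat) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {or_hat:.1f}, 95% CI: {lo:.1f}-{hi:.1f}")
```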
2021-03-10T06:23:17.264Z
2021-03-08T00:00:00.000
{ "year": 2021, "sha1": "e270dfcafa4276070455437bf72f7c1c500e8835", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0248136&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b2886bce38eb99d9da29d459e210c8151a5c98f9", "s2fieldsofstudy": [ "Medicine", "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
248049592
pes2o/s2orc
v3-fos-license
Tumor blood flow and apparent diffusion coefficient histogram analysis for differentiating malignant salivary tumors from pleomorphic adenomas and Warthin’s tumors We aimed to assess the combined diagnostic value of apparent diffusion coefficient (ADC) and tumor blood flow (TBF) obtained by pseudocontinuous arterial spin labeling (pCASL) for differentiating malignant tumors (MTs) in salivary glands from pleomorphic adenomas (PAs) and Warthin’s tumors (WTs). We used pCASL imaging and ADC map to evaluate 65 patients, including 16 with MT, 30 with PA, and 19 with WT. We evaluated all tumors by histogram analyses and compared various characteristics by one-way analysis of variance followed by Tukey post-hoc tests. Diagnostic performance was evaluated by receiver operating characteristic (ROC) curve analysis. There were significant differences in the mean, 50th, 75th, and 90th percentiles of TBF among the tumor types, in the mean TBFs (mL/100 g/min) between MTs (57.47 ± 35.14) and PAs (29.88 ± 22.53, p = 0.039) and between MTs and WTs (119.31 ± 50.11, p < 0.001), as well as in the mean ADCs (× 10−3 mm2/s) between MTs (1.08 ± 0.28) and PAs (1.60 ± 0.34, p < 0.001), but not in the mean ADCs between MTs and WTs (0.87 ± 0.23, p = 0.117). In the ROC curve analysis, the highest areas under the curves (AUCs) were achieved by the 10th and 25th percentiles of ADC (AUC = 0.885) for differentiating MTs from PAs and the 50th percentile of TBF (AUC = 0.855) for differentiating MTs from WTs. The AUCs of TBF, ADC, and combination of TBF and ADC were 0.850, 0.885, and 0.950 for MTs and PAs differentiation and 0.855, 0.814, and 0.905 for MTs and WTs differentiation, respectively. The combination of TBF and ADC evaluated by histogram analysis may help differentiate salivary gland MTs from PAs and WTs. Recently, arterial spin labeling (ASL) techniques, such as pulsed ASL or pseudocontinuous ASL (pCASL), were introduced for clinical applications 9 . This method has been applied for noninvasive measurement of tumor blood flow (TBF) by using the magnetization of protons in arterial blood as an intrinsic tracer without an exogenous contrast agent 9 . There have been only a few reports on the usefulness of ASL for differentiating salivary gland tumors so far [10][11][12] . The use of multiparametric MRI, such as DWI and ASL, may help radiologists by increasing their efficiency in the differential diagnosis of salivary gland tumors. This is because this method may decrease unnecessary examinations and invasive procedures, such as biopsies. We aimed to assess the combined diagnostic value of ADC and TBF for differentiating MTs in salivary glands from PAs and WTs. Results A total of 65 subjects (age range, 11-86 years; mean 59 years; 34 males and 31 females) were finally included. There were 16 subjects with MTs, 30 with PAs, and 19 with WTs. The characteristics of patients are described in Table 1. The pathology of MTs was variable, including five carcinoma ex pleomorphic adenomas, two aciniccell carcinomas, two adenocarcinomas, two adenoid cystic carcinomas, two mucoepidermoid carcinomas, one basal-cell adenocarcinoma, one epithelial myoepithelial carcinoma, and one salivary-duct carcinoma. One patient with PAs and eight patients with WTs had multiple or bilateral tumors. Among these patients, only the largest one was assessed. Comparison of the parameters for TBF and ADC between MTs, PAs, and WTs. Figures Tables 2 and 3 show the parameter measurements of TBF and ADC, respectively, in MTs, PAs, and WTs. 
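The group comparisons reported below were made by one-way analysis of variance followed by Tukey post-hoc tests, as stated in the abstract and Methods. A minimal sketch of such a comparison is given here; the arrays are synthetic stand-ins generated from the reported group means, standard deviations, and sample sizes, not the actual per-patient values in Tables 2 and 3.

```python
# Sketch of the group comparison used in this study: one-way ANOVA followed by
# Tukey post-hoc tests on mean TBF per tumor. Data are synthetic stand-ins drawn
# from the reported group means/SDs (MT: 57.47+/-35.14, PA: 29.88+/-22.53,
# WT: 119.31+/-50.11 mL/100 g/min), not the actual patient data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
tbf_mt = rng.normal(57.47, 35.14, 16)    # malignant tumors
tbf_pa = rng.normal(29.88, 22.53, 30)    # pleomorphic adenomas
tbf_wt = rng.normal(119.31, 50.11, 19)   # Warthin's tumors

f_stat, p_val = f_oneway(tbf_mt, tbf_pa, tbf_wt)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.3g}")

values = np.concatenate([tbf_mt, tbf_pa, tbf_wt])
groups = ["MT"] * 16 + ["PA"] * 30 + ["WT"] * 19
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```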
There was a significant difference in the mean ADCs between MTs (1.08 ± 0.28 × 10−3 mm2/s) and PAs (1.60 ± 0.34 × 10−3 mm2/s, p < 0.001) but not between MTs and WTs (0.87 ± 0.23 × 10−3 mm2/s, p = 0.117). There were no ADC parameters that showed significant differences for all three combinations of tumor types (MT and PA, MT and WT, and PA and WT). When differentiating MTs from WTs, the 50th percentile of TBF had the best diagnostic performance out of all TBF and ADC parameters, with an AUC of 0.855 (95% CI, 0.733-0.977, p < 0.001), which is considered medium diagnostic performance. The best detected cutoff point was 78.02 mL/100 g/min, yielding a sensitivity and a specificity of 84.2% and 75.0%, respectively. When differentiating PAs from WTs, the 10th percentile of ADC had the best diagnostic performance out of all TBF and ADC parameters, with an AUC of 0.984 (95% CI, 0.958-1.000, p < 0.001), which is considered high diagnostic performance. The best detected cutoff point was 0.79 × 10−3 mm2/s, yielding a sensitivity and a specificity of 100.0% and 89.5%, respectively. Figure 4 summarizes the diagnostic performance of the parameters. In differentiating MTs from PAs, the AUC for the combination of TBF all and ADC all (0.950; 95% CI, 0.892-1.000, p < 0.001) was higher than those for TBF all alone (0.850; 95% CI, 0.739-0.961, p < 0.001) and ADC all alone (0.885; 95% CI, 0.787-0.984, p < 0.001), which suggests that the diagnostic performance improved from medium to high with the combination of TBF all and ADC all. In differentiating MTs from WTs, the AUC for the combination of TBF all and ADC all (0.905; 95% CI, 0.805-1.000, p < 0.001) was higher than those for TBF all alone (0.855; 95% CI, 0.733-0.977, p < 0.001) and ADC all alone (0.814; 95% CI, 0.664-0.964, p = 0.002), which suggests that the diagnostic performance improved from medium to high with the combination of TBF all and ADC all. In differentiating PAs from WTs, the AUC for the combination of TBF all and ADC all (1.000; 95% CI, 1.000-1.000, p < 0.001) was higher than that for TBF all alone (0.968; 95% CI, 0.929-1.000, p < 0.001) and the same as that for ADC all alone (1.000; 95% CI, 1.000-1.000, p < 0.001), which suggested a medium diagnostic performance for TBF all alone and high performance for both ADC all alone and the combination of TBF all and ADC all. (Figure legend fragment: the tumor showed medium TBF (arrow); the ROI was manually drawn on the ADC map (e, yellow) and copied to the TBF map (d, yellow); the TBF histogram (f) and ADC histogram (g) are presented; the 50th percentile of TBF was 50.92 mL/100 g/min and the 10th percentile of ADC was 0.82 × 10−3 mm2/s.) In differentiating MTs from benign tumors (BTs), including PAs and WTs, the AUC for the combination of TBF all and ADC all (0.930; 95% CI, 0.865-0.995, p < 0.001) was higher than those for TBF all alone (0.811; 95% CI, 0.709-0.914, p < 0.001) and ADC all alone (0.895; 95% CI, 0.821-0.970, p < 0.001), which suggests that the diagnostic performance improved from medium to high with the combination of TBF all and ADC all. Supplementary Figure S1 presents the scatter plots for MTs, PAs, and WTs, which represent the propensity scores of TBF all and ADC all for each tumor. Table S5 shows the intraclass correlation coefficients (ICCs) of the measurements by the two observers. 
Excellent agreements were observed for all parameters except for the skewness of ADC, which showed good agreement. Discussion In this study, the diagnostic performance of the combination of TBF all and ADC all for differentiating MTs from PAs and WTs improved relative to the performance of each parameter alone. However, in differentiating PAs from WTs, the diagnostic performance of ADC all alone showed perfect discrimination, and therefore the value of adding the combination of ADC all and TBF all was low. To the best of our knowledge, this is the first study to evaluate the usefulness of the combination of pCASL and the ADC map by histogram analysis for differentiating malignant salivary gland tumors from PAs and WTs. According to Kato et al., qualitative analysis showed that TBF was significantly higher in WTs than in PAs and MTs but did not show a significant difference between PAs and MTs 10 . However, we demonstrated that the mean, 50th, 75th, and 90th percentiles of TBF could differentiate MTs, PAs, and WTs. We speculate that the differences in ASL methods may explain why their results differed from ours. They placed the regions of interest (ROIs) on both a tumor and the contralateral normal parotid gland parenchyma at the same level and then evaluated tumor-to-parotid signal intensity ratios from ASL images, supposing that those ratios are surrogates of TBF 10 . They measured the relative ratio of salivary gland tumors to normal parotid glands, whereas we measured the TBF values of tumors quantitatively. Consequently, histogram analysis may overcome the limitations of qualitative analysis. Moreover, they used an alternating radio-frequency ASL sequence with gradient echo-type single-shot echo-planar imaging (MP-EPISTAR), which suffers from susceptibility artifacts more seriously than pCASL sequences that use 3D turbo spin-echo (TSE) acquisition 10 . (Figure legend fragment: contrast-enhanced image showing slight heterogeneous enhancement (arrow); TBF color map (c) showing low TBF (arrow); the ROI was manually drawn on the ADC map (e, yellow) and copied to the TBF map (d, yellow); the TBF histogram (f) and ADC histogram (g) are presented; the 50th percentile of TBF was 11.17 mL/100 g/min and the 10th percentile of ADC was 1.71 × 10−3 mm2/s.) A recent report stated that metrics, such as percentiles, kurtosis, and skewness, calculated by histogram analysis are strong and reliable quantitative surrogate markers of tumor heterogeneity 13 . Thus, we consider that the microenvironment of a tumor could be masked by evaluating only a single parameter, such as the mean value. Yamamoto et al. demonstrated that the mean TBF value was significantly higher in WTs than in PAs by using the pCASL sequence with conventional ROI analysis 11 . They also showed that the higher mean TBF of WTs than of PAs was attributable to higher micro-vessel density in WTs than in PAs 11 . Furthermore, our results revealed that the 75th and 90th percentiles of TBF exhibited higher AUC values than the mean TBF. Consequently, histogram analysis appears to provide more detailed information about TBF. Kato et al. reported that the mean ADC values were significantly higher in PAs than in WTs and MTs but were not significantly different between WTs and MTs 10 . 
Their results were consistent with our results showing that all ADC parameters except for skewness and kurtosis were significantly different between PAs and WTs and between PAs and MTs, but not between WTs and MTs. Razek et al. studied ADC values by histogram analysis for diagnosis of PAs, WTs, and MTs and reported significant differences in the means and skewness of ADC among all three tumors, although these differences between WTs and MTs were weaker than those between PAs and WTs and PAs and MTs 14 . Histopathologically, PAs comprise an abundant myxoma-like stroma 6,11 , which probably contributed to the highest value obtained for it among the three types of tumors in all ADC parameters, except for the skewness and kurtosis values for ADC, in our study. In contrast, WTs showed the lowest value among all ADC parameters, except for the skewness and kurtosis values for ADC, which might reflect epithelial and lymphoid stromata with microscopic slit-like cysts filled with proteinous fluid 2,6 . Regarding the other conventional method, time-intensity curve patterns on dynamic contrast-enhanced MRI were found useful in the differentiation of salivary gland tumors 15 . Nevertheless, it requires contrast media, which can be harmful to patients with renal dysfunction or allergies to these materials. Moreover, dynamic contrastenhanced MRI only allows for one series of scans. In contrast, ASL can overcome these drawbacks and allows for repeat scanning without any contrast agents. Further, the time-intensity curve cannot provide quantitative data. For that reason, we focused on the noninvasive and quantitative MRI techniques of pCASL and ADC. There were several limitations in this study. First, the study was conducted at a single institution with a relatively small number of subjects. Further studies with a larger number of subjects and a wider range of benign and malignant tumor types are required to confirm the efficacy of pCASL imaging and ADC map in evaluating salivary gland tumors. Furthermore, we should consider classifying malignant tumors into low, intermediate, and high grades and evaluate each group to facilitate the management of patients at an earlier stage. Regarding the analytical method, we could not evaluate the whole pCASL image slices and ADC maps for each tumor. Particularly, MTs tend to have heterogenous characteristics. Thus, whole-tumor evaluation would be desirable in future studies. Moreover, we evaluated limited parameters in histogram analysis. Thus, we need to consider other parameters, such as entropy, to provide further information on tumor heterogeneity. In conclusion, the combination of TBF and ADC evaluated by histogram analysis was found to be helpful for differentiating MTs from PAs and WTs in salivary glands. Methods Subjects. This study was approved by the ethics committee of our university, and the requirement for written informed consent was waived because of the retrospective study design. All procedures were conducted according to the principles of the World Medical Association Declaration of Helsinki. We retrospectively identified 170 patients suspected of salivary gland tumors who had undergone pretreatment MRI between December 2015 and September 2020. 
Patients who fulfilled the following criteria were included: (a) available preoperative 3 T MRI with sufficient image quality, including pCASL images, DWI, T1-weighted images, contrast-enhanced T1-weighted images, and T2-weighted images; (b) tumor size > 10 mm; (c) tumors pathologically proven using fine-needle aspiration biopsy or surgical resection; and (d) diagnosis of MT, PA, or WT of the salivary gland. Patients were excluded for the following reasons: absence of a definitive diagnosis from biopsy or surgical resection (n = 37); histological diagnosis other than MT, PA, or WT (n = 19); lack of contrast-enhanced T1-weighted images (n = 20); lesions with large necrosis, cysts, hemorrhage, or infectious complications (n = 11); tumors smaller than 10 mm (n = 2); ADC map with artifact (n = 1); use of a different pCASL protocol (n = 3); and data loading error in the software (n = 12). A total of 65 patients met our inclusion criteria. TBF (in mL/100 g/min) was calculated as

TBF = [6000 · λ · (SI control − SI label ) · e^(PLD/T 1,blood )] / [2 · α · T 1,blood · SI PD · (1 − e^(−τ/T 1,blood ))],

where λ is the blood/tumor-tissue water partition coefficient (1.0 g/mL), and SI control and SI label are the time-averaged signal intensities in the control and label images, respectively. T 1,blood is the longitudinal relaxation time of blood (1650 ms), α is the labeling efficiency (0.85), SI PD is the signal intensity of a proton density-weighted image, and τ is the label duration (1650 ms). The value of λ was 1.0 mL/g. To calculate TBF, we used the same model and conditions as those used for calculating blood flow in the brain. Conventional MRI protocol. All patients underwent MRI on a 3 T MRI system (Ingenia). Image analysis. Image analysis was performed by using a custom software application developed in MATLAB 2020a. The custom software displays the ADC map and the pCASL map for the same patient side by side on the monitor. The displayed slice of each map can be changed. Two board-certified neuroradiologists (F.T. and R.K.) reviewed all MRI sequences. First, we identified the tumors on T1-weighted images, T2-weighted images, and contrast-enhanced T1-weighted images. The ROIs were manually drawn around the tumor margin at its maximum diameter on the ADC map by using the software. The ROIs covered as much of the solid part of the tumor as could be visually traced, avoiding areas of necrosis, cysts, or hemorrhage. Then, the segmented ROI was copied from the ADC map and pasted onto the pCASL image by using the software. Histograms were generated for each map, and the histogram features for each image were determined from those histograms. The following 10 objective features were determined as histogram features in the custom software: (1) minimum (min), (2) mean, (3) maximum (max), (4) 10th percentile, (5) 25th percentile, (6) 50th percentile, (7) 75th percentile, (8) 90th percentile, (9) skewness, and (10) kurtosis. The histogram features of TBF and ADC were measured twice in each ROI, and these measurements were averaged. Statistical analysis. Statistical analysis was performed by using SPSS v. 25.0 software (IBM SPSS Statistics for Windows, IBM Corp., Armonk, NY). Pearson's chi-square test was used to compare sex, tumor sub-site, and diagnostic method among the tumor types, and one-way analysis of variance was used to compare age and tumor diameter among the tumor types. All 10 parameters of the TBF and ADC values were assessed. Significant differences among the groups were analyzed using one-way analysis of variance followed by Tukey post-hoc tests, after the Shapiro-Wilk test was performed to assess the normality of the data distribution. 
A p value of < 0.05 was considered to be indicative of statistical significance. ROC curve analyses were performed to investigate the diagnostic performance of each parameter of TBF and ADC. All TBF parameters combined using binomial logistic regression were indicated as TBF all ; all ADC parameters combined using binomial logistic regression were indicated as ADC all ; and all TBF and ADC parameters combined using binomial logistic regression were indicated as TBF all + ADC all . These terms were used in differentiating MTs from PAs, MTs from WTs, PAs from WTs, and MTs from BTs, including PAs and WTs. We considered AUC values < 0.7, 0.7-0.9, and > 0.9 to indicate low, medium, and high diagnostic performance, respectively. Cutoff values were calculated with the maximum of the Youden index (Youden index = sensitivity + specificity − 1). A p value of < 0.05 was considered to be indicative of statistical significance. Interobserver agreement on TBF and ADC values between the two readers was evaluated by ICCs. ICCs are considered excellent if > 0.74 16 . Ethics statement. This study was approved by the ethics committee of Mie University School of Medicine, and the requirement for written informed consent was waived because of the retrospective study design. All study procedures were conducted according to the principles of the World Medical Association Declaration of Helsinki. Data availability. The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.
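As a concrete illustration of the histogram features and the Youden-index cutoff selection described in this section, the following is a minimal sketch; it is not the authors' MATLAB/SPSS workflow, and the example data are synthetic (loosely mimicking the reported group means for the 50th percentile of TBF).

```python
# Minimal sketch (not the authors' software): the 10 histogram features listed in
# the Methods, and an ROC cutoff chosen by the Youden index
# (Youden index = sensitivity + specificity - 1). Example data are synthetic.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.metrics import roc_curve, roc_auc_score

def histogram_features(roi_values):
    v = np.asarray(roi_values, dtype=float)
    return {
        "min": v.min(), "mean": v.mean(), "max": v.max(),
        "p10": np.percentile(v, 10), "p25": np.percentile(v, 25),
        "p50": np.percentile(v, 50), "p75": np.percentile(v, 75),
        "p90": np.percentile(v, 90),
        "skewness": skew(v), "kurtosis": kurtosis(v),
    }

# Synthetic 50th-percentile TBF values: malignant tumors (label 1) vs Warthin's tumors (label 0).
rng = np.random.default_rng(1)
tbf_p50 = np.r_[rng.normal(57, 35, 16), rng.normal(119, 50, 19)]
labels = np.r_[np.ones(16), np.zeros(19)]

fpr, tpr, thresholds = roc_curve(labels, -tbf_p50)   # negate: lower TBF suggests malignancy
youden = tpr - fpr                                   # Youden index at each threshold
best = np.argmax(youden)
print("AUC:", roc_auc_score(labels, -tbf_p50))
print("Best TBF cutoff (mL/100 g/min):", -thresholds[best],
      "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])
```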
2022-04-10T06:22:48.294Z
2022-04-08T00:00:00.000
{ "year": 2022, "sha1": "b067fdc60fb5b3602122aad8c769a1a201ea81d5", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-022-09968-2.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "07fc4112e7fb0d7cf157bd69d3dafa1c28765818", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247207688
pes2o/s2orc
v3-fos-license
Concomitant trans-sternal repair of Morgagni hernia and ventricular septal defect in a patient with Down syndrome: A case report Introduction Morgagni hernia is a rare type of hernia occurring secondary to potential anterior-medial defects in the diaphragm. The association of the defect with congenital cardiac pathologies and Down syndrome are well known. The defect is repaired usually by trans-abdominal or transthoracic approaches. Trans-sternal repair of the hernia is preferred in patients undergoing concomitant open heart surgery. Case presentation A 2-year-old child with Down syndrome underwent concomitant repair of Morgagni hernia and closure of his ventricular septal defect under cardiopulmonary bypass. The hernia was corrected by the sternotomy approach, without opening the hernia content, before the correction of the cardiac pathology. The patient made an uneventful recovery and was discharged on the 4th postoperative day. Discussion Preoperative diagnosis of diaphragmatic hernia in congenital heart disease is important to decrease mortality rate. However, trans-sternal exposure of the defect is also possible, as in this case, in patients undergoing open heart surgery for congenital cardiac defects. The defect can be repaired by this approach, concomitantly with the cardiac anomaly, no need for an additional incision and without opening the hernia sac. Conclusion Our experience, although very limited, in patients who are suffering from Morgagni hernia and concomitant congenital heart defects shows that simultaneous repair of Morgagni hernia through midline sternotomy prior to cardiac procedure is effective. As Morgagni hernia can be accompanied with many congenital cardiac anomalies, cardiac surgeons should be familiar with the trans-sternal approach to the defect. Introduction Morgagni hernia is a rare type of hernia occurring secondary to potential anterior-medial defects in the diaphragm [1]. This anomaly usually diagnosed incidentally on a chest radiograph ordered for some other reasons, commonly a respiratory complaint. Some cases of Morgagni hernia can be associated with congenital heart defects (atrial and/ or ventricular septal defects, patent ductus arteriosus), chest wall abnormalities, and some genetic syndromes, especially with Down syndrome [2][3][4][5][6]. The defect is usually repaired by a transabdominal or transthoracic approach. Based on the Surgical Case Report, 2020 (SCARE) guidelines, here, we report the concomitant repair of Morgagni hernia by a sternotomy approach in a child with Down syndrome who had undergone open-heart surgery for the closure of a ventricular septal defect [7]. Case presentation A 2-year-old male child with Down syndrome was referred to Mellat Medical for Ventricular Septal Defect (VSD) closure by Afghan Red Crescent Society. While he had a positive family history of Down syndrome in his older brother, the social history and psychosocial history were unremarkable. While completing the routine preoperative investigations for VSD closure, suspicion of diaphragmatic hernia was noticed with the observation of the chest roentgenography of a possible herniation of the bowel loops into the right hemithorax (Fig. 1). An anterior diaphragmatic (Morgagni) hernia was established by CT-scan of the chest which contained both small and large bowel loops (Fig. 2). 
The two-dimensional color-flow Doppler echocardiography of the patient demonstrated a perimembranous ventricular septal defect (VSD) with inlet widening, pulmonary hypertension, and mild pericardial effusion. The decision for VSD closure and concomitant repair of the diaphragmatic defect was then taken after discussion with the cardiac surgeon and the pediatric general surgeon. During the operation, the thorax was opened by a vertical midline sternotomy. The peritoneal sac and the defect were seen after the parietal pleura was removed from the area to the right side. The hernia sac was dissected from the pericardium and retrosternal space and was reduced to the abdominal cavity through the defect with the transverse colon in it (Fig. 3). Horizontal nylon sutures were sequentially placed through the edge of the diaphragmatic defect and into the retrosternal fascia and periosteum. There was no need to use mesh, as the defect was not large and could be closed without tension as the sutures were tied. After the diaphragmatic hernia was repaired, the pericardium was opened and serous pericardial fluid was aspirated. Cardiopulmonary bypass was established in a routine manner by aortic and bicaval cannulation. The VSD was closed via a right atrial approach with an autologous pericardial patch. The patient was taken off cardiopulmonary bypass upon completion of the cardiac procedure. The nasogastric tube was kept in place until gastrointestinal (GI) function returned to normal. The postoperative postero-anterior and lateral roentgenograms of the patient showed the disappearance of the mediastinal shadows (Fig. 4). After 4 days of hospitalization, the patient made an uneventful recovery and was discharged with normal GI function. At 3-month follow-up the patient is healthy, vital signs are within normal limits, and echocardiography shows satisfactory VSD closure with normal biventricular function. Discussion Morgagni hernia is a rare type of diaphragmatic hernia in the pediatric age group and is an uncommon form of congenital diaphragmatic hernia (CDH); its frequency varies from 1% to 9.6% in large studies [2,8]. The foramen of Morgagni is located in the retrosternal space and results from failure of fusion of the fibrotendinous part of the pars tendinalis, arising from the costochondral arches, with the fibrotendinous portion of the pars sternalis. This space is usually filled with fat and covered by pleura superiorly and peritoneum inferiorly. When present, the defect provides a path through which abdominal viscera can herniate into the chest. Morgagni hernia usually takes place at the anteromedial part of the junction of the thoracic wall and septum transversum. In most of the cases (90%) the defect is right-sided, while the remaining defects are left-sided or bilateral [9]. Morgagni hernia is usually asymptomatic when there is no bowel in the defect; otherwise, the risk of incarceration is an urgent indication for surgery. The colon, small bowel, liver, omentum, and stomach are the most common contents of the defect. It frequently causes respiratory discomfort due to compression of the lower lobe of the ipsilateral lung. Cardiac anomalies are associated with CDH in up to 26% of cases [10]. Of every 5 liveborn infants with Morgagni hernia, 3 had trisomy 21. 
The considerable association between the hernia and Down syndrome is possibly due to defective dorsoventral migration of rhabdomyoblasts from the paraxial myotomes, caused by an increased cellular adhesiveness in trisomy 21 [6]. The transabdominal route of repair is the most widely preferred method, but the transthoracic approach through a limited thoracotomy is advocated if the herniated sac has solid contents [2]. Recently, laparoscopic techniques have gained significant attention and are considered an alternative approach for diaphragmatic hernias [11]. However, trans-sternal exposure of the defect is also possible, as in this case, in patients undergoing open-heart surgery for congenital cardiac defects. The defect can be repaired by this approach, simultaneously with the cardiac anomaly. As Morgagni hernia can be accompanied by many congenital cardiac anomalies, cardiac surgeons should be familiar with the trans-sternal approach to the defect, which is as effective as other surgical approaches. Conclusion Our experience, although very limited, with patients suffering from Morgagni hernia and concomitant congenital heart defects reveals that simultaneous repair of Morgagni hernia through midline sternotomy prior to the cardiac procedure is safe and effective. Consent Informed consent was obtained from the patient's legal guardian (his mother) for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request. CRediT authorship contribution statement AAD, HGT and MAR conceived and designed the study; AFW and HGT wrote the manuscript; MAR and FN helped collect data; SHM confirmed the eligibility of the participants for the study; SHM and FN supervised the whole study and approved the final version of the manuscript. Declaration of competing interest The authors report no declarations of interest. Fig. 3. The hernia sac was dissected from the pericardium and retrosternal space and was reduced to the abdominal cavity through the defect with the transverse colon in it. Fig. 4. Postoperative X-ray showing the disappearance of the mediastinal shadows.
2022-03-03T16:16:17.130Z
2022-03-01T00:00:00.000
{ "year": 2022, "sha1": "323a328c11fda47627bdc612cede36308854fc53", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ijscr.2022.106911", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "816cb6f5b10db9cf8a3c72bb421225d7d42c0c72", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
88514925
pes2o/s2orc
v3-fos-license
Jump activity estimation for pure-jump semimartingales via self-normalized statistics We derive a nonparametric estimator of the jump-activity index β of a "locally stable" pure-jump Itô semimartingale from discrete observations of the process on a fixed time interval with mesh of the observation grid shrinking to zero. The estimator is based on the empirical characteristic function of the increments of the process scaled by local power variations formed from blocks of increments spanning shrinking time intervals preceding the increments to be scaled. The scaling serves two purposes: (1) it controls for the time variation in the jump compensator around zero, and (2) it ensures self-normalization, that is, that the limit of the characteristic function-based estimator converges to a nondegenerate limit which depends only on β. The proposed estimator leads to nontrivial efficiency gains over existing estimators based on power variations. In the Lévy case, the asymptotic variance decreases multiple times for higher values of β. The limiting asymptotic variance of the proposed estimator, unlike that of the existing power variation based estimators, is constant. This leads to further efficiency gains in the case when the characteristics of the semimartingale are stochastic. Finally, in the limiting case of β = 2, which corresponds to jump-diffusion, our estimator of β can achieve a faster rate than existing estimators. 1. Introduction. In this paper we are interested in estimating the jump activity index of a process defined on a filtered probability space (Ω, F, (F t ) t≥0 , P) and given by

dX t = α t dt + σ t− dL t + dY t , (1.1)

when L is a locally stable pure-jump Lévy process (i.e., a pure-jump Lévy process whose Lévy measure around zero behaves like that of a stable process) and Y is a pure-jump process which is "dominated" at high frequencies by L in a sense which is made precise below; see Assumption A. All formal conditions for X are given in Section 2. The jump activity index of X on a given fixed time interval is the infimum of the set of powers p for which the sum of pth absolute moments of the jumps is finite. Provided σ does not vanish on the interval and has càdlàg paths, the jump activity index of X coincides with the Blumenthal-Getoor index of the driving Lévy process L (recall Y is dominated by L at high frequencies). The dominant role of L at high frequencies, together with its stable-like Lévy measure around zero, manifests in the following limiting behavior at high frequencies: as h → 0 and s ∈ [0, 1],

(X_{t+sh} − X_t)/h^{1/β} −→ σ_t × S_s, (1.2)

for every t, where S is a β-stable process, with the convergence being for the Skorokhod topology. Equation (1.2) holds when β > 1, which is the case we consider in this paper. (When β < 1 the drift will be the "dominant" component at high frequencies, and some of our results can be extended to this case as well.) We study estimation of β from discrete equidistant observations of X on a fixed time interval with mesh of the observation grid shrinking to zero. Estimation of the jump activity index has received a lot of attention recently. [20] consider estimation from low-frequency observations in the setting of Lévy processes. [4] and [6] consider estimation from low-frequency data in the setting of time-changed Lévy processes with an independent time-change process. [2] consider estimation from low-frequency and options data. [3] and [5] consider estimation from low-frequency data in certain stochastic volatility models. 
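The local stable approximation in (1.2) is easy to check numerically. The sketch below is illustrative only and assumes the simplest special case of (1.1) (a symmetric β-stable driving process, constant σ, and a drift): scaling the increments by h^{1/β} should produce essentially the same empirical characteristic function at two different mesh sizes, reflecting the self-similarity implied by (1.2). All parameter values are arbitrary.

```python
# Illustrative sketch of the scaling in (1.2): for X_t = a*t + sigma*L_t with L a
# symmetric beta-stable Levy process, increments scaled by h^{1/beta} have
# (approximately) the same law for different small h. Parameters are arbitrary.
import numpy as np
from scipy.stats import levy_stable

beta, sigma, drift = 1.7, 1.5, 0.3
rng = np.random.default_rng(11)

def scaled_increments(h, n_incr):
    """Increments of X over mesh h, scaled by h**(1/beta)."""
    dL = levy_stable.rvs(beta, 0.0, size=n_incr, random_state=rng)
    dX = drift * h + sigma * h ** (1.0 / beta) * dL
    return dX / h ** (1.0 / beta)

u0 = 1.0
for h in (1e-2, 1e-4):
    z = scaled_increments(h, 50_000)
    ecf = np.mean(np.cos(u0 * z))    # real part of the empirical characteristic function
    print(f"h = {h:g}: ECF({u0}) = {ecf:.3f}, "
          f"stable target exp(-(sigma*u0)**beta) = {np.exp(-(sigma * u0) ** beta):.3f}")
# The drift contributes only h**(1 - 1/beta) to the scaled increment, so for beta > 1
# it vanishes as h -> 0, and both mesh sizes give nearly the stable-law value.
```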
[27][28][29] propose estimation from high-frequency data using power variations in a pure-jump setting. [1] and [16] consider estimation in a high-frequency setting when the underlying process can contain a continuous martingale via truncated power variations. [23] propose estimation of the jump activity index in a pure-jump setting via power variations with adaptively chosen optimal power. [22] extend [23] via power variations of differenced increments, which provide further robustness and efficiency gains. [15] consider jump activity estimation from noisy high-frequency data. The estimation of β from high-frequency data, thus far, makes use of the dependence of the scaling factor of the high-frequency increments in (1.2) on β. For example, consider the power variation

V(p, Δ_n) = Σ_{i=1}^n |Δ_i^n X|^p, with Δ_i^n X = X_{iΔ_n} − X_{(i−1)Δ_n} and Δ_n^{1−p/β} V(p, Δ_n) →^P µ ∫_0^1 |σ_s|^p ds, (1.3)

where µ is some constant. An estimate of β can then be simply formed as a nonlinear function of the ratio V(p, Δ_n)/V(p, 2Δ_n). This makes inference for β possible despite the unknown process σ. The limit result in (1.2), however, contains much more information about β than previously used in estimation. In particular, (1.2) implies that over a short interval of time the increments of X, conditional on σ at the beginning of the interval, are approximately i.i.d. stable random variables. In this paper we propose a new estimator of β that utilizes this additional information in (1.2) and leads to significant efficiency gains over existing estimators based on high-frequency data. The key obstacle in utilizing the result in (1.2) in inference for β is the fact that the process σ is unknown and time-varying. The idea of our method is to form a local estimator of σ using a block of high-frequency increments with asymptotically shrinking time span via a localized version of (1.3). We then divide the high-frequency increments of X by the local estimator of σ. The division achieves "self-normalization" in the following sense. First, the scale factor for the local estimator of σ and the high-frequency increment of X are the same, and hence by taking the ratio, they cancel. Second, both the high-frequency increment of X and the local estimator of σ are approximately proportional to the value of σ at the beginning of the high-frequency interval, and hence taking their ratio cancels the effect of the unknown σ. The resulting scaled high-frequency increments are approximately i.i.d. stable random variables, and we make inference for β via an analogue of the empirical characteristic function approach, which has been used in various other contexts; see, for example, [8]. After removing an asymptotic bias, the limit behavior of the empirical characteristic function of the scaled high-frequency increments is determined by two correlated normal random variables. One of them is due to the limiting behavior of the empirical characteristic function of the high-frequency increments scaled by the limit of the local power variation. The other is due to the error in estimating the local scale by the local realized power variation. Importantly, because of the "self-normalization," the F-conditional asymptotic variance of the empirical characteristic function of the scaled high-frequency increments is not random but rather a constant that depends only on β and the power p. This makes feasible inference very easy. When comparing the new estimator with existing ones based on the power variation, we find nontrivial efficiency gains. There are two reasons for the efficiency gains. 
First, as we noted above, our estimator makes full use of the limiting result in (1.2) and not just the dependence of the scale of the high-frequency increments on β, which is the case for existing ones. Second, by locally removing the effect of the time-varying σ, we make the inference as if σ is constant; that is, the limit variance is the same, regardless of whether X is Lévy or not. By contrast, the estimator based on the ratios of power variations is asymptotically mixed normal with F -conditional variance of the form K(p, β) 1 0 |σs| 2p ds ( 1 0 |σs| p ds) 2 , for some constant K(p, β), and we note that 1 0 |σs| 2p ds ( 1 0 |σs| p ds) 2 ≥ 1 with equality whenever the process |σ| is almost everywhere constant on the interval [0, 1]. That is, the presence of time-varying σ decreases the precision of the power-variation based estimator of β. The efficiency gains of our estimator are bigger for higher values of β. In the limit case of β = 2, which corresponds to L being a Brownian motion, we show that our estimator can achieve a faster rate of convergence than the standard √ n rate for existing estimators. The rest of the paper is organized as follows. In Section 2 we introduce the setting. In Section 3 we construct our statistic, and in Section 4 we derive its limit behavior. In Section 5 we build on the developed limit theory and construct new estimators of the jump activity and derive their limit behavior. This section also shows the efficiency gains of the proposed jump activity estimators over existing ones. Section 6 deals with the limiting case of jumpdiffusion. Sections 7 and 8 contain a Monte Carlo study and an empirical application, respectively. Proofs are in Section 9. 2. Setting and assumptions. We start with introducing the setting and stating the assumptions that we need for the results in the paper. We first recall that a Lévy process L with the characteristic triplet (b, c, ν), with respect to truncation function κ (Definition II.2.3 in [14]), is a process with a characteristic function given by In what follows we will always assume for simplicity that κ(−x) = −κ(x). Our assumption for the driving Lévy process in (1.1) as well as the "residual" jump component Y is given in Assumption A. Assumption A. L in (1.1) is a Lévy process with characteristic triplet (0, 0, ν) for ν a Lévy measure with density given by Y is an Itô semimartingale with the characteristic triplet ( [14], Definition II.2.6) ( being locally bounded and predictable, for some arbitrarily small ι > 0. Assumption A formalizes the sense in which Y is dominated at high frequencies by L: the activity index of Y is below that of L. We also stress that Y and L can have dependence. Therefore, as shown in [24], we can accommodate in our setup time-changed Lévy models, with absolute continuous time-change process, that have been extensively used in applied work. Finally, we note that (2.2) restricts only the behavior of ν around zero, and ν ′ is a signed measure. Therefore many parametric jump specifications outside of the stable process are satisfied by Assumption A (e.g., the tempered stable process). We next state our assumption for the dynamics of α and σ. We note that µ does not need to coincide with the jump measure of L, and hence it allows for dependence between the processes α, σ and L. This is of particular relevance for financial applications. For example, Assumption B is satisfied by the COGARCH model of [17] in which the jumps in σ are proportional to the squared jumps in X. 
More generally, Assumption B is satisfied if, for example, (X, α, σ) is modeled via a Lévy-driven SDE, with each of the elements of the driving Lévy process satisfying Assumption A. 3. Construction of the self-normalized statistics. We continue next with the construction of our statistics. The estimation in the paper is based on observations of X at the equidistant grid times 0, 1 n , . . . , 1 with n → ∞, and we denote ∆ n = 1 n . To minimize the effect of the drift in our statistics, we follow [22] and work with the first difference of the increments, ∆ n i X − ∆ n i−1 X, where ∆ n i X = X i/n − X (i−1)/n for i = 1, . . . , n. The above difference of increments is purged from the drift in the Lévy case, and in the general case the drift has a smaller asymptotic effect on it. For each ∆ n i X − ∆ n i−1 X, we need a local power variation estimate for the scale. It is constructed from a block of k n high-frequency increments, for some 1 < k n < n − 2, as follows: Block-based local estimators of volatility have been also used in other contexts in a high-frequency setting, for example, in [13] and [25]. The empirical characteristic function of the scaled differenced increments is given by We proceed with some notation needed for the limiting theory of L n (p, u). Let S 1 , S 2 and S 3 be random variables corresponding to the values of three independent Lévy processes at time 1, each of which with the characteristic triplet (0, 0, ν), for any truncation function κ and where ν has the density A |x| 1+β . Then we denote µ p,β = (E|S 1 − S 2 | p ) β/p , which does not depend on κ, and we further use the shorthand notation E(e iu(S 1 −S 2 ) ) = e −A β u β for any u > 0 with A β being a (known) function of A and β. Using Example 25.10 in [21] and references therein, we have which depends only on p and β but not on the scale parameter of the stable random variables S 1 and S 2 . With this notation, we set L(p, u, β) = e −C p,β u β , u ∈ R + , (3.4) which will be the limit in probability of L n (p, u). We finish with some more notation needed to describe the asymptotic variance of L n (p, u). First, we denote for some u ∈ R + , 4. Limit theory for L n (p, u). We start with convergence in probability. Theorem 1. Assume X satisfies Assumptions A and B for some β ∈ (1, 2) and β ′ < β. Let k n be a deterministic sequence satisfying k n ≍ n ̟ for some ̟ ∈ (0, 1). Then, for 0 < p < β, we have We note that we restrict β > 1; that is, we focus on the infinite variation case. The above theorem will continue to hold for β ≤ 1, but for the subsequent results about the limiting distribution of L n (p, u), we will need quite stringent additional restrictions in the case β ≤ 1. We do not pursue this here. The other conditions for the convergence in probability result are weak. The requirements for α and σ for Theorem 1 to hold are actually much weaker than what is assumed in Assumption B, but for simplicity of exposition we keep Assumption B throughout. We note that for consistency, we have a lot of flexibility about the block size k n : (1) k n → ∞ so that we 8 V. TODOROV consistently estimate the scale via V n i (p) and (2) k n /n → 0 so that the span of the block is asymptotically shrinking to zero, and therefore no bias is generated due to the time variation of σ. In the case when X is a Lévy process, the second condition is obviously not needed. To derive a central limit theorem (c.l.t.) for L n (p, u), we will need to restrict the choice of k n more. 
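As a rough numerical illustration of the statistics defined in this section, the sketch below simulates the Lévy case (constant σ, symmetric β-stable driving process) and evaluates the empirical characteristic function of the scaled differenced increments. It is a hedged reading of the construction described above: the local power variation V_i^n(p) is approximated here by the block average of the pth absolute powers of the differenced increments preceding increment i, the bias correction is omitted, and the index bookkeeping only approximates the formal definitions, so it is meant to convey the mechanics rather than reproduce the paper's estimator.

```python
# Hedged sketch (not the paper's exact definitions): self-normalized empirical
# characteristic function L_n(p, u) on simulated data from a symmetric beta-stable
# Levy process with constant sigma. The local scale V_i^n(p) is approximated by the
# block average of |differenced increments|^p over the k_n increments preceding i.
import numpy as np
from scipy.stats import levy_stable

beta_true, sigma, p, u = 1.6, 2.0, 0.5, 1.0
n, k_n = 20_000, 100
dt = 1.0 / n
rng = np.random.default_rng(7)

dX = sigma * dt ** (1.0 / beta_true) * levy_stable.rvs(beta_true, 0.0, size=n, random_state=rng)
ddX = dX[1:] - dX[:-1]                     # differenced increments (drift-robust)
abs_p = np.abs(ddX) ** p

vals = []
for m in range(k_n + 1, len(ddX)):
    V_m = abs_p[m - k_n - 1:m - 1].mean()  # local power variation from the preceding block
    vals.append(np.cos(u * ddX[m] / V_m ** (1.0 / p)))
L_n = float(np.mean(vals))

print("L_n(p, u) =", round(L_n, 4))
# By Theorem 1 this should be close to L(p, u, beta) = exp(-C_{p,beta} * u**beta);
# note that sigma cancels: rescaling sigma leaves L_n essentially unchanged.
```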
We will assume k n / √ n → 0, so that biases due to the time variation in σ, which are hard to feasibly estimate, are negligible. For such a choice of k n , however, an asymptotic bias due to the sampling error of V n i (p) appears, and for stating a c.l.t., we need to consider the following bias-corrected estimator: We state the c.l.t. for L n (p, u, β) ′ in the next theorem. Theorem 2. Assume X satisfies Assumptions A and B with β ∈ (1, 2) and β ′ < β 2 , and that the power p and block size k n satisfy locally uniformly in u ∈ R + . Z 1 (u) and Z 2 (u) are two Gaussian processes with the following covariance structure: locally uniformly in u ∈ R + . The conditions for the power p in (4.3) are exactly the same as in [22] for the analysis of the realized power variation, and they are relatively weak. For example, the condition p > β−1 2 will be always satisfied as soon as we pick power slightly above 1 2 . Moreover, this condition is not needed in the 9 case when X is a Lévy process. Further, the condition in (4.4) for k n shows that we have more flexibility for the choice of k n whenever p is not very close to its upper bound of β/2. Due to the self-normalization in the construction of our statistic, the limiting distribution in (4.5) is Gaussian and not mixed Gaussian, which is the case for most limit results in high-frequency asymptotics (and in particular for the power variation based estimator of β); see [26] for another exception. This is very convenient as the estimation of the asymptotic variance is straightforward. The bias correction in (4.2) is infeasible, as it depends on β. However, (4.7) shows that a feasible version of the debiasing would work provided the initial estimator of β is o p (k n √ ∆ n ). When one estimates β using L n (p, u), with explicit estimators provided in the next section, β − β will be O p (1/k n ). Hence, such a preliminary estimate of β will satisfy the required rate condition in Theorem 2. 5. Jump activity estimation. We now use the limit theory developed above to form estimators of β. The simplest one is based on L n (p, u) and is given by Because of the asymptotic bias in L n (p, u), β f s (p, u, v) − β will be only O p (1/k n ), with p and k n satisfying (4.3)-(4.4). An explicit estimate of β using feasible debiasing is given by for some u, v ∈ R + with u = v, and where β f s is a suitable initial estimator of β [like the one in (5.1)]. While convenient, the above estimators have two potential drawbacks. One, we do not take into account the information about β in the constant C p,β . This is because in the asymptotic limit of the above estimators, C p,β gets canceled. Second, u and v are chosen arbitrarily, and one can include more moment conditions for the estimation of β using L n (p, u, β f s ) ′ . In the next theorem we provide a general estimator of β which overcomes these drawbacks of the explicit estimators above. Theorem 3. Assume X satisfies Assumptions A and B with β ∈ (1, 2) and β ′ < β/2, and that the conditions in (4.3) and (4.4) Denote with u l and u h two sequences of K × 1-dimensional vectors, for some finite K ≥ 1, Finally define the K × 1 vector M(p, u, β) by for n → ∞ with N being standard normal random variable. A consistent estimator for the asymptotic variance of β(p, u) is given by where M(p, u, β) is defined as M(p, u, β) with u and β replaced by u and β. Theorem 3 allows us to adaptively choose the range of u over which to match L n (p, u, β f s ) ′ with its limit. 
This is convenient because the limiting variance of L n (p, u, β f s ) ′ depends on β. For this reason also the weight function in (5.3) optimally weighs the moment conditions in the estimation. We discuss the practical issues regarding the construction of m(p, u, β f s , u, β) in Section 7. We now illustrate the efficiency gains provided by the new method over existing power variation based estimators of β. The power variation estimator based on the differenced increments is given by (see [22]) On Figure 1, we plot the limiting standard deviation of the estimators in (5.5) and (5.9) for different values of β. [The estimator in (5.9) is derived under exactly the same assumptions for X as our estimator here.] The asymptotic standard deviation of β(p) is computed from [22]. β(p, u) is far less sensitive to the choice of p than β(p), with lower powers yielding marginally more efficient β(p, u). The news estimator β(p, u) provides nontrivial efficiency gains irrespective of the values of p and β. The gains are bigger for high values of the jump activity. For example, for β = 1.75, β(p, u) is around two times more efficient (in terms of asymptotic standard deviation) than β(p). 6. The limiting case of jump-diffusion. So far our analysis has been for the pure-jump case of β ∈ (1, 2). We now look at the limiting case of β = 2, which corresponds to L in (1.1) being a Brownian motion. In this case the asymptotic behavior of the high-frequency increments in (1.2) holds with S being a Brownian motion. Thus deciding β = 2 versus β < 2 amounts to testing pure-jump versus jump-diffusion specification for X. It turns out that when β = 2, our estimation method can lead to a faster rate of convergence than the √ n rate we have seen for the case β ∈ (1, 2). This is unlike the power-variation based estimation methods for which the rate of convergence is √ n, both for β = 2 and β < 2; see, for example, [23]. The faster rate of convergence in the case β = 2 can be achieved by letting the argument u of the empirical characteristic function L(p, u) drift toward zero as n → ∞. In this case, − log( L(p,un,2) ′ ) C p,2 u 2 n and − log( L(p,ρun,2) ′ ) C p,2 ρ 2 u 2 n , for some ρ > 0, are asymptotically perfectly correlated, and their difference converges at a faster rate. We note that this does not work in the pure-jump case of β < 2. To state the formal result we first introduce some notation. For S 1 , S 2 and S 3 being independent standard normal random variables, we denote and then set Ξ i (p) = E( ξ 1 (p) ξ ′ 1+i (p)) for i = 0, 1. The difference from the analogous expression for the case β < 2 is in the first terms of ξ 1 (p) and ξ 2 (p). Note that the expression for the bias-correction remains exactly the same as it involves only the variance and covariance of the second elements of ξ 1 (p) and ξ 2 (p), which remain the same as their pure-jump counterparts. Theorem 4. Suppose X has dynamics given by (1.1) with L being a Brownian motion, Y satisfying the corresponding condition for it in Assumption A and α and σ satisfying Assumption B for some r < 2. Suppose p < 1, k n √ ∆ n → 0 and u n → 0, and further Then for some ρ > 0 where Z 1 and Z 2 are two zero-mean normal random variables with covariance given by Ξ 0 (p) + 2 Ξ 1 (p). When X is a Lévy process, the requirement for k n and u n reduces to The rate of convergence of the estimator for β is now √ nu −2 n and is faster than the one in Theortem 3, when u n converges to zero. 
The latter is determined by the restriction in (6.2), which in turn is governed by the presence of the "residual" term Y , the variation in σ and the sampling variation in measuring the scale via V n i (p). For the condition to be satisfied we need p ∈ (1/2, 1) and β ′ < 1; that is, the jumps in X are of finite variation; for testing the null hypothesis of presence of diffusion when the process can contain infinite variation jumps under the null, see the recent work of [18]. Without any prior knowledge on β ′ and r, we can set k n according to (4.4), with β = 2, and then set u n ≍ log(n) −1 . The requirement on u n can be further relaxed when X is a Lévy process as evident from (6.5). Finally, we can draw a parallel between our finding for faster rate of convergence of the estimator of β when β = 2 with the result in [9,10] for faster rate of convergence for the maximum likelihood estimator of the stability index of i.i.d. β-stable random variables when β = 2. 7. Monte Carlo. We test the performance of the proposed method for jump activity estimation on simulated data from the following model where L and Z are two Lévy processes independent of each other with Lévy densities given by x 1.5 1 {x>0} , respectively. σ is a Lévy-driven Ornstein-Uhlenbeck process with a tempered stable driving Lévy subordinator. The parameters governing the dynamics of σ imply E(σ t ) = 1 and half-life of shock in σ of around one month (when unit of time is a day). L is a mixture of tempered stable processes with the parameter β coinciding with the jump activity index of X. We fix λ = 0.25, and consider four cases for β. In each of the cases we set A 0 and A 1 so that A 0 R |x| 1−β e −λ|x| dx = 1 and A 1 R |x| 1−β/3 e −λ|x| dx = 0. In the Monte Carlo we set T = 10 and n = 100 which corresponds approximately to two weeks of 5-minute return data in a typical financial setting. We further set k n = 50 and p = 0.51. The initial estimator to construct the moments and the optimal weight matrix is simply β f s (p, u, v) with u = 0.1 and v = 1.1. If p ≥ β f s (p, u, v)/2, then we reduce the power to p = β f s (p, u, v)/4. Based on the initial beta estimator, we estimate the values of u for which L(p, u, β) = 0.95 and L(p, u, β) = 0.25, and then split this interval in five equidistant regions which are used in constructing the moment vector in (5.4). Regarding the number of moment conditions, K, in the construction of our estimator, we should keep in mind the following. Larger K helps improve efficiency of the estimator as our equal weighting of the characteristic function within each moment condition is suboptimal. However, the feasible estimate of the optimal weight matrix is unstable in small samples when K is large. (This is similar to "curse of dimensionality" problems occurring in related contexts; see, e.g., [11] and [19].) Moreover, since the characteristic function is smooth, one typically does not need many moment conditions to gain efficiency. For example, we also experimented in the Monte Carlo with ten moment conditions (by splitting the region of u into ten equidistant regions). The performance of the estimator based on the ten moment conditions was very similar to the one based on the five moment conditions whose performance we summarize below. The results from the Monte Carlo are reported in Table 1. For comparison, we also report results for β(p) where p is set to the level which minimizes the corresponding asymptotic standard deviation in Figure 1. 
We notice satisfactory finite sample performance of β(p, u). In all cases for β, Note: IQR is the inter-quartile range, and MAD is the mean absolute deviation around the true value. The power p for β(p) is set to the value which minimizes the corresponding asymptotic standard deviation displayed in Figure 1. β(p, u) contains relatively small upward biases. These biases, however, are well below those of β(p). We note that the finite sample bias of β(p, u) can be significantly reduced if, similar to β(p), one uses an adaptive choice of power in the range (β/4, β/3). The superiority of β(p, u) holds also in terms of precision in estimating β, with inter-quantile ranges of β(p, u) typically well below those of β(p). 8. Empirical application. We now apply the developed inference procedures on high-frequency data for the VIX index. The VIX index is a optionbased measure for volatility in the market (S&P 500 index). It serves as a popular indicator for investors' uncertainty, and it is used as the underlying asset for many volatility-based derivative contracts traded in the financial exchanges. Earlier work, consistent with parametric models for volatility, has provided evidence that the VIX index is a pure-jump Itô semimartingale. Here, we estimate its jump activity index. The estimation is based on 5-minute sampled data during the trading hours for the year 2010. Like in the Monte Carlo, we split the year into intervals of 10 days (two weeks) and estimate the jump activity over each of them. The moments, the power p and the block size k n , are selected in the same way as in the Monte Carlo. Estimation results are presented in Figure 2. The estimated jump activity index takes values around 1.6. Overall, our results support a pure-jump specification of the VIX index. 9. Proofs. In the proofs we use the shorthand notation E n i (·) ≡ E(·|F i∆n ) and P n i (·) ≡ P(·|F i∆n ). We also denote with K a positive constant that does not depend on n and u and might change from line to line in the inequalities that follow. When we want to highlight that the constant depends only on some parameters a and b, we write K a,b . Decompositions and additional notation. In what follows it is convenient to extend appropriately the probability space and then decompose the driving Lévy process L as follows: where S, S and S are pure-jump Lévy processes with the first two characteristics zero [with respect to the truncation function κ(·)] and Lévy densities A |x| 1+β , 2|ν ′ (x)|1 {ν ′ (x)<0} and |ν ′ (x)|, respectively. We denote the associated counting jump measures with µ, µ 1 and µ 2 . (Note that there can be dependence between µ, µ 1 and µ 2 .) S is β-stable process, and S and S are "residual" components whose effect on our statistic, as will be shown, is negligible (under suitable conditions). The proof of the decomposition in (9.1) as well as the explicit construction of S, S and S can be found in Section 1 of the supplementary Appendix of [24]. We now introduce some additional notation that will be used throughout the proofs. We denote for i = k n + 3, . . . , n, We further denote the function where the positive constant K u depends only on u and is finite as soon as u is bounded away from zero. With this notation, we make the following decomposition for any u ∈ R + : where Z n j (u) = n i=kn+3 z j i (u) for j = 1, 2 with and R n j (u) = n i=kn+3 r j i (u) for j = 1, 2, 3, 4 with We finally introduce the following: 9.2. Localization. 
We prove results under the following strengthened version of Assumption B: Assumption SB. We have Assumption B and in addition: (a) the processes |σ t | and |σ t | −1 are uniformly bounded; (b) the processes b α and b σ are uniformly bounded; (c) |δ α (t, x)| + |δ σ (t, x)| ≤ γ(x) for all t, where γ(x) is a deterministic bounded function on R with R |γ(x)| r+ι λ(dx) < ∞ for arbitrarily small ι > 0 and some 0 ≤ r ≤ β; (d) the coefficients in the Itô semimartingale representation of b α and b σ satisfy the analogues of conditions (b) and (c) above; (e) the process R (|x| β ′ +ι ∧ 1)ν Y t (dx) is bounded, and the jumps of S, S and Y are bounded. Extending the results to the case of the more general Assumption B follows by standard localization arguments given in Section 4.4.1 of [12] (u). We do this in a sequence of lemmas starting with one containing some preliminary bounds needed for the subsequent lemmas. Lemma 1. Under Assumptions A and SB and k n ≍ n ̟ for ̟ ∈ (0, 1), we have for 0 < p < β, ι > 0 arbitrarily small and 1 ≤ x < β p and y ≥ 1, Proof. We start with (9.3). We apply exactly the same decomposition and bounds as for the term A 3 in Section 5.2.3 in [22] to get the result in (9.3). We continue with (9.4). Without loss of generality we assume k n ≥ 2, and we denote the two sets With this notation, we can decompose V n i (p) into . If x ≤ 2, the result in (9.4) then follows by Jensen's inequality. If x > 2, applying again Burkholder-Davis-Gundy, we have where we also made use of the fact that the β-stable random variable has finite pth absolute moment as soon as p ∈ (0, β). If x ≤ 4, the result will then follow from an application of (9.10). If x > 4, then we repeat (9.11) with x replaced by x/2 and ζ n j replaced (ζ n j ) 2 − E n j−2 (ζ n j ) 2 . We continue in this way, applying k = sup{i : 2 i < x} times (9.11) and then (9.10). This shows (9.4). We continue with (9.5) and (9.6). We make use of the following algebraic inequality: for any a, b ∈ R with a = 0, 0 < p < 1 and K p that depends only on p. Applying this inequality as well as the triangular inequality, and using the fact that under Assumption SB the process |σ| is bounded from below, we have with some constant K that does not depend on s and t. From here (9.5) follows. Application of Corollary 2.1.9 of [12] further gives (9.14) and applying this inequality with q = y and q = 2y, for y the constant in (9.6), we have that result. We proceed by showing the bounds in (9.7)-(9.9). We can decompose with a 1 k being zero for k ≥ i − 4. Using the law of iterated expectations and the bound in (9.6), we have for k = i − k n − 1, . . . , i − 2, Using the Hölder inequality, the bound in (9.12), as well as the fact that a stable random variable has finite absolute moments for powers less than β, we have for k = i − k n − 1, . . . , i − 2, Combining (9.15) and (9.16), we get the results in (9.8) and (9.9). Further, using (9.12), we get for k = i − k n − 1, . . . , i − 2, From here we get the result in (9.7). Lemma 2. Under Assumptions A and SB and k n ≍ n ̟ for ̟ ∈ (0, 1), we have for 0 < p < β, ι > 0 arbitrarily small and every 0 < a < b < ∞, where K a,b depends only on a, and b and is finite-valued. Proof. We start with showing (9.18). We define the set and then we note that . Hence we can apply (9.3) and (9.4) and conclude We proceed with a sequence of inequalities. First, from Assumption SB, For the difference of the integrals against time, we can proceed exactly as in (9.23). 
Further, using the algebraic inequality in (9.10), as well as Assumption A for the measure ν ′ , we have When β ′ ≥ 1, we can apply the Burkholder-Davis-Gundy inequality and get The same inequalities hold for the analogous integrals involving S. Next, application of the Burkholder-Davis-Gundy and Hölder inequalities, as well as Assumption SB yields Finally, denoting κ ′ (x) = x − κ(x) and upon noting that κ ′ (x) is zero for x sufficiently close to zero, we have Combining the estimates in (9.23)-(9.28), as well as the inequality | cos(x) − cos(y)| ≤ 2|x − y| p for every x, y ∈ R and p ∈ (0, 1], we have ). (9.29) Equations (9.22) and (9.29) yield (9.18). We continue next with (9.19). This bound follows from a first-order Taylor expansion of f i,u (x) and the bounds in (9.2) and (9.3). We proceed by showing the result for R n 4 (u). Using a second-order Taylor expansion and the Cauchy-Schwarz inequality, as well as (9.6), we get Further, without loss of generality (because k n ∆ n → 0), we assume n ≥ 2k n + 3. Using the shorthand Applying the Burkholder-Davis-Gundy inequality for discrete martingales and making use of (9.6), we have . . , k n + 1. (9.32) Combining (9.30) and (9.32), we get the bound in (9.21). We are left with (9.20). The case β/p ≤ 2 follows from n i (p)| and by applying the bounds in (9.8)-(9.9). We now show (9.20) for the case β/p > 2. We first decompose r 3 and x is a random number between ∆ and note G(p, u, β) = |σ (i−2)∆n− | p f ′ i,u (|σ (i−2)∆n− | p ). Then direct calculation for the function xf ′ i,u (x) and the boundedness of the process |σ| yields From here, we use the Hölder inequality and (9.4), (9.6) and (9.8) to get For the sum n i=kn+3 ̺ 1 i (u), using the bounds in (9.7) and (9.8), we can proceed exactly as for the analysis of n i=kn+3 χ i above and split it into k n + 1 terms, which are the terminal values of discrete martingales. Together, this yields Next, using the bound in (9.9) as well as the boundedness of the derivative We continue with the term ̺ 3 i (u). We first introduce the set . . , n. With this notation, using (9.8) and the boundedness of the derivative Next using the boundedness of the second derivative f ′′ i,u (x), as well as the bounds in (9.8) and (9.9), we get Under Assumptions A and SB and k n ≍ n ̟ for ̟ ∈ (0, 1), we have for 0 < p < β, ι > 0 arbitrarily small and every 0 < a < b < ∞, and further if p < β/2, Proof. We start with (9.38). We split Z n We proceed with E n 2 (u). We first note that Further, using the algebraic inequalities | cos(x) − cos(y)| 2 ≤ 2|x − y| for x, y ∈ R and |e −x − e −y | 2 ≤ 2|x − y| for x, y ∈ R + , as well as the definition of the set C n i , we get Applying the above two inequalities, the bounds in (9.3), (9.4) and (9.6), as well as the algebraic inequality 2xy ≤ x 2 + y 2 for x, y ∈ R, we have 27 As a result, Finally, we need to show that the convergence holds uniformly in u ∈ [a, b]. For this we apply a criteria for tightness on the space of continuous functions equipped with the uniform topology; see, for example, Theorem 12.3 of [7]. Using again (9.41), we have Hence for arbitrarily small ι > 0, and since β > 1, we have We note that for u ∈ R + , E n i−2 (ζ i (u)) = 0, i = 2, . . . , n. 
(9.48) Further, making using of the inequality | cos(x) − cos(y)| ≤ 2|x − y| p for every p ∈ (0, 1] and x, y ∈ R, we have for u, v ∈ R + , Making use of (9.48) and the fact that ζ (2) i (u) depends on u only through H(p, u, β) and sup u∈R + |H(p, u, β)| is a finite constant, we have 1 Making use of (9.49) and the differentiability of G(p, u, β) in u, we also have 1 for some increasing function F (·) and some p > 1. Applying then a criteria for tightness on the space of continuous functions equipped with the uniform topology (see, e.g., Theorem 12.3 in [7]) as well as making use of the fact that k n ∆ n → 0, we have locally uniformly in u, 1 √ n − k n − 2 E r (u) P −→ 0. (9.51) We are left with the first term on the right-hand side of (9.47). First, we establish convergence for this term finite-dimensionally in u. We have the decomposition n−kn−1 i=kn+1 ζ i (u) = n−kn−1 i=kn+1 (ζ i (u) − E n i−1 (ζ i (u))) + n−kn−2 i=kn E n i (ζ i+1 (u)). Since E|χ i | < K, and applying the Burkholder-Davis-Gundy inequality for discrete martingales, we have Using inequality in means we further have Applying the above inequality with x sufficiently close to β/(2p) and the bound in (9.53), we have ∆ n (k n ∆ n ) 2p/β∧1/2−1+ι kn+1 j=1 A j P −→ 0, and together with the result in (9.52), this implies (9.46). 9.4. Proofs of Theorems 1 and 2. Theorem 1 and (4.5) of Theorem 2 follow readily by combining Lemmas 1-4 [and using (9.4) for bounding Z n 2 (u) in the proof of Theorem 1]. To show (4.7), we note first that H(p, u, β) and Ξ i (p, u, u, β), for i = 0, 1, are continuously differentiable in β. For H(p, u, β) this is directly verifiable, and for Ξ i (p, u, u, β) with i = 0, 1, this follows from the continuous differentiability of the characteristic function β → e −A β u β for u ∈ R + . Moreover, the derivative ∇ β H(p, u, β) is bounded in u. From here, (4.7) follows from an application of the continuous mapping theorem. We start with (9.54). We have We are left with (9.55). This result follows from applying the uniform convergence of L n (p, u, β f s ) ′ in Theorem 2. Finally, (5.8) follows from the continuity of G(p, u, β) and W −1 (p, u, β) in u and β. 32 V. TODOROV 9.6. Proof of Theorem 4. We will use the shorthand notation v n = ρu n . We start with the following lemma.
Half-life of the electron-capture decay of 97Ru: Precision measurement shows no temperature dependence We have measured the half-life of the electron-capture (ec) decay of 97Ru in a metallic environment, both at low temperature (19K), and also at room temperature. We find the half-lives at both temperatures to be the same within 0.1%. This demonstrates that a recent claim that the ec decay half-life for 7Be changes by $0.9% +/- 0.2% under similar circumstances certainly cannot be generalized to other ec decays. Our results for the half-life of 97Ru, 2.8370(14)d at room temperature and 2.8382(14)d at 19K, are consistent with, but much more precise than, previous room-temperature measurements. In addition, we have also measured the half-lives of the beta-emitters 103Ru and 105Rh at both temperatures, and found them also to be unchanged. INTRODUCTION Since the early days of nuclear science, nearly a century ago, it has been widely accepted that the decay constants of radioactive isotopes decaying by α, β − or β + emission are independent of all physical or chemical conditions such as pressure, temperature and material surroundings. This belief was based on numerous measurements in the early 1900s, some of which claimed remarkable precision (see [1] for an interesting review): for example, Curie and Kamerlingh Onnes [2] in 1913 determined that the decay constant of a radium preparation did not change by more than 0.1% when cooled to 20K. In contrast, decays proceeding by internal conversion or electron capture (ec), to which atomic electrons contribute directly, were placed in a different category, being potentially susceptible to their chemical -though not physical -condition. There is a long history of 7 Be decay measurements that demonstrate small but detectable effects on that isotope's decay constant caused by its chemical environment. Quite recently, however, measurements have been reported claiming relatively large changes in half lives for α, β − , β + and ec decays depending on whether the radioactive parent was placed in an insulating or conducting host material, and whether the latter was at room temperature or cooled to 12K. Specifically, 210 Po, an α emitter, when implanted in copper was reported to exhibit a half life shorter by 6.3(14)% at 12K than at room temperature [3]; the β − emitter 198 Au in a gold host reportedly had a half-life longer by 3.6(10)% at 12K [4]; 22 Na, which decays predominantly (90%) by β + emission, was measured as having a 1.2(2)% shorter half life at 12K [5]; and 7 Be, which decays by pure electron capture, apparently had a half-life longer by 0.9(2)% at 12K in palladium and by 0.7(2)% in indium [6]. The authors of these reports also proposed a theoretical explanation of their observations based on quasi-free electrons -a "Debye plasma" -causing an enhanced screening effect in metallic hosts. This would lead to host-dependent halflives and a smooth dependence of half-life on temperature in a metal. Needless to say, these claims led to considerable popular interest, not least because they could potentially have contributed to the improved disposal of radioactive waste [7]. Not remarked on at the time, though, was the impact that such a result would also have on all half-lives that have ever been quoted with sub-percent precision. Of greatest concern to us were the half-lives of superallowed 0 + →0 + β + transitions, essential to fundamental tests of the Standard Model [8]. 
Their precision has typically been quoted to less than 0.05%, well below the temperature and host-material dependence claimed by the new measurements [3,4,5,6]. Based on this concern, we first repeated the measurement on the decay of 198 Au (t 1/2 = 2.7 d) in gold [9]. While the original measurement by Spillane et al. [4] followed the decay for only a little over one half-life, we recorded the decay with much better statistics for over 10 half-lives at both room temperature and at 19K. Our results showed the half-lives at the two temperatures to be the same within 0.04%, a limit two orders of magnitude less than the difference claimed by Spillane et al. This null result was subsequently confirmed by two other measurements of 198 Au, which set limits of 0.13% in a Al-Au alloy host [10] and 0.03% in gold [11]. The latter reference also reported a new 22 Na decay measurement, which set an upper limit on the temperature dependence of that β + decay at 0.04%, again nearly two orders of magnitude below the earlier claim, in this case by Limata et al. [5]. For α decay, the 210 Po measurement has not yet been repeated but low-temperature measurements on a variety of other α emitters [12,13] have set upper limits of 1% on any possible temperature dependence in those cases. Though significantly lower than the temperature dependence claimed to have been observed in reference showing the dominant two transitions and the γ rays that follow them. The information is taken from Ref. [15]. We measured the 97 Ru half-life by following the time decay of the 216-keV γ ray. [3], this 1% limit is considerably less stringent than the limits obtained for β − and β + decays. The status of electron-capture decay is also less definitive. One new measurement of 7 Be decay in copper [10] found no temperature dependence greater than 0.3% but another [14] actually found a small change in half-life -0.22(8)% -depending on whether the host material was a conductor (Cu or Al) or an insulator (Al 2 O 3 or PVC), both at room temperature. In neither case is the result as precise as has been achieved for β − and β + decays. Furthermore, since 7 Be is known to show effects from its chemical environment, it is difficult to be certain about the cause of any observed effect and even more difficult to generalize its behavior to the electron-capture decay of other nuclei for which the K-shell electrons are much better shielded from the external environment. We thus set out to determine the temperature dependence for the ec-decay half-life of a nucleus with a Z that is considerably larger than that of 7 Be. Our goal was to achieve a precision comparable to that obtained for β − and β + decays, i.e. ≤0.1%. For our measurement we sought a nucleus that decays entirely by electron capture with a few-day half-life and a delayed γ ray that can be cleanly detected. It also had to be producible by thermal-neutron activation so that we could obtain statistically useful quantities without serious contaminants. Although there are not a lot of candidates to choose among, we found 97 Ru satisfied all our conditions. Its decay scheme appears in Fig. 1. We report here measurements of the half-life of 97 Ru at room temperature and at 19K as measured via its 216-keV β-delayed γ ray. We have found no temperature dependence in the results. Our upper limit is 0.1%, an order of magnitude below the effect claimed for 7 Be [6]. II. APPARATUS AND SET-UP We used the same set-up for both the cold and roomtemperature measurements. 
As we did previously for our 198 Au half-life measurement [9], we placed the ruthenium sample between two copper washers and fastened the assembly directly onto the cold head of a CryoTorr7 cryopump with four symmetrically placed screws. A 70% HPGe detector was placed facing the sample on the cryopump axis, just outside the pump's coverplate, into which a cavity had been bored so that only 3.5 mm of stainless steel stood between the detector face and the sample. The total distance between the detector face and the ruthenium sample was 49 mm and remained unchanged throughout the experiment. We monitored the temperature of the sample with a temperature-calibrated silicon diode (Lakeshore Cryogenics DT-670) [16] fastened in the same way as the ruthenium sample and placed right next to it on the head itself. The diode was connected to a Lakeshore Model 211 temperature monitor. For the low-temperature measurement, we first used a roughing pump to bring the pressure down to about 9 mtorr, and then switched on the cryopump. Although the cold head, where the sample was located, is nominally expected to reach 12K, we measured its temperature to be between 18.2 and 20.8K, with an average value of 19K. The arrangement for the room temperature measurement was identical except that these pumping and cooling steps were omitted. Note that we did not alternate temperatures for a single source but rather made a complete decay measurement at one temperature with one source at a fixed geometry; then, with a fresh source, we made a similar dedicated measurement at the other temperature. Thus our results are entirely independent of any geometrical or source differences that might have occurred between the two measurements. Our sample was a single crystal in the form of a circular disc, 8 mm in diameter and 1 mm thick, obtained from Goodfellow Corp. According to the supplier, the chemical purity of the material was 99.999%, with no identifiable impurities. For each measurement, the metal crystal was initially activated for 10 seconds in a flux of 10 13 neutrons/cm 2 s, at the Texas A&M Triga reactor. This activated crystal was then fastened directly to the cold head of the cryopump, ensuring a good thermal contact over the whole crystal area. For the measurement itself, sequential γ-ray spectra were recorded from the HPGe detector. The detector signals were amplified and sent to an analog-to-digital converter, which was an Ortec TRUMP TM -8k/2k card [17] controlled by Maestro software, which was installed on a PC operating under Windows-XP. During the entire period of the measurements, our computer clock was synchronized daily against the signal broadcast by WWVB, the radio station operated by the U.S. National Institute of Standards and Technology. For both the room-and low-temperature measurements, six-hour spectra were acquired sequentially for approximately one month. In each case, more than 110 γ-ray spectra were recorded. The TRUMP TM card uses the Gedcke-Hale method [18] to correct for dead-time losses. By keeping our system dead time below about 4% and recording all our spectra for an identical pre-set live time, we ensured that our results were essentially independent of deadtime losses. However, at a precision level of 0.1% or better, pile-up can also become an issue, so we carefully tested our system for residual rate-dependent effects, as reported in our previous article on 198 Au [9]. 
We first measured the 662-keV γ-ray peak from a 137 Cs source alone, and then remeasured that source a number of times in the presence of a 133 Ba source, which was moved closer and closer to the detector in order to increase the dead time and the number of chance coincidences. Each measurement was made for the same pre-set live time. We then obtained from each measurement the number of counts in the 662-keV peak and, from the decrease in that number as a function of increasing dead time, we determined that the fractional residual loss amounted to 5.5(2.5) × 10 −4 per 1% increase in dead time. At the count rates experienced during our 97 Ru measurements, the required correction was never greater than 0.2% but it was nevertheless applied to all spectra. III. RESULTS AND ANALYSIS A typical γ-ray spectrum, one of the more than 220 obtained, is shown in Fig. 2. Apart from the weak peaks due to room background, the only observed γ rays are from the decays of 97 Ru (t 1/2 = 2.8 d), 103 Ru (39 d) and 105 Rh (35 h); the latter is the daughter of 105 Ru (4.4 h), which had already decayed away by the time this spectrum was recorded. The appearance of these three ruthenium isotopes is consistent with their being produced by neutron activation of naturally occurring ruthenium. The 216-keV γ-ray peak from 97 Ru is seen to be clear of any other peaks and to lie on a smooth, though rather high, background. The 216-keV γ-ray peak in each recorded spectrum was analyzed with GF3, a least-squares peak-fitting program in the RADware series [19]. This program allowed us to be very specific in determining the correct background for a peak, and the 216-keV peak in each spectrum was visually inspected to this end. So far as possible, the same criteria were applied to each spectrum. Fig. 3 shows a sample peak and the fitted background, from which its area was determined. [Fig. 3 caption: Example of a measured 216-keV γ-ray peak together with the fit obtained from GF3. Note that the vertical scale has been greatly expanded to display the low-level background, and the quality of the fit to it. This spectrum was taken about five days after counting began; the peak contained a net of about 600,000 counts.] In total, 229 spectra were subjected to this careful analysis, and the counts recorded in the 216-keV peak for each were corrected for residual losses (see Sect. II) and fit to a single exponential. The code we used, which is based on ROOT [20], has previously been tested by us to 0.01% precision with Monte Carlo generated data. The decay data, shown in the corresponding figures, exhibit no temperature-dependent difference at the 68% confidence level. The half-life values taken from the computer fits incorporate the correction for residual losses described in Sect. II, but they do not yet include the uncertainty in that correction, since it is correlated for the two measurements and does not contribute to the difference between them. However, for our measurements to be compared with previous measurements of the 97 Ru half-life, this systematic uncertainty is now incorporated, and yields the results 2.8370(14) d and 2.8382(14) d for the room-temperature and 19K measurements, respectively. These values are compared with previous measurements of the 97 Ru half-life in Table I and Fig. 6, where it can be seen that our results at both temperatures are much more precise than, but are entirely consistent with, the previous ones, all of which were presumably made at room temperature.
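As a rough, self-contained illustration of the two steps just described — scaling each peak area for the residual rate-dependent loss (5.5 × 10⁻⁴ per 1% of dead time) and fitting a single exponential to the corrected areas — the following sketch runs on fabricated data; none of the numbers are the experiment's.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Fabricated 6-hour spectra over ~30 days (placeholders, not the measured data)
t = np.arange(0, 30, 0.25)                      # spectrum start times in days
true_half_life = 2.837                          # used only to generate the toy data
area = rng.poisson(6.0e5 * 0.5 ** (t / true_half_life)).astype(float)
dead_time = 4.0 * 0.5 ** (t / true_half_life)   # assumed dead time in percent

# Residual pile-up correction: fractional loss of 5.5e-4 per 1% of dead time
area_corr = area / (1.0 - 5.5e-4 * dead_time)

def decay(t, n0, half_life):
    return n0 * 0.5 ** (t / half_life)

popt, pcov = curve_fit(decay, t, area_corr, p0=(6e5, 3.0), sigma=np.sqrt(area_corr))
print(f"fitted half-life: {popt[1]:.4f} d")
```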
As a byproduct of our primary measurement on 97 Ru we have also extracted from the same spectra half-lives at both temperatures for the nuclides 103 Ru and 105 Rh, both β − emitters. For 103 Ru, we monitored the 497-keV peak in all 237 spectra, while for the shorter lived 105 Rh there were only sufficient statistics for us to use 100 spectra to follow the 319-keV peak (see Fig. 2). These peaks were subjected to the same meticulous examination, fitting and analysis as just described for the 216-keV peak of 97Ru. Incorporating only statistical uncertainties, we ob- Table I. same, but in this case within 0.2%. As we did when making the temperature comparison with 97 Ru, we have so far quoted half-life values for 103 Ru and 105 Rh that do not yet include the (correlated) uncertainty attributable to residual losses (see Sect. II). We include that now in order to compare our results with pre- our data only encompass a little more than one half-life, during which time the overall count-rate in our detector has decreased significantly. Unlike the situation for the other two radionuclides studied in this work, the half-life of 103 Ru has been measured rather precisely in the past, with four of the previous results being of comparable precision to our current ones. Unfortunately, though, the earlier results are not particularly consistent with one another, as can be seen in Table II. The normalized χ 2 for the average of all previous measurements is 3.0, which results in our scaling up the uncertainty assigned to that average by a factor of 1.7. In comparison with this average value, our results are slightly low, though the discrepancy is not statistically very significant. Note also that our results are completely consistent with the 1981 value obtained by Miyahara et al. [30]. There are only three previous measurements of the 105 Rh half-life, none more recent than 1967; they are listed in Table III. Strikingly, the earliest measurement [33] has the tightest, ±0.06%, uncertainty and a halflife value that disagrees completely with the two later measurements. The weighted average of all three measurements yields a normalized χ 2 of 22 and, as shown in Table III, its uncertainty consequently requires scaling by a factor of 4.7. Under the circumstances, it seems more reasonable not to use this average value, but simply to disregard the offending measurement and average the two remaining, mutually consistent, results [34,35]. When compared with this new average, our results are a factor of two more precise and lie slightly lower. Considering that even the two previous measurements that have been retained are more than 40 years old and that the difference between their average and our recent results is less than two standard deviations, there seems little reason for concern. IV. CONCLUSIONS We have measured the half-life of 97 Ru in ruthenium metal at room temperature and at 19K, and have found the results to be the same within 0.1%. Since the maximum decay energy for any allowed transition from 97 Ru is 892 keV, the nucleus must decay by pure electron capture. Three years ago, Wang et al. [6] reported half-life measurements of another pure electron-capture emitter, 7 Be, situated in both palladium and indium metals, in which they observed differences of 0.9(2)% and 0.7(2)%, respectively, between room temperature and 12K. 
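The averaging of previous results referred to above — an inverse-variance weighted mean whose uncertainty is inflated by the square root of the normalized χ² when that exceeds one — can be sketched as follows; the inputs are placeholders rather than the actual entries of Tables II and III, and the scale-factor convention is the usual PDG-style one assumed here.

```python
import numpy as np

def weighted_average(values, errors):
    """Inverse-variance weighted mean with a chi^2-based scale factor."""
    values, errors = np.asarray(values, float), np.asarray(errors, float)
    w = 1.0 / errors ** 2
    mean = np.sum(w * values) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))
    chi2_ndf = np.sum(w * (values - mean) ** 2) / (len(values) - 1)
    scale = max(1.0, np.sqrt(chi2_ndf))   # inflate the uncertainty if chi^2/dof > 1
    return mean, err * scale, chi2_ndf

# Hypothetical half-life measurements (days) and their uncertainties
print(weighted_average([39.25, 39.35, 39.21, 39.45], [0.03, 0.05, 0.04, 0.06]))
```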
The same group also reported cases of temperature dependence for α, β − and β + decay modes [3,4,5] and interpreted them all as the result of a "Debye plasma," which purportedly acts in any metal host and leads to a smooth dependence of half-lives on temperature. In that context, their result for 7 Be decay was understood to be the indication of a generic property of all ec-decays rather than a unique property of 7 Be. Obviously we cannot comment on the validity of the 7 Be measurement itself, but we can certainly refute any suggestion that the half-lives of ec-decays in general exhibit significant temperature dependence when the source is placed in a metal host. Wang et al. [6] used their model to calculate that the half-life of 7 Be in a metal should change by 1.1% between T = 293 and 12K, a result that agrees reasonably well with their measured values. Using the same model, we calculate that the halflife change for the 97 Ru decay should be 11.2% between T = 293 and 12K and 8.4% between T = 293 and 19K, the temperature we obtained. Our measured upper limit on any half-life change over this temperature range is nearly two orders of magnitude less than this model prediction. We have previously demonstrated that the Debye model has no validity for β − decay [9]; we can now state with equal confidence that it also does not apply to ec decay. As a byproduct of this primary measurement, we also obtained half-life data for two β − emitters, 103 Ru and 105 Rh, at room temperature and 19K. These results, though slightly less precise than our measurements on the β − decay of 198 Au [9], nevertheless confirm our previous conclusion for that decay mode. With any temperature dependence for β + decay also now ruled out at the 0.04% level [11], it has become clear that there is no reason to doubt the accuracy of nuclear weak-decay half-lives that have been quoted over the past decades with sub-percent precision and without accounting for the host material or temperature. As has always been believed, those parameters indeed do not affect the result, at least not above the 0.1% level. In all three cases, 97 Ru, 103 Ru and 105 Rh, our measured half-lives are consistent with, and in two cases are substantially more precise than, previous measurements.
First decay-time-dependent analysis of $B^{0} \to K_{S}^{0} \pi^{0}$ at Belle II We report measurements of the branching fraction ($\mathcal{B}$) and direct $CP$-violating asymmetry ($A_{CP}$) of the charmless decay $B^{0} \to K^0 \pi^0$ at Belle II. A sample of $e^{+} e^{-}$ collisions, corresponding to $189.8 fb^{-1}$ of integrated luminosity, recorded at the $\Upsilon(4S)$ resonance is used for the first decay-time-dependent analysis of these decays within the experiment. We reconstruct about 135 signal candidates, and measure $\mathcal{B}(B^{0} \to K^{0} \pi^{0})= [11.0 \pm 1.2 (stat) \pm 1.0 (syst)] \times 10^{-6}$ and $A_{CP} (B^{0} \to K^{0} \pi^{0})= -0.41_{-0.32}^{+0.30} (stat) \pm 0.09 (syst)$. INTRODUCTION The B 0 → K 0 π 0 decay is mediated by flavor-changing neutral currents. In the standard model (SM), the dominant decay amplitude is given by the b → sdd loop, which is dominated by the top quark contribution and carries a weak phase arg(V tb V * ts ). Here, V ij denote the CKM matrix elements. Such processes are suppressed in the SM and provide an indirect route to search for beyond-the-SM particles that might be exchanged in the loop. In the B 0 → K 0 π 0 decay, CP violation can occur either directly in the decay amplitude (A CP ) or via the interference between decays with and without B 0 -B 0 mixing (S CP ). Neglecting subleading contributions to the amplitude, S CP is expected to be equal to sin 2φ 1 and A CP ≈ 0, where φ 1 ≡ arg(−V cd V * cb /V td V * tb ). Deviations from these expectations could be due to larger-than-expected subleading SM contributions or from non-SM physics. Combining B-meson lifetimes (τ ) with branching fractions (B) and direct CP asymmetries of four B → Kπ decays related by isospin symmetry, the sum rule proposed in Ref. [1], is expected to hold with an uncertainty below 1% and provides an important consistency test of the SM. Deviations from this isospin sum rule can be caused by an enhancement of color-suppressed tree amplitudes, or by contributions from non-SM physics. The prediction of the CP asymmetry A CP (K 0 π 0 ) from this sum-rule is −0.138 ± 0.025 [2], using up-to-date known values of other quantities [3]. Combining measurements from Belle and BaBar [4,5], the Heavy Flavor Averaging Group finds A CP = 0.01 ± 0.10 [3]. The dominant contribution to the uncertainty in this sum-rule comes from the uncertainty in A CP (K 0 π 0 ). Therefore, a precise measurement of A CP (K 0 π 0 ) is crucial for this consistency test of the SM. Preliminary results on B and A CP of B 0 → K 0 π 0 decays have been reported by Belle II using a data sample corresponding to 62.8 fb −1 . In this analysis, we utilize a larger data set (189.8 fb −1 ) and further enhance our sensitivity to A CP by using B decay-time information. At Belle II, pairs of neutral B mesons are coherently produced in the process e + e − → Υ (4S) → B 0 B 0 . When one of the B mesons decays to a CP eigenstate f CP , such as K 0 S π 0 , and the other to a flavor-specific final state f tag , the time-dependent decay rate is given by where ∆t = t CP − t tag is the proper-time difference between the decays into f CP and f tag , q equals +1 (−1) for the B 0 (B 0 ) decay to f tag , and ∆m d is the B 0 -B 0 mixing frequency. This analysis employs a decay-time-dependent CP asymmetry fit similar to the previous measurement of sin 2φ 1 [6]. The key challenge here lies in the determination of the position of the B 0 → K 0 π 0 decay vertex. 
For that, the K 0 S flight direction is projected back to the interaction region and the K 0 S is required to decay inside the vertex detector (VXD). The full analysis was developed and tested with simulated data, and validated with data control samples before selecting and inspecting the B 0 → K 0 π 0 candidates. Due to the limited sensitivity provided by the available data sample, we measure A CP by fixing S CP , ∆m d , and τ B 0 to their known values [3]. THE BELLE II DETECTOR AND DATA SAMPLE Belle II [7] is a particle spectrometer having almost 4π solid-angle coverage, designed to reconstruct final-state particles of e + e − collisions delivered by the SuperKEKB asymmetricenergy collider [8]. It is located at the KEK laboratory in Tsukuba, Japan. The energies of the positron and electron beams are 4 and 7 GeV, respectively. Belle II consists of a number of subdetectors surrounding the interaction region in a cylindrical geometry. The innermost one is the VXD, comprised of several position-sensitive silicon sensors. It samples the trajectories of charged particles ('tracks') in the vicinity of the interaction region to determine the decay positions of their parent particles. The VXD includes two inner layers of pixel sensors and four outer layers of double-sided microstrip sensors. The second pixel layer is currently incomplete covering one sixth of the azimuthal angle. Charged-particle momenta and charges are measured by a large-radius, small-cell, central drift chamber (CDC), which also offers particle-identification information via a measurement of specific ionization. A Cherenkov-light angle and time-of-propagation detector surrounding the CDC provides charged-particle identification in the central detector volume, supplemented by proximityfocusing, aerogel, ring-imaging Cherenkov detectors in the forward region with respect to the electron beam. A CsI(Tl)-crystal electromagnetic calorimeter (ECL) provides energy measurements of electrons and photons. A solenoid surrounding the ECL generates a uniform axial 1.5 T magnetic field. Layers of plastic scintillators and resistive-plate chambers, interspersed between the magnetic flux-return iron plates, allow for the identification of K 0 L mesons and muons. The subdetectors most relevant for our study are the VXD, CDC, and ECL. We analyse collision data collected at a center-of-mass (CM) energy near the Υ (4S) resonance, corresponding to an integrated luminosity of 189.8 fb −1 . We use large samples of simulated e + e − → qq (q = u, d, s, c), Υ (4S) → B 0 B 0 and B + B − events to optimize the event selection and study possible background contributions. Simulated B 0 → K 0 S π 0 signal events are used to determine signal models and estimate the selection efficiency. We use the EVTGEN package [9] to generate B-mesons decays and the PHOTOS package [10] to calculate final-state radiation from all charged particles. The simulation of e + e − → qq continuum background relies on the KKMC generator [11] interfaced to Pythia [12]. The interactions of final-state particles with the detector are simulated using Geant4 [13]. RECONSTRUCTION AND SELECTION Tracks are reconstructed with the VXD and CDC. Photons are identified as isolated energy clusters in the ECL that are not matched to any track. Candidate K 0 S mesons are reconstructed from pairs of oppositely-charged particles with the dipion mass between 482 and 513 MeV/c 2 . 
We reconstruct π 0 candidates from pairs of photons that have energies greater than 80 (223) MeV if detected in the barrel (endcap) ECL. We apply the different energy thresholds to suppress beam background, which is higher in the endcap compared to the barrel region. The selection also requires the diphoton mass to lie between 119 and 150 MeV/c 2 and the absolute value of the cosine of the angle between each photon and the B meson in the π 0 rest frame to be less than 0.953. These criteria suppress contributions from misreconstructed π 0 candidates. A B-meson candidate is reconstructed by combining a K 0 S with a π 0 candidate. For this purpose, we use two kinematic variables, the beam-energy-constrained mass (M bc ) and the energy difference (∆E), where E beam is the beam energy, and E B and p B are respectively the reconstructed energy and momentum of the B meson; all calculated in the CM frame. The presence of a high momentum π 0 causes a significant correlation between M bc and ∆E due to the shower leakage of final-state photons. To reduce this correlation, we use a modified version of M bc that is defined in terms of the beam energy and momenta of final-state particles as where all kinematic quantities are again calculated in the CM frame. We retain candidate events satisfying 5.24 < M bc < 5.29 GeV/c 2 and |∆E| < 0.30 GeV. To measure the proper-time difference ∆t, we need to determine the signal and tagside B decay vertices. The signal B vertex is obtained by projecting the flight direction of the K 0 S candidate back to the interaction region. The K 0 S flight direction is determined from its decay vertex and momentum. The intersection of the K 0 S -flight projection with the interaction region provides a good approximation of the signal B decay vertex, since both the transverse flight length of the B 0 meson and the transverse size of the interaction region are small compared to the B 0 flight length along the boost direction. The tag-side vertex is obtained with tracks that are not associated to the B 0 → K 0 S π 0 decay. We obtain ∆t by dividing the longitudinal distance between the signal and tag vertices by the speed of light and the Lorentz boost of the Υ (4S) system in the lab frame. Signal candidates with poorly measured ∆t, mainly due to K 0 S mesons decaying outside of the VXD acceptance, are suppressed by requiring the estimated uncertainty on ∆t to be less than 2.5 ps. Events from continuum e + e − → qq production are suppressed using a boosted-decisiontree (BDT) classifier [14] that exploits several event-topology variables known to provide discrimination between B-meson signal and continuum background. The following variables are those offering most of the discrimination: modified Fox-Wolfram moments [15], CLEO cones [16], the magnitude of the thrust axis for the reconstructed B candidate, and the cosine of the angle between the thrust axis of the signal B and that of rest of event. The BDT is trained on samples of simulated e + e − → qq, B 0 B 0 and B + B − events, each equivalent to an integrated luminosity of 1 ab −1 . The BDT output distribution (C out ) is shown in Fig. 1. We apply a criterion C out > 0.60, which rejects about 89% of the continuum background with a 18% relative loss in signal efficiency. We then translate C out into a new variable, where C out,min = 0.60 and C out,max = 0.99. The distributions of C out can be parametrized with Gaussian functions. After applying all selection criteria, the average number of B candidates per event is 1.009. 
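The translation of C_out into the transformed variable, given the endpoints C_out,min = 0.60 and C_out,max = 0.99 quoted above, is commonly done with the log-ratio below so that the result can be modeled with Gaussian shapes; the exact functional form here is an assumption rather than a statement of the paper.

```python
import numpy as np

def transform_cout(c_out, c_min=0.60, c_max=0.99):
    """Assumed log-ratio transformation of the continuum-suppression BDT output."""
    c_out = np.asarray(c_out, float)
    return np.log((c_out - c_min) / (c_max - c_out))
```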
Multiple candidates arise due to random combinations of final-state particles. To select the best combination in an event with multiple candidates, we first compare the π 0 mass-constrained fit χ 2 probability ('p-value'). If there are two or more B candidates sharing the same π 0 , we choose the one with the best p-value of the fit of the K 0 S vertex. This selection retains the correct B candidate in 74% of simulated signal events. The signal efficiency ( ) of correctly reconstructed events after all selection criteria have been applied is 12.3%. From simulation we find that signal candidates can be incorrectly reconstructed in 1.5% of the times by accidentally picking up a particle from the other B meson decay. We determine the flavor of the tag-side B meson (q) from the properties of the finalstate particles that are not associated with the reconstructed B 0 → K 0 S π 0 decay. The Belle II multivariate flavor-tagger algorithm [17] uses the information of B-decay products to determine the quark-flavor of B mesons. It gives two parameters, the b-flavor charge, q and its quality factor r. The parameter r is an event-by-event, MC determined flavourtagging dilution factor that ranges from 0 (no flavor discrimination) to 1 (unambiguous flavor assignment). DETERMINATION OF BRANCHING FRACTION AND CP ASYMMETRY We obtain the signal yield and direct CP asymmetry from an extended maximumlikelihood fit to the unbinned distributions of M bc , ∆E, C out , and ∆t. For the signal component, M bc is modeled with the sum of a Crystal Ball [18] and a Gaussian function with a common mean; ∆E with the sum of a Crystal Ball and two Gaussian functions, all three with a common mean; and C out with the sum of an asymmetric and a regular Gaussian function. The signal ∆t probability density function (PDF) is given by where w r is the fraction of incorrectly tagged events, ∆w r is the difference in w r between B 0 and B 0 , µ r is the difference in their tagging efficiency (that is the fraction of signal B 0 or B 0 candidates to which a flavor tag can be assigned), and R sig is the ∆t resolution function. The function R sig is composed of a sum of two Gaussians with a combined width of ≈ 0.9 ps, and its parameters are determined with simulated events. We set τ B 0 to 1.520 ps, ∆m d to 0.507 ps −1 , and S CP to 0.57 [3]. The data are divided into seven q × r bins with the tagging parameters for each bin (w r , ∆w r , and µ r ) fixed to the corresponding values [17]. The effective tagging efficiency eff (= r r × (1 − 2w r ) 2 , where r is the partial effective efficiency in the r-th bin), w r , and µ r are (30.0±1.2)%, (2-47)%, and (0.5-11)%, respectively. All signal PDF shapes are fixed to the values determined from a q × r binned fit to simulated events. For the continuum background component, an ARGUS function [19] is used for M bc , a linear function for ∆E, and the sum of an asymmetric and a regular Gaussian function for C out . Its ∆t distribution is modeled with an exponential function convolved with a Gaussian for the tail; we use a double Gaussian for its resolution function (R qq ). For the continuum background component, we float the PDF shape parameters, which are found to be common for all q × r bins. For the BB background component, a two-dimensional Kernel estimation PDF [20] is used to model the ∆E vs. M bc distribution, and the sum of an asymmetric and a regular Gaussian function is used for C out . 
Its ∆t distribution is modeled with an exponential function convolved with a Gaussian for the tail; we again use a double Gaussian for its resolution function (R BB ). The BB background shape parameters are fixed from a fit to the corresponding simulated sample. The fit parameters are the signal yield N sig ; A CP ; BB background yield, which is Gaussian constrained to the result of a fit to the ∆E sideband in data; continuum background yield; M bc ARGUS parameter; ∆E slope; and effective width of C out for the qq component. We correct the signal M bc , ∆E, and C out PDF shapes for possible data-simulation differences, according to the values obtained with a control sample of B + → D 0 (→ K + π − π 0 )π + (charge conjugated modes are implicitly included hereafter). In order to mimic the signal decay, we apply a similar π 0 selection. We use a maximum-likelihood fit to the unbinned distributions of M bc , ∆E, and C out , using PDF shapes similar to those employed to describe the signal in data. We use a control sample of B 0 → J/ψ (→ µ + µ − )K 0 S decays to validate the time-dependent analysis. To mimic the signal decay, we do not use the two muons coming from the J/ψ to reconstruct the signal B decay vertex. We use a maximum-likelihood fit to the unbinned distributions of M bc and ∆t, using PDF shapes and resolution functions similar to those employed in the fit to the signal in data. The B 0 lifetime and A CP are measured to be 1.59 +0.09 −0.08 ps and −0.03 ± 0.10, respectively, which are consistent with their known values [3]. The uncertainties quoted here are statistical only. This provides convincing data-driven support for the time-dependent part of the analysis. The same sample is also used to correct the ∆t PDF shape parameters for possible data-simulation differences. The estimator properties (mean and uncertainty) have been studied in both simplified and realistic simulated experiments and found to be as expected. Figure 2 shows the four projections of the fit to the seven q × r-integrated data samples which include both B 0 and B 0 candidates. For each projection the signal enhancing criteria, 5.27 < M bc < 5.29 GeV/c 2 , −0.15 < ∆E < 0.10 GeV, |∆t| < 10.0 ps, and C out > 0.0, are applied on all except for the variable displayed. The obtained signal yield is 135 +16 −15 , where the quoted uncertainty is statistical only. We also find 2214 +49 −48 continuum and 44 ± 5 BB background events. We determine the branching fraction using the following formula: where N BB = (197.2 ± 5.70) × 10 6 , f 00 = 0.487 ± 0.010 [21], and B s = 0.5 are the number of BB pairs, Υ (4S) → B 0 B 0 branching fraction, and K 0 → K 0 S branching fraction, respectively. The B 0 → K 0 π 0 branching fraction and direct CP asymmetry (A CP ) are measured to be (11.0 ± 1.2 ± 1.0) × 10 −6 and −0.41 +0.30 −0.32 ± 0.09, respectively. The first uncertainties are statistical and the second is systematic (described in Section 5). This extends the previous measurement [22] of B and A CP in B 0 → K 0 π 0 decays, where no information on the proper-time difference was used. SYSTEMATIC UNCERTAINTIES The various systematic uncertainties contributing to B and A CP are listed in Table I. Assuming these sources to be independent, we add their contributions in quadrature to obtain the total systematic uncertainty. The systematic uncertainty due to possible differences between data and simulation in the reconstruction of charged particles is 0.3% per track [23]. 
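The flavor-tagged signal ∆t model and the branching-fraction expression used above can be sketched in simplified, hedged form: the ∆t density below keeps only the leading wrong-tag dilution (the per-bin ∆w_r and µ_r corrections and the convolution with the resolution function R_sig are omitted), and the branching fraction is a back-of-the-envelope B = N_sig / (2 N_BB f_00 B_s ε), assuming the quoted efficiency already absorbs the K0S → π+π− and π0 → γγ sub-branching fractions.

```python
import numpy as np

def signal_dt_density(dt, q, tau=1.520, dm=0.507, s_cp=0.57, a_cp=-0.41, w=0.25):
    """Simplified flavor-tagged Delta-t model (leading dilution only, no resolution)."""
    dilution = 1.0 - 2.0 * w
    osc = s_cp * np.sin(dm * dt) + a_cp * np.cos(dm * dt)
    return np.exp(-np.abs(dt) / tau) / (4.0 * tau) * (1.0 + q * dilution * osc)

# Back-of-the-envelope branching fraction from the quoted inputs
n_sig, n_bb, f00, b_s, eff = 135, 197.2e6, 0.487, 0.5, 0.123
bf = n_sig / (2 * n_bb * f00 * b_s * eff)
print(f"B(B0 -> K0 pi0) ~ {bf:.2e}")   # ~1.1e-5, in the ballpark of the quoted 11.0e-6
```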
We linearly add this uncertainty in B for each of the two pion tracks coming from the decay of the K 0 S in the signal B. From a comparison of the K 0 S yield in data and simulation, we find that the ratio of the K 0 S reconstruction efficiency changes approximately as a linear function of its flight length [23]. We apply an uncertainty of 0.4% for each centimeter of the average flight length of the K 0 S candidates resulting in a 4.2% total systematic uncertainty in B. We estimate the systematic uncertainty due to possible differences between data and simulation in the π 0 reconstruction and selection by comparing the inclusive decay sample of D 0 → K − π + π 0 with D 0 → K − π + [24]. The data-simulation efficiency ratio is found to be close to unity with an uncertainty of 7.5%, which we assign as a systematic uncertainty in B. We evaluate possible data-simulation differences in the continuum-suppression efficiency using the control sample of B + → D 0 (→ K + π − π 0 )π + . As the ratio of efficiencies obtained in data and simulation is close to unity, the statistical uncertainty in the ratio (1.6%) is assigned as a systematic uncertainty to B. We estimate the systematic uncertainty in A CP due to the uncertainty in the wrong-tag fraction by varying the parameter individually for each q × r region by its uncertainty. The systematic uncertainty due to the ∆t resolution function is estimated in a similar fashion. As external inputs τ B 0 , ∆m d , and S CP are fixed to their known values in the fit, the associated systematic uncertainties are estimated by varying the values by their uncertainties. In the nominal fit, we assume the BB-background decays to be CP symmetric. To account for a potential CP asymmetry in the BB background, we use an alternative ∆t PDF given by We perform fits to simplified simulated experiments by varying S CP and A CP from +1 to −1. We then calculate the deviations in signal A CP from its nominal value. These deviations are assigned as a systematic uncertainty to A CP due to the asymmetry of the BB background. An overall uncertainty of 3.2% in B is taken as a systematic uncertainty due to the number of BB pairs used, which also includes the uncertainty in f 00 . The uncertainties due to the signal PDF shape parameters are estimated by varying their uncertainties. Similarly, the uncertainties due to the background PDF shape are calculated by varying all fixed parameters by their uncertainties, determined from the fit to simulated samples. We fix the M bc ARGUS endpoint to the value obtained from a fit to the ∆E sideband data. Subsequently we vary it by ±1σ to assign a systematic uncertainty, where σ is the uncertainty from the fit. A potential fit bias is checked by performing an ensemble test comprising 1000 simplified simulated experiments in which signal events are drawn from the corresponding simulation sample and background events are generated according to their PDF shapes. We calculate the mean shift of the signal yield from the input value and assign it as a systematic uncertainty. Tag-side interference can arise due to the presence of both CKM-favored and -suppressed tree amplitudes. The systematic uncertainty in A CP assigned to this interference is taken from Ref. [6]. A possible systematic uncertainty related to VXD misalignment is neglected in this study. 
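Since the contributions in Table I are treated as independent, the total systematic uncertainty is a straightforward quadrature sum; a trivial sketch with placeholder values:

```python
import math

# Placeholder fractional systematic contributions (not the actual Table I entries)
contributions = [0.006, 0.042, 0.075, 0.016, 0.032]
total = math.sqrt(sum(c ** 2 for c in contributions))
print(f"total systematic uncertainty: {total:.3f}")
```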
We report measurements of the branching fraction and direct CP asymmetry in B 0 → K 0 π 0 decays using a data sample corresponding to 189.8 fb −1 of integrated luminosity, recorded by Belle II at the Υ (4S) resonance. The observed signal yield is 135 +16 −15 . We measure B(B 0 → K 0 π 0 ) = [11.0 ± 1.2(stat) ± 1.0(syst)] × 10 −6 and A CP = −0.41 +0.30 −0.32 (stat) ± 0.09(syst). This is the first measurement of A CP in B 0 → K 0 π 0 performed at Belle II using a decay-time-dependent analysis. The results agree with previous determinations [3,22].
Impact of Ego-resilience and Family Function on Quality of Life in Childhood Leukemia Survivors. BACKGROUND This study aimed to examine the impact of ego-resilience and family function on quality of life in childhood leukemia survivors. METHODS This study targeted 100 pediatric leukemia survivors, who visited the Pediatric Hemato-Oncology Center in South Korea from Aug to Dec 2011. A structured questionnaire covering ego-resilience, family function and quality of life was used to collect data through direct interviews with the pediatric patients and their parents. The correlation between the study variables was analyzed using Pearson's correlation coefficient, and the impact on quality of life was analyzed using a stepwise multiple regression. RESULTS Ego-resilience (r = 0.69, P < 0.001) and family function (r = 0.46, P < 0.001) had a positive correlation with quality of life and all the sub-categories of quality of life. Ego-resilience was a major factor affecting quality of life in childhood leukemia survivors, with an explanatory power of 48%. The explanatory power for quality of life increased to 53% when age and family function were included. CONCLUSION Ego-resilience, age, and family function affect quality of life in childhood leukemia survivors. Hence, strategies are required to construct age-matched programs to improve quality of life, in order to help restore the necessary ego-resilience and to strengthen family function in childhood leukemia survivors. Introduction Pediatric patients diagnosed with cancer, including leukemia, suffer greatly while receiving treatment such as chemotherapy, stem cell transplantation, and radiotherapy, depending on their disease. Even after complete remission, they experience physical side effects such as infection, pain, deterioration of physical function, endocrine disorders, visual impairment, and growth impairment, as well as psychological side effects such as anxiety, depression, and posttraumatic stress syndrome. They also have difficulty in social adjustment or coping due to memory impairment and lack of interpersonal skills (1)(2)(3). Such problems make it difficult for pediatric patients to return to social life after cancer treatment and have a negative impact on their quality of life (4). Long-term treatment also disrupts their school life, and difficulties in readjusting to normal daily life are experienced even after the completion of treatment (5). Owing to the development of medical technologies, the 5-yr survival rate of Korean children with leukemia aged less than 15 yr reached 74.7% in 2011 (6). As a result, leukemia is no longer considered an incurable disease and is instead viewed as a chronic disease requiring long-term treatment. Consequently, it is important to help pediatric patients return to their daily lives, including school life, after the completion of treatment. Pediatric patients with leukemia experience the deterioration of physical functions and other diverse symptoms during treatment, which lowers health-related quality of life; long-term physical and psychological side effects as well as cognitive changes after completion of treatment continue through adulthood (7)(8)(9). In addition, because patients are separated from school during the treatment period, they have difficulties with participation in education and learning, as well as social difficulties in making close friends (10). Pediatric leukemia patients show worse social adjustment than their peers do, and this is correlated with quality of life (9,11).
Therefore, it is necessary to explore the physical well-being and the emotional, social, and environmental quality of life perceived by childhood leukemia survivors after treatment completion, and related factors. Ego-resilience, recently considered to be related to quality of life (12), is the ability to adjust successfully to changing situational demands and environments by responding flexibly based on self-control (13). Persons high in ego-resilience tend to handle stressful situations flexibly and dynamically, have a better ability to recover from negative emotional experiences, and have less perceived stress about such situations (13)(14). Thus, it is meaningful to identify and strengthen the level of ego-resilience in survivors of childhood leukemia facing stressful situations. Leukemia also changes home life, as the patient's disease condition, activity level, and quality of relationships serve as stressors for the family and cause conflicts among family members (15)(16). Further, such family relations and conflicts cause depression or anxiety in pediatric patients with leukemia (15). Adolescents with good relationships with their parents tend to be highly autonomous and have good psychological well-being due to mutual communication (17). Therefore, it is necessary to investigate the level of family function perceived by pediatric patients with leukemia and to identify factors affecting such functioning. This study aimed to examine the impact of ego-resilience and family function on quality of life in childhood leukemia survivors. Study Design This study employed a descriptive survey research design to examine the impact of ego-resilience and family function on quality of life in childhood leukemia survivors. Participants and Data Collection Participants were childhood leukemia survivors aged between 7 and 15 yr, who had been diagnosed with leukemia at the Pediatric Hemato-Oncology Center of C University Hospital in Seoul, South Korea, and had achieved complete remission after completing chemotherapy. Convenience sampling was used to recruit a sample of 100 participants, allowing for attrition. After obtaining approval from the Institutional Review Board (IRB) of C University regarding the study objectives, methods, and procedures, data were collected from Aug to Dec 2011. The researcher received written consent for participation through direct interviews with pediatric patients and parents, after explaining the study objectives and questionnaire content. After receiving instructions from one trained research assistant, the pediatric patients filled out the questionnaire without interruption from their parents. Instruments The ego-resilience of pediatric patients with leukemia was measured using the self-report ego-resilience tool for early adolescents modified by Cho and Lee (18) from the parent-report ego-resilience tool (19). This instrument has 24 questions measured on a 4-point scale (1-4 points). It comprises the following areas: peer group relations and optimism, sympathy and self-acceptance, concentration and confidence about tasks, understanding, and leadership. The score ranges from 24 to 96 points, where higher scores are indicative of higher ego-resilience. The Cronbach's α value indicating internal consistency reliability was 0.83 in the original study (16) and was 0.87 (.55-.87) in the present study. Family function was measured using a tool (20) based on the Family APGAR score (21). This tool has five questions measured on a 3-point scale (0-2 points).
It evaluates the adaptability, cooperation, development, affection, and resolution of family members. The total score ranges from 0 to 10 points. Total scores in the range of 0-3 were categorized as the severely dysfunctional group, 4-6 points denoted the moderately dysfunctional group, and 7-10 points denoted the highly functional group. The Cronbach's α value for internal consistency reliability was 0.80 in the study in which the tool was developed, and 0.72 in the present study. Quality of life was measured using a tool standardized to suit the Korean context (2), based on the KIDSCREEN 52-HRQOL (22). This tool has 52 questions measured on a 5-point scale (1-5 points), across the following areas: physical well-being, psychological well-being, mood and emotions, social support and peers, parent relations and home life, autonomy, self-perception, school environment, social acceptance (bullying), and financial resources. In the present study, the quality of life score was calculated as a T conversion score with a mean of 50 and a standard deviation of 10, to ensure that the scores were comparable to those of previous studies. The Cronbach's α value for internal consistency reliability was 0.77-0.95 in the Korean version of the KIDSCREEN 52-HRQOL (2) and 0.94 (0.75-0.94) in the present study. Statistical Analysis The data were analyzed using SAS Version 9.2. General characteristics of participants, ego-resilience, family functioning, and quality of life were examined using frequency, percentage, mean, and standard deviation. Differences in quality of life based on participants' general characteristics were tested using the t-test, ANOVA, and Scheffé's test. Correlations among ego-resilience, family function, and quality of life were examined using Pearson's correlation coefficients. Finally, factors affecting quality of life were identified using a stepwise multiple regression. General Characteristics of Participants The mean age of the participants was 10.9 yr (range 7-15 yr), and those aged 7-12 yr accounted for 74% of the sample. Boys accounted for 59%, and elementary school students, middle school students, and students on leave of absence accounted for 50%, 40%, and 10% of the sample, respectively. Those with a religion accounted for 59%, and those whose fathers and mothers had jobs accounted for 94% and 43%, respectively. Further, 52% had siblings. Acute lymphoblastic leukemia (ALL) accounted for 81% of the sample (Table 1). Ego-resilience, Family Function, and Quality of Life The mean ego-resilience score of the participants was 67.65 points out of the total 96 points possible. With reference to the sub-categories, peer group relations and optimism had the lowest mean scores, while leadership had the highest. The mean family function score was 6.52 points out of a total possible 10 points. The severely dysfunctional group, moderately dysfunctional group, and highly functional group accounted for 12%, 35%, and 53% of the sample, respectively. The mean total quality of life score was 49.85; physical well-being and autonomy had the lowest scores, and parent relations and home life, and financial resources had the highest scores (Table 2). Differences in Quality of Life Based on Participants' General Characteristics The mean quality of life score in pediatric patients aged 13-15 yr was 48.19 points, which was lower than the mean of 50.43 points for those aged 7-12 yr (P = 0.050). Middle school students had a mean score of 48.26 points, which was lower than the mean of 51.14 points for those in elementary school (P = 0.024). 
Those diagnosed with ALL had a mean score of 49.29 points, which was lower than the mean score of 52.23 for those with AML (P = 0.020) ( Table 1). (Table 3). Factors Affecting Quality of Life To examine the factors affecting quality of life of pediatric patients with leukemia, a stepwise regression analysis was conducted using the following independent variables: age and educational level (dummy variables) among the general characteristics, because they showed a difference for quality of life; and ego-resilience and family functioning, correlated with quality of life. A major factor affecting the quality of life of pediatric leukemia patients was ego-resilience, with an explanatory power of 48%, which increased to 53% when age and family function were included (Table 4). Discussion In the present study, the mean quality of life score of the participants was 49.85 points, which was similar to the 49.12 points scored by children aged 6-17 yr in the 4 th month after being diagnosed with pediatric cancer in a Swiss pediatric hospital (23). Survivors of childhood leukemia showed similar quality of life scores as compared to pediatric patients with pediatric cancer in the process of treatment, probably because they still had delayed symptoms after treatment, were receiving follow-up care even though treatment has been completed, and were in the process of recovery. Hence, it is necessary to monitor pedia-tric leukemia survivors and provide them continuous care in physical, psychological, and social domains. In the present study, physical well-being and autonomy had the lowest average scores among the sub-categories of the quality of life, while parent relations and home life, and financial resources had the highest scores. This result was similar to that of pediatric patient's aged 12-17 yr old receiving treatment for bone tumors, as their average score for the physical well-being category was low, while those for financial resources, and parent relations and home life were the highest (24). Survivors of childhood leukemia who participated in the present study were recovering after the completion of treatment, while the pediatric patients with bone tumors were within 3 months of adjuvant treatment. Therefore, the side effects of active treatment were assumed to have a negative effect on their physical well-being. However, healthy school-aged children (25) showed different results, as their scores on the financial resource, free time categories were the lowest, and those on social acceptance, and mood, and emotions were the highest. This is probably because childhood leukemia survivors cannot lead an autonomous life due to their physical health issues, whereas healthy school-aged children are able to enjoy friendships, but have limited time due to school life and studies after school. In the present study, the quality of life of patients aged 13-15 yr was lower than that of those aged 7-12 yr. This was similar to a result in which the quality of life perceived by 13-18 yr old adolescent cancer patients was lower than that of 8-12yr-old pediatric cancer patients (26). Quality of life presumably deteriorated due to physical discomfort and emotional distress after treatment in addition to the sudden physical and emotional changes that happen during adolescence. In the present study, the quality of life of patients who were middle school students was lower than that of patients who were elementary school students. This is presumably because adolescent's survived leukemia has low social adaptability. 
In addition, their quality of life in schoolwork likely deteriorates because they experience physical dis-comfort and emotional distress after treatment along with adolescence, which is a period when sudden physical and physiological changes occur and social relations expand according to the characteristics of this developmental stage (26). Therefore, more attention and active support needed for early adolescent patients. We found that the quality of life of ALL patients was lower than that of AML patients, which was different from a study that reported that quality of life did not differ depending on the type of pediatric cancer (26). AML patients accounted for 19% of our sample, while ALL patients accounted for 81%. In addition, previous studies (7,27) revealed that the quality of life of pediatric cancer survivors varied depending on the type of cancer, specific diagnosis, and lapse of time after treatment completion, and related to performance level and the number of side effects. Therefore, future replication studies of variables related to quality of life need to include a larger sample. According to our findings, the higher the egoresilience score, the higher was the quality of life. This was similar to the result found (28) targeting pediatric cancer patients. In previous studies, adolescents with high ego-resilience and social support showed high adaptability to school life (29) and the better they adapted to social life, the more the quality of life improved among adolescent survivors of leukemia (9). Ego-resilience is an internal element that helps the individual to respond and adapt to external stress. For childhood leukemia survivors, increased resilience toward internal and external stress leads to strengthened social adaptability, believed to have a positive effect on their physical, emotional, and social quality of life. Ego-resilience, related to an individual's internal characteristics (30), can serve as a parameter affecting quality of life along with other characteristics such as physical health and self-esteem (12). Therefore, future studies must identify factors affecting ego-resilience and quality of life. In the present study, higher family function perceived by leukemia patients corresponded to higher quality of life in the categories of mood and emotions, parent relations and home life, self-perception, autonomy, school environment, and financial resources. This was similar to a study in which higher family function and social support perceived by 13-18-yr-old adolescents corresponded to higher quality of life (31). This was also similar to another study in which higher maternal support corresponded to higher quality of life of adolescent survivors of pediatric cancer, and higher scores for physical, emotional, social, and the schoolwork related quality of life corresponded to higher general quality of life (27). Direct comparison of these findings is impossible, as no other studies have examined the relations between quality of life sub-categories and family function. However, it presumed that if family support, family cohesion, and family function perceived by pediatric patients are high, it brings about a positive effect on ego-resilience and quality of life. Therefore, it is important to examine if family support and ego-resilience serve as parameters affecting quality of life. 
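As a concrete illustration of the analytic step discussed next, the following minimal sketch shows how the stepwise entry of ego-resilience, age, and family function into a regression on the quality of life T score yields the explanatory power (R²) values reported below. The original analysis was run in SAS 9.2; the Python translation and the column names used here are hypothetical placeholders, not the authors' code or data.

```python
# Minimal sketch (assumed data layout; the original analysis used SAS 9.2).
# Column names such as 'qol_t', 'ego_resilience', 'age', and 'family_function'
# are hypothetical placeholders for the study variables.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("leukemia_survivors.csv")  # hypothetical file, one row per participant

# Pearson correlations among the main study variables
print(df[["qol_t", "ego_resilience", "family_function"]].corr())

def fit_r2(predictors):
    """Fit an OLS model of quality of life on the given predictors and return R^2."""
    X = sm.add_constant(df[predictors])
    return sm.OLS(df["qol_t"], X).fit().rsquared

# Step 1: ego-resilience alone (the paper reports ~48% explanatory power)
r2_step1 = fit_r2(["ego_resilience"])
# Step 2: add age and family function (the paper reports ~53%)
r2_step2 = fit_r2(["ego_resilience", "age", "family_function"])
print(f"R2 ego-resilience only: {r2_step1:.2f}; with age and family function: {r2_step2:.2f}")
```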
A major factor affecting the quality of life of pediatric patients with leukemia was ego-resilience, with an explanatory power of 48%, which rose to 53% upon inclusion of age and family function. It is hard to compare these findings with other studies because no previous study has analyzed factors influencing quality of life in this population. However, these findings are similar to studies (28, 31) that revealed that the ego-resilience and family function of pediatric cancer patients had a positive correlation with quality of life. In previous studies, quality of life was low in school-age children from higher grades (25), and resilience was high in adolescent pediatric cancer patients who had good communication among family members for problem-solving (30). Considering these findings, positive communication among family members is presumed to affect ego-resilience and improve quality of life. A study (32) supported this result, as the author claimed that the resilience of adolescents increased if family atmosphere and protective function were high, and that the stronger the resilience, the higher was their quality of life. Therefore, improving the ego-resilience of cancer patients is an important element in the psychological and social care of pediatric patients (33). These findings show that ego-resilience, age, and family function affect quality of life in childhood leukemia survivors. It is therefore necessary to strengthen ego-resilience in the developmental phase, as well as family function, to improve quality of life in childhood leukemia survivors. Study Limitations The small sample of childhood leukemia survivors limits the possibility of drawing firm conclusions regarding robustness. We also did not find a correlation between clinical and psychosocial data. Conclusion Ego-resilience and family function had a positive correlation with quality of life. A major factor affecting quality of life was ego-resilience, with an explanatory power of 48%, which rose to 53% on including age and family function. Considering these findings, interventions to improve quality of life in childhood leukemia survivors should be configured to provide the information required for physical symptom management; to increase ego-resilience by evaluating the characteristics of pediatric patients, their ego-resilience, and their family function; and to strengthen the support of parents and family. In the future, a multi-faceted study should investigate the psychological variables and parameters affecting quality of life in childhood leukemia survivors. A longitudinal study is also required to track changes in ego-resilience, family function, and quality of life according to the growth of pediatric patients. Another study is also needed to develop and apply clinical nursing practice interventions to improve quality of life in childhood leukemia survivors and to evaluate their effectiveness. Ethical considerations Ethical issues (including plagiarism, consent, misconduct, data fabrication and/or falsification, double publication and/or submission, redundancy, etc.) have been completely observed by the authors.
Prevalence and predisposing factors of chronic kidney disease in Yazd city; a population-based study 1Yazd Cardiovascular Research Centre, Shahid Sadoughi University, Yazd, Iran 2Division of Nephrology, Department of Internal Medicine, Shahid Sadoughi University of Medical Sciences, Yazd, Iran 3Department of Statistics and Epidemiology, Shahid Sadoughi University of Medical Sciences, Yazd, Iran 4Department of Biostatistics, School of Public Health, Shahid Sadoughi University of Medical Sciences, Yazd, Iran 5Division of Nephrology, Department of Internal Medicine, Shahid Sadoughi University of Medical Sciences, Yazd, Iran Introduction Chronic kidney disease (CKD) is one of the public health challenges in the world (1) and its prevalence is increasingly high in developing countries (2). This disease ranked 27 th among the causes of death in 1990, but by 2010, it ranked 18 th (3). CKD is defined as a decrease in the amount of glomerular filtration or urinary excretion of albumin (4,5). The progression of the disease is usually mild and asymptomatic until the end-stage renal disease (ESRD). At this stage, the function of kidneys decreases to less than 15 percent and the patient requires renal replacement treatments, such as dialysis or kidney transplantation to survive (6). The ESRD patients have a lower quality of life and life expectancy than the general population. In addition, reduction of the kidney function increases the risk of cardiovascular diseases and their related mortality rate (7). Although the CKD progresses over time, causes new problems and aggravates the previous complications, its progress may be reduced and its costly management may be avoided by early diagnosis. Due to the late diagnosis of CKD in developing countries, most patients are identified at the late stages of the disease. The worldwide study over the burden of diseases showed that the mortality caused by CKD in Iran was less than one percent in 1990, but rose to higher than 2 percent in 2013. The reduction of GFR was also mentioned as one of the major causes of mortality in Iran (8). This increase in the prevalence of the disease requires urgent action and the first step is to measure the incidence and trends of CKD in Iran. Despite the extensive research in the developed countries, studies on the prevalence of CKD and its determinants are not enough in developing countries such as Iran (9). In previous studies conducted in Iran, fluctuations in the prevalence of CKD stages III-V in some provinces of Iran have been very different and wide and have been reported between 6 and 17% (10). This indicates the need to conduct this study in other parts of Iran. No studies have been conducted on the prevalence of the disease in central Iran, Yazd. Especially that diabetes, which is an important cause of CKD, has a high prevalence in this region of Iran (11). In addition, population-based studies were rarely conducted in Iran. The lack of basic and precise information on CKD in Iran prevented understanding the burden and early diagnosis of this disease. Therefore, high quality studies on CKD are required in Iran. Objectives In this population-based study, we aimed to determine the prevalence of CKD and its determinants among the 20-69 years population using data from Yazd Health Study (YaHS) data (12). Study design This cross-sectional study was conducted on YaHS recruitment phase data collected during 2014-15 (12). 
Ten thousand residents of Yazd Greater Area who aged 20-69 years old were selected using the cluster random sampling and took participate in YaHS. The YaHS researchers conducted interviews and calculated the anthropometric measurements (height, weight, waist circumference and hip circumference) and blood pressure according to a validated protocol. Details of YaHS have been published elsewhere (13). Overall, 40 percent of the participants (n = 3825) agreed to give their blood samples for various tests to laboratory. Of them, 175 individuals were excluded because they did not have the required information for calculating GFR. Therefore, in the current study, we used the information collected from 3649 participants. Demographic data including age, gender, educational level, marital status, and also history of tobacco smoking, cardiovascular diseases, diabetes and high blood pressure were collected using a structured questionnaire. Physical examinations such as anthropometric measurements and blood pressure were conducted by trained staff. The abdominal circumference was measured to the nearest 0.1 cm with participants wearing light clothes and without any pressure on the body surface. The hip circumference was measured at the widest part of the buttocks using the same method. Then, the waist circumference was divided by the hip circumference and the waist-to-height ratio (WHR) was obtained. The risk levels were defined as WHR ≥ 0.9 cm in men and WHR ≥ 0.85 cm in women (13). A tape measure was employed to measure the height of participants in cm with no shoes, hat, or hair clip. The weight was also measured using Omron BF511 digital body scan (Omron Inc. Nagoya, Japan), with accuracy of 0.1 kg. The body mass index (BMI) was obtained by dividing the weight in kg by the height squared in meter. The BMI in the range of 25-29.9 kg/m² was defined as overweight and BMI ≥ 30 kg/m² showed obesity. Furthermore, we utilized the standard gauge pressure to measure the participants' blood pressure after five minutes of rest in sitting position. Blood pressure was measured three times from the right hand of the individual with at least five minutes interval; the mean of the last two measurements was calculated and defined as the participant's blood pressure. Hypertension was defined as systolic blood pressure of ≥140 mm Hg, diastolic blood pressure of ≥90 mm Hg or consumption of hypertensive blood pressure drugs (14). After 12 hours of fasting, 10 mL of venous blood sample was collected from each participant. The biochemical tests including creatinine, fasting blood glucose, cholesterol, low-density lipoproteins (LDL-c), high-density lipoproteins (HDL-c) and triglycerides (TG) were measured using the enzyme colorimetric kits (Pars Azmon). In our analysis, low HDL-c was defined as less than 40 mL/dL in men and less than 50 mL/dL in women. Individuals with fasting blood sugar ≥126 mL/dL, history of diabetes or consumption of anti-diabetes medications were defined as patients with diabetes. Serum cholesterol ≥200 mL/dL, triglyceride ≥150 mL/dL and LDL-c ≥130 mL/dL were classified as higher than normal rates. The serum creatinine levels were measured according to the Jaffe's kinetic standard method (Pars Azmon). Furthermore, to calculate the estimated glomerular filtration rate (eGFR), the modification of diet in renal disease (MDRD) equation was used as recommended by the national kidney foundation (15,16). 
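The text of the MDRD equation used for eGFR is not reproduced in this section; as an illustration only, the following minimal sketch shows the widely used four-variable MDRD formula together with the KDOQI staging thresholds described next. Which coefficient set the authors applied cannot be confirmed from the text, so the IDMS-traceable 175 constant (with the race term shown for completeness) is an assumption rather than the study's exact implementation.

```python
# Hedged sketch: the paper states that eGFR was computed with the MDRD equation,
# but the equation text is not shown. The widely used four-variable MDRD formula
# is given below as an illustration; the 175 coefficient (vs. the original 186)
# and the race term are assumptions, not confirmed from the study.

def mdrd_egfr(serum_creatinine_mg_dl: float, age_years: float,
              female: bool, black: bool = False,
              coefficient: float = 175.0) -> float:
    """Estimated GFR in mL/min/1.73 m^2 from the four-variable MDRD equation."""
    egfr = (coefficient
            * serum_creatinine_mg_dl ** -1.154
            * age_years ** -0.203)
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def ckd_stage(egfr: float, kidney_damage: bool = False):
    """Map eGFR to KDOQI-style stages; stages 1-2 additionally require kidney damage.
    Returns None when eGFR >= 60 without evidence of kidney damage (no CKD)."""
    if egfr >= 90:
        return 1 if kidney_damage else None
    if egfr >= 60:
        return 2 if kidney_damage else None
    if egfr >= 30:
        return 3
    if egfr >= 15:
        return 4
    return 5

# Example: a 55-year-old woman with serum creatinine of 1.4 mg/dL
example = mdrd_egfr(1.4, 55, female=True)
print(round(example, 1), ckd_stage(example))  # ~39 mL/min/1.73 m^2 -> stage 3 (CKD as defined in this study)
```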
The Kidney Disease Outcomes Quality Initiative guidelines define CKD stage 1 as eGFR ≥ 90 mL/min/1.73 m² with evidence of kidney damage; stage 2, as eGFR in the range of 60-89 mL/min/1.73 m² (mild decrease in GFR); stage 3, as eGFR in the range of 30-59 mL/min/1.73 m² (moderate decrease in GFR); stage 4, as eGFR in the range of 15-29 mL/min/1.73 m² (severe decrease in GFR); and stage 5, as eGFR ≤ 15 mL/min/1.73 m² (dialysis-dependent kidney failure). In this study, we considered eGFR ≤ 60 mL/min/1.73 m² as CKD (stages 3 to 5). Data analysis All the continuous data with normal distribution were reported as mean ± standard deviation, and categorical variables were presented as percentages. Differences between continuous variables were investigated using the t-test, and differences among categorical variables were investigated by the chi-square test. A multivariate logistic regression model was applied to evaluate the odds ratios (OR) of risk factors associated with CKD. All the statistical analyses were conducted using SPSS version 20 at the significance level of 0.05. Results We studied a total of 3649 participants in the age range of 20-69 years, with a mean age of 46.0 ± 13.8 years. Of the total population, 53.7% (n = 1960) were women. In terms of BMI, 39% of the participants were overweight and 30% were obese. Around 74.5% of the participants had an abnormal WHR. In this study, the prevalence rates of type II diabetes and hypertension were 20.1% and 38.9%, respectively. Furthermore, 10.2% of participants had a history of smoking and 7.3% had a history of heart disease. The mean eGFR of the participants was 84.1 ± 17.7 mL/min/1.73 m²; women had lower values (82 mL/min/1.73 m²) than men (86.0 mL/min/1.73 m²; P < 0.001). The overall prevalence of CKD was 6.6% based on the eGFR calculated using the MDRD equation (5.4% in men and 7.6% in women). The prevalence rates of CKD in individuals with and without diabetes were 14.3% and 4.7%, respectively. The CKD rates in participants with high blood pressure and normal blood pressure were 11.8% and 3.3%, respectively. Our findings showed that 95% of the CKD patients were in stage three, 3% were in stage four, and only 2% (n = 5) were in stage five. The prevalence of CKD increased with age; the prevalence of CKD among 50-59-year-old individuals was 21.5% (26% in women and 16.8% in men). In all the age groups, the prevalence of CKD was higher in women than in men (Table 1). The mean age of CKD participants (59.8 ± 8.6 years) was significantly higher than that of individuals without CKD (45.6 ± 13.6 years; P < 0.001). The laboratory tests showed that the means of fasting blood sugar, serum creatinine, triglyceride, and serum cholesterol were significantly higher in CKD than in non-CKD persons (P ≤ 0.05). In the bivariate analysis, the factors associated with CKD were age, gender, diabetes mellitus, blood pressure, history of cardiovascular disease, BMI, WHR, and serum HDL-c (Tables 2 and 3). In the multivariate analysis, we found a significant relationship between CKD and the variables of age, gender, obesity, history of heart disease, diabetes, and hypertension (Table 4). In the age group of 20-49 years, only high blood pressure was associated with CKD (P = 0.004). We found that females had about a 49% higher risk (OR = 1.49, 95% CI = 1.10-2.02) of having CKD than males. A trend of association was also observed between age and CKD. 
In other words, the odds of developing CKD for the age groups of 40-59 and 60-69 years were about 4.23 (95% CI = 1.90-9.44) and 25.04 (95% CI = 11.33-55.34), respectively, compared with the odds of 20-39 years age group. The risk of CKD was about 1.9 (95% CI = 1.3-2.8) times higher in obese persons compared with those with BMI < 25 kg/m 2 . The risk of CKD was about 1.46 (95% CI = 1.08-1.97) times higher in participants with diabetes in comparison with the non-diabetic individuals. The risk of CKD was about 1.53 (95% CI = 1.11-2.10) times higher in patients with hypertension than the non-hypertensive people. The risk of CKD was about 1.87 (95% CI = 1.28-2.72) times higher in patients with a history of CVD/ stroke compared with those with no history. Discussion This study showed that the prevalence of CKD was 6.6 percent in people within the age range of 20-69 years in Yazd Greater Area; 5.4% in men and 7.6% in women. In addition, the prevalence of CKD among 50-59 year-old individuals was 21.5%. The city of Yazd is located in the central region of Iran. Compared to other provinces of Iran, it seems that the prevalence of this disease in Yazd has some differences and similarities with other regions of Iran It has been reported that the prevalence of CKD is very different in studies conducted in Iran. The lowest prevalence of CKD is in Golestan province, which is 4.6% (17) and the highest prevalence in Urmia which is 37.9% (18). Some of the probable causes of differences in these results are the difference in the method of measuring of serum creatinine, difference in the method of determining GFR, variations in the populations, diversity in racial and ethnic and also age differences. In our study, the sampling method was population based on age more than 20 years and we utilized MDRD formula to measure kidney function. In addition, the definition of CKD was GFR ≤ 60 mL/min/1.73 m 2 . Therefore, we decided to compare the results of our study with studies that, like ours, were population based and age and also used the MDRD method to determine GFR and the definition of CKD based on GFR was less than 60 mL/ min/1.73 m 2 . These four characteristics mentioned in the research methodology were found in studies conducted in Golestan, Fars and Tehran provinces (10,17,19,20). Khajehdehi et al calculated the prevalence of CKD in Fars province and indicated that the overall prevalence of the disease among people over 18 years of age was 11.6 percent (14.9% in women and 4. 5% in men) and for those over 60 years was 31% (19). In the city of Gonabad, Naghibi et al reported that the prevalence of CKD in individuals aged 20-60 years was 5.1percent (20). Najafi et al reported a prevalence of 4.6% for CKD based on GFR among adults ≥18 years in Golestan (17). Safari Nejad et al carried out a comprehensive population-based study on 17 thousand of people over the age of 14 in Iran during [2002][2003][2004][2005] and reported that the prevalence of CKD was 7.8% (10). Moreover, in our study, the prevalence of CKD was 6.6 percent. Based on the above studies, which are based on general population and with a sample size of over one thousand and appropriate methodology in terms of CKD definition and also GFR calculation, the prevalence of CKD in Iran is between 4.6 and 11.6%. This difference can also be related to genetic differences, different prevalence of diabetes and hypertension, which are the important parameters of CKD. 
In our study, the prevalence of CKD in older age groups was more than that of the younger age groups; the risk of the disease in age group of 69-60 years was approximately 25 times more than that of 20-39 years age group. Furthermore, Sepanlou et al estimated the prevalence of the disease in the age group of 40-75 years as 23.7% (26.6% in women and 20.6% in men) (9). In a univariate analysis, most of the studied factors were correlated with CKD. However, in the multiple regression model, female gender, older ages, high blood pressure, diabetes and history of heart disease were the most important risk factors which were associated with CKD. In multivariate analysis, we observed that CKD did not have any significant relationship with WHR and serum lipids (triglyceride, HDL-c, LDL-c, and cholesterol). In most studies, the chance of CKD was higher in women than men (21,22). We also found that the risk of CKD in women was 1.49 times higher than men. Most studies reported that diabetes and high blood pressure increased the chance of developing CKD (8,19,23). In this regard, we found that 43% of our population had diabetes and 70% had high blood pressure, which confirms the previous findings. The risk of CKD was higher in diabetic and hypertensive patients than in healthy individuals. Previous studies detected, a significant relationship between the history of heart disease and CKD (9,22). In our study, this relationship was also significant and the risk of CKD in patients with heart disease was 1.87 times higher than the healthy participants. As reported in most of the previous studies, BMI is one of the major risk factors of CKD (19,24). Obesity and high BMI can increase the risk of developing CKD (9). In our study, the risk of developing CKD was 1.7 times higher in obese (BMI> 30 kg/m²) participants than the individuals with BMI ≤ 25 kg/m². Conclusion CKD has a high prevalence in the population of this region of Iran. Considering the growing trend of aging and CKD risk factors such as diabetes and high blood pressure in Yazd, CKD will lead to significant health outcomes and expenditure of health resources. Furthermore, the health system should strive for early detection of CKD in order to prevent morbidity and mortality of this disease. Limitations of the study In our population-based study, we used an appropriate sample size. Moreover, standard methods of data collection and laboratory tests were applied. However, we were faced with several limitations; (a) we analyzed a cross-sectional data set, (b) we measured the serum creatinine only once; where ideally we could repeat the measurement three months later and (c) we did not collect the data related to urine albumin and protein excretion; therefore, the prevalence of CKD stage 1 and 2 could not be estimated in this population.
The complete chloroplast genome sequence of Solanum hougasii, one of the potato wild relative species Abstract Solanum hougasii is a wild tuber-bearing species belonging to the family Solanaceae. The complete chloroplast genome of S. hougasii was constituted by de novo assembly, using a small amount of whole genome sequencing data. The chloroplast genome of S. hougasii was a circular DNA molecule with a length of 155,549 bp and consisted of 85,990 bp of large single copy, 18,373 bp of small single copy, and 25,593 bp of a pair of inverted repeat regions. A total of 158 genes were annotated, including 105 protein-coding genes, 45 tRNA genes, and eight rRNA genes. Maximum likelihood phylogenetic analysis with 25 Solanaceae species revealed that S. hougasii is most closely grouped with S. tuberosum. Solanum hougasii, a wild tuber-bearing hexaploid species, is a relative to the cultivated potato, S. tuberosum. It was identified to be a source of resistance to late and early blight, root-knot nematode, and potato virus Y for potato breeding (Cockerham 1970;Brown et al. 1999;Inglis et al. 2007;Haynes and Qu 2016). Its EBN (Endosperm Balanced Number) value of four, theoretically makes it directly crossable for breeding purposes with cultivated tetraploid potatoes (Hawkes 1990;Ortiz and Ehlenfeldt 1992;Cho et al. 1997;Spooner et al. 2014;Haynes and Qu 2016). Moreover, its nuclear genome composition has evolutionally been identified by GISH analysis (Pendinen et al. 2012). S. hougasii has an allotropic behavior, that is, one genome belonged to AA and the other to BB. In addition, S. hougasii third genome is more intimately related to P genome or to the species related to P genome (Pendinen et al. 2012). The information of plastid genome of the wild species obtained in this study will provide an opportunity to investigate more detailed evolutionary and breeding aspects. The S. hougasii (PI161174) was originally collected in Michoacan, Mexico by International Potato Centre (CIP), provided via Highland Agriculture Research Institute and stored at Daegu University, South Korea. A paired-end (PE) genomic library was constructed with total genomic DNA, according to the standard protocol (Illumina, San Diego, USA) and sequenced using an HiSeq2000 at Macrogen (http://www. macrogen.com/kor/). Low-quality bases with raw scores of 20 or less were removed and approximately 5.1 Gbp of highquality PE reads were assembled by a CLC genome assembler (CLC Inc, Rarhus, Denmark) (Kim et al. 2015). The reference chloroplast genome sequence of S. commersonii (KM489054, Cho et al. 2016) was used to retrieve principal contigs representing the chloroplast genome from the total contigs using Nucmer (Kurtz et al. 2004). The representative chloroplast contigs were arranged in an order based on BLASTZ analysis (Schwartz et al. 2003) with the reference sequence and were connected to a single draft sequence by joining overlapping terminal sequences. DOGMA (Wyman et al. 2004) and BLAST searches were used to predict the chloroplast genes. The complete chloroplast genome of S. hougasii (GenBank accession no. MF471372) was 155,549bp in length and included 25,593bp inverted repeat (IRa and IRb) regions separated by small single copy (SSC) region of 18,373bp and large single copy (LSC) region of 85,990bp with the typical quadripartite structure of most plastids, and the structure and gene features were typically identical to those of higher plants. 
A total of 158 genes with an average size of 584.5bp were annotated including 105 protein-coding genes with an average size of 766.6bp, 45 tRNA genes, and 8 rRNA genes. An overall GC content was found to be 37.87%. Phylogenetic analysis was performed using chloroplast coding sequences of S. hougasii and 25 published species in Solanaceae family by a maximum likelihood method in MEGA 6.0 (Tamura et al. 2013). According to the phylogenetic tree, S. hougasii belonged to the same clade in Solanum species as expected, and was interestingly closely grouped with S. tuberosum (Figure 1). Disclosure statement The author reports no conflicts of interest. The author alone is responsible for the content and writing of this article.
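For readers who wish to inspect the deposited annotation, a brief sketch of retrieving the record and tallying its gene classes is given below. The accession number is taken from the text; the placeholder e-mail address and the assumption that all protein-coding genes are annotated as CDS features in the public record are illustrative only.

```python
# Sketch: retrieving the annotated S. hougasii plastome record (MF471372) and
# summarizing its gene content with Biopython. Network access and the exact
# feature layout of the public GenBank record are assumptions.
from Bio import Entrez, SeqIO

Entrez.email = "user@example.com"  # placeholder address required by NCBI
handle = Entrez.efetch(db="nucleotide", id="MF471372", rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()

print("length (bp):", len(record.seq))
gene_types = {}
for feat in record.features:
    if feat.type in ("CDS", "tRNA", "rRNA"):
        gene_types[feat.type] = gene_types.get(feat.type, 0) + 1
print(gene_types)  # expected to approximate 105 protein-coding genes, 45 tRNA, 8 rRNA
```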
Estimation of marine gravity anomaly model from satellite altimetry data (Case Study : Kalimantan and Sulawesi Waters-Indonesia) Nowadays satellite altimetry has become an advanced instrument to observe many natural physical phenomena, such as sea-level rise, ocean circulation, water mass changes, and marine gravity anomaly. The use of satellite altimetry data to compute marine gravity anomaly provides good results and costs relatively low. Those advantages make geodesists utilize this method as an alternative in geoid determination, especially over the seas. Several sets of satellite altimetry data from Cryosat 2, Jason 1 phase C, Geosat and ERS1 were used to compute gravity anomaly over the surrounding waters of Kalimantan and Sulawesi Island in Indonesia. The study area spans between -7°-7° N and 108°-127° E with a spatial resolution of 1’x1’. In the pre-processing step, the altimetry data especially Geosat and ERS1, were retracked to reduce errors due to the land influence. The main computation step was done by using two different methods, least square collocation (LSC) and Inverse Vening-Meinesz (IVM). The computed gravity anomaly models then assessed with the in-situ marine gravity data from the National Geophysical Data Center (NGDC). The best model in term of RMS error is the 10 km Gaussian filtered LSC with an RMS error of 15.042 mgal. The least accurate model is the non-filtered IVM with an RMS of 16.704 mgal. Introduction The Kalimantan and Sulawesi are two main islands of Indonesian Archipelago. In order to determine the regional geoid, the government of Indonesia in collaboration with Technical University of Denmark (DTU) ran airborne gravity surveys over those two Islands in 2008-2009 [1]. However, the surveys did not cover the waters surrounding those two islands [2]. In order to fill those gaps, some geodetic mission altimetry datasets were used to determine the gravity anomalies over an area between 7⁰S -7⁰ N and 108⁰E -127⁰E. The datasets consist of Sea Surface Height (SSH) measured by Cryosat-2, Jason-1 phase C, Geosat and ERS-1 satellite altimetry missions. By definition, SSH is the difference of altimeter range from the satellite altitude above the reference ellipsoid [3], or in another word SSH is the height of instantaneous sea surface above the reference ellipsoid. To make the SSH data usable in gravity anomaly determination, some accuracy defects related to land occurrence should be reduced by performing waveform retracking. Waveform retracking was performed to fit a model or functional form to the measured waveforms, and retrieve geophysical parameters such as range, echo power, etc [4]. Four retracking methods are available to use; threshold [5], subwaveform threshold [6], improved threshold [7], and beta-5 parameter [8]. The past research by Hsiao et al., shows that the subwaveform retracker gave the best result for SSH and altimeter-derived gravity values [9].The corrected altimetry range then used in SSH determination. Further explanation of the altimetry data handling will be discussed in Section 2 and 3. Next, the obtained SSH of those altimetry missions are then combined together to be used as the main input in gravity anomaly determination. Moreover, supplementary data are required, those are tide, reference geoid and sea surface topography (SST) models. This research objective is to make the best altimetry-only gravity model with spatial resolution of 1' x 1'. 
Satellite Altimetry Data There were four satellite altimetry missions used as the main input for this research, Geosat/GM, ERS-1/GM, Jason-1/GM and Cryosat-2. The "GM" phrase defines that an altimetry mission is a geodetic mission or in other word a non-repeat mission. The Geosat/GM is a sun-synchronous mission which orbited earth in the period of March 31 st 1985 to September 30 th 1986, with the sampling frequency of 10 Hz. The ERS-1/GM was gathered from the European Space Agency (ESA), which consisted of two 168 days data cycles with a sampling frequency of 20 Hz. The Jason-1/GM data was downloaded from AVISO data center or Jet Propulsion Laboratory (JPL) alternatively. This mission is a drifted orbit of Jason-1 repeat mission, with a repeat period of 406 days and sub cycles of 3.9, 10.9, 47.5, and 179.5 days [10]. The Cryosat-2 data used are the level 1B data with a sampling frequency of 20Hz [11]. Because of the different sampling rates of each satellite mission, the SSH datasets should be resampled into 2Hz. Besides, the datasets also needed to be freed from outlier by applying a Gaussian filter with a 10 km window size. NAO99 Ocean Tide Model Before taken into further process, the altimetry SSH data had to be corrected from the tidal effect. This effect was removed by using a global ocean tide model. In this research we used the NAO99b ocean tide model. This model has a spatial resolution of 0.5⁰ (global version) and 5' (Japan regional version) and predicts the 16 major tide constituents [12]. RIO05 Mean Dynamic Topography Model The RIO05 MDT model was generated from the combination of altimetry data (T/P and ERS1), in-situ measurements (buoys velocities, XBT, CTD) from 1993-2002, and refers to EIGEN-GRACE03S geoid model on a 30' x 30' regular grid [13]. The standard deviation of this model is 0.713 m according to Marchenko et. al [14]. Waveform Retracking After preparing all the data needed (see sub-section 2.1), some corrections would be performed. The altimetry range measurement had to be corrected from the coastal and shallow water effects or well known as the waveform retracking. Waveform retracking is an improvement method of altimetry range measurement by determining the location of epoch on the leading edge. This epoch, commonly called as tracking gate, is a pre-defined value which computed by the satellite data provider. However, in some cases, especially in near coastal and shallow waters, the given value is not representing the actual state of water surface. Several retracking methods, either statistic or deterministic have been developed. In this research, the applied retracker is subwaveform threshold. This retracker first identifies the leading edge based on subwaveform correlation analysis, and then computes the retracking gate by using a threshold method [6]. All altimetry data but the Cryosat-2 had been retracked by those aforementioned retrackers. The Cryosat-2 altimeter uses the latest technique of delay/Doppler [15] which improves the determination of altimeter range near coastal area [9]. Gravity Anomaly Determination The remove-compute-restore (RCR) technique was applied to determine the gravity anomalies. This technique is based on the separation of gravity anomaly signal into three different spectral components, the long wavelength, the short wavelength and the residual part (see figure 1 and eq. 1). Figure 1 shows that the addition of three different undulation (N) wavelengths gives the most realistic geoid model. 
Each component of N is computed from corresponding gravity anomaly (Δg) by the Stokes integration formula [16]. The geoid determination by using gravity data is called the gravimetric geoid modelling. The first step in RCR technique is computing the along-track gradient. The second is removing the reference gradient from global gravity model and followed by removing outliers. The outlier removal was done by applying τ test as mentioned in the theory of Pope [17]. The next step is computing the residual gravity anomalies. There were two different methods in this computation step, Least-Square Collocation (LSC) and Inverse Vening-Meinesz (IVM). The last step is restoring gravity anomalies from the global gravity model. Eight different models were made and would be assessed with the in-situ data. Those models were 7 LSCs and an IVM. The LSC models were differed from each other by the type and size of the filter applied. Filters were applied to overcome the remains of high frequency noise. In addition, we also assessed Sandwell v23.1 altimeter-only global gravity model. This model has RMS error of 2.6 mgal when compared to National Geospatial Intelligence Agency (NGA) shipborne data [18]. The list of models are shown in table 1. Furthermore, the defined grid-size of the models is 1' x 1' due to the cross-track spacing of the altimetry missions are about 1-2 km. Least-square collocation (LSC) The LSC method computes final gravity anomaly by summing the residual gravity anomalies (∆g res ) and those computed from a reference gravity field (Δgref) [9]. Beside SSHs from altimetry, a global geopotential model was needed as the reference gravity field. The selected reference was the EGM08 to degree and order 2190. Along with the SSHs, EGM08 gave the value of residual geoid gradient (ε res ). The computed residual geoid gradients were used to calculate the residual gravity anomalies by equation 2 [19]: with ∆g res = vectors of residual gravity anomalies ε res = residual geoid gradients ∆ = gravity anomaly-residual geoid gradients covariance matrices = residual geoid gradient-residual geoid gradient covariance matrices ∆ ∆ = gravity anomaly-gravity anomaly covariance matrices for = noise of residual geoid gradient = noise of residual gravity anomaly Δgref = reference gravity anomaly The final gravity anomaly was obtained by adding the residual gravity anomalies into those computed from EGM08 [9]. A brief explanation of this LSC method can be found in [9] and [20]. Inverse Vening-Meinesz (IVM) This method computes gravity anomaly from deflections of the vertical [21]. The deflections of the vertical (DOV) is defined as the spatial angle between the normal gravity vector on the reference ellipsoid and the actual gravity vector on the geoid [22]. The first three steps in this technique are similar as in LSC; get along-track gradient, remove reference gradient of global gravity model, and remove outliers. However, an intermediate step is needed in this technique, which is the gridding of north and east gradient components as written in equation 3. Vector l is an observation vector contains the geoid gradient derived from altimetry SSHs and EGM08 as the reference gravity field (εres, see sub-section 3.3 ). The next step is converting DOV to gravity anomaly by using the IVM formula, as written in equation 4. 
where = geoidal height at p ∆ = gravity anomaly at p = mean Earth radius = normal gravity C'H' = Kernel function = north and east component of DOV at q = azimuth from q to p = surface element = ϕqλq = latitude and longitude of q Because of the singularity of the Kernel function C' and H' at zero spherical distance, the innermost zone effects on geoid and gravity anomaly must be taken into account and were computed by equation 5: When using the RCR procedure, the error in using spherical approximations should be very small compared to data noise. Consider the formula of error-free LSC in the case of using ellipsoidal correction [23] as written in equation 6: e 2 is the squared eccentricity of a reference ellipsoid, which is about 0.006694 for the GRS80 ellipsoid. If the largest element of DOV in l l was assumed as 100 µrad, then the largest element in e 2 l l would be 0.66 µrad. This value was far smaller than the noise of DOV from the multi-mission satellite altimetry. In conclusion, the value of ellipsoidal correction could be neglected. NGDC Marine Gravity Data The NGDC datasets were used as the assessment values for the modelled gravity anomalies. The datasets consist of several data types such as gravity, magnetics and bathymetry, but in this research we only utilized the gravity data. Over the research area, the gravity data was observed by many shipborne survey missions dated back from 1963 to 1991. The data were available in MGD77T format [24]. As the models were in the grid format, an interpolation should be made to the NGDC data. We used a two dimensional polynomial interpolation to define an NGDC gravity value on the desired cell grid. This value then subtracted from the models corresponding value to calculate the difference. Results and Analysis In figure 2, Kalsul 1 model, it can be seen that the spatial distribution of gravity anomalies over the area is similar as in figure 3 (Kalsul 8) and figure 4 (Sandwell). All figures show gravity anomalies ranged from about -300 to 300 mgal with a pattern of negative anomalies on the waters east-side of Sulawesi Island (-2⁰ -0⁰ N and 122⁰ -127⁰ E). This pattern represents an area of deep waters. To get a better understanding of the models quality, the gravity anomaly differences were plotted on the NGDC shipborne tracks ( figure 5, 6 and 7). Figure 5 shows the differences of the non-filtered LSC model (Kalsul 1), while figure 6 and 7 display the differences of IVM (Kalsul 8) and Sandwell models respectively. Kalsul 1 and Kalsul 8 show a similar pattern with the biggest difference occurs in around latitude 6⁰ N. Furthermore, the Kalsul 1 model give a better accuracy than Kalsul 8 by 1.62 mgal. The application of Gaussian filter with a 10 km window size (Kalsul 3) had improved the accuracy of LSC model by 0.2% (Table 2). On the other side, the application of median filter with the same window size only gave 0.17% improvement. Overall, the LSC models outperformed the IVM one. Compared to Sandwell, our models are better in term of RMS error by almost 6 times. This result is not only consistent with the previous research by Hsiao et al [9], but also indicate that the Sandwell model is not accurate enough in the research area. However, the Sandwell model has a better mean deviation (0.411 mgal) than the Kalsul 8 (0.581 mgal). In term of mean deviation, the best computed model is Kalsul 7, with mean deviation of -0.019 mgal, while Kalsul 3 is the best model in term of RMS error (15.042 mgal) and STD of error (15.041 mgal). 
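A minimal sketch of the assessment step is given below. Note that the paper interpolates the NGDC values onto the model grid cells with a two-dimensional polynomial, whereas this illustration takes the simpler (and not equivalent) route of bilinearly interpolating the gridded model at the shipborne points before computing the mean deviation, standard deviation, and RMS error; the array names are assumptions.

```python
# Minimal sketch of the model-assessment step (assumed arrays; the paper used a
# 2-D polynomial interpolation of NGDC values onto grid cells, while this sketch
# bilinearly interpolates the gridded model at the shipborne points instead).
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def assess(lat_grid, lon_grid, grav_model, ship_lat, ship_lon, ship_grav):
    """Return mean deviation, STD, and RMS error of a gridded anomaly model (mgal)
    against shipborne observations.
    lat_grid, lon_grid : 1-D axes of the 1' x 1' grid (degrees)
    grav_model         : 2-D modelled anomalies, shape (len(lat_grid), len(lon_grid))
    ship_lat, ship_lon, ship_grav : 1-D arrays of NGDC shipborne observations
    """
    interp = RegularGridInterpolator((lat_grid, lon_grid), grav_model,
                                     bounds_error=False, fill_value=np.nan)
    model_at_ship = interp(np.column_stack([ship_lat, ship_lon]))
    diff = model_at_ship - ship_grav
    diff = diff[~np.isnan(diff)]          # drop points outside the model grid
    return {"mean": diff.mean(),
            "std": diff.std(ddof=1),
            "rms": np.sqrt(np.mean(diff ** 2))}
```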
The complete results shown in table 2 below. Figure 5. a) Differences between LSC (Kalsul 1) and NGDC and b) its latitudal profile Figure 5a and 6a show green lines all over the research area. The green lines indicate near zero difference to the NGDC marine gravity data. However, in the northern part of Sulawesi waters, a pattern with blue color appeared, this is an indication of negative differences with magnitude -50 to -75 mgal. The latitudal profiles of the anomaly differences show the existence of major errors ( figure 5b and 6b), especially in latitude 2⁰ -1⁰ S, 0⁰ -1⁰ N, and 1.5⁰ -6⁰ N. Those phenomena appear because of some gross errors in at least three NGDC datasets. On the other hand, figure 7 does not show a similar pattern. Figure 7a shows that the difference between Sandwell 23.1 and in-situ data vary all over the area, indicated by multi-colored lines. However, there is a small region in the middle of research area which gives errors ±250 mgal, shown in blue/purple lines. Those errors possibly caused by high frequency noises affected by the land influence on that narrow waters region. Figure 7b shows major errors in almost all latitude. This is an indication that Sandwell v 23.1 model still has some systematical errors in the research area. The most probable systematical errors source is aliasing which occurred when cutting the global model into smaller regional parts. Conclusion This research has produced eight gravity anomaly models with mean deviation ranged from -0.019 to 0.581 mgal, and RMS error ranged from 15.042 to 16.704 mgal. Those values were obtained by using NGDC marine gravity data as the assessment values. The best model in term of RMS error is Kalsul 3, an LSC gravity anomaly model with 10 km Gaussian filter applied, while the least accurate one is Kalsul 8, an IVM gravity anomaly model with no filter applied. Kalsul 3 might not be the most optimum model, yet it is suitable enough to determine the regional geoid model of Indonesia, especially on waters area. To improve the models quality, a list of supplementary data could substitute the data used, such as Indonesian tide gauges assimilated tidal model as an alternative to NAO99, CNES-CLS09 as an alternative to RIO05 MDT model, and the incoming EGM as the substitution to EGM08 geoid reference. The use of most recent altimetry missions, such as ICESat and Saral-Altika, could also enhance the quality of the model, as it densifies the altimetry footprints over the research area.
Optimal Investment in Prevention and Recovery for Mitigating Epidemic Risks Abstract The worldwide healthcare and economic crisis caused by the COVID‐19 pandemic highlights the need for a deeper understanding of investing in the mitigation of epidemic risks. To address this, we built a mathematical model to optimize investments into two types of measures for mitigating the risks of epidemic propagation: prevention/containment measures and treatment/recovery measures. The new model explicitly accounts for the characteristics of networks of individuals, as a critical element of epidemic propagation. Subsequent analysis shows that, to combat an epidemic that can cause significant negative impact, optimal investment in either category increases with a higher level of connectivity and intrinsic loss, but it is limited to a fraction of that total potential loss. However, when a fixed and limited mitigation investment is to be apportioned among the two types of measures, the optimal proportion of investment for prevention and containment increases when the investment limit goes up, and when the network connectivity decreases. Our results are consistent with existing studies and can be used to properly interpret what happened in past pandemics as well as to shed light on future and ongoing events such as COVID‐19. INTRODUCTION Within a few short months, COVID-19 has ravaged the world, resulting in millions of people infected and, among them, deaths in the hundreds of thousands. Countries and regions tried many different measures to mitigate and control the disease with varying levels of success-while some, most notably New Zealand, Taiwan, and Vietnam, showed significant success, others instead suffered badly. The devastating social and economic impacts of the pandemic crisis in these harder hit countries highlight the importance of investing in the containment and treatment of contagious diseases. However, finding the optimal combination of investments in epidemic prevention and treatment is a complex problem, due to such factors as the uncertainty associated with a novel disease, the nonlinear nature of an epidemic, the nonadditive effects of interventions, and budgetary limitations (Alistar, Long, Brandeau, & Beck, 2014;Coşgun & Büyüktahtakın, 2018). The ongoing impacts of COVID-19, along with worldwide trends of increasing globalization, population mobility, and interconnectivity, emphasize the need for critical understanding of how to invest in mitigating the risks of future epidemics (Saker, Lee, Cannito, Gilmore, & Campbell-Lendrum, 2004). Understanding how an epidemic spreads is vital to containing it. For over a century, researchers have used mathematical models to examine how communicable diseases propagate through a population. The susceptible-infected-removed (SIR) model, the classical compartmental model in mathematical epidemiology developed by Kermack and McKendrick (1927), divides the population into three classes (compartments): individuals who have not yet been infected (susceptible), those who are currently infected, and those who are now immune or have died. Since then, a number of modifications and improvements have been made to the original SIR model, motivated by the growing number of outbreaks in recent years such as the 2002−2004 SARS (severe acute respiratory syndrome) outbreak and the 2009 H1N1 pandemic. Brauer (2017) provides a comprehensive review of the available models. 
One of the important additions to the basic SIR model, among others, is modeling the transmission of infection as a stochastic process, which captures the fact that in most outbreaks a few infected individuals usually spread the infection to many people, while most other infected individuals either do not spread it or only spread it to a few others (Brauer, 2008;Riley et al., 2003). This highlights the importance of modeling the connections between individuals in the context of epidemic propagation. Network analysis, because of its ability to capture the realistic settings of population structure, has received significant attention for modeling epidemic propagation during the past decades. In a population network, individuals are represented by vertices (nodes) and connections/contacts between individuals are given by edges. An infection thus can be transmitted from one vertex to another through edges. One of the most common network types in the literature is a random graph, where nodes are added to the network by randomly connecting to any other existing nodes with a uniform probability (Barabási & Albert, 1999). Random graph networks have been used to model epidemic propagation in different settings; for example, SIR epidemics by Volz (2008) and Miller (2011) and the susceptible-infectious-susceptible spread by Parshani, Carmi, and Havlin (2010) and Shang (2012). Despite their wide popularity, however, random graph models are unlikely to represent population networks in reality. Instead, "popular" individuals (i.e., those with already large number of connections) are more likely to receive more connections, and those with few connections are likely to remain less connected. This "preferential attachment," where a new node is likely to connect to the nodes with an already large number of connections, will result in a network composed of a majority of nodes that have limited connections and a small number of "hubs" with large number of links. Such a network topology, first proposed by physicists Barabási and Albert (1999), results in a "scale-free network" and has been successfully used to simulate numerous industry networks such as the Internet (Kim & Altmann, 2012) and the power grid (Chassin & Posse, 2005), as well as social networks such as journal citations (McGuigan, 2018) and academic collaborations (Dorogovtsev & Mendes, 2002). In particular, studies in human sexual contact (Liljeros, Edling, Nunes Amaral, Stanley, & Åberg, 2001;Schneeberger et al., 2004) and the existence of "supercarriers" in epidemics (Brauer, 2008;Riley et al., 2003) like COVID-19 (Hamner et al., 2020;Jones & Maxouris, 2020;Kwon, 2020;Stieg, 2020) show that this network topology applies well in the case of epidemic propagation. Therefore, in this study, we adopt the structure of a scale-free network, where network growth is based on preferential attachment, to evaluate investments for mitigating epidemic risks. Epidemic propagation models have been used extensively in evaluating the effectiveness of different types of interventions and investments. According to the World Health Organization (WHO), epidemics happen in four phases: emergence, localized transmission, amplification, and reduced transmission (WHO, 2018). Governments can intervene and mitigate the impact by preventing further propagations of epidemics (during the emergence and localized transmission phases) and/or reducing propagation through treatment and recovery (T & R; during the amplification and reduced transmission phases). 
However, finding an optimal set of interventions with budgetary constraints and different community structures is challenging. Several studies have used compartmental models to study the optimal investment allocation problem (Mylius, Hagenaars, Lugnér, & Wallinga, 2008;Ren, Ordóñez, & Wu, 2013). Among them, Brandeau, Zaric, and Richter (2003) considered optimal resource allocation in a basic susceptible and infected epidemic model in several distinct populations. Alistar et al. (2014) studied the optimal prevention and treatment resources allocation in HIV epidemic control, using a susceptible-infected-treated model, to minimize the reproduction number R 0 . However, these compartmental models do not consider the effect of the network structure of the population, a significant factor of disease transmission. Another stream of research focuses on the evaluation of a limited set of strategies to counter epidemics in different settings using network analysis and/or simulation (Zhang, Zhong, Gao, & Li, 2018). Among such efforts, Fu, Small, Walker, and Zhang (2008) evaluated epidemic thresholds using network analysis with infectivity and immunization and found that the targeted immunization scheme is more effective than the proportional scheme. Their results show that without an effective vaccine or treatment, lockdown of infected sections of the population is the best option to contain the epidemic. Siettos, Anastassopoulou, Russo, Grigoras, and Mylonakis (2016) used a small-world network model with an agent-based simulation to evaluate several policies for the containment of the Ebola virus disease. More recently, Nicolaides, Avraam, Cueto-Felgueroso, González, and Juanes (2020) used a human mobility model and simulation to study the effectiveness of hand-hygiene, recommended by WHO, in mitigating flu-like virus transmission through the air transportation networks. They showed that increasing hand-washing rate in influential locations can reduce the risk of pandemic by around 40%. The COVID-19 crisis highlights the need to understand the measures that can be most effective against an epidemic, given the size of the impact, characteristics of epidemic transmission, the (network) structure of a population, and the budgetary and economic constraints. The losses are staggering: the pandemic and the associated fiscal actions and lockdowns have resulted in $11.7 trillion, or close to 12% of global GDP, of negative economic impacts as of September 2020 (International Monetary Fund, 2020), and Cutler and Summers (2020) estimated $16 trillion of loss in the United States alone if the disease runs its course. It is critical for decisionmakers in governments and other organization to respond with measures that save lives, support vulnerable people and businesses, minimize the fallout on economic activity, and speed up the recovery (Carlsson-Szlezak, Swartz, & Reeves, 2020;Nagarajan, 2020). The current study, therefore, sets out to identify optimal prevention and treatment investments for mitigating epidemic propagation through a population network. In support of this, we introduce a new objective function to minimize risk due to an epidemic and adopt the preferential-attachment network to model the population network for the purpose of disease transmission. The investment functions that we incorporate are in general forms and can represent different types of prevention and treatment strategies. 
The proposed model thus allows for comparing the effectiveness of different kinds of investments in mitigating the risk of epidemic. The results of the study, consistent with studies of prior epidemic events, provide a decision framework for policymakers facing a pending pandemic. RESEARCH MODEL We model the epidemic propagation based on individual-to-individual transmission within a "network of contact," which can be an organization, a community, a city, a metropolitan area, a country, or something even larger. Such a network of contact is assumed to be closed at the time of an examination; that is, all individuals can have contact with others in the same network but not with those outside of the network. The disease propagates throughout the network via individual-to-individual transmissions, and those who contract it may be asymptomatic, need rest before recovery, require treatment to recover, or succumb to the disease. The total loss, L, due to an epidemic includes all possible consequences due to individuals contracting the disease, such as the cost of care, lost productivity, loss of life, etc. Adopting a commonly used risk measure (Aven, 2010;Boholm, 2019), we calculate the risk Z of an epidemic as the product of the probability p of individuals contracting the disease and the resulting loss L: At the network level, Equation (1) captures the sum of the risks of the entire population in the network. We use L 0 and p 0 to denote the intrinsic loss and infection probability observed in an epidemic without any intervention or mitigation, and Z 0 = p 0 L 0 is thus the associated intrinsic risk of epidemic propagation in the network. Because the objective of any intervention or mitigation is to reduce the negative impact of an epidemic, Z 0 , p 0 , and L 0 represent the maximal risk, infection probability, and potential loss of an epidemic, respectively. To examine how an epidemic can be propagated throughout the network (i.e., how individuals in this network can get infected, or the behavior of p), it is necessary to understand the structure of the network of contacts. Every individual has contact with a certain number of other individuals, and the combination of individuals ("nodes" or "vertices") and contacts ("links" or "edges") form the network. As discussed earlier, the prevalent model for such a network is based on "preferential attachment"individuals with an already large number of connections are more likely to receive more connections, and those with fewer connections are likely to remain less connected-and this results in the "scalefree network" topology, composed of the majority of nodes with limited connections and a small number of "hubs" with large number of links. To find p, the probability of a node in the network being infected by the epidemic, we adopt the derivation of the epidemic spread in such a network (see Appendix 1 for details): (2) Eq. (2) implies that epidemic spreading is determined by three factors: μ, the degree of connectivity of the node in the network; λ, the likelihood that the node may be exposed to the disease from its connections; and β, the susceptibility that the individual may be infected by the disease when exposed to it. To focus on overall network behavior in the discussion below, the same (average) value is used for each parameter across all nodes in the given network. In addition, all three factors are normalized to the interval between 0 and 1. 
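Equation (2) is not legible in this copy. The sketch below therefore uses the form p = β·μ^(1/λ), with connectivity μ = e^(−1/n), which is our reading of the Appendix 1 derivation rather than a quotation of the paper's expression; Equation (1), Z = p·L, is as stated in the text. The numerical values are illustrative only.

```python
import math

def infection_probability(beta, n_connections, lam):
    """Reconstructed reading of Eq. (2): p = beta * mu**(1/lambda),
    with connectivity mu = exp(-1/n) as defined in Appendix 1.
    This is an assumed form, not a quotation of the original equation."""
    mu = math.exp(-1.0 / n_connections)
    return beta * mu ** (1.0 / lam)

def epidemic_risk(beta, n_connections, lam, loss):
    """Risk Z = p * L, as in Equation (1)."""
    return infection_probability(beta, n_connections, lam) * loss

# Illustrative values only.
beta, n, loss = 0.4, 5, 1_000_000.0
p0 = infection_probability(beta, n, lam=1.0)   # no intervention: lambda = 1, so p0 = beta*mu
print(f"intrinsic infection probability p0 = {p0:.3f}")
print(f"intrinsic risk Z0 = {epidemic_risk(beta, n, 1.0, loss):,.0f}")
```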
We consider two general categories of risk mitigation measures-prevention/containment and treatment/recovery. Prevention and containment (P&C) measures aim at reducing the rate at which individuals contract and/or spread the disease, while T&R lessen the impact of the disease on those who contract it. In other words, to mitigate the epidemic risk of the network, one can make an investment of S p in P&C-to reduce p, the probability of transmissionor an investment of S L in T&R-to reduce L, the potential loss as a result of the disease. Thus, p = p(S p ) and L = L(S L ). (When no investment is made, S p = S L = 0, and we have p 0 = p(S p = 0) and L 0 = L(S L = 0) as the intrinsic infection probability and intrinsic loss, respectively, of the epidemic propagation in the network). The benefit achieved with mitigation investments is then and the task of finding an optimal investment is to maximize W, which itself is a function of p (thus β, μ, and λ) and L. In what follows, we attempt to analyze the optimal investments made by a rational decisionmaker (or by rational decisionmakers collectively) in P&C versus in T&R to mitigate the risk of epidemic propagation in a network of individuals. Investing in P&C We first examine the case when an investment is only made in P&C (i.e., S p > 0 and S L = 0). Note that the P&C investment S p affects (i.e., reduces) the disease exposure rate λ in Eq. (2) but not the susceptibility β, which reflects the nature of the disease, or the connectivity μ, which is determined by the topology of the network. As such, we can rewrite Eq. (2) as follows (see Appendix 1 for details): where ε p is a parameter describing the effectiveness of investment S p . And therefore, where we use L 0 since S L = 0. Before finding the optimal solution for the P&C investment, it is necessary to verify the boundary conditions of Eq. (6). First, the initial condition has to hold that the benefit increases when the very first investment is made (otherwise it makes no sense to even invest in such measures). In other words, Inserting Eq. (6) into Eq. (7) and rearranging the terms, we have This initial condition Eq. (8) will be revisited later. Additionally, in order to find the maximum W, it is necessary that the second derivative of W with respect to S p is negative. To check, we note that is indeed negative, as the terms inside the parenthesis are both positive. Therefore, W will yield its maximum when we optimize S p by setting: Rearranging the terms and solving for S * p , the optimal investment in P&C, we have Since S * p is always positive, the argument in the logarithm has to be greater than 1. This yields the same equation as the boundary condition Eq. (8). Rearranging the terms, we get Therefore, we have the following proposition. Proposition 1. Investment in P&C only makes sense when the intrinsic loss of the epidemic without mitigation is larger than a critical amount L . . This proposition outlines the importance of carefully investigating the nature of an epidemic (i.e., β) and the network characteristics (i.e., μ) to avoid "overinvestment," because not all epidemics are worth protecting against. Unless the intrinsic loss of epidemic is greater than a critical amount L . , one is better off not investing in P&C at all. Note that L . is a decreasing function of the connectivity μ. In other words, the more connected the network is, the lower the threshold is for making P&C investment. 
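Under the same reconstructed spreading form (p(S_p) = β·μ^(1+ε_p·S_p), equivalent to λ = 1/(1+ε_p·S_p)), the P&C-only objective W = Z_0 − p(S_p)·L_0 − S_p can be maximized numerically and compared with the closed-form candidate implied by the first-order condition. This is a sketch under assumed illustrative parameters, not a reproduction of the paper's Equations (6)-(11); note how the optimum collapses to zero below the critical loss of Proposition 1.

```python
import math
import numpy as np

def w_pc(s_p, beta, mu, eps_p, loss0):
    """Benefit of a P&C-only investment, W = Z0 - p(S_p)*L0 - S_p,
    assuming p(S_p) = beta * mu**(1 + eps_p*S_p) (our reconstruction from Appendix 1)."""
    z0 = beta * mu * loss0
    return z0 - beta * mu ** (1.0 + eps_p * s_p) * loss0 - s_p

def optimal_s_p(beta, mu, eps_p, loss0):
    """Closed-form candidate from the first-order condition; returns 0 when the
    intrinsic loss is below the critical threshold of Proposition 1."""
    c = beta * mu * loss0 * eps_p * (-math.log(mu))   # c = Z0 * eps_p * (-ln mu)
    return max(0.0, math.log(c) / (eps_p * (-math.log(mu)))) if c > 0 else 0.0

beta, mu, eps_p = 0.4, 0.8, 1e-4          # illustrative values only
for loss0 in (1e5, 1e6, 1e7):
    grid = np.linspace(0, 2e5, 200_001)
    s_num = grid[np.argmax(w_pc(grid, beta, mu, eps_p, loss0))]
    print(f"L0={loss0:>12,.0f}  numeric S_p*={s_num:>10,.1f}  "
          f"closed form={optimal_s_p(beta, mu, eps_p, loss0):>10,.1f}")
```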
We examine first the behavior of optimal P&C investment in relationship with the intrinsic loss of epidemic L 0 . To do so, we differentiate S * p in (11) with respect to L 0 : and Therefore, we have the following proposition. Proposition 2. The optimal investment in P&C is a strictly increasing concave function of the intrinsic loss of the epidemic without mitigation. It is not hard to see why Proposition 2 holds in practice. When the intrinsic loss of the epidemic is high, one would likely attempt a higher level of risk mitigation with investment in P&C. However, the pace of increase in investment cannot keep up with the increasing level of potential intrinsic loss. Fig. 1 illustrates how S * p varies with respect to L 0 . Next, we examine the relationship between the optimal P&C investment and the connectivity, μ. Optimal prevention and containment investment with respect to potential loss (S p * and L 0 scaled to some maximum loss value). Taking the derivative with respect to μ in Eq. (11) and rearranging the terms, we get An examination of this result shows that when L 0 is sufficiently large (greater than or equal to e(βε p (−lnμ)) −1 ), Eq. (15) is always positive, and S * p increases with μ. However, for a small L 0 value, the numerator of Eq. (15) turns negative for high values of μ and eventually drives S * p to 0 when μ is large enough (Fig. 2). Therefore, we also have the following proposition. Proposition 3. The optimal P&C investment increases with the average connectivity of the network when the intrinsic loss of the epidemic, in the absence of mitigation, is high. However, if this intrinsic loss is small, it is better to stop investing in P&C when the connectivity becomes too high. Proposition 3 implies that, when the potential epidemic intrinsic loss is high, one should invest more in P&C when the individuals in the network are more connected (thus more exposed to possible transmission). This effectively means that the highly connected individuals (the "hubs") should invest more in prevention than the sparsely connected nodes, according to this proposition. Assuming, however, that the potential loss faced by each individual node is similar for all nodes, then it is likely that the hubs will instead invest less than the optimal amount for their level of connectivity, thus jeopardizing the safety of the whole network. This moral hazard issue will be further discussed in Section 4.2. Finally, we examine the property of S * p by rewriting Eq. (11) as where Z 0 = βμL 0 , and c ≡ Z 0 ε p (−lnμ). To find the maximum of S * p , we differentiate Eq. (16) with respect to c and set it to 0: (Eq. (16) does yield a maximum of S * p since ∂ 2 S * p ∂c 2 < 0.) So, when 1 − lnc = 0, or c = e (= 2.718…, the exponential constant), we have the maximum of S * p : Since e −1 ≈ 0.368, we have the following proposition. Proposition 4. The optimal P&C investment to mitigate epidemic propagation risks will never exceed 0.368·Z 0 , the intrinsic epidemic risk without any mitigation measures. Proposition 4 places an upper limit on the optimal P&C investment. In other words, any investment higher than 0.368·Z 0 is deemed unjustifiable. Note that this upper limit applies to all values of potential intrinsic loss L 0 and connectivity μ, since it is derived independently of and in parallel with Propositions 2 and 3. This important result will be further discussed in Section 5. Investing in T&R Instead of focusing on P&C, one can instead invest in T&R to mitigate the risk of epidemic propagation. 
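Before the T&R case is developed below, Proposition 4's 0.368·Z_0 cap can be checked numerically under the same reconstructed functional form. The parameter sweep is arbitrary and is only meant to illustrate that the ratio S_p*/Z_0 = ln(c)/c never exceeds 1/e.

```python
import itertools
import math

def s_p_star_over_z0(beta, mu, eps_p, loss0):
    """Ratio of the closed-form optimal P&C investment to the intrinsic risk Z0 = beta*mu*L0,
    under the reconstructed spreading form used above; 0 below the investment threshold."""
    z0 = beta * mu * loss0
    c = z0 * eps_p * (-math.log(mu))
    if c <= 1.0:
        return 0.0
    return math.log(c) / c        # equals S_p* / Z0

ratios = [
    s_p_star_over_z0(b, m, e, l)
    for b, m, e, l in itertools.product(
        (0.1, 0.5, 0.9),          # susceptibility beta
        (0.2, 0.6, 0.95),         # connectivity mu
        (1e-5, 1e-3, 1e-1),       # investment effectiveness eps_p
        (1e3, 1e6, 1e9),          # intrinsic loss L0
    )
]
print(f"max S_p*/Z0 over the sweep = {max(ratios):.4f}  (bound 1/e = {1 / math.e:.4f})")
```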
The T&R investment will reduce the potential loss, L, due to the spread of the epidemic, although the chance of getting infected, p, stays constant. Studies have attempted to identify the proper form for describing the impact of investing in reducing the total loss due to a disaster or catastrophic event. Among them, Mackenzie and Zobel (2016) propose four different deterministic reduction functions and find that the logarithmic allocation function yields the best fit with data from actual disaster recovery. In this study, therefore, we adopt this best match function to describe the effect of reducing the potential loss of epidemic L by making T&R investment S L as follows: where ε L is a parameter describing the effectiveness of investment S L , and k ∈ (0, 1) is a normalization such that kln(1 + ε L S L ) is less than 1 (Mackenzie & Zobel, 2016 Similar to the P&C case, we want to ensure that the initial condition holds such that the benefit increases when the very first investment is made. In other words, Applying this condition to Eq. (20) and rearranging the terms, we get In other words, one will not invest in T&R unless the intrinsic loss is higher than a critical value Ḽ. To look for the optimal investment in T&R, we dif-ferentiate Eq. (20) with respect to S L and set it to 0: Note that this yields the maximum W since Solving for S L , we have the optimal T&R investment S * L as: We first note that S * L is a linear function of both L 0 and μ. That is, the optimal T&R investment increases linearly with both intrinsic loss and connectivity. To further investigate the property of S * L , we rewrite Eq. (24) using Eq. (22) as follows: Since all β, μ, and k are between 0 and 1, the product βμk ∈ (0, 1), and we have the following proposition. Proposition 5. Investment in T&R should only be made if the potential loss of an epidemic without mitigation is larger than a critical value Ḽ. In addition, the optimal T&R investment is a fraction of the difference between the intrinsic loss and the critical loss value Ḽ. Proposition 5 provides important guidance for investing in T&R. First of all, only those epidemics with the intrinsic loss higher than some critical value are worth T&R investments. This amount, Ḽ as noted in Eq. (25), is inversely proportional to the connectivity of nodes in the network. When the intrinsic loss of the epidemic is deemed higher than Ḽ, the optimal investment in T&R is still only a fraction of the difference between the intrinsic loss and Ḽ. It is thus worthwhile to carefully examine all the relevant factors-connectivity in the network, effectiveness of the T&R effort, and the likelihood of any members of the network catching the disease-to determine the best level of recovery and treatment investment. Allocation of Risk Mitigation Investments Among Prevention/Containment and Treatment/Recovery To mitigate epidemic risks, one may decide to invest simultaneously in both P&C and T&R. In reality, there is always a limit to which mitigation investments can be made. Such a limit not only includes any formal budgets to spend on curbing epidemics (as direct investments) but also the "acceptable" economic impact caused by mitigation policies. In this section, we examine the optimal allocation to these two investments in order to maximize the benefits due to such a limit. We assume that the limit of total mitigation investment due to budgetary and economic constraints is S, which will be split between P&C investment and T&R investment, that is, S = S p + S L . 
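Before working through the split of a fixed budget, the T&R-only case can be sketched in the same way, using the logarithmic loss-reduction function L(S_L) = L_0·(1 − k·ln(1+ε_L·S_L)) described above. The closed form below follows from the first-order condition under that assumed form and from the critical loss of Proposition 5; parameter values are illustrative.

```python
import numpy as np

def optimal_s_l(beta, mu, k, eps_l, loss0):
    """Closed-form optimal T&R investment under L(S_L) = L0*(1 - k*ln(1 + eps_l*S_L));
    a reconstruction, not a quotation. Returns 0 below the critical loss of Proposition 5."""
    critical = 1.0 / (beta * mu * k * eps_l)
    if loss0 <= critical:
        return 0.0
    return beta * mu * k * (loss0 - critical)   # a fraction beta*mu*k of (L0 - critical)

def w_tr(s_l, beta, mu, k, eps_l, loss0):
    """Benefit W = Z0 - p0*L(S_L) - S_L with p0 = beta*mu (no P&C investment)."""
    p0, z0 = beta * mu, beta * mu * loss0
    return z0 - p0 * loss0 * (1.0 - k * np.log(1.0 + eps_l * s_l)) - s_l

beta, mu, k, eps_l = 0.4, 0.8, 0.2, 1e-3      # illustrative values only
for loss0 in (1e4, 1e5, 1e6):
    grid = np.linspace(0, 1e5, 100_001)
    s_num = grid[np.argmax(w_tr(grid, beta, mu, k, eps_l, loss0))]
    print(f"L0={loss0:>10,.0f}  numeric S_L*={s_num:>10,.1f}  "
          f"closed form={optimal_s_l(beta, mu, k, eps_l, loss0):>10,.1f}")
```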
We use s p = S p /S and s l = S L /S to denote the proportions of investment allocated to P&C and T&R, respectively. Then, the total benefit W can be expressed as (with s l = 1 − s p ): To find the optimal allocation ratio s p (and thus s l ), we take the derivative of W with respect to s p and set it to 0. After collecting terms, we have: Note that L 0 is not in Eq. (27). That is, s p (and thus s l ) does not depend on L 0 . This is a somewhat surprising result, as one might expect to see impact of intrinsic loss on the allocation of total mitigation investment limited by budgetary and economic constraints between these two investments. But it is not the case here. A careful examination of Eq. (27) reveals that no closed-form solution of s p is possible. To investigate how the allocation varies with key parameters such as the total investment limit S and the connectivity μ, we can use the implicit function technique to derive the relationships without knowing the closedform solution of s p . Taking the derivative of s p with respect to μ using this technique and collecting terms, we get Examining the sign of Eq. (28), we note that when |lnμ|is larger than 1 (i.e., n > 1), ∂s p ∂μ is always negative. Therefore, we have the following proposition (see Appendix 2 for proof). Proposition 6. With a fixed limit of total mitigation investment due to budgetary and economic constraints, the intrinsic loss of the epidemic does not affect the allocation between prevention/containment and treatment/recovery. In addition, if an average individual in the network connects to more than one other member, the more connections it has, the less portion of the limit should be allocated to P&C investment. Since it is more than likely that an average individual would have contact with more than one other individual in the same network, Proposition 6 states that the allocation to P&C investment given a fixed limit of total mitigation investment will decline with increasing connectedness. This result seems reasonable, considering that reducing the likelihood of transmission becomes more difficult and costly when there are a large number of connections to defend against, and investing in T&R can be a more effective approach when P&C is hard to achieve. We now examine how allocation varies with a fixed limit on the total mitigation investment. To do so, we take the derivative of s p with respect to S. After manipulation, we arrive at the following relationships: These relationships, shown in Fig. 3, give the following proposition (the proof is in Appendix 3). Proposition 7. When the limit of total mitigation investment due to budgetary and economic constraints is small, one should focus on T&R investment. As the constraint relaxes, a higher portion should be allocated to P&C, while allocation to T&R decreases at a rate inversely proportional to the increasing limit of mitigation investment. Proposition 7 states an important principle in allocating investment to mitigate epidemic among P&C and T&R. When the total investment limit is small, it would be difficult and ineffective to defend against the likelihood of transmission via all the connections, and it is better to focus on reducing the impact by investing in T&R programs. The higher the limit is, however, the more realistic and economical it gets to effectively target P&C. 
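(A numerical check of the allocation behavior in Propositions 6 and 7 is sketched here, under the same reconstructed functional forms and illustrative parameters used above: a grid search over the split s_p shows that the optimal proportion does not move with the intrinsic loss L_0 but does shift toward prevention as the total investment limit S grows. The exact split values depend entirely on the assumed parameters.)

```python
import numpy as np

def total_benefit(s_p, S, beta, mu, eps_p, eps_l, k, loss0):
    """Combined benefit when a fixed budget S is split: s_p to P&C, (1 - s_p) to T&R.
    Uses the same reconstructed forms as the earlier sketches."""
    p = beta * mu ** (1.0 + eps_p * s_p * S)
    loss = loss0 * (1.0 - k * np.log(1.0 + eps_l * (1.0 - s_p) * S))
    return beta * mu * loss0 - p * loss - S

def best_split(S, loss0, beta=0.4, mu=0.8, eps_p=1.5e-4, eps_l=1e-3, k=0.2):
    grid = np.linspace(0.0, 1.0, 10_001)
    return grid[np.argmax(total_benefit(grid, S, beta, mu, eps_p, eps_l, k, loss0))]

# Proposition 6: the optimal split does not depend on the intrinsic loss L0.
for loss0 in (1e6, 1e7, 1e8):
    print(f"L0={loss0:>12,.0f}  s_p*={best_split(S=5e4, loss0=loss0):.3f}")

# Proposition 7: a larger investment limit shifts the split toward prevention.
for S in (1e4, 5e4, 1.2e5):
    print(f"S={S:>9,.0f}  s_p*={best_split(S=S, loss0=1e6):.3f}")
```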
And when the ability to invest in mitigation is very high, it would make sense to focus on containing the epidemic and preventing transmission altogether to reduce the number of individuals infected (thus lowering the overall loss). Interpretation of Modeling Results We reiterate the assumptions of several key factors in this study. The intrinsic loss L 0 is the total (economic) loss associated with an epidemic in a population network (city, country, community, etc.) if the disease spreads naturally (i.e., without any mitigation), including such items as loss of human lives, cost to the healthcare system to test and treat the infected, productivity lost due to infection (patients, healthcare providers, etc.), and opportunity cost due to providers' (such as hospitals) inability to treat other patients. Although some of these impacts may be difficult to measure or subject to ethical debate (such as the use of statistical value of human lives), they are real and assumed to be quantifiable in this study. As an example, Cutler and Summers (2020) estimated the total loss due to COVID-19 at more than $16 trillion in the United States. On the other hand, mitigation investments S p and S L are made to prevent the spread or treat the disease. Some investments are direct, such as those for vaccines, therapeutics, and hospital capacity. There are also indirect investments incurred due to the consequences of mitigation measures. For instance, social distancing policy requires closing or limiting the operations of certain businesses, resulting in economic impact not directly paid for initially. It is with this understanding that we now discuss the propositions developed in this study. Proposition 1 states that investment in P&C only makes sense when the intrinsic loss of the epidemic is larger than a critical amount L . . In other words, when the intrinsic loss is potentially low enough, it is advisable to let the disease run its course. Interestingly, the critical value L . is a decreasing function of μ (see Eq. (15)), implying that a highly connected community has a lower threshold for prevention investment. This makes sense because a disease is much more likely to become an epidemic in highly connected urban hubs (like New York City) or densely populated countries (like Taiwan), presenting in a high intrinsic loss that necessitates mitigation measures. This is also consistent with Proposition 2, which states that the optimal investment in P&C is a strictly increasing concave function of the intrinsic loss of the epidemic. So to prevent and contain a transmittable disease, it is necessary to spend more when the intrinsic loss is higher, but the spending increase cannot keep pace with the potential loss as it becomes very high (see also the discussion of Proposition 4). Proposition 3 states that when the intrinsic loss is high, the optimal investment in P&C increases as the network connectivity increases. The SARS epidemic, the first pandemic of the 21st century when it appeared in 2002, spread through 26 countries, infecting 8,098 people and resulting in 774 deaths. 1 Its potential negative impact (as a result of the high fatality rate and difficulty of treatment) prompted countries such Taiwan, a highly connected island nation of over 23 million people (density of over 1,700 per square mile), to invest significantly in P&C infrastructure to coordinate their public health and medical services across the country for future epidemics, consistent with Proposition 3. 
This investment has been tested and modified through the pandemics of H1N1 (2009)(2010) and Ebola (2014Ebola ( -2016 (Kao, Ko, Guo, Chen, & Chou, 2017) and is proven to be effective in dealing with the current case of COVID-19 (more in Section 4.2). On the other hand, Proposition 3 states that when the intrinsic loss is small, it may be advisable to not invest in P&C in a highly connected community. Prevention of a transmittable disease is always difficult in densely populated or highly connected areas; when the disease is less potent or lethal, it may be better to allow the infection to spread instead of trying to contain it. For instance, before the varicella vaccine was developed for chickenpox, people in some communities intentionally exposed themselves in the hope that they would achieve immunity. Such an approach for less severe diseases is consistent with Proposition 3. 1 https://www.who.int/csr/sars/country/table2004_04_21/en/ Proposition 4 states that the optimal P&C investment to mitigate epidemic propagation risks should not exceed 0.368·Z 0 (Z 0 = βμL 0 , the intrinsic epidemic risk without any mitigation measures). Since both β and μ are less than 1, this upper bound for the optimal investment in P&C is a fraction of (and can be significantly lower than) the intrinsic loss L 0 of an epidemic. Interestingly, Courtemanche, Garuccio, Le, Pinkston, and Yelowitz (2020) indicate that based on evidence from the 1918 influenza pandemic, social distancing measures would reduce the average contact rate by 38%, a number very close to the factor (as the inverse of e, the Euler's number or the natural exponential constant) in this proposition. This relationship between this "limit" of prevention effectiveness and the upper bound for optimal prevention investment is worth further exploration in future research. Proposition 5 addresses investments in T&R. It states that such investments are called for only in those epidemics with potential loss higher than some critical value Ḽ. This may be the case for seasonal flu, especially when the flu vaccine proves to be effective, in that there is little need for specific investment targeting specific treatment for such a disease. This proposition also states that the optimal T&R investment is a fraction of the difference between the intrinsic loss and Ḽ. In a Dutch study, Lugnér and Postma (2009) estimate that the cost-effectiveness cut-off point for investment in stockpiling a combination of antiviral drugs for treatment is an 11% risk of a pandemic during a 30-year period (9% for a single antiviral drug treatment), or 29% (23% for a single antiviral drug treatment) when production loss offsets are not included. (The authors note that there is no official threshold below which a cost-effectiveness ratio is considered acceptable in the Netherlands, possibly due to the ethical difficulty in measuring the value of human lives.) Balicer, Huerta, Davidovitch, and Grotto (2005) evaluate multiple strategies of treatment for the entire population, or those at heightened risk, in Israel and find that investing in antiviral stockpiling is always cost-effective as long as the estimated annual pandemic risk remains greater than one in 80 years. Both cases are consistent with Proposition 5. Propositions 6 and 7 deal with the optimal allocation between both P&C and T&R measures when there is a limit to total investment due to budgetary and economic constraints. 
Proposition 6 states that the allocation between prevention efforts and treatment efforts is not dependent on potential intrinsic loss. Although surprising, this result appears to be consistent with the concept of risk tolerance, where one responds to varying levels of risk by spending more or less on mitigation instead of changing the mitigation strategies. Proposition 6 also states that, in a more connected network, a greater proportion of the limited investment amount should be allocated to treatment and less to prevention. Preventing or containing the spreading of an epidemic in a highly connected network can be difficult and costly, and with a limited investment capability it may be more effective to focus on treatment. Wang, Li, and Guo (2012) discuss the allocation of a fixed budget between vaccines (prevention) and antivirals (treatment) based on the cost and efficacy of the two options. They find that if both options are 100% effective, a higher investment in vaccines is the obvious option. But even in this case, they allocate a certain amount to antivirals to address any possibility of R 0 >1. As the infection rate increases, with greater connectivity, there should be more investment allocated to treatment and less to prevention. This allocation is further supported in cases when there is no availability or low efficacy of prevention measures (such as vaccines against a virus). Finally, Proposition 7 states that if the investment is very limited, it is better to focus primarily on treatment. This can be seen in the typical antiviral stockpiles that many countries maintain, especially when the source of threats is not specifically known in advance. It is also the usual strategy to prepare for responding to potential bio-terrorist attacks. The proposition also states that as the investment limit increases, a higher proportion should be allocated to P&C. After the SARS and H1N1 epidemics, many Asian nations opted to invest significantly over the past two decades to combat future epidemics, and a large proportion was allocated to prevention, consistent with Proposition 7. Xue and Zeng (2019) detail investments by China in a variety of prevention measures including strengthening national emergency response teams, building institutional capacity for national monitoring and epidemiological training, growing vaccine production capacity, and developing international and regional collaborations. It is expected that as countries begin to see epidemics as a national biosecurity threat based on the experience of COVID-19, governments are likely to invest significantly in combating future pandemics and a large portion of such spending would be applied toward P&C. Analysis of COVID-19 Pandemic Faced with the COVID-19 pandemic, countries and communities have rushed to mitigate the risks via a plethora of strategies (see Table I), with varying degree of success. In this section, we examine how the modeling results developed in this study can inform this current case and provide guidelines for the future. Walker et al. (2020) estimate that in the absence of any interventions, COVID-19 would have resulted in 7 billion infections and 40 million deaths (at R 0 = 3) globally in 2020. More recently, Cutler and Summers (2020) put the total loss in the United States due to COVID-19 at more than $16 trillion. Such a potential impact is so enormous that it would be likely exceed any conceivable critical threshold for intervention (Proposition 1). 
Perhaps the most natural reaction to an epidemic breakout is prevention, particularly when investment limit is not an issue (Propositions 2 and 7). Although almost all nations spent heavily in prevention when facing a pandemic as serious as COVID-19, the key difference appears to be not only the amount and but the type and pace of mitigation efforts. As of fall 2020, several countries and regions, most notably Taiwan, South Korea, and Vietnam, have successfully contained COVID-19 in spite of the proximity (both geographically and socially) to the disease epicenter, China. These countries all have densely populated cities and regions, resulting in high connectivity and thus requiring massive investment in prevention (Proposition 3). To be able to afford such a large investment (primarily in isolation, tracing, and tracking systems), it must be made preemptively and over time. For instance, Kao et al. (2017) discuss the extensive and continued investment in the development of the Communicable Disease Control Medical Network (CDCMN), a collaboration of the public health system and the medical system that Taiwan established in 2003 following the SARS outbreak. CDCMN was successfully activated during the H1N1 influenza (2009)(2010) and the Ebola outbreak (2014-2016) and has been effective in addressing the COVID-19 pandemic (Duff-Brown, 2020;Wang, Ng, & Brook, 2020). For countries and regions that do not have mitigation measures in place at the time of an outbreak, the most effective prevention measure is social distancing or even complete lockdown (Chu et al., 2020;Flaxman et al., 2020;Hsiang et al., 2020). Although the "intrinsic cost" of social distancing-that is, to separate people from one another-may appear to be low, the economic impact of executing such lockdowns, as reflected in business closures, layoffs and furloughs, and overall reduced economic output, can be extremely high (Proposition 2), requiring billions or even trillions of dollars in government support (Cassim, Handjiski, Schubert, & Zouaoui, 2020;Humphries, Neilson, & Ulyssea, 2020). In densely populated areas without preventive alert systems, like New York City, prevention is almost impossible without virtually unlimited monetary support (Proposition 3 and Fig. 2). In this case, the investment can perhaps be better made to boost treatment (such as increasing hospital capacity) until the spread is under control (Proposition 7). In general, although prevention may be desirable, highly connected communities or countries with no little prior investment in disease containment should allocate heavily to treatment to reduce the impact of the disease, because prevention may turn out to be too expensive (Proposition 6). In the case of a very limited ability to invest, Proposition 7 recommends focusing more on treatment than prevention, and one such option is to develop herd immunity through natural transmission. Herd immunity occurs when most individuals of a population are immune to an infectious disease and thereby will not spread it, indirectly protecting those who are not immune (D'Souza & Dowdy, 2020). During the COVID-19 pandemic, Sweden, with its broadly liberal society and decentralized healthcare system, has taken on a strategy to protect the elderly and the fragile and to avoid overloading hospitals, while keeping the economy open with minimum curtailment of people's movements and by relying on the individual judgment of people to behave appropriately. 
Such an approach was justified by low population density and high percentage of single dwelling (therefore low connectivity, cf. Proposi-tion 6), as well as the relative healthy population, which has an effect of lowering the cost of treatment (Leatherby & McCann, 2020). The result, however, was controversial: although there were signs of success as of summer 2020 (Erdbrink, 2020), Sweden's economy has generally experienced slumps similar to those of other European nations that enforced stricter social distancing policies (Lindeberg, 2020), and, by fall 2020, Sweden maintains three to five times the number of infections and deaths per million population compared to its Scandinavian neighbors of Denmark, Finland, and Norway. Such an approach is certainly even less advisable in densely populated areas (high connectivity) or less healthy populations (high cost of treatment). It is also important to note that the ethical issue of putting a price tag on human lives makes such a decision difficult, even when it is supported by economic and risk analysis. Although the model discussed above uses network-wide averaged parameters, it is also interesting to examine the mitigation measures at the individual level. In a population network that exhibits preferential attachment characteristics, the optimal investment in prevention would be disproportionally high for the few "hubs" with high connectivity compared to the average individuals in an epidemic with high intrinsic loss (Proposition 3). To be effective in slowing down the transmission of COVID-19, it is necessary and even paramount to shut down those hubs (often small businesses such as bars and gyms), but such closings cost them disproportionally more compared to individuals staying at home. This creates an inherent moral hazard: the hubs may not have the incentive to spend much more than the vast majority of the nodes for the good of the whole network, and it is natural for some to attempt to open despite the shutdown order to reduce their outsized burden from the prevention measures. It is, therefore, recommended that decisionmakers identify and subsidize those businesses and individuals that act as hubs in order to effectively enforce prevention mitigation measures such as social distancing. Finally, it is worth noting that Proposition 7 is about the optimal allocation between prevention and treatment, given the size of the budgetary and economic constraints; it does not provide an eitheror choice. Based on the modeling results, allocating all resources to prevention or treatment only happens when limit of investment is infinite or infinitesimal, neither of which is possible in reality. For instance, although treatment is highly favored in the case of very limited investment capability (Proposition 7) and high connectivity (Proposition 6), low-cost prevention measures can still be pursued (such as policies adopted in African countries during COVID-19 pandemic; see, for instance, Massinga Loembé et al., 2020). On the other hand, no matter how much effort one puts into preventing an epidemic from spreading, it is always necessary to invest in effective treatment of the disease in order to minimize the impact on the population in the network. This has also been verified in the case of COVID-19, where stockpiling personal protection equipment for healthcare workers, a modest measure to ensure treatment availability, proved to be critical in mitigating risks of the epidemic. 
CONCLUSION In this article, we build a mathematical model to optimize the investments in mitigating the risks of epidemic propagation throughout a network of individuals by treating the prevention/containment and treatment/recovery measures separately. We find that when investment concentrates on one category of measures (i.e., prevention or treatment only), given an epidemic that may cause large enough potential loss, then optimal investment increases with connectivity and potential loss but is limited to only a fraction of loss amount. When the total investment is limited by budgetary and economic constraints, however, the proportion allocated to prevention increases when the total investment limit goes up and when the network connectivity is lower. We further show that these results are consistent with previous studies and can be used to properly interpret what happened in past pandemics as well as to shed light on future and ongoing events such as COVID-19. As with all analytical research, assumptions that are made to keep the models manageable always result in a simplified representation of reality. One such key assumption is the use of average transmission pa-rameters for all nodes. In reality, transmission rates can be different for each node and they can be dynamic over time. We also assume that investments are made at a single point in time and that their effects are instantaneous, whereas in reality they can be made over time and have delayed impacts. Our model is deterministic in nature at a snapshot in time. In other words, we assume that one can obtain (or estimate) the values of all the parameters at a particular point in time in order to derive a steady state solution to inform the decisionmakers of their investment options. In the future, we can extend the current work to include probabilistic models for input parameters such as the exposure (μ) and effectiveness of investment (ε), in order to take into account the uncertainties of those parameters. Finally, we adopt the expected loss (i.e., the probability of an event multiplied by its consequences) to quantify the risk of a pandemic because the objective of our model is to minimize the total expected loss of a pandemic. Although such a definition is supported by and frequently adopted in the literature, we do acknowledge that, as highlighted by Aven (2010Aven ( , 2019, using expected loss in its statistical term may not be informative when comparing risks of different scenarios. In such cases, uncertainties beyond probabilities should be considered in measuring risk. Future studies that relax these and other assumptions, as well as provide empirical verification of the modeling results, can extend the applicability and generalizability of this line of research. This study makes several theoretical and practical contributions. Instead of using epidemiological parameters within the objective function, our economics-based model optimizes the risk-reducing performance of mitigation investments. Additionally, our model explicitly accounts for network topology, a critical element of epidemic propagation, and derives critical relationships between optimal investments and key network characteristics. Overall, our results provide both a solid theoretical foundation and practical guidance to decisionmakers in determining proper actions to take to mitigate the risks of an epidemic such as COVID-19. ACKNOWLEDGMENT This material is based upon work supported by the National Science Foundation under Grant nos. 1952792 and 1735139. 
Derivation of p To find p, the probability of a node in a network being infected by the epidemic, we follow the derivation of the SIR-based epidemic spread in a scalefree network (Chang & Young, 2005;Pastor-Satorras & Vespignani, 2001). Let P k (t) denote the relative density of infected nodes with k connections-that is, the probability that a node with k connections is infected-at time t. The mean field rate equation gives where (λ) is the probability that any given connection points to an infected node, as a function of λ, the epidemic spreading rate (Pastor-Satorras & Vespignani, 2001). Solving for P k in a steady state (i.e., ∂P k (t )/∂t = 0), one gets Note that (λ) can be expressed in the lowest order of λ and n, the average number of node connections (Chang & Young, 2005): where β represents the probability that any individual may be infected when exposed to the disease, a factor determined by the nature of the epidemic but exogenous to the network, and μ = e −1/n represents the connectivity of the node (μ ∈ [0, 1), μ = 0 when n = 0, and μ = 1 when n → ∞). The effect of the P&C investment S p is in the reduction of the epidemic spreading rate λ in Eq. (A.4). As such, λ and S p satisfy certain boundary conditions. We know that, without any investment, a disease would be spread freely to any other nodes; in other words, λ = 1 when S p = 0. (Note that although the epidemic propagates "freely" without P&C investment, the rate of individuals contracting the disease is determined by the susceptibility β in Eq. (A.4).) But, any finite investments, no matter how large, would never be able to block propagation completely, that is, λ → 0 only when S p → ∞. Without loss of generality, the relationship between P&C investment and spreading rate can be expressed as λ ≡ 1 1+ ε p S p to satisfy the above boundary conditions, where ε p is a parameter describing the effectiveness of investment S p . Therefore, we have p = βμ 1+ ε p S p (A.5) APPENDIX 2 Proof of Proposition 6 The first part of the proposition that the allocation is independent of the intrinsic potential loss is trivial, since L 0 is not in Eq. (27), the equation for s p . To prove the second part of this proposition, we first have to derive Eq. (28). To facilitate the implicit function manipulation, we make the following assignments: Since F in Eq. (A.7) is a function of both s p and μ, we can use the implicit function rule to find the derivative of s p with respect to μ as follows: Since the numerator of Eq. (A.9) is always positive, the sign is determined by the denominator, or, to be exact, the sign of the term 1 − (1 + ε L S(1 − s p ))ln 1 μ . (We rewire the logarithmic term to make it positive, since lnμ is always negative.) Because 1 + ε L S(1 − s p ) is always greater than 1, the term is negative when ln 1 μ > 1, or e −1 > μ = e − 1 n , where n is the number of connections. Therefore, when n > 1, ∂s p ∂μ is always negative, and s p , the allocation to P&C, decreases with increasing connectivity μ.
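The equations in Appendix 1 did not survive extraction in this copy. The block below restates, in LaTeX, the standard mean-field rate equation and steady state from Pastor-Satorras and Vespignani (2001) that the surrounding sentences describe, together with our reading of the closing expression (A.5); it is a reconstruction under those assumptions, not a quotation of the original typesetting.

```latex
% Reconstruction (not a quotation) of the Appendix 1 expressions:
% mean-field SIS rate equation for degree-k nodes (Pastor-Satorras & Vespignani, 2001),
% its steady state, and our reading of the closing expression (A.5).
\frac{\partial P_k(t)}{\partial t}
  = -P_k(t) + \lambda k \,\bigl[1 - P_k(t)\bigr]\,\Theta(\lambda),
\qquad
P_k = \frac{\lambda k \,\Theta(\lambda)}{1 + \lambda k \,\Theta(\lambda)}
\quad (\text{steady state}),
\qquad
p = \beta\,\mu^{1/\lambda} = \beta\,\mu^{\,1 + \varepsilon_p S_p},
\quad \mu \equiv e^{-1/n},\ \ \lambda \equiv \frac{1}{1 + \varepsilon_p S_p}.
```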
Relationship between Some Environmental and Climatic Factors on Outbreak of Whiteflies, the Human Annoying Insects Background: Reports of numerous outbreaks of whiteflies from different parts of the world have increased their medical importance. The aim of this study was to determine the relationship between environmental and climatic factors and the outbreak of the whitefly population in Tehran, the capital of Iran. Methods: This study was carried out in urban areas of Tehran, where an increasing population of whiteflies was reported frequently during 2018. To entrap the whiteflies, 20 yellow sticky cards smeared with white refined grease were installed on the trunks of trees, twice per month as trapping time intervals. The captured flies were transferred to and preserved in cans containing 70% alcohol and were counted accurately under a stereomicroscope. Change point analysis and Generalized Estimating Equations (GEE) were used to determine the relationship between the air quality index, precipitation, air temperature and air humidity, as environmental and climatic factors, and the abundance of whiteflies. Results: The highest densities of whiteflies per trap were 256.6 and 155.6 in early October and late September, respectively. The numbers approached zero from November to April. The whitefly population was inversely correlated with the air quality index (p= 0.99) and precipitation (p= 0.95), and directly correlated with high temperature. The population was also directly correlated with air humidity in the first half of the year. Conclusion: According to these findings, personal protection against these pests is recommended during spring and summer, from early May to early October. Introduction Whiteflies (Hemiptera: Aleyrodidae) feed on a wide range of hosts; for some species, more than 900 host plant species have been identified (1, 2). Cucurbits and ornamental plants, agricultural crops, palms, and weeds are the main hosts of this pest, though many weeds serve as secondary hosts (3). The life cycle of whiteflies from egg to adult takes about one month, depending on the environmental temperature, and adult whiteflies may survive for one to two months (4). This insect is considered a health problem and an important medical pest that can, in some cases, also threaten human safety. Accidental entry of a whitefly into the human respiratory tract can cause inflammation and infection in the upper respiratory tract, leading to the emergence of opportunistic fungal and bacterial infections (5). The population of this insect has increased in many parts of the world, causing many problems for humans, especially in urban areas (6). According to experts, repeated and uncontrolled use of various formulations and concentrations of pesticides can have many adverse effects, such as the resistance of pests to pesticides, the emergence of new pests, and the eradication of natural enemies (parasitoids and predators). Whiteflies are among the pests that have evolved through the continuous use of chemicals and the lack of proper pesticide management (7). In addition to direct physical and biological harm to humans, these insects cause a sharp decline in the production of agricultural crops. Their honeydew also supports the growth and development of saprophytic fungi, which reduces the quality, nutritional value, consistency and shape of the produce. Moreover, they physically damage non-productive plants. 
Nowadays, whiteflies, owing to their increased resistance to various types of pesticides (8), cause more damage to a large number of crops and ornamental plants. Winged males and females feed on the leaf sap of plants, producing yellow spots on the leaves that directly damage the host plant and leave it stunted and sickly looking. Insect secretions on the leaves can also promote fungal growth (9). So far, about 1,556 species of whiteflies have been identified in different parts of the world (10). In Iran, whiteflies were first identified during faunistic surveys of flies in Fars Province in 1995 (11). Subsequently, 14, 18 and 24 species of whiteflies were reported in Isfahan (9), Gilan (12) and Golestan (13) Provinces, respectively. In 2000, morphological and biological studies of the common species in Esfahan showed that the whiteflies there belonged to the European race (9). These flies are greenhouse pests, but poor management of chemicals and pesticides has led to resistance to some pesticides and to adverse environmental conditions. As a result, the flies have been able to adapt to greenhouse conditions and to grow and reproduce easily. Some experts also believe that whiteflies have been released into the open air through infested greenhouse material, and, because the ecosystem cycle and the populations of predatory insects are no longer balanced in the environment, this has led to widespread outbreaks of whiteflies in the open air (14). Moreover, since these insects are expected to reproduce in places that resemble greenhouses in terms of climate and food resources, it is necessary to identify the ecological factors underlying their reproduction and to take measures to control them outdoors (15). Over the past few years, outbreaks of whiteflies in Iran, especially in the residential areas of Tehran, have caused various allergic reactions in humans. Therefore, the present ecological study was conducted to determine the relationship between some environmental and climatic factors and the outbreak of whiteflies, these human-annoying insects, in Tehran (District 6) in 2018. Study area The current study was carried out in an urban area of Tehran, the capital of Iran (District 6), which suffers from severe air pollution (a high Air Quality Index) and where an increased abundance of whiteflies was reported frequently during 2018 (Fig. 1). The city of Tehran is located between a mountainous region and the plain. Tehran's climate is generally described as mild in spring and fall, hot and dry in summer, and cold and wet in winter. Based on the 2016 census, Tehran has a population of approximately 10 million. Vegetation coverage in Tehran includes natural and hand-planted forests (16). Air pollution indicators, including the Air Quality Index (AQI), matched to the monthly activity of whiteflies, were obtained from the Iran Meteorological Organization (17). Study design To entrap whiteflies, 20 yellow sticky cards (fly traps) of 30×50 cm (18), sized according to the diameter of the tree, were installed on trees twice per month, as trapping time intervals, from early April to late March 2018. In total, 480 yellow sticky cards were applied for catching adult whiteflies. In addition, old traps were replaced with new ones at each visit. 
The coating on the traps was odorless: the cards were smeared with purified white grease purchased wholesale from reputable stores. The traps were installed on maple, acacia, European ash, sycamore, and mulberry trees for the purpose of catching whiteflies. In addition, to examine the effect of trap color on whiteflies, three colors (yellow, blue and green) were used for attracting and capturing them in a selected greenhouse: colored cards of 10×22 cm were installed on tomato plants at two heights, 1.3 and 2.3 m, and after two weeks the numbers of insects trapped on the colored cards were counted. Whiteflies were isolated and counted in two ways. In the first, the traps (sticky cards) were immersed in warm water for 2 minutes until the grease holding the flies softened and the flies were released into the water; the insects were then removed from the water with a brush, transferred to cans containing 70% alcohol for preservation, and later emptied into suitable plates and carefully counted under a stereomicroscope. In the second method, the traps containing whiteflies were peeled off the trees and, using the grid pattern on the trap surface, the number of flies in randomly selected grid cells was counted and the total number of whiteflies on the trap surface was estimated. The yellow and blue colored traps used in this study were supplied by Russel IPM Company, and the green traps were made with Tanglefoot adhesive supplied by Kerman Chemistry Company (applied to green cards with a brush). Statistical analysis Change point analysis is organized to answer two questions: 1) whether any change points exist, and 2) if so, at which times the change point(s) occur. The first question is addressed by hypothesis testing and the second by estimation. In general, let X1, X2, …, Xn be a sequence of independent random variables with probability distribution functions F1, F2, …, Fn, respectively. Change point analysis tests the null hypothesis H0: F1 = F2 = … = Fn against the alternative H1: F1 = … = FK1 ≠ FK1+1 = … = FK2 ≠ … ≠ FKq+1 = … = Fn, where 1 < K1 < K2 < … < Kq < n, q is an unknown parameter giving the number of change points, and K1, K2, …, Kq are the change points to be estimated. In most studies in which change point analysis has been applied, the distribution functions are assumed to be normal (19). Following the information criterion principle, H0 was rejected when the criterion favored the alternative, and the change point(s) were then estimated (19, 20). After determination of the change point(s), and in order to account for correlation among data collected over time, a marginal longitudinal model was applied in the next stage. Because the response variable is a count, a log link with the Poisson distribution was used, and the coefficients were estimated by Generalized Estimating Equations (GEE). This method provides a predictive model for the response variable in terms of the explanatory variables while taking into account the possible correlation between repeated measures of the dependent variable within a subject. Since the data in this study were collected over time, repeated measurements on the same unit are likely to be correlated; when GEE is used to analyze such longitudinal data, a working correlation structure is formulated and the possible correlations between measurements over time are incorporated (21). Specifically, suppose Yij is the count for trap i at occasion j and we are interested in relating changes in its expected value to the covariates. Since counts are often modeled as Poisson random variables, a marginal model with the Poisson variance function and a log link function is specified as follows: the mean μij = E(Yij) is related to the independent variables through the log link, log(μij) = x′ijβ; the variance of each Yij depends on its mean, Var(Yij) = φμij; and, in addition, an unstructured pairwise correlation pattern is assumed for the within-subject association among the repeated responses, Corr(Yij, Yik) = αjk, where the vector of parameters α gives the pairwise correlations among the responses. The marginal model specified above is a log-linear regression model with an extra-Poisson variance assumption (22). This model was applied to determine the relationship between the air quality index, precipitation, air temperature and air humidity, as environmental and climatic factors, and the abundance of whiteflies. The goodness of fit of the model was evaluated by QIC. 
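The two steps of the analysis described above can be illustrated with short sketches. The first locates a single change point in a series of trap counts by minimizing the within-segment sum of squares; it is a simplified stand-in for the information-criterion procedure cited in (19, 20), which is not reproduced here, and the counts are invented for illustration.

```python
import numpy as np

def single_change_point(x):
    """Locate one change point as the split that minimizes the total
    within-segment sum of squared deviations from the segment means.
    A simplified illustration of change point estimation; the cited
    information-criterion procedure is not reproduced here."""
    x = np.asarray(x, dtype=float)
    best_k, best_cost = None, np.inf
    for k in range(1, len(x)):                 # candidate change point after index k-1
        left, right = x[:k], x[k:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Illustrative trap counts (not the study's data): low activity, then an outbreak.
counts = [2, 3, 1, 4, 2, 5, 3, 150, 250, 160, 90, 20, 5, 2]
print("estimated change point index:", single_change_point(counts))
```

The second sketch shows how a log-Poisson marginal model of this kind can be fitted by GEE with the statsmodels library. The variable names and simulated data are placeholders for the study's dataset, and an exchangeable working correlation is used here as a simple stand-in for the unstructured pattern described in the text.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

np.random.seed(0)

# Hypothetical long-format data: one row per trap per sampling occasion.
# Column names are illustrative; the study's actual dataset is not reproduced here.
df = pd.DataFrame({
    "trap":     np.repeat(np.arange(20), 24),     # 20 traps, 24 occasions
    "count":    np.random.poisson(5, 480),        # whiteflies per trap (simulated)
    "aqi":      np.random.uniform(50, 180, 480),
    "temp":     np.random.uniform(5, 35, 480),
    "humidity": np.random.uniform(10, 80, 480),
    "precip":   np.random.uniform(0, 30, 480),
})

# Marginal log-Poisson model fitted by GEE, with a working correlation structure
# for repeated counts from the same trap.
model = smf.gee(
    "count ~ aqi + temp + humidity + precip",
    groups="trap",
    data=df,
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
print("rate ratios (exp of coefficients):")
print(np.exp(result.params))
```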
Results The density of whiteflies per trap in the different seasons was calculated over the 12-month sampling period in District 6 of Tehran in 2018. The highest densities of whiteflies per trap were 256.6 and 155.6 in early October and late September, respectively, attributed to lower temperature and rainfall together with high humidity (Fig. 2). The density of whiteflies per trap at other times of the year is shown in Table 1; the numbers approached zero from November to April. The whitefly population was inversely correlated with the air quality index (p= 0.99) and precipitation (p= 0.95), and directly correlated with high temperature. The population was also directly correlated with air humidity in the first half of the year and inversely correlated with it in the final months of the year. For the period up to the 13th trapping (early October), the estimates from the marginal model were calculated by the GEE method with the Poisson link function: for a unit increase in the air quality index (AQI), the whitefly population decreased (Table 2), and in this model all four variables were significant. For the period after the 13th measurement (early October), the following results were obtained: A) for a unit increase in the air quality index (AQI), the whitefly population decreased by 0.98; B) for a unit increase in temperature, the population increased by 2.37; C) for a unit increase in humidity, the population decreased by 0.92; D) for a unit increase in precipitation, the population decreased by 0.44 (Table 3). In this model, none of the four variables was significant. For both GEE models, the QIC indicated that the models were well fitted. Discussion The results of this study showed that the whitefly population was inversely correlated with the air quality index and precipitation and directly correlated with high temperature. The population was also directly correlated with air humidity in the first half of the year and inversely correlated with it in the final months of the year. Modern transportation and the rapid trade in plants, cuttings, branches and other plant parts, which often carry whitefly eggs, larvae and nymphs, have led to the transfer of these pests into new places. By contrast, few whiteflies enter a new place independently, and those that do arrive without the natural enemies with which they have co-evolved. 
Appropriate levels of humidity and temperature are another factor in the development of whitefly populations. Whiteflies are commonly able to complete their life cycle, including egg, larva, nymph, and adult stages, at temperatures of 15-35 °C, while survival is usually reduced at temperatures <20 °C or >30 °C (23,24). The survival rate of whiteflies at unfavorably low and high temperatures is also affected by the host plant (25). For instance, given the life span of mulberry trees in Tehran City, whiteflies strongly infest these trees (26), and when the leaves fall, the whitefly population also decreases. The import of flower seedlings and cuttings, as well as of the types of wood susceptible to whitefly growth, is one of the most important factors in the spread of whiteflies to different places. Whiteflies are unable to fly long distances (27). Also, lack of precipitation is another factor in the survival of whiteflies. The high level of CO2-containing pollutants in the air of Tehran, due to the defective fuel consumption of worn-out vehicles and other airborne pollution, is another important factor in the survival of whiteflies in urban areas, since these flies live in conditions similar to a greenhouse. It should be mentioned that, in fact, the high level of pollution and the weather conditions are important factors in the reproduction of whiteflies in Tehran (28). The abundance of whiteflies in Tehran can cause health problems including itching, red and sore eyes, runny nose, allergies, and problems in the respiratory system, especially in asthmatics (29). Children, people with weakened immune systems, the elderly, and pregnant women are more susceptible to these problems. Also, the presence of whiteflies in food and beverages, besides causing alarm, is a concern in terms of contamination with pathogenic microorganisms. Therefore, this study was conducted in different months of the year to consider the effects of temperature, precipitation, humidity, and environmental contaminants. The material on the wings and bodies of whiteflies can act like pollen and cause allergic reactions in individuals (30). Rain, low humidity (below 60%), and low or high temperatures disrupt the development of these insects. Whiteflies stay in one place because they cannot migrate long distances; the longest reported displacement of this insect is 7 km (31, 32). Whiteflies can also cause problems for the eyes and respiratory tract (31). The reasons for the eradication of the natural enemies of whiteflies include the destruction of their reproductive habitats, the uncontrolled use of authorized and unauthorized pesticides, and the lack of specialized biological control methods. These factors influence the growth of whiteflies in metropolitan cities like Tehran. Reducing the use of smoky, polluting vehicles and monitoring the factories that produce vehicles with incomplete combustion (33, 34) is one of the major ways of making the environment unfavorable for whiteflies. The use of different least-hazardous pesticides to reduce resistance in whiteflies, the use of systemic toxins on non-productive trees (35), a reduced reliance on insecticides as the only method of controlling whiteflies, and ultimately the use of growth regulators and integrated pest management (36) will help control these pests in cities such as Tehran.
Conclusion
The high level of pollution at different times of the year and the weather conditions are important factors in the reproduction of whiteflies in Tehran. According to these findings, during spring and summer, from early May to early October, when the temperature and humidity in Tehran City favor the development of whiteflies, personal protection against these pests is recommended for Tehran residents.
2020-04-23T09:07:57.774Z
2020-03-01T00:00:00.000
{ "year": 2020, "sha1": "1051f02e7df457f1894429f805fb7674d7a6185d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.18502/jad.v14i1.2714", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b6652d1f8184aecbd285d6e3b221b96abbdee5a3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
37353219
pes2o/s2orc
v3-fos-license
Assessment of Cardiovascular Risk Factors in a Rural Community in the Brazilian State of Bahia

Despite the decrease in mortality due to cardiovascular diseases concomitant with the intensification in controlling arterial hypertension in developed countries 1,2, cardiovascular diseases still remain the major cause of death 3. In Brazil, these diseases not only represent an important cause of morbidity and mortality 4,5, but are also associated with high cost, both social and that resulting from hospitalizations and retirement 6. In different regions of Brazil, great cultural and socioeconomic differences are observed, especially between the urban and rural populations 7. These aspects may interfere with the cardiovascular risk profile of individuals. Socioeconomic differences, for example, are associated with variations in blood pressure in different populations, as seen in the variation in prevalence reported in studies performed in different regions, such as the southern and northeastern regions 8. In this context, while most cardiovascular risk factors are more frequent in large cities 9-14, systemic arterial hypertension associated with a high ingestion of sodium seems to be common in rural populations of the Brazilian northeastern region 15,16. The objective of this study was to assess the frequency of cardiovascular risk factors in the rural community of Cavunge, in the Brazilian state of Bahia.

Methods
In this cross-sectional study, 160 adult individuals were randomly drawn. They were at least 19 years old and represented 25.6% of the individuals in that age group residing in the village of Cavunge, Ipecaetá, in the Brazilian state of Bahia. The individuals were randomly drawn from the databank of the population census performed in the Cavunge Project [a research project financed by CNPq (National Council of Research), which studies the latent infection and the subclinical form of visceral leishmaniasis in the population of the village of Cavunge]. Cavunge is a small rural community located in the semi-arid northeastern region of the state of Bahia, 162 km away from the city of Salvador, the capital of the state. The village of Cavunge has an area of 63.5 km2.
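As a small illustration of the sampling step described above, a simple random draw of 160 adults from a census databank might look like the following; the file and column names are hypothetical.

```python
# Sketch of the random draw: select 160 adults (>= 19 years) at random from a
# census databank. The file and column names are hypothetical.
import pandas as pd

census = pd.read_csv("cavunge_census.csv")           # one row per resident
adults = census[census["age"] >= 19]

sample = adults.sample(n=160, random_state=2003)      # simple random sample
print(f"sampling fraction: {160 / len(adults):.1%}")  # ~25.6% in the study
sample.to_csv("selected_participants.csv", index=False)
```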
The following individuals were excluded from the study: those with physical or mental disabilities that could jeopardize the procedures or who could not understand the written formal consent, and those residing in the village for less than 3 years.

The individuals randomly drawn were invited to a lecture aimed at providing the community with explanations about the study and with guidance about the conditions necessary for the investigation (12-hour fasting, no physical exercise, no alcoholic beverages or coffee) 17. The project was approved by the Committee on Ethics in Research of the CPqGM-Fiocruz. All individuals included in the study signed the written consent.

The following cardiovascular risk factors were analyzed: 1) age, defined as a risk factor in males > 45 years and females > 55 years; 2) race, classified on subjective criteria according to the scale white, mulatto, and black; 3) male sex; 4) familial history of early atherosclerotic disease (E-ACD), when affecting males aged less than 55 years and females aged less than 65 years; 5) arterial hypertension (AH), defined as systolic blood pressure > 140 mmHg or diastolic blood pressure > 90 mmHg, or both, or use of antihypertensive drugs; 3 measurements at 3-minute intervals were taken in the right arm with a properly calibrated aneroid sphygmomanometer with the patient in the sitting position, according to the procedure specified in the III CBHA (Brazilian Consensus on Arterial Hypertension) 17; 6) diabetes mellitus, defined as fasting blood glucose above 140 mg/dL or use of glucose-lowering drugs or insulin, or both; 7) obesity, defined as body mass index (BMI) > 30 kg/m2; individuals with BMI < 25 and between 25 and 29.99 were respectively classified as normal and overweight; weight and height were measured using an anthropometric scale in dressed patients not wearing coats or shoes; 8) abdominal obesity, assessed by the waist-hip ratio (WHR); for males, this ratio was corrected with the formula (-0.02265 + 1.00459 × WHR) used by Larsson et al 18, and ratios > 0.95 and > 0.85 were considered risk factors for males and females, respectively. The reference point used to measure the waist was either the umbilicus or the narrowest region (visible waist) of the abdomen when that region did not coincide with the region of the umbilicus 19. The pubic symphysis and the most protuberant point of the gluteal region were the reference points for hip measurement 19; 9) smoking, considered present in individuals reporting smoking until the day of the interview; 10) physical activity, classified into the 2 following groups: high caloric expenditure (HCE), individuals reporting participation in physical exercises at least 3 times a week, or those whose work involved a high caloric expenditure (carpenter, vulcanizer, hard laborer, or agriculturist) 20; and low caloric expenditure (LCE), individuals not participating in physical exercises according to the predefined criterion, or those whose work involved low caloric expenditure (driver, salesperson, waiter, wall painter, domestic worker, mechanic, or joiner) 20; 11) menopause, defined as lack of menstruation for more than 3 months in females > 45 years, once other causes of amenorrhea were excluded.
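The anthropometric classifications defined above can be expressed compactly in code. The cut-offs and the male WHR correction formula are taken from the text; the function names and example values are illustrative choices of our own.

```python
# Sketch of the anthropometric classifications defined above. The cut-offs and
# the male WHR correction (-0.02265 + 1.00459 x WHR) come from the text; the
# function names and example values are illustrative.
def bmi_category(weight_kg, height_m):
    bmi = weight_kg / height_m ** 2
    if bmi >= 30:
        return "obese"
    if bmi >= 25:
        return "overweight"
    return "normal"

def abdominal_obesity(waist_cm, hip_cm, sex):
    whr = waist_cm / hip_cm
    if sex == "male":
        whr = -0.02265 + 1.00459 * whr   # Larsson et al. correction for males
        return whr > 0.95
    return whr > 0.85                     # cut-off for females

print(bmi_category(82, 1.63))             # BMI ~30.9 -> "obese"
print(abdominal_obesity(98, 100, "male")) # corrected WHR ~0.962 -> True
```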
The lipid profile included the following tests: total cholesterol (TC), HDL-cholesterol (HDL-C), LDL-cholesterol (LDL-C), and triglycerides, measured after a fasting period of at least 12 hours. Blood was drawn at the headquarters of the Cavunge Project, and the biochemical analysis was performed in a specialized laboratory in Salvador. The parameters used were those established by the II Brazilian Consensus on Dyslipidemia 21. Total cholesterol and triglycerides were measured with an enzymatic method, HDL-cholesterol was measured with precipitation, and LDL-cholesterol was calculated with the Friedewald formula [LDL-C = TC - (HDL-C + TG/5)] for triglyceride levels < 400 mg/dL 22.

Based on the Framingham Study score 23, the overall cardiovascular risk of each individual was calculated. In its analysis, the Framingham Study score comprises the following risk factors: age, total cholesterol, HDL-C, systolic blood pressure, diabetes, and smoking, with a specific score for each item. Addition of the points provided the overall score of each individual. The level obtained was listed in a table of relative risk for coronary artery disease identified by a numeric scale, in addition to the absolute risk for coronary events in 10 years 24.

Data were analyzed with SPSS (Statistical Package for Social Science) software. The Student t test was used for analyzing the means and the chi-square test for the proportions. In 2x2 tables, the Yates correction and the Fisher exact test were used when the expected frequencies were lower than 5 in 1 or more cells, respectively. In these tests, the difference was considered significant when the probability (P) of a type I error was < 5%.

Results
Of the 160 individuals randomly drawn, 143 came for the interview, but in 17 of them the blood sample underwent hemolysis. The final sample comprised 126 (78.8%) individuals, whose mean age was 46.6±19.2 years, of whom 43.7% were males and 56.2% were females, 40.8% of whom were menopausal and not receiving hormone replacement therapy. The following frequencies were observed: systemic arterial hypertension, 36.5%; diabetes mellitus, 4%; and smoking, 11.9%. Obesity was observed in 7.9% and overweight in 27.8% of the individuals. Abdominal obesity, the most prevalent risk factor, was identified in 52 (41.3%) participants. These and other demographic and clinical characteristics are shown in Table I. When comparing the sexes, only abdominal obesity (57.7% vs 20%, P<0.001) and smoking (18.2% vs 7%, P=0.05) had a statistically significant difference. Table I also shows the mean values of the biochemical parameters, and only the following were statistically different between the sexes: total cholesterol (218±44.3 vs 188.3±53.2; P=0.001), LDL-C (128±44.3 vs 102±47.2; P<0.002), and glycemia (96.7±43.9 vs 84.5±15.3; P=0.05).

In the distribution by lipid profile range, 48.4% of the individuals had TC < 200 mg/dL and 20.4% had TC > 240 mg/dL, and 68.9% and 85.7% had, respectively, normal levels of LDL-C and triglycerides (figs. 1, 2, and 3). When comparing the sexes, males had a greater frequency of TC < 200 (67.3% vs 33.8%; P<0.001) and a lower frequency of TC ranging from 200 to 239 (14.5% vs 43.7%; P<0.001) than females did (fig. 1). No statistically significant difference was observed between the sexes for the other lipid variables.
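The Friedewald estimate used for LDL-C in the lipid analyses above can be sketched as follows; values are in mg/dL and, as stated in the text, the formula applies only when triglycerides are below 400 mg/dL. The example numbers are illustrative.

```python
# Friedewald estimate of LDL-C used above (values in mg/dL); only valid when
# triglycerides are below 400 mg/dL, as stated in the text.
def friedewald_ldl(total_chol, hdl, triglycerides):
    if triglycerides >= 400:
        raise ValueError("Friedewald formula not applicable for TG >= 400 mg/dL")
    return total_chol - (hdl + triglycerides / 5)

# Example with plausible values: TC 218, HDL 52, TG 140 -> LDL-C = 138 mg/dL
print(friedewald_ldl(218, 52, 140))
```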
The caloric expenditure was not estimated in 2 individuals because they did not have a well-defined occupational activity. The distribution of that variable differed between the sexes: 72.2% (39/54) of the males had HCE and 27.8% (15/54) had LCE, while 44.3% (31/70) of the females had HCE and 55.7% (39/70) had LCE (P=0.002).

Physical activity was correlated with other clinical variables, and the distributions of diabetes mellitus (HCE 0%; LCE 5/54 = 9.3%) (P=0.01) (tab. II) and dyslipidemia differed between the HCE and LCE groups. In the individuals with normal triglyceride levels, a greater frequency of HCE (63/107 = 58.9%) and a lower frequency of LCE (44/107 = 41.1%) were observed; however, in individuals with high levels of triglycerides, a greater frequency of LCE (9/13 = 69.2%) and a lower frequency of HCE (4/13 = 30.8%) (P=0.05) were observed (tab. II). Likewise, individuals with normal WHR had a greater frequency of HCE (46/73 = 63%) and a lower frequency of LCE (27/73 = 37%), while those with risky WHR showed the opposite pattern.

The cutoff point of HDL-C was raised from 35 to 40 because this is the recommendation of the current consensus on dyslipidemia. In our study, only 6 (4.8%) individuals had HDL-C levels below 35 mg/dL; when the cutoff point was raised to 40 mg/dL, a 1.5% increase occurred, but no difference was observed in the distribution of this lipid variable between the caloric expenditure groups defined by physical activity.

The analysis of the overall cardiovascular risk indicated that 39.7% of the population had a high risk for coronary artery disease in 10 years. The comparison between the sexes showed no significant difference in risk stratification (fig. 4). However, a significant difference was observed between the profiles of menopausal and non-menopausal females: most females in the former group (79.3%) had a high risk for coronary artery disease in 10 years, whereas only 9.5% of the females in the latter group had a similar risk (P<0.001).
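For illustration, the 2x2 comparisons of caloric expenditure above can be reproduced with a chi-square test with Yates correction, using the counts quoted in the text; scipy stands in here for the SPSS analysis used in the study, and Fisher's exact test is shown for the situation in which an expected cell count falls below 5.

```python
# 2x2 comparison of caloric expenditure by triglyceride status, using the
# counts quoted above (63/44 with normal TG, 4/9 with high TG).
from scipy.stats import chi2_contingency, fisher_exact

table = [[63, 44],   # normal triglycerides: HCE, LCE
         [4, 9]]     # high triglycerides:   HCE, LCE

chi2, p, dof, expected = chi2_contingency(table, correction=True)  # Yates correction
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# Fisher's exact test would be preferred if any expected cell count were < 5:
odds_ratio, p_exact = fisher_exact(table)
print(f"Fisher exact p = {p_exact:.3f}")
```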
Salt ingestion is known to be strongly correlated with an increase in the prevalence of systemic arterial hypertension, as reported in a Japanese study 30 , in which the prevalence of the disease is 21% in the southern region, where the mean salt ingestion is 13 g, and 38% in the northern region, where the salt ingestion doubles.Therefore, the Brazilian northeastern habit of salting food for preservation, which is very common in rural populations, is believed to explain the high prevalence of systemic arterial hypertension found in this study. The analysis of the lipid profile in this study showed that most of the population studied had LDL-C and triglyceride levels within the desired range, as well as extremely elevated mean levels of HDL-cholesterol; the 20.4% frequency of hypercholesterolemia (TC > 240 mg/dL) was greater than that found in other Brazilian populations, such as in a study about the Brazilian capitals, whose overall frequency of hypercholesterolemia was 8.8% 31 .Considering the TC levels greater than 200 mg/dL, the 51.6% prevalence found in this study was greater than the 40% prevalence found in the city of Porto Alegre and the 36.7%prevalence found in the city of Salvador 31 .The dietary habits of the population studied may once again have contributed to an increase in the frequency of hypercholesterolemia, because, in rural area, the use of saturated fat in food preparation is common.However, the low frequency of hypertriglyceridemia (14%) and low levels of HDL-C (4.8%) are noteworthy.We believe that the professional occupation of these individuals, most of whom are agriculturists, with high caloric expenditure activities may justify these results 32 .The comparative analysis between the high caloric expenditure and low caloric expenditure groups showed that most individuals with normal levels of triglycerides had professional activities requiring a high caloric expenditure, while in the group with high LDL-C levels, low caloric expenditure activities predominated. In this study, the importance of physical activity as an indicator of better cardiovascular risk profile 32 is confirmed by the tendency towards a lower frequency of abdominal obesity in the high caloric expenditure group and the higher frequency of males with normal weights.In addition, as compared with males, females had a greater frequency of low caloric expenditure activities, a higher frequency of abdominal obesity, greater mean levels of total cholesterol and LDL-C, and also a lower frequency of cholesterol levels within the normal range.Menopause may also have contributed to the less favorable cardiovascular risk profile in females, because 40.8% of the females in this study already were menopausal, and 79.3% were at high risk for coronary artery disease in 10 years estimated by the Framingham Study score 25 as compared with only 9.5% in the group of childbearing-age females. 
In this study, due to operational questions, the WHO diagnostic criterion, which requires a single glucose measurement (fasting blood glucose > 140 mg/dL) for diabetes mellitus, was used instead of that of the Brazilian Consensus on Diabetes, which requires 2 measurements of fasting blood glucose with values greater than 126 mg/dL 33. The use of the WHO criterion may have resulted in an underestimation of the incidence of diabetes mellitus. However, the low 4% prevalence of diabetes mellitus, lower than the mean 7.6% prevalence in the Brazilian population 33, is believed to be explained by the occupational high caloric expenditure 32. This is also evident in the comparison between the sexes, in which a higher incidence of diabetes is found in females, who, in this study, had fewer high caloric expenditure professional activities than males did and had a higher incidence of abdominal obesity, which is a well-known risk factor for the plurimetabolic syndrome and type 2 diabetes mellitus 34. In addition, no case of diabetes was observed in the high caloric expenditure group.

Despite the existence of some relevant limitations, such as the different demographic characteristics of the populations of Cavunge and Framingham, the use of the Framingham Study risk score in this study brought relevant information from the preventive and economic viewpoints. The assessment of the overall cardiovascular risk showed that 39.7% of the population studied had a high risk for coronary events in 10 years. Considering the mean age of 46.6 years, the finding that more than one third of the population studied has its professional activity and remaining productive period limited to 10 years is very disturbing. In addition, the impact of the association of multiple risk factors on the early development of CAD and the need for their early identification and adequate control have been well established 35.

From the methodological point of view, a 21.2% loss was observed in the initial sample due to non-adherence of selected individuals (10.6%) and to hemolysis of the blood sample (10.6%). Sample losses around 20% may constitute a selection bias; however, when these losses are random, they are interpreted as a representative subsample of the original sample that does not significantly interfere with the interpretation of the results 36.

Our results showed that the risk factors for cardiovascular diseases with a high prevalence in urban populations are also frequently found in rural communities. On the other hand, the lecture about risk factors delivered to the community in the initial phase of this study showed that, although the population studied completely lacked knowledge about preventive measures, they also avidly desired information about health promotion. These data point to the need for prevention and control educational programs in these communities, with a special emphasis on dietary habits, involving not only the regional health care professionals but also the population in general, which will act as information-proliferating agents.

Fig. 1 - Distribution of the population studied by TC range and by sex.
Fig. 4 - Distribution of the overall cardiovascular risk by sex and in the menopausal females.
A cross-sectional study was carried out with 160 individuals (age > 19 years) randomly drawn from those listed in the population census of the Cavunge Project. The following parameters were studied: arterial hypertension, dyslipidemia, diabetes, obesity, smoking, waist-hip ratio (WHR), physical activity, and overall cardiovascular risk classified according to the Framingham score. The assessment parameters used were those established by the III Brazilian Consensus on Hypertension and the II Brazilian Consensus on Dyslipidemia.

Table I - Demographic and clinical characteristics of the population studied. ** Caloric expenditure/professional category; SBP: systolic blood pressure; DBP: diastolic blood pressure.
2018-04-03T04:00:57.941Z
2003-09-01T00:00:00.000
{ "year": 2003, "sha1": "40b5355be41d5347ab80b0205ec0ffd09b0f7352", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/abc/a/SgNrVWsdFTGXYFHhzb8ZC8y/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "676b9298748eb23caf8ff2e5a6bcbb9c07493f1d", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
259302814
pes2o/s2orc
v3-fos-license
Genetic interactions between Polycystin-1 and TAZ in osteoblasts define a novel mechanosensing mechanism regulating bone formation in mice

Molecular mechanisms transducing physical forces in the bone microenvironment to regulate bone mass are poorly understood. Here, we used mouse genetics, mechanical loading, and pharmacological approaches to test the possibility that polycystin-1 and TAZ have interdependent mechanosensing functions in osteoblasts. We created and compared the skeletal phenotypes of control Pkd1flox/+;TAZflox/+, single Pkd1Oc-cKO, single TAZOc-cKO, and double Pkd1/TAZOc-cKO mice to investigate genetic interactions. Consistent with an interaction between polycystins and TAZ in bone in vivo, double Pkd1/TAZOc-cKO mice exhibited greater reductions of BMD and periosteal MAR than either single TAZOc-cKO or Pkd1Oc-cKO mice. Micro-CT 3D image analysis indicated that the reduction in bone mass was due to greater loss in both trabecular bone volume and cortical bone thickness in double Pkd1/TAZOc-cKO mice compared to either single Pkd1Oc-cKO or TAZOc-cKO mice. Double Pkd1/TAZOc-cKO mice also displayed additive reductions in mechanosensing and osteogenic gene expression profiles in bone compared to single Pkd1Oc-cKO or TAZOc-cKO mice. Moreover, we found that double Pkd1/TAZOc-cKO mice exhibited impaired responses to tibia mechanical loading in vivo and attenuation of load-induced mechanosensing gene expression compared to control mice. Finally, control mice treated with a small molecule mechanomimetic MS2 had marked increases in femoral BMD and periosteal MAR compared to vehicle control. In contrast, double Pkd1/TAZOc-cKO mice were resistant to the anabolic effects of MS2, which activates the polycystin signaling complex. These findings suggest that PC1 and TAZ form an anabolic mechanotransduction signaling complex that responds to mechanical loading and serves as a potential novel therapeutic target for treating osteoporosis.

Introduction
In vivo and in vitro studies demonstrate that the polycystin-1 (PC1)/polycystin-2 (PC2) heterotrimeric complex functions in osteoblasts and osteocytes to regulate bone mass 1,2 and acts as a mechanosensor to transduce the bone anabolic response to mechanical loading in vivo 3,4. Genetic ablation of either PC1 or PC2 in osteoblasts or osteocytes has similar effects in reducing bone mass by decreasing osteoblast-mediated bone formation 3,5-8.
PC1 and PC2 43 mechanosensing functions are mediated by heterotrimeric complex activation of common signal 44 transduction pathways. In this regard, PC1 and PC2 conditional knockout models exhibit 45 concordant effects in impairing osteoblast-mediated bone formation. However, PC1 and PC2 46 have discordant effects on bone marrow adipogenesis that implicates separate signaling 47 mechanism 4,6,9 . In this regard, PC1 deficiency stimulates adipogenesis, leading to increased 48 bone marrow adipose tissue (MAT) deposition 2,6 , whereas PC2 loss-of-function inhibits 49 adipogenesis 4 . Additional in vitro and in vivo data show that PC1 activates Runx2 transcriptional 50 activity to stimulate osteoblastogenesis but diminishes PPARγ signaling leading to reduced bone 51 marrow fat 2,4 . In agreement with the low turnover bone disorder in Pkd1 mouse models, the blood 52 level of bone-specific alkaline phosphatase was significantly lower in patients with ADPKD 53 compared to control subjects without ADPKD 10-12 . The molecular mechanisms underlying the 54 different effects of PC1 and PC2 on osteoblastogenesis and adipogenesis are not clear. In the 55 current studies, we sought to understand the mechanism for the apparent PC1-specific effect in 56 reciprocally regulating transcriptional control of osteoblastogenesis and adipogenesis. 57 The Hippo-YAP/TAZ pathway is also regulated by mechanical forces [13][14][15][16] . Alterations of 58 matrix stiffness in cell culture modulates nuclear translocation of non-phosphorylated TAZ 59 resulting in co-activation of Runx2 and stimulation of osteoblastogenesis and in TAZ binding to 60 PPARγ to inhibit adipogenesis 2,17,18 . The physiological importance of TAZ in bone homeostasis 61 is revealed by transgenic overexpression of TAZ in osteoblasts in mice, which increases 62 osteoblast-mediated bone formation and inhibits bone marrow adipogenesis 19 ; depletion of TAZ in zebrafish, which impairs bone development 18 , and global knockout of TAZ in mice, which 64 have small stature and ossification defects 20 . Based on these observations, we posit that PC1 65 dependent TAZ signaling might explain the differential functions of PC1 and PC2 on 66 adipogenesis. 67 Recent studies show crosstalk between PC1 and TAZ signaling that is mediated by the 68 binding of TAZ to the PC1 C-terminal tail 2,17 . The PC1/TAZ complex is cleaved to allow nuclear 69 translocation of TAZ, a mechanism of TAZ regulation that differs from the canonical Hippo 70 regulation of YAP/TAZ signaling 21 . TAZ binds to the PC1-CTT and undergoes nuclear 71 translocation in response to changes in bone ECM microenvironment to stimulate 72 osteoblastogenesis and inhibit adipogenesis through transcriptional co-activation of Runx2 and 73 co-repression of PPARγ activity 2 . TAZ also binds to PC2, leading to PC2 degradation 22 . These 74 observations in bone parallel the interactions between PC1/PC2 and TAZ in primary cilia in renal 75 epithelial cells 20,23,24 . In this regard, TAZ knockout from the kidney result in cystic kidney disease 76 in mice, similar to polycystin complex inactivation, suggesting that PC1/PC2 and TAZ act through 77 common pathways in the kidney as well as bone 20,23,24 . Furthermore, a small molecule 78 mechanomimetic (named as molecular staple, MS) that binds to the PC1:PC2 C-terminal tail in a 79 presumptive coiled-coil region (e.g. 
PC1 residue Tyr 4236 and PC2 residues Arg 877 , Arg 878 , and 80 Lys 874 ) has been shown to activate this complex and mimic the effects of physical forces to 81 activate polycystins/TAZ signaling and stimulate bone mass in mice 2 . Collectively these 82 observations suggest that TAZ modulates polycystin's mechanosensing functions to differentially 83 regulate osteoblastogenesis and adipogenesis. 84 In this study, we examined the interaction between PC1 and TAZ in mouse bone by using 85 Osteocalcin (Oc)-Cre to conditionally delete both Pkd1 and TAZ in osteoblasts. We compared 86 skeletal phenotypes of double Pkd1/TAZ Oc-cKO mice with single conditional Pkd1 and TAZ null 87 mice under baseline conditions, after mechanical loading, and following treatment with a more 88 potent mechanomimetic, MS2, that activates the PC1/PC2 complex. We found that genetic ablation of PC1 and TAZ in osteoblasts results in additive loss of bone mass and anabolic 90 responses to mechanical loading. Compound PC1 and TAZ deficient mice were also resistant to 91 the bone inductive effects of the MS2 mechanomimetic in vivo. Our findings provide a new 92 mechanism whereby TAZ regulates skeletal homeostasis through co-dependent functions with 93 PC1 in osteoblasts. 94 well as YAP signaling such as increased CYR61 and CTGF gene expressions inhibits osteoblast 121 differentiation and mineralization. An increase in RankL and the RankL/OPG ratio promotes 122 osteoclast differentiation, leading to greater TRAP staining and higher osteoclast activity in 123 conditional TAZ Oc-cKO null mice compared with control mice (TAZ +/+ ). Conditional deletion of TAZ 124 also resulted in increased adipocyte markers such as PPARγ2, aP2 and Lpl gene expressions 125 (Table 1). 126 Unexpectedly, global TAZ +/heterozygous mice, which had a ~40% reduction in TAZ 127 message expression in bone, did not have significant changes in BMD or bone volume. Single-128 heterozygous TAZ +/showed normal bone gene expression profiles as well ( Table 1) Micro-CT 3D image analysis and Goldner staining (Fig 3 & 4). Also, the reductions in bone mass 142 were similar in male and female adult mice. 143 The skeletal phenotype of double Pkd1/TAZ Oc-cKO mice was more severe than either single 144 Pkd1 Oc-cKO or TAZ Oc-cKO null mice. Double Pkd1/TAZ Oc-cKO mice had greater losses in BMD, 145 trabecular bone volume, and cortical bone thickness with 33%, 53%, and 27% reductions, respectively in both male and female adult mice. This indicates the additive effects of Pkd1 and 147 TAZ in postnatal bone homeostasis (Fig 3 & 4). Consistent with lower bone mass, combined Pkd1 148 and TAZ deficiency also resulted in additive reductions in osteoblast-related gene expressions, 149 such as in Runx2-II, Osteocalcin, and Dmp1 (Table 2), as well as mechanosensing responsive 150 genes such as in Wnt10b, c-Jun, and PTGS2 ( Table 2). Periosteal MAR (Fig 4) was decreased 151 by ~73% in conditional double Pkd1/TAZ Oc-cKO mice compared to controls, whereas TAZ Oc-cKO and 152 Pkd1 Oc-cKO single conditional knockout mice had reductions in periosteal MAR of 55% and 53%, 153 respectively compared to control mice. Loss of either Pkd1 or TAZ resulted in enhanced marrow 154 adipogenesis, but no additive effects on adipocyte differentiation-related gene expressions were 155 observed in the conditional double Pkd1/TAZ Oc-cKO mice ( Table 2). 156 We found that the conditional deletion of Pkd1 or TAZ in osteoblasts has opposite effects 157 on osteoclast activities. 
There was a decreased RankL/OPG expression ratio and TRAP 158 immunostaining in Pkd1 Oc-cKO mice but an increased RankL/OPG expression ratio and TRAP 159 immunostaining in TAZ Oc-cKO mice ( Table 2 and Fig 4). In contrast, double Pkd1/TAZ Oc-cKO had 160 similar RankL expression and TRAP immunostaining when compared to control, indicating a 161 recovery of osteoclast activities in the double null mice ( Table 2 and Fig 4). Changes in gene 162 expression and immunostaining in bone correlated with alterations in serum biomarkers (Table 163 3). In this regard, further evidence for osteoblast and osteocyte dysfunction includes reductions 164 in serum osteocalcin and FGF-23 from in single Pkd1 Oc-cKO or TAZ Oc-cKO mice compared with age-165 matched control mice and an even greater decrement in double Pkd1/TAZ Oc-cKO null mice (Table 166 3). In contrast, serum levels of TRAP, a marker of bone resorption, were decreased in single 167 Pkd1 Oc-cKO mice, increased in single TAZ Oc-cKO mice, but restored in double Pkd1/TAZ Oc-cKO null 168 mice compared with control littermates ( Table 3). In addition, serum level of leptin was 169 significantly higher in single Pkd1 Oc-cKO or TAZ Oc-cKO mice than age-matched control mice. 170 However, we did not observe further increase in double Pkd1/TAZ Oc-cKO null mice (Table 3). These findings suggest that Pkd1 and TAZ have distinct functions among osteoblasts, adipocytes, and 172 osteoclasts in bone in vivo. 173 Loss of mechanical loading response in conditional Pkd1 and TAZ deficient mice. 174 The cross-sections of tibiae from control and double Pkd1/TAZ Oc-cKO null mice after 175 mechanical tibia loading studies in vivo are shown in Fig 5. In wild-type control mice, loaded tibia 176 showed a 2-fold increase in periosteal mineral apposition rate. In contrast, there was no 177 measurable increase in periosteal mineral apposition in the loaded tibia from double Pkd1/TAZ 178 knockout mice (Fig 5). In addition, a real-time RT-PCR analysis revealed that loaded tibia from 179 the control mice had a dramatic response to mechanical stimulation, evidenced by significant 180 increases of mechanosensing and osteogenic gene expressions including Wnt10b, FzD2, Axin2, 181 PTGS2, c-Jun, c-Fos, Runx2-II, Osteocalcin, ALP, and Dmp1 when compared with no load control 182 tibia. In contrast, even when using the same loading regimen, no changes of mechanosensing 183 and osteogenic gene expression profiles were observed in the loaded tibia from double 184 Pkd1/TAZ Oc-cKO null mice when compared with no load control tibia (Table 4). Thus, PC1 and TAZ 185 are important in mediating mechanotransduction in bone. 186 Validation of MS2 key binding to residues in PC1/PC2 C-terminus tails. 187 We have previously showed that the small molecule MS2 activates PC1/PC2 complex 188 signaling 2 . Using computational modeling, we engaged in an induced fit docking campaign and 189 predicted several potential ligand binding complexes. From these predicted poses, we identified 190 key residues in the PC1 and PC2 C-terminus tail regions with which MS2 is predicted to interact. 191 As shown in Fig 6, the PC2-CTT binding site for MS2 is predicted to include Lysine 874 and 192 Arginine 877, whereas the PC1-CTT binding site for MS2 involves Tyrosine 4236 (Fig 6A & 6B). 193 To test these predictions, we performed site-mutagenesis of key residues in both PC1 and PC2 194 and tested the effects of MS2 on PC1 and PC2 assembly using a BRET 2 assay. 
We observed 195 that the compound MS2 markedly enhances BRET 2 signal in wild-type constructs, while 196 mutagenesis of key residues in either PC1-CTT or PC2-CTT constructs completely abolished the BRET 2 signal in the presence of compound MS2, confirming a role of MS2 in binding and 198 enhancing the PC1 and PC2 interaction (Fig 6C & 6D). 199 Next, we examined PC1/PC2 complex formation during MC3T3-E1 osteoblast 200 differentiation in vitro. We observed culture duration dependent increase of PC1/PC2 complex 201 formation by western blot analysis. Incubation with 1 µM of MS2 in osteogenic cultures markedly 202 increased the amount of PC1 and PC2 protein as assessed by western blot analysis (Fig 6E & 203 6F). These data suggests that MS2 may molecularly stabilize the PC1/PC2 complex in osteoblast 204 culture in vitro. 205 Loss of MS2-mediated stimulated increase in bone mass in conditional Pkd1 and TAZ 206 deficiency mice. 207 Finally, we treated wild-type and conditional double Pkd1/TAZ Oc-cKO null mice with vehicle 208 or MS2 (50 mg/kg) i.p. daily and assessed their skeletal response. After only 2 weeks, we 209 observed that wild-type control mice treated with MS2 had a 15% increment in femoral bone 210 mineral density compared to vehicle control (Fig 7). Micro-CT 3D images revealed that MS2 211 treated wild-type mice had a 39% increase in trabecular bone volume and 16% increase in cortical 212 bone thickness. 213 In contrast, administration of MS2 had no effects on bone mineral density and bone 214 structure in double Pkd1/TAZ Oc-cKO null mice, suggesting specific-target dependent effects of MS2 215 on polycystins/TAZ signaling (Fig 7). We also observed that there were 1.6-fold increases in bone 216 formation rate in wild-type mice treated with MS2 compared to the vehicle control, in agreement 217 with enhanced osteoblastogenesis (e.g., Runx2-II, OOsteocalcin, ALP and Dmp1) and 218 suppressed marrow adipogenesis (e.g., PPARγ2, aP2, and Lpl) by a real-time RT-PCR analysis 219 (Table 5 and Fig 7). Again, administration of MS2 had no effects on bone formation rate and 220 bone gene expression profiles in double Pkd1/TAZ Oc-cKO null mice. Furthermore, MS2 stimulated 221 mechanosensing gene expressions, including Wnt1, Wnt10b, FzD2,eNOS,222 and PTGS2, consistent with MS2 acting as a small molecule "mechanomimetic". MS2 treatment decreased RankL/OPG expression ratio and TRAP immunostained osteoclasts in the MS2-224 treated mice compared to vehicle control (Table 5 and Fig 7). Administration of MS2 had no 225 effects on osteoblast-mediated bone formation rate, marrow adipogenesis, and osteoclast activity 226 in conditional double Pkd1/TAZ Oc-cKO null mice (Table 5 and Fig 7). These data support that MS2 227 functions as anabolic drugs through the polycystins/TAZ pathway to promote the bone remodeling 228 process. 229 230 Discussion 231 In the current study, we provide loss-of-function genetic and gain-of-function 232 pharmacological evidence for the co-dependent roles of PC1 and TAZ in regulating osteoblast-233 mediated bone formation and bone mass, First, using Oc-Cre to conditionally delete Pkd1 and 234 TAZ in osteoblasts in mice, we found that deletion of both genes in double Pkd1/TAZ Oc-cKO null 235 mice resulted in a more severe skeletal phenotype than loss of either PC1 or TAZ alone in the Osteocytes regulate osteoclast activity through the RANKL/OPG paracrine pathway 36,37 . 
300 no difference in OPG expression in the Pkd1 Oc-cKO null and TAZ Oc-cKO null mice, this could account 302 for the differential effects of PC1 and TAZ on osteoclast activity in bone. Moreover, MS2 303 significantly decreased RANKL expression in bone and attenuated osteoclast activity. Our 304 understanding of TAZ regulation of osteoclast function is further supported by the observation by 305 Yang et al that either global or osteoclast-specific knockout of TAZ led to a low-bone mass 306 phenotype due to elevated osteoclast formation 38 . Thus, PC1 and TAZ signaling have divergent 307 effects on osteoclasts and bone resorption. 308 Finally, a strong association exists in senile osteoporosis between decreased 309 osteoblastogenesis and increased adipogenesis leading to increased bone marrow fat 39-42 . We 310 have previously reported that global PC1 deficiency in mice has an inverse effect, inhibiting 311 osteoblastogenesis and stimulating adipogenesis 2,7 . Similar to conditional Pkd1 Oc-cKO null mice 312 3,6 , we also observed conditional TAZ Oc-cKO null mice had greater increments in adipogenic 313 markers than global or conditional TAZ heterozygous mice in the current study, suggesting a 314 gene-dosage dependent effect of loss-of-TAZ in osteoblasts on bone marrow adipogenesis. 315 Interestingly, we found a similar increase in adipogenic markers in both PC1 and/or TAZ 316 osteoblast conditional knockout mice. The increase of adipogenic markers could be theoretically 317 explained by increased transdifferentiation of osteoblast precursors to adipocytes 43,44 , or effects 318 of PC1 and TAZ in osteoblasts/osteocytes differentially releasing paracrine factors that modulate 319 adipogenesis 45-47 , analogous to paracrine factors that regulate osteoclastogenesis. Regardless, 320 our studies revealed that double Pkd1/TAZ Oc-cKO null mice had no differences in adipogenic 321 markers relative to single Pkd1 Oc-cKO or TAZ Oc-cKO null mice, suggesting that polycystin-1 and TAZ 322 regulate adipocyte differentiation through the common polycystins/TAZ pathway. to determine bone volume (BV/TV) and cortical thickness (Ct.Th) as previously described 3,4,6 . 359 Real-time quantitative reverse transcription PCR (real-time qRT-PCR) and western blot 360 analysis. For real-time qRT−PCR, 1.0 g total RNA isolated from either the long bone of 6-week-361 old mice or 8-days cultured BMSCs in differentiation media was reverse transcribed as previously 362 described 4,6 . PCR reactions contained 20 ng template (cDNA), 375 nM each forward and reverse 363 primers, and 1 X EvaGreen Supermix (Bio-Rad, Hercules, CA) in 10 l. The threshold cycle (Ct) 364 of tested-gene product from the indicated genotype was normalized to the Ct for cyclophilin A. 365 Then the tested-gene product vs cyclophilin A is normalized to the mean ratio of wild-type or 366 control group, which has been set to 1. 367 For Western blot analysis, protein concentrations of the supernatant were determined with 368 a total protein assay kit (Bio-Rad, Hercules, CA). Equal quantities of protein were subjected to 4-369 12% Bis-Tris or 3-8% Tris-Acetate gradient Gels (Invitrogen, Carlsbad, CA) and were analyzed 370 with standard western blot protocols as previously described 4,6 . Polycystin-1 antibody (7E12, sc- were from Santa Cruz Biotechnology (Paso Robles, CA). The intensity of the bands was 376 quantified using Image J software (http://rsb.info.nih.gov/ij/). 
National Laboratory and The University of Tennessee, Knoxville, we previously identified a 379 compound that is thought to bind to the polycystin1 (PC1-CTT)/polycystin2 (PC2-CTT) complex 380 in their C-terminus tails that we refer to as molecular staple two (MS2) 2 . Here, using computational 381 ligand docking with an initial rigid receptor search using the proxy triangle algorithm and London 382 dG scoring function and subsequent induced-fit refinement using a "free" receptor geometry, the 383 were plated in 96-well black isoplate and cultured for 48 hours after transfection. We used the 397 Synergy H4 plate reader to monitor the BRET 2 signal (Fluorescence/Luminescence ratio). The 398 relative fluorescence (515/30 nm) and luminescence (410/80 nm) raw data were detected from 399 each well after adding DeepBlue C (5 µM) in the presence or absence of compound MS2 (10 µM). 400 In addition, based on the identification of crucial contact residues [e.g. Lys(K) 874 structure by micro-CT 3D images analysis from male mice. Data are expressed as the mean ± 595 S.D. from serum samples of individual mice (n=6). *P < 0.05, **P < 0.01, ***P < 0.001 compared 596 with wild-type mice, ## P < 0.01 compared with, TAZ Oc-cKO mice, and && P < 0.01 compared with Pkd1 Oc-cKO mice, respectively. P values were determined by 1-way ANOVA with Tukey's multiple-598 comparisons test. conditional Pkd1 and/or TAZ deleted mice compared with age-matched control mice. c Periosteal 604 mineral apposition rate (MAR) by Calcein double labeling. There was a significant reduction in 605 periosteal MAR in single Pkd1 Oc-cKO or TAZ Oc-cKO mice compared with age-matched control mice 606 and an even greater decrement in double Pkd1/TAZ Oc-cKO null mice, indicating a synergic effect 607 of PC1 and TAZ on osteoblast-mediated bone formation. d TRAP staining (red color) for 608 osteoclast activity. Data are expressed as the mean ± S.D. from 6 individual mice (n=6). *P < 609 0.05, **P < 0.01, ***P < 0.001 compared with wild-type mice. P values were determined by 1-way 610 ANOVA with Tukey's multiple-comparisons test. Data are mean  S.D. from 6 tibias of wild-type control and compound Pkd1/TAZ Oc-cKO null mice. 635 *P < 0.05, **P < 0.01, ***P < 0.001 compared with wild-type control mice. P values were 636 determined by 1-way ANOVA with Tukey's multiple-comparisons test. 637 Table 5. Gene expression profiles in bone from MS2-treated wild-type control and
2023-07-02T05:09:30.510Z
2023-05-29T00:00:00.000
{ "year": 2023, "sha1": "5e41897aaea5a66ec018eeba9f6384f882df8cb3", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "5e41897aaea5a66ec018eeba9f6384f882df8cb3", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
7625010
pes2o/s2orc
v3-fos-license
A 20-Year Longitudinal Study of Plasmodium ovale and Plasmodium malariae Prevalence and Morbidity in a West African Population Background Plasmodium ovale and Plasmodium malariae have long been reported to be widely distributed in tropical Africa and in other major malaria-endemic areas of the world. However, little is known about the burden caused by these two malaria species. Methods and Findings We did a longitudinal study of the inhabitants of Dielmo village, Senegal, between June, 1990, and December, 2010. We monitored the inhabitants for fever during this period and performed quarterly measurements of parasitemia. We analyzed parasitological and clinical data in a random-effect logistic regression model to investigate the relationship between the level of parasitemia and the risk of fever and to establish diagnostic criteria for P. ovale and P. malariae clinical attacks. The prevalence of P. ovale and P. malariae infections in asymptomatic individuals were high during the first years of the project but decreased after 2004 and almost disappeared in 2010 in relation to changes in malaria control policies. The average incidence densities of P. ovale and P. malariae clinical attacks were 0.053 and 0.093 attacks per person per year in children <15 years and 0.024 and 0.009 attacks per person per year in adults ≥15 years, respectively. These two malaria species represented together 5.9% of the malaria burden. Conclusions P. ovale and P. malariae were a common cause of morbidity in Dielmo villagers until the recent dramatic decrease of malaria that followed the introduction of new malaria control policies. P. ovale and P. malariae may constitute an important cause of morbidity in many areas of tropical Africa. Introduction Plasmodium ovale and Plasmodium malariae have long been reported to be widely distributed in tropical Africa and in other major malaria-endemic areas of the world [1][2][3][4][5]. In forest and wet savannahs areas of West and Central Africa, high prevalence of the two species is common in children, often reaching 15-40% for P. malariae and 4-10% for P. ovale in studies where thick blood films were carefully examined by trained microscopists since parasitemia is usually low [6][7][8][9][10][11]. In these areas with a long rainy season and/or perennial or semi-perennial Anopheles breeding sites, P. falciparum is always highly endemic, and most P. ovale and P. malariae infections are associated with P. falciparum infections [8,12]. In the Sahel and other dry savannah areas, prevalence rates of P. ovale and P. malariae are much lower, rarely exceeding 1% for P. ovale and 10% for P. malariae [13]. The clinical features of P. ovale and P. malariae infections in endemic populations are poorly known, and most clinical data come from studies conducted in non-immune travelers returning from endemic areas or in patients that were given malaria therapy for the treatment of neurosyphilis during the period 1920-1960. High fever is the main symptom, attacks are mild and severe complications appear very rare [3,4,14,15]. In malaria endemic areas of tropical Africa, almost all malaria attacks are attributed to P. falciparum, possibly due either to underdiagnosis of P. ovale and P. malariae clinical attacks, to partial cross-immunity between malaria species, or to rapidly acquired species-specific protective immunity against P. ovale and P. malariae. In the literature, there is little evidence that P. 
malariae may be responsible for fever episodes in children or adults living in highly malaria endemic areas, but the occurrence of fever episodes temporally related to peaks of P. malariae parasitemia was reported in Liberia [16], and the responsibility of this species in fever episodes occurring in children during the dry season has been suspected in The Gambia [17]. By contrast, the responsibility of P. ovale in fever episodes has been clearly established in Senegal both in children and in adults, but the incidence of the disease was much lower than for P. falciparum [18,19]. In 1990, a longitudinal prospective study of malaria infection and the determinants of the disease was set up in the population of Dielmo village, a holoendemic area of Senegal [8]. Until 2008, when long-lasting insecticide-treated nets were deployed [20], the only intervention was to provide prompt specific treatment for malaria attacks and other diseases occurring in this village where the occurrence of fever cases was monitored daily. Here, we present parasitological and clinical data on P. malariae and P. ovale infections that we collected over a 20-year period in this population. Ethics Statement The project was initially approved by the Ministry of Health of Senegal and the assembled village population, and renewed on a yearly basis. Written informed consents were obtained individually from all participants or the parents of children younger than 15 years. For children participants aged 15-18 years, written informed consent were obtained individually since the National Ethics Committee of Senegal considered that participants in this age are responsible for their own person. Audits were regularly carried out by the National Ethics Committee of Senegal and adhoc committees of the Ministry of Health, the Pasteur Institute (Dakar, Senegal, and Paris, France), and the Institut de Recherche pour le Développement (formerly ORSTOM, Paris and Marseille). Clinical and parasitological monitoring The study was carried out from June 1990 to December 2010 in Dielmo, a village situated in a Sudan-savannah region of central Senegal. It is an area of intense and perennial transmission where the mean entomological inoculation rate was 258 and 132 infected bites per person per year during 1990-2006 and 2007-2010 periods, respectively [20]. Most of the population of Dielmo (all 247 inhabitants of the village in June 1990 and 468 of 509 inhabitants in December 2010) was involved in a longitudinal study of malaria. The clinical, parasitological, entomological and epidemiological monitoring has been described elsewhere [8,20]. Briefly, to identify all episodes of illness, a field research station with a dispensary was built and was open 24 h a day and 7 days a week for the detection of cases both active and passive. Each household was visited daily, and nominative information was collected at home 6 days a week (i.e. excluding Sunday) on the presence or absence in the village of each individual enrolled, their location when absent, and the presence of fever or other symptoms. The body temperature was recorded three times a week (every second day) in children younger than 5 years, and in older children and adults in case of suspected fever or fever-related symptoms (hot body, asthenia, cephalalgia, vomiting, diarrhea, abdominal pain, cough). In case of fever or other symptoms, thick blood films were made by finger prick and examined for malaria parasites, and medical examination and specific treatment were provided. 
In addition, thick blood films were made bi-weekly from June to September 1990, then weekly, monthly or quarterly according to periods and age groups from October 1990 to December 2010 in all individuals enrolled in the project. Two hundred oil-immersion fields (approximately 0.5 µl of blood) were examined on each slide and the parasite:leukocyte ratio was measured separately for each plasmodial species. Since there was no simultaneous measurement of leukocytemia, when expressing the results in numbers of trophozoites per µl of blood, a mean standard leukocyte count of 8,000 per µl of blood was adopted for all age groups. Four first-line antimalarial treatments were successively used during the 20-year study period, starting with oral quinine (Quinimax®) (October 1990-December 1994).

Measurements of P. falciparum, P. ovale and P. malariae prevalence
Prevalence rates for each Plasmodium species presented in this study were those observed from 1990 to 2010 during bi-weekly (June-September 1990) or quarterly cross-sectional surveys (November 1990, then four quarterly surveys each year from 1991 to 2010). All measurements of parasitemia (29,280) were taken into account, even if fever or other symptoms were documented at the same period.

Establishment of criteria for the diagnosis of P. ovale and P. malariae malaria attacks
In persons living in areas where malaria is endemic, most P. ovale and P. malariae infections are asymptomatic. To distinguish the episodes of fever caused by P. ovale or P. malariae from those caused by other diseases when these parasites were present by chance, the relationship between parasitemia and fever was investigated by a case-control approach, and the occurrence of age-dependent pyrogenic thresholds was then investigated by a logistic regression method. A total of 64,284 simultaneous measurements of parasitemia and temperature made from June 1990 to December 2010 among 760 individuals aged from one month to 99 years were included in the analysis. The following definitions of case and control observations were used:

Case observations. Individual observations were regarded as fever cases if the rectal temperature measured by active case detection at home or by passive case detection at the clinic was ≥38 °C (young children) or the axillary temperature was ≥37.5 °C (older children and adults). Two fever episodes were considered independent if they occurred fifteen days apart or more. When several simultaneous measurements of parasitemia and temperature were available for the same fever episode, only the highest measure of parasitemia and the associated temperature were taken into account. 14,841 observations of parasitemia and temperature matched the case definition.

Control observations. Owing to the erratic nature of hyperthermia during malaria attacks and a number of other diseases, individual observations from the cross-sectional surveys were considered to be asymptomatic controls if rectal/axillary temperatures were lower than 38 °C/37.5 °C and if there was no episode of illness (allegation of fever and/or other fever-related symptoms) between fifteen days prior to and seven days after the temperature was taken. Measurements from pregnant women were excluded from the analysis. 49,443 simultaneous measurements of parasitemia and temperature performed during bi-weekly, weekly, monthly or quarterly cross-sectional surveys matched the control definition.
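The parasite density calculation and the fever-case criterion described above can be sketched as follows; the assumed 8,000 leukocytes per µl is the study's convention, while the function names are our own.

```python
# Sketch of the parasite density calculation and the fever-case criterion
# described above. The assumed leukocyte count of 8,000 per microlitre is the
# study's convention; function names are illustrative.
def parasites_per_ul(parasites_counted, leukocytes_counted, wbc_per_ul=8000):
    """Convert a thick-film parasite:leukocyte ratio to parasites per microlitre."""
    return parasites_counted / leukocytes_counted * wbc_per_ul

def is_fever_case(rectal_temp=None, axillary_temp=None):
    """Fever case: rectal >= 38 C (young children) or axillary >= 37.5 C (older)."""
    if rectal_temp is not None and rectal_temp >= 38.0:
        return True
    if axillary_temp is not None and axillary_temp >= 37.5:
        return True
    return False

print(parasites_per_ul(25, 500))           # 25 parasites per 500 leukocytes -> 400/ul
print(is_fever_case(axillary_temp=37.8))   # True
```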
Since parasite prevalence and the occurrence of fever cases associated with the presence of malaria parasites decreased considerably during the most recent years, age-dependent pyrogenic thresholds were investigated separately for the periods 1990-2004, 2005-2007, and 2008-2010. The method used to calculate the age-dependent pyrogenic threshold by a random-effects logistic regression model of age and parasitemia was described earlier for P. falciparum parasitemia [21,22]. Briefly, the logit of the probability p_ij that individual i presented a febrile episode at observation j was expressed as a linear function of age z_i and parasite density x_ij. The best fit was obtained using a series of three dummy variables z_ik, where k was an index coding for the four following age groups: 0-1, 2-6, 7-12, and ≥13 years old, and using the r-th power of x_ij for the parasitemia. A threshold value, in addition to the previous continuous effect of parasitemia, was introduced as a binary variable s_ij and improved the goodness of fit of the model. The pattern of the threshold depending on age and parasitemia, the parameters used and the equation obtained were described previously [22]; b_0 was a constant, a_i were the random-effects individual terms, and b_1, b_2 and b_3 were the regression coefficients. The models for each study period were compared according to the maximum likelihood (minimum deviance) by minimizing the Akaike criterion [23]. All analyses were done with Stata software version 11.0 (College Station, TX, USA).

Incidence density of P. ovale and P. malariae clinical attacks

For the incidence density calculation, P. ovale and P. malariae clinical attacks were defined as any case with fever, or allegation of fever and/or fever-related symptoms, whose parasitemia was higher than the age- and period-specific threshold value of each species derived from the model (P. malariae, all periods, and P. ovale, 1990-2004) or from the case-control study (P. ovale, 2005-2010). Two clinical attacks were counted separately if they occurred fifteen days apart or more. When the associated P. falciparum parasitemia was higher than the age- and period-specific threshold of that species [22], the clinical attack was attributed to P. falciparum only, even in the few cases where the P. ovale or P. malariae parasitemia thresholds were reached simultaneously. However, when distinct peaks of high parasitemia involving two malaria species were successively observed within fifteen days and the threshold was reached for each species, the fever episode was attributed to both malaria species.

Results

P. ovale, P. malariae and P. falciparum prevalences during cross-sectional surveys

From 1990 to 2010, of 29,280 thick blood films examined during bi-weekly (June-September 1990) or quarterly cross-sectional surveys (from November 1990 to 2010), 14,341 (49.0%) were found positive for the presence of one or several malaria species, including 719 P. ovale (2.5%), 3,584 P. malariae (12.2%), and 13,216 P. falciparum (45.1%) infections, with the parasite alone or in association. There were 2,912 mixed infections, including 2,320 P. falciparum/P. malariae, 314 P. falciparum/P. ovale, 12 P. ovale/P. malariae, and 266 P. falciparum/P. malariae/P. ovale associations. Fig. 1 shows trends in malaria prevalence for each Plasmodium species from 1990 to 2010. P. falciparum prevalence was always higher than the prevalence of the two other species, and P. malariae prevalence was always higher than P. ovale prevalence.
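From the description above, the random-effects logistic model can be written out explicitly. The following LaTeX expression is our reconstruction from the stated terms (age-group dummies, a power of parasitemia, a threshold indicator and an individual random effect), not the exact equation published in [22]:

```latex
\[
\operatorname{logit}(p_{ij}) \;=\; b_0 \;+\; a_i
\;+\; \sum_{k=2}^{4} b_{1k}\, z_{ik}
\;+\; b_2\, x_{ij}^{\,r}
\;+\; b_3\, s_{ij},
\qquad
s_{ij} \;=\; \mathbf{1}\!\left[\, x_{ij} \ge \tau(z_i) \,\right],
\]
```

where p_ij is the probability that individual i is febrile at observation j, z_ik are the three age-group dummies, x_ij is the parasitemia, a_i is the individual random effect, and τ(z_i) denotes the age-dependent pyrogenic threshold whose estimate is reported in the Results.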
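The attack-attribution rules and the incidence-density calculation described in this section can also be sketched in code. This is a simplified illustration of the stated rules only; the thresholds, field names and example values are hypothetical, the 15-day episode logic is omitted, and the attack counts and person-days used below are those reported later in the Results.

```python
# Simplified sketch of the attribution rule and incidence-density calculation
# described above; thresholds and example values are hypothetical.

def attribute_attack(parasitemia: dict, thresholds: dict) -> set:
    """Return the species to which a fever episode is attributed.

    Both dicts map species name -> parasites per µl for the patient's age
    group and period. If P. falciparum exceeds its own threshold, the attack
    is attributed to P. falciparum only.
    """
    above = {sp for sp, dens in parasitemia.items()
             if dens > thresholds.get(sp, float("inf"))}
    if "P. falciparum" in above:
        return {"P. falciparum"}
    return above

# Example: mixed infection where only P. malariae exceeds its threshold.
episode = {"P. falciparum": 150, "P. malariae": 2_500, "P. ovale": 0}
cutoffs = {"P. falciparum": 1_000, "P. malariae": 2_000, "P. ovale": 800}
print(attribute_attack(episode, cutoffs))   # {'P. malariae'}

# Incidence density = attacks / person-years of follow-up, using the
# person-days and attack counts reported in the Results below.
person_years = 2_110_321 / 365.25
print(round(219 / person_years, 2))   # ~0.04 P. ovale attacks per person per year
print(round(290 / person_years, 2))   # ~0.05 P. malariae attacks per person per year
```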
For all species, there was a marked decrease in prevalence during the most recent years. Fig. 2 shows P. ovale prevalence by year and age group. There were important annual variations in all age groups, with marked peaks in 1990, 1999 and 2001 reaching up to 7-13% in several age groups of children. However, P. ovale almost disappeared after 2004. As shown in Fig. 3, P. malariae prevalence was very high (~40%) in children during the first years of the project. In the following years, prevalence declined markedly in young children, but remained high and almost unchanged until 2004 in older children and adults. There was a marked decrease in 2005-2009, and this species almost disappeared in 2010.

Criteria for diagnosing P. ovale attacks

The relationship between P. ovale parasitemia and the occurrence of fever during the period 1990-2004 is presented in Table 1. Of 12,182 episodes of fever, 598 (4.9%) occurred among patients infected by P. ovale. Of 37,074 observations among asymptomatic persons, 818 (2.2%) involved subjects infected by P. ovale. There were more fever cases with P. ovale associated with other Plasmodium species (412 cases, 68.9%) than fever cases with P. ovale only (186 cases, 31.1%). The risk of fever increased by 10-fold when P. ovale parasitemia was above 800/µl of blood (p < 0.001). Above 4,000 parasites per µl of blood, only 3 (2.0%) P. ovale infections were asymptomatic and the 146 other infections (98.0%) presented with fever. The maximum P. ovale parasitemia observed during a fever episode was 88,320/µl in a 4-year-old child (second highest value: 36,000/µl in a 3-year-old child), and the maximum parasitemia without documented symptoms was 5,040/µl in a 4-year-old child. Fig. 5 shows the age-dependent pyrogenic threshold levels of parasitemia during the period 1990-2004. Estimates of the parameters defining the age-dependent threshold are presented in Table 2. The highest threshold parasitemia, in 4-year-old children, was 3,800/µl and the lowest threshold parasitemia, in adults, was 350/µl. The relationship between P. ovale parasitemia and the occurrence of fever during the period 2005-2010 is presented in Table 1. Of 2,659 episodes of fever, only 4 (0.15%) occurred among patients infected by P. ovale, and three of these cases (adults aged 19, 30 and 33 years, respectively, all with parasitemia >3,000/µl) corresponded to infections with P. ovale only, the fourth case (parasitemia 7/µl in a 7-year-old child) being associated with P. falciparum. The risk of fever increased by 13-fold when P. ovale parasitemia was above 800/µl of blood (p < 0.001). Of 12,369 observations in asymptomatic persons, only 10 (0.08%) involved persons infected by P. ovale, and parasitemia ranged from 2 to 960/µl.

Criteria for diagnosing P. malariae attacks

Fig. 6 shows the mean P. malariae symptomatic and asymptomatic parasitemias according to age group between 1990 and 2010. Considering the changes in P. malariae prevalence during the 20 years of follow-up, the relationship between P. malariae parasitemia and the occurrence of fever was investigated separately during three different periods: from June 1990 to December 1994, from January 1995 to December 2004, and from January 2005 to December 2010 (Table 3). During the first period (1990-1994), 797 (27.1%) of 2,938 fever cases occurred in patients infected by P. malariae. Of 16,785 observations in asymptomatic persons, 3,150 (18.8%) involved persons infected with P. malariae. There were more fever cases with P. malariae associated with P. falciparum and/or P.
ovale (765 cases, 96.0%) than fever cases with P. malariae only (32 cases, 4.0%). The risk of fever increased moderately when P. malariae parasitemia was higher than 3,000/µl or 6,000/µl of blood. All infections were symptomatic when parasitemia was higher than 8,500/µl of blood. The maximum P. malariae parasitemia observed during a fever episode was 16,560/µl in a 4-year-old child (second highest value: 13,600/µl in a 3-year-old child), and the maximum parasitemia without documented symptoms was 8,400/µl in a 3-year-old child. The age-dependent pyrogenic threshold levels of P. malariae parasitemia during the three periods 1990-1994, 1995-2004 and 2005-2010 are shown in Fig. 7. Estimates of the parameters defining these thresholds are presented in Table 4. During the first period, the highest threshold parasitemia, in 4-year-old children, was 3,800/µl and the lowest threshold parasitemia, in adults, was 500/µl. During the second period, the highest threshold parasitemia, in 4-year-old children, was 2,000/µl and the lowest, in adults, was 400/µl. During the third period, the highest threshold parasitemia, in 3-year-old children, was 2,000/µl and the lowest, in adults, was 300/µl.

P. ovale and P. malariae clinical attacks

Over the 20 years of study, there were 22,266 episodes of fever or fever-related symptoms during the 2,110,321 person-days of clinical monitoring of the study population (children: 1,013,054 days; adults: 1,097,267 days). 219 clinical malaria attacks were attributable to P. ovale and 290 to P. malariae. P. ovale attacks were observed in 155 persons from all compounds of the village (1, 2, 3, 4 and 5 attacks in 106, 38, 8, 2 and 1 villagers, respectively). Of the 38 individuals with two attacks, 18 presented their second attack within twelve months after the first attack (13 within six months). Of the 11 individuals with three or more attacks, 10 presented all attacks (5 individuals) or all attacks except one (5 individuals) within a twelve-month period between two successive attacks, and one individual with three attacks had all attacks separated by at least two years each. The youngest child who presented P. ovale clinical malaria was a 2-month-old infant and the oldest person was an 89-year-old man. The median parasitemia reached during the 219 attacks was 8,426/µl. A temperature below 38°C (rectal) or 37.5°C (axillary) with fever-related symptoms only was observed in 40 attacks. A temperature ≥39°C was documented in 126 attacks. The average incidence density of P. ovale attacks at the community level from 1990 to 2010 was 0.04 attacks per person per year. Fig. 8 shows that the annual incidence density of P. ovale attacks was maximum in 1999. P. malariae attacks were distributed as 2, 3, 4, 5 and 6 attacks in 29, 20, 8, 4 and 6 villagers, respectively, and 9 and 13 attacks in one villager each. Of the 29 individuals with two attacks, 16 presented their second attack within twelve months after the first attack (13 within six months). Of the 40 individuals with three or more attacks, 34 presented all attacks (16 individuals) or all attacks except one (18 individuals) within a twelve-month period between two successive attacks, and six individuals with three attacks or more had from three to five attacks separated by at least one year. The youngest child who presented P. malariae clinical malaria was a 2-month-old infant and the oldest person was a 42-year-old man. The median parasitemia reached during the 290 attacks was 4,167/µl. A temperature below 38°C (rectal) or 37.5°C (axillary) with fever-related symptoms only was observed in 34 attacks.
A temperature ≥39°C was documented in 188 attacks. The average incidence density of P. malariae attacks at the community level from 1990 to 2010 was 0.05 attacks per person per year. Fig. 10 shows that the annual incidence density of P. malariae attacks was maximum in 1998, reaching 0.11 attacks per person per year, and minimum in 2009 and 2010, when no attack was observed. Fig. 11 shows the incidence density of P. malariae clinical attacks according to age during the whole study period. 265 of 290 cases (91.4%) occurred in children under 15 years old, and the mean incidence density was maximum in children 5-9 years of age, where it averaged 0.15 attacks per child per year. The maximum incidence density in a given year and age group was observed in 2002 in children 5-9 years of age, who presented 0.39 attacks per child. A total of 25 of 290 attacks (8.6%) occurred in adults.

Discussion

Few studies have attempted to measure the burden of P. ovale and P. malariae in malaria endemic areas of tropical Africa. Although many comprehensive studies have investigated P. falciparum malaria, both in patients attending clinics and in cohort population studies where the occurrence of clinical malaria episodes was closely monitored, generally no information was published regarding data collected on P. ovale and P. malariae during these studies. Several reasons may explain the rarity of published work on these two malaria parasites, including the low incidence of the disease and, due to the lack or rarity of severe cases, its much lower importance than P. falciparum in terms of public health, but also the practical difficulties of such studies, which need specifically trained microscopists (young ring forms of the three species may be difficult to distinguish in thick blood films, especially during coinfections) and may require specific data analysis methods, since P. ovale and P. malariae are often associated with P. falciparum at various but often very low levels of parasitemia. The longitudinal study conducted in Dielmo during 20 years shows dramatic changes in the prevalence of the three malaria species between 1990 and 2010. Prevalence was high during the first years of the project and declined to very low values in the most recent years. However, trends differed significantly according to species, especially for P. ovale between 1990 and 2004, which clearly presented strong annual variations independent of treatment policies. It would have been interesting to know whether both (sub)species of the P. ovale complex (wallikeri and curtisi [24]) were involved in these annual variations, but our preliminary retrospective PCR investigations with preserved stained thick blood films were unsuccessful in clarifying this point. After 2004, P. ovale almost disappeared and P. malariae became much rarer, probably in relation to the switch from chloroquine to amodiaquine + SP as first-line treatment of malaria attacks, although P. ovale and P. malariae infections have always been sensitive to chloroquine and other antimalarials. Unfortunately, P. ovale and P. malariae specific entomological inoculation rates were only monitored during the first years of the study [8], due to the lack of availability of good-quality monoclonal antibodies for these species in the following years. Between 1990 and 1992, 15.5% of A. gambiae s.l. and A. funestus mosquitoes with sporozoites in their salivary glands detected by dissection were infected by P. malariae, alone or in association with P. falciparum and/or P. ovale, and 8.2% were infected by P. ovale [8].
In malaria endemic areas, where asymptomatic infections are highly prevalent, the detection of malaria parasites in persons with fever is not a sufficient criterion for distinguishing malaria from other causes of fever. Methods based on parasite density are widely used to confirm or discard the diagnosis of P. falciparum clinical malaria and to assess the burden of malaria [21,22,25-27], but these methods have never been applied to P. malariae and P. ovale. To our knowledge, the only comprehensive study of P. malariae morbidity in tropical Africa is the work of Miller [16] in Liberia, who investigated the level of parasitemia associated with fever in a cohort of 20 adults aged from 20 to 30 years and 10 children aged from 3 to 7 years. Seven attacks were attributed to P. malariae during the study period, including two among adults with parasitemia ranging from 22 to 136/µl (mean: 79/µl) and five among children with parasitemia ranging from 1,650 to 5,935/µl (mean: 3,172/µl). In our study, in contrast to P. falciparum and P. ovale, P. malariae attacks in each age group were often associated with levels of parasitemia only scarcely higher than the levels commonly observed during asymptomatic infections. In the case of P. ovale, we presented in a previous paper an analysis of the relationship between parasitemia and fever in Dielmo during the period 1990 to 1996 using a case-control approach [19]. The results indicated that only parasite densities ≥800/µl were significantly associated with clinical symptoms. A constant threshold value of 800/µl for all age groups over the 20 years would have given 261 clinical P. ovale attacks, i.e. 42 attacks more than the 219 attacks measured by the age-dependent threshold-effect model, most of these additional attacks occurring in children under 10 years. The incidence of P. ovale and P. malariae attacks in the Dielmo population was much lower than the incidence of P. falciparum attacks. Using a similar model, 7,978 P. falciparum malaria attacks were diagnosed between October 1990 and December 2010 [22]. All P. ovale and P. malariae attacks were mild and of short duration, but fever was often high, and one P. ovale attack was responsible for a stillbirth. P. malariae was more involved than P. ovale in mixed infections with P. falciparum, even at high parasitemia. There were 7 attacks due to P. ovale and 3 attacks due to P. malariae within 15 days of a P. falciparum clinical attack. Although an analysis of the interactions between malaria species and the presumed respective roles of species-specific and cross immunity is beyond the scope of this paper, some observations are of interest. Both P. ovale and P. malariae first infections and attacks in infants were observed during the third month of life, although the mothers of these infants had spent much of their life under high transmission conditions, suggesting that maternally transmitted immunity was insufficient to prevent infection and disease, in particular when there was no pre-existing P. falciparum infection. In children, the incidence of P. ovale and P. malariae attacks was much lower than expected from data on the incidence (P. ovale) or prevalence (P. malariae) of patent infections as measured by bi-weekly, weekly or monthly microscopy [8], and this was particularly the case during the first years of the project, when the prevalence of P. falciparum asymptomatic infections was maximum. The duration of patent infections was much shorter for P. ovale than for P.
malariae, and the dynamics of parasitemia differed (even considering that microscopic examination underestimates parasite prevalence compared with PCR studies [28,29]), clearly indicating that only a low proportion of new infections, in a small number of individuals each year, were responsible for a clinical attack, and suggesting a protective role of co-infections with P. falciparum. In fact, most P. ovale clinical attacks occurred in individuals free of malaria parasites during the previous days or weeks (independently of P. falciparum malaria treatments), but this was less often the case for P. malariae attacks. Furthermore, no marked peak of parasitemia was observed for most clinical attacks attributed to P. malariae, and since most P. malariae infections were long lasting, often with high levels of chronic parasitemia in several children, it remains unclear to what extent our model was able to provide accurate measurements of P. malariae morbidity for all individuals. The two children who were attributed 13 and 9 P. malariae attacks were permanent residents of Dielmo and suffered these attacks between the ages of 2-13 years and 7-13 years, respectively. Almost one third (31.1%) of P. ovale clinical attacks occurred among adults, versus only 8.6% for P. malariae. For both species, a high proportion of clinical attacks occurred among permanent residents of Dielmo. On average, the mean incidence density of clinical attacks in adults was 2.9-fold higher for P. ovale (0.023 attacks per person per year) than for P. malariae (0.008 attacks per person per year). These results suggest that, at least for some individuals continuously exposed since birth to many reinfections by P. ovale and P. malariae, acquired immunity may be lost or insufficient to prevent any clinical attack. However, as also observed for P. falciparum attacks, the duration of fever and other symptoms rarely exceeded one or two days, even when no antimalarials were given [30]. Chronic nephrotic syndromes attributed to P. malariae have been reported in the literature [5,31,32]. They carry a high rate of mortality both in children and in adults, but this association remains at least in part controversial and was not specifically investigated in our study. During the 20-year period of surveillance, 60 deaths occurred in Dielmo villagers, including two deaths with renal failure that were unlikely to be related to P. malariae (one man and one woman, aged 78 and 71 years, respectively). In the case of P. ovale, life-threatening illness may also occur at least occasionally, as recently reported in a non-immune traveler returning from Nigeria [33]. Although together representing only 5.9% of the whole malaria morbidity among Dielmo villagers, P. ovale and P. malariae were a relatively common cause of morbidity in most age groups, including adults, until the recent dramatic decrease of malaria that followed the introduction of new control policies combining ACTs and LLINs. P. ovale and P. malariae may remain an important cause of morbidity in many areas of tropical Africa.
Anxiety and related factors in frontline clinical nurses fighting COVID-19 in Wuhan

Abstract

The aim of this study was to examine the anxiety status of the frontline clinical nurses in the designated hospitals for the treatment of coronavirus disease 2019 (COVID-19) in Wuhan and to analyze the influencing factors, in order to provide data for psychologic nursing. This study used a cross-sectional survey design and convenience sampling. The questionnaires were completed by 176 frontline clinical nurses. Anxiety was determined using the Hamilton anxiety scale. General data were collected using a survey. Correlation analyses were used. Among the 176 frontline nurses, 77.3% (136/176) had anxiety. The anxiety scores of the frontline clinical nurses fighting COVID-19 were 17.1 ± 8.1. Anxiety symptoms, mild to moderate anxiety symptoms, and severe anxiety symptoms were found in 27.3%, 25%, and 25% of the nurses, respectively. Sex, age, marital status, length of service, and clinical working time against COVID-19 were associated with anxiety (P < .05). The frontline nurses working in the designated hospitals for the treatment of COVID-19 in Wuhan had serious anxiety. Sex, age, length of service, and clinical working time against COVID-19 were associated with anxiety in those nurses. Psychologic care guidance, counseling, and social support should be provided to the nurses to reduce their physical and mental burden. Nursing human resources in each province should be adjusted according to each province's reality.

Introduction

China is a vast country with complicated terrain in its various provinces and cities. Major natural disasters, accidents, public health and safety incidents, and disease epidemics occur from time to time. [1] Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is currently epidemic in China, causing a large number of cases of coronavirus disease 2019 (COVID-19), which can cause severe respiratory disease and death in severe cases. Since December 2019, the virus has been spreading throughout the country and the entire world, and the World Health Organization raised a level 1 alert. As of February 2, 2020, a total of 14,423 cases were confirmed in China, of which 304 died, for a mortality rate of 2.1% in China. Person-to-person transmission and aerosol transmission are now recognized as routes of transmission between nurses and patients and within families. Medical personnel are the core of the rescue team. [2] Nurses are always present at the frontline of any public health situation or crisis, and human-to-human and aerosol transmission not only threaten the frontline nursing staff but also carry a great psychologic impact. At present, China has a large number of nurses engaged in the battle against COVID-19. Due to the sudden outbreak of SARS-CoV-2, the number of nurses available for the response was very limited, and most of them did not have enough experience and preparation to deal with it. [3] Disasters always cause psychologic problems of varying degrees. [4] COVID-19 was not only a disaster for the Chinese community but also a critical challenge for the medical staff, with its load of detrimental psychologic impacts. When facing a deadly situation involving a dangerous virus, large numbers of patients, and highly intensive work, psychologic problems of different degrees are bound to occur. [5] To fight this "psychologic epidemic" secondary to COVID-19, it must first be characterized so that it can be managed appropriately.
The first patients with COVID-19 reported exposure to a large seafood and live animal market in Wuhan City, Hubei province, suggesting a potential zoonotic origin. Wuhan city is at the core of the battle against SARS-CoV-2 and is also the hardest-hit area in China. To understand the psychologic state of the first cohort of frontline nurses in the designated hospitals in Wuhan city, we investigated and analyzed their anxiety and the related factors, hoping to provide data for the psychologic intervention of frontline and rescue nurses.

Subjects

Frontline nurses in hospitals treating COVID-19 in Hubei province in January 2020 and February 2020 were enrolled. The nurses were from the tertiary hospitals in Wuhan city, Hubei province, that were designated to receive new patients with COVID-19. This study was approved by the ethics committee of the Guangxi University of Chinese Medicine. Informed consent was obtained from all participants included in the study.

Data collection

Two instruments were used to collect the data. The general information questionnaire included sex, age, ethnicity, length of service, professional title, education level, marital status, and clinical working time against COVID-19. The Hamilton rating scale for anxiety (HAMA) [6] is the most commonly used clinician-rated measure of anxiety in treatment studies of depression. [7] It consists of 14 symptom-defined elements and covers both psychologic and somatic symptoms, comprising anxious mood; tension (including startle response, fatigability, and restlessness); fears (including of the dark, strangers, and crowds); insomnia; "intellectual" (poor memory or difficulty concentrating); depressed mood (including anhedonia); somatic symptoms (including aches and pains, stiffness, and bruxism); sensory (including tinnitus and blurred vision); cardiovascular (including tachycardia and palpitations); respiratory (chest tightness and choking); gastrointestinal (including irritable bowel syndrome-type symptoms); genitourinary (including urinary frequency and loss of libido); autonomic (including dry mouth and tension headache); and observed behavior at interview (restless, fidgety, etc.). According to the data provided by the scale collaboration group in China, a total score ≥29 indicated severe anxiety, ≥21 points indicated obvious anxiety, ≥14 points indicated anxiety, ≥7 points indicated possible anxiety, and <7 points indicated no symptoms of anxiety.

Study design

A total of 176 participants completed the survey. From those participants, 176 questionnaires were collected and all answers were complete, for an effective recovery rate of 100%. The questionnaires were built on the online questionnaire platform "Wenjuan Star" and distributed on the platform "WeChat." Before the investigation, a WeChat group was established to invite the frontline clinical nurses to join. The researchers explained in detail the purpose of the survey and the principles of anonymity and confidentiality in the group, asked the respondents to answer truthfully according to their actual situation, forwarded the QR code to the WeChat group, and asked the respondents to complete and submit the questionnaire during their rest time.

Statistical analyses

The collected data were analyzed using SPSS 21.0 (IBM, Armonk, NY). Categorical data were expressed as absolute numbers and percentages (%). Continuous data were expressed as mean ± standard deviation and analyzed using the Student t test, analysis of variance, and correlation analysis.
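The HAMA cutoffs described above define a simple ordinal categorization. The following is a minimal illustrative sketch of how total scores could be bucketed; the function name is our own, and the labels follow the cutoffs quoted from the scale collaboration group:

```python
def hama_category(score: int) -> str:
    """Map a total HAMA score to the anxiety categories quoted above."""
    if score >= 29:
        return "severe anxiety"
    if score >= 21:
        return "obvious anxiety"
    if score >= 14:
        return "anxiety"
    if score >= 7:
        return "possible anxiety"
    return "no anxiety symptoms"

# Example: the mean score reported in this study (17.1) falls in the "anxiety" band.
print(hama_category(17))  # anxiety
```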
Statistical significance was defined as P < .05.

Characteristics of the participants

There were 176 participants included in the study. The characteristics of the frontline clinical nurses working against COVID-19 are shown in Table 1.

Table 1. The sociodemographic characteristics of the frontline clinical nurses against COVID-19 (n = 176, %).

Anxiety levels

The average anxiety score of the 176 nurses was 17.1 ± 8.1, and 77.3% of them had anxiety symptoms.

Univariable analyses of the influencing factors

To determine the factors that influenced the anxiety of the frontline clinical nurses against COVID-19, univariable analyses were performed. The results showed that sex, age, length of service, and clinical working time against COVID-19 were associated with anxiety (all P < .05, Table 2). The anxiety scores in females were significantly higher than those in males (P < .05). Older nurses had higher levels of anxiety than younger nurses (P < .05). Married nurses had higher levels of anxiety than unmarried nurses (P < .05). The longer the clinical hours spent fighting COVID-19, the higher the anxiety level (P < .05). The shorter the clinical service, the higher the anxiety level (P < .05).

Current situation and causes of anxiety among frontline nurses

The pressure sources of nursing work can come from the objective environment as well as from subjective perception. [8] SARS-CoV-2 is a new, highly infectious coronavirus never before encountered by humans. Thus, patients need long-term care by the doctors and nurses, and this disrupts normal life and work to a certain extent. At the same time, the long-term fight, the instability and uncertainty of patients' condition, and concerns about the health status of patients have a huge impact on the physiology, psychology, and quality of life of the nurses. [9] With the outbreak of infectious public health events, most frontline nurses do not know much about new or sudden infectious diseases and closed management, leading to fear. [10] In the present study, 136 (77.3%) frontline nurses had symptoms of anxiety, and 44 (25%) had severe anxiety, which is consistent with the 72.8% incidence of anxiety and depression symptoms of nurses in previous studies. [11] This indicates that the disaster brings serious psychologic problems to the frontline nurses, whose inner trauma is an urgent problem to be solved.

The reasons for the psychologic stress response in frontline nurses

The causes of the psychologic response in the frontline nurses mainly include the following aspects.

4.2.1. The supply of protective equipment is tight, and nurses are insecure and worried about infection. In the face of COVID-19, the protection requirements for paramedics are very strict. Various papers and textbooks plainly describe wearing level-D protective clothing against respiratory viruses, but in practice this is not a simple task. It takes at least 5 minutes to put it on, and taking off level-D protective clothing is even more difficult than putting it on. To save protective clothing and the time needed to change it, nurses wear diapers to work, are unable to drink water, and are unable to go to the toilet. Because adult diapers have limited capacity, the nurses are limited to the consumption of small amounts of milk, which aggravates their anxiety and depression. Our research showed that female nurses were more anxious than male nurses. The characteristics of females can be considered from 2 aspects: physiologic and psychologic.
Physically, females are not as strong as males; psychologically, female nursing personnel were slightly more resilient than males, and females are more sensitive than males. This physical discomfort exacerbated the female nurses' anxiety. Our research shows that the longer the nurses work at the frontline against COVID-19, the more anxiety they experience. Because they did not know the virus, the source of infection, or the routes of transmission, and because they lacked awareness of prevention and control in the early stages, the nurses had high levels of anxiety. Despite improving knowledge of COVID-19, the psychologic pressure on the nurses kept increasing. Anxiety among nurses has been exacerbated by the recent discovery that COVID-19 can also be transmitted through aerosols. In this information age, a large number of unverified statements are reported in the news, causing panic among the public. At first, a large number of patients rushed to the hospital, aggravating the burden on the first cohort of designated hospital nurses. In such an environment, the anxiety of the frontline nurses in Wuhan hospitals, which have been exposed to patients with COVID-19 for the longest time and have the largest number of patients, will be further intensified. According to one study, 100% of the nurses in the infection department of the emergency department requested to be transferred because they were concerned about the threat of the environment to their health. [12] The results also showed that marital status was another relevant factor. The main reason is that nurses worry about spreading the virus to their families, or that they do not have the equipment or medication to treat them. There were nurses, or family members of nurses, who were infected but had no bed, no hospitalization, and no special privileges.

4.2.2. Heavy workload. In the face of anxiety, the body can relieve stress through its own mechanisms. Nevertheless, the frontline nurses were in a state of overload and super-intense work, constantly under stress, and on the verge of their physical and psychologic limits.

Table 2. The HARS scores of the frontline clinical nurses against COVID-19 (n = 176).

The intensity and strain of the work of the medical staff in the isolation wards during the response to SARS was one of the main factors for psychologic stress. [13] From the perspective of sex, women's physical capacity is not as good as men's, and the excessive workload inevitably leads to greater anxiety in women than in men. Another factor was that the longer they spent at the frontline, the more anxious and depressed the nurses were. The frontline nurses were scheduled in the APN mode, and each shift lasted 8 hours. Due to the large number of patients, unstable conditions, and rapid changes in patients' condition, nurses actually worked an average of 10 hours per day, and some nurses worked up to 40 hours per week. Nurses are expected to work in a meticulous, long, and focused manner, which is another major contributor to anxiety, as supported by a previous study. [14] In the face of such a huge workload and a strong source of infection, many frontline nurses were infected, some fell ill, nurses with suspected infection were isolated, the number of nurses able to work normally was declining, and the workload was increasing. Long-term overload and super-intense work keep nursing staff in a state of constant stress, on the verge of their physical and psychologic limits. In addition to preventing infection, mental health is crucial.
Only by maintaining a healthy state of mind can we put ourselves into work efficiently.

4.2.3. Less experience in dealing with public emergencies. In the face of the suddenly increasing number of fever patients and the human resource gap, hospital nursing staff from other departments were deployed, and non-nursing staff were also deployed, leading to less experienced staff having to deal with COVID-19 and further increasing stress and psychologic pressure. Our results showed that age and length of service were 2 related factors influencing the anxiety of frontline nurses. By comparing the anxiety of the frontline nurses of different working ages, it was found that the incidence of anxiety in nurses with low seniority was higher, while the incidence of anxiety in nurses with high seniority was lower, which may be related to long-term psychologic stress and clinical knowledge. The results show that most frontline nurses were young and middle-aged, mainly female, and with 1 to 10 years of experience (81.8%). They lack experience and, in the face of such a sudden disaster, experience psychologic fear and have poor psychologic endurance. As someone once said: "they are just a group of children who have changed into a different suit of clothes and, in the manner of their predecessors, are healing and saving people and wrestling them back from death!"

4.2.4. Guidance of public opinion. This outbreak differs significantly from the SARS outbreak of 2003 in the speed of information transmission. In the present outbreak, information travels faster, but the authenticity of much of it cannot be guaranteed, which aggravates people's suspicion and worry and easily causes fear, further aggravating the psychologic burden of the nurses.

Strategies to counter anxiety among frontline nurses

4.3.1 As professional medical workers, the first thing they need to do is acquire a correct understanding of COVID-19 and avoid the rumors on the Internet. It is also essential to be able to disseminate knowledge accurately and not panic in the face of the disease. To carry out authoritative interpretation and education in a timely manner, the frontline nurses can update their knowledge of COVID-19 through 5-minute professional education sessions at shift change, authoritative releases in WeChat groups, compiled data manuals, etc., so that they can know clearly that what they are doing is the best treatment plan, and remain consistent in thinking and action, confident, and orderly in their work. In this way, the anxiety caused by fear, remorse, and guilt is reduced.

4.3.2 COVID-19 knowledge training should be integrated with their jobs. Preventive interviews should be carried out so that nurses can discuss their inner feelings, with appropriate resources provided for healthcare workers to manage their stress.

4.3.3 After entering the frontline of the epidemic, scientific, rational, and clear management and division of labor should be strengthened, with clear working standards and targets. Unnecessary repetitive work should be reduced, and reasonable and effective incentive strategies should be established. Reasonable scheduling, appropriate relaxation and rest, and adequate sleep and diet should be emphasized. Good interpersonal relationships, including the relationships among medical staff and between doctors or nurses and patients, should be established and maintained so that medical tasks can be carried out and medical goals achieved in a harmonious working atmosphere.
4.3.4 In the face of an outbreak, everyone has more or less negative emotions, especially healthcare workers who directly face the patients. When negative emotions arise, we should reasonably face and accept them, and fully accept that their emergence is reasonable. Glossing over these emotions or blaming oneself for them will eventually lead to a vicious circle and aggravation. Negative emotion management should be done well, as follows.

4.3.4.1 When emotions are difficult to control and affect work status, it is recommended to leave the stressor temporarily if possible, for example when feeling helpless in the face of illness or when facing criticism from patients or family members. Taking time off can help calm emotions quickly and allow a return to work.

4.3.4.2 Learning to express emotions correctly, confiding in colleagues and friends, making daily scheduled calls and exchanging information with family members, and writing down emotions on paper and then tearing it up and throwing it in the trash might help emotional catharsis. Crying is not a characteristic of the weak. Tears can be a source of emotional catharsis and relaxation, conducive to the maintenance of mental health. In addition, those with very severe anxiety should use this time for possible psychologic treatment.

4.3.4.3 When taking a break from work, the nurses should try not to seek information about the epidemic, should avoid related material and their circle of friends' posts, chat with the people around them about unrelated topics, and pay attention to nutrition, appropriate physical exercise, and relaxation.

4.3.4.4 During work, the nurses should focus on doing a good job in each medical process, focus on helping everyone around them, affirm the value of each task, and encourage and affirm the work of their colleagues in a timely manner. In particular, they should avoid feeling guilty about small mistakes or blaming others for mistakes. What is most needed in emergency work is mutual help and a shared awareness of making up for errors. In situations where they feel powerless, the nurses should tell themselves that they are not omnipotent, that their energy is limited, that it is impossible to help everyone around them on their own, and that they should rely on their partners.

4.3.5 Whenever possible, they should try to keep in touch with their family and the outside world. They should keep track of how their family and friends are doing to alleviate worry about them. They should stay aware of the outside world to reduce feelings of isolation.

4.3.6 They should build a place in their mind that is their own and cannot be disturbed by others. It must be a safe environment that they can use and control. It can be a familiar bed, a small yard, a small room, etc. In the process of recalling it, they already feel rest and relaxation. While doing so, the nurses can suggest to themselves, "I am particularly comfortable and safe in that place, and this place is protected and relaxing," to evoke the corresponding physical sensations and allow the body to fully relax and rest before resuming the fight.

4.3.7 For relaxation training, they should lie flat on a bed in a comfortable position with one hand on their abdomen and the other on their chest. They should exhale slowly to feel that their lungs have enough space to breathe deeply.
They should breathe in slowly through their nose until they can breathe in no more, then slowly exhale through the mouth, with the thought that all the annoyance and pressure are exhaled with the stale air. This should be repeated for 10 minutes with smooth, soothing music. In summary, 77.3% of the nurses working at the clinical frontline against COVID-19 had anxiety symptoms. About 25% of the frontline nurses had severe anxiety symptoms, indicating that the emotional state and burden of the nurses are worrying. Sex, work experience, and frontline care time were major influencing factors of the frontline clinical nurses' emotions. These results indicate that clinical nurses should receive psychologic care guidance, counseling, and social support to improve their mental health.

Limitations

The research time was short and the number of participants was small. Therefore, there are some limitations in this investigation of the psychologic state of the frontline medical workers.
Effect of Spices on Consumer Acceptability of Purple Tea (Camellia sinensis)

Spices have been used by consumers worldwide to improve the flavours of food, including tea. A study was done to determine the effect of selected spices on consumer acceptability of spiced purple tea, their antioxidant properties and the economic impact. TRFK 306 (a purple tea variety) was used. Flavoured teas were developed by blending the un-aerated purple tea with selected spices including ginger, lemon grass, nutmeg, cinnamon, tea masala (spice mix), and rosemary at different ratios, and the resulting products were brewed and assessed by a sensory panel. Antioxidant activity, catechin analysis and sensory evaluation were done, and the results showed that all the spices had lower antioxidant activities than un-aerated tea from TRFK 306. Cinnamon had an antioxidant capacity of 89.89%, ginger 69.23%, rosemary 89.47%, tea masala 55.79%, nutmeg 46.99% and purple tea (TRFK 306) 92.53%. Spices had a positive effect on consumer acceptability of purple tea at various threshold ranges. The three best-rated spices were cinnamon at 10%, lemongrass at 10% and nutmeg at 25%, with mean values of 6.88, 6.24 and 6.92, respectively, on a hedonic scale. The results showed that some spices are preferred more with tea than others and that some have lower threshold detection values than others. Overall, addition of suitable spices to the purple tea led to increased acceptability of the tea. Economic evaluation of purple tea blended with nutmeg showed a significant increase in cost with increasing spice content: Ksh 56.00, Ksh 58.07 and Ksh 61.17 per 100 g for 0%, 10% and 25% spice-to-tea ratios, respectively.

Introduction

The Kenyan population consumes only 5% of the tea produced in the country, which is mostly aerated (black) Cut, Tear and Curl (CTC) tea [1] [2]. According to the Food and Agriculture Organization (FAO) of the United Nations (UN), the world market for aerated (black) tea is anticipated to shrink in future, whereas that for un-aerated (green, purple, etc.)
tea and other forms of specialty teas is expected to grow [3]-[5]. Kenya has therefore embarked on opportunities to diversify its tea products in order to access this market [6]. To address these issues, the Tea Research Institute (TRI) has taken up the challenge of developing technologies for the production of these tea products [3] [7]. Product diversification is expected to lead to increased production and utilization of the tea crop through value addition [3] [6] [8]-[10], a practice that has commenced in several areas. Un-aerated purple tea is a relatively new product in the Kenyan market and the world market at large, and for this reason it has a lower market share compared to black and green tea [2]. In terms of taste, un-aerated purple tea is as mild as black tea, owing to its anthocyanin content [7] [8]. Spices have been used by people from various walks of life to modify the flavour of foods and beverages, making them more appealing to consumers. Different spices are suitable for different foods, while others have a broad range of application. In the Kenyan market, some spices are sold for use with a broad range of foods, while others are sold specifically for particular foods such as tea (tea masala, ginger) and rice (pilau masala). Apart from their flavours and aroma, spices have health benefits which can be championed to sell the tea product [11]-[13]. This study focused on determining the effect of spice addition on consumer acceptability, antioxidant properties and pricing of the developed spiced un-aerated purple tea product from TRFK 306 [14] [15]. The work was carried out in three phases, which included developing flavoured teas by blending the processed un-aerated purple tea with selected spices including ginger, lemon grass, nutmeg, cinnamon, tea masala, and rosemary. Different ratios of tea and spices were blended and the resulting products assessed using a sensory panel. The antioxidant capacities of the products and the impact of spice addition on the resulting tea-spice mixes were also determined. The results of the assays were statistically analyzed and interpreted [14]-[16].

Raw Materials

Purple tea (TRFK 306) and TRFK 6/8 were used. TRFK 6/8 is usually used as a standard for quality black or green tea in Kenya [17], while TRFK 306 is a new tea variety characterized by purple leaves and rich in anthocyanins [18] [19]. Un-aerated purple tea was processed at TRI, Kericho, Kenya, using the method by [20]. Spices including ginger, lemon grass, nutmeg, cinnamon, tea masala, and rosemary were obtained from local retail stores.

Spiced Tea Development

Selected spices were obtained dried and ground. Blending was done on a weight-to-weight (w/w) ratio of spice to processed tea, mixed manually by hand in stainless steel holding vessels for about fifteen minutes and left overnight for complete flavouring. Initially the six spices were blended with the processed un-aerated purple tea at high percentages of 25%, 50% and 75%, and these were later lowered to 5%, 10%, 15% and 25% to determine sensory and economic threshold levels.
Extraction and Quantification of Anthocyanins

Extraction and analysis of anthocyanins were done according to the method by [21]. Anthocyanins were quantified only in purple tea (TRFK 306). The standards used were cyanidin-3-O-galactoside, cyanidin-3-O-glucoside, cyanidin chloride, delphinidin chloride, petunidin chloride, pelargonidin chloride and malvidin chloride, purchased from Sigma-Aldrich, UK. Quantification of anthocyanins was performed at 520 nm. The total anthocyanin content was expressed as a concentration by mass on a sample dry-matter basis (Table 1).

Antioxidant Activity

The antioxidant properties of all the spices and of spiced un-aerated purple tea from TRFK 306 were determined using the method described by (Ochanda et al., 2011). The stable 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical was used for the determination of the free radical scavenging activity of the teas and spices. A spectrophotometer (UV-Vis Shimadzu 1800) was used to determine the absorbance at 517 nm. The percentage inhibition of the DPPH radical was calculated [23] (Table 2).

Sensory Evaluation

Sensory evaluation was done on the 6 developed un-aerated spiced purple tea products from TRFK 306. Two (2) grams of the teas were infused with hot water (2 g of tea/250 ml of water) and sweetened with sugar (2 g of sugar/250 ml of infused tea) before serving to a sensory evaluation panel for assessment. Panelists of mixed gender, aged between 18 and 65 years, participated in the exercise [14]-[16] [24]. Randomized warm (25°C-30°C) samples of 20 to 25 mL were served in clear 170 mL glasses marked with random digit numbers and covered with aluminium foil. Potable clean water was provided for rinsing of the palate during the exercise. Evaluation was conducted at room temperatures of 20°C-22°C under natural light. The samples were evaluated using a 9-point hedonic scale (IDF, 1987). This scale consisted of the test parameters of taste, smell, texture, general acceptability, and colour, accompanied by a scale of nine categories: 1 = dislike extremely; 2 = dislike much; 3 = dislike moderately; 4 = dislike slightly; 5 = neither dislike nor like; 6 = like slightly; 7 = like moderately; 8 = like much; 9 = like extremely [14]. Prior to evaluation, a session was held to familiarize panelists with the evaluation process. Panelists were asked to read through the questionnaires, and the meaning of each attribute (taste, smell, flavour, colour, acceptability) was explained [15] [24]. No discussions were allowed during the exercise. The sensory evaluation data were presented as means of five groups of panelists' scores using the SAS 9.1 statistical package [25]. Significant differences were accepted at P ≤ 0.05 [5] [26] [27].

Economic Evaluation

The best-rated spiced tea was used to determine economic threshold values for the consumer market. Prices of un-aerated purple tea products and cinnamon spice were averaged and used for the economic evaluation and analysis. Statistical evaluation was used to determine significant differences in prices (P ≤ 0.05) [5] [27].
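The DPPH result in Table 2 is a percentage inhibition; the standard way this is computed from absorbance readings is sketched below. This is an illustrative formula only, assuming the usual control-versus-sample calculation at 517 nm; the example absorbance values are hypothetical.

```python
def dpph_inhibition(abs_control: float, abs_sample: float) -> float:
    """Percentage inhibition of the DPPH radical from 517 nm absorbances."""
    return (abs_control - abs_sample) / abs_control * 100.0

# Example: a control absorbance of 0.80 and a sample absorbance of 0.06
# correspond to 92.5% inhibition, the order of magnitude reported for
# un-aerated purple tea in Table 2.
print(round(dpph_inhibition(0.80, 0.06), 1))  # 92.5
```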
Results and Discussion

The assay of purple tea in comparison with the standard quality TRFK 6/8 revealed that the purple TRFK 306 had less total catechins (9.11%) than the green tea from TRFK 6/8 (14.68%). However, TRFK 306 had higher levels of anthocyanins (945.30 µg/ml), which were significantly (P ≤ 0.05) greater than those of green tea from TRFK 6/8 (50.0030 µg/ml) (Table 1). The purple tea has high quantities of anthocyanins as well as catechins. This phytochemical profiling is essential for the determination of tea quality. The antioxidant activities assayed for the spices and teas revealed that the tea had the highest antioxidant activity (92.73%), followed by cinnamon at 89.89%, rosemary at 89.47%, ginger at 69.23%, tea masala at 55.70% and nutmeg at 46.99% (Table 2). Spices significantly (P ≤ 0.05) lowered the antioxidant activity of the un-aerated tea from TRFK 306 (Table 2). Increasing quantities of the spices decreased the antioxidant activity of the resulting spiced tea even further [22] [23] [28]. This does not mean that spices do not have other intrinsic benefits of their own, as shown by an examination of work by other scientists [12] [13] [29]-[31]. Indeed, some of the assayed spices have been associated with health benefits such as antimicrobial, anti-cancer and anti-inflammatory effects, among others [8] [32]-[36].

The addition of spice to tea in the range of 0%-75% had varying effects on the overall mean rating depending on the spice used (Table 3(a)). Ginger had mean values of 5.56, 5.75, 5.93, and 6.99 for blending at 0%, 25%, 50% and 75%, respectively, indicating an increase with each addition of spice to the tea. Lemongrass mean values were 6.29, 6.67, 5.60 and 5.76 for blending at 0%, 25%, 50% and 75%, respectively, indicating a decline in liking at blending levels of 50% and above. Cinnamon had mean ratings of 6.42, 6.42, 6.76 and 6.74 for blending at 0%, 25%, 50% and 75%, respectively, showing a slight decrease in liking at 75% addition. Nutmeg mean ratings were 6.53, 6.63, 5.92 and 5.17 for blending at 0%, 25%, 50% and 75%, respectively, indicating a decrease at 50% spice addition and above. Rosemary had ratings of 5.68, 6.20, 6.27 and 6.21 at blending rates of 0%, 25%, 50% and 75%, respectively, indicating a slight decrease in liking at 75% spice addition. Tea masala mixed spice had ratings of 5.53, 5.55, 5.56 and 5.05 at blending ratios of 0%, 25%, 50% and 75%, respectively, showing a decrease in liking at the 75% spice ratio. Optimization of spice addition to obtain critical and economic thresholds for tea blending was then done by spicing the teas at ratios of up to 25%. The results are shown in Table 3(b). The results for the three best-rated spices (cinnamon, lemongrass and nutmeg) are shown. Cinnamon ratings were 4.90, 6.54, 6.88, 5.59 and 6.46 for ratios of 0%, 5%, 10%, 15% and 25%, respectively, indicating an optimum threshold at the 10% blending ratio. Lemongrass, on the other hand, had ratings of 5.48, 5.76, 6.08, 5.95, and 6.24 at 0%, 5%, 15%, 20% and 25% ratios, respectively, giving an optimum at 25% addition, although this was not significantly different from the rating at 10% spice addition. Nutmeg ratings were 5.06, 6.55, 6.92, 5.95 and 6.29 for 0%, 5%, 15%, 20% and 25% ratios, respectively, giving an optimum at the 15% blending ratio.
Costing

Unit cost was calculated by dividing the product cost by the quantity. Costs of commodities are in Kenya shillings (Ksh) and quantities in grams (g). The unit cost values of Table 4(a) were used to generate the cost of different tea-spice blends. Table 4(b) shows possible costs of the nutmeg purple tea product at different percentage mixes. Table 4(b) shows that there was a significant (P ≤ 0.05) increase in the cost of production with an increase in the quantity of spice added. Sensory evaluation data and economic analysis of the spice and tea mixes therefore have to be used together in the development of products that are acceptable to the consumer and profitable to the producer [20] [24] [37].

Conclusion

The results reported in this article have shown that the palatability of un-aerated purple tea is enhanced by spice or flavour addition. Further, the data have shown that some spices are preferred at higher quantities than others for optimum taste of the un-aerated purple tea blends. Commercial production of spiced un-aerated purple tea will therefore have to establish suitable threshold levels for product development in order to minimize the quantities of spices required for the blends, because the addition of spice to tea increases production costs. The research has also shown that some spices may not be compatible with un-aerated purple tea at high concentrations, but at lower concentrations acceptability is increased.

Acknowledgements. The support of the National Commission for Science, Technology and Innovation (NACOSTI), Kenya, and the Tea Research Institute is acknowledged. The technical support of staff from the Tea Processing and Value Addition Programme of the TRI is also acknowledged.

Table captions and footnotes (tables not reproduced here):

Table 1. Values are Mean ± SD of 3 replicates for spices and 3 replicates of tea. Means in the same column with the same letter(s) are not significantly (P > 0.05) different. Pdt = 1 unit of product comprising spice and tea mixed at a specified ratio. Tea and product prices are in Kenya shillings per 100 g. TCA = total catechins, ANT = anthocyanins.

Table 2. Percentage antioxidant activity of selected spices and purple tea (TRFK 306). Values are Mean ± SD of 3 replicates. Means in the same row with the same letters are not significantly different.

Table 3. (a) Effect of spices on sensory attributes of spiced un-aerated purple tea (TRFK 306); (b) Sensory evaluation of un-aerated purple tea (TRFK 306) with cinnamon, lemongrass and nutmeg at less than 25% spice-to-tea ratio. Values are Means ± SD of 21 replicates for ginger, 27 for lemongrass, 34 for cinnamon, 23 for nutmeg, 24 for rosemary and 34 for tea masala. Means in the same row with the same letter(s) are not significantly different (P > 0.05).

Table 4. (a) Un-aerated purple tea and nutmeg spice product calculated with unit prices; (b) Summary of the price of un-aerated purple tea at different nutmeg spice ratios. Product cost (Ksh) = Kenya shillings. Quantity of the product (g) = grams. Unit cost (Ksh/g) = cost of one gram of the tea or spice (Kenya shillings per gram of product).
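As a numerical check on the blended-product prices quoted in the abstract and Table 4(b), the weighted-average costing implied by the unit costs can be sketched as follows. This is an illustrative calculation only; the nutmeg unit price used here is back-calculated so that the quoted blend prices agree, rather than taken from Table 4(a).

```python
def blend_cost_per_100g(spice_fraction: float,
                        tea_cost_per_100g: float,
                        spice_cost_per_100g: float) -> float:
    """Weighted-average cost of 100 g of a w/w spice/tea blend."""
    return (1 - spice_fraction) * tea_cost_per_100g + spice_fraction * spice_cost_per_100g

TEA = 56.00     # Ksh per 100 g of un-aerated purple tea (the 0% spice price quoted)
NUTMEG = 76.70  # Ksh per 100 g, back-calculated from the quoted blend prices

for frac in (0.0, 0.10, 0.25):
    print(frac, round(blend_cost_per_100g(frac, TEA, NUTMEG), 2))
# Expected: 56.0 at 0%, 58.07 at 10%, and ~61.17 at 25%, matching the
# prices quoted in the abstract (small rounding differences aside).
```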
2019-01-02T19:34:39.877Z
2015-05-26T00:00:00.000
{ "year": 2015, "sha1": "83d8ec62ddf06614c351b24868a83a172ff24e6e", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=56634", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "83d8ec62ddf06614c351b24868a83a172ff24e6e", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
236258366
pes2o/s2orc
v3-fos-license
Environmental heterogeneity and sampling relevance areas in an Atlantic forest endemism region

Introduction

The lack of studies on species distribution and the great amount of undescribed taxa have hampered biodiversity conservation worldwide (Hortal et al., 2015, 2008). Paradoxically, investments in biodiversity characterization have been higher in temperate habitats than in the tropical regions that concentrate the main global biodiversity hotspots (Collen et al., 2008). Further, within the tropics, available information can be highly geographically biased, with site accessibility, distance from research institutes, and the proximity of protected areas to human settlements determining which areas are best surveyed (Sastre and Lobo, 2009). The negative consequences of uneven sampling include: (i) the inefficient design of conservation units, (ii) the lack of parameterization for distribution predictive models, and (iii) the reduced probability of describing new taxa, many of which will become extinct before being known to science (Bini et al., 2006; Brito, 2010; Hortal et al., 2015, 2008; Pontes et al., 2016). Given the limited funding for conservation and the growing rates of biodiversity loss, optimizing sampling efforts is urgently needed, especially in the developing countries that retain much of the world's biodiversity.

An efficient way to mitigate spatial survey bias is to incorporate regional habitat heterogeneity into sampling design (Funk et al., 2005). This approach relies on the assumption that environmentally distinct areas may harbor communities with different species composition. Thus, areas that are environmentally distinct from those already studied may be more likely to hold new species (Schmidt et al., 2020). With this procedure, the inclusion of environmental gradients is also important because it permits investigating the whole set of conditions in which target species can occur, improving the performance of predictive models and of biodiversity mapping (Hortal et al., 2015, 2008). Landscape features and bioclimatic variables have been recognized as biodiversity surrogates (Lindenmayer et al., 2008; Williams et al., 2002), and they can be useful to represent the environmental heterogeneity necessary to identify areas of high species survey relevance. In a recent example, Schmidt et al. (2020) identified poorly sampled, environmentally distinct areas for Amazon forest ant communities with the use of environmental maps of soil, temperature, and precipitation. Although this approach has some limitations, such as its dependence on the availability of environmental data that could influence the species communities, it can be particularly useful to delineate species inventories in areas that are environmentally heterogeneous and lack studies.

The Atlantic forest is a biodiversity hotspot that has been severely impacted by habitat loss and fragmentation (Ribeiro et al., 2009). It has been repeatedly connected to and disconnected from the Amazon, the main forest formation in South America, over the past millions of years, and at present it is isolated in eastern South America by a diagonal of dry formations composed of the Cerrado, Caatinga and Chaco (Silva and Casteleti, 2003).
Repeated connections and disconnections with other biomes, altitudinal and latitudinal gradients, and isolation resulted in a unique biota composed of more than 20,000 species of plants, 321 species of mammals, 861 species of birds, 300 species of reptiles, and 625 species of amphibians (Monteiro-Filho and Conte, 2017; Silva and Casteleti, 2003). The great environmental heterogeneity of the Atlantic forest (i.e. variations in relief and pluviometric regimes) also contributed to the high species diversity and levels of endemism (Tabarelli et al., 2010). This biome has been subdivided into five centers of endemism, based on the distribution of butterflies, birds, and mammals: Brejos Nordestinos, Diamantina, Pernambuco, Bahia, and Serra do Mar (Silva and Casteleti, 2003). The Pernambuco Endemism Center (PEC) is the portion of the Atlantic forest located in northeastern Brazil, north of the São Francisco river, distributed across the states of Alagoas, Pernambuco, Paraiba, and Rio Grande do Norte. Of the five centers of endemism, PEC is the most fragmented, which, together with its remarkably high species richness, led it to be considered a hotspot within a hotspot (Pontes et al., 2016). In this region, less than 6% of the original forest cover remains and large continuous forest fragments no longer exist, with only 23 fragments larger than 1000 ha (Pontes et al., 2016), and none larger than 10,000 ha (Ribeiro et al., 2009).

Unlike other Atlantic forest regions, PEC is characterized by a low percentage of protected areas (only about 1%; Ribeiro et al., 2009) and by a limited number of public conservation units. This highlights the importance of private conservation units, designated under Brazilian legislation as Private Reserves of Natural Heritage (hereafter RPPNs). The RPPNs in PEC are small conservation units, but they are more numerous and more homogeneously distributed across the landscapes than the public conservation units and, therefore, have great potential to maintain species in this fragmented region. Despite the high rates of habitat fragmentation and local species extinctions, new and endemic species are still being described in PEC (Peixoto et al., 2003; Pontes et al., 2013; Silva et al., 2004), and others have been considered extinct even before their scientific description (Pontes et al., 2016). Thus, the identification of areas of high species survey relevance is urgently needed.

Here, we characterized the environmental heterogeneity of the Pernambuco Endemism Center in terms of vegetation, soil, drainage density, altitude, and climatic variables. Then, we assessed whether private reserves are preserving the environmental heterogeneity of the PEC region and how fragmented the landscapes around these reserves are. Finally, we assessed the most relevant regions in PEC, in terms of environmental dissimilarity, for sampling vertebrates. These results will help to plan new species surveys that support conservation actions.

Environmental heterogeneity in PEC areas

To evaluate the environmental heterogeneity in the PEC, we used variables related to climate, soil type, altitude, drainage density, land use, and vegetation type. The limits of the PEC area were taken from Ribeiro et al. (2009) (Fig. 1).
For climatic variables, we retrieved data (annual mean temperature, maximum annual temperature, minimum annual temperature, annual precipitation, precipitation of the wettest quarter, and precipitation of the driest quarter) from WorldClim (Fick and Hijmans, 2017). Data for soil type, altitude, drainage density and vegetation types were taken from the AmbData repository (http://www.dpi.inpe.br/Ambdata/) and land use maps from MapBiomas 2018 version 4.1 (Souza et al., 2020). All these variables were transformed into raster layers with a spatial resolution of 30 arc-seconds (approximately 1 km²). The number of cells (1 km²) in each class of each environmental variable within the PEC area was counted using the values function from the raster R package (Hijmans and Etten, 2012).

Private reserves

To evaluate how much of the environmental heterogeneity has been preserved in private reserves, we first searched for all federal and state private reserves of natural heritage (RPPNs) established in the PEC area (Fig. 1). The coordinates of private reserves were taken from federal and state environmental agency repositories (Tables S1 and S2). Then, we extracted the environmental variables at the private reserve coordinates using the values function from the raster R package (Hijmans and Etten, 2012). In addition, we estimated the percentage of land use classes (forest formation, pasture, agriculture, annual and perennial crop, mosaic of agriculture and pasture) and the degree of isolation in a 2 km buffer around each private reserve coordinate. In this case, we used the land use map from MapBiomas (see above), which originally has a spatial resolution of 30 m. Isolation degree was estimated as the mean Euclidean nearest-neighbor distance among patches of the forest land use class, using the plugin LecoS (https://github.com/Martin-Jung/LecoS) from QGIS (www.qgis.org).

Maps of sampling relevance

We estimated sampling relevance in the PEC, that is, how relevant each cell of a 1 km² grid covering the PEC is for further survey studies. It was based on the environmental dissimilarity of each 1 km² cell in relation to sites already sampled in the literature, and the result is a raster containing a gradient of sampling relevance across the PEC. To evaluate the sampling relevance in the PEC, we first searched for studies performed with terrestrial vertebrates in the region and then estimated the sampling relevance of each 1 km² cell of the PEC following Schmidt et al. (2020). We used the data available in data papers for amphibians, birds, mammals, and camera traps (Bovendorp et al., 2017; Culot et al., 2019; Hasui et al., 2018; Lima et al., 2017; Muylaert et al., 2017; Souza et al., 2019; Vancine et al., 2018) (Fig. 1). They are the most complete datasets published so far and include published (peer-reviewed papers, books, chapters, theses, technical documentation, and scientific conferences) and unpublished data. The authors of these datasets searched for data in the following sources: (i) online academic databases (e.g., ISI Web of Knowledge, Google Scholar, SciELO, Scopus, JSTOR); (ii) digital libraries of state and federal Brazilian universities; (iii) references cited in the literature; and (iv) email contacts with experts and organizations that have conducted studies with vertebrate groups. In addition, these datasets were compiled by experts on each taxonomic group and all data were checked for correct taxonomy.
Considering the whole Atlantic forest distribution, the data paper for amphibians accounts for 1163 sites, birds 4122 sites, bats 205 sites, primates 700 sites, small mammals 300 sites, medium- and large-sized mammals 244 sites, and camera trap studies 144 sites. Camera traps comprise mainly records of medium and large mammals, and a few opportunistic records of birds, bats, primates, and small mammals. Camera trapping has become a major advance for monitoring terrestrial mammals in biodiversity-rich ecosystems because it allows recording species that are difficult to observe and detect otherwise (Lima et al., 2017). Small mammals include marsupials and small rodents (i.e. families Caviidae, Cricetidae, Ctenomyidae, Echimyidae, Cricetidae and Sciuridae; Bovendorp et al., 2017). Medium- and large-sized mammals include non-volant terrestrial mammal species over 1 kg. Unfortunately, no data paper comprising reptile or fish communities in PEC areas has been published so far, so these vertebrate groups were not included in our analysis. We did not use occurrence data from GBIF because of high rates of error in the coordinates and incomplete inventories of the species occupying a survey location (Troia and McManamay, 2016).

Based on the coordinates provided by the studies performed with terrestrial vertebrates (hereafter, survey sites), we assessed the relevance of terrestrial vertebrate sampling for further studies in the PEC. The sampling relevance was estimated as the environmental dissimilarity between each cell of the 1 km² grid covering the PEC and the survey sites, considering eight uncorrelated environmental variables at once: vegetation and soil type, altitude, drainage density, maximum annual temperature, minimum annual temperature, precipitation of the wettest quarter and precipitation of the driest quarter. The selection of uncorrelated environmental variables was done by calculating the variance inflation factor (VIF) for all environmental variables and excluding the highly correlated ones from the set through a stepwise procedure. Continuous variables were previously standardized by z-score using the scale function in R. Then, for each 1 km² cell, we calculated the environmental dissimilarity between the cell and the survey sites using Gower distance (Legendre and Legendre, 2012). Next, the average of these environmental dissimilarity values was calculated to obtain a single value of sampling relevance for each 1 km² cell. The environmental dissimilarity was estimated using the vegdist function from the vegan R package (Dixon, 2003), which calculates a single environmental dissimilarity among sites based on several environmental variables. We chose Gower distance because it is appropriate for measuring the dissimilarity of two sites with mixed numeric and non-numeric data. Finally, we normalized all values to the range 0 to 1 using min-max normalization (Patro and Sahu, 2015), in such a way that values close to 1 represent areas environmentally different from the areas where groups of vertebrates have already been sampled.

Environmental heterogeneity

Most of the PEC area is characterized by seasonal semideciduous forest, open ombrophilous forest and dense ombrophilous forest (Fig. S1A). The other vegetation types are transition zones between steppe and savanna vegetation or zones of marine influence (Fig. S1A). The PEC area presents mainly yellow oxisol and red-yellow argisol soils, which are characterized by low fertility (Fig. S1B).
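Returning to the sampling-relevance calculation described in the "Maps of sampling relevance" subsection above, the sketch below re-expresses the idea in Python rather than R: compute a Gower-style dissimilarity between every grid cell and every survey site (range-scaled absolute differences for numeric variables, simple mismatch for categorical ones), average over survey sites, and min-max normalize. The toy arrays and variable names are illustrative only; the original analysis used z-scored continuous variables and the vegdist function of the vegan R package.

```python
# Illustrative re-implementation (not the authors' R code) of sampling relevance:
# mean Gower dissimilarity of each grid cell to all survey sites, min-max normalized.
import numpy as np

def gower_dissimilarity(cell, site, ranges, is_numeric):
    """Gower dissimilarity between one grid cell and one survey site.
    Numeric variables use range-scaled absolute differences; categorical ones use 0/1 mismatch."""
    d = np.empty(len(cell))
    num = is_numeric
    d[num] = np.abs(cell[num] - site[num]) / ranges[num]
    d[~num] = (cell[~num] != site[~num]).astype(float)
    return d.mean()

def sampling_relevance(cells, sites, is_numeric):
    """Mean dissimilarity of each cell to the survey sites, rescaled to [0, 1]."""
    stacked = np.vstack([cells, sites])
    ranges = np.where(is_numeric, stacked.max(0) - stacked.min(0), 1.0)
    ranges[ranges == 0] = 1.0  # guard against constant variables
    rel = np.array([np.mean([gower_dissimilarity(c, s, ranges, is_numeric) for s in sites])
                    for c in cells])
    return (rel - rel.min()) / (rel.max() - rel.min())

# Toy example: 4 variables (altitude, max. temperature, soil class, vegetation class)
is_numeric = np.array([True, True, False, False])
cells = np.array([[120., 31., 1., 3.], [450., 28., 2., 1.], [80., 32., 1., 3.]])
sites = np.array([[100., 31., 1., 3.], [150., 30., 1., 2.]])
print(sampling_relevance(cells, sites, is_numeric))  # values near 1 = most dissimilar cells
```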
Pastures and croplands predominate, with less than 15% of the pixels consisting of forest, savanna and mangrove (Fig. S1C). The region has high heterogeneity in drainage (Fig. S1D) and most areas are below 200 m in altitude (Fig. S1E). The annual mean temperature varies from 21 to 27 °C (Fig. S2A), with a maximum temperature of 32 °C (Fig. S2B) and a minimum temperature of 15 °C (Fig. S2C). Among sites, a maximum temperature variation of 7 °C was observed. Annual precipitation varies greatly, from 500 mm to 2145 mm (Fig. S2D). Precipitation in the wettest quarter varies from 300 mm to 1000 mm (Fig. S2E), and in the driest quarter from 20 mm to 200 mm (Fig. S2F).

In general, the private reserves preserve high environmental heterogeneity. There are private reserves in all main forest formations (semideciduous forest, open ombrophilous forest, and dense ombrophilous forest), and also in the transition zones between vegetation formations (Fig. S1A). The proportions of vegetation and soil types, drainage density, altitude, and climatic variables in private reserves followed the same proportions found for the whole PEC area (Figs. S1 and S2). However, many soil formations that occur in low proportion throughout the PEC region are not present in the private reserves (Fig. S1B). Most private reserves are in highly fragmented landscapes (percentage of forest formation below 50% and at least 1000 m to the nearest forest fragment) surrounded by pasture and agricultural fields (Fig. S3).

Sampling relevance

Except for bats and birds, which were mainly surveyed in ombrophilous forest, most of the vertebrate surveys were carried out in seasonal semideciduous forests and in low-altitude areas (Figs. S4-S17). Notably, no surveys were conducted in the driest areas (Figs. S4-S17). For most vertebrate groups, the westernmost portion of the PEC presents the highest sampling relevance in terms of environmental dissimilarity (Figs. 2 and 3). This area is mainly in seasonal semideciduous forest and in transition zones between this type of forest and steppe vegetation. In the case of large mammals, the highest sampling relevance sites extend to all portions of the PEC, except for the south-central portion (Fig. 2). The coastal and northwest regions of the PEC present high sampling relevance (>0.75) for a maximum of two vertebrate groups, usually terrestrial mammals or non-volant mammals (Fig. 3). Correlations among the sampling relevance values showed that bats have patterns similar to amphibians, birds and primates, while medium and large mammals show the most distinct pattern (Fig. 3A). Terrestrial mammals or non-volant mammals are the vertebrate groups presenting more areas of high sampling relevance, while primates, bats, amphibians, and birds are the ones with more areas of low sampling relevance (Figs. 2, S18). Most of the high sampling relevance sites (>0.75) are in fragmented areas, with an average forest cover of 8% and isolation of 1500 m (Fig. 4). The sampling relevance of the private reserves is presented in Table S2.

Discussion

The Pernambuco Endemism Center shows high environmental heterogeneity, mainly in relation to forest and soil types, drainage density and levels of precipitation, while temperature and altitude vary only slightly in this region. In general, private reserves preserve the environmental heterogeneity found in the PEC; however, they are in landscapes composed of an agriculture and pasture matrix wherein natural vegetation is very fragmented and isolated.
Few sites have been surveyed in the PEC, with mammals being, in general, the least studied vertebrate group. Because of the high environmental heterogeneity, we found many sites of high sampling relevance for all vertebrate groups, but in general, the western region of the PEC presents the highest sampling relevance in terms of environmental dissimilarity. For all vertebrate groups, the sites with the highest sampling relevance are threatened by fragmentation, and sampling efforts must be allocated to these areas before they are totally converted into agricultural fields and pasturelands.

PEC represents the narrowest Atlantic forest region in terms of longitude and shares extensive borders in the west with the driest Brazilian biome, the Caatinga, and in the east with the Atlantic ocean. This causes the PEC to present a wide range of precipitation with low temperature variation. Precipitation is one of the most important selective pressures for species diversification worldwide, because different physiological adaptations are needed, especially for species surviving in harsh, dry environments (Dewar and Richard, 2007; Irl et al., 2015). This hypothesis still needs to be tested for the PEC, and this can be done using landscape genomics tools (Carvalho et al., 2020). Private reserves are located in areas with different precipitation rates; thus, if the above idea applies to the PEC, these areas can be crucial to preserve species and populations adapted to different environmental conditions.

Private reserves are the main areas for biodiversity protection in the PEC. Although we have shown that the private reserves maintain areas with high environmental heterogeneity, they are in isolated and fragmented landscapes. For example, we showed that most private reserves are isolated by at least 1 km from other forest fragments, and they are placed in landscapes with less than 30% forest cover. Many studies have shown that more than 30% forest cover is needed to maintain species richness in degraded landscapes because species loss is more dramatic below this threshold (Banks-Leite et al., 2014; Muylaert et al., 2016). In addition, the isolation of the remaining populations can increase inbreeding rates, leading to genetic erosion and compromising the health of the populations in the long term. Thus, the main protected areas in the PEC are probably not sufficient to protect all species in this region, and more conservation effort must be made to encourage the creation of more private reserves. Moreover, population genetic studies are urgently needed to assess the conservation status of the remaining populations and, when necessary, promote genetic management to increase their genetic diversity.

Amphibians, primates, and birds are the vertebrate groups with the most sampled sites in PEC, and new species have still been recently described in these groups (Peixoto et al., 2003; Silva et al., 2004). This indicates that, if more sites with known data deficiency are sampled, more species are likely to be discovered in this region (Bini et al., 2006; Brito, 2010). Small and large mammals, on the other hand, were the least studied vertebrates in terms of number of study sites, and it has been estimated that at least half of them have become locally extinct in the PEC (Pontes et al., 2016). In addition to mammals, several birds have already become extinct or are threatened with extinction in this region (Pereira et al., 2014).
Thus, to prevent more species from becoming extinct, we need to know where these species still occur in order to preserve them. Few studies were performed in the driest regions (low precipitation), which comprise the westernmost region of the PEC. These uneven records, in addition to preventing new species from being discovered, can lead to errors in species distribution maps and impair management plans, mainly because most of these maps are based on climatic data (Hortal et al., 2015, 2008). Finally, most of the highest sampling relevance sites are in very isolated and fragmented areas, which indicates the urgency of studying these areas to prevent species from becoming extinct even before they are discovered.

In conclusion, PEC is one of the least studied regions in the Atlantic forest biome, and the characterization of its environmental variation showed that this region needs to be urgently studied. Because the survey studies in the PEC are spatially biased, additional surveys are necessary to improve the spatial and environmental coverage of the region. These additional surveys can help to improve ecological niche modeling, which can be used to propose areas of potential relevance for conservation. Moreover, based on these additional surveys, it will be possible to assess the importance of the private reserves for conservation. Carrying out this type of study is not yet possible in the PEC because of the few species surveys in this region. Surveying species and collecting data in the field, however, are expensive and time-consuming endeavors. Thus, efforts must be made to use funds allocated to this task in the most efficient manner. Here we identified the regions and environments with high sampling relevance based on environmental dissimilarity with sites already sampled. For this task, we used vertebrate groups, which are the most studied taxa worldwide. Despite that, many regions in the PEC still need to be studied to generate a database useful for conservation decisions and management planning. Our findings highlight the importance of the existing private reserves in the PEC, and they are potentially helpful to improve the efficiency of the design of new conservation units, boost the performance of distribution predictive models, and increase the probability of describing new taxa in an important endemism area within the Atlantic forest.

Conflict of interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2021-07-26T00:06:02.339Z
2021-06-09T00:00:00.000
{ "year": 2021, "sha1": "ac2344760f9f4af3dfd9465a49b8ffe17ea4d536", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.pecon.2021.05.001", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "2bd48846e98949703f61d118fbe43cde87dd943e", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geography" ] }
208248140
pes2o/s2orc
v3-fos-license
Constrained Heterogeneous Vehicle Path Planning for Large-area Coverage

There is a strong demand for covering a large area autonomously with multiple UAVs (Unmanned Aerial Vehicles) supported by a ground vehicle. Limited by UAVs' battery life and communication distance, complete coverage of large areas typically involves multiple take-offs and landings to recharge batteries, and the transportation of UAVs between operation areas by a ground vehicle. In this paper, we introduce a novel large-area-coverage planning framework which collectively optimizes the paths for aerial and ground vehicles. Our method first partitions a large area into sub-areas, each of which a given fleet of UAVs can cover without recharging batteries. UAV operation routes, or trails, are then generated for each sub-area. Next, the assignment of trails to different UAVs and the order in which UAVs visit their assigned trails are simultaneously optimized to minimize the total UAV flight distance. Finally, a ground vehicle transportation path which visits all sub-areas is found by solving an asymmetric traveling salesman problem (ATSP). Although finding the globally optimal trail assignment and transition paths can be formulated as a Mixed Integer Quadratic Program (MIQP), the MIQP is intractable even for small problems. We show that the solution time can be reduced to close-to-real-time levels by first finding a feasible solution using a Random Key Genetic Algorithm (RKGA), which is then locally optimized by solving a much smaller MIQP.

I. INTRODUCTION

As the agricultural industry in east Asia suffers from an increasingly severe labor shortage, the need to automate away as much work as possible is pressing. As a result of this automation trend, the market for autonomously spraying pesticides and fertilizer with Unmanned Aerial Vehicles (UAVs) has been growing quickly (Fig. 1). In fact, the market for pesticide-spraying UAVs will expand fifteen times from 2016 to 2022, according to a survey conducted by Seed Planning, Inc. [1]. To cover large farmland, it is necessary to deploy a team of multiple ground and aerial vehicles, because the area which can be covered with a single take-off and landing by one UAV is limited by its battery life and maximum communication distance. In such deployments, a ground vehicle is used to recharge UAV batteries and monitor the UAV fleet.

Coverage path planning (CPP) is a class of algorithms that find paths for one or multiple robotic agents that completely cover/sweep a given task area. It is an essential component for applications such as room floor sweeping, coastal area inspections [3], 3D reconstruction of buildings [4], and, of course, autonomous pesticide and fertilizer spraying (also known as crop dusting) [2], [5]. As we will detail in the rest of this paper, autonomous crop dusting by a fleet of UAVs and a ground vehicle poses interesting and unique constraints that existing CPP planners cannot handle effectively. Our contribution is a novel, fast planner for crop dusting which provides locally optimal paths that satisfy the aforementioned application-specific constraints. The rest of the paper is structured as follows: after related work is reviewed in Sec. II, Sec. III gives a mathematical description of the heterogeneous vehicle coverage problem and our proposed planning framework; Sec. VIII takes the farmland of Niigata (a prefecture in Japan) (Fig. 1a) as an example to demonstrate the effectiveness of our algorithm.
II. RELATED WORK

According to surveys conducted by Galceran and Carreras [6] and Choset [7], CPP algorithms can be classified into two broad categories: cellular decomposition and grid-based methods. Cellular decomposition partitions general non-convex task areas, typically in the form of 2D polygons, into smaller sub-regions with nice properties such as convexity [8], [9], [10], [11]. In the partitioned sub-regions, coverage path patterns can then be easily generated using zig-zag or contour offset [12]. This technique can be readily extended to multiple agents by adding a planning step that assigns sub-regions to agents in the fleet. Although existing partitioning techniques have implicit notions of optimality such as path efficiency (e.g. total turning angles) [8], they do not jointly optimize multiple objectives, which is important for finding sub-regions in which a fleet of UAVs can efficiently operate. In addition, despite specialized CPP algorithms that can handle specific types of constraints [13], [14], existing techniques are not good at finding optimal paths which also satisfy the more general and complex constraints that are crucial for crop dusting. On the other hand, grid-based methods, as the name suggests, discretize the task area into uniform grids. Cells in the grid can be interpreted as nodes of a graph, so that graph search methods such as the Traveling Salesman Problem (TSP) or the Vehicle Routing Problem (VRP) can be applied to find paths that visit all nodes [15], [16]. However, the discretization makes it difficult to enforce the constraint that pesticides should not be sprayed in non-farmland areas, especially when the boundary of such areas lies inside cells. This can be somewhat relieved by increasing the resolution of the grids, but doing so artificially inflates the problem size and increases solving time [17]. Primarily due to its Turing completeness, Mixed-Integer Programming (MIP) has been applied to many flavors of planning problems [18], [19], [20]. However, MIPs have exponential worst-case complexity and typically do not scale well in practice. In this paper, we propose a coverage planning framework that both capitalizes on the expressiveness of MIPs to satisfy constraints, and accelerates MIPs by finding good, feasible initial guesses using Genetic Algorithms (GA). We also propose a GA-based partitioning method that optimizes multiple objectives.

III. OVERVIEW

Patches of farmland are abstracted into (possibly non-convex) polygons in R^2. The task is to design paths for all UAVs and ground vehicles that
• completely cover the given polygons,
• do not pass over designated no-spray areas inside the farmland (obstacles), such as warehouses or pump stations, while spraying,
• minimize the UAV flying distances,
• respect UAV battery life, and
• ensure the ground vehicle stays within the communication radius of all UAVs.
Our proposed solution, as shown in Fig. 2, is composed of the following four sequential steps.
1) Partitioning operation area: As the entire target area is too large to be covered without recharging, the target area is first partitioned into smaller pieces, or sub-areas. The size of each sub-area is limited by the number of UAVs in the fleet, the UAV's battery life, and the communication radius of the UAVs.
2) Trail generation: In each sub-area, multiple UAV flying paths (trails) which completely cover the sub-area are generated based on the UAV's coverage width (analogous to a camera's field of view).
3) UAV path planning within sub-areas: Trails need to be assigned to individual UAVs in the fleet. Moreover, each UAV's assigned trails need to be connected. The assignment and the connecting paths are found by solving an optimization which minimizes the flight distance of the entire fleet.
4) Car routing between sub-areas: After covering one sub-area, the UAV fleet returns to a ground vehicle (car) to get recharged and transported to the next sub-area. This step finds a path that visits all sub-areas and minimizes the distance traveled by the car.

IV. PARTITIONING OPERATION AREA

First, the boundaries of the farmland and the obstacles are extracted from Google Maps, as shown in Fig. 1a. The farmland polygons, denoted collectively by ρ, are then split into sub-areas (ρ_1, ρ_2, ..., ρ_n). As stated in Step 1 of Sec. III, the size of each sub-area is constrained by the number of UAVs in a fleet, K, their maximum pesticide spraying area, A_max, and the maximum communication distance between the fleet and the remote controller, L. Another objective of partitioning is to split the original polygon into "round" rather than "skinny" sub-areas, so that it is easier for the UAVs to stay close to the ground vehicle. Thus, the total number of sub-areas, n, equals the area of ρ divided by the maximum area that K UAVs can cover, that is, n = A(ρ)/(K A_max). If the sizes of the partitioned sub-regions are larger than the maximum area K A_max, or exceed the maximum radio transmission distance, we increase the number of sub-areas until reaching a feasible solution. A Genetic Algorithm (GA), similar to Algorithm 2 but with a different definition of chromosomes and fitness function, is used to find a relatively balanced division based on the area and dimension constraints.
1) Definition of chromosomes: The population of the GA is a 3D array P ∈ R^(N×n×2), where N is the number of chromosomes. P_i[j] ∈ R^2 is a gene of a chromosome; it is the 2-dimensional coordinate of a point inside farmland ρ. P_i ∈ R^(n×2) is a chromosome, representing the coordinates of a list of n seed points, illustrated as blue points in Fig. 3.
2) Evaluate the fitness of a chromosome: At each iteration, a partition is generated by computing the Voronoi diagram of the seed points in a chromosome (Fig. 3). The fitness of a chromosome is defined as a weighted combination of three terms, where σ² is the variance, µ is the mean, A(P_i) and C(P_i) are the lists of all areas and perimeters of the sub-regions generated from chromosome P_i, and ω_i ∈ R are the weights. The first term in the fitness function minimizes the variance of the areas; the second term is a heuristic encouraging sub-areas to be more "round"; the third term minimizes the maximum perimeter among all sub-regions. After evaluating the fitness of all chromosomes, the ones with the highest fitness, together with some random offspring generated by cross-over and mutation, are passed on to the next iteration until the fitness converges. The partitioning result of the proposed method is shown in Fig. 3.

V. TRAIL GENERATION WITHIN SUB-AREAS

Mitered offset is a commonly used tool in CAM (Computer-Aided Manufacturing) software to generate tool paths [21]. As the UAVs sweep a sub-area in a similar way to how a CNC (Computer Numerical Control) mill cuts profiles in a workpiece, mitered offset is employed to generate polygonal paths, or trails, which completely cover the designated sub-area, even when the sub-area has obstacles, as shown in Fig. 4. Trails generated in this way are preferred over zig-zag paths because they contain significantly fewer sharp turns.
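As a rough illustration of the mitered-offset trail generation just described, the sketch below insets a farmland polygon repeatedly by the UAV coverage width using Shapely's mitre-joined negative buffer, and turns each inset ring into one trail. It is only a simplified stand-in for the CAM-style offsetting used in the paper: obstacles are ignored, and the choice of the first inset (half the coverage width) is an assumption, not taken from the paper.

```python
# Simplified mitered-offset trail generation (illustrative only; obstacle handling omitted).
# Each trail is the boundary of the field polygon inset by roughly k * coverage_width.
from shapely.geometry import Polygon

def mitered_offset_trails(field, coverage_width, first_inset=None):
    """Return a list of closed trails (lists of (x, y) vertices) covering `field`."""
    trails = []
    inset = coverage_width / 2.0 if first_inset is None else first_inset
    current = field.buffer(-inset, join_style=2)          # join_style=2 -> mitre joins
    while not current.is_empty:
        # A negative buffer may split the shape into several pieces (a MultiPolygon)
        parts = [current] if current.geom_type == "Polygon" else list(current.geoms)
        for part in parts:
            trails.append(list(part.exterior.coords))
        inset += coverage_width
        current = field.buffer(-inset, join_style=2)
    return trails

# Toy 100 m x 60 m field and the 6.5 m coverage width quoted in Sec. VIII
field = Polygon([(0, 0), (100, 0), (100, 60), (0, 60)])
for i, trail in enumerate(mitered_offset_trails(field, coverage_width=6.5)):
    print(f"trail {i}: {len(trail)} vertices")
```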
After generating coverage trails for a sub-area, the next step is to assign the trails to a fleet of UAVs. An assignment, which we will call a plan, is defined as • the sequence of trails each UAV visits, and • the entry/exit points on each trail. Note that each trail is assigned to only one UAV. As UAVs need to fly at a fixed altitude during spraying, constraining them to fly on trails that do not cross each other significantly reduces the chances of collision. As an example, a simple plan of a fleet of one UAV is shown in Fig. 5. In this plan, UAV 1 starts at x 11 , traverses Trail 1, returns to x 11 , flies to x 12 following the green dotted line, traverses trail 2, returns to x 12 and finishes the plan. We will refer to x 11 and x 12 as the access points of Trail 1 and Trail 2, respectively. A plan also needs to satisfy the following requirements: • each trail is assigned to exactly one UAV, and • each UAV needs to complete its assigned trails within battery constraint. The assignment should also minimize the total flight distances of all UAVs in the fleet. In this section, we show that the search for the optimal assignment can be formulated as an MIQP. However, the full MIQP has too many binary variables, thus becomes intractable for practical (moderately large) problems. To circumvent this limitation, we first search for a feasible assignment using Genetic Algorithm, which fixes most of the integer variables. A much smaller scale MIQP is then solved to find the access points to locally optimize the path. A. Convex hull formulation First, we demonstrate how to describe the constraint that a UAV stays on a trail using linear equalities and inequalities. Mathematically, this constraint means x ∈ R 2 belongs to the union of some line segments, where x is the UAV's coordinate. As shown in Fig. 5, a trail consists of the edges of a polygon. Each edge, denoted by l i , is a line segment, which can also be written as the following convex set: where Matrix A i and Vector b i are the collection of the linear constraints in Eq. (2a). An example of the constraints that define l 3 is shown in Fig. 5. x is on the trail indexed by j and can be formally written as where I j is the set of indices of all edges which belong to Trail j. As an example, for the trails shown in Fig. 5, The constraint stated in Eqn. 3 can be converted to the following mixed-integer implications [22]: where H is a vector of binary variables. The constraint stated in Eqn. 4 means that x belongs to Line Segment , the i-th element of H). This implication can be further converted to a set of linear constraints using the convex hull formulation [22]: where x lb , x ub ∈ R 2 are the lower and upper bounds of Trail j. For instance, x lb = [0, 0] and x ub = [2, 1] for Trail 1 in Fig. 5. B. Full MIQP formulation This sub-section formulates the optimal assignment searching problem defined at the beginning of Sec. VI, as an MIQP. The objective of this MIQP is to minimize the total distance traveled by all UAVs in the fleet. Since the optimization has a constraint that all trails must be assigned, the cost function, given by Eqn. 10, only needs to account for distances traveled between trails: min. x,H In Eqn. 10, K ∈ N is the number of UAVs in the fleet; T ∈ N is the planning horizon; x is a 3-dimensional array of shape (K, T, 2); and x kt ∈ R 2 is a shorthand notation for the slice x[k, t], which is the coordinate of UAV k at planning Step t. x kt is also the coordinate at which UAV k enters and exits its assigned trail at Step t. 
The MIQP also needs to satisfy the following constraints: ∀k, t, ∀i, ∀k, where N t is the total number of trails; L is the total number of line segments (l's) in all trails; I i is the set of line segments indices of Trail i; C i is the perimeter of Trail i; D is the maximum distance a UAV can travel with one battery charge; H is a 3-dimensional binary array of shape (K, T, L). Constraint (11) means if H ktl = 1, UAV k is on Line Segment l at Step t. It can be expanded into linear constraints using Eqn. 7 to 9. Constraint (12) means each UAV cannot appear on more than one line segment at each planning step. Constraint (13) guarantees that each trail is assigned exactly once to one UAV. Eqn. 14 ensures that each UAV can traverse all of its assigned trails without changing batteries. After obtaining the optimal solution, the plan can be constructed from the solution as follows: Looping through all k and I i recovers the sequence of trails assigned to each UAV. • For each UAV, the entrance/exit point of each of its assigned trail can be calculated using Eqn. (7). The MIQP formulated in this sub-section has K × T × L binary variables, which quickly becomes intractable even for moderately-sized problems: the planner simply has too many decisions to make. To reduce the number of binary variables, we decompose the planning problem into two stages: 1) A relative good trail assignment is obtained using a combination of Random Key Genetic Algorithm (RKGA) and Modified Vehicle Routing Problem (MVRP). (detailed in Sub-sec. VI-C) 2) A smaller MIQP is solved to optimize access points over the fixed trail assignment. (detailed in Sub-sec. VI-D) Although global optimality is sacrificed, the proposed twostep optimization approach has proven to generate good enough plans within a reasonable amount of time. C. Finding good trail assignment with RKGA and MVRP Genetic Algorithms are used to search for "optimal" solutions by evolving a set of feasible solutions, or a population of chromosomes, until the maximum generation achieved, or the termination condition is meet. 1) Chromosomes: To solve the trail assignment problem, we structure the population as a matrix, P ∈ [0, 1) N ×Nt , where N is the number of chromosomes, and N t the total number of trails in a map. P i , the i-th row of P , is a chromosome. P i [j] ∈ [0, 1) can be mapped from an access point on Trail j using the following encoder function (E : R 2 → [0, 1)): where x ∈ R 2 is the coordinate of a point on a trail, v i ∈ R 2 the coordinates of the vertices of the trail, v k v k+1 the line segment between v k and v k+1 , and C the perimeter of the trail. E(x) represents the normalized distance of x from v 0 measured along the perimeter of the trail. The encoding allows generating feasible access point candidates via random sampling [23]. An example of such encoding is shown in Fig. 6. where p is the encoded value of Point x, and k is such that [assignment, tourLength]← MVRP(X, K, Trails) 7: fitness.append(tourLength) 8: assignments.append(assignment) 9: end for 10: [P , fitness, assignments] ← Sort(P , fitness, assignments) 11: return P , fitness, assignments 12: end function 2) Evaluating fitness of a population: The function defined in Alg. 1 evaluates the fitness of every chromosome P j in a population P . Trails in Line 1 refers to the coordinates and encoded values of the vertices of all trails. Each P j is first decoded into 2D coordinates of access points of all trails by repeatedly calling Eqn. 16 (Line 5). 
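The access-point encoding E(x) used by the RKGA above maps a point on a trail to its normalized perimeter distance from the trail's first vertex, and decoding inverts that mapping. A small self-contained version of this encode/decode pair, assuming closed polygonal trails given as vertex lists, could look like the following (it is an illustration of Eqns. 15-16, not the authors' code).

```python
# Encode a point on a closed polygonal trail as its normalized perimeter distance in [0, 1),
# and decode a value in [0, 1) back to a 2D access-point coordinate.
import numpy as np

def _segments(vertices):
    v = np.asarray(vertices, dtype=float)
    nxt = np.roll(v, -1, axis=0)                 # last vertex connects back to the first
    return v, nxt, np.linalg.norm(nxt - v, axis=1)

def encode(point, vertices):
    """E(x): normalized distance of `point` from vertex 0, measured along the perimeter.
    The point is snapped to the nearest trail segment."""
    v, nxt, seg_len = _segments(vertices)
    perimeter = seg_len.sum()
    best_dist, best_walk, walked = np.inf, 0.0, 0.0
    for a, b, L in zip(v, nxt, seg_len):
        t = np.clip(np.dot(point - a, b - a) / (L * L), 0.0, 1.0)
        off_trail = np.linalg.norm(point - (a + t * (b - a)))
        if off_trail < best_dist:
            best_dist, best_walk = off_trail, walked + t * L
        walked += L
    return best_walk / perimeter

def decode(p, vertices):
    """Inverse of E: map p in [0, 1) to a coordinate on the trail."""
    v, nxt, seg_len = _segments(vertices)
    target = p * seg_len.sum()
    for a, b, L in zip(v, nxt, seg_len):
        if target <= L:
            return a + (target / L) * (b - a)
        target -= L
    return v[0]

trail = [(0, 0), (2, 0), (2, 1), (0, 1)]         # rectangle with perimeter 6, cf. Fig. 5
x = decode(0.25, trail)                          # 1.5 units along the perimeter
print(x, encode(x, trail))                       # -> [1.5 0.] 0.25
```

This normalized encoding is what lets the RKGA generate feasible access-point candidates by uniform random sampling in [0, 1), independently of each trail's shape or size.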
After the access points are fixed, the problem of assigning trails to a fleet of K UAVs is equivalent to a variant of the Vehicle Routing Problem (VRP) with capacity constraints and arbitrary start and end points [24], which we term as the Modified VRP (MVRP). The input to the MVRP is a fully-connected graph whose nodes are made up by X, the decoded chromosome. Accordingly, the fitness of a chromosome can be defined as the length of the longest tour (tourLength in Line 6), which can be interpreted as the maximum flight distance among all UAVs. The MVRP can be efficiently solved by an open source combinatorial optimization software called OR-Tools developed by Google AI [25]. The function call to solve MVRP (Line 6) returns the optimal assignment of trails to UAVs together with the fitness of P j . Lastly, the population and the corresponding trail assignments are sorted in ascending order by their fitness values (Line 10). [P , fitness, assignments] ← EvaluateFitness(P new , N , N t , K, Trails) 10: end for 11: return P 0 , assignments[0] 3) Evolving the population: Alg. 2 evolves a population using Genetic Algorithm, with fitness of chromosomes evaluated by Alg. 1. Firstly, a population is initialized by uniform sampling between 0 and 1 (Line 1). The population's fitness is also initialized (Line 2). Secondly, the population is evolved using the classical genetic algorithm: parents are randomly selected (Line 4); off-springs are created by crossover (Line 5) and mutation (Line 6); and the next-generation population (P new ) is created by combining current-gen elites and off-springs (Line 7). After a fixed number of iterations, the fittest chromosome P 0 and its UAV assignments are returned (Line 10). D. Reduced MIQP for access points optimization With a fixed trail assignment returned by Alg. 2, a smaller MIQP with L binary variables can be formulated to locally optimize the access points. The objective of this MIQP is given by: where x ∈ R 2×Nt , x i (Row i of x) is the access point of Trail i, O(k, t) is a function that returns the index of the trail assigned to UAV k at Step t. This function is completely defined by a given trail assignment. The following constraints confine x i to Trail i: Although global optimality is not guaranteed, the local optimization can still make a significant improvement over the feasible solution returned by Alg. 2. The MIQP problem is solved with Drake [26], an open-source optimization toolbox with interface to Python. As shown in Fig. 7, MIQP optimizes the positions of access points obtained by MVRP-RKGA to reduce inter-trail distances by 37%(left) and 12%(right). Moreover, it only takes the solver 0.09s (left) and 0.83s (right) to find these locally optimal solutions. VII. CAR ROUTING GPS information of roads around and inside the farmlands is extracted from OpenStreetMap [27], as shown in Fig. 8. The extracted road network is represented as a graph G road = (V road , E road , W road ), where v ∈ V road represents parking spots, e ∈ E road denotes road segments and W road is the distance of the road segment connecting two intersections. A. Car routing within sub-area After identifying the routes for UAVs, we will assign UAVs' take-off and landing spot (car location) in each sub-area to minimize the total flight distance between start/end access points and the distance traveled by the ground vehicle (car): where P car is the position of the ground vehicle and P uav(k) is the position of the k th UAV. 
We assume that the car can only dispatch and receive UAVs at nodes in V_road within each sub-area; the yellow dots represent all possible positions of the car. Therefore, the route of the car within the sub-area is the shortest path from a red dot to a green dot along the weighted road graph.

B. Car routing between sub-areas

Given the start and end locations of the car in each sub-area, we would like to find the shortest path for the car to visit all sub-areas. As the start and end positions of the car in each sub-area are sometimes different, this problem is formulated as an asymmetric-cost Traveling Salesman Problem (ATSP), which can be solved with Google OR-Tools [25]. Each sub-area is considered as a node, and the distance from Sub-area i to Sub-area j equals the length of the shortest path from the final car position in Sub-area i to the start car position in Sub-area j.

VIII. RESULTS AND DISCUSSION

We use the following set of UAV specifications, based on the agriculture drone T16 released by DJI [28], when generating the numerical results in this section. Each UAV has a flight endurance of 10 minutes. All UAVs cruise at a speed of 6 m/s with a 6.5 m coverage width. During each take-off and landing cycle, 10 minutes is spent on spraying pesticides and 5 minutes on traveling between the farmlands and a ground vehicle (car). The car only releases and picks up UAVs at intersections of roads in the given map. Furthermore, the car must always stay within the transmission distance of all UAVs, which in this example we set to 500 m. We tested the area partitioning algorithm with multiple maps based on the maximum coverage area for a fleet of UAVs and the maximum communication distance between the UAVs and the car. Compared with the most common approach of assigning sub-areas to UAVs [29], [3], [30], [16], [31], our approach reduces the total number of turns from 48 turns to 16 turns, as demonstrated in Fig. 10 for the mitered-offset path. The partition in Fig. 11 was generated using a population of 200 chromosomes, and 15 iterations take 7 s on a 2.7 GHz Intel Core i5 laptop. The next step is to generate paths within sub-areas. In many situations, paths generated by mitered offset are shorter and have smaller total turning angles and fewer turns than zig-zag paths (Table I and Fig. 12). Next, in each sub-area, mitered-offset trails are assigned to UAVs using MVRP-RKGA and MIQP. The trail assignments generated by MVRP-RKGA100 (MVRP-RKGA with a population of 100) are shown in Fig. 14. The access points generated by MVRP-RKGA100 are shown connected by red line segments in Fig. 13. The access points improved by the MIQP in Sub-sec. VI-D are shown in the same figure, connected by green lines. Compared with MVRP-RKGA100, access points further optimized by MIQP can reduce flying distances between trails by up to 50%, as shown in Table II. The total computation time of MVRP-RKGA100 and MIQP is less than 4 minutes when running on a 2.7 GHz Intel Core i5 laptop. Execution of a full plan for a fleet of four UAVs and one car, including the planned UAV paths in Fig. 14 and the planned car paths in Fig. 16, is simulated in the Simulink-based 3D environment shown in Fig. 15. The heterogeneous fleet successfully covers the given farmland. The operation time in each sub-area is reported in Table III. Assuming that swapping batteries and driving the car between sub-areas takes 10 min, it takes 1.5 hours to cover the farm with an area of 617,210 m².
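Referring back to the inter-sub-area routing of Sec. VII-B, the asymmetric distance matrix (shortest road distance from the car's end position in sub-area i to its start position in sub-area j) can be handed to OR-Tools' routing solver along the following lines. The 4x4 matrix below is a made-up placeholder; in the actual pipeline its entries would come from shortest-path queries on the weighted road graph G_road.

```python
# ATSP over sub-areas with Google OR-Tools (sketch). dist[i][j] is the road distance in
# meters from the car's end position in sub-area i to its start position in sub-area j;
# the values below are placeholders, not data from the paper.
from ortools.constraint_solver import pywrapcp, routing_enums_pb2

dist = [
    [0,    900,  1400, 1200],
    [850,  0,    700,  1600],
    [1500, 650,  0,    800],
    [1100, 1700, 900,  0],
]

manager = pywrapcp.RoutingIndexManager(len(dist), 1, 0)   # 1 vehicle, start at sub-area 0
routing = pywrapcp.RoutingModel(manager)

def cost(from_index, to_index):
    return dist[manager.IndexToNode(from_index)][manager.IndexToNode(to_index)]

transit = routing.RegisterTransitCallback(cost)
routing.SetArcCostEvaluatorOfAllVehicles(transit)

params = pywrapcp.DefaultRoutingSearchParameters()
params.first_solution_strategy = routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC

solution = routing.SolveWithParameters(params)
if solution:
    index, order = routing.Start(0), []
    while not routing.IsEnd(index):
        order.append(manager.IndexToNode(index))
        index = solution.Value(routing.NextVar(index))
    print("visit order:", order, "total road distance:", solution.ObjectiveValue(), "m")
```

Note that the solver expects integer arc costs, so real-valued road distances would be scaled (e.g. to meters) before building the matrix.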
2019-11-22T05:32:21.000Z
2019-11-01T00:00:00.000
{ "year": 2019, "sha1": "37b477cc433987612d4293084a258156131ca051", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1911.09864", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "37b477cc433987612d4293084a258156131ca051", "s2fieldsofstudy": [ "Engineering", "Computer Science", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
14509465
pes2o/s2orc
v3-fos-license
Event-ready entanglement preparation We present an experiment which prepare entanglement between photons that nowhere interacted and whose paths nowhere crossed. The experiment puts together two photons from two (non-maximal) singlet-like photon pairs and make them interfere at an asymmetrical beam splitter. As a result one finds polarization correlations between the other companion photons from the pairs whose paths nowhere crossed each other even when no polarization measurements have been carried out on the former photons. The latter set of photons that nowhere interacted are therefore event-ready prepared by their pair-companion photons. The result reveals nonlocality as a property of selection which can even be a preselection. It also reveals that one can predict spin-correlated behaviour of photons in space and at beam splitters by controlling a no-spin observable. [From the Book of Abstracts as appeared at the Workshop.] Abstract. All Bell experiments carried out so far have had ten or more times fewer coincidence counts than singles counts and this, in effect, means a detection efficiency under 10%. Therefore, all these experiments relied only on coincidence counts and herewith on additional assumptions. Recently, however, Santos devised hidden variable models which do not obey the assumptions and thus made the experiments inconclusive. This, as well as recent improvements in detectors efficiencies, prompted an increasing interest in the loophole-free Bell experiments which do not rely on additional assumptions and which originally stem from the idea of the event-ready detectors (introduced by J.S. Bell) which would preselect Bell pairs ready for detection. Till recently it was assumed that such detectors would distort the pairs. Here we devise those that would not do so and propose an experiment which can realistically improve the detection efficiency and visibility up to over 80%. The set-up uses two nonlinear crystals of type-II both of which simultaneously downconvert a singlet-like pair. We combine one photon from the first singlet with one from the second singlet at a beam splitter and consider their coincidence detections. Detectors determine optimally narrow solid angles for the downconverted photons. However, for their two companions (from each singlet) we use five times wider solid angles or even drop pinholes altogether and resort to frequency filters. So, we are able to realistically collect close to 100% of them. The latter pairs-preselected by coincidence detection at the beam splitter-appear entangled in (non)maximal singlet-like states, i.e., detectors at the beam splitter act as event-ready detectors for such Bell pairs. Introduction Although many convincing EPR (Einstein-Podolsky-Rosen) experiments violating the local hidden variable models and various forms of Bell inequalities were performed in the past thirty years, an experiment involving no supplementary assumptions-usually called a loophole-free experiment-is still waiting to be carried out. Until recently loophole-free experiments were not considered because they require very high detection efficiency [4] and all experiments carried out till now have had an efficiency under 10% [10,15]. On the other hand, the most important supplementary assumption, the no enhancement assumption and the corresponding postselection method were considered to be very plausible. 
Then Santos devised [22][23][24][25] local hidden-variable models which violate not only the low detection loophole but also the no enhancement assumption as well as post-selection loophole, and these models, as well as considerable improvements in techniques, in particular, detector efficiencies, resulted in an interest into loophole-free experiments. In the past two years several sophisticated proposals appeared which rely on the recent improvement in the detection technology and meticulous elaborations of all experimental details. [6,11,12,14,18] The first three use maximal superpostions and require detection efficiency of at least 83% [7] and the other two use nonmaximal superpositions relying on recent results [5,19,20] which require only 67% detection efficiency for them. All proposals are very demanding and at the same time all but the last proposal invoke a postselection which is also a supplementary assumption. [25] In this paper we analyze several supplementary assumptions and propose a feasible method of doing a loophole-free Bell experiment which requires only 67% detection efficiency, can work with a realistic visibility, and uses a preselection method for preparing non-maximally entangled photon pairs. The preselection method is particularly attractive for its ability to employ solid angles of signal and idler photons (in a downconversion process in a nonlinear crystal) which differ up to five times from each other. This enables a tremendous increase in detection efficiency-from 10% to over 80%-as elaborated below. Bell inequalities and their supplementary assumptions As we mentioned in the introduction the recent revival of the Bell issue has been partly triggered by new types of local hidden variables devised by Santos [22,25] which made all experiments carried out so far inconclusive. However, from the very first Bell experiments it was clear that one day a conclusive loophole-free experiment must be carried out. [3,4] At the time, such experiments were far from being feasible and as a consequence all experiments so far relied on coincidental detections and on an assumption that a subset of a total set of events would give the same statistics as the set itself . In other words no real experiment so far dealt with proper probabilities, i.e., with ratios of detected events to copies of the physical system initially prepared . [12] Let us see this point in some more details, first, for the Clauser-Horne [3] form of the Bell inequality, and then for Hardy's equality. [9] We consider a composite system containing two subsytems in a (non)maximal superposition. When a property is being measured on subsystem i by detector D i , which has got an adjustable parameter a i corresponding to the property, the probability of an independent firing of one of the two detectors is p(a i ) = N (a i )/N (i = 1, 2) and of simultaneous triggering of both detectors is p(a 1 , is the number of coincident counts, and N is the total number of the systems the source emits. Let a classical hidden state λ determine the individual counts and the probabilities of individual subsystems triggering the detectors: p(λ, a i ) and p(λ, a 1 , a 2 ). These probabilities are connected with the above introduced long run probabilities by means of: where Γ is the space of states λ and ρ(λ) is the normalized probability density over states λ. 
The locality condition, which assumes that the probability of one of the detectors being triggered does not depend on whether the other one has been triggered or not, can be formalized as p(λ, a_1, a_2) = p(λ, a_1) p(λ, a_2). Clauser-Horne's form of the Bell inequality then reads:
−1 ≤ p(λ, a_1, a_2) − p(λ, a_1, a_2') + p(λ, a_1', a_2) + p(λ, a_1', a_2') − p(λ, a_1') − p(λ, a_2) ≤ 0.   (1)
The experiments carried out so far invoked the no-enhancement assumption p(λ, a_i) ≤ p(λ, ∞) (where ∞ means that the filter for the property corresponding to parameter a_i is switched off), wherewith Eq. (1), after multiplication by ρ(λ) and integration over λ, yields an inequality containing only coincidence terms:
p(a_1, a_2) − p(a_1, a_2') + p(a_1', a_2) + p(a_1', a_2') − p(a_1', ∞) − p(∞, a_2) ≤ 0.
Thus, because of the low detection efficiency, all the experiments performed till now measured nothing but the above ratios. Then Santos devised [22,23,25] hidden variables based on p(λ, a_i) > p(λ, ∞) and left us only with the loophole-free option
−1 ≤ p(a_1, a_2) − p(a_1, a_2') + p(a_1', a_2) + p(a_1', a_2') − p(a_1') − p(a_2) ≤ 0,
in which all probabilities are proper probabilities. The above cited loophole-free proposals used the right inequality, which requires 83% detection efficiency for maximal superpositions and 67% detection efficiency for nonmaximal ones. The left inequality always requires 83% detection efficiency, but it makes clear that if we want a loophole-free experiment we must always either register or preselect practically all the systems the source emits in order to obtain proper probabilities, i.e., ratios of detected events to the number of emitted systems. An excellent test which immediately shows whether a particular experiment can be loophole-free is to see whether we can obtain p(a_1) ≈ p(a_1, ±) ≈ p(a_1, ∞), where '±' means that a two-channel filter (corresponding to property a and property non-a), e.g., a birefringent prism, is used, and '∞' means that the filter has been taken out altogether. Unfortunately, all experiments carried out so far have p(a_1) > 10 p(a_1, ∞). This applies to other approaches as well. E.g., Ardehali's additional assumptions [1,2] are weaker than the no-enhancement assumption, but that does not help us in obtaining the proper probabilities. The latter is also true for the Hardy equality experiment recently carried out by Torgerson, Branning, Monken, and Mandel [27], although they misleadingly claim that their "method does not depend on the use of detectors with high or even known quantum efficiencies." [26] Let us look at the experiment in some detail.
Experiment
A schematic representation of the experiment is shown in Fig. 1. Two independent type-II crystals (BBO) act as two independent sources of two independent singlet pairs. Two photons, one from each pair, interfere at an asymmetrical beam splitter BS, and whenever they emerge from its opposite sides, pass through polarizers P1' and P2', and fire the detectors D1' and D2', they open the gate (activate the Pockels cells) which preselects the other two photons into a nonmaximal singlet state. We achieve the high efficiency (over 80%) by choosing optimally narrow solid angles determined by the openings of D1' and D2' and five times wider solid angles determined by D1 and D2. [A type-II crystal, as a source of only one singlet pair [8,15], suffers from low efficiency (at most 10% [15]) due to necessarily symmetric detector solid angles.] An ultrashort laser beam (a subpicosecond one) of frequency ω_0, split by a beam splitter, simultaneously pumps the two nonlinear type-II crystals, producing in each of them intersecting cones of mutually perpendicularly polarized signal and idler photons of frequencies ω_0/2, as shown in Fig. 2.
[Figure 2: Photons from the cones are mutually perpendicularly polarized and therefore the photons from the intersections of the cones are in a singlet-like state.]
We choose pinhole ph five times bigger than the other one (determined by detectors D1' and D2') so that for each photon which passes through the latter pinhole, its companion photon will pass through ph. The idler and signal photon pairs coming out from the crystals do not have definite phases and therefore cannot exhibit second-order interference. However, they do appear entangled along the cone intersection lines because one cannot know from which cone each photon comes. By an appropriate preparation one can entangle them in a singlet-like state. [15] Their state is therefore a product of two such singlet-like two-photon states, one from each crystal. The outgoing electric-field operators describing photons which pass through beam splitter BS and through polarizers P1' and P2' (oriented at angles θ_1' and θ_2', respectively) and are detected by detectors D1' and D2' can then be written down (see Fig. 3); in these expressions t_x^2, t_y^2 are transmittances, r_x^2, r_y^2 are reflectances, t_j is the time delay after which photon j reaches BS, τ_1' is the time delay between BS and D1', and ω_j is the frequency of photon j. The annihilation operators act as follows: â_1x |1_x⟩_1' = |0_x⟩_1', â_1x |0_x⟩_1' = 0. E_2' is defined analogously. The operators describing photons which pass through polarizers P1 and P2 (oriented at angles θ_1 and θ_2, respectively) and through the Pockels cells and are detected by detectors D1 and D2 are obtained in the same way; E_2 is defined analogously. The probability of detecting all four photons by detectors D1, D2, D1', and D2' then follows, where η is the detection efficiency, A = Q(t)_11' Q(t)_22' and B = Q(r)_12' Q(r)_21', with Q(q)_ij = q_x sin θ_i cos θ_j − q_y cos θ_i sin θ_j, and φ = (k_2 − k_1)·r_1 + (k_1 − k_2)·r_2 = 2π(z_2 − z_1)/L, where L is the spacing of the interference fringes (see Fig. 3). φ can be changed by moving the detectors transversally to the incident beams. Data for this expression are collected by detectors D1' and D2', whose openings are not points but have a certain width ∆z. Therefore, in order to obtain a realistic probability we integrate Eq. (7) over z_1 and z_2 across ∆z, which introduces the factor v = [sin(π∆z/L)/(π∆z/L)]^2, the visibility of the coincidence counting. We assume near-normal incidence at BS so as to have r_x^2 = r_y^2 = R and t_x^2 = t_y^2 = T = 1 − R. Next we assume a symmetric position of detectors D1' and D2' with respect to BS and to the photon paths from the middle of the crystals so as to obtain φ = 0. [17,21] Representing photons by a Gaussian amplitude distribution of energies, we have shown in Ref. [21] that the visibility is reduced when the condition ω_1' = ω_2' is not perfectly matched and when the coincidence detection time is not much smaller than the coherence time. We meet the latter demand by using a subpicosecond laser pump beam and the former by reducing the size of the detector (D1' and D2') pinholes. By reducing the size of the detector pinholes we reduce the number of events detected by D1 and D2, but, on the other hand, this enables us to increase the visibility of the Bell pairs at D1 and D2 by sizing pinholes ph (see Fig. 2) so as to make their solid angles five times wider than those of the D1' and D2' pinholes. (Cf. Joobeur, Saleh, and Teich. [13]) Alternatively, we can put ω_0/2 filters (ω_0 is the frequency of the pumping beam) in front of detectors D1 and D2 and drop the pinholes ph altogether.
Analogously, the singles-probability of detecting a photon by D2 is Introducing the above obtained probabilities into the Clauser-Horne inequality (2) we obtain the following minimal efficiency for its violation. This efficiency is a function of visibility v and by looking at Eqs. (11), (12), and (13) we see that for each particular v a different set of angles should minimize it. A computer optimization of angles-presented in Fig. 4-shows that the lower the reflectivity is, the lower is the minimal detection efficiency. Also, we see a rather unexpected property that a low visibility does not have a significant impact on the violation of the Bell inequality. For example, with 70% visibility and 0.2 reflectivity of the beam splitter we obtain a violation with a lower detection efficiency than with 100% visibility and 0.5 (ρ = 1) reflectivity. A similar calculation can be carried out for the Hardy equalities given at the and of Sec. 2. It can be shown that the lowest possible R, with only 5-10 standard deviations, should be taken and not the one which gives the greatest P (θ 1 , θ 2 ) > 0, again because the impact of a low visibility is the lowest when the beam splitter is the most asymmetrical. Thus our preselection scheme can be used for a loopholefree "Hardy experiment" as well. Conclusion Our elaboration shows that the recently found four-photon entanglement [18,21] can be used for a realization of loophole-free Bell experiments. We propose a set-up which uses two simultaneous type-II downconversions into two singlet-like photon pairs. By combining two photons, one from each such singlet-like pair, at an asymmetrical beam splitter and detecting them in coincidence we preselect the other two completely independent photons into another singlet-like state-let us call them 'Bell pair '. (See Figs. 1 and 3.) Our calculations show that no time or space windows are imposed on the Bell pairs by the preselection procedure and this means that we can collect the photons within an optimal solid angle. If we take their solid angles five times wider than the angles of preselector photons (determined by the openings of detectors D1' and D2'-see Fig. 1), then we can collect all Bell pairs and at the same time keep a probability of the "third party" counts negligible. For our set-up we can use the result presented in Fig. 4 which enables a conclusive violation of Bell's inequalities with a detection efficiency lower than 80% even when the visibility is under 70% at the same time. If we, however, agree that it is physically plausible to take into account only those Bell pairs which are preselected by actually recorded detections at the beam splitter (firing of D1' and D2'), then we can eliminate the low visibility impact altogether. In this case, we can set v = 1 and for a different set of angles obtain a conclusive violation of Bell's inequalities and Hardy's equalities with still lower (under 70%) detection efficiency. In the end, we stress that the whole device can also be used for delivering ready-made Bell pairs in quantum cryptography and quantum computation and communication.
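As a numerical cross-check of the efficiency thresholds quoted above (83% for maximal and 67% in the limit of strongly nonmaximal superpositions), the following Python sketch scans analyzer angles for the state cos θ|HH⟩ + sin θ|VV⟩ using the proper-probability Clauser-Horne combination with a detection efficiency η per detector. It is a generic textbook-style calculation assuming perfect visibility and no beam-splitter preselection; it does not reproduce Eqs. (11)-(13) or the optimization shown in Fig. 4, and the optimizer settings are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Quantum predictions for |psi> = cos(th)|HH> + sin(th)|VV>, ideal visibility,
# single-channel polarizers at angles (a, b); eta is the per-detector efficiency.
def p_coinc(th, a, b):
    return (np.cos(th) * np.cos(a) * np.cos(b) + np.sin(th) * np.sin(a) * np.sin(b)) ** 2

def p_single(th, a):
    return np.cos(th) ** 2 * np.cos(a) ** 2 + np.sin(th) ** 2 * np.sin(a) ** 2

# Violation of  eta^2 [P(a,b) - P(a,b') + P(a',b) + P(a',b')] - eta [P_A(a') + P_B(b)] <= 0
# requires eta > singles-sum / coincidence-combination, so the critical efficiency is:
def eta_crit(angles, th):
    a, ap, b, bp = angles
    num = p_single(th, ap) + p_single(th, b)
    den = p_coinc(th, a, b) - p_coinc(th, a, bp) + p_coinc(th, ap, b) + p_coinc(th, ap, bp)
    return num / den if den > 1e-9 else 10.0

def min_eta(th, restarts=40):
    rng = np.random.default_rng(1)
    best = 10.0
    for _ in range(restarts):
        x0 = rng.uniform(-np.pi / 2, np.pi / 2, 4)
        res = minimize(eta_crit, x0, args=(th,), method="Nelder-Mead",
                       options={"xatol": 1e-6, "fatol": 1e-9, "maxiter": 2000})
        best = min(best, res.fun)
    return best

for th in np.radians([45, 30, 20, 10, 5, 2]):
    print(f"theta = {np.degrees(th):4.1f} deg   minimal eta = {min_eta(th):.3f}")
# Expected behaviour: about 0.828 for the maximally entangled state (45 deg),
# decreasing towards 2/3 ~ 0.667 as the superposition becomes strongly nonmaximal.
```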
2014-10-01T00:00:00.000Z
1999-07-09T00:00:00.000
{ "year": 1999, "sha1": "1487c7bc7a99ce682944fd61e6181a08729e773d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "cf103b0681c7e2bfa73b80f547ed898e97ea5b54", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
221368910
pes2o/s2orc
v3-fos-license
Hypoglycemic and hypolipidemic activity of combined milk thistle and fenugreek seeds in alloxan-induced diabetic albino rats Background and Aim: Despite the availability of antidiabetic drugs, they are not free from associated adverse side effects. This study aimed to evaluate the hypoglycemic and hypolipidemic effects of oral administration of seeds from two medicinal plants: (1) Milk thistle and (2) fenugreek. Materials and Methods: Plant seeds were washed in distilled water and ground with a coffee grinder. Alloxan was used to induce diabetes in 20 male albino rats. Diabetic rats were randomly divided into two groups: (1) Group 1 (n=10), diabetic rats fed with 0.5 g/kg milk thistle and 2 g/kg fenugreek seeds per day and (2) Group 2 (n=10), diabetic rats fed standard rodent food for 4 weeks. Results: Oral administration of milk thistle and fenugreek seeds for 2 weeks resulted in significant improvement in body weight, blood glucose, glycosylated hemoglobin (HbA1c), cholesterol, and triglyceride levels in alloxan-induced diabetic rats. After 4 weeks, this ameliorative effect was significantly elevated with respect to blood glucose (155.00±9.70 mg/dL vs. 427.50±5.70 mg/dL; p<0.001), HbA1c (5.5±0.19% vs. 13.65±1.77%; p<0.001), cholesterol (281.50±10.95 mg/dL vs. 334.30±6.80 mg/dL; p<0.001), triglyceride (239.60±6.87 mg/dL vs. 284.20±9.95 mg/dL; p<0.01), and body weight (265.30±8.10 g vs. 207.40±11.4 g; p<0.01) as compared with non-treated diabetic rats. Conclusion: Milk thistle and fenugreek seeds possess hypoglycemic and hypolipidemic properties and could be used as natural compounds that are suitable as parent compounds for the development of new antidiabetic drugs. Introduction Diabetes mellitus (DM) is a chronic complex metabolic disorder that occurs in response to complete or insufficient cessation of insulin secretion or synthesis and/or insulin peripheral resistance causing disturbances in carbohydrate, proteins, and fat metabolism [1]. An estimated 425 million adults worldwide have DM, and this number is predicted to rise to 629 million by 2045. This increase in the prevalence of DM will cause large social and economic burden, especially in low-to middle-income countries, where about 75% of people with DM live [2]. Although different types of antidiabetic drugs are available and most are effective in providing long-term glycemic control [1], they are not free from some associated adverse side effects such as flatulence, cramps, diarrhea, nausea, and gastrointestinal irritation [3]. In addition, prolonged use of these drugs results in a response deficiency [4]. For example, after 6 years of sulfonylurea treatment, the effectiveness of the drug is insufficient in 44% of patients [5]. Thus, there is an urgent need to explore options that include traditional medicinal plants with no side effects for DM management. Silybum marianum, or milk thistle, belongs to the family Carduus marianum and has been known for more than 2000 years to be an herbal remedy used for a variety of disorders [6]. The components of this plant scavenge free radicals to protect the body against oxidative peroxidation. In addition to its antioxidant, anti-inflammatory, and anticancer properties, it is regarded as a potent agent against diabetes-induced hyperglycemia and insulin resistance [7]. 
In addition, fenugreek, which belongs to the Fabaceae family, is reported to have neuroprotective, antioxidant, antilithogenic potential, antihyperlipidemic, and a stimulating/regenerating effect on β-cells and antidiabetic effects [8,9]. The hypoglycemic properties of fenugreek seeds have been demonstrated in experimentally induced diabetic rats, diabetic patients, and healthy volunteers [8,10]. Taken together, medicinal plants are useful dietary supplements to existing therapies as well as provide oral antidiabetic bioactive compounds for new pharmaceutical development [8]. Although various kinds of medicinal plants have been reported to have hypoglycemic and hypolipidemic effects, these plants have failed to achieve greater effectiveness. Therefore, the aim of this study was to investigate the effect of oral administration of combined Available at www.veterinaryworld.org/Vol.13/August-2020/35.pdf milk thistle with fenugreek seeds on body weight, blood glucose, glycosylated hemoglobin (HbA1c), cholesterol, and triglyceride levels of alloxan-induced diabetic rats. Ethical approval All animal experiments were performed in accordance with the guidelines of the National Council for Animal Experimentation Control and the Ethical Committee approval was obtained from ethical committee of Middle East University, Jordan. Study period and location The study was conducted from May 2019 to December 2019 at Middle East University, Amman, Jordan. Plant material Fenugreek seeds were purchased from a local supermarket of Syrian origin, and milk thistle was purchased from Frontier Herbs. Seeds were washed in distilled water and ground with a coffee grinder to an average particle diameter of 0.3 mm. Animals and induction of experimental diabetes Twenty male albino rats weighing between 250 and 300 g were obtained from the Faculty of Pharmacy, Middle East University, Amman, Jordan. Before initiating the experiments, rats were fed standard rodent food for 1 week for acclimation to the laboratory conditions. Diabetes induction in these rats was carried out 4 weeks before the start of the experiment. Immediately before use, alloxan monohydrate (Sigma-Aldrich Chemical) was dissolved in sterile normal saline. The dose of alloxan for 160 mg/kg body weight was injected intraperitoneally to all male adult albino rats after 6 h. For the next 24 h, the rats were maintained on 5% glucose solution bottles in their cages to prevent hypoglycemia [11]. Fasting blood glucose values >7 mmol/L (126 mg/dL) were considered diabetic [12]. Experimental design Before the diabetic rats were feed milk thistle and fenugreek seeds, their body weight was recorded and blood samples were collected from the tail vein to estimate blood glucose, HbA1c, cholesterol, and triglyceride levels. The 20 diabetic rats were randomly divided into two groups (10 rats/group) as follows: Group 1 (n=10), diabetic rats fed daily with 0.5 g milk thistle and 2 g fenugreek seeds per 1 kg of body weight per day (~0.5 g fenugreek/rat with 0.125 g milk thistle/ rat/day) for 4 weeks by mixing the ground fenugreek and milk thistle with digestive biscuits (sugar free), which were prepared as dough (1.5 g) and Group 2 (n=10), diabetic rats fed standard rodent food without milk thistle and fenugreek seeds for 4 weeks. The body weight of the rats was recorded at the 2 nd week and at the end of the experiment (4 th week), and blood samples were collected to measure glucose, HbA1c, cholesterol, and triglyceride levels. 
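For reference, the per-animal amounts quoted above (about 0.5 g fenugreek and 0.125 g milk thistle per rat per day) follow directly from the per-kilogram doses. The short Python sketch below shows the conversion, including the alloxan dose; the body weights are illustrative values within the 250-300 g range stated above.

```python
# Dose conversion sketch: per-kg doses from the protocol scaled to a single rat.
# The body weights used below are illustrative (rats weighed 250-300 g).
ALLOXAN_MG_PER_KG = 160.0      # intraperitoneal, single injection
FENUGREEK_G_PER_KG = 2.0       # oral, per day
MILK_THISTLE_G_PER_KG = 0.5    # oral, per day

def daily_doses(body_weight_kg):
    return {
        "alloxan_mg": ALLOXAN_MG_PER_KG * body_weight_kg,
        "fenugreek_g": FENUGREEK_G_PER_KG * body_weight_kg,
        "milk_thistle_g": MILK_THISTLE_G_PER_KG * body_weight_kg,
    }

for bw in (0.25, 0.30):
    d = daily_doses(bw)
    print(f"{bw * 1000:.0f} g rat: alloxan {d['alloxan_mg']:.0f} mg, "
          f"fenugreek {d['fenugreek_g']:.2f} g/day, milk thistle {d['milk_thistle_g']:.3f} g/day")
# A 250 g rat receives ~0.5 g fenugreek and ~0.125 g milk thistle per day, as stated above.
```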
Blood collection Blood samples were collected from the tail vein, during which a large volume of blood (up to 2 mL/withdrawal) was drawn. Briefly, local anesthetic cream was applied to the surface of the tail for 30 min, and then, the tail was dipped into warm water (40°C). A 23G syringe with a needle was inserted into the vein, and blood was collected in the EDTA Vacutainer tubes [13]. HbA1c test HbA1c was determined using whole EDTA blood. In the test samples, HbA1c was absorbed onto the surface of the latex particles, which reacted with anti-HbA1c (antigen-antibody reaction) to provide agglutination. The amount of agglutination was measured as the absorbance. The HbA1c value was obtained from a calibration curve. The procedure is described in the insert (Spectrum, Egypt). Biochemical analysis Plasma was separated by centrifugation at 3000 rpm for 10 min and then subjected to biochemical analysis. The plasma sample was used for the quantitative determination of glucose using the enzymatic colorimetric method (glucose oxidase-peroxidase), of cholesterol using the enzymatic colorimetric method (PAP), and of triglycerides using the enzymatic colorimetric method (glycerol-3-phosphate oxidase) level in blood using commercial kits (Arcomex, Jordan). The procedure followed the instructions described in the kits [14]. Statistical analysis Data were presented as mean±SD of three parallel measurements. Statistical significance was assessed by t-test (using two-tailed distribution) using SPSS software version 20.0 (SPSS Inc., Chicago, IL). p<0.05 was set as statistically significant. Results Diabetes induction with alloxan was associated with body weight loss and elevated levels of glucose, HbA1c, cholesterol, and triglyceride levels. In contrast to pre-treatment findings (Figure-1), oral administration of 0.5 g/kg milk thistle and 2 g/kg fenugreek seeds per day (~0.5 g fenugreek/ rat with 0.125 g milk thistle/rat per day) for 2 weeks resulted in a significant reduction in blood glucose (271.80±35.60 vs. 415.80±29. 10 Figure-3; Table-1). Table-2 presents the decrease or increase in the percentage of body weight, glucose, HbA1c, cholesterol, and triglycerides in the treated diabetic rats as compared with the diabetic control group. Discussion DM is a growing health problem in most countries. It is a major and chronic endocrine disorder caused by acquired and/or inherited deficiency in insulin production by the pancreas or by secreted insulin ineffectiveness. In addition, DM is associated with many complications, such as neuropathy, kidney disease, retinopathy, and heart disease [15]. Although several drugs are used to reduce hyperglycemia, such as α-glucosidase inhibitors, metformin, and sulfonylureas, diabetes and its linked complications still constitute important medical problems [16]. Many natural medicinal plants and herbs have strong antidiabetic properties and are safe, relatively non-toxic, and even free from serious side effects [9]. Results of this study revealed that in alloxan-induced diabetic rats, oral administration of 0.5 g/kg milk thistle and 2 g/kg fenugreek seeds per day for 2 weeks caused a significant reduction in blood glucose, HbA1c, cholesterol, and triglyceride levels and a significant improvement the body weight. This ameliorative effect was significantly elevated after 4 weeks of oral administration of seeds. These results further suggest that milk thistle and fenugreek seeds could be used as a potential treatment for diabetes. 
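As an illustration of the statistical comparison described above, the following Python sketch recomputes the two-tailed two-sample t-test and the percentage change for the 2-week blood glucose values reported in this section. It assumes that the reported figures are group means with standard deviations and that n = 10 rats per group, as in the study design; it is a recomputation from summary statistics, not the original SPSS analysis.

```python
from scipy import stats

# Reported 2-week blood glucose (mean +/- SD, mg/dL); n assumed to be 10 rats per group.
treated = {"mean": 271.80, "sd": 35.60, "n": 10}   # milk thistle + fenugreek group
control = {"mean": 415.80, "sd": 29.10, "n": 10}   # untreated diabetic group

# Two-tailed two-sample t-test recomputed from the summary statistics.
t, p = stats.ttest_ind_from_stats(treated["mean"], treated["sd"], treated["n"],
                                  control["mean"], control["sd"], control["n"],
                                  equal_var=True)
pct_change = 100.0 * (treated["mean"] - control["mean"]) / control["mean"]
print(f"t = {t:.2f}, p = {p:.2e}, change vs. diabetic control = {pct_change:.1f}%")
```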
The results of many studies using diabetic experimental models have demonstrated that milk thistle exerts antidiabetic effects [17][18][19].
Figure-2: After 2 weeks of oral administration of 0.5 g/kg milk thistle and 2 g/kg fenugreek seeds per day, there was a significant improvement in the body weight, blood glucose, glycosylated hemoglobin, serum cholesterol, and serum triglyceride in alloxan-induced diabetic rats.
Table-1: Changes in body weight, blood glucose, HbA1c, serum cholesterol, and serum triglyceride in alloxan-induced diabetic rats after oral administration of 0.5 g/kg milk thistle and 2 g/kg fenugreek seeds per day (columns: Body weight (g), Glucose (mg/dL), HbA1c (%), Cholesterol (mg/dL), Triglyceride (mg/dL)).
It was suggested that by triggering the gut-brain-liver axis, milk thistle has functional potential as an antidiabetic food ingredient. In that study [19], a significant reduction of blood glucose was found in diabetic rats administered milk thistle for 4 weeks. In addition, the activation of neurons in the nucleus of the solitary tract and the expression of the glucagon-like peptide-1 receptor in the duodenum increased, whereas hepatic glucose production decreased, after milk thistle administration [19]. On the other hand, thiazolidinediones, whose molecular target is peroxisome proliferator-activated receptor γ (PPAR-γ), are used clinically as insulin sensitizers to lower blood glucose levels in diabetic patients. A substance from milk thistle has been shown to possess PPAR-γ agonist properties. Studies indicated that partial PPAR-γ agonism induces promising activity patterns by retaining the positive effects attributed to full agonists, with reduced side effects [6]. Moreover, in hypercholesterolemic rats, treatment with milk thistle had a significant ameliorative effect on the lipid profile: it decreased both serum and hepatic total cholesterol, triglycerides, very low-density lipoprotein cholesterol, and low-density lipoprotein cholesterol and increased high-density lipoprotein cholesterol [17]. Milk thistle has also demonstrated beneficial effects on several diabetic complications, including non-alcoholic steatohepatitis, diabetic nephropathy, and diabetic neuropathy, mainly by means of its antioxidant properties [6]. Preliminary human [20,21] and animal [22][23][24] trials have suggested possible hypoglycemic and antihyperlipidemic properties of oral fenugreek seeds. Fenugreek seeds have also previously been shown to have hypocholesterolemic and hypoglycemic effects in diabetic patients and experimental diabetic animals [25]. Like insulin, fenugreek induces phosphorylation of the insulin tyrosine kinase receptor in adipocytes and liver cells [26]. In addition, a circulating antioxidant activity of fenugreek has been reported, reflected in a significant decrease in lipid peroxide levels, which exerts beneficial effects on the increased oxidative stress in diabetic patients [27,28]. Phytochemical screening demonstrated that fenugreek contains trigocoumarin, trigonelline, and trimecoumarine alkaloids, which have antihyperglycemic effects [28]. Fenugreek seed fibers also decrease the glucose absorption rate and may delay gastric emptying, thereby preventing a rise in blood glucose levels after a meal [29]. In diabetic patients, fenugreek guar gum prevents the rapid uptake of glucose in the small intestine, aids in blood glucose regulation, and may also be effective in the treatment of hypercholesterolemia [30].
Moreover, seed fibers contain an amino acid, 4-hydroxyisoleucine, that stimulates insulin secretion, because the cells are more sensitive to insulin, and increases the number of insulin receptor sites to burn cellular glucose [31]. Fenugreek seed extracts have also been reported to exhibit antidiabetic potential by protecting β-cells and restoring the function of pancreatic tissue, elevating the serum insulin level, possibly through stimulation of insulin release from existing β-cells of islets or by β-cell regeneration, and stimulating glycogen synthetase activity [9]. Conclusion Our study demonstrated that the oral administration of dried milk thistle and fenugreek seeds is associated with hypoglycemic and hypolipidemic effects and they can be used as natural compounds suitable for the development of new antidiabetic drugs. Author's Contributions MS designed the study, drafting the manuscript, performed all the experimental procedures, and conducted data analysis and interpretation. The author read and approved the final version of the manuscript.
2020-08-29T11:38:43.851Z
2020-08-01T00:00:00.000
{ "year": 2020, "sha1": "dd5b4c76f80eb05603ec2dfe6a36c5b50f426611", "oa_license": "CCBY", "oa_url": "http://www.veterinaryworld.org/Vol.13/August-2020/35.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "dd5b4c76f80eb05603ec2dfe6a36c5b50f426611", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
226192652
pes2o/s2orc
v3-fos-license
Polyethylene glycol 35 ameliorates pancreatic inflammatory response in cerulein-induced acute pancreatitis in rats BACKGROUND Acute pancreatitis (AP) is a sudden inflammatory process of the pancreas that may also involve surrounding tissues and/or remote organs. Inflammation and parenchymal cell death are common pathological features of this condition and determinants of disease severity. Polyethylene glycols (PEGs) are non-immunogenic, non-toxic water-soluble polymers widely used in biological, chemical, clinical and pharmaceutical settings. AIM To evaluate the protective effect of a 35-kDa molecular weight PEG (PEG35) on the pancreatic damage associated to cerulein-induced acute pancreatitis in vivo and in vitro. METHODS Wistar rats were assigned at random to a control group, a cerulein–induced AP group and a PEG35 treatment group. AP was induced by five hourly intraperitoneal injections of cerulein (50 μg/kg/bw), while the control animals received saline solution. PEG35 was administered intraperitoneally 10 minutes before each cerulein injection in a dose of 10 mg/kg. After AP induction, samples of pancreatic tissue and blood were collected for analysis. AR42J pancreatic acinar cells were treated with increasing concentrations of PEG35 prior to exposure with tumor necrosis factor α (TNFα), staurosporine or cerulein. The severity of AP was determined on the basis of plasma levels of lipase, lactate dehydrogenase activity, pancreatic edema and histological changes. To evaluate the extent of the inflammatory response, the gene expression of inflammation-associated markers was determined in the pancreas and in AR42J-treated cells. Inflammation-induced cell death was also measured in models of in vivo and in vitro pancreatic damage. RESULTS Administration of PEG35 significantly improved pancreatic damage through reduction on lipase levels and tissue edema in cerulein-induced AP rats. The increased associated inflammatory response caused by cerulein administration was attenuated by a decrease in the gene expression of inflammation-related cytokines and inducible nitric oxide synthase enzyme in the pancreas. In contrast, pancreatic tissue mRNA expression of interleukin 10 was markedly increased. PEG35 treatment also protected against inflammation-induced cell death by attenuating lactate dehydrogenase activity and modulating the pancreatic levels of apoptosis regulator protein BCL-2 in cerulein hyperstimulated rats. Furthermore, the activation of pro-inflammatory markers and inflammation-induced cell death in pancreatic acinar cells treated with TNFα, cerulein or staurosporine was significantly reduced by PEG35 treatment, in a dose-dependent manner. CONCLUSION PEG35 ameliorates pancreatic damage in cerulein-induced AP and AR42J-treated cells through the attenuation of the inflammatory response and associated cell death. PEG35 may be a valuable option in the management of AP. INTRODUCTION Acute pancreatitis (AP) is an inflammatory disease of the exocrine pancreas characterized by abnormal intracellular activation of proteolytic enzymes. Parenchymal injury, pancreatic acinar cell death and an intense inflammatory reaction are common pathological features of this condition and determine the severity of the disease [1] . A majority of patients presenting with AP have the mild form of the disease, which is mostly self-limited and consists of the appearance of edema and inflammation of the pancreas [2] . 
In this group, organ failure and local complications are generally not observed, and the disease usually resolves in the first week. However, between 20% and 30% develop a severe form requiring intensive care unit admission, which is often associated with local and systemic complications and, in some occasions, leads to death [3] . To date, no drug is available to prevent or treat this condition, and any improved clinical outcomes have mostly been due to continuous October 21, 2020 Volume 26 Issue 39 advancement of various supportive treatments. Although pancreatic inflammation may be firstly caused by acinar events such as trypsinogen activation, it finally depends on the subsequent stimulation of components of the innate immune system. The initial acinar cell damage triggers the release of pro-inflammatory cytokines and chemokines, leading to increase of microvascular permeability and subsequent formation of interstitial edema [4] . Activation of inflammatory cells then provokes the production of additional cytokines and other mediators that initiate the inflammatory response. These mediators recruit different types of leukocytes (first neutrophils, followed by macrophages, monocytes and lymphocytes) to the pancreas. In parallel to the pro-inflammatory response, an anti-inflammatory response is also released [5] . If the anti-inflammatory response is adequate, the local inflammation resolves at this stage. However, in some cases, an overwhelming pro-inflammatory response drives the migration of inflammatory mediators into systemic circulation, leading to distant organ dysfunction [6] . Polyethylene glycols (PEGs) are hydrophilic polymers comprised of repeating ethylene glycol units [7] . PEGs have several physicochemical properties that make it advantageous in diverse biological, chemical and pharmaceutical settings, especially in view of its low toxicity. For instance, these polymers have been found to exert beneficial effects in several in vivo and in vitro models of cell and tissue injury [8][9][10] . There are very few studies linking PEGs of different molecular weight with an antiinflammatory activity. In a model of traumatic inflammation, the intraperitoneal administration of 4-kDa molecular weight PEG prevented the formation of initial adhesions and reduced the leukocytes number in the peritoneal cavity as a consequence of an inflammatory peritoneal reaction [11] . Oral treatment with 4-kDa PEG in experimental colitis reinforced the epithelial barrier function and reduced the inflammation of the colon [12] . Likewise, in two different models of gut-derived sepsis, therapeutic administration of PEG reduced inflammatory cytokine expression and activation of neutrophils [13] . Our group has recently demonstrated an antiinflammatory role for PEG35 in an experimental model of severe necrotizing AP. In this sense, the therapeutic administration of PEG35 notably alleviated the severity of AP and protected against the associated lung inflammatory response [14] . Based on the protective features of PEGs, we now have evaluated the effects of PEG35 in experimental models of pancreatic damage in vivo and in vitro. Experimental animals and model of cerulein-induced AP All experimental animal proceedings were conducted according with European Union regulatory standards for experimentation with animals (Directive 2010/63/EU on the Protection of Animals Used for Scientific Purposes). 
The Ethical Committee for Animal Experimentation (CEEA, University of Barcelona, April 11, 2018, ethic approval number: 211/18) authorized all animal experimentation. The protocol was designed to minimize pain and discomfort to animals. Adult male Wistar rats (n = 21) weighing 200-250 g were purchased from Charles River (Boston, MA, United States) and accommodated in a controlled environment with free access to standard laboratory pelleted formula (A04; Panlab, Barcelona, Spain) and tap water. Rats were kept in a climate-controlled environment with a 12-h light/12-h dark cycle for one week. For the 12 h prior to the experiment of AP induction, rats were fasted with free access to drinking water. Rats were randomly selected and assigned to three equal groups: (1) Treated with saline, as controls (n = 7); and (2) Treated with cerulein, to induce AP (CerAP, n = 7) and (3) Treated with cerulein after a PEG35 pretreatment (PEG35 + CerAP, n = 7). Immediately before the first injection of PEG35 or saline, 0.05 mg/kg of buprenorphine was administered as an analgesic. Cerulein (Sigma-Aldrich, St. Louis, MO) was dissolved with phosphate-buffered saline (PBS) and administered intraperitoneally at a supramaximal stimulating concentration of 50 μg/kg/body weight (bw) at 1-h intervals (total of 5 injections); control animals received intraperitoneal saline solution with the same regime. The use of this supramaximal dosage of cerulein induce a transient form of interstitial edematous AP characterized by marked hyperamylasemia, pancreatic edema and neutrophil infiltration within the pancreas, as well as pancreatic acinar cell vacuolization and necrosis [15] . PEG35 was administered intraperitoneally at a dose of 10 mg/kg, 10 min prior to each cerulein injection. Immediately after the last injection of cerulein or saline, animals were euthanized by intraperitoneal injection of 40-60 mg/kg of sodium pentobarbital, and blood was collected from the vena cava in heparinized syringes. Harvested blood was centrifuged and the obtained plasma was stored at −80 °C until analysis. Four tissue samples from each animal were taken from the head of the pancreas. One portion of each tissue sample was immediately weighed and oven-dried for the wet-to-dry weight ratio calculation. Another portion was fixed in 10% phosphate−buffered formalin for histological analysis. The third portion was frozen and stored at −80 ºC for western blot analysis, and the last portion was saved in RNAlater solution for real-time qRT-PCR analysis. Histopathological examination Pancreas tissue was fixed in 10% phosphate-buffered formalin and then embedded in paraffin. 3-μm thickness sections were mounted on glass slides. Slides were dewaxed and rehydrated and stained with hematoxylin and eosin. Assessment of changes in the tissue was carried out by an experienced pathologist through the examination of different microscopic fields randomly chosen from each experimental group in a blinded manner. Pancreatic tissue sections were evaluated for the severity of pancreatitis based on edema, inflammatory infiltration, parenchymal necrosis, and vacuolation of acinar cells. Cell lines and treatments The rat pancreatic acinar AR42J cell line was purchased from Sigma (St. Louis, MI, United States). Cells were grown at 37 ºC in RPMI medium supplemented with 100 mL/L fetal bovine serum, 100 U/ml penicillin and 100 μg/mL streptomycin in a humidified atmosphere of 50 mL/L CO 2 . 
Acinar cells were plated at a density of 3 × 10 5 /well in 12-well culture plates, or at a density of 2 × 10 4 /well in 96-well plates, and allowed to attach for 24 h or 48 h. Cells were pretreated with PEG35 diluted in PBS, at a concentration of 0.5, 1, 2, 4, or 6% for 30 min prior to treatment with the appropriate stimuli: 2 µmol/L or 4 µmol/L staurosporine, 100 ng/mL TNFα or 10 nM cerulein. All three reagents were purchased from Sigma-Aldrich (St. Louis, MO, United States). Time points of 3 h were used for TNFα treatment, and of 24 h for the remaining stimuli. Lipase activity Plasma lipase activity levels were determined using a turbidimetric assay kit from Randox (County Antrim, Crumlin, United Kingdom), in accordance with the supplier's specifications. Briefly, the degradation of triolein by the pancreatic lipase results in lowered turbidity, which was determined in the sample at 340 nm using a microplate reader (iEMS Reader MF; Labsystems, Helsinki, Finland). The activity of the sample was obtained in U/L. All samples were run in duplicate. Pancreas wet-to-dry weight ratio Edema formation in the pancreas was evaluated by the determination of the wet-todry weight ratio. A portion of the pancreas was weighed. The content of water was measured by calculating the wet-to-dry weight ratio from the initial weight (wet weight) and its weight after incubation in an oven at 60 °C for 48 h (dry weight). Lactate dehydrogenase activity Lactate dehydrogenase (LDH) activity was measured in plasma samples and cell culture supernatants using the Lactate Dehydrogenase Assay Kit (Abcam; Cambridge, United Kingdom). Briefly, LDH reduces NAD to NADH, which interacts with a specific probe to produce colour. Changes in absorbance due to NADH formation were measured at 450 nm at 37 °C using an automated microplate reader (iEMS Reader MF; Labsystems, Helsinki, Finland). Sample activity was expressed in mU/mL. All samples were run in duplicate. The lower limit of detection for ELISA ranged from 14 to 36 mU/mL. MTT cell proliferation assay The cell proliferation was determined by measuring metabolic activity of the cells through the reduction of the tetrazolium dye MTT [3-(4,5-dimethylthiazol-2-yl)-2,5diphenyltetrazolium bromide] to its insoluble formazan. AR42J cells were seeded in 96-well plates at a density of 2 × 10 4 cells/well in 100 μL of culture medium with or without the compounds to be tested for 24 h. MTT reagent was added and incubated for 2 h at 37 °C, and the formazan produced in the cells formed dark crystals at the bottom of the wells. Crystal-dissolving solution was added and the absorbance of each sample was quantified at 570 nm using an automated microplate reader (iEMS Reader October 21, 2020 Volume 26 Issue 39 MF; Labsystems, Helsinki, Finland). All samples were run in duplicate. The absorbance intensity was proportional to the number of viable cells. Real-time qRT-PCR Total RNA from the pancreatic tissue and cultured cells was extracted with Nucleozol reagent (Macherey-Nagel, Dueren, Germany) in accordance with the manufacturer's protocol. RNA concentration and quality were measured with the OD A260/A280 ratio and the OD A260/A230 ratio, respectively. Reverse transcription was performed on a 1 µg RNA sample employing the iScript cDNA Synthesis Kit (Bio-Rad Laboratories, Hercules, CA, United States). 
PCR amplification was performed using SsoAdvanced™ Universal SYBR ® Green Supermix (Bio-Rad Laboratories, Hercules, CA, United States) on a CFX Real-Time PCR Detection System (Bio-Rad Laboratories, Hercules, CA, United States) using 10 µL of amplification mixture containing 50 ng of reverse-transcribed RNA and 250 nmol/L of the corresponding forward and reverse primers. PCR primers for the detection of interleukin (IL) 6, IL1β, IL10, inducible isoform of nitric oxide synthase (iNOS), or glyceraldehyde-3-phosphate dehydrogenase (GAPDH) were validated primers from BioRad (Hercules, CA, United States). PCR primers for tumor necrosis factor α (TNFα), designed with Primer3.0 plus [16] , were: TNFα forward, 5'-ATGGGCTCCCTCTCATCAGT-3' and reverse, 5'-GCTTG GTGGTTTGCTACGAC-3'. The specificity of amplicon was determined by melting curve analysis. Threshold cycle values were normalized to GAPDH gene expression and the ratio of the relative expression of target genes to GAPDH was calculated by the DCt formula. Western blot Pancreatic tissue was homogenized in ice-cold RIPA buffer. Lysates were then centrifuged at 15000 g for 20 min at 4 °C, and the supernatants were collected. Supernatant protein concentrations were measured using the Bradford protein assay (Bio-Rad Laboratories, Hercules, CA, United States). SDS-PAGE was performed on a 10% gel and proteins were transferred to a polyvinylidene difluoride membrane for blotting. Statistical analysis All data were exported into Graph Pad Prism 4 (GraphPad Software, Inc.) and presented as means ± SEM. Statistical analyses were carried out by one-way analysis of variance (ANOVA), followed by Tukey's multiple comparison test to determine the significance between pairs. The minimal level of statistical significance was considered to be < 0.05. PEG35 reduced the release of lipase associated with cerulein-induced AP Cerulein-induced AP in rats was associated with significant raised plasma levels of lipase, comparing with the control group, reflecting the degree of pancreatic injury ( Figure 1A). Such increase was significantly reduced in rats pre-treated with intravenous PEG35 at 10 mg/kg. PEG35 abrogated pancreatic edema following cerulein-induced AP As cerulein-induced pancreatitis is characterized by a progressive interstitial edema development, we analyzed the pancreas wet-to-dry weight ratio ( Figure 1B). A significant increase in the pancreas wet-to-dry weight ratio was observed in rats after AP induction with cerulein (7.865 ± 0.86) as compared to control rats (2.76 ± 0.28). However, this increase could be largely prevented by co-treatment with PEG35 in cerulein-treated rats, with a wet-to-dry weight ratio of 3.8 ± 0.85. PEG35 reduced local pancreatic tissue damage associated with cerulein-induced AP Histopathological results showed that cerulein hyperstimulated rats caused an interstitial edematous acute pancreatitis with considerable areas of interstitial edema, local necrosis, infiltrated polymorphonuclear neutrophils and vacuolation of the acinar October 21, 2020 Volume 26 Issue 39 cells. (Figure 1C). In the PEG35-treated group, there were consistent reductions in these characteristics. PEG35 ameliorated the expression of inflammatory markers in cerulein-induced AP and AR42J-treated cells Further, we explored whether PEG35 treatment improves the inflammatory response after cerulein hyperstimulation in rats, by measuring the gene expression of inflammatory mediators in the pancreas. 
Pancreatic tissue levels of IL6, IL1β, TNFα, IL10 and iNOS increased markedly in rats after AP induction as compared to that of control rats (Figure 2). Notably, PEG35 treatment significantly reduced the APinduced increases in IL1β, IL6 and iNOS. While TNFα expression levels showed a tendency to decrease, this was not statistically significant. Finally, as expected based on its anti-inflammatory role, the gene expression of the IL10 cytokine was not reduced in PEG35-treated animals. In addition to its in vivo effects, a direct anti-inflammatory effect of PEG35 was also observed in in vitro model. Specifically, in a model of cerulein-induced inflammation in the acinar cells using cultured AR42J cells, PEG35 attenuated the gene expression of the pro-inflammatory IL1β and TNFα in a dose-dependent manner ( Figure 3A). Additionally, TNFα-treated cells induced the production of iNOS as well as of TNFα itself, both of which were markedly reduced after the treatment with increasing concentrations of PEG35 ( Figure 3B). PEG35 lessened inflammation-associated cell death in cerulein-induced AP To investigate the potential protective effects of PEG35 on the pancreas, cell death was determined through LDH release and expression of the apoptosis-related proteins BCL-2 and cleaved-caspase-3 by Western blot. Indeed, a significant increment in LDH activity in plasma occurred in cerulein AP-induced animals ( Figure 4A). Notably, rats that had PEG35 co-treatment had significantly reduced levels of the LDH necrotic marker. The pancreatic levels of cleaved caspase-3 and BCL-2 were also markedly higher following cerulein-induced AP as compared to the control group, (Figure 4B and C). The administration of PEG35 promoted a further increase in the levels of antiapoptotic BCL-2 as compared with cerulein hyperstimulated rats while the reduction in the pro-apoptotic cleaved caspase-3 was not statistically significant. PEG35 reduced inflammation-associated cell death in models of pancreatic damage in vitro The decreased LDH activity observed in vivo in PEG35-treated animals led us to examine cell death in in vitro models of inflammation. AR42J cells are a wellestablished cell model for studying intracellular mechanisms involved in the cell death and inflammatory responses of acute pancreatitis. We therefore analysed whether PEG35 affected the release of LDH in AR42J cells in the presence of the proinflammatory stimulus cerulein or TNFα ( Figure 4D). Indeed, both cerulein and TNFαinduced cell death were significantly reduced by PEG35 in a dose-dependent manner. Likewise, PEG35 markedly prevented staurosporine-induced AR42J apoptotic cell death in a dose-dependent manner ( Figure 4E). These results suggest that PEG35 exerts a protective role against inflammation-induced cell death in vitro and in vivo. DISCUSSION Acute pancreatitis (AP) is an inflammatory disease that can have a mild to severe course. We have recently reported an anti-inflammatory role for PEG35 in a severe necrotizing AP experimental model. To further investigate the effect of this polymer in a milder form of the disease, we used a model of cerulein-induced mild edematous pancreatitis that is mainly characterized by a dysregulation of the production and secretion of digestive enzymes, interstitial edema formation, infiltration of neutrophil and mononuclear cells within the pancreas, cytoplasmic vacuolization and the death of acinar cells [17] . 
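For reference, the gene-expression changes summarized above were normalized to GAPDH using the ΔCt approach described in the Materials and Methods. The short Python sketch below illustrates one common reading of that calculation (ΔCt followed by a ΔΔCt fold change); the Ct values in it are invented placeholders, not measured data.

```python
# Illustrative Ct values only (NOT measured data): one target transcript (e.g. IL6)
# and the GAPDH reference, in control vs. cerulein-treated pancreas.
ct = {
    "control":  {"target": 28.0, "gapdh": 18.0},
    "cerulein": {"target": 24.5, "gapdh": 18.2},
}

def delta_ct(sample):
    # Normalization of the target gene to the GAPDH reference, as in the methods.
    return ct[sample]["target"] - ct[sample]["gapdh"]

ddct = delta_ct("cerulein") - delta_ct("control")   # delta-delta Ct vs. control
fold_change = 2.0 ** (-ddct)                        # relative expression vs. control
print(f"dCt(control) = {delta_ct('control'):.2f}, dCt(cerulein) = {delta_ct('cerulein'):.2f}")
print(f"fold change (cerulein vs. control) = {fold_change:.1f}x")
```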
We determined that PEG35 reduced the course of cerulein-induced AP by inhibiting the inflammatory response as well as inflammation-induced cell death. In our study, treating the animals with PEG35 significantly abrogated the severity of cerulein-induced AP, as indicated by the lessened activity of lipase in plasma and edema formation as well as histopathological features of AP in the PEG35-treated animals. A sudden inflammatory response in the pancreas contributes to the development of AP, primarily through the release of inflammatory cytokines. TNFα has long been considered as one of the initial triggers of the inflammatory cascade in experimental pancreatitis [18] . In this setting, stimulation of acinar cells of the pancreas by TNFα have been reported to cause a direct activation of pancreatic enzymes, contributing to premature protease activation and cell necrosis [19] . Increased accumulation of TNFα promotes the production of other inflammatory cytokines, including IL1β and IL6, which result in the activation of an inflammatory cascade that leads to widespread tissue damage in multiple tissues and organs. Indeed, the levels of TNFα, IL1β and IL6 have been correlated with the severity of AP [20][21][22][23] . In the current study, treatment with PEG35 was capable to significantly reduce the AP-induced raises in pro-inflammatory IL1β and IL6. However, no significant effect on TNFα was observed. This fact could be explained by the levels of IL10 found in rats co-treated with cerulein and PEG35, which were similar to those found in those only treated with cerulein. As IL10 plays a fundamental role in the attenuation of the cytokine response during acute inflammation, the significant increase of IL10 found in hyperstimulated rats may contribute to slow TNFα production, with an observed tendency towards a decrease in its expression. Indeed, in an experimental model of cerulein-induced AP, intraperitoneal IL10 administration attenuated TNFα production, which was associated with dramatically lessened pancreatitis severity and mortality [24] . Furthermore, a direct anti-inflammatory effect of PEG35 was observed in cultured AR42J cells. In an in vitro model of cerulein-induced inflammation, PEG35 was able to attenuate the gene expression of pro-inflammatory IL1β and TNFα in a dosedependent manner. Moreover, PEG35 reduced the levels of TNFα in AR42J cells stimulated with TNFα. Pro-inflammatory cytokines are known to activate the inducible iNOS and the subsequent production of nitric oxide, thus contributing to the pathophysiology of AP. October 21, 2020 Volume 26 Issue 39 In fact, the degree of pancreatic inflammation and tissue injury of cerulein-induced AP has been found to be markedly reduced in iNOS-deficient mice [25] . In our study, we observed an increased mRNA expression of iNOS following cerulein hyperstimulation in rats, which underwent a significant reduction after PEG35 treatment. Likewise, PEG35 abrogated TNFα-induced iNOS expression in acinar cells in a concentrationdependent manner. Altogether, these results suggest that PEG35 treatment reduced pancreatic inflammation in pancreatitis by suppressing the expression of proinflammatory mediators. These changes in the inflammatory response brought about by PEG35 treatment were further emphasized by a reduction in pancreatic cell death. PEG35 treatment reduced cell death in cerulein-induced AP rats by lowering plasmatic LDH activity. 
In addition, the increased release of LDH observed in cerulein and TNFα-treated acinar cells in vitro was reverted upon incubation with increasing concentrations of PEG35. In the pancreas, inflammation is associated with injured acinar cells that can go through necrosis or apoptosis. Thus, we measured the apoptosis index in pancreatic tissue following cerulein-induced AP. Injured pancreatic tissue induced a significant increase in cleaved caspase-3 and BCL-2 apoptotic proteins as compared to the respective controls. Following treatment with PEG35, anti-apoptotic BCL-2 further increased as compared with cerulein-treated animals while cleaved caspase-3 levels were similar to that found in cerulein hyperstimulated animals. Collectively, these findings suggest that PEG35 has anti-apoptotic and anti-necrotic properties for cerulein-induced pancreatitis. CONCLUSION In conclusion, results from this study reveal a mechanism by which PEG35 exerts antiinflammatory effects that alleviate experimental cerulein-induced AP, by inhibiting the inflammatory response as well as inflammation-induced cell death. Because of its low toxicity as well as its proven biocompatibility, PEG35 could be used as a new therapeutic tool to resolve the cellular damage associated to mild AP. Research background Acute pancreatitis (AP) is a common gastrointestinal condition with an increasing incidence worldwide. The course of the disease ranges from a mild, self-limiting condition to a more severe acute illness with a high morbidity and mortality. Our group has previously demonstrated an anti-inflammatory role for a 35-kDa molecular weight polyethylene glycol (PEG35) in an experimental model of severe necrotizing AP. The therapeutic administration of PEG35 notably alleviated the severity of AP and protected against the associated lung inflammatory response, which is the main contributing factor to early death in patients with this condition. Research motivation To date, the treatment of AP continues to be supportive as there are no effective pharmacologic therapies available. Polyethylene glycols (PEGs) are neutral polymers widely used in biomedical applications due to its hydrophilic properties combined October 21, 2020 Volume 26 Issue 39 with a low intrinsic toxicity. In this study, we demonstrated the protective role of PEG35 in a mild form of AP. Research objectives To evaluate the effect of PEG35 in experimental models of mild acute pancreatitis in vivo and in vitro. Research methods AP was induced by five hourly intraperitoneal injections of cerulein (50 μg/kg/bw). PEG35 was administered intraperitoneally 10 minutes before each cerulein injection in a dose of 10 mg/kg. After AP induction, samples of pancreatic tissue and blood were collected for analysis. AR42J pancreatic acinar cells were treated with increasing concentrations of PEG35 prior to exposure with tumor necrosis factor α, staurosporine or cerulein. The severity of AP was determined on the basis of plasma levels of lipase, lactate dehydrogenase activity, pancreatic edema and histological changes. To evaluate the extent of the inflammatory response, the gene expression of inflammation-associated markers was determined in the pancreas and in AR42Jtreated cells. Inflammation-induced cell death was also measured in both in vivo and in vitro models of pancreatic damage through apoptosis and necrosis-related assays. 
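The group comparisons described in the methods above (one-way ANOVA followed by Tukey's multiple comparison test) were performed in GraphPad Prism; as a reference, the following Python sketch reproduces the same workflow with SciPy and statsmodels. The values are invented placeholders for three groups of n = 7, matching the group sizes of the design, and are not the study's data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(3)

# Illustrative values only (NOT measured data): a readout such as plasma lipase
# in the three experimental groups, n = 7 rats per group as in the design.
groups = {
    "control":     rng.normal(100, 15, 7),
    "CerAP":       rng.normal(300, 40, 7),
    "PEG35+CerAP": rng.normal(180, 30, 7),
}

# One-way ANOVA followed by Tukey's multiple comparison test, as in the methods.
f, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f:.1f}, p = {p:.2e}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```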
Research results PEG35 treatment significantly improved pancreatic damage in cerulein-induced AP in rats through reduction on lipase levels and tissue edema. Furthermore, PEG35 ameliorated the inflammatory response and associated cell death in vivo and in vitro, in treated-acinar cells, by lowering inflammatory-related cytokines and iNOS gene expression, levels of apoptotic markers and the activity of lactate dehydrogenase. Research conclusions PEG35 ameliorated pancreatic damage in cerulein-induced AP and cultured acinar AR42J-treated cells through the attenuation of the inflammatory response and associated cell death. Research perspectives Our study provided evidence of a protective role of PEG35 in a mild form of AP suggesting that PEG35 may be a valuable option in the management of clinical AP.
2020-10-28T19:17:25.570Z
2020-10-21T00:00:00.000
{ "year": 2020, "sha1": "5c9769b0736eacf6f890ef74704a576c6987e926", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3748/wjg.v26.i39.5970", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "cb1c01db32052d595103287a9f63dd103d042718", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
10416767
pes2o/s2orc
v3-fos-license
A Novel Three-dimensional Flow Chamber Device to Study Chemokine-directed Extravasation of Cells Circulating under Physiological Flow Conditions Extravasation of circulating cells from the bloodstream plays a central role in many physiological and pathophysiological processes, including stem cell homing and tumor metastasis. The three-dimensional flow chamber device (hereafter the 3D device) is a novel in vitro technology that recreates physiological shear stress and allows each step of the cell extravasation cascade to be quantified. The 3D device consists of an upper compartment in which the cells of interest circulate under shear stress, and a lower compartment of static wells that contain the chemoattractants of interest. The two compartments are separated by porous inserts coated with a monolayer of endothelial cells (EC). An optional second insert with microenvironmental cells of interest can be placed immediately beneath the EC layer. A gas exchange unit allows the optimal CO2 tension to be maintained and provides an access point to add or withdraw cells or compounds during the experiment. The test cells circulate in the upper compartment at the desired shear stress (flow rate) controlled by a peristaltic pump. At the end of the experiment, the circulating and migrated cells are collected for further analyses. The 3D device can be used to examine cell rolling on and adhesion to EC under shear stress, transmigration in response to chemokine gradients, resistance to shear stress, cluster formation, and cell survival. In addition, the optional second insert allows the effects of crosstalk between EC and microenvironmental cells to be examined. The translational applications of the 3D device include testing of drug candidates that target cell migration and predicting the in vivo behavior of cells after intravenous injection. Thus, the novel 3D device is a versatile and inexpensive tool to study the molecular mechanisms that mediate cellular extravasation. Introduction Cell extravasation is the process by which circulating cells exit the bloodstream and is a critical component of many physiological responses in the body. The process is also important for tissue regeneration, as for example, when therapeutic cells mobilized from tissues into the blood vessels (or injected intravenously) migrate through the vasculature and exit the circulation to sites of injury or degeneration. Extravasation is also a critical component of the pathogenesis of many diseases, including inflammation, immune rejection, autoimmunity, and tumor metastasis. Extravasation is a complex multi-step process that involves the interaction of circulating cells with endothelial cells (EC) under conditions of physiological flow. The process comprises (a) cell rolling, (b) firm adhesion to the luminal surface of EC, and (c) transmigration across the EC. Importantly, each step of the extravasation cascade is regulated by a subset of cell type-specific and species-specific molecules. However, the detailed molecular mechanisms that regulate extravasation of specific cell subsets are not well understood, largely due to the technical difficulty of recapitulating the conditions in the bloodstream. The novel 3D device described here is designed to overcome these technical challenges and allow experiments to be performed that will improve our understanding of the biology of cell migration. The molecules that mediate cell migration are important therapeutic targets. 
A detailed understanding of the molecular events that control the migration of specific cell types will assist in identifying novel targets for the therapeutic promotion or inhibition of extravasation. For example, interventions that enhance the migration of therapeutic stem cells (whether derived from adult, neonatal or even from fetal tissues) toward sites of injury or degeneration would be of great utility in tissue regeneration. There is growing interest in the in vitro generation of therapeutic stem cells, including cells derived from pluripotent sources, and in ex vivo manipulated adult stem cells (expanded, genetically manipulated, and pretreated with various enzymes) 1 of stem cell migration. In contrast, strategies that block cell migration by targeting specific homing molecules would be useful for the treatment of inflammatory and autoimmune diseases as well as metastatic cancer. Thus, understanding the molecular mechanisms that mediate the interactions between circulating cells and EC during cell migration and extravasation is relevant to translational medicine and drug discovery as well as to basic science. There are currently a number of methods available to study different aspects of cell migration. However, these methods have shortcomings that can be overcome with the new 3D device. Animal models: Animal models, such as immunocompromised mice and genetically manipulated mice, have been useful tools to study the migration of human cells in vivo. However, one significant drawback to these models is that human cells interact poorly with adhesion molecules present on mouse EC, in part due to species-specific sequence differences in many cell surface molecules. Thus, the use of rodents to study human cell migration is unlikely to authentically reflect the events in human organ-specific vascular beds. In addition, in vivo models are not suitable for highthroughput screening of drug candidates. The conventional in vivo models using to study cell homing do not discriminate between the different steps of the extravasation cascade, making it difficult to identify and target novel homing molecules. The intravital microscopy approach was developed to address this need and has been informative; however, this technique is extremely time-and labor-intensive 7,8 . Static transmigration assays: Transwell or Boyden chamber assays measure cell migration across a porous membrane and are widely used in migration studies. The assay has the advantage that it can be used to study not only cell motility and cell-extracellular matrix (ECM) interactions, but also chemokine-mediated migration of cells across an EC monolayer grown on the porous membranes. Unfortunately, the phenotype and function of EC under static conditions differ significantly from those under physiological flow 9,10 . Thus, the chemotactic events that occur in Transwell assays do not faithfully mimic the interactions between migrating cells and EC under shear stress. In addition, the "rolling" step of the homing cascade, which adds an extra dimension of selectivity to the overall process, does not occur in static Transwell assays. Thus, while this technology does allow quantitative evaluation of chemokine-mediated cell migration across the EC monolayer, it is limited by its inability to provide the shear stress that mimics blood flow in vivo. Assays under shear stress: Wall shear stress is known to play an important role in regulating EC function 9,10 . 
Shear forces induce rapid activation of signaling cascades, transcription factors, and differential gene expression in EC 11,12 . This information led to the development of the next generation of adhesion assays -parallel laminar flow chambers and capillaries -to study rolling and adhesion of cells to EC under conditions of shear stress 7,13 . The limiting factor for these assays is that they can measure only rolling and adhesion, but do not differentiate between an adherent cell that would go on to transmigrate and an adherent cell that is "arrested" on the EC monolayer and would not transmigrate. Moreover, these assays cannot measure migration of cells toward a chemokine gradient. Several reports have described the ability of adherent cells to crawl beneath an EC monolayer grown on glass slides under conditions of flow 14 . The limitations to this crawling technique include: it analyzes only a limited number of cells; it is non-quantitative; it analyzes cell crawling on the matrix and cell surface but not migration toward a chemotactic gradient; and it does not allow the transmigrated cells to be isolated. Thus, although this assay has the advantage of applying shear stress to the EC monolayer, it cannot quantify migration of cells toward a chemokine gradient. The novel 3D technology described here overcomes many of these shortcomings by combining shear force (upper compartment) with a static chemokine gradient (lower compartment) and permits quantitative evaluation of each step of the extravasation cascade (Figure 1). The device can also be used to investigate how crosstalk between EC and the microenvironment influences the ability of EC to support extravasation of circulating cells. The following protocol describes the step-by-step methodology for using the 3D flow chamber device. 2. Take the Petri dishes with collagen-coated inserts prepared in section 1; do not remove the inserts from the dishes. Place several small drops of the cell suspension (approximately 20 μl per drop) in a circle on the surface of the collagen-coated membrane (do not let the pipette tip touch the surface) and then carefully add the rest of the cell suspension to form one large drop. Ensure that the cell suspension remains hemispherical and does not drain from the membrane surface. 3. Replace the lids and carefully place the dishes in a 5% CO 2 cell culture incubator and incubate (without shaking) at 37 °C for 30 min to allow the cells to attach to the membrane. 4. Gently add 5 ml of culture medium to each Petri dish ensuring that the insert remains on the bottom of the dish and is completely covered by the medium. Replace the lids and carefully put the dishes back in the incubator. Culture the cells at 37 °C overnight. 5. Use the same approach to prepare the second (optional) lower insert with cells representing the local microenvironment, which will be placed below the insert containing the EC layer. Note: if the lower inserts are included in the experiments, the upper and lower inserts are first connected to each other before placing the inserts into the 3D device ( Figure 4B). Assemble the 3D Flow Chamber Device 1. Screw together the upper and lower plates, connect the plates to the gas exchange unit using tubing, place the assembled device into a clean plastic bag, and sterilize the bag and contents by irradiation (11 Gy). 2. Remove a tray-shelf from the cell culture incubator, place into a sterile tissue culture hood, spray with 70% ethanol, wipe off, and then cover the tray with a large sterile napkin. 
3. Place the irradiated plastic bag into the hood, remove the device from the bag and place on the sterile incubator tray. Connect the device to the peristaltic pump with the tubing provided. Program the pump for an optimal speed of 0.2 ml/min. Note: To connect the pump with the electrical socket, extrude the electrical cord through the hole in the backside of the incubator. Use robber plug around the cord to assure that the hole is airtight. 4. Place 5 -7 ml of desired culture medium into a sterile 15 ml tube (for the "inlet"). Prime the 3D Flow Chamber Device 1. Disconnect the inlet tubing from the outlet by pulling the metal needle from the tubing (use small sterile napkins to maintain sterility of the tubing). Use the metal needle connected to the tubing as an "inlet" and the disconnected end of tubing as an "outlet". Place the metal needle inlet into the 15 ml tube with the media; place the outlet into the empty 15 ml tube. 2. Turn on the pump so the flow is counterclockwise and set the flow rate at 0.2 ml/min. Allow the negative pressure to draw the medium from the 15 ml inlet tube into the tubing and stop the pump when the medium reaches the end of tubing immediately before it connects to the device (Figure 2, Stop #1). 3. Unscrew the upper and lower plates of the device. Carefully remove the upper plate and place 1,500 -1,550 μl of medium (either medium alone or plus the experimental chemokines) into the lower wells. Make sure the well contains sufficient medium to form a hemisphere that reaches approximately 1 -2 mm above the lower plate. 4. Transfer the prepared insert from the Petri dish to the wells of the lower plate using sterile forceps, making sure that no bubbles are trapped beneath the inserts. Aspirate any medium that appears on the surface of the lower plate to ensure that the plate remains dry. Place the upper plate back onto the lower plate and re-connect the plates using the screws. 5. Immediately turn on the peristaltic pump and allow medium to flow through the 3D device. Elevate the outlet end of the device and maintain the chamber in this position to prevent formation of bubbles within the space between the plates. 6. Allow the medium to fill the chamber and the tubing connecting the chamber to the gas exchange unit. Stop the pump immediately before the medium reaches the gas exchange unit (Figure 2; Stop #2). Place 3 ml of medium into the gas exchange unit, turn on the pump, and allow the air bubbles to escape through the gas exchange unit. Elevate the gas exchange unit to allow the medium to fill the tubing exiting the gas exchange unit and stop the pump when the medium reaches 2 -3 cm before the end of the outlet tubing (Figure 2, Stop #3). 7. Connect the inlet metal needle to the outlet using small sterile napkins. Reprogram the pump so the medium flows in the opposite direction (clockwise) toward the gas exchange unit and ensure any air bubbles are removed once they reach the gas exchange unit. Check that no air bubbles remain in the system. 8. Reprogram the pump so the medium flows counterclockwise. Add 100 μl of the test cell suspension to the gas exchange unit. Close the lid of the gas exchange unit, place the tray with the working system back into the incubator, and allow the test cells to circulate in the 3D device at 37 °C for the desired time. Note: Clean the electrical cord of the pump with 70% ethanol and place it inside of the incubator. Representative Results The murine bone marrow-derived EC line STR-12 was grown on inserts with 5 μm pores. 
The rate of EC growth was monitored under a microscope and when the EC were 100% confluent, the inserts were transferred into the wells in the lower compartment of the 3D device. Immediately before placing the inserts, the wells of the lower compartment were filled with culture medium alone (negative control) or with medium supplemented with stromal cell-derived factor-1 (SDF-1; 5 ng/ml and 50 ng/ml). Thereafter, the 3D device was assembled and the chamber was filled with medium as described in the protocol. The test cells to be circulated in the upper compartment of the device were freshly harvested murine bone marrow cells (3.5 x 10 6 cells per chamber). A defined shear stress of 0.8 dyn/cm 2 was applied by setting the peristaltic pump speed at 0.2 ml/min. The entire working system was then placed in the 5% CO 2 incubator at 37 °C and the cells were allowed to circulate and interact with the EC monolayer for 4 hr. At the end of that time, the circulating cells were collected, the chamber was disassembled, and the inserts were removed as described in the protocol. The transmigrated cells were harvested from the lower wells, washed, resuspended in fresh medium, and transferred to methylcellulose cultures supplemented with hematopoietic growth factors for colony-forming cell (CFC) assay (Figure 3). As expected, we found a significantly higher number of CFC had migrated across the EC monolayer to the wells containing 50 ng/ml SDF-1 than to wells containing 5 ng/ml SDF-1 or medium alone. As we described earlier, none of the current in vitro techniques available to study cell migration are capable of testing the effect of the local microenvironment on the ability of EC to support extravasation of migrating cells. To illustrate how this can be achieved with the 3D device, we examined extravasation of circulating hematopoietic cells across a layer of EC and a layer of bone marrow stromal cells. For this, a second (lower) insert containing a layer of stromal cells was juxtaposed to the upper insert containing the EC monolayer in the lower plate ( Figure 4). The experiment was then performed as described above and the transmigrated cells were harvested from the wells and counted. The results demonstrated that insertion of an additional layer of stromal cells beneath the EC monolayer significantly increased the migration of hematopoietic cells toward SDF-1 (Figure 4). This finding is consistent with the notion that the local microenvironment contributes to the recruitment of circulating cells to the tissue. NK cells, and other migratory cells. In vascular biology, the device will be useful for assessing the effect of the local microenvironment on the ability of EC to support cell recruitment and to mimic the blood-brain barrier in vitro. For researchers focusing on disease pathogenesis, the device can be used to study the migration of cytotoxic T cells into the pancreas during type I diabetes, the migration of eosinophils into the lungs of asthma patients, the migration of neutrophils, monocytes, and lymphocytes into sites of inflammation in a variety of disorders, the recruitment of cells to wound healing sites, the interaction of cytotoxic T cells with the neovasculature, and recruitment of cytotoxic T cells into tumors. The novel 3D device will also be a useful tool for translational science and for preclinical drug development and screening. 
For example, the device can be used to optimize the preparation of therapeutic cell suspensions for intravenous administration, to test the recruitment of therapeutic cells to injured tissues versus normal tissues, and to examine the role of the specific organ or tissue endothelium and microenvironment in this process. For drug development, the device could be used to test drug candidates that target tumor cells and block each step of the extravasation cascade, to test migrating cells as a drug delivery vehicle to tumor sites, to test drugs that regulate the function of human organ-specific endothelium or the local tissue-specific microenvironment, and to test combinations of drugs that regulate different steps of the homing cascade. Finally, when converted into a high-throughput system, the device can be used for screening small molecules, peptides, and antibodies that target molecules mediating cell trafficking. Technical details and tips Rolling and adhesion under flow are the critical steps in the extravasation cascade and contribute significantly to the efficiency of cell migration in vivo. Notably, these steps do not contribute to static assays in vitro. However, static transmigration assays are useful as negative controls in experiments testing the effects of shear stress on cell survival and function, including the ability of EC to support low affinity interactions (rolling) and firm adhesion. The concentrations of media components that may influence adhesive interactions under conditions of physiological flow vary considerably in commercial culture media. Other technical details that should be taken into account when planning experiments with the 3D device are discussed below. Selection of the EC monolayer The cell surface signature of EC is influenced by various parameters, including the species and tissue of origin and the cell culture conditions, which should be taken into account in the experimental design. Some commonly used EC include lung-derived microvascular EC (LDMVEC), bone marrow-derived EC (BMDEC), brain-derived EC (BDEC), and HUVEC. The properties of EC also vary with their origin. For example, BMDEC constitutively express selectins and VCAM-1, which are responsible for rolling and adhesion, whereas BDEC, LDMVEC, and HUVEC do not express these molecules under normal conditions. Therefore, inserts coated with BDEC, LDMVEC, and HUVEC can be pretreated with TNF-α (10 ng/ml, 4 hr) or other factors to mimic inflammation and induce expression of homing molecules. Testing crosstalk with the local microenvironment There is growing interest in understanding how the local microenvironment regulates the functions of EC and participates in cell extravasation. Our results demonstrate that insertion of a monolayer of stromal cells beneath the EC monolayer significantly increases extravasation of circulating cells. This finding demonstrates that crosstalk between EC and stroma enhances the ability of the EC to support extravasation of circulating cells. Moreover, the insertion of cells from different microenvironments (e.g. mesenchymal stem cells, lung fibroblasts, tumor cells, astrocytes) could help to mimic specific vascular beds. In particular, a combination of brain-derived EC and astrocytes could be a useful approach to mimic the blood-brain barrier. Similarly, cells obtained from patients, or normal cells manipulated in vitro, could help mimic a specific diseased microenvironment. In our studies, we used lower inserts with stromal cells grown to 50% confluence.
Although the optimal density of microenvironmental cells on the inserts will depend on the goals of the study, this can be readily manipulated and controlled. Selecting a shear stress rate The shear stress of 0.8 dyn/cm² was used here because this has been demonstrated by Von Andrian et al. to be the shear stress in the bone marrow microvasculature, where hematopoietic progenitor cells exit the circulation and enter tissues under normal physiological conditions 15. In contrast, higher levels of shear stress are observed in larger vessels, where cell extravasation is limited. Therefore, high shear stress rates could be used as additional controls for experiments examining extravasation of various cell types (a minimal sketch relating the pump flow rate to the wall shear stress is given below). The survival of cells under shear stress depends on several factors, including the cell type and ex vivo cell treatments (Goncharova et al., unpublished observations), and should thus be carefully evaluated during the test. Cell sensitivity to differing levels of shear stress (shear stress resistance) could be evaluated by programming the peristaltic pump to increase the shear stress rate incrementally from 0.8 dyn/cm² to 6 dyn/cm². During these tests, cells can be collected periodically from the gas exchange chamber to monitor cell death using assays such as trypan blue exclusion, annexin V and PI staining, and apoptosis marker expression. If experiments are designed to test the shear stress resistance of ex vivo engineered cells or cells derived from parenchyma, it is recommended that additional positive controls, such as blood-borne cells, are included in the experiments. Selection of the insert pore size and ECM coating Inserts are available with several pore sizes (3, 5, and 8 μm). The choice of pore size will depend on the size and properties of the circulating test cells, which is also the case for static Transwell assays. Inserts with a large pore size (8 μm) are recommended to test extravasation of large cells such as tumor cells of epithelial origin. Leukocytes or hematopoietic stem cells are more commonly tested with 5 μm pore inserts, and 3 μm pore inserts may be appropriate for smaller cells. In our initial studies, we tested various ECM for their ability to support growth of EC on the inserts and to withstand shear stress for >4 hr. The ECM tested included Matrigel, poly-D-lysine, fibronectin, laminin, type I collagen, Hydrogel, Meta-keratin I, II, III, IV, and Extracel. The best results for EC growth were obtained with Extracel, fibronectin, and collagen. However, in our hands, Extracel at 0.8 μg/cm² significantly reduced the transmigration of test cells across the membrane. Therefore we now use fibronectin (5 μg/cm²) or collagen (5 μg/cm²) for coating of inserts. However, the optimal choice of ECM will probably be influenced by both the type of EC and the transmigratory properties of the test cells. A preliminary experiment should be performed to test the integrity of the EC monolayer grown on selected ECM. For example, place pre-coated inserts with EC monolayers in Petri dishes filled with culture medium, place the dishes on a shaker to create shear stress (low setting), and incubate at 37 °C and 5% CO2 for 12 hr. The integrity of the EC after exposure to the shear stress can be evaluated using crystal violet staining. Selection of optimal test cell concentrations The ability of test cells to exit the circulation depends on many characteristics specific to each cell type.
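A brief aside before the discussion of test cell concentrations continues: the relation between the pump flow rate quoted above (0.2 ml/min) and the resulting wall shear stress depends on the flow chamber geometry, which is not specified here. The minimal sketch below uses a parallel-plate approximation with hypothetical channel dimensions and an assumed medium viscosity, so the numbers are illustrative only and are not specifications of the 3D device.

```python
# Illustrative parallel-plate estimate of wall shear stress: tau = 6*mu*Q / (w*h^2).
# The channel width/height and medium viscosity below are assumptions, not device specs.

def wall_shear_stress(flow_ml_per_min, viscosity_poise=0.0078,
                      width_cm=1.0, height_cm=0.014):
    """Return the wall shear stress in dyn/cm^2 for a rectangular channel."""
    q_cm3_per_s = flow_ml_per_min / 60.0
    return 6.0 * viscosity_poise * q_cm3_per_s / (width_cm * height_cm ** 2)

# Pump speed used in the representative results: 0.2 ml/min.
print(f"{wall_shear_stress(0.2):.2f} dyn/cm^2")  # roughly 0.8 with these assumed dimensions
```

With such a relation in hand, the incremental shear-stress series (0.8 to 6 dyn/cm²) described above can be translated into a pump flow-rate program for a given chamber.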
To generate statistically significant results, the circulating cell density must be optimized to ensure that a sufficient number of cells migrate into the positive control wells. Therefore, we recommend performing initial tests with a range of cell densities (e.g. 10^3/ml to 10^7/ml) to understand the relationship between the number of cells loaded into the system and the number of cells that undergo transmigration. In addition, the optimal circulating concentration may differ for the same cell type obtained from different sources. For example, CD34+ cells are a heterogeneous cell population containing both hematopoietic stem cells and committed lineage-specific progenitors. They can be obtained from bone marrow, mobilized peripheral blood, and umbilical cord blood. The CD34+ cells derived from these three sources possess different homing and engrafting abilities [16-18], so the conditions for testing these cells in the 3D device should be optimized before a full-scale study. Selection of optimal cell circulation time The optimal circulation time should be determined empirically for each cell type. The minimum time required for circulation could be extrapolated from the results of static migration assays, but this may be extended when the cells are subjected to shear stress. Pilot studies testing circulation times between 4 and 96 hr are recommended. The gas exchange unit The main purpose of the gas exchange unit is to maintain the optimal concentration of CO2 in the medium circulating through the 3D device, similar to the settings in standard tissue culture incubators. The gas exchange unit can also be used for adding test cells and compounds and for collecting probes and samples during the test. Evaluation of test cell numbers Circulating cells can be sampled at varying times during the experiment through the gas exchange unit. At the end of the experiment, the cells remaining in the circulation could be collected from the outlet. When the 3D device is disassembled at the end of the experiment, the transmigrated cells can be harvested from the lower wells and collected for further analysis. If the input cells are unlabeled, the transmigrated cells can be stained with trypan blue and live/dead cells enumerated microscopically. Alternatively, the input cells could be labeled with one of the many "cell tracker" dyes available for live cell imaging before loading into the 3D device; in this case, the harvested transmigrated cells should be lysed for quantitation of fluorescence. Selection of the optimal sample size To allow statistical analysis, a sample size must be selected that will provide approximately 80% power to detect the hypothetical difference in the mean percent changes between two groups when tested at a significance level of 0.05 using a two-sided t-test. Based on our experience with bone marrow cells, we use three wells per experimental condition for the 3D device experiments. However, the optimal sample size will vary with the goal of the experiment and should be determined empirically (a minimal power-calculation sketch is given below). Multiple regression statistical tests, two-sided t-tests, and analyses of variance can be used to verify statistical relevance of the results. Selection of chemokines As for all transmigration assays, the choice of chemoattractant will be dictated by the properties of the test cells and the goal of the study. SDF-1 is a well-established chemoattractant for hematopoietic cells.
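A brief aside on the sample-size statement above: the 80% power, two-sided t-test criterion can be checked with a standard power calculation once a pilot effect size is available. The sketch below is a generic illustration using statsmodels; the effect size is a placeholder to be replaced with an estimate from pilot wells, not a value taken from this study.

```python
# Minimal sample-size calculation for a two-sample, two-sided t-test
# at alpha = 0.05 and 80% power. The effect size is a hypothetical placeholder.
from statsmodels.stats.power import TTestIndPower

pilot_effect_size = 2.5  # Cohen's d estimated from pilot wells (placeholder)

n_per_group = TTestIndPower().solve_power(effect_size=pilot_effect_size,
                                          alpha=0.05, power=0.80,
                                          alternative='two-sided')
print(f"wells required per condition: {n_per_group:.1f}")
# Large pilot effects yield small n; smaller effects require more wells per condition.
```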
In our hands, a concentration of 50 ng/ml SDF-1 was optimal to stimulate extravasation of bone marrow-derived hematopoietic progenitor cells. We also use C3a and bFGF as chemoattractants for mesenchymal stem cells 19. However, we recommend titrating each chemokine and cross-titrating combinations of chemokines to identify the optimal concentration range before a full-scale study. For certain cell types, a combination of chemotactic factors may be beneficial. If chemokines for specific test cells are unknown or unavailable, conditioned media from stimulated lymphocytes, fibroblasts, and other cells can be used as a source of chemoattractants. Selection of the readout parameters The versatility of the 3D device will expand the type of transmigration experiments that can be performed, and the success of the experiments will depend on thoughtful consideration and design of the readout parameters. As a rule, cells collected from the upper (circulating) compartment and the lower (static) compartment should be tested for viability at the same time as cell counting. Some cells possess low tolerance to the physiological shear stress observed in the microvasculature. In this case, the number of dead cells will be increased in the upper compartment. The number of adherent cells arrested (or trapped) on the EC layer can be evaluated by immunocytochemistry of markers expressed specifically by the test cells. The types of analyses performed with the collected cells will depend on the investigators' experimental goals, but could include analyzing changes in gene expression by microarrays and qPCR, surface molecule expression by FACS analysis, activation of signaling pathways by FACS and western blotting, and factor secretion by ELISA. An enormous array of functional tests is possible on the collected cells in vitro, as evidenced by our own work in which we cultured the transmigrated bone marrow cells to enumerate the hematopoietic progenitor cells (Figure 3). The collected samples could also be tested in a variety of in vivo assays. One important parameter that can help to predict the behavior of the test cells in vivo is the tendency of some cells to form aggregates under different rates of shear stress. For example, therapeutic cells that undergo aggregate formation in the 3D device might have a greater tendency to form clumps following intravenous administration and cause acute vascular obstruction in vivo (Goncharova et al., unpublished observations). Aggregate formation could be monitored microscopically by sampling the test cells during or after completion of the 3D device experiment. Disclosures Cascade LifeSciences Inc. possesses exclusive rights to patent No. US 7,927,867 B2 "Device for evaluating in vitro cell migration under flow conditions and methods for uses thereof". SKK is an inventor of the 3D flow chamber device technology and a scientific co-founder and shareholder of Cascade LifeSciences Inc.
The functional logic of odor information processing in the Drosophila antennal lobe Recent advances in molecular transduction of odorants in the Olfactory Sensory Neurons (OSNs) of the Drosophila Antenna have shown that the odorant object identity is multiplicatively coupled with the odorant concentration waveform. The resulting combinatorial neural code is a confounding representation of odorant semantic information (identity) and syntactic information (concentration). To distill the functional logic of odor information processing in the Antennal Lobe (AL) a number of challenges need to be addressed including 1) how is the odorant semantic information decoupled from the syntactic information at the level of the AL, 2) how are these two information streams processed by the diverse AL Local Neurons (LNs) and 3) what is the end-to-end functional logic of the AL? By analyzing single-channel physiology recordings at the output of the AL, we found that the Projection Neuron responses can be decomposed into a concentration-invariant component, and two transient components boosting the positive/negative concentration contrast that indicate onset/offset timing information of the odorant object. We hypothesized that the concentration-invariant component, in the multi-channel context, is the recovered odorant identity vector presented between onset/offset timing events. We developed a model of LN pathways in the Antennal Lobe termed the differential Divisive Normalization Processors (DNPs), which robustly extract the semantics (the identity of the odorant object) and the ON/OFF semantic timing events indicating the presence/absence of an odorant object. For real-time processing with spiking PN models, we showed that the phase-space of the biological spike generator of the PN offers an intuit perspective for the representation of recovered odorant semantics and examined the dynamics induced by the odorant semantic timing events. Finally, we provided theoretical and computational evidence for the functional logic of the AL as a robust ON-OFF odorant object identity recovery processor across odorant identities, concentration amplitudes and waveform profiles. Response to the reviewers Based on the responses of the reviewer, we have made the following changes: • In response to the request by both reviewers of making the paper more readable, we have substantially restructured the manuscript. In our revision, we focused on the development and analyses of two AL circuit models: single-channel (single glomerulus) in Section 2.1 and multi-channel (across glomeruli) in Section 2.2, with figures 3,4 redrawn to reflect the new structure. The comparative analyses of different circuit architectures that were previously the emphasis of 2.1 and 2.2 have been reduced to only cover the Pre-LN pathway (in the Discussion and Methods section). Other more extensive and exhaustive comparisons only appear now in the previously released preprint on BioRXiv [1]. • In response to the reviewer's comment regarding the response of the Antennal Lobe model to more complex odorant waveforms, in particular the concern regarding the lack of 'Steady-state' when the odorant concentration is more complex than the staircase waveform considered, we provided a different perspective on the AL response for complex stimuli. 
Instead of analyzing the Peri-Stimulus Time Histogram computed from the spiking output of the model, we now focus on the phase-space representation of limit cycles and their significance to the information representation at the PN single and population level. Consequently, figures 6,7,8,9 have been redrawn to reflect the phase-space perspective. We have also significantly expanded upon the discussion around the robustness of the AL model in Section 2. 3 and in supplementary materials. We show that the proposed functional logic of the AL circuit is largely invariant to concentration changes even for highly complex concentration waveforms. • We believe that in the revised manuscript the question of how semantic information, often associated with subjective perception, can be characterized, is much more accessible to a wider audience. Moreover, it becomes abundantly clear, that the separation between the semantic information and syntactic information can not be tackled by single channel neurophysiology recordings. This observation highlights the need for renewed emphasis for multichannel neurophysiology recordings and formal characterizations of multi-input multi-output neural circuits. • Both reviewers also mentioned connecting the AL model to the available connectome data of the adult fly. While we agree that this is an important direction, it adds significant additional complexity to the current paper. We refer the reviewers to our recently released manuscripts on BioRXiv [2,3] that explore this and other research directions. Reviewer 1 also suggested that we consider odorant mixture stimuli, a subject that we have studied in a followup paper [4]. We believe that the highly complex nature of odorant mixtures, especially the semantic information of a mixture of odorants, warrants a dedicated manuscript. Reviewer 1 Summary The Drosophila Antennal lobe (AL) is the first relay center for olfactory information processing. While the responses properties of the output of AL, the projection neurons (PNs) have been extensively studied, the functional logic of AL neural circuit in olfactory information processing is not fully understood. Various different type of Local neurons (LN) in AL have been thought to play important role in shaping the output of PNs. Based on previous experiment, in this paper, Lazar et al hypothesized that neural circuit in AL can segregate odorant identity and robustly detect odorant onset and offset. To test their hypothesis, they set out to explore all the possible single glomerulus models and multi-glomeruli models with different local neurons innervation patterns that enable PNs to have the above two properties. They found that presynaptic inhibition across glomeruli is essential to enhancing concentration-invariance of the PN responses, while postsynaptic LN excitation and inhibition strongly boost the responses of PNs to odorant onset and offset, respectively. Finally, the authors showed that the AL circuit is functionally equivalent to three parallel differential divisive normalization processors that extract the concentration-invariant and ON/OFF contrast-boosting features independently. Given the highly conserved organization of olfactory systems across different species, understanding the computational function of AL have a much broader implication in early sensory systems. The authors approached this problem by extensive modeling of possible neural circuit in AL, providing an useful source for comparing the anatomy AL in different insects. 
The conclusion seems to be well supported by the numeric simulations. I would like to see the paper to be published in PLoS Computational Biology after the authors address the following issues Major The motivation of modeling framework The model description is not clear 1. As a computational work, I expect that the modeling part contains enough details. For example, the authors stated "we modeled the Pre-LN inhibition of the OSN Axon-Terminal similar to the inhibition exerted by the calcium channel of the OTP". Without referring the cited reference 2, it is difficult for the readers to understand the difference between pre-iLNs and post-iLNs. Also, the meaning of each term in Eq. 1-4 is not easy to interpret. It would be much better if the authors can explain each term in model 1 in figure 9B. Or the authors could make an illustration to explain the "calcium channel of the OTP"? Although the authors searched a very large parameter space, how did they choose the range of each parameter? It should be explained in the Material and Methods. • We thank the reviewer for pointing out the lack of clarity of the model description in our manuscript. We have significantly revised the presentation and expanded upon model description of the single-channel and multi-channel AL circuit. In the Materials and Methods section, we have provided detailed model description, along with associated free parameters, for two example circuits. Additionally, the relationship between Eq 1-4 and the AL circuit are also made explicit for the examples circuits. • The explanation regarding the choice of range for parameter values was indeed missing in the manuscript. We have included a paragraph describing the heuristics used to specified the parameter range in the Methods section on L734-741 in the optimization related portion of the Methods section. Objective function for odorant identity 1. In line 226-229, the authors stated "As the odorant identity is represented by the affinity vector in steady-state, the objective is the angular distance between the PN steadystate response and the odorant affinity vector". For me, this is not obvious and why would this be a good odorant identity objective function? For example, a more natural way to define the objective function is to train a simple decoder based on the responses patterns of PNs. The decoding error can be used as an objective function. • We thank the reviewer for providing decoder as alternative to evaluating PN recovery of odorant identity vector. However, training an additional decoder to evaluate our model performance has the following problems: 1) it introduces additional variables into the evaluation of our model, 2) a different decoder will need to be trained for every circuit architecture and parameter set, rendering it computationally intractable. In contrast, the angular distance is a scale-invariant measure of similarity that requires no training, and does not introduce additional unknowns into the evaluation procedure. More importantly, we believe that for a model to achieve a low angular distance is a much stronger requirement than for it to achieve a low decoding error, since a low angular distance necessarily implies that a naive decoder based on such distance metric will result in low decoding error. We also note that angular distance requires far fewer assumptions about the choice of decoder. The model prediction are not clear 1. It would be great to see testable predictions from this modeling study. 
For example, the authors can link their three parallel differential DNPs with the known anatomical structure of the AL. The authors can also make some comments comparing their model with previous models regarding biological adaptation. For example, in bacterial chemotaxis, both concentration-invariant responses and ON/OFF responses are observed. The signaling transduction pathway and biophysical mechanisms are well understood. • We thank the reviewer for providing this feedback. We have included references to the literature [5] regarding how the DNPs abstract anatomical structure. Note, however, that the point of view introduced here on semantic/syntactic information and making parallels with the existing literature is a major task. As far as we are aware, there is no notion of semantics, or of a quantitative characterization of semantic information for that matter, in biological adaptation and bacterial chemotaxis. Minor 1. The use of 'temporal model' and 'spatio-temporal model' is unconventional and might be confusing. 'Temporal' typically refers to the dynamics of certain systems, while 'spatio-temporal' is typically used to emphasize the spatial aspect of a changing signal, such as the spatial pattern of neural activities. I think in this study they are used to refer to single-channel/glomerulus and multi-channel models. The authors can clarify this point. • Thank you for the suggestion. To avoid confusion, we have changed the language used in the manuscript to "single channel" and "multi-channel". 2. As the authors aimed to exhaustively explore possible AL LN circuits, three types of innervation patterns are considered: pre-LN, post-iLN and post-eLN. Could the authors explain why they eliminate presynaptic excitatory LNs to the terminal of OSN axons? Is that because no experiment supports the existence of such LNs? 5. SNR is extensively used as a metric to quantify the goodness of fitting. In the definition (line 604), what is the "clean" signal and what is the "noise" when we interpret figure 3B? Some explanation in the main text when referring to figure 3 would be great. • We've re-written the SNR definition on L623-624. As written, the 'Signal' is the physiology recording and the 'Noise' is the difference between the physiology recording and the model response (an illustrative sketch of this metric is given below). • Figures 3 and 4 have been removed in the latest revision. We believe that this is not the case with the newly drawn figures. 9. The conclusion that "the steady-state and transient response features are, respectively, decoupled by the presynaptic and postsynaptic LNs" is not supported by figure 3. It seems that most of the models give very high correlation between steady-state PN responses and the concentration waveform. • Figure 3 has been removed; Figure 13 focuses on recovery of odorant identity in the pre-LN pathway, where the effect of different circuit architectures on odor semantic information recovery is clearly visible. We note that, for real-time odorant processing, the "steady-state" language construct has been changed to stable attractors in the phase-space of PN Biophysical Spike Generators (as depicted in figures 6, 7, 8, 9). In the revised manuscript, the correlation metric is no longer used. Instead of focusing on steady-state vs. transient responses, the comparative analysis now appears in Methods section 4.4 and the supplementary materials.
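The two evaluation quantities referred to in these responses can be written down compactly. The following is a minimal illustration of the stated definitions, assuming the recorded and modeled responses are plain NumPy arrays; it is not the implementation used in the manuscript.

```python
# Illustrative definitions only; not the authors' code.
import numpy as np

def snr_db(recording, model_response):
    """SNR in dB, with 'signal' = physiology recording and
    'noise' = recording minus model response, as defined in the response above."""
    noise = recording - model_response
    return 10.0 * np.log10(np.sum(recording ** 2) / np.sum(noise ** 2))

def angular_distance(pn_response, affinity_vector):
    """Scale-invariant angle (radians) between the PN response vector and the
    odorant affinity vector; smaller values indicate better identity recovery."""
    cos_sim = np.dot(pn_response, affinity_vector) / (
        np.linalg.norm(pn_response) * np.linalg.norm(affinity_vector))
    return float(np.arccos(np.clip(cos_sim, -1.0, 1.0)))
```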
10. If odor identity information is encoded in the steady-state firing rate of PNs, which typically requires several hundred ms to reach, how do you reconcile the observation that flies can recognize an odor within 100 ms (work from Rachel Wilson lab)? Furthermore, for highly dynamic natural odor plumes, the responses of PNs may never reach steady states. In this scenario, how does the AL encode odor identity? • Thank you for your question: does the work that you are mentioning refer to detecting the presence of an odorant concentration waveform or the recognition of its identity? The latter seems a bit difficult to ascertain as it may involve memory. In any event, the transient responses of the AL circuit that encode on/off timing information occur at a much faster time scale than the identity-encoding response in our experiments. Such transient responses would signal changes in odorant identity, providing downstream circuits with the information needed for further processing. We also note that while the current work proposes that the identity can be recovered at the output of the AL, it does not make any prediction about the exact recognition mechanism. The latter may involve, e.g., a form of predictive processing that could accelerate odorant recognition. In our work, we envisioned that the confounding representation of odorant identity and concentration is first decoupled by the AL circuit, with recognition left to downstream processing. Reviewer 2 Overview Lazar and colleagues attempt to present the functional logic of a popular neural system, first-to-second-order olfactory processing. They tackle the D. melanogaster antennal lobe. This is a topical and timely investigation because wet-lab investigation has already elucidated the circuit role of many neurons in the antennal lobe (Wilson 2013), and connectomic work has revealed the logic of its wiring diagram (nearly) in its entirety (Schlegel et al. 2020). The authors are interested in showing how the AL may dissociate odour identity ('semantic information') from odor concentration ('syntactic information'). They focus on three different roles local neurons (LNs) can play in the AL: LN-OSN inhibition, LN-PN inhibition and LN-PN excitation, and describe their action as that of three differential Divisive Normalization Processors (DNPs). The authors examine temporal coding in a single channel (DM4, section 2.1, critical for coding syntactic information) and temporal-spatial coding (two glomeruli, section 2.2, critical for coding identity). For 2.1, the authors build on their prior theoretical work (e.g. (Lazar and Yeh 2020)) and physiological work (Kim et al. 2015) to model the DM4 olfactory glomerulus (Or59b OSN) of the adult fly. They assume three specific classes of LNs (Pre-iLNs, Post-eLNs and Post-iLNs) and test 12 plausible configurations within DM4. With these simple configurations, each involving one OSN unit, one PN and one or two LNs, the authors simulated ∼5 × 10^8 circuit+parameterization combinations (up to 23 free parameters, randomly sampled), comparing to real DM4 physiological data to evaluate performance. In 2.2, the authors similarly assess 20 architectures. In agreement with other work (Olsen et al. 2010) they show that pre-synaptic global inhibition is necessary for odour identification. In section 2.3, the authors seek an 'algorithmic' description for LN action in the AL. They compare it to a 'Divisive Normalization Processor', a concept the authors had previously built in their prior work. The authors reinterpret their architectures as composites of 'DNPs' in order to bring their work into a more normative framework.
In section 2.4, they then asses whether the AL acts to signal the onset and offset of odor identities, something they term an 'ON-OFF odorant identity recovery processor'. Interestingly, they suggest that multi-neuron LN cell types may help increase the fidelity of odour identification, assuming each is differently parametrised and reducing the brain's optimization burden. I commend the authors on making some python code available (https://github.com/ TK-21st/AntennalLobeLLY22). The authors approach is considered, and their results are interesting to research community. I recommend their article for publication in PLOS. I do not necessitate revisions, but I have some thoughts on improvements that I would like the authors to consider, which I detail below. My comments mainly pertain to readability, comprehension and context. Lack of attention of these three points can make neuro-computational work less impactful upon the community than it might otherwise be, which is a problem I think. I say this as someone whose background is more in neurobiology and neuroinformatics but little engineering familiarity -researchers of my fingerprint should be a large portion of the potential target audience, I feel. Connectome I strongly think that, with the advent if a publicly available connectome (https://neuprint. These neurons have been broken down into 4 super classes based on gross morphology. These neurons vary in their degree of polarisation and connectivity patterns, and to a lesser degree their transmitter usage. Their identity matters. To this end, I strongly encourage the authors to attempt to contextualise their results within the connectome. Specifically, cell type candidates for their Pre-LNs, Post-eLNs and Post-iLNs relative to the DM4 glomerulus could be sought and found. Different LNs clearly have a bias to be pre-or post-synaptic to OSNs, but most do at least a bit of both, and there is variation between glomeruli. Providing the cell types helps enable future work by other authors to, for example, experimentally test the models from Lazar et al. by targeting specific cell types for experimentation (in particular for ideas in section 2.3 and 2.4). Specifically, the authors hypothesise that 3 DNPs 'function independently to capture concentration invariance, ON and OFF contrast boosting, respectively'. What is the substrate for this in the connectome? Are their proposed architectures discriminable from the connectome at some connectivity threshold, and at some level of neuron pooling? The authors are aware of and use these tools (https://www.fruitflybrain.org/#/posts/resources). Showing that a mechanistic connectome-accurate model for say, DM4, is equivalent to the more normative models in the paper, would be an interesting supplement. • We thank the reviewer for the suggestion. There is a deep question of methodology that the reviewer is raising, both directly and indirectly. Methodologically, the study of the functional logic of odor signal processing can be approached from a number of points of view including questions raised by thinking originating depending on ones background in system neuroscience, computational neuroscience, and theoretical neuroscience. We have approached this problem from all these different but often complementary points of view. The current manuscript focuses on studying the functional logic of odor signal processing from a computational/theory point of view. 
The 'aestetics' of computation/theory calls for abstractions that are inspired by but not necessarily coming out of experimental/systems neuroscience. In particular, the notion of semantics introduced here is based on extending classical information theory and it has not been raised by systems neuroscience. Of course, the connectome/synaptome datasets of the fly provide new ways to ask questions about the functional logic of odor signal processing. We are addressing some of these questions and avenues of research separately as they warrant substantial explorations. In this context, exploring the structure of adult fly connectome has been made publicly available at [3]. In the current work, we took advantage of the known diversity in connectivity and neurotransmitter profiles of the LNs, and found that the 3 LN types considered in the current study are sufficient to quantitatively explain the observed AL circuit dynamics. We have added additional clarifications in L56-60 in the Introduction section, as well as in L458-465 in the Discussion section. Writing the paper is written in a dense manner. It's accessibility and range could be improved by using clearer, more simple language to more effectively communicate its science. The reader's understanding of certain principles is taken for granted. For example, in section 2.2 it is not immediately clear what is happening: the authors are now modelling a two-channel circuit, where each channel has a different affinity for their simulated odor, acetone. Simple explanations like this are absent in a myriad of locations through the paper. This will unfortunately decrease the impact of the work at hand, not least because many interested in this area are pure biologists unfamiliar with terms taken from electrical engineering and commonly known terms in theoretical neuroscience. In particular, the concept of a DNP is critical to the paper but poorly explained within the main text of the paper. (A discussion of its use/implementation in the visual versus the olfactory system might also have been interesting to read) • We thank the reviewer for the suggestion and has made extensive revisions to the manuscript to improve the clarity of the writing. For a list of key changes in the revision to improve readability, please refer to the description at the top of this document. The substantial changes and restructuring of the presentation of our research results was in large part due to address this very issue raised by the reviewer. • In the revised Section 2.1, we have also made explicit the connections and differences between the Differntial DNP model proposed in the current manuscript and the DNP model previously proposed for the fly visual system. Medium 1. If the models were given a short-hand name that helped remind readers what they contained, rather than given numerals, the text would be easier to follow. A (slightly long) version is seen in Fig.3.C inset. • We have relegated the entirity of the comparative analyses to the Methods section 4.4 and supplementary materials. The revised main text deals only with the canonical single-channel and multi-channel AL circuits. We believe in the latest revision, the models are now clearly defined. 2. The authors break LNs into three groups, "1) pre-synaptic pan-glomerular (innervating all glomeruli) inhibitory LNs (Pre-LNs), 2) post-synaptic uni-glomerular (innervating a single glomerulus) excitatory LNs (Post-eLNs), and 3) post-synaptic uni-glomerular inhibitory LNs (Post-iLNs)." 
When they introduce these groups, they do not immediately define what they are pre/post synaptic to. Later, they say 'presynaptic (to OSN-to-PN synapse)' and 'postsynaptic (to OSN-to-PN synapse)', but seeing as they are not discussing tri-partite synapses this is slightly confusing. Presynaptic here means LN-OSN, and postsynaptic means LN-PN. • Section 2.2 has been completely restructured. Just to clarify, the circuit in Figure 4 (A1) in the previous version of the manuscript contained all channels (glomeruli) across the entire AL. We have also added an extensive description of the optimization procedure in Section 4.3 of the Methods section. Briefly, the affinity vector is estimated from physiological experiments [8,9], and the result is considered to be the "ground-truth" representation of odorant identity. To measure the degree of odorant identity recovery by our in silico model of the Antennal Lobe, the affinity vector is compared against the model PN response (in silico). The Divisive Normalization Processor is something developed in the authors' previous work. The present work lacks a satisfactory qualitative explanation. • We added additional clarification in Section 2.1 of the revised manuscript to clarify the DNP models. We would like to emphasize, however, that only the critical point solutions of the Differential DNP model described in equations (1)-(5) relate directly to the DNP model previously proposed for the fly visual system [5], which motivated us to describe the dynamical system models in the current manuscript as differential DNP models. This has been made explicit in L168-170. To provide more intuition, we also included on L167 a reference to the history of divisive normalization models [10]. 6. When the authors use the term 'PN' they always refer to the excitatory uniglomerular PNs. The fly contains many more multiglomerular PNs, whose function is less well understood. For clarity, I opine it is better to term these neurons uniglomerular PNs, i.e. uPNs, explicitly. • The model proposed here is not a complete reflection of the AL connectome but rather a simplified model that captures the essential features of the AL computation. 7. The existence of Pre-LNs, Post-eLNs and Post-iLNs is taken for granted. References to anatomical work establishing the existence of each, and a discussion of what is already known about their action, is warranted, for example their roles in divisive normalisation and gain control. It is not clear to me why the possibility of Pre-eLNs (the authors only consider Pre-iLNs, though they call them Pre-LNs) is ignored. • We thank the reviewer for the feedback. The current manuscript focuses on studying the functional logic of the Antennal Lobe computation using a simplified model of the Antennal Lobe connectome. A study relating the model presented in the current work to the adult fly connectome is publicly available at [3]. In the current work, we took advantage of the known diversity in connectivity and neurotransmitter profiles of the LNs, and found that the 3 LN types considered in the current study are sufficient to explain the observed AL dynamics. We have added additional clarifications in L56-60 in the Introduction section, as well as in L458-465 in the Discussion section. 8. The authors use their prior model for the Calcium Feedback Loop of the Odorant Transduction Process (OTP).
It and the choice to use it is not adequately qualtiatively explained in the main text, and so the reader easily misses out on some important framing. • We provided more information on the modeling choice in Section 2.1 on L160-170. 9. I think that making a paper's text as accessible as possible benefits every paper. A glossary of terms such as: Conor-Stevens point neuron, concentration-invariance, object identity recovery, contrast-boosting., semantic information, syntactic information, as well as anatomical terms used in the paper, would greatly assist non-specialist readers and increase the potential reader pool for the paper. • We have added two glossary tables for both the terms and the mathematical notation used in the revised manuscript in the Supplementary materials section. 10. The paper refers to an OSN Axon-hillock. I am not aware that insect neurons are understood to have an axon hillock in the manner established with mammalian neurons. I think this term might therefore be a little misleading, but am open to being corrected here by the authors. • Indeed, Axon-Hillock is used only to help reference to the biological locality (spike initiation zone) of the spike generation process in insect neurons. We opted to remove all references to Axon-hillock, and simply refer to the Biophysical Spike Generator model of OSNs. 11. The qualitative description of the models' free parameters in table 9.b could be greatly
Identifying and quantifying metabolites by scoring peaks of GC-MS data Background Metabolomics is one of most recent omics technologies. It has been applied on fields such as food science, nutrition, drug discovery and systems biology. For this, gas chromatography-mass spectrometry (GC-MS) has been largely applied and many computational tools have been developed to support the analysis of metabolomics data. Among them, AMDIS is perhaps the most used tool for identifying and quantifying metabolites. However, AMDIS generates a high number of false-positives and does not have an interface amenable for high-throughput data analysis. Although additional computational tools have been developed for processing AMDIS results and to perform normalisations and statistical analysis of metabolomics data, there is not yet a single free software or package able to reliably identify and quantify metabolites analysed by GC-MS. Results Here we introduce a new algorithm, PScore, able to score peaks according to their likelihood of representing metabolites defined in a mass spectral library. We implemented PScore in a R package called MetaBox and evaluated the applicability and potential of MetaBox by comparing its performance against AMDIS results when analysing volatile organic compounds (VOC) from standard mixtures of metabolites and from female and male mice faecal samples. MetaBox reported lower percentages of false positives and false negatives, and was able to report a higher number of potential biomarkers associated to the metabolism of female and male mice. Conclusions Identification and quantification of metabolites is among the most critical and time-consuming steps in GC-MS metabolome analysis. Here we present an algorithm implemented in a R package, which allows users to construct flexible pipelines and analyse metabolomics data in a high-throughput manner. Electronic supplementary material The online version of this article (doi:10.1186/s12859-014-0374-2) contains supplementary material, which is available to authorized users. the reproducibility of the intensity data generated by AMDIS and, therefore, its direct utility for comparative metabolomics studies. Such data may, for example, lead to erroneous identification of chemical signatures (i.e. biomarkers) and, potentially, to the misinterpretation of the activity of metabolic pathways. AMDIS is also known to yield a high rate of false identifications of metabolites, referred to simply as the false positive rate [10]. Furthermore, AMDIS reports different results according to the zoom level applied to the chromatogram under analysis. Some compounds are only correctly identified when a smaller portion of the chromatogram is analysed. Finally, the layout of metabolomics data preprocessed by AMDIS is such that it requires further manipulation before it is amenable to subsequent processing and analysis [11]. The necessary manual curation of AMDIS-generated datasets can, therefore, potentially require months to complete. Recent years have seen exponential growth in the number of metabolomics studies. At the same time, spectral libraries have themselves continued to grow in size, thereby enabling an ever-increasing number of target metabolites to be identified within individual GC-MSanalysed samples. Additionally, high impact scientific journals have raised their standards with respect to the validation of results from metabolomics studies, requiring higher numbers of samples and technical replicates. 
The net result has been an explosion in the amount of GC-MS-generated data [4], making manual curation postprocessing by AMDIS impracticable. An algorithm which more reliably identifies and quantifies metabolites analysed by GC-MS and which is implemented in a software package that reports results in a format that facilitates further data processing without manual intervention is urgently needed. Numerous programs and software packages to automate processes for the analysis of metabolomics data have become available in the last couple of years. These tools enable quick data normalisation, statistical analysis and the production of graphs for data visualisation [6,12]. Among them is web-based XCMS Online ( [13]; https://xcmsonline.scripps.edu/). It is widely used for the comparative analysis (i.e. comparisons between pairs of experimental conditions) of the abundances of unidentified IMFs in raw GC-MS data. While XCMS Online enables the identification of metabolites present at significantly different levels across experimental conditions, it is important to note that this involves manual processing. Thus, although XCMS Online can be particularly useful when searching for potential biomarkers, it does not fit the requirements of high-throughput identification and quantification of GC-MS data. Consequently, despite AMDIS's limitations, it remains the most popular software for the identification and quantification of metabolites in raw GC-MS metabolomics datasets. We introduce here a new algorithm, PScore, which we have developed for the identification and quantification of metabolites in biological samples analysed by GC-MS. PScore scores the metabolites contained in a pre-defined spectral library according to their likelihood of being associated with a specific chromatographic peak; the higher the score, the greater the similarity between the expected (i.e. defined in the spectral library) and observed spectra and RTs (i.e. measured in the biological sample). For a given metabolite: (1) the closer its fragments' detected peaks are to its expected RT, (2) the more closely its fragments' relative intensities follow those defined in the spectral library, and (3) the higher the correlation between the intensities of its fragments, the higher its score. PScore enables the use of threshold scores based on the certainty requirements of each metabolomics experiment, with higher threshold scores resulting in greater precision in compound identification. PScore is implemented in our new R package, MetaBox, which generates an integrated list of identified metabolites and their corresponding intensities from replicate samples analysed by GC-MS. MetaBox includes functions for removing specific ion mass fragments from GC-MS files and for the generation of graphical outputs. The reports generated by MetaBox can be directly applied to other tools, such as MetaboAnalyst [12] and the R package Metab [6], in order to perform further data processing and statistical analyses. In addition, MetaBox accepts spectral libraries built using AMDIS, including the original formats in which they were generated. Furthermore, MetaBox's use of pop-up dialog boxes makes it more accessible to novice R users. Finally, being an R package, MetaBox is open-source, allowing users to adapt it to their own pipelines for data analysis. We validated the results produced by PScore through MetaBox via a two-step approach. 
First, we compared its performance against AMDIS's when identifying and quantifying volatile organic compounds (VOCs) present in standard mixtures of metabolites. MetaBox yielded a smaller proportion of misidentifications and higher accuracy in quantification. Second, we used XCMS Online to generate reference datasets for comparing MetaBox's performance against AMDIS's when identifying compounds present at different levels in faecal samples from female and male mice. MetaBox yielded a higher percentage of metabolites matching XCMS Online's results.

PScore: The algorithm

PScore is a GC-MS-based retention time (RT) scoring algorithm used to assess the likelihood that the observed RTs in a biological sample correspond to known metabolites within a user-defined spectral library.

Metabolite identification and quantification by GC-MS

GC-MS instruments usually generate a single file per biological sample, each file containing a list of mass spectra together with their corresponding RTs. These spectra are commonly shown on a chromatogram represented by RT on the horizontal axis and signal intensity on the vertical axis. Peaks in intensity on the chromatogram correspond to putative metabolites in the analysed sample. PScore performs metabolite identification based on a spectral library containing the RT and fragmentation patterns of potential target metabolites.

Spectral library requirements

Metabolite identification and quantification require a spectral library containing reference information against which observed spectra can be compared. PScore requires that for each metabolite, M say, in a spectral library, L say, information is included about its expected retention time, E_RT, and typically its four most abundant IMFs' mass-to-charge (m/z) ratios, which we will denote by M_i (i = 1, 2, 3, 4). Additionally, PScore requires that L contains the intensity ratios R_i = I_i/I_1 (i = 2, 3, 4), where I_i denotes the expected intensity of IMF M_i, i.e. R_i is the intensity of M_i relative to that of M_1. We will refer to relative intensities simply as intensity ratios. For example, consider the first row of the spectral library shown in Table 1, corresponding to the compound ethanol. It has an expected retention time of 6.64 minutes; its four most abundant IMFs have m/z ratios of 31, 45, 46 and 29; the intensities of the last three of these IMFs, relative to the first, are 0.777, 0.343 and 0.249, respectively. Many algorithms applied for identifying metabolites analysed by GC-MS, such as AMDIS and X-Rank [14], make use of more than 4 ion mass fragments, if available, when calculating the similarity between two mass spectra. Our experience analysing GC-MS data suggests that the 4 most abundant ion mass fragments and the RT are generally the key factors defining the identity of an analyte. For many compounds, the remaining fragments are generally close to or at the noise level, which increases their variability across samples and may reduce the accuracy of identification. In addition, in the way PScore was developed, every additional fragment to be analysed requires additional computing power, which may considerably increase the analysis time. Compounds showing fewer than 4 fragments in their spectra may have the existing fragments recycled. For example, a compound X containing only the fragments 58 and 106 in its spectrum would have these fragments analysed twice by PScore. In this case, the row of the ion library defining compound X would have its most abundant fragment defined as M_1 and M_3 in the ion library and the second most abundant fragment defined as M_2 and M_4.
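To make the library layout concrete, the sketch below shows one way such an entry could be represented and how the intensity ratios R_i follow from raw fragment intensities, including the fragment-recycling case. This is an illustrative Python sketch, not MetaBox's actual R data structures; the class and function names are invented for this example, and the absolute ethanol intensities are made up so that the resulting ratios match Table 1.

```python
from dataclasses import dataclass

@dataclass
class LibraryEntry:
    """One row of a PScore-style spectral library (illustrative layout)."""
    name: str
    expected_rt: float   # E_RT, expected retention time in minutes
    mz: tuple            # m/z of the four most abundant IMFs (M1..M4)
    ratios: tuple        # R_2, R_3, R_4 = I_i / I_1 relative to the base peak

def make_entry(name, expected_rt, mz, intensities):
    """Build a library entry from raw fragment intensities.

    Fragments are sorted by decreasing intensity; compounds with fewer than
    four fragments have their existing fragments recycled, mirroring the
    two-fragment example in the text (M1/M3 and M2/M4).
    """
    frags = sorted(zip(mz, intensities), key=lambda x: -x[1])
    n_orig = len(frags)
    while len(frags) < 4:                       # recycle fragments if needed
        frags.append(frags[len(frags) % n_orig])
    mz4 = tuple(m for m, _ in frags[:4])
    i1 = frags[0][1]
    ratios = tuple(i / i1 for _, i in frags[1:4])
    return LibraryEntry(name, expected_rt, mz4, ratios)

# Example resembling the ethanol row of Table 1; the absolute intensities
# are invented so that the resulting ratios come out as 0.777, 0.343, 0.249.
ethanol = make_entry("Ethanol", 6.64, (31, 45, 46, 29), (1000, 777, 343, 249))
print(ethanol)
```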
In the remainder of this section we describe PScore, a peak scoring method which utilises the information available within a single GC-MS sample to score observed peaks occurring within a range of RTs that are potentially associated with a metabolite, M, in the spectral library, L. The highest scoring peak is inferred as belonging to M. We describe the PScore algorithm according to the four stages shown in Figure 1. (Table 1 shows the spectral library used in this study, listing each compound's expected RT, most abundant IMFs and intensity ratios.)

Stage 1: Scoring peaks associated with IMFs M_1-M_4

When a metabolite elutes from the gas chromatography column and enters the mass spectrometer, it is bombarded by electrons and fragmented into ionised components, or IMFs. In theory, the IMFs from the parent metabolite, M, should almost simultaneously reach the mass spectrometer's detector, where their intensities and RTs are recorded. This information is commonly used to build both their individual chromatograms and their cumulative or total ion chromatogram. An ideal process would result in the entire complement of IMFs yielding a set of overlapping peaks centred precisely on a single expected RT. In practice, however, RT shifts may be observed depending on the type of sample being analysed and the variability across GC-MS runs. Consequently, a metabolite's IMF peaks may occur in the vicinity of, but not precisely at, its expected RT. Thus, a search must be conducted across a window of RTs spanning the region of the chromatogram which most plausibly contains the IMF peaks corresponding to the metabolite. Consider a metabolite M in spectral library L with expected retention time E_RT. We define a RT window W = E_RT ± w, with the window parameter, w, being user-defined. The region W is searched for groups of peaks potentially corresponding to IMFs M_1, ..., M_4 belonging to M. The jth group's observed peak intensities are recorded as P_j = (Î_1j, Î_2j, Î_3j, Î_4j; t_j), where Î_ij is the observed intensity of IMF M_i and t_j is the RT at which M_1's peak is observed. Letting Î_max = max{Î_ij}, each observed intensity, Î_ij, in P_j is scored as follows:

3, if Î_ij occurs at time t_j ± 1 s and Î_ij = Î_max;
2, if Î_ij occurs at time t_j ± 1 s but Î_ij < Î_max;
1, if 0 < Î_ij < Î_max but Î_ij does not occur at t_j ± 1 s;
0, otherwise.

The total score for P_j is the sum over the scores assigned to each of its IMFs, allowing a maximum possible score of 12.
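A minimal sketch of this Stage 1 scoring rule is given below, written in Python purely for illustration (MetaBox itself is an R package). The function name and the representation of a candidate peak group are assumptions; note also that the text's definition of Î_max is ambiguous, which the comments flag.

```python
def stage1_score(group, t_j, tol=1/60):
    """Stage 1 score for one candidate group of IMF peaks.

    `group` holds four (intensity, rt) pairs for IMFs M1..M4 and `t_j` is
    the RT at which M1's peak is observed.  `tol` is the +/- 1 s window of
    the text, expressed in minutes here because the RTs are in minutes.
    Illustrative re-implementation, not MetaBox's actual code.
    """
    intensities = [i for i, _ in group]
    i_max = max(intensities)  # literal reading: maximum within the group
    total = 0
    for intensity, rt in group:
        at_tj = abs(rt - t_j) <= tol
        if at_tj and intensity == i_max:
            total += 3        # strongest fragment, observed at t_j
        elif at_tj and 0 < intensity < i_max:
            total += 2        # weaker fragment, still observed at t_j
        elif 0 < intensity < i_max:
            total += 1        # detected, but away from t_j
        # otherwise the fragment scores 0
    # NB: with this literal reading only one fragment can reach 3 unless
    # intensities tie; the paper quotes a maximum of 12, so I_max may be
    # intended per fragment across the RT window rather than per group.
    return total
```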
Stage 2: Similarity scoring of theoretical and observed spectra

If metabolite M is present in a GC-MS-analysed sample, not only do we expect a group of peaks to be observed at its expected RT, we also expect its observed intensity ratios to be identical to their corresponding theoretical values in L. However, due to variability across GC-MS runs and the possible convolution of metabolites, the values of the observed and theoretical ratios may differ from one another. Thus, at Stage 2 we compute the intensity ratios R̂_j = (R̂_2j, R̂_3j, R̂_4j) from the jth group's observed peak intensities, P_j, where R̂_ij = Î_ij/Î_1j (i = 2, 3, 4). It follows that if the observed intensities in P_j are from metabolite M then we expect R̂_ij = R_i or, equivalently, Î_ij = R_i · Î_1j. We make allowance for variability between observed and theoretical intensity ratios by introducing a match factor f (0 < f < 1) which we use to construct intervals around each theoretical ratio, R_i, associated with metabolite M. The lower and upper limits of this interval are given by L_i = f·R_i and U_i = (2 − f)·R_i, respectively, with the value of f chosen to yield sufficiently narrow intervals such that only observed peaks from a group of IMFs corresponding to M will lie within them. To reflect this, we give each observed ratio R̂_ij a score of 1 if it falls within its match factor interval [L_i, U_i]. The total score for R̂_j is given by the sum over all of its ratios' scores, allowing a maximum possible score of 3.

Stage 3: Scoring the correlation between IMFs' intensities

The ion chromatogram of each IMF originating from a single compound is expected to form an approximately bell-shaped curve over a range of RTs t_j ± ε, where ε is chosen to capture the non-zero intensities with magnitudes that are dependent on RT. We represent this by expressing the intensity of IMF M_i of M (i > 1) as a function of retention time t, i.e. Î_ij(t). If the IMFs corresponding to the intensities in P_j are perfectly aligned, then theoretically their intensity ratios would be expected to be constant across t ∈ t_j ± ε, i.e. r_ij(t) = Î_ij(t)/Î_1j(t) = c_ij, where c_ij denotes the proportionality constant in the linear relationship between Î_ij and Î_1j and is independent of RT. In other words, IMFs originating from the same compound are expected to have highly correlated intensities, as they are expected to increase and decrease at the same time. At Stage 3 we compute the correlation between the intensities Î_i and Î_1 of M_1 (i = 2, 3, 4), across the retention time window t_j ± ε, denoted by ρ_i1|t_j, which is calculated using Pearson's correlation coefficient. In our experience, the optimal neighbourhood of t_j is ε = 0.07. Ideally, ρ_i1|t_j = 1. However, this is not always the case. Metabolite coelution, for example, may affect the correlation between IMFs' intensities. Thus, we define a correlation threshold, ct, such that 0 < ct < 1. We then give metabolite M a score of 1 for each of its observed IMFs at t_j which have ρ_i1|t_j ≥ ct; that is, the value of the Pearson's correlation is greater than or equal to the correlation threshold ct. The Stage 3 score is then given by the sum of these scores, allowing a maximum possible score of 3; metabolites found at similar RTs (e.g. coeluting compounds) may therefore receive lower Stage 3 scores.

Stage 4: Defining the RT and the abundance of metabolite M

We calculate the score S_M of metabolite M at time t_j as the sum of the scores obtained in Stages 1, 2 and 3. Then, we obtain the intensity of M_1 at the t_j associated with the highest score, S_M(t_j), and with the lowest difference to the expected RT, E_RT. This intensity represents the abundance of M_1. Stages 1, 2, 3 and 4 are performed for every metabolite M in library L. After all metabolites in L are analysed, it may happen that different metabolites were associated with the same time t_j. In these cases, we select for each time t_j only the metabolite showing the highest score S_M(t_j) and the lowest difference between time t_j and the E_RT.
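The following sketch illustrates Stages 2-4 as we read them: the match-factor interval test, the Pearson-correlation test against a threshold ct, and the total score as the sum of the three stage scores. Again this is an illustrative Python sketch rather than MetaBox's R implementation, and all function and parameter names are assumptions.

```python
import numpy as np

def stage2_score(obs_ratios, lib_ratios, f=0.7):
    """Stage 2: one point per observed ratio inside [f*R_i, (2-f)*R_i]."""
    score = 0
    for r_obs, r_lib in zip(obs_ratios, lib_ratios):
        if f * r_lib <= r_obs <= (2 - f) * r_lib:
            score += 1
    return score                      # maximum possible score is 3

def stage3_score(traces, ct=0.95):
    """Stage 3: one point per IMF whose intensity trace over t_j +/- eps
    has a Pearson correlation of at least ct with M1's trace."""
    base = np.asarray(traces[0], dtype=float)
    score = 0
    for trace in traces[1:]:
        rho = np.corrcoef(base, np.asarray(trace, dtype=float))[0, 1]
        if rho >= ct:
            score += 1
    return score                      # maximum possible score is 3

def total_score(s1, s2, s3):
    """S_M(t_j) as we read the text: the sum of the three stage scores,
    with an ideal value of 12 + 3 + 3 = 18."""
    return s1 + s2 + s3
```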
Implementing PScore in MetaBox

We have implemented our PScore algorithm in an R package named MetaBox. For each GC-MS sample, it generates a list of metabolites, M, with their respective abundances, P_M(j), their unique RT, t_j, at which they were identified, and their calculated score S_M(t_j). MetaBox then merges the results of individual GC-MS samples into a single R data frame called Total, using metabolite names as reference (Additional file 1: Table S1). Optionally, the data frame Total can be exported to a csv file. Ideally, S_M(t_j) = 18 when metabolite M is actually present in the analysed sample. However, this is not always the case. A specific compound's spectrum may vary slightly from sample to sample as a result of GC-MS variation, matrix effects and metabolite coelution. Therefore, we define a score threshold s_t, such that 8 ≤ s_t ≤ 18. MetaBox then selects metabolites that have a calculated score S_M(t_j) ≥ s_t and stores them in a second R data frame called cutOff, containing the name of each metabolite in the first column and their respective abundances in each GC-MS sample in the following columns (Additional file 1: Table S2). Optionally, the data frame cutOff can be exported to a csv file. The RT index is an excellent system for obtaining reproducible results within and across labs. It is currently implemented in AMDIS and other tools such as TagFinder [15]. However, PScore was initially developed to use only the RT. The possibility to use the RT index will most probably be implemented in a future version of MetaBox.

Validation

As we implemented PScore in the R package MetaBox, we compared MetaBox's performance against AMDIS's in identifying and quantifying VOCs present in standard mixtures of metabolites and in faecal pellets of female and male mice.

Standard mixtures

A single standard mixture containing 13 metabolites (Table 1) was prepared and divided into 10 aliquots: 5 aliquots of 50 μL and 5 aliquots of 100 μL. Each 50 μL aliquot was diluted by adding 50 μL of water, resulting in a final volume of 100 μL. Each aliquot was then warmed in an incubator oven at 60°C for 30 minutes, then VOCs were adsorbed onto a solid phase microextraction fibre CAR-PDMS 85 μm (Sigma-Aldrich) for 20 minutes and analysed by a Perkin Elmer (Clarus-500) GC-MS using a solvent delay of 6 min and a temperature programme of 40°C for 1 min, a ramp of 5°C/min to 220°C, and a final hold at 220°C for 4 min (total run time 41 min). The MS was operated in EI positive mode scanning mass ions in the range 10 to 300 (6-41 min). Room and lab air were used as controls.

Metabolite identification

Metabolites were identified using a mass spectral library built using AMDIS and NIST (Version 2.0) (Table 1) (NB: the library used by AMDIS contains additional ions beyond those shown in Table 1). We first characterised algorithm performance on a per-sample basis, calculating the percentage of false positive and false negative metabolite identifications, defining the percentage of false positives as 100·p_i⁺ %, where p_i⁺ is the proportion of misidentified metabolites (in relation to the total number of identified compounds) in the ith standard sample, and the percentage of false negatives as 100·p_i⁻ %, where p_i⁻ is the proportion of unidentified metabolites in the ith standard sample. For example, consider the standard sample described above containing 13 metabolites. If an algorithm identifies 100 metabolites, 10 of which are in the standard sample, it is reported as having 23.1% false negatives (i.e. 100 × 3/13) and 90% false positives (i.e. 100 × 90/100). High percentages of both false positives and false negatives may lead to erroneous inferences being drawn from the data. An optimal metabolite identification tool is one which yields the smallest percentages of both false positives and false negatives.
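The per-sample false-positive and false-negative percentages defined above can be computed as in the following sketch, which reproduces the worked example of 13 true metabolites and 100 reported identifications. The Python function and the placeholder metabolite names are illustrative only.

```python
def identification_rates(reported, truth):
    """Per-sample false-positive and false-negative percentages.

    `reported` holds the metabolites an algorithm identified in a standard
    sample, `truth` those actually present.  FP% is taken relative to the
    number of identifications and FN% relative to the number of true
    metabolites, following the definitions in the text.
    """
    reported, truth = set(reported), set(truth)
    fp = len(reported - truth)
    fn = len(truth - reported)
    fp_pct = 100.0 * fp / len(reported) if reported else 0.0
    fn_pct = 100.0 * fn / len(truth)
    return fp_pct, fn_pct

# Worked example from the text: 13 true metabolites, 100 identifications,
# 10 of them correct -> 90% false positives, 23.1% false negatives.
truth = {f"metabolite_{i}" for i in range(13)}
reported = set(list(truth)[:10]) | {f"spurious_{i}" for i in range(90)}
print(identification_rates(reported, truth))
```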
We evaluate the performances of AMDIS and MetaBox over all n = 10 with these criteria in mind. The match factor used by AMDIS may affect the number of false negatives and positives reported. Therefore, AMDIS was applied using the match factor values of 70, 80 and 90. MetaBox was applied using match factor of 70, correlation of 0.95 and score cut of 13. Metabolite quantification All aliquots from the standard mixture were analysed by both AMDIS and MetaBox. For AMDIS, its 'Base Peak' values were reported for the metabolite intensities. A reference dataset (Reference), containing the intensity of each metabolite's most abundant IMF, was manually obtained for each sample using the R package XCMS [16]. The abundances reported by MetaBox, AMDIS and Reference for each metabolite are expected to be very similar. We confirmed this by performing a hierarchical cluster analysis (HCA) and a principal component analysis (PCA) on the combined datasets. Mice samples Five female and five male five-week old inbred wild-type C57BL/6 mice were purchased from Charles River Laboratories (Margate, UK) and acclimated to standard animal house conditions at the University of Liverpool for a minimum of 1 week. The mice were individually housed for a total of 8 weeks, when one ten-pellet faecal sample was taken from a clean cage. Mice were then sacrificed under Schedule 1 Animals Act 1986. Mice were used in accordance with local ethics approved from the University of Liverpool. Each (n = 10; Female = 5; Male = 5) tenpellet sample was then analysed by GC-MS using the same configuration described in Standard mixtures. The mice samples were analysed using AMDIS and MetaBox, using a mass spectral library built using AMDIS and NIST database (Version 2.0) (Additional file 1: Table S3). In order to remove potential false positives, we only analysed those metabolites present in at least 2 samples per experimental condition (i.e. Female and Male). It is difficult to generate a reference or control when analysing mice samples, as the identity and concentrations of metabolites in these samples are unknown. Therefore, we applied an approach used for biomarker discovery [16]. We used XCMS Online to generate a reference dataset containing the list of IMFs present at significantly different levels between female and male samples (Welch t-test; p-value < 0.05), including the RT where the peak of each IMF is detected. Then, we used our spectral library (Additional file 1: Table S3), which contains the expected RT and the IMFs of each metabolite, to identify the IMFs reported by XCMS Online. We then conducted a Welch's t-test on the AMDIS and MetaBox datasets comparing males and females for each listed metabolite and compared these algorithms' performances against the ttest results from XCMS Online. For clarity, compounds found at significantly different levels between female and male mice samples will be called as biomarkers. (NB. All chromatograms were left untreated and no data normalisations were applied to metabolite abundances.) The CAS numbers of all metabolites used in this study are available in Table S7 of the Additional file 1. Standard mixtures For clarity, aliquots of 50 μL of standard mixture + 50 μL of water will be described simply as 50 μL samples, while aliquots of 100 μL will be described as 100 μL samples. 
Metabolite identification

To enable the comparison of AMDIS's and MetaBox's efficacies in metabolite identification, we calculated the percentages of false positives and false negatives reported by each algorithm when analysing 10 samples of a standard mixture of metabolites (i.e. 5 samples of 50 μL and 5 of 100 μL), using match factors of f = 70, 80 and 90 for AMDIS, and a match factor of f = 70 and a score cut of 13 for MetaBox. Every compound reported by AMDIS was considered in the analysis, including multiple identifications for a single RT. For f = 70, AMDIS reported an average ± SE (n = 10) of 32.8% ± 1.8% false positives and an average of 6.9% ± 0.8% false negatives. f = 80 and 90 resulted in 30.3% ± 1.9% and 27.8% ± 1.0% false positives, respectively, and 6.2% ± 1.0% and 4.6% ± 1.3% false negatives, respectively (Figure 2). MetaBox performed overwhelmingly better than AMDIS, reporting no false positives and no false negatives. Although AMDIS performed reasonably well in terms of low percentages of false negatives, it was a poor performer with respect to its high reporting of false positives. It may be that AMDIS is actually performing as expected given the primary motivation for its development: single-sample analyses of complex chemical mixtures to identify any signs of potential target compounds or chemical weapons [7]. In this context a low false negative rate is crucial, and AMDIS's performance meets this requirement. However, the primary motivation for most metabolomics experiments is the identification and quantification of the highest possible number of metabolites present in biological samples for the comparison of their abundances, or relative abundances, across experimental conditions. It is a non-targeted analysis, generally limited only by the metabolites represented in the spectral library. The biological interpretation is then achieved based on the metabolite profile generated by each sample. In this case, the percentages of both false negatives and false positives are crucial for biologically meaningful interpretations of the data. A high percentage of false negatives represents potential losses of biological evidence, while a high percentage of false positives may provide misleading evidence. Therefore, results generated by AMDIS should be manually curated and critically assessed in order to achieve sound biological interpretations.

Metabolite quantification

Average-linkage hierarchical cluster analysis (HCA) (Figure 3A) and principal component analysis (PCA) (Figure 3B) were performed on the metabolite abundances reported by AMDIS and MetaBox (Additional file 1: Table S4). The HCA yielded two main nodes, or clusters: one containing the 50 μL samples and the other the 100 μL samples. Within samples, the MetaBox and reference datasets always clustered together under the same node in the first agglomeration round, and this node excluded the corresponding AMDIS dataset. This is indicative of MetaBox-generated abundances being closer in value to those in the reference datasets than the AMDIS-generated ones. In the PCA, the MetaBox and reference datasets from the same sample consistently yielded approximately equal values for PC 2, once again showing a high degree of similarity between the two sets of data. AMDIS, on the other hand, yielded datasets with PC 2 values less than or equal to zero, demonstrating that only when a high match factor is used will AMDIS yield datasets containing abundances approaching values close to those in the reference datasets.
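For readers who wish to reproduce this kind of comparison on their own abundance tables, a minimal sketch of an average-linkage HCA and a two-component PCA is shown below. It uses SciPy and scikit-learn in Python rather than the authors' R workflow, and the abundance matrix is random placeholder data standing in for Additional file 1: Table S4.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from sklearn.decomposition import PCA

# Rows = one dataset per sample and quantification method (Reference,
# MetaBox, AMDIS); columns = metabolites.  Random placeholder values only.
rng = np.random.default_rng(0)
abundances = rng.lognormal(mean=10, sigma=1, size=(12, 13))

# Average-linkage hierarchical clustering, as in Figure 3A.
tree = linkage(abundances, method="average")
dendrogram(tree, no_plot=True)        # set no_plot=False to draw the tree

# Two-component PCA, as in Figure 3B.
scores = PCA(n_components=2).fit_transform(abundances)
print(scores[:3])
```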
Part of the dissimilarity between the AMDIS and the reference datasets may be a result of background noise subtraction performed by AMDIS and/or the use of different IMFs when deconvoluting and quantifying the same metabolite across samples. The potential use of different IMFs for metabolite quantification by AMDIS is another indication of its development without a view to comparing the same metabolite across different samples, and yet this is a fundamental concern of metabolomics studies. Further evidence lies in the format it uses for reporting results. AMDIS can generate two types of reports: individual reports or a single report (batch report) for several samples by simply appending results sample-by-sample without actually matching metabolites identified in the different samples. Furthermore, AMDIS reports multiple potential identities associated to a single RT. Consequently, when applied to metabolomics studies, AMDIS's results must be manually cleaned (i.e. the correct hit for each RT must be manually selected), the ion mass fragment used to quantify each metabolite must be manually verified and the results produced for different GC-MS files must be combined in a single table or spreadsheet, and this can be enormously time-consuming depending on the number of samples being processed. MetaBox, however, was developed specially for metabolomics studies. Its results are reported in a single spreadsheet containing the identified metabolites and their respective abundances in every analysed sample, and in the format most commonly required for downstream data normalisation and analysis. Mice samples To compare the efficacies of AMDIS and MetaBox in identifying potential biomarkers, we evaluated the datasets generated by each against the XCMS Online reference dataset. XCMS Online reported a total of 387 IMFs (features), from which 73 showed significantly different intensities (Welch t-test; p-value <0.05) between female and male mice faecal samples (Additional file 1). Based on the IMFs and RTs in the spectral library used by AMDIS and MetaBox (Additional file 1: Table S3), we identified 19 compounds associated to the total list (387) of IMFs reported by XCMS Online. Eleven compounds were associated to 47 of the 73 IMFs reported by XCMS Online at significantly different intensities between female and male samples (Additional file 1: Table S5). However, only 4 of these compounds ( Table 2) showed IMFs that were both present at significantly different levels according to XCMS Online results and used by AMDIS and MetaBox for metabolite quantification. Therefore, only these 4 compounds were expected to be found as potential biomarkers by AMDIS and MetaBox. AMDIS and MetaBox were able to identify all 19 compounds associated to the XCMS Online results (Additional file 1: Table S6). For all match factors tested, AMDIS identified 3 potential biomarkers, being only one confirmed by XCMS Online (Additional file 1: Table S5). MetaBox identified 4 potential biomarkers, being two confirmed by XCMS Online (Additional file 1: Table S5). In summary, AMDIS was able to report 1 out of 4 potential biomarkers, while MetaBox reported 2 out of 4. Although MetaBox missed the identification of 2 potential biomarkers, its results represent 100% improvement in relation to AMDIS'. Conclusions Identification and quantification of metabolites is among the most critical and time-consuming steps in GC-MS metabolome analysis. 
The reliability of the biological inferences that can be drawn from metabolomics studies is directly related to the quality of the data upon which they are based. In addition, as the size and number of metabolomics studies conducted by individual laboratories has grown, the time available to analyse each single dataset has reduced. Therefore, to satisfy the criteria of metabolomics studies ideally software must reliably identify and quantify metabolites, and the results must be reported in a format that facilitates further data analysis. Although AMDIS has been widely used in metabolomics, results show that its performance no longer meets the requirements of modern high-throughput analysis of metabolomics experiments. We presented here a new algorithm, PScore, which uses a spectral library to analyse GC-MS samples and score retention times according to their probability of representing a metabolite. We implemented PScore in an R package, MetaBox, and compared its performance against AMDIS when analysing standard mixtures of metabolites and mice faecal samples. PScore greatly reduces the percentage of false positives and false negatives, and it considerably improves the quantification of metabolites analysed by GC-MS. In addition, our new R package MetaBox incorporates functions to generate graphical outputs and reports results in a format accepted by other software, such as Metab and MetaboAnalyst, allowing users to perform further data processing and statistical analyses in a high-throughput way. As an R package, MetaBox allows users to construct flexible pipelines for data analysis and allows pop-up dialog boxes, which facilitate its usage by R beginners.
First haematic results for the sea bass (Dicentrarchus labrax) metabolic profile assessment

Abstract: The assessment of blood reference ranges of farmed fish can be extremely useful in improving production and product quality. A first attempt to establish the normality ranges for the most important haematochemical parameters of farmed sea bass was carried out by analysing the trend of Haematocrit, Glucose, Total Protein, Albumin, Globulin, Total Cholesterol and some electrolytes in 353 sea bass farmed with two different farming systems within 1 year. A strong seasonal effect was found with regard to each parameter; the role of some environmental conditions was evaluated; and some reference ranges were proposed for the culturing methods considered.

Introduction

The determination of some haematochemical parameters is considered by many authors (Payne, 1972; Caldwell and Hinshaw, 1994; Ravarotto et al., 1996) an extremely useful instrument for assessing the health status of farmed animals. Intensive aquaculture conditions have placed increasing demands on the fish, which must be able to cope with many stress factors that may affect their basic physiological functions, thus also affecting production and product quality. Periodic blood analyses are an inexpensive and easy method to point out metabolic disorders, deficiencies and chronic stress status before they appear in a clinical setting. It is necessary to establish the reference ranges of species-specific haematic parameters for blood analysis to be affirmed as a standard method for the evaluation of the health status of cultured fish and for the correct interpretation of the results of haematochemical analyses. The purpose of the present study was, therefore, to contribute to the collection of data for the exact determination of haematic reference ranges in two different rearing conditions for sea bass (D. labrax), one of the most commonly farmed seawater fish in the Mediterranean sea.
Material and methods

Blood samples of 353 reared sea bass were collected in two different farms in the province of Grosseto (Italy): the semi-intensive plant "Il Padule" in Castiglione della Pescaia and the intensive one "Il Vigneto" in Ansedonia. Sea bass of the semi-intensive farm were reared in rectangular ponds and fed commercial diets (Marine Basic, Trouvit, Hendrix) (Table 2) supplemented by natural food; the water came from the Diaccia Botrona brackish marshes. Sea bass of the intensive farm were reared in concrete tanks with ground water and fed with formulated diets only (Marine MRF, Trouvit, Hendrix) (Table 2). Blood samples were drawn in each farm once a month, from September 2002 to September 2003, from 15 fish that fasted for 12 hours. In the semi-intensive farming system samples were not collected in December, January and February because of the "wintering" season and the impossibility of fish handling in this period. Rearing conditions were those shown in Table 1. Experimental subjects all belonged to the same batch, and were different for each rearing method. Sea bass were randomly caught in the morning and anaesthetised (Ethylene glycol monophenyl ether, 0.4 cc/l); all fish were weighed and measured; blood was taken from each subject by puncturing the dorsal aorta with a 2.5 ml sterile plastic syringe and was divided into BD Vacutainer serum and EDTA (K3) tubes. The Haematocrit (Hct %) was measured (Redacrit centrifuge, 3600 rpm, 5 min) with micro-haematocrit heparinized capillary tubes, then sample tubes were refrigerated and carried to the laboratory of the Dipartimento di Scienze Zootecniche (Università di Firenze, Italy) where plasma and serum were obtained by centrifugation with a Refrigerated Centrifuge ALC 4227R (3000 rpm, 30 min). Samples were frozen at -20°C and the haematochemical analyses were performed with a UV/VIS Spectrometer Lambda EZ 150 (PerkinElmer) using Sclavo Diagnostics Inc. kits. Monthly values of dissolved oxygen (DO), water temperature, oxygen percentage saturation and salinity were registered for the semi-intensive farm only, while for the intensive farm annual ranges were collected through farmer information. Seasonality and farming system were analysed by 1-way ANOVA (P<0.05) considering month or farm as fixed effect, respectively. Means separation was computed by Fisher's test. Sampling dates were divided into classes according to the Fisher's test results: a number was assigned to any combination of significant letters - the smaller the number, the higher the monthly mean. Relationships between environmental and blood parameters were investigated by regressions and Pearson correlation.

Results and discussion

From results obtained by comparing the two farming conditions, the Haematocrit showed values in accordance with the literature (Roche and Bogé, 1996; Pavlidis et al., 1997; Papoutsoglou et al., 1998) and no difference was found between the semi-intensive and the intensive farmed stocks (34.4 ± 6.3 and 35.1 ± 5.4, respectively). A wide variability within each parameter occurred, especially for the Glucose content, which showed the highest Coefficient of Variability: 70.86 and 55.53% for the semi-intensive and intensive farm, respectively. The comparison between the two farming conditions highlighted a difference in the haematic Glucose content, with a higher value for the semi-intensive system (Table 4). This difference is probably not due to the farming conditions, but can be attributed to the difficulty in catching the fish due to the pond characteristics: catching fish in the Padule farm was more time-consuming and more difficult than in the Vigneto one, resulting in a stressed fish stock. In fact, many authors reported Glucose as an index of stress, capture stress included (Benfey and Biron, 2000; Sadler et al., 2000). The ANOVA carried out on monthly means within the same parameter indicated a strong effect of the sampling date on all the blood parameters in both farming systems, with the exception of the Glucose content in the intensive farm (Table 3). Many authors agree in the assertion that haematic parameters are affected by the sampling season (Kavadias et al., 2003), because the food intake increases as temperature rises and so does the general metabolism. Due to the special condition of the intensive farm - supplied by ground water with constant temperature all year round - the feeding is constant in time and quantity, and this is reflected in the glycaemic blood content (Table 3). The semi-intensive farm, on the contrary, suspends the fish feeding in winter (December, January and February in this study) and gradually starts again as soon as the water temperature allows the fish to experience normal metabolic activity. This hypothesis is further confirmed by the growth: in the intensive farm fish gained 155% of their body weight in 13 months, while in the semi-intensive farm they gained only 120%.
From Table 3 it is clear that in the semi-intensive system Glucose, Total Protein and Globulin have a strong trend to higher values during the summer season (from May to September). The same trend was also found in the other parameters, although it was less marked. In the intensive system the period showing higher haematic values is longer, starting already in April for all parameters, with the exception of the Total Cholesterol, which was higher only in June. A difference between the two farming methods was also found in the blood electrolytes that showed wider ranges ( Figure 1) and higher values (Table 4) in the intensive farm. The differences in the haematic content of Calcium, Inorganic Phosphorus, Magnesium and Chloride can be attributed to the water quality and the feed. In fact, ground water undergoes remarkable fluctuations during wet and dry seasons and this explains the wider ranges in the blood electrolytes of the intensive farming; moreover, the feed used in the intensive farm had a higher Inorganic Phosphorus and Calcium content (Table 2). No effect of the fish weight on the haematic parameters was found. Correlations between blood parameters and environmental parameters, carried out for the semi-intensive system only, showed a strong positive correlation between the water temperature and the Haematocrit, Glucose, Albumin, Chloride and Total Cholesterol (r=0.295; 0.362; 0.402; 0.423; 0.207, respectively). The same blood parameters are negatively correlated to the DO (r = -0.231; -0.323; -0.510; -0.398; -0.335, respectively). Moreover it seems that Haematocrit, Glucose, Calcium and Chloride tend to rise as salinity rises (r = 0.360; 0.201; 0.181; 0.279, respectively), while Magnesium decreases (r=-0.182). Finally, the oxygen saturation was negatively correlated to the Haematocrit, Glucose, Albumin and Chloride (r=-0.344, r = -0.310, r = -0.252, r = -0.389). The comparison of variances that resulted from the regressions (Table 5) (carried out for the semiintensive farming system only) indicated that the DO explains most of the variance in most of the blood parameters considered. As shown by the correlations, in fact, the dissolved oxygen plays an important role on Haematocrit, Glucose, Albumin, Chloride and Total Cholesterol blood content, while the oxygen saturation affects the values of Calcium and Magnesium only. Temperature and salinity seem to have a secondary role with respect to DO. The former affects the haematic values of Total Protein, Globulin and Magnesium while the latter affects Total Cholesterol, Inorganic Phosphorus and Magnesium. As already described by Caldwell and Hinshaw (1994), the decreasing trend of Haematocrit as DO increases suggests, that exposure to hyperoxic conditions results in moderate anaemia (Edsall and Smith, 1990), but also confirms a capacity of the spleen of sea bass to adapt its blood cell producing activity to changes in environmental conditions, as found in trout by Wells and Weber (1990) and Pearson and Stevens (1991). Conclusions The effects of season on all the measured parameters are confirmed by the present study, but it seems that the age/weight of the fish does not affect the haematic content. This study has highlighted the importance of the rearing method and the evaluation of blood composition and has demonstrated the variability in the blood parameters even when using the same fish stocks. 
As confirmed by this study, the DO affects the values of most of the blood parameters, while oxygen saturation, temperature and salinity seem to have a secondary role. Thus, blood analyses should always consider the farming system and conditions. The ranges proposed in Table 6 refer to the semi-intensive and the intensive systems separately and consider the means of the 75% of the sampled population for each parameter during the year. Values out of the given range cannot be considered pathological, but should prompt the repetition of the analyses, perhaps increasing the number of samples and the control of the common farming practices. It should be noted that a slight increase in summer or decrease in winter of some parameters is normal. Despite the great variability observed, the proposed ranges must be taken as a first assessment of normal values, characteristic of sea bass and farming method.
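As an illustration of how such ranges can be derived, the sketch below computes an interval covering the central 75% of the sampled values for one parameter. Interpreting "the means of the 75% of the sampled population" as the central 75% of observations is our assumption, and the simulated haematocrit values are placeholders, not data from this study.

```python
import numpy as np

def reference_range(values, coverage=0.75):
    """Range covering the central `coverage` fraction of the sampled values.

    Reading 'the 75% of the sampled population' as the central 75% of the
    observations (12.5th to 87.5th percentile) is an assumption made purely
    for illustration.
    """
    values = np.asarray(values, dtype=float)
    lower = np.percentile(values, 100 * (1 - coverage) / 2)
    upper = np.percentile(values, 100 * (1 + coverage) / 2)
    return lower, upper

# Simulated haematocrit values (%); placeholders, not data from this study.
rng = np.random.default_rng(1)
hct = rng.normal(loc=34.4, scale=6.3, size=150)
print(reference_range(hct))
```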
Signal Diversity for Laser-Doppler Vibrometers with Raw-Signal Combination The intensity of the reflected measuring beam is greatly reduced for laser-Doppler vibrometer (LDV) measurements on rough surfaces since a considerable part of the light is scattered and cannot reach the photodetector (laser speckle effect). The low intensity of the reflected laser beam leads to a so-called signal dropout, which manifests as noise peaks in the demodulated velocity signal. In such cases, no light reaches the detector at a specific time and, therefore, no signal can be detected. Consequently, the overall quality of the signal decreases significantly. In the literature, first attempts and a practical implementation to reduce this effect by signal diversity can be found. In this article, a practical implementation with four measuring heads of a Multipoint Vibrometer (MPV) and an evaluation and optimization of an algorithm from the literature is presented. The limitations of the algorithm, which combines velocity signals, are shown by evaluating our measurements. We present a modified algorithm, which generates a combined detector signal from the raw signals of the individual channels, reducing the mean noise level in our measurement by more than 10 dB. By comparing the results of our new algorithm with the algorithms of the state-of-the-art, we can show an improvement of the noise reduction with our approach. Introduction With the advancement of laser-Doppler vibrometers (LDVs), various additional applications are continuously made available in which contactless vibration measurement is possible [1][2][3]. These applications are difficult to implement with conventional methods of vibration measurement, impossible in the case of rotating or hot parts [4], or unwanted in medical applications [5]. In this case, no light reaches the photodetector at a certain point in time and consequently no information about the vibration can be obtained. The impact of this effect is the subject of several publications and has already been widely researched [9][10][11][12][13]. Even though the effect can be utilized for some specific measurement methods [14,15], in most LDV applications, it reduces the signal quality and limits the minimally detectable amplitude of a vibration [7]. Therefore, a reduction of the impact of this effect is desirable. Various approaches using adaptive optics have been used to accomplish this [16][17][18]. For this purpose, however, commercial vibrometers must be modified to a large extent. Another method to achieve this is through signal diversity, which is widely used in radio communications [19,20]. With signal diversity, the signal is detected from multiple channels. The fundamental idea of improving the signal quality through signal diversity is the assumption of stochastically independent signal dropouts of the individual channels, caused by the laser speckle effect. Therefore, the probability of a signal dropout occurring on all channels at a given time is exponentially lower with an increasing number of channels [7,21]. First results to improve the signal quality through signal diversity have been published by Dräbenstedt [18,21]. In his publications, an algorithm for calculating a combined signal from two or more demodulated velocity signals was developed and subsequently tested in a practical experiment. This article first aims to confirm Dräbenstedt's results by an experimental verification using four channels of a Multipoint Vibrometer (MPV). 
For our purposes, the individual measuring heads (channels) of the MPV can be seen as independent conventional LDVs and therefore all results can be achieved with several conventional LDVs in the same way. The limitations of the algorithm are shown by evaluating our measurements. Subsequently, we derive a modified algorithm, which generates a combined signal from the raw signals of the individual channels, and compare the results of the algorithms. By this comparison we can show the improvement of the resulting combined signal with our modified algorithm. We would like to mention at this point that we do not require real-time performance if the result is obtained within a reasonable amount of time. We are focusing on investigating whether an improvement of the signal quality is possible, so a slightly increased processing time is not relevant at this time.

Materials and Methods

To verify the results obtained by Dräbenstedt [18,21], an experiment using four channels (measuring heads) of an MPV is conducted. The measuring heads are aimed at a shaker with a nearly identical angle of incidence. To verify the alignment of the measuring heads, the raw signals of the four channels are acquired and subsequently demodulated by an ATAN demodulation [22]. A correct alignment can be verified by a match of the four demodulated velocity signals. We aimed for a difference of less than 5% between the amplitudes of the four velocity signals. According to Dräbenstedt's publication, the signal reliability can be improved by combining multiple demodulated velocity signals. Using Equation (1), the combined signal S is obtained from any number n of individual velocity signals X_j:

S = Σ_{j=1}^{n} w_j X_j   (1)

with the weighting factors w_j calculated according to Equation (2) [21] from the CNR (carrier-to-noise ratio). The CNR results from Equation (3), with the carrier power P_c and the noise power P_n:

CNR = 10 log10(P_c / P_n)   (3)

The carrier and the noise power can be calculated from an estimate of the spectral power density obtained with the MATLAB™ function periodogram. For calculating the carrier power P_c, for short signal lengths of 1000 samples, we assume a bandwidth of 600 kHz around the carrier frequency of 2.5 MHz. The noise power P_n is calculated from the power of the remaining frequency band (bandwidth depending on the sampling rate Fs).
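A minimal sketch of this computation is given below: the CNR of a raw-signal block estimated from a periodogram, and the weighted combination of velocity signals from Equation (1). It is written in Python with SciPy instead of MATLAB, the function names are our own, and normalising the weights to sum to one is an assumption, since Equation (2) itself is not reproduced in the text.

```python
import numpy as np
from scipy.signal import periodogram

def cnr_db(raw_block, fs, carrier=2.5e6, half_band=300e3):
    """Carrier-to-noise ratio of one raw-signal block, in dB.

    The carrier power is integrated over a 600 kHz band around the 2.5 MHz
    carrier of a periodogram-based power spectral density estimate; the
    noise power comes from the remaining band, as described in the text.
    """
    freqs, psd = periodogram(raw_block, fs=fs)
    in_band = np.abs(freqs - carrier) <= half_band
    p_carrier = np.trapz(psd[in_band], freqs[in_band])
    p_noise = np.trapz(psd[~in_band], freqs[~in_band])
    return 10.0 * np.log10(p_carrier / p_noise)

def combine_velocities(velocities, weights):
    """Equation (1): weighted sum of the demodulated velocity signals.

    Normalising the weights so that they sum to one is an assumption on
    our part; Equation (2) is not reproduced in the text.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return np.tensordot(weights, np.asarray(velocities, dtype=float), axes=1)
```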
The following experiment is intended to generate suitable velocity signals for testing the algorithm based on Equations (1) and (2) [21]. Our goal is to generate an artificial and easily replicable signal dropout by disrupting the beam paths to a shaker, which is used as a source for a known vibration. We achieve this with a rotating disc with holes for letting the beams pass through. Due to the rough surface of the disc and its placement out of focus of the laser beams, very little light is reflected, and a signal dropout is forced. This experimental setup, where the rotating disc is moved by a stepper motor, is shown in Figure 1. The four beams are aligned to ensure that at least one of the laser beams of the channels CH1-CH4 always passes through the holes and is positioned on the shaker. The results for this experiment are discussed in Section 3. Following the initial experiment with an artificial signal dropout, a more realistic experiment is performed. For this purpose, the four channels of the MPV are focused to the same spot on a test object. Unlike the previous experiment, only one of the measuring heads actively generates a laser beam (active channel), whereas the other three measuring heads only receive scattered light (passive channels). The experimental setup is shown in Figure 2. For this experiment, the measuring heads are aligned on a vibrating speaker with a known frequency. Due to the angle of the laser beams and the vibration of the speaker, in-plane movement also occurs, resulting in noise caused by the laser speckle effect. With both experimental setups, several measurements are acquired and analyzed. The following section describes the results of our evaluation.

Results from the First Experiment Using Dräbenstedt's Algorithm for a Combined Signal

The four signals of a measurement obtained from the first experimental setup, as shown in Figure 1, are demodulated and the resulting velocity signals are displayed in Figure 3. In addition, the combined signal derived from the velocity signals of CH1-CH4 with Equations (1) and (2) is also pictured.
In Section 3.2, a detailed explanation of the implemented algorithm for raw signals can be found, which is applicable to velocity signals as well. For the channels CH1-CH4, strong peaks (due to signal dropouts) are visible in the velocity signals. In our application with forced signal dropouts, we know that at any time a channel exists where a signal without any disturbances can be detected. This fact can generally be assumed for any signal, as the signal dropouts caused by the laser speckle effect are stochastically independent [21]. We can confirm this by examining the combined signal (VeloComb from [21] in Figure 3). In contrast to the individual velocity signals (CH1-CH4), the vibration of the shaker at 100 Hz is clearly visible in the combined signal. The functionality of the algorithm can also be shown in the spectral results of this measurement, shown in Figure 4. It should be mentioned that the detected frequency at 100 Hz has approximately the same amplitude for all signals; however, the noise level of the combined signal is significantly lower. The cause of the peaks that remain in the combined signal can be explained by examining the weighting factors w_j from Equation (2), required for the determination of the combined signal. We calculate these with a section of the signal with a length of 1000 samples. In the section shown in Figure 5, the weighting factors are w_1 = 0.8, w_2 = 0.11, w_3 = 0.05 and w_4 = 0.04. Thus, CH1 has the greatest influence on the combined signal at 80%. Considering the corresponding section of the raw signal, shown in Figure 6, this estimation is realistic. Considering Figures 5 and 6, the cause of the peaks of the combined signal can be attributed to the voltage drop of CH2, which is illustrated with the upper signal envelope of the raw signals, shown in Figure 6b. Despite a small weighting factor for CH2, this causes a large peak in the resulting combined velocity signal. An additional source of error is the transition points of the sections, where discontinuities can occur. This will be discussed in more detail further in the paper. To account for the error of the voltage drop from CH2 (possibly caused by a signal dropout), w_2 needs to be close to zero at the displayed section. For this purpose, the time interval used for deriving the weighting factors must be shorter than approximately 5 μs.
Depending on the sampling rate (we recorded 10 MSamples and sampled with either 25 MHz or 10 MHz, both of which are sufficient for the carrier frequency of 2.5 MHz), this corresponds to 200 or 80 samples for the determination of the CNR to calculate the weighting factors w_j. In addition to the significantly increased computing cost, the susceptibility to errors of the calculated CNR is significantly higher with fewer samples. This in turn can lead to further errors that cannot easily be compensated. Therefore, the signal quality is only slightly better even with a short sample length for determining the weighting factors. A possible solution to this problem is to use a considerably higher sample rate, which leads to an exponentially higher computing cost and, therefore, is not possible. An easier-to-implement method to prevent these errors is to introduce an exponent in the calculation of the weighting factors. This still only works in certain cases and is implemented in Equation (6) in the following section, which describes a modified algorithm developed by us. Another approach for signal optimization is the combination of the raw signals of the individual channels instead of the velocity signals.
However, there is a non-constant phase difference of the individual channels, which prevents a simple addition of the raw signals [21]. Therefore, a simple addition of the signals can lead to an elimination of the combined signal. This is illustrated in Figure 7, showing the combined signal calculated from the raw signals from the first experimental setup, shown in Figure 1, based on the weighting factors from Equation (2). At the magnified section (top right in Figure 7), the combined signal is obtained from equal parts of CH2 and CH4 by the weighting factors. As these channels have a similar amplitude and are shifted by approx. 180° in their phase, the resulting signal is close to zero.

Overall, the algorithm from Dräbenstedt [21] for calculating the combined signal does still yield very good results, as shown in Figures 3 and 4. In the following sections we attempt to further improve the results of the combined signal, to achieve an even better signal reliability.

Modified Algorithm to Obtain the Combined Signal from Raw Signals

In order to solve the discontinuity problems at the transition points of the sections, an examination of these transition points is necessary. In this section we examine if the problem of the non-constant phase difference of the raw signals of the individual channels with respect to each other can be solved simultaneously. For this purpose, the individual channels are digitized with a sampling rate Fs (either 10 or 25 MHz). Each digital signal consists of N samples (here 10 MSamples) and is subsequently split into blocks with a length of k (here 1000 samples). As mentioned above, a much shorter block length requires a lot of computing power and causes problems in the reliable determination of the CNR for the weighting factors. This process is illustrated for a channel in Figure 8.
For each block, an auxiliary factor proportional to the signal strength can be determined in either the time or the frequency domain (by calculating the CNR). For the calculation in the time domain, an auxiliary factor is calculated from the median of the absolute value of the block of a signal according to Equation (4). Alternatively, this factor can be determined in the frequency domain according to Equation (5) with the CNR. The factors A, which are proportional to the signal strength, are then normalized so that their sum is one. The resulting weighting factors F_CH,i, for one block of the length k, are thus calculated by Equation (6). The exponent α allows a stronger weighting to be implemented. For large α the weighting of the channels with greater signal strength in the combined signal is exponentially increased (for α → ∞, max(F_CH,i) = 1). For the calculation via the CNR and for α = 1, these factors are equivalent to the factors calculated by Equation (2). For α > 2 the problem shown in Figure 5 can already be decreased significantly. The combined signal can then be calculated blockwise from the resulting weighting factors F_CH,i. Altogether, the combined raw signal CH_combined is given by Equation (7), with N total samples split into m = N/k blocks, with a length of k samples each (in this case m = 10,000). The result of the calculation using Equation (4) or Equation (5) differs only slightly, as both methods have a similar proportionality to the signal strength. For future work we will consider their computing times.
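A minimal Python sketch of this blockwise weighting and combination is given below. Since Equations (4), (6) and (7) are not reproduced verbatim in the text, their exact forms are assumptions: the auxiliary factor is the median absolute value of each channel's block (the time-domain variant), the weights are the normalised α-th powers of these factors, and the combined raw signal is the blockwise weighted sum. The CNR-based variant of Equation (5) is omitted, and the phase alignment of Equations (8)-(10) is sketched separately after the next paragraph.

```python
import numpy as np

def combine_raw_signals(raw, k=1000, alpha=5):
    """Blockwise combination of raw LDV channels (assumed form of Eqs. (4), (6), (7)).

    raw   : array of shape (n_channels, N), digitised raw signals
    k     : block length in samples (here 1000, so m = N // k blocks)
    alpha : exponent; larger alpha increases the weight of the strongest channel
    """
    n_channels, N = raw.shape
    m = N // k
    combined = np.zeros(m * k)
    for j in range(m):
        sl = slice(j * k, (j + 1) * k)
        block = raw[:, sl]
        # Eq. (4), assumed: auxiliary factor proportional to the signal strength,
        # here the median of the absolute value of each channel's block.
        A = np.median(np.abs(block), axis=1)
        # Eq. (6), assumed: F_i = A_i**alpha / sum(A**alpha); for alpha = 1 this is a
        # plain normalisation, for large alpha the strongest channel dominates.
        F = A ** alpha
        F /= F.sum()
        # Eq. (7), assumed: blockwise weighted sum of the raw channels.
        combined[sl] = F @ block
    return combined
```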
As mentioned above, one problem with this calculation is discontinuities at the combined signal blocks, which contribute to a distortion of the signal and to a higher noise level. Furthermore, there is a time-invariant phase offset between the individual raw signals, which, as shown in Figure 5, can lead to an elimination as well as other incorrectly detected frequencies. To solve this, we implemented an algorithm that shifts the blocks of the individual channels in phase by means of peak detection, to match their phase. The algorithm first finds the peaks P_i,j of the sinusoidal raw signals by Equation (8). We implemented this with the MATLAB function peaks, but a customized implementation via a local maxima detection is also possible. Then the sample length of the phase offset a_i,j is calculated in Equation (9) by using the first detected peak p_i,j = P_i,j(1). With the calculated sample length of the phase offset a_i,j, the individual raw signals are phase shifted according to Equation (10). An example of a small section of one block of the raw signals, before and after correction, is shown in Figure 9. In the selected section of the raw signal, the first detected peak amplitude belongs to CH1; consequently, all other channels are aligned accordingly. In this case CH2 is shifted by three samples, CH3 by five samples and CH4 by eight samples. Afterwards, the samples before and after the transition points are interpolated to correct missing samples due to the phase shift and discontinuities due to the blockwise combination. In Figure 10, a small section of the combined demodulated velocity signal around a transition point of two blocks (between j = 1 and j = 2 with k = 1000) is shown before and after interpolation.

Figure 10. Section of the combined, demodulated velocity signal before and after interpolation to correct discontinuities and errors due to phase shifting.

Due to this preceding method, we can minimize the impact of both the discontinuities and the phase offset without distorting the original signal, as can be seen by the magnified section in Figure 10. This is possible because these errors always occur in the same locations around the transition points. Errors similar to the one shown in Figure 5 are random and therefore much harder to compensate.
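A rough Python sketch of the peak-based phase alignment (Equations (8)-(10)) and of the interpolation around block transitions might look as follows. Here scipy.signal.find_peaks stands in for the MATLAB peak detection mentioned above, the channel whose first peak occurs earliest serves as the reference, and a simple linear interpolation over a few samples is used at the boundaries; the paper does not specify the interpolation type or the exact boundary handling, so those details are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def align_block_phases(blocks):
    """Phase-align one block of each raw channel to a common reference.

    blocks : array of shape (n_channels, k) holding the current block per channel.
    Returns the phase-shifted blocks (sketch of Eqs. (8)-(10)).
    """
    # Eq. (8): locate the peaks of each (roughly sinusoidal) raw-signal block.
    first_peaks = np.array([find_peaks(b)[0][0] for b in blocks])
    ref = first_peaks.min()                  # earliest first peak defines the reference
    aligned = np.empty_like(blocks)
    for i, b in enumerate(blocks):
        a = first_peaks[i] - ref             # Eq. (9): phase offset in samples
        aligned[i] = np.roll(b, -a)          # Eq. (10): shift channel i by a samples
        # Samples wrapped around at the block edge would later be corrected by
        # the interpolation across the transition points.
    return aligned

def smooth_transition(velocity, boundary, width=4):
    """Linearly interpolate a few samples around a block boundary of the combined,
    demodulated velocity signal to remove discontinuities (interpolation type assumed)."""
    lo, hi = boundary - width, boundary + width
    velocity[lo:hi] = np.linspace(velocity[lo], velocity[hi], hi - lo)
    return velocity
```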
Comparison of the Algorithms

With our modified algorithm a combined velocity signal is calculated (with α = 5) from the same measurement and from the first experimental setup as before. The combined signal from the raw channels, the demodulated velocity signals of the individual channels and the combined signal from the velocity signals is shown in Figure 11.

Figure 11. Comparison of the demodulated velocity signals of the channels CH1-CH4 and the combined signals (from [21] "VeloComb" and from our algorithm "RawComb"); zoomed sections only for combined signals to illustrate less noise; (RBW = 2.5 Hz).

The resulting velocity signal from our algorithm is significantly less noisy. The peaks of the combined signal, in comparison to the combined signal derived from the demodulated velocity signals, are mostly gone. Figure 12 additionally shows the displacement signals derived from the velocity signals. The offset resulting from the difference between the actual carrier frequency and the carrier frequency assumed for demodulation was compensated for each signal.
The functionality of the algorithms is evident in both combined signals: because most of the time just one channel is affected by signal dropouts, this time segment can be replaced in the combined signal by the other channels. The shaker's vibration frequency of 100 Hz is visible in all signals. For the combined signal from the old algorithm, a slight offset can still be seen compared to the combined signal from the algorithm developed by us. The cause of this offset can be explained by the disturbances visible in the velocity signal in Figure 11. The difference between the algorithms can also be shown by the significantly lower noise in the frequency spectrum, as shown in Figure 13.

The relatively large amplitude of the shaker at 100 Hz is detected with a similar amplitude in all four channels as well as in the combined signals. Because of the signal dropouts forced by our setup, the noise level in the individual channels is significantly higher. For CH1 and CH2, the amplitude at 100 Hz is just slightly above the noise level, making the detection of an unknown frequency unrealistic. Generally, only higher amplitudes can reliably be detected in the individual channels that are affected by signal dropouts.

In order to estimate the noise reduction more accurately, Figure 13b shows the smoothed frequency spectra of the combined as well as one individual velocity signal. For this measurement, the algorithm that calculates the combined signal from the velocity signals (VeloComb) reduces the mean noise level in the frequency range up to 5000 Hz by 17 dB. Our algorithm that calculates the combined signal from the raw signals (RawComb) decreases the mean noise level by an additional 15 dB. Therefore, only the algorithm using the raw signals will be described in the following parts of this article, as it consistently achieves better results.
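The quoted mean noise levels could, for instance, be estimated as sketched below: the magnitude spectrum of a (combined or single-channel) velocity signal is averaged up to 5000 Hz while excluding a small band around the known 100 Hz vibration, and the result is expressed in dB. The exact smoothing and reference used for Figure 13b are not stated, so this is only an assumed, illustrative computation.

```python
import numpy as np

def mean_noise_level_db(velocity, fs, f_signal=100.0, f_max=5000.0, notch=5.0):
    """Mean noise level (dB) of a velocity signal up to f_max, excluding a small
    band of +/- notch Hz around the known vibration frequency."""
    spectrum = np.abs(np.fft.rfft(velocity)) / len(velocity)
    freqs = np.fft.rfftfreq(len(velocity), d=1.0 / fs)
    noise_band = (freqs <= f_max) & (np.abs(freqs - f_signal) > notch)
    return 20.0 * np.log10(spectrum[noise_band].mean())

# The reported reductions would then correspond to differences such as
# mean_noise_level_db(single_channel, fs) - mean_noise_level_db(raw_comb_velocity, fs).
```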
All previously shown results used measurements from the first experimental setup, which had the main objective of generating a reliable, reproducible signal as a basis for developing and testing the presented algorithm. To test our algorithm further, the results from the second experiment with only one active channel, which is much closer to real world applications, are presented in the following section.

Further Examination of the Developed Algorithm through the Second Experiment

The first experiment results in predictable signals that are helpful for the development of the algorithm. The signals are thus applicable to real world applications to a limited extent only. For a more representative comparison, the second experiment is more suitable. Using the experimental setup for the second experiment shown in Section 2, measurements are recorded and demodulated (Fs = 10 MHz, RBW = 1 Hz). In Figure 14, the demodulated velocity signals of the four channels as well as a combined signal derived from the four raw signals of one measurement are shown. By aiming the measuring beam at a poorly reflecting part of the speaker, the signal level is relatively low and signal dropouts can be seen in the demodulated signal.

Compared to the measurement from the first experimental setup on the shaker and four active channels, the measurement on the speaker and only one active channel results in a considerably higher noise level in the velocity signals. For the active channel CH4 and the passive channels CH1-CH3, numerous signal dropouts can be seen. Since the signal dropouts are uncorrelated, it is likely that one channel detects the vibration of the speaker at any given time. The fundamental vibration of the speaker at 205 Hz is recognizable in the displacement signals derived from the velocity signals, shown in Figure 15.
The disturbances of the individual channels are also visible in the displacement signals, resulting in a higher noise level of the frequency spectra, shown in Figure 16. Due to the lower noise level of the combined signal, the functionality of the algorithm can be shown. The relatively large amplitude of the speaker's vibration frequency is detected almost identically by all signals. With these results, we can demonstrate that the algorithm yields good results for close to real-world applications. For this measurement, the mean noise level in the frequency range up to 5000 Hz of the combined signal (RawComb) was reduced by 10 dB compared to the mean noise level of the signals of the individual channels.

Conditions for Successful Diversity Measurements

For a successful measurement, a correct alignment of the passive measuring heads is essential; otherwise, insufficient amounts of light reach the detectors, resulting in a correspondingly poor signal quality. Figure 17 shows an example of such a case. For this purpose, the measurement object is arbitrary, since the aim of this section is only to demonstrate a measurement with incorrect alignment. For this measurement, the speaker was removed, and the measurement was conducted on the laboratory table underneath (no active vibration).
The resulting poor alignment of the measuring heads results in a poor signal quality for all passive channels, as very little light reaches the sensors of the measuring heads. This causes numerous peaks in the velocity signals of CH1 and CH2. In such a case, the combined signal obtained by our algorithm is largely equivalent to the signal with the highest signal strength (in this case CH4). This is visible in the matching velocity signal in Figure 17 and in the almost identical frequency spectrum, shown in Figure 18.

Discussion

With the first experiment, we verified the functionality of the algorithm developed by Dräbenstedt [21] for deriving a combined signal from four velocity signals. By examining the resulting combined signal, we identified sources of potential errors that limit the signal quality of the combined signal, which has already been significantly improved. We derived an algorithm that calculates a combined signal based on four raw LDV signals. Subsequently, both algorithms were applied to the same measurement and revealed that the newly developed algorithm allows an improvement of the combined signal. For measurements with the first experimental setup, we were able to show that the mean noise level of the combined signal is reduced by an additional 15 dB (RBW = 1 Hz) in the frequency range up to 5000 Hz when calculated with our algorithm (RawComb), compared to the algorithm from the literature (VeloComb), as shown in Figure 13b. For the second experimental setup the additional reduction was approx. 6 dB (RBW = 2.5 Hz).
Based on the evaluations of the first and second experiments, we demonstrated the basic functionality of the developed algorithm for determining a combined signal from the raw signals of multiple channels. Specifically, it was shown that the combined demodulated signal is at least as good as the signal of the best individual channel. Depending on the disturbances and signal dropouts that occur, the signal quality, and accordingly the noise level, of the combined signal can be significantly better than that of a signal from a single channel. In the results of our measurements of the second experimental setup, as shown in Figure 16, a mean noise reduction in the frequency range up to 5000 Hz of more than 10 dB was achieved (RBW = 2.5 Hz). The results seem plausible and support the findings of [18,21], as both algorithms for determining a combined signal significantly reduced the noise level caused by signal dropouts due to the laser speckle effect. The increased reduction of the noise level (up to 6 dB in the second experiment) by our algorithm (RawComb) compared to the algorithm from the literature (VeloComb) is reasonable, since our algorithm does not have the limitations shown in Section 3.1.1. The implementation of our findings could improve the signal quality in various applications involving measurements on rough or moving surfaces, where the signal quality is reduced by laser speckle effects and the resulting signal dropouts [7,10]. A possible use case is medical applications, where LDVs are used for contactless measurements, such as monitoring cardiovascular activity [5,23]. In such cases, it is not always possible to guarantee sufficient reflectivity of the skin and that the patient does not move, or the appropriate procedures involved in ensuring this require greater effort [23]. For this application, signal diversity could decrease noise contribution of laser speckle effects, or even eliminate the need for time-consuming preparation of the skin. For future research we are investigating additional real-world applications as well as the real-time capabilities of our algorithm and the impact of implementing peak-filtering algorithms in the individual signals before combining them. Furthermore, the question of to what extent the measurements can be compensated with respect to the angle of incidence needs to be addressed in order to obtain reliable and accurate measurements in real-world applications.
Impact of b‐value on estimates of apparent fibre density Abstract Recent advances in diffusion magnetic resonance imaging (dMRI) analysis techniques have improved our understanding of fibre‐specific variations in white matter microstructure. Increasingly, studies are adopting multi‐shell dMRI acquisitions to improve the robustness of dMRI‐based inferences. However, the impact of b‐value choice on the estimation of dMRI measures such as apparent fibre density (AFD) derived from spherical deconvolution is not known. Here, we investigate the impact of b‐value sampling scheme on estimates of AFD. First, we performed simulations to assess the correspondence between AFD and simulated intra‐axonal signal fraction across multiple b‐value sampling schemes. We then studied the impact of sampling scheme on the relationship between AFD and age in a developmental population (n = 78) aged 8–18 (mean = 12.4, SD = 2.9 years) using hierarchical clustering and whole brain fixel‐based analyses. Multi‐shell dMRI data were collected at 3.0T using ultra‐strong gradients (300 mT/m), using 6 diffusion‐weighted shells ranging from b = 0 to 6,000 s/mm2. Simulations revealed that the correspondence between estimated AFD and simulated intra‐axonal signal fraction was improved with high b‐value shells due to increased suppression of the extra‐axonal signal. These results were supported by in vivo data, as sensitivity to developmental age‐relationships was improved with increasing b‐value (b = 6,000 s/mm2, median R 2 = .34; b = 4,000 s/mm2, median R 2 = .29; b = 2,400 s/mm2, median R 2 = .21; b = 1,200 s/mm2, median R 2 = .17) in a tract‐specific fashion. Overall, estimates of AFD and age‐related microstructural development were better characterised at high diffusion‐weightings due to improved correspondence with intra‐axonal properties. | INTRODUCTION Diffusion magnetic resonance imaging (dMRI; Le Bihan & Breton, 1985) offers a magnified window into white matter by probing the tissue microstructure properties. Various dMRI modelling and analysis techniques are available, which aim to summarise the local architecture of white matter as a quantitative metric. However, the biological interpretations around commonly investigated dMRI metrics rest heavily on whether the acquisition protocol can capture the relevant microstructural attributes (Lebel & Deoni, 2018;Tournier, Mori, & Leemans, 2011). One such measure of microstructural organisation, termed apparent fibre density (AFD), can indicate relative differences in the white matter fibre density per unit volume of tissue. Given that the specificity to the intra-axonal water signal is maximised at high b-values due to higher restriction of water diffusion (Figure 1), AFD can be sensitive to axon density at high diffusion-weightings (Raffelt et al., 2012). Analysis frameworks such as fixel-based analysis (FBA; Raffelt et al., 2017) provide a means to test fibre-specific differences in AFD within a population. FBA offers two major advantages over alternative dMRI analysis techniques: sensitivity to fibre properties (density and morphology), and specificity to fibre populations within voxels (or "fixels"). This combination of improved sensitivity and specificity increases the possibility of assigning group differences in fibre properties to specific fibre populations (Dimond et al., 2019;Gajamange et al., 2018;Mito et al., 2018). 
In practice, FBA is compatible with both single-shell (Dhollander, Raffelt, & Connelly, 2016) and multi-shell (Jeurissen, Tournier, Dhollander, Connelly, & Sijbers, 2014) dMRI data. An intuitive choice might be to use all available dMRI data to compute fibre-specific AFD. However, this might not be compatible with the underlying assumptions of AFD reflecting intra-axonal properties. In addition, sensitivity to the extra-axonal signal upon the inclusion of lower b-values can influence the response function choice, resulting in a potential mismatch between the response function and the true underlying fibre properties. Combining FBA with the very latest in MRI gradient hardware (300 mT/m), we explore the impact of sampling scheme on AFD estimates using a rich developmental dataset comprising multi-shell diffusion MRI data with b-values ranging from 0 to 6,000 s/mm2. Firstly, we simulate multiple fibre geometries to showcase how discrepancies in "true" microstructural configurations can influence the interpretations of AFD generated from both single-shell and multi-shell dMRI data. We then conduct experiments to confirm the theory that AFD is more sensitive and specific to axon density at higher b-values, demonstrated by sensitivity to detecting age-relationships in a developmental population of children and adolescents.

FIGURE 1 Spherical harmonics (zero order) maps derived from a representative participant (aged 8 years). Visually, increasing b-value from 0 to 6,000 s/mm2 leads to greater specificity to the signal attributed to the intra-axonal space.

| Simulations

Single fibre populations were simulated with the intra- and extra-axonal spaces represented by axially symmetric tensors; the second and third eigenvalues were set to zero for the intra-axonal tensor and equal but non-zero for the extra-axonal tensor (Jespersen, Kroenke, Ostergaard, Ackerman, & Yablonskiy, 2007; Kroenke, Ackerman, & Yablonskiy, 2004). The intra-axonal and extra-axonal parallel diffusivities were set to 1.9 μm2/ms, and 42 different combinations were simulated with intra-axonal signal fraction f = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8] and extra-axonal perpendicular diffusivity De,⊥ = [0.2, 0.4, 0.6, 0.8, 1.0, 1.2] μm2/ms. 100 Rician noise realisations were computed with three different signal-to-noise ratio (SNR) values on the b = 0 signal (SNR = 50, 35, and 20). The response function, which should reflect the properties of a single fibre population (Tax, Jeurissen, Vos, Viergever, & Leemans, 2014), was set to have f = 0.3 and De,⊥ = 0.8 μm2/ms, informed by values estimated from the groupwise response function used in this study. These values are in the range of previously reported estimates of white matter in vivo (Fieremans, Jensen, & Helpern, 2011; Novikov et al., 2018).
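As an illustration of the simulation just described, the sketch below generates the noise-free two-compartment signal (a stick-like intra-axonal tensor plus an axially symmetric extra-axonal tensor) and adds Rician noise at a given b = 0 SNR. The exact signal equation and noise implementation used in the study are not given in the text, so this standard form is an assumption; parameter values follow those quoted above.

```python
import numpy as np

def two_compartment_signal(b, cos_theta, f, d_par=1.9, de_par=1.9, de_perp=0.8):
    """Noise-free single-fibre signal: intra-axonal tensor with zero perpendicular
    diffusivity plus an axially symmetric extra-axonal tensor.
    b in ms/um^2 (b = 6.0 for 6,000 s/mm^2), diffusivities in um^2/ms,
    cos_theta = cosine of the angle between gradient direction and fibre axis."""
    intra = np.exp(-b * d_par * cos_theta ** 2)
    extra = np.exp(-b * (de_perp + (de_par - de_perp) * cos_theta ** 2))
    return f * intra + (1.0 - f) * extra

def add_rician_noise(signal, snr_b0, rng=None):
    """Rician noise for an SNR defined on the b = 0 signal (S0 = 1)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = 1.0 / snr_b0
    real = signal + rng.normal(0.0, sigma, np.shape(signal))
    imag = rng.normal(0.0, sigma, np.shape(signal))
    return np.sqrt(real ** 2 + imag ** 2)

# Example: f = 0.3, De_perp = 0.8 um^2/ms, measured perpendicular to the fibre at
# b = 6,000 s/mm^2, with 100 noise realisations at SNR = 20:
# s = two_compartment_signal(6.0, cos_theta=0.0, f=0.3)
# noisy = add_rician_noise(np.full(100, s), snr_b0=20)
```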
| Participants

We scanned a sample of typically developing children aged 8-18 years recruited as part of the Cardiff University Brain Research Imaging Centre (CUBRIC) Kids study (Raven et al., 2019). This study was approved by the School of Psychology ethics committee at Cardiff University. Participants and their parents/guardians were recruited via public outreach events. Written informed consent was provided by the primary caregiver of each child participating in the study, and adolescents aged 16-18 years additionally provided written consent. Children were excluded from the study if they had non-removable metal implants, and if they reported a history of a major head injury or epilepsy. All procedures were completed in accordance with the Declaration of Helsinki. A total of 78 children between the ages of 8-18 years (Mean = 12.4, SD = 2.9 years) were included in the current study (45 female).

| Image processing and analysis

To compare multiple sampling schemes, pre-processed dMRI data were further processed and analysed separately for each sampling scheme in a common population-template space, using a recommended framework (Raffelt et al., 2017). Firstly, data were intensity normalised and spatially upsampled to 1.3 mm3 isotropic voxel size to increase anatomical contrast and improve tractography (Dyrby et al., 2014). For single-shell (ss) single-tissue constrained spherical deconvolution (CSD), a fibre orientation distribution (FOD; Tournier, Calamante, & Connelly, 2007) was estimated in each voxel with maximal spherical harmonics order lmax = 8 for shells with high angular resolution (b = 2,400, 4,000, 6,000 s/mm2; 60 directions each) and lmax = 6 for the shell with lower angular resolution (b = 1,200 s/mm2; 30 directions). Multi-shell (ms) multi-tissue CSD was performed using a separate framework (Dhollander et al., 2016; Jeurissen et al., 2014). Following FOD estimation, we derived a population template using all diffusion volumes (ms_all), and subsequently registered subject-specific and sampling-scheme-specific FOD maps to this template (Figure S1). We then computed an apparent fibre density (AFD) map containing fibre-specific AFD along each fixel for each subject (Raffelt et al., 2017). In order to estimate AFD along various commonly investigated white matter fibre pathways, white matter tract segmentation was performed. Linear models were computed, whereby AFD in each tract was entered as the dependent variable, age was entered as the independent variable, and sex and RMS displacement were set as nuisance variables. To compare sampling schemes in terms of their relationship with age, the difference in R2 was bootstrapped with 10,000 samples to compute 95% bias corrected accelerated (BCa) confidence intervals. Hierarchical clustering was performed to discern clusters of sensitivity to age-relationships across various combinations of b-value sampling schemes and white matter tracts. These results were visualised as a heatmap with hierarchical clustering using the "gplots" package (Warnes et al., 2015), using Euclidean distance and complete agglomeration for clustering. To account for family-wise error (FWE) we made use of a strict Bonferroni correction by adjusting our p-value threshold by the 152 comparisons (38 tracts × 4 sampling schemes). As a result, statistical significance was defined as p < 3.3e-4.

| Whole-brain fixel-based analysis

Separate statistical analyses were performed for each single-shell sampling scheme (b = 1,200; 2,400; 4,000; 6,000 s/mm2) using connectivity-based fixel enhancement (CFE), which provides a permutation-based, family-wise error (FWE) corrected p-value for every individual fixel in the template image (Raffelt et al., 2015). For each sampling scheme, we tested the relationship between AFD and age, covarying for sex. For these whole-brain analyses, statistical significance was defined as pFWE < .05. Statistically significant fixels were converted into binary fixel maps, and an intersection mask was computed to quantify the proportion of significant fixels overlapping between sampling schemes.
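A minimal sketch of the per-tract statistics described above is shown below: an ordinary least-squares model of AFD on age with sex and RMS displacement as nuisance covariates, the model R2 (the paper reports age-related variance, which may correspond to a partial R2), the Bonferroni-adjusted threshold of 0.05/152, and a simple percentile bootstrap of the R2 difference between two sampling schemes in place of the BCa intervals used in the study. All names are illustrative.

```python
import numpy as np

def tract_r2(afd, age, sex, rms_disp):
    """R^2 of the linear model AFD ~ age + sex + RMS displacement (ordinary least squares).
    sex is assumed to be coded numerically (e.g. 0/1)."""
    X = np.column_stack([np.ones_like(age), age, sex, rms_disp])
    beta, *_ = np.linalg.lstsq(X, afd, rcond=None)
    residuals = afd - X @ beta
    return 1.0 - residuals.var() / afd.var()

ALPHA_FWE = 0.05 / (38 * 4)   # Bonferroni over 38 tracts x 4 sampling schemes (~3.3e-4)

def bootstrap_r2_difference(afd_a, afd_b, age, sex, rms_disp,
                            n_boot=10_000, rng=None):
    """Percentile-bootstrap CI for the difference in R^2 between two sampling schemes."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(age)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        diffs[i] = (tract_r2(afd_a[idx], age[idx], sex[idx], rms_disp[idx])
                    - tract_r2(afd_b[idx], age[idx], sex[idx], rms_disp[idx]))
    return np.percentile(diffs, [2.5, 97.5])
```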
| Simulations

The results of the simulations for AFD across various fibre geometries and sampling schemes are summarised in Figure 2.

FIGURE 2 AFD for simulated fibre geometries across five sampling schemes. Variations to simulated intra-axonal signal fraction and perpendicular diffusivity of the extra-axonal space (De,⊥) were tested to compare AFD across multiple fibre geometries. Sampling schemes reflect the chosen b-values, in s/mm2.

Compared to the highest single shell acquisition (ss6000), we observe a statistically significant three-way interaction between De,⊥, f, and sampling scheme, suggesting that a change in AFD computed from the highest b-value shell could more directly reflect a change in the underlying f, reducing the potentially confounding effect of discrepancies with the response function. The addition of noise had negligible effects on these relationships (Figure S3). However, we observed that with decreasing SNR (greater noise), the estimated AFD was more variable.

3.2 | In vivo developmental data

3.2.1 | Impact of b-value sampling scheme

In order to assess the impact of b-value sampling schemes on tract-specific age relationships, we visualise our data as a heatmap (Figure 3; S4). The coefficient of determination (R2) derived from the linear model for each tract is organised into hierarchical clusters with branching dendrograms. The first tract cluster is composed of a sub-cluster of regions where a high proportion of age-related variance is described across all diffusion weightings (median R2 = .40). The first sub-cluster (Figure 3: cluster 1) includes several association tracts (left MLF, bilateral IFOF, left SLF II, bilateral SLF III, bilateral ATR, bilateral AF) and commissural tracts (corpus callosum: full extent, genu, rostral body). Significant age-relationships are observed for all of the sampling schemes (b = 1,200; 2,400; 4,000; 6,000 s/mm2), with an increase in the estimated R2 when going to higher diffusion weightings (Figure 4). The proportion of variance explained for the high diffusion-weightings (b = 4,000 and 6,000 s/mm2) ranged from 38% to 53% (Table 1). Despite the consistent sensitivity to age-related development in this tract cluster, a greater b-value dependence on these relationships was observed when moving from low to high b-values, particularly for association tracts such as bilateral SLF III, left SLF I, left IFOF and left MLF.

| Multi-shell multi-tissue FBA

Consistent with the single-shell single-tissue results, sensitivity to age relationships was improved at high diffusion-weightings for multi-shell analyses (Table S1). We observed two main clusters of multi-shell b-value sampling schemes; the first including multiple combinations of low, moderate, and high b-value sampling schemes, and the second including various combinations of high b-value sampling schemes (Figure S5). In addition, we observed two main tract-clusters consistent with the single-tissue results: the first including various left-lateralised association tracts and corpus callosum projections; and the second including predominantly cerebellar tracts, projection tracts (CST) and association tracts (including right SLF_II, SLF_I, ILF, CG, and OR). Overall, we observed a general reduction in the proportion of detectable age-related variance when adding multiple shells for AFD estimation (Figure S5) across various tracts.

| Whole brain fixel-based analysis

In order to evaluate the sensitivity of FBA to age-related microstructural development across sampling schemes, we performed four separate statistical analyses.
For each single-shell sampling scheme (b = 1,200; 2,400; 4,000; 6,000 s/mm2) we tested the relationship between age and AFD using the CFE method (Raffelt et al., 2015). FBA revealed a significantly positive relationship between AFD and age across all b-values (pFWE < .05). No significant age effects were observed in the opposite direction (pFWE > .05). We observed a general decrease in the number of significant fixels (nsig) when moving from high to low b-values (ss6000: nsig = 13,382; ss4000: nsig = 10,070; ss2400: nsig = 7,283; ss1200: nsig = 5,506). In terms of anatomical overlap between results, 58% of significant fixels overlapped between ss6000 and ss4000, 43% of significant fixels overlapped between ss6000 and ss2400, and 20% of significant fixels overlapped between ss6000 and ss1200. Visualisations of significant and overlapping fixels across diffusion-weightings are also shown.

FIGURE 3 Dendrogram heatmap highlighting clusters of tracts which differentially describe age-related differences in apparent fibre density (AFD) across various single-shell b-value sampling schemes. Heatmap colour intensity reflects the range of R2 values derived from a linear model including age, sex, and RMS displacement. Significant age-effects (p < 3.3e-4) are annotated with an asterisk (*). A depiction of several fibre pathways in one cluster is presented on the right.

TABLE 1 Variance in AFD explained by age for each single-shell sampling scheme across tracts.

FIGURE 4 The relationship between AFD and age across four regions including: the right anterior thalamic radiation (ATR_right), inferior longitudinal fasciculus (ILF_right), corticospinal tract (CST_left), and superior longitudinal fasciculus I (SLF_I_right). Each region is representative of individual tract clusters where a progressive increase in the coefficient of determination (R2) is observed when moving from low to high diffusion-weightings. Sampling schemes whereby AFD was significantly associated with age are coloured in purple.

| DISCUSSION

In this study we demonstrate a b-value dependence on estimates of apparent fibre density. Our results highlight that AFD more prominently reflects age-related white matter development at high b-values.

| Simulations

The simulations for multiple sampling schemes revealed an improved correspondence between estimated AFD and the underlying intra-axonal fibre properties when using high b-value shells (b = 4,000 or b = 6,000 s/mm2). When moving to lower b-values, or including the complete set of multi-shell data, we observed a larger dependency of AFD on extra-axonal perpendicular diffusivity. This could suggest that any changes in the true underlying fibre density could be camouflaged by concomitant changes in perpendicular diffusivity, whereby a simultaneous reduction of the intra-axonal volume fraction and De,⊥ could result in the AFD remaining the same. AFD is hypothesised to be proportional to the intra-axonal signal fraction of a fibre population (Raffelt et al., 2012). With increasing b-value, the intra- and extra-axonal signal is differentially attenuated, leading to greater signal contribution from the intra-axonal space (Tournier et al., 2013). Therefore, an increase in AFD can suggest alterations to axonal properties, such as axon count, packing density, and diameter (Raffelt et al., 2017).
However, our results suggest that AFD is dependent on the extra-axonal signal when including lower b-values, as the mismatch between estimated AFD and simulated intra-axonal signal fraction across varying De,⊥ is exaggerated. As such, a change in AFD estimated at high diffusion-weightings (in this case b = 4,000 or 6,000 s/mm2) could more directly reflect a change in the underlying axon density compared with lower b-value shells or multi-shell acquisitions, reducing the potential confounding effect of discrepancies with the response function.

| In vivo developmental data

When considering in vivo developmental data, the dependence of b-value on estimates of AFD was reflected by improved sensitivity to age relationships. Several association tracts consistently described age-related variance in AFD, in line with previous developmental findings (Ladouceur, Peper, Crone, & Dahl, 2012; Lebel & Beaulieu, 2011; Sawiak et al., 2018). A group of left-lateralised association tracts (e.g., left CG, MLF, OR, SLF_III, SLF_I, IFOF) better described age-related variance in AFD when comparing the highest b-value (b = 6,000 s/mm2) with high to moderate b-values (b = 4,000 or 2,400 s/mm2). Left-lateralisation of language has been well documented (Catani, Jones, & ffytche, 2005) and related to microstructure (Lebel & Beaulieu, 2009). The microstructure of lateralised association tracts is likely linked with the ongoing development of complex cognitive processes throughout childhood and adolescence (Blakemore & Choudhury, 2006; Jung & Haier, 2007). Our results suggest that lateralised association tracts linked with language and cognitive development are better characterised at high b-values. This is likely due to improved sensitivity and specificity to axonal microstructure in the branching endpoints of these tracts integrating such higher order functions across fronto-parietal, fronto-occipital, and occipitotemporal pathways. Future work should focus on investigating subject-specific branching endpoints of these tracts, to assess individual variation in microstructure. One key observation was that a higher proportion of age-related variance was observed in the single-tissue analyses compared with the multi-tissue analyses. A decrease in the discriminative power of age-related effects was thus apparent when multiple tissue compartments were modelled; further investigations using single-shell three-tissue CSD (Aerts, Dhollander, & Marinazzo, 2019; Dhollander, Mito, Raffelt, & Connelly, 2019) and simultaneous voxelwise estimation of the response function and FOD (Jespersen et al., 2007) are warranted to explore this further. The results of the whole-brain FBA revealed a b-value dependence on age-related differences in AFD. Notably, more widespread associations with age were observed at high diffusion-weightings, implicating a number of regions which were not found using other sampling schemes. This b-value dependence suggests that whilst some core regions such as the body and splenium of the corpus callosum clearly exhibit strong age-related development across all sampling schemes, a degree of anatomical sensitivity and specificity is lost at lower diffusion-weightings. This is not to say that studies performing FBA with low-to-moderate b-values will completely lose sensitivity to age-related effects or clinical group differences. However, in conditions with subtle differences in underlying neurobiology or microstructure, going to higher b-values may improve the characterisation of AFD and thus improve the detectability of clinically significant group differences.
Overall, AFD derived from high b-values (b = 4,000 or 6,000 s/mm2) best modelled age-relationships for the majority of white matter tracts tested. These results, combined with the simulations, suggest that axonal properties (such as axon density) dominate age-related variance in AFD at high b-values, whereas extra-axonal signal contamination at decreasing diffusion-weightings incrementally suppresses this effect.

| Implications

Our results bear implications for fixel-based analysis applications using retrospectively collected dMRI data which may not be optimal for the estimation of AFD. The biological interpretation of group differences in AFD should be tailored to the acquisition scheme used. Promisingly, our simulation results suggest that the effect of b-value and discrepancy with the response function dominates the effect of noise (Figure S3), even at a lower SNR which closely matched our in vivo data (SNR = 50). Therefore, we expect that our observations at high b-values may be reproducible on a standard 3.0T system. As strong gradient systems become increasingly available, the practicalities of acquiring such high quality dMRI data at higher b-values are becoming less cumbersome. Whilst in this study we have used a developmental population of children and adolescents as an exemplar of a b-value dependence on estimates of AFD, these findings can be applied more broadly and bear implications for a range of group studies (e.g., clinical groups or ageing adults).

| Limitations and future directions

One limitation of the current study is that we have no ground truth on the development of axonal density over childhood and adolescence. Therefore, our interpretation of improved intra-axonal signal sensitivity rests on the age-relationships investigated here, an approach that has also been used previously (Maximov, Alnaes, & Westlye, 2019; Pines et al., 2019). Whilst we have attempted to understand how AFD can vary across multiple simulated fibre geometries, we do not know how the underlying fibre properties (such as axon diameter) vary with age. Despite this consideration, a recent histological validation study suggests that AFD is a reliable marker of axonal density in the presence of axonal degeneration (Rojas-Vitea et al., 2019). This is a promising indicator of the neurobiological properties proportional to AFD. Future work should adopt multi-dimensional approaches to extract meaningful components, enhance data quality (Alexander et al., 2017) and harmonise existing data (Maximov et al., 2019; Tax et al., 2019).

| CONCLUSION

We summarise our findings with three main conclusions: (a) the correspondence between apparent fibre density and simulated intra-axonal signal fraction is improved with high b-value shells; (b) AFD better reflects age-related differences in axonal microstructure with increasing b-value (b = 4,000 or 6,000 s/mm2) over childhood and adolescence; and (c) these relationships differ across the brain, with a greater b-value dependence in association tracts and posterior projections of the corpus callosum. Together, our results suggest that axonal properties dominate the variance in AFD at high b-values.

ACKNOWLEDGMENTS

We are grateful to the participants and their families for their participation.

CONFLICT OF INTEREST

All authors disclose no real or potential conflicts of interest.

DATA AVAILABILITY STATEMENT

The data that support the findings of this study are available on request from the corresponding author.
The data are not publicly available due to privacy or ethical restrictions.
Starch extracted from pineapple (Ananas comosus) plant stem as a source for amino acids production

The pineapple plant (Ananas comosus) is one of the most widely cultivated crops in Asia, and its increasing production has generated a huge amount of pineapple waste. Pineapple plant stem contains a high concentration of starch which can potentially be converted into value-added products, including amino acids. Due to the increasing demand for animal feed grade amino acids, especially methionine and lysine, the utilisation of cheap and renewable sources is deemed to be an essential approach. This study aimed to produce amino acids from pineapple plant stem hydrolysates through microbial fermentation by Pediococcus acidilactici Kp10. Dextrozyme was used for hydrolysis of starch and Celluclast 1.5 L for saccharification of cellulosic materials in pineapple plant stem. The hydrolysates obtained were used in the fermentation to produce methionine and lysine. Pineapple plant stem showed a high starch content of 77.78%. The lignocellulosic composition of pineapple plant stem consisted of 46.15% hemicellulose, 31.86% cellulose, and 18.60% lignin. Saccharification of alkaline-treated pineapple plant stem gave lower reducing sugars of 13.28 g/L compared to the untreated stem, from which 18.56 g/L reducing sugars were obtained. Therefore, the untreated pineapple plant stem was selected for further processing. Starch hydrolysis produced 57.57 g/L reducing sugar (100% hydrolysis yield) and saccharification of cellulosic materials produced 24.67 g/L reducing sugars (56.93% hydrolysis yield). The starch-based and cellulosic-based hydrolysates of pineapple plant stem were used as carbon sources in methionine and lysine production by P. acidilactici Kp10. In conclusion, higher methionine and lysine production was obtained from starch-based hydrolysis (40.25 mg/L and 0.97 g/L, respectively) compared to cellulosic-based saccharification (37.31 mg/L and 0.84 g/L, respectively) of pineapple plant stem.

Background

Among food commodities, fruits and vegetables have the highest wastage rate, accounting for approximately 50% globally, and the pineapple industry is no exception. The pineapple, Ananas comosus, is a tropical plant and the most economically significant plant in the Bromeliaceae family. Increasing pineapple production leads to a massive amount of waste products generated from the industry, which are disposed of in landfills or burnt for energy production and might pollute the environment if not handled properly [1,2]. Nearly one-third of the pineapple plant is lost or wasted, which accounts for approximately 1.3 billion metric tonnes [3]. In fact, pineapple is considered the most favoured of all tropical fruits and it is a prominent ingredient in fruit and juice products, including jams, juice concentrates, essence, jellies, squash, and pickles. On a weight basis, approximately 55% of total pineapple parts are discarded, mainly for transportation and storage purposes [4]. The by-products generated from the pineapple industry involve not only the food processing industry but also the enzyme industry, as the plant can be employed for bromelain extraction [5]. In plantation areas, a large amount of waste is generated during pruning, harvesting and post-harvesting, which includes leaves and the plant stem core. Malaysia, known as one of the largest producers in Asia, has generated metric tonnes of pineapple wastes or by-products, which can potentially be used as a feedstock for the production of value-added products.
In relation to the concepts of the sustainable development and integrated environmental protection, the renewable raw materials, such as waste, should be utilised in bioconversion of value-added products as its costeffective process as well as zero waste generation [6]. This measure is able to provide the customers both ecological and economic benefits. In fact, in food, proteins are vital components, which composed of the 20 proteinogenic amino acids. In this point of view, the animal synthesises their own specific amino acid spectrum which generally the essential amino acids (methionine, lysine, and threonine) presence in limiting amount in crude feeds. Clearly, methionine and lysine are considerably ecological importance to meat producing industry due to the essential role of feed in transformation into animal protein. Starch is the most abundant molecule on earth after cellulose and the major carbohydrate reserve in plants. Starch is a major energy source on earth, providing up to 80% of the calories consumed by humans [7]. Starch is a carbohydrate extracted from agricultural raw materials which is widely present in literally thousands of everyday food and non-food applications. Starch is a so-called green alternative material and is a most promising candidate for future use [8] due to its low cost, availability from renewable resources, and broad-ranged capability in food and non-food products. Basically, starch is a carbohydrate material that exists naturally as granules. Starch granules are normally found in seeds, roots, tubers, stems, and leaves. Demand for native starches increased globally, as it can minimise the use of chemically modified starches. Native starches have many applications in the food industry, pharmaceutical industry, paper making industry, cosmetics industry, etc. The starch industry separates the components of the plant: starch, protein, cellulose envelope, soluble fractions, and others, such as lignocellulosic material, as found in pineapple plant stem or basal stem (Fig. 1). However, the methods of manufacture are specific to each plant and the industrial tools are normally dedicated to a raw material. Starch is usually used in its native form, where it was extracted from raw materials in its purest form. However, modifications on the native starch, termed modified starch, can be carried out to obtain certain properties or better characteristics of the starch, either through physical, chemical, enzymatic or genetic modifications [9]. Methionine is a proteinogenic amino acid, best known for its role in the initiation of translation. It possesses an unbranched, hydrophobic side chain and it is the only amino acid that contains a thioether (i.e., C-S-C bonding). Methionine is widely used as a feed additive in the poultry, swine and fish farming industries. It is produced as a racemic mixture from petrochemical feedstocks, with global production capacities in the hundreds of thousands of tonnes per annum [10]. While, L-lysine is known to be an essential amino acid in animal as well as human nutrition. On the other hand, it is beneficial as chemical agent, food materials, feed additive, and medicament. The efforts in the production of these amino acids through microbial fermentation have been conducted in several studies. As reported by Ezemba et al. [11], a total of 2.06 mg/mL of methionine has been produced from plaintain-starch hydrolysate/groundnut meal. On the other note, Sgobba et al. 
[12] have investigated the synergy effect of synthetic Escherichia coli-Corynebacterium glutamicum consortia for the production of L-lysine. As a result, 0.4 g/L of lysine has successfully been produced using commercial starch as carbon source. These situations indicated the possibility of pineapple stem, which contained high amount of starch content, to be an alternative feedstock in production of lysine and methionine using microbial fermentation. The utilisation of pineapple plant stem as alternative starch supply made it possible for the conversion of waste into wealth. This study aimed to produce amino acids from pineapple plant stem hydrolysates through microbial fermentation by Pediococcus acidilactici Kp10. Dextrozyme was used for hydrolysis of starch and Celluclast 1.5L for saccharification of cellulosic materials in pineapple plant stem. Sample collection and preparation Pineapple plants stems were collected from AlafPutra Biowealth Sdn Bhd pineapple plantation, Kulai, Johor, Malaysia. The cross-sectional image of pineapple plant stem is shown in Fig. 2. The leaves were removed from pineapple plants stems and the plants stems were chopped into small pieces before they were washed with tap water. The plant stems were then subjected to drying process at 60 °C for 24 h. The dried pineapple plant stems were ground and kept at room temperature for further use. Pretreatment of pineapple plant stem Pretreatment of pineapple plants stems was carried out based on the method described by Umikalsom et al. [13]. A 5% (w/v) pineapple plant stem was soaked in 2% sodium hydroxide solution for 4 h before autoclaved at 121ºC for 5 min. The autoclaved sample was washed with distilled water until no alkaline was detected. It was then dried in the oven at 60ºC for 24 h, subsequently stored at room temperature for further experiment. Hydrolysis of starch into fermentable sugar Enzymatic hydrolysis of starch present in pineapple plant stem was carried out based on the method described by Awg-Adeni et al. [14]. A 7% (w/v) dried pineapple plant stem was added into Erlenmeyer flask containing 0.1 M acetate buffer solution at pH 4.2. It was gelatinised by boiling at 100 °C for 15 min using a water bath before it was cooled down to 60 °C. The hydrolysis was conducted by adding 5.56 U/mL Dextrozyme DX 1.5 X (Novozymes, Denmark) with glucoamylase activity of 31.55 U/ mL in the hydrolysis flask. The mixture was stirred continuously and the temperature was maintained at 60 °C for 60 min. After the hydrolysis process, it was allowed to cool down to room temperature. The sugars solution was centrifuged using Heraeus Multifuge X3R Centrifuge (Thermo Fisher Scientific, Germany) at 4 °C and 3000 × g for 10 min followed by filtration using 1.2 µm of Whatman glass microfibre filter attached to a vacuum pump. The recovered sugar, namely, pineapple plant stem hydrolysate, was analysed for reducing sugar and glucose concentration and stored at 4 °C before the fermentation process. The solid residue was collected and dried in the oven at 60 °C overnight for further saccharification process. The hydrolysis yield was calculated, as shown in Eq. 1 [14]. A correction error of 0.9 was used in the calculation of the number of polysaccharides hydrolysed, because hydrolysis of polysaccharides involves water and 1 mol of water is required for 1 mol of reducing sugar released. 
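Before running the hydrolysis, it is useful to estimate the sugar ceiling: each anhydroglucose unit in starch (≈162 g/mol) gains one water molecule (≈18 g/mol) on hydrolysis, which is the origin of the 0.9 anhydro-correction factor and implies roughly 1.11 g of glucose per g of starch at complete conversion. The short Python sketch below is illustrative only; assuming the 7% (w/v) loading corresponds to 70 g/L of dried stem and using the 77.78% starch content reported later in the Results:

```python
loading_g_l = 70.0                 # assumed: 7% (w/v) dried pineapple plant stem
starch_fraction = 0.7778           # 77.78% starch on a dry-weight basis (Results)
hydration_factor = 180.16 / 162.14 # glucose / anhydroglucose, i.e. ~1/0.9

theoretical_glucose = loading_g_l * starch_fraction * hydration_factor
print(f"Glucose ceiling at full starch conversion: {theoretical_glucose:.1f} g/L")  # ~60 g/L
```

This back-of-the-envelope ceiling is of the same order as the reducing sugar and glucose concentrations reported for the starch hydrolysate in the Results section.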
Saccharification of cellulosic components into fermentable sugars
Fermentable sugars were obtained from saccharification using Celluclast 1.5 L (Novozymes, Denmark) based on the method described by Linggang et al. [15]. The saccharification was carried out in a 100 mL working volume using a 250 mL shake flask. A 100 mL volume of 0.05 M acetate buffer at pH 4.8 was added to 5% (w/v) of the solid residue obtained from the prior hydrolysis. Celluclast 1.5 L (10 FPU), with an initial activity of 14.63 FPU, was added into the saccharification flask. The flask was then incubated in a shaker incubator (Labwit, China) at 50 °C with an agitation speed of 200 rpm for 96 h. The sugar solution was filtered using a 1.2 µm Whatman glass microfibre filter attached to a vacuum pump. The hydrolysate was analysed for reducing sugar and glucose concentrations and stored at 4 °C before the fermentation process. The hydrolysis yield was calculated as shown in Eq. 2 [15]:

Hydrolysis yield (%) = [amount of reducing sugar produced (g/L) × 0.9] / [amount of substrate used (g/L) × potential sugars (%)] × 100% (2)

where potential sugars are the total percentage of hemicellulose and cellulose. A correction factor of 0.9 was used in the calculation of the amount of polysaccharides hydrolysed, because hydrolysis of polysaccharides involves water and 1 mol of water is required for each mole of reducing sugar released.
Amino acids production from pineapple plant stem hydrolysates
Medium preparation
MRS (de Man, Rogosa and Sharpe) medium was used for amino acid production by P. acidilactici Kp10, based on Toe et al. [16].
Inoculum preparation
The P. acidilactici Kp10 strain employed for amino acid production in this study was obtained from the culture collection of Professor Dr Arbakariya Ariff [17] at the Department of Bioprocess Technology, Faculty of Biotechnology and Biomolecular Sciences, Universiti Putra Malaysia. The inoculum was prepared based on the method explained by Toe et al. [16]. Prior to use, the stock culture was revived by inoculating it into a centrifuge tube containing 1 mL of MRS broth. The mixture was left to stand for 15 min; 500 µL was spread onto an MRS agar plate for initial colony-forming unit determination, and another 500 µL was transferred into MRS broth and incubated in a shaker incubator (Labwit, China) at 37 °C for 24 h. The inoculum in MRS broth was kept in an incubator at an agitation speed of 100 rpm. Then, 1% (v/v) of the 24 h culture was transferred into another 100 mL of MRS broth, which was incubated at 37 °C for 24 h at 100 rpm. Subculturing was repeated until the bacterial culture was stable enough to enter the fermentation process.
Shake flask fermentation
Fermentation of P. acidilactici Kp10 was carried out based on the method described by Toe et al. [16]. A volume of 10% (v/v) of an active 12-h culture, with an initial count of 1.82 × 10^7 CFU/mL, was inoculated into a 250 mL shake flask containing 100 mL of MRS medium. The pH of the fermentation medium was controlled with a phosphate buffer system. The flasks were incubated in a shaker incubator (Labwit, China) at 37 °C and agitated at 100 rpm for 24 h. Sampling was done in triplicate at 1-h intervals. Three types of medium with different carbon sources were used as the production medium for the fermentation of P. acidilactici Kp10. MRS broth using commercial glucose as the carbon source served as the control in the fermentation process. The other MRS media used pineapple plant stem hydrolysates from starch and from cellulosic materials as the carbon source for amino acid production.
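The yield definition in Eq. 2 can be checked directly against the saccharification result reported later in the Results (24.67 g/L reducing sugars from the untreated residue). The minimal Python sketch below assumes that the 5% (w/v) loading corresponds to 50 g/L of substrate and expresses potential sugars as the hemicellulose plus cellulose fraction:

```python
def hydrolysis_yield(reducing_sugar_g_l, substrate_g_l, potential_sugar_fraction):
    """Eq. 2: the 0.9 factor corrects for the water added during polysaccharide hydrolysis."""
    return reducing_sugar_g_l * 0.9 / (substrate_g_l * potential_sugar_fraction) * 100

# Saccharification of the untreated pineapple stem residue (values from the Results section)
y = hydrolysis_yield(reducing_sugar_g_l=24.67,
                     substrate_g_l=50.0,                       # assumed 5% (w/v) = 50 g/L
                     potential_sugar_fraction=0.4615 + 0.3186)  # hemicellulose + cellulose
print(f"Hydrolysis yield: {y:.1f}%")                            # ~56.9%, matching the reported 56.93%
```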
Analytical analysis
All the chemical compositional analyses were done using standard methods, in triplicate. The starch content was determined based on the method by Nakamura et al. [18], with a slight modification of reading the absorbance at 580 nm, while the lignin, hemicellulose and cellulose contents of the sample were determined using a method modified from that reported by Iwamoto et al. [19]. The hydrolysates obtained from the hydrolysis and saccharification processes were subjected to simple sugar determination using High-Performance Liquid Chromatography (HPLC) based on the method explained by Linggang et al. [15]. A Rezex RPM-Monosaccharide Pb+2 column and an RI detector were used. The mobile phase was deionised water at a flow rate of 0.6 mL/min, and the analysis was performed at 80 °C. Standard solutions were prepared by dissolving appropriate masses of glucose and xylose in deionised water; the retention times of glucose and xylose were 15.42 min and 16.96 min, respectively. The glucoamylase activity of Dextrozyme was determined based on the method described by Leaes et al. [20]. The FPU activity of Celluclast 1.5 L was determined using the NREL standardised filter paper assay as stated by Tsai and Meyer [21]. Viable bacterial counts were determined by the colony-forming method of Paulino et al. [22] at 37 °C for 24 h. The cell pellet of each sample was analysed for cell concentration using the optical density method proposed by Toe et al. [16]. Samples from the fermentation process were centrifuged at 10,000 rpm (9,300 rcf) for 5 min using a microcentrifuge (5415 D, Eppendorf). The supernatant containing methionine and lysine (standards from Sigma-Aldrich, Switzerland) was filtered through a 0.22 µm nylon syringe filter into a clean Eppendorf tube, and the filtered sample was kept at −20 °C prior to HPLC analysis. The mobile phases were sonicated using a Thermo-6D Ultrasonic Cleaner (Thermoline, Australia). The determination of methionine and lysine was based on the modified method proposed by Toe et al. [16]. Amino acid yield and productivity were calculated as shown in Eq. 3 and Eq. 4, respectively:

Amino acid yield (g/g) = concentration of amino acid produced (g/L) / substrate consumed at the respective time (g/L) (3)

Amino acid productivity (g/L/h) = concentration of amino acid produced (g/L) / incubation time (h) (4)

Chemical compositions of pineapple plant stem
Pineapple plant stem is a biomass with great potential to produce sugar from the extraction of starch and cellulosic materials, which can be used as a carbon source for the production of other products, such as amino acids. Table 1 shows the chemical composition of the pineapple plant stem used in this study. The moisture content of the pineapple plant stem was high, at (75.84 ± 0.24)%, which was similar to the moisture content of pineapple shell and core reported by Cordenunsi et al. [23]. Nakthong et al. [8] also reported a high moisture content, about 70%, in pineapple plant stem. Reducing the moisture content helps increase the stability and shelf life of the sample and facilitates its storage [24,25]. Kiharason et al. [26] reported that drying increases the dry matter and nutrient content of a sample and that oven-drying is the most suitable method, because fewer nutrients are lost owing to the fast rate of drying. Total ash is crucial in the quality assessment of a biomass, since it accounts for the total minerals of the biomass [27].
The ash content of the pineapple plant stem obtained in this study was 0.40% of the sample on a dry weight basis, the lowest percentage among the components measured. The low ash value indicated that pineapple plant stem contains a low proportion of minerals and inorganic residue. Based on Table 1, the ash percentage was slightly lower than that of pineapple shell of the Perola variety, reported as 0.53% by Cordenunsi et al. [23]. The difference in the ash percentage found in pineapple plant stem might be affected by the pineapple variety used, the maturity of the plant, and the part of the plant stem used for the determination [28-30]. In this study, the nitrogen content of the pineapple plant stem was 1.85%. This result was in agreement with the nitrogen content of pineapple plant stem reported by Hanafi et al. [31], where the stems of Moris and Gandul pineapple plants contained 1.55% and 1.63% nitrogen, respectively. Since the underground part of the pineapple plant stem was used in this study, the nitrogen content obtained might be affected by the nitrogen content of the soil and by the plant cultivar. Crude protein consists of true protein and non-protein nitrogen, where nitrogen accounts for 16% of all biological proteins on average. In this study, the pineapple plant stem contained as much as 11.56% crude protein, which was even higher than the result obtained by Zainuddin et al. [3], where Moris pineapple leaves had a crude protein content of 7.05%. The amount of crude protein can vary according to the stage of plant growth as well as the part of the plant [32]. The protein content and composition can also be affected by environmental conditions and nutrient availability in soils, especially nitrogen fertilisation and the strategy of its application [33]. Crude fat content in pineapple plant stem estimates the total fat content, including triacylglycerides, alcohols, waxes, terpenes, steroids, pigments, esters, aldehydes, and other lipids [3]; in the stem used in this study it was 1.53% on a dry weight basis. As the primary components of biomass, carbohydrates are the most promising biomass fraction for a biorefinery process, since they act as storage polysaccharides (starch) or structural polysaccharides (cellulose, hemicelluloses, pectin, and chitin) [34]. The pineapple plant stem used in this study had a carbohydrate value of 9.91% on a dry weight basis, indicating that it contains an appreciable amount of sugars and starch; thus, it is a potential biomass for the production of value-added products, such as amino acids. The pineapple plant stem consisted of high starch and cellulosic contents, which make it a promising biomass. The starch content was 77.78% on a dry weight basis. Nakthong et al. [8] reported a starch content of 97.77%, with an amylose content of 34.37% (w/w) of the whole sample. The starch content measured in this study was the highest among the chemical components of the pineapple plant stem. This result was in line with the statement of Sanewski et al. [35] that the stele of the pineapple plant stem consists mainly of compact parenchyma with abundant starch.
Thus, pineapple plant stem was expected to be potential starch-based biomass for amino acids production due to the high level of starch present, which can then be converted into fermentable sugars, mainly glucose, through enzymatic hydrolysis. Gelatinisation was done before starch determination to break down the intermolecular association between amylose and amylopectin at solid state with heating [36]. During heating in water, the starch present in the pineapple plant stem undergoes a transition process, where the starch granules swell and eventually break down into a mixture of polymers-in-solution, making the starch suspension viscous [37]. This process changes the semi-crystalline phase of amylose and amylopectin to an amorphous phase [38]. Thus, the ratio of amylose and amylopectin present in the starch can affect the gelatinisation temperature and properties [39]. The value of lignin, hemicellulose and cellulose of pineapple plant stem were recorded at 18.60%, 46.15%, and 31.86%, respectively. Sodium chlorite treatment was used in the removal of lignin to determine the holocellulose content of the biomass. The holocellulose was then allowed to undergo alkali treatment using potassium hydroxide to further remove the hemicellulose content in the pineapple plant stem [19]. The cellulose content of the pineapple plant stem was determined as the residue after complete removal of lignin and hemicellulose. The lignin content obtained in this study was comparable with the results obtained in pineapple leaves and pineapple plant stems as reported by Zainuddin et al. [3]. The aforementioned author also claimed that plant maturation and parts of the plant used will affect the lignin content of the sample, in which the rigidity of the pineapple plant stem also contributes to the high lignin content. Effects of alkali pretreatment on saccharification of cellulosic materials in pineapple plant stem The main purpose of alkaline pretreatment is delignification of biomass to improve its digestibility with minimal formation of inhibitory compounds, which can result in a higher yield of fermentable sugars from lignocellulosic biomass [40,41]. However, the result obtained from this study indicated that alkali pretreatment on the biomass has resulted in a lower concentration of reducing sugar in pineapple plant stem hydrolysate after enzymatic saccharification of the cellulosic materials, as illustrated in Fig. 3. The concentration of fermentable sugars was higher in the untreated sample after 24 h, and this trend continued until the end of the process. The total reducing sugars obtained after 96 h of saccharification in the untreated sample was recorded at 18.56 g/L, which was 5.28 g/L higher as compared to the pretreated sample. Therefore, the untreated pineapple plant stem was used in the following step for amino acids fermentation. This result was in agreement with Casabar et al. [42], who reported a decreasing sugars production with increasing concentration of sodium hydroxide solution used in the pretreatment of pineapple peel, where sodium hydroxide concentration is inversely proportional to the sugar production in pineapple peel. The pineapple plant stem used in this study was initially acidic, in which a low level of pH at 3.88 was detected. This result was in line with the acidity level in pineapple plant stem as reported by Ketnawa et al. [43], where the sample has a pH of 4.64, which indicates that it consists of a high proportion of citric and malic acid. 
The low level of acidity might also be due to the presence of ascorbic acid, where Zaki et al. [44] has recorded 0.84 mg of the acid present per 100 mL of pineapple core. Another study by Salomé et al. [45] also recorded a similar pH value of 3.85 for pineapple fruits derived from in vitro propagation plant, with 12.03 mg ascorbic acid present per g fresh weight of the fruit. Since sodium hydroxide was used in the pretreatment process, the chemical might have reacted with the natural acid content present in the pineapple plant stem. The presence of inhibitory compounds and crystallinity index of the cellulose may affect sugar production in the substrate [42]. Hydrolysis of starch into fermentable sugar About 57.57 g/L reducing sugar was generated from the substrate which originally contained 77.78% of starch. The hydrolysis yield obtained was 100%, indicating that the starch in pineapple plant stem was fully hydrolysed into fermentable sugars. This might be due to the high solubility of starch in pineapple plant stem, making it easier to be extracted and hydrolysed by the enzyme. The high starch content was in accordance with the study carried out by Nakthong et al. [8], where pineapple stem starch has reported the highest percent solubility compared to rice, corn, and cassava starches. The pineapple plant stem showed no formation of blueblack colour upon the addition of iodine solution, indicating that there was no starch left in the substrate after hydrolysis. Gelatinisation of pineapple plant stem involves a heating process in which the starch molecules are dissolved and the viscosity increases. At high temperature, the α-glucan chains of starch in pineapple plant stem become more susceptible to hydrolysis by amylase action due to the loss of its ordered structure [46]. Dextrozyme was chosen as the enzyme used in starch hydrolysis, because it was found to be effective in breaking down the starch granules into fine particles. It consists of glucoamylase obtained from Aspergillus niger and pullulanase from Bacillus acidopullulyticus. Dextrozyme was the most effective in hydrolysing corn starch when being compared with other hydrolytic enzymes, namely, bacterial α-amylase, β-amylase and glucoamylase as reported by Ma et al. [47]. The concentration of glucose in pineapple plant stem hydrolysate was 62.91 g/L. The high amount of glucose produced indicated that starch content in pineapple plant stem is a potential carbon source to be used in fermentation for amino acids production. Saccharification of cellulosic materials into fermentable sugars From the result obtained in Fig. 4, the concentration of reducing sugar was the highest after 96 h of hydrolysis of 24.67 ± 0.03 g/L reducing sugars and a hydrolysis yield of 56.93%, almost half the hydrolysis of starch in the pineapple plant stem. The time taken for the complete process was longer than that in the hydrolysis of starch using Dextrozyme, because the cellulosic materials have a more complex structure which require the combination of endoglucanases, exoglucanases and cellobiohydrolases for the hydrolysis process [48]. Production of lysine and methionine from fermentable sugars The pineapple plant stem hydrolysates obtained from enzymatic hydrolysis of starch and saccharification of cellulosic materials were used as the carbon sources for the production of amino acids, namely, lysine and methionine, through microbial fermentation. Commercial glucose was used as the control in this study. To carry out microbial fermentation, P. 
acidilactici Kp10 was chosen as the amino acid-producing microorganism. A preliminary study of the bacterium was carried out prior to the production fermentation to monitor its growth profile.
Preliminary study of Pediococcus acidilactici Kp10 used in the production of lysine and methionine
The preliminary study aimed to evaluate the potential of P. acidilactici Kp10 to produce amino acids, namely lysine and methionine, using commercial glucose as the carbon source. In the production of primary metabolites, microorganisms usually synthesise only enough amino acids for their own needs, owing to feedback inhibition that prevents wasteful production [49,50]. Therefore, genetic manipulation of microorganisms has been used to enhance the production of amino acids to higher yields [51]. This can be seen in the production of lysine and methionine using genetically modified C. glutamicum, where the strain was capable of producing as much as 120 g/L lysine, as recorded by Becker et al. [52]. Lysine production can also reach up to 170 g/L using genetically modified strains [53]. (Fig. 4: Saccharification of pineapple plant stem for fermentable sugars production using Celluclast 1.5 L.) Although the bacteria are generally recognised as safe (GRAS), the use of genetically modified (GM) strains is not preferred in the production of feed-grade lysine and methionine, since there are strong restrictions on the use of GM strains in organic farming, and this has led to an increasing demand for amino acid production using non-GM organisms [54]. Microorganisms generally produce the 20 amino acids only in the amounts needed by the cells [55,56]. Therefore, the selection of a strain that is able to produce an excess amount of amino acids is vital for the production of methionine and lysine. A study carried out by Lim et al. [57] showed higher production of amino acids, especially lysine and methionine, through microbial fermentation using Pediococcus sp. as compared to Lactobacillus sp. The ability of P. acidilactici to produce lysine and methionine was also demonstrated by Toe et al. [16]. The ability of P. acidilactici to produce amino acids at levels beyond its own metabolic needs without genetic modification, as reported by KiBeom et al. [58], further supported the use of this bacterium for the microbial fermentation of lysine and methionine in this study. A preliminary study of P. acidilactici Kp10 was therefore done to determine the ability of the bacteria to produce lysine and methionine in the presence of a glucose carbon source. Figure 5 shows the growth profile of P. acidilactici Kp10 in MRS medium over a 24 h incubation period. The growth profile showed the highest growth of P. acidilactici Kp10 at an optical density (OD) of 1.47 after 22 h of fermentation. From Fig. 5, it can be observed that the bacteria underwent a lag phase during the first 3 h, where the cell concentration showed only a slight increase. During this period, the cells were still adapting to the new environment and did not reproduce immediately in the medium; this is the stage in which cells undergo intense metabolic activity to prepare for population growth, including the synthesis of enzymes and various other molecules. After 3 h of fermentation, the bacteria demonstrated a sharp increase in cell concentration, indicating that the culture was in the exponential phase from 3 h to 12 h.
During the log phase, high amount of glucose was consumed for cell growth. Log phase is also known as exponential growth phase, where the cells begin to divide and grow in population. Within this phase, cells are most active metabolically and products can be produced efficiently for industrial purposes. The bacteria then started to enter stationary phase at 12 h, in which the rate of cell growth was equal to the rate of cell death. The exponential growth stopped during this phase, probably due to exhaustion of nutrients, accumulation of waste products and harmful changes in pH. Once the stationary phase reached an end, the bacteria undergo death phase. In this case, the cells showed a slight decrease in cell concentration at 23 h. During the death phase, the number of cell death generally exceed the number of new cells formed, and the condition continue until the population dies out. Carbohydrate or sugars are the primary source of energy for microorganisms [59]. The glucose concentration present in the fermentation media demonstrated a decreasing trend, where it was continuously being consumed by the bacteria. The initial glucose concentration in the media was set at 20 and 10.98 g/L of the carbon source was left in the medium at the end of the fermentation. This indicated that a total of 9.02 g/L glucose was consumed within 24 h of the fermentation using P. acidilactici Kp10. Therefore, 10 g/L glucose was used in the subsequent production medium as a carbon source for MRS media. The sugar consumption was similar to that reported by Toe et al. [16], where only 10 g/L of reducing sugar was consumed by P. acidilactici UP-1, P. pentosaceus UP-2, and P. acidilactici UL-3. The pH of the culture dropped from 5.42 to 4.34 ± 0.03, which was supported by Sriphochanart et al. [59] who reported a pH of 4.88 in the end product of microbial fermentation with initial pH of 6.35 using P. acidilactici in the presence of both lysine and methionine. The aforementioned author also claimed that the growth of the bacteria was optimum at pH 5.5-5.8, similar to the initial pH used in this study. Since P. acidilactici Kp10 is lactic acid bacteria, thus the decreased in pH might be caused by the production of organic acid, mainly lactic acid which is acidic [17,60]. Production of amino acids using various carbon sources Fermentation of amino acids, especially methionine, is not always cost-efficient [55]. Carbon source plays an important role in the biosynthesis of the structural frames for amino acids and provides energy for microorganisms. Although glucose is commercially used as the carbon source in the production of amino acids, alternatives, such as the utilisation of various carbon sources from biomass, should be carried out due to its lower cost. Starch and cellulosic materials from pineapple plant stem hydrolysates have the potential to be utilised as a carbon source in the production of amino acids. The carbon source was set at 10 g/L for all tested carbon source to get fair comparison. Figure 6 illustrates the profiling of P. acidilactici Kp10 in the production of lysine and methionine using starch and cellulosic hydrolysate obtained from pineapple stem. The cells demonstrated a short lag phase which lasted for only 1 h in the production medium, followed by log phase. This indicated that the cells have a good adaptation to the new environment present in the production media, probably due to the presence of glucose as the main carbon source [61,62]. 
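The choice of 10 g/L glucose for the subsequent production medium follows directly from the consumption observed in the preliminary run described above. A minimal sketch of that bookkeeping, with an optional specific-growth-rate estimate from two OD readings in the exponential phase (the OD values used here are placeholders for illustration, not reported data):

```python
import math

initial_glucose, final_glucose = 20.0, 10.98        # g/L, preliminary MRS run
consumed = initial_glucose - final_glucose
print(f"Glucose consumed in 24 h: {consumed:.2f} g/L")   # 9.02 g/L, so ~10 g/L suffices

def specific_growth_rate(od1, od2, t1_h, t2_h):
    """Specific growth rate mu (1/h) from two optical-density readings in the exponential phase."""
    return math.log(od2 / od1) / (t2_h - t1_h)

# Placeholder OD values between 3 h and 12 h, for illustration only
print(f"mu ~ {specific_growth_rate(0.2, 1.2, 3, 12):.2f} per hour")
```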
However, the duration of lag phase may also be affected by the number of bacteria present in the culture, where the duration for lag phase may decrease with an increasing number of cells [63]. The log phase was observed after 1-h incubation until 11 h of incubation. It then started to enter stationary phase before entering the death phase. P. acidilactici Kp10 has the highest cell population when using starch-derived glucose as the carbon source, which has resulted in maximum OD 600 of 2.26. This was followed by fermentation using commercial glucose as carbon source, with maximum OD 600 of 1.86. The cell concentration of P. acidilactici Kp10 was the lowest when using cellulosic materials from pineapple plant stem hydrolysate as the carbon source; however, the growth was only slightly behind fermentation using commercial glucose, where OD 600 of 1.69 has been reported. An initial of 10 g/L sugars was provided in the fermentation medium using glucose and reducing sugars from hydrolysates as carbon sources. The remaining reducing sugars left in the fermentation medium after 24 h of incubation showed the amount of glucose consumed by P. acidilactici Kp10 for the cell to reach its maximum concentration. Highest sugar consumption was observed in starch-based fermentation, where a high value of 9.86 g/L reducing sugars was consumed. This has resulted in the highest population of cell growth under the same condition. Sugar consumption using commercial glucose as carbon source was the lowest at 8.38 g/L, lower than using cellulosic materials from pineapple plant stem hydrolysates as carbon source, where 9.09 g/L sugars were consumed for cell growth. The initial carbon source in the culture medium was supplied in a low concentration of 10 g/L, enough to support the consumption for cell growth. Fermentative production of methionine usually involves 10% glucose or 5% maltose as the carbon source, in addition to other supplementation, such as inorganic salts, biotin and vitamin [53]. Using a low sugar concentration, total culture period of the strain can be reduced, with a decreased in the duration for lag phase, and in some cases will lead to an increase in amino acids yield. This is due to the formation of by-products, such as acetate and lactate, which might lead to inhibition of cell growth or reduction of production yield when a high concentration of sugar at 20%, which is usually used in amino acid fermentation, is present in the medium [64]. Higher initial glucose concentration may also lead to catabolic repression effect, causing reduction of amino acids yield from 0.11 to 0.06 mol/mol methionine with increasing glucose concentration when initial glucose concentration of 40 g/L was used, in comparison with 20 g/L of the glucose [65]. At 10 h of incubation, the cell population of P. acidilactici Kp10 in all conditions only showed a slight difference in which the OD 600 was around 1.60 using cellulosic materials and 1.90 when starch was used as carbon source. During this period, the cell was almost at the end of the log phase. This result indicated that both glucose and cellulosic materials have the same efficiency as carbon sources, where the same amount of sugars was consumed to result in the same concentration of the cell population. It is interesting to mention that the cell was observed to consume more carbon source in starch hydrolysate as compared to cellulosic hydrolysate. 
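One simple way to compare the three carbon sources above on an equal footing is to normalise the attained cell density by the sugar consumed. The sketch below uses the maximum OD600 and sugar-consumption values quoted in this section; treating OD per g/L of sugar consumed as a crude biomass-yield proxy is our own simplification for illustration, not an analysis performed in the study.

```python
runs = {
    "starch hydrolysate":     {"max_od": 2.26, "sugar_consumed_g_l": 9.86},
    "commercial glucose":     {"max_od": 1.86, "sugar_consumed_g_l": 8.38},
    "cellulosic hydrolysate": {"max_od": 1.69, "sugar_consumed_g_l": 9.09},
}
for name, r in runs.items():
    proxy = r["max_od"] / r["sugar_consumed_g_l"]   # crude proxy: OD600 per g/L sugar consumed
    print(f"{name}: {proxy:.3f}")
```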
This situation might be due to the presence of glucose in starch hydrolysate, while in cellulosic hydrolysate comprised of mixture of sugar monomers. Although microorganisms generally utilise glucose as preferred sugar during fermentation due to carbon catabolite repression, utilisation of other reducing sugars produced from saccharification of cellulosic materials was still possible [66]. P. acidilactici can be used as the amino acids producing bacteria, because it can hydrolyse the protein found in MRS medium. It can produce and accumulate proteinase in the fermentation medium, which helps in the accumulation of amino acids by hydrolysing the protein [16]. The accumulation of amino acids, including methionine and lysine, can be promoted by utilising citrulline, cysteine and glycine [16,58]. Unlike secondary metabolites which are produced during stationary growth of the bacteria, known as idiophase, primary metabolites are formed during trophophase, where the products are formed at the same time as the cells grow, which causes the production curve to be parallel to the logarithmic growth phase. This phenomenon can be seen in the production of amino acids, which usually belong to primary metabolites [67]. Therefore, samples from the first 12 h were picked for methionine and lysine determination using HPLC, before P. acidilactici Kp10 started to enter stationary phase after 12 h. Increment of methionine and lysine concentration in the cell-free supernatant indicated that the cell has the ability to produce amino acids extracellularly [16]. This is important for overproducing microorganisms, whereby the accumulation of product intracellularly would require an additional downstream process for cell disruption, which is more expensive in an industrial scale. Negative side effects caused by the accumulation of intracellular accumulation of amino acids in the cytosol may also lead to decreasing production rate due to cell signalling to avoid intracellular destruction [68]. The overall production of methionine and lysine showed an increasing amount for the first 4 h, followed by a decline in the production of the amino acids using starch and cellulosic materials as the carbon source. This result was supported by Toe et al. [16], where P. acidilactici UP-1 showed a maximum concentration of lysine production at 4 h of incubation. However, a different trend can be seen in the production of both amino acids when glucose was used as the carbon source up to 12 h of incubation, where methionine demonstrated a continuous decline in production, whereas lysine showed an increasing production. The declination of methionine in the glucose-based fermentation indicated that the amino acid was continuously utilised for the cell growth during the exponential growth of P. acidilactici Kp10. The bacteria have numerous nutritional requirements which include amino acids as its nitrogen source. In medium containing easily convertible nitrogen, especially amino acids, it can be stimulated to grow faster and reach higher densities [59]. The utilisation of methionine by other metabolic pathways may also lead to a decrease of methionine concentration during fermentation [65]. Low amount of amino acids may be supplied in the fermentation medium of P. acidilactici Kp10 which will act as the primary nitrogen source to avoid the consumption of the amino acid produced by the cell during the exponential phase. 
In the presence of initial amino acids as nitrogen source, the cells would be able to consume those amino acids for their growth, thus producing the required amino acids in excess. This may reduce the possibility of continuous decreased in amino acid production during the log phase of the bacteria. Lysine was continuously produced as the cells grow, indicating that the strain was able to produce the amino acid in excess. A high amount of glucose was consumed after 4 h of incubation, where the cells were undergoing exponential grow. Within this period, a high amount of amino acids was produced, and the amino acids were then utilised for cell growth in the following hours. The concentrations of lysine produced in this study were comparable to starch-based glucose from cassava, sorghum, and sweet potato, with lysine production of 1.01 g/L, 1.02 g/L and 1.07 g/L, respectively, using Bacillus laterosporus as the inoculum [69]. Different carbon sources resulted in different production yield and productivity, as indicated in Table 2. Although starch-based fermentation has the maximum cell growth and the highest amount of overall sugar consumption, it has the lowest production yield for both methionine and lysine. However, it showed higher productivity for methionine as compared to cellulosic-based fermentation. Maximum product formation and product yield were the highest for both amino acids when commercial glucose was used as the carbon source for P. acidilactici Kp10. The performance of cellulosic materials as the carbon source used in amino acids production was comparable to commercial glucose, especially in the production of lysine, in which the product yield was same as using glucose as carbon source. In term of productivity, cellulosic-based glucose has resulted in the highest productivity for lysine. The production of methionine and lysine in the fermentation using P. acidilactici Kp10 indicated that the strain has the ability to produce amino acids. This was supported by Lee et al. [70], who reported an increasing amount of amino acids in the fermentation by P. acidilactici, but a decreased in the production of methionine and lysine when L. salivarius and L. plantarum was used as the inoculum. In comparison, this study has produced a comparable methionine production with KiBeom et al. [58]. Toe et al. [16] also reported that P. acidilactici UB-6 consumed the amino acid present in the medium at the initial stage of fermentation for the cell growth, followed by maximum production of amino acids at the later stage of fermentation. Production of methionine and lysine can be achieved using various carbon sources and biomass, as well as different microorganisms for fermentation. Tables 3 and 4 show the comparison of methionine and lysine production, respectively, using various microorganisms and carbon sources. Based on Table 3, the productivity of methionine using starch-based and cellulosic-based of pineapple plant stem hydrolysates produced by P. acidilactici Kp10 was slightly higher than the productivity of methionine using glucose as a carbon source by P. acidilactici as studied by KiBeom et al. [58]. For lysine production (Table 4), the productivity was the highest using cellulosic materials from pineapple plant stem hydrolysate as a carbon source. The productivity was similar to lysine production by C. glutamicum using raw corn starch as a carbon source as studied by Tateno et al. [71]. For starch-based fermentation from pineapple plant stem hydrolysate by P. 
acidilactici Kp10, the result obtained in this study was slightly higher than lysine productivity using grass silage juice as a carbon source by C. glutamicum. Lysine productivity using commercial glucose as the carbon source in this study has recorded the same value as lysine fermentation using jackfruit seed hydrolysate as a carbon source by C. glutamicum. This situation may be resulted by the composition of monomers in the hydrolysate, since the starch-based medium was mainly glucose is present, thus resulted in preference for the bacteria. Production yields in the fermentation process may be affected by several parameters, including aeration, agitation, pH and temperature [65,79]. Distribution of the sugar and oxygen in the fermentation medium may affect the cell physiology of the inoculum, where undesirable stress response might be triggered, which is able to switch biosynthesis from the desired amino acids to undesirable by-products, such as carbon dioxide, acids, and biomass. In addition to that, the medium composition also strongly influenced the fermentation process. Natural organic substances, such as soybean hydrolysate, corn steep liquor, yeast extract or peptone, are sometimes used in lysine fermentation, with the addition of various carbon and nitrogen sources, inorganic ions and trace elements, amino acids, vitamins and numerous complex organic compounds [79]. The ability of P. acidilactici Kp10 to produce methionine and lysine using starch-based and cellulosic-based fermentable sugars from pineapple stem hydrolysates indicated that pineapple plant stem is potential biomass to be utilised in the production of amino acids by P. acidilactici Kp10. This present as an added value for the pineapple plant industry in the effort of converting waste to wealth and producing amino acids to cater the increasing needs of methionine and lysine, especially in the animal feed industry. Conclusions As a conclusion, pineapple plant stem is potential biomass for amino acids production due to its high starch content at 77.78%. The lignocellulosic composition of pineapple plant stem consisted of 46.15% hemicellulose, 31.86% cellulose and 18.60% lignin. Starch-based hydrolysis of pineapple plant stem has resulted in 57.57 g/L fermentable sugars. Cellulosic-based saccharification of pineapple plant stem produced 24.67 g/L of fermentable Lactobacillus salivarius Glucose 15 0.12 0.008 0.02 [58] sugars. Methionine and lysine were successfully produced from pineapple plant stem hydrolysates through microbial fermentation using P. acidilactici Kp10. Starchbased fermentation has produced 40.25 mg/L methionine and 0.97 g/L lysine, higher than cellulosic-based fermentation, which produced 37.31 mg/L methionine and 0.84 g/L lysine using pineapple plant stem as a substrate. This study has successfully indicated the potential of pineapple stem as feedstock in the production of lysine and methionine using microbial fermentation. The production of these amino acid can be further improved in several approaches, including the better understanding in fermentation effects as well as statistical tools utilisation.
Time-lapse image classification using a diffractive neural network Diffractive deep neural networks (D2NNs) define an all-optical computing framework comprised of spatially engineered passive surfaces that collectively process optical input information by modulating the amplitude and/or the phase of the propagating light. Diffractive optical networks complete their computational tasks at the speed of light propagation through a thin diffractive volume, without any external computing power while exploiting the massive parallelism of optics. Diffractive networks were demonstrated to achieve all-optical classification of objects and perform universal linear transformations. Here we demonstrate, for the first time, a"time-lapse"image classification scheme using a diffractive network, significantly advancing its classification accuracy and generalization performance on complex input objects by using the lateral movements of the input objects and/or the diffractive network, relative to each other. In a different context, such relative movements of the objects and/or the camera are routinely being used for image super-resolution applications; inspired by their success, we designed a time-lapse diffractive network to benefit from the complementary information content created by controlled or random lateral shifts. We numerically explored the design space and performance limits of time-lapse diffractive networks, revealing a blind testing accuracy of 62.03% on the optical classification of objects from the CIFAR-10 dataset. This constitutes the highest inference accuracy achieved so far using a single diffractive network on the CIFAR-10 dataset. Time-lapse diffractive networks will be broadly useful for the spatio-temporal analysis of input signals using all-optical processors. Introduction Machine learning and artificial intelligence research has experienced rapid growth in the past two decades 1 . One of the core engines that has driven this growth is deep learning 2 , permitting efficient and rapid training of deep artificial neural network models. The ability to train deep neural networks has revolutionized artificial intelligence, and electronics has been the undisputed platform of choice for implementing artificial neural networks. Specialized processing hardware such as Graphics Processing Units (GPUs) are widely used today for deep learning. However, these electronic processors are powerhungry and bulky, making researchers wary of the environmental impact of machine learning 3,4 . Therefore, there is strong interest in low-power and fast computing platforms for machine learning applications. Optical computing has been identified as a promising potential alternative for such purposes because of the large bandwidth, high speed, and massive parallelism of optics 5 . Diffractive deep neural networks (D 2 NNs), also known as diffractive optical networks or diffractive networks, form a passive all-optical computing platform that exploits the diffraction of light waves to perform computation 6 . These diffractive networks are composed of several spatially-engineered surfaces, separated by free-space. The diffractive features/elements of a layer, also termed 'diffractive neurons', locally modulate the amplitude and/or the phase of the light incident upon the layer. Successive modulation by and diffraction through the layers give rise to an all-optical transformation between the input and the output fields-of-view at the speed of light propagation without any external power. 
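To make the preceding description concrete, the free-space diffraction that connects successive diffractive layers can be modelled with the angular spectrum method. The sketch below is a minimal, illustrative Python/NumPy implementation and is not the authors' training code; the field size, sampling interval and propagation distance are placeholder values.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field by a distance z (angular spectrum method)."""
    n = field.shape[0]                           # assumes a square n x n field
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)          # evanescent components suppressed
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative use: a phase-only layer followed by free-space propagation
wavelength, dx = 1.0, 0.53                       # in units of the wavelength (placeholder)
field = np.exp(1j * np.random.rand(200, 200) * 2 * np.pi)   # phase-modulated field
out = angular_spectrum_propagate(field, wavelength, dx, z=40.0)
```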
The amplitude and/or the phase values of the diffractive neurons corresponding to a desired optical transformation or computational task are trained/learned through a digital computer using deep learning. Once the training is complete, the layers can be fabricated and assembled to form a 'physical' network that performs the desired computation in a passive manner and at the speed of light propagation. Diffractive networks can achieve universal linear transformations [7][8][9] , and various applications using diffractive processors have been demonstrated such as object classification, pulse processing, imaging through random diffusers, hologram reconstruction, quantitative phase imaging, class-specific imaging, super-resolution image display, all-optical logic operations, beam shaping and orbital angular momentum mode processing, among others 10-30 . While diffractive networks have shown competitive performance on the classification of relatively simpler objects, for example, hand-written digits and fashion products 11 , for more complex natural objects such as those from the CIFAR-10 dataset 31 , their performance gap compared to the classification accuracy of electronic neural networks is still large 11,32 . Ensemble learning through multiple D 2 NNs has been demonstrated to improve the inference and generalization of diffractive networks at the cost of reducing the compactness and simplicity of the optical hardware 32 . In this work, we demonstrate, for the first time, a 'time-lapse' image classification scheme with a standalone diffractive optical network that significantly enhances the inference and generalization performance of diffractive computing. In this scheme, the objects and/or the diffractive network laterally move relative to each other, either randomly or in a controlled manner, during the detector integration time, enriching the information provided to the diffractive network. In a different context and application, lateral shifts of the object of interest relative to the imager have been routinely used for pixel super-resolution imaging, enhancing the resolution of the reconstructed images [33][34][35][36] . Inspired by the success of these pixel superresolution approaches, here we use the controlled or random relative displacements between the input objects and the diffractive network for time-lapse image classification and report a numerical blind testing accuracy of 62.03% for the classification of grayscale CIFAR-10 images, which constitutes the highest classification accuracy for this dataset achieved so far using a single diffractive optical network. In addition to significantly advancing the inference and generalization performance of D 2 NNs, these time-lapse diffractive networks can also find broader use in the all-optical processing of spatio-temporal information of a scene or object. Results The concept of time-lapse image classification with a diffractive network is illustrated in Fig. 1. A diffractive network comprising 5 phase-only diffractive layers, axially separated by 40 , is placed between the object plane and the detector plane. The detector plane includes 20 detectors 11 : 2 detectors for each class of the CIFAR-10 dataset, i.e., a 'positive' detector ,+ and a 'negative' detector ,− . The integration time of the output detectors is assumed to be , where is the number of lateral object shifts and each of the individual shifts has an equal integration time of . 
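The time-lapse detector integration can be sketched in a few lines: the output-plane intensity is accumulated over N lateral shifts of the phase-only object, each weighted by an equal fraction of the total integration time. In the toy NumPy sketch below, propagate() is a placeholder standing in for the actual optical forward model through the trained diffractive layers, and all array sizes and shift values are illustrative.

```python
import numpy as np

def propagate(field):
    """Placeholder for the diffractive-network forward model (input field -> output field)."""
    return np.fft.fft2(field) / field.size       # stand-in only; not the real optical model

def time_lapse_intensity(phase_object, shifts):
    """Accumulate output-plane intensity over N lateral shifts, each weighted by T/N."""
    accum = np.zeros(phase_object.shape, dtype=float)
    for dx, dy in shifts:
        shifted = np.roll(phase_object, shift=(dy, dx), axis=(0, 1))
        field = np.exp(1j * shifted)             # phase-only input object
        accum += np.abs(propagate(field)) ** 2 / len(shifts)
    return accum

obj = np.random.rand(64, 64) * np.pi             # illustrative phase object
grid = [(dx, dy) for dx in (-2, 0, 2) for dy in (-2, 0, 2)]   # 3x3 grid of lateral shifts
output_intensity = time_lapse_intensity(obj, grid)
```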
Without changing our conclusions, in alternative implementations, the diffractive network can also laterally move relative to the static object, or both the object and the diffractive network can laterally move at the same time. Each detector D_c,± is assigned an exponent p_c,± which operates on the integrated detector power to yield the detector signal s_c,± (see Fig. 1 and the Methods section). We will report diffractive classification results under two different conditions: (1) the exponents are assumed to be trainable, and (2) non-trainable, fixed as p_c,± = 1. The normalized differential class scores Δs_c = (s_c,+ − s_c,−)/(s_c,+ + s_c,−) are calculated from these detector signals, and the prediction/inference is made in favor of the class receiving the highest differential optical score (see Fig. 1). For all the D2NNs reported in this work, each trainable diffractive layer consists of 200×200 diffractive elements (diffractive neurons) of size 0.53λ×0.53λ, where λ is the illumination wavelength. The objects are assumed to be phase-only and the diffractive networks are trained using the grayscale CIFAR-10 dataset (refer to the Methods section for details). The hyperparameters that define the grid of lateral displacements of the objects during the time-lapse image classification are Δ and N, where Δ is the maximum (relative) lateral displacement along x/y and N² refers to the total number of points on the grid, see Fig. 2. The size of the input aperture is another hyperparameter that affects the classification performance of the time-lapse diffractive networks. The impact of these hyperparameters, Δ and N, and the input aperture size on the performance of time-lapse diffractive classifiers is shown in Fig. 2. The classification performance is quantified by the blind testing accuracy of the networks on 10,000 previously unseen images belonging to the test set of the CIFAR-10 dataset. To obtain each data point in Fig. 2, we trained 3 different diffractive networks with the same hyperparameters and calculated the mean and standard deviation of the blind testing accuracies of these 3 trained networks. We see from Fig. 2a that as Δ is increased from 3.20λ to 6.40λ (while keeping N = 5 and the aperture size = 44.8λ×44.8λ constant), the mean blind testing accuracy increases until Δ = 5.33λ, where it reaches its highest value of 61.35%. Beyond Δ = 5.33λ, the mean classification accuracy starts to decrease. In Fig. 2b, we set Δ = 5.33λ, aperture size = 44.8λ×44.8λ and vary N. As N is varied between 3 and 6, the mean accuracy increases rapidly from 58.56% to 61.35% until N = 5, beyond which the mean accuracy reaches a plateau. For Fig. 2c, we selected N = 5 and Δ = 5.33λ (as optimized from Figs. 2a-b) and the width of the input aperture was varied between 32.0λ and 53.3λ. The highest mean accuracy (Fig. 2c) is observed for an input aperture size of 38.4λ×38.4λ, which is smaller than the object support of 44.8λ×44.8λ. We compared this observation with its counterpart for time-static diffractive image classification (see Supplementary Table S1), where the aperture size corresponding to the highest mean blind testing accuracy is larger than the object support. This comparison indicates that a time-lapse diffractive network prefers a relatively smaller input field-of-view compared to its time-static counterparts. Next, we juxtapose a time-lapse image classification diffractive network with a time-static diffractive network; see Fig. 3. For this comparison, we chose the time-lapse diffractive network with the best individual blind testing accuracy (62.03%) among the networks constituting the results of Fig.
2 and the time-static diffractive network with the best individual blind testing accuracy (53.14%) among the networks constituting the results of Supplementary Table S1. For the time-lapse image classification diffractive network, the hyperparameters corresponding to the highest individual accuracy were = 5, = 5.33 and input aperture size = 38.4 ×38.4 ; while for the time-static network, the input aperture size corresponding to best individual accuracy was 51.2 ×51.2 . Another difference to be noted between the time-static and the time-lapse diffractive networks chosen for comparison in Fig. 3 is that for the timestatic one, the detector exponents were not trainable, i.e., ,± = 1, whereas the detector exponents were trainable for the time-lapse network. The reason for this selection is that, unlike the time-lapse diffractive networks, time-static diffractive networks showed overfitting when the detector exponents are trainable, leading to inferior generalization; see Supplementary Table S2. For an example object from the image class 'ship' (true label: 8), we show in Fig. 3a the detector plane intensity, detector signals and the class scores for the time-static network; similarly, in Fig. 3b we show the time-integral of the detector plane intensity, detector signals and the class scores for the time-lapse image classification network. While the time-static network misclassifies the object for an 'automobile', the time-lapse image classification diffractive network correctly predicts the object to be a 'ship' (predicted label: 8). We also show in Fig. 3c the confusion matrices calculated over 10,000 test images of the CIFAR-10 dataset: the time-lapse image classification diffractive network performs consistently better than the time-static one for all the CIFAR-10 data classes. Note also that the time-lapse image classification diffractive network designed with non-trainable detector exponents (i.e., ,± = 1) achieved a blind testing accuracy of 60.35% on the same grayscale CIFAR-10 test dataset (see Supplementary Fig. S1), performing much better than the time-static one for all the CIFAR-10 data classes. The diffractive layers for all these networks are shown in Supplementary Fig. S2. During the training of the time-lapse diffractive networks, we followed a method similar to the 'dropout' method, which is used in deep learning to reduce overfitting and improve the generalization of a trained model 37 . We defined a hyperparameter which is the probability that a point on the object-plane grid is 'active' during training, i.e., the probability that the object is positioned at that lateral point during the signal integration at the detector. All the time-lapse networks described thus far were trained with = 0.5. As we describe below, the resilience of the trained time-lapse image classification diffractive networks to deviations from the training settings can be improved by a proper choice of , which is intuitively equivalent to the dropout strategy in deep learning literature. Related to this hyperparameter , next, we explored the impact of decreasing the number of lateral shifts, , on the blind testing accuracy of time-lapse classifiers: see Fig. 4. The value for each data point in Fig. 4 represents the mean of the classification accuracies over 25 independent blind tests with the same . For Fig. 4a, these lateral displacements were restricted to coincide with the pre-determined training grid points, and for the case of N < 2 , 2 − of the 2 lateral shifts were randomly eliminated (not used). 
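The class decision described above can be written compactly: the integrated powers of the positive and negative detectors of each class are raised to their (optionally trainable) exponents, combined into the normalized differential score, and the class with the highest score wins. The Python sketch below is illustrative; the detector powers are placeholder values, not simulated network outputs.

```python
import numpy as np

def classify(powers_pos, powers_neg, exp_pos=None, exp_neg=None):
    """Normalized differential class scores from positive/negative detector powers."""
    exp_pos = np.ones_like(powers_pos) if exp_pos is None else exp_pos   # p = 1 if non-trainable
    exp_neg = np.ones_like(powers_neg) if exp_neg is None else exp_neg
    s_pos = powers_pos ** exp_pos        # detector signals after applying the exponents
    s_neg = powers_neg ** exp_neg
    scores = (s_pos - s_neg) / (s_pos + s_neg)
    return int(np.argmax(scores)), scores

# Placeholder integrated powers for the 10 CIFAR-10 classes
p_pos = np.random.rand(10) + 0.1
p_neg = np.random.rand(10) + 0.1
predicted_label, class_scores = classify(p_pos, p_neg)
```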
For Fig. 4b, however, the lateral displacements were randomly selected without following the training grid points. As we can see in Fig. 4a, the blind testing accuracy decreases as is decreased; however, the slope of this performance degradation varies depending on the training hyperparameter . For example, in the case of the time-lapse image classification diffractive network shown in Fig. 3b, trained with = 0.5 (green curve in Fig. 4a), the test accuracy drops from 62.03% to 60.69% and 59.37% as decreases from 25 to 15 and 10, respectively. Compare this with the case of a time-lapse diffractive network trained with = 1.0 (red curve in Fig. 4a), for which the classification accuracy is affected much more severely and decreases from 61.61% to 59.61% and 57.45% as is decreased from 25 to 15 and 10, respectively. We see that networks trained with lower values show less sensitivity to decreasing , which is further corroborated by the curves corresponding to two other time-lapse diffractive networks trained with = 0.2 and = 0.3. Another advantage of training with lower values is decreased sensitivity to the exact object positions (see Fig. 4b). For Fig. 4b, we selected the lateral displacements without following the training grid points, allowing the object to be displaced (during the time-lapse imaging process) to arbitrary, randomly selected points within the area 2 ×2 . In general, for a given , the blind testing accuracies corresponding to such arbitrary displacements (left y-axis of Fig. 4b) are lower than their counterparts for the on-grid displacements shown in Fig. 4a. However, the degradation in classification accuracy, which is shown on the right y-axis of Fig. 4b, is much smaller when is lower. For example, at = 25, the mean accuracy drop is ~2% for the diffractive network trained with = 0.2, whereas the accuracy drop is ~6% for the = 1.0 diffractive network. The accuracy of time-lapse diffractive network-based image classifiers for arbitrary lateral displacements of the input objects can be improved by utilizing such random displacements of the objects during the training, rather than training with a pre-determined grid of lateral displacements. For this, the training hyperparameters and can be absorbed into a single hyperparameter , where refers to the number of arbitrary displacements within 2 ×2 . To demonstrate this, we trained three time-lapse diffractive networks with = 10, = 15 and = 25 and compared their accuracies for = 10, = 15, and = 25 arbitrary displacements of the input objects, respectively, against the classification accuracies of the time-lapse diffractive networks reported in Fig. 4a-b. The result of this comparison is shown in Fig. 4c: for = 10, = 15 and = 25 arbitrary lateral displacements during the time-lapse imaging process, the mean blind testing accuracies of the corresponding = diffractive networks are 1.26%, 1.77%, and 1.54%, respectively, higher than the accuracies of the = 0.2 time-lapse diffractive network. This generalization improvement and the inference accuracy increase are due to using arbitrary random lateral displacements of the input objects during the training process instead of blindly applying such random lateral shifts only during the testing phase. Discussion In previous work, we reported a significant improvement in diffractive network inference performance by ensemble learning and combining the output of several different diffractive networks. 
For example, mean blind testing accuracies of 61.14% and 62.13% on the CIFAR-10 test set were reported for ensembles of 14 and 30 different D 2 NNs, respectively 32 . However, the improvement with such a strategy is accompanied by a sacrifice in the compactness of the optical hardware and increased complexity in aligning several diffractive networks within the ensemble. Another shortcoming of ensemble learning of diffractive networks is the large training time. In our previous work, 1252 diffractive models were trained, and ensemble pruning was then performed to arrive at the final design 32 . Time-lapse diffractive networkbased image classification provides blind testing accuracies comparable to ensemble learning with only a single trained diffractive network. For comparison, the time-lapse diffractive network of Fig. 3b gives 62.03% blind testing accuracy on CIFAR-10 test images. The trade-off for such an advantage is the increase in the imaging/classification time due to the lateral shifts of the objects. However, the alignment and synchronization requirements associated with diffractive network ensembles are evaded. Also, the training of a time-lapse diffractive classifier takes ~20 hours on an NVIDIA GeForce RTX 3090 GPU (see the Methods section), which is orders of magnitude less than the time required to design an ensemble of diffractive networks working together. Regarding the implementation of time-lapse diffractive network-based image classification, Spatial Light Modulators (SLMs) can be used to perform the lateral displacements of the input objects digitally if a digital representation of each object is available. In an alternative implementation, the diffractive layers and the detectors could be mounted on a movable stage to shift the entire system with respect to the object or input FOV. Perhaps, the simplest implementation of time-lapse diffractive network-based image classification would exploit the natural jitter or movement of the input objects during the integration time of the class detectors. As shown in Fig. 4c, ~60% blind testing accuracy on CIFAR-10 test images can be reached with arbitrary object displacements during the time-lapse inference. While time-lapse image classification significantly boosts the inference of a single D 2 NN on the classification of complex objects, there remains plenty of room for improvement to potentially close the large performance gap with their electronic counterparts, convolutional deep neural networks 32 . One possible avenue for such an improvement could be the incorporation of ensemble learning with time-lapse image classification, where the outputs of diversely trained time-lapse D 2 NNs could be combined for further improvement in generalization and statistical inference. Moreover, in the same way that the timelapse scheme utilizes the complementary information resulting from the input objects that are laterally shifted, other attributes of light such as polarization or wavelength could also be utilized 9,38 . For example, time-lapse diffractive networks can be trained to work with RGB images instead of grayscale images to benefit from the complementary information carried by different color channels. The incorporation of optical nonlinearities between the diffractive layers of D 2 NNs could also extend their approximation capability and consequently improve their statistical inference; for further details, see the Supplementary Information of Ref. 
6, where the impact of optical nonlinearities within a D2NN architecture was first discussed. All of these constitute possible future directions to explore for further decreasing the performance gap between electronic deep neural networks and D2NNs. In summary, we reported a time-lapse diffractive network-based image classification scheme for significantly improving the performance of D2NN classifiers with only a single trained diffractive network. The presented time-lapse diffractive network scheme could be vital for realizing compact, low-cost and passive optical processors for all-optical spatio-temporal analysis of information.

Materials and methods

Forward model. The propagation of coherent light across the M + 2 parallel planes defined by the input (object) plane, the M successive diffractive layers, and the output (detector) plane is modeled using the Rayleigh-Sommerfeld theory of scalar diffraction 39, according to which the propagation of a complex wave u(x, y) through a distance z in free space is described by a linear shift-invariant system with an impulse response defined as follows:

w_z(x, y) = (z/r²)·(1/(2πr) + 1/(jλ))·exp(j2πr/λ),

where λ is the illumination wavelength, r = √(x² + y² + z²) and j = √−1. Upon propagation through the free space separating layer n−1 and layer n, the complex field is modulated by the spatially varying complex transmittance t_n(x, y) of layer n, i.e.:

u_n(x, y) = t_n(x, y)·[u_{n−1} ∗ w_{z_n − z_{n−1}}](x, y).

Here, z_n is the axial coordinate of the n-th plane, and n = 1, ⋯, M, whereas a_n(x, y) and φ_n(x, y) are the amplitude and the phase of the complex transmittance t_n(x, y) = a_n(x, y)·exp(j φ_n(x, y)). For the phase-only diffractive networks reported in this work, a_n(x, y) is assumed to be 1.

In a differential classification scheme, each of the 10 classes of the CIFAR-10 dataset is assigned to two detectors: a virtual positive detector and a virtual negative detector. D_{c,+} (D_{c,−}) denotes the active area of the positive (negative) detector assigned to class c, c = 0, 1, ⋯, 9. The optoelectronic signal I_{c,±} of each detector is proportional to the optical intensity collected over its active area, integrated over the detector integration time T:

I_{c,±} = κ ∫₀ᵀ ∫∫_{D_{c,±}} |u_out(x, y; t)|² dx dy dt.

Here, κ is an optoelectronic detector-specific constant, and we assume that the propagation delay of light between the object plane and the detector plane is negligible compared to T. The detectors are assigned the exponents e_{c,±}, which operate on the optoelectronic signals I_{c,±} (after I_{c,±} are normalized to have a maximum value of 1) and generate the detector signals s_{c,±}:

s_{c,±} = (I_{c,±})^{e_{c,±}}.

Finally, the differential class scores are calculated as:

Δ_c = (s_{c,+} − s_{c,−}) / (s_{c,+} + s_{c,−}),

and the prediction for the object class is defined to be arg max_c Δ_c.

Numerical implementation. When numerically modeling light propagation through the diffractive networks, the grid spacing along the transverse directions (x and y) was chosen to be ~0.53λ. The Rayleigh-Sommerfeld convolution integrals were computed using the Angular Spectrum Method 39 based on the Fast Fourier Transform (FFT). For all the results presented in this paper, the diffractive networks consisted of 5 phase-only diffractive layers, axially separated by 40λ. Each layer comprised 200×200 diffractive features/neurons, the phases of which were trainable. The (physical) size of each diffractive neuron was assumed to be ~0.53λ × 0.53λ. The RGB images in the CIFAR-10 dataset were converted to grayscale to represent the input objects illuminated by a monochromatic and spatially-coherent wave. The objects were resized to span an area of 44.8λ × 44.8λ. The object information was assumed to be encoded in the phase channel of the input light, i.e., within the input field of view, u(x, y; z₀) = exp(j2π·o(x, y)), where o(x, y) is the object function, with its values normalized to lie between 0 and 1.
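As a concrete illustration of the forward model and the differential detection scheme just described, here is a minimal single-shot NumPy sketch (one object position, no time integration), not the authors' implementation. The angular-spectrum transfer function, the equal spacing of all planes, the absence of zero-padding, and the boolean detector masks `masks_pos`/`masks_neg` are simplifying assumptions made for illustration.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, distance):
    """Propagate a complex field by `distance` in free space using the
    angular-spectrum transfer function; dx is the transverse grid spacing.
    Evanescent spatial frequencies are suppressed."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2j * np.pi * distance * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(kz), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffractive_forward(phase_obj, layer_phases, wavelength, dx, dz):
    """Phase-encoded object -> successive phase-only layers -> detector-plane
    intensity, with free-space propagation of length dz between planes
    (all planes assumed equally spaced for simplicity)."""
    u = np.exp(2j * np.pi * phase_obj)           # phase-encoded input
    for phi in layer_phases:
        u = angular_spectrum(u, wavelength, dx, dz)
        u = u * np.exp(1j * phi)                 # phase-only transmittance
    u = angular_spectrum(u, wavelength, dx, dz)
    return np.abs(u) ** 2                        # detector-plane intensity

def differential_scores(intensity, masks_pos, masks_neg, exp_pos=1.0, exp_neg=1.0):
    """Integrate the intensity over each virtual detector's active area,
    normalise the signals to a maximum of 1, apply the detector exponents,
    and form the differential score for each of the 10 classes."""
    i_pos = np.array([intensity[m].sum() for m in masks_pos])
    i_neg = np.array([intensity[m].sum() for m in masks_neg])
    norm = max(i_pos.max(), i_neg.max())
    s_pos = (i_pos / norm) ** exp_pos
    s_neg = (i_neg / norm) ** exp_neg
    return (s_pos - s_neg) / (s_pos + s_neg)     # predicted class: argmax
```

In the actual time-lapse scheme, the detector-plane intensities of the laterally shifted object positions would be accumulated before `differential_scores` is applied.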
On the output plane, the active area of each detector was assumed to be 6.4 ×6.4 , and the spacing between the detectors was ~4.27 along both and directions (see Fig. 1). Training. The diffractive networks were trained using the cross-entropy loss function. The differential class-scores { } =0 9 were converted to probabilities { } =0 9 over the classes using the softmax function, i.e., = exp( ) ∑ exp( ) where = 10 was used. The training loss was defined as: where is the (true) label, and is the Kronecker delta function, i.e., = 1 if = and 0 otherwise. The trainable parameters of the model were trained by minimizing the loss ℒ using the Adaptive Momentum ('Adam') stochastic gradient descent algorithm 40 . The forward model was implemented using the open-source deep learning library TensorFlow 41 . The automatic differentiation functionality of TensorFlow was exploited to facilitate the gradient computations for optimization. A batch size of 8 was used to implement the stochastic gradient descent. The built-in TensorFlow implementation of Adam optimizer was used with the default values except for the learning rate, which had an initial value of 0.001 and was reduced by a factor of 0.7 every 8 epochs. All the networks were trained for 100 epochs using 45000 images from the training set of the CIFAR-10 dataset. The remaining 5000 images of the CIFAR-10 training set were left out for validation, i.e., after every epoch, the accuracy of the model on these 5000 images was evaluated. The model state at the end of the epoch for which the validation accuracy was maximum was ultimately used for blind testing. The training time of the time-lapse diffractive networks depended upon the hyperparameters and . For = 5 and = 0.5, the training took ~20 hours on an NVIDIA GeForce RTX 3090 GPU in a machine running on Windows 10. Fig. 1 Time-lapse image classification using a D 2 NN. (a) A diffractive network with 5 phase-only diffractive layers followed by 20 detectors at the detector plane for differential image classification. The integration time of each detector is . During each one of the intervals of duration, the center of the object is laterally displaced to a new point (red circle); these lateral displacements can be entirely random or follow a predefined grid (blue circles). (b) Labeling of the detectors where ,+ ( ,− ) denote the positive (negative) detectors assigned to class . The differential class-scores are used for the final classification decision based on the maximum score. and . represents the maximum displacement along either the vertical or the horizontal direction, whereas 2 is the total number of grid points. Top right: the dashed white square represents the input aperture immediately following the object, the area of which is another hyperparameter. (a) Dependence of the blind testing accuracy on with and the input aperture kept constant. (b) Effect of on the blind testing accuracy of the trained time-lapse diffractive classifiers as and the input aperture are kept constant. (c) Dependence of the blind testing accuracy on the input aperture size while and are kept constant. For (a)-(c), the data points and the error bars represent the mean and the standard deviation values, respectively, calculated from three designs, which are obtained by training three different timelapse D 2 NN classifiers for the same set of hyperparameter values. The curves are linearly interpolated between the data points. 
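Returning to the training objective described at the start of this subsection (before the figure captions), the sketch below shows one plausible reading of the loss: the differential class scores are multiplied by a scaling constant (stated as 10 in the text), passed through a softmax, and penalized with cross-entropy against the true label. The exact placement of the scaling constant is our interpretation of the partially garbled formula, so treat it as an assumption.

```python
import numpy as np

def softmax_cross_entropy(diff_scores, true_label, scale=10.0):
    """Cross-entropy on the softmax of the scaled differential class scores."""
    z = scale * np.asarray(diff_scores, dtype=float)
    z = z - z.max()                      # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return -np.log(probs[true_label])
```

In training, this loss would be averaged over a mini-batch (batch size 8 in the paper) and minimized with the Adam optimizer as described above.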
(c) Improvement of the blind testing accuracies for a given number of lateral displacements, obtained by training the time-lapse diffractive network with the same number of arbitrary/random displacements within the allowed displacement area instead of training with a set of fixed lateral displacements defined by a pre-determined lateral grid. For (a)-(c), the values (errors) corresponding to the data points represent the mean (standard deviation) values calculated through the blind testing of the same trained network 25 times, every time with arbitrary lateral displacements of the input objects.
The Influence of Social Support in PROMs of Patients with COPD in Primary Care: A Scoping Review Chronic obstructive pulmonary disease (COPD) is a prevalent and multidimensional disease with symptoms that greatly influence patients’ health. Healthcare professionals utilize patient-reported outcome measures (PROMs) to classify and better manage the disease. Despite the value of PROMs, they inadequately represent some important dimensions of COPD, like social support and healthcare access/utilization. This is important, especially for social support, since it can positively influence PROMs results and the overall health of patients with COPD. Therefore, a scoping review was conducted to determine how social support affects PROMs of patients with COPD in primary care. The PRISMA–Scoping approach was adopted, and we sought articles published in MEDLINE and COHRANE. We screened 2038 articles for inclusion and finally included a total of 10 articles. Most of the articles were conducted in the U.S. and Norway. Social support had a strong positive impact on PROMs. Additionally, different types of social support were observed. Moreover, higher levels of social support were linked to better quality of life, mental health, self-care behaviors, self-management, functionality, and less severe COPD. Consequently, this scoping review highlights the value of social support in patients with COPD and its underrepresentation and misrepresentation in PROMs literature. Introduction Chronic obstructive pulmonary disease (COPD) is widely accepted as a leading cause of chronic disability, morbidity, and mortality, imposing a major and growing economic and social burden [1][2][3].It is characterized by a persistent, often progressive obstruction of airflow due to several airway abnormalities (such as bronchitis, bronchiolitis, and emphysema) and chronic respiratory symptoms (e.g., shortness of breath, mucus production, and/or exacerbations) [2].COPD represents an important preventable, multidimensional, manageable, and treatable public health challenge [2].Interestingly, to manage and treat patients with COPD, healthcare professionals have to use validated questionnaires to assess quality of life and health status.However, there is a limited association between the severity of airflow obstruction and patient symptoms/health status impairment [2]. Given the aforementioned factors, there is an evident need for validated questionnaires that enable healthcare professionals to assess all dimensions of COPD (e.g., symptoms, physical functioning, psychosocial well-being, etc.) [4][5][6].For this purpose, several patientreported outcome measures (PROMs) have been developed and are valuable tools in day-to-day clinical practice [7][8][9][10][11].These measures are standardized questionnaires that healthcare professionals utilize and to which patients with COPD can respond based on their perception of their health and illness [12].Through PROMs, healthcare professionals can gain a better understanding of the impact and progression that COPD has on patients and provide better quality of care [7][8][9][10][11]13]. Consequently, by utilizing PROMs, healthcare professionals can address the physical, emotional, and social functioning of COPD patients [4][5][6].However, despite the value of PROMs, certain dimensions of COPD, such as social support and healthcare access and utilization, remain inadequately represented within these measures [14]. 
The concept of social support involves the provision of emotional, informational, and instrumental assistance to individuals through their social networks, which includes family, friends, peers, and healthcare professionals [15].Social support has been classified into two domains: structural and functional.Structural social support encompasses the features of the social network surrounding an individual and their interactions within it (e.g., marital status and living arrangements) [16].Conversely, functional social support pertains to specific assistance given to an individual through their social network [17].For patients with COPD, higher levels of social support can play a pivotal role in shaping their experience with the disease and their overall health, potentially influencing the results measured by PROMs [14,18]. Taking into account the effect that adequate levels of social support can have in patients with COPD [19], it is plausible to hypothesize that higher levels of perceived social support may have a positive impact on many PROMs of patients with COPD.However, there is a lack of consistent evidence to support the relationship between perceived social support and self-perceived health in patients with COPD using PROMs, especially in primary care settings.Therefore, a scoping review that explores the interplay between social support and PROMs in primary care patients with COPD could provide valuable insights to the healthcare community.These insights could offer a nuanced understanding of the role of social support in shaping the holistic well-being of primary care patients with COPD.Furthermore, it could contribute to the design of interventions and policies that optimize the patient experience, promote better coping strategies, and improve the overall management of COPD in primary care settings.In light of the aforementioned considerations, the aim of this review was to explore the interplay between social support and PROMs in primary care patients with COPD. Materials and Methods This scoping review was conducted in accordance with the principles recommended in the Preferred Reporting Items for Scoping Reviews (PRISMA-ScR): Checklist and Explanation [20] and the Joanna Briggs Institute Reviewers' Manual for Scoping Reviews [21]. Search Strategy Regarding the search strategy, we performed a comprehensive literature search in two electronic biomedical literature databases (MEDLINE and COCHRANE) in September 2023.Since this was a scoping review [22], we wanted to make the literature search as broad as possible.Therefore, we used the following keyword combinations and Boolean operators (AND, OR, NOT) [23] in both databases: "COPD" AND "social support" OR "chronic obstructive pulmonary disease" AND "social" AND "patient-reported outcome measures" OR "PROMs" AND "primary care". Study Inclusion and Exclusion Criteria The titles and abstracts were initially examined for possible inclusion by two independent reviewers.After removing duplicates, the two reviewers collaborated to screen the remaining records.During the screening process, any inconsistencies between the investigators were resolved by a third reviewer. 
We included both qualitative and quantitative studies, such as cross-sectional studies, observational studies, interventional trials, longitudinal studies, randomized controlled trials, and qualitative research (e.g., interviews and focus groups).These articles reported on the influence of social support in PROMs in primary care patients with COPD.Additionally, we included articles that examined the influence of various aspects of social support (emotional, instrumental, and informational) in the PROMs of patients with COPD.The articles we included had the full texts available, in the English language, and covered adult patients (18 years and older) that were diagnosed with COPD who received care in primary care settings.It is worth mentioning that the benefits of social support on health have been recognized since 1976 [24].In addition, this was a scoping review of a relatively understudied topic (the interplay between social support and PROMs in primary care patients with COPD); therefore, we did not specify a timeframe for the inclusion of published studies. We excluded case reports, case series, commentaries, editorials, letters, conference abstracts, review studies, book chapters, and studies published in languages other than English.Studies involving patients with conditions other than COPD or those conducted in non-primary care settings (e.g., hospital settings and specialized clinics) were also excluded.In addition, gray literature, such as articles that were not peer-reviewed, were excluded. Data Extraction and Analysis The methodology employed in this review entailed the extraction of information regarding study design, student population characteristics, and outcomes of interest from the full texts of the included articles by a single reviewer utilizing a standardized data extraction form.Subsequently, the accuracy of the data extraction form was verified by two independent reviewers via a thorough appraisal process, followed by a discussion to resolve any discrepancies.The extracted data were subsequently presented in a descriptive manner in this review.It should be noted that we used the terms "social support", and "COPD" to identify and include articles, whereas the terms "review", "cancer", and "children" were used to identify and exclude articles. Trial Flow and Overview of Selected Studies The initial database search yielded 3187 articles for this scoping review.After the first screening and removal of duplicates, 2038 articles were screened based on their titles.Following that, 81 titles met the inclusion criteria and were selected for a second/further evaluation based on their abstracts.Subsequently, 18 abstracts met the inclusion criteria; therefore the full texts were retrieved for further screening.However, from the 18 full texts retrieved, 8 articles were excluded based on the inclusion/exclusion criteria discussed in the methods section.Therefore, 10 full-text articles were finally included for this review.The PRISMA flow diagram [25] for the literature search is presented in Figure 1. Characteristics of Included Studies The characteristics of the 10 included studies are presented in Table 1.Out of the 10 articles, 6 were categorized as cross-sectional, 3 as prospective, and 1 as longitudinal.Regarding the country in which articles took place, most articles were from the U.S. 
(2 articles) and Norway (2 articles), while the rest came from Australia (1 article), the United Kingdom (1 article), Taiwan (1 article), Spain and Colombia (1 article), Korea (1 article), and China (1 article). The articles included employed different assessment tools to evaluate social support. In particular, the most common approach was to rely on self-reported questions (4 articles) rather than using validated tools to assess social support. The tools used were the Behavioral Risk Factor Surveillance System (BRFSS) (one question about social support), Medical Outcomes Social Support Scale (MOSSS), Duke-UNC Functional Social Support questionnaire (DUFSS), Multidimensional Scale of Perceived Social Support (MSPSS), European Social Survey (ESS), Illness-Specific Social Support Scale (ISSS), and Social Support Rating Scale. The PROMs used in patients with COPD were the Hospital Anxiety and Depression Scale (HADS) (2 articles) and self-reported questions for COPD (4 articles), among other instruments. • Initial findings showed a positive correlation between higher social support and higher MCS scores at baseline. • However, this correlation *** no longer existed 1 year after the patient education program. • Social support did not mediate the correlations * between illness perceptions and HRQoL. Chen et al. (2016) [28] Cross-sectional n = 19 participants (Taiwan) • Qualitative method through in-depth interviews. The topics included questions about social support. • Qualitative method through in-depth interviews. The topics included questions about experience of illness and psychological status. • Patients indicated being provided with positive support from both their family members and healthcare professionals. • Thematic analysis based on Miles and Huberman's (1994) [29] guidelines was performed and showed that social support had a significant effect on COPD self-management. • Factors including physical and psychological well-being, disease-related cognition, and social support influenced the self-management efficacy of COPD participants. Chen et al.
(2017) [30] Longitudinal n = 282 participants (USA) • Participant response to four questions: (1) whether participants live alone or live with others, (2) whether they are partnered, (3) the number of close friends and relatives they have, and (4) the presence of a family/friend caregiver ("Which family member or friend is most involved in your care now?") for structural social support. • Medical Outcomes Social Support Scale (MOSSS) for functional social support. • Hospital Anxiety and Depression Scale (HADS) for psychological symptoms. • Participants' response to four questions about carelessness, forgetting, stopping medication when feeling better, and using less of the medication than prescribed when feeling better in the past 3 months for adherence to inhaler. • High levels of structural and functional social support, as the majority had a supportive environment. • Participants with a spouse or partner as their caregiver had 11 times greater odds ** of participating in pulmonary rehabilitation compared to those without a caregiver. • Neither structural nor functional support appeared to have * any impact on adherence to inhaler.• Duke-UNC Functional Social Support questionnaire (DUFSS) for perceived functional social support. • Living with Chronic Illness Scale (LW-CI Scale) for complex process of living with long-term conditions (LTC). • Satisfaction Life Scale (SLS-6) for satisfaction with life during the process of living with an LTC. • Patient-Based Global Impression of Severity Scale (PGIS) for self-perception of disease severity. • Satisfaction with life and social support were highlighted as key contributors to the overall experience of individuals living with LTCs, such as COPD patients. • There was a positive correlation * between social support and improved general and emotional health, as well as overall well-being. Halding et al. (2010) [32] Prospectiveinterventional n = 18 participants (Norway) • Qualitative method through in-depth interviews.The topics included questions about social support.Participants responded to questions regarding family life, sources for support, experiences from contact with peers in the last year, and how the participant perceives current everyday life. • Participants responded to questions about experiences in everyday life with COPD prior to pulmonary rehabilitation, symptoms, problems, impact on everyday activities, and psychosocial changes associated with the illness. • The participants emphasized that social integration in rehabilitation groups and support from peers and health-care personnel are important dimensions regarding pulmonary rehabilitation (PR). • The support of social groups encouraged mutual trust, support, increased self-confidence, and motivation for self-care. • The support of social groups and integration in those groups had a positive effect * on quality of life. • The support provided by health professionals relieved the patients' symptoms. • Memorial Symptom Assessment Scale (MSAS) for experience of symptoms. • Functional Performance Inventory-Short Form (FPI-SF) for functional performance. • High levels of social support were associated * with a decrease in experiencing symptom. • The more social support individuals received, the better their coping mechanisms were. • Higher levels of social support were significantly associated * with lower symptom experience and higher functional performance. • State Trait Anxiety Inventory for anxiety. 
• Positive social support was identified as a factor contributing * to decreased levels of depression and anxiety, whereas negative social support was identified as a factor contributing * to increased levels of depression and anxiety. • No significant relationship * was found between high levels of positive or negative social support and quality of life. • Participant response to questions about self-reported general health. • Poor family and social support were found to be significantly correlated with a decrease in QoL score. • The traits of depression and poor family and social support had the most pronounced impact * on the decline in QoL. Associations between Social Support and PROMs in COPD Patients Table 1 provides a summary of the main findings from the 10 included articles, examining the correlation between social support and PROMs in patients with COPD. The general consensus among the articles was that social support has a favorable influence on PROMs.There were three domains in which there was consistent evidence of the positive impact of social support on patients with COPD: mental well-being, quality of life, and self-efficacy. Four articles [26,27,31,33] were identified that reported a positive impact of social support on mental well-being, specifically on depressive symptoms (2 articles), anxiety (1 article), and psychological well-being (1 article) in patients with COPD.Arabyat et al. indicated in their unadjusted analysis of a large U.S. population-based health survey that a reduced level of social/emotional support was associated with a greater likelihood of experiencing depressive symptoms [26].Notably, patients lacking sufficient social/emotional support were almost four times as likely to report more than 14 mentally unhealthy days within the past month, in contrast to patients with adequate social/emotional support [26].Furthermore, in a previous article, increased levels of negative social support were identified as being linked not only to higher levels of depression but also to anxiety symptoms [33].Patients who received sufficient support from people with whom they had close relationships exhibited a positive correlation with better mental health, although this correlation was no longer present one year after a COPD-specific patient education program [27].Also, social support was related to better reported general and emotional health and well-being in people [31]. Self-efficacy and self-care behavior, including adherence, showed consistent improvement when participants reported greater social support in six articles [19,28,[30][31][32]35].In a qualitative study, Chen et al. 
demonstrated a positive association between social support and self-management [28].In another article, the support of social groups provided mutual trust, support, and increased self-confidence and motivation for self-care [32].The greater the level of social support received by individuals, the more effective their coping mechanisms became [19].The Duke-UNC Functional Social Support questionnaire identified social support as a significant factor in the process of living with the illness, as evaluated by the Living with Chronic Illness Scale (LW-CI Scale), which encompasses acceptance, coping, self-management, integration, and adjustment [31].One additional article [30] found that participants with a spouse or partner as their caregiver had 11 times higher odds of participating in a pulmonary rehabilitation program than those without a caregiver.In the same article, neither structural nor functional support had an impact on adherence to inhaler or nebulizer medications. The evidence on the relationship between social support and the variables of functional status, quality of life, and self-rated health was inconclusive, and different articles reported different results (8 articles) [19,[26][27][28][31][32][33][34].The research conducted by Arabyat et al. [26] on a community sample of 1.261 patients with COPD revealed a significant correlation between insufficient social/emotional support and disability, as well as impairment in all aspects of health-related quality of life (HRQoL).In the adjusted analysis, patients with COPD who rarely or never received social/emotional support had a higher likelihood of experiencing diminished physical and mental HRQoL days than those who reported receiving sufficient social/emotional support.Despite this, social/emotional support was not significantly associated with disability or general health.However, it is worth highlighting that the assessment of social support was based solely on a single question ("How often do you receive the social/emotional support you need?").Another article [27] found that among the quality of life of 60 patients with COPD, as assessed by Short Form 12 (SF-12v2) for quality of life (physical and mental components of quality of life), the mental component score was positively associated with social support, which was evaluated through one question ("I think I have enough support from people with whom I have a close relationship.").This was not the case in another prospective study involving 406 patients with COPD, since poor family and social support score were positively associated with lower quality-of-life scores [34].The research conducted by Halding et al. showed that the support of social groups and integration into the groups had a positive effect on quality of life [32]. Conversely, a previous article did not find a meaningful correlation between elevated levels of positive or negative social support (as measured by the ISSS) and quality of life, evaluated using the SGRQ [33].Moreover, no significant correlation was observed between social support (MSPSS) and functional performance (FPI-SF) [19].Nevertheless, social support was positively associated with better a experience of symptoms in patients with COPD, thereby affecting their functional performance. 
Discussion This scoping review explored the interplay between social support and PROMs in primary care patients with COPD and elucidated ways in which social support influences the multi-faceted dimensions of COPD-related well-being.The 10 articles included in this review indicate that social support has a strong positive influence on various health-related PROMs of COPD.Additionally, higher levels of social support were related to better quality of life, self-care behaviors, and self-management in patients with COPD in primary care settings.Interestingly, social support was positively associated with better mental health (anxiety and depression), which in turn was associated with better quality of life and lower severity of symptoms in patients with COPD.In addition, social support was positively associated with less severe COPD and better functional performance in patients with COPD.Simultaneously, another finding was that, despite the clear definition of social support and its domains (structural and functional), researchers measured social support through family, friends, health professionals, and social groups. A major finding of the present review was that social support could help alleviate multiple health-related problems that patients with COPD experience, such as physical, psychological, and financial burden and stress.Furthermore, social support could have a positive effect on motivation for self-management and adherence to treatment in patients with COPD [36].This effect could help improve the dyspnea, anxiety, depression, and overall health status and prevent disease deterioration in patients with COPD, as shown in other diseases, such as silicosis [37].Additionally, patients with COPD described feelings of social isolation and reported suffering from negative emotions [38].Their personal integrity and self-esteem were threatened due to their dependence on others and their self-blame for the disability inflicted by their condition, which is mainly caused by smoking [38][39][40][41].A potential explanation for the positive effect that social support has on patients with COPD was that it serves as a protective mechanism during stressful life events, such as COPD diagnosis/exacerbations [42].This means that social support could act as a barrier to mitigate the negative effects of COPD on patients [43].Moreover, the impact of stressful situations could be more significant for individuals who feel that they receive lower levels of social support than those who feel that they receive higher levels of social support [42,43].In addition, social support may have the potential to improve the coping mechanisms of patients with COPD by boosting their problem-solving skills, enhancing their comprehension of the disease, and fostering increased motivation to take action [44]. 
Patients with other chronic diseases, such as diabetes, chronic heart disease, and chronic kidney disease, have also been found to exhibit a connection between improved self-care behaviors and increased levels of social support [45][46][47].However, there have been only a handful of articles on patients with COPD that have examined the relationship between social support and self-care behaviors.For example, two articles revealed that participants with COPD were able to better manage their condition when they received functional social support from their family members [48,49].A potential explanation for this finding could be that having sufficient social and emotional support can directly improve mental health regardless of whether one is facing stressful situations [43].Moreover, individuals with greater levels of social support tend to experience increased self-esteem, a sense of security, and better decision-making when it comes to healthcare [43].Indeed, studies have emphasized the relationship between social support and its potential impact on stress-related conditions such as COPD [50,51].Specifically, research has suggested that greater levels of social support may lead to a reduction in psychological problems and a more rapid recovery from stressors, including COPD exacerbations [50,51].Additionally, higher social support has been associated with a decrease in severe and disabling COPD exacerbations [50,51].However, the majority of these studies have not investigated the connection between the structural and functional aspects of social support and the performance of self-care activities among patients with COPD.Evidently, there is a great discrepancy in the terminology of social support.For example, almost all studies included in this review defined social support differently (family, friends, health professionals, and social groups), with only one notable exception [30] that measured both domains of social support (structural and functional) [16] by using two different measures.This means that there is an urgent need to better inform healthcare professionals about social support and its dimensions and to decide on common and standardized [52] terminology between different healthcare professionals. 
The international and regional guidelines for the management of COPD could concentrate on all aspects of social support and their implementation in patients with COPD, since this has not been highlighted enough.The articles examined in this review not only highlighted the relationship between social support and PROMs but also emphasized the benefits of social support in the overall health of patients with COPD.However, worldwide, there is limited evidence on the influence of social support in COPD.The included articles focused either on pharmacological and medical aspects (symptoms) related to COPD or on non-pharmacological aspects mainly related to exercise or mental health.Additionally, another finding of our review was that the measures and tools used to evaluate social support are not uniform and vary widely across studies.In particular, the majority used self-reported questions and non-validated tools to assess social support.It should be noted that the CCQ [7], a broadly used PROM for COPD, includes one question about an aspect of social support (social activities such as talking, being with children, visiting friends/relatives).However, although the CCQ is included in the Global Initiative for Chronic Obstructive Lung Disease (GOLD) as a suggestion, only the mMRC and CAT are suggested as PROMs to classify patients [2].Therefore, more efforts are needed to establish social support as an important indicator of PROMs in patients with COPD. As previously stated, patients with COPD encounter a multitude of challenges in managing their condition, such as breathlessness, fatigue, anxiety, and the social burden it entails [1][2][3].Primary care plays a crucial role in the diagnosis and management of COPD by monitoring disease progression, exacerbations and medication adherence, and by developing individual action plans [53].With the aim of the better diagnosis and management of COPD, primary care should utilize PROMs [53].Conversely, primary care could foster social support for patients with COPD by offering emotional, educational, and informational support through a social network that includes healthcare professionals, caregivers, family, and friends [26,27,33].Evidently, patients with COPD could benefit greatly from social support, as it can assist them in managing their illness and improving their overall health [26,27,33].Consequently, the incorporation of social support and PROMs into everyday primary care for COPD patients holds the potential to greatly enhance their overall health and, consequently, their quality of life.It should be noted that healthcare professionals must possess a comprehensive understanding of the daily lives and influencing factors of individuals with long-term conditions (LTCs) to deliver thorough, personalized, and patient-centric care [54,55].For example, understanding the determinants of living with LTCs, specifically from the individual's perspective, is an underrepresented topic in the literature.This is important, since two major outcomes of the complex experience of living with LTCs are quality of life and satisfaction with life [55]. The strength of this scoping review was its comprehensive approach to social support and PROMs in patients with COPD.Social support has been associated with improved self-management [56] and general self-efficacy in patients with COPD [27].These factors have been positively associated with the functionality of patients [57]. 
Based on these findings, this scoping review provides opportunities for future research. First, we propose that there is a need for validated tools for social support and that further articles are needed. Second, more research should be conducted using a comprehensive approach to clarify the potential causal contributions of social support to patients with COPD. Fourth, guidelines are single-disease oriented and usually approach a chronic condition like COPD by focusing mainly on diagnosis and management, despite the fact that they should approach it multidimensionally (e.g., social and psychological dimensions, health determinants, frailty, multimorbidity, etc.).

Limitations

Despite the useful findings, the present scoping review is subject to a few notable limitations. First, we included only articles in the English language, therefore limiting our results. Second, PROMs include many different questionnaires, thus making it difficult to compare articles. Third, the definition of social support differed greatly between studies; thus, further analysis/comparisons were difficult. Finally, we did not evaluate the quality of the articles and simply described their findings without further analysis.

Conclusions

The results of this review show that social support is positively associated with mental health, quality of life, and self-efficacy in patients with COPD. Specifically, higher levels of social support were associated with lower levels of depressive symptoms and better self-care behaviors (adherence) and self-management in patients with COPD. Furthermore, it should be noted that the majority of research pertaining to patient-reported outcome measures (PROMs) in patients with COPD has overlooked the significance of social support. Additionally, there is a general misrepresentation of social support regarding its definition and domains in the current literature. Given this insufficiency in the current literature, it is crucial that future research focus on the significance of social support in PROMs in primary care patients with COPD. Consequently, our review emphasizes the vital role of social workers in the multidisciplinary health team of COPD patients and social support as one of the cornerstones of holistic care for them. Therefore, healthcare managers could aim to provide higher levels of social support in order to improve the quality of life, mental health, and self-efficacy of patients with COPD.

Figure 1. PRISMA flow diagram of the literature.

Table 1. Articles investigating the impact of social support on COPD PROMs.
Euler Characteristics and their Congruences in the Positive Rank Setting The notion of the truncated Euler characteristic for Iwasawa modules is an extension of the notion of the usual Euler characteristic to the case when the homology groups are not finite. This article explores congruence relations between the truncated Euler characteristics for dual Selmer groups of elliptic curves with isomorphic residual representations, over admissible $p$-adic Lie extensions. Our results extend earlier congruence results from the case of elliptic curves with rank zero to the case of higher rank elliptic curves. Introduction Iwasawa theoretic invariants for modules arising from the Iwasawa theory of ordinary Galois representations provide key insights into the arithmetic of such objects. Of particular interest is the behaviour of these invariants when one considers two ordinary Galois representations whose associated residual representations are isomorphic. Greenberg and Vatsal [8] initiated such a study for elliptic curves, and this was developed further in the works of Emerton-Pollack-Weston [7]. Similar investigations were carried out in the context of non-commutative Iwasawa theory in, for instance, [4] and [6]. In all the works referenced above, congruences between the corresponding L-values and Euler characteristics of dual Selmer groups of elliptic curves were established over special p-adic Lie extensions. A fundamental hypothesis when considering Euler characteristics was that the Euler characteristic was defined, which often entailed the assumption of finiteness of the dual Selmer group over the ground field. This article sets out to explore possible thematic generalizations of such congruence results when this finiteness hypothesis is removed. The natural substitute for the Euler characteristic is the truncated Euler characteristic as considered in [19] for the cyclotomic extensions. This definition was generalized to a broader class of admissible p-adic Lie extensions in [4] and [12]. It is striking that the congruence results extend to the rank one case, and to more general p-adic Lie-extensions. This leads us to believe that the truncated Euler characteristic is also an intrinsic arithmetic invariant in non-commutative Iwasawa theory. This paper consists of five sections including this introduction. In section 2, we set up notation and requisite preliminaries. Section 3 proves the congruence results in the case of the cyclotomic Z p -extension and section 4 establishes similar congruence results in the setting of admissible, non-commutative p-adic Lie extensions. More specifically, we prove our results for false Tate curve extensions and GL 2 extensions. In section 5, we discuss explicit numerical examples which demonstrate that our results are optimal. Throughout, let p ≥ 5 be a prime. Let E 1 and E 2 be two elliptic curves over the field Q of rational numbers. Let ρ i : G Q → GL 2 (Z p ) be the Galois representation on the p-adic Tate module of E i and E i [p] denote the Galois module of the p-torsion points of the elliptic E i . Denote byρ i : G Q → GL 2 (F p ) the residual Galois representation. Let N i be the level of E i and set N = N 1 N 2 . Assume that (1) E 1 and E 2 both have the same algebraic rank g, (2) the p-adic Galois representations ρ 1 ≡ ρ 2 mod p, (3) the residual Galois representationsρ i are irreducible, (4) E 1 and E 2 both have good ordinary reduction at p, (5) X(E i /Q)[p] is finite for i = 1, 2, (6) the p-adic height pairings on E 1 and E 2 are non-degenerate. 
Conditions (5) and (6) are always expected to be true. Let Q cyc denote the cyclotomic Z p -extension of Q and put Γ := Gal(Q cyc /Q). Let Q ∞ /Q be an admissible p-adic Lie-extension of Q and G := Gal(Q ∞ /Q). Let χ t (Γ, E i ) (resp. χ t (G, E i )) denote the truncated Euler characteristic of E i with respect to Γ (resp. G), see section 2 for the definition. The main conjecture of Iwasawa theory relates the algebraic invariants attached to the dual Selmer group of an elliptic curve with the p-adic L-function, which interpolates values of the complex L-functions. Vatsal [18] and Greenberg-Vatsal [8] study congruence properties for complex L-values as well as for the p-adic L-functions attached to E 1 and E 2 . On the algebraic side, the main conjecture in conjunction with work of Schneider [15] and Perrin-Riou would then predict congruences for the Euler characteristic of the Selmer group. For an elliptic curve E with algebraic rank zero, denote the Euler characteristic of the Selmer group of E over the cyclotomic Z p -extension by χ(Γ, E) (cf. [5, chapter 3]). When E has algebraic rank zero, the Euler-characteristic χ(Γ, E) coincides with the truncated Euler-characteristic χ t (Γ, E). Shekhar and the second author [16] deduced congruence results for the Euler characteristic of pairs of congruent elliptic curves E 1 and E 2 of rank zero. In particular, they deduce that if χ(Γ, E 1 ) = 1, then χ(Γ, E 2 ) = 1. Let m be a p-power free integer coprime to p as well as to the conductors of both elliptic curves. For the false Tate curve extension defined by [16,Theorem 3.4] if χ(G, E 1 ) = 1 then so is χ(G, E 2 ) = 1. In this article, analogous results for truncated Euler characteristics in the positive rank setting are proved. We stress that our methods are different from those in [16], and hence yield another proof of the results in the rank zero case as well. Generalizations of the results in this paper to modular forms and abelian varieties over arbitrary number fields are currently being investigated. The p-adic version of the Birch and Swinnerton-Dyer conjecture has been studied, for instance in [1], [12] and [15]. The results in this paper for the truncated Euler characteristic yield interesting consequences for the p-adic Birch and Swinnerton-Dyer conjecture. Let E be an elliptic curve over Q with good ordinary reduction at p for which E[p] is irreducible as a Galois module. The p-adic L-function, denoted by L(E/Q, T ), is a power series in Z p [[T ]]. The characteristic series of the dual Selmer group of E is denoted by f alg E (T ). The main conjecture predicts that L(E/Q, T ) coincides with f alg E (T ) up to a unit in Z p [[T ]]. The p-adic Birch and Swinnerton-Dyer conjecture predicts that the order of vanishing of L(E/Q, T ) at T = 0 is equal to the rank of E(Q) and that there is an exact formula for the leading coefficient of L(E/Q, T ). The leading coefficient is predicted to be equal to p −g ρ p (E), where (see Proposition 2.3). In the above formula, R p (E/Q) is the p-adic regulator of E, defined as the determinant of the p-adic height pairing studied in [12], [14] and [15]. When the rank of E(Q) is equal to zero, the p-adic regulator is set to be equal to 1. When the rank of E(Q) is positive, the p-adic regulator is conjectured to be nonzero. The term τ (E) : being the subgroup of l-adic points with nonsingular reduction modulo l. The termẼ is the reduced curve at p. 
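The displayed formula for ρ_p(E) did not survive extraction and is not reproduced here; the auxiliary quantities it involves can, however, be written out. The block below restates in LaTeX the definitions given in the surrounding prose (the Tamagawa-type local factors and the p-adic regulator); the basis notation P_1, ..., P_g is the only symbol introduced beyond the text.

```latex
% Local Tamagawa factors and their product (E_0(Q_l) denotes the subgroup of
% points with nonsingular reduction modulo l):
\[
  c_l(E) := \#\bigl( E(\mathbb{Q}_l) / E_0(\mathbb{Q}_l) \bigr),
  \qquad
  \tau(E) := \prod_{l} c_l(E).
\]
% The p-adic regulator, as the determinant of the p-adic height pairing taken
% on a basis P_1, \dots, P_g of E(\mathbb{Q}) modulo torsion (the pairing is
% assumed non-degenerate, as in condition (6) above):
\[
  R_p(E/\mathbb{Q}) := \det \bigl( \langle P_i, P_j \rangle_p \bigr)_{1 \le i, j \le g}.
\]
```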
It is shown by Perrin-Riou and Schneider that if the p-adic regulator of E is not zero, then the order of vanishing of f alg E (T ) is equal to the rank of E(Q) and the leading term of f alg E (T ) is equal to p −g ρ p (E), up to a p-adic unit. As a consequence, the p-adic Birch and Swinnerton-Dyer conjecture for E follows from the main conjecture for E. This is where the truncated Euler characteristic intervenes. The truncated Euler-characteristic χ t (Γ, E) is equal to the leading coefficient of f alg E (T ) up to a unit (see [19,Lemma 2.11]). As a consequence of the results of Schneider and Perrin-Riou mentioned above, χ t (Γ, E) = p −g ρ p (E). Let E 1 and E 2 be elliptic curves satisfying conditions (1) to (6). In particular, E 1 [p] and E 2 [p] are isomorphic as Galois representations. We show that there is an explicit relationship between χ t (Γ, E 1 ) and χ t (Γ, E 2 ) (see Theorem 3.3) which further relates p −g ρ p (E 1 ) and p −g ρ p (E 2 ) (see Corollory 3.4). Next we consider the coefficient of T g of the p-adic L-function L(E i /Q, T ) which is given by The p-adic Birch and Swinnerton-Dyer conjecture predicts that Using the results of Perrin-Riou and Schneider, which related the latter term p −g ρ p (E i ) to the truncated Euler-characteristic, our results would translate under the p-adic Birch Swinnerton-Dyer conjecture, to a congruence relation between L (g) (E 1 /Q, 0) and L (g) (E 2 /Q, 0). However, we show that this relationship is indeed satisfied independent of the p-adic Birch and Swinnerton-Dyer conjecture (see Theorem 3.7), thereby providing evidence for the conjecture. We provide a sketch of the methods used in this manuscript. The set of primes Σ 0 is chosen to consist of exactly those primes at which E 1 or E 2 have bad reduction. One may choose a larger set, however, results are optimal for this set. Greenberg and Vatsal in [8] consider the Σ 0 -imprimitive Selmer group over the cyclotomic Z p -extension, which we denote by Sel Σ 0 (E/Q cyc ) (see section 2 for the definition). Set f alg E i ,Σ 0 (T ) for the characteristic series of Sel Σ 0 (E i /Q cyc ) and denote by µ alg E i ,Σ 0 and λ alg E i ,Σ 0 the µ and λ-invariants of f alg E i ,Σ 0 (T ) respectively. Greenberg and Vatsal show that (see the discussion on [8, pp. 43] preceding Remark (2.10)). Using these relations, it becomes possible to deduce that the coefficient of is p-adic unit if and only if that of T g in f E 2 ,Σ 0 (T ) is a p-adic unit (see the proofs of Corollory 2.7 and Theorem 3.3). By the results of Schneider and Perrin-Riou discussed earlier, the order of vanishing of f alg E i (T ) at T = 0 is g, and its leading coefficient is equal to χ t (Γ, E i ), up to a p-adic unit. Setting Our results are proved independent of the main conjecture. Note that the main conjecture is known in this setting for E i if there is a prime q = p such that q||N i and the residual representationρ E i ,p on E i [p] is ramified at q. This is the celebrated theorem of Skinner and Urban [17]. We stress that this additional condition is not imposed in proving our results. Leveraging results proved for the cyclotomic extension allows us to prove analogous results for other p-adic Lie-extensions (Theorem 4.3 for the false-Tate extension and Theorem 4.5 for certain GL 2 -extensions). Acknowledgements: The authors would like to thank Ravi Ramakrishna, Sudhanshu Shekhar and Christian Wuthrich for helpful discussions. The second author gratefully acknowledges support from NSERC Discovery grant 2019-03987. 
Preliminaries Let p be a prime number and assume that p ≥ 5. Let E be an elliptic curve over Q with good ordinary reduction at the prime p and E p ∞ denote the Galois-module of p-power division points on E. Let Σ be a finite set of primes containing p and the primes at which E has bad reduction. 2.1. Selmer groups and their Characteristic Polynomials. Denote by Q Σ the maximal extension of Q in which all primes l / ∈ Σ are unramified, and denote by G Q,Σ the Galois group of Q Σ over Q. Let L ⊂ Q Σ be a subfield, we shall be concerned with the cyclotomic Z p -extension or an admissible p-adic Lie extension. Set G Σ (L) := Gal(Q Σ /L), the Selmer group Sel(E/L) := kerλ Σ (L). where H l (L) is taken as in [8, pp. 23]. Let Σ 0 ⊆ Σ be a subset of primes which contains the primes at which E has bad reduction and does not contain p. In section 3, we shall specialize Σ 0 further. The Σ 0 -imprimitive (or non-primitive) Selmer group Sel Σ 0 (E/L) is the kernel of the localization map Let G be a profinite group, the Iwasawa algebra Λ(G) is defined as where the inverse limit is taken with respect to open subgroups of G. Put Γ = Gal(Q cyc /Q) and set Λ to be equal to the Iwasawa algebra Λ(Γ). Note that for any extension L/Q, finite or infinite, the Selmer groups Sel(E/L) and Pontrjagin duals of the Selmer and Σ 0 -imprimitive Selmer groups respectively. It follows from a deep theorem of Kato [10] that the dual Selmer group X(E/Q cyc ) is a torsion Λ-module. This is equivalent to the localization map λ Σ (Q cyc ) being surjective (cf. [4, Lemma 2.1]). It is an easy result that X(E/Q cyc ) is a finitely generated Λ-module. By the classification theorem of finitely generated torsion Λ-modules, there is a pseudo-isomorphism Here, one identifies Λ with the formal power series ring Z p [[T ]] after making a choice of a topological generator γ of Γ and letting T := γ − 1. The f i (T ) are irreducible monic polynomials all of whose non-leading coefficients are divisible by p. Such polynomials are called distinguished polynomials. The algebraic Iwasawa invariants are defined by The characteristic polynomial of X(E/Q cyc ) is defined by It is the unique element generating the characteristic ideal of X(E/Q cyc ) which is a power of p times a distinguished polynomial. It is expected that when E is an elliptic curve for which E[p] is irreducible, then µ alg E = 0. It is well known that the condition µ alg E = 0 depends only on the residual representation E[p] (see [8, pp. 19]). That is, if two elliptic curves E 1 and E 2 over Q with good ordinary reduction at p have isomorphic residual representation at p, then µ alg E 1 = 0 if and only if µ alg E 2 = 0. The λ-invariant λ alg E need not be zero and moreover, need not depend on the residual representation (see [8, pp. 22]). We define Iwasawa invariants µ alg E,Σ 0 and λ alg E,Σ 0 associated to the Σ 0 -imprimitive dual Selmer group [8, pp. 23]. The characteristic polynomial of the Λ-module There is a short exact sequence As a result, It is related to λ alg E according to the formula [8, equation (7)] There is a similar story for p-adic L-functions. Let g denote the algebraic rank of E and Ω E denote the the real Néron period of E. For any even Dirichlet character ρ, it is well known that L(E/Q, ρ, 1)/Ω E ∈Q p . Recall that in the previous subsection, a topological generator γ ∈ Γ has been fixed. Let µ p ∞ denote the Galois module of p-torsion roots of unity. Given ξ ∈ µ p ∞ , let m be the integer such that the order of ξ is p m−1 . 
Associate to ξ a character ρ : Γ → µ p ∞ of finite order defined by ρ(γ) = ξ. Let α p ∈Q p denote the unit root of the characteristic polynomial 1 − a p (E)X + pX 2 . Mazur and Swinnerton-Dyer associate to E an element L(E/Q, T ) ∈ Λ ⊗ Q p satisfying the property Here, τ (ρ −1 ) denotes the Gauss-sum of the character ρ −1 . Let χ be the p-adic cyclotomic character and let κ = χ ↾Γ : Γ Let P l (E/Q, ρ, T ) be the local-factor defined as follows If l is a prime of bad reduction then When ρ is the trivial character it is dropped from the notation. The local Euler The function L(E/Q, T ) is modified to a non-primitive version L Σ 0 (E/Q, T ) for which [8] and [11]). For l ∈ Σ 0 , let γ l denote the Frobenius automorphism of l in It is well known that if E[p] is irreducible as a Galois module, then L(E, T ) ∈ Λ (see [8,Proposition 3.7]). By the Weierstrass Preparation theorem, where u(T ) is a unit and g(T ) a distinguished polynomial. Let µ anal E := µ as above and f anal Under additional hypotheses, this has been proved by Skinner and Urban [17]. In this manuscript we do not assume these additional hypotheses and the results are independent of the celebrated theorem of Skinner-Urban. One has analogous definitions for analytic invariants associated to the Σ 0imprimitive p-adic L-function L Σ 0 (E, T ). For l ∈ Σ 0 , it is the case that P l generates the characteristic ideal of the Pontryajin dual of H l (Q cyc ) [ . As a result, if the main conjecture is true for E 1 and the µ-invariant for E 1 is zero, then the same is true for E 2 . Generalized Euler Characteristics. Recall that an admissible p-adic Lie-extension Q ∞ /Q is a Galois extension for which The truncated G-Euler characteristic is a variation of the above definition. Denote by The truncated Euler characteristic χ t (Γ, E) is, up to a p-adic unit, the leading term of the p-adic L-function L p (E, s) (cf. Theorem 3.5) and is therefore a p-adic integer. LetẼ denote the reduced curve at p. For a prime l, we let c l (E) := #(E(Q l )/E 0 (Q l )) and set τ (E) := l c l . Two p-adic integers a ∼ b if a and b are not zero and a/b is a p-adic unit in Z p . In the rank zero case, if Sel(E/Q) is finite, the Euler characteristic χ(Γ, E) is related to the Birch Swinnerton-Dyer exact formula as follows In the positive rank case, the p-adic height pairing (cf. [14] and [15]) plays a role. For an elliptic curve E over Q with positive rank, the p-adic height pairing is conjectured to be non-degenerate (cf. [15]). The p-adic regulator R p (E/Q) is the determinant of the p-adic height pairing. Proposition 2.3. (cf. [4, section 3] and [12]) Assume that X(E/Q)[p] is finite, that the residual Galois representation on E[p] is irreducible, that E has good ordinary reduction at p and that If the order of X(E/Q)[p] is known to be finite and the p-adic regulator of E is non-zero, then it is known that the order of vanishing of f alg E (T ) at T = 0 is equal to g, the algebraic rank of E. This is a result of Schneider and Perrin-Riou (see for instance [15, Theorem 2 ′ ]). These conditions are imposed on the elliptic curves E 1 and E 2 . The truncated Euler characteristic is related to the characteristic series f alg E (T ). Write f alg E (T ) = T g g E (T ) where g E (0) = 0. The following is a direct consequence of [19, Lemma 2.11]. Lemma 2.4. Assume that E is an elliptic curve over Q for which (1) E has good ordinary reduction at p, (2) E[p] is irreducible as a Galois module, (3) X(E/Q)[p] is finite, the p-adic regulator of E is non-zero. 
Then, χ t (Γ, E) = |g E (0)| −1 p . In particular if E is an elliptic curve satisfying the above conditions, it follows that χ t (Γ, E) = p N for some N ∈ Z ≥0 . Definition 2.5. Let E be an elliptic curve over Q and Σ 0 a set of primes containing the primes at which E has bad reduction and not containing p. Set Φ E := l∈Σ 0 |L l (E, 1)| p . Here, |L l (E, 1)| p is set to be equal to 0 if L l (E, 1) −1 = 0. Writing f alg E,Σ 0 (T ) = T g g E,Σ 0 (T ), the following is a direct consequence of Lemma 2.4. Lemma 2.6. Let E be an elliptic curve satisfying the conditions of Lemma 2.4. Then p is taken to be 0 if g E,Σ 0 (0) = 0. Corollary 2.7. Let E be an elliptic curve satisfying the conditions of Lemma 2.4. Then Φ E × χ t (Γ, E) = 1 if and only if µ alg E = 0 and λ E,Σ 0 = g. Proof. Suppose that Φ E × χ t (Γ, E) = 1, then g E,Σ 0 (T ) is a unit. As a result, f alg E,Σ 0 (T ) is T g times a unit. It follows that µ alg E,Σ 0 = 0 and that f alg E,Σ 0 (T ) is a distinguished polynomial. As a result, f alg E,Σ 0 (T ) = T g and in particular, λ E,Σ 0 = g. Finally, we point out that since each factor P l is not divisible by p and consequently, µ alg E = µ alg E,Σ 0 = 0. Conversely, suppose that µ alg E = 0 and that λ alg E,Σ 0 = g. Since f alg E,Σ 0 (T ) is divisible by T g it follows that f alg E,Σ 0 (T ) = p m T g , where m = µ alg E,Σ 0 . However, µ alg E,Σ 0 = µ alg E and the result follows. Congruences over the Cyclotomic extension This section proves congruences for truncated Γ-Euler characteristics. For i = 1, 2, let f i be the eigencuspform associated to E i . From here on in, let Σ 0 be the finite set of primes l at which either E 1 or E 2 has bad reduction. Since both E 1 and E 2 have good reduction at p, the set Σ 0 does not contain p. Proof. Recall that the elliptic curves E 1 and E 2 have good ordinary reduction at p and therefore p / ∈ Σ 0 . Let l ∈ Σ, the value L l (E i , 1) −1 = l −1 (l + β i (l) − a l (f i )) is a p-adic unit if and only if p does not divide l + β i (l) − a l (f i ). The value We stress that our results are proved without stipulating condition (⋆) for E 1 or E 2 , however, stipulating condition (⋆) simplifies the results. In our numerical examples we consider cases when (⋆) is satisfied and otherwise. Proof. It follows from Corollory 2.7 that On the other hand, it is shown by Greenberg and Vatsal that if As a result, if in addition to conditions (1) to (6), condition (⋆) is satisfied, then The following corollary is a direct implication of the above theorem and the p-adic Birch and Swinnerton-Dyer formula (cf. Proposition 2.3). We note that E i (Q)[p] = 0 for i = 1, 2 since the Galois representations on E i [p] are assumed to be irreducible. Corollary 3.4. Let E 1 and E 2 be elliptic curves satisfying conditions (1) to (6). Then is a p-adic unit if and only if is a p-adic unit. The following Proposition is a direct consequence of Lemma 2.4. There are analytic analogs of Theorem 3.3 which we discuss. The following is a result of Greenberg and Vatsal. Theorem 3.6. [8, Theorem 3.10] Let E 1 and E 2 elliptic curves satisfying the conditions (1) to (6). There exists a unit w ∈ Z × p for which Vatsal (cf. [18,Corollary 1.11]) also deduces congruences for special values of complex L-functions attached to eigenforms whose Fourier coefficients are congruent. Such congruences are interpolated by congruences of p-adic L-functions. The following is the analytic version of Theorem 3.3. Theorem 3.7. There exists a unit u ∈ Z × p such that there is a congruence Proof. 
As mentioned previously, under the assumptions on E 1 and E 2 , the order of the zero of f alg E i (T ) is equal to g (cf. [15,Theorem 2]). It is a well known result of Kato that f alg The assertion follows from Theorem 3.6. . Let E be an elliptic curve with good ordinary reduction at p andẼ be the reduced curve at p. The extensions Q cyc and Q(µ p ∞ ) are deeply ramified extensions (see [2]). Suppose F ∞ /Q is a deeply ramified extension. Let l be a prime and w|l a prime of F ∞ above l. If l = p, there is a canonical isomorphism This follows from a standard Kummer theory argument. If l = p, which is proved for instance in [2, Proposition 4.8]. Lemma 3.8. Let E be an elliptic curve over Q with good ordinary reduction at p. There is a canonical isomorphism induced by restriction Proof. Consider the following diagram We show that g is an isomorphism and that h is injective. An application of the Snake lemma implies that f is an isomorphism. First let us show that g is an isomorphism. By inflation-restriction, Since the order of ∆ is coprime to p, ker g = 0. Likewise, and it follows that cok g = 0 and therefore g is an isomorphism. The map , the order of ∆ w is coprime to p. This restriction map fits into the inflation-restriction sequence Since ∆ w has order coprime to p, it follows that H 1 (∆ w , D(Q(µ p ∞ ))) = 0 and therefore h w is injective. This completes the proof of the Lemma. The inflation map fits the inflation-restriction sequence Since the order of ∆ is coprime to p, it follows that H 1 (∆, Sel(E/Q(µ p ∞ ))) = 0 and therefore the inflation map is an isomorphism. Therefore, ψ can be identified with ϕ. By Lemma 3.8, the map ϕ can be identified with the map Putting it all together, Let Γ ′ = Gal(Q(µ p ∞ )/Q), the following congruence is a consequence of Theorem 3.3 and Proposition 3.9. Congruences of G-Euler Characteristics In this section, we prove congruences over G where G = Gal(Q ∞ /Q) is the Galois group of an admissible extension. This is achieved by relating the truncated Euler characteristic over G to that over the cyclotomic extension. The extensions considered will be the false Tate curve extension and the admissible p-adic Lie-extension arising from the p ∞ -torsion points of the elliptic curves E i , for i = 1, 2. In the false Tate curve case, the corresponding Galois group is a semidirect product Z × p ⋉ Z p and in the other case it is a finite index subgroup of GL 2 (Z p ). 4.1. Congruences for the False Tate Curve Extension. Let m be a positive integer coprime to Np, where we recall that N is the product of the conductors of E 1 and E 2 . Let Q ∞ be the false Tate curve extension Q ∞ := Q(µ p ∞ , m 1 p ∞ ). Our assumptions imply that the reduction type of E does not change in any number field extension of Q contained in Q ∞ . In particular, if w is a prime above l in Q(µ p ) then E has the same reduction type at l as well as w. Recall that G := Gal(Q ∞ /Q) and H := Gal(Q ∞ /Q cyc ). Let Γ ′ = Gal(Q(µ p ∞ )/Q) and identify G/H ≃ Γ. Let Σ be a set finite set of primes containing all primes l|mp and all primes l at which either E 1 or E 2 has bad reduction. Let P 0 be the set of primes l = p which are ramified in Q ∞ , this is the set of primes l|m. Recall that it is stipulated that E has good reduction at each prime l|m and as a consequence the set of primes M(E) ⊂ P 0 coincides with the set of primes Σ 3 (E) defined in [16]. Proof. Let E denote any one of the elliptic curves E 1 or E 2 . 
Recall that m is by assumption, coprime to pN E and hence E has good reduction at each prime l|m. By [16,Lemma 2.5], the set of primes M(E) is the set of primes l|m at which p|L l (E, 1) −1 . At each prime l|m, E 1 and E 2 both have good reduction, therefore, a l (f 1 ) ≡ a l (f 2 ) mod p and therefore, On the other hand, by Corollary 3.10, Putting it all together, one obtains the congruence The assertion of the Theorem follows. Kato [10] that Sel(E/Q cyc ) is a cotorsion Λ(Γ)-module. It follows from this that the Selmer group Sel(E/Q ∞ (E)) is a cotorsion Λ(G E )-module (cf. [3]). Let M E be the set of primes at which the j-invariant of E is non-integral. We let G i := G E i . Theorem 4.5. Let E 1 and E 2 be elliptic curves satisfying conditions (1) to (6) as in the introduction. And further assume that at each prime l where E i does not have potentially good reduction, Then, it is the case that If in addition condition (⋆) is satisfied for E 1 and E 2 , then χ t (G 1 , E 1 ) = 1 if and only if χ t (G 2 , E 2 ) = 1. Proof. By Theorem 3.3, By Theorem 4.4, By [13,Proposition 5.5], a prime l ∈ M E i if and only if E i does not have potentially good reduction at l. By our assumption, at every prime l ∈ M E i , we have that L l (E i , 1) is a p-adic unit. The assertion follows from this. Numerical Examples In this short section we discuss some concrete examples which illustrate our results for p = 5. All our computations are aided by Sage. 5.1. Example 1: Let E 1 = 201c1 and E 2 = 469a1. Both elliptic curves have rank 1 and good ordinary reduction at the prime 5. Further, there is an isomorphism of the residual Galois representations E 1 [5] ≃ E 2 [5], at the prime 5, and the conditions (1) to (6) are satisfied. The example shows that χ t (Γ, E i ) = 1 and Φ E i = 1 for i = 1, 2. As a result, This verifies Theorem 3.3. We further discuss results for more general 5-adic Lie-extensions as in Theorem 4.3 and Theorem 4.5. The elliptic curve E 1 has bad reduction at 3 and 67 and E 2 has bad reduction at 7 and 67. Also, note that Therefore for E 1 and E 2 , condition (⋆) is satisfied and as a consequence, By Theorem 3.3, χ t (Γ, E 1 ) = 1 ⇔ χ t (Γ, E 2 ) = 1. The 5-adic regulators Let us consider congruences over extensions cut out by 5 ∞ -torsion points of E 1 and E 2 . For i = 1, 2, all primes l of bad reduction, l − a l (f i ) is not divisible by 5. Since condition (⋆) is satisfied, by Theorem 4.5, By Theorem 4.4, Example 2: This example illustrates the role played by the factors Φ E i for i = 1, 2. In this example, condition (⋆) is not satisfied. Let E 1 and E 2 be the rank 1 elliptic curves E 1 = 37a1 and E 2 = 1406g1. Both curves satisfy conditions (1) to (6). In this example χ t (Γ, E 1 ) = 1, and χ t (Γ, E 2 ) = 5 2 , in particular, the truncated Euler-characteristics for E 1 and E 2 are different. On the other hand, it will be shown that Φ E 1 = 5 2 , and Φ E 2 = 1. This implies that These values are both divisible by 5, as predicted by Theorem 3.3, however, they are different. Hence the truncated Euler characteristic is not determined by the residual representation, only determined up to congruence. We now consider congruences over false Tate curve extensions. The elliptic curve E 1 has bad reduction at 2 and 41, E 2 on the other hand has bad reduction at 2,11 and 41. For any integer m not divisible by 2, 11 and 41 and G the Galois group of the false Tate curve extension G := Gal(Q(µ 5 ∞ , m 1 5 ∞ )/Q). By Theorem 4.3, χ t (G, E 1 ) ≡ χ t (G, E 2 ) mod 5Z 5 . 
Therefore, 5 divides χ_t(G, E_2) for any such m.
2019-10-09T07:30:35.000Z
2019-10-09T00:00:00.000
{ "year": 2019, "sha1": "3fe86d851e3d23e08547c652ea7a07bbfceeef1e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1910.03819", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3fe86d851e3d23e08547c652ea7a07bbfceeef1e", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
203169900
pes2o/s2orc
v3-fos-license
3D Neighborhood Convolution: Learning Depth-Aware Features for RGB-D and RGB Semantic Segmentation A key challenge for RGB-D segmentation is how to effectively incorporate 3D geometric information from the depth channel into 2D appearance features. We propose to model the effective receptive field of 2D convolution based on the scale and locality of the 3D neighborhood. Standard convolutions are local in the image space ($u, v$), often with a fixed receptive field of 3x3 pixels. We propose to define convolutions that are local with respect to the corresponding point in the 3D real-world space ($x, y, z$), where the depth channel is used to adapt the receptive field of the convolution, which makes the resulting filters invariant to scale and focused on a certain range of depth. We introduce 3D Neighborhood Convolution (3DN-Conv), a convolutional operator around 3D neighborhoods. Further, we can use estimated depth to apply our RGB-D based semantic segmentation model to RGB input. Experimental results validate that our proposed 3DN-Conv operator improves semantic segmentation, using either ground-truth depth (RGB-D) or estimated depth (RGB). Introduction Most deep networks specialized on semantic segmentation currently follow a fully convolutional architecture [21,1,20,4,40,5] on 2D images and return pixel-level classifications. However, as shown in [8,21,14], semantic segmentation improves when 3D or depth information is available from specialized hardware. One way is that the local segmentation boundary can be refined by the scene geometry, where there is an occlusion boundary in the 2D image projection. More generally, the global high-level semantics can benefit from the 3D scene distribution by removing possible projection ambiguity. In this work, we investigate if and how we can embed 3D scene information into the 2D convolution in RGB images. Figure 1. The concept of 3D neighborhood and its difference from the 2D neighborhood. Locality from depth: A and B are neighbours in the 2D image but not in 3D space; scale from depth: D is further away than C, so the 3D neighborhood of D is smaller in the 2D image than that of C. In both cases we can find an explicit cue from the depth value. Recently, great progress has been witnessed in deep learning on 3D data, such as voxels [32,3] and point clouds [24,25]. Yet, 3D data face problems which prevent their large-scale or real-time usage. This turns our attention to 2.5D representations in the form of depth maps and RGB-D data, for two reasons. On the one hand, processing 2.5D data is almost as computationally efficient as processing 2D data, in contrast to 3D representations that blow up computation. On the other hand, 2.5D RGB-D data can easily be acquired by either low-cost commercial depth sensors like Kinect or disparity maps from binocular cameras. What is more, in the absence of sensory depth, monocular depth estimation methods [9,8,18,36,10] have recently been able to provide reasonable depth maps, even with RGB images alone. The better availability and efficiency of 2.5D data render them an inexpensive and yet effective solution for incorporating geometry information. A reasonable question, therefore, is how to effectively incorporate depth into a model, so as to learn depth-aware features amenable to scene semantic segmentation. In this paper, we integrate depth into the 2D convolutional operation. We do not just add depth as additional input channels
as usual, but we define a 3D neighborhood and modify the 2D convolutional filters with it. Figure 2. Visualization of the effective receptive fields [22] of our 3DN-Conv (middle) and of standard convolution (right), sampled at three points of the original image (left), across six sequential layers. As illustrated in Figure 1, the two properties of the 3D neighborhood are depth locality and scale. The depth locality comes from locality in the 3D space; the varying scale in the 2D image plane is due to the requirement of scale consistency in 3D. Depth locality and scale are important because they determine the receptive field. Taking the perspective that the convolution operation is a local function, we manage to learn more 3D-aware features by tuning the receptive field of the 2D convolution according to the 3D neighborhood. We present 3D Neighborhood Convolutions (3DN-Conv), in which the receptive fields dynamically adapt to the local depth structure. To be specific, the size of the receptive field is inversely proportional to the depth at the same pixel location, as determined by the rule of scale, and only the region within a certain range of depth is incorporated into the convolution, according to the rule of depth locality. The visualization in Figure 2 supports that the effective receptive field of our 3DN-Conv bears awareness of both properties of scale and depth locality. We also use the depth map returned by the proposed Depth Discriminative Feature Network (D-DFN) for semantic segmentation, to obtain depth-aware pixel-level label predictions. We make the following contributions. First, we propose 3D Neighborhood Convolutions (3DN-Conv), a novel spatially variant and depth-aware convolutional operator. The proposed convolution considers depth as a cue for both locality along the camera z-axis and the receptive field scale of the kernel. To the best of our knowledge, we are the first to propose a convolution operation that explicitly considers both aspects. Experiments show that our 3DN-Conv is more effective than other methods at incorporating depth for semantic segmentation. Second, we propose a depth estimation model, D-DFN, that recovers accurate local depth gradients and sharp 3D edges compared with state-of-the-art depth estimation algorithms. Third, we show that the proposed depth estimation can be successfully combined with RGB-D segmentation models and reach accuracies for which one normally requires depth information from specialized hardware sensors. Related Works RGB-D semantic segmentation RGB-D segmentation extends RGB semantic segmentation [21,1,4,40,5,38]. A widely applied method (e.g. in [21,8,6]) is to encode depth into hand-crafted HHA features [13]: horizontal disparity, height above ground and angle with gravity. Hazirbas et al. [14] use a separate encoder to process the depth channel and fuse the feature maps at every stage. Qi et al. [26] built a 3D k-nearest neighbor graph network on point clouds with features extracted from a CNN. In this work, we focus on using depth to build spatially variant convolutional filters, with the advantage of learning better geometry-aware features by modeling the receptive field of the 2D convolution in accordance with the 3D local neighborhood. Following this trend, Wang and Neumann [34] augment the standard convolution by adding depth similarity as a weight term to consider locality along the depth dimension. Different from [34], depth can also be a cue for scale [23,6]: pixels with larger depth values are processed with convolutions with a smaller receptive field.
Unlike those works, our 3D neighborhood is the first concept framework to explicitly cover both the scale and the locality from depth in theory, which learns better 3Daware features, as we will show experimentally. Supervised monocular depth estimation We denote with monocular depth estimation (MDE) the task to recover the depth map from a single RGB image as input. While handcrafted features and probabilistic graphical models are used for MDE in early years [28,29], recent methods benefit from learned structural deep features for local and global contextual information. Eigen et al. [9,8] proposed a coarse-to-fine network to refine the output depth from a coarse prediction stage by stage. Laina et al. [18] introduced residual block design into fully convolutional network. Xu et al. [36] fuses multi-scale depth output in different stages by Conditional Random Fields (CRFs). Fu et al. [10] proposed ordinal regression as multiple binary classifications for pixel-wise depth, which achieves current state-of-the-art performance. Adaptive Convolutions Our work also relates to generic adaptive convolutions, where local image filters are defined based on a set of different basis functions [17], features of the previous layer [33,2], the correlation of the input or features [35,30], or a different modality [34]. In particular, the most similar variants to our methods are the spatial sampling location methods, where filters are locally adaptive based on the spatial structure of the data itself [7,42,39,27]. In contrast to these works that learn the local filter from the data, our convolution filter is defined using depth as privileged information. 3D Neighborhood Convolution In this section, we describe our 3D Neighborhood Convolution (3DN-Conv) models. 3D neighborhood in RGB-D coordinates Every 2D pixel location on the image frame corresponds to local neighborhood in the 3D space. While a convolutional filter normally has a predefined receptive field, we argue that the spatial extent of the convolutional filter should depend on the 3D neighborhood around the real-world point projected into a particular pixel. To define the 3D neighborhood around an image location, we consider the 3D cube around real-world point p = (p x , p y , p z ), with a radius of σ. Which we subsequently approximate from 2D image coordinates and depth (p u , p v , d), resulting in depth aware 2D filters in image space. Under the constraints of a classic pin-hole camera model, the image coordinates of real-world point p are: where µ relates to the camera focal length. A σ neighborhood in 3D around p can be approximated by the following 2D image neighborhood: Which shows that a 3D cube with size σ corresponds to an image based convolution operator with a receptive field of: which suggest that the receptive field of the 2D kernel should be inversely proportional to the depth d, instead of ∆p u = ∆p v being a predefined filter size of the convolution. The required depth value d can be estimated from the z-buffer of the RGB-D image. The value of depth d is not influenced by camera projection, yet a direct measurement of the 3D world, therefore: This shows that a local neighborhood on the depth channel is defined by the radius of the 3D neighborhood. In the rest of this section we use the insights from Eq. 3 and Eq.4 for the design of our 3DN-Conv operator. 
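For concreteness, the relations invoked in this subsection can be sketched as follows under a pin-hole model with focal-length factor µ and no principal-point offset (the offset-free form is a simplifying assumption made here, not the paper's exact displays):

$$p_u=\mu\,\frac{p_x}{p_z},\qquad p_v=\mu\,\frac{p_y}{p_z},\qquad \Delta p_u=\Delta p_v\approx\frac{\mu\,\sigma}{d},\qquad |d_j-d_i|\le\sigma.$$

In words: the 2D receptive field of the kernel shrinks inversely with the depth d of the centre pixel, while the neighborhood along the depth axis is simply a window of radius σ around the centre pixel's depth.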
Depth locality For clarity of presentation, we first formulate the standard convolutional filter for pixel i as follows: where y i denotes the D-dimensional output vector for pixel i, and x j denotes the C-dimensional input vector for pixel j. Furthermore, f (·) denotes the activation function (e.g. ReLU), b denotes the bias (b ∈ R 1×D ), j ∈ N i enumerates over the spatial neighborhood around pixel i, and n ji denotes the relative position of j with regard to i inside the neighborhood, used to select the relevant slice of the filter W, where for each relative location we learn a filter W ∈ R D×C . Modeling depth locality From Eq. 4 we know that the locality on depth should be restricted to a range of σ around the depth of the centre pixel. To use this as the effective filter size, we use a function that decays with depth as a local window to reweigh the local convolution kernel, which is equivalent to reweighing the local input features. We evaluate different functions for reweighing, see Figure 3. Figure 3. The exponential decay function (orange), used in [34], has too-long tails. We use the Gaussian curve (blue), since it is in accordance with the effective receptive field theory [22]. Intuitively, a step function, which uses a hard threshold on the locality, seems an obvious choice, yet this fails in practice. We argue this is because the 2D receptive field can gradually grow with more convolution kernels, while the receptive field on depth can hardly grow with 2D convolutions. By noting that the effective receptive field follows the shape of a Gaussian distribution, see e.g. [22], we choose to model the window function by a 1-D Gaussian function N (d i , σ), where σ, the size of the expected 3D neighborhood, is used as the standard deviation, i.e. the size of the Gaussian window. This yields the following convolutional filter: In the following we denote this local convolutional filter with: W L nji = L ji W nji . In standard 2D CNNs the effective receptive field of a kernel grows per layer, due to the aggregation of information and the use of pooling layers. Therefore, we scale σ per layer with respect to the size of the 2D convolution kernel, so that the effective 3D neighborhood remains similar. According to Eq. 3, the size of the 2D kernel is proportional to the desired size of the 3D neighborhood. However, in different layers of the ConvNet, as the features are usually downsampled in 2D spatial resolution stage by stage, the relative size of the canonical 2D kernel, which we regard as the size of the 3D neighborhood σ, is actually enlarged as the network goes deeper. So σ varies with layer in the network. The exact scaling factor depends on the used architecture and is clarified in the supplementary material. Neighborhood scale selection We know that the receptive field of the 2D convolution ∆ d should be inversely proportional to the local depth value d, see Eq. 3. In order to incorporate this into our design, we choose to bilinearly resample the convolutional patch, i.e., the 2D local neighborhood N i , to a rescaled version N S i . This yields the following convolutional filter: where N S i is the scaled 2D neighborhood, defined by: in which δ(i, j) denotes the distance of the two points i and j in 2D space, r S is the scaled kernel size, and r 0 is the original kernel size, which usually equals the dilation rate. In practice, we need to set a canonical depth value d 0 as a hyperparameter so that the size of the local convolution patch adapts according to Eq. 10.
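To make the two mechanisms concrete, the following is a minimal NumPy sketch of the response of a 3DN-Conv at a single output pixel: the Gaussian depth-locality reweighting of Eq. 6–7 and the depth-dependent rescaling of the sampling grid of Eq. 9–10. The function name, the nearest-neighbour sampling (the paper uses bilinear resampling), the omission of bias and activation, and the example values of σ and d 0 are our own choices for illustration, not the paper's implementation.

```python
import numpy as np

def dn_conv_at(x, depth, W, i, j, sigma=0.5, d0=2.0, r0=1):
    """3D Neighborhood Convolution response at output pixel (i, j).

    x     : (H, W, C) input feature map
    depth : (H, W)    depth map in metres
    W     : (3, 3, C, D) learned filter for a 3x3 neighborhood
    sigma : radius of the 3D neighborhood (depth-locality window)
    d0    : canonical depth at which the kernel keeps its original size
    r0    : original kernel "radius" (1 -> 3x3 neighborhood)
    """
    H, Wd, _ = x.shape
    d_i = depth[i, j]
    # Receptive field scales inversely with depth (cf. Eq. 10).
    r_s = r0 * d0 / max(d_i, 1e-6)
    out = np.zeros(W.shape[-1])
    for a in range(-1, 2):          # kernel row offset
        for b in range(-1, 2):      # kernel column offset
            # Scaled sampling location (nearest-neighbour instead of bilinear).
            u = int(round(i + a * r_s))
            v = int(round(j + b * r_s))
            if not (0 <= u < H and 0 <= v < Wd):
                continue
            # Gaussian depth-locality weight (cf. Eq. 6-7).
            w_loc = np.exp(-(depth[u, v] - d_i) ** 2 / (2.0 * sigma ** 2))
            out += w_loc * x[u, v] @ W[a + 1, b + 1]
    return out

# Toy usage on random data.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 4))
depth = rng.uniform(1.0, 5.0, size=(8, 8))
W = rng.standard_normal((3, 3, 4, 2))
print(dn_conv_at(x, depth, W, 4, 4).shape)   # (2,)
```

In a full network this reweighting and resampling would be applied at every output location with vectorized or GPU kernels; the per-pixel loop above is only for clarity.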
We include an overview in Figure 4 of how we build our 3DN-Conv to model the scale and depth locality. In practice, we first scale the convolution patch size and sample the scaled neighborhood by Eq. 9 and Eq. 10, and then adapt the local filter to incorporate depth locality by Eq. 6 and Eq. 7. Leverage RGB edges The first few layers of standard convolutional networks extract edge features and other low-level features. While our 3D Neighborhood Convolution incorporates features in accordance with their real-world spatial location, a limitation of looking only at 3D neighbours is that edge-like features may be lost, especially edges caused by depth occlusions, since the two sides of an occlusion edge are not regarded as neighbours by a convolution that is aware of the 3D neighborhood. In order to leverage edge and other low-level features, we combine standard 2D convolution filters for edge features, denoted by W E, with the proposed 3DN-Conv. Note that we deliberately use a separate set of parameters for W E without any weight sharing with W L to ensure that the two kernels learn discriminative features. This joint convolution is only applied in the first stage of convolution layers conv_1 in the ResNet backbone. In-depth comparison with [34] Our proposed 3DN-Conv bears resemblance to the depth-aware convolution operator proposed by Wang and Neumann [34]. Their depth-aware spatially variant convolution reweighs the features to be convolved by an exponential function that measures the difference in depth. We clarify some significant differences: (i) We explicitly model a scaling function for our kernel, while [34] only considers depth similarity. This is the largest difference and traces back to the fact that our motivation differs from [34], as we design our kernel based on modeling the 3D neighborhood. (ii) Our local weight function is a Gaussian depth-locality function that resembles the shape of the effective receptive field, while [34] uses an exponential decay function (see Figure 3). (iii) We use a varying depth-locality window size σ in different layers of the network, while in [34] the window size is fixed throughout the network (see the last paragraph in Section 3.2). (iv) We include the edge convolution component to enhance low-level features. Learning Depth for RGB-D Segmentation In RGB-D segmentation, depth provides extra information, such as 3D boundaries, to eliminate some ambiguities in projected 2D images. Unfortunately, it is not always possible to have sensory depth, as this requires specialized equipment. For this reason, we propose to first estimate depth, followed by semantic segmentation using our 3DN-Conv described in the previous section. The estimated depth map should be locally correct, including fine local details, sharp depth boundaries and consistent depth gradients, in order to guide the semantic segmentation model. Depth Discriminative Feature Network The task of depth estimation shares some common properties with semantic segmentation. For one, both are pixel-wise prediction tasks. Also, both tasks require not only recognizing global contextual information as well as local details, but also a successful strategy for fusing the two. It stands to reason, therefore, that depth estimation and semantic segmentation may benefit from each other when it comes to designing a suitable network architecture.
Inspired by the recently proposed semantic segmentation network Discriminative Feature Network (DFN) [38], we propose a novel model for estimating depth, specialized for encoding local depth structure. In the DFN architecture, the Channel Attention Block (CAB) module is proposed to incorporate features at multiple scales, an idea originating from channel attention [16]. We describe the architecture of our D-DFN. The model structure is illustrated in Figure 5, comprising an encoder and a decoder. The encoder is a ResNet-101 backbone, followed by a global pooling layer for capturing global information. The Residual Refinement Block upconvolves the high-level features towards the output stage by stage. The Channel Attention Block incorporates features from different stages with channel attention. See supplementary material for a more detailed comparison of our D-DFN and the original DFN architecture. Training Loss. We adopt the L 1 loss for the depth error, L depth, where N denotes the total number of pixels, d i the predicted depth value at the i-th pixel and d̂ i the ground truth. We also consider an auxiliary loss, L grad, for incorporating depth gradients and thus learning fine local depth details. This auxiliary loss enforces smoothness by penalizing inconsistent depth gradients [19,15]. The final loss is a weighted combination: L depth + λ grad L grad , with λ grad = 1. Experimental Results We provide an extensive evaluation of our 3DN-Conv. We adopt Deeplab V3 [5] and DFN [38] as our baseline semantic segmentation models. The popular Deeplab V3 is used for most of the experiments and ablation studies, while DFN is used to compare with state-of-the-art methods as it is empirically more powerful for complex scene segmentation. Following [21], we adopt the following common metrics to evaluate semantic segmentation: mean intersection-over-union (mIoU), mean accuracy (mAcc) and pixel accuracy (Acc). For more implementation details we kindly ask readers to refer to the supplementary material. Datasets Evaluation is performed on two RGB-D segmentation datasets: NYUDv2 [31] and KITTI [37]. NYUDv2 contains a total of 1,449 RGB-D image pairs from 464 different scenes. The dataset is divided into 795 images from 249 scenes for training and 654 images from 215 scenes for testing. We use the 40-class segmentation setting [12]. KITTI provides parallel camera and LIDAR data for outdoor driving scenes. We use the semantic segmentation annotation proposed in [37], which contains 70 training and 37 testing images and annotations in 11 classes. Depth-aware convolutions: ablation study We examine the importance of each of the proposed components of 3DN-Conv. We use our implementation of the depth-aware convolution in [34], applied to the same ResNet50-DeeplabV3 model. We evaluate the performance of semantic segmentation on the NYUDv2 dataset. The results are shown in Table 1. We observe the following: First, both depth-aware convolution methods improve over the RGB segmentation baseline by a large margin, as was also noted in [34]. Our improved 3DN-Conv, using W L with a Gaussian window, brings an additional improvement over the exponential decay in [34].
Table 2. Semantic segmentation on KITTI, trained from scratch.
                        mIoU(%)  mAcc(%)
Baseline (RGB only)     35.1     44.1
Depth-aware conv [34]   38.1     48.0
3DN-Conv (this paper)   39.2     50.8
Furthermore, we examine integrating scale adaptation into the proposed depth-aware convolution in two ways.
First, the integration of scale can be done in a discretized manner. The depth intervals are first binned. Then, different dilation rate kernels are considered, similar to [6]. The second way of integrating scale is by considering bilinear rescaling of the convolutional receptive field, as described in Sec. 3.2. With the bilinear rescaling a continuous value for the scale is returned, thus allowing for more fine-tuned convolutions. In either case, a single scale is selected per location. We observe that adding scale adaptation improves the depth-aware convolutions considerably, especially when considering continuous scale values. This is not surprising, as the distribution of depth values is highly non-uniform, and thus it is not straightforward how to bin fairly. Last, when also considering the RGB-only component for the convolution, W E , the performance improves further in all three metrics. We further evaluate on the KITTI dataset [11,37] to examine our method in various types of scenes. As shown in Table 2, our 3DN-Conv outperforms both the RGB baseline and the depth-aware convolution in [34]. We conclude that the proposed depth-aware convolutions improve semantic segmentation, especially when considering adaptive scaling of the receptive field, as well as additional RGB-only filters. Fusing depth & RGB for segmentation Next, we explore the optimal way of integrating depth information for semantic segmentation. Specifically, we consider the following choices: (i) early-fusion of HHA features, (ii) late-fusion of HHA features, (iii) feature reweighting (modulation): feature map rescaled by depth, with a simple linear model, and (iv) feature reweighting (non-local): feature map rescaled by depth, with non-local attention [35]. See supplementary material for the details. For all the aforementioned methods we use the same backbone architecture and training pipeline. We compare the four fusion methods above with the depth-aware convolutions from [34], as well as our 3D Neighborhood Convolution. We report results in Table 3. We observe that the methods which perform convolution-level incorporation of depth, including the depth-aware convolution [34] and our 3DN-Conv, outperform the fusion and reweighting alternatives: the fusion methods simply process the HHA encoding of depth as extra channels in the network, and the assumptions of the feature reweighting method (see supplementary material) are too simple to capture the influence of the depth map. Overall, these ways of incorporating depth into a deep network disregard the essence of the geometry. We conclude that our 3DN-Conv is able to capture better depth-aware features through convolution-level incorporation and an explicit modeling of 3D geometry. Segmentation with estimated depth Next, we evaluate whether estimated depth can be used to improve semantic segmentation, in a similar way to depth returned by specialized sensory equipment. First, we examine the quality of the depth estimations. Then, we examine the benefits of using depth for semantic segmentation. Depth estimation Following [9], we evaluate the depth estimation performance by taking the 304 × 228 center crop out of the downsampled image. As the goal is to use depth as a cue for semantic segmentation, it is important to have accurate local depth estimation.
To this end, we evaluate depth estimation with global and local metrics, that is: (i) root mean squared error (rms), (ii) mean log10 error, and (iii) local gradient error. Whereas the root mean squared error and the mean log10 error focus on the global evaluation of depth estimation, the local gradient error measures how well the local depth details are predicted. We report results in Table 4.
Table 4.
                      Global          Local
                      RMS     log10   grad
Laina et al. [18]     0.573   0.055   0.137
DORN [10]             0.509   0.051   0.140
D-DFN (this paper)    0.528   0.049   0.092
We observe that the proposed D-DFN improves in terms of the global mean log10 error; however, it performs slightly worse in terms of the root mean squared error. We attribute this to the fact that the logarithmic scale normalizes the possible output values into a more reasonable range, in which regression is easier. Further, the proposed method improves the local gradient error by a noticeable 30% compared to [10]. This is important, as for semantic segmentation the local depth structure indicates the presence of semantic boundaries and can help with ambiguities. We corroborate this by showing some qualitative results in Figure 6, where the proposed D-DFN returns smoother outputs and finer local depth details. Depth-aware semantic segmentation Next, we evaluate the performance of using estimated depth for RGB-D segmentation. We report results in Table 5. We make several observations. First, we confirm the findings that ground truth depth improves semantic segmentation. When using the proposed D-DFN to estimate depth and help semantic segmentation, we improve considerably on top of the standard RGB baseline, coming close to the benefits from using ground truth depth. Last, whereas DORN [10] has a lower global RMS error, it leads to notably lower semantic segmentation accuracies. This confirms our hypothesis that for semantic segmentation it is the local depth structures that are important. We conclude that estimated depth with D-DFN leads to improved semantic segmentation accuracy. State-of-the-art comparison Last, we compare our final model with the state-of-the-art. For the state-of-the-art comparisons we rely on the DFN [38] semantic segmentation network instead of Deeplab V3 [5], as DFN yields empirically the best results. The backbone network is ResNet-101 pre-trained on ImageNet. We report results in Table 6.
Table 6. State-of-the-art in RGB-D segmentation with different sources of depth. Our 3D neighborhood convolution successfully leverages the depth estimated by the proposed D-DFN depth estimation network to improve RGB-D semantic segmentation, without requiring sensory depth at inference time.
We make the following observations. For one, the baseline DFN model is close to the top-performing RefineNet [20] when considering only RGB channels as the input. What is more, when relying on estimated depth to help semantic segmentation, the proposed 3D Neighborhood Convolution manages to get very close to models that must rely on ground truth depth, like [34] and ours. This is quite substantial, as the estimated depth is obtained for free, with no extra sensory equipment at test time. Note that [34] rely on a RefineNet [20] with additional pretraining on the ADE20K dataset [41], which contains similar indoor scenes, whereas our models do not require extra pretraining. We also include qualitative results in Figure 7 to show the effectiveness of our 3DN-Conv.
We show that a model with depth incorporated by 3DN-Convs outperforms its baseline model in terms of local edge quality, intra-object consistency and high-level semantic classification. We conclude that 3D Neighborhood Convolution can successfully leverage the depth estimated by the proposed D-DFN depth estimation network to improve RGB semantic segmentation, without sensory depth at inference time. Conclusion In this work we introduce depth-aware convolutions around 3D neighborhoods, which adapt the receptive field of convolutions according to the local depth. Further, we introduce the D-DFN model for estimating depth maps that are locally accurate around semantic boundaries. As a result, we can now use estimated depth to improve RGB-D based semantic segmentation. Results on the two datasets validate that using estimated depth indeed improves semantic segmentation considerably. We conclude that convolutions that are aware of depth locality and scale improve RGB-D semantic segmentation, even when the depth is estimated rather than measured.
2019-09-27T09:03:21.056Z
2019-09-01T00:00:00.000
{ "year": 2019, "sha1": "bdc1abe786f9c9d43f94bc2801fb69b93e95b953", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1910.01460", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "162922367e2cc965c4d061a31bbd037ea22703a7", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
260434269
pes2o/s2orc
v3-fos-license
Beauty or function? The opposing effects of natural and sexual selection on cuticular hydrocarbons in male black field crickets Although many theoretical models of male sexual trait evolution assume that sexual selection is countered by natural selection, direct empirical tests of this assumption are relatively uncommon. Cuticular hydrocarbons (CHCs) are known to play an important role not only in restricting evaporative water loss but also in sexual signalling in most terrestrial arthropods. Insects adjusting their CHC layer for optimal desiccation resistance is often thought to come at the expense of successful sexual attraction, suggesting that natural and sexual selection are in opposition for this trait. In this study, we sampled the CHCs of male black field crickets (Teleogryllus commodus) using solid‐phase microextraction and then either measured their evaporative water loss or mating success. We then used multivariate selection analysis to quantify the strength and form of natural and sexual selection targeting male CHCs. Both natural and sexual selection imposed significant linear and stabilizing selection on male CHCs, although for very different combinations. Natural selection largely favoured an increase in the total abundance of CHCs, especially those with a longer chain length. In contrast, mating success peaked at a lower total abundance of CHCs and declined as CHC abundance increased. However, mating success did improve with an increase in a number of specific CHC components that also increased evaporative water loss. Importantly, this resulted in the combination of male CHCs favoured by natural selection and sexual selection being strongly opposing. Our findings suggest that the balance between natural and sexual selection is likely to play an important role in the evolution of male CHCs in T. commodus and may help explain why CHCs are so divergent across populations and species. | INTRODUC TI ON Few biologists today would challenge the importance of sexual selection to the evolutionary process. There is good reason for this general agreement: there is an abundance of convincing evidence from theoretical models (e.g. Kirkpatrick, 1996;Lande, 1981Lande, , 1982Rowe and Houle, 1996), comparative studies (e.g. Arnqvist, 1998;Cooney et al., 2019;Wickman, 1992) and experimental evolution experiments conducted in the laboratory (e.g. Hawkes et al., 2019;House et al., 2013;Hunt et al., 2012) showing that sexual selection can drive the evolution of male sexual traits, often over relatively short time frames. Moreover, sexual selection gradients in the wild are stronger than natural selection gradients, suggesting that sexual selection is also likely to be an important evolutionary force in natural populations Kingsolver et al., 2001). However, despite some evidence that sexual traits may evolve faster than nonsexual traits (Pitchers et al., 2014), examples of contemporary evolution by sexual selection in natural populations appear to be rare (Svensson & Gosden, 2007). Although numerous explanations have been provided to explain this paucity of examples, such as a regime of fluctuating selection (Siepielski et al., 2013; but see de Villemereuil et al., 2020) and/or genetic constraints (Hansen & Houle, 2004;Merilä et al., 2001;Pitchers et al., 2014), an oftenneglected explanation is how changes in the interaction between natural and sexual selection can alter how sexual traits evolve (Svensson & Gosden, 2007). 
Most (if not all) sexual traits are also likely to be targeted by natural selection meaning that how these modes of selection interact is key to understanding how sexual traits will evolve (Svensson & Gosden, 2007). Indeed, the interaction between natural and sexual selection is built into most theoretical models of sexual selection, although the exact nature of this interaction varies across models (Mead & Arnold, 2004). Classically, it has been argued that sexual selection is opposed by natural selection, at least once male sexual traits have become sufficiently elaborated (Mead & Arnold, 2004). This occurs because sexual selection favours the greater elaboration of male sexual traits and female preferences for them, but both become increasingly costly to their bearer, reducing their survival (Fisher, 1930;Kirkpatrick, 1982;Kirkpatrick & Ryan, 1991;Lande, 1981;Price et al., 1993). Natural selection will therefore act as an evolutionary 'brake' that prevents the continued evolution of male sexual traits once they pass their naturally selected optima (Fisher, 1930;Kirkpatrick, 1982;Lande, 1981). In contrast, good-genes models of sexual selection propose that natural and sexual selection are reinforcing, with both modes of selection favouring males with the highest fitness (Mead & Arnold, 2004). This argument is based on male sexual traits being honest signals of genetic quality (Zahavi, 1975) so that females preferring males with more elaborate sexual traits produce high-quality offspring that are both more attractive and have higher viability, thereby gaining indirect benefits from their mate choice (Iwasa et al., 1991;Iwasa & Pomiankowski, 1999;Kirkpatrick & Ryan, 1991). It is possible, however, that the interaction between natural and sexual selection will be more complex than this simple dichotomy, with the outcome varying across populations (Long et al., 2012) or with different environmental conditions (e.g. Parrett & Knell, 2018). Cuticular hydrocarbons (CHCs) are an excellent trait for studying the interaction between natural and sexual selection as this trait has a clearly defined role in both processes (Blomquist & Bagnères, 2010). CHCs are organic compounds that are deposited as a waxy layer on the surface of the cuticle in most terrestrial arthropods, and the total abundance and structural composition of these compounds have been shown to play a key role in reducing evaporative water loss (Blomquist & Bagnères, 2010). Indeed, a large number of laboratory studies have now shown that the total abundance and/or the proportion of longer-chained CHCs increase when individuals are maintained at a warmer temperature (e.g. Sharma et al., 2012;Wagner et al., 2001;Woodrow et al., 2000) and a lower humidity (e.g. Sprenger et al., 2018;Woodrow et al., 2000), as well as in populations artificially selected for desiccation resistance (e.g. Ferveur et al., 2018;Gibbs et al., 1997;Kwan & Rundle, 2009). Similar changes in CHC composition with temperature and humidity have also been shown across some natural populations (e.g. Buellesbach et al., 2018;Rajpurohit et al., 2017Rajpurohit et al., , 2020, but not all (e.g. Frentiu & Chenoweth, 2010;Leeson et al., 2020). Although far fewer studies exist, there is also direct evidence that an increase in longer-chained CHCs reduces evaporative water loss through the cuticle (e.g. Gibbs et al., 1997;Toolson, 1982). 
More recently, CHCs have also been shown to play an important role in sexual selection, especially in the context of female mate choice (Wyatt, 2003). Whereas in some species females prefer one or a small number of CHC components (e.g. Ferveur & Sureau, 1996;Grillet et al., 2006;Snellings et al., 2018), it is far more common for females to prefer a combination of different male CHCs Steiger et al., 2015). However, the exact combination of male CHCs that females prefer varies across species, with some preferring shorter-chained (and more volatile) male CHCs (e.g. Simmons et al., 2014;Steiger et al., 2013Steiger et al., , 2015, whereas others prefer more specific combinations that appear unrelated to chain length (e.g. Hunt et al., 2012;Rundle et al., 2005;Thomas & Simmons, 2009a). Despite the independent effects of natural and sexual selection on male CHCs having been well documented, surprisingly few studies have directly examined the interaction between these two modes of selection. A notable exception to this is work on two species of Drosophila (Blows, 2002;Hine et al., 2011;Sharma et al., 2012;Skroblin & Blows, 2006). Blows (2002) used a factorial design to manipulate the intensity of natural and sexual selection in experimental populations of D. serrata to show that the evolutionary response of male CHCs was greater when natural and selection operated together compared to when they operated alone, suggesting that these modes of selection are reinforcing. However, a subsequent multivariate selection analysis on breeding values for male CHCs in this species found that the direction of natural selection opposed the direction of sexual selection, at least for the subset of male CHCs examined (Skroblin & Blows, 2006). Furthermore, artificial index selection on the vector of male CHCs that are most attractive to females resulted in the rapid evolution of male CHCs for the first seven generations but further evolution beyond this point was halted presumably due to the opposing effects of natural selection (Hine et al., 2011). In a similar factorial design to Blows (2002), natural selection, sexual selection and their interaction were all shown to influence the evolution of male CHCs in D. simulans (Sharma et al., 2012). Importantly, some combinations of male CHCs only evolved in the direction of natural selection when sexual selection was relaxed, suggesting that these modes of selection are opposing (Sharma et al., 2012). Although the weight of evidence from these Drosophila studies suggests that the effects of natural and sexual selection on male CHCs are opposing, this outcome is not universal. One reason for this may be the different ways that natural selection has been applied (or measured) in these studies: in D. serrata, natural selection was measured (Skroblin & Blows, 2006) or manipulated (Blows, 2002) via female productivity, whereas natural selection was manipulated via temperature in D. simulans (Sharma et al., 2012). That is, none of these studies directly measured or manipulated evaporative water loss, which is the main proposed target of natural selection on CHCs. Consequently, how the interaction between natural and sexual selection shapes the evolution of male CHCs very much remains an open empirical question that requires more studies that directly examine evaporative water loss and encompass a broader range of arthropod species. Field crickets have proved important models for testing sexual selection theory (Zuk & Simmons, 1997). 
By far, the majority of empirical research on field crickets has focussed on the male acoustic signal (or call), including what call properties females prefer (e.g. Bentsen et al., 2006;Brooks et al., 2005) and how they benefit from this choice (e.g. Ting et al., 2017;Wagner & Harper, 2003), as well as how this sexual signal is countered by natural selection through predation and attack by natural enemies (e.g. Hedrick, 2000;Sakaluk & Belwood, 1984;Wagner, 1996). In contrast, considerably less is known about the role that CHCs play in sexual selection, with our current knowledge limited to two field cricket species: the Australian field cricket (Teleogryllus oceanicus) and the decorated cricket (Gryllodes sigillatus). Females of both species prefer certain combinations of male CHCs to others and this exerts significant nonlinear sexual selection on this male sexual trait (Simmons et al., 2013;Steiger et al., 2015;Thomas & Simmons, 2009b). Male CHCs are heritable in both species (Thomas & Simmons, 2008;Weddle et al., 2012) and female T. oceanicus preferentially mate with males with a more dissimilar CHC profile to their own (and therefore less likely to be related; Thomas & Simmons, 2011) but a similar pattern does not occur in G. sigillatus (Steiger et al., 2015). Males and females physically transfer CHCs to each other during mating and are able to detect these subtle changes to adjust their mating behaviour (Capodeanu-Nägler et al., 2014;Thomas & Simmons, 2009a;Weddle et al., 2013). In G. sigillatus, females are able to recognize their own CHCs transferred to a male during copulation via a system of 'online processing' and use this information to avoid mating with previous mates (Capodeanu-Nägler et al., 2014;Weddle et al., 2013). In T. oceanicus, males are able to detect the CHCs from rival males on a female and adjust the proportion of viable sperm in their ejaculate in accordance with the risk of sperm competition (Thomas & Simmons, 2009a). In contrast, we know far less about the effects of natural selection on male CHCs in these species. A recent study on T. oceanicus showed a negative genetic correlation between the combinations of male CHCs that confer attractiveness and desiccation resistance (Berson et al., 2019). While this suggests that the sexually and naturally selected functions of CHCs are opposing in this species, this study only included the seven most abundant CHCs (of the 22 possible CHCs for this species, Thomas & Simmons, 2009a) and because these were measured after attractiveness and desiccation resistance assays, the potential exists for these assays to directly influence male CHCs (Berson et al., 2019). Clearly, more work is still needed to understand how the interaction between natural and sexual selection has shaped male CHCs in these species, as well as in field crickets more generally. Here, we examine the role that CHCs play in restricting evaporative water loss (EWL) and enhancing mating success in male black field crickets, Teleogryllus commodus. While sexual selection has been well studied in this species (e.g. Bentsen et al., 2006;Brooks et al., 2005;Bussière et al., 2006;Hall et al., 2008Hall et al., , 2013Hunt et al., 2004), we currently do not know if male CHCs play a role in this process or the extent to which CHCs are also shaped by natural selection. Using a custom-built desiccation chamber, we directly measured the EWL of a random sample of males. In a second random sample of males, we measured mating success using 'nochoice' mating trials. 
We used solid-phase microextraction (SPME) to sample the CHC profile of each male prior to these measurements to ensure that any possible contaminants or the physical transfer of CHCs did not influence our results. We conducted multivariate selection analysis (Lande & Arnold, 1983) on these data to characterize the strength and form of natural selection (acting via EWL) and sexual selection (acting via mating success) operating on male CHCs. We then formally compare these modes of selection to determine if natural and sexual selection on male CHCs are opposing or reinforcing in this population. We discuss how the interaction between natural and sexual selection is likely to shape the evolution of male CHCs in T. commodus, as well as the more general diversification of CHCs across populations and species. | Animals and husbandry The T. commodus used in this study were collected from the wild in March 2009 from Smith's Lake, New South Wales, Australia (32.3871° S, 152.4109° E), and used to establish a large laboratory culture. Approximately 400 gravid females were collected and placed into a single 90-L plastic container with cardboard egg carton for shelter, water in 50-mL test tubes plugged with cotton wool, cat biscuits for food (Purina Go Cat Senior©) and a total of eight egg pads for oviposition. Each egg pad consists of moist cotton wool provided in a Petri dish (90 mm diameter). Each week we removed the egg pads, express couriered them to our insect facility at the University of Exeter (Cornwall Campus) and replaced them with fresh egg pads. This process was repeated for three consecutive weeks. Nymphs were collected from egg pads on the day they hatched and distributed at random between four large 110-L plastic culture containers. Each container was provided with an abundance of cardboard egg carton for shelter, water ad libitum in 50-mL test tubes plugged with cotton wool and a 50% mixture of cat biscuits (Purina Go Cat Senior©) and rat pellets (SDS Diets). Culture containers were housed in a constant temperature room set to 28 ± 2°C and a 13 h light: 11 h dark cycle. Culture containers were cleaned and fresh food and water bottles were provided weekly. When newly eclosed adults were observed in each culture container, eight egg pads were added for oviposition. Once sufficient nymphs were collected to establish four new culture containers (~2000 nymphs per container), adults were killed by freezing at −20°C to prevent overlapping generations. To preserve genetic variation in our culture, nymphs were distributed at random between culture containers each generation to enforce gene flow, and the number of breeding adults in each culture container was always kept high (~500 crickets). At the time of our experiment, crickets had been maintained according to this protocol for a total of 12 generations. | Experimental procedure A total of 2000 nymphs were taken at random from our culture on the day they hatched from eggs and established in individual containers (5 × 5 × 5 cm) provided with a single piece of cardboard egg carton for shelter, a small 5 mL tube plugged with cotton wool for water and ground cat biscuit provided in the lid of a 1.5 mL Eppendorf for food. Each container was cleaned and fresh food and water were provided weekly. After 3 weeks, we replaced the ground food with two cat biscuits per cricket. When crickets reached fourth instar, containers were checked daily for eclosion to adulthood. 
These crickets were maintained in the same constant temperature room, and therefore the same temperature and light conditions, as our cultures. On each day of eclosion, half of the males were randomly allocated to measure mating success and the remaining half to measure EWL. For both mating success and EWL, adult males were measured at 8 days of age and because they were reared in individual containers, were all virgin and had not interacted physically with other crickets at the time of measurement. A random subset of the adult females that were reared were used to assess male mating success and these females were also 8-day-old virgins and socially naive when used. In total, we measured the EWL of 300 adult males and the mating success of an additional 300 adult males (total n = 600 males). | Measuring male evaporative water loss We used a custom-built device that enabled us to measure the EWL of eight males simultaneously. This device consisted of laboratorygrade compressed zero air (21%O 2 and 79%N 2 mix; BOC) from a cylinder passed through two glass columns (7 cm diameter, 29 cm tall; Drierite®), using Tygon® S3™ laboratory-grade tubing (E-3603). The first column contained indicating drierite (Drierite®) to remove any water and the second contained activated charcoal (Finest-Filters®) to remove any volatile organic compounds. The outlet of the second column was connected to a stainless steel eight-way airline splitter (One Stop Grow Shop), with each outlet connected by tubing to an independently calibrated flow meter (MR300 Series, 2-30 L/ min, Brooks® Instrument). In turn, each flow meter was connected by tubing to a plastic holding vial (85 mm long, 28 mm diameter). A 9.5 mm hole was drilled in the bottom of the vial to serve as an inlet, with the tubing secured in place with silicon sealant. Half of the internal diameter of the screwcap lid was removed and replaced with wire mesh (size 12 mesh, 1.6 mm opening) that was also secured in place with silicon sealant. Each holding vial was housed in an incubator (Sanyo MIR 553) set to 28 ± 2°C with reduced florescent lighting to minimize movement. On the day of testing, we ensured that the airflow to each holding vial was set to 10 L/min. We then sampled the CHCs of each male using SPME and weighed them to the nearest milligram on an electronic microbalance (UMX2; Mettler Toledo). Each male was then introduced to one of the holding vials at random and kept there for 2 h to measure EWL. We used this time period as our pilot data showed that males exhibited the greatest rate of EWL in the first 2 h of measurement ( Figure 1a). Importantly, we also found that males with the highest EWL in the first 2 h were significantly more likely to die when returned to the holding vial for a further 6 h (Logistic regression: χ 2 (1) = 44.65, p = 0.0001; Nagelkerke R 2 = 0.83; 90% of cases correctly classified; Figure 1b). After 2 h, each male was removed from the holding vial and reweighed. We used the reduction between the initial and final weight as our measure of EWL for each male. To account for the variation in male size, we expressed this weight change as a percentage of the initial weight of the cricket for analysis. | Measuring male mating success We measured the mating success of each adult male in noncompetitive ('no-choice' mating trial) male-female pairs (see Hall et al., 2008;Shackleton et al., 2005). We sampled the CHCs of each male using SPME and then allocated a female at random to the male. 
The male from each pair was then introduced into a plastic container (20 × 10 × 10 cm) with the bottom lined with paper towel and given 5 min to acclimate. After acclimation, we introduced the female into the container. When the pair had made antennal contact, we commenced timing the observation, and the pair was given 2 h to mate. If the male performed the full courtship repertoire (i.e. producing a courtship call while positioning his rear end towards the female), was mounted by the female and transferred a spermatophore, the mating was considered successful and the male was assigned a score of 1. However, if the male courted and was mounted by the female but did not transfer a spermatophore, the mating was considered unsuccessful and the male assigned a score of 0. If the male did not court the female in the 2 h provided, he was tested the following day with a different female. If this male did not court three consecutive females, he was excluded from the experiment and replaced with another male. This occurrence was rare, however, with only 9 of 300 males (3%) needing to be replaced in this manner. In T. commodus, less than 5% of males that fail to mate in 2 h successfully mate if given a further 2 h (J. Hunt, personal observations), indicating that our observation period is sufficient time to accurately assess male mating success. All mating behaviour was observed under red lighting in a constant temperature room set to 28 ± 1°C during the dark phase of the light cycle. | Analysis of male cuticular hydrocarbons Immediately prior to measuring EWL and mating success, we sampled the CHCs of each male using SPME. Crickets were sampled by lightly rubbing a 7 μm polydimethylsiloxane (Supelco) fibre across the dorsal surface of the pronotum and fore wings continuously for a 1-min period. Each SPME fibre was manually injected into an Agilent 7890A GC coupled to an Agilent 5975B mass spectrometer equipped with an HP-5 MS capillary column (30 m × 0.25 mm ID × 0.25 μm; Agilent J&W). Fibres were injected into a split/splitless inlet and held at 250°C in splitless mode for 1 min. The helium carrier gas flow was 1 mL/min. The initial oven temperature was held at 50°C for 1 min, then ramped at a rate of 20°C/min to 250°C, followed by a 4°C/min ramp to 320°C and a 5 min hold at this temperature. Ionization was achieved by electron ionization (EI) at 70 eV. The quadrupole mass spectrometer was set to 3.2 scans/s, ranging from m/z 40 to 500. The abundance of each CHC peak in chromatograms was estimated using MSD ChemStation software (version E.02.00.493; Agilent Technologies) by measuring the area under the peak, using ion 57 as the target ion (Figure 2). CHCs were identified using NIST library matches provided in MSD ChemStation. Prior to analysis, we divided the abundance of each CHC peak by the abundance of peak 1 (a methyl alkane, Figure 2), and the resulting value was log10-transformed (to produce a log contrast for each CHC peak) to achieve a normal distribution. This meant that although we identified 45 unique CHC peaks for T. commodus (Figure 2, Table 1), only 44 of these (peaks 2-45) were available for further analysis. | Statistical analysis Due to the large number of CHCs examined for T. commodus, we used principal component (PC) analysis to reduce the dimensionality of this data set. PCs were extracted from the males used to measure EWL and mating success together to ensure that the PCs were directly comparable between the two selection analyses.
FIGURE 1 Pilot data showing that (a) males had the greatest rate of evaporative water loss in the first 2 h of measurement. Different letters represent statistically significant differences (at p < 0.05) across sampling intervals. (b) Males with the highest percentage of water loss in the first 2 h of measurement were significantly more likely to die if returned to the desiccation device for a further 6 h. Individual data points for males are given by grey circles. The solid line represents a thin-plate spline through the data and the dashed lines represent the 95% confidence interval for this spline.
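To make the peak-processing steps above concrete, the following R sketch (R is the language used elsewhere in this study's analyses) shows one way the log-contrast transformation and PC extraction could be coded. The file name, column layout and object names are illustrative assumptions, not part of the original analysis.

```r
# Illustrative sketch only: the input file and its 45-column layout are assumed.
peaks <- as.matrix(read.csv("chc_peak_areas.csv"))   # one row per male, columns = peaks 1-45

# Log contrasts: divide peaks 2-45 by peak 1 (the methyl alkane divisor), then log10-transform
logcontrast <- log10(peaks[, 2:45] / peaks[, 1])

# Principal component analysis on the correlation matrix (i.e. centred and scaled variables)
pca <- prcomp(logcontrast, center = TRUE, scale. = TRUE)
eigenvalues <- pca$sdev^2
retained <- which(eigenvalues > 1)      # retain PCs with eigenvalues exceeding 1
pc_scores <- pca$x[, retained]          # PC scores carried forward to the selection analyses
```

Retaining PCs with eigenvalues above 1 corresponds to the criterion of Tabachnick and Fidell (1989) cited below.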
PCs were extracted using the correlation matrix and we retained PCs with eigenvalues exceeding 1 for further analysis (Tabachnick & Fidell, 1989). We interpret factor loadings that exceed |0.25| as biologically important (Tabachnick & Fidell, 1989). We used standard multivariate selection analysis (Lande & Arnold, 1983) to evaluate the strength and form of linear and nonlinear selection acting on male CHCs through EWL and mating success. As increased EWL reduces survival, we refer to the selection acting through this mechanism as natural selection. Conversely, as increased mating success is likely to improve reproductive fitness, we refer to the selection acting through this mechanism as sexual selection. Each male was assigned an absolute fitness score: for natural selection, we used the percentage water loss and for sexual selection we used mating success (1 = mated, 0 = not mated). Following Lande and Arnold (1983), we transformed absolute fitness into relative fitness by dividing by the mean absolute fitness of the population. As high values of EWL and mating success are likely to have opposite effects on fitness, we reversed the sign of relative fitness for EWL (larger values mean lower EWL) to facilitate the direct comparison of natural and sexual selection on male CHCs. To estimate the standardized linear selection gradients for natural and sexual selection (β), we fit a first-order linear multiple regression that used the three PCs that described the variation in male CHCs as the predictor variables and relative fitness as the response variable (Lande & Arnold, 1983). We then used a second-order quadratic multiple regression model that included all the linear, quadratic and cross-product terms to estimate the matrix of nonlinear selection gradients (γ) for natural and sexual selection. As the quadratic regression coefficients from standard multiple regression underestimate the quadratic selection gradients by a factor of 0.5, we doubled the standardized quadratic selection gradients from this model (Stinchcombe et al., 2008). Relative fitness did not conform to a normal distribution and although this does not influence the sign and magnitude of the resulting selection gradients (Lande & Arnold, 1983), it can impact the significance testing of these gradients (Mitchell-Olds & Shaw, 1987). We, therefore, tested the significance of all standardized selection gradients using a resampling procedure where we randomly shuffled relative fitness scores across males in our dataset to obtain a null distribution for each selection gradient under the hypothesis of no relationship between our PCs describing the variation in male CHCs and relative fitness. 
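The core of this analysis can be sketched in a few lines of R. The snippet below is a minimal, hedged illustration of the Lande-Arnold regressions, the randomization test and the comparison of the two selection vectors; the object names (pc_scores, w_ns, w_ss) are hypothetical and the code is a sketch of the general approach rather than the exact code used in the study.

```r
# Assumed inputs (illustrative names): pc_scores = retained PC scores (columns PC1-PC3),
# w_ns = relative fitness via sign-reversed EWL, w_ss = relative fitness via mating success.
d <- data.frame(scale(pc_scores), w_ns = w_ns, w_ss = w_ss)   # standardize the predictors

# Standardized linear selection gradients (beta): first-order regressions
lin_ns <- lm(w_ns ~ PC1 + PC2 + PC3, data = d)
lin_ss <- lm(w_ss ~ PC1 + PC2 + PC3, data = d)

# Nonlinear gradients (gamma): second-order model with quadratic and cross-product terms;
# the fitted quadratic coefficients are doubled (Stinchcombe et al., 2008)
quad_ss <- lm(w_ss ~ (PC1 + PC2 + PC3)^2 + I(PC1^2) + I(PC2^2) + I(PC3^2), data = d)
gamma_diag_ss <- 2 * coef(quad_ss)[c("I(PC1^2)", "I(PC2^2)", "I(PC3^2)")]

# Randomization test: shuffle relative fitness across males to build a null distribution
null_beta_ss <- replicate(10000,
  coef(lm(sample(d$w_ss) ~ PC1 + PC2 + PC3, data = d))[-1])

# Angle (degrees) between the linear selection vectors for natural and sexual selection
b_ns <- coef(lin_ns)[-1]
b_ss <- coef(lin_ss)[-1]
theta <- acos(sum(b_ns * b_ss) / (sqrt(sum(b_ns^2)) * sqrt(sum(b_ss^2)))) * 180 / pi
```

Under this kind of comparison, an angle near 0° indicates reinforcing selection, an angle near 90° indicates independent selection, and angles well above 90° indicate opposing selection on the two fitness components.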
We used a Monte Carlo simulation to determine the proportion (p) of times (of 10 000 iterations) that each gradient pseudoestimate was equal to or less than the original estimated gradient, and this was used to calculate a two-tailed probability value (as 2p if P < 0.5 or as 2(1 − p) if P > 0.5) for each selection gradient in the model (Manly, 1997). We conducted separate randomization tests for the linear and full quadratic model for natural and sexual selection following the procedure outlined above. We used univariate splines to visualize the linear and nonlinear natural and sexual selection acting on each of the three PCs using the 'SPLINES' package of R (version 4.1.1, www.r-project.org). As the strength of nonlinear selection can be underestimated by interpreting the size and significance of γ (Blows & Brooks, 2003), we examined the extent of nonlinear selection acting on the PCs describing the variation in male CHCs by conducting a canonical rotation of γ to locate the major eigenvectors of the fitness surface for natural and sexual selection (Phillips & Arnold, 1989). For both fitness surfaces, we used the permutation procedure outlined in Reynolds et al. (2010) to locate and determine the strength and significance of nonlinear selection operating along the eigenvectors of γ for natural and sexual selection. We adapted the permutation procedure of Reynolds et al. (2010) to also estimate the strength and significance of linear selection operating along the eigenvectors of γ for natural and sexual selection (R code provided in Text S1). The strength of linear selection along each eigenvector (mi) is given by theta (θi), whereas the strength of nonlinear selection is given by its eigenvalue (λi) (Phillips & Arnold, 1989). We used thin-plate splines (Green & Silverman, 1994) to visualize the major eigenvectors of the fitness surface for natural and sexual selection. We used the 'Tps' function in the 'FIELDS' package of R to fit the thin-plate splines and visualized the splines in both the perspective and contour views. In each instance, we visualized the thin-plate splines using the smoothing parameter (λ) that minimized the generalized cross-validation score (Green & Silverman, 1994). We used a sequential model building approach to determine whether the linear and nonlinear selection targeting male CHCs differed for natural and sexual selection (Draper & John, 1988).
FIGURE 2 A chromatogram of a typical cuticular hydrocarbon (CHC) profile of male Teleogryllus commodus. All peaks were present in each individual male but in different relative amounts. The numbers above each peak correspond to the peak numbers provided in Table 1. Peak 1 (a methyl alkane) was used as a divisor for all other peaks to generate log contrasts for analysis and therefore is not present in Table 1.
| RESULTS Formal comparison showed that the linear and quadratic gradients differed significantly between natural and sexual selection, but the correlational selection gradients did not (Table 4). The significant difference in the standardized linear selection gradients was due to the fact that the linear gradient for PC1 was positive for natural selection and negative for sexual selection (Figure 3a,d) and because the linear gradient for PC3 was negative for natural selection and positive for sexual selection (Figure 3c,f; Table 3). 
The significant difference in the standardized quadratic gradients is due to the gradient for PC1 being more negative for sexual selection (indicating stronger stabilizing selection) than for natural selection (Figure 3a,d; Table 3). Therefore, although natural and sexual selection both impose significant linear and stabilizing selection on male CHCs, they appear to be targeting very different combinations of these traits. Indeed, the angle (θ) between the linear vectors (β) of selection for natural and sexual selection was 126.90° (95% credible interval: 103.70°, 156.90°), demonstrating that these modes of selection on male CHCs are strongly opposing in this population (Table 4). | DISCUSSION Although the interaction between natural and sexual selection features prominently in most models of sexual selection (Mead & Arnold, 2004), surprisingly few direct empirical tests of this interaction exist. Empirically testing this interaction has proven difficult because it is not always easy to quantify how each mode of selection targets a given phenotypic trait. Cuticular hydrocarbons (CHCs) are widespread in terrestrial arthropods and represent a 'dual trait' that has clear functions in preventing evaporative water loss (EWL) and also as a chemical cue that operates in many different social contexts, including sexual interactions that directly influence male mating success (Chung & Carroll, 2015). In this study, we characterize the strength and form of natural selection (acting via EWL) and sexual selection (acting via mating success) operating on male CHCs in the black field cricket (Teleogryllus commodus). We show that EWL was reduced when there was an increase in the total abundance of CHCs, especially those with a longer chain length. In contrast, mating success was highest at a low total abundance of CHCs, with the exception of a few specific CHCs (six peaks in total) that increased mating success. Importantly, natural and sexual selection acting on male CHCs was strongly opposing, with a large angle between the linear vectors of selection (126.90°). Our findings therefore suggest that the balance between natural and sexual selection is likely to play an important role in the evolution of male CHCs in T. commodus and that this interaction may help explain why CHCs are so divergent across populations and insect species. By far, the majority of studies examining the relationship between CHCs and EWL in insects have been indirect. That is, most studies have altered EWL by manipulating temperature (e.g. Sharma et al., 2012;Wagner et al., 2001;Woodrow et al., 2000) or humidity (e.g. Sprenger et al., 2018;Woodrow et al., 2000) and have shown an increase in the total abundance and/or the proportion of longer-chained CHCs in response to warmer temperature, lower humidity and artificial selection for desiccation resistance; because these manipulations are indirect, the possibility that they influence CHCs through pathways other than EWL cannot be ruled out. Reassuringly, studies that have directly measured EWL across the cuticle have largely confirmed these patterns: an increase in longer-chained CHCs reduces EWL through the cuticle (e.g. Toolson, 1982;Gibbs et al., 1997). Our finding that EWL was reduced at high values of PC1 (more total CHCs) and low values of PC2 (more long-chained and less short-chained CHCs) is therefore consistent with these earlier studies.
FIGURE 4 Thin-plate spline visualizations provide a perspective (a and c) and contour (b and d) view of the two major axes of nonlinear natural (a and b) and sexual (c and d) selection (m2 and m3). On each surface, white colouration represents regions of highest fitness, whereas red colouration represents regions of lowest fitness. Individual data points are provided as black circles on the contour views.
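For readers wanting a concrete picture of how such fitness surfaces can be generated, a minimal R sketch using the 'fields' package (the package named above for the thin-plate splines) is shown below; the object names (m_scores, w) are illustrative assumptions and the snippet is not the study's own code.

```r
# Sketch only: 'm_scores' is assumed to be an n x 2 matrix of male scores on two
# eigenvectors of gamma (e.g. m2 and m3) and 'w' the corresponding relative fitness.
library(fields)

tps_fit <- Tps(m_scores, w)   # smoothing parameter chosen by generalized cross-validation by default
surface(tps_fit)              # plots the fitted fitness surface (image/contour view)
```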
Importantly, our work builds on these earlier studies by providing the first quantitative estimates of linear and nonlinear natural selection acting on male CHCs through EWL. Our estimates of linear natural selection acting on PC1 and PC2 were markedly lower than the median (|β| = 0.16) reported for natural populations, whereas our estimate of quadratic natural selection acting on PC2 was similar (|γ| = 0.10) and generally considered weak. Collectively, this demonstrates that the natural selection we document on these two vectors is relatively weak. We also show significant (albeit weak) negative linear natural selection on PC3, which represents a trade-off between specific CHCs, independent of carbon chain length. Understanding how PC3 influences EWL is more speculative, but it is interesting that nine of the ten CHCs that weigh heavily on this vector contain either methyl (peaks 4, 13, 14 and 16) or dimethyl (peaks 3, 7, 8, 15 and 29) groups. The presence of methyl branches is known to lower the melting temperature (and therefore increase cuticular permeability and EWL) of CHCs because molecular packing is less tight (Menzel et al., 2019). Furthermore, the position of methyl branches is also important, with CHCs containing methyl groups located more centrally melting earlier than those with methyl groups located more distally (Gibbs & Pomonis, 1995). More work is clearly needed on the chemical structure of the CHC components contributing to PC3 before we understand how this vector influences EWL in T. commodus. Our work shows that male CHCs in T. commodus are also targeted by sexual selection imposed by female mate choice. The role of CHCs in mate choice is widespread in insects (e.g. Chung & Carroll, 2015;Steiger & Stokl, 2014), and for most species, females tend to prefer certain combinations of CHCs over others rather than exhibiting a preference for one or a few specific CHCs (but see Ferveur & Sureau, 1996;Grillet et al., 2006;Snellings et al., 2018). However, exactly what combination of CHCs females prefer is highly variable across species. In some species, females prefer combinations of shorter-chained CHCs that are more volatile (e.g. Simmons et al., 2014;Steiger et al., 2013, 2015), (Bussière et al., 2006;Shackleton et al., 2005). Inspection of the individual data points along the major axis of nonlinear sexual selection (m3ss, Figure 3d), which is most heavily weighted by PC1, shows that male mating success only decreases at the very highest PC1 scores (i.e. where males are largest). Given that males were paired with a female at random in our study, it is also possible that this reduction in mating success with an increase in PC1 occurs due to a mismatch in size between the sexes (e.g. Han et al., 2010). It is more difficult to interpret the effects of PC3 on mating success and clearly more work is needed, possibly using electroantennography or single sensillum recordings (e.g. Jacob, 2018), to determine how individual male CHC components stimulate the female olfactory system. A key finding of our work is that natural selection and sexual selection acting on male CHCs are strongly opposed in T. commodus. 
This was confirmed by the significant differences in our sequential model and the large angle (126.90°) between the linear vectors of natural and sexual selection, and indicates that males cannot have a CHC profile that is optimal for both mating success and evaporative water loss. This finding is therefore broadly consistent with previous studies on CHCs showing that natural selection opposes sexual selection (Hine et al., 2011;Sharma et al., 2012;Skroblin & Blows, 2006), as well as a number of iconic studies in sexual selection, including the opposing effects of predation on the evolution of male colour patterns in guppies (Endler, 1980) and calling in the Túngara frog (Ryan et al., 1992).
TABLE 4 Sequential model building approach used to statistically compare the sign and strength of standardized linear, quadratic and correlational selection gradients for the natural and sexual selection acting on male CHCs in T. commodus. When an overall significance was detected, univariate interaction terms are provided (below).
Our results do, however, contrast with the findings of Blows (2002), which showed that the effects of natural and sexual selection on the evolution of male CHCs were reinforcing in D. serrata. It is important to note that with the exception of Skroblin and Blows (2006), all of these previous studies have examined the evolutionary response of CHCs to different regimes of natural and sexual selection (Blows, 2002;Sharma et al., 2012) or artificial selection (Hine et al., 2011) rather than directly quantifying the strength and form of each mode of selection targeting CHCs. Moreover, Skroblin and Blows (2006) did not directly measure natural selection acting on male CHCs through EWL (but rather indirectly through male productivity) and they did not formally estimate the degree of divergence between these two modes of selection. Our work is therefore novel in directly quantifying both natural and sexual selection targeting male CHCs, as well as the degree to which these modes of selection are opposing for this trait in T. commodus. However, understanding how the opposing natural and sexual selection we document shapes the overall pattern of selection on male CHCs requires more information on the relative contribution of EWL and mating success to total male fitness (e.g. lifetime reproductive success; Hunt et al., 2009). Given that unit changes in EWL and mating success are unlikely to have equivalent effects on total fitness, empirically quantifying these effects will be an important first step in understanding the broader implications of our findings for the evolution of male CHCs in T. commodus. CHCs are some of the most highly divergent traits across insect populations and species (e.g. Kather & Martin, 2012;Menzel et al., 2019;Otte et al., 2018) and our findings suggest that the balance between natural and sexual selection may play an important role in explaining some of this diversity. Whenever natural and sexual selection target the same sexual trait but act in opposing directions, the trait optimum will be determined by the balance between these two modes of selection (Svensson & Gosden, 2007). In this case, the most obvious effect of sexual selection will be to push a population away from the mean sexual trait optimum determined by natural selection (Kirkpatrick, 1982;Lande, 1981). 
Although this will temporarily reduce local adaptation in a single population, theory suggests that there are several possible ways that this can promote divergence between allopatric populations and potentially drive reproductive isolation (Servedio & Boughman, 2017). First, it is possible that as sexual selection pushes the mean sexual trait away from one naturally selected peak, it moves into a broad zone of instability between alternate peaks. On entering this unstable region, the combined action of natural and sexual selection can drive the rapid evolution of the mean sexual trait across this region to a new naturally selected peak, resulting in ecological divergence (Bonduriansky, 2011;Lande & Kirkpatrick, 1988;Miller, 1994). Second, the interaction of natural and sexual selection with genetic drift can promote the rapid evolution of preference and sexual traits in geographically separated populations (Lande, 1981), resulting in reproductive isolation when preferences are either neutral or costly (Uyeda et al., 2009). Third, if preference landscapes are rugged (i.e. have multiple peaks), it is possible that populations may evolve to different sexually selected peaks as novel sexual traits emerge, even when these populations initially experience similar natural and sexual selection (Mendelson et al., 2014). Although the occurrence of gene flow poses more challenges (by potentially bringing maladapted migrants into the population), theory suggests that sexual trait divergence can still evolve across sympatric populations if preference is relative (to the population mean sexual trait), open ended (Lande, 1982) or based on a condition-dependent sexual trait that indicates locally adapted males (Proulx, 2001). However, sexual trait divergence across sympatric populations is most likely to occur when preferences are under direct selection, as occurs when preferences become locally adapted through sensory drive (Endler, 1992), are based on context-dependent benefits (Cornwallis & Uller, 2010) or are directed towards a trait that is also possessed by the female (i.e. phenotype matching; Kirkpatrick, 2000;Servedio, 2011). Despite the many theoretical conditions that can promote the diversification of sexual traits under opposing regimes of natural and sexual selection, relatively few empirical tests currently exist (Svensson & Gosden, 2007). Consequently, there is a clear need for more empirical studies, and the dual function of CHCs in reducing EWL and enhancing mating success makes this trait an excellent model for such work. We know that male CHC profiles in T. commodus are genetically divergent across populations in southern Australia but we do not know the role (if any) that the balance between natural and sexual selection plays in shaping this divergence (C. Mitchell & J. Hunt, unpublished data). An obvious first step is therefore to formally quantify natural and sexual selection targeting CHCs in these populations and determine if any differences in these modes of selection are related to CHC divergence across populations. A similar approach including other Australian field cricket species within a phylogenetic context could be used to understand if changes in the balance between natural and sexual selection can drive speciation, although this is likely to prove more challenging given how rapidly individual CHC components appear to evolve in arthropods. AUTHOR CONTRIBUTIONS CM and JH conceptualized the work. 
CM, JR and JH conducted the experimental work and data collection. JH and EDC conducted the formal analyses. ZW, CMH and JH wrote the original draft. All authors contributed to the final version of the manuscript.
2023-08-04T06:17:43.554Z
2023-08-03T00:00:00.000
{ "year": 2023, "sha1": "10043dcc900d3646ed920ab2644a43ea4cd021e5", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jeb.14198", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "b7a40bde807a0d2de0395235fe2e2ee3a3dfd473", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
2722174
pes2o/s2orc
v3-fos-license
Upper Extremity Freezing and Dyscoordination in Parkinson's Disease: Effects of Amplitude and Cadence Manipulations Purpose. Motor freezing, the inability to produce effective movement, is associated with decreasing amplitude, hastening of movement, and poor coordination. We investigated how manipulations of movement amplitude and cadence affect upper extremity (UE) coordination as measured by the phase coordination index (PCI)—only previously measured in gait—and freezing of the upper extremity (FO-UE) in people with Parkinson's disease (PD) who experience freezing of gait (PD + FOG), do not experience FOG (PD-FOG), and healthy controls. Methods. Twenty-seven participants with PD and 18 healthy older adults made alternating bimanual movements between targets under four conditions: Baseline; Fast; Small; SmallFast. Kinematic data were recorded and analyzed for PCI and FO-UE events. PCI and FO-UE were compared across groups and conditions. Correlations between UE PCI, gait PCI, FO-UE, and Freezing of Gait Questionnaire (FOG-Q) were determined. Results. PD + FOG had poorer coordination than healthy old during SmallFast. UE coordination correlated with number of FO-UE episodes in two conditions and FOG-Q score in one. No differences existed between PD−/+FOG in coordination or number of FO-UE episodes. Conclusions. Dyscoordination and FO-UE can be elicited by manipulating cadence and amplitude of an alternating bimanual task. It remains unclear whether FO-UE and FOG share common mechanisms. Introduction A motor block, or "freezing" event, is the sudden inability to produce effective movement, which has been documented during speech, upper extremity (UE) movements, and gait, and is often experienced by individuals with Parkinson's disease (PD) [1][2][3][4]. Freezing of gait (FOG) is arguably the most debilitating motor block, as it contributes to increased risk of falls and is associated with reduced quality of life and depression [5]. FOG is difficult to study because it is not easily elicited within the laboratory setting. Individuals with PD who experience FOG (PD+FOG) often demonstrate decreasing steplength in combination with increased cadence prior to a freezing event [4,6]. Additionally, studies have demonstrated that people with PD+FOG exhibit greater steplength variability, increased cadence, increased step-time asymmetry, and poorer coordination compared to individuals with PD who do not experience FOG (PD-FOG) [7][8][9]. Plotnik et al. suggest that each of these gait parameters may have a certain level of dependency on each other, and that decline in one or more of these parameters can push an individual past the threshold for functional gait resulting in an episode of FOG [9]. Recent research investigated a possible shared mechanism between FOG and impaired upper extremity (UE) movements [10][11][12]. Nieuwboer et al. observed trends towards decreased coordination and increased variability of movement in freezers, nonfreezers, and controls during an alternating, high speed, and small amplitude bimanual task compared to an alternating, normal speed, and large amplitude task [11]. Additionally, they showed a strong correlation between UE freezing and Freezing of Gait Questionnaire (FOG-Q) scores. Similarly, Vercruysse et al. [10] observed UE freezing most often during alternating flexion/extension movements of the index finger during small, fast movements. 
Most recently, the same group [12] observed the effects of manipulating amplitude, frequency, and movement complexity (in-phase versus antiphase) during alternating flexion/ extension movements of the index finger with and without auditory cueing in PD-FOG and PD+FOG. They noted that the PD+FOG group demonstrated the most movement variability during small amplitude tasks. These results suggest that variability of UE movement and freezing of the UE (FO-UE) during bimanual tasks may be related to FOG. Additionally, FO-UE may be influenced by manipulations of amplitude and cadence that reflect characteristics of FOG, that is, small amplitude and fast cadence. However, the extent to which small amplitude or increased cadence in isolation or in combination contributes to dyscoordination of UE movement or FO-UE has yet to be determined. Further, no studies to date have compared similar manipulations of amplitude and cadence of the UE and of gait in order to gain insight into potential shared mechanisms of motor blocks in the UE and during gait. The purpose of this study was (1) to investigate how specific manipulations of amplitude and cadence during an alternating bimanual task affect UE coordination, as measured by the phase coordination index (PCI), and number of FO-UE events and (2) to gain further insight into potential shared mechanisms between UE and gait coordination in people with PD and healthy controls. We hypothesized that decreasing amplitude or increasing cadence would decrease coordination in people with PD compared to healthy controls, with the combination of small amplitude and fast cadence eliciting the poorest coordination. Furthermore, we hypothesized that the PD+FOG group would be more affected by amplitude and cadence manipulations thereby exhibiting worse coordination and increased FO-UE episodes compared to PD-FOG and healthy controls. Finally, we hypothesized that coordination during each UE task would be correlated with coordination of a parallel gait task. Participants. Twenty-eight participants with idiopathic PD (16 PD-FOG, 12 PD+FOG) and 19 healthy older adults participated. Sex, age, and disease severity characteristics are included in Table 1. Participants were recruited from the Movement Disorders Center database at Washington University in St. Louis School of Medicine (WUSM). All participants with PD had a diagnosis of idiopathic PD according to established criteria [13,14]. Inclusion criteria included the ability to independently ambulate a minimum of twenty feet and normal or corrected to normal vision. Exclusion criteria included the presence of a diagnosed neurological or medical condition (aside from PD) and an inability to withhold anti-Parkinson medication for a limited duration. Data were collected following a minimum 12-hour overnight withdrawal of anti-Parkinson medication. Healthy older adults (>30 years old) were often the spouses of participants with PD. All healthy individuals met the above inclusion and exclusion criteria except those specific to PD. Healthy older adults were age-matched to participants with PD. Data from these individuals has been previously reported elsewhere [15]. Data were collected in the Locomotor Control Laboratory at WUSM Program in Physical Therapy. All participants gave informed consent as approved by the WUSM Human Research Protection Office. 
Participants with PD were further divided into two groups, those who experience freezing of gait (PD+FOG) and those who do not (PD-FOG), based upon a score of ≥2 on item three of the Freezing of Gait Questionnaire (FOG-Q), which indicated at least weekly freezing episodes [16]. All participants with PD participated "OFF" medication (≥12 hour withdrawal of anti-Parkinson medication). One healthy older adult was excluded from all analyses due to the inability to follow directions adequately. One participant with PD+FOG was excluded from all analyses due to inability to perform the tasks. Two additional participants with PD+FOG were excluded only from UE PCI analyses due to the inability to perform continuous alternating bilateral UE movements during one or more of the conditions. Procedure: Upper Extremity and Gait Tasks. Participants with PD were assessed by a trained research physical therapist using the Movement Disorder Society Unified Parkinson's Disease Rating Scale Motor Subscale III (MDS-UPDRS-3) to quantify disease severity [13] and completed the FOG-Q [16] to quantify frequency and severity of FOG events. All participants completed four UE tasks and four gait tasks: Baseline, Fast, Small, and SmallFast. A full description of the methods used during the parallel gait tasks are reported in Williams et al [15]. In short, all participants were assessed while walking at a preferred speed across a 4.9 m GAITRite instrumented walkway (CIR Systems, Inc., Sparta, NJ, USA) placed on a level surface in a large open room. For this experiment, these data were used to determine the cadence of each individual's UE task. Ten trials were performed to obtain an average baseline cadence for each individual and each trial was visually monitored for FOG events or atypical gait events such as stumbles, falls, or lateral deviation off of the GAITRite mat. Any trials consisting of these events were removed and repeated. During the UE tasks, participants were seated comfortably at a table in an open room. Each individual performed alternating, bilateral UE movements under four conditions: Baseline (baseline cadence, 10 cm target), Fast (+50% baseline cadence, 10 cm target), Small (baseline cadence, 5 cm target), and SmallFast (+50% baseline cadence, 5 cm target). Baseline UE cadence was determined by an individual's cadence during preferred gait as reported in Williams et al. [15]. That is, if a person walked at a rate of one step per second, we had him/her perform UE movements to one reach per second. All conditions were randomized. Five, 15-second trials of kinematic data for each condition were captured using 8 Hawk cameras and Cortex data acquisition software (Motion Analysis Corporation, Santa Rosa, CA, USA). Prior to each recorded trial, the participant was given a 20.32 cm × 27.94 cm (8 × 11 in) sheet marked with the appropriate targets ( Figure 1). Instructions were given to use his/her index fingers to tap the targets, alternately tapping the left front/right rear targets and then the left rear/right front targets simultaneously. A metronome was turned on to the appropriate cadence while the individual tapped the targets. Once the individual practiced with the targets and metronome, the metronome was turned off and the targets were removed without the individual stopping his/her UE movement. The 15-second trial was captured after the visual and auditory cues were removed. This allowed for observation of the participant's internally generated movement state during each condition. 
Further, auditory and visual cues were removed as these cues are known to enhance performance in individuals with PD [14,17,18], and the purpose of this study was to observe each participant's internally generated movement without external cues. Outcome Variables. A quantitative assessment of freezing episodes based upon established definitions [10] and the phase coordination index (PCI) were the primary outcomes. PCI was developed to quantify interlimb coordination during gait by taking into account the accuracy and consistency of the timing of stepping phases [19]. Higher PCI values indicate poorer coordination. Previous investigations have used PCI to quantify temporal coordination of steps during gait by measuring the timing of consecutive footfalls [8,19]. In the current study, we use the same metric to assess the temporal coordination of alternating UE movements. In this case, each "footfall" in the standard PCI calculation was represented by the index finger making contact with the target furthest from the body. Therefore, only the times of taps aimed at the target furthest from the body were analyzed. A "stride" was defined as two consecutive taps of the same finger. A "step" was defined as consecutive taps of alternating fingers and from here on will be referred to as a "cycle." For three consecutive taps, the phase (φ) was determined as cycle time divided by "stride" time and scaled to 360°: φ(i) = 360° × [tS(i) − tL(i)] / [tL(i+1) − tL(i)], where tS(i) and tL(i) represent the timing of the ith finger contact of the UE with shorter and longer average "step" times, respectively. Once φ had been determined, 180° was subtracted from each value. The absolute value of each data point was calculated, and the mean of the array was taken to produce a measure of temporal accuracy (φ_ABS): φ_ABS = mean(|φ(i) − 180°|). The degree of consistency of φ was calculated as the coefficient of variation of the φ values (φ_CV) and given as a percentage. PCI was then calculated as PCI = φ_CV + P_φ_ABS, where P_φ_ABS = 100 × (φ_ABS/180). Periods of freezing, as defined in the following paragraph, were not included in the PCI analysis. For the quantitative assessment of FO-UE episodes, trials were analyzed for the presence of FO-UE episodes by a blinded rater. In order to assess FO-UE episodes, we determined the duration and amplitude of the average antiphase cycle (AAPC) [10]. The AAPC was calculated using the first six consecutive cycles of alternating UE movement in each trial. FO-UE episodes were then defined using the calculated AAPC for a given trial. FO-UE episodes were defined as a sudden halt or decrease in amplitude of movement, which deviated from the calculated AAPC in one of two ways: (1) UE movement halted for ≥75% of the AAPC duration or (2) UE movement amplitude was ≤50% of the AAPC amplitude, was accompanied by an irregular cycle frequency, and continued as such for at least twice the AAPC duration [10]. Additionally, voluntary stops and in-phase movements were excluded from assessment. A normal and an FO-UE event trajectory are illustrated in Figure 2. As a secondary analysis, we determined correlations between FO-UE, UE PCI, PCI during parallel gait tasks (as reported in Williams et al. [15]), and FOG-Q score. Data Processing. Kinematic data were processed using Motion Monitor software (Innovative Sports Training, Inc., Chicago, IL, USA) and analyzed with custom written Matlab software (MathWorks, Natick, MA, USA). Position and velocity data were low pass filtered at 10 Hz before kinematic analyses. Each group's average amplitude, cadence, and PCI values for each task were determined. 
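For readers who want a concrete picture of the PCI calculation just described, the short R sketch below implements it for a single trial. The study's own kinematic processing used custom Matlab code; this snippet, the function name and the two input vectors of tap times are illustrative assumptions, and it assumes that taps strictly alternate between hands.

```r
# Illustrative sketch of the PCI calculation (not the study's own code).
# t_short, t_long: times (s) at which the far target was tapped by the hand with the
# shorter and longer average "step" time, respectively; taps are assumed to alternate.
pci <- function(t_short, t_long) {
  n <- min(length(t_short), length(t_long)) - 1
  phi <- numeric(n)
  for (i in 1:n) {
    cycle_time  <- t_short[i] - t_long[i]        # "step" (cycle) time
    stride_time <- t_long[i + 1] - t_long[i]     # "stride" time: two taps of the same finger
    phi[i] <- 360 * cycle_time / stride_time     # phase, scaled to 360 degrees
  }
  phi_abs   <- mean(abs(phi - 180))              # temporal accuracy
  p_phi_abs <- 100 * phi_abs / 180
  phi_cv    <- 100 * sd(phi) / mean(phi)         # consistency (coefficient of variation, %)
  p_phi_abs + phi_cv                             # PCI: higher values indicate poorer coordination
}
```

As described above, periods identified as freezing would be removed from the tap-time series before this calculation.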
Statistical Approach. The same statistical approach as reported in Williams et al. [15] was used to analyze UE PCI. Mixed model repeated measures ANOVA with an unstructured covariance structure was implemented using SAS v 9.3 (SAS Institute, Inc., Cary, NC, USA). Group was used as the between-subject factor and condition as the within-subject factor. We corrected for multiple comparisons by dividing α = 0.05 by the number of comparisons made (Bonferroni correction); a post hoc p value of 0.004 was considered significant for evaluating interactions. Additionally, we compared the number of FO-UE episodes between PD−/+FOG groups, which was analyzed as percent of trials with FO-UE episodes. Data were rank-transformed prior to performing a repeated measures ANOVA. Spearman's correlation was used to determine relationships of FO-UE events with UE PCI, PCI during gait, and FOG-Q score and of UE PCI with PCI during gait as reported in Williams et al. [15]. Aside from evaluating interactions, a p value of ≤0.05 was considered significant for all statistical analyses. Results Mean performance ± standard deviation of each group is shown in Figures 3(a) and 3(b). Values are expressed as percent difference from instructed baseline. As such, ideal performance in the Baseline condition would have cadence and amplitude values of 0%. Ideal performance in the Fast condition would have cadence values of +50% and amplitude values of 0%. Ideal performance in the Small condition would have cadence values of 0% and amplitude values of −50%. Ideal performance in the SmallFast condition would have values of +50% for cadence and −50% for amplitude. Overall, there was no between-group difference in performance of cadence (p = 0.21), while there was a difference between healthy older adults and individuals with PD in performance of amplitude (p ≤ 0.02). There was no difference between conditions (p = 0.61) in percent of trials with FO-UE (Table 2). A trend toward significance was present between PD-FOG and PD+FOG in percent of trials with FO-UE (p = 0.07). Phase Coordination Index (PCI) during Upper Extremity Tasks. Overall, UE PCI values were different between groups (p = 0.005) and conditions (p < 0.001), and a group by condition interaction effect was observed (p = 0.05) (Figure 4). Post hoc analyses showed that PD+FOG had poorer coordination compared to healthy older adults during the SmallFast condition (p < 0.001). Correlational Analyses. All groups were included in the analysis between UE PCI and gait PCI. Healthy older adults were excluded from analysis of FO-UE and FOG-Q, as freezing is specific to PD. UE PCI was correlated with the number of FO-UE events in the Baseline and Small conditions (Table 3). Gait PCI was correlated with UE PCI for the SmallFast condition (rho = 0.34; p = 0.03). Furthermore, FOG-Q scores were correlated with FO-UE events during the Fast condition (rho = 0.45; p = 0.02). FOG-Q scores were not correlated with UE PCI. Additionally, UPDRS scores were correlated with UE PCI (rho = 0.41, p = 0.04) but were not correlated with the number of FO-UE episodes (rho = 0.21, p = 0.29). Discussion The results from this study demonstrate that dyscoordination and FO-UE can be elicited by manipulating cadence and amplitude of an alternating UE bimanual task. Contrary to our hypothesis, there was no difference between participants with PD and healthy controls in PCI during Small or Fast conditions. Additionally, there was no difference between PD−/+FOG in PCI during any condition. 
However, PD+FOG were more affected by the SmallFast combination, which resulted in poorer coordination in PD+FOG compared to healthy older adults. A trend toward significance between PD−/+FOG was also observed in the percent of trials exhibiting FO-UE episodes. Although periods of freezing were excluded from the PCI calculation, UE PCI and the quantitative assessment of FO-UE events were correlated during the Baseline and Fast conditions. An additional relationship was demonstrated between UE PCI and gait PCI during the parallel SmallFast tasks. Previous work demonstrated that bimanual, antiphase movement coordination is impaired in people with PD compared to healthy controls [3,10,12]. In keeping with this, a number of FO-UE events were elicited in this study and people with PD+FOG had poorer coordination during the SmallFast task compared to healthy controls. Further, FOG was not elicited during the parallel gait tasks reported in Williams et al. [15]. This suggests that FO-UE may be elicited more easily than FOG in individuals with PD [3,[10][11][12]. Previous work demonstrated that FO-UE was more common in more complex tasks, that is, anti-phase movement with a small amplitude and fast frequency [10], and participants with PD+FOG exhibited increased difficulty with coordination compared to participants with PD-FOG [12]. This work also suggested that FO-UE increased with small amplitude movements [10]. However, in the present study, there were no significant differences between conditions in number of FO-UE events. Only 27% of participants with PD+FOG exhibited FO-UE during the SmallFast condition, while 54% exhibited FO-UE during the Fast condition. For individuals with PD, the Small condition only accounted for 20% of the total number of FO-UE events. Additionally, there was no significant difference between the PD−/+FOG groups in the assessment of PCI and only a trend toward significance in the quantitative assessment of FO-UE episodes. In fact, two participants with PD-FOG exhibited FO-UE in each of the four conditions, and 37% of PD-FOG exhibited FO-UE during the SmallFast condition. This difference may be due to the way the participants with PD−/+FOG were qualified. Prior studies have qualified individuals with PD+FOG as experiencing monthly or more frequent FOG episodes [10,12]. In the present study, we defined PD+FOG as those individuals with PD experiencing weekly or more frequent FOG episodes (score of ≥2 on item 3 of the FOG-Q). Four participants in this study reported experiencing FOG once per month (score of ≥1 on item 3 of the FOG-Q). To determine if these individuals were indeed driving the difference between our work and prior studies, we did a secondary analysis wherein the four participants with FOG once per month were placed in the PD+FOG group. Using this alternate classification scheme, we again analyzed differences in the percent of trials with FO-UE between the PD−/+FOG groups. This analysis yielded the same results as the original analysis; that is, there was no difference between PD−/+FOG in percent of trials with FO-UE episodes. There was also no difference between conditions in percent of trials with FO-UE events. As such, the differences in results of the present study compared to results of previous work are unlikely to be due to the method of PD−/+FOG classification. It has been hypothesized that freezing may be a somatotopic phenomenon, which initially affects the UE or LE and may eventually come to impact both UE and LE tasks [10,11]. 
Interestingly, two of the four participants in the PD-FOG group who reported FOG once per month accounted for 67% of FO-UE events in the Baseline and Small conditions and 40% of FO-UE events in the Fast condition. Though none of the four experienced FO-UE during the SmallFast condition, those in the PD-FOG group who did exhibit FO-UE may experience motor blocks of the UE without yet experiencing FOG. This may also explain why not all of those with PD+FOG experienced FO-UE. Based upon the results of the present study, it remains unclear whether FO-UE and FOG are related. However, FO-UE can be elicited by manipulating amplitude and frequency characteristics in a way that mimics changes in these variables just before an episode of FOG. The group of Nieuwboer et al. demonstrated a strong correlation between FO-UE episodes and the FOG-Q [10][11][12]. There may be common mechanisms underlying FO-UE and FOG, but further research is needed to investigate this, as the FOG-Q score was correlated with number of FO-UE events only during the Fast condition. Additionally, the number of FO-UE events was correlated with poor gait coordination (i.e., gait PCI) during the parallel SmallFast task, but no FOG episodes were elicited during this gait task. To our knowledge, this is the first time that a gait coordination measure, that is, PCI, has been used to relate interlimb coordination during UE tasks to gait coordination during parallel tasks. Prior work demonstrated that individuals with PD+FOG exhibit ongoing movement impairments during gait, that is, greater steplength variability and increased cadence compared to individuals with PD-FOG [7][8][9]. Our work supports this, as participants with PD+FOG made, on average, smaller movements during the Fast condition and faster movements during the Small condition than the two other groups. It remains unclear whether decreased amplitude, increased cadence, or a combination of the two is associated with the freezing mechanism of the UE. Vercruysse et al. [10] conclude that smaller amplitudes elicit more FO-UE, but there were no significant differences between conditions in the present study. The differences between the present study and the previous literature suggest that small amplitude, fast cadence, or a combination of small, fast movements may not be the sole contributors to FO-UE episodes. As Plotnik et al. suggest with FOG [9], we suggest that FO-UE episodes may represent a culmination of breakdown in several aspects of control. This breakdown can be elicited by alternating bimanual Small tasks, Fast tasks, or SmallFast tasks in people with PD, as measured by our quantitative assessment of FO-UE events. Though cadence and amplitude immediately prior to a FO-UE event were not measured in this study, as with FOG, we hypothesize that FO-UE is preceded by an involuntary, simultaneous decrease in amplitude with an accompanying hastening of cadence, and that a Fast, Small, or SmallFast task has the potential to elicit this response in the UE. Functional, complex, rhythmical tasks that require manual coordination include typing, handwriting, playing an instrument, and certain forms of exercise such as UE strength training. These tasks can replicate Small, Fast, or SmallFast conditions depending on an individual's ability. As demonstrated in the present study, decreased amplitude and increased cadence alone or together can elicit FO-UE. FO-UE during daily tasks can severely impact an individual's form of communication, hobbies, and quality of life. 
It is therefore important to educate patients with PD regarding these functional tasks that may elicit FO-UE. Limitations of this study are acknowledged. First, only one independent rater determined the presence of FO-UE based upon established definitions [10], and the reliability of this method was not established. Further, preselected amplitudes and cadences were utilized, and we cannot say whether a large amplitude or slow cadence would have elicited the same or a lesser amount of dyscoordination or FO-UE. Additionally, cadence was determined from a gait task rather than from a UE movement task. This methodology was employed because the gait task provided a parallel motor task without introducing the UE task beforehand, which could have allowed motor learning effects to bias the study. We acknowledge the difference between UE and lower extremity tasks, and movement frequency may be higher in UE tasks. Additionally, participants were not sex-matched, participants with PD were not matched for disease severity, and UPDRS scores were correlated with PCI. We cannot conclude definitively whether our measures of dyscoordination or FO-UE are due to disease severity, FOG status, or both. Finally, the sample size of this study was relatively small, with large amounts of variation within each condition per group, which makes it difficult to detect significant differences between groups and conditions.

Conclusions and Future Direction

Imposed manipulations of cadence and amplitude that mimic changes in gait associated with FOG can affect UE coordination and elicit FO-UE episodes in people with PD. People with PD+FOG have poorer coordination compared to healthy controls during a SmallFast task, but no other differences in UE coordination were noted between healthy controls and individuals with PD. FO-UE and FOG may be related, but future research is needed to explore potential links between the two. Future clinical studies could also examine the utility of instructions to increase movement amplitude and decrease movement cadence as a means of enhancing coordination and reducing FO-UE and FOG.
Imaging of carbonic anhydrase IX with an 111In-labeled dual-motif inhibitor. We developed a new scaffold for radionuclide-based imaging and therapy of clear cell renal cell carcinoma (ccRCC) targeting carbonic anhydrase IX (CAIX). Compound XYIMSR-01, a DOTA-conjugated, bivalent, low-molecular-weight ligand, has two moieties that target two separate sites on CAIX, imparting high affinity. We synthesized [111In]XYIMSR-01 in 73.8-75.8% (n = 3) yield with specific radioactivities ranging from 118 - 1,021 GBq/μmol (3,200-27,600 Ci/mmol). Single photon emission computed tomography of [111In]XYIMSR-01 in immunocompromised mice bearing CAIX-expressing SK-RC-52 tumors revealed radiotracer uptake in tumor as early as 1 h post-injection. Biodistribution studies demonstrated 26% injected dose per gram of radioactivity within tumor at 1 h. Tumor-to-blood, muscle and kidney ratios were 178.1 ± 145.4, 68.4 ± 29.0 and 1.7 ± 1.2, respectively, at 24 h post-injection. Retention of radioactivity was exclusively observed in tumors by 48 h, the latest time point evaluated. The dual targeting strategy to engage CAIX enabled specific detection of ccRCC in this xenograft model, with pharmacokinetics surpassing those of previously described radionuclide-based probes against CAIX. INTRODUCTION Renal cell carcinoma (RCC) is the most common neoplasm of the kidney [1], with an estimated 60,000 patients diagnosed annually in the United States [2]. Among cases of RCC, the clear cell subtype (ccRCC) is the most prevalent, accounting for up to 70% of RCCs [3][4][5]. Common to ccRCC is loss of the Von Hippel-Lindau (VHL) tumor suppressor gene [6]. Loss of VHL in turn leads to over-expression of carbonic anhydrase IX (CAIX) [7], a membrane-associated enzyme responsible for catalyzing the reversible hydration of carbon dioxide to a bicarbonate anion and a proton [8,9]. Over-expression of CAIX has been demonstrated in approximately 95% of ccRCC tumor specimens [10][11][12], making it a useful biomarker for this disease. CAIX has limited expression in normal tissues and organs with the exception of the gastrointestinal tract, gallbladder and pancreatic ducts [8,9,[13][14][15]. No report has demonstrated CAIX expression in normal renal parenchyma or benign renal masses [8,9,[13][14][15]. Feasibility for the non-invasive detection of ccRCC based on CAIX expression has been proved with the radiolabeled antibody G250 [16] and its clinical potential has been reviewed [17]. However, antibodies as molecular imaging agents suffer from pharmacokinetic limitations, including slow blood and non-target tissue clearance (normally 2-5 days or longer) and non-specific organ uptake. Lowmolecular-weight (LMW) agents demonstrate faster pharmacokinetics and higher specific signal within clinically convenient times after administration. They can also be synthesized in radiolabeled form more easily, and may offer a shorter path to regulatory approval [18][19][20]. Targeting CAIX with LMW inhibitors has proved challenging in part because fifteen human isoforms of carbonic anhydrase, with high sequence homology, have been identified. Those isoforms share common structural features, including a zinc-containing catalytic site, a central twisted β-sheet surrounded by helical connections, and additional β-strands. The isoforms, however, do vary widely in terms of intracellular location, expression levels, and tissue and organ distribution [8,9]. 
Significant effort has been expended on development of sulfonamides and other LMW CAIX ligands for nuclear imaging of CAIX, but most reported agents have been fraught with low tumor uptake and significant off-target accumulation [21][22][23][24][25][26]. A new LMW CAIX targeting agent has recently been reported that is composed of two binding motifs, one accessing the CAIX active site and the other binding to an as yet unidentified site [27]. Conjugated with the infrared dye IRDye ® 750, the dual-motif inhibitor showed 10% ID/g tumor uptake. In comparison, agents targeting only the active site show 2% ID/g [27]. However, that optical agent also demonstrated high kidney as well as other non-specific organ uptake at 24 h post-administration. Additionally, utility of that agent for in vivo studies is somewhat limited due to the substantial attenuation of light emission through tissue inherent to optical agents. Such limitations call for an agent that retains affinity for CAIX, but clears rapidly from non-target tissues and can be detected with existing clinical instrumentation. Here we report the synthesis and in vivo performance of [ 111 In]XYIMSR-01, a modified dualmotif CAIX inhibitor with improved tumor uptake and pharmacokinetics for nuclear imaging of ccRCC. This reagent may enable imaging not only of metastatic ccRCC but also localized disease within the kidney due to relatively rapid clearance from normal renal tissue. RESULTS Recently Wichert and co-workers [27] identified 4,4-bis(4-hydroxyphenyl)valeric acid/acetazolamide as a dual-motif CAIX inhibitor from a DNA-encoded chemical library [28][29][30][31]. The addition of a second binding motif significantly improved the potency of sulfonamide inhibitors (up to 40 times) [27], while also suggesting a solution to the problem of generating an isoform-selective CAIX inhibitor caused by conserved structures at the active site. We hypothesized that the slow renal clearance and high liver uptake of the reported optical agent might derive from the hydrophobicity of the molecule. To improve the pharmacokinetics, we replaced the IRDye ® 750 portion of the molecule with 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA), a more hydrophilic species that also enables convenient radiolabeling with metal isotopes for positron emission tomography (PET), single photon emission computed tomography (SPECT), and radiopharmaceutical therapy [32,33]. We chose indium-111 as our initial radionuclide for its relatively long half-life (2.8 day) to enable extended monitoring of pharmacokinetics. Chemical synthesis of XYIMSR-01 was achieved as in Scheme 1. Following a reported procedure, key intermediate 1 was obtained via solid support synthetic methods [27]. We generated XYIMSR-01 by conjugating the commercially available DOTA-NHS ester 7 with 1 in 82% yield. In(III) was incorporated into DOTA in nearly quantitative yield in 0.2 M NaOAc buffer at 60°C, providing the nonradiolabeled standard, [ 113/115 In]XYIMSR-01. After optimization, baseline separation between XYIMSR-01 and [ 113/115 In]XYIMSR-01 could be achieved by high performance liquid chromatography (HPLC). We synthesized fluorescein isothiocyanate (FITC)labeled 8 as a standard to measure CAIX binding affinities of the corresponding radiotracers. Compound 8 bound specifically to CAIX-expressing SK-RC-52 cells, but not to CAIX-negative BxPC3 cells measured by fluorescence activated cell sorting (Fig. 1A, B) [27]. 
CAIX-selective binding was confirmed by fluorescence microscopic analyses of SK-RC-52 and BxPC3 cells labeled with 8 (Fig. 2). Only SK-RC-52 cells were stained with 8 on the surface of the cells, where CAIX resides (Fig. 2D). In order to test the relative binding of XYIMSR-01 and [113/115In]XYIMSR-01 to CAIX, we modified a competitive fluorescence polarization assay [34] for use with 8. For the competitive binding assay, after optimization for background fluorescence, we chose concentrations of 80 nM and 100 nM for 8 and CAIX, respectively. As a positive control, we employed non-fluorescent 1, which has a reported Kd value of 2.6 nM [27] (Fig. 3). These findings suggest that the DOTA-modified adducts were capable of binding CAIX with high affinity, on the order of positive control 1. We took advantage of fluorescence polarization using 8 to measure relative binding affinities for other isoforms of the carbonic anhydrases. We chose to test one cytosolic isoform (CAII) and an additional membrane-localized isoform (CAXII). Compound 8 exhibited poor binding affinity for cytosolic CAII (Fig. 4A) and about three-fold lower affinity for CAXII (Fig. 4C), indicating selectivity for CAIX. [111In]XYIMSR-01 was administered intravenously to two mice with SK-RC-52 flank tumors, followed by SPECT/CT. As shown in Fig. 5 and Supplementary Fig. 1, radiotracer uptake was observed within the tumors at 1 h post-injection. By 24 h post-injection, nearly all of the radioactivity in the kidneys and other organs had been eliminated, with tumor still retaining significant amounts of radiotracer. Image contrast improved even further by 48 h post-injection. At 1 h post-injection, 26.0% ID/g of radiotracer uptake was observed within the tumor (Table 1 and Supplementary Table 1). Tumor/blood and tumor/muscle ratios were 19.7 and 12.7, respectively. Major non-specific organ uptake was observed in kidney, lung, stomach, small intestine and liver (Table 1). Biodistribution studies conducted at later time points showed that radiotracer continued to clear from those organs while being retained within tumor. The considerable but lower affinity of [111In]XYIMSR-01 for CAXII compared with CAIX (Fig. 4) may explain the initial renal uptake of [111In]XYIMSR-01 but more rapid early clearance from kidney than from tumor, since CAXII is abundant in kidney [35]. At 24 h post-injection, tumor/blood and tumor/muscle ratios reached 178 and 68, respectively. Importantly, the tumor/kidney ratio reached 1.7, suggesting that it might be possible to detect local ccRCC in the kidney at 24 h. The enhanced hydrophilicity of [111In]XYIMSR-01, relative to the reported optical analog [27], may have contributed to the low liver uptake. The tumor/liver ratios for [111In]XYIMSR-01 and the optical agent were 8.5 and 4.0 at 24 h, respectively [27]. All other organs showed tumor/organ ratios close to or higher than 10, indicating that suitable image contrast could be expected from these imaging agents. Biodistribution of [111In]XYIMSR-01 simultaneously injected with nonradioactive competitor 1 showed competitive inhibition of uptake within tumors down to 1% ID/g at 24 h and 48 h post-injection, indicating CAIX-mediated binding (Table 1). The fast normal-tissue clearance and the long-lasting tumor retention may enable applications to radiopharmaceutical therapy with appropriately selected therapeutic radiometals.
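The relative affinities discussed above come from competitive fluorescence polarization curves whose IC50 values are extracted with a sigmoidal dose-response fit (performed in GraphPad Prism in this work; see Materials and Methods). A minimal, hypothetical sketch of such a fit with open-source tools, using made-up polarization readings rather than the actual data, might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(log_c, bottom, top, log_ic50, hill):
    """Four-parameter logistic dose-response on a log10 concentration axis;
    polarization falls from `top` to `bottom` as competitor concentration rises."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((log_c - log_ic50) * hill))

# Hypothetical competitor concentrations (M) and FP readings (mP)
conc = np.array([1e-6, 1e-7, 1e-8, 1e-9, 1e-10, 1e-11, 1e-12])
mp = np.array([62.0, 70.0, 95.0, 150.0, 205.0, 228.0, 232.0])

popt, _ = curve_fit(sigmoid, np.log10(conc), mp, p0=[60.0, 230.0, -8.5, 1.0])
print("IC50 ~ %.2g M" % 10 ** popt[2])
```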
Despite intensive effort expended in the development of CAIX inhibitors designed to engage only the active site, nuclear imaging analogs have continued to demonstrate limited success, showing < 2% ID/g within tumor and high radiotracer uptake within kidney and liver [21][22][23][24][25][26]. Peptides that bind to the surface of CAIX may provide an alternative solution to selective targeting, but they are limited by low potency and in vivo stability [27]. Dual-motif ligands that may concurrently engage the CAIX active site and surface binding demonstrated high potency and tumor uptake for [ 111 In]XYIMSR-01 and for the previously reported optical agent [27]. The hydrophilicity of [ 111 In]XYIMSR-01, with multiple carboxylates and heteroatoms, improved non-target organ clearance, including that from kidney and liver. Further detailed studies on the selectivity of [ 111 In]XYIMSR-01 for CAIX and its stability in vivo are under way. Chemical shifts (δ) were reported in ppm downfield by reference to proton resonances resulting from incomplete deuteration of the NMR solvent. ESI mass spectra were obtained on a Bruker Daltonics Esquire 3000 Plus spectrometer (Billerica, MA). HPLC purification of non-labeled compounds was performed using a Phenomenex C18 Luna 10 × 250 mm 2 column on an Agilent 1260 infinity LC system (Santa Clara, CA). HPLC purification of radiolabeled ( 111 In) ligand was performed on another Phenomenex C18 Luna 10 × 250 mm 2 and a Varian Prostar System (Palo Alto, CA), equipped with a Varian ProStar 325 UV-Vis variable wavelength detector and a Bioscan (Poway, CA) Flow-count in-line radioactivity detector, all controlled by Galaxie software. The specific radioactivity was calculated as the ratio of the radioactivity eluting at the retention time of product during the preparative HPLC purification to the mass corresponding to the area under the curve of the UV absorption. The purity of tested compounds as determined by analytical HPLC with absorbance at 254 nm was > 95%. Radiosynthesis of [ 111 In]XYIMSR-01 20 μg XYIMSR-01 was dissolved in10 μL of 0.2M NaOAc followed by addition of 3.3 mCi of 111 InCl 3 solution to provide a final pH = 5.5-6. The mixture was heated in a water bath at 65 o C for 30 min. Radiolabeling was monitored by HPLC. At completion, the reaction mixture was diluted with 1 mL of water then loaded onto a preparative HPLC column for purification. Retention FACS analysis CAIX-positive SK-RC-52 and CAIX-negative BxPC3 cells were maintained in RPMI 1640 media supplemented with 10% FBS and 1 x penicillinstreptomycin in a 37°C humidified incubator. Cells were detached from the flask with trypsin and reconstituted in RPMI 1640 media supplemented with 1% FBS at a density of 1 x 10 6 cells per mL. FITC-labeled 8 was added to the cells at the indicated concentration and incubated at room temperature for 30 min. Cells were washed twice with the same media for staining and analyzed using the FACSCalibur (BD Bioscience, San Jose, CA) instrument. Microscopic analyses CAIX-positive SK-RC-52 and CAIX-negative BxPC3 cells were seeded on to 8-well chamber glass slides (Lab-Tek ® IICC 2 ™, Nunc, Rochester NY) and incubated in RPMI 1640 media supplemented with 10% FBS and 1 x penicillin-streptomycin in a 37°C humidified incubator for 48 h. Cells were stained with 100 nM of FITC-labeled 8 for 1 h in the same growth media followed by washing twice with the same media. Cells were fixed with 10% formaldehyde (Sigma-Aldrich, Saint Louis, MO) and washed three times with PBS. 
Cells were treated with 20 nM DAPI (4′,6-diamidino-2-phenylindole) in PBS. The chambers were removed and the Vectashield mounting solution (Vector Laboratories, Inc., Burlingame, CA) was added to the sample. Fluorescence microscopic images were taken using the Nikon Eclipse 80i epifluorescence microscope (Nikon Instruments Inc., Melville, NY) and the images were processed by the Element software (Nikon Instruments Inc.). Competitive fluorescence polarization assay [34] Fluorescence polarization (FP) experiments were performed in 21 μL of the assay buffer (12.5 mM Tris-HCl, pH 7.5, 75 mM NaCl) in black flat bottom 384well microplates (Corning, Inc., New York, NY). The FP reaction employed 100 nM of purified CAIX (R&D systems, Minneapolis, MN) and 80 nM FITC-labeled 8 [27] within the assay buffer. The FP values were measured as mP units using the Victor3 multi-label plate reader equipped with excitation (485 nm) and emission (535 nm) filters (Perkin Elmer, Waltham, MA). 100 nM CAIX was incubated with serially diluted (from 1 μM to 61 fM) concentrations of the three targeting molecules, 1, XYIMSR-01, and [ 113/115 In]XYIMSR-01 for 30 min at room temperature in 384-well plates. 80 nM 8 was added to each well and the reaction was incubated for 30 min at room temperature followed by FP measurement. Experiments were carried out in triplicate and the concentration resulting in 50% response (IC 50 ) was calculated in GraphPad Prism 5 (GraphPad Software, La Jolla, CA) using the sigmoidal dose-response regression function. Fluorescence polarization for affinity comparison Human recombinant CAII, CAIX, and CAXII were purchased from R&D systems (Minneapolis, MN). 5 nM of FITC-labeled 8 were mixed with serially diluted (from 1 μM to 19 fM) isoforms of the carbonic anhydrases in PBS within 384 well Small Volume™ LoBase Microplates (Greiner Bio-One, Frickenhausen Germany). The mixtures were incubated at room temperature for 1 h. Fluorescence polarization was measured using a Safire2™ plate reader (Tecan, Morrisville, NC), with 475 nm excitation and 530 nm emission wavelengths. Imaging Mice harboring subcutaneous SK-RC-52 tumors with the lower left flank were injected with 14.8 MBq (400 μCi) of [ 111 In]XYIMSR-01 in 250 μL of PBS (pH = 7.0) intravenously (tail vein). Anesthesia was then induced with 3% isofluorane and maintained at 2% isoflurane. Physiologic temperature was maintained with an external light source while the mouse was on the gantry. Imaging employed a CT-equipped Gamma Medica-Ideas SPECT scanner (Northridge, CA). SPECT data were acquired in 64 projections at 65 s per projection using medium energy pinhole collimators. A CT scan was performed in 512 projections at the end of each SPECT scan for anatomic co-registration. CT and SPECT scans were performed at 1, 4, 8, 24, and 48 h post-injection of [ 111 In]XYIMSR-01. Imaging data sets were reconstructed using the manufacturer's software. Display of images utilized Amide software (Dice Holdings, Inc. NY). Heart, lungs, pancreas, spleen, fat, brain, muscle, small intestines, liver, stomach, kidney, urinary bladder, and tumor were collected. Each organ was weighed and the tissue radioactivity was measured with an automated gamma counter (1282 Compugamma CS, Pharmacia/ LKBNuclear, Inc., Mt. Waverly, Vic. Australia). The percentage of injected dose per gram of tissue (% ID/g) was calculated by comparison with samples of a standard dilution of the initial dose. All measurements were corrected for radioactive decay. 
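A schematic illustration of the decay-corrected %ID/g calculation described above, using hypothetical gamma-counter numbers and the approximately 2.8-day physical half-life of 111In (not the actual study data), is given below:

```python
T_HALF_H = 2.8 * 24.0  # physical half-life of 111In in hours (approx.)

def decay_correct(counts, elapsed_h):
    """Correct measured counts back to the time of injection."""
    return counts * 2.0 ** (elapsed_h / T_HALF_H)

def percent_id_per_gram(tissue_counts, tissue_mass_g, elapsed_h,
                        standard_counts, standard_dose_fraction, standard_elapsed_h):
    """%ID/g by comparison with a counted standard dilution of the injected dose.

    standard_dose_fraction: fraction of the injected dose contained in the
    standard sample (e.g. 0.01 for a 1:100 aliquot of the dose)."""
    tissue_t0 = decay_correct(tissue_counts, elapsed_h)
    injected_t0 = decay_correct(standard_counts, standard_elapsed_h) / standard_dose_fraction
    return 100.0 * tissue_t0 / injected_t0 / tissue_mass_g

# Hypothetical numbers, for illustration only
print(percent_id_per_gram(5.2e4, 0.21, 24.0, 9.0e4, 0.01, 24.0))
```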
Biodistribution data were expressed as mean ± standard deviation (SD). Prism software (GraphPad, San Diego, California) was used to determine statistical significance. Statistical significance was calculated using a paired t test. P-values < 0.0001 were considered significant.

ACKNOWLEDGMENTS

We acknowledge grants CA134675 and CA197470 for financial support. We thank Dr. Yuchuan Wang for help with image processing and Ala Lisok for expert assistance with the biodistribution studies. Disclaimers: none.

CONFLICTS OF INTEREST

The authors declare no relevant conflicts of interest.
Relativistic central-field Green's functions for the RATIP package

From perturbation theory, Green's functions are known for providing simple and convenient access to the (complete) spectrum of atoms and ions. Having these functions available, they may help carry out perturbation expansions to any order beyond the first one. For most realistic potentials, however, the Green's functions need to be calculated numerically since an analytic form is known only for free electrons or for their motion in a pure Coulomb field. Therefore, in order to facilitate the use of Green's functions also for atoms and ions other than the hydrogen-like ions, here we provide an extension to the Ratip program which supports the computation of relativistic (one-electron) Green's functions in an arbitrarily given central-field potential V(r). Different computational modes have been implemented to define these effective potentials and to generate the radial Green's functions for all bound-state energies E < 0. In addition, care has been taken to provide a user-friendly component of the Ratip package by utilizing features of the Fortran 90/95 standard such as data structures, allocatable arrays, or a module-oriented design.

Restrictions apply to the shape of the potential and the allowed energies. Apart from obeying the proper boundary conditions for a point-like nucleus, namely Z(r → 0) = Z_nuc > 0 and Z(r → ∞) = Z_nuc − N_electrons ≥ 0, the first derivative of the charge function Z(r) must be smaller than the (absolute value of the) energy of the Green's function, ∂Z(r)/∂r < |E|.

Unusual features of the program: Xgreens has been designed as a part of the Ratip package [5] for the calculation of relativistic atomic transition and ionization properties. In a short dialog at the beginning of the execution, the user can specify the choice of the potential as well as the energies and the symmetries of the radial Green's functions to be calculated. Apart from central-field Green's functions, of course, the Coulomb Green's function [6] can also be computed by selecting a constant nuclear charge Z(r) = Z_eff. In order to test the generated Green's functions, moreover, we compare the two lowest bound-state orbitals which are calculated from the Green's functions with those generated separately for the given potential. Like the other components of the Ratip package, Xgreens makes careful use of the Fortran 90/95 standard.

LONG WRITE-UP

1 Introduction

The use of Green's functions has a long tradition for solving physical problems, both in classical and quantum physics. From perturbation theory, for instance, these functions are known for providing rather simple access to the complete spectrum of a quantum system and, hence, for facilitating the computation of perturbation expansions beyond the first order in perturbation theory. Applications of the Green's functions can therefore be found not only in atomic and molecular physics but also in quantum optics, field theory, solid-state physics, and at various places elsewhere. In atomic physics, however, the use of Green's function methods has so far been mainly restricted to describing the motion of electrons in a pure Coulomb field, i.e., to the theory of hydrogen and hydrogen-like ions. For these one-electron systems, calculations have been carried out, for example, for the two-photon [1,2] and multi-photon ionization [3,4], the two-photon decay [5], the second-order contributions to the atomic polarizabilities [6] as well as for determining radiative corrections [7,8].
Less attention, in contrast, has been paid to utilizing Green's functions for non-Coulomb fields or for describing the properties of many-electron atoms and ions. Unlike the generation of the (bound and free-electron) wave functions of the Schrödinger and Dirac equations, for which a number of programs are now available also within the CPC library [9,10], there is almost no code freely available which helps generate the relativistic central-field Green's functions. As known from the literature, however, nonrelativistic central-field Green's functions were constructed by McGuire [11] and by Huillier and coworkers [12], and were successfully applied for studying the multi-photon ionization of valence-shell electrons in alkali atoms. Therefore, in order to facilitate the use of relativistic central-field Green's functions for atomic computations, here we describe and provide an extension to the Ratip package which calculates these functions (as the solution of the Dirac equation with a δ-like inhomogeneity) for an arbitrary central field V(r). In the following section, we start with summarizing the basic formulas for the computation of relativistic central-field Green's functions. Apart from a brief discussion of the Dirac Hamiltonian, this includes the defining equation for the central-field Green's function and the separation of the three-dimensional Green's function into radial and angular parts. However, since the separation has been discussed in detail elsewhere in the literature [13], we restrict ourselves to a short account of that topic and mainly focus on the computation of the radial components of the Green's functions. Section 3 later describes the program structure of Xgreens, its interactive control and how the code is distributed. Because the Xgreens program is designed as part of the Ratip package, we have used and modified several modules which were published before along with other components of the program. Section 4 explains and displays two examples of Xgreens, including (a) a dialog in order to calculate a central-field potential from Grasp92 wave functions [9] and (b) the generation of the radial Green's functions if the potential is loaded from an external file. Finally, a short summary and outlook is given in section 5.

In the Dirac Hamiltonian (1), the Coulomb potential is replaced by some (arbitrarily given) central-field potential, $V_C(r) = -Z/r \;\longrightarrow\; -Z(r)/r$. As in the nonrelativistic case, where the Green's function obeys the defining equation $(\hat H - E)\, G_E(\mathbf r, \mathbf r') = \delta(\mathbf r - \mathbf r')$, the relativistic Green's function is given by a 4 × 4 matrix [14] which satisfies the inhomogeneous equation

$$(\hat H_D - E)\, G_E(\mathbf r, \mathbf r') \;=\; I_4\, \delta(\mathbf r - \mathbf r') , \qquad (2)$$

with $I_4$ being the 4 × 4 unit matrix and where, as usual in atomic structure theory, E refers to the total energy of the electron but without its rest energy $m_e c^2$. Moreover, since in polar coordinates a central-field potential Z(r) in the Hamiltonian (1) does not affect the separation of the variables, the central-field Green's function has the same radial-angular representation as in the pure Coulomb case [13],

$$G_E(\mathbf r, \mathbf r') \;=\; \frac{1}{r r'} \sum_{\kappa m}
\begin{pmatrix}
 g^{LL}_{E\kappa}(r,r')\, \Omega_{\kappa m}(\hat{\mathbf r})\, \Omega^{\dagger}_{\kappa m}(\hat{\mathbf r}') &
 -\,\mathrm i\, g^{LS}_{E\kappa}(r,r')\, \Omega_{\kappa m}(\hat{\mathbf r})\, \Omega^{\dagger}_{-\kappa m}(\hat{\mathbf r}') \\
 \mathrm i\, g^{SL}_{E\kappa}(r,r')\, \Omega_{-\kappa m}(\hat{\mathbf r})\, \Omega^{\dagger}_{\kappa m}(\hat{\mathbf r}') &
 g^{SS}_{E\kappa}(r,r')\, \Omega_{-\kappa m}(\hat{\mathbf r})\, \Omega^{\dagger}_{-\kappa m}(\hat{\mathbf r}')
\end{pmatrix} , \qquad (3)$$

where $\Omega_{\kappa m}(\hat{\mathbf r}) = \Omega_{\kappa m}(\vartheta, \varphi)$ denotes a standard spherical Dirac spinor and $\kappa = \pm(j + 1/2)$ for $l = j \pm 1/2$ is the relativistic angular momentum quantum number; this number carries information about both the total angular momentum j as well as the parity $(-1)^l$ of the Green's function. While the summation over κ = ±1, ±2, ...
runs over all (non-zero) integers, the summation over the magnetic quantum number m = −j, −j + 1, ..., j is restricted by the corresponding total angular momentum. Moreover, the radial part of the central-field Green's function in (3), written as the 2 × 2 matrix

$$g_{E\kappa}(r,r') \;=\; \begin{pmatrix} g^{LL}_{E\kappa}(r,r') & g^{LS}_{E\kappa}(r,r') \\ g^{SL}_{E\kappa}(r,r') & g^{SS}_{E\kappa}(r,r') \end{pmatrix} ,$$

can be treated simply as a 2 × 2 matrix function which satisfies a radial (matrix) equation, Eq. (4), with $I_2$ now being the 2 × 2 unit matrix on its right-hand side. Note that in Eq. (4), α refers to Sommerfeld's fine-structure constant and that, in order to keep the equations similar to those for the Coulomb Green's functions [13], we make use of a nuclear charge function Z(r) = −rV(r) to define the central-field potential, instead of V(r) explicitly. In the radial-angular representation (3), two superscripts T and T′ were introduced to denote the individual components in the 2 × 2 radial Green's function matrix. These superscripts take the values T = L or T = S to refer to either the large or small component, respectively, when multiplied with a corresponding (2-spinor) radial solution of the Dirac Hamiltonian (1). For a pure Coulomb potential, Z(r) ≡ Z_eff, an explicit representation of the (four) components $g^{TT'}_{E\kappa}(r,r')$ of the radial Green's function can be found in Refs. [13,14,15].

2.2 Generation of radial Green's functions

When compared with the Coulomb Green's functions, not much needs to be changed for the central-field functions in Eqs. (3) and (4) except that the nuclear charge Z = Z(r) now depends on r and, hence, that analytic solutions to these equations are no longer available. Therefore, to find a numerical solution to Eq. (4), let us first mention that this matrix equation just describes coupled equations for the two independent pairs $(g^{LL}_{E\kappa}, g^{SL}_{E\kappa})$ and $(g^{SS}_{E\kappa}, g^{LS}_{E\kappa})$ of the radial components. For example, the first pair $(g^{LL}_{E\kappa}, g^{SL}_{E\kappa})$ of radial components of the Green's function has to satisfy two coupled equations, Eqs. (5) and (6), and a similar set of equations holds for the second pair $(g^{SS}_{E\kappa}, g^{LS}_{E\kappa})$. In the following, therefore, we will discuss the algorithm for solving Eqs. (5) and (6) but need not display the analogous formulas for the pair $(g^{SS}_{E\kappa}, g^{LS}_{E\kappa})$. Inserting $g^{SL}_{E\kappa}(r,r')$ from Eq. (6) into (5), we arrive at the second-order inhomogeneous differential equation (7) for the component $g^{LL}_{E\kappa}(r,r')$. The solution of this equation can be constructed as the product of two linearly independent solutions of the corresponding homogeneous equation [cf. Eq. (10) below] [14],

$$g^{LL}_{E\kappa}(r,r') \;\propto\; M^{LL}_{E\kappa}(r_<)\, W^{LL}_{E\kappa}(r_>) , \qquad r_< = \min(r,r'), \; r_> = \max(r,r') , \qquad (8)$$

where $M^{LL}_{E\kappa}(r)$ denotes a solution which is regular at the origin, and $W^{LL}_{E\kappa}(r)$ a solution regular at infinity. Below, we will obtain these functions following the numerical procedure suggested by McGuire [11]. For this, let us start by approximating the nuclear charge function Z(r) on some grid $r_i, \; i = 1, \ldots, i_{\max}$, by a set of straight lines,

$$Z(r) \;\approx\; Z_i(r) \;=\; Z_{0i} + Z_{1i}\, r , \qquad r_i \le r \le r_{i+1} , \qquad (9)$$

and by considering the homogeneous part of Eq. (7) on the i-th interval $[r_i, r_{i+1}]$ of the grid, Eq. (10). In this equation, we can drop the second argument r′ in the component $g^{LL}_{E\kappa}$, since it now appears only as a parameter, and replace it by the superscript i to denote the particular piece of the grid for which we want the solution. From the approximation (9), moreover, we see that the nuclear charge function $Z_i(r)$ within the i-th interval gives rise to a pure Coulomb potential with charge $Z_{0i}$ and a constant (from the slope term) which is simply added to the energy, $E \rightarrow E + Z_{1i}$.
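The following short sketch (illustrative Python with our own variable names, not the Fortran implementation in Xgreens) shows how the per-interval Coulomb charge and energy shift follow from a tabulated charge function under this piecewise-linear approximation:

```python
import numpy as np

def piecewise_coulomb_parameters(r, Z, energy):
    """Straight-line approximation of Z(r) on each interval [r_i, r_{i+1}].

    On each interval Z(r) ~ a_i + b_i*r, so the potential -Z(r)/r becomes
    -a_i/r - b_i: a pure Coulomb potential of charge a_i plus a constant
    that merely shifts the energy, E -> E + b_i."""
    r = np.asarray(r, float)
    Z = np.asarray(Z, float)
    slope = np.diff(Z) / np.diff(r)        # b_i
    intercept = Z[:-1] - slope * r[:-1]    # a_i
    return intercept, slope, energy + slope

# Example: a crude screened charge for a neutral, gold-like atom on a logarithmic grid
r = np.logspace(-5, 0.7, 300)
Z = 1.0 + 78.0 * np.exp(-r / 0.5)          # illustrative screening model only
a_i, b_i, E_eff = piecewise_coulomb_parameters(r, Z, energy=-2.0)
```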
We therefore find that, within the given interval, the regular and irregular solutions in (8) can both be written as linear combinations of the corresponding solutions for a Coulomb field,

$$M^{LL}_{E\kappa}(r) \;=\; f_{i,1}\, M^{i,\mathrm{Coulomb}}_{E\kappa}(r) \;+\; f_{i,2}\, W^{i,\mathrm{Coulomb}}_{E\kappa}(r) , \qquad (11)$$
$$W^{LL}_{E\kappa}(r) \;=\; g_{i,1}\, M^{i,\mathrm{Coulomb}}_{E\kappa}(r) \;+\; g_{i,2}\, W^{i,\mathrm{Coulomb}}_{E\kappa}(r) , \qquad r_i \le r \le r_{i+1} , \qquad (12)$$

with constants $\{f_{i,1}, f_{i,2}, g_{i,1}, g_{i,2}\}$ which still need to be determined. For the Coulomb potential, the functions $M^{i,\mathrm{Coulomb}}_{E\kappa}(r)$ and $W^{i,\mathrm{Coulomb}}_{E\kappa}(r)$ are known analytically [14] and are given, in Eqs. (13) and (14), in terms of the Kummer and Tricomi functions M(a, b, r) and U(a, b, r) [16,17], respectively, together with interval-dependent quantities $s_i$, $t_i$ and $q_i$. The set of constants $\{f_{i,1}, f_{i,2}, g_{i,1}, g_{i,2}; \; i = 1, ..., i_{\max}\}$ can be determined from the fact that the two functions $M^{LL}_{E\kappa}(r)$ and $W^{LL}_{E\kappa}(r)$ in the ansatz (8), as well as their derivatives, need to be continuous in r, and that they behave regularly at the origin or at infinity, respectively. The constraint of being regular at the origin, for instance, requires the coefficients $f_{i=1,1} = 1$ and $f_{i=1,2} = 0$ and can be used, together with the continuity of $M^{LL}_{E\kappa}(r)$ and $M'^{LL}_{E\kappa}(r)$ at the interval boundaries (where the Coulomb functions of Eqs. (13) and (14) enter), in order to determine all the coefficients $f_{i,j}$ up to a normalization constant. A similar recurrence procedure also applies to the coefficients $g_{i,j}$, but by starting from 'infinity', that is, with $g_{i_{\max},1} = 0$ and $g_{i_{\max},2} = 1$, and by going backwards in the index i towards the origin. To finally determine the normalization of the radial component $g^{LL}_{E\kappa}(r,r')$, i.e. of the coefficients $\{f_{i,j}, g_{i,j}\}$, we may return to Eq. (7) and integrate it over a small interval $r' - \varepsilon \le r \le r' + \varepsilon$. Taking the limit $\varepsilon \rightarrow +0$ for any r′ and carrying out some algebraic manipulations shows that the first derivative of $g^{LL}_{E\kappa}(r,r')$ with respect to r 'jumps' at r = r′ [Eq. (20)]. We can use the right-hand side of Eq. (20) together with the derivative of Eq. (8) to determine the normalization constants $c_f$ and $c_g$ with which the coefficients $f_{i,j}$ and $g_{i,j}$ need to be multiplied in order to obtain the normalized radial component $g^{LL}_{E\kappa}(r,r')$. Having constructed $g^{LL}_{E\kappa}(r,r')$ piecewise as solution to Eq. (7), we may obtain the second component $g^{SL}_{E\kappa}(r,r')$ of this pair simply from Eq. (6), where the derivatives of the Kummer and Tricomi functions in expressions (13) and (14) can be calculated by means of the standard formulae [17]

$$\frac{d}{dz} M(a,b,z) \;=\; \frac{a}{b}\, M(a+1, b+1, z) , \qquad \frac{d}{dz} U(a,b,z) \;=\; -a\, U(a+1, b+1, z) .$$

In practice, both functions M(a, b, z) and U(a, b, z) are required only for real arguments a, b, and z but need to be calculated with a rather sophisticated algorithm [18] in order to ensure numerical stability and to provide sufficiently accurate results. A similar numerical procedure has to be carried out also for the second pair $(g^{SS}_{E\kappa}, g^{LS}_{E\kappa})$ of radial components. This gives rise, of course, to another set of coefficients and, finally, to the full radial Green's function of Eq. (4). The $4\, i_{\max}$ coefficients for the (piecewise) regular and irregular solutions in the ansatz (11), (12) and (8) certainly provide, together with a few numerical procedures, the most compact representation of the radial central-field Green's functions. For the further computation of matrix elements and atomic properties, however, these radial components are usually represented (and stored) on some (two-dimensional) grid in r and r′, as we will discuss below.
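Although Ratip evaluates these confluent hypergeometric functions with its own routines [18], the derivative formulae above can be checked against standard numerical libraries. A small, illustrative Python sketch (not part of the package) is:

```python
from scipy.special import hyp1f1, hyperu

def M(a, b, z):
    return hyp1f1(a, b, z)      # Kummer function M(a, b, z)

def U(a, b, z):
    return hyperu(a, b, z)      # Tricomi function U(a, b, z)

def dM(a, b, z):
    return a / b * hyp1f1(a + 1.0, b + 1.0, z)   # dM/dz = (a/b) M(a+1, b+1, z)

def dU(a, b, z):
    return -a * hyperu(a + 1.0, b + 1.0, z)      # dU/dz = -a U(a+1, b+1, z)

# Finite-difference check of the two derivative identities (real arguments only)
a, b, z, h = 0.7, 2.3, 1.5, 1.0e-6
print(dM(a, b, z), (M(a, b, z + h) - M(a, b, z - h)) / (2 * h))
print(dU(a, b, z), (U(a, b, z + h) - U(a, b, z - h)) / (2 * h))
```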
2.3 Tests on the accuracy of the radial Green's functions

To make further use of the Green's functions in applications, it is necessary to have a simple test of their numerical accuracy. For the radial-spherical representation of these functions as defined in Eq. (3), such a test is easily constructed since the Green's function contains the information about the complete spectrum of the Hamiltonian (1) and, hence, about any of its eigenstates $\psi_n(\mathbf r)$. Making use of the well-known expansion of the Green's function in terms of the eigenstates of the Hamiltonian, the relation

$$\int G_E(\mathbf r, \mathbf r')\, \psi_n(\mathbf r')\; d^3 r' \;=\; \frac{\psi_n(\mathbf r)}{E_n - E} \qquad (22)$$

can easily be derived from the orthogonality of the eigenfunctions and holds for all energies $E \ne E_n$. With the representation (3) in mind, the relation (22) can also be written in terms of the radial wave and Green's functions,

$$\tilde P_{n\kappa}(r) \;=\; (E_n - E) \int \left[\, g^{LL}_{E\kappa}(r,r')\, P_{n\kappa}(r') \;+\; g^{LS}_{E\kappa}(r,r')\, Q_{n\kappa}(r') \,\right] dr' , \qquad (23)$$

and similarly for $\tilde Q_{n\kappa}(r)$ with the components $g^{SL}_{E\kappa}$ and $g^{SS}_{E\kappa}$, where $P_{n\kappa}(r)$ and $Q_{n\kappa}(r)$ are the large and small components of the (Dirac) radial 2-spinors. As indicated by the tilde on the left-hand side of Eq. (23), therefore, this relation may serve as a test of the accuracy of the Green's functions if, for instance, the radial components from both sides of the equation are compared with each other or if some proper overlap integral is calculated. To test the precision of the Green's functions in Xgreens, we generate the radial bound-state wave functions $P_{n\kappa}(r)$ and $Q_{n\kappa}(r)$ as solutions of the radial Dirac equation with the same potential $-Z(r)/r$ as applied before, by making use of the program by Salvat et al. [10]. This solver has been embedded in our code and is utilized in order to calculate the relativistic components. With these radial functions, we then compute the two integrals in (23) for obtaining $\tilde P_{n\kappa}(r)$ and $\tilde Q_{n\kappa}(r)$, respectively, as functions of r. By default in Xgreens, we use the overlap integral

$$\int \left[\, \tilde P_{n\kappa}(r)\, P_{n\kappa}(r) \;+\; \tilde Q_{n\kappa}(r)\, Q_{n\kappa}(r) \,\right] dr \qquad (26)$$

for the two lowest principal quantum numbers n of the (given) symmetry κ, together with the corresponding normalization integral, as a numerical measure of the accuracy of the generated Green's functions. These integrals are displayed explicitly by the program (on demand). For a standard (logarithmic) grid with about 300 nodes, the deviation of the overlap integral (26) from unity is typically within the range $10^{-2} \ldots 10^{-3}$. For such a grid, the accuracy of the computations is limited by the linear interpolation of the wave and Green's functions. The accuracy can, however, be increased for a larger number $i_{\max}$ of grid points, as shown in section 4.

3 Program structure

Ratip is a suite of program components to calculate a variety of (relativistic) atomic transition and ionization properties. Among other features, the components of Ratip support, for instance, investigations of the autoionization of atoms and ions, the interaction with the radiation field, the parametrization of angular distributions in the emission of electrons and photons, or the analysis of interference effects between radiative and non-radiative processes. In quite different case studies, Ratip has helped analyze and interpret a large number of spectra and experiments. In order to provide efficient tools for atomic computations, it also incorporates a number of further components (besides the main program components for calculating certain physical properties) which facilitate the transformation of wave functions between different coupling schemes or the generation of continuum orbitals and angular coefficients.
With the development of the Xgreens program, we now provide an additional component to generate the relativistic central-field Green's functions within the Ratip environment. This is rather independent of their later 'use', where different properties such as the two-photon ionization and decay or various polarizabilities of atoms and ions might be calculated with the help of these functions. Owing to the large number of possible applications, however, support for certain properties will largely depend on the requests of users and on our further experience with the code. Not much more needs to be said about Ratip's overall structure. A detailed account of its present capabilities has been given previously [22]. With regard to the further development of Ratip, we just note that our main concern now pertains to a long life-cycle of the code and to an object-oriented design within the framework of Fortran 90/95. With the present set-up of the Xgreens component, we continue our effort to provide an atomic code which is prepared for future applications in dealing with open-shell atoms and ions.

Data structures and program execution

Certainly, the major purpose of Xgreens is the computation of (one-electron) radial Green's functions for some specified or externally given central-field potential, which can later be utilized for calculating radial matrix elements in which one of the components $g^{TT'}_{E\kappa}(r,r')$ of the radial Green's function in (4) appears under a double radial integral together with a spherical Bessel function $j_\Lambda(kr)$ and the radial wave functions of some bound or free-electron states α and β. Matrix elements of this type play a key role, for instance, for studying two-photon ionization (bound-free) or decay (bound-bound) processes. Since the (four) components of the radial Green's functions depend on two radial coordinates, r and r′, a rather large amount of data usually needs to be calculated by means of Xgreens. These data are finally stored in an external (.rgf) radial Green's function file, as will be discussed in the next subsection. To generate and utilize the Green's functions, two derived data types, TGreens_single_rgf and TGreens_rgf, have been introduced as shown in Figure 1. Using these data types, a single Green's function is kept internally in terms of its expansion coefficients from Eqs. (11) and (12) for the large-large component, and a similar set of coefficients for the small-small component, on the given grid. Apart from the particular representation of the grid, these data structures contain the energy and symmetry of the radial Green's function, the nuclear charge function(s) as well as information about the mode of interpolation of the radial components between the grid points and the maximum tabulation point mtp. In Xgreens, a variable of type(TGreens_rgf) is used in order to store the information about all the requested functions, and a few additional procedures are provided to calculate from these structures the values of the radial components for any set of arguments. The basic steps in the execution of the program are very similar to those of the other components of the Ratip package. At the beginning, all the necessary input data are read in by an (interactive) dialog. The requested Green's functions are then calculated in turn and tested for their accuracy. The (main) output of the program is a formatted ASCII file which provides an interface for further applications, either within the Ratip environment or as worked out by the user.
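Once the radial components are tabulated on a two-dimensional grid, such double radial integrals can be evaluated by straightforward quadrature. The following Python sketch is purely illustrative (a hypothetical integrand of the generic form orbital × Bessel function × Green's-function component × Bessel function × orbital; the actual integrand and multipole structure depend on the process and are handled inside Ratip):

```python
import numpy as np
from scipy.special import spherical_jn
from scipy.integrate import trapezoid

def radial_matrix_element(r, u_a, u_b, g, k1, L1, k2, L2):
    """int dr int dr'  u_a(r) j_L1(k1 r) g(r, r') j_L2(k2 r') u_b(r').

    r        : common radial grid (1-d array)
    u_a, u_b : radial wave functions tabulated on r
    g        : one radial Green's function component, g[i, j] = g(r_i, r_j)
    """
    outer = u_a * spherical_jn(L1, k1 * r)                   # factor depending on r
    inner = u_b * spherical_jn(L2, k2 * r)                   # factor depending on r'
    over_rprime = trapezoid(g * inner[np.newaxis, :], r, axis=1)  # integral over r'
    return trapezoid(outer * over_rprime, r)                 # integral over r
```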
Interactive control and output of the program Like the other components of the Ratip package, Xgreens is controlled by a dialog at the beginning of the execution. In this dialog, all information need to be specified for obtaining the central-field potential as well as about the number and symmetry of the radial Green's functions to be generated. Owing to various choices for defining the spherical potential, however, slightly different dialogs may occur during the execution. An example is shown in Figure 2, where first a Hartree-plus-statistical-exchange (HX) potential due to Cowan [23,24] is generated from the ground-state wave functions of atomic gold, and then two radial Green's functions are calculated within this potential. Apart from the nuclear charge, given in terms of a Grasp92 .iso isotope data file, the dialog first prompts for specifying (or confirming) some basic parameters such as the speed of light, the grid parameters as well as for the energy units, in which the further input and output is done by the program. This is followed by a prompt for determining the central-field potential for the radial Green's functions. At present, Xgreens supports four models including a pure Coulomb potential (for the calculation of Coulomb Green's functions) as well two potentials (HFS and HX) to incorporate also the exchange interaction in some approximate form. In the Hartree model, in contrast, only the direct potential of some atomic level is applied in the later computation of the Green's functions. In all these cases, however, the wave functions of the selected state have to be specified in terms of the corresponding Grasp92 files for the configuration state functions (CSF), mixing coefficients as well as the radial orbitals. If computed internally, the potential can be written also to disc and used in further applications. Beside of the internal set-up of the potential, in addition, a user-defined potential can be given also explicitly and read in by the program. Although the nuclear charge is specified by means of a .iso file (in line with all other components of Ratip), the Green's functions are always generated for a point-like nucleus in order to 'utilize' their regular behaviour of the radial components at the origin [cf. section 2.2]. For each radial Green's function, then the energy and the one-particle symmetry κ need to be specified. The functions are finally written to the .rgf radial Green's function file whose name has also to be given at the end of the dialog. In this file, the radial Green's functions are given at the grid (in r and r ′ ) as specified originally. Moreover, to test the accuracy of the Green's functions, a number of overlap and normalization integrals are calculated on request in order to 'compare' a few low-lying orbitals, as obtained from the Green's functions by an integration over r ′ , with independently generated functions from Salvat's program [10]. In average, the computation of a single Green's function requires about 2 min on a 450 MHz Pentium PC and approximately the same time for the 'test' integrals. In the latter case, the rather large demand on CPU time arises mainly from the 2 i max radial integrals in r ′ −space, which need first to be performed in order to obtain the radial orbital functions from the given Green's function components. 
Distribution and installation A program of Ratip's size cannot be maintained over a longer period without that certain changes and the adaptation of the code to recent developments (in either the hardware or the operating systems) become necessary from time to time. For the distribution of the code, therefore, we follow our previous style in that the Ratip package is provided as whole. The main emphasis with the present extension of the program, however, is placed on the design and implementation of the two (new) modules rabs greens.f90 and rabs utilities 2.f90 which contain the code for the generation and the test of the radial Green's functions. In addition, the four modules (rabs special functions.f90, rabs error control.f90, rabs greens external.f90, and rabs cpc salvat 1995.f) became necessary and had to be appended to the code in order to support an accurate computation of the hypergeometric and a few related functions from mathematical physics as well as for providing Salvat's solver [10] for the generation of the radial wave functions (P(r), Q(r)) within a given potential. In the present version, the Ratip package now contains the source code for the 8 components Anco, Cesd, Lsj, Greens, Rcfp, Relci, Reos as well as the Utilities for performing a number of small but frequently occurring tasks. Together with the six additional modules from above, the overall program therefore comprises about 40.000 lines of code, separated into 22 modules (apart from the main program components and three libraries). All these source files are provided in the Ratip root directory. As before, there is one makefile for each individual component from which the corresponding executable can be obtained simply by typing the command make -f make-component, that is make -f make-greens in the present case. For most components of the Ratip package, in addition, we provide a test suite in a subdirectory test-component of the root where component refers to the names above. For example, the directory test-greens contains all the necessary files to run the sample calculations which we will discuss below. Before, however, the makefiles can be 'utilized', a number of global variables need to be specified for the compilation and linkage of the program. In a Linux or Unix environment, this is achieved by modifying (and sourceing) the script file make-environment which saves the user from adopting each makefile independently. In fact, the script make-environment just contains a very few lines for specifying the local compiler, the options for the compiler, as well as the local paths for the libraries. Under Windows, in contrast, not much help need to be given to the user since most compiler nowadays provide the feature to define 'projects'. In this case, it is recommended to read off the required modules and source files from the makefile of the program component and to 'declare' them directly to the project as associated with the given component. In the past year, the Xgreens program has been applied under several Linux systems using the Lahey Fortran 95 compiler. The Ratip program as a whole has been found also portable rather easily to other platforms such as IBM RS/6000, Sun OS, or to the PC world. The file Read.me file in the ratip root directory contains further details for the installation. Overall, however, it is expected that it will not be difficult to compile the program also under other operating systems. 
4 Test calculations

To illustrate the use of the Xgreens program, we briefly discuss and display two examples. They refer to the generation of the radial Green's functions for a few selected (one-electron) symmetries in atomic gold (Z = 79) and demonstrate how the accuracy of these functions can be increased by a proper choice of the radial grid. However, before the radial components can be generated, we first need to specify the potential, as shown in Figure 2. For the present examples, we have chosen an HX (Hartree-plus-statistical-exchange) potential due to the work of Cowan [23,24], starting from the ground-state wave functions of neutral gold. As usual for Grasp92 and Ratip, these wave functions have to be provided in terms of the .csl configuration state list and the .mix configuration mixing files as well as the .out radial orbital file. All of these input files are also provided in the test-greens/ subdirectory of the Ratip root directory and, thus, can be used for the present tests. In Figure 2, moreover, the potential is saved to the central-field potential file z79-au-hx.pot and later re-utilized in Figure 3. Excerpts from the interactive dialogs of Figures 2 and 3 read:

XGREENS: Calculation of relativistic central-field Green's functions with energies E < 0 (Fortran 95 version); (C) Copyright by P Koval and S Fritzsche, Kassel (2004).
Enter ASF level number for which the potential should be calculated: 1
Enter the subshell (e.g. 1s, 2p-, ...) which is specific to Cowan's HX method: 1s
Write the generated potential to disc ? y
Enter the name of .pot central-field potential file: z79-au-hx.pot
Radial Green's functions will be calculated within the range 0 <= r, r' <= 4.797 a.u. with the boundaries of the nuclear charge function Z(0) = 79.00 and Z(r_max) = 1.951
In the given potential the one-electron energies (in eV) are: ...
Enter the name of a .pot file with a central-field potential: z79-au-hx.fld
Radial Green's functions will be calculated within the range 0 <= r, r' <= 5.269 a.u. with the boundaries of the nuclear charge function Z(0) = 79.00 and Z(r_max) = 1.000
In the given potential the one-electron energies (in eV) are: 1s: -8.131E+04 2s: -1. ...

The output of the two examples is shown in the Test Run Output below. As for the wave functions, the 'quality' of the generated Green's functions becomes fully apparent only if they are used for calculating observables which can be compared to experiment. Although the calculation of observables is not the aim of the present work, it is still possible to test the accuracy of the generated Green's functions by re-calculating the radial orbitals (as discussed in section 2.3) and by comparing these orbitals with those obtained from the integration of the Dirac equation for the given central field. This 'comparison' is done in Xgreens on request by evaluating, for the two lowest principal quantum numbers n, the (radial) overlap integrals of the re-calculated functions with the solutions of Salvat's program. In addition, the normalization integrals are also displayed, as seen from the Test Run Output. Except for a few cases, the deviation of these integrals from 1.0 is typically well below 0.1 % on the standard grid with about 300 mesh points in r and r′. For these deviations, there are two sources of numerical 'inaccuracy' which arise from the test procedure and are not related to the generation of the radial Green's functions.
They are caused by the fact that by using Salvat's solver [10], a third-order spline is used to represent the potential at intermediate points in r in contrast to the linear representation (9) for the generation of the Green's functions. Moreover, the integration of the radial integrals uses the standard Grasp92 procedure and, hence, is not adapted as well to Salvat's solutions. The accuracy of the Green's functions (and the test integrals) can be improved however if the number of grid is enlarged, i. e. the spacing between the mesh points reduced. In our second example, cf. Figure 3, we therefore adopt a grid with approximately twice the number of mesh points. Note that this requires four times as much storage as the radial components are functions in r and r ′ . In this test case, moreover, the radial Green's functions are generated for four different symmetries s, p 1/2 , p 3/2 , d 3/2 , and d 5/2 . When the Test Run Output from this example is compared with those from example 1, we see that the accuracy in the overlap integrals in increased by about a factor of 5. For this example, the generation and test took about 30 minutes in total on a 450 MHz Pentium III. After the termination of Xgreens, the radial Green's functions file (e. g. z79-au-rgf.rgf from example 2) contains all the Green's function components which were generated during the execution. Since this is a formatted file, it can easily be read and manipulated by any text editor. This (radial Green's function) file stores the individual components g T T ′ Eκ (r, r ′ ) of the Green's functions as 2-dimensional arrays within a simple file structure. An (internal) file signature # DCFGF in the first line is followed by the mode of interpolation and the number of Green's functions which are provided by the file. Then, each Green's function is specified (in turn), starting with the energy and symmetry of the function and followed by a table of six rows where the first two rows refer to the coordinates r and r ′ (in atomic units) and and the other rows to the four component functions g LL Eκ (r, r ′ ), g LS Eκ (r, r ′ ), g SL Eκ (r, r ′ ) and g SS Eκ (r, r ′ ) in this particular order. Although this format is convenient for the later use of these functions, it may lead to rather large data files, especially if some larger number of Greens' functions need to be generated. In practise, however, not much problems are expected with this file structure because disc storage became cheap recently and because, for most applications, we expect the Green's functions to be generated on demand by making use of the corresponding modules instead of keeping them in external files. Typically, the radial wave and Green's functions occur as part of matrix elements and, thus, first require an additional integration (over r and/or r ′ ) before any observable quantity is obtained. The format of the radial Green's function (.rgf) file can be used easily in order to plot the various radial components. In Figure 4, therefore, we display the nuclear charge function Z(r) and the two radial Green's function components g LL E s (r 0 , r) and g SL E s (r 0 , r) as function of (second) argument r and taken for r 0 = 0.204076 a. u. The radial Green's functions are calculated in the ground-state potential of atomic gold, as applied in example 2, for the energy E = 1000 eV. As seen from Figure 4, the large-large component g LL E s is continuous at r = r ′ while the small-large component g SL E s jumps owing to Eqs. (20) and (21) in space. 
Both components, moreover, behave 'regularly' at the origin as was constructed explicity by Eq. (8). Outlook. Applications of central-field Green's functions Apart from fundamental interest in having the central-field Green's functions available for 'relativistic' electrons, these functions are also useful for the perturbative treatment of atoms and ions. As discussed in subsection 2.3, namely, the Green's function provide a simple access to the complete spectrum of a quantum system and, hence, can be utilized for carrying out the summation over all the (one-particle) states of the spectrum as required in second-and higher-order perturbation theory. However, since we only provide the one-electron Green's functions, they can be applied for just those processes where the 'perturbations' are described in terms of one-particle operators. Perhaps, the most-studied perturbation of this type is the interaction of atoms and ions with the radiation field, which -within a sufficiently strong field -may lead not only to atomic photoionization but (in second order) also to the twophoton ionization and decay of atoms and ions and to several other processes. During the last two years, we utilized the central-field Green's functions mainly for exploring the two-photon ionization for the helium-like ions and for the inner-shell electrons from atomic neon and argon. Apart from the one-particle Green's functions (as discussed in this work), there are a number of further processes such as the double Auger decay, for which the two-particle Green's functions G E (r 1 , r 2 ; r ′ 1 , r ′ 2 ) are required in order to calculate the cross sections, decay rates and/or angular distributions. These two-particle functions are solutions to a defining equation similar to Eq. (2) but where the (one-particle) Dirac HamiltonianĤ D is replaced by the two-particle operatorĤ D,1 +Ĥ D,2 +V 12 , including the electron-electron interactionV 12 explicitly. Even by making use of a proper radial-angular decomposition of such Green's functions, their radial part g k κ 1 ,κ 2 (r 1 , r 2 ; r ′ 1 , r ′ 2 ) would depend then on four radial variables and an overall rank k, similar as known from the tensorial decomposition of the electron-electron interaction [25]. Therefore, an internal representation and generation of these (radial) functions appear rather infeasible in practise. An alternative to these two-particle Green's functions is given by the (so-called) modified pair functions Φ E,α (r 1 , r 2 ) which satisfy the equation whereŴ is a one-or two-particle operator which depends on the physical task to be solved and |Ψ α (P JM ) is one of the (two-electron) solutions of the corresponding homogeneous equation. These modified pair functions have the advantage that they only depend on two radial variables similar as the one-particle Green's functions. On the other hand, of course, these functions now depend on some (initial) two-particle state |Ψ α (P JM ) as well as on the particular choice of the interaction operatorŴ and, hence, are less general when compared with the Green's function from above. We currently investigate the possibilities for generating also such modified pair functions within the framework of Ratip and for their (later) use in the description of the double Auger decay. But already the one-particle Green's functions from this work enables one to explore a number of atomic properties which have not been studied before for complex atoms or, at least, not within a relativistic framework. 
June 4, 2004

Abstract

From perturbation theory, Green's functions are known to provide a simple and convenient access to the (complete) spectrum of atoms and ions. Once these functions are available, they can be used to carry out perturbation expansions to any order beyond the first. For most realistic potentials, however, the Green's functions need to be calculated numerically, since an analytic form is known only for free electrons or for their motion in a pure Coulomb field. Therefore, in order to facilitate the use of Green's functions also for atoms and ions other than the hydrogen-like ions, we here provide an extension to the Ratip program which supports the computation of relativistic (one-electron) Green's functions in an (arbitrarily given) central-field potential V(r). Different computational modes have been implemented to define these effective potentials and to generate the radial Green's functions for all bound-state energies E < 0. In addition, care has been taken to provide a user-friendly component of the Ratip package by utilizing features of the Fortran 90/95 standard such as data structures, allocatable arrays, and a module-oriented design.

Restrictions on the complexity of the problem: Restrictions apply to the shape of the nuclear charge function and to the allowed energies. Apart from obeying the proper boundary conditions for a point-like nucleus, namely Z(r -> 0) = Z_nuc > 0 and Z(r -> infinity) = Z_nuc - N_electrons >= 0, the first derivative of the charge function Z(r) must be smaller than the (absolute value of the) energy of the Green's function, dZ(r)/dr < |E|.

Unusual features of the program: Xgreens has been designed as a part of the Ratip package [5] for the calculation of relativistic atomic transition and ionization properties. In a short dialog at the beginning of the execution, the user can specify the choice of the potential as well as the energies and the symmetries of the radial Green's functions to be calculated. Apart from central-field Green's functions, the Coulomb Green's function [6] can of course also be computed by selecting a constant nuclear charge Z(r) = Z_eff. In order to test the generated Green's functions, moreover, we compare the two lowest bound-state orbitals calculated from the Green's functions with those generated separately for the given potential. Like the other components of the Ratip package, Xgreens makes careful use of the Fortran 90/95 standard.

Introduction

The use of Green's functions has a long tradition in solving physical problems, both in classical and in quantum physics. From perturbation theory, for instance, these functions are known to provide rather simple access to the complete spectrum of a quantum system and, hence, to facilitate the computation of perturbation expansions beyond first order. Applications of Green's functions are therefore found not only in atomic and molecular physics but also in quantum optics, field theory, solid-state physics, and in various places elsewhere. In atomic physics, however, the use of Green's function methods has so far been restricted mainly to describing the motion of electrons in a pure Coulomb field, i.e. to the theory of hydrogen and hydrogen-like ions. For these one-electron systems, calculations have been carried out, for example, for the two-photon [1,2] and multi-photon ionization [3,4], the two-photon decay [5], the second-order contributions to the atomic polarizabilities [6], as well as for determining radiative corrections [7,8].
Less attention, in contrast, has been paid to utilize Green's functions for non-Coulomb fields or for describing the properties of many-electron atoms and ions. Unlike to the generation of the -bound and free-electron -wave functions to the Schrödinger and Dirac equation, for which a number of programs are available now also within the CPC-library [9,10], there is almost no code freely available which helps generate the relativistic centralfield Green's functions. As known from the literature, however, nonrelativistic central-field Green's functions were constructed by McGuire [11] and by Huillier and coworkers [12], and were successfully applied for studying the multi-photon ionization of valence-shell electrons in alkali atoms. Therefore, in order to facilitate the use of relativistic central-field Green's functions for atomic computations, here we describe and provide an extension to the Ratip package which calculates these functions (as the solution of the Dirac equation with a δ−like inhomogeneity) for an arbitrary central field V(r). In the following section, we start with summarizing the basic formulas for the computation of relativistic central-field Green's functions. Apart from a brief discussion of the Dirac Hamiltonian, this includes the defining equation for central-field Green's function and the separation of the three-dimensional Green's function into radial and angular parts. However, since the separation has been discussed in detail elsewhere in the literature [13], we restrict ourselves to a short account on that topic and mainly focus on the computation of the radial components of the Green's functions. Section 3 later describes the program structure of Xgreens, its interactive control and how the code is distributed. Because the Xgreens program is designed as part of the Ratip package, we have used and modified several modules which were published before along with other components of the program. Section 4 explains and displays two examples of Xgreens, including (a) a dialog in order to calculate a central-field potential from Grasp92 wave functions [9] and (b) the generation of the radial Green's functions if the potential is loaded from an external file. Finally, a short summary and outlook is given in section 5. the Coulomb potential is replaced by some (arbitrarily given) central-field potential, V C (r) = − Z r −→ − Z(r) r . As in the nonrelativistic case where the Green's function obeys the defining equation (Ĥ − E ) G E (r, r ′ ) = δ(r − r ′ ) , the relativistic Green's function is given by a 4 × 4 matrix [14] which satisfies the inhomogeneous equation with I 4 being the 4 × 4 unit-matrix and where, as usual in atomic structure theory, E refers to the total energy of the electron but without its rest energy m e c 2 . Moreover, since in polar coordinates a central-field potential Z(r) in the Hamiltonian (1) does not affect the separation of the variables, the central-field Green's function has the same radial-angular representation as in the pure Coulomb case [13] G where Ω κm (r) = Ω κm (ϑ, ϕ) denotes a standard spherical Dirac-spinor and κ = ± (j + 1/2) for l = j ± 1/2 is the relativistic angular momentum quantum number; this number carries information about both the total angular momentum j as well as the parity (−1) l of the Green's function. While the summation over κ = ±1, ±2, ... runs over all (non-zero) integers, the summation over the magnetic quantum number m = −j, −j + 1, . . . , j is restricted by the corresponding total angular momentum. 
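For orientation, the two relations just described in words can be sketched in LaTeX as follows. The defining equation simply restates the verbal description above (a 4 x 4 matrix Green's function with a delta-like inhomogeneity); the partial-wave form is only schematic, and the phases and the off-diagonal spinor blocks, which follow the conventions of Ref. [13], are deliberately not spelled out here.

```latex
% Sketch of the defining equation (2) and of the structure of the
% radial-angular representation (3); off-diagonal blocks and phase
% conventions (cf. Ref. [13]) are omitted.
\begin{equation}
  \bigl( \hat{H}_{\mathrm D} - E \bigr)\, G_E(\mathbf r, \mathbf r')
  \;=\; \delta(\mathbf r - \mathbf r')\, \mathbb{I}_4
\end{equation}
\begin{equation}
  G_E(\mathbf r, \mathbf r') \;=\; \frac{1}{r\,r'} \sum_{\kappa m}
  \begin{pmatrix}
    g^{LL}_{E\kappa}(r,r')\;\Omega_{\kappa m}(\hat{\mathbf r})\,
      \Omega^{\dagger}_{\kappa m}(\hat{\mathbf r}') & \cdots \\[2pt]
    \cdots &
    g^{SS}_{E\kappa}(r,r')\;\Omega_{-\kappa m}(\hat{\mathbf r})\,
      \Omega^{\dagger}_{-\kappa m}(\hat{\mathbf r}')
  \end{pmatrix}
\end{equation}
```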
Moreover, the radial part of the central-field Green's function g LL Eκ (r, r ′ ) g LS Eκ (r, r ′ ) g SL Eκ (r, r ′ ) g SS Eκ (r, r ′ ) in (3) can be treated simply as a 2 × 2 matrix function which satisfies the equation with I 2 now being the 2 × 2 unit-matrix. Note that in Eq. (4), α refers to Sommerfeld's fine structure constant and that, in order to keep the equations similar to those for the Coulomb Green's functions [13], we make use of a nuclear charge function Z(r) = −rV(r) to define the central-field potential, instead of V(r) explicitly. In the radial-angular representation (3), two superscripts T and T ′ were introduced to denote the individual components in the 2 × 2 radial Green's function matrix. These superscripts take the values T = L or T = S to refer to 1 Here and in the following, we use atomic units (me = = e 2 /4πǫ0 = 1) if not stated otherwise. 5 either the large or small component, respectively, when multiplied with a corresponding (2spinor) radial solution of the Dirac Hamiltonian (1). For a pure Coulomb potential, Z(r) ≡ Z eff , an explicit representation of the (four) components g T T ′ Eκ (r, r ′ ) of the radial Green's function can be found in Refs. [13,14,15]. Generation of radial Green's functions When compared with the Coulomb Green's functions, not much need to be changed for the central-field functions in Eqs. (3) and (4) except that the nuclear charge Z = Z(r) now depends on r and, hence, that analytic solutions to these equations are no longer available. Therefore, to find a numerical solution to Eq. (4), let us first mention that this matrix equation just describes coupled equations for the two independent pairs (g LL Eκ , g SL Eκ ) and (g SS Eκ , g LS Eκ ) of the radial components. For example, if we consider the first pair (g LL Eκ , g SL Eκ ) of radial components of the Green's function, it has to satisfy the two equations and a similar set of equations holds for the second pair (g SS Eκ , g LS Eκ ). In the following, therefore, we will discuss the algorithm for solving Eqs. (5) and (6) but need not display the analogue formulas for the pair (g SS Eκ , g LS Eκ ). Inserting g SL Eκ (r, r ′ ) from Eq. (6) into (5), we arrive at the second-order inhomogeneous differential equation for the component g LL Eκ (r, r ′ ). Solution of this equation can be constructed as product of two linearly independent solutions [14] g LL Eκ (r, r ′ ) = M LL Eκ (min(r, r ′ )) · W LL Eκ (max(r, r ′ )) (8) for the corresponding homogeneous case [cf. Eq. (10) below], where M LL Eκ (r) denotes a solution which is regular at the origin, and W LL Eκ (r) a solution regular at infinity. Below, we will obtain these functions following the numerical procedure as suggested by McGuire [11]. For this, let us start with approximating the nuclear charge function Z(r) on some grid r i , i = 1, . . . , i max in terms of a set of straight lines and from the homogeneous part of Eq. (7) 6 as given for the i-th interval [r i , r i+1 ] of the grid. In this Eq., we can drop the second argument r ′ in the component g LL Eκ , since it now appears only as a parameter, and replace it by the superscript i to denote the particular piece of the grid for which we want the solution. From the approximation (9), moreover, we see that the nuclear charge function Z i (r) within the i−th interval gives rise to a pure Coulomb potential Z 0i /r and a constant which is simply added to the energy: E → Z 0i + E. 
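To make the piecewise-linear step concrete, the sketch below approximates a nuclear charge function Z(r) by straight lines on a radial grid, in the spirit of Eq. (9); on each interval the intercept then plays the role of an effective Coulomb charge, while the remaining constant part of Z(r)/r can be absorbed into the energy of the homogeneous equation for that interval. This is an illustrative Python sketch with an invented screening function, not the Fortran routine used in Xgreens.

```python
import numpy as np

def piecewise_linear_charge(r_grid, Z_of_r):
    """Approximate the nuclear charge function Z(r) by straight lines
    a_i + b_i * r on each grid interval [r_i, r_{i+1}], cf. Eq. (9).

    Returns one (a_i, b_i) pair per interval.  On interval i the potential
    -Z(r)/r then splits into a pure Coulomb part -a_i/r plus a constant,
    which can simply be absorbed into the energy."""
    Z = Z_of_r(r_grid)
    b = np.diff(Z) / np.diff(r_grid)        # slopes
    a = Z[:-1] - b * r_grid[:-1]            # intercepts (effective charges)
    return a, b

if __name__ == "__main__":
    # Crude, purely illustrative screening of a gold nucleus: Z(0) ~ 79
    r = np.geomspace(1e-5, 5.0, 300)
    Zfun = lambda rr: 1.0 + 78.0 * np.exp(-2.5 * rr)
    a, b = piecewise_linear_charge(r, Zfun)
    print(a[:3], b[:3])
```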
We therefore find that, within the given interval, the regular and irregular solutions at the origin in (8) can both be written as linear combinations of the corresponding solutions for a Coulomb field with constants { f i,1 , f i,2 , g i,1 , g i,2 } which still need to be determined. For the Coulomb potential, the functions M i, Coulomb Eκ (r) and W i, Coulomb Eκ (r) are known analytically and given by [14] M i, Coulomb where M(a, b, r) and U(a, b, r) denote the Kummer and Tricomi functions [16,17], respectively, and where the quantities s i , t i and q i are given by The set of constants { f i,1 , f i,2 , g i,1 , g i,2 ; i = 1, ..., i max } can be determined from the fact that the two functions M LL Eκ (r) and W LL Eκ (r) in ansatz (8) as well as their derivatives need to be continuous in r, and that they behave regularly at the origin or at infinity, respectively. The constraint of being regular at the origin, for instance, requires the coefficients f i=1,1 = 1 and f i=1,2 = 0 and can be used together with the continuity M LL Eκ (r) and M ′ LL Eκ (r) , [where the superscript C here refers to the Coulomb functions in Eqs. (13) and (14)] in order to determine all the coefficients f i,j up to a normalization constant. A similar recurrence procedure also applies to the coefficients g i,j , but by starting from 'infinity', that is with g imax,1 = 0 and g imax,2 = 1, and by going backwards in the index i towards the origin. To determine finally the normalization of the radial component g LL Eκ (r, r ′ ) , e. g. of the coefficients {f i,j , g i,j }, we may return to Eq. (7) and re-write it in the form 7 Taking the integral over latter equation, we see that should be valid for ε → + 0 and for any r ′ and, hence, that -by using Eq. (7) and carrying out some algebraic manipulations -the derivative 'jumps' at r = r ′ . We can use the right hand side of Eq. (20) together with the derivative of Eq. (8) to determine the normalization constants c f and c g with which the coefficients f i,j and g i,j need to be multiplied in order to obtain the normalized radial component g LL Eκ (r, r ′ ) . Having constructed g LL Eκ (r, r ′ ) piecewise as solution to Eq. (7), we may obtain the second component g SL Eκ (r, r ′ ) of this pair simply from Eq. (6), where the derivatives of the Kummer and Tricomi functions in expressions (13) and (14) can be calculated by means of standard formulae [17] In practise, both functions M(a, b, z) and U(a, b, z) are required only for real arguments a, b, and z but need to be calculated with a rather sophisticated algorithm [18] in order to ensure numerical stability and to provide sufficiently accurate results. A similar numerical procedure has to be carry out also for the second pair (g SS Eκ , g LS Eκ ) of radial components. This gives rise of course to another set of coefficients {f i,j ,g i,j } and, finally, to the full radial Green's function from Eq. (4). The 4 i max coefficients for the (piecewise) regular and irregular solutions in ansatz (11)(12) and (8) certainly provide -together with a few numerical proceduresthe most compact representation of the radial central-field Green's functions. For the further computation of matrix elements and atomic properties, however, these radial components are usually represented (and stored) at some (2-dimensional) grid in r and r ′ as we will discuss below. 
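The special functions entering Eqs. (13) and (14) are available in standard libraries; the sketch below evaluates the Kummer and Tricomi functions with SciPy and uses the standard derivative relations M'(a,b,z) = (a/b) M(a+1,b+1,z) and U'(a,b,z) = -a U(a+1,b+1,z). It is meant only as an illustration of the building blocks; as noted above, a dedicated algorithm [18] is needed in production code to guarantee numerical stability and sufficient accuracy.

```python
from scipy.special import hyp1f1, hyperu

def kummer_M(a, b, z):
    """Kummer (confluent hypergeometric) function M(a, b, z)."""
    return hyp1f1(a, b, z)

def tricomi_U(a, b, z):
    """Tricomi (confluent hypergeometric) function U(a, b, z)."""
    return hyperu(a, b, z)

def kummer_M_prime(a, b, z):
    """d/dz M(a, b, z) = (a/b) M(a+1, b+1, z)."""
    return a / b * hyp1f1(a + 1, b + 1, z)

def tricomi_U_prime(a, b, z):
    """d/dz U(a, b, z) = -a U(a+1, b+1, z)."""
    return -a * hyperu(a + 1, b + 1, z)

if __name__ == "__main__":
    # Simple consistency check of the derivative relation against a finite difference
    a, b, z, h = 0.3, 1.7, 2.0, 1e-6
    fd = (kummer_M(a, b, z + h) - kummer_M(a, b, z - h)) / (2 * h)
    print(kummer_M_prime(a, b, z), fd)
```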
Tests on the accuracy of the radial Green's functions To make further use of the Green's functions in applications, it is necessary to have a simple test on their numerical accuracy. For the radial-spherical representation of these functions as defined in Eq. (3), such a test is easily constructed since the Green's function contains the information about the complete spectrum of the Hamiltonian (1) and, hence, about any of its eigenstates ψ n (r). Making use of the well-known expansion of the Green's function in terms of the eigenstates of the Hamiltonian, the relation can be easily derived from the orthogonality of the eigenfunctions and holds for all energies E = E n . With the representation (3) in mind, the relation (22) can be written also in terms of the radial wave and Green's functions P nκ (r ′ ) where P nκ (r) and Q nκ (r) are the large and small components of the (Dirac) radial 2-spinors. As indicated by the tilde on the left-hand-side of Eq. (23), therefore, this relation may serve as a test on the accuracy of the Green's functions if, for instance, the radial components from both side of the equation are compared with each other or if some proper overlap integral is calculated. To test on the precision of the Green's functions in Xgreens, we generate the radial bound-state wave function P nκ (r) and Q nκ (r) as solution of the radial Dirac equation with the same potential − Z(r) r as applied before and by making use of the program by Salvat etal [10]. This solver has been embedded in our code and is utilized in order to calculate the relativistic components. With these radial functions, we then compute the two integrals in (23) for obtainingP nκ (r) andQ nκ (r), respectively, as a function of r. As default in Xgreens, we use the overlap integral P nκ (r) P nκ (r) +Q nκ (r) Q nκ (r) dr for the two lowest principal quantum numbers n of the (given) symmetry κ, together with the corresponding normalization integral (P nκ (r)P nκ (r) +Q nκ (r)Q nκ (r) ) dr , as a numerical measure on the accuracy of the generated Green's functions. These integrals are displayed explicitly by the program (on demand). For a standard (logarithmic) grid with about 300 nodes, the overlap integral (26) is typically within the range 10 −2 . . . 10 −3 . For such a grid, the accuracy is limited in the computations by the linear interpolation of the wave and Green's functions. The accuracy can be increased however for a larger number i max of grid points as shown in section 4. 3 Program structure The RATIP package Similar as Grasp92 [9] was designed for generating the wave functions within the multiconfiguration Dirac-Fock (MCDF) model, the Ratip package [19,20,21] is organized as a 9 suite of program components to calculate a variety of (relativistic) atomic transition and ionization properties. Among other features, the components of Ratip support for instance investigations on the autoionization of atoms and ions, the interaction with the radiation field, the parametrization of angular distributions in the emission of electrons and photons, or the analysis of interference effects between radiative and non-radiative processes. In quite different case studies, Ratip has helped analyze and interprete a large number of spectra and experiments. 
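The accuracy test can be phrased in a few lines once the radial components are tabulated on a common grid. The sketch below assumes that the reconstruction of Eq. (23) amounts to the 2 x 2 matrix of radial Green's function components acting on the spinor (P, Q) by an integration over r', with the energy-dependent prefactor of the spectral representation ignored; the function and variable names are invented for this illustration, and the quadrature is a plain trapezoidal rule rather than the Grasp92 procedure.

```python
from scipy.integrate import trapezoid

def reconstruct_from_greens(g_LL, g_LS, g_SL, g_SS, P, Q, r):
    """Schematic reconstruction of a bound orbital from the radial Green's
    function, in the spirit of Eqs. (22)-(23): the 2x2 matrix of radial
    Green's function components (each of shape (len(r), len(r))) acts on
    the radial spinor (P, Q) through an integration over r'.  The
    energy-dependent prefactor implied by the spectral representation (22)
    is ignored here."""
    P_tilde = trapezoid(g_LL * P[None, :] + g_LS * Q[None, :], r, axis=1)
    Q_tilde = trapezoid(g_SL * P[None, :] + g_SS * Q[None, :], r, axis=1)
    return P_tilde, Q_tilde

def radial_overlap(P1, Q1, P2, Q2, r):
    """Radial overlap integral  int ( P1 P2 + Q1 Q2 ) dr  on the grid r."""
    return trapezoid(P1 * P2 + Q1 * Q2, r)

# With P, Q from an independent solver (e.g. Salvat's program) and the four
# Green's function components tabulated on the same grid, the two diagnostic
# numbers discussed above are obtained as
#   overlap = radial_overlap(P_tilde, Q_tilde, P, Q, r)
#   norm    = radial_overlap(P_tilde, Q_tilde, P_tilde, Q_tilde, r)
```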
In order to provide efficient tools for atomic computations, it also incorporates a number of further components (beside of the main program components for calculating certain physical properties) which facilitate the transformation of wave functions between different coupling schemes or the generation of continuum orbitals and angular coefficients. With the development of the Xgreens program, we now provide an additional component to generate the relativistic central-field Green's functions within the Ratip environment. This is rather independent of their later 'use' where different properties such as the two-photon ionization and decay or various polarizabilities of atoms and ions might be calculated with the help of these functions. Owing to the large number of possible applications, however, a support of certain properties will largely depends on the requests by the users and on our further experience with the code. Not much more need to be said about Ratip's overall structure. A detailed account on its present capabilities has been given previously [22]. With regard to the further development of Ratip, we just note that our main concern now pertains to a long life-cycle of the code and to an object-oriented design within the framework of Fortran 90/95. With the present set-up of the Xgreens component, we continue our effort for providing an atomic code which is prepared for future applications in dealing with open-shell atoms and ions. Data structures and program execution Certainly, the major purpose of Xgreens is the computation of (one-electron) radial Green's functions for some -specified or externally given -central-field potential, which can later be utilized for calculating radial matrix elements of the type where j Λ (kr) denotes a spherical Bessel function, g TT Eκ (r, r ′ ) one of the components of the radial Green's function in (4), and where α and β refer to the radial wave functions of some bound or free-electron state, respectively. Matrix elements of this type play a key role, for instance, for studying two-photon ionization (bound-free) or decay (bound-bound) processes. Since the (four) components of the radial Green's functions depend on two radial coordinates, r and r ′ , a rather large amount of data need usually to be calculated by means of Xgreens. These data are finally stored in an external (.rgf) radial Green's function file as will be discussed in the next subsection. To generate and utilize the Green's functions, two derived data types TGreens single rgf and TGreens rgf have been introduced as shown in Figure 1. Using these data types, a single Green's function is kept internally in terms of its expansion coefficients from Eqs. (11) and (12) for the large-large and a similar set of coefficients for the small-small component on the given grid. Apart from the particular representation of the grid, these data structures contain the energy and symmetry of the radial Green's function, the nuclear charge function(s) as well as information about the mode of interpolation of the radial components between the grid points and the maximum tabulation point mtp. In Xgreens, a variable of type(TGreens rgf) is used in order to store the information about all the requested functions, and a few additional procedures are provided to calculate from these structures the values of the radial components for any set of arguments. The basic steps in the execution of the program are very similar as for other components of the Ratip package. 
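As an illustration of such matrix elements, the following sketch evaluates a double radial integral of the schematic form u_alpha(r) j_L1(k1 r) g(r, r') j_L2(k2 r') u_beta(r'), integrated over r and r'. The detailed operator structure of a real application (multipole factors and the pairing of large and small components) is not reproduced here, and all names are hypothetical.

```python
from scipy.special import spherical_jn
from scipy.integrate import trapezoid

def two_photon_radial_me(u_alpha, u_beta, g, r, k1, k2, L1, L2):
    """Schematic double radial integral of the type sketched above,

        int dr int dr'  u_alpha(r) j_L1(k1 r)  g(r, r')  j_L2(k2 r') u_beta(r'),

    with u_alpha, u_beta radial wave functions on the grid r and g one
    component of the radial Green's function, tabulated as an array of
    shape (len(r), len(r)).  All multipole and angular factors of a real
    application are omitted."""
    j1 = spherical_jn(L1, k1 * r)                               # j_L1(k1 r)
    j2 = spherical_jn(L2, k2 * r)                               # j_L2(k2 r')
    inner = trapezoid(g * (j2 * u_beta)[None, :], r, axis=1)    # integration over r'
    return trapezoid(u_alpha * j1 * inner, r)                   # integration over r
```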
At the beginning, all the necessary input data are read in by an (interactive) dialog. The requested Green's functions are then calculated in turn and tested for their accuracy. The (main) output of the program is a formatted Ascii file which provides an interface for further applications, either within the Ratip environment or as worked out by the user. Interactive control and output of the program Like the other components of the Ratip package, Xgreens is controlled by a dialog at the beginning of the execution. In this dialog, all information need to be specified for obtaining the central-field potential as well as about the number and symmetry of the radial Green's functions to be generated. Owing to various choices for defining the spherical potential, however, slightly different dialogs may occur during the execution. An example is shown in Figure 2, where first a Hartree-plus-statistical-exchange (HX) potential due to Cowan [23,24] is generated from the ground-state wave functions of atomic gold, and then two radial Green's functions are calculated within this potential. Apart from the nuclear charge, given in terms of a Grasp92 .iso isotope data file, the dialog first prompts for specifying (or confirming) some basic parameters such as the speed of light, the grid parameters as well as for the energy units, in which the further input and output is done by the program. This is followed by a prompt for determining the central-field potential for the radial Green's functions. At present, Xgreens supports four models including a pure 11 Coulomb potential (for the calculation of Coulomb Green's functions) as well two potentials (HFS and HX) to incorporate also the exchange interaction in some approximate form. In the Hartree model, in contrast, only the direct potential of some atomic level is applied in the later computation of the Green's functions. In all these cases, however, the wave functions of the selected state have to be specified in terms of the corresponding Grasp92 files for the configuration state functions (CSF), mixing coefficients as well as the radial orbitals. If computed internally, the potential can be written also to disc and used in further applications. Beside of the internal set-up of the potential, in addition, a user-defined potential can be given also explicitly and read in by the program. Although the nuclear charge is specified by means of a .iso file (in line with all other components of Ratip), the Green's functions are always generated for a point-like nucleus in order to 'utilize' their regular behaviour of the radial components at the origin [cf. section 2.2]. For each radial Green's function, then the energy and the one-particle symmetry κ need to be specified. The functions are finally written to the .rgf radial Green's function file whose name has also to be given at the end of the dialog. In this file, the radial Green's functions are given at the grid (in r and r ′ ) as specified originally. Moreover, to test the accuracy of the Green's functions, a number of overlap and normalization integrals are calculated on request in order to 'compare' a few low-lying orbitals, as obtained from the Green's functions by an integration over r ′ , with independently generated functions from Salvat's program [10]. In average, the computation of a single Green's function requires about 2 min on a 450 MHz Pentium PC and approximately the same time for the 'test' integrals. 
In the latter case, the rather large demand on CPU time arises mainly from the 2 i max radial integrals in r ′ −space, which need first to be performed in order to obtain the radial orbital functions from the given Green's function components. Distribution and installation A program of Ratip's size cannot be maintained over a longer period without that certain changes and the adaptation of the code to recent developments (in either the hardware or the operating systems) become necessary from time to time. For the distribution of the code, therefore, we follow our previous style in that the Ratip package is provided as whole. The main emphasis with the present extension of the program, however, is placed on the design and implementation of the two (new) modules rabs greens.f90 and rabs utilities 2.f90 which contain the code for the generation and the test of the radial Green's functions. In addition, the four modules (rabs special functions.f90, rabs error control.f90, rabs greens external.f90, and rabs cpc salvat 1995.f) became necessary and had to be appended to the code in order to support an accurate computation of the hypergeometric and a few related functions from mathematical physics as well as for providing Salvat's solver [10] for the generation of the radial wave functions (P(r), Q(r)) within a given potential. In the present version, the Ratip package now contains the source code for the 8 components Anco, Cesd, Lsj, Greens, Rcfp, Relci, Reos as well as the Utilities for performing a number of small but frequently occurring tasks. Together with the six additional modules from above, the overall program therefore comprises about 40.000 lines of code, separated into 22 modules (apart from the main program components and three libraries). All these source files are provided in the Ratip root directory. As before, there is one makefile for each individual component from which the corresponding executable can be obtained simply by typing the command make -f make-component, that is make -f make-greens in the present case. For most components of the Ratip package, in addition, we provide a test suite in a subdirectory test-component of the root where component refers to the names above. For example, the directory test-greens contains all the necessary files to run the sample calculations which we will discuss below. Before, however, the makefiles can be 'utilized', a number of global variables need to be specified for the compilation and linkage of the program. In a Linux or Unix environment, this is achieved by modifying (and sourceing) the script file make-environment which saves the user from adopting each makefile independently. In fact, the script make-environment just contains a very few lines for specifying the local compiler, the options for the compiler, as well as the local paths for the libraries. Under Windows, in contrast, not much help need to be given to the user since most compiler nowadays provide the feature to define 'projects'. In this case, it is recommended to read off the required modules and source files from the makefile of the program component and to 'declare' them directly to the project as associated with the given component. In the past year, the Xgreens program has been applied under several Linux systems using the Lahey Fortran 95 compiler. The Ratip program as a whole has been found also portable rather easily to other platforms such as IBM RS/6000, Sun OS, or to the PC world. 
The Read.me file in the Ratip root directory contains further details on the installation. Overall, however, it should not be difficult to compile the program also under other operating systems.

Test calculations

To illustrate the use of the Xgreens program, we briefly discuss and display two examples. They refer to the generation of the radial Green's functions for a few selected (one-electron) symmetries in atomic gold (Z = 79) and demonstrate how the accuracy of these functions can be increased by a proper choice of the radial grid. Before the radial components can be generated, however, we first need to specify the potential, as shown in Figure 2. For the present examples, we have chosen a HX (Hartree-plus-statistical-exchange) potential due to the work of Cowan [23,24], starting from the ground-state wave functions of neutral gold. As usual for Grasp92 and Ratip, these wave functions have to be provided in terms of the .csl configuration state list and .mix configuration mixing files as well as the .out radial orbital file. All of these input files are also provided in the test-greens/ subdirectory of the Ratip root directory and can thus be used for the present tests. In Figure 2, moreover, the potential is saved to the central-field potential file z79-au-hx.pot and later re-utilized in Figure 3. Among others, Figure 2 contains the following part of the dialog:

  Enter ASF level number for which the potential should be calculated: 1
  Enter the subshell (e.g. 1s, 2p-, ...) which is specific to Cowan's HX method: 1s
  Write the generated potential to disc ? y
  Enter the name of .pot central-field potential file: z79-au-hx.pot

For this potential, the radial Green's functions are calculated within the range 0 <= r, r' <= 4.797 a.u., with the boundary values Z(0) = 79.00 and Z(r_max) = 1.951 of the nuclear charge function. The output of the two examples is shown in the Test Run Output below.

As for the wave functions, the 'quality' of the generated Green's functions becomes fully apparent only if they are used for calculating observables which can be compared with experiment. Although the calculation of observables is not the aim of the present work, it is still possible to test the accuracy of the generated Green's functions by re-calculating the radial orbitals (as discussed in section 2.3) and by comparing these orbitals with those obtained from the integration of the Dirac equation for the given central field. This 'comparison' is done in Xgreens on request by evaluating, for the two lowest principal quantum numbers n, the (radial) overlap integrals of the re-calculated functions with the solutions from Salvat's program. In addition, the normalization integrals are displayed, as seen from the Test Run Output. Except for a few cases, the deviations of these integrals from 1.0 are typically well below 0.1 % on the standard grid with about 300 mesh points in r and r'. For these deviations, there are two sources of numerical 'inaccuracy' which arise from the test procedure and are not related to the generation of the radial Green's functions themselves: Salvat's solver [10] uses a third-order spline to represent the potential at intermediate points in r, in contrast to the linear representation (9) used for the generation of the Green's functions, and the radial integrals are evaluated with the standard Grasp92 procedure which is, therefore, not as well adapted to Salvat's solutions.
The accuracy of the Green's functions (and of the test integrals) can be improved, however, if the number of grid points is enlarged, i.e. if the spacing between the mesh points is reduced. In our second example, cf. Figure 3, we therefore adopt a grid with approximately twice the number of mesh points. Note that this requires four times as much storage, since the radial components are functions of r and r'. In this test case, moreover, the radial Green's functions are generated for the symmetries s, p_1/2, p_3/2, d_3/2, and d_5/2. When the Test Run Output from this example is compared with that from example 1, we see that the accuracy of the overlap integrals is increased by about a factor of 5. For this example, the generation and the tests together took about 30 minutes on a 450 MHz Pentium III.

After the termination of Xgreens, the radial Green's function file (e.g. z79-au-rgf.rgf from example 2) contains all the Green's function components which were generated during the execution. Since this is a formatted file, it can easily be read and manipulated by any text editor. This (radial Green's function) file stores the individual components g^{TT'}_{Ekappa}(r, r') of the Green's functions as 2-dimensional arrays within a simple file structure. An (internal) file signature # DCFGF in the first line is followed by the mode of interpolation and the number of Green's functions which are provided by the file. Then each Green's function is specified in turn, starting with the energy and symmetry of the function and followed by a table of six rows, where the first two rows refer to the coordinates r and r' (in atomic units) and the other rows to the four component functions g^{LL}_{Ekappa}(r, r'), g^{LS}_{Ekappa}(r, r'), g^{SL}_{Ekappa}(r, r') and g^{SS}_{Ekappa}(r, r'), in this particular order. Although this format is convenient for the later use of these functions, it may lead to rather large data files, especially if a larger number of Green's functions needs to be generated. In practice, however, not many problems are expected with this file structure, because disc storage has become cheap and because, for most applications, we expect the Green's functions to be generated on demand by making use of the corresponding modules instead of keeping them in external files. Typically, the radial wave and Green's functions occur as part of matrix elements and thus first require an additional integration (over r and/or r') before any observable quantity is obtained.

The format of the radial Green's function (.rgf) file can easily be used in order to plot the various radial components. In Figure 4, therefore, we display the nuclear charge function Z(r) and the two radial Green's function components g^{LL}_{Es}(r_0, r) and g^{SL}_{Es}(r_0, r) as functions of the (second) argument r, taken for r_0 = 0.204076 a.u. The radial Green's functions are calculated in the ground-state potential of atomic gold, as applied in example 2, for the energy E = 1000 eV. As seen from Figure 4, the large-large component g^{LL}_{Es} is continuous at r = r', while the small-large component g^{SL}_{Es} jumps there owing to Eqs. (20) and (21). Both components, moreover, behave 'regularly' at the origin, as was built in explicitly by the construction (8).
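Because the .rgf file is a plain formatted text file, a few lines of script suffice to read a component for such a plot. The parser below assumes one possible realization of the layout described above (signature line, interpolation mode and number of functions, then per function a header with energy and symmetry followed by six equally long rows for r, r' and the four components); it is a hedged illustration, not a utility shipped with Ratip, and a real file may be tokenized differently.

```python
import numpy as np

def read_rgf(filename):
    """Parse a formatted .rgf radial Green's function file.

    Assumed layout (one possible realization of the structure described in
    the text): a first line starting with '# DCFGF', a second line giving
    the interpolation mode and the number of Green's functions, and then,
    for each function, one line with its energy and symmetry kappa followed
    by six equally long rows holding r, r' and the components
    g_LL, g_LS, g_SL, g_SS at the tabulation points."""
    with open(filename) as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    if not lines[0].startswith("# DCFGF"):
        raise ValueError("missing # DCFGF file signature")
    mode, n_functions = lines[1].split()[:2]
    records, pos = [], 2
    for _ in range(int(n_functions)):
        energy, kappa = lines[pos].split()[:2]
        rows = [np.array(lines[pos + 1 + k].split(), dtype=float) for k in range(6)]
        r, rp, g_ll, g_ls, g_sl, g_ss = rows
        records.append({"energy": float(energy), "kappa": int(kappa), "mode": mode,
                        "r": r, "rp": rp, "g_LL": g_ll, "g_LS": g_ls,
                        "g_SL": g_sl, "g_SS": g_ss})
        pos += 7
    return records

# e.g.  rec = read_rgf("z79-au-rgf.rgf")[0]
#       plotting rec["g_LL"] against rec["r"] then reproduces a curve like Figure 4
```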
Outlook. Applications of central-field Green's functions

Apart from the fundamental interest in having the central-field Green's functions available for 'relativistic' electrons, these functions are also useful for the perturbative treatment of atoms and ions. As discussed in subsection 2.3, the Green's functions provide simple access to the complete spectrum of a quantum system and can hence be utilized for carrying out the summation over all (one-particle) states of the spectrum, as required in second- and higher-order perturbation theory. However, since we only provide the one-electron Green's functions, they can be applied just to those processes where the 'perturbations' are described in terms of one-particle operators. Perhaps the most-studied perturbation of this type is the interaction of atoms and ions with the radiation field, which - within a sufficiently strong field - may lead not only to atomic photoionization but (in second order) also to the two-photon ionization and decay of atoms and ions, and to several other processes. During the last two years, we have utilized the central-field Green's functions mainly for exploring the two-photon ionization of helium-like ions and of the inner-shell electrons of atomic neon and argon.

Apart from the one-particle Green's functions (as discussed in this work), there are a number of further processes, such as the double Auger decay, for which the two-particle Green's functions G_E(r_1, r_2; r'_1, r'_2) are required in order to calculate the cross sections, decay rates and/or angular distributions. These two-particle functions are solutions to a defining equation similar to Eq. (2), but with the (one-particle) Dirac Hamiltonian H_D replaced by the two-particle operator H_D,1 + H_D,2 + V_12, which includes the electron-electron interaction V_12 explicitly. Even by making use of a proper radial-angular decomposition of such Green's functions, their radial part g^k_{kappa1,kappa2}(r_1, r_2; r'_1, r'_2) would then depend on four radial variables and an overall rank k, similar to what is known from the tensorial decomposition of the electron-electron interaction [25]. Therefore, an internal representation and generation of these (radial) functions appears rather infeasible in practice. An alternative to these two-particle Green's functions is given by the (so-called) modified pair functions Phi_{E,alpha}(r_1, r_2), which satisfy the equation

(H_D,1 + H_D,2 + V_12 - E) Phi_{E,alpha}(r_1, r_2) = W |Psi_alpha(PJM)> ,

where W is a one- or two-particle operator which depends on the physical task to be solved and |Psi_alpha(PJM)> is one of the (two-electron) solutions of the corresponding homogeneous equation. These modified pair functions have the advantage that they depend only on two radial variables, similar to the one-particle Green's functions. On the other hand, of course, they now depend on some (initial) two-particle state |Psi_alpha(PJM)> as well as on the particular choice of the interaction operator W and are hence less general than the Green's functions from above. We are currently investigating the possibility of generating such modified pair functions within the framework of Ratip and of using them (later) in the description of the double Auger decay. But already the one-particle Green's functions from this work enable one to explore a number of atomic properties which have not been studied before for complex atoms or, at least, not within a relativistic framework.

TEST RUN OUTPUT
***************
** Example 1 **
***************
2014-10-01T00:00:00.000Z
2004-09-07T00:00:00.000
{ "year": 2004, "sha1": "22a262cfffc0f2531cb6077fc2d8ce3fe5890235", "oa_license": null, "oa_url": "http://arxiv.org/pdf/quant-ph/0409040", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e9df372ebf7930ac49eddec5798ee437baa57523", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
226373452
pes2o/s2orc
v3-fos-license
Human Rights and Labour Standards from the Public Health Perspective in the World Trade Organization: Challenges and Possible Solutions According to S. Charnovitz (Charnovitz, 1987), the trade labour linkage has a long history. It has become one of the most contentious contemporary issues in trade and labour policy circles and debates (Langille, 1997). The idea of using international labour standards to protect workers from economic exploitation was first promoted by individual social reformers in Europe in the first half of the Abstract nineteenth century during the early stages of the industrial revolution. The work of these reformers was later taken over by various non-governmental organizations. Calls for an international labour legislation increased dramatically during the second half of the nineteenth century and found expression in various international organizations that were formed (often international associations of trade unions). Opponents of labor standards argue that the international pressure on the foreign countries is an unnecessary and counterproductive interference in the workings of the free market. In this view, the pressure for international labor standards represents either a disguised protectionism or a misplaced compassion. Proponents of labor standards argue that a set of minimal labor standards is necessary to promote fair competition and to facilitate efficient operation of the labor market. In industrialized countries, there has also been a growing undercurrent of resentment toward trade with countries with low labor costs, which threatens the viability of international trade agreements. The core areas of labor standards typically include freedom of association, collective bargaining, prohibition of forced labor, elimination of exploitative child labor and nondiscrimination. The International Labor Organization (ILO) has been the main institution concerned with international labor standards since its inception in 1919. The ILO establishes conventions that are binding only on the countries that ratify them. The ILO is not empowered to enforce compliance with ratified conventions; instead, it relies on the international pressure, advice and monitoring to encourage compliance. Additionally, several bilateral and multilateral trade agreements cover labor and environmental standards. For example, the labor side agreements were a critical element of the North American Free Trade Agreement (NAFTA). But how does it collaborate with public health? The International Labor Organization and its role from the Human rights perspective The (ILO) formally entered the trade labour interface debate in 1994 at the time of discussing a possible inclusion of a social clause in the WTO, the establishment of a link between trade and labour in different forms within NAFTA and the EU, and the conditioning of trade preferences and concessions by some developed countries in respect for labour standards. The ILO set up a working party on the social dimensions of the liberalization of international trade, but in 1995, the ILO's governing body concluded that the working party would not pursue the question of trade sanction and that further discussions of a link between international trade and social standards or a sanctionbased social clause mechanism would be suspended. 
With respect to trade and labor standards linkages in regional trading arrangements, within the EU, the social dimension of the European integration took a concrete form in 1991, when 11 of the 12 Member States (excluding the UK) signed the community's Charter of Fundamental Social Rights. Another important step in the development of EU social policy was the adoption by the 11 members (excluding the UK) of the protocol on Social Policy at Maastricht in 1991 (Trebilock, Howse and Eliason, 2013). Recently, the debate over trade and labour rights has been extended to human rights more generally as an entirely logical development. However, in the case of other human rights, the debate is much less focused on 'linkage', including sanctions, and much more on the effects of trade obligations on the ability of states, especially developing countries, to fulfill economic, social and cultural rights, such as the right to health or to adequate food. Here, developing countries, although wary about sanctions, have been generally supportive of the efforts to evaluate and interpret trade agreements in human rights terms (Alben, 2001). Core labour standards (CLS) and human rights Various CLS have been characterized by the UN Universal Declaration of Human Rights, the subsequent international Covenant on Civil and Political Rights, and International Covenant on Economics, social and Cultural rights. Important role plays the principle of socially responsible state, that means that state is obliged to establish sustainable and balanced policies to ensure public welfare. The country must ensure a balance between its financial capabilities and not just personal rights in the social area, but also a need to ensure the welfare of entiry society, creating legal regulations that are aimed at sustainable development of the country (Janis Grasis, 2016). The ILO's 1998 Declaration of Fundamental Principles and Rights at Work enumerates a short list of core international labour standards that are defined more fully in eight background Covenants incorporated by reference, namely, freedom of association and collective bargaining, the elimination of forced labour, the elimination of child labour and the elimination of discrimination in employment, which is also consistent with the characterization of certain core labour standards or rights as human rights, especially those that guarantee basic freedom of choice in employment relations (Michaeel, Trebilcock and Howse, 2005). Labour standards have been used in the Generalized System of Preferences as a preferential system to provide a duty free access to exports of developing countries by (most notably) the European Union and the United States of America. Currently, there is a revision of the EU's GSP scheme, in terms of the considerable potential implications, given that the new GSP plus scheme appears to target not only the ratification of the fundamental Conventions, but also the application of Conventions in line with comments from the ILO supervisory bodies. This has the potential to be very problematic for employers.Anartya Sen argues, in his book 'Development as Freedom', that the basic goals of development can be conceived of in universalistic terms, where the individual's well-being can plausibly be viewed as entailing certain basic freedoms irrespective of the cultural context: 1. Freedom to engage in political criticism and association; 2. Freedom to engage in market transactions; 3. Freedom from the ravages of preventable or curable diseases; 4. 
Freedom from the disabling effects of illiteracy and lack of basic education; 5. Freedom from extreme material privation. (Sen, 1999) . According to Sen, these freedoms have both intrinsic and instrumental values. Importantly, in contrast to the unfair competition and race to the bottom rationales for linking international trade policy and international labour standards, the human rights perspective focuses primarily on the welfare of the citizens in exporting, not importing countries. The assumption underlying this concern for basic or universal human rights is that failure to respect them in any country is either a reflection of the decision of unrepresentative or repressive governments rather than the will of the citizens or a sign of the majoritarian's oppression of the minorities, for example, children, women or racial religious minorities; alternatively, there may be paternalistic concerns that citizens in other countries have made uninformed or illadvised choices to forgo these basic rights. The linkage of the international trade policy, including trade or other economic sanctions, with CLS that reflect basic or universal human rights is a cogent one. When citizens in some countries observe gross or systematic abuses of human rights in other countries, the possible range of reactions open to them include diplomatic protests, withdrawal of ambassadors, cancellation of air landing rights, trade sanctions or more comprehensive economic boycotts, or at the limit, military intervention. Arguing that doing nothing is always or often the most appropriate response is inconsistent with the very notion of universal human rights. In extreme cases, such as war crimes, apartheid, the threat of chemical warfare in the case of Iraq, genocide in the case of Serbia, or the Holocaust in the case of Nazi Germany, excluding a priori economic sanctions form the menu of possible options, seems indefensible. Whether it is the most appropriate option may, of course, be context specific and depend both on the seriousness of the abuses and on the likely efficacy of the response choice of the instrument, issues to which we turn next. But it is sufficient for present purposes to restate the point that, to the extent that CLS are appropriately characterized as basic or universal human rights, a linkage between trade policy and such labour standards is not only defensible but arguably imperative, in contrast to the other two rationales for such a linkage which, despite their much longer historical lineage, are largely spurious and inconsistent with the central predicates of a liberal trading system. However, CLS viewed as basic or universal human rights, by promoting human freedom of choice, are entirely consistent with a liberal trading regime that seeks to ensure other human freedoms, in particular, the right of individuals to engage in market transactions with other individuals without discrimination on the basis of country or location (Molatlhegi, 2002). Having said this scope and the definition of the viewed human rights as sufficiently universal as to potentially warrant the imposition of trade sanctions for their violation is problematic in various respects. Even CLS are not susceptible to uncontentious understandings of their scope. The scope of many economic, social and cultural rights is controversial (Ignatiefee, 2001). 
These controversies do not obviate the normative force of the rights themselves, but do have implications for the choice of instruments and the choice of the institutional arrangement for addressing the trade policy-labour standards linkage, to which we now turn. As regards to a developing country: From a developing country perspective, the conventional wisdom is that unlike the case with the developed countries, an increased integration with the world economy will be beneficial to less skilled workers. However, this does not seem to be supported by the available empirical evidence, which suggests that many developing countries experienced rising wage inequality after opening to international trade. It appears possible that there is a pervasive skill bias in globalization. It is also uncertain what prospects international trade offers in creating jobs in developing countries, particularly those located in Africa and Latin America. Human rights beyond labour rights Since the end of the Cold War, two main visions have guided the evolution of the international law and institution: the visions of human rights and humanity and that of economic globalization. Both visions have offered challenges to traditions and understandings of sovereignty: they have given a new significance to non-state actors in the evolution and implementation of international law. Both have often given rise to demands and aspirations to global politics and constitutionalism as well as a new relationship between local, national, regional and global levels of governance. However, the legal, institutional and policy cultures of international human rights law and of international trade, financial and investment law have developed largely in isolation from one another (Howse and Teitel, 2007). As a matter of international law, the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR) and the WTO are, in the first instance, treaty regimes. A fundamental structural characteristic of the international legal system is that of decentralization without hierarchy. Treaty norms in the ICCPR and ICESCR, and other human rights instruments have an equal legal status to those in the WTO (a few of such norms, related to prohibitions on torture and slavery have a higher status as ius cogens or preemptory norms of International law, trumping treaty obligations to the extent of inconsistency). A large majority of states are signatories to both the WTO Single Undertaking (the core WTI treaties), and the ICCPR and ICESCR. The principle of decentralization without hierarchy, along with that of giving a full effect to international obligations, implies the need to interpret and to develop these regimes in a complementary and consistent fashion to the possible extent. As the Report of the international Law Commission (ILC) on fragmentation of international law notes, 'In international law, there is a strong presumption against normative conflicts (Koskenniemi, 2006). The Declaration on Trade-related Intellectual Property Rights (TRIP) and Public Health and the Kimberly (Conflict Diamonds) waiver reflect an unacknowledged debt to human rights consciousness in the WTO. The current Director-General of the WTO, Pascal Lamy, has written about globalization with a human face, and his conception of the economic sphere, including the international economic sphere, is deeply rooted in the notion of humanity. 
More recently, a joint study by the ILO and the WTO Secretariat which explicitly refers to the freedom of association and the right to collective bargaining as 'universally recognized Human Rights', urges their respects as such and not just for instrumental reasons of social peace, and refutes, with empirical evidence, the notion that respect for such rights harms competitiveness (Knoll, 2003). Sen vigorously challenges the view that human rights are 'luxury goods' that poor countries cannot afford until they have achieved a certain level of prosperity; instead, the improvement of the economic welfare depends upon respect for rights in many and complex ways. More recently, Alan Sykes has noted that, generally speaking, there is a positive correlation between a country's openness to trade and its tendency to respect human rights. This puts into question the idea that poor countries should or must sacrifice human rights or postpone their realization for the sake of openness to trade, and an outwardoriented development strategy (Sykes, 2003). Liberalization of trade and health policy perspective Health care policy, in common, is one of the important rights. Healthcare, as one of the freedoms, is protected by human rights. In national as well as international legal systems, the expression of health policy is under high protection. (Palkova and Kudeikina 2020) International trade also hurts the right to health directly or indirectly. In this case, it should be mentioned that the Regional Committee for the Eastern Mediterranean of World health Organization discussed, in its Forty-fifth Session, the impact of the GATT Agreements on health, and passed a resolution urging Member States to: A. Ensure that ministries of health are represented on national committees entrusted with the task of studying the negative impact of World Trade Organization agreements on the health sector; B. Conduct studies to coordinate response to World Trade Organization health-related agreements in cooperation with the Regional Office. World health Organization expressed its concerns in a statement at the Third WTO Ministerial Conference as follows -trade and public health should not be discussed in isolation from each other. Decisions made outside the health sector have tremendous influence on health outcomes, especially in poor societies. World health Organization supports the main purpose of promoting trade, that is, to improve living conditions and to raise real income. It strongly reaffirms that health is central to this development goal. The benefits to be derived from expanding trade should further the goal of improving the health of the population, especially that of poor or marginalized groups who may find themselves excluded from the process of economic growth. (WHO, 2013). TRIPS is the importance of interpreting WTO treaties from a health policy perspective. Health policy experts and organizations were able to promote the protection of the safeguards in TRIPS for parallel importing and compulsory licensing by stressing that the treaty text accorded WTO members these rights. In addition, health policy activism helped stimulate subsequent state practices in the form of the Doha Declaration on the TRIPS Agreement and Public Health which reinforced WTO members' rights to use TRIPS flexibilities and safeguards for public health purposes. Treaty interpretation from a health policy perspective remains important in other areas of TRIPS as well. 
The question arises whether the provisions of GATS may also provide insufficient flexibility for health policy makers, transferring the issue from the realm of treaty interpretation into treaty implementation or revision. Health concerns often seem to have insufficient weight in decisions many governments make in international forums, such as the WTO. GATS forces WTO members to think about health in connection with the growing role of services in modern economies and the impact of globalization trends, particularly on the poor. In addition, GATS establishes a process designed to progressively liberalize trade in services, and health policy-makers must be prepared to participate in this process to ensure that such liberalization unfolds in a way sensitive to the needs of national governments in ensuring the provision and regulation of health-related services. GATS "institutional framework'' and particularly the dispute settlement mechanism, are also important parts of the GATS for health policy (Mattoo, Stern and Zanini, 2008). Any liberalization under GATS should aim to produce better quality, affordable and effective health-related services, leading to a greater equity in health outcomes. Liberalization should also ensure the necessary policy and regulatory space governments required to promote and protect the health of their populations, particularly those in greatest needs. GATS creates health opportunities and challenges, especially for developing countries. GATS accords countries a considerable choice, discretion, and flexibility so that the proper management of the process of liberalization of trade in health-related services can adequately protect health. In key areas of GATS, governments face choices about the breadth and depth of liberalization of trade in health-related services and the impact of such liberalization on health policy. In fact, countries are free to decide whether liberalization in the health sector should be pursued or not and to what extent. Countries are not obliged to liberalize health services if they do not wish to do so. These choices make it imperative that health officials understand the structure and substance of GATS, collaborate with other government agencies on GATS implementation and liberalization, and act to ensure that the GATS process does not adversely affect the national health policy. (Drager Smith, 2006). As regards to the key provisions of GATS, it creates the multilateral legal framework for international trade in nearly every type of service. The Agreement's 29 articles establish the scope of its rules' coverage, impose general obligations, structure the making of specific commitments, construct a process for progressive liberalization of trade in services and link the treaty to the WTO's dispute settlement mechanism. Although experts acknowledge that GATS has not, to date, significantly affected trade in health-related services, the potential for GATS to do so, through the progressive liberalization process, is tremendous. In the GATS 2000 negotiations, countries may be receiving requests from and may consider submitting offers to other WTO members for market access and national treatment commitments in many different health-related service sectors. On the basis of what is mentioned above, a very interesting subject is WHO's work on GATS and policy. WHO's work on GATS has, to date, focused on collecting evidence on the potential and actual impact of GATS on the functioning of health systems. These efforts involve: 1. 
Collecting data on trade in health-related services; 2. Undertaking a wide range of country-based studies; 3. Conducting regional and national training programs.
Although the environment per se is not a WTO issue, several WTO Agreements and rules are relevant to environmental issues. There have been several environment-related WTO disputes, often centering on the issue of "like product". In making a determination of "likeness", WTO rules permit health risks to be taken into account. In a recent case on asbestos, the Appellate Body found the objective pursued, i.e. the preservation of human life and health, to be "both vital and important in the highest degree", and concluded that an import ban on asbestos was a "necessary" measure to protect human health (WTO, 2002).
Summary
Summarizing the results of the dissertation research, the following conclusions are presented. International rules on trade are necessary for four related reasons. Firstly, to restrain countries from taking trade-restrictive measures, both in their own interest and in that of the world economy. Secondly, to give traders and investors a degree of security and predictability regarding the trade policies of other countries. Thirdly, to allow for the effective protection and promotion of important societal values and interests (such as public health, a sustainable environment, consumer safety, cultural identity and minimum labour standards), while at the same time ensuring that countries use only those measures that are necessary for the protection of these values and interests. And fourthly, to achieve a greater measure of equality in economic relations. It should be noted that for the potential of international trade to be realized, there must be good governance at the national level, as well as a further reduction of trade barriers and more development aid. Without the national and international action required in these areas, international trade will not bring prosperity to all; on the contrary, it is likely to result in more income inequality, social injustice, environmental degradation and cultural homogenization.
Hopf algebra and renormalization: A brief review We briefly review the Hopf algebra structure arising in the renormalization of quantum field theories. We construct the Hopf algebra explicitly for a simple toy model and show how renormalization is achieved for this particular model. 1 Introduction most of the proofs have been relegated to appendices. The article is concluded with a very simple example in appendix C The forest formula A brief summary of the BPHZ renormalization procedure and the derivation of the forest formula is given in the appendix A. The key result of BPHZ renormalization is an iterative formula (forest formula) which gives a renormalized Feynman graph in terms of the divergent graph, its subgraphs and the corresponding counter terms. Forest formula can be written in a schematic form as follows: 3) where Γ and Γ r are bare and renormalized graphs respectivley. Γ is the graph with all the subdivergences removed. The sum is over all non-empty proper forests of Γ. Z γ and Z Γ are counter terms. t Γ is a renormalization scheme dependent operator, which removes the overall divergence associated with graph Γ. To make the notion of forest precise, let H 1 , · · · , H m be all 1PI, non overlapping divergent subgraphs of Γ, then a proper forest of Γ is any subset of the following set: {H 1 , · · · , H m }. (2.4) Representing the graph We would like to represent Feynman graphs in a more algebraic fashion such that their forest structure and subdivergences become manifest. This would be done by representing them as 'parenthesized words'. Parentheses encode information about the nestedness or the disjointness of the subdivergences and letters appearing in these words correspond to graphs without subdivergences. Parenthesized words can be assigned to a graph by the following procedure: • For every forest we write down a pair of brackets respecting the forest structure, i.e., if a forest A is inside a forest B then the pair of brackets corresponding to the forest A are contained inside the pair of brackets corresponding to B. • Consider a given pair of brackets, if we shrink all the brackets/forests inside it to a point the remainder is a graph γ i without any subdivergences. We write the letter corresponding to γ i next to the right closing bracket of the pair of brackets under consideration. • Rest of what is contained in the pair under consideration is written to the left of this letter. For an example, consider the diagram in figure 2.1. It has two disjoint subdivergences and is overall divergent when the subdivergences are shrunk to a point. The two subdivergences are contained in rectangular boxes. These subdivergences themselves are both 1PI and do not contain any subdivergences. In the figure we have also shown the letters corresponding to these subdivergences. It is easy to see that, using our rules above, this diagram corresponds to the parenthesized word ((x 1 ) (x 2 ) x 1 ). • Disjoint forests and configurations inside disjoint pair of brackets commute in this construction. i.e., • Only the forest structure of the graph is made manifest in this construction and we lose information about to which propagator or to which vertex of a graph γ j another graph γ i is attached. Several different attachment can yield the same forest structure. Hence any Feynman diagram belongs to a class given by a Parenthesized word. For example, the two diagrams in figure 2.2 belong to the class represented by the parenthesized word ((x 2 ) (x 2 ) x 1 ). 
• A letter x i has one and only one closing bracket on its right side while it can have more than one opening brackets. • We include the empty graph as () which would act as a unit element (not to be confused with the unit map) in the construction of the Hopf Algebra. • An important characteristic of a parenthesized word is its length, which is simply the total number of letters x i appearing in it. For example, in collection (2.6), the parentheized words have lengths 0, 1, 2, 2, 3, · · · respectively. • In general we will have a class of Feynman graphs represented by the notion of parenthesized words constructed out of letters x i . Some examples are: • A parenthesized word, whose left most bracket is matched with its right most bracket is called an irreducible parenthesized word and corresponds to a 1PI Feynman graph. Examples are: An arbitrary irreducible parenthesized word can be represented as (Xx i ), where X is an any parenthesized word. • A parenthesized word, whose left most and the right most brackets do not match with each other is called a reducible parenthesized word and can be written as product of irreducible parenthesized words. For example is a reducible parenthesized word and is written as a product of two irreducible parenthesized words ((x i ) x j ) and (x k ). Figure 2.3: Commutative diagram of Hopf algebra A detailed discussion of the mathematical properties of Hopf algebra will lead us off topic. In this subsection, we will give the formal definition of a Hopf algebra and different elements appearing in the definition. We will also give a rough sketch of how the procedure of renomalization can be described by an underlying Hopf algebra structure. These notions will be made more precise in the next section. Formally a Hopf algebra is defined as following. Definition 1. A Hopf algebra is an associative and coassociative bialbegra H over a field K with a K-linear map S : H → H, called antipode such that the diagram 2.3 commutes. E, e, m, ∆ are called unit, co-unit, product and co-product maps respectively. The condition for the commutativity of the diagram can be written algebraically as: where X is an element of Hopf algebra and 1 d is the identity map. Now we will give a brief overview of how renormalization would turn out to be related to the Hopf algebra structure. • Basic objects of the Hopf algebra are Feynman graphs Γ which will be represented by the corresponding parenthesized word X Γ . Representatives of the overall divergent graphs without subdivergences will be identified as the primitive elements of the Hopf algebra. All other elements X Γ can be built out of these primitive elements. • The co-product resolves the graph into its forests. (2.10) • We have a renormalization map R, which extracts the divergent parts of a graph (depending on the renormalization scheme). • The antipode S gives the counter term Z Γ through the renormalization map. • The renormalized Feynman graph will related to the term m [(S ⊗ 1 d ) ∆ [X]], appearing in the condition of the commutativity. We would indeed see that m [(S ⊗ 1 d ) ∆ [X]] = 0, expressing the fact that the we get a finite result. Construction of Hopf Algebra In this section we will construct the Hopf algebra related to renormalization. This will be done by explicitly defining the all the maps and elements appearing in the definition (1). We will proceed in several steps, establishing algebra, co-algebra, bialgebra and finally Hopf algebra structure. 
The algebra structure As discussed in the previous section, we will represent Feynman diagrams by parenthesized words. We will arrange these parenthesized words into an algebra structure here. Let A be the set of all parenthesized words. We regard this as a Q vector space. It is easy to see that A is a vector space over Q. Now, we introduce a bilinear product map as follows: Also we have an identity element e = () which satisfies: To understand the product (3.2) consider the example with X = ((x) x) and Y = (y) then XY is a well defined product given by ((x) x) (y), i.e., the product of two parenthesized words give a reducible parenthesized word. By introducing the product we have furnished A with an algebra structure. Now we define a homomorphism (the unit map) from Q to the set A as follows: Now, by definition, the bilinear product m is associative, our algebra A has an identitiy element e and we have constructed a homomorphism from the field of rational numbers Q to algebra A, this means that the set A is a unital associative algebra. The coalgebra structure In this subsection, we furnish A with the structure of a coalgebra. Let us first give the formal definition of a coalgebra. Definition 2. A coalgebra, C over a field K is a vector space C over K together with linear maps e : C → K (counit) and ∆ : C → C ⊗ C (coproduct) such that where 1 d is the identity map on C, or quivalently, the two diagrams in figure 3.1 commute. In the second diagram, we have identified the naturally isomorphic spaces C, C ⊗ K,K ⊗ C. The second equation above is also called the coassociativity condition for the coproduct ∆. Figure 3.1: Commutative diagrams for coalgebra C Now, we will define the counit and the coproduct maps for the set A under consideration. The counit We define a counit by: This definition is motivated by the fact that there is no rational number which should be assigned naturally to an arbitrary parenthesized word and thus the counit annihilates Feynman graphs. On the other hand we assign the rational number 1 to the empty graph e. The coproduct The definition of the coproduct is more involved as compared to the elements defined so far. Roughly speaking, coproduct yields a sum of terms i X i ⊗ Y i , where the first terms, X i , are to be identified with divergent subgraphs and the second terms, Y i , correspond to the remainder of the graph obtained by reducing X i to a point. To give a rigorous definition of the coproduct, it will be useful to define a projection map P as follows: It is easy to confirm the following properties of the map P by explicit computation. 14) We also define a useful endomorphism B (x i ) , which is parametrized by a single letter x i , corresponding to a primitive graph. With the help of the maps P and B, we are now in a position to define the coproduct as follows. This definition of the coproduct is complete. It is easy to use the above definition to show an important property of the coproduct. Another important property of the coproduct is: This can also be shown by using the definition (3.20), however the proof is a bit involved. The proof is based on the standard induction argument on the length of the words X and Y . Another way to write the coproduct is by using the Sweedler's notation, ∆ [X] = X X 1 ⊗ X 2 , where the sum is over the subwords X 1 of X and X 2 = X/X 1 . Proof of this assertion is given in appendix B.1. 
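As a minimal illustration of how this coproduct acts (a sketch consistent with the definition above, writing e = () for the empty word and (x_i) for a primitive letter):

∆ [e] = e ⊗ e,
∆ [(x_i)] = (x_i) ⊗ e + e ⊗ (x_i),
∆ [((x_i) x_j)] = ((x_i) x_j) ⊗ e + e ⊗ ((x_i) x_j) + (x_i) ⊗ (x_j).

In Sweedler's notation the subwords X_1 of ((x_i) x_j) are e, (x_i) and the full word, with X_2 = X/X_1 obtained by shrinking X_1 to a point; the middle term thus pairs the subdivergence (x_i) with the reduced word (x_j).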
Using this notation and the properties of the map P , we can write the equation (3.20) of the coproduct as: (3.23) Let us now consider a few example to explain how the coproduct acts on the elements of the set A. (a) Using the similar method (but after more tedious algebra) we can also compute: Coalgebra check We have defined the counit and the coproduct maps for A, but in order to furnish the coalgebra structure on A we need to show that these maps satisfy the equations (3.6) and (3.7). The first of these relations is trivial to show due to the definition of the counit as : (3.28) Next, we want to show the equation (3.7) holds. This can be proved using induction on the length of the words. A detailed proof is given in the appendix B.2. After successfully defining a counit and a coproduct on A, we have completed the construction of the coalgebra structure on A. We have already established the fact that A is a unital coassociative algebra. The property (3.22) ensures that the algebra and the coalgebra structures are compatible. This implies that A is actually a bialgebra. The antipode To complete the construction of the Hopf algebra, what remains to find is an antipode. It turns out that antipode is actually the object which achieves the renormalization, it combines the terms generated by the coproduct and combines them in a way which is similar to the forest formula. We define the antipode as follows: S : This completely defines the antipode. However, we need to show that this antipode is actually well defined and induces a Hopf algebra structure. This amounts to showing that equations (3.33) and (3.34) are equivalent 1 and also the condition (2.9) is satisfied. Equivalence of the two definitions follow from the associativity of the product m and the coassociativity of the coproduct ∆. The detailed proof is given in appendix B.3. The proof that the condition (2.9) is satisfied, is given in appendix B.4. We have now completely furnished the set of all Feynman diagrams, A with the structure of a Hopf algebra. We have not yet discussed precisely how the renomalization is achieved by this structure. This will be the subject of the next section. From Hopf algebra to the forest formula In this section, we describe how the Hopf algebra constructed above produces the forest formula, generates counter terms and the renormalized Feynman graphs. We will see that an important ingredient in this regard is the renormalization map, R, which is renormalization scheme dependent. Given a Feynman graph Γ, we associate a parenthesized word X Γ to it. Using the Feynman rules we obtain an integral expression associated with the graph Γ, denote it by φ (X Γ ) ∈ V , where V is a vector space, endowed with suitable structure which is not important for our considerations. For example, it could be the space of Laurent polynomials in the regularization parameter. These Feynman integrals are subject to some renormalization conditions which are described by renormalization map R : V → V . The renormalization map depends on the renormalization scheme, for example, in the case of minimal subtraction, R picks out the only the divergent part of φ (X Γ ). The map φ, the renormalization map R and the antipode of the Hopf algebra S give rise to a map S R at the level of the Feynman integrals, which is written as: We can use ∆ [((x i ) x j )] as computed in equation (3.26). 
Since, P 1 annihilates e, we finally find that: Similarly, after a straightforward but tedious computation one can find that: Let us now proceed further to show that the forest structure in equations (2.1,2.2,2.3) emerges from the Hopf the algebra structure. Let U be a subword of X, then by using the representation of the coproduct in Sweedler's notation and the fact that P 1 annihilates e, the antipode can be written as: If the parenthesized word X is associated to a Feynman graph Γ then the subwords U = e, X are associated to the proper forest γ of the graph Γ. Using this fact, we can now write the map S R in the following way: . The renormalized Feynman graph Γ ren is obtained as follows. Let X be the parenthesized word associated with the graph Γ (We will use the parenthesized word X, the corresponding graph Γ and the corresponding Feynman integral φ (Γ) interchangeably) then: 44) in the last equation, the first term is just the Feynman integral associated with graph Γ, the second term is the counter term Z Γ and the last term just removes the subdivergences as we have seen earlier. Now, we omit writing φ and replace parenthesized words with the respecting graphs to find: Earlier, we showed that at the Hopf algebra level, the operator m [(S ⊗ 1 d ) ∆] annihilates any parenthesized word other than the unit e. This expresses the fact that at the level of the Feynman integrals we will get essentially a finite result. Summary In this section we will briefly summarize the key results of this article. By representing the Feynman diagrams as parenthesized words, we furnished them into a set A. We also included the empty graph, represented by the unit element e, in that set. Then we introduced an algebra structure on A by defining a bilinear product m : A → A. We also defined a unit map E : Q → A, furnishing A into a unital associative algebra. Next, we introduced the coalgebra structure on A by defining the counit map and the coproduct map. The coproduct was defined in such a way that it was compatible with the product m and hence we obtained a bialgebra structure on A. To complete the construction of the Hopf algebra, we defined an antipode map S : A → A. We also showed that the struture of the forest formula is recovered if we identify the antipode with the counter term of a specific graph. To make this notion precise, we defined a map φ : A → V , which assigns a parenthesized word an analytic expression (Feynman integral) using the Feynman rules. We defined the renormalization map R which gives the divergent part of a Feynman integral. It turned out antipode S induced the counter term for a graph via R. The most important result we obtained is the equivalence of the antipode and the forest formula. This equivalence followed by making a set of identifications between the elements of the Hopf algebra and the objects of the standard renormalization theory. We list these identifications here. • 1PI Feynman graph Γ with subdivergences are identified with irreducible parenthesized word (Xx) whose bracket structure matches the forest structure of Γ, and the letters label the components of Γ obtained after reducing the subdivergences to a point. • The Feynman graph, with all its subdivergences renormalized, Γ is identified with the object: (4.1) • The renormalized Feynman graph Γ ren = Γ + Z Γ is identified with: A BPHZ Renormalization Consider a Feynman graph Γ. By using Feynman rules we can obtain the corresponding analytic expression F Γ . 
In general this expression can be written as a Laurent series in the regularization parameter ǫ. If we consider φ-cubed theory in 6 spacetime dimensions and use dimensional regularization then where a n are some coefficients and the integer N is bounded above by the number of loops in the graph Γ, which can be shown explicitly. We stress here that in the general argument for the BPHZ renormalization nothing depends crucially on the particular toy model chosen here. Let us now define a 'subtraction' operator associated with the graph Γ as follows i.e., it picks out the divergent part of F Γ . In general, the subtraction operator is renormalization scheme dependent, here we have chosen the minimal subtraction scheme. The finite part of the graph can now be written as: So, we see that the term '−t Γ F Γ ' provides the counter term for the graph Γ and 1−t Γ removes the divergence associated with graph Γ and makes it finite in the ǫ → 0 limit. Now, consider the graph Γ to have proper 1PI subgraphs H i , i = 1, · · · , m. For simplicity, we assume that all these subgraphs are overall divergent, if they are not divergent, there is no need for renormalization. We order these graphs such that if H i ⊂ H j then i < j. Now we define the following: where the product in the second equality needs to be ordered. Since the operator '1 − t H i ' removes the divergence associated with the subgraph H i , we see that equation (A.4) is nothing but the graph Γ with all its subdivergences renormalized. Now we define the 'Bogoliubov R operator' which removes the over all divergence associated with Γ and renders it finite: Let us now define a restricted graph Γ/H as the graph obtained by reducing H to a point inside Γ, then it is where the sum is taken over all subgraphs of Γ (i.e., all non empty subsets (denoted by φ) of the set {H 1 , · · · , H m }). We will also need the following theorem due to Hepp [12], which we state here without proof. Courtesy this theorem we can restrict the φ in equation (A.7) to be the subset of non overlapping 1PI divergences. Since H∈φ (−t H ) F Γ provides the counter term associated with the subgraph φ we can write: where Z φ is the counter term which makes the subgraph φ finite. The subgraph φ is formally defined as: and is called a 'forest' of graph Γ. In the above expression the term inside second set of parenthesis is the graph Γ with all non-overlapping subdivergences renormalized. The remaining divergence is then removed by the operator (1 − t Γ ). Equation (A.9) is called 'Zimmermann's Forest Formula'. We can write the forest formula in a schematic fashion as follows: where Γ and Γ r are bare and renormalized graphs respectivley. Γ is the graph with all the subdivergences removed. γ denotes all proper forests of Γ. Z γ and Z Γ are the counter terms. Example We now consider an example which explains some important aspects of the forest structure of a Feynman graph and the application of the forest formula. Let us look at the diagram in figure 2.1. This graph (say Γ 1 ) has only two non overlapping 1PI subgraphs, say H 1 and H 2 , as labeled and boxed in the diagram. The corresponding proper forests are: So we find that: We shoowed earlier that this diagram corresponds a parenthesized word ((x 1 ) (x 2 ) x 1 ). If we compare the structure of the counter term Z Γ obtained here with equation (3.39) (which computes S R [((x 1 ) (x 2 ) x 1 )]), we see that the two objects have exactly the same structure after the identifications described in the section 4. 
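Written out for this example, the forest formula takes a compact form (a sketch consistent with equations (A.4) and (A.9), with each subtraction operator understood to act on the corresponding subintegration, and using the fact that the disjoint subgraphs H_1 and H_2 give commuting subtractions):

F_ren(Γ_1) = (1 − t_Γ1)(1 − t_H1)(1 − t_H2) F_Γ1
           = (1 − t_Γ1) [ F_Γ1 − t_H1 F_Γ1 − t_H2 F_Γ1 + t_H1 t_H2 F_Γ1 ],

where the four terms in the bracket correspond to the empty forest and the three proper forests {H_1}, {H_2} and {H_1, H_2}. The bracket is the graph with its subdivergences renormalized, and the overall operator (1 − t_Γ1) then removes the remaining superficial divergence, which is exactly the structure reproduced by S_R [((x_1) (x_2) x_1)] after the identifications of section 4.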
B Proofs B.1 Sweedler's Notation Let U be any subword of a parenthesized word X, then our coproduct is defined in such a way that: This assertion is easy to prove using the induction on length of the words. It is obviously true for words of length 1. Assume that it is true for word X of length n and then induce. Let us consider an irreducible parenthesized word (Xx) of length n + 1. which proves our assertion for irreducible word of length n + 1. For an arbitrary word XY of length n + 1, the assertion follows by using the induction assumption and the fact that . This completes our proof. B.2 Coassociativity of the coproduct Here we prove that the coproduct defined in equation (3.20) is coassociative and satisfies the following condition: (B.7) Proof. We will prove this using induction on the length of the parenthesized words. It is trivial to see that ∆ is coassociative when acting on the words of length 1. For the induction we assume that it is coassociative acting on words of length n. First, we show that it is coassociative on irreducible parenthesized words of length n + 1 and then we prove the assertion for arbitrary parenthesized words. We use the Sweedler's notation and also drop the summation sign to simplify the notation further. Let X be a parenthesized word of length n then: where equation (B.8) is just the simplified Sweedler's notation and equation (B.9) is the induction assumption. Now, consider the parenthesized word (Xx j ) of length n + 1. A straightforward computation gives: Now, let us compute the RHS of equation (B.7). By using the definition (3.23), we get. First two terms in the above equation are the same as in equaton (B.12). Let's focus on the third term. An important result in this regard is the following. Using this we can write: where, the second equality just follows from the induction assumption (B.9). This is precisely the third term in equation (B.12) and this complete the proof of coassociativity for parenthesized words of length n + 1 an of the form (Xx j ). Now, for a general parenthesized word XY of length n + 1, we use the property of the coproduct (3.22) to get: where the first and second lines follow from property (3.22), third equality follows fromt the induction assumption. Fourth and fifth lines again follow from (3.22). This complete the proof of coassociativity for the coproduct. B.3 Equivalence of definitions of antipode In the definition of the antipode, two definitions, (3.33) and (3.34), were given. For the antipode to be well defined, these two definitions should be equivalent. We prove this equivalence in the following. Proof. We can strip off the parenthesized word (Xx i ) from the argument of the antipode in equations (3.33) and (3.34) and represent antipode as an operator acting on A. Then, we need to show that: Both sides still involve the antipode S, let us do one more iteration on the both sides. For the left hand side we get: where the last equality follows because of the fact that P 1 ⊗ P 1 ∆P 1 = P 1 ⊗ P 1 ∆, which is easy to confirm. For the right hand side, a similar computation yields: From equations (B.31) and (B.32), we deduce that, to show the equivalence of the two definitions we need to prove the following: This is very easy to show using the previously established properties of the coproduct ∆ and the product m. 
Using the coassociativity (∆ ⊗ 1 d ) ∆ = (1 d ⊗ ∆) ∆, we can freely make the following change in the left side of the above equation: Similarly, now we make use of the associativity of the product, this implies that m (1 d ⊗ m) = m (m ⊗ 1 d ). Using this, we can again move the last two operators in the direct product to the first two places, yielding: This completes the proof for the equivalence of the two definitions. B.4 Hopf algebra check Here, we show that the antipode defined earlier in this article actually satisfies the condition (2.9). Proof. We will do this using induction. For a parenthesized word of length 1, (x), it is easy to see that: Let us now assume that the assertion holds for parenthesized words of length n, consider an irreducible parenthesized word (Xx) of length n+1. Since the map P 1 annihilates e, using the Sweedler's notation we can write the antipode of (Xx) as follows: Using this, we find that: For an arbitrary parenthesized word XY , due to the induction assumption and the property (3.22) of the coproduct, the assertion holds trivially. This completes our proof. C Example Here, we will work out an elementary example which elucidates how all the different elements of the Hopf algebra fit together to give a finite result for a divergent integral. We will use a very simple toy model, defined below: It is easy to see that I j is divergent as 1 jǫ . We call the subscript j in (x j ), the loop order of (x j ). This toy model is the simplest realization of our Hopf algebra. Let us consider the divergent graph X = ((x 1 ) (x 2 ) x 1 ). Our claim is that the expression X r ≡ m − (first four terms with c replaced by 1) . (C.5) Now, The first two terms can be combined to get: The third term can be written as So that the sum of the four terms is: Plug this in equation (C.5) we finally obtain the expression: which is clearly well defined and finite in the ǫ → 0 limit. Although this was a very simple example, there should be no hinderance in generalizing this to more realistic QFT examples. If we consider some realistic Feynman graph, our Hopf algebra will renormalize it with the same ease by applying the operator m [(S ⊗ 1 d ) ∆].
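As a closing sketch of the simplest case (assuming minimal subtraction, so that R extracts the pole part, taking the empty word to be mapped to 1, and writing the toy-model value of a primitive word as φ [(x_1)] = I_1 = 1/ε + a_0 + O(ε), where a_0 merely denotes the finite part), the machinery above reduces to:

∆ [(x_1)] = (x_1) ⊗ e + e ⊗ (x_1),
S_R [(x_1)] = −R [φ [(x_1)]] = −1/ε,
m [(S_R ⊗ φ) ∆ [(x_1)]] = −R [I_1] + I_1 = a_0 + O(ε),

which is manifestly finite as ε → 0: the counter term is just the pole subtraction. Longer words such as ((x_1) (x_2) x_1) work in the same way, with the coproduct supplying the nested and disjoint subtractions of the forest formula.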
Mediators of Public Resonance: Cinematic Reflections on the Role of Iconic Figures of the ‘Everybody’ in Populist Political Processes This article analyses how the medium of film stages and reflects on populist political practices and on the main figures and ways of appearing, acting and relating they involve to create public resonance. In analysing two sample films – one ( Meet John Doe , Frank Capra 1941) released in the United States in the early 1940s and the other ( Chez Nous , Lucas Belvaux 2016) made in Europe in the 2010s – it shows in detail how populist leaders embody the role of a central iconic figure, the ‘everybody’, in addressing the audience as ‘people’ in opposition to an ‘elite’. The central characteristics of such a figure and its functions in political discourses as well as the effects of its public performances and the stages it uses are disclosed. In addition, the article gives an insight into the genealogy and iconology of this figure. In this respect, it shows that for contemporary populism, the link of everybodies to the political myth of ‘the people’ re-emerging with the popular revolutions of the seventeenth and eighteenth century is most important. In comparing the two sample films, the article discusses various concepts that are used in political theory to grasp the phenomenon of populism. By relating these concepts to the lived practices of populism depicted in the films, the key stylistic and performative features of populism are highlighted and the patterns of collective myths associated with it are revealed. At the same time, however, the change in populist political mobilisation from modernity to late modernity is also discussed. In this reading, popular film appears as a medium that does not represent an escape I. INTRODUCTION How does populist discourse address its public? What are the main aesthetic tactics, iconic figures and ways of speaking and appearing in public that it employs to create resonance among spectators and citizens? What aesthetic, political and popular arts traditions do the aesthetic figurations and tactics circulated by populist discourse reactivate? This article 1 answers such questions by analysing how popular cinematic productions deal with a central iconic figure, the 'everybody', which populist leaders embody in a particular way 2 in staging 'the people' and in addressing the audience as 'people' in opposition to an 'elite'. A close reading of the mise-en-scene of this mediating figure grasps the specific offer that populist discourse proposes to spectators and citizens and also reveals aspects, meanings and questions in respect to the past and contemporary 'populist Zeitgeist' (Mudde 2004, 542) which are usually not explicitly addressed in the political sphere. The first section introduces the concept of 'the everybody' based on the example film, Meet John Doe (1941) by Frank Capra, and discusses the tasks that this figure addressing the audience can assume in the realm of the political. At the same time, the everybody is also considered as a figure of thought, and insights are given into its genealogy and iconology. The second section presents a close reading of Meet John Doe, whose title already refers to a 'John Doe' -a name used for centuries as a placeholder in English and US court cases when the true identity of a person is either unknown or when there is a preference for it to remain concealed, and which has subsequently become synonymous with 'the ordinary man'. 
This film is investigated from a visual studies perspective informed by political theory and the history of democracy as a cinematic reflection on populist political processes, with especially the role that such everyman figures play in addressing and involving the audience being looked at in detail. The central functions that such figures assume within political discourses are thematised. The final section of the article addresses the contemporary 'populist revival' (Roberts 2007, 3) and discusses a contemporary European cinematic appropriation of the plot around an everybody figure and her exploitation as a political leader, which show similarities to political practices demonstrated by the Front National (FN) in France. Besides equivalences between how populism is depicted in the two films, the paper analyses the main transformations in the mise-en-scène of populism at the beginning of the new millennium, when it was emphatically re-invented in various European countries, not only in France but also in Italy, Spain, Greece, Denmark, the Netherlands, Finland, Sweden, Austria, Germany and Hungary (Taggart 2017). Using the example of the French film Chez Nous (This is our Land, Lucas Belvaux, 2016), the aestheticpolitical tactics and iconic attraction figures now appearing in a transformed way 1 An earlier, shorter version of this article appeared in Italian as: Anna Schober, 'Un uomo di strada diventa un leader populista: Arriva John Doe (Capra 1941) come riflessione cinematografica sul ruolo degli "uomini qualunque" nei processi politici,' Cinema e Storia 1 (2019): 57-74. The Deutsche Forschungsgemeinschaft (ref. SCHO 1454/1-1) funded the research upon which this article is based. 2 In his typology of representations of the 'good rulers', Pierre Rosanvallon (2018, 218f.) also mentions the 'homme-peuple', which goes hand in hand with the claim to install a personalised power perceived as radically democratic. The homme-peuple can be expressed in totalitarian and populist versions -although these cannot be equated. Populist movements usually adhere to the electoral boundaries of democratic political systems, although there are strong tendencies to weaken them, as the recent example of Donald Trump shows. on the European political stage are investigated as central mediating devices of a new 'democracy of rejection' (Rosanvallon 2008, 123, 179) in which the people assert their sovereignty by periodically rejecting those in power, voters often turn into protesters and new forms of border regimes (Schulze Wessel 2017, 103-107) emerge, which can manifest themselves anywhere in social space. The particular aesthetics in which this film responds to contemporary populist practices informed by social media and surveillance practices is another pivot of its analysis. II. WHO OR WHAT ARE EVERYBODIES? In Meet John Doe (1941), Gary Cooper embodies an innocent, direct, 'simple' character. In the course of the film, however, he is built up into a popular political figure, a kind of leader for the masses because of crisis-like, random circumstances, which in the film are concretised as those of the United States during the Great Depression and in particular the media business of the time. In the film, the actor first appears as a tramp. As such, he is characterised as moving through various milieus, is not properly rooted in any of them but gives us a view of numerous social scenes. 
In casting Gary Cooper for this role, Capra chose a performer who does not submerge himself in his part, but 'who instead come[s] trailing clouds of association which are the residue of other parts, and whose "acting" has the bold simplicity of an icon rather than the literal detail of a photograph' (Dickstein 1983, 321). As a result, like Charlie Chaplin's tramp, he acquires 'a timeless and emblematic quality that distances [him from] ordinary poverty' (Dickstein 1983, 321). This combination of a tramp and an iconic actor who in the course of the story also becomes a political figure addressing the masses turns this cinematic figure into an everybody, a figure of appeal and mediation (Schober 2019a) that feature films and other visual media frequently employ. The concept of 'everybody' (l'homme ordinaire) is taken from Michel de Certeau's The Practice of Everyday Life/L'invention du quotidien: 1 Arts de faire (1988). According to de Certeau, the main function and achievement of this figure conveying an impression of ordinariness is the popularisation of knowledge and the dissemination of political positions. Through it, discourses can simultaneously be proven true and authenticated (de Certeau 1988, 2-3). 3 At the same time, through such 'general persons', existing certainties can be rejected, questioned and contested, and new meanings, world views and orders of things can achieve a breakthrough. The design of these figures refers to a reservoir of pre-existing images, often of previously marginalised or alien human forms. These are then adapted and transformed into a 'new man' or 'new woman', paving the way for a new public regime, in which a transformation of a subject culture (Reckwitz 2010, 69-79) and of political power relations (Schober 2019b, 61) converge. From a sociological perspective, everybodies are variants of the figure of the third person (Fischer 2004, 78-86): although they usually appear with a very specific face as representatives of a particular social class, differentiated by gender, nation, race or ethnicity, they nevertheless address 'everybody', and hence speak to a potentially universal audience. In doing so everybodies trigger transition but can also be overpowering and generally act as ambivalent agents (Schober 2014). They mediate not only between the particular and the universal and the self and the other but also between the private and the public, and in doing so bring about communion and reconciliation as well as being able to increase hate and resentment. Historically, such discursive figures appear very early in the theatre -in the 'Everyman' plays of the fifteenth and sixteenth centuries (ca. 1510-1535) and their Dutch model Elckerlijc, 4 in which they convey both individual responsibility and community spirit to the audience. In addition, the everybody as a moral-ethical attitude towards the world also already existed in the sixteenth century, for example, in Montaigne (1998, 5, 78-96), who cites 'random trivial figures' that enable other, likewise random and trivial figures to constitute themselves as private and political subjects. However, for the question of the role these figures play for populist political movements and parties in the twentieth and early twenty-first centuries, another genealogical line is particularly important, in which elements of these earlier traditions are taken up. 
The main thesis formulated in this respect is that iconic figurations of the 'everybody' gained particular political momentum following the popular revolutions between the sixteenth and nineteenth centuries and in the course of this became re-figured as public agents linked to the insurgent crowd, providing a sensual physical presence and, with this, also a permanence of the modern, Western myth of the self-empowering people (Schober 2019b, 65). Figures of individual insurgents were thus pictured both as part of the people as a rebellious group of concrete persons and as carriers of a myth containing the power to transform society (Canovan 2005, 120). In this manifestation, everybodies appeared as physical-concrete points of connection and attraction for political mobilisation. Some of these figurations soon became iconic and were adopted and remediated using various 'visual vehicles' such as painting, photography, film, Internet platforms and live performance, which makes the iconography of the everybody a cross-media one. Parallel to this development of political attractors in human form to appeal to the citizens as people, figures such as 'She' and 'I' appeared in literature in the eighteenth century, particularly in novels, through which the readership was trained in identificatory reading (Kittler 1985, 86f.). But also in theatre, where the classical 'everyman' had almost completely disappeared between the seventeenth century and the beginning of the twentieth century, this material was revived and has experienced a lively performance history since the early twentieth century (Adolf 1957;Schmidt 2001, 397 and 44f.). The main playground of the everybody as a relational figure to address and involve the audience in modernity into a plot, however, is film: here a multitude of bodily, moral, ethical and political positions are staged to which the viewer can temporarily identify with and experience resonance in a variety of (bodily, imaginative, emotional) ways. Populist or not, in their attempt to stage an appeal, politicians can use the medium of film to increase their reach and presence, but, as the examples examined in this article show, films can also be used to reflect on the acquisition of political resonance and its entanglement with visual media. III. THE TRAMP AS A WAY OF POLITICALLY ADDRESSING THE PUBLIC Meet John Doe (1941) by Frank Capra represents a close observation of political processes based on a colourful appeal proclaiming the vox populi and the enthusiastic 4 These plays confront their audience with the existential experience of death and address the public directly through the figure of the everyman (Davidson et al. 2007, 9). The film also shows how, as part of this process, the initial fascination and enthusiasm that his followers invest in John Doe can suddenly tip into hatred and persecution. This ultimately leads John Doe to wish to fulfil his promises of authenticity, which he embodies in his role, precisely by committing suicide as he had pledged at the start of the film in the words his inventor had put in his mouth. In the released version of the film, 5 however, John Willoughby alias John Doe is prevented from the planned suicide by a small group of supporters who still believe in him, providing the story with a happy ending. As Frank Capra (Glatzer 1975, 34) or 'the elite' is one of the core features of populism (Judis 2016, 88;Moffitt 2020, 21;Mudde 2004, 543). 
In addition, the kind of politics presented in this film is shown as something that is performed and has to do with language, issues of accent, physical appearance, gestures and ways of dressing that in one way or the other are characterised by a 'flaunting of the low' in terms of relating to the people, appearing before their eyes and decision-making (Ostiguy 2017, 77). More historically precise, in this film Capra engages with the widespread populist conceptions and practices during the interwar years in the United States. On the one hand, these are tendencies that emanated particularly from the middle class and were aimed against the New Deal, which was largely supported by the left and the underprivileged. The John Doe Clubs, for example, reflect various Christian populist movements such as the Share Our Wealth Clubs founded by the populist politician Huey Long or the National Union for Social Justice by the radio priest Father Coughlin (Brinkley 1982). 6 These movements constituted an 'optimistic populism' (Phelps 1979, 391), which displayed 'great reverence for the individual' (Glatzer 1975, 39). It was characterised by an 5 Frank Capra (1997, 302;McBride 1992, 431) writes in his autobiography that he had difficulties finding an ending for this film. He realised several endings and tested them by showing them to the press and various audiences. (Richards 1976, 67). At the same time, however, the 1930s were also characterised by a strong political mobilisation on the left, and John Doe in his faded and torn denim-outfit also resembles the industrial union worker, the 'forgotten man' who Roosevelt evoked in a speech he gave in 1932 as a reason to rebuild the country and the economy from the bottom-up (Gage 2016). Although the film's depiction of the John Doe Clubs alludes more to conservative populist movements, it is left open whether John Doe gives a voice and a face to a right-wing or left-wing mobilisation. Rather, populism is portrayed as a political force that involves a range of discursive resources and performative practices that can be put to very different uses. 7 At the same time, however, this film also involves us in a reflection -for example, The John Doe Clubs and "the John Doe idea," which is Capra's own cherished ideal of good will and personal benevolence, are ludicrously inadequate to the problems they face, and to the villains who manipulate them from the very start' (Dickstein 1983, 330). Mainly, however, this film demonstrates the centrality of the everybody figure for the constitution of 'the people' against 'the elite'. The people, according to Margaret Canovan, thereby needs to be understood as being neither a natural, organic entity nor an artificial construction but as 'the contingent outcome of intersecting actions by a multitude of political actors, none of them in a position to foresee or control the result' (Canovan 2005, 55). It is at the same time a number of concrete individuals taking action in a specific time at a specific place and an abstract collective entity, a kind of myth of salvation (Canovan 2005, 120f.). Hence populism appears as something that is done rather than being solely a property of political actors or an ideology (Moffitt 2020, 24;Ostiguy 2017, 74), even if the resistance that populism develops vis à vis elite political culture, often imbued with values commonly labelled 'liberal', such as individualism, multiculturalism, internationalism or progress-orientation, can amount to something like an 'ideology' (Canovan 1999, 4). 
As already mentioned, Meet John Doe does not foreground whether the kind of mobilisation that the tramp as an everybody triggers can be labelled left or right. What is more emphasised is the portrayal of how John Doe as a confrontational personalised leader motivates citizens and infuses them with a collective enthusiasm, which, however, can easily tip over into expressions of fanatical hate and resentment (Girard 1999, 145;Schober 2014, 266f.). In this respect, the styles and sociocultural elements, for example in the form of scripts, stages, and the habitus of those that appear in public as populists, are made graspable throughout the film. The latter in particular is supported by the acting skills of Gary Cooper and by the medium of film as one that can capture 7 This is also highlighted by Ernesto Laclau (2007, 176) in his definition of populism. physical appearance, facial expressions and gestures, and convey them to the audience, for instance, through close-ups. Meet John Doe thus represents populism as an everyday practice and in doing so enters into a complicity with theoreticians of populisms following a discursive-performative approach, such as Margaret Canovan (1999), Ernesto Laclau (2007) or Benjamin Moffitt (2016Moffitt ( , 2020. 8 But what are the main characteristics of such figures, the stages they employ, their performances and their audiences that this film shows us? III.1. CONNECTION: UNIVERSAL-PARTICULAR The brief outline of the plot has already shown that in the narration of this film the position of the everybody figure is propelled by 'John Doe', who is simultaneously an everybody and a nobody and is only coincidentally made into this concrete character in flesh and blood. The everybody occurs as an appealing and at the same time transgressive figure: it potentially addresses 'everybody', that is, a broad, even universal circle of spectators and listeners, and in doing so, as the film demonstrates, it can forge connections between previously often hostile or reciprocally ignorant social positions. In tension with this universalising role, however, the film also shows that it is precisely the particularity of John Willoughby alias 'John Doe' and his 'low' way of relating to people (Ostiguy 2017, 77f.), that is, his weather-beaten face, his faded dungarees, his crumpled hat, uninhibited conduct and a certain brittleness in speaking that distinguishes his voice from the usual polished speechmaking voices, that constitutes his charisma and leads to the emotional involvement and activation of the audience. Above and beyond this, in one scene sitting by a fire, the tramp John Willoughby also embodies a nostalgia for a vanished America, an idealised, preurban America. This means that part of the staging of John Willoughby alias John Doe as an attractor for a broad audience is presenting him as an example of homepride. This is also done by showing him as an enthusiastic baseball player, the most popular sport in the United States, closely connected to national morality. In this role he appears as a sportsman with an attractive physical appearance and -despite his initial awkwardness and scepticism -with an increasing ability for social adaptation, which makes him a kind of 'new man' of 'organised modernity' (Wagner 1993). 9 His opponents are exponents of the 'high' in the form of intellectuals and members of the elite, 'the patronizing, jargon-ridden, ivory tower dweller, isolated from the common people' (Richards 1976, 70). 
In the film, these are represented by newspaper reporters, advertising executives or players in the political machine in a narrower sense. III.2. AUTHENTICITY A main feature of the everybody in Meet John Doe is Long John's authenticity. This impression is achieved through presenting him as natural, spontaneous and novel (Montgomery 2001, 403f.) instead of formal, artificial and stale, for example, by highlighting his use of folksy expressions or his spontaneous bodily or facial expressions, which are captured by close-ups or are shown by his demonstrative 8 This approach is distinct from two other ones (Moffitt 2020, 12f., 17f.) regarding populism either as an ideology -more precisely a 'thin centered ideology' (Mudde 2004, 543) that is usually combined with other ideologies -or as a strategy. The latter approaches populism through a focus on the role of the leader (Wayland 2017). 9 As such, John Doe opposes the earlier hegemonial culture of the self that emphasised the bourgeois professional subject that was not socially positioned in a group-oriented way (Reckwitz 2010, 358 authenticity of the figure is addressed in the film as one that was formed and constructed for an audience by Ann Mitchell, for example, in her initial letter. However, the film also shows that this 'fabrication' does not cancel out its effect. The resonances this mask of authenticity is able to trigger are also overtly presented throughout the film as something ambivalent. This staged authenticity makes politics personal and immediate and confronts another way of doing politics, which in this way seems all the more remote and bureaucratic (Canovan 1999, 14). As such, it impresses and convinces not only the public but also Long John himself, which towards the end of the film will bring him to grow into his role morally and become the sacrificial social figure he was styled as in the fictional reader's letter right at the beginning. At the same time, this pronounced authenticity leads to a kind of fusion between the leader and his followers, appearing in the form of a discourse of love (Ostiguy 2017, 83), which also establishes a demarcation between 'us' and 'them' and can amount to a totalitarian closure. III.3. TRIGGERING AFFECTS AND RESONANCES The film identifies John Doe as a figure that is closely related to other people's desires -those of Ann Mitchell, who 'invented' him, of the activists in the John Doe Clubs, who are mobilised by him as citizens, and the media tycoon D. B. Norton, who tries to exploit him. He motivates this desire through his simplicity and directness and the spontaneity of his actions. Through the effects that this desire triggers, however, he is also encouraged to further cultivate his role as John Doe. The desire-based relationship between everybody and the recipient is also repeated as well as addressed through the film's mise-en-scène. Thus at a key point in the film, when John Willoughby alias John Doe speaks to his intra-filmic audience for the first time over the radio, he is shown in full face close-ups, so that we as the audience of the film assume almost exactly the same position as the enthusiastic and admiring masses in the film. At this point, but also at other moments, we are expressly called on to imagine ourselves in the position of the emotionally involved recipients of the speeches made by this extraordinary person situated in a specific media setting. 
But we as the audience are also included in the film by another media tactic: through the narrative view behind, from the beginning we are privy to the 'doubling' of the leading character. The complications that arise in the course of the film and the repeated moving away that John Willoughby practices here vis à vis the position of John Doe, however, request us in the auditorium also to observe the social dynamics that John Willoughby triggers as an everybody from a certain distance. In this way, the film simultaneously puts us in a position in which we are called upon to relate to the main character emotionally as well as in one in which we are brought to observe how populism is put into practice and especially the ambivalences that accompany it. This comes to the fore most prominently in a sequence in which the initial enthusiasm of the masses for John Doe tips over into hate and persecution as a result of the skilful defamation by D. B. Norton. The map of the spreading John Doe Clubs that is repeatedly shown in the first major part of the film, marking a geography of desire in respect to 'John Doe', thereby suddenly transforms into one of pure anger. IV. CONTEMPORARY EUROPEAN APPROPRIATIONS OF POPULISM Meet John Doe is a reflection on populism as a mediatised performance of politics, which 'has the revivalist flavour of a movement, powered by the enthusiasm that draws normally unpolitical people into the political arena' (Canovan 1999, 6). The film focuses on the populist movement's leader, his audiences, and on how a relationship between the leader and his followers emerges and changes, but it is also about the various stages (newspapers, radio, live performance and tour) and political forums (stadiums, assembly rooms and streets and the meetings, large gatherings and marches taking place in them) as well as about the range of people 'behind the scenes'. 10 Because of this, it appears astonishingly close to contemporary forms of populism, in which central processes of political representation and decision-making seem to express as well as to promote media tendencies such as polarisation, personalisation and a focus on scandals and emotions in combination with antiestablishment attitudes (Moffitt 2016, 75-77). In a very pronounced form, and although there are certainly also differences, contemporary populists such as Donald Trump, Marine Le Pen or Beppe Grillo show some of the characteristics that Capra highlighted in Meet John Doe. They all demonstrate the centrality of the mediatised leader who seeks to exploit the (constitutive) gap between the promise of a better world and the (often seemingly defective and boring) real existing performance of democracy (Canovan 1999, 12). Like John Doe, they manage to create powerful impressions of authenticity and credibility by rejecting prevailing conventions in their performances -for example, by employing the self-culture style of the rebellious artist such as Beppe Grillo or using politically incorrect, 'bad' expressions such as Donald Trump or Marine le Pen. Their direct, 'low' style goes along with a 'characteristic mood' (Canovan 1999, 6) turning politics into something like a campaign or a movement. 
On the various public stages, the personal, singular and the exceptional is foregrounded in connection with these 10 For long stretches, the film, for example, refers to practices coined by Charles Edward Coughlin, commonly known also as 'Father Coughlin' or as the 'Radio Priest', a Catholic priest who became a very successful populist politician opposing Franklin D. Roosevelt and his New Deal in the 1930s. He was one of the first to use radio to reach a mass audience and attracted millions of weekly listeners. On these practices, see: Brinkley (1982, 83 charismatic leaders -so strongly that it tends to be overlooked that they are still staging a universal address to potentially each and everyone in the audience. They manage to be perceived as strong leaders, able to solve the crisis they themselves perform 11 and in this way to defend the people against its enemy, which -in the European scene -is now often a combination of the corrupt elites and other groups defined as 'threats', especially immigrants, Muslims, multiculturalists and globalists. A politicisation of regional belonging is often used to express more diffuse experiences of a lack and of frustrations in relation to politics in general (Taggart 2017, 254). Such a contemporary re-invention of populism is reflected in Chez Nous (This is Our Land, Lucas Belvaux, 2016), a film that not only shows the most diverse parallels with how populism is staged in Meet John Doe but also makes historical transformations graspable. It is a French film with the action located in northern France, where the FN began making gains in the 1990s, especially in working-class areas (Judis 2016, 103). The film thus deals with the fact that today, unlike the 1930s, populism is also a strongly European political phenomenon -something that is historically quite new, since, as Cas Mudde and Cristobal Rovira Kaltwasser have shown, the first decades of the post-World War II era and to a large extent also before populism was 'almost totally absent from European politics' (Mudde and Kaltwasser 2017, 33). Both the story and the way the film involves the spectators through the narrative view behind are strikingly similar to Meet John Doe. Here, too, a young person, Pauline (Émilie Dequenne), who works as a community nurse in a small town in the Pas-de-Calais in northern France, is selected as a mayoral candidate to provide credibility and authenticity for a populist party. Again, similar to the Capra film, through the narrative view behind we as the public are privy to this undertaking from the beginning. Through Pauline as a guiding everybody figure, we are invited to closely observe how contemporary populism is practised and how she serves as a vivid mediator for making politics personal and immediate. In contrast to the Capra film, however, and connected to an observation of contemporary populist practices, the film reflects on identity formation as a highly unstable, conflicted, contradictory and liminal process. 12 It portrays everyday life in a small town in the rural periphery, characterised by multiculturalism as expressed in 'ethnic' restaurants and social circles with mixed backgrounds -something, however, that is sharply contrasted with the presence of aggressive and violent practices of identity politics by local militant right-wing groups. 
Pauline is represented as a woman whose family history is rooted in the region, a mobile caregiver with excellent social contacts with a sick, left-wing, former communist father, but also as a single mother crushed between a most flexible and demanding form of employment, various private care responsibilities and the adjustment to different and competing desires that are expressed towards her. At the beginning, Pauline is shown as having no interest in party politics at all but being alert to the political and social tendencies around her. The film depicts her as a contemporary 'everywoman', whose day-to-day experiences appear torn between traditional expectations and a contemporaneity marked by fragmentation, competing gender models and her own contradictory aspirations. Although, here, the subject model for a contemporary everybody is female, 13 the way the heroine is represented shows the extraordinary potential of gender and gender expectations to be fluid and to be re-accentuated and even re-invented in different ways. In the course of the film, Pauline is promoted to fill in the role of a populist leader by an authoritative, fatherly colleague, Philippe Berthier (André Dussollier), a medical doctor with a radical right-wing past. In the scene where he tries to convince her to stand for mayor, the doctor explicitly acknowledges her difference as a woman and how important this is for the role he has chosen for her: 'It is women who will change the world', he says. At such moments, the film comments on a European trend which tripled between 2011 and 2014. 14 Not only through a client who she finds dead at the beginning of the film but also through all the other suffering people she has to deal with in her job, she is closely linked to experiences of ineluctability -again similar to John Willoughby through the risk of committing suicide. Much of the film is concerned with the changes that overturn Pauline's social relations after she decides to assume the role of candidate for the mayor's office. First, we see how she is transformed in a media-friendly way -from an unspectacular but beautiful middle-aged brunette and mother full of simplicity and sincerity, she is dyed blonde and is presented as a spectacular media face, which, however, appears completely silenced, like a dummy. Behind the scenes, as we also observe, she is hardly allowed to make any decisions. The 'incestuous elite', against which she is positioned, is depicted as composed of the political establishment, especially mainstream parties, institutions of the financial world such as Wall Street and the European Central Bank, and 'aggressive Islam', and is merged with immigrants, 'beneficiaries' and 'uprooted workers'. In the second part of the film, we witness how the sympathies she was entrusted with by the various local social circles at the beginning are overturned and changed after her transformation into a politician. Trustingly, some of her clients now start telling her about their fears, for example, in respect to immigrants, which are turned into racist fantasies or conspiracy theories, whereas others -for instance, a Muslim woman and her daughter whom she had cared for -begin displaying rejection or disturbance. A polarisation emerges, which soon leads to aggression and even violence, stirred up by the extremist right-wing club around Stephane -as we get to know again through the narrative view behind. 
In this pronounced depiction of a reversal of desire and enthusiasm into hate, fuelled by manipulative tactics of fascist forces, there are also strong parallels with Meet John Doe. In both films, there are other, similar figures -the 'colonel' in Meet John Doe and two women of similar age in Chez Nous -who act as contrasting figures to sharpen the profile of the main characters. Pauline is on the one hand contrasted with Nada (Charlotte Talpaert), a young woman with southeastern European roots and her (former) friend, who engages in political activism against xenophobia. At the same time, Pauline is set in relation to Natalie (Anne Marivin), who initially supports and encourages her to get involved with the right-wing party but -after Pauline resigns as a mayoral candidate -drops her and, towards the end of the film, finally takes her place as the political hope and candidate of the populist party. While Nada challenges xenophobic views in her circle of friends and in the small town and shows civil courage and activist engagement, Natalie clearly expresses right-wing views. Her extremist attitudes are underlined by her son, who, initially unnoticed by his mother, radicalises himself on the internet and launches a dystopian video in which radical Islamists take power in France. By situating Pauline between these two female figures, with whom she maintains a close but in both cases not a tension-free relationship, it remains unclear whether she can be assigned to the right, xenophobic side or to the left side, which is critical of racism. As is typical in contemporary film-making (…), all of a sudden she realises his threatening and devastating attitude and instantly establishes a further differentiation and separation. In the context marked by anonymity, fragmentation, social decline, pronounced and often violent contrasts and strange coexistences in which Pauline is situated throughout the film, such sharp polarisations and the separating function that charismatic (male and female) everybody figures can exert come most sharply into the foreground. This appears to be the biggest difference that Chez Nous identifies vis à vis the populist reality staged by Frank Capra in the 1940s. In addition, John Doe is shown as gradually fitting into a larger whole and managing to grow morally, whereas Pauline is constantly torn between different, competing models of action and contradictory aspirations. But similar to John Doe, she is also searching for something: a new life, providing stability in times of strange coexistences and non-connectedness, as she and Stephane express it at various points in the film. V. CONCLUSION: MEDIATORS OF POPULISMS IN TRANSFORMATION To sum up, in both films the central role of everybodies for the development of populist political movements is demonstrated and analysed. Both Pauline and Long John are depicted as being used as a kind of affective interface to constitute a 'people' positioned against an elite -with this people not necessarily being the same as the 'common people'. The fact that both are amateurs, completely lacking political experience, is exposed as a basic condition for being trusted to exercise an appeal to 'the people'. The central characteristics that the films assign to them are self-disclosure and authenticity, even in a staged and invented form, the triggering of emotions and resonance and an ethical attitude characterised by a display of individuality and community spirit.
In one way or another, a 'flaunting of the low' (Ostiguy 2017, 77) is noticeable in their physical presence and appearance as well as in their gestures and language. At the same time, at first neither of the main characters recognises the fascism of the parties they appear for; that is, they are also characterised as somehow ingenuous and naive people. As both Meet John Doe and Chez Nous demonstrate, with its recording and registering function, also in relation to the human figure and especially the human face, the medium of film enters into a kind of complicity with populism, as it is particularly able to confront us with 'authentic' doppelgangers of ourselves. With the cinema performer, 'the physical existence of a person [is] present on the screen in overwhelming size. The camera actually picks out a fleeting glance, a casual shrug' (Kracauer 1985, 137). At the same time, the performer remains an 'object among objects' (Kracauer 1985, 139) that can look back at the audience and so create contact and involvement. The bodily dimension of this figure and the 'touching' resonance it can create in the spectator are thus emphasised by the medium of film. Populism appears in both films as a particular way of doing politics that exploits the gap between what is and what could be. It emerges not only as a political performance and a praxis but also as a kind of rapport for which a certain style, certain patterns and affectual narratives (Canovan 1999, 5; Moffitt 2016, 5-6; Ostiguy 2017, 74f.) can be identified, even if historically and regionally they are played out in very different ways. The films explore in detail the public stages on which these are employed and how they are used. In the form of personalised stories, they also show the effects of populism on the public, and it is made present how this affects the leaders as well as public events as a whole. In the case of both main characters, their role as mediating everybodies is presented as a very ambivalent one. This consists in the fact that -as particular 'ordinary people' -they bolster claims of veracity and convey emotions, creating a connection and emotional bond with some in the audience, thus establishing an 'us' against 'them'. They evoke approval and even love, but they assume an increasingly polarising role in the course of the stories that they motivate. Like love, hate and resentment also prove to be the motor of socialisation (or disintegration). [Footnote 15: Eric Fassin (2019, 81) seeks to establish such a distinction. However, he fails to recognise that revolt and resentment cannot be neatly separated, but that practices of mediated politics are 'often far messier and more emotional than dominant normative ideals might imply' (Wahl-Jorgensen 2019, 19).] One line along which a distinction can be drawn, however, is the attitude towards immigrants and refugees. The populist party for which Pauline is chosen as a candidate is characterised by a sharp demarcation of multiculturalism and the expression of hatred towards migrants. It has its offshoots in fascist groups that also use violence against refugees. This is contrasted by collective manifestations of protest against xenophobia stirred by committed citizens such as Nada, Pauline's (former) friend. A similar distinction in relation to contemporary populism as presented in this film is also made by John Judis (2016, 15), who points out that 'leftwing populists champion the people against an elite or an establishment. […] Rightwing populists champion the people against an elite that they accuse of coddling a third group, which can consist, for instance, of immigrants, Islamists or African American militants'. [Footnote 16: However, his conclusion (Judis 2016, 15) that left-wing populism is dyadic and right-wing populism triadic cannot be supported. Left-wing populism is also often triadic because in it, too, certain groups -such as the bourgeoisie, former allies, academics and sometimes foreigners -are regularly associated with hostility and resentment. Analysis and the expression of resentment cannot always be clearly distinguished here either.]
Rene Girard (1999, 20; Schober 2014, 266f.) shows that structures of desire such as those at play in populism are always determined by a triad, since mediators of desire always appear in them and the relationship between subject and mediator threatens to change from an admiring to a hostile one. Although the populist parties in both films tend to be assigned to the right-wing political spectrum, the actions of Long John and Pauline are clearly attributable neither to the right nor to the left -they are geared towards mobilisation beyond the established party spectrum and take up perceptions and problematic issues that have to do with what is experienced as a lack in everyday life. Both leaders appear in a way 'chosen' but are at the same time sacrificed and scapegoated by the masterminds of the political machinery of the party for which they stand. So, for example, similar to followers of contemporary European populists such as activists of Umberto Bossi's Lega Nord, Pauline is shown as enacting a kind of mimetism (Dematteo 2011, 53-54) and turning into a local 'clone' of the national leader of the party, who at the same time is depicted as being endowed with the power to destroy her followers. Lucas Belvaux's film stages contemporary populist practices as part of a social reality shaped and transmitted by a variety of visual media, most importantly social media, digital media and optical surveillance techniques, which are shown as repeatedly assuming an active role -also in advancing the film's narrative. In the Capra film, in contrast, in tension with John Doe's visual and physical presence, it is his voice that is thematised as being responsible for his success as a leader -a brittle, clumsy voice on the radio that enters people's private homes even in the remotest areas of the country. This voice triggers resonance, whereas in the case of Pauline it seems initially to be her caring, relating attitude as a nurse and her broad involvement in the local community that does so. In her case, this resonance vanishes as she is transformed into a media-friendly icon of political protest, cut off from former relations. Together with the various (monitoring) images she encounters and which make her change the ways she has chosen, the image into which she is transformed provokes helplessness, violence and withdrawal. Hence, Capra still seems to attribute a strong mobilising, albeit polarising, potential to visual and above all acoustic media in connection with political (populist) processes. In contrast, Belvaux almost exclusively emphasises the divisive, destructive, controlling and -in the case of Stephane using his smartphone to show refugees as trophies -also the triumphantly self-congratulatory and thus again violent role of visual media in the present.
However, it is again a visual medium, his film, that leads us to contemporary conflicts and the social, political and psychological dispositions associated with them.
The essentiality landscape of cell cycle related genes in human pluripotent and cancer cells Background Cell cycle regulation is a complex system consisting of growth-promoting and growth-restricting mechanisms, whose coordinated activity is vital for proper division and propagation. Alterations in this regulation may lead to uncontrolled proliferation and genomic instability, triggering carcinogenesis. Here, we conducted a comprehensive bioinformatic analysis of cell cycle-related genes using data from CRISPR/Cas9 loss-of-function screens performed in four cancer cell lines and in human embryonic stem cells (hESCs). Results Cell cycle genes, and in particular S phase and checkpoint genes, are highly essential for the growth of cancer and pluripotent cells. However, checkpoint genes are also found to underlie the differences between the cell cycle features of these cell types. Interestingly, while growth-promoting cell cycle genes overlap considerably between cancer and stem cells, growth-restricting cell cycle genes are completely distinct. Moreover, growth-restricting genes are consistently less frequent in cancer cells than in hESCs. Here we show that most of these genes are regulated by the tumor suppressor gene TP53, which is mutated in most cancer cells. Therefore, the growth-restriction system in cancer cells lacks important factors and does not function properly. Intriguingly, M phase genes are specifically essential for the growth of hESCs and are highly abundant among hESC-enriched genes. Conclusions Our results highlight the differences in cell cycle regulation between cell types and emphasize the importance of conducting cell cycle studies in cells with intact genomes, in order to obtain an authentic representation of the genetic features of the cell cycle. Background The cell cycle is the process of growth and proliferation of living cells, during which each cell replicates its genome and divides into two daughter cells. This is a highly complex and organized process that consists of 4 consecutive phases: G1, S, G2 and M, and requires the scheduled occurrence of a large series of events [1]. Naturally, faithful execution of the cell cycle is of utmost importance, and many layers of regulation have evolved to ensure its integrity. The most prominent regulation layer is the periodic expression of cyclins, which allows the ordered activation of specific cyclin-dependent-kinases (CDKs), which, in turn, regulate the transition of the cells through cell cycle phases [2]. Another regulatory layer is embodied by the cell cycle checkpoint mechanisms. Checkpoints are control mechanisms activated at different points along the cell cycle to monitor its integrity and fidelity by allowing cell cycle progression only under satisfying conditions. For example, DNA damage can trigger checkpoint activation at the G1/S phase and the G2/M transition points as well as during the S phase (the intra-S checkpoint), thus preventing the transmission of DNA alterations to the newly formed cells. Depending on the extent of the damage, the checkpoints promote either DNA repair, or, if the damage is too excessive, apoptosis or senescence of the affected cells [3,4]. Additional triggers for checkpoint activation are chromosomes that are unattached or improperly attached to the opposite spindle poles during mitosis. These may lead to the activation of the M phase spindle assembly checkpoint that prevents unequal inheritance of the genetic material. 
Impairment in checkpoint mechanisms may cause genomic instability, leading to severe phenotypes, such as tumorigenesis, developmental delay and intellectual disability [5][6][7]. There is a close relationship between cell cycle regulation and cancer etiology. Cancer cells are characterized by genomic instability; they possess the capacity for unlimited cell divisions, and are characterized by an uncontrolled cycle, which can progress independently of growth signals. Abrogated activity of cell cycle factors, such as CDKs and checkpoint proteins, is highly frequent in cancer cells; and such mutations in cell cycle genes are often associated with tumorigenesis [7][8][9]. Moreover, genes participating in the inhibition of CDKs often act as tumor suppressors, and some of them are regulated by p53, which is encoded by the TP53 gene and promotes apoptosis in response to DNA damage, mainly through the G1/S checkpoint. TP53 is referred to as the "guardian of the genome" since it is the most prominent tumor suppressor protein and the most mutated gene in human cancers [8,10]. Like cancer cells, pluripotent stem cells, such as embryonic stem cells (ESCs), are capable of unlimited proliferation; but, unlike cancer cells, they have differentiation capacity into various cell types, a feature that is retained through infinite cell divisions, by the process of selfrenewal. The cell cycle machinery was shown to be tightly associated with pluripotency state, since abrogated activity of cell cycle components affects pluripotency and vice versa [9]. The cell cycle in ESCs has distinct features compared to differentiated and cancerous cells. These include fast proliferation, shortened G1 and G2 phases and a relatively high percentage of cells in S phase [11,12]. Accordingly, cells that are committed to differentiation undergo many cell cycle changes including the lengthening of G1 phase [13]. These alterations appear to be the cause of cell fate decisions, since cell cycle machinery is actively involved in the determination of the pluripotency state. The short G1 phase of ESCs was shown to disrupt the formation of 53BP1 nuclear bodies around chromosomal lesions, preventing their protective effect against erosion, thus causing a replication stress in the next S phase [14]. Nevertheless, this shortening also appears to have a positive role in pluripotency maintenance. Generally, G1 is considered to be the most important phase in the context of stem cell fate decisions, as in this phase CDKs are regulating the activation of developmental genes, which respond to differentiation signals. This activation initiates the differentiation cascade, a transcriptional program that ultimately leads to cell fate changes [13]. Notably, different CDKs can activate diverse targets, leading to distinct lineage differentiation events [12]. Contrary to G1 phase, it appears that S and G2 phases actively, and independently of G1, support the maintenance of the pluripotent state [15]. The fast proliferation and short cycle of stem cells lead to high frequency of DNA lesions, since the short G1 phase does not leave enough time for the repair of nonreplicated DNA and therefore risks the quality of subsequent DNA replication. Furthermore, these cells are reported to have impaired activation of the G1/S checkpoint upon DNA damage [16]. 
However, the acquired lesions encounter a fortified wall of robust and constitutively active DNA damage response that efficiently deals with the damage and maintains a relatively low mutation frequency [17,18]. Interestingly, a previous study, which identified the essential genes for the normal growth and survival of human pluripotent stem cells, demonstrated that more than 50% of the essential and transcriptionally enriched genes in these cells were involved in the cell cycle and DNA repair processes [19,20]. In this study, we analyzed the genetic networks underlying general and unique cell cycle traits, by identifying genes that have common and unique functional impact on the proliferation and survival of cancer and embryonic stem cells. We found that genes linked to S phase and to the checkpoint mechanisms are particularly essential for the proliferation of both cell types. However, the differences observed between the cell cycle essentialomes of pluripotent and cancer cells were largely based on differential essentiality of checkpoint genes between these cell types. In addition, we identified specific cell cycle genes that may play a role in the different properties of each cell type and illuminated a selective dependency of pluripotent cells on the proper function of the spindle assembly checkpoint mechanism. Notably, we found great differences in the genetic networks responsible for growth restriction between pluripotent stem cells and cancer cells. Differences in growth dependency of cell cycle genes between cancer and pluripotent cells To shed light on the genetic basis of cell cycle regulation in ESCs and cancer cells, we generated a list of cell cycle genes, consisting of genes of the cell cycle phases and checkpoint genes, retrieved from independent sources (Fig. 1a; Additional file 1: Table S1 and Additional file 2: Table S2). Genes of the cell cycle phases include protein-coding genes that have been shown to have a phenotypic effect on the progression of one or more of the phases of the cell cycle [21]. Checkpoint genes are genes involved in cell cycle regulation as well as in the cellular response to DNA damage and the maintenance of genome integrity. In order to unveil the genetic factors underlying the properties of the cell cycle in the different cell types, we used data from genome-wide CRISPR/Cas9 loss-of-function screens performed in 5 different cell lines: a haploid hESC line (pES10; hereinafter referred to as ESC) [19], and four cancer cell lines [22], one of which had a near-haploid karyotype. Two of the cancer cell lines originated from T cell leukemia and two from B cell lymphoma, and all four lines had a mutation in TP53. All analyzed screens were based on the same single guide RNA (sgRNA) library, a feature that was shown to be necessary for a reliable comparison [19]. We chose to analyze several cancer cell lines in order to eliminate the background noise that may stem from the genetic variation between different tumors. [Fig. 1 caption, panels b-d: b CRISPR score distributions of cell cycle genes in ESCs (red curve) and cancer cells (orange curve); the CRISPR score per gene for cancer cells represents the average score across the four transformed cell lines; the P-value of the Kruskal-Wallis test is shown. c PCA plot demonstrating the separation of essentiality scores for cell cycle genes across different cell lines. d Fraction of checkpoint genes among all cell cycle genes as compared to their fraction among the top 100 genes contributing to PC1 separation.]
The CRISPR/Cas9 screens mentioned above were previously used for mapping the essential and growth-restricting genes in each cell line [19,22], by comparing the prevalence of each sgRNA immediately following library generation to that of a later time point after several weeks of culturing, and computing a CRISPR score that represents the log2 of the ratio between the final and initial frequencies. A negative score indicates a perturbation in an essential gene for the proper growth and survival of the cells, and a positive score implies a perturbation in a growth-restricting gene. The term "growth" in this article refers to the phenotype of cell enrichment that can occur due to changes in the rate of proliferation, apoptosis or differentiation [19]. For this study, we used the computed CRISPR scores of cell cycle genes, and analyzed the differences in genetic features between ESCs and cancer cells. First, we performed a Kruskal-Wallis test comparing the distribution of the CRISPR scores of cell cycle genes in ESCs and in cancer cells and found a significant (P < 2.2e−16) difference between these cell types (Fig. 1b). The distribution of ESCs tended more towards the negative X-axis, indicating that more essential CRISPR scores were found in ESCs as compared to cancer cells (Fig. 1b). Accordingly, comparison of CRISPR scores of cell cycle genes between the different cell lines using principal component analysis (PCA) successfully distinguished between the pluripotent cell line and the 4 cancer cell lines (Fig. 1c). In part, this difference may reflect changes in the magnitude of the effect of various cell cycle factors on cell growth following cancerous transformation. As expected, the B cell-derived cell lines Raji and Jiyoye clustered very closely together (Fig. 1c), suggesting a high resemblance. The difference between the two leukemic cell lines may be due to the unique near-haploid nature of KBM7 (haploid in the whole genome except for chromosome 8 and a 30-megabase segment on chromosome 15). Importantly, the KBM7 cell line resembled the other cancerous cell lines more than the haploid ESC line pES10 (Fig. 1c), reinforcing the fact that the differences seen are related to the transformation status of these cells rather than their ploidy. To better understand the genetic basis of the observed discrepancy between the analyzed cell lines, we focused on the top 100 genes that contributed most to PC1 (Additional file 3: Table S3). This list was significantly enriched for essential genes for all cell lines, as indicated by two-population proportion tests (P < 0.00001 for all comparisons; Additional file 4: Fig. S1A). This observation reinforces the suggestion that the difference between the cell types is based on genes that are crucial for the growth of the cells, and probably contribute to the unique features of each cell line. Interestingly, the proportion of checkpoint genes among the top 100 genes was significantly higher than their overall proportion among cell cycle genes (40% vs. 21%, respectively; P < 0.00001 in a two-population proportion test; Fig. 1d), indicating a major role for checkpoint genes in the differences between cancer and pluripotent cells. The enriched pathways for these top 100 genes compared with all cell cycle genes included chromosome organization, checkpoint regulation and DNA damage response (Additional file 4: Fig. S1B), all of which are pathways known to be impaired in cancer cells [23].
Cell cycle genes, especially checkpoint and S phase genes, are highly essential for the growth of pluripotent and cancerous cells Next, we characterized the essentiality and growth restriction patterns of cell cycle genes in ESCs and in cancerous cells. For each cell line, we used the computed CRISPR scores and classified the genes with FDR < 0.05 as essential for growth or as growth-restricting. Overall, 9.2% of all the protein-coding genes in the human genome were identified as essential for the normal growth of ESCs [19], and 9.1% on average for the growth of cancer cells (Fig. 2a). As expected, genes of cell cycle phases had higher essentiality percentages both in ESCs and in cancer cell lines (Fig. 2a; 13.9% and 13.6%, respectively). Intriguingly, checkpoint genes had an even higher percentage of essentiality both in ESCs and cancer cells (Fig. 2a; 29.2% and 26.3%, respectively). This is in line with the PCA results, in which checkpoint genes were shown to contribute the most to the differences between the CRISPR scores of cell cycle genes in cancer and pluripotent cells. Conversely, regarding the growth-restricting genes, more genes were identified as growth-restricting in ESCs than in cancer cells (Fig. 2b). Moreover, the percentage of these genes in cancer cells remained low and constant regardless of the gene set examined. This can be due to the fact that cancer cells are often impaired in growth restriction mechanisms, for example because of mutations in tumor suppressor genes [23]. These impairments trigger their rapid proliferation, which is a hallmark of cancer cells. Loss-of-function mutations in genes that take part in the impaired growth restriction pathways would probably not cause an effect on cell cycle progression and therefore they would not be detected in the CRISPR screens as growth-restricting genes. An interesting observation emerged as we looked at each of the phases separately. Whereas in the G1 and G2/M phases the patterns of essentiality did not considerably deviate from the general one, in S phase, and to a lesser extent also in S+G2/M (genes whose downregulation caused a cell cycle arrest in both the S and G2/M phases), the fraction of essential genes was remarkably higher, even higher than that of checkpoint genes. This was true for both ESCs and cancer cells (Fig. 2c; 32.8% and 29.9%, respectively), emphasizing the great importance of regular and accurate operation of all the components of the S-phase regulatory network. In general, the order of essentiality suggested from this analysis in ESCs and cancer cells is as follows (high to low): S phase genes, checkpoint genes, total cell cycle genes (phases and checkpoints) and all genes (Fig. 2c, d). Notably, the distribution of S phase CRISPR scores seemed to have two peaks, an average one and a very negative one. The latter is likely to represent a group of highly essential S phase genes (Fig. 2d). High overlap of essential genes and no overlap of growth-restricting genes among analyzed cell lines To understand whether the same genetic pathways are responsible for cell cycle regulation in different cell types, we checked the degree of overlap between the genes identified as essential or growth-restricting in different cell lines. Regarding the essential genes, there was a considerable overlap between all cell lines, and the highest proportion was that of the common essential genes in all five cell lines (Fig. 3a).
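To make the score definition and the statistics above concrete, the following is a minimal Python sketch of the same kind of bookkeeping; it is not the authors' code (their analyses were run in R), all file names and column labels (e.g. sgrna_counts_pES10.csv, crispr_scores_cell_cycle.csv) are placeholders, and the two-proportion test simply re-uses the counts quoted in the text, treating the phase and checkpoint lists as disjoint.

import numpy as np
import pandas as pd
from scipy.stats import kruskal
from sklearn.decomposition import PCA
from statsmodels.stats.proportion import proportions_ztest

# CRISPR score: log2 ratio of final to initial sgRNA frequencies, averaged
# over the sgRNAs targeting each gene (a small pseudocount avoids log of zero).
counts = pd.read_csv("sgrna_counts_pES10.csv")  # assumed columns: gene, initial, final
f0 = counts["initial"] / counts["initial"].sum()
f1 = counts["final"] / counts["final"].sum()
counts["score"] = np.log2((f1 + 1e-9) / (f0 + 1e-9))
crispr_score = counts.groupby("gene")["score"].mean()

# Kruskal-Wallis comparison of cell cycle gene scores: ESC versus the per-gene
# mean over the four cancer lines (two groups, as in Fig. 1b).
scores = pd.read_csv("crispr_scores_cell_cycle.csv", index_col="gene").dropna()
cancer_lines = ["KBM7", "K562", "Raji", "Jiyoye"]
stat, p = kruskal(scores["pES10"], scores[cancer_lines].mean(axis=1))
print(f"Kruskal-Wallis H = {stat:.1f}, P = {p:.1e}")

# PCA with each cell line as one observation in gene-score space (Fig. 1c),
# and the genes loading most strongly on PC1 (cf. Table S3).
pca = PCA(n_components=2)
coords = pca.fit_transform(scores.T.values)
pc1 = pd.Series(pca.components_[0], index=scores.index)
top100 = pc1.abs().sort_values(ascending=False).head(100).index

# Two-proportion z-test: checkpoint genes among the top-100 PC1 contributors
# (40/100) versus among all cell cycle genes (219/1045, i.e. roughly 21%).
z, p = proportions_ztest(count=[40, 219], nobs=[100, 1045])
print(f"z = {z:.1f}, P = {p:.1e}")

Averaging per-sgRNA scores into one value per gene is only one plausible aggregation; the exact scoring used in the published screens is described in the original papers.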
Such a large overlap was found both in checkpoint genes and in genes of cell cycle phases (Additional file 4: Fig. S2A), as well as in each phase separately (Additional file 4: Fig. S2B). Arguably, the 75 genes that are common to all 5 cell lines represent the central network of cell cycle regulation that is essential for the growth of all cell types (Additional file 5: Table S4). Interestingly, these genes are involved in a dense network of interactions (Additional file 4: Fig. S3A), in which the most enriched biological process is DNA replication (Additional file 4: Fig. S3B), implying a high proportion of S phase genes. Intriguingly, almost all the essential genes in S phase were common to all 5 cell lines. The number of essential genes in S phase ranged from 16 to 20 in different cell lines, and 15 of them were present in all cell lines (Fig. 3b). This, in addition to the high essentiality of S phase genes (Fig. 2c), further highlights the robustness of S phase and the importance of its integrity for cell survival. It also suggests that this highly conserved group of 15 essential genes represents the "core genes" of S phase, which are essential for cellular growth regardless of the cell type (Additional file 5: Table S4). 7 out of the 15 genes in this list are established DNA replication factors, according to the functional classification in the STRING database [24]: CDC6, CDT1, GINS2, POLA1, POLE2, GINS2, RRM1, and RRM2. Strikingly, when we analyzed the growth-restricting genes, a very different picture emerged. The vast majority of growth-restricting cell cycle genes were unique to each cell line, and not even one gene was common to all cell lines analyzed in this study (Fig. 3c). Differential essentiality analysis reveals unique pathways for pluripotent and cancer cells To gain more insight into the differences between ESCs and cancer cells, we analysed the genes with a differential pattern of essentiality between these cells. For this, we examined the genes specified as essential in all cancer cell lines, but not in ESCs, and vice versa. Overall, we identified 24 ESC-specific essential cell cycle genes and 13 cancer-specific essential cell cycle genes (Fig. 4a). These genes may account for some of the phenotypic differences in cell cycle properties between these cell types. Among the unique stem cell-essential genes stood out a group of four closely related genes involved in the spindle assembly checkpoint (Fig. 4b) [25,26]. This checkpoint prevents aberrant segregation of chromosomes during mitosis, thus maintaining genome integrity. Interestingly, a closer look at this pathway revealed a large proportion of ESC essential genes, suggesting a special dependence of pluripotent cells on this checkpoint (Fig. 4b). An analysis of differentially growth-restricting genes identified 33 genes that are growth-restricting only in ESCs. No growth-restricting gene was common to all cancer cell lines (Fig. 4c). Interestingly, protein interactome analysis of the growth-restricting checkpoint genes in ESCs, using the STRING database, revealed TP53 as the primary connecting link between these genes (Fig. 4d). [Fig. 4 caption: Differential essentiality and growth-restriction analysis of cell cycle genes in ESCs and cancer cells. a Differentially essential genes ranked according to a differential score calculated by subtracting the cancer cell CRISPR score of each gene from its ESC CRISPR score; genes associated with the spindle-assembly checkpoint are highlighted. b Schematic representation of the spindle-assembly checkpoint pathway. c Differentially growth-restricting genes between ESCs and cancer cells ranked according to the difference between the CRISPR scores obtained in the screens using these cell types. d Protein interactome analysis of growth-restricting checkpoint genes in ESCs.]
TP53 is a well-characterized tumor suppressor gene encoding the protein p53, which has a crucial role in apoptosis. TP53 is the most frequently mutated gene in human cancers [27], and it is mutated in all the cancer cell lines examined in this study (Additional file 6: Table S5). These observations suggest that in the absence of p53 activity, a high proportion of the growth-restricting genes are not adequately regulated and thus lose their inhibitory effect. Consequently, it would take a mutation in a single gene to subvert the whole growth restriction system. Indeed, the percentage of growth-restricting genes in cancer cells is very low (Fig. 2b), and this may also be the reason for the small overlap between the growth-restricting genes in the different cell lines analyzed in this study (Fig. 3c). It is possible that the existing growth-restricting genes in cancer cells gained their effect as a result of novel mutations acquired by each cancerous cell line independently during tumorigenesis. To check this hypothesis, we retrieved the lists of background mutations acquired by each cancer cell line as documented in the Catalogue of Somatic Mutations in Cancer (COSMIC) (Additional file 6: Table S5). Interestingly, the only cell cycle gene that was mutated in all these cell lines was TP53. Other than that, the lists of mutant genes varied both in number and in identity, and they were associated with different pathways. These findings support our hypothesis regarding the importance of both TP53 mutations, and the independent acquisition of random mutations in each cell line, to explain the differences in growth regulation mechanisms. Some key cell cycle genes are neither essential nor growth-restricting in all cell lines Despite the central role of the checkpoint mechanisms in cell cycle regulation and the relatively high proportion of essential genes, 44.3% of the checkpoint factors did not come up as essential or as growth-restricting in all cell lines examined (Additional file 2: Table S2). Allegedly, this might suggest that these genes do not have a substantial effect on cell growth. However, an in-depth look at the identity of these genes revealed that mutations in many of them are linked to autosomal recessive disorders in humans, with severe phenotypes such as predisposition to cancer, developmental delay and neurodegeneration (Additional file 4: Fig. S4). Importantly, the lack of phenotypic effect of the loss-of-function mutations in the non-essential and non-growth-restricting genes can also imply the existence of backup mechanisms that perform similar functions, thereby compensating for their absence. ESC-enriched cell cycle genes contain high frequency of mitosis-related genes Finally, we took an additional approach in order to identify the genetic network responsible for the ESC-specific properties. We searched for cell cycle genes which are both essential for the growth of ESCs and selectively enriched in expression in these cells (with expression 10 times higher in ESCs than in other tissues) [19,20]. Notably, this analysis yielded only a small subset of genes, indicating that overall the expression of cell cycle genes is not cell type specific.
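The gene classes compared in this section can be derived directly from per-line scores and FDR values. The sketch below (again with hypothetical file and column names) shows one way to express the ESC-specific/cancer-specific definitions and the Fig. 4a-style differential ranking; it illustrates the logic and is not a re-implementation of the published pipeline, and the set definitions reflect one reading of the criteria stated in the text.

import pandas as pd

scores = pd.read_csv("crispr_scores_cell_cycle.csv", index_col="gene")
fdr = pd.read_csv("crispr_fdr_cell_cycle.csv", index_col="gene")
cancer_lines = ["KBM7", "K562", "Raji", "Jiyoye"]

# Boolean gene x cell-line matrices for the two phenotype classes.
essential = (scores < 0) & (fdr < 0.05)
restricting = (scores > 0) & (fdr < 0.05)

# ESC-specific essential genes (one reading of the definition above): essential
# in pES10 but in none of the cancer lines; cancer-specific essential genes:
# essential in all four cancer lines but not in pES10.
esc_only = essential.index[essential["pES10"] & ~essential[cancer_lines].any(axis=1)]
cancer_only = essential.index[essential[cancer_lines].all(axis=1) & ~essential["pES10"]]

# Differential score as in Fig. 4a: ESC CRISPR score minus the mean cancer
# score; strongly negative values mark genes skewed towards ESC essentiality.
diff = (scores["pES10"] - scores[cancer_lines].mean(axis=1)).sort_values()
print(diff.head(10))   # e.g. spindle-assembly checkpoint genes, MDM2
print(diff.tail(10))   # e.g. TP53 at the differentially growth-restricting end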
However, it is interesting to note that this subset of overlapping genes constitutes a tightly connected protein network (Fig. 5a) that is highly enriched for mitotic spindle organization and DNA replication (Fig. 5b), as compared with all cell cycle genes. This is even more interesting considering our previous result regarding ESC-specific differentially essential genes (Fig. 4b), which also had a high representation of M phase checkpoint genes. Together, these results highlight the role of mitotic genes in determining the unique cell cycle characteristics of ESCs. Discussion We present here a comprehensive analysis that offers a new perspective on the genetics of cell cycle regulation, based on genome-wide functional studies in cancer and pluripotent cells. In this analysis we defined the essentiality landscape of cell cycle genes in one haploid ESC line and four cancer cell lines. The haploid nature of the pluripotent cell line considerably improves the efficiency of obtaining loss-of-function genotypes by CRISPR/Cas9 mutagenesis and increases the chances of capturing the essentiality phenotypes. Importantly, this state of ploidy does not have a major influence on the essentialome landscape. This was demonstrated by previous studies [19,28], and is evidenced by the fact that KBM7, the near-haploid cancer cell line, is clustered together with the other cancer cell lines rather than with the haploid pluripotent cell line pES10 (Fig. 1c). Only one pluripotent cell line, pES10, was examined in this study, since it was previously demonstrated that methodological differences, such as the sgRNA library used for the screen, add noise to such comparisons [19], and no other CRISPR/Cas9 screen was performed on human ESCs using the same sgRNA library. As more screens are performed, the resolution of this analysis is expected to increase. Given the fundamental role of the cell cycle process in the propagation of life, it is expected that the proper operation of cell cycle genes is essential for the survival and proliferation of cells, an actuality we demonstrate here. In fact, mutations in a single cell cycle gene often result in cell death or in a significant slowdown of cell growth. Intriguingly, both pluripotent and cancer cell types show higher sensitivity to mutations in checkpoint or S phase genes, compared with mutations in genes involved in other cell cycle-related processes. Moreover, the subset of S phase genes that are indispensable for cell growth is almost identical in all cell lines. This reflects the tight regulation applied to the mechanisms of DNA replication and cell cycle checkpoints, and emphasizes their importance for cell proliferation. Notably, mutations in some of the key checkpoint factors, which regulate the activity of many downstream targets, do not have a significant impact on the growth rates of any of the tested cell lines. However, in many cases, perturbations of these genes are associated with human diseases characterized by severe consequences, including developmental syndromes with neurological symptoms. For instance, individuals with a homozygous mutation in the Ataxia Telangiectasia Mutated (ATM) gene, a master regulator of the DNA damage response that plays a central role in the activation of DNA damage checkpoints, were shown to develop the neurodegenerative disorder Ataxia Telangiectasia (A-T). A-T is characterized by uncoordinated movement, cancer predisposition, telangiectasia, and cerebellar atrophy [29].
Similarly, individuals with homozygous mutation in BLM, another central DNA damage related gene, develop the Bloom's syndrome that is characterized by predisposition to cancer, growth deficiency and genomic instability [30]. Mutations in both ATM and BLM did not result in cell growth impairments in vitro, probably since the cells were grown in optimal conditions for a relatively short period of time. Moreover, the fact that some key cell cycle genes are not essential for cell growth may also indicate the existence of backup mechanisms that were developed through evolution to cover for the loss of function of these highly important genes. It can be interesting to try and unveil these hidden pathways by impairing more than one of these central genes simultaneously and determining synthetic lethality interactions. Unlike the overall essentiality percentages, which were similar among different cell lines, growth restriction patterns turned out to be very heterogeneous. As a rule, cancer cells tend to have less growth-restricting genes than ESCs. Furthermore, growth restriction genes vary widely between different cancer cell lines, both in numbers and in identity. This can be explained by the central role of the tumor suppressor p53 in the regulation of the growth restriction network in normal cells. When p53 is mutated, as in the case of more than half of human tumors [27] including all the cancerous cell lines examined in this study, the growth restriction mechanism is severely disrupted. Consequently, only a few genes act as growth inhibitors in cancer cell lines. These genes acquired this role as a result of novel mutations occurred in the process of tumorigenesis. In addition, despite the similar origin of the cancer cell lines, which were all derived from the hematopoietic system, we found many differences in the CRISPR scores of cell cycle genes, reflecting the high genetic variance between human tumors. Interestingly, TP53 was shown to be the most growth restricting gene in human ESCs [19], indicating its key role in cell cycle regulation in these cells. Moreover, TP53 was found to be the most frequently mutated genes in human pluripotent stem cells [31], demonstrating that mutations in this gene grant a growth advantage not only in somatic cells but also in human pluripotent stem cells, and highlighting the need to ensure TP53 integrity even when working with non-cancerous cells. Another approach to study the different roles of cell cycle genes in cancer and pluripotent cells is to examine genes with a differential pattern of essentiality between these cell types. In this way, we have identified an increase in the essentiality of mitosis genes and of genes responsible for the regulation of tumour suppression in ESCs. In addition, several interesting individual genes have also emerged, such as DHODH, the highest scoring cancer-specific essential gene and MDM2, which is one of the highest scoring ESC-specific essential genes. DHODH gene encodes a rate-limiting enzyme for de novo pyrimidine nucleotide synthesis, with a role in the regulation of VEGF mRNA translation. DHODH was also shown to have a specific role in acute myeloid leukemia (AML). Inhibition of this enzyme enabled myeloid differentiation in human and mouse AML models, and it can be used as a strategy for overcoming differentiation blockade in this cancer [32]. 
In addition, DHODH inactivation or deficiency inhibits melanoma cell proliferation, induces cell cycle arrest at S phase and leads to autophagy in human melanoma cells [33]. Interestingly, in line with the known role of DHODH, the most dramatic effect of its knockout in this study was observed in the leukemic cell lines KBM7 and K562 (CRISPR scores − 3.4 and − 3.8, respectively), strengthening the notion for the specific role of DHODH in leukemia. MDM2, which is essential only in ESCs encodes an E3 ubiquitin ligase with proto-oncogene properties that promotes cell proliferation and tumor formation. Interestingly, MDM2 targets p53 for degradation and thus negatively regulates p53 activity; additionally it is also transcriptionally regulated by p53 [34]. The involvement of p53 can explain the lack of phenotypic effect of MDM2 knockout on cancer cells, which are already mutated in the TP53 gene. As expected, the highest differentially growth-restricting gene, which affects ESCs but not cancer cells, is TP53. In fact, genes that have been found to be differentially essential or growth-restricting in cancer and pluripotent cells but have no established functional connection to either cell type, could also be very interesting to study. Validation and further analyses on such genes can lead to novel discoveries regarding the function of these genes in cell cycle, tumorigenesis and differentiation. Interestingly, the unique effect of M phase genes in pluripotent cells emerged from two independent analyses in this study. Analysis of differentially essential genes determined that ESC-specific essential genes are enriched for mitotic genes. In addition, analysis of the essential cell cycle genes that are also at least 10 times more expressed in ESCs as compared with other tissues shows these genes to be highly enriched for mitotic genes. Together, these findings may imply that M phase has high essentiality in ESCs, a fact that was overlooked in previous studies. It was demonstrated that M phase in ESCs has an increased DNA repair activity [35], and that some M phase genes may play a role in the regulation of DNA repair during S phase [36]. Theoretically, such increased response to DNA damage may compensate for the short G1 that does not leave enough time for proper DNA repair [37]. This is a possible explanation for the higher abundance in essentiality of M phase genes in pluripotent cells. Yet, this is only one aspect of the entire evaluation. The distinct enrichment for genes with an established role at the spindle assembly checkpoint indicates an important role for this mechanism in the growth of pluripotent cells. The many differences between cancer and pluripotent cells raise a serious concern regarding the frequent usage of cancer cells as a model system for cell cycle studies. Apparently, we cannot infer general conclusions regarding cell cycle regulation from cancer cells, especially concerning inhibitory pathways, which are almost absent in these cells. Of note, our analysis is partially based on a list of genes that participate in the cell cycle phases, which was retrieved from a study performed on a cancer cell line, and therefore it is probably missing some relevant genes. This emphasizes the need for functional screens performed on normal cells in order to get a more profound understanding of cell cycle genetics, and highlights the advantages of comparative studies of several cell types. In their impressive study, Mukherji et al. 
[21] used a series of phenotypic measurements in order to classify the cell cycle genes to the different phases. This approach is highly advantageous, but it may also lead to some classification errors, for instance in cases in which an affect in one phase is phenotypically evident only later in the cycle. One such example is BUB1 that was classified as a G1 gene, even though it is known to act during mitosis as part of the spindle assembly checkpoint (as shown in Fig. 4b). Notably, although the pluripotent cell line analyzed here is not cancerous, it is also not a normal primary cell line. In fact, ESCs share some features with cancer cells, such as unlimited capacity for cell division and fast proliferation. Thus, the differences we obtained between cancer and pluripotent cell lines may result from the stemness or un-transformed nature of pluripotent cells, and it should be taken into account while interpreting the results. That said, there are many technical limitations for the research of primary cells, such as slow growth and limited proliferation. Therefore, studying primary cells is much more challenging and demanding, and thus less common. Such studies will become easier to perform with the improvements in resolution, coverage, and costs of genetic research methods. Conclusions In this study, we employed CRISPR/Cas9 libraries to explore the characteristics of cell cycle regulation in pluripotent and cancer cells. We found that genes that take part in the S phase and in checkpoint mechanisms are particularly essential for the growth of both cell types. We identified the core genetic networks that are responsible for cell cycle progression and revealed genes that are uniquely required for pluripotent or cancer cells. Interestingly, as opposed to the growthpromoting networks, the growth-restricting networks were not conserved between cell lines. This appears to be because cancer cells often harbor mutations in the tumor suppressor TP53, which is at the center of the growth-inhibition mechanism. Finally, a unique dependency of pluripotent cells in the process of mitotic spindle checkpoint emerged from two independent analyses in this study. Overall, our results represent new insights regarding the genetics of the cell cycle and highlight the differences between normal and transformed cell types. Further, this study indicates the inaccuracies that may arise due to the use of cells that have accumulated mutations for cell cycle studies. CRISPR screen data Data analyzed in this research was obtained from two different studies, which performed CRISPR-based genome-wide loss-of-function screens targeting 18,166 protein-coding genes, in one haploid ESC line (pES10) [19] and four cancer cell lines: two T-cell-derived chronic myelogenous leukemia cell lines (KBM7 and K562) and two B-cell-derived Burkitt's lymphoma cell lines (Raji and Jiyoye) [22]. Importantly, the analyzed studies used the same single gRNA) library and the same method to calculate the CRISPR score of each gene. Defining cell cycle and checkpoint genes List of genes of cell cycle phases was retrieved from a siRNA knockdown screen, which targeted 24,373 predicted human genes in the osteosarcoma-derived cell line U2OS in order to find genes whose downregulation disrupts the progress of the cell cycle. In total, 1152 cell cycle genes were identified in this study, and were grouped into 8 different categories, based on their function and phenotypic effect [21]. 
Accession numbers of these genes were converted by us to gene symbols and Ensembl IDs using the BiomaRt package with the R software. Subsequently, the gene list was filtered to include only known protein-coding sequences with up-to-date Ensembl IDs (excluding predicted mRNA models, noncoding RNAs, incomplete sequences etc.), reducing the list from 1152 to a total of 826 genes. Lastly, each gene was assigned to one of four groups, based on the phase of the cell cycle that it was shown to regulate: G1, S, S+G2 or G2/M (Additional file 1: Table S1). List of cell cycle checkpoint genes was retrieved from the Gene Ontology database AmiGO, version 2 [38][39][40]. Gene names were converted to gene symbols and current Ensembl IDs using BiomaRt, leading to a total of 219 genes (Additional file 2: Table S2). Mutations in cancer cell lines Lists of background mutations in the Raji, Jiyoye and K562 cell lines were retrieved from the Cell Lines Project of the Catalogue Of Somatic Mutations In Cancer (COSMIC). Mutations in KBM7 were obtained from Bürckstümmer et al. [41]. Synonymous mutations were removed, and only mutations reported in COSMIC database were chosen for further analysis. Ensembl transcript IDs were retrieved using Biomart (Additional file 6: Table S5). Data analysis Lists for genes of cell cycle phases and checkpoint genes were matched with the CRISPR data to form a joint dataset of the CRISPR scores of cell cycle genes. Genes with negative CRISPR scores and with significance values (FDR or adjusted p-value) lower than 0.05 were considered as essential for cell growth. Genes with the same significance levels but with positive CRISPR scores were considered as growth-restricting. This data was used to identify cell cycle genes that are involved in the mechanisms of growth-promotion and restriction in all cell lines. CRISPR scores of cell cycle genes were compared between the different cell lines to identify common and unique cell cycle factors for cancer and pluripotent cells. STRING database of known and predicted protein-protein interactions was used for the analysis of protein interactions and identification of protein networks [24]. Functional annotation and classification of the genes were achieved using the STRING database and the GOrilla tool for identification and visualization of enriched gene ontology terms in gene lists [42,43].
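As a rough Python analogue of the list preparation, classification and enrichment testing described in this section (the original ID conversion was done with the biomaRt package in R, and the exact category names of the Mukherji et al. screen are not reproduced here), the bookkeeping could look roughly as follows; every file name, column label, category mapping and count used in the example is a placeholder rather than part of the study.

import pandas as pd
from scipy.stats import hypergeom

# Placeholder ID-mapping table; in the study this information came from
# Ensembl BioMart (assumed columns: accession, symbol, ensembl_id, biotype).
mapping = pd.read_csv("biomart_mapping.csv")

# Raw phase-gene list from the siRNA screen (assumed columns: accession, category).
phases = pd.read_csv("phase_genes_raw.csv")

# Keep only up-to-date protein-coding entries (1152 -> 826 genes in the paper).
phases = (phases.merge(mapping, on="accession")
                .query("biotype == 'protein_coding'")
                .drop_duplicates("ensembl_id"))

# Collapse the original eight phenotype categories into the four phase groups
# used here; the category names on the left are invented for the example.
to_group = {"G1 arrest": "G1", "S arrest": "S",
            "S and G2 arrest": "S+G2", "G2/M arrest": "G2/M"}
phases["group"] = phases["category"].map(to_group)

# Merge with the screen results and apply the classification rule stated above:
# FDR < 0.05 with a negative score = essential, with a positive score = growth-restricting.
screen = pd.read_csv("crispr_screen_results.csv")  # assumed: ensembl_id, cell_line, score, fdr
cc = screen.merge(phases[["ensembl_id", "group"]], on="ensembl_id")
cc["class"] = "neutral"
cc.loc[(cc["fdr"] < 0.05) & (cc["score"] < 0), "class"] = "essential"
cc.loc[(cc["fdr"] < 0.05) & (cc["score"] > 0), "class"] = "growth_restricting"

# A GO-term over-representation check of the kind reported by GOrilla/STRING
# reduces to a hypergeometric test; the counts here are invented: a background
# of 826 genes, 60 of which carry the term, and a 20-gene subset with 8 hits.
M, n, N, k = 826, 60, 20, 8
p_enrich = hypergeom.sf(k - 1, M, n, N)   # P(X >= k)
print(f"enrichment P = {p_enrich:.2e}")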
High-density two-dimensional electron system induced by oxygen vacancies in ZnO We realize a two-dimensional electron system (2DES) in ZnO by simply depositing pure aluminum on its surface in ultra-high vacuum, and characterize its electronic structure using angle-resolved photoemission spectroscopy. The aluminum oxidizes into alumina by creating oxygen vacancies that dope the bulk conduction band of ZnO and confine the electrons near its surface. The electron density of the 2DES is up to two orders of magnitude higher than those obtained in ZnO heterostructures. The 2DES shows two $s$-type subbands, that we compare to the $d$-like 2DESs in titanates, with clear signatures of many-body interactions that we analyze through a self-consistent extraction of the system self-energy and a modeling as a coupling of a 2D Fermi liquid with a Debye distribution of phonons. ZnO is a transparent, easy to fabricate, oxide semiconductor with a direct band gap E g = 3.3 eV. Its many uses include window layers in photovoltaic devices, varistors for voltage surge protection, UV absorbers, gas sensors, and catalytic devices [1,2]. ZnO is also a candidate for novel applications, such as transparent field effect transistors, UV laser diodes, memristors, or hightemperature/high-field electronics [1][2][3][4][5][6]. In fact, ZnO can be seen as a link between the classical group-IV or III-V semiconductors, e.g. Si or GaAs, and transition metal oxides (TMOs), such as SrTiO 3 . Due to their valence d-orbitals, the latter show a rich variety of collective electronic phenomena, like magnetism or high-T c superconductivity [7,8]. Moreover, the controlled fabrication of a two-dimensional electron system (2DES) in ZnO can result in extremely high electron mobilities, even competing with the ones of GaAs-based heterostructures, and showing the quantum hall effects [9,10]. Here we show, using angle-resolved photoemission spectroscopy (ARPES), that the simple evaporation in ultra-high vacuum (UHV) of an atomic layer of pure aluminum on ZnO creates a 2DES with electron densities up to two orders of magnitude higher than in previous studies. We demonstrate that the 2DES results from oxidation of the Al layer and concomitant doping with oxygen vacancies of the underlying ZnO surface. The 2DES is composed of two subbands with different effective masses, as the mass of the inner band is wholly renormalized due to the energetic proximity of its band bottom with a phonon frequency, whereas the outer band, dispersing deeper in energy, shows only a kink due to the electron-phonon interaction. We thoroughly investigate the electron-phonon coupling by a self-consistent extraction of the electron self-energy. We deduce an Eliashberg coupling function wholly compatible with a 2D Debyelike distribution of phonons and a mass enhancement parameter λ = 0.3. Previous photoemission experiments on ZnO [35][36][37][38][39][40][41] showed that hydrogenation of its polar or non-polar surfaces, for instance through chemisorption of hydrogen, methanol or water, induces a downward band-bending and the formation of a 2DES with a moderate electron density n 2D ≤ 2 × 10 13 cm −2 , showing only one broad shallow subband below the Fermi level (E F ) [40]. More recently, several ARPES studies focused on the manybody phenomena of electron-phonon coupling in oxides, demonstrating that at low carrier densities the 2DES in TiO 2 , SrTiO 3 and also ZnO are composed of polarons [42][43][44]. 
Due to a non-adiabatic electron-phonon coupling, the polaronic regime changes to a Fermi liquid behavior with increasing electron densities, as electronic screening of the polar lattice becomes more efficient [45]. However, the Fermi liquid regime in ZnO has not been studied yet, as previous doping methods of the surface were insufficient to achieve high electron densities. Attaining large carrier densities for a 2DES in ZnO is also appealing for applications in high-power transparent electronics. We now discuss our main findings. Henceforth, we will focus on data measured at the O-terminated ZnO(0001) surface. As shown in the Supplementary Material, similar results are obtained at the ZnO(0001) (zincterminated) interface, although the resulting 2DES has a slightly smaller electron density. Furthermore, to recall that we deposited pure Al (not aluminum oxide) on the ZnO surface, we note the resulting AlO x capping layer simply as "Al", specifying in parenthesis the evaporated thickness. Additional details on the crystal-lographic nomenclature, surface preparation, aluminum evaporation, and ARPES measurements are provided in the Supplementary Material. The creation of a 2DES using Al deposition is identical to the procedure described in Ref. [31]. It is worth noting that, for previously reported 2DES in oxides, the intense synchrotron beam can create oxygen vacancies due to desorption induced by electronic transitions [46]. This process, based on the photo-excitation of core levels, is different in titanates and ZnO [47]. Thus, our results demonstrate that the creation of 2DES in oxides using Al is a much more general mechanism, enabling furthermore ARPES studies independent from the relaxation mechanism of photo-excited core levels. Fig. 1(a) shows that the Al-2p core-level peak at the Al(2Å)/ZnO interface corresponds to oxidized aluminum, whose binding energy (E − E F = −75 eV) is very different from the one of metallic aluminum (−72.5 eV) [31]. Fig. 1(b) compares the valence-band of the bare, stoichiometric ZnO(0001) surface (red curve) and of the Al(2Å)/ZnO interface (blue curve). We observe that, contrary to oxygen-deficient surfaces or interfaces of TMOs [31,48], there are no measurable states corresponding to localized electrons (i.e., deep donors) in the band gap of oxygen-deficient ZnO. The absence of such states in ZnO emphasizes the simpler character of a 2DES based on s-valence electrons, compared to the d-valence electrons in TMOs. On the other hand, the binding energy and shape of the O 2p valence band are dramatically changed, possibly because the O-2p valence band of the oxidized Al layer is at a binding energy of ≈ 6 eV. Moreover, as detailed in the inset of Fig. 1(b), the Al(2Å)/ZnO interface shows a clear quasi-particle peak at E F , not present at the bare surface. The contribution of oxygen vacancies to n-type conductivity in bulk ZnO has been a controversial issue [49][50][51][52][53]. The photoemission signatures observed here after Al deposition, namely an oxidized Al core level and the appearance of a 2DES at E F , are identical to the ones reported in other oxides [31], indicating that the mechanisms underlying the 2DES formation are similar. Future theoretical works should explore in detail the energetics and specific role of oxygen vacancies near the surface of ZnO. We now characterize the electronic structure of the 2DES at the Al(2Å)/ZnO(0001) (oxygen-terminated) interface. Fig. 1(c) shows the in-plane Fermi surface map measured by ARPES. 
There are two metallic states forming in-plane circular Fermi sheets around Γ, that correspond to confined states of ZnO's conduction band -which is formed by orbitals of s-character. Fig. 1(d) presents the energy-momentum dispersion map of the two states forming the above concentric Fermi circles, henceforth called outer (o) and inner (i) subbands. They were measured around the bulk Γ 002 point along the inplane k <1120> direction. Such 2DES with two subbands in ZnO had not been observed before, as electron densities were not large enough in previous studies [36][37][38][39][40][41]44]. Additional data presented in the Supplementary Material demonstrates that the in-plane periodicity of the electronic structure corresponds to the one of an unreconstructed surface, and that the two subbands form cylindrical, non-dispersive Fermi surfaces along the (0001) direction perpendicular to the interface, confirming their 2D character. The subbands' Fermi momenta, determined from the maxima of the momentum distribution curve (MDC) integrated over E F ± 5 meV, red curve on top of Fig. 1(d), are k o F = (0.17±0.005)Å −1 and k i F = (0.07±0.005)Å −1 . Their band bottoms, extracted from the maxima of the energy distribution curve (EDC) over Γ ± 0.05Å −1 and the dispersion of the EDC peaks, Figs. 2(a, b), are located at binding energies E o b = (450 ± 5) meV and E i b = (55 ± 5) meV. Due to the light and isotropic band mass of the s-type electrons forming the 2DES, the subband splitting in ZnO is ≈ 3 times larger than in titanates [32]. The thickness of the 2DES can be estimated from the subbands' binding energies and energy separation by assuming a triangular-wedge quantum well, yielding 21Å (or 4 unit cells) along c (see Supplementary Material for details). From the area enclosed by the in-plane Fermi circles (A F ), the density of electrons in the 2DES is n 2D = A F /(2π 2 ) = (5.4 ± 0.3) × 10 13 cm −2 , or about 0.14 electrons per hexagonal unit cell in the (0001) plane. Such electron density is far larger than the critical value, estimated at 3.8 × 10 12 cm −2 , at which the crossover from a polaronic to a Fermi liquid regime for electron-phonon coupling occurs [45]. Additionally, the effective masses around Γ of the outer and inner subbands, determined from their Fermi momenta and band bottoms using free-electron parabola approximations, are respectively m o = (0.25 ± 0.02)m e and m i = (0.34 ± 0.08)m e , where m e is the free-electron mass. The mass of the outer subband agrees well with the conduction-band mass along the (0001) plane calculated for bulk stoichiometric ZnO or determined from infrared reflectivity and cyclotron resonance experiments on lightly-doped ZnO [1,2]. As the confinement of non-interacting electrons in a quantum well should result in subbands with the same effective mass, we will focus on analyzing the renormalization of inner band in the following paragraphs. In fact, as seen from Figs. 1(d) and 2(a), the band bottom of the inner subband presents a complex structure, with a peak-dip-hump clearly seen in the EDC around Γ. Likewise, as shown in Figs. 1(d) and 2(a, b), the outer subband shows a kink in its dispersion at approximately the same binding energy (ω D = 70 meV) of the dip observed in the inner band, together with a peak-diphump for the EDCs around its Fermi momenta. As will be shown shortly, all these features result from electronphonon coupling. 
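The carrier-density and effective-mass arithmetic quoted in this paragraph can be reproduced with the minimal Python sketch below. The physical constants are CODATA values; the Fermi momenta and band bottoms are simply the measured inputs quoted above, so the printed numbers should only be read as a consistency check of the stated formulas n_2D = A_F/(2π²) and m* = ħ²k_F²/(2E_b).

import numpy as np

HBAR = 1.054571817e-34   # J*s
M_E  = 9.1093837015e-31  # kg
EV   = 1.602176634e-19   # J

def density_2d(kf_inv_angstrom):
    """Spin-degenerate 2D carrier density n = k_F^2 / (2*pi), returned in cm^-2."""
    kf = kf_inv_angstrom * 1e10            # m^-1
    return kf**2 / (2 * np.pi) * 1e-4      # m^-2 -> cm^-2

def parabolic_mass(kf_inv_angstrom, e_b_mev):
    """Effective mass m* = hbar^2 k_F^2 / (2 E_b) of a free-electron parabola, in units of m_e."""
    kf = kf_inv_angstrom * 1e10
    e_b = e_b_mev * 1e-3 * EV
    return HBAR**2 * kf**2 / (2 * e_b) / M_E

# Measured values quoted in the text (outer / inner subbands)
n_outer, n_inner = density_2d(0.17), density_2d(0.07)
print(f"n_2D    = {n_outer + n_inner:.2e} cm^-2")        # ~5.4e13, text: (5.4 +/- 0.3)e13
print(f"m_outer = {parabolic_mass(0.17, 450):.2f} m_e")  # ~0.24-0.25, text: 0.25 +/- 0.02
print(f"m_inner = {parabolic_mass(0.07, 55):.2f} m_e")   # ~0.34, text: 0.34 +/- 0.08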
The quantification of electron-phonon interaction is possible through the analysis of the energy-dependent real (Σ 1 ) and imaginary (Σ 2 ) parts of the electron selfenergy. These can be inferred from the spectral function of the many-electron system, directly measured by ARPES [54]. Thus, we will extract and model the self energy of the outer band, and then use the results to renormalize the inner band, which is difficult to fit due to the the peak-dip-hump structure. Fig. 2(b) shows the dispersion of the spectral function peak for the outer subband, extracted from the maxima of the EDCs (blue circles) and MDCs (orange circles). The continuous green line is a cosine fit to the data representing the bare (i.e., non-interacting) electron disper- sion of this subband. The energy difference between the MDC peak and the bare band gives the real part of the electron self-energy, and is plotted in Fig. 2(c), red circles. The pronounced peak in Σ 1 at E − E F ≈ −70 meV corresponds to the kink in the experimental dispersion. Likewise, the energy dependence of the MDCs line-widths gives the imaginary part of the electron self energy (or electronic scattering rate), and is shown in Fig. 2(d), red circles. Here, one observes a rapid increase of the scattering rate from E F down to the binding energy at which the real part of the self-energy peaks, followed by a less rapid but steady increase. To check the consistency of the self-energy extracted from our data, we compute the Kramers-Kronig transformation (TKK) of the experimental Σ 2 (E) [respectively Σ 1 (E)], black crosses in Fig. 2(c) [respectively Fig. 2(d)]. We observe an excellent agreement between Σ 1 (E) and TKK{Σ 2 (E)} [respectively between Σ 2 (E) and TKK{Σ 1 (E)}], ensuring that our analysis and choice of bare dispersion respect causality. The simultaneous occurence of a pronounced peak in Σ 1 and an abrupt change in slope in Σ 2 at about the same energy ω D , as observed in Figs. 2(c, d), are typical landmarks of the interaction between the electron liquid and some collective modes of the solid (e.g. phonons) having a characteristic energy ω D [54]. Thus, we fit the experimental complex self-energy with a model of a Fermi liquid with Debye electron-phonon coupling, both in 2D [55,56], as shown by the continuous blue curves in Figs. 2(c, d). The fit gives a Debye frequency of 68±2 meV, in excellent agreement with our data and the phonon energies (up to about 580 cm −1 , or ≈ 70 meV) measured by other techniques [1,6], and a dimensionless coupling constant λ = 0.3 ± 0.05, The isotropic Fermi liquid of the fit is characterized by a carrier density of (6.7 ± 0.4) × 10 13 cm −2 , close to the experimental value. The electron-phonon, or Eliashberg, coupling function α 2 F (ω) resulting from the used 2D Debye model is shown in the inset of Fig.2(b). We checked that a fit with a 3D Fermi liquid + Debye model [55,57] yields a larger phonon cutoff energy, of the order of 85 meV, and an overall poor agreement with the experimental self-energy. The details of the models and a comparison of the obtained fits are given in the Supplementary Material. We now turn to the inner subband. To model it, we rigidly shift the bare outer band in energy and then renormalize it using the previously deduced self-energy. As shown by the red curve in Fig. 3(a), a shift of 377 meV fits the experimental Fermi momenta, matching the required conservation of the 2D electron density. The resulting renormalized inner band, black curve in Fig. 
3(a), compares excellently with the experimental inner band. To cross-check the above analysis of the ARPES data, we simulated the whole 2DES spectral function using the self-energy of the 2D Fermi liquid + Debye model fitted to the data. The resulting ARPES map, Fig. 3(b), compares well with the data. Thus, the entire electronic structure of the 2DES at the Al(2Å)/ZnO(0001) surface can be understood from doping of the bulk conduction band by oxygen vacancies, electron confinement due to band-bending induced by those vacancies, and coupling of the ensuing subbands with a Debye-like distribution of phonons. Note that, in the present case of a high carrier density, the coupling constant λ gives directly the electron mass renormalization m due to electron-phonon interaction, namely m /m 0 = 1 + λ [58], where m 0 is the noninteracting band mass. Using parabolic approximations (i.e., energy-independent band masses) for the subbands' dispersions, we can assume that the bottom of the outer band, located well below the phonon energies, gives the non-interacting band mass, while the inner band, located just above the Debye energy, gives the electron mass fully renormalized by coupling to phonons. This yields a coupling constant λ ≈ 1 − m i /m o = 0.36 ± 0.3, subject to large errors, but in overall agreement with the more accurate value obtained above from the fit to the whole energy-dependent complex self-energy. More generally, in insulating dielectric oxides, the electron-phonon coupling can significantly depend on the electron density, due to different screening mechanism of the oscillating ions. At low densities (i.e. band fillings smaller or comparable to the phonon cutoff frequency), screening based on dielectric polarization results in large, spatially delocalized, polarons. At high densities the increased electronic screening of the ionic lattice vibrations result in a Fermi liquid regime with weaker electron phonon coupling [45]. Those two regimes were recently characterized by ARPES in anatase-TiO 2 (001) [42] and SrTiO 3 [43]. Note furthermore that the electron-phonon coupling constant λ = 0.3 obtained here is significantly smaller than the coupling constant observed in the Fermi liquid regime of anatase-TiO 2 and SrTiO 3 (λ TiO2 [42,43,59]. This suggests that, in the high carrier density regime, the electronic screening for electron-phonon coupling is more efficient for the s electrons of the 2DESs in ZnO than for the d electrons of the 2DESs in TMOs. Notably, our self-consistent Kramers-Kronig analysis of the self energy in ZnO, and the deduction of the electron-phonon coupling parameter using a Debye model, is different from previous approaches used in other oxides, like SrTiO 3 [60], where λ was inferred from the slope of Σ 1 at E F (i.e., the renormalization of quasiparticle mass), or TiO 2 anatase, where it was estimated by modeling the self-energy to reproduce the data [42]. Note that the coupling parameters deduced from the renormalization of quasiparticle mass, velocity and spectral weight in ARPES data are in general subject to large errors, as mentioned before, and distinct from the true microscopic coupling parameter [61]. As a whole, our results highlight the universal character of the approach based on surface redox reactions to create 2DESs in functional oxides [31], unveil similarities and differences between s-and d-orbital type 2DES, and add new ingredients to the rich many-body physics displayed by confined electronic states in ZnO. 
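The causality cross-check described above (comparing Σ1 with the Kramers-Kronig transform of Σ2, and vice versa) can be reproduced with a few lines of code. The sketch below is a minimal Python implementation that assumes a uniform energy grid and a simple skip-the-pole discretization; depending on the sign convention adopted for Σ2, an overall minus sign may be needed. It is not the authors' analysis code, only an illustration of the transform they describe.

import numpy as np

def kramers_kronig(omega, sigma2):
    """Discrete principal-value Hilbert transform,
    Sigma_1(w) = (1/pi) P int dw' Sigma_2(w') / (w' - w),
    on a uniform energy grid; the singular point is simply skipped,
    which is adequate for a smooth Sigma_2 on a dense grid."""
    dw = omega[1] - omega[0]
    idx = np.arange(len(omega))
    sigma1 = np.zeros_like(sigma2, dtype=float)
    for i, w in enumerate(omega):
        m = idx != i
        sigma1[i] = np.sum(sigma2[m] / (omega[m] - w)) * dw / np.pi
    return sigma1

# Usage: feed the measured Sigma_2(E) (from the MDC widths) on a uniform grid and
# compare the output with the Sigma_1(E) extracted from the MDC dispersion;
# agreement of the two curves, as in Fig. 2(c, d), is the causality cross-check.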
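As a quick consistency check of the mass-renormalization estimate quoted above, the short sketch below evaluates the coupling constant from the two quoted subband masses using m*/m0 = 1 + λ, i.e. λ ≈ m_i/m_o − 1 (the form consistent with the quoted value of 0.36), together with simple Gaussian error propagation. The input masses and uncertainties are taken verbatim from the text.

import numpy as np

# Band masses (in units of m_e) and their uncertainties quoted in the text
m_o, dm_o = 0.25, 0.02   # outer subband: taken as the bare (non-interacting) mass
m_i, dm_i = 0.34, 0.08   # inner subband: fully renormalized by coupling to phonons

# m* = (1 + lambda) * m_0  ->  lambda = m_i / m_o - 1
lam  = m_i / m_o - 1
dlam = np.hypot(dm_i / m_o, m_i * dm_o / m_o**2)  # first-order error propagation
print(f"lambda = {lam:.2f} +/- {dlam:.2f}")       # ~0.36 +/- 0.3, as quoted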
Our observations suggest that oxygen vacancies can contribute to electron-doping near the surface of ZnO, motivating further experimental and theoretical studies on the formation and role of vacancies at surfaces/interfaces of this important transparent semiconductor oxide. Moreover, the realization of a highly doped 2DES in ZnO opens a new realm of possibilities, such as high-power applications using a transparent oxide semiconductor that presents many advantages with respect to standard Sndoped In 2 O 3 (ITO): ZnO is more abundant, cheaper, easier to fabricate and process, non toxic, and when doped it can attain mobilities comparable to those of ITO [1][2][3]. Work at CSNSM was supported by public grants from the French National Research Agency (ANR), project LACUNES No ANR-13-BS04-0006-01, and the "Laboratoire d'Excellence Physique Atomes Lumière Matière" (LabEx PALM projects ELECTROX and 2DEG2USE) overseen by the ANR as part of the "Investissements d'Avenir" program (reference: ANR-10-LABX-0039). Work at KEK-PF was supported by Grants-in-Aid for Scientific Research (Nos. 16H02115 and 16KK0107) from the Japan Society for the Promotion of Science (JSPS). Experiments at KEK-PF were performed under the ap-proval of the Program Advisory Committee (Proposals 2016G621 and 2015S2005) at the Institute of Materials Structure Science at KEK. T. C. R. acknowledges funding from the RTRA-Triangle de la Physique (project PEGA-SOS). A.F.S.-S. thanks support from the Institut Universitaire de France. ZnO structure and notation ZnO crystallizes in the hexagonal wurtzite structure, with the oxygen anions forming a tetrahedron around the Zn cation. The lattice constants are a = 3.25Å and c = 5.2Å. All through this paper, we use the Miller-Bravais 4-index, or (hkil), convention for hexagonal systems, which makes permutation symmetries apparent, where (hkl) are the regular Miller indices for an hexagonal lattice, and the third (redundant) index is defined as i = −(h + k). Thus, we note [hkil] the crystallographic directions in real space, (hkil) the planes orthogonal to those directions, and hkil the corresponding directions in reciprocal space. Surface preparation and Aluminum deposition There are two possible terminations of the polar (0001) ZnO surface: Zn, or (0001) termination, and O, or (0001) termination [62]. Commercially available single crystals (SurfaceNet GmbH) with the two terminations at opposing faces were used in our experiments. The surfaces of ZnO were prepared based on the work of Dulub and coworkers [63]. The existence of potassium impurities in single crystals of ZnO is possible as potassium hydroxide is an educt in the hydrothermal synthesis of single crystals. Long annealing at elevated temperatures resulted in the migration of potassium impurities from the bulk to the surface as evidenced by the Auger spectra in Fig. 4(a). We followed two procedures to reduce the presence of potassium and obtain an atomically clean and cristalline surface: 1. Ar+ sputtering at 1 kV for 10 minutes, 2. short annealing for 5 min at T ≈ 600C to 700 • C in UHV; or alternatively: 1. Ar+ sputtering at 1 kV for 10 minutes at T ≈ 600 − 700 • C, 2. stop annealing approximately 5 minutes after sputtering. These procedures resulted in LEED images similar to the one shown in Fig. 4(b). Oxygen vacancies were then created on the UHV clean and cristalline ZnO surfaces by the deposition of 2Å of aluminum at sample temperatures T ≈ 50 − 100 • C. The complete details on the Al deposition are described elsewhere [31]. 
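For readers unfamiliar with the four-index Miller-Bravais notation defined above, the following one-line Python helper (the function name is ours) generates the redundant index i = −(h + k) from the ordinary Miller indices of the hexagonal lattice.

def to_miller_bravais(h, k, l):
    """Expand (h k l) to the redundant four-index (h k i l) with i = -(h + k)."""
    return (h, k, -(h + k), l)

# e.g. the in-plane direction used for the dispersion maps:
print(to_miller_bravais(1, 1, 0))   # -> (1, 1, -2, 0), i.e. the <11-20> family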
Photoemission measurements ARPES experiments were performed at the CAS-SIOPEE beamline of Synchrotron SOLEIL (France) and at beamline 2A of KEK-Photon Factory (KEK-PF, Japan) using hemispherical electron analyzers with vertical and horizontal slits, respectively. Pristine sample surfaces and oxygen-vacancy doping by Al-capping were obtained by in situ surface preparation as described in the previous section. The sample temperature during measurements was 7 K (SOLEIL) or 20 K (KEK-PF), without observing any T -dependence between these two temperature values. The typical angular and energy resolutions were 0.25 • and 15 meV, while the mean diameter of the incident photon beam was of the order of 50 µm (SOLEIL) and 100 µm (KEK-PF). We used variable energy and polarization of the incident photons. A systematic variation of the photon energy revealed no changes in the energy-momentum dispersion (see next section), a feature that is characteristic of 2D-like band structures. During the time window of our measurements the pressure was in the range of 10 -11 mbar and no evolution or degradation of the spectra was observed. Electronic structure of 2DES: in-plane periodicity and out-of-plane confinement Fig. 5(a) shows the in-plane Fermi surface measured by ARPES, extended over three neighboring Brillouin zones of the unreconstructed ZnO(0001) surface. The two concentric circular Fermi sheets described in the main text are systematically observed around each of the Γ points in these Brillouin zones, demonstrating that the electronic structure has the periodicity expected from an unreconstructed surface. Figs. 5(b, c) furthermore show that such two states form cylindrical, non-dispersive Fermi surfaces in the k 1120 − k 0001 plane, i.e. along the (0001) direction perpendicular to the interface, confirming their 2D character. Note that the Fermi surface in the k 1120 − k 0001 plane would be circular in the case of a 3D state, as the effective mass of the s-electrons is isotropic. Confinement potential and extension of the 2DES at the AlOx/ZnO interface The characteristics of the confinement potential, assumed for simplicity as triangular-wedge shaped, can be readily extracted from the bottom energies of the outer and inner subbands [23]. As the lowest edge of ZnO's conduction-band is mainly s-like [1], the out-of-plane effective masses, which enter into the computation of the quantum well eigen-energies, should be identical to the in-plane masses directly determined from our ARPES data. Thus, using the effective mass around Γ of the outer subband, which is non-renormalized by electronphonon interaction, and the energy difference of 380 meV between the outer and inner subbands, we find that the 2DEG realized in our experiments corresponds to electrons confined by a field of about F ≈ 280 MV/m in a well of depth V 0 ≈ −0.93 eV. The geometrical depth of the quantum well is then d = V 0 /eF ≈ 33Å. The thickness of the 2DEG can also be estimated from the average position of the inner subband's wave-function, corresponding to the electrons in the quantum well farthest away from the surface. From the solutions to the Schrödinger equation in the above potential wedge, this yields approximately 21Å, or about 4 unit cells along c, in agreement with the value inferred from the quantumwell geometrical depth. 
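The triangular-well estimate sketched above can be reproduced numerically. The Python snippet below is a back-of-envelope sketch: it uses the eigenvalue formula E_n = |a_n| (ħ²e²F²/2m)^(1/3) for an ideal triangular well (a_n are Airy-function zeros), the quoted band bottoms, and the non-renormalized in-plane mass for the out-of-plane motion, as argued in the text. Small differences from the quoted values (≈280 MV/m, ≈−0.93 eV, ≈33 Å) are expected, since the exact inputs and rounding used by the authors are not specified.

import numpy as np

HBAR, M_E, Q = 1.054571817e-34, 9.1093837015e-31, 1.602176634e-19

# Measured inputs from the text: subband binding energies and the s-band mass
E1, E2 = 0.450, 0.055        # eV (outer, inner band bottoms below E_F)
m = 0.25 * M_E               # out-of-plane mass assumed equal to the in-plane mass
a1, a2 = 2.33811, 4.08795    # magnitudes of the first two Airy-function zeros

s  = (E1 - E2) / (a2 - a1)                       # energy scale (hbar^2 e^2 F^2 / 2m)^(1/3), eV
F  = np.sqrt((s * Q)**3 * 2 * m / HBAR**2) / Q   # confining field, V/m
V0 = -(E1 + a1 * s)                              # well depth relative to E_F, eV
d  = abs(V0) / F * 1e10                          # geometric depth |V0|/(eF), Angstrom
z2 = 2 * (a2 * s) / (3 * F) * 1e10               # <z> of the 2nd subband (virial: <V> = 2E/3), Angstrom

print(f"F  ~ {F/1e6:.0f} MV/m")                  # ~275 (text: ~280)
print(f"V0 ~ {V0:.2f} eV")                       # ~-0.98 (text: ~-0.93)
print(f"d  ~ {d:.0f} A, <z>_2 ~ {z2:.0f} A")     # ~36 A and ~22 A (text: ~33 A and ~21 A)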
Eliashberg formalism

The theoretical tool to deal with the Hamiltonian including the electron-phonon interaction is the Eliashberg theory, at the center of which is the Eliashberg coupling function, α²F(ω) [64][65][66]. This function can be interpreted as the phonon density of states (at energy ω) weighted by the electron-phonon coupling matrix element. The electron-phonon mass enhancement parameter λ can be calculated from α²F(ω) and understood as the dimensionless coupling strength:

λ = 2 ∫_0^{ω_max} [α²F(ω)/ω] dω.    (1)

Here, ω_max is the maximum phonon energy, and usually takes the value of the Debye energy ω_D. The factor 2 appears because both the absorption and emission processes are counted. In the limit T → 0 K that we will use, the electron-phonon self-energy can also be calculated from the Eliashberg coupling function as:

Σ_2(ω) = π ∫_0^{min(|ω|, ω_max)} α²F(ν) dν.    (2)

In principle, once the dispersion relation of the scattering phonons is given, the Eliashberg coupling function and the electron-phonon coupling strength can be calculated by applying some assumptions and approximations. In simple models, such as the Einstein model and the Debye model, analytical results can be derived.

Self-energy for the 2D and 3D Debye models

In the 2D Debye model, used in the main text to analyze our data, the Eliashberg coupling function α²F(ω) can be calculated analytically [56]:

α²F(ω) = (λ/π) ω / √(ω_D² − ω²)  for ω ≤ ω_D, and 0 above ω_D.    (3)

In turn, from Eq. 2 and Eq. 3, Σ_2 can also be calculated:

Σ_2^{2D}(ω) = λ [ω_D − √(ω_D² − ω²)]  for |ω| ≤ ω_D, and λ ω_D for |ω| > ω_D.    (4)

Here, λ is the mass enhancement parameter, also given by the negative slope of Σ_1 at E − E_F = 0. The real part of the self-energy for the 2D Debye model (Σ_1^{2D}) does not have an analytical expression, but can be readily calculated from the Hilbert transform of Σ_2^{2D}. The 3D Debye model has simple analytical forms for both Σ_1 and Σ_2 [57]. For instance, for Σ_2 it is:

Σ_2^{3D}(ω) = (πλ/3) |ω|³ / ω_D²  for |ω| ≤ ω_D, and (πλ/3) ω_D for |ω| > ω_D.    (5)

Thus, using either the 2D or 3D Debye model, one can extract accurate values of the Debye energy ω_D and the dimensionless coupling strength λ from fits to the whole energy-dependent complex self-energy.

Figure 6 compares the fitting results with the 3D and 2D Debye models for the outer subband (right branch, as in the main text) of the 2DES in ZnO. From this figure, it appears that the 3D Debye model fits Σ_1 better when |ω| > ω_D, while the 2D Debye model shows a sharper inflection at |ω| = ω_D. As for the fitting of Σ_2, the two models give similar results, except that the 3D model tends to give a larger ω_D and a flatter bottom of Σ_2, in less good agreement with experiments.

FIG. 6. Comparison of fits to the experimental self-energy using Debye models in 2D (blue lines, ω_D = 70 meV, λ = 0.34) and 3D (green lines, ω_D = 85 meV, λ = 0.27). (a) Real part of the self-energy. (b) Imaginary part of the self-energy. For simplicity, we have omitted in the fits the Fermi-liquid part of the self-energy, which only contributes small corrections above ω_D.

FIG. 7. Fits to the experimental self-energy using 3D models for the Fermi liquid and Debye electron-phonon coupling. (a) Real part of the self-energy. (b) Imaginary part of the self-energy. In both fits, the same parameters for the Debye frequency (ω_D = 68 meV) and electron-phonon coupling constant λ = 0.3 were used. The best-fit parameters for the Fermi liquid are unphysical: either the constant C_3D is very small or the band filling ε_F is exaggeratedly large. All other fits with physically reasonable parameters completely fail to capture the value and energy dependence of the self-energy, especially above ω_D.
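A minimal numerical sketch of the two Eliashberg relations above (the definition of λ and the T → 0 expression for Σ_2) is given below in Python. The 2D Debye form of α²F is used here only as a test function, and the grid and tolerances are illustrative choices, not the authors' fitting code; the check is simply that the input λ is recovered and that Σ_2 saturates at λ·ω_D.

import numpy as np

def lambda_from_a2f(omega, a2f):
    """Mass-enhancement lambda = 2 * integral of a2F(w)/w dw."""
    return 2.0 * np.trapz(a2f / omega, omega)

def sigma2_from_a2f(omega_grid, a2f_grid, omega):
    """T -> 0 scattering rate Sigma_2(w) = pi * integral_0^|w| a2F(nu) dnu."""
    mask = omega_grid <= abs(omega)
    return np.pi * np.trapz(a2f_grid[mask], omega_grid[mask])

# 2D Debye Eliashberg function (assumed form, integrates back to lambda by construction)
lam_in, w_D = 0.3, 0.068                       # fit values from the main text, energies in eV
w = np.linspace(1e-4, w_D - 1e-6, 4000)
a2f = (lam_in / np.pi) * w / np.sqrt(w_D**2 - w**2)

print(f"lambda recovered = {lambda_from_a2f(w, a2f):.3f}")               # ~0.30
print(f"Sigma_2(w_D)     = {sigma2_from_a2f(w, a2f, w_D)*1e3:.1f} meV")  # ~ lambda*w_D ~ 20 meV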
Fermi-liquid self-energy in 2D and 3D At temperatures much smaller than the bottom F of the conduction band (bare electron mass m e ), the self-energy of a 2D electron liquid can be written as [55]: where C 2D is a constant, F = 2πn 2D × 2 /(2m e ), and n 2D is the density of electrons. Similarly, the self-energy of a 3D electron liquid is [55]: where C 3D is a constant, and F is the bottom of the conduction band for the electron system. In both cases, the real part of the self-energy can be calculated from the Hilbert transform of Σ 2 . Fits to the experimental self-energy using 2D models for the Fermi liquid and the Debye electron-phonon coupling where presented in the main text. Figure 7 shows fits with 3D models of a Fermi-liquid + Debye self-energy for the outer subband (right branch, as in the main text) of the 2DES in ZnO. It is clear that the 2D model used in the main text provides a much better fit. Moreover, the use of a 3D Fermi liquid model yields "reasonable" (albeit still poor) fits of the experimental data only for unphysical values of the model parameters, such as a vanishingly small constant C 3D 1 (for metallic systems, it should be close to 1 [55]) or an exceedingly large F ≈ 50 eV. 27 TABLE I. Comparison of the Debye frequencies and electronphonon coupling constants extracted from fits to the experimental self-energy using the 2D and 3D Debye models. The Fermi-liquid part of the self-energy, which provides only small corrections above ωD, has been neglected for simplicity. L(R) stands for the left(right) branch of the outer band. Index 1 (2) corresponds to fits to Σ1 (Σ2). Fig. 8 shows the experimental dispersion and complex self-energy extracted from the left branch of the outer subband of the 2DES in ZnO, together with fits using the 2D and 3D Debye models. Table I present a summary of the fitting parameters for Σ 1 (index 1) and Σ 2 (index 2) obtained from those two models for both the left (L) and right (R) branches of the outer subband of the 2DES in ZnO. A comparison of Figs. 6 and 8, and an inspection of the parameters listed in tableI, shows that the results are consistent with each other, except for a sensitively larger Debye frequency, and smaller coupling constant, extracted from the fit to Σ 2 in the left branch of the outer subband (L 2 column). From all the other fits, the mean value of ω D (2D) is 0.069 eV, or 556.5 cm −1 in spectroscopy units, while the average of ω D (3D) is 0.085 eV or 685.6 cm −1 . The mean value for ω D (2D) compares very well to the E 1 and A 1 LO modes identified in previous measurements of phonon modes in ZnO [67]. However, The mean value for ω D (3D) does not correspond to any previously reported phonon energy in this material. Thus, all in all, the 2D Fermi-liquid +Debye model appears as a better description of our ARPES data on ZnO, coherent with previous results from other experimental probes. 2DES at the Zn-terminated ZnO surface Fig. 9(a) shows the in-plane Fermi surface map measured by ARPES at the Zn-terminated [0001] plane of the Al(2Å)/ZnO(0001) interface. Similar to the Oterminated plane, this surface also shows two states forming in-plane circular Fermi sheets around Γ. However, their Fermi momenta are smaller than those obtained at the O-terminated surface. Accordingly, as shown by the the energy-momentum dispersion map in Fig. 9(b), the corresponding subbands disperse down to smaller (in absolute value) energies. In particular, the bottom of the inner subband is very close to E F .
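As a small numerical aside, the spectroscopic unit conversions and the 2D Fermi-energy definition quoted above can be checked with the Python lines below. The conversion constant is hc in spectroscopic units; whether the mass entering ε_F is read as the free-electron mass or the band mass is left as a parameter, since the text's notation is ambiguous on this point.

import numpy as np

HBAR, M_E, EV = 1.054571817e-34, 9.1093837015e-31, 1.602176634e-19
MEV_PER_INV_CM = 0.1239841984   # 1 cm^-1 in meV

def ev_to_inv_cm(energy_ev):
    """Convert an energy in eV to spectroscopic wavenumbers (cm^-1)."""
    return energy_ev * 1e3 / MEV_PER_INV_CM

def fermi_energy_2d(n_2d_cm2, mass=M_E):
    """eps_F = 2*pi*n_2D*hbar^2/(2m) for a spin-degenerate 2D band, in eV."""
    n = n_2d_cm2 * 1e4   # cm^-2 -> m^-2
    return 2 * np.pi * n * HBAR**2 / (2 * mass) / EV

print(f"{ev_to_inv_cm(0.069):.1f} cm^-1")   # 556.5, as quoted for the 2D Debye fit
print(f"{ev_to_inv_cm(0.085):.1f} cm^-1")   # 685.6, as quoted for the 3D Debye fit
# Fermi energy for the fitted Fermi-liquid density (use mass=0.25*M_E for the band mass)
print(f"eps_F = {fermi_energy_2d(6.7e13):.2f} eV")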
(2020). Chemo-selective Rh-catalysed hydrogenation of azides into amines. Carbohydrate Research Rh/Al O 3 can be used as an effective chemo-selective reductive catalyst that combines the mild conditions of catalytic hydrogenation with high selectivity for azide moieties in the presence of other hydrogenolysis labile groups such as benzyl and benzyloxycarbonyl functionalities. The practicality of this strategy is exemplified with a range of azide-containing carbohydrate and amino acid derivatives. Introduction Amines are common functional groups present in many organic compounds. In multistep organic synthesis, reactive amine groups often require temporary protection [1]. Azides are often used to mask amines, their chemical stability and orthogonal reactivity towards other common protecting groups makes them exceedingly versatile [2]. The use of azides in carbohydrate synthesis is also very prevalent since glycosyl azides are highly useful precursors for N-linked glycans [3]. One of the most common ways to introduce azides in chemical synthesis consists in the displacement of halogens or other leaving groups (e.g. tosyl and triflyl) by NaN 3 , other common protocols include acid-catalysed addition of trimethylsilyl azide, amine substitution via sulfonylazides or the use of diazonium salts among others [4,5]. Reduction of the azide moiety to an amine is also a synthetically important step, since many azides can be prepared with regio-and stereocontrol, their subsequent reduction permits controlled introduction of the amine group. To that end, the most common protocols for the reduction of azides into amines involve Pd/C or PtO 2 -promoted catalytic hydrogenation [6], Staudinger reaction [7], or hydride reduction [8] (e.g. LiAlH 4 , PhSeH, tris(trimethylsilyl)silane, dithiol/Et 3 N), among many others [2]. Despite the numerous methods available for the reduction of the N 3 group, selectivity issues often arise due to the presence of other nonorthogonal protecting groups or labile moieties on the same molecule which can't withstand the reaction conditions. While Pd-catalysed hydrogenation is often non-selective for benzyl ethers and olefins, previous examples of selective Pd-catalysed hydrogenation of azides in presence of benzyl ethers have been reported [9]. Nonetheless, the method does not allow for selective reduction of azides in the presence of the more labile benzyloxycarbonyl (Cbz) protecting group. On the other hand, the Staudinger reaction requires the use of water, which is often incompatible with hydrolytically labile groups; hydride-mediated reductions (e.g. LiAlH 4 ) are not selective towards N 3 groups in the presence of aldehydes or esters and in some instances the methods involve the use of toxic or/and malodourous reagents (e.g. selenols, thiols, tin hydrides) [2]. Rh complexes have been successfully and extensively employed in organometallic chemistry in for example metathesis reactions [10,11], hydrosilylation reactions [12], hydrogenation of olefins [13] and for reduction of nitriles to amines [14]. Rh-catalysed hydrogenation represents an alternative to the aforementioned azide reduction strategies, which has the potential to offer several advantages over more traditional methods, in terms of chemical orthogonality. Herein we report the use of Rh as an effective and selective catalyst for the hydrogenolysis of azides. The applicability of the method is exemplified with a range of azide-containing carbohydrate and amino acid derivatives. 
We demonstrate that Rh/Al 2 O 3 may be used as an effective reductive catalyst that combines the mild conditions used in catalytic hydrogenation with high selectivity for azide moieties in the presence of other labile groups. To the best of our knowledge there are no reported applications of the use of Rh or its complexes for the catalytic reduction of azides in biological chemistry. Results and discussion Initial studies were aimed at evaluating the effectiveness of Rh as a mild hydrogenation catalyst for the chemo-selective reduction of azides in the presence of benzyl ethers, which are typically labile under Pdcatalysed hydrogenolysis conditions [15]. To that end, glycosyl azide 1 https://doi.org/10.1016/j.carres.2020.107948 Received 17 January 2020; Received in revised form 4 February 2020; Accepted 5 February 2020 T was chosen as a model substrate (Scheme 1a) and reacted with 10 mol% commercially available Rh/Al 2 O 3 in presence of an excess of acetic acid to help stabilise the resulting amine product as the acetate salt [16]. Pleasingly, the reduction of 1 to amine 2 was achieved in an excellent yield of 90% using mild conditions (H 2 , 1 atm, balloon and room temperature) without affecting the surrounding benzyl groups. To determine the efficiency of this reduction under different catalyst loadings, a preliminary rate determination screen was performed using 10, 5 and 1 mol% of Rh/Al 2 O 3 under the same reaction conditions as before. Under these conditions and for each catalyst loading, the conversion between azide 1 to amine 2 was monitored by taking aliquots of the reaction at different time intervals and analysed via NMR spectroscopy (Scheme 1b). The reduction at 10 mol% of Rh reached > 90% completion after 6 h. Similar results were achieved at 5 mol% of Rh after 13 h, while at 1 mol% load of Rh the reaction was slower and gave a 17% conversion after 24 h. Solvent effects were then explored and the outcome of the model reaction in different solvents was monitored in order to evaluate if the nature of the solvent would influence the efficiency of the reduction. The Rh-catalysed hydrogenation of 1 into 2 was tested in toluene, EtOAc, MeOH, THF, and CHCl 3 under the same conditions (5 mol% catalytic load, H 2 1 atm, 5 h, room temperature). No significant difference was observed between the different solvents (Scheme 1c), except for CHCl 3 that gave a much lower conversion probably due to the lower solubility of H 2 in halogenated solvents [17]. Consequently, It was decided to carry out the Rh-catalysed hydrogenations in a 6:1 toluene/EtOAc mixture, providing optimal solubility conditions for both reagents and products. Encouraged by these initial results, the scope of the Rh-catalysed azide reduction was then explored on a range of glucosides containing either primary, secondary or anomeric azides and bearing both acetyl and benzyl ether protecting groups (Scheme 2) using the optimized catalytic conditions and toluene/EtOAc (6:1) as the reaction solvent. In most cases, the reductions proceeded smoothly in moderate to good yields at room temperature. In brief, glycosides bearing azidegroups at C2 as in 3 and 4 afforded the respective perbenzylated and peracetylated 2-aminoglycosides 5 and 6 in good yields of 63% and 86% yield, after 3 and 6 h, respectively, at room temperature. Rh-catalysed reduction of 1-azido acetyl-protected glucoside 7 resulted in the corresponding 1-amino glycoside 9 in 69% with no anomerization byproducts observed. 
In the case of 1-azido benzyl-protected glucoside 8, a complex mixture of amino-containing derivatives including α/β anomeric mixtures of the 1-amino derivative were obtained under this conditions, likely due to the electron donating nature of the benzyl protecting groups which makes the resulting hemiaminal product more reactive and thus more susceptible to anomerization and/or degradation [18]. We, therefore resolved the product mixture via acetylation of the resulting reduced product using acetic acid and pyridine which lead to the isolation of the major product, β-1-N-acetyl glycoside 10 in 36% Scheme 1. (a) Rh-catalysed hydrogenation of 1, reagent and conditions: H 2 1 atm, Rh/Al 2 O 3 10 mol%, AcOH, toluene 6 h, room temperature, 90%. (b) Plot of conversion of 1 into 2 at 1 mol% (blue ), 5 mol% (red ) and 10 mol% (grey ) of Rh/Al 2 O 3 over time. (c) Relative conversion rates between 1 into 2 in different solvent at 5 mol% Rh/Al 2 O 3 , H 2 1 atm, AcOH, after 5 h. Scheme 2. Reagent and conditions: i) H 2 1 atm, Rh/Al 2 O 3 10 mol%, AcOH, toluene/EtOAc (6:1), room temperature. ii) H 2 1 atm, Rh/Al 2 O 3 10 mol%, AcOH, toluene/EtOAc (6:1), 6 h, room temperature, then Py/Ac 2 O 1:1 16 h, room temperature. yield after 2 steps. On the other hand, reduction of glucoside 11, bearing a primary azide at C6, underwent reduction to the amine within 8 h with concomitant acetate migration of the acetyl group at C4 to afford acetamide derivative 12 in a 83% yield. This was confirmed by the proton shift associated for H4 (from δ 4.97 in 11 to δ 3.33 ppm for acetamide 12) and to the presence of a cross-peak in the COSY spectrum corresponding to the broad OH signal and H4. To ascertain whether other H 2 -cleavable protecting groups were amenable to the mild Rh reaction conditions, CbzN-protected glucoside 13 was also screened. Compound 13 was reduced to amide 14 in 63% yield, without loss of the carbamate protecting group. Similarly to the outcome observed for 11, the primary amine undergoes intramolecular transacetylation affording the corresponding acetamide and leaving a free hydroxyl at position C4 of the sugar. In order to elucidate whether the transacetylation reaction occurs only for amines at position C6 of glycosides featuring OAc groups at adjacent positions (e.g. C4) of the sugar moiety or due to the general stronger nucleophilicity of primary amines over secondary or anomeric ones, the reduction of acetate containing glycosides 15 and 17, bearing a primary azide at different positions on the sugar, were performed. Amino-derivatives 16 and 18 were obtained in good yields of 73% and 94%, respectively, without the formation of transacetylated derivatives. These results confirm that the observed acetate migration appears to be a specific feature of C6-amine containing glycosides where acetate groups are present at C4 and can undergo intramolecular rearrangement via a 6-membered transition state giving the corresponding acetamide, which is a common problem in carbohydrate chemistry [19]. To evaluate the chemical selectivity and efficiency of Rh-catalysed hydrogenation in the presence of different protecting and functional groups, orthogonally protected glycoside 19, thioglycoside 21, nucleosides 23 and 25 and amino acid derivatives 27 and 29 containing free amino and carboxylic acid groups, were subjected to the Rh-catalysed reduction (Scheme 3). 
Reduction of 19 gave amine 20 cleanly in 74% yield without cleavage of the benzylidene acetal protecting group and without the need for anhydrous conditions. On the other hand, Rh-catalysed reduction of sulphur-bearing glycoside 21 only yielded starting material, despite of long reaction times (24 h) with a 10 mol% of Rh, likely due to catalyst poisoning by the thioether [20,21]. Our reduction conditions were then applied to azido-containing uridine derivative 23 and adenosine 25 as model pyrimidine and purine nucleoside substrates. In the case of 23, complete reduction of the azido and double bond in the nucleobase afforded dihydrouridine amine 24 as the major product. The lack of chemo-selectivity in this instance is not completely unexpected as previous work has demonstrated the applicability of Rh in the reduction of uridine bases [22][23][24]. However, selective azido reduction in adenosine 27 was successful and amine 28 was obtained in 65% yield without affecting the nucleobase, in virtue of a stronger aromaticity compared to the pyrimidine model 25. Finally, we also demonstrated that Rh-catalysed hydrogenation of azides is compatible with the presence of free acid or basic amine functions such as those found in amino acid derivatives 27 and 29, which furnished the corresponding amines 28 and 30 in excellent yields of 92% and 86% respectively, after subjecting the parent substrates to the Rh-catalysed conditions in methanol as the solvent for solubility purposes. Conclusions In conclusion, we have demonstrated that Rh/Al 2 O 3 can be used as an effective and chemo-selective catalyst for the reduction of azides in the presence of other hydrogen-labile functional groups such as O-Bn and N-Cbz protecting groups The method is mild and offers an orthogonal alternative to Pd-or Pt-catalysed hydrogenation where high specificity for the azide moiety is required. Although the presence of thiols as in thioglycosides appears to be incompatible with the protocol, in a similar way to hydrogenations carried out with Pd or Pt catalysts where the catalysts are inactivated, the versatility of the protocol is exemplified in a range of glycosides and amino acid derivatives demonstrating its compatibility with other commonly used orthogonal functional groups, setting the stage for novel applications of Rh-catalysed azide reduction in organic synthesis. Declaration of competing interest The authors declare no conflict of interest.
Strain-switchable field-induced superconductivity Field-induced superconductivity is a rare phenomenon where an applied magnetic field enhances or induces superconductivity. Here, we use applied stress as a control switch between a field-tunable superconducting state and a robust non–field-tunable state. This marks the first demonstration of a strain-tunable superconducting spin valve with infinite magnetoresistance. We combine tunable uniaxial stress and applied magnetic field on the ferromagnetic superconductor Eu(Fe0.88Co0.12)2As2 to shift the field-induced zero-resistance temperature between 4 K and a record-high value of 10 K. We use x-ray diffraction and spectroscopy measurements under stress and field to reveal that strain tuning of the nematic order and field tuning of the ferromagnetism act as independent control parameters of the superconductivity. Combining comprehensive measurements with DFT calculations, we propose that field-induced superconductivity arises from a novel mechanism, namely, the uniquely dominant effect of the Eu dipolar field when the exchange field splitting is nearly zero. Introduction The switching between distinct electronic phases in quantum materials by external tuning parameters is a central focus of condensed matter physics, both to study how competing orders interact and with the goal of technological development (1).One rich research area is tuning systems with both ferromagnetism and superconductivity.The interaction of the antagonistic phases leads to unusual phenomena, such as spontaneous magnetic vortices (2,3) and spin-polarized supercurrents (4), the latter of which hold promise for superconducting spintronics technologies and energy-efficient data storage.Much attention has focused on superconducting spin valves, i.e. thin film heterostructures with ferromagnetic layers surrounding a superconducting layer (5).An applied magnetic field switches the sandwiching ferromagnetic layers between parallel and antiparallel alignment, which strongly tunes the magnetic pairbreaking effect and effectively turns the superconductivity on and off.This enables the ultimate switchability of magneto-transport, between a resistive and zero-resistance state, thus achieving infinite magnetoresistance and the possibility of low energy dissipation computation technologies (4). 
Infinite magnetoresistance occurs not only in artificial heterostructures, but also in a handful of single crystal materials exhibiting field-induced superconductivity, including several Eu and U-based superconductors (6)(7)(8)(9)(10) and organic superconductors (11,12).In these systems as well as in thin-film superconducting spin valves, the zero-resistance temperature T0 is often below 1K, up to 4K for UTe2 under pressure (13), limiting their practical application.The current record-holder for highest fieldinduced superconductivity temperature is in the chemically-doped Eu-based iron pnictide superconductor, EuFe2As2.Like other iron pnictide superconductors, EuFe2As2 exhibits an electronic nematic transition which creates orthorhombic structural twin domains.The suppression of nematicity by chemical doping results in the emergence of superconductivity, with an onset temperature TSC reaching 18K-30K at optimal doping (14)(15)(16)(17).Meanwhile, in optimal doped materials the Eu moments order ferromagnetically along the c-axis, with TFM=16K-20K (18)(19)(20)(21)(22).The similar ordering temperatures of the two antagonistic phases implies a potentially strong competition between them.Indeed, for Co-and Rhdoped samples a large reentrant resistivity appears below TFM as the Eu magnetic flux disrupts the nascent superconductivity, pushing T0 far below TSC.Unexpectedly, applying a small in-plane magnetic field (μ0H <0.5T) to these materials raises T0 from ~5K to ~6-7K (23,24).Thus far, the mechanism of this field-induced superconductivity has not been determined, nor has the effect been optimized to enhance T0 to its limit. In this work, we demonstrate field-induced superconductivity in 12% Co-doped EuFe2As2 at T0=9K, which can be enhanced up to at least 10K or suppressed to at least 4K using in-situ applied uniaxial stress.To our knowledge, this is the highest reported temperature of magnetic-field-induced superconductivity in any material.Doped EuFe2As2 exists as a natural-grown atomic limit of the thin film superconducting spin valve architecture, with alternating ferromagnetic Eu and superconducting/nematic FeAs layers (Fig. 1A).We combine synchrotron x-ray techniques and transport measurements to reveal that straintuning of the nematicity and field-tuning of the Eu moments act as independent tuning knobs of the superconductivity (Fig. 
1B,C).Indeed, doped EuFe2As2 acts as a strain-switchable superconducting spin valve, which has potential both for spintronics applications and more fundamental investigations of ferromagnetic superconductors.Finally, we combine DFT calculations and analysis of the Eu dipole field to ascertain the origin of the field-induced superconductivity; in short, the directional anisotropy of the upper critical field Hc2 enables the in-plane reorientation of the Eu moments to boost T0.In the Discussion, we consider how this novel mechanism could be realized in other systems, including in 2D systems and at even higher temperatures.).The FM/SC phase competition prevents a zero-resistance state from being reached until the lower temperature T0, below which the three phases simultaneously coexist.(B,C) Three doors (i.e.tuning parameters) lead to the zero-resistance state (shaded areas of the phase diagram).One door is field-induced superconductivity (FI, cyan).Applying a small in-plane magnetic field (Hsat) reorients the Eu moments and reduces the magnetic flux through the FeAs layers, enhancing superconductivity and boosting T0.A second door is strain-induced superconductivity (SI, lavender).As in other iron-pnictide superconductors, the N/SC phase competition enables an effective strain-tuning of superconductivity via strain-tuning the lattice-coupled nematic order.Here, tensile (compressive) stress along the FeAs bonding direction enhances (suppresses) superconductivity and increases (decreases) T0.The third door is simply tuning the temperature (TI) to cross the externally-tuned value of T0.With combined strain and field tuning, T0 can in principle take any value between T=0 and T=TFM (green). Strain-switchable field-induced superconductivity Single crystal samples of 12% Co-doped EuFe2As2 were grown using Sn flux (see Methods).We found that using a (Fe,Co)-rich, nonstoichiometric growth composition yielded samples with increased superconducting transition temperatures relative to stoichiometric-grown samples (23) (Methods, Supp.Fig.S1).Samples 1 and 2 were selected from different growth batches and were prepared identically as matchsticks to measure the inline resistivity (Fig. 1A).To better compare the field and strain tuning of the resistivity, we present all transport data normalized to the zero-field freestanding resistivity at T=25K, with / 0 = (, , )/ (25 , 0, 0). In the freestanding state, sample 1 was cooled through the superconducting (TSC=19K ) and ferromagnetic (TFM=17.2K)transitions under zero field (Fig. 2, black), reaching / 0 = 0 at T0=7.5K.Temperature sweeps were repeated with fixed magnetic field applied either in-plane (Fig. 2, red) or out of plane (Fig. 2, blue).An out of plane field is found to only increase the resistivity, while only lowering the value of T0.In sharp contrast, an in-plane field is far more detrimental to superconductivity between TSC and TFM, but zero resistance is reached at an enhanced value of T0=9.0K for μ0H=0.2T.Thus, we demonstrate field-induced superconductivity.It is striking that the superconductivity shows a different preference for in vs out of plane field above and below TFM; we will return to this point in the section that discusses the mechanism of field-induced superconductivity. 
Figure 3A shows / 0 vs temperature at fixed in-plane field (μ0H=0 T and μ0H=1 T) and / 0 vs field at fixed temperature for sample 1.For T > TFM, an applied field up to 1 T acts only to increase the resistivity.For T<T0, / 0 = 0 up to 1T.However, for TFM>T>T0, the minimum resistivity value is reached at finite field.As we will show below, this resistivity minimum corresponds to the full in-plane saturation of the Eu moments, and we mark this field value as Hsat (Fig. 3B, black circles).In Figure 4, we plot Hsat vs temperature and find that it is well described by square-root temperature dependence, H sat ∝ √T FM − T, indicating the mean-field behavior of the Eu magnetic ordering.For 9K > T > 7.5K, we find that zero resistance can be induced in the vicinity of Hsat. Following these measurements, sample 1 was mounted to a uniaxial stress device (see Methods; we corrected for a post-mounting background resistivity of order / 0 = 1%, see Supp.Fig.S7).The sample was initially cooled under zero device voltage to base temperature.The sample was then slowly warmed under large fixed tension or compression, yielding the resistivity vs temperature curves in Figure 4 (right).The uniaxial stress was aligned along the Fe-As bonding direction, which induces strains in both B1g and A1g symmetry channels.We find that TSC varies monotonically as a function of the nominal uniaxial strain and can be tuned by ~1 K, revealing the tunability of the nematicity/superconductivity phase competition dominating by the A1g strain in line with previous work in BaFe2As2 (25,26).Intriguingly, the resistivity is especially tunable below TFM, and we find that T0 can be enhanced or suppressed by ~3 K, demonstrating the increased sensitivity to external tuning parameters within the ferromagnetic superconducting phase.However, TFM is virtually unchanged with strain, suggesting that strain has minimal impact on the Eu magnetic order. Next, we applied field at fixed temperature and fixed tension or compression (Fig. 3C).We find that the resistivity field dependence is similar to the zero-strain condition, but with an initial resistivity that is lower or higher with tension or compression, respectively, enabling a combined strain-field tuning of T0.In Figure 4 we construct the strain and field-tunable phase diagram of superconductivity.Fieldinduced superconductivity is accessible in a temperature window from 7.5K to 9K under zero strain, with a measured minimum of 4K under compression and a maximum near 11K under tension, and with an onset field between μ0H=0.1T and 0.3T.As the Eu magnetic moment increases with decreasing temperature, field-reorientation of the moments has a larger effect on the superconductivity, and so the fixed-strain phase volume of field-induced superconductivity is largest under compression and smallest under tension.We note that this is a substantial qualitative difference from UTe2 where pressure tuning can shift the critical field by many tesla (13).(Right) Resistivity vs temperature for the zero-strain state (same as black curve in Fig. 2,3) and for the tensile (green) and compressive (magenta) strain states in Fig. 3C.(Left) Phase boundary between ρ >0 and ρ =0 states under zero strain (cyan), tension (green) and compression (magenta), determined by resistivity vs temperature data (diamonds) and resistivity vs magnetic field (squares) from Fig. 
3 and Supp.Fig.S5.Field-induced superconductivity indicated by shaded areas for each strain state.Eu in-plane saturation field Hsat taken from minimum of magnetoresistance in Fig. 3B vs. temperature (black circles), with mean-field fit line (red). Strain and Magnetic Field: Independent Tuning Knobs of Superconductivity To further identify the independence of strain and magnetic field for tunability of superconductivity, as well as to identify the mechanism of the field-induced superconductivity, we performed transport measurements under applied strain simultaneous with either x-ray diffraction (XRD) or x-ray magnetic circular dichroism (XMCD) at the Advanced Photon Source (see Methods).XMCD is a powerful tool to study ferromagnetic superconductors, as it is an element-specific electronic fluorescent effect which bypasses any diamagnetic shielding from the superconductivity.Further, it is a necessary tool for studying strain-tuning of the magnetic order given the experimental challenge of using conventional magnetometry techniques with a strain device (27). We performed XRD measurements on sample 2 at T = 13.5 K, just below the maximum of the reentrant resistivity, across a range of strain.The linearity of the inline strain confirms a constant strain transmission (Fig. 5B).We also measured the B2g-symmetry spontaneous orthorhombicity , which is a proxy of the nematic order (28) (see Methods and Supp.Fig.S2).We find that under applied tension the magnitude of is suppressed by up to 30%, coinciding with a dramatic decrease in the resistivity (Fig. 5A).Under compression, is roughly constant as the resistivity increases, suggesting that the saturated nematicity suppresses the superconductivity.This strain dependence of nematicity is consistent with the combination effect of the induced A1g and B1g strains, where the latter acts as a transverse field that suppresses nematicity quadratically.Thus, we can effectively strain-tune the superconductivity via its competition with the strain-tunable nematicity and the associated antiferromagnetic order (26,29). Field-induced superconductivity was observed in sample 2 at T=10K under both fixed-strain and fixed-field conditions.With zero field, the resistivity can be strain-tuned from / 0 = 5% under zero strain, to / 0 = 40% at maximum compression, and / 0 = 0% with maximum tension (Fig. 5C).Thus, tensile strain can effectively raise the superconducting transition to at least 10K.The application of an in-plane magnetic field (μ0H=0.26T) decreases the resistivity at all strain states, and zero resistivity is obtained at roughly 75% the maximum applied tension.Thus, tensile strain and magnetic field can work together to raise the transition temperature even higher.Figure 5D shows resistivity vs applied magnetic field at four fixed tension values, where a narrow strain range permits field-induced superconductivity. To investigate the origin of the field-induced superconductivity, we performed simultaneous resistivity and XMCD measurements vs field at five fixed strain states between maximum compression and tension (Fig. 
5E,F).Here, the XMCD signal is proportional to the Eu magnetization along the field direction (10 degrees above the in-plane grazing incidence due to sample chamber constraints; see Methods).As the Eu ferromagnetic moments are spontaneously ordered along the c-axis, the Eu in-plane moment (and XMCD signal) is initially nearly zero under zero field.For all strains, increasing the magnetic field linearly increases the in-plane moment towards saturation at Hsat=0.25T, coinciding with the minimum of the magnetoresistance.From this, we conclude that the Eu moment reorientation towards the in-plane direction is intimately connected to field-induced superconductivity.Despite the large change in the zero-field resistivity with strain, there is no apparent strain-induced change in either the saturation field value or saturation XMCD value.This strain independence was somewhat unexpected given that the localized Eu 4f electrons presumably order with assistance from the strain-sensitive Fe 3d electrons via an RKKY interaction (30).As strain does not affect the Eu-magnetic order, and as strain has been shown to be far more effective than magnetic field in tuning the nematic order in this material system (27), we find that strain and field act as effectively independent tuning parameters of the superconductivity. Mechanisms of field-induced superconductivity The Jaccarino-Peter effect (31) has often been invoked to explain field-induced superconductivity in s-wave superconductors, including in Eu-based Chevrel phases (6, 7) and organic superconductors (11,12,32).Here, the Zeeman splitting induced by an external field compensates the internal exchange-bias splitting, resulting in superconductivity.In our case, the exchange-bias field is parallel to the external field, so a Jaccarino-Peter compensation is not possible.Instead, two other mechanisms contribute to the exchange splitting induced in the Fe bands: the Hund's rule coupling of Eu f-and d-orbitals, with the latter overlapping with Fe d-orbitals and inducing a polarization parallel to Eu f moments; and the Schrieffer-Wolfe coupling of Eu f-and Fe d-orbitals, which leads to an antiparallel polarization.To characterize these two effects, we performed DFT calculations using the Wien2K package (33,34) for Eu moments fully polarized in-plane (see Supplementary Information, Fig. S8-10).We find that both show high sensitivity to the Hubbard U on Eu sites, and as these two interactions have opposite signs, the induced splitting of Fe bands is relatively small and varying in sign and amplitude over the Fermi surface.This 'accidental cancelation' gives a reasonable explanation for the coexistence of superconductivity and ferromagnetism.Above TFM this cancellation is lifted as the Eu moments become disordered, which also explains the flipped field-preference of superconductivity above and below TFM (Fig. 2).The small exchange interaction has also previously been suggested by DFT and Mossbauer studies in related materials (18,(35)(36)(37).Nonetheless, these two exchange splitting mechanisms do not drive the field-induced superconductivity, and an alternative explanation is required. 
An explanation for the field-induced superconductivity mechanism comes from considering the sizeable dipolar magnetic field exerted by the Eu moments onto the Fe layers. Using the classical Clausius-Mossotti theory of polarizable media, we can estimate the dipole field from the stacked infinite planes of fully-ordered ferromagnetic Eu moments as Bdip = μ0M/3 ≈ 0.3 T, where the magnetization M of the Eu moments corresponds to 7 μB per Eu moment and 90 Å3 of volume per moment. Importantly, this is not an "effective" magnetic field derived from the exchange splitting, but a real field (with respect to the superconducting condensate) that can be screened by Abrikosov vortices (38,39). Reviewing Figure 4F, at 10 K and an applied field of 0.25 T the XMCD signal saturates at 80% of the 2 K XMCD value (Supp. Fig. S4), suggesting a total dipole field of 0.24 T, in agreement with this estimate. A resistive state is found under zero field, where a net 0.24 T of Eu field is aligned to the c-axis. A zero-resistance state is found under an applied field of 0.25 T in-plane, which combines with the reoriented Eu moments to give a total 0.49 T of flux in-plane. As in other iron-based superconductors (40), Eu(Fe0.88Co0.12)2As2 has a moderate in- vs. out-of-plane Hc2 anisotropy, with γ = Hc2,in/Hc2,out ≅ 2.1 at T = 2 K (Supp. Fig. S5). As γ > 0.49/0.25, and as we expect γ to increase with temperature towards TSC (41), we can explain the narrow field range of the field-induced superconductivity as due primarily to rotating the Eu moments in-plane to take advantage of the higher in-plane critical field. Further, this explains why applied strain does not shift the field range where superconductivity onsets, as strain does not directly tune the Eu magnetic order.

Discussion

Here, we have demonstrated field-induced superconductivity at T = 9 K, which can be accessed at small field (μ0H ≤ 0.3 T) and tuned with accessible strain values (|εxx| < 0.2%). Our combined XRD, XMCD and transport measurements show that strain and magnetic field act as independent tuning knobs, with the former affecting the nematic order and Fe antiferromagnetism and the latter affecting the Eu ferromagnetism. These knobs tune the phase diagram analogously to chemical doping, but without introducing additional disorder. The high tunability of this system results from the close competition between the simultaneously coexisting superconducting, nematic and ferromagnetic phases. In contrast, no field-induced superconductivity has been reported in related Eu-based iron pnictide materials such as EuRbFe4As4 (42) or optimally Ir-doped EuFe2As2 (15), likely due to the substantially stronger superconducting order. We anticipate that even higher field-induced superconducting temperatures could be obtained in materials engineered with a perfect balance between higher-temperature superconductivity and ferromagnetism.
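As a numeric check of the dipole-field estimate and the flux bookkeeping given above: the inputs (7 μB per Eu moment, 90 Å3 per moment, the 0.25 T applied field, the 0.49 T total in-plane flux, and γ ≅ 2.1) come from the text, while the short script and its variable names are only an illustrative sketch of ours.

```python
# Numeric check of the Lorentz (Clausius-Mossotti) dipole-field estimate quoted above.
# Inputs from the text: mu = 7 mu_B per Eu moment, V = 90 cubic Angstrom per moment.
import math

mu_B = 9.274e-24            # Bohr magneton, J/T
mu_0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

mu = 7 * mu_B               # fully ordered Eu moment, J/T
V = 90e-30                  # volume per Eu moment, m^3

M = mu / V                  # magnetization of the Eu planes, A/m
B_dip = mu_0 * M / 3        # Lorentz dipole field, T
print(f"B_dip = {B_dip:.2f} T")   # prints ~0.30 T, matching the estimate in the text

# Flux bookkeeping used in the text for the field-induced zero-resistance state:
gamma = 2.1                       # H_c2,in / H_c2,out anisotropy at 2 K
flux_ratio = 0.49 / 0.25          # total in-plane flux relative to the applied field scale
print(f"flux ratio = {flux_ratio:.2f} < gamma = {gamma}")
```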
An open question is the microscopic details of the zero-resistance state, and especially its strain-tunability. The small resistivity just above T0 has previously been associated with mobile flux vortices making up a spontaneous vortex liquid phase, with zero resistivity indicating the freezing of these vortices (2). An intriguing possibility to explain the enhanced strain-tunability of T0 below TFM is that vortices become pinned at nematic domain boundaries (43), which can be tuned in number and size with strain. A second direction for future work is to assess this material's potential for superconducting spintronics applications, such as by studying the degree of spin polarization and spin-triplet pairing of the supercurrent as it passes through the field-tunable magnetic layers (4).

We have also described a novel mechanism for field-induced superconductivity distinct from the Jaccarino-Peter effect and from spin-triplet uranium-based compounds. This mechanism could likely be present in other systems that exhibit (1) large magnetic moments that are easily field-tunable (e.g. L = 0 rare earth elements) and (2) a superconducting order which is dimensionally highly anisotropic (e.g. a van der Waals (vdW) material (44-46) or the interface between different materials (47)). This mechanism could arise quite naturally in a vdW heterostructure with one superconducting layer and one ferromagnetic layer. We note that the apparent first report of field-reentrant superconductivity in a vdW system occurs with stacked thin flakes of antiferromagnetic CrCl3 and superconducting NbSe2 (48), which demonstrates the potential for our proposed mechanism to likewise underlie field-induced superconductivity in 2D materials.

Sample Preparation

Single crystal samples of Eu(Fe0.88Co0.12)2As2 were grown from a tin flux as described elsewhere (23). We used a nonstoichiometric mix ratio of Eu:(Fe0.85Co0.15):As:Sn of 1:8.5:2:19. This ratio resulted in samples with higher zero-resistance temperatures (T0) compared to the stoichiometric 1:2:2:20 ratio (23) (Supp. Fig. S1). However, there was significant sample-to-sample variability in T0, which may result from doping inhomogeneity. The composition was measured by EDX to be 12% Co-doping, despite a nominal doping of 15%. The samples were cleaved from a large as-grown single-crystal plate and cut along the tetragonal [1 0 0] direction into bars with dimensions of ~2 x 0.60 x 0.06 mm. Four gold wires were attached with silver epoxy to measure the inline resistivity using a standard 4-point measurement and an SR830 lock-in amplifier with 1 mA fixed current. Sample 1 was measured in a Quantum Design PPMS. Sample 2 was measured in x-ray compatible cryostats at Argonne National Laboratory.

A piezo-actuator uniaxial stress device (Razorbill Instruments, CS-100) was used to provide in-situ stress. The built-in capacitance strain gauge was used to determine the nominal strain as in (28). Sample chamber constraints prevented the measurement of the strain for the data presented in Figure 5C-F. Below the nematic transition (Ts = 68 K, Supp. Fig. S3), structural twin domains form along the Fe-Fe bonding direction. The applied stress thus does not detwin the domains, but instead can tune the magnitude of the nematic order parameter through nonlinear couplings between the induced strains and the nematic order parameter (see ref. (25) and Supp. Figs. S2-S3).
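For orientation, the symmetry-adapted strains and the orthorhombicity proxy referred to above can be written compactly. The explicit symbols and the schematic Landau form below are our own choices (assumptions, not quoted from the paper), intended only to mirror the qualitative couplings it describes.

```latex
% Symmetry-adapted in-plane strains for stress along the tetragonal [1 0 0] axis
% (45 degrees to the Fe-Fe bonds), and the B2g orthorhombicity used as the nematic
% proxy (notation ours):
\[
  \varepsilon_{A_{1g}} = \frac{\varepsilon_{xx} + \varepsilon_{yy}}{2}, \qquad
  \varepsilon_{B_{1g}} = \frac{\varepsilon_{xx} - \varepsilon_{yy}}{2}, \qquad
  \delta = \frac{a - b}{a + b},
\]
% with a and b the split in-plane lattice constants of the twinned orthorhombic state.
% A schematic Landau expansion consistent with the couplings described in the text,
% where psi is the B2g nematic order parameter:
\[
  F(\psi) \supset a_0\,\psi^{2} + b_0\,\psi^{4}
  + \lambda_{A}\,\varepsilon_{A_{1g}}\,\psi^{2}
  + \lambda_{B}\,\varepsilon_{B_{1g}}^{2}\,\psi^{2},
\]
% so the A1g strain shifts the nematic transition, while the B1g strain enters as a
% transverse field and suppresses psi (and hence delta) quadratically in strain.
```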
After mounting sample 1 on the strain device, a field, strain and temperature dependent background resistivity of order ρ/ρ0 ≈ 1% was present, masking the true entrance into the zero-resistance state. We estimate the field range of field-induced superconductivity from the field range where the resistivity dips below this background resistivity (see Supp. Figs. S6, S7 for analysis), from which we estimate that field-induced superconductivity occurs in the bulk of the sample up to T = 11 K. True zero resistance was measured in sample 2 at T = 10 K, reported in Fig. 5C,D.

X-ray Magnetic Circular Dichroism and X-ray Diffraction

XRD measurements were performed at the Advanced Photon Source, beamline 6-ID-B, at Argonne National Laboratory. X-rays of energy 7.6 keV illuminated an area of 500 x 500 μm, fully encompassing a cross section of the middle of the crystal where strain transmission is highest. The sample and strain device were mounted on a closed-cycle cryostat. Gaussian fits to the tetragonal (1 0 7), (0 0 8) and (1 1 8) reflections were used to determine the lattice constants corresponding, respectively, to in-plane along the stress axis, out of plane, and in-plane at 45 degrees to the stress axis.

XMCD was measured at the Advanced Photon Source beamline 4-ID-D at Argonne National Laboratory. We probed the Eu L3 edge using x-rays of 6.97 keV, which measure the spin polarization of the Eu 5d band due primarily to the magnetic moment of the 4f orbital. A superconducting split-coil magnet with a large bore was used to apply the magnetic field. The sample temperature was controlled using He flow. XMCD was collected in fluorescence geometry by monitoring the Eu Lα line using a four-element Vortex detector integrated with the Xspress module to enable a larger dynamic range. Circularly polarized x-rays were generated using a 180-micron-thick diamond (111) phase plate. Data were corrected for self-absorption. The XMCD spot size illuminates the whole sample width across the y direction, is roughly 100 microns along the x direction (between the transport wires), and probes a depth of about 5 microns. The beam is centered on the middle of the crystal where strain is most transmitted and homogeneous. The incident beam was aligned with the applied magnetic field at an angle of ~10 degrees above parallel to the sample surface (grazing incidence) due to sample chamber constraints. All XMCD data are normalized to the zero-strain, μ0H > 0.3 T saturated value at T = 2 K (Supp. Fig. S4).

DFT Calculations

The full-potential linearized augmented plane wave Wien2K package (33) has been used for the DFT calculations. We use the Perdew, Burke, and Ernzerhof (34) version of the generalized gradient approximation (GGA) to the exchange-correlation functional within density functional theory. The sphere radii for Eu, Fe, As are taken as 2.50, 2.29, 2.18 bohr, respectively. The basis set cut-off parameter RmtKmax = 8.0 was used. The number of k points was set to 4500. The crystal structure and magnetic moments on Eu and Fe are illustrated in Fig. S8.
We set U = 9 eV on the Eu atom and performed collinear spin-polarized self-consistent calculations in the primitive (not conventional) cell. WIEN2k has a parameter which scales the strength of the Hund's rule coupling: the coupling has its normal full strength when this parameter is set to 1 and is completely switched off when it is set to 0. We used this parameter to delineate the two effects mentioned above: the Schrieffer-Wolff interaction does not depend on the Hund's rule coupling strength, while the Eu(f)-Eu(d) interaction can be switched off with it. In Fig. S9, we show the band structure for these two settings around the Fermi level. The largest splitting near the Fermi level at full Hund's coupling is about 25 meV along the Z-Γ direction.

II. XRD under strain

XRD measurements were performed on sample 2 at the Advanced Photon Source, beamline 6-ID-B, at Argonne National Laboratory. X-rays of energy 7.6 keV illuminated an area of 500 x 500 μm, fully encompassing a cross section of the middle of the crystal where strain transmission is highest. The sample and strain device were mounted on a closed-cycle cryostat. Gaussian fits to the tetragonal (1 0 7), (0 0 8) and (1 1 8) reflections were used to determine the lattice constants corresponding, respectively, to in-plane along the stress axis, out of plane, and in-plane at 45 degrees to the stress axis (the last from the split peak in the twinned state).

Figure S2 shows the uniaxial strains. The out-of-plane strain is surprisingly large compared to the in-plane strain, implying a Poisson ratio greater than unity. A Poisson's ratio greater than unity is unexpected in an isotropic, linear elastic material, and so this large change in planar spacing may result from a significant magnetostructural response due to the ferromagnetic Eu layers, as well as tuning of the nematic order. Most significantly, the orthorhombicity is found to be suppressed by roughly 30% at maximum tension, while being relatively unaffected (or even slightly enhanced) by compression. Given the competition between nematic order and superconductivity, it is clear that the sharp reduction in the resistivity with tension can be attributed (at least in part) to a strain-suppression of the nematic order (see also Fig. S6a). This result is fully in agreement with previous work in Co-doped BaFe2As2, where tension (compression) applied along the tetragonal [1 0 0] direction resulted in a suppression (enhancement) of the nematic transition temperature (25). We observe a similar effect via warming the sample through the nematic transition under either tension or compression (Figure S3). Phenomenologically, the tuning of nematicity with stress applied along the tetragonal [1 0 0] direction results both from the introduction of an orthogonal antisymmetric (B1g) strain, which acts to suppress the nematicity and the orthorhombicity, and from a surprisingly large sensitivity to the symmetric (A1g) strain, which tunes the unit cell volume despite not breaking any symmetries (see ref. (25)).

III. XMCD of 2K result

On sample 2, the first XMCD data were taken after the initial cooldown at zero applied strain at T = 2 K and 10 K through a field range of μ0H = ±1 T.
All XMCD data in this work are normalized to this μ0H > 0.3 T, T = 2 K fully saturated XMCD value, which corresponds to the fully ordered Eu magnetic moment of ~7 μB (reached for μ0H ~ 0.3 T). The initial XMCD saturation value at T = 10 K, μ0H = 0.25 T is approximately 80% of the 2 K saturation value. See Main Text for details of the XMCD measurement.

IV. Freestanding magnetoresistance of Sample 1

In the freestanding state prior to mounting on the strain cell, sample 1 was cooled through the superconducting and ferromagnetic transitions under zero field (Fig. S5a,b, black), and with an applied field of μ0H = 0.1 T, 0.2 T and 1.0 T either in-plane (Fig. S5a) or out of plane (Fig. S5b). An out-of-plane field is found to only increase the resistivity, while only lowering the value of T0. In sharp contrast, an in-plane field is far more detrimental to superconductivity in the intermediate temperature range, but zero resistance is reached at an enhanced value of T0 = 9.0 K for μ0H = 0.2 T, demonstrating field-induced superconductivity.

V. Assessing the nonzero background resistance of Sample 1

Extensive resistivity measurements of sample 1 were made under different temperature, field and strain states. Unfortunately, after mounting sample 1 on the strain device, a field, strain and temperature dependent background resistivity of order ρ/ρ0 ≈ 1% was present, masking the true entrance into the zero-resistance state (Fig. S6). This is apparently due to a small volume of the sample which buckled under strain and thus behaves as if heavily compressed, effectively raising its respective value of T0 while still being highly strain, field and temperature dependent.

Here we describe the workaround to this issue. Prior to mounting on the strain device, the sample reached zero resistance at T0 = 7.5 K. In Figure S6, the orange trace is under small tension (εxx = 0.04%) and has a lower resistivity than the freestanding trace at all temperatures above T = 9 K, indicating an enhancement to the superconductivity. Below T = 9 K, the orange trace has a higher resistivity and never reaches zero. The sample should be expected to reach zero resistance at higher temperature under tension, as is observed in sample 2. At T = 7.5 K, the orange trace should already be in a zero-resistance state, but instead has a value of approximately ρ/ρ0 = 1%. We thus use this value as a conservative estimate for the temperature and field entrance into the true zero-resistance state. We determine the value of T0 under fixed strain as T0(εxx = -0.19%) = 4 K, T0(εxx = 0.04%) = 7.5 K, and T0(εxx = 0.20%) = 10.3 K.

Comparing calculations with the full Hund's rule coupling, where both the Schrieffer-Wolff Eu(f)-Fe(d) antiferromagnetic coupling and the Hund's rule Eu(f)-Eu(d) ferromagnetic coupling are included, with those where the Hund's coupling is switched off and only the former is operative, we see that the Eu(f)-Fe(d) coupling is indeed antiferromagnetic (the red bands are always below the blue ones) and rather large for some bands at the Fermi level, while the competing Eu(f)-Eu(d) interaction largely cancels it at full Hund's coupling; this cancellation is, fortuitously, nearly complete right at the Fermi level (while at ~0.2 eV below or above it becomes large, up to 100 meV). This confirms our conjecture of the fragile character of this cancellation.
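The workaround of Section V amounts to a simple threshold analysis. The sketch below uses synthetic placeholder data (not the measured sweeps of Fig. S7) to illustrate how the entrance and exit fields of the zero-resistance window can be read off against the ρ/ρ0 = 0.01 background cutoff quoted above; the function and variable names are ours.

```python
# Sketch of the cutoff analysis described in Section V: locate the field window in which
# the measured resistivity falls below the background level rho/rho_0 = 0.01 (1%).
# The sweep below is synthetic placeholder data, not the measured curves of Fig. S7.
import numpy as np

def zero_resistance_window(field_T, rho_over_rho0, cutoff=0.01):
    """Return (H_enter, H_exit) bounding the region where rho/rho_0 < cutoff, or None."""
    below = np.flatnonzero(rho_over_rho0 < cutoff)
    if below.size == 0:
        return None   # no field-induced zero-resistance state at this temperature/strain
    return field_T[below[0]], field_T[below[-1]]

# Synthetic magnetoresistance with a dip near the Eu saturation field ~0.25 T:
H = np.linspace(0.0, 0.6, 121)
rho = 0.05 + 0.10 * (H - 0.25) ** 2 - 0.048 * np.exp(-((H - 0.25) / 0.05) ** 2)
print("zero-resistance window (T):", zero_resistance_window(H, rho))
```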
Figure 1. Three Doors into Superconductivity. (A) Eu(Fe0.88Co0.12)2As2 consists of stacked planes of Eu and FeAs layers, with the former exhibiting ferromagnetism (FM; TFM = 17 K) and the latter hosting both nematicity (N, TS = 66 K) and superconductivity (SC, TSC = 19 K). The FM/SC phase competition prevents a zero-resistance state from being reached until the lower temperature T0, below which the three phases simultaneously coexist. (B,C) Three doors (i.e. tuning parameters) lead to the zero-resistance state (shaded areas of the phase diagram). One door is field-induced superconductivity (FI, cyan). Applying a small in-plane magnetic field (Hsat) reorients the Eu moments and reduces the magnetic flux through the FeAs layers, enhancing superconductivity and boosting T0. A second door is strain-induced superconductivity (SI, lavender). As in other iron-pnictide superconductors, the N/SC phase competition enables an effective strain-tuning of superconductivity via strain-tuning the lattice-coupled nematic order. Here, tensile (compressive) stress along the FeAs bonding direction enhances (suppresses) superconductivity and increases (decreases) T0. The third door is simply tuning the temperature (TI) to cross the externally-tuned value of T0. With combined strain and field tuning, T0 can in principle take any value between T = 0 and T = TFM (green).

Figure 3. Strain-tunable field-induced superconductivity. (A) Freestanding resistivity vs temperature at fixed in-plane applied field (μ0H = 0 T, black; μ0H = 1 T, gray) and resistivity vs in-plane field at fixed temperature. Onset of superconducting transition (TSC = 19 K) and ferromagnetic order (TFM = 17.2 K) indicated. (B) Same resistivity vs field data as in (A) plotted against a logarithmic y-axis, with additional data at three non-integer temperatures (black). Cyan markers indicate entrance and exit from the zero-resistance state for T = 8 K to 9 K. Minimum of resistivity for T = 10 K to 16.7 K and inflection point at 17 K marked by black circles, corresponding to the in-plane saturation field Hsat needed to align the Eu moments in-plane. Entrance and exit from the zero-resistance state marked by cyan squares. (C) Resistivity vs field at fixed temperatures (4 K to 5 K) for one tensile and one compressive strain state and corresponding freestanding values from (A). Field range of zero resistance shown in Fig. 4 (shaded).

Figure 4. Strain and Field Tunable Phase Diagram. (Right) Resistivity vs temperature for the zero-strain state (same as black curve in Figs. 2, 3) and for the tensile (green) and compressive (magenta) strain states in Fig. 3C. (Left) Phase boundary between ρ > 0 and ρ = 0 states under zero strain (cyan), tension (green) and compression (magenta), determined by resistivity vs temperature data (diamonds) and resistivity vs magnetic field (squares) from Fig. 3 and Supp. Fig. S5. Field-induced superconductivity indicated by shaded areas for each strain state. Eu in-plane saturation field Hsat taken from minimum of magnetoresistance in Fig. 3B vs. temperature (black circles), with mean-field fit line (red).

Figure 5.
X-ray characterization of independent strain and field tuning. (A,B) Fixed temperature (T = 13.5 K) strain sweep (compressive to tensile) with simultaneous resistivity measurements (A) and XRD measurements (B) of the inline strain, the out-of-plane strain, and the nematicity-driven spontaneous orthorhombicity (see Methods for definitions). (C) The resistivity vs strain device voltage at T = 10 K under in-plane applied fields of μ0H = 0 T and μ0H = 0.26 T. Inset shows the high tension range. The voltage range in (C) corresponds approximately to the range of nominal εxx in (A,B), which could not be simultaneously measured due to sample chamber restrictions. (D) Resistivity vs applied in-plane field at fixed strain values corresponding to colored arrows in the (C) inset. (E,F) The simultaneously-collected resistivity (E) and XMCD (F) vs applied field at T = 10 K for five fixed strain values (see Methods for XMCD normalization details). Eu moment saturation coincides with the minimum of resistivity at H = Hsat. Voltages listed in (E,F) correspond to slightly greater tension states than corresponding voltages in (C) due to different thermal hysteresis in the piezo actuators between the two measurements. Error bars in (B) on the orthorhombicity represent error propagation of Gaussian fits to the split (1 1 8)T reflection peak (see Methods), while error bars on the strains are smaller than the marker size.

Figure S2. Sample 2. Fixed temperature (T = 13.5 K) strain sweep (compressive to tensile) with simultaneous resistivity measurements and XRD measurements of the in-plane (along the stress axis) and out-of-plane lattice constants (presented normalized by their zero-strain values) and of the two in-plane lattice constants at 45 degrees to the stress axis (presented as the antisymmetric strain). Error bars on the antisymmetric strain represent the error propagation of the Gaussian fits to the split reflections.

Figure S3. Sample 2. Resistivity vs temperature under zero strain (black, freestanding data from Fig. S1), tension (green) and compression (magenta). Note: the fixed-strain data has been corrected for a ~3 K thermal lag. As such, we do not attempt to make a quantitative assessment of the strain-tuning of the transition temperature, and instead only share this data to show the basic phenomenology of an enhanced (suppressed) nematic transition temperature with compression (tension), in qualitative agreement with past work in Co-doped BaFe2As2 (25).

Figure S4. XMCD vs field at 2 K and 10 K. Data in (a) collected from a single field sweep from +1 T to -1 T. Data in (b) are the normalized difference of the positive and negative field values in (a).

Figure S5. (a,b) Sample 1 resistivity vs temperature for zero applied field (black) and μ0H = 0.1, 0.2 and 1 T applied in-plane (a) and out of plane (b). For μ0H = 0.2 T applied in-plane, the zero-resistivity temperature rises from T0 = 7.5 K to 9.0 K. (c) At T = 2 K, magnetic field was applied in-plane (red) and out of plane (blue) to extract the upper critical fields Hc2 for each direction, yielding an anisotropy γ ≅ 2.1.

In Figure S7 we show the resistivity vs applied in-plane magnetic field at several temperatures under εxx = -0.19% (a) and εxx = 0.20% (b). The field values where the resistivity crosses ρ/ρ0 = 0.01 are indicated by square markers, and these values are used to define the field-induced superconductivity phase space in Main Text Figure 4. Note that the resistivity below this cutoff value appears to still show a temperature, field, and strain tunability, but this occurs separately from the tunability of the bulk of the sample.
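The XRD analysis described in the Methods (Gaussian fits to individual reflections, conversion of peak centers to lattice spacings, and an orthorhombicity formed from the split in-plane constants) can be sketched as below. All peak positions, widths and lattice values are illustrative placeholders, and the (a - b)/(a + b) form of the orthorhombicity is our assumption of the standard definition rather than a quotation from the paper.

```python
# Sketch of the XRD peak analysis described in the Methods: fit a Gaussian to a Bragg
# reflection, convert the fitted center to a d-spacing via Bragg's law, and form an
# orthorhombicity (a - b)/(a + b) from split in-plane constants. Numbers are illustrative.
import numpy as np
from scipy.optimize import curve_fit

WAVELENGTH_A = 12.398 / 7.6   # x-ray wavelength in Angstrom for E = 7.6 keV

def gaussian(x, amp, center, sigma, offset):
    return amp * np.exp(-0.5 * ((x - center) / sigma) ** 2) + offset

def fit_peak_center(two_theta_deg, counts):
    """Gaussian fit to a single reflection; returns the peak center and its 1-sigma error."""
    p0 = [counts.max() - counts.min(), two_theta_deg[np.argmax(counts)], 0.02, counts.min()]
    popt, pcov = curve_fit(gaussian, two_theta_deg, counts, p0=p0)
    return popt[1], float(np.sqrt(pcov[1, 1]))

def d_spacing(two_theta_deg):
    """Bragg's law: lambda = 2 d sin(theta)."""
    return WAVELENGTH_A / (2 * np.sin(np.radians(two_theta_deg / 2)))

# Synthetic single-peak scan, fitted and converted to a d-spacing:
two_theta = np.linspace(44.0, 45.0, 201)
counts = gaussian(two_theta, 1000, 44.5, 0.05, 50) + np.random.default_rng(0).normal(0, 5, two_theta.size)
center, err = fit_peak_center(two_theta, counts)
print(f"fitted center = {center:.4f} deg, d = {d_spacing(center):.4f} A")

# Hypothetical in-plane lattice constants from the two components of a split peak:
a_lat, b_lat = 3.912, 3.905   # Angstrom, illustrative values
delta = (a_lat - b_lat) / (a_lat + b_lat)
print(f"orthorhombicity delta = {delta:.2e}")
```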
Figure S6. (a) Resistivity vs temperature at two compressive and two tensile strain values, compared to the freestanding value (black). (b) The same data zoomed to observe low resistivity values.

Figure S7. Sample 1. Resistivity vs applied in-plane magnetic field under compression (a) and tension (b). The ρ/ρ0 > 0.01 line marks the cutoff for estimated entrance into the bulk zero-resistance state.

Figure S8. Conventional structure, with a = 5.5372 Å, b = 5.5052 Å, c = 12.0572 Å. Eu atoms are ferromagnetically ordered along the easy axis of the Fe antiferromagnetic order. High symmetry points of the corresponding BZ are shown on the right.

Figure S9. Band structure for U = 9 eV with the Hund's rule coupling at full strength and switched off. Red bands for spin up (aligned to Eu moments) and blue bands for spin down (anti-aligned to Eu moments).

Figure S10. Projected bands of (a) Fe d, (b) Eu f and (c) Eu d.
2023-06-23T06:42:36.219Z
2023-06-21T00:00:00.000
{ "year": 2023, "sha1": "2af056482ad43eb295572e0f06d14d8b3db7f6e1", "oa_license": "CCBY", "oa_url": "https://www.science.org/doi/pdf/10.1126/sciadv.adj5200?download=true", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "2af056482ad43eb295572e0f06d14d8b3db7f6e1", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
53374758
pes2o/s2orc
v3-fos-license
Air pollution - a tale of two cities

London and Beijing. Capital cities half a world away, but plagued by a common – and deadly – problem: air pollution. Globally, outdoor air pollution accounts for over 4 million deaths a year [1], and brings forward up to 9,000 deaths a year in London [2] (there are no official city-level statistics for Beijing). Air pollution is the fourth-leading risk factor for disease in the world [3], contributing to a wide range of NCDs [4]: cardiovascular disease (heart disease and stroke), chronic obstructive pulmonary disease (including emphysema and chronic bronchitis), asthma among children, cancer and dementia – affecting the daily quality of life of millions of people. So what are the two cities doing to tackle air pollution?

London's air pollution

Air pollution isn't new. In 1952, London's Great Smog [5] is thought to have killed 4,000 people in just a few days, and it contributed to a total of 12,000 deaths - hospitalisations increased by 50% and respiratory admissions by over 150%. The primary pollutant was sulphur dioxide, with families gathering around coal-burning fires each evening, and with coal-burning power stations by the river in the heart of the city. The smog acted as a tipping point for government action - the Clean Air Act 1956 established 'smoke control areas', with residents given generous subsidies to convert to smokeless fuels, and power stations moved outside urban areas and switched to (cleaner) gas.

So why is air pollution still a problem for London? Part of the answer lies in a well-intentioned push over the last couple of decades to switch from petrol to diesel cars [6], which produce less carbon dioxide and are therefore less of a contributor to climate change - but they are also a source of nitrogen dioxide (NO2) and fine particulate matter (PM), known since the 1990s [7] to be damaging to health. This is barely visible unless you get high above the city, when it shows as a fine brown haze - but there has been a dawning realisation over the last few years that this can no longer continue, with photos of young children standing in facemasks [8] outside their schools making the front page of London's Evening Standard.

Beijing's air pollution

Pollution problems in Beijing are more recent than in London - indeed, China put standards in place for ambient air quality in the 1980s [9], years before industry, transport and energy demands had reached the point at which air pollution became a threat to city dwellers' health. From 2000 onwards, however, air pollution became a real problem, and the 2008 Olympics was a tipping point for awareness and action. Prior to this, air quality data had been kept confidential, but in 2008 the US Embassy began tweeting about pollution levels - finally leading the government to make data public and to acknowledge the problem.
Air pollution is the "biggest public health emergency of a generation" [10] - Sadiq Khan, Mayor of London

Emission compliance is now aggressively monitored and (as in the UK in the 1950s) industrial power plants have been moved outside the most affected cities, cleaner stoves are encouraged, and efforts are being made to convert from coal to cleaner natural gas. Diesel is less of a driver of pollution than in London - there are few diesel cars, and people have been incentivised to buy electric vehicles (getting a licence plate is difficult in Beijing, but until very recently there have been no such restrictions on EVs - and they are not subject to the rules restricting car use on high-pollution days). However, the very rapid growth of buying goods on the Internet, largely delivered by diesel trucks, may affect levels of diesel pollution.

Taking action

Neither city is sitting idly by. In the UK, there has been a rapid recent rise in public pressure on local and national government to take action, particularly on NO2 - one road in London exceeded its annual limit on NO2 within the first five days of 2017 [11]. When the government tried to delay the publication of a national air quality plan, it was successfully taken to court by the campaigning organisation ClientEarth, who claimed that the delay was already unacceptable - and the judges agreed. This resulted in the Air Quality Plan for Nitrogen Dioxide [12], which has been criticised [13] for doing too little in the short term, and for leaving action in the hands of local authorities. This month (September), a report to the Human Rights Council [14] from the UN Special Rapporteur on the human rights implications of hazardous substances and toxic waste has reiterated that the government 'continues to flout its duty to ensure adequate air quality and protect the rights to life and health of its citizens' (para. 31). The mayor of London has been vocal in his concerns about pollution in London, introducing a £10 daily charge [15] from October for the most polluting vehicles and establishing low emission bus zones.

In China, the Air Pollution Action Plan 2013-17 [16] set goals for air pollution (including that the concentration of PM10 - particulate matter less than one 100th of a mm - must fall by 10% or more in all cities) with specific targets for the most affected regions of the country, particularly the industrial east: in Beijing, average PM2.5 (particulate matter less than one 400th of a mm) levels were required to fall from 89.5 μg/m3 in 2013 to 60 μg/m3 (still far above the WHO guideline [17] of 10 μg/m3 as an annual mean). Chinese society is such that government edicts are rapidly rolled out and seen as being for the good of the greater number (in contrast, one imagines that suggesting cars with only odd- or even-numbered licence plates should be allowed on London's streets would not go down well in the UK's more individualist society!). Beijing may be on track to reach these goals, and further targets will soon be set - but the city still has a long way to go. There are days on which air pollution is so bad that schools are closed, and some families are moving to other areas with cleaner air - PM2.5 concentrations can peak at 1,000 μg/m3.
The future

While London may be 'orders of magnitude ahead of Beijing' in terms of what it has achieved, there is no room for complacency - and Professor Jim Zhang believes that: 'while historically London offered much for Beijing to learn on how to use legislations and technologies to improve air quality, the huge success in bike sharing and in using electric vehicles and CNG bus fleets in Beijing may offer London some food for thought'.

Citizen engagement will keep the pressure on government to deliver on its promises. In Beijing, residents are very aware of the threat to health: on bad haze days many people wear face masks, and many households own an air purifier. In both countries, there are apps that provide alerts to warn residents when pollution is particularly high. In the UK, the first National Clean Air Day [18] was held in 2017, with air pollution trending highly on social media, over 200 volunteer-led events, and hundreds of organisations encouraging employees, students, patients and residents to adopt clean air actions: 'An opinion survey for National Clean Air Day showed that 85% of the public think it is important to tackle air pollution and 65% would be willing to pay a monthly contribution to fund air-quality measures. This level of commitment provides a clear mandate for national and local authorities to take ambitious steps to improve air quality swiftly with public backing.' Citizen activism can help to build the evidence base through the use of monitoring kits (e.g. from Friends of the Earth [19]), and health professionals are being encouraged to get involved through the Unmask my City [20] campaign. Finally, it is worth remembering that London and Beijing are just individual cities: the problems of air pollution go far beyond their perimeters. Plans such as the complete phasing out of diesel cars and vans in the UK [21] by 2040, and China's consideration of halting production of all diesel and petrol cars [22], need to be introduced and enforced nationally - and rapidly.

With thanks to Professor Jim Zhang (Duke University in the United States and Duke Kunshan University in China) and Chris Large (Global Action Plan - coordinators of National Clean Air Day).
2018-10-22T07:32:06.083Z
2020-04-18T00:00:00.000
{ "year": 2020, "sha1": "32913052861f46564670da959e2ea50cc4dd318e", "oa_license": "CCBY", "oa_url": "https://sushrutajnl.net/index.php/sushruta/article/download/17/31", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "32913052861f46564670da959e2ea50cc4dd318e", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
26314260
pes2o/s2orc
v3-fos-license
An improved solvent-free synthesis of flunixin and 2-(arylamino)nicotinic acid derivatives using boric acid as catalyst

A simple solvent-free protocol for the preparation of flunixin, a potent non-narcotic, non-steroidal anti-inflammatory drug, is reported using boric acid as catalyst. Its salt, flunixin meglumine, is then prepared under reflux in EtOH. This sustainable method is then extended to the synthesis of a series of 2-(arylamino)nicotinic acid derivatives. The present protocol combines non-hazardous neat conditions with associated benefits such as excellent yield, straightforward workup, and the use of a readily available and safe catalyst in the absence of any solvent, which are important factors in the pharmaceutical industry. The pathway for catalytic activation of 2-chloronicotinic acid with boric acid was also investigated using the Gaussian 03 program package.

Electronic supplementary material: The online version of this article (10.1186/s13065-017-0355-4) contains supplementary material, which is available to authorized users.

So far, several improved methods have been developed for the synthesis of flunixin, including classical reflux in water [12], xylene [5] and ethylene glycol [20]. Some of these methods suffer from several drawbacks such as difficult workup, long reaction times, use of large quantities of non-green organic solvents such as xylene that are harmful to the environment, and harsh reaction conditions. In this regard, the solvent-free approach, which has become increasingly popular in recent years for reasons of economy and pollution prevention, as well as cleaner products, simple work-up, high speed due to the high concentration of materials, and excellent yields, would be the ideal approach, as it is often claimed that "the best solvent is no solvent" [21].

To the best of our knowledge, we herein report the first green synthesis of 2-(2-methyl-3-trifluoromethylanilino)nicotinic acid (3a) under solvent-free conditions in the presence of a catalytic amount of boric acid. In this contribution, studies to reach a broader scope and generality of this reaction are also presented: using other anilines as nucleophiles, several 2-arylaminonicotinic acid derivatives 3 were also synthesized by this method.

In a first attempt, the reaction of 2-methyl-3-trifluoromethylaniline (1) with 2-chloronicotinic acid (2) was examined under various reaction conditions; the results and the optimization of reaction conditions are summarized in Table 1. Initially, the effect of acidic and basic catalysts was tested. In the absence of any catalyst no reaction took place under reflux in water (Table 1, entry 1), and under solvent-free conditions the desired product was obtained in low yield in the absence of any catalyst (entry 2). When this reaction was carried out in the presence of a catalyst such as K2CO3, NEt3, Fe3O4, or DABCO, in the presence or absence of solvent, only a trace amount of product was detected even after 24 h (entries 5-8), while the desired product was obtained in moderate yield in the presence of PTSA and boric acid (entries 9 and 10). Considering the cost and toxicity of these two catalysts, and also due to a slight difference in efficiency between them (entries 21 and 23), we chose boric acid as the optimal catalyst. In the next step the effect of various solvents and temperatures (80-150 °C) was examined (Table 1, entries 8-22). The best yield was obtained at 120 °C under solvent-free conditions (entry 17).
Therefore, considering the viewpoints of green chemistry, the synthesis of flunixin was carried out under solvent-free conditions. Finally, we studied the effect of the ratio of reactants on the yield of the reaction. The optimum reaction conditions for the synthesis of flunixin were therefore as follows: a molar ratio of 2-methyl-3-trifluoromethylaniline (1) to 2-chloronicotinic acid (2) equal to 2:1 at 120 °C under solvent-free conditions in the presence of H3BO3 (20 mg, 30 mol%) as catalyst (Table 1, entry 21). This procedure was also scaled up to 30 g of product; the yield of flunixin remained excellent, although the reaction time increased by about 45 min (Scheme 2). The synthesized flunixin was characterized by FT-IR, 1H NMR, 13C NMR and GC/MS (see Additional file 1).

A quantitative analysis was performed to determine the amount of residual boron in the synthesized flunixin. The result of ICP analysis shows a level of 2.23 ppm (2.23 mg/L) of boron in the product. According to the World Health Organization, the health-based guideline for the boron level in drinking water is 2.4 mg/L.

Since we were very successful in this solvent-free approach with a very specific aromatic amine such as 2-methyl-3-trifluoromethylaniline, we decided to study the scope and limitations of the reaction using other aniline derivatives as nucleophiles. Reactions of aniline derivatives with 2-chloronicotinic acid gave good-to-excellent yields. This result was expected because the pyridine ring is activated toward nucleophilic attack by the formation of a pyridinium salt with the acid catalyst. Table 2 shows several 2-arylaminonicotinic acid derivatives synthesized under solvent-free conditions in the presence of 20 mg H3BO3 at 120 °C. As can be seen, nucleophilic substitution takes place readily at the 2- and 4-positions of the ring, particularly when it is substituted with an effective leaving group such as a halogen atom. In nucleophilic substitution, the intermediate is negatively charged. The capacity of the ring to withstand the negative charge determines the stability of the intermediate and of the transition state that leads to it, and consequently determines the reaction rate. Nucleophilic attack at the 2-position yields a carbanion that is a hybrid of resonance structures. The hybrid structures of the anionic intermediate formed during the nucleophilic attack are especially stable, since the negative charge is located on the electronegative nitrogen atom, which can better accommodate it (Scheme 3). For this reason the nucleophilic substitution occurs preferentially on the pyridine ring rather than the benzene ring, and preferably at the 2- and 4-positions. Various anilines were tolerated in this methodology and gave the product in good yield (Table 2, entries 1, 3, 4, 5, 6, 8, 9), but some anilines (entries 2 and 7) show the influence of electron-withdrawing groups and steric hindrance on their reactivity by giving a low product yield. Remarkably, when primary amines other than anilines were selected for this transformation, no reaction was observed; they are not potent nucleophiles in this methodology, which could be attributed to the higher nucleophilicity of the acyclic primary amines poisoning the catalyst as a result of competitive binding of the primary amine to the boric acid.

Scheme 3. Nucleophilic substitution of a pyridine ring.
The scope in substrates for nucleophilic substitution was also investigated using some pyridine derivatives such as 4-chloropyridine, 3-chloropyridine and 2-chloro-5-nitropyridine. However, when the reaction was run using these substrates, no product was observed even after 12 h (Table 3). The effect of other substituents on the pyridine ring shows that the substitution reactions are more facile if an activating group is present on the ring. 2-Chloronicotinic acid contains a strongly electron-withdrawing group, -COOH. This would be expected to better activate the halogen located ortho to it. This result clearly shows the activating effect of the carboxylic acid group in SNAr of the aromatic ring by forming the resonance-stabilized intermediate known as the Meisenheimer complex [27]. The results for the synthesis of 2-arylaminonicotinic acid derivatives 3 are very satisfactory, affording the respective products in good yield in short reaction times.

Table 3. Scope in pyridine analogues for nucleophilic aromatic substitution.

Flunixin is not soluble in water, and as it is administered by intravenous or intramuscular injection, it is often formulated as the meglumine salt to increase its solubility in water. Therefore the reaction of flunixin (1 mmol) with meglumine (1 mmol) was also examined under various reaction conditions, and the results and the optimization of reaction conditions are summarized in Fig. 2. As can be seen, refluxing in ethanol gave the best results among all examined conditions. The as-produced flunixin meglumine was characterized by FTIR, 1H NMR, 13C NMR and GC/MS (available in Additional file 1), and a high-performance liquid chromatographic (HPLC) method was used to determine the purity of flunixin meglumine (see Additional file 1). As can be seen from the HPLC chromatogram, the purity of the synthesized flunixin meglumine is higher than the standard.

Theoretical study of mechanism

The pathway for catalytic activation of 2-chloronicotinic acid with boric acid was investigated using theoretical gas-phase calculations [36]. The full geometry optimizations and property calculations were performed within the density functional theory (DFT) approach using Becke's three-parameter B3LYP exchange-correlation functional [37] and the 6-311G** basis set [38,39]. The two possible pathways for catalytic activation of 2-chloronicotinic acid with boric acid, i.e. hydrogen bonding and a lone-pair coordinate bond with the empty orbital on the boron, are presented in Fig. 3. The optimized geometries of the IMs, products (P) and transition state (TS), and the numbering used in the analysis of the results, are displayed in Fig. 4 (the optimized structures of nicotinic acid-boric acid, IMs, and product for pathways (a, b) by B3LYP/6-311G(d,p), singlet, in the gas phase). The optimized bond lengths (in nm) and stability energies E (a.u.) for the IMs, TS and products are tabulated in Table 4. The calculations overall indicate a small affinity of boron for the nitrogen donor atom (IMB1), so that the optimized structures tend to form hydrogen bonds (HBDs) (IMB) and give strong support to the suggested mechanism in which HBD formation between the nitrogen in pyridine and an OH of boric acid is the preferred mode of activation. These results also showed that the internuclear distance C1-Cl10 increased from 1.75 in IMA to 2.04 in TS, indicating the cleavage of this bond, while the internuclear distance N22-C1 decreased from 1.91 in IMA.

Table 4. The optimized bond lengths (in nm) and stability energies E (a.u.)
computed at the B3LYP/6-311G(d,p) level of theory for IMA, IMB, TS, PA, and PB (see Fig. 4).

We also used Gaussian 03 [36] to observe the vibrations in the TS and saw that the negative frequency corresponded to the motion of N22 and Cl10. According to the obtained results, the following proposed mechanism seems reasonable for the synthesis of flunixin catalyzed by boric acid (Scheme 4). The substitution reactions proceed via an addition-elimination mechanism. In this proposed mechanism the nucleophilic aromatic substitution occurs in two steps: the first step is the addition of 2-methyl-3-trifluoromethylaniline (1) to the pyridine ring of 2-chloronicotinic acid (2), which was activated through HBD formation between the pyridine and boric acid, leading to the intermediate (I). In the second step, elimination of the halide ion restores the aromaticity, leading to the product (3a).

Conclusion

In summary, we have developed a simple, convenient and efficient method for the synthesis of flunixin using H3BO3 as catalyst under solvent-free conditions. This method was then extended to the synthesis of a series of 2-(arylamino)nicotinic acid derivatives. The present protocol has several advantages, in particular solvent-free conditions, high yields, eco-friendly operation, and experimental simplicity along with mild conditions; most importantly, the flunixin meglumine synthesis was developed and optimized so that it can be transferred to a larger scale for manufacture. Density functional UB3LYP/6-311++G(d,p) calculations give strong support to the suggested mechanism in which HBD formation between the nitrogen in pyridine and an OH of boric acid is the preferred mode of activation.
2017-12-14T19:16:09.061Z
2017-12-01T00:00:00.000
{ "year": 2017, "sha1": "a53fde07628d53f36b7611558c0443d07aca3b08", "oa_license": "CCBY", "oa_url": "https://bmcchem.biomedcentral.com/track/pdf/10.1186/s13065-017-0355-4", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a53fde07628d53f36b7611558c0443d07aca3b08", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
225672099
pes2o/s2orc
v3-fos-license
Correlation of clinical and histopathological diagnoses of oral mucosal lesions at a tertiary care centre: a retrospective study

Oral mucosal lesions (OML) are a serious global problem as they affect the quality of life of people. The prevalence of OML in the out-patient department in western Maharashtra was approximately 39.1%. Dermatological diseases not only involve the skin and its appendages but may also involve the oral cavity. Hence examination of the oral cavity is important for a dermatologist. The lesions of the oral cavity in dermatological disorders may precede the skin manifestations, may be the sole manifestation of these disorders, or may occur simultaneously with skin lesions. OML may present with a variety of symptoms like burning sensation, soreness, intolerance to spicy food, difficulty in swallowing, ulceration, and decreased mouth opening, which affect day-to-day activities. Various groups of dermatological diseases associated with OML are: pre-malignant lesions like leukoplakia, erythroplakia, oral submucosal fibrosis (SMF) and actinic cheilitis; malignant oral squamous cell carcinoma (SCC); vesiculobullous disorders; lichen planus and other lichenoid disorders; infections (bacterial, viral and fungal); collagen vascular diseases; vasculitis like Behçet's disease; erythema multiforme; recurrent aphthous stomatitis; and miscellaneous conditions.

INTRODUCTION

Oral mucosal lesions (OML) are a serious global problem as they affect the quality of life of people.1 The prevalence of OML in the out-patient department in western Maharashtra was approximately 39.1%.2 Dermatological diseases not only involve the skin and its appendages but may also involve the oral cavity. Hence examination of the oral cavity is important for a dermatologist. The lesions of the oral cavity in dermatological disorders may precede the skin manifestations, may be the sole manifestation of these disorders, or may occur simultaneously with skin lesions.3 OML may present with a variety of symptoms like burning sensation, soreness, intolerance to spicy food, difficulty in swallowing, ulceration, and decreased mouth opening, which affect day-to-day activities.

Rationale

OML may be pre-malignant. Therefore, secondary prevention in the form of early detection and timely treatment is the key. Many times, OML are the initial sign of skin diseases. Therefore, it is necessary to diagnose them at the earliest and prevent further progression of the disease. Despite all of the above, OML are often neglected as they go unnoticed or take time to become symptomatic. Due to the similar morphological appearance of the lesions, histopathology is the gold standard, and in this study we examine the correlation between clinical and histopathological diagnoses. There is wide discrepancy in the clinico-histopathological correlation of different types of OML, ranging from 17% to 50%.5,6 Recently a study showed a prevalence of 39% of OML in OPD patients in western Maharashtra, which is very high as compared to other areas. So, a study was conducted in our tertiary centre in Mumbai to study the correlation of clinical and histopathological diagnoses of different OML.

Aim

The aim of the study was to assess the correlation between clinical and histopathological diagnoses.

METHODS

This is a retrospective study of all patients with OML who underwent biopsy over a period of 1 year, from January 2018 to December 2018, in KEM hospital, Mumbai.

Inclusion criteria

All biopsied cases of OML that presented to the department of dermatology of KEM hospital, Mumbai.
Exclusion criteria

Patients with OML who did not consent to biopsy, and those with incomplete data available at the time of analysis.

Records of biopsies conducted in the Department of Dermatology of KEM Hospital over one year were reviewed. All cases of OML with detailed clinical and histopathological data were selected. Histopathology slides of all archived tissues were retrieved for review. 164 patients were included in the study.

Statistical analysis

All responses were tabulated by the investigator using Microsoft Excel software. Graphical representation was made wherever necessary. Concordance index and discrepancy index were calculated as follows.7

RESULTS

Out of the total 164 patients, 104 (63.41%) were males and 60 (36.58%) were females. The maximum number of patients were in the age group of 35 to 50 years.

DISCUSSION

Lichen planus was the most common condition seen in our study, which is in contrast to the study done by Abidullah et al.8 The histopathological correlation was found to be 81.25%. In this study, the commonest site of oral lichen planus was the buccal mucosa. Leukoplakia was the second most common condition in our study. The majority of the patients were male. Most were gutkha chewers, followed by smokers, with the buccal mucosa being the most common site. There is wide discrepancy in the histopathological correlation in different studies. Pemphigus vulgaris was the third most common entity in our study. The reason for the lower correlation could be that most of the patients biopsied had no skin lesions, and intact blisters are difficult to find in the oral cavity. SMF was more common in males, with 75% histopathological correlation. The discrepancy between the clinical and histopathological diagnoses could be attributed to other lesions presenting with the same complaint, that is, difficulty in opening the mouth.13 Most of the patients were betel nut chewers. Squamous cell carcinoma showed 72% correlation, with males more commonly affected than females. The majority of them were addicted to tobacco chewing, smoking, or both. To our knowledge, there are no similar studies with respect to oral SMF and oral SCC. The most common site for all the above conditions was the buccal mucosa in our study. This was a retrospective study including only biopsied patients; many who did not consent to biopsy or whose data were lost could not be accounted for. Also, this study has no statistically significant data with respect to all other OML. Therefore, more detailed prospective randomised studies with larger sample sizes are recommended to further establish the clinico-histopathological correlation in OML.

CONCLUSION

The overall percentage of clinical diagnoses correlating with the histopathological diagnosis was 75.60%, with a discrepancy index of 24.39%; hence histopathology is very important to arrive at the accurate diagnosis and to plan definitive treatment. Histopathological examination of OML must be done routinely because a wide variety of conditions present with similar morphologic features and can be the initial signs of many skin disorders. At times the histopathological examination is inconclusive but the clinical suspicion is very strong, so a repeat biopsy is advisable. Also, a few of the OML can be potentially malignant in nature; in such cases, multiple-site biopsy is better.
2020-07-02T10:38:18.873Z
2020-06-23T00:00:00.000
{ "year": 2020, "sha1": "e3e42b46f16c3936ae8e26fe5b68fda1cd766a18", "oa_license": null, "oa_url": "https://www.ijord.com/index.php/ijord/article/download/1019/571", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e9acdd52c681c901200b3c04d3c18c37d1dd4f56", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
73481035
pes2o/s2orc
v3-fos-license
Titin‐truncating variants are associated with heart failure events in patients with left ventricular non‐compaction cardiomyopathy Background Titin‐truncating variants (TTNtv) have been recognized as the most prevalent genetic cause of dilated cardiomyopathy. However, their effects on phenotypes of left ventricular non‐compaction cardiomyopathy (LVNC) remain largely unknown. Hypothesis The presence of TTNtv may have an effect on the phenotype of LVNC. Methods TTN was comprehensively screened by targeted sequencing in a cohort of 83 adult patients with LVNC. Baseline and follow‐up data of all participants were collected. The primary endpoint was a composite of death and heart transplantation. The secondary endpoint was heart failure (HF) events, a composite of HF‐related death, heart transplantation, and HF hospitalization. Results Overall, 13 TTNtv were identified in 13 patients, with 9 TTNtv located in the A‐band of titin. There was no significant difference in baseline characteristics between patients with and without TTNtv. During a median follow‐up of 4.4 years, no significant difference in death and heart transplantation between the two groups was observed. However, more HF events occurred in TTNtv carriers than in non‐carriers (P = 0.006). Multivariable analyses showed that TTNtv were associated with an increased risk of HF events independent of sex, age, and baseline cardiac function (hazard ratio: 3.25, 95% confidence interval: 1.50‐7.01, P = 0.003). Sensitivity analysis excluding non‐A‐band TTNtv yielded similar results, but with less strength. Conclusions The presence of TTNtv may be a genetic modifier of LVNC and confer a higher risk of HF events among adult patients. Studies of larger cohorts are needed to confirm our findings. | INTRODUCTION Left ventricular non-compaction cardiomyopathy (LVNC), characterized by excessively prominent trabeculations and deep intratrabecular recesses, is classified as a primary genetic cardiomyopathy by the American Heart Association. 1,2 The clinical presentation of LVNC is highly heterogeneous, ranging from no obvious symptoms to serious complications including heart failure (HF), thromboembolism, and ventricular arrhythmias. 3 Therefore, it is crucial to identify patients at high risk for these adverse events and implement appropriate treatment to reduce mortality and morbidity. Titin, encoded by the TTN gene, is a giant filament that spans the hemi-sarcomere of striated muscle. It plays important roles in sarcomeric integrity, signal transmission, passive stiffness, and contraction regulation. 4 Although a missense variant of TTN has been shown to be associated with a penetrant cardiomyopathy with features of LVNC, 5 the significance of most missense variants remains unclear. 6 However, titin-truncating variants (TTNtv) have been recognized as the most common genetic cause of dilated cardiomyopathy (DCM) and appear to modify the phenotype of hypertrophic cardiomyopathy. 6,7 Here, we hypothesized that TTNtv might also act as a secondary modifier rather than a primary factor in the context of LVNC. Thus, we conducted this study to investigate the prevalence of TTNtv, and their correlations with clinical manifestations and longterm prognosis in a Chinese cohort of patients with LVNC. | Study design and subjects Data were obtained from a cohort of patients with LVNC that has already been described. 8 | Follow-up and clinical outcomes Outcome data were obtained through a telephone interview or clinic visit. 
The last follow-up was performed in April 2018. The primary endpoint was a composite of all-cause death and heart transplantation. The secondary endpoint was HF events, a composite of HF-related death, heart transplantation, and HF hospitalization. HF-related death was defined as death preceded by symptoms of HF lasting >1 hour. HF hospitalization was defined as a hospital stay of >24 hours with a primary diagnosis of HF during the follow-up.

| Statistical analysis

Categorical variables are presented as frequency and percentage, and continuous variables are expressed as median (interquartile range). Differences in participant characteristics were compared by Pearson's χ2 test or Fisher's exact test for categorical variables, and by independent-sample t tests or Mann-Whitney U tests for continuous variables. Survival curves were constructed by the Kaplan-Meier method and compared by the log-rank test. Univariable and multivariable Cox proportional hazards regressions were performed to calculate the hazard ratio (HR) and 95% confidence interval (CI), and to evaluate the association between TTNtv and clinical outcomes. Covariates included in the multivariable model were age, sex, and New York Heart Association functional class III/IV at baseline. Differences were considered significant if the two-sided P-value was <0.05. Sensitivity analyses excluding all non-A-band TTNtv were performed to reduce the confounding due to a position-related effect of TTNtv. All analyses were performed with SPSS version 22.0 software (IBM Corp., Armonk, New York).

| Study population and genetic findings

A total of 83 adult patients were included in the study, of which 58 (69.9%) were male and the mean age at enrollment was 44 years (Table 1). A total of 13 TTNtv were identified in 13 (15.7%) patients, with 9 variants in the A-band of titin (Table S1 in Supporting Information). There were no multiple TTNtv carriers. Among the 13 carriers of TTNtv, 2 patients (15.4%) also carried a probably pathogenic variant in other cardiomyopathy-related genes (Table S2).

| Genotype-phenotype correlation at baseline

There were no significant differences between TTNtv carriers and non-carriers in terms of demographic data, LVNC subtypes, comorbidities, cardiac function, or echocardiographic findings (Table 1).

| Genotype-phenotype correlation for outcomes

During a median follow-up of 4.4 (2.8-6.2) years, 28 (33.7%) patients reached the primary endpoint and 35 (42.2%) experienced HF events (Table 2). There were significantly more HF events and HF hospitalizations in TTNtv carriers than in non-carriers (P = 0.006 and 0.003, respectively). No significant differences in other endpoints were observed between TTNtv carriers and non-carriers. Univariable and multivariable analyses found that New York Heart Association functional class III/IV at baseline was a strong predictor of the primary and secondary endpoints (Table 3). In addition, the presence of TTNtv was significantly associated with an increased risk of HF events independent of sex, age, and baseline cardiac function.

| Sensitivity analysis

A-band TTNtv are suggested to have higher penetrance than variants of other regions in the context of DCM.12 Therefore, sensitivity analyses were performed after the exclusion of non-A-band TTNtv. A total of nine A-band TTNtv were detected in nine patients, who had larger left ventricular end-diastolic dimension and lower left ventricular ejection fraction (Tables S1 and S3).
During the follow-up, significantly more HF events and HF hospitalizations occurred in A-band TTNtv carriers than in non-carriers (Table S4). After adjustment, a marginal association was found between TTNtv and HF events (HR: 2.35, 95% CI: 0.99-5.58, P = 0.052, Table S5). Notably, five of the nine patients with A-band TTNtv had an additional diagnosis of DCM (dilated LVNC, Table S2). | DISCUSSION In a cohort of 83 adult patients with LVNC, 13 TTNtv were identified in 13 participants, including 9 TTNtv in the A-band region of titin. No significant phenotypic differences were found between TTNtv carriers and non-carriers at baseline. During the follow-up, the presence of TTNtv was associated with an increased risk of an HF event after adjustment for sex, age, and baseline cardiac function. Sensitivity analysis excluding non-A-band TTNtv yielded similar results, but with less significance. As the most prevalent genetic cause of DCM, TTNtv account for 20% to 25% of familial cases. 13 Interestingly, it has been reported that TTNtv are also detected in about 10% to 15% of patients with LVNC, 14,15 although no causal association between TTNtv and LVNC has been identified. Consistent with these studies, 15% of the patients in our study were found to carry TTNtv. The reason why Hypertrophic LVNC 4 (4. 20,21 which can also contribute to an increased risk of HF events. 22 These adverse effects can be even more prominent in the context of LVNC, in which there was originally supposed to be a higher risk of HF. A meta-analysis showed that A-band TTNtv had larger odds ratios than other TTNtv, suggesting position-dependent effects on the penetrance of TTNtv in DCM. 12 In our study, the association between It has been recognized that nearly 40% of adult patients with LVNC could experience HF events. 23 Consistent with this, 42% of the patients in our study suffered HF events during follow-up, underlining the high risk of HF-related adverse outcomes associated with this disease. However, risk stratification has been challenging in these patients due to a lack of specific prognosticators. In this regard, our findings that TTNtv confer an increased risk of HF events are important because TTNtv can be informative in risk assessment. For example, patients with TTNtv need close follow-up and may benefit from the early initiation of anti-HF therapy. Some limitations to our study should be noted. First, it was an observational study, which could suffer from residual confounding. Second, all participants were of Chinese Han ancestry from a single center, which might limit the generalizability of our findings. Third, the sample size was relatively small and insufficient to draw definite conclusions. Further study with a larger sample size is needed to confirm our findings. Fourth, functional models to corroborate our findings are still lacking. | CONCLUSIONS In a Chinese cohort of patients with LVNC, the presence of TTNtv was found to be associated with an increased risk of HF events independent of sex, age, and baseline cardiac function. The identification of TTNtv may contribute to overall risk assessment in LVNC.
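The survival comparisons described above (Kaplan–Meier curves, a log-rank test, and a Cox model adjusted for age, sex, and NYHA functional class III/IV) can be carried out with standard open-source tools. The sketch below is illustrative only: it is not the authors' analysis code, and the file name and column names ("years", "hf_event", "ttntv", "age", "male", "nyha34") are hypothetical.

```python
# Illustrative sketch (not the authors' code): time-to-HF-event analysis of the kind
# described in the Methods, using the lifelines package. File and column names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("lvnc_cohort.csv")          # one row per patient (hypothetical file)
carriers = df[df["ttntv"] == 1]
noncarriers = df[df["ttntv"] == 0]

# Kaplan-Meier estimates of HF-event-free survival in each genotype group
km_c = KaplanMeierFitter().fit(carriers["years"], carriers["hf_event"], label="TTNtv carriers")
km_n = KaplanMeierFitter().fit(noncarriers["years"], noncarriers["hf_event"], label="non-carriers")

# Log-rank comparison of the two survival curves
lr = logrank_test(carriers["years"], noncarriers["years"],
                  event_observed_A=carriers["hf_event"],
                  event_observed_B=noncarriers["hf_event"])
print(f"log-rank P = {lr.p_value:.3f}")

# Multivariable Cox model: TTNtv adjusted for age, sex, and NYHA class III/IV,
# mirroring the covariates named in the Statistical analysis section
cph = CoxPHFitter()
cph.fit(df[["years", "hf_event", "ttntv", "age", "male", "nyha34"]],
        duration_col="years", event_col="hf_event")
cph.print_summary()   # exp(coef) for "ttntv" is the adjusted hazard ratio with its 95% CI
```

In a model of this form, the hazard ratio of 3.25 (95% confidence interval 1.50-7.01) reported above corresponds to exp(coef) for the TTNtv indicator.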
Tissue Variability and Antennas for Power Transfer to Wireless Implantable Medical Devices The design of effective transcutaneous systems demands the consideration of inevitable variations in tissue characteristics, which vary across body areas, among individuals, and over time. The purpose of this paper was to design and evaluate several printed antenna topologies for ultrahigh frequency (UHF) transcutaneous power transfer to implantable medical devices, and to investigate the effects of variations in tissue properties on dipole and loop topologies. Here, we show that a loop antenna topology provides the greatest achievable gain with the smallest implanted antenna, while a dipole system provides higher impedance for conjugate matching and the ability to increase gain with a larger external antenna. In comparison to the dipole system, the loop system exhibits greater sensitivity to changes in tissue structure and properties in terms of power gain, but provides higher gain when the separation is on the order of the smaller antenna dimension. The dipole system was shown to provide higher gain than the loop system at greater implant depths for the same implanted antenna area, and was less sensitive to variations in tissue properties and structure in terms of power gain at all investigated implant depths. The results show the potential of easily-fabricated, low-cost printed antenna topologies for UHF transcutaneous power, and the importance of environmental considerations in choosing the antenna topology. I. INTRODUCTION Passive wireless implantable medical devices provide opportunities for improved patient monitoring, documentation, and treatment. Powered transcutaneously by electromagnetic waves, passive implantable devices enable miniature, batteryless implants, but consequently depend on reliable power transfer through tissue. Passive wireless implantable devices have no implanted battery, instead harvesting energy from incident electromagnetic waves, allowing dramatic miniaturization and extended lifetime of the implant [1]- [4]. Additionally, passive devices are particularly well-suited for periodic monitoring in combination with implanted biosensors, which only need to be powered when obtaining a sensor reading [5], [6]. The function of passive implantable devices is dependent on the generation of electromagnetic fields at the implantation site and the efficient capture of energy using implantable antennas. A practical transcutaneous system must be expected to function reliably for each patient and across various patients. However, biological tissue is a variable and often unpredictable medium for electromagnetic power transmission [7]. Depending on the application, a transcutaneous system may need to function at different locations in the body where there are differences in tissue structure (e.g., reading from implanted biosensors at various body locations). For example, skin and fat will be encountered for subcutaneous implants in the leg and the arm, while power transfer through the skull must be considered for cortical implants. Even the same body area is expected to have different composition among individuals, due to variations in characteristics such as body fat content and muscle size [8]. Tissue properties also vary within a single individual over time due to changes in body fat and fluid content with age or behavior [9]- [11]. 
All such tissue variations affect electromagnetic power transfer to a passive implantable device and therefore dictate the functionality of a wireless transcutaneous system. While previous studies have investigated the effect of tissue and tissue variations on implantable antenna properties including input impedance, they are mostly dedicated to examining the implantable antenna alone or they focus on a single antenna topology (typically loop antennas) [12]- [20]. In transcutaneous systems where the external antenna is in proximity to the implant, the external and implanted antenna are not isolated. IEEE Standard C95.1 defines the far field region boundary as 2D 2 /λ, where D is the antenna dimension and λ is the electromagnetic wavelength [21]. The field region and power transfer mechanisms of an antenna system are therefore dependent on the operating frequency, the antenna dimensions, the transmission medium, and the antenna separation distance. When the two antennas are not isolated, they must be analyzed simultaneously to account for loading effects of the implant in addition to effects of the tissue medium [16], [17], [20]. Mark et al. performed such a two-antenna analysis, investigating variability and uncertainty in thickness and dielectric properties of tissues in the head, determining the maximum achievable gain across a frequency range of 100 MHz to several GHz using loop antennas [17]. However, maximum achievable gain assumes simultaneous conjugate matching to maximize power transfer, while the power gain of a two-antenna system is ultimately sensitive to mismatch due to changes in the system impedance [22]. It is therefore important to quantify the effects of tissue variability on a transcutaneous system, to ensure the system continues to function efficiently and safely in variable tissue environments. The goals of this study were to compare antenna topologies in terms of transcutaneous power gain and to examine the effects of tissue variability. The choice of antenna topology is integral to the function of a system, and the optimal choice of topology can improve power transfer while also simplifying the design of impedance matching networks. In the first part of this work, four printed antenna topologies were evaluated in terms of their function in a UHF system for transcutaneous power transfer to an implanted device: planar dipole, meandered dipole, single-turn square loop, and three-turn square loop. These topologies were chosen for their relative ease of fabrication and their potential for use in thin, miniature implantable devices. In the second part of this study, the power gain of transcutaneous systems was calculated with varying tissue characteristics for dipole and loop antenna systems. Maximum power gain for each configuration was compared to power gain with mismatch due to tissue variability. Optimizing matching networks for a particular tissue composition mimics designing a transcutaneous system based on the expected properties of the physiological implant location. Varying the tissue composition and tissue properties then represents variations that will be encountered using such a system in practice. II. METHODS Throughout this work, the power gains of transcutaneous systems were calculated from simulations in ANSYS HFSS 15.0, a 3-D electromagnetic field solver utilizing the finite element method. Each modeled system consisted of an external antenna and an implanted antenna separated by tissue, analyzed as a two-port network. 
Scattering parameters (S-parameters) obtained from simulation were used to calculate power gain with conjugate impedance matching and with impedance mismatch. Simulations were performed at 915 MHz to utilize the UHF ISM band for wireless communications and the sub-GHz range recommended for efficient midfield wireless power transfer to miniature implants [16]. The dimensions of the implant antenna in this work were constrained within 1 cm by 1 cm, and the antenna separations range from 3.5 mm to 16.1 mm. Therefore, the operating field regions in this work include both the reactive near field and radiative near field regions, and necessitate full-wave simulation [16], [21]. A. PART I: ANTENNA TOPOLOGIES AND DIMENSIONS In the first part of this study, the simulated tissue was a simplified layered model consisting of skin, fat, and muscle, similar to that used in [16], [17], and [23]. The thickness of each layer was modeled with reference to values measured for the arm: 2.2 mm skin, 10.8 mm fat, and 35 mm muscle [10]. Tissue properties were defined according to measured values for skin, fat, and muscle [24]. For simulations in tissue, the antenna was positioned between the fat and muscle layers as it would be for a subcutaneous implant, as shown in Figure 1, resulting in an antenna separation of 13 mm [25]. Fabricated antennas were tested experimentally using tissue phantoms, and antenna parameters were measured using a vector network analyzer (VNA) (Agilent 8753ES S-Parameter Network Analyzer). Layered tissue phantoms were constructed according to procedures and formulations in [26] and [27], with layer dielectric properties similar to human skin, fat, and muscle, and layer thicknesses based on measured values for the arm (to match the simulation model). During measurements, the implanted antenna was positioned between the layers of fat and muscle phantom, to replicate the positioning of the implant in simulation. 1) ONE-PORT MODEL VERIFICATION To first validate the simulation model, one-port simulations of implanted antennas were performed in the tissue model and in air and compared to one-port measurements on fabricated antennas. Initially examining only the implanted antenna decreased the simulation complexity and simplified the measurements necessary to verify the simulation model. Values of the input port reflection coefficient (S 11 ) were obtained from one-port simulations of the two simplest antenna topologies: a planar dipole and a single-turn loop, shown in Figure 1. The input port reflection coefficient is directly related to the input impedance of the antenna, which is a function of the antenna size and topology and the surrounding media. Simulated and measured S 11 were therefore used to evaluate the discrepancies between the simulation model and fabricated antennas of the same topology and dimensions. A feed gap of 1 mm was chosen for both the dipoles and the loops, due to the dimensions of standard connectors that would be used later for measurements. The dipole length and trace width and the loop size and trace width (as labeled in Figure 1) were varied in simulation. The same sets of antenna dimensions were simulated in tissue and air to allow use of the same fabricated antennas for measurements in tissue and air. The simulated dimensions of the dipole and loop were chosen such that the dipole length and loop perimeter extended up to at least the first expected resonance in fat, with fat having the lowest permittivity and therefore longest wavelength. 
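As a sanity check on these sweep ranges, the expected first resonances can be estimated from the tissue permittivity alone. The short sketch below is a back-of-the-envelope estimate rather than part of the simulation workflow; the relative permittivity assumed for fat at 915 MHz is a representative value, and tissue losses are neglected.

```python
# Back-of-the-envelope estimate of the first resonance in fat at 915 MHz.
# eps_r is an assumed representative relative permittivity for fat; losses are ignored.
import math

c = 2.998e8          # speed of light, m/s
f = 915e6            # operating frequency, Hz
eps_r = 5.5          # assumed relative permittivity of fat near 915 MHz

wavelength = c / (f * math.sqrt(eps_r))        # wavelength in fat
dipole_resonant_length = 0.47 * wavelength     # first dipole resonance at ~0.47 wavelengths
loop_resonant_side = 1.2 * wavelength / 4      # square loop: first resonance at ~1.2-wavelength perimeter

print(f"wavelength in fat      : {wavelength*100:.1f} cm")             # ~14 cm
print(f"dipole resonant length : {dipole_resonant_length*100:.1f} cm") # ~6.6 cm
print(f"loop side at resonance : {loop_resonant_side*100:.1f} cm")     # ~4.2 cm
```

The resulting estimates (roughly 14 cm wavelength in fat, 6.6 cm dipole length, and 4.2 cm loop size at first resonance) are the same figures quoted in Appendix A, which is why the swept dimensions were extended at least that far.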
The dimensions were also set according to printed circuit board manufacturing specifications and to prevent any overlap of the traces. The increment size of the swept dimensions was chosen to observe trends in S 11 over the full range of simulated dimensions, including the expected resonances in tissue (see Appendix A). The simulated values of S 11 were compared with VNA measurements of the fabricated antennas, using layered tissue phantoms [26], [27]. Each of the fabricated geometries corresponded to one of the simulated configurations. The fabricated geometries were chosen to compare trends observed with changing each dimension in simulation. The expected resonances were in agreement between simulation and measurement to within 2 cm dipole length and 1 cm loop size, with differences attributed to phantom dielectric properties (see Appendix A). 2) TWO-PORT ANTENNA SYSTEMS Next, both implanted and external antennas were simulated with the tissue model as a two-antenna system and analyzed as a two-port network to compare the performance of each antenna topology. This was necessary as simulations of the implanted antenna alone do not adequately inform about the behavior of the antenna in a transcutaneous UHF system including both implanted and external antennas. The proximity of the implanted antenna in such a system affects the fields of the external antenna. Two-antenna simulations were performed to evaluate systems of planar dipoles, meandered dipoles, single-turn loops, and three-turn loops as shown in Figure 1. The two-antenna simulations included an external antenna and an implanted antenna separated by layers of tissue, modeling transcutaneous power transfer to a subcutaneous implanted device. The antenna positions were as depicted in Figure 1: the implanted antenna was positioned under the layers of skin and fat and on top of a layer of muscle, while the external antenna was positioned on the external surface of the skin, such that the layers of skin and fat separated the two antennas. The implanted and external antennas were of the same topology within each simulated two-antenna system. The dimensions of the antennas were varied in simulation to compare power gain across different configurations of implanted and external antenna dimensions. The feed gaps and trace widths of the antennas were chosen with the same rationale previously discussed for one-port simulations and measurements. A maximum size limit of 1 cm square was chosen for the implanted antenna, a size comparable to other works on implantable antennas with dimensions ranging from 1 mm by 1 mm to 3 cm by 2 cm [28]- [34]. The size constraints on an external antenna are more relaxed, but the transmitter size was kept under 3 cm square to constrain the number of simulation iterations. The tissue geometry and composition were held constant while the antenna dimensions were varied. The following dimensions of each antenna topology were varied in simulation (as labeled in Figure 1): the meander height and trace width of the meander dipole, the loop size and trace width of the three-turn loop, the length and trace width of the planar dipole, and the loop size and trace width of the single-turn loop. The dimensions were swept in simulation within the previously stated size limits for the implant and the external antenna (see values in Appendix A). 
The antennas were simulated at every combination of implant and external dimensions, such that a single implant size and trace width was simulated with each of the sizes and trace widths of the external antenna. This resulted in a total of 720 configurations of the planar dipole system, 360 configurations of the meandered dipole system, 360 configurations of the single-turn loop system, and 360 configurations of the three-turn loop system. More configurations were possible for the dipole due to the lack of restrictions on trace width that were necessary to prevent overlapping of traces in the meandered dipole and the loop topologies. S-parameters were used to calculate maximum power gain for each system. The maximum power gain for each set of dimensions was calculated as the power gain assuming simultaneous conjugate matching at the source and the load. The peak power gain was then determined as the greatest maximum power gain for a given topology across all of the combinations of external and implanted antenna dimensions. The peak gain represents the maximum power transfer between an external antenna of up to 3 cm square and a subcutaneous implant of up to 1 cm square, given a particular antenna topology and operating frequency. As implementing a complete physical system requires realizable impedance matching networks, the impedance values required for conjugate matching were also considered when comparing antenna topologies. B. PART II: TISSUE VARIABILITY The tissue variability analysis was performed using singleturn square loops or meandered dipoles for the external and implanted antennas, based on the results of the first part of the study. The dimensions of the antennas, shown in Figure 2, were held constant to isolate the effects of changing the tissue characteristics. The analysis was performed as follows: first, maximum power gain with simultaneous conjugate matching was calculated at one of five implant locations shown in Figure 2 (arm, thigh, scalp, cortex, or abdomen); next, the antenna system location was varied over the five locations with fixed matching networks, and the power gain at each location was calculated using the S-parameters from simulation and the impedances of the fixed matching networks. The implant antenna was positioned under skin and fat in the arm, thigh, abdomen, and scalp locations, and under the skull in the cortex location ( Figure 2). The external antenna was always positioned in contact with the external skin surface. For each configuration, power gain without conjugate matching was compared to maximum power gain achievable with simultaneous conjugate matching. The effect of changing tissue dielectric properties was also investigated by comparing the gain at each location with dry skin and wet skin, based on reported properties [24]. Tissue thicknesses and properties were varied to cover a range of tissue locations measured in the literature representing potential locations of transcutaneous systems. The tissue composition in the arm and thigh was simplified to layers of skin, fat, muscle, and bone. Tissue layers used to represent the abdomen were skin, fat, muscle, and body fluid. Tissue layers used for the head were skin, fat, skull, gray matter, and white matter. Tissue layer thicknesses were modeled according to measured values reported throughout the literature [8], [10], [35]- [39]. Dielectric properties of each tissue were defined according to reported values [24]. Each location and the associated tissue layer thicknesses are shown in Figure 2. 
The model of the abdomen included body fluid behind the muscle extended to terminate the model, to simulate the abdominal cavity. The model of the head included brain tissue behind the skull with white matter extended to terminate the model [40]. A. PART I: ANTENNA TOPOLOGIES AND DIMENSIONS The results of the antenna topology comparison are visually summarized in Figure 3. For the planar dipole, a peak power gain of −23.19 dB occurred at an implanted dipole length Assuming a power of 1 W delivered at the source, the results imply a peak received power of 29.7 mW for single-turn loops, 4.8 mW and 6.9 mW for planar dipoles and meandered dipoles, respectively, and 53.9 µW for three-turn loops. This obviously does not account for absorption limitations and assumes a fixed operating frequency, and therefore primarily indicates how the topologies compare in terms of peak power gain and trends associated with changing dimensions. The maximum power gain of the planar dipole increased with greater length and trace width of the external and implanted dipole. The power gain of the meandered dipole system was greatest when the external and internal dipole meander heights were equal. The power gain increased with greater trace width of both the external and implanted dipole. The meandered dipole length was a function of the trace width, so this result is analogous to the planar dipole system. The power gain of the single-turn loop system was greatest for loops of approximately the same size, and increased with greater trace width of both the implanted and external loops. Power gain of the three-turn loop system increased with greater trace width of both the external and internal loops, and was greatest for the smallest implanted loop size and an external loop size of 2 cm (see Appendix A for more details on the results of Part I). B. PART II: TISSUE VARIABILITY The results of the tissue variability analysis are summarized in Figures 4, 5, 6a, and 6b. For both antenna systems, the highest maximum gains were possible through the scalp, and the lowest maximum gains occurred through the abdomen. For the single-turn loop system, there was a consistent trend of decreased maximum gain with increased implantation depth. For the meandered dipole system, the greatest power gains occurred through the scalp, followed by the cortex, arm, thigh, and abdomen. This corresponds to decreased gain with increased depth except for the arm and thigh. The meandered dipole was determined to be more affected than the singleturn loop by differences in the tissue geometry. This is a potential explanation for the lack of a consistent trend of decreased gain with increased depth for the dipole system. A comparison of the loop and dipole power gain in each configuration is shown in Figure 7 and Figure 6c. The loop antenna system afforded higher maximum gain than the dipole system through all but the thickest tissue (abdomen), consistent with loop antennas being most effective in the near field and the observation that magnetic field strength decreases with greater separation of the loop antennas [12]. From the first part of this study, it was determined that increasing the size of the external loop antenna at this separation did not improve the gain, while increasing the length of the external dipole increased the gain for the same size of implanted antenna. Therefore, the dipole system presents an advantage for greater implantation depth. 
It is expected that further increase in the external dipole length could allow greater implant depths, within safety limitations on tissue absorption. The gain of the loop system varied over a wider range than the dipole system when the tissue location was varied. The power gain of the dipole system was therefore more consistent through variable tissue, but was generally lower than the loop system. Similar to the results across body locations, the loop system generally provided higher gains while the dipole system provided greater consistency in the presence of dielectric property variations. The lowest gains with non-optimal matching networks through the abdomen, arm, thigh, and scalp were seen when the matching was optimized to the cortex location. Conversely, when matching was optimized to the abdomen, arm, or thigh, the gains through the arm and thigh were greater than the gain at the cortical location even though the implantation depth is less at the cortex. These effects can be attributed to the difference in tissue composition: at the abdomen, arm, and thigh locations there were layers of skin and fat between the antennas, while at the cortex location there were layers of skin, fat, and bone. At the greater loop separation in the abdomen, the power gain was more sensitive to changes in tissue properties, indicating that at greater implantation depths the power transfer of the loop system was not solely dependent on the magnetic field. With the dipole system, there was not a trend of increased gain sensitivity to tissue dielectric property variations with implantation depth. In fact, gain was most consistent through the abdomen, the location with the lowest achievable gain with optimal matching. For both antenna systems, higher gains were possible through wet skin, presumably due to higher permittivity of wet skin and therefore electrically larger antennas. However, the gain was more sensitive to changes in properties when matching was optimized for wet skin, likely due to a combination of mismatch and electrically smaller antennas in dry skin. IV. DISCUSSION Tissue variability is an important consideration for robust implantable medical devices utilizing transcutaneous power. Tissue structure and properties have been documented to vary among patients, across locations of the body, and over time [7]- [11]. The results of this study indicate that power gain is highly dependent on the antenna topology and impedance matching, including impedance mismatch due to tissue variations, and maximum power gain for a given system is determined by implantation depth, antenna size, and tissue characteristics. For the initial antenna simulations using a planar tissue model and tissue thicknesses representing the arm, the singleturn loop showed the greatest peak power gain. However, the small real impedances required for conjugate matching could prove difficult to achieve with matching networks in a physical system. The dipole topologies offered greater real impedances for matching. Comparing the dipole topologies, the meandered dipole showed higher peak gain than the planar dipole, with similar impedances required for matching. The multiple-turn loop provided even higher impedances for conjugate matching, however the maximum achievable gain was orders of magnitude less than that of the other topologies. 
In applications where the size of the external antenna is not constrained, these results show that greater power gain can be achieved for the dipole systems by increasing the external dipole length beyond the length of the implanted dipole. In the loop systems investigated in this work, further increasing the size of the external loop antenna did not afford increased power gain. Additionally, increasing the width of the planar dipole increased the power gain of the system, while increasing the meander height of the meandered dipole did not improve the gain. For an application with unconstrained external antenna dimensions, the planar dipole topology affords the greatest peak power gain for a fixed implant size. Further work includes determining the achievable power delivery within constraints on energy absorption in tissue. The performance of each antenna topology can be explained by examining the fields of each system [41], [42]. For example, the greatest power gain for the loop systems was achieved when the magnetic field vectors were most perpendicular to the plane of the implant loop, consistent with power transfer in loop systems occurring primarily through inductive coupling. The increase in gain with both dipole VOLUME 5, 2017 length and width indicate combined radiative and reactive mechanisms of power transfer. The dipole antennas in this work couple capacitively due to their proximity, and the meandered dipoles achieve power transfer through a combination of capacitive and inductive coupling [42]. Other works have used dipoles in similar proximity, referring to the antenna systems as electrodes due to the coupling method of power transfer [19]. Optimizing to achieve maximum power gain through a particular tissue composition is a reasonable starting point for designing a transcutaneous system, but in practice the system will encounter variable tissue configurations that will affect the power delivered. Ideally, an adaptive system would be implemented to adjust the matching network to each tissue configuration, although the power gain of even an adaptive system will be subject to limits related to implantation depth and tissue characteristics [22]. Additionally, the power limitations of passive implantable devices may limit adaptive matching at the implant. It is therefore important to quantify the effects of non-optimal impedance matching due to expected environmental variations, as explored in this study. In this study, antenna dimensions were optimized using one tissue model, and then applied to various tissue configurations to examine effects of tissue variability. The topologies used in the current study were chosen to be representative of power transfer through capacitive or inductive coupling, and the effects of tissue variability are expected to be similar for antennas utilizing similar power transfer mechanisms in the near-and mid-field. The choice of antenna topology based on power gain through a fixed tissue structure presents its own issues in that the chosen topology may not present the same power gain advantages through another tissue configuration. That is, the choice of antenna topology (and dimensions) can be biased by the choice of tissue structure to design and evaluate potential topologies. In this work, although the loop antenna system showed the highest peak gain in the first part of the study, the meandered dipole system showed similar or higher gain for some tissue configurations and impedance mismatch. 
In particular, the dipole system surpassed the gain of loop antenna system for larger antenna separations. Therefore, quantifying the effects of tissue variability is not only important for a system to operate despite tissue variability among patients and over time, but also to account for differences between the design environment and the environment in practice. In this study, changes in tissue properties and composition caused the greatest mismatch losses, while tissue thickness and geometry determined the maximum achievable gain. The gain with mismatch is expected to be lower through a given tissue composition if the matching networks have not been optimized to a similar tissue composition, as evidenced by the lower gains through subcutaneous locations when matching networks were optimized for a cortical implant. Shallow implantation depth and higher permittivity present favorable conditions for power transfer due to limited attenuation and larger electrical size of the antennas, although the results of this study suggest that designing matching networks for these conditions will lead to the system gain being more sensitive to environmental variations in practice. Based on the tissue variability analysis in this work, the antenna topology and matching networks can be designed to achieve more consistency across tissues (with lower gain) or higher gain through certain tissues (with more gain variability), depending on the power transfer mechanisms. This is analogous to designing a wide-or narrow-band antenna. For example, if a device is expected to be used at several body locations with different tissue compositions, and particularly through thicker tissue such as the abdomen, a dipole system is more desirable. If power delivery is to be maximized and the implantation depth is comparable to the size of the antennas, a loop antenna system is likely to be more effective. Variations in gain due to frequency are expected to be similar to the variations in gain due to changes in tissue properties, because both frequency and tissue properties affect antenna electrical size and wavelength within the tissue. The meandered dipole is a more wideband antenna than the loop, and therefore expected to be less affected by changes in frequency as it is less affected by changes in tissue properties [42]. V. CONCLUSION This study investigated the effects of varying tissue structure and properties in wireless transcutaneous systems, by evaluating maximum power gain and power gain with impedance mismatch. Four antenna topologies were first evaluated in terms of peak power gain through a given tissue configuration based on measured tissues in the arm, over a range of external and implanted antenna dimensions. A single-turn loop and a meandered dipole system were then evaluated with varying tissue structure representing different locations on the body, and varying tissue properties representing fluctuations in tissue water content. The results indicate that a single-turn loop antenna topology provides the highest peak gain with the smallest implanted antenna, while dipole systems provide higher real impedance for conjugate matching and the ability to increase gain with a larger external antenna. At close antenna separations, the loop system was shown to provide higher power gain than the meandered dipole system. 
At antenna separations greater than the loop dimensions, the dipole system achieved higher gain, and the power gain of the dipole system was overall less sensitive to changes in tissue structure and properties. The results suggest that through choices of matching networks and antenna topologies, a system can be designed to maximize peak power gain for a narrow range of tissue properties, or to achieve greater consistency with lower peak power gain through variable tissue. APPENDIX A ANTENNA SIMULATION AND MEASUREMENTS In simulation in air, a minimum S 11 of −20.9 dB occurred at a dipole length of 13.5 cm and trace width of 0.1 cm. In simulation in tissue, a minimum S 11 of −13.4 dB occurred at a dipole length of 6 cm and a trace width of 0.1 cm. The first resonance of a dipole is expected to occur at 0.47 wavelengths [41]. The wavelength in fat at 915 MHz is approximately 14 cm, so the resonance was expected at a dipole length of 6.6 cm. The wavelength in air at 915 MHz is approximately 33 cm, so the resonance was expected at a dipole length of 15.5 cm. The resonant length decreases as the dipole width increases, which explains the smaller resonant length in both air and tissue [41]. The measured S 11 in air showed a difference of up to 2.79 dB around the resonance point, but the resonance occurred at approximately the same dipole length as in simulation. Otherwise the measurements agreed with the simulated values to within 1 dB. The resonance points were shifted in tissue, indicating that the wavelength in the fat phantom was longer than the wavelength in the simulated fat layer, therefore the associated phantom permittivity was lower than the permittivity used in simulation. The resonance in measurement and simulation occurred within 2 cm of dipole length (Figure 8). In simulation in air, a minimum S 11 of −9.86 dB occurred at a loop size of 7 cm and trace width of 0.01 cm. Because 7 cm was the maximum simulated loop size, the minimum does not represent the resonance point, but was used as a reference for experimental comparisons. In simulation in tissue, a minimum S 11 of −33.4 dB occurred at a loop size of 4 cm and trace width of 0.3 cm. The first resonance of a loop is expected to occur at a circumference of 1.2 wavelengths [41], so the resonance was expected at a loop size of approximately 4.2 cm in tissue and 9.9 cm in air. The measurements for the loop in air agreed with simulation results to within 2 dB. The resonance in simulation and measurement occurred within 1 cm loop size, although loop S 11 in simulation was considerably lower than measured (Figure 9). Antenna dimensions used in one-port simulations are listed in Table 1. Antenna dimensions used in two-port simulations are listed in Table 2. APPENDIX B POWER GAIN CALCULATION The equations below were used to calculate power gain assuming simultaneous conjugate matching, where S 11 , S 21 , S 12 , and S 22 are the S-parameters from simulation [43]. Equation 1 represents maximum power gain (G max ) with simultaneous conjugate matching at the source and load, where S is the reflection coefficient looking toward the source and L is the reflection coefficient looking toward the load. Equation 2 represents the reflection coefficient looking toward the source required for simultaneous conjugate matching, where B 1 and C 1 are defined by Equations 4 and 6, respectively. 
Equation 3 represents the reflection coefficient looking toward the load required for simultaneous conjugate matching, where B 2 and C 2 are defined by Equations 5 and 7, respectively. The value in Equations 4, 5, 6, and 7 is defined by Equation 8 [43].
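The quantities named above (G max, the source and load reflection coefficients, B 1, B 2, C 1, C 2, and a determinant-like term) appear in the standard simultaneous-conjugate-matching expressions for a two-port network. The sketch below implements those standard expressions; treating them as equivalent to Equations 1 through 8 of [43] is an assumption, and the code is an illustrative post-processing helper rather than the authors' own script.

```python
# Illustrative post-processing of simulated 2-port S-parameters. The formulas are the
# standard simultaneous-conjugate-match expressions; their correspondence to Equations 1-8
# referenced above is an assumption.
import numpy as np

def conjugate_match_gain(S11, S12, S21, S22):
    """Maximum power gain (dB) under simultaneous conjugate matching at source and load."""
    delta = S11 * S22 - S12 * S21                         # determinant of the S-matrix
    K = (1 - abs(S11)**2 - abs(S22)**2 + abs(delta)**2) / (2 * abs(S12 * S21))
    if K < 1:
        raise ValueError("two-port is potentially unstable; simultaneous match undefined")
    B1 = 1 + abs(S11)**2 - abs(S22)**2 - abs(delta)**2
    B2 = 1 + abs(S22)**2 - abs(S11)**2 - abs(delta)**2
    C1 = S11 - delta * np.conj(S22)
    C2 = S22 - delta * np.conj(S11)
    gamma_s = (B1 - np.sqrt(B1**2 - 4 * abs(C1)**2)) / (2 * C1)   # minus root for B1 > 0
    gamma_l = (B2 - np.sqrt(B2**2 - 4 * abs(C2)**2)) / (2 * C2)
    g_max = abs(S21) / abs(S12) * (K - np.sqrt(K**2 - 1))
    return 10 * np.log10(g_max), gamma_s, gamma_l

def transducer_gain(S11, S12, S21, S22, gamma_s, gamma_l):
    """Power gain (dB) with fixed, possibly mismatched, source/load reflection coefficients."""
    num = (1 - abs(gamma_s)**2) * abs(S21)**2 * (1 - abs(gamma_l)**2)
    den = abs((1 - S11 * gamma_s) * (1 - S22 * gamma_l) - S12 * S21 * gamma_s * gamma_l)**2
    return 10 * np.log10(num / den)
```

With matching networks fixed for one tissue configuration, the second function can be re-evaluated with the S-parameters simulated for another configuration to quantify the mismatch losses studied in Part II.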
Four-Dimensional N=2(4) Superstring Backgrounds and The Real Heavens We study N=2(4) superstring backgrounds which are four-dimensional non-\Kahlerian with non-trivial dilaton and torsion fields. In particular we consider the case that the backgrounds possess at least one $U(1)$ isometry and are characterized by the continual Toda equation and the Laplace equation. We obtain a string background associated with a non-trivial solution of the continual Toda equation, which is mapped, under the T-duality transformation, to the hyper-\Kahler Taub-NUT instanton background. It is shown that the integrable property of the non-\Kahlerian spaces have the direct origin in the real heavens: real, self-dual, euclidean, Einstein spaces. The Laplace equation and the continual Toda equation imposed on quasi-\Kahler geometry for consistent string propagation are related to the self-duality conditions of the real heavens with ``translational'' and ``rotational''Killing symmetry respectively. Introduction Supersymmetric σ-models have attracted attention for various reasons for a long time. One of them is their deep relationship with complex manifold theory. Recently the interest have been refreshed in connection with superstring theory. It has been shown that the number of supersymmetries realized on two dimensional world sheet restricts the background geometry. If two dimensional world sheet theory has N = 1 supersymmetry, no-restriction on the background is imposed. N = 2 supersymmetry, however, imposes a number of conditions. The simplest case is that of a torsionless Riemannian background, which must be a Kähler manifold in order to admit N = 2 supersymmetry [1]. Such σ-models are conventionally formulated in terms of N = 2 chiral superfields and the superspace Lagrangian is just the Kähler potential. In the presence of torsion, the situation becomes considerably complicated. In this case the background has to admit two covariantly constant complex structures. A typical example of such WZNW σ-models are those with group manifolds as target spaces. In ref. [2] the conditions for N = 2 supersymmetry on group manifolds were found and a complete classification was given. In ref. [4] it was shown that (2,2) supersymmetric σ-models formulated in terms of chiral and twisted chiral superfields describe torsionful target spaces. Moreover, the abelian T-duality transformation was formulated by means of a Legendre transformation which interchanges a chiral superfield with a twisted chiral one in the manifestly N = 2 supersymmetry preserving manner. Then the backgrounds which are dual to those described by the familiar (2,2)σ-models formulated in terms of chiral superfields are completely described by ones formulated in terms of chiral and twisted chiral superfields. In order that these geometries provide consistent string backgrounds they have to satisfy, adding the dilation field, the string equations of motion, namely, the vanishing of β-functions. In ref. [5] a systematic discussion on four-dimensional backgrounds with N = 2 world sheet supersymmetry was given. There a set of conditions were derived, which are imposed on Kähler or torsionful non-Kähler with N = 2 world sheet supersymmetry. These conditions for consistent string propagation could be re-expressed by simple differential equations. For example a class of non-Kählerian backgrounds including the axionic instanton background was constructed as solutions to a simple integrable model i.e. one with the Laplace equation as field equation. 
Following this line, in the presence of (at least) one U(1) isometry, the new four-dimensional non-Kählerian background which has the non-trivial dilaton and torsion fields was constructed in ref. [6]. In this case the constraint imposed on target space geometry is related to an integrable model namely one with the continual Toda equation as field equation and the relation of the solution with the hyper-Kähler Eguchi-Hanson instanton background was discussed. In this paper, we explore these lines. The new superstring background with non-trivial dilaton and torsion fields, which is dual to the hyper-Kähler Taub-NUT instanton background, is presented. The origin of the integrable property of non-Kählerian backgrounds, which emerge as the Laplace equation and the continual Toda equation, is clarified. It is found that these integrable equations are related to those of the real heavens, which is the self-dual condition of the Riemann curvature of the euclidean Einstein gravity. This paper is organized as follows. We begin with a review of some of the relevant aspects presented in ref. [3,4] for constructing non-trivial four-dimensional non-Kählerian backgrounds with torsion fields described by the (2,2) σ-models formulated in terms of one chiral and one twisted chiral superfield. As is worked out in ref. [5], adding dilaton field we present the differential equations imposed on target space geometry for consistent string propagation. Following ref. [6] , it is shown that, due to (at least) one U(1) Killing symmetry, the condition imposed on non-Kählerian backgrounds implies the continual Toda equation. A non-trivial background, which is dual to the Eguchi-Hanson instanton background, is obtained as a solution of the continual Toda equation. In section 3, it is found that the non-Kählerian background which is dual to the Taub-NUT instanton background can be constructed through a solution of the continual Toda equation. Section 4 is spent to show that the origin of integrability lies in the real heavens. The last section is devoted to a summary and discussions. In the appendix A, the vanishing conditions of β-functions are re-expressed in terms of the quasi-Kähler potential and dilaton field. The duality transformation by means of a Legendre transformation is explained in appendix B. N=2 Superstring backgrounds The most general N = 2 superspace action for one chiral superfield U and one twisted chiral superfield V in two dimensions is determined by a single real function K(U,Ū, V,V ) [3,4]: (2.1) The superfields U and V obey a chiral or twisted chiral constraint The action (2.1) is invariant, up to total derivatives, under the quasi-Kähler gauge transformations: To read off the target space geometry of the theory it is convenient to write down, denoting u and v as the lowest component of the superfield U and V respectively, the purely bosonic part of the superspace action (2.1) : The target space metric and anti-symmetric tensor are expressed in terms of K It follows that the field strength H µνλ = ∇ µ B νλ + ∇ ν B λµ + ∇ λ B µν can also be expressed entirely in terms of the function K: If K uū and K vv are positive definite, the target space possesses (2, 2) signature. To obtain a space with euclidean signature, we have to require that they are positive definite and negative definite respectively. Note that the metric is non-Kählerian with torsion, whereas N = 2 world sheet supersymmetry is guaranteed. 
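For orientation, a commonly quoted component form of the target-space data determined by K in this one chiral plus one twisted chiral setting is the following; overall normalizations and the signs entering B are convention dependent, so this should be read as a structural sketch rather than as the exact expressions of eqn. (2.4):

$$
ds^{2} \;=\; 2K_{u\bar u}\,du\,d\bar u \;-\; 2K_{v\bar v}\,dv\,d\bar v ,
\qquad
B \;=\; K_{u\bar v}\,du\wedge d\bar v \;+\; K_{\bar u v}\,d\bar u\wedge dv ,
\qquad
H \;=\; dB ,
$$

so that the whole geometry, including the torsion three-form, is built from second and third derivatives of K. The relative sign between the two diagonal blocks is what makes the signature statement above work: K uū and K vv both positive definite give (2,2) signature, while opposite signs give a euclidean target space.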
Hitherto we have just discussed the geometrical structure of the N = 2 supersymmetric σ-models. In string theory, there is another background field, namely the dilaton field Φ(u,ū, v,v), so that one adds to the σ-model action eqn. (2.4) a term of the form 1 2 R (2) Φ(u, v), where R (2) is the scalar curvature of the twodimensional world sheet. In order that these backgrounds provide consistent string solutions, they have to satisfy the vanishing of the β-function equations. Then the requirement of one-loop conformal invariance of the two-dimensional σ-model leads to the following equations of motion for the background fields [8], (2.5) Moreover, the vanishing of the dilaton β-function is provided by the equation of motion for dilaton field as In the presence of N=4 world sheet superconformal symmetry, the solution to the lowest order in α' is exact to all orders and δc remains zero to all orders. The conditions of the vanishing of β-functions were re-expressed in terms of K and Φ entirely in [5]. There three exclusive cases were considered, corresponding to the differential equation which is satisfied by K. In this paper we concentrate ourselves to two of them, the case(i) and case(ii) explained in appendix A. For the case(i), the potential K must satisfy the Laplace equation and the dilaton field is expressed in terms of a solution K as 2Φ = ln K uū + const. (2.8) In turn the case(ii) is characterized by the following nonlinear differential equation which determines target space geometry. The dilaton field is given in terms of a solution of eqn.(2.9) to be 2Φ = ln K ww + const. (2.10) In this case quasi-Kähler potential and dilaton field have U(1) Killing symmetry with respect to W , namely K = K(u,ū, w +w), Φ = Φ(u,ū, w +w). We denote the U(1) isometry as U(1) w for simplicity. The Hyper-Kähler Eguchi-Hanson Instanton and Integrable Equations In ref. [6] it was shown that non-Kählerian backgrounds characterized by eqn. (2.9) arise as a solution of the continual Toda equation. Performing the duality transformation its relation to the Eguchi-Hanson instanton background was discussed. In fact eqn.(2.9) is re-expressed, denoting K w as U , as the continual Toda Assuming that U = ln α(u,ū) + ln β(w +w), eqn.(2.11) reduces to the Liouville equation ∂ u ∂ū ln α(u,ū) + kα(u,ū) = 0 (2.12) where ∂ 2 w β is a constant due to the separation of variables and is denoted as k. By using the solution of the Liouville equation authors of ref. [6] employed the simplest non-trivial solution of eqn.(2.11) as and wrote down the solution of eqn.(2.9) as (2.14) It was shown that after performing the change w → −w the geometry characterized by (2.14) is dualized with respect to U(1) w isometry to give the hyper-Kähler Eguchi-Hanson instanton background. However w-sign flipped K is no longer a solution of eqn. (2.9) . In order to make evident the relation of integrable models to the hyper-Kähler Eguchi-Hanson instanton background, we present here another solution of eqn.(2.9) , which is also associated with the solution (2.13) , as which describes the non-trivial background with torsion; The scalar curvature is computed to be given by which depends only on w +w and is asymptotically zero as w +w → ±∞. Let us comment on the difference of eqn.(2.15) from eqn.(2.14) . There ap- Since the geometrical objects are expressed in terms of the derivatives of K, both potentials describe the same geometry except for the range of (w +w) 2 , which correspond to describing two different coordinate patches. 
For the solution (2.14) and (2.15) the range must be (w +w) 2 < ρ 2 and (w +w) 2 > ρ 2 respectively. There exist different backgrounds with those discussed so far, which are called dual background and obtained by duality transformation. Now we dualize the solution (2.15) with respect to U(1) isometry with respect to the twisted chiral superfield W following the procedure described in the appendix B. The duality transformation interchanges a twisted chiral superfield W with a chiral superfield Ψ so that the dual theory is described by two chiral superfield. Then dual geometry is Kähler . The dual Kähler potential is determined bỹ with a constraint equation 0 = K w − (ψ +ψ) which determines (w +w) in terms of (ψ +ψ) and u,ū. ⋆ Now the independent variables are ψ and u. We use here w +w < 0 as a solution of (2.17) in order that the geometry (2.16) has euclidean signature. The Hyper-Kähler Taub-NUT Instanton and Integrable Equations We consider in this section the relation of the hyper-Kähler Taub-NUT instanton to the integrable model of the quasi-Kähler geometry. It is shown that the hyper-Kähler Taub-NUT instanton background is also dual to a quasi-Kähler geometry characterized by a solution of the continual Toda equation. Let us first consider a solution of the continual Toda equation (2.11) with (u,ū) denoted as (z,z) to avoid confusion in the following discussion; (3.1) The quasi-Kähler potential is The corresponding geometry is the same as (2.15) with ρ = 0. In fact the quasi-Kähler potential (2.15) with ρ = 0 can be transformed to (3.2) under the quasi-Kähler transformation as follows. The potential (2.15) with ρ = 0 is expressed as We perform the coordinate transformation u = e 8z but the resulting potential is not a solution of (2.9) . In order to obtain a solution the quasi-Kähler transformation as a solution of (2.9) . Next let us dualize (3.2) with respect to U(1) w isometry. We obtain the dual Kähler potential, interchanging a twisted chiral superfield W with a chiral super-field Z 1 , asK where we introduce z 2 ≡ 8z. The corresponding geometry is completely flat but the coordinate system of eqn. (3.3) suggests the relation to the hyper-Kähler Taub-NUT metric as is seen below. In order to see the relation of the Kähler potential (3.3) to the hyper-Kähler Taub-NUT metric, let us employ the following expression for the hyper-Kähler Taub-NUT potential [9] with the Kähler coordinate where λ corresponds to the magnetic mass. The variables θ, φ and ψ are angular ones in the polar coordinates with the range 0 ≤ θ ≤ π, 0 ≤ φ ≤ 2π and 0 ≤ ψ ≤ In this way we see that the quasi-Kähler potential We show below that this is the case. Since we have a hyper-Kähler Taub-NUT metric associated with the Kähler potential (3.4) in the coordinate (3.5) which is dual to a solution of (2.9) in the case of λ = 0, we try to dualize (3.4) with respect to U(1) z 1 isometry. Due to the Z 2 property of duality transformation, we can construct the Kähler potential (3.4) from the resulting quasi-Kähler potential. Here we deal with the case that the Kähler geometry which is described by two chiral superfields is dualized to the quasi-Kähler geometry described by a chiral and a twisted chiral superfield. This causes the modification of the duality transformation (B.2) and (B.3) as follows. The dual quasi-Kähler potential, interchanging a chiral superfield Z 1 with a twisted chiral superfield W , is determined by with a constraint 0 = ∂ z 1 K T N + (w +w). 
The constraint becomes (w +w) = where the quasi-Kähler coordinate is given, denoting z 2 as 8z, by λs sin 2 θ). describes N = 2 superstring background. The corresponding geometry which is associated with the potential (3.7) is given by (3.10) In the real coordinate the metric is expressed as ds 2 = 1 + λs 4s (ds 2 + s 2 dθ 2 ) + 1 4s cos 2 θ 1 + λs + (1 + λs) sin 2 θ −1 (16dφ 2 + s 2 sin 2 θdψ 2 ), where φ and ψ are introduced in (3.8) to be the imaginary part of w and z respectively. One find that in the case λ = 0 the metric is dual with respect to rotational U(1) isometry associated with the angular coordinate φ to flat metric as expected. It follows that the scalar curvature is asymptotically zero (s → +∞). For the case λ = 0 , the curvature singularity is at s = 0. In turn if we consider λ < 0, there are two singularities at s = 0 and −1/λ. The Quasi-Kähler Geometry and The Real Heavens In section 2 and 3 we present N = 2 superstring quasi-Kähler backgrounds which are dual to Eguchi-Hanson instanton and Taub-NUT instanton respectively. Both are obtained through solutions of the continual Toda equation. In this section we study the origin of the integrable property of the quasi-Kähler geometry for the case(i) and case(ii). It is shown that the integrability can be understand as a direct reflection of the one of the real heavens: real, self-dual, euclidean, Einstein spaces. It was shown, in ref. [13], that all solutions to the real vacuum Einstein equations with self-dual or anti-self-dual curvature: in the presence of at least one Killing symmetry, fall into two cases which correspond to two distinct types of Killing vectors. The first type, what is called "translational", corresponds to Killing vectors K ν with self-dual or anti-self-dual covariant derivatives The second type, what is called "rotational", includes all other Killing vectors. These gravitational backgrounds are hyper-Kähler and consistent with N = 4 world sheet supersymmetry. The relation of the world sheet supersymmetry to Tduality transformation is recently considered in ref. [17] by using these backgrounds. In the following we show that the quasi-Kähler backgrounds for the case(i) and case(ii) are dual to the real heavens with (at least) one "translational" Killing symmetry and "rotational" one respectively. At first we consider the case(i). The string backgrounds admit a conformally flat metric coupled to axionic instanton and have been considered before [5,15,16]. To make the statement concrete we recall the quasi-Kähler backgrounds: In the following we distinguish these cases by means of upper (lower) sign. In these coordinates eqns.(4.1) are expressed by the following Denoting 2K uū as V , the anti-symmetric tensor B µν can be chosen as B τ i = ω i with satisfying the special condition: ∇V = ±∇ × ω. Now we perform the duality transformation with respect to τ direction: The resulting dual backgrounds are given by where ω i are constrained to satisfy the condition We next consider the case(ii). The quasi-Kähler backgrounds have the following form, denoting K w = ∂ w+w K ≡ U(u,ū, w +w), with U satisfying the continual Toda equation ∂ u ∂ūU + ∂ 2 w+w e U = 0. It is convenient to introduce the real coordinates (τ, x, y, z) as x + iy. We denote the upper and lower case to correspond to the upper and lower sign in the following. In these coordinates eqns.(4.6) are expressed by with U satisfying the continual Toda equation It was shown in ref. 
[13] that, in the presence of (at least) one "rotational" Killing symmetry, solutions to the real vacuum Einstein equations with self-dual or antiself-dual curvature are completely determined by U satisfying eqn.(4.10) with the metric (4.9) . In the above, we considered the case that the quasi-Kähler backgrounds have one U(1) isometry with respect to a twisted chiral superfield. If we consider the case C 1 = 0, C 2 = 0 instead of the case(ii) C 1 = 0, C 2 = 0 in section 2, the quasi-Kähler backgrounds turn to possess one U(1) isometry with respect to a chiral superfield. In this case, denoting w and u as the lowest component of a chiral and twisted chiral superfield respectively, the metric and torsion fields H µνρ have opposite sign to eqns.(4.6) . Introducing the real coordinates (4.7) with the change w → −w, the dual backgrounds have the metric (4.9) with the constraint (4.10) again. As a consequence, we can state that the origin of the integrability of the quasi-Kählerian for the case(i) and case(ii) lies in the real heavens. In section 2 and 3, the quasi-Kähler backgrounds which are dual to the Eguchi-Hanson and Taub-NUT instanton background respectively are constructed. Since these instanton backgrounds admit not only "translational" Killing symmetry but also "rotational" one, they can be written in the form (4.9) . The multi-ALE and multi-Taub-NUT instanton backgrounds don't admit additional "rotational" Killing symmetry in general except for the Eguchi-Hanson and Taub-NUT instanton backgrounds. Hence the quasi-Kähler backgrounds which are dual to these instantons can not be constructed for the case(ii). Summary and Discussions In this section, we first summarize our result and then briefly discuss their generalizations. We investigate four dimensional N = 2 superstring backgrounds which are described by a chiral superfield and a twisted chiral one. In particular we considered the case where there is (at least) one Killing symmetry and the quasi-Kähler potential is determined by the continual Toda equation. We found that the background which is dual to the well-known Taub-NUT instanton background arises through a non-trivial solution to the continual Toda equation. We clarify the relationship of the quasi-Kähler backgrounds with the real heavens i.e. the real, self-dual, euclidean, Einstein spaces. It is found that the quasi-Kähler backgrounds for the case(i) and (ii) are dual to the real heavens with a "translational" Killing symmetry and a "rotational" one respectively. Then it was found that the origin of the integrable property lies in the real heavens. Since the hyper-Kähler Taub We studied (2,2) σ-models described in terms of twisted chiral and chiral superfields. It is known that these σ-models put a strong restriction on the background geometry, namely, two complex structures must commute. The two commuting complex structures emerge when one consider the WZNW σ models only on SU(2) ⊗ U(1) or U(1) 4 among the various group manifolds [11]. Thus the generic (2,2) supersymmetric σ-models can not be exhausted employing these superfields. In ref. [12] the (2,2) σ-models formulated in terms of semi-chiral superfields, which satisfy only a left-handed or right-handed chirality condition but not both simultaneously, are shown to possess two non-commuting complex structures and correspond to the generic case. So far only the case of (2,2) world sheet supersymmetry has been considered. 
If heterotic (2,0) σ-models are considered, the metric and torsion are given by a complex vector potential [10]. The two dimensional action is formulated in terms of (left-handed or right-handed) chiral superfields in which the vector potential appears. It is intriguing problem for us to investigate the backgrounds which are described by these σ-models in the string context. three exclusive cases were considered: (i) C 1 = C 2 = 0, (ii) C 1 = 0, C 2 = 0, (iii) In the following we concentrate ourselves to the case(i) and case(ii). For the case(i), the conditions (2.5) are re-expressed by the following set of differential equations: which can be solved by U − 2Φ = const. and V − 2Φ = const. The potential K must satisfy where c is a constant. The dilaton field is expressed as 2Φ = ln |K uū | + const. For the case(ii), it is very useful to perform the following change of coordinates [5]: Combining a remaining condition of β B uū = 0 with eqns.(A.1) we obtain that Φ = Φ(u,ū, w +w), V = V (u,ū, w +w) and U = U (u,ū, w +w). Thus this case necessarily leads to at least one U(1) isometry. The sixteen conditions for (2.5) are entirely re-expressed by the following set of differential equations: If we consider the case c 2 = 0, the resulting geometry has (2, 2) signature. In order to have euclidean signature we must set c 2 = iπ. APPENDIX.B In this section we consider duality transformation. As was explained in ref. [3,4] this duality can be described by interchanging twisted chiral superfields with chiral ones. Let us consider the case that the potential K has one Killing symmetry with respect to V and is of the form where V is a twisted chiral field, whereas U is a chiral field. We denote the above U(1) isometry as U(1) v for simplicity. The 'dual' potentialK is obtained as a Legendre transform of K; where Ψ is a chiral field. Since the dual potentialK is described by two chiral superfields, the dual transformation explained above produces a torsionless Kähler manifold. It follows that the dual metric has the following form: On the other hand if the Killing symmetry is with respect to a chiral superfield U, the corresponding dual metric has opposite signature to (B.4). Since we are considering the case that there are one chiral and one twisted chiral superfield, the duality transformation produces a torsionless Kählar manifold explained above. In order that this N=2 preserving duality transformation by means of a Legendre transformation coincides with the usual abelian T-duality transformation [7], the dual dilaton field must be 2Φ = 2Φ − ln 2K vv .
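As a small numerical companion to the continual Toda equation used throughout this paper, ∂_u ∂_ū U + ∂²_{w+w̄} e^U = 0, the following Python sketch checks a finite-difference residual of the equivalent rescaled real form (∂_x² + ∂_y²)U + 4 ∂_t² e^U = 0, where u = x + iy and t = w + w̄. It uses the trivial u-independent solution U = ln(t + c), chosen only because it manifestly satisfies the equation; it is not one of the Eguchi-Hanson or Taub-NUT related solutions discussed in the text, and the grid, constant c, and step size are arbitrary choices for illustration.

```python
import numpy as np

# Residual of the continual Toda equation in real coordinates:
#   (d_x^2 + d_y^2) U + 4 * d_t^2 exp(U) = 0
# For u = x + i y, 4 * d_u d_ubar = d_x^2 + d_y^2 (flat Laplacian),
# so this is the equation from the text multiplied by 4.
def toda_residual(U, h):
    """Central-difference residual on interior grid points."""
    lap_xy = (
        (U[2:, 1:-1, 1:-1] - 2 * U[1:-1, 1:-1, 1:-1] + U[:-2, 1:-1, 1:-1])
        + (U[1:-1, 2:, 1:-1] - 2 * U[1:-1, 1:-1, 1:-1] + U[1:-1, :-2, 1:-1])
    ) / h ** 2
    eU = np.exp(U)
    d2t_eU = (eU[1:-1, 1:-1, 2:] - 2 * eU[1:-1, 1:-1, 1:-1]
              + eU[1:-1, 1:-1, :-2]) / h ** 2
    return lap_xy + 4.0 * d2t_eU

h = 0.05
x = y = np.arange(0.0, 1.0 + h, h)
t = np.arange(1.0, 2.0 + h, h)            # keep t + c > 0 so U is real
X, Y, T = np.meshgrid(x, y, t, indexing="ij")

# Trivial solution, independent of u and ubar: exp(U) = t + c is linear in t,
# so both terms of the equation vanish identically.
U = np.log(T + 0.5)

print(np.max(np.abs(toda_residual(U, h))))  # ~0 up to floating-point round-off
```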
2014-10-01T00:00:00.000Z
1995-05-30T00:00:00.000
{ "year": 1995, "sha1": "486f5c7ae467a845b671dafdeab52f464a3709b7", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/9505177", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "486f5c7ae467a845b671dafdeab52f464a3709b7", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
217550127
pes2o/s2orc
v3-fos-license
Socioeconomic gradient in the developmental health of Canadian children with disabilities at school entry: a cross-sectional study Objective To examine the relationship between developmental health and neighbourhood socioeconomic status (SES) in kindergarten children with disabilities. Design Cross-sectional study using population-level database of children’s developmental health at school entry (2002–2014). Setting 12 of 13 Canadian provinces/territories. Measures Taxfiler and Census data between 2005 and 2006, respectively, were aggregated according to custom-created neighbourhood boundaries and used to create an index of neighbourhood-level SES. Developmental health outcomes were measured for 29 520 children with disabilities using the Early Development Instrument (EDI), a teacher-completed measure of developmental health across five domains. Analysis Hierarchical generalised linear models were used to test the association between neighbourhood-level SES and developmental health. Results All EDI domains were positively correlated with the neighbourhood-level SES index. The strongest association was observed for the language and cognitive development domain (β (SE): 0.29 (0.02)) and the weakest association was observed for the emotional maturity domain (β (SE): 0.12 (0.01)). Conclusions The magnitude of differences observed in EDI scores across neighbourhoods at the 5th and 95th percentiles are similar to the effects of more established predictors of development, such as sex. The association of SES with developmental outcomes in this population may present a potential opportunity for policy interventions to improve immediate and long-term outcomes.
To date, associations between a number of health outcomes and a combination of economic, human, and social characteristics, commonly conceptualized as socioeconomic gradients, have been reported, including end-stage renal disease, breast cancer, obesity, and cardiometabolic health. [1][2][3][4][5][6] These studies have mostly focused on chronic conditions in adulthood, with studies on the socioeconomic determinants of child health emerging only more recently. [7][8][9][10][11] A socioeconomic gradient in typically developing children's developmental health has been reported in a number of high-, middle-, and low-income countries, [12][13][14] including Canada. 8 15-17 Additionally, the prevalence of childhood disabilities has been consistently shown to be negatively associated with SES. 18 used data from the Canadian National Longitudinal Survey of Children and Youth (NLSCY) for children between 0 and 11 years of age to illustrate an inverse relationship between the prevalence of chronic childhood disabilities and SES. 19 Msall and colleagues (2007) reported a more than three-fold difference in disability rates between children living in distressed vs. advantaged neighborhoods in Rhode Island. 20 However, little is known about the relationship between SES and developmental outcomes in children with special needs . Existing evidence most often addresses specific diagnoses during middle childhood, is not representative of all disabilities experienced by children during early childhood, and does not consider the impact of SES outside of the immediate family environment (i.e., neighborhood SES) which has been shown to be a significant influence on developmental outcomes in typically developing children. 8 21 22,23 Understanding determinants of developmental health in early childhood can help in identifying groups of children with disabilities that are likely to be most at risk for worse academic and social outcomes later in life. Such identification is useful for policy planning and the provision of health and education services. The objective of this study is to determine if there is a socioeconomic gradient in the developmental health of children with disabilities at school entry. This work extends existing research in that it focuses on speech impairment to non-verbal). The EDI database has been linked to Canadian Census and Taxfiler data from 2006 and 2005, respectively, using custom-created neighborhood boundaries. Meaningful boundaries were delineated using information on existing social structures and administrative and geographic divisions. 38 Census and Taxfiler variables were used to create the Canadian Neighbourhoods and Early Child Development (CanNECD) SES index, which includes indicators of education, language/immigration, marital status, wealth, income, dues, social capital, poverty, residential stability, and income inequality (Table S1). 39 Analysis All data analyses were conducted in SAS TM software using the GLIMMIX procedure. 40 Given that EDI domain scores are left-skewed and restricted in range, and that children are clustered within neighborhoods and schools, EDI data were transformed from left-to right-skewed by subtraction from 11, and analyzed using hierarchical generalized linear modeling (HGLM) with the identity link and gamma distribution. The fit of other distributions and link functions was also assessed but found to be generally inferior. 
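The modelling choice described here, reflecting EDI domain scores by subtracting them from 11 and then modelling them with a gamma distribution and identity link, can be illustrated outside of SAS. The sketch below is a simplified, fixed-effects analogue in Python with statsmodels on synthetic data: it omits the neighbourhood random intercept and the Laplace approximation used in the actual GLIMMIX models, the variable names and values are placeholders rather than study data, and it assumes a recent statsmodels release in which the identity link class is spelled `Identity`.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for the EDI data: one row per child, scores in (0, 10],
# left-skewed.  The real models also include a random intercept for
# neighbourhood (SAS GLIMMIX, Laplace approximation), which this omits.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "ses_index": rng.normal(0, 1, n),      # standardized neighbourhood SES
    "age": rng.normal(5.7, 0.3, n),        # age at EDI completion (years)
    "male": rng.integers(0, 2, n),
    "efsl": rng.integers(0, 2, n),         # English/French language learner
})
df["edi"] = np.clip(10 - rng.gamma(shape=2.0, scale=0.8, size=n)
                    + 0.4 * df["ses_index"], 0.1, 10)

# Transform from left- to right-skewed by subtracting from 11,
# then fit a gamma GLM with an identity link.
df["edi_t"] = 11 - df["edi"]
model = smf.glm(
    "edi_t ~ ses_index + age + male + efsl",
    data=df,
    family=sm.families.Gamma(link=sm.families.links.Identity()),
).fit()
print(model.summary())
# A negative coefficient on ses_index for the transformed outcome corresponds
# to higher EDI scores in higher-SES neighbourhoods.
```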
Although children are clustered within two levels (neighborhoods and schools), only neighborhood of residence was included as a cluster variable due to data sparseness. 41 All models were performed using the Laplace approximation that allows estimation of likelihood statistics and has been shown to perform well with regard to accuracy and precision. 42 EDI domain scores were used as the dependent variable. For each EDI domain, the analysis was performed hierarchically in three steps. First, an intercept-only model was constructed. Second, a model with child-level characteristics that have been found to be significant predictors of children's developmental health (i.e., age, sex, and English/French language learner status (EFSL)) as fixed-effects was constructed. 25 43 Additionally, dummy variables for year of data collection, province, and the interaction between the two were included to control for variations in data collection procedures across time points and provinces. Finally, to evaluate the association between neighborhood-level SES and To assess whether the inclusion of child-level characteristics (age, sex, EFSL status), neighborhood-level SES, and random effects significantly improved model fit, partial likelihood ratio tests were performed, and goodness-of-fit indices (i.e., Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC)) were compared between models. Multicollinearity was tested by examining variance inflation factor (VIF) statistics for age, sex, EFSL status, and the SES index. VIF statistics for province of residence, time of data collection, and their interaction are not included as these were artificially inflated due to having been dummy coded and included as part of a regression model with few predictors. Leverage statistics, along with plots of raw, Pearson, and studentized residuals were used to identify outliers and influential observations. Observations with leverage statistics more than twice the mean of all leverage values were investigated for data entry error. A sensitivity analysis was conducted where observations with outlying studentized residuals, defined as studentized residuals with absolute values greater than two, were excluded in the estimation of the models. Cases with missing data were excluded from the analysis but were compared to those without missing data to ensure no substantial differences in demographic characteristics. Population Characteristics A total of 29,520 children with disabilities were identified in the database. Population characteristics are presented in Table 1. These children resided in 2,016 neighborhoods. Neighborhood characteristics are presented in Table 2. Forty (1.95%) neighborhoods in the database were excluded from the analysis due to not having Model Diagnostics and Sensitivity Analyses Excluding dummy coded categorical variables, all VIF statistics were below the cut-off of 10 and ranged from 1.05 and 1.10. Studentized residuals were used to identify influential and outlying observations. The results of the sensitivity analysis excluding cases with absolute studentized residual values greater than 2 are presented in Table S10 through 14. The results from this sensitivity analysis were very similar to the results of the primary analysis. 
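The diagnostic checks described above (variance inflation factors, leverage greater than twice the mean leverage, and absolute studentized residuals above two) can be sketched as follows. This is an approximation that uses ordinary least-squares influence measures on placeholder data rather than the fitted hierarchical models; the predictor names and thresholds follow the text, and everything else is illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Placeholder design matrix with the child- and neighbourhood-level predictors.
rng = np.random.default_rng(1)
n = 1000
X = pd.DataFrame({
    "age": rng.normal(5.7, 0.3, n),
    "male": rng.integers(0, 2, n),
    "efsl": rng.integers(0, 2, n),
    "ses_index": rng.normal(0, 1, n),
})
y = 2.5 - 0.3 * X["ses_index"] + rng.gamma(2.0, 0.5, n)
Xc = sm.add_constant(X)

# Variance inflation factors for the non-dummy-coded predictors.
vif = {col: variance_inflation_factor(Xc.values, i)
       for i, col in enumerate(Xc.columns) if col != "const"}
print(vif)  # values near 1 indicate little multicollinearity

# Leverage and studentized residuals from an auxiliary OLS fit.
influence = sm.OLS(y, Xc).fit().get_influence()
leverage = influence.hat_matrix_diag
student = influence.resid_studentized_internal

high_leverage = np.where(leverage > 2 * leverage.mean())[0]
outliers = np.where(np.abs(student) > 2)[0]
print(len(high_leverage), "high-leverage points;", len(outliers), "residual outliers")
# The sensitivity analysis would refit the models with `outliers` excluded.
```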
Discussion The objective of this investigation was to examine the association between neighborhood-level SES and developmental health in children with disabilities (operationally defined as "special needs" designation) at school entry, in order to determine the importance of contextual factors in predicting outcomes in this population. The results indicate that neighborhood-level SES is a consistent and significant predictor of developmental outcomes in this population. An average difference of 0.12 to 0.29 points in EDI domain scores was observed per standard deviation difference in SES, with higher EDI domain scores being observed in higher SES neighborhoods. Neighborhood-level SES had the strongest association with the language & cognitive development domain and the weakest with emotional maturity domain. Consistency with previous studies Comparing the magnitude of association between SES and developmental health with previous literature is difficult due to differences in the operationalization of these constructs and differences in analytic methods. Previous studies, mostly conducted with typically developing children, 12 have either explored the direct association between SES and developmental health 8 15-17 44 or investigated mediators 10 of this relationship, including parent/child activities, access to a computer, participation in organized classes and activities, and maternal mental health. [45][46][47] Most of these studies measured SES at the individual family level and all demonstrated a positive association between social and economic variables and developmental health. Among the studies done in typically developing populations, four use EDI outcomes, with three including neighborhood-level measures of SES. 8 15 17 All studies demonstrated a positive association between SES and the EDI. The most recent study looked at neighborhood effects in typically developing children using four published neighborhood SES indices. 8 The strength of association between the indices and EDI domains varied, depending on the SES index used. Similar to our results, the strongest association was most often found for the language & cognitive development domain. The few studies done in children with disabilities also report a positive association between SES and academic and social outcomes. 21-23 48-50 These studies are different from the present investigation in that they only focus on a few high-incidence diagnoses, such as learning disabilities during middle childhood and adolescence and do not measure SES at the neighborhood-level. Strengths and limitations There are several strengths of this study. First, we used population-level data, which made focusing on children with disabilities that only make up a small proportion of the population possible, while also maximizing external validity and statistical power and minimizing potential selection bias. Second, we focused on early childhood, a time that critically impacts children's long-term academic and social trajectory. 51 Third, we applied a non-categorical approach to childhood disabilities which reflects current thinking in the field of child development and findings that diagnostic categories often do not fully reflect the actual abilities and needs of children. [52][53][54] Fourth, the EDI has undergone extensive reliability and validity testing, and has been found to be predictive of academic achievement and social functioning throughout early and middle childhood. 
[25][26][27][28][29][30][31][32][33][34] The psychometric performance of the EDI in children with special needs has also been found similar to its performance in typically developing children. 35 Currently, the EDI is the only available indicator of developmental health that allows examination of variability across Canada at a population-level. Finally, the analytic methods used in this investigation appropriately take into account the skewed distribution and nesting of EDI data, which prevents artificially deflated standard errors and hence inappropriate statistically significant findings. This investigation is also subject to limitations. First, due to the cross-sectional design of this study, causality cannot be established. There is evidence that developmental problems in children may increase parental stress and impact the general socioeconomic wellbeing of families. 55 56 Additionally, there is the possibility of self-selection where families with similar experiences may choose to reside within similar neighborhoods. Regardless of causality, or lack thereof, the results of this study indicate that services aimed at young children with disabilities that are particularly accessible in low SES neighborhoods are likely to be most impactful. Second, we used a very broad definition of disability, which is based on the designation of the child by the education system at kindergarten, and hence, children with disabilities who did not have this designation by the education system were excluded. It is possible that a very small minority of children who were not typically developing but did not have this designation were excluded. Third, the SES index may not accurately reflect the socioeconomic condition of the neighborhoods in which children were raised. The variables used to construct the SES index come from 2005 and 2006, whereas EDI data were collected between 2004 and 2014. It is possible that changes in neighborhoods or relocation of families could render the SES index less reflective of the true early environment for some groups of children, which may have led to underestimation of the association between SES and developmental outcomes. However, empirical evidence indicates that it is unlikely for neighborhood characteristics to drastically change over time or for families move to neighborhoods which are greatly different from their previous ones. 57 Finally, we were unable to control for family-level SES in the models. Thus, it is not possible to determine whether this association is driven by neighborhood or family characteristics. We were also unable to control for specific diagnoses or severity of disabilities that have undoubted impact on child development. Similar investigation should be extended for smaller subgroups of children who share diagnoses or functional impairments. Implications Our findings indicate that the relationship between SES and developmental outcomes also holds for children with disabilities. 8 15-17 44 58 This underscores the potential impact of the early environment of children on their development. Although clinicians often focus on biological factors, such as family history of disabilities and harmful exposures in utero, social influences have commonly been found to be more predictive of long-term developmental and academic outcomes and may be more amenable to change. 
44 According to survey data, clinicians are receptive to screening for social determinants of health outside of the purview of clinical care, suggesting that the findings of this investigation are likely to be relevant and acceptable to those in the clinical community. 59 Our findings show that the association between child development and socioeconomic status, which is well-established for typically developing children, also exists for children with disabilities. This highlights the urgency for improving the social and economic context in which children are raised, in addition to targeted interventions delivered at the individual child level. Failure to do so will likely result in further perpetuation of inequities in child development -more so as children with disabilities are already among the most disadvantaged groups globally. 18 60 It remains to be seen whether large-scale policy interventions can help in reducing disparities in this population similarly to other groups. 61 Additional investigations could further strengthen and contextualize these findings. Specifically, establishing the consistency and relative strength of the relationship between SES and developmental outcomes across subgroups of physical, behavioral, and learning disabilities, as well as subgroups based of condition and time of diagnosis, would further untangle the relationship between SES, disabilities, and development, and would be helpful in identifying service provision strategies that are likely to be most successful in improving outcomes. Introduction Background/rationale 2 Explain the scientific background and rationale for the investigation being reported 4-5 Objectives 3 State specific objectives, including any prespecified hypotheses 5 Study design 4 Present key elements of study design early in the paper 5 Conclusions: The magnitude of differences observed in EDI scores across neighborhoods at the 5 th and 95 th percentiles are similar to the effects of more established predictors of development, such as sex. The association of SES with developmental outcomes in this population may present a potential opportunity for policy interventions to improve immediate and longer-term outcomes. Strengths and limitations of this study  Our investigation uses a large, representative population-level database, that allowed us to focus on children with disabilities that make up only a small proportion of the population, while also maximizing external validity and statistical power and minimizing potential selection bias.  We used data from the EDI, a valid and reliable measure of children's developmental health.  We focused on early childhood, a time that has been well documented to critically impact children's long-term academic and social trajectory.  We applied a non-categorical approach to childhood disabilities that reflects current thinking in the field of child development.  The study's limitation is the exclusive use of neighborhood-level socioeconomic status indicators, without the ability to control for family-level ones. Introduction To date, associations between a number of health outcomes and a combination of economic, human, and social characteristics, commonly conceptualized as socioeconomic gradients, have been reported, including end-stage renal disease, breast cancer, obesity, and cardiometabolic health. [1][2][3][4][5][6] These studies have mostly focused on chronic conditions in adulthood, with studies on the socioeconomic determinants of child health emerging only more recently. 
[7][8][9][10][11] A socioeconomic gradient in typically developing children's developmental health has been reported in a number of high-, middle-, and low-income countries, [12][13][14] including Canada. 8 15-17 Additionally, the prevalence of childhood disabilities has been consistently shown to be negatively associated with SES. 18 Stabile & Currie (2003) used data from the Canadian National Longitudinal Survey of Children and Youth (NLSCY) for children between 0 and 11 years of age to illustrate an inverse relationship between the prevalence of chronic childhood disabilities and SES. 19 Msall and colleagues (2007) reported a more than three-fold difference in disability rates between children living in distressed vs. advantaged neighborhoods in Rhode Island. 20 However, little is known about the relationship between SES and developmental outcomes in children with special needs . Existing evidence most often addresses specific diagnoses during middle childhood, is not representative of all disabilities experienced by children during early childhood, and does not consider the impact of SES outside of the immediate family environment (i.e., neighborhood SES) which has been shown to be a significant influence on developmental outcomes in typically developing children. 8 takes a diagnosis-free, non-categorical approach to childhood disability, and uses population-level data. Methods The project was approved by the Hamilton Integrated Research Ethics Board (no. 2403). Patient and Public Involvement Patients/the public were not involved in the design or conduct of this study. Data Source and Measurement Data for this study come from a Pan-Canadian database on early childhood development. 8 25 The EDI is completed by teachers in the second half of the kindergarten year (the year before Grade 1) -usually between February and March -based on their observations of each child. It is comprised of 103 core items, and domain scores range from 0 to 10, with higher scores indicating better developmental health. The EDI has been validated extensively for both typically-developing children [25][26][27][28][29][30][31][32][33][34] and those with disabilities. 35 The database also includes data on children's age, sex, and whether they have a "special needs" designation. 24 The "special needs" designation is the operational indicator of childhood disability in our study. Definitions of "special needs" are set by each province/territory, 36 37 but they are similar and generally include children with identified health problems, with or without formal medical diagnoses, that impede their ability to learn in a regular classroom. Children encompassed by this definition have a broad range of impairments, varying widely in both type (e.g., physical or mental) and severity (e.g., mild speech impairment to non-verbal). The most common disabilities in this population include learning disabilities and speech impairments, which is consistent with the prevalence of disabilities in children at school entry in developed countries. 38 39 The EDI database has been linked to Canadian Census and Taxfiler data from 2006 and 2005, respectively, using custom-created neighborhood boundaries. 40 Briefly, the neighborhood boundaries were defined using Statistics Canada's dissemination blocks and were created to contain a minimum of 50 and a maximum of 600 valid EDI records per neighborhood. (Table S1). Analysis All data analyses were conducted in SAS TM software using the GLIMMIX procedure. 
41 Given that EDI domain scores are skewed and restricted in range, and that children are clustered within neighborhoods and schools, the data were analyzed using hierarchical generalized linear modeling (HGLM). The fit of a range of distributions and link functions were assessed and it was found that the identify link and gamma distribution produced the best model fit. EDI data were transformed by subtraction from 11 to allow for the gamma distribution to accommodate the left skew. Although children are clustered within two levels (neighborhoods and schools), only neighborhood of residence was included as a cluster variable due to data sparseness. 42 All models were performed using the Laplace approximation that 25 38 Additionally, year of data collection, province, and the interaction between the two were included as categorical variables to control for variations in data collection procedures across time points and provinces. Finally, to evaluate the association between neighborhood-level SES and children's developmental health, the SES index was added in the third model. Random effects of each of the individual predictors were added to the final model one-by-one and the overall improvement in the fit of the model was tested. To assess whether the inclusion of child-level characteristics (age, sex, EFSL status), neighborhood-level SES, and random effects significantly improved model fit, partial likelihood ratio tests were performed, and goodness-of-fit indices (i.e., Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC)) were compared between models. Multicollinearity was tested by examining variance inflation factor (VIF) statistics for age, sex, EFSL status, and the SES index. VIF statistics for province of residence, time of data collection, and their interaction are not included as these were artificially inflated due to having been dummy coded and included as part of a regression model with few predictors. Leverage statistics, along with plots of raw, Pearson, and studentized residuals were used to identify outliers and influential observations. Observations with leverage statistics more than twice the mean of all leverage values were investigated for data entry error. A sensitivity analysis was conducted where observations with outlying studentized residuals, defined as studentized residuals with absolute values greater than two, were excluded in the estimation of the models. Cases with missing 8 data were excluded from the analysis but were compared to those without missing data to ensure no substantial differences in demographic characteristics. Population Characteristics A total of 29,520 children with disabilities were identified in the database. Population characteristics are presented in Table 1. These children resided in 2,016 neighborhoods. Neighborhood characteristics are presented in Table 2. Forty (1.95%) neighborhoods in the database were excluded from the analysis due to not having any children with special needs (Table S2). These neighborhoods included fewer children overall, were of higher SES, and did not proportionally represent Canadian provinces as the majority were in Quebec. Characteristics of children missing any one of the five EDI domain scores are presented in Table S3. Overall, only a small proportion of children (<2%) were missing data on any of the EDI domains and these children did not differ in demographic characteristics from the analytic sample. 
Model Results Regression coefficients, their levels of significance, and goodness-of-fit indices from the final model for each of the EDI domains are presented in Table 3. Additional details on each step of model development along with goodness-of-fit indices are presented in supplementary tables 4 through 8. The gamma distribution with an identity link produced the best fit for most domains, as assessed by AIC and BIC statistics (Table S9). Random effects of predictors did not significantly improve fit and so they were not included in the final model. Model Diagnostics and Sensitivity Analyses Excluding categorical variables, all VIF statistics were below the cut-off of 10 and ranged from 1.05 and 1.10. Studentized residuals were used to identify influential and outlying observations. The results of the sensitivity analysis excluding cases with absolute studentized residual values greater than 2 are presented in Table S10 through 14. The results from this sensitivity analysis were very similar to the results of the primary analysis. Discussion The objective of this investigation was to examine the association between neighborhood-level SES and developmental health in children with disabilities (operationally defined as "special needs" designation) at school entry, in order to determine the importance of contextual factors in predicting outcomes in this population. The results indicate that neighborhood-level SES is a consistent and significant predictor Consistency with previous studies Comparing the magnitude of association between SES and developmental health with previous literature is difficult due to differences in the operationalization of these constructs and differences in analytic methods. Previous studies, mostly conducted with typically developing children, 12 have either explored the direct association between SES and developmental health 8 15-17 44 or investigated mediators of this relationship, including parent/child activities, access to a computer, participation in organized classes and activities, and maternal mental health. [45][46][47] Most of these studies measured SES at the individual family level and all demonstrated a positive association between social and economic variables and developmental health. Among the studies done in typically developing populations, five use EDI outcomes, with four including neighborhood-level measures of SES. 8 The few studies done in children with disabilities also report a positive association between SES and academic and social outcomes. 21-23 49-51 These studies are different from the present investigation in that they only focus on a few high-incidence diagnoses, such as learning disabilities during middle childhood and adolescence and do not measure SES at the neighborhood-level. Strengths and limitations There are several strengths of this study. First, we used population-level data, which made focusing on children with disabilities that only make up a small proportion of the population possible, while also maximizing external validity and statistical power and minimizing potential selection bias. Second, we focused on early childhood, a time that critically impacts children's long-term academic and social trajectory. 52 Third, we applied a non-categorical approach to childhood disabilities which reflects current thinking in the field of child development and findings that diagnostic categories often do not fully reflect the actual abilities and needs of children. 
[53][54][55] Fourth, the EDI has undergone extensive reliability and validity testing, and has been found to be predictive of academic achievement and social functioning throughout early and middle childhood. [25][26][27][28][29][30][31][32][33][34] The psychometric performance of the EDI in children with special needs has also been found similar to its performance in typically developing children. 35 Currently, the EDI is the only available indicator of developmental health that allows examination of variability across Canada at a population-level. Finally, the analytic methods used in this investigation appropriately take into account the skewed distribution and nesting of EDI data, which prevents artificially deflated standard errors and hence inappropriate statistically significant findings. This investigation is also subject to limitations. First, due to the cross-sectional design of this study, causality cannot be established. There is evidence that developmental problems in children may increase parental stress and impact the general socioeconomic wellbeing of families. 56 Second, we used a very broad definition of disability, which is based on the designation of the child by the education system at kindergarten, and hence, children with disabilities who did not have 58 This appears to be confirmed by the remarkable stability of the CanNECD SES Index, the measure used in this study, over the period of five years. 48 Finally, we were unable to control for family-level SES in the models. Thus, it is not possible to determine whether this association is driven by neighborhood or family characteristics. We were also unable to control for specific diagnoses or severity of disabilities that have undoubted impact on child development. Similar investigation should be extended for smaller subgroups of children who share diagnoses or functional impairments. Implications Our findings indicate that the relationship between SES and developmental outcomes also holds for children with disabilities. 8 15-17 44 59 This underscores the potential impact of the early environment of children on their development. Although clinicians often focus on biological factors, such as family history of disabilities and harmful exposures in utero, social influences have commonly been found to be more predictive of long-term developmental and academic outcomes and may be more amenable to change. 44 According to survey data, clinicians are receptive to screening for social determinants of health outside of the purview of clinical care, suggesting that the findings of this investigation are likely to be relevant and acceptable to those in the clinical community. 60 Our findings show that the association between child development and socioeconomic status, which is well-established for typically developing children, also exists for children with disabilities. This highlights the urgency for improving the social and economic context in which children are raised, in addition to targeted interventions delivered at the individual child level. Failure to do so will likely result in further perpetuation of inequities in child development -more so as children with disabilities are already among the most disadvantaged groups globally. 18 61 It remains to be seen whether large-scale policy interventions can help in reducing disparities in this population similarly to other groups. 
62 It is important to consider the findings in context of the availability of support services for children with special needs in Canada prior to school entry. The strategies, programs, and accessibility vary by province/territory, and often within jurisdictions, as municipal and regional health units are often service providers, but generally access is easier for children with a specific diagnosis than for those with unspecified disorders. 54 Introduction To date, associations between a number of health outcomes and a combination of economic, human, and social characteristics, commonly conceptualized as socioeconomic gradients, have been reported, including end-stage renal disease, breast cancer, obesity, and cardiometabolic health. [1][2][3][4][5][6] These studies have mostly focused on chronic conditions in adulthood, with studies on the socioeconomic determinants of child health emerging only more recently. [7][8][9][10][11] A socioeconomic gradient in typically developing children's developmental health has been reported in a number of high-, middle-, and low-income countries, [12][13][14] including Canada. 8 15-17 Additionally, the prevalence of childhood disabilities has been consistently shown to be negatively associated with socioeconomic status (SES). 18 used data from the Canadian National Longitudinal Survey of Children and Youth (NLSCY) for children between 0 and 11 years of age to illustrate an inverse relationship between the prevalence of chronic childhood disabilities and SES. 19 Msall and colleagues (2007) reported a more than three-fold difference in disability rates between children living in distressed vs. advantaged neighborhoods in Rhode Island. 20 However, little is known about the relationship between SES and developmental outcomes in children with special needs.
Existing evidence most often addresses specific diagnoses during middle childhood, is not representative of all disabilities experienced by children during early childhood, and does not consider the impact of SES outside of the immediate family environment (i.e., neighborhood SES) which has been shown to be a significant influence on developmental outcomes in typically developing children. 8 21 22,23 Understanding determinants of developmental health in early childhood can help in identifying groups of children with disabilities that are likely to be most at risk for worse academic and social outcomes later in life. Such identification is useful for policy planning and the provision of health and education services. The objective of this study is to determine if there is a socioeconomic gradient in the developmental health of children with disabilities at school entry. This work extends existing research in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 that it focuses on early childhood, a time at which experiences set the trajectory for future academic and social outcomes, takes a diagnosis-free, non-categorical approach to childhood disability, and uses population-level data. Methods The project was approved by the Hamilton Integrated Research Ethics Board (no. 2403). Patient and Public Involvement Patients/the public were not involved in the design or conduct of this study. Data Source and Measurement Data for this study come from a Pan-Canadian database on early childhood development, which is held at the Offord Centre for Child Studies at McMaster University, a national repository for this database. 8 It is comprised of 103 core items, and domain scores range from 0 to 10, with higher scores indicating better developmental health. Permission to collect EDI data on kindergarten children was obtained from the respective provincial and territorial governments. With the exception of the province of Alberta, which required written consent from parents, data were collected via passive consent. The EDI has been validated extensively for both typically-developing children [26][27][28][29][30][31][32][33][34][35] and those with disabilities. 36 The database also includes data on children's age, sex, and whether they have a "special needs" designation. 24 The "special needs" designation is the operational indicator of childhood disability in our 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 F o r p e e r r e v i e w o n l y 6 study. Definitions of "special needs" are set by each province/territory, 37 38 but they are similar and generally include children with identified health problems, with or without formal medical diagnoses, that impede their ability to learn in a regular classroom. Children encompassed by this definition have a broad range of impairments, varying widely in both type (e.g., physical or mental) and severity (e.g., mild speech impairment to non-verbal). The most common disabilities in this population include learning disabilities and speech impairments, which is consistent with the prevalence of disabilities in children at school entry in developed countries. 39 40 The EDI database has been linked to Canadian Census and Taxfiler data from 2006 and 2005, respectively, using custom-created neighborhood boundaries. 
41 Briefly, the neighborhood boundaries were defined using Statistics Canada's dissemination blocks and were created to contain a minimum of 50 and a maximum of 600 valid EDI records per neighborhood. The criterion of having at least 50 EDI records per neighborhood was based on empirical data on EDI reliability. The custom-created neighborhood boundaries were based on existing administrative and geographic divisions and were created in consultation with provincial/territorial governments, to maximize their meaningfulness. Guhn et al. (2016) provide a more detailed description of the process for neighborhood boundary definition. 41 Census and Taxfiler variables were used to create the Canadian Neighborhoods and Early Child Development (CanNECD) SES index, which includes indicators of education, language/immigration, marital status, wealth, income, dues, social capital, poverty, residential stability, and income inequality (Table S1). Analysis All data analyses were conducted in SAS TM software using the GLIMMIX procedure. 42 Given that EDI domain scores are skewed and restricted in range, and that children are clustered within neighborhoods and schools, the data were analyzed using hierarchical generalized linear modeling (HGLM). The fit of a range of distributions and link functions were assessed and it was found that the identify link and gamma distribution produced the best model fit. EDI data were transformed by subtraction from 11 to two levels (neighborhoods and schools), only neighborhood of residence was included as a cluster variable due to data sparseness. 43 All models were performed using the Laplace approximation that allows estimation of likelihood statistics and has been shown to perform well with regard to accuracy and precision. 44 EDI domain scores were used as the dependent variable. For each EDI domain, the analysis was performed hierarchically in three steps. First, an intercept-only model was constructed. Second, a model with child-level characteristics that have been found to be significant predictors of children's developmental health (i.e., age, sex, and English/French language learner status (EFSL)) as fixed-effects was constructed. 26 39 Additionally, year of data collection, province, and the interaction between the two were included as categorical variables to control for variations in data collection procedures across time points and provinces. Finally, to evaluate the association between neighborhood-level SES and children's developmental health, the SES index was added in the third model. Random effects of each of the individual predictors were added to the final model one-by-one and the overall improvement in the fit of the model was tested. To assess whether the inclusion of child-level characteristics (age, sex, EFSL status), neighborhood-level SES, and random effects significantly improved model fit, partial likelihood ratio tests were performed, and goodness-of-fit indices (i.e., Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC)) were compared between models. Multicollinearity was tested by examining variance inflation factor (VIF) statistics for age, sex, EFSL status, and the SES index. VIF statistics for province of residence, time of data collection, and their interaction are not included as these were artificially inflated due to having been dummy coded and included as part of a regression model with few predictors. 
Leverage statistics, along with plots of raw, Pearson, and studentized residuals were used to identify outliers and influential observations. Observations with leverage statistics more than 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 F o r p e e r r e v i e w o n l y 8 twice the mean of all leverage values were investigated for data entry error. A sensitivity analysis was conducted where observations with outlying studentized residuals, defined as studentized residuals with absolute values greater than two, were excluded in the estimation of the models. Cases with missing data were excluded from the analysis but were compared to those without missing data to ensure no substantial differences in demographic characteristics. Population Characteristics A total of 29,520 children with disabilities were identified in the database. Population characteristics are presented in Table 1. These children resided in 2,016 neighborhoods. Neighborhood characteristics are presented in Table 2. Forty (1.95%) neighborhoods in the database were excluded from the analysis due to not having any children with special needs (Table S2). These neighborhoods included fewer children overall, were of higher SES, and did not proportionally represent Canadian provinces as the majority were in Quebec. Characteristics of children missing any one of the five EDI domain scores are presented in Table S3. Overall, only a small proportion of children (<2%) were missing data on any of the EDI domains and these children did not differ in demographic characteristics from the analytic sample. Model Results Regression coefficients, their levels of significance, and goodness-of-fit indices from the final model for each of the EDI domains are presented in Table 3. Additional details on each step of model development along with goodness-of-fit indices are presented in supplementary tables 4 through 8. The gamma distribution with an identity link produced the best fit for most domains, as assessed by AIC and BIC statistics (Table S9). Random effects of predictors did not significantly improve fit and so they were not included in the final model. Model Diagnostics and Sensitivity Analyses Excluding categorical variables, all VIF statistics were below the cut-off of 10 and ranged from 1.05 and 1.10. Studentized residuals were used to identify influential and outlying observations. The results of the sensitivity analysis excluding cases with absolute studentized residual values greater than 2 are presented in Table S10 Consistency with previous studies Comparing the magnitude of association between SES and developmental health with previous literature is difficult due to differences in the operationalization of these constructs and differences in analytic methods. Previous studies, mostly conducted with typically developing children, 12 have either explored the direct association between SES and developmental health 8 15-17 45 or investigated mediators of this relationship, including parent/child activities, access to a computer, participation in organized classes and activities, and maternal mental health. [46][47][48] Most of these studies measured SES at the individual family level and all demonstrated a positive association between social and economic variables and developmental health. 
Among the studies done in typically developing populations, five use EDI outcomes, with four including neighborhood-level measures of SES. 8 The few studies done in children with disabilities also report a positive association between SES and academic and social outcomes. 21-23 50-52 These studies are different from the present investigation in that they only focus on a few high-incidence diagnoses, such as learning disabilities during middle childhood and adolescence and do not measure SES at the neighborhood-level. Strengths and limitations There are several strengths of this study. First, we used population-level data, which made focusing on children with disabilities that only make up a small proportion of the population possible, while also maximizing external validity and statistical power and minimizing potential selection bias. Second, we focused on early childhood, a time that critically impacts children's long-term academic and social trajectory. 53 Third, we applied a non-categorical approach to childhood disabilities which reflects current thinking in the field of child development and findings that diagnostic categories often do not fully reflect the actual abilities and needs of children. [54][55][56] Fourth, the EDI has undergone extensive reliability and validity testing, and has been found to be predictive of academic achievement and social functioning throughout early and middle childhood. [26][27][28][29][30][31][32][33][34][35] The psychometric performance of the EDI in children with special needs has also been found similar to its performance in typically developing children. 36 Currently, the EDI is the only available indicator of developmental health that allows examination of variability across Canada at a population-level. Finally, the analytic methods used in this investigation appropriately take into account the skewed distribution and nesting of EDI data, which prevents artificially deflated standard errors and hence inappropriate statistically significant findings. This investigation is also subject to limitations. First, due to the cross-sectional design of this study, causality cannot be established. There is evidence that developmental problems in children may increase parental stress and impact the general socioeconomic wellbeing of families. 57 58 Additionally, Second, we used a very broad definition of disability, which is based on the designation of the child by the education system at kindergarten, and hence, children with disabilities who did not have this designation by the education system were excluded. It is possible that a very small minority of children who were not typically developing but did not have this designation were excluded. which are greatly different from their previous ones. 59 This appears to be confirmed by the remarkable stability of the CanNECD SES Index, the measure used in this study, over the period of five years. 49 Finally, we were unable to control for family-level SES in the models. Thus, it is not possible to determine whether this association is driven by neighborhood or family characteristics. We were also unable to control for specific diagnoses or severity of disabilities that have undoubted impact on child development. Similar investigation should be extended for smaller subgroups of children who share diagnoses or functional impairments. Implications Our findings indicate that the relationship between SES and developmental outcomes also holds for children with disabilities. 
8 15-17 45 60 This underscores the potential impact of the early environment of children on their development. Although clinicians often focus on biological factors, such as family history of disabilities and harmful exposures in utero, social influences have commonly been found to be more predictive of long-term developmental and academic outcomes and may be more amenable to change. 45 According to survey data, clinicians are receptive to screening for social determinants of health outside of the purview of clinical care, suggesting that the findings of this investigation are likely to be relevant and acceptable to those in the clinical community. 61 Our findings show that the association between child development and socioeconomic status, which is well-established for typically developing children, also exists for children with disabilities. This highlights the urgency for improving the social and economic context in which children are raised, in addition to targeted interventions delivered at the individual child level. Failure to do so will likely result in further perpetuation of inequities in child development -more so as children with disabilities are already among the most disadvantaged groups globally. 18 62 It remains to be seen whether large-scale policy interventions can help in reducing disparities in this population similarly to other groups. 63 It is important to consider the findings in context of the availability of support services for children with special needs in Canada prior to school entry. The strategies, programs, and accessibility vary by province/territory, and often within jurisdictions, as municipal and regional health units are often service providers, but generally access is easier for children with a specific diagnosis than for those with unspecified disorders. 55 Additional investigations could further strengthen and contextualize these findings. Specifically, establishing the consistency and relative strength of the relationship between SES and developmental outcomes across subgroups of physical, behavioral, and learning disabilities, as well as subgroups based on severity of condition and time of diagnosis, would further untangle the relationship between SES, disabilities, and development, and would be helpful in identifying service provision strategies that are likely to be most successful in improving outcomes.
2020-04-30T09:01:54.085Z
2020-04-01T00:00:00.000
{ "year": 2020, "sha1": "0d7ac3db71aa99bd2548f38aff3abd701a725b04", "oa_license": "CCBYNC", "oa_url": "https://bmjopen.bmj.com/content/bmjopen/10/4/e032396.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "62d4e3126195b98f54b6d3d56b263b1c178eba0e", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
269852752
pes2o/s2orc
v3-fos-license
Overcoming Dimensionality Constraints: A Gershgorin Circle Theorem-Based Feature Extraction for Weighted Laplacian Matrices in Computer Vision Applications In graph theory, the weighted Laplacian matrix is the most utilized technique to interpret the local and global properties of a complex graph structure within computer vision applications. However, with increasing graph nodes, the Laplacian matrix’s dimensionality also increases accordingly. Therefore, there is always the “curse of dimensionality”; In response to this challenge, this paper introduces a new approach to reducing the dimensionality of the weighted Laplacian matrix by utilizing the Gershgorin circle theorem by transforming the weighted Laplacian matrix into a strictly diagonal domain and then estimating rough eigenvalue inclusion of a matrix. The estimated inclusions are represented as reduced features, termed GC features; The proposed Gershgorin circle feature extraction (GCFE) method was evaluated using three publicly accessible computer vision datasets, varying image patch sizes, and three different graph types. The GCFE method was compared with eight distinct studies. The GCFE demonstrated a notable positive Z-score compared to other feature extraction methods such as I-PCA, kernel PCA, and spectral embedding. Specifically, it achieved an average Z-score of 6.953 with the 2D grid graph type and 4.473 with the pairwise graph type, particularly on the E_Balanced dataset. Furthermore, it was observed that while the accuracy of most major feature extraction methods declined with smaller image patch sizes, the GCFE maintained consistent accuracy across all tested image patch sizes. When the GCFE method was applied to the E_MNSIT dataset using the K-NN graph type, the GCFE method confirmed its consistent accuracy performance, evidenced by a low standard deviation (SD) of 0.305. This performance was notably lower compared to other methods like Isomap, which had an SD of 1.665, and LLE, which had an SD of 1.325; The GCFE outperformed most feature extraction methods in terms of classification accuracy and computational efficiency. The GCFE method also requires fewer training parameters for deep-learning models than the traditional weighted Laplacian method, establishing its potential for more effective and efficient feature extraction in computer vision tasks. 
Introduction Over the years, graph theory has expanded and gained significant advancements in various fields, such as chemistry, biology, and computer science [1][2][3].Likewise, in machine learning, many problems can be modeled as a graph, where nodes represent pixels or regions, and edges describe relationships between nodes.The graph-based methods can capture and exploit an image's spatial values and relational structures, offering a rich and flexible framework for image analysis and classification tasks [4].Graph theory allows us to represent any graph in matrix form.The Laplacian matrix is one of the standard matrix forms used in graph representation.It conveniently represents a graph's local and global properties.The Laplacian matrix can be formed in several ways; the most conventional matrix formation is by finding the adjacency matrix and its respective Degree matrix.Note that the Laplacian matrix grows larger in size with the increasing size of the image.This can lead to increasing computational time in postprocessing algorithms.Therefore, feature or dimensionality reduction is often a critical step when working with a large dataset.Additionally, it is vitally important to have a feature extraction algorithm that consumes less computational time. In the past, the Laplacian Eigenmap (LE) was the most utilized nonlinear feature extraction method for the Laplacian matrix [5].In LE, Belkin and Niyogi first compute the eigenvalues of the Laplacian matrix of a graph, and then, corresponding to their eigenvectors, the smallest non-zero eigenvalues are selected.In contracts to the LE method, He and Niyogi [6] proposed an algorithm called Locality Preserving Projections (LPP) that learns the linear mapping of data rather than a nonlinear mapping.Note that the LPP might not perform well on nonlinear structural data. Besides calculating simple eigenvalues for feature extraction, Roweis and Saul [7] introduced Locally Linear Embedding (LLE), a manifold learning algorithm to project highdimensional data into low-dimensional space.The fundamental principle of LLE involves selecting a predetermined number of nearest neighbors for each data point, typically referred to as the "k-number".After identifying these neighbors, LLE calculates the local geometric structures by determining the best linear combination of these k-neighbors to reconstruct each data point.When transforming to a low-dimensional space, LLE ensures that these data points maintain their original proximities, staying as close together (or as far apart) as they were initially, preserving their relative distances and relationships.The drawback of LLE is that the user must define the "k-nearest neighbors" in it, which is not ideal for non-supervised operations.Moreover, the LLE is sensitive to noisy data and outliers. The Isometric Feature Mapping (Isomap) [8] proposed by Tenenbaum et al. is another significant feature extraction method that finds the path with the shortest distance (also called geodesic distance) between all data point pairs in the local neighborhood.The geodesic distances help to capture the intrinsic manifold structure within the data.Similarly, He et al. 
[9] presented the "Laplacian score", where the initial nearest neighbor graph is constructed and converted into a weighted Laplacian matrix.After that, the Laplacian score is calculated by deducting one from the feature variance and dividing by its degree, i.e., the number of connected nodes.Both LPP and LLE require a particular nearest neighbor graph and the Laplacian matrix. Besides the feature extraction methods that are mentioned so far, several other feature extraction methods have been proposed that can be directly implemented on the Laplacian matrix.For instance, the Principal Component Analysis (PCA) [10] is the most commonly used linear feature extraction method in machine learning.In PCA, the data points are transformed orthogonally, and a new set of coordinates is generated, also known as principal components.The users select the number of principal components according to the data point's total variance.However, increasing the number of data points increases the computation time for feature extraction.Another version of the PCA is called "kernel PCA" where the data points are mapped into higher dimensional space using the "kernel function".After the data points are mapped, the principal components are computed.Then, as in standard PCA, the user selects the number of principal components according to the data point's variance.Different types of kernel functions can be used for "kernel PCA", such as the Radial Basis Function (RBF) [11] or the polynomial kernel function [12].Note that the kernel PCA requires more computational time compared to the traditional PCA.Another alternative way to reduce computational time is by taking a smaller size of the dataset and reducing it to lower dimensions.Later, "dot-product" is used with the rest of the dataset to reduce the features.However, it might result in low classification performance.Another alternative way to reduce computational time is to reduce the dataset to smaller batches, such as Incremental PCA (I-PCA) [13], and then apply feature extraction techniques.However, it remains a critical step to determine the optimal batch size that balances computational efficiency with the enhancement of classification performance in feature reduction. Additionally, addressing the computational efficiency in processing high-dimensional matrices remains a considerable challenge in developing feature extraction algorithms.The feature reduction methods reviewed in the preceding sections suggest an increase in computational demands proportional to the expansion of dataset sizes and dimensionalities, as exemplified by a dataset comprising 100,000 images, each with a resolution of 150 × 150 pixels.Motivated by this issue, the current study introduces an innovative approach to mitigate the 'curse of dimensionality' and low computational time without significantly compromising classification accuracy.This paper presents the development and application of a novel dimensionality reduction algorithm that surpasses various established feature extraction techniques in terms of classification accuracy while also demonstrating a noticeable decrease in computational time requirements.Furthermore, this research shows how the performance of these feature extraction algorithms is influenced by variations in image patch sizes. The proposed algorithm utilizes the Gershgorin circle (GC) theorem for dimensionality reduction or feature extraction.The GC theorem was developed by mathematician S. A. 
Gershgorin [14] in 1931.The GC theorem estimates an eigenvalue inclusion of a given square matrix.The GC theorem has been used in several diverse applications, such as stability analysis of nonlinear systems [15], graph sampling in Graph theory [16], and evaluating the stability of power grids [17].Over time, several extensions of the GC theorem have provided a better close estimation of eigenvalue inclusion of matrices [18,19].The GC theorem is more time-efficient in computation than other eigenvalue inclusion methods [19].However, none of the inclusion methods have been used for feature extraction tasks. Once features are effectively extracted through any method, the subsequent pivotal step is to classify them by selecting an appropriate classification algorithm.The extracted features help not only to reduce computation time but also to reduce the number of training parameters that are required for the classification algorithms.In the fields of machine learning (ML) and deep learning (DL), many algorithms have been developed that provide state-of-the-art performance.In the field of ML, algorithms like Support Vector Machines (SVM) [20] and Decision Trees [21] are most commonly used, while in DL, algorithms such as artificial neural networks (ANN) [22] and convolution neural networks (CNN) [23] are some of the few algorithms that are commonly used. This paper introduces a novel feature extraction method for the graph-weighted Laplacian matrix by utilizing a mathematical theorem known as the Gershgorin circle theorem.Figure 1 shows the complete overview process of the proposed GCFE algorithm.The proposed algorithm modifies the weighted Laplacian matrix by converting it into a strictly diagonally dominant matrix termed a modified weighted Laplacian (MWL) matrix.Later, applying the GC theorem, the matrix's P × N × N feature is reduced to P × N × 2 features, where P = no. of patches; N = no. of nodes, or total pixel size, accordingly.Finally, the reduced features are fed into the classification algorithm.For performance comparison, two classification algorithms, 1D-CNN and 2D-CNN, were utilized in this study.Detailed explanations of the proposed method, along with descriptions of the datasets used, are provided in Section 2. Section 3 discusses the results of the proposed methods, focusing on GCFE's computational efficiency and performance accuracy compared to other feature extraction studies.This paper concludes with a summary of the findings and their implications in Section 4. Datasets This study utilizes three well-known and publicly available computer vision datasets with different image types, instances, and features.Table 1 presents the properties of each dataset."The EMNIST" dataset is an extension of the original MNIST dataset that includes letters of the alphabet compared to the traditional digit classes.It was created by the National Institute of Standards and Technology (NIST) Special Database 19 [24].The dataset includes seven sets, with digits, letters, and balanced and unbalanced sets, providing a variety of challenges for machine learning models.Each set has a 28 × 28 grayscale image with different numbers of classes and instances, as shown in Table 1. Cats Vs. 
Dogs (CVD) Dataset
The "Cats vs. Dogs" dataset consists of 25,000 color images of 37 different breeds of dogs and cats. The dataset was created for the 2013 Kaggle competition [25]. All the images are resized to 100 × 100 × 3. While it has different objects in the background images, the target objects are in the foreground.

Malaria Cell (MC) Dataset
The "malaria cell images" dataset was released by the National Institute of Health (NIH) [26], which consists of 27,558 instances equally divided between two classes. The dataset comprises parasitized and uninfected cells from segmented cells' thin blood smear slide images. The dataset has RGB color images with a solid black color on the background. In our study, all images of the data sets were resized to 100 × 100 × 3 size.

Methodology
The proposed feature extraction algorithm consists of four principal steps: image preprocessing, formation of a MWL matrix, GCFE, and classification. Figure 2 illustrates a detailed flowchart of GCFE formation with a sample image of 4 × 4 size.

Preprocessing
The size of the input images was in M × N format, where M represents the number of row pixels and N represents the number of column pixels, respectively. As shown in Figure 2, the sample image has M = 4 and N = 4. All the input image (pixel) intensities are initially scaled down from 0-255 to 0-1 by implementing min-max normalization, which is also known as feature scaling. After scaling, the input images are segmented into smaller patch (P) sizes. The images with smaller patches are called partition matrices with size P × M × N.
The purpose of smaller patch sizes was to examine the performance of feature extraction algorithms on different patch sizes. The criteria for patch size selection were based on the multiplying factors of the input image. For instance, if the input image size is 28 × 28, the multiplying factors are all numbers that divide the image size evenly, such as 2, 4, 7, 14, and 28, accordingly. For example, in Figure 2, the patch size = 2 for the sample image (i.e., 2 × 2); then, the 4 × 4 image is converted into a 4 × 2 × 2 partition matrix. In other words, the image will have 4 sub-patches (P), each with 2 × 2 pixels. Similarly, if the patch size = 28, the output partition matrix would be 1 × 28 × 28. The next step is converting each partition matrix into a graph (G). During image-to-graph conversion, image pixels are converted into a set of vertices or nodes (V) (represented by red circles in Figure 2). The connections between sets of nodes are called edges (E) (represented by green lines in Figure 2). For this operation, any graph conversion method can be used. In this study, three different graph conversion methods have been used. These are called the "2D-grid lattice", "pairwise graph", and "K-nearest neighbors (K-NN) graph", accordingly. The three graph methods are utilized to justify the performance of the proposed feature extraction on different graph structures. In Figure 2, each 2 × 2 partition matrix is converted to a graph using a pairwise graph. Each edge is weighted according to the "Manhattan distance" between any two given nodes. Equation (1) presents the formula to calculate the weighted edge (W ij) of any given pair of nodes in a graph representation of an image.

W ij = |value 1 − value 2| (1)

where W ij = weight of the edge between the i th and j th nodes; value 1 = the pixel value at the coordinates (x 1, y 1) for the i th node inside the image; value 2 = the pixel value at the coordinates (x 2, y 2) for the j th node inside the image.
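To make the preprocessing concrete, the sketch below normalizes an image, cuts it into patches, and builds the weighted adjacency of a pairwise (fully connected) graph using the absolute intensity difference as the Manhattan-distance weight of Equation (1). The function names are ours, and the code assumes the patch size divides the image evenly.

```python
import numpy as np

def to_patches(img, patch):
    """Min-max normalize an M x N image and split it into patch x patch blocks."""
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)   # scale 0-255 -> 0-1
    M, N = img.shape
    blocks = [img[r:r + patch, c:c + patch]
              for r in range(0, M, patch)
              for c in range(0, N, patch)]
    return np.stack(blocks)                         # shape (P, patch, patch)

def pairwise_weighted_adjacency(block):
    """Pairwise graph: every pixel (node) is connected to every other pixel.
    Edge weight W_ij = |value_i - value_j| as in Equation (1)."""
    v = block.ravel()                               # Z node intensities
    A = np.abs(v[:, None] - v[None, :])             # Z x Z weighted adjacency
    np.fill_diagonal(A, 0.0)                        # no self-loops
    return A
```

A 2D-grid or K-NN graph changes only which entries of A are allowed to be non-zero; the weighting rule stays the same.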
Modified Weighted Laplacian (MWL) Matrix
Graphs are generally transformed into matrix forms to facilitate interpretation or processing. The most common form is the weighted or unweighted adjacency matrix. The weighted adjacency matrix (A) is a Z × Z square matrix, where Z represents the total number of nodes. The total number of nodes in matrix A is equal to the number of rows multiplied by the number of columns in the image patch size. The elements of the undirected-graph weighted adjacency matrix are formed using Equation (2).

A ij = W ij if (i, j) ∈ E; A ij = 0 otherwise (2)

where W ij = weight of the edge between nodes i and j; E = the set of edges in the graph such that (i, j) is an edge connecting nodes i and j; A ij = the (i, j) th entry of the weighted adjacency matrix A. In Figure 2, it can be seen that the 4th patch of the sample image is converted into a 4 × 4 weighted adjacency matrix from a 2 × 2 partition matrix pairwise graph. Each entry in the matrix represents the weight of the edge according to Equation (1). Note that the matrix is symmetric for undirected graphs. To construct the modified Laplacian matrix, it is essential to compute the Degree matrix. Typically, the Degree matrix (D) is calculated by taking the row summation of the weighted adjacency matrix. Instead, we computed the elements of the unweighted adjacency matrix (S) using Equation (3). The unweighted adjacency matrix represents the presence or absence of edges between nodes in the graph. The entries of the S matrix are binary, where "1" indicates that there is an edge between nodes i and j, and "0" indicates that there is no edge between nodes i and j. Then, the Degree matrix (D) is computed as described in Equation (4), where each diagonal entry D ii represents the degree of the i th node.

S ij = 1 if (i, j) ∈ E; S ij = 0 otherwise (3)

D ii = ∑ Z j=1 S ij; D ij = 0 for i ≠ j (4)

where S ij = the (i, j) th entry of the unweighted adjacency matrix S; E = the set of edges in the graph, where (i, j) are edge-connecting nodes i and j; ∑ Z j=1 S ij = the summation of the i th row of matrix S, which is the number of edges connected to node i, also known as the degree of the node.

Next, the MWL matrix (L) is computed by taking the difference between the Degree matrix (D) and the weighted adjacency matrix (A), as shown in Equation (5). The elements of matrix L are calculated using Equation (6). This modification helps the MWL matrix remain a strictly diagonally dominant matrix and ensures Positive Semi-Definite (PSD) properties. The final size of the MWL matrix is P × N × N. For instance, the MWL matrix L of the sample image's 4th patch in Figure 2 is strictly diagonally dominant; in this matrix, the absolute summation of the off-diagonal values in each row is less than 3, which corresponds to the diagonal values of L.

L = D − A (5)

L ij = D ii if i = j; L ij = −W ij if (i, j) ∈ E; L ij = 0 otherwise (6)
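A minimal sketch of the MWL construction in Equations (2)-(6) is given below. It assumes the weighted adjacency A from the preprocessing step and treats any non-zero entry of A as an edge; the helper name is ours.

```python
import numpy as np

def modified_weighted_laplacian(A):
    """Build the MWL matrix L = D - A (Equations (2)-(6)).

    A : (Z, Z) symmetric weighted adjacency with a zero diagonal.
    D is the *unweighted* degree matrix (edge counts per node), which keeps
    L diagonally dominant because the edge weights are at most one.
    """
    S = (A > 0).astype(float)      # unweighted adjacency, Eq. (3)
    D = np.diag(S.sum(axis=1))     # degree matrix, Eq. (4)
    return D - A                   # MWL matrix, Eqs. (5)-(6)
```

Because each diagonal entry of L is an edge count while the corresponding off-diagonal row sum adds up weights that are at most one, the diagonal dominates its row, which is the property the Gershgorin step exploits.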
Gershgorin Circle Feature Extraction
The GC theorem estimates the eigenvalue inclusion for a square matrix in the complex plane [14]. The GC theorem states that all the eigenvalues of the square matrix are included in the union of the GCs, or Gershgorin disks. Each eigenvalue inclusion of an L matrix consists of a radius vector R = [r 1 (L), r 2 (L), ..., r n (L)] and a center vector C = [c 1 (L), c 2 (L), ..., c n (L)]. Each GC radius and center vector of a MWL is represented as a reduced feature. The elements of vectors R and C are calculated according to Equations (7) and (8), respectively. The estimated radius of each GC is obtained by the i th row absolute summation of the off-diagonal values of the square matrix L, denoted as r i (L). The center of each GC is calculated by taking the i th row diagonal value of the square matrix L, denoted as c i (L). Furthermore, the representation of the GC features for the 4th patch of the sample image is illustrated in Figure 2.

r i (L) = ∑ j∈V, j≠i |L ij| (7)

c i (L) = L ii (8)

where V = the set of all nodes in the graph, with V = {1, 2, ..., n}; L ij = the element of the MWL matrix at the i th row and j th column; L ii = the diagonal entry of the MWL matrix for the i th node. Additionally, because the MWL matrix is strictly diagonally dominant, all GC features lie on the real axis of the Cartesian plane [19]. Moreover, r i (L) also indicates the estimated lower and upper bounds of the eigenvalues of the square matrix. Finally, the MWL matrix of size P × N × N is reduced to the GC features of size P × N × 2, which is equivalent to a P × {r i (L)} × {c i (L)} matrix size.
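The radius and center extraction of Equations (7) and (8) is a row-wise operation on each patch's L matrix; a short sketch that stacks the two vectors into the P × N × 2 feature block described above is shown below (the function name is ours).

```python
import numpy as np

def gershgorin_features(L_patches):
    """Gershgorin circle features for a stack of MWL matrices.

    L_patches : (P, N, N) array, one MWL matrix per image patch.
    Returns   : (P, N, 2) array holding [r_i(L), c_i(L)] for every node,
                i.e. Equations (7) and (8) applied row by row.
    """
    centers = np.einsum('pii->pi', L_patches)                    # c_i(L) = L_ii
    radii = np.abs(L_patches).sum(axis=2) - np.abs(centers)      # off-diagonal |row| sums
    return np.stack([radii, centers], axis=-1)
```

Every eigenvalue of a patch's L then lies in the union of the intervals [c_i − r_i, c_i + r_i], which is the inclusion that the classifier receives as its reduced feature set.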
Classification
The GCFE algorithm performance was evaluated using the 1D- and 2D-CNN models for feature extraction and classification. Figure 3 shows the complete architecture of both deep-learning models [27]. The model architecture was mostly similar for all the experiments, apart from a few internal layer settings, such as kernel or padding size, which were modified. In the 2D-CNN model (Figure 3a), each convolution layer's kernel size is set to (1 × 3) for GCFE classification and (3 × 3) for the other methods used for comparison, as shown in Table 2. Similarly, each pooling layer's kernel is set to (1 × 2) for GC feature classification, while (2 × 2) is used for the other methods. Since the GCFE method results in two vectors, R and C, for each patch, the kernel sizes for the convolution and pooling layers were changed, as shown in Figure 2. In addition, both vectors for all individual patches of a single image are stacked up in sequence. In the 1D-CNN model (Figure 3b), all kernel and padding sizes of each convolution layer, as well as the pooling layer, are kept the same.

Initially, the GC features are fed into the input layer. The input layer for the 2D-CNN was structured as (batch size, (P × {r i (L)}), (P × {c i (L)}), channels), such as (1000, (1 × 784), (1 × 784), 1). For the 1D-CNN, the input layer was organized as (batch size, (P × {r i (L)} × {c i (L)}), channels), e.g., (1000, (1 × 784 × 784), 1). Following the input layer, the data proceed into a convolution layer. Each convolution layer in both models is configured with 32 filters, also known as feature maps. The feature maps extract different patterns from the input data while training the deep-learning model. Each filter is convolved with the input data to produce a feature map, capturing spatial hierarchies. After each convolution layer, the ReLU (Rectified Linear Unit) activation function is applied. The ReLU helps to handle the vanishing gradient problem by introducing nonlinearity to the model. After ReLU, the data are passed to the pooling layer. Each pooling layer extracts the dominant spatial features from the feature maps and reduces the size of the feature map. The "average pooling" method is employed in the pooling layers of both models, which calculates the average value for each patch on the feature map. Furthermore, each model has two more sets of convolution layer + ReLU + pooling layer connected sequentially. After the last pooling layer of the model, the data are transformed into a 1D vector using the flattening layer, which connects the convolution part of the model to the upcoming fully connected layers.

The Fully Connected Neural Network is built by connecting two dense layers in sequence with a Dropout layer. In a Fully Connected Neural Network, each layer's artificial neurons are fully interconnected with all artificial neurons of the next dense layer. Each dense layer has 512 artificial neurons and utilizes a nonlinear ReLU activation function. The Dropout layer is set to 0.1 (equivalent to 10%) for model overfitting regularization. In the Dropout layer, a fraction of neurons is randomly dropped out (i.e., set to zero) during the training of the model. Finally, the output layer is connected to dense layer 2.
The SoftMax activation function is utilized for the output layer. The SoftMax function normalizes the input data into a probability distribution over the target classes, where the sum of all probabilities equals one. The number of neurons in the output layer varies according to the number of classes in the datasets. "SparseCategoricalCrossentropy" and "Adam" are used as the loss function and optimizer for both CNN models. The detailed mathematical description of CNN can be found in [23].
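For concreteness, a minimal Keras sketch of the 1D-CNN classifier described in this section is given below. The filter counts, average pooling, dense sizes, dropout rate, loss, and optimizer follow the text; the kernel sizes, dropout placement, and input length are illustrative assumptions rather than the exact published configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_1d_cnn(input_len, n_classes):
    """Three conv/ReLU/average-pool stages, flatten, two dense(512) layers
    with 10% dropout, and a softmax output over n_classes."""
    model = tf.keras.Sequential([
        layers.Input(shape=(input_len, 1)),
        layers.Conv1D(32, 3, padding="same", activation="relu"),
        layers.AveragePooling1D(2),
        layers.Conv1D(32, 3, padding="same", activation="relu"),
        layers.AveragePooling1D(2),
        layers.Conv1D(32, 3, padding="same", activation="relu"),
        layers.AveragePooling1D(2),
        layers.Flatten(),
        layers.Dense(512, activation="relu"),
        layers.Dropout(0.1),
        layers.Dense(512, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```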
Results and Discussion
This study compares the proposed method with seven feature reduction methods and one non-feature-reduction algorithm using an identical CNN classification architecture. In addition, the same environment was kept throughout the experiments so that the true performance of the proposed method could be evaluated. All the experiments were executed on a university supercomputer server configured with 24 cores and 24 GB of memory per core. The cross-validation technique is used to validate the model's performance. Table 2 displays a comparative analysis of the proposed method in three different ways, with different datasets, graph types, classification architectures, and assessment metrics.

In the first approach, the GCFE performance was examined on different patch sizes of images using the 2D-CNN, as shown in Figure 4. All the GCFE experiments in Figure 4 were based on a 2D-grid graph. In this experiment, the datasets were split into training, validation, and testing sets, with ratios of 70%, 15%, and 15%, respectively. The CNN models were trained with 10 epochs. The EMNIST datasets were tested with 2, 4, 7, 14, and 28 patch sizes, while the MC and CVD datasets were experimented with 2, 4, 5, 10, and 20 patch sizes. Also, from Figure 4, it can be seen that the GCFE performance across different image patch sizes remains almost consistent, with an average standard deviation of accuracy of ±0.4475. The average GCFE accuracy performance along with standard deviation (SD) for each dataset is 84.53 ± 0.714, 85.01 ± 0.281, 88.30 ± 0.269, 98.86 ± 0.154, 91.18 ± 0.473, 98.12 ± 0.124, 94.54 ± 0.404, and 69.62 ± 0.157 for E_Balanced, E_ByClass, E_ByMerge, E_Digits, E_Letter, E_MNIST, MC, and CVD, respectively (from Figure 4). Besides each model's accuracy, other evaluation metrics, such as the F1 score, Recall, and Precision, were also computed, as illustrated in Figure 4. Notably, the CVD dataset demonstrated lower performance, which can be attributed to its inherent characteristics, specifically the significant presence of extraneous objects in the background as compared to the target foreground objects (cats or dogs). Further analysis revealed that in approach 2, the CVD dataset consistently showed lower accuracy across all feature extraction methods compared to other datasets.

In the second approach, additional experiments were conducted to compare the feature extraction algorithms with different graph types, with their accuracy and computational time presented in Table 3. In addition, all the experiments in Table 3 are performed on datasets with balanced class distributions, which were the E_Balanced, MC, and CVD datasets. Furthermore, the number of epochs and the dataset split ratios were kept similar to approach 1. However, due to memory resource limitations, only smaller image patch sizes were selected. For image-to-graph transformation, GCFE and the other experiments utilize two different graph types: 2D-grid and pairwise graphs, respectively. Each graph type had seven different experiments for comparison: GCFE (2D-CNN); Laplacian (2D-CNN); GCFE (1D-CNN); I-PCA (1D-CNN); kernel-PCA (1D-CNN); spectral embedding (1D-CNN); and Raw Image (2D-CNN), accordingly. In Table 3, the letter "P" in the dataset name represents the patch size. For instance, "E_Balanced_P2" means the E_Balanced dataset with patch size 2. In Table 3, the experiment titled "Raw Image" is conducted to provide an approximate performance assessment for each dataset patch size. "~" = out of memory; "t" = total computational time for all samples; "t*" = total computational time for all samples obtained by extracting a small bundle of samples and taking the dot product with the rest of the samples (all computational times in seconds).

Furthermore, similar to the findings in Figure 4 regarding GCFE performance across different image patch sizes, Table 3 also indicates similar accuracy performance trends for the GCFE method using the 2D-grid graph and the 1D-CNN model. In addition, for the E_Balanced patch size experiments, the accuracy deviated by merely ±0.4497 SD. In contrast, the accuracy performance of other feature extraction methods, such as I-PCA, kernel-PCA, and spectral embedding, increased with increasing patch size. For instance, the accuracy increases from 76.468% to 78.223% with SD ± 1.0044 for I-PCA, 78.138% to 80.580% with SD ± 1.2467 for kernel-PCA, and 75.787% to 77.755% with SD ± 1.0166 for spectral embedding as the patch size varied from 2 and 4 to 7. Also, similar trends can be observed for the pairwise graph type for the E_Balanced dataset.
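The baseline reducers named above are available in scikit-learn; a hedged sketch of how such baselines might be configured to emit a fixed number of components is shown below. The settings are illustrative defaults only; the configurations actually used are described in the next paragraph and in the Supplementary Materials [28].

```python
from sklearn.decomposition import IncrementalPCA, KernelPCA
from sklearn.manifold import SpectralEmbedding

def make_baselines(n_components, batch_size=1000):
    """Baseline feature reducers sized to match a target feature count."""
    return {
        "I-PCA": IncrementalPCA(n_components=n_components, batch_size=batch_size),
        "kernel-PCA": KernelPCA(n_components=n_components, kernel="rbf"),
        "spectral embedding": SpectralEmbedding(n_components=n_components),
    }

# Example: reduce flattened Laplacian rows to the same size as the GC features.
# reducers = make_baselines(n_components=2 * 784)
# X_ipca = reducers["I-PCA"].fit_transform(X_laplacian)
```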
In the "Laplacian" experiment, the standard weighted Laplacian matrix was constructed and applied either as input data for the various feature extraction methods or fed directly into the classification model. The feature extraction algorithms, such as I-PCA, kernel-PCA, and spectral embedding, were configured to produce the same quantity of features as the GCFE output. This configuration ensures a fair performance comparison between the methods. For a detailed examination, configurations and associated code for all methods are available in the Supplementary Materials [28]. Figure 5 shows each feature extraction method's mean accuracy (ACC), utilizing both graph types and the 1D-CNN. It also displays the average Z-score between GCFE and each of the other feature extraction methods. Both the average ACC and the average Z-score were computed for E_Balanced with patches P2, P4, and P7, accordingly. As can be seen in Figure 5a,b, GCFE consistently outperforms all other methods across both graph types, as indicated by its predominantly positive Z-score values. The only exception is the spectral embedding with the pairwise graph, which has a Z-score of −0.01 in Figure 5b. Additionally, the GCFE method's ACC on CVD_P2 and MC_P2 is also much higher compared to the other feature extraction methods for both graph types. For instance, when considering the 2D-grid graph type, the percentage difference between GCFE (1D-CNN) and I-PCA for CVD_P2 and MC_P2 was 5.16% and 37.04%, respectively.

Besides the comparison of feature extraction accuracy performance, the computational time of the feature extraction algorithm is an important criterion. Table 3 also presents the computation time needed to perform feature extraction on all datasets. The presented times are in seconds. Note that Table 3 has two types of time notation: "t" and "t*". The "t" represents the time to compute all dataset instances simultaneously, while "t*" indicates the time taken when processing the dataset in smaller batches, obtaining their reduced features and subsequently applying the dot product to the remaining instances. The GCFE computed 131,600 E_Balanced instances for a 2D-grid graph in approximately 6 s (actual 6.044 s), 6 s (actual 5.868 s), and 16 s with patch sizes 2, 4, and 7, respectively, and took 32 s and 36 s for the 27,558 MC and 25,000 CVD instances with patch size 2. For the pairwise graph, it processed the same E_Balanced instances in approximately 5 s (actual 5.765 s), 5 s (actual 5.714 s), and 8 s, and took 27 s and 33 s for the MC and CVD instances, all with patch size 2. Thus, the computational time for both graph types of GCFE is much lower compared to other methods, such as I-PCA, kernel-PCA, and spectral embedding. However, considering the small-batch and "dot-product" method for feature extraction, the computation time for spectral embedding at patch size 2 (both graph types) was lower than that of GCFE. Still, with increasing image patch size, the computational time of the small-batch and dot-product method increased more than that of the GCFE method; see, for instance, the spectral embedding method with the 2D-grid and pairwise graphs for E_Balanced_P4, E_Balanced_P7, CVD_P2, and MC_P2 in Table 3.

In the third comparison approach, the GCFE was compared with additional feature reduction methods, which included Isomap, LLE, Modified LLE (MLLE) [29], and the Hessian Eigenmap [30]. In this approach, the feature reduction methods were compared by their accuracy and total computational time (from generating the graph to feature reduction), as shown in Figure 6. The K-NN graph type is utilized to convert images to graphs. During these experiments, the "K" value for the graph was selected to match the image's patch size for LLE, MLLE, and Isomap, while for the Hessian Eigenmap, K was set to 300. A total of 300 components (the number of reduced features) were chosen for the LLE, MLLE, and Isomap methods and 20 components for the Hessian Eigenmap method. Similarly to approach 2, the LLE, MLLE, Isomap, and Hessian Eigenmap methods were applied to a small subset of the dataset comprising 1000 samples (100 samples for each E_MNIST class). Later, the rest of the dataset is transformed into reduced features by the dot product between the reduced features of the small subset and the entire dataset. In Figure 6, trends similar to approach 2 were noticed, where the accuracy performance of LLE, MLLE, and Isomap decreased with a decrease in the image patch size, while the GCFE and Hessian Eigenmap did not show major variation in accuracy performance. Moreover, the GCFE outperformed LLE, MLLE, and Isomap in classification accuracy. The GCFE and Hessian Eigenmap methods showed only minor differences in accuracy performance. However, Figure 6 indicates that the Hessian Eigenmap had a higher computational time compared to GCFE. Additionally, LLE and MLLE had lower computational times than GCFE due to the smaller dataset subset selected for feature reduction.

Figure 7 illustrates the number of training parameters of the 2D-CNN model for different patch sizes of the E_Balanced dataset: standard Laplacian (2D-CNN) features and GCFE (2D-CNN) in approach 2. In Figure 7, each circle represents the number of training parameters, scaled down by a factor of 10−6. The number of required training parameters for all standard Laplacian features is 3.51 for patch 2, 3.54 for patch 4, 9.93 for patch 7, 38.84 for patch 14, and 157.65 for patch 28. Comparatively, the GCFE has only 3.5 training parameters, with an average percentage difference of only 0.684% (for the 2D-grid type) and 0.952% (for the pairwise type) compared to the standard weighted Laplacian method. Note that the number of training parameters for GCFE will remain the same for different patch sizes.

Additionally, the results demonstrate that the GCFE method offers robust and reliable feature extraction, with minimal variability in performance, as indicated by its low SD and higher ACC across different datasets and graph types. This consistency is crucial for applications in computer vision, where the precision of feature extraction can significantly impact the accuracy of subsequent tasks such as image classification. In this study, the GCFE method exhibited an average SD of 0.3202 using the 2D-grid graph type across all datasets and an SD of 0.305 using the K-NN graph type on the E_MNIST dataset. These SD results demonstrate the method's consistent ACC performance across different image patch sizes, reducing uncertainty in the GCFE method's performance.
Conclusions
This work demonstrated a new feature extraction method for a weighted Laplacian matrix using the GC theorem. The proposed GCFE method was compared against various feature extraction algorithms while utilizing an identical CNN architecture. With only a few exceptions, the GCFE method outperformed the other feature extraction methods, having a positive Z-score on both graph types. In addition, the performance accuracy of GCFE was consistent across different patch sizes of images. The GCFE method also required a much lower number of training parameters for the classification models, without any substantial change in accuracy, compared to the standard weighted Laplacian method. This makes GCFE a good alternative solution for resource-constrained environments. Beyond accuracy, the GCFE method is computationally time efficient compared to other methods. However, it is essential to consider that GCFE is an irreversible feature reduction technique. This means that once features are extracted, they cannot be transformed back to their original state. The method is also constrained by the inherent limitation that it extracts a fixed number of features from any given image of dimensions P × M × N, ultimately producing a reduced output of P × N × 2, unlike parametric methods such as Principal Component Analysis, in which the number of retained components can be chosen by the user.

Figure 1. An overview of the GCFE methodology from image preprocessing to classification using the modified weighted Laplacian approach.
Figure 2. Flowchart of the proposed matrix transformation (pairwise graph) and GCFE for the sample image.
Figure 3. Representation of the deep-learning architectures utilized in this study for feature classification. (a) 2D-CNN model. (b) 1D-CNN model.
Figure 4. GCFE performance metrics for all datasets with different image patch sizes and the 2D-grid graph, classified using the 2D-CNN.
Figure 5. Comparison of mean ACC performance across feature extraction methods, along with average Z-scores, for two graph types on the E_Balanced dataset. (a) With the 2D-grid graph. (b) With the pairwise graph.
Figure 6. Accuracy vs. total computational time (from graph generation to feature reduction), on a log scale, for various feature reduction methods on the E_MNIST dataset with different image patch sizes.
Figure 7. Number of training parameters (scaled by a factor of 10−6) of the 2D-CNN model for GCFE and standard Laplacian (SLap) features.
Table 1. Different properties of the datasets.
Table 2. Overview of GCFE comparison approaches across diverse datasets and graph structures using different classification architectures and performance metrics.
Table 3. Comparison of the proposed GCFE with other methods by measuring accuracy performance and computational time.
2024-05-19T15:58:04.514Z
2024-05-01T00:00:00.000
{ "year": 2024, "sha1": "e5fb0934895bf49635f069c9645d1263a4f6b085", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2313-433X/10/5/121/pdf?version=1715783282", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ab86348b14d48631c88e7e4b57e26332cd142133", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
36024001
pes2o/s2orc
v3-fos-license
Non-destructive terahertz imaging of illicit drugs using spectral fingerprints The absence of non-destructive inspection techniques for illicit drugs hidden in mail envelopes has resulted in such drugs being smuggled across international borders freely. We have developed a novel basic technology for terahertz imaging, which allows detection and identification of drugs concealed in envelopes, by introducing the component spatial pattern analysis. The spatial distributions of the targets are obtained from terahertz multispectral transillumination images, using absorption spectra measured with a tunable terahertz-wave source. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference. 2003 Optical Society of America OCIS codes: (110.2970) Image detection systems; (120.4290) Nondestructive testing; (190.4410) Nonlinear optics, parametric processes; (300.6270) Spectroscopy, far infrared References and links 1. D. M. Mittleman, G. Gupta, B. Neelamani, R. G. Baraniuk, J. V. Rudd, and M. Koch, “Recent advantages in terahertz imaging,” Appl. Phys. B 68, 1085-1094 (1999). 2. B. Ferguson and X. C. Zhang, “Materials for terahertz science and technology,” Nature Materials 1, 26-33 (2002). 3. P. H. Siegel, “Terahertz technology,” IEEE T. Microw. Theory Tech. 50, 910-928 (2002). 4. J. E. Parmeter, D. W. Murray, D. W. Hannum, Guide for the selection of drug detectors for law enforcement applications, NIJ Guide 601-00, (National Institute of Justice, Washington, 2000). 5. A. Fitzgerald, and J. Chamberlain, “An introduction to medical imaging with coherent terahertz frequency radiation,” Phys. Med. Biol. 47, R67-R84 (2002). 6. T. Löffler, T. Bauer, K. Siebert, H. G. Roskos, A. Fitzgerald and S. Czasch, “Terahertz dark-field imaging of biomedical tissue,” Opt. Express 9, 616-621 (2001), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-9-12-616 7. S. Wang, B. Ferguson, C. Mannella, D. Gray, D. Abbott and X.C Zhang, ”Powder Detection Using THz Imaging,” in OSA Trends in Optics and Photonics (TOPS) vol. 73, Conference on Lasers and ElectroOptics, OSA Technical Digest, Postconference Edition (Optical Society of America, Washington DC, 2002), pp. 132-133. 8. Y. Watanabe, K. Kawase, T. Ikari, H. Ito, Y. Ishikawa and H.Minamide, “Component spatial pattern analysis of chemicals using terahertz spectroscopic imaging,” Appl. Phys. Lett., 83, 800-802 (2003). 9. K. Kawase, J. Shikata and H. Ito, “Terahertz wave parametric source,” J. Phys. D: Appl. Phys. 35, R1-R14 (2002). 10. S. Kawata, K. Sasaki, and S. Minami, “Component analysis of spatial and spectral patterns in multispectral images. I. Basis,” J. Opt. Soc. Am. A 4, 2101-2106 (1987). (C) 2003 OSA 6 October 2003 / Vol. 11, No. 
Introduction
The terahertz (THz) waves, categorized between millimeter radio waves and far infrared light waves, exhibit properties of both sides of the electromagnetic spectrum. Like radio waves, they can be transmitted through a wide variety of substances such as paper, cloth, ceramics, plastics, wood, bone, fat, various powders, dried food, and so on. In addition, like light waves, they can easily be propagated through space, reflected, focused, and refracted using THz optics. Furthermore, the short wavelength (several hundred µm), much shorter than that of usual radio waves, allows for a spatial resolution which is sufficient in many imaging applications [1][2][3]. The range of potential applications is likely to expand even further with the increased availability of absorption spectra (i.e., fingerprint spectra) peculiar to specific chemicals, including vitamins, sugars, pharmaceuticals, and agricultural chemicals, discovered since last year in the THz-wave region.

The absence of non-destructive inspection techniques for illicit drugs hidden in mail envelopes has resulted in such drugs being not only smuggled across international borders but also transported from one jurisdiction to another within a country with surprising ease [4]. The situation must also be attributed to the inconvenience of having to obtain a search warrant to examine the contents every time the need arises. A majority of the legal systems in the world prohibit private letters, whether they be suspected or otherwise, from being examined without a search warrant. There exist several inspection techniques, such as passing the mail through an x-ray scanner, having it sniffed by a trained dog, or swiping its outside with a trace detection system. However, the ability of x-ray scanners is limited to identifying the shape of a vinyl plastic bag or a tablet, and not the type of the drug, providing insufficient grounds for opening the envelope for examination. Trace detection and canine detection, on the other hand, can only be effective if there are detectable signs outside the envelope, such as a scent or trace amounts of the concealed drug. Development is also underway in the field of millimeter-wave imaging, but the lack of fingerprint spectra in this region makes the identification of the drug type difficult. Another limitation is the low spatial resolution of several millimeters. As for infrared imaging, where chemical fingerprint spectra do exist, the high degree of absorption and scattering in paper prevents an accurate measurement.
In contrast, the THz wave is suitable for drug detection purposes, being able to screen the contents of envelopes, and our measurement results have proven the existence of fingerprint spectra peculiar to illicit drugs in the THz region. Having spent the past ten years developing a widely tunable THz-wave parametric oscillator (TPO) that is both compact and easy to use, we have demonstrated a THz spectroscopic imaging system by introducing the component pattern analysis method. The spatial distribution of the drugs inside the envelope was extracted from the multispectral images using the absorption spectra. This procedure takes advantage of two key elements, namely the TPO's wide tunability and the fingerprint spectra. Although there have been several reports on multispectral THz imaging [5,6] and powder detection in envelopes [7], they did not utilize spectral fingerprints of the target. Therefore, it has been difficult for them to determine the specific kinds of drugs in the envelope.

Experimental methods
The THz spectroscopic imaging system [8] consists of a Q-switched Nd:YAG laser, a TPO [9], imaging optics, an xy scanning stage, a detector, a lock-in amplifier, and a personal computer, as shown in Fig. 1. The Nd:YAG laser (wavelength 1.064 µm) pumps a nonlinear optical crystal MgO:LiNbO3 (length 65 mm), simultaneously generating a THz wave (wavelength 120-300 µm, frequency 1-2.5 THz) and an idler wave (wavelength 1.068-1.074 µm) by non-collinear phase-matched parametric oscillation. The frequency tuning is achieved by rotating the oscillator slightly so that the phase-matching angle is changed. The generated THz wave is focused on the target by a polyolefin plastic lens (focal length 50 mm), producing a focal spot of about 0.5 mm in diameter. The target is raster scanned by the stage over a 20 × 38 mm² area, which corresponds to 40 × 76 = 3040 pixels. The measurement time was approximately ten minutes. The transmitted THz wave is projected onto a pyroelectric or a Si-bolometer detector. The signal is separated from noise with a lock-in amplifier synchronized on the laser pulse frequency. The stability of this imaging system was RMS = 2.3%. In this condition, the required minimum absorption per path was approximately 6.9% (3σ level). The calculation procedure, component spatial pattern analysis [10], is based on the following principle: Consider that a target, which is composed of M substances having different spectral characteristics, is imaged at N frequencies. Each image is composed of L pixels, which are thought of as being rearranged one-dimensionally, for ease of calculation. The transmitted intensity can be described by the following linear matrix equation:

[I] = [S][P] (1)

where [I] is an N × L matrix of the N recorded images whose row vectors each represent an L-pixel image taken at an individual frequency, [S] is an N × M matrix of the measured spectra of the M drugs whose column vectors represent the spectrum data set of each substance at the N frequencies, and [P] is an M × L matrix of the spatial distribution of the M drugs whose row vectors represent the spatial pattern of each drug with L pixels. For the case when N = M, [P] is simply given by [P] = [S]^-1 [I]. For the case when N > M, [P] can be solved using a least-squares method as

[P] = ([S]^t [S])^-1 [S]^t [I] (2)

where t denotes the transpose. By this means, the spatial distribution of a specific component in a sample made up of several chemicals can be imaged. As the incident THz waves are mainly attenuated by absorption in the sample drugs, the transmitted intensity satisfies the Lambert-Beer law. Therefore, the detected intensity is first divided by the standard intensity of the THz wave, and then the logarithm of this ratio is taken for the elements of matrix [I] in order to satisfy the linearity required by Eq. (1).
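A compact numerical sketch of the component spatial pattern analysis is given below; the array shapes and the −ln(I_t/I_0) preprocessing follow the description above, while the function and variable names are ours rather than the authors'.

```python
import numpy as np

def component_patterns(images, I0, spectra):
    """Least-squares component spatial pattern analysis, Eqs. (1)-(2).

    images  : (N, H, W) transmitted-intensity images at N frequencies
    I0      : (N,) intensity transmitted through the envelope alone
    spectra : (N, M) absorption values of the M target substances
    Returns : (M, H, W) spatial pattern of each substance
    """
    N, H, W = images.shape
    I = -np.log(images.reshape(N, -1) / I0[:, None])   # elements of [I], N x L
    P, *_ = np.linalg.lstsq(spectra, I, rcond=None)    # [P] = (S^t S)^-1 S^t [I]
    return P.reshape(-1, H, W)
```

Solving with lstsq is numerically equivalent to the normal-equation form of Eq. (2) but avoids explicitly inverting S^t S.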
Experimental results

As samples we chose for this experiment three drugs: methamphetamine (d-methamphetamine hydrochloride, more than 98% purity), currently the most widely consumed drug of abuse in Japan; MDMA (dl-3,4-methylenedioxymethamphetamine hydrochloride, 67% purity), another drug of abuse becoming widespread on a global scale; and aspirin (100% purity) as a reference. As shown in Fig. 2, ~20 mg of each substance were placed in a small 10 × 10 mm polyethylene bag. The three bags were then placed inside an ordinary airmail-type envelope. THz images of the rectangular area indicated by the yellow line in Fig. 2 were captured. By changing the frequency emitted by the TPO within the 1.3 to 2.0 THz range, we obtained N = 7 multispectral images as shown in Fig. 3, generating a matrix [I] of dimensions N × L = 7 × 3040. In Fig. 3, the scale of the image, -ln(It/I0), is the logarithm of the transmitted THz-wave intensity It divided by the intensity I0 of the THz wave that was transmitted through the envelope only. This means that the greater the absorption, the brighter the shades. Aspirin appears darker at this stage due to its relatively low absorption. Subsequent component pattern analysis, however, will cancel the effect of absorption intensity, leaving only the spatial pattern. Therefore, the low absorption will not interfere with its detection.

The absorption spectra of the three drugs were measured with the same TPO system, as shown in Fig. 4. The corresponding absorption intensity values at the seven frequencies were extracted to obtain the matrix [S] of dimensions N × M = 7 × 3. Although the spectra of methamphetamine and MDMA are similar, the difference between them enabled us to distinguish between the two using the component pattern analysis method.

By substituting [I] and [S] thus obtained into Eq. (2), the spatial pattern [P], represented by a matrix of dimensions M × L = 3 × 3040, was calculated. Figure 5 shows the result of extracting the three components from this matrix, with each image corresponding to one of the sample drugs. As is evident from these images, the three drugs have been clearly distinguished and the corresponding spatial patterns obtained. A ROI (region of interest) was set in each area of the component patterns in Fig. 5, and we then took the average of the tone in each ROI. The ROI was a square of 20 × 20 pixels, which is similar to the size of a plastic bag. The averages for MDMA, aspirin, and methamphetamine were 122, 119 and 138, respectively. The errors were less than ±10%, which is sufficient for drug detection purposes.
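As a small illustration of the ROI scoring just described, the sketch below averages the tone inside a 20 × 20 pixel region of one component map; the map itself, the ROI position and the tone scale are placeholders, not measured data.

```python
# Sketch of ROI averaging on one component map (reshaped to the 40 x 76 pixel raster).
import numpy as np

component_map = np.random.default_rng(1).uniform(0, 255, size=(40, 76))  # placeholder tone values

def roi_mean(img, top, left, size=20):
    """Mean tone inside a size x size ROI (the paper uses 20 x 20 pixels, roughly one bag)."""
    return img[top:top + size, left:left + size].mean()

print(roi_mean(component_map, top=10, left=5))
```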
Conclusion

In conclusion, the non-destructive detection of illicit as well as legally available drugs hidden in envelopes was shown to be possible at a concentration of ~20 mg/cm². In addition to the results reported in this article, we have also verified experimentally that we can isolate and extract the spatial patterns of each component by using the above method even in cases where the target is a mixture of multiple drugs or arranged in layers. Aside from the three drugs chosen for the experiments reported here, we also confirmed by conducting spectroscopic measurements that the component pattern analysis method can be applied in the same way to other drugs, including d-amphetamine (stimulant drug), l-ephedrine and l-methylephedrine (stimulant raw materials), l-methamphetamine (ingredient of Vicks Inhaler), acetaminophen (Tylenol), and caffeine. We plan to conduct non-destructive imaging on these drugs, while exploring the possibility of applying our technology to screening packages, security frisking, quality inspection of pharmaceuticals, and pathologic diagnosis. A joint project is currently underway to develop a high-speed THz-spectroscopic imaging system that uses CCD electro-optic sampling, which is expected to drastically shorten the measurement time of our method.

Fig. 2. View of the samples. The small polyethylene bags contain, from left to right: MDMA, aspirin, and methamphetamine. The bags were placed inside the envelope during imaging. The area indicated by the yellow line represents the imaging target, 20 × 38 mm in size. Since methamphetamine and aspirin are similar in appearance, we used a slightly longer bag for the latter to avoid confusion.

Fig. 4. Absorption spectra of MDMA (yellow line), methamphetamine (red line), and aspirin (blue line). The corresponding absorption intensity values at the seven frequencies were extracted to obtain the matrix [S].
2017-11-03T08:40:27.176Z
2003-10-06T00:00:00.000
{ "year": 2003, "sha1": "2513e38647943ba558172969f8c0a4121fda1371", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/oe.11.002549", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "2513e38647943ba558172969f8c0a4121fda1371", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
14304401
pes2o/s2orc
v3-fos-license
A new equation for the mid-plane potential of power law discs. II. Exact solutions and approximate formulae The first-order ordinary differential equation (ODE) that describes the mid-plane gravitational potential in flat finite size discs in which the surface density is a power-law function of the radius R with exponent s (Hur\'e&Hersant 2007) is solved exactly in terms of infinite series. The formal solution of the ODE is derived and then converted into a series representation by expanding the elliptic integral of the first kind over its modulus before analytical integration. Inside the disc, the gravitational potential consists of three terms: a power law of radius R with index 1+s, and two infinite series of the variables R and 1/R. The convergence of the series can be accelerated, enabling the construction of reliable approximations. At the lowest-order, the potential inside large astrophysical discs (s ~ -1.5 +/- 1) is described by a very simple formula whose accuracy (a few percent typically) is easily increased by considering successive orders through a recurrence. A basic algorithm is given. Applications concern all theoretical models and numerical simulations where the influence of disc gravity must be checked and/or reliably taken into account. Introduction Gaseous discs in which the main physical quantities (density, pressure, temperature, thickness, velocity) scale with cylindrical radius as power laws, i.e. "power-law discs", represent an important class of theoretical systems. These are used customary to model accretion in evolved binaries (Shakura & Sunyaev 1973;Pringle 1981), circumstellar matter (Dubrulle 1992;Edgar 2007), the environment of massive black holes inside active galactic nuclei (Collin-Souffrin & Dumont 1990;Huré 1998;Semerák 2004) or even the stellar component of some galaxies (Evans 1994;Zhao et al. 1999). In most applications however, power-law discs are truncated either to avoid diverging values at the disc centre (such as density, mass) or in attempting to reproduce the properties of observed discs of finite extension and mass. Although self-similarity is not compatible with the presence of edges, it is generally considered that power laws offer a good description of disc properties in some regions (far from the edges). Note that the presence of sharp edges can be misleading when interpreting observational data (e.g. Hughes et al. 2008). In general, the surface density Σ in the outer parts of discs is a decreasing function of the cylindrical radius R. Depending on the models, hypotheses, and objects, we have, for instance, Σ ∝ R −3/5 in binaries (Shakura & Sunyaev 1973), Σ ∝ R ±9/20 in active Send offprint requests to: Jean-Marc Huré ⋆ Present address : La Maurellerie, 37290 Bossay-sur-Claise galactic nuclei (Collin-Souffrin & Dumont 1990), Σ ∝ R −1 for a Mestel disc (Mestel 1963), or Σ ∝ R −3/2 in circumstellar discs (Piétu et al. 2007). In the context of stationary viscous α-discs, a wide range of power-law exponents is allowed since the temperature T , Σ, and R satisfy the condition (e.g. Pringle 1981): ΣT R 3/2 = cst, while it is Σ ∝ R −1/2 in β-discs (Huré et al. 2001). The calculus of the gravitation potential of finite-size, power-law discs has received little attention yet. Several reasons can be put forward. Solving the Poisson equation or computing the integral of the potential is not a trivial procedure, especially in the presence of edges. 
It is generally believed that gravity due to low mass discs is unimportant compared with that of a central proto-star or black hole, and cannot be probed (see however Baruteau & Masset 2008). Many studies employ the multi-pole expansion which is known to converge too slowly inside sources to be efficient for the numerical applications (e.g. Clement 1974;Stone & Norman 1992). Huré & Hersant (2007) demonstrated that the mid-plane potential of flat power-law discs obeys an inhomogeneous first-order Ordinary Differential Equation (ODE). In this second paper, we discuss the exact solutions of this ODE for the entire physical range (outside and inside the disc) in terms of infinite series. In particular, it is shown that the mid-plane potential is a combination of a power law for the radius R and two series of the variables R and 1/R. Since these series converge rapidly inside large discs, it is possible to derive reliable approximations by truncating the series at low orders. This paper is organised as follows. The ODE for the potential is briefly recalled in Sect. 2 and its formal solution is derived in Sect. 3. In Sect. 4, we express the potential at the two disc edges and consider a few special cases. The inside and outside solutions in the form of series are presented in Sect. 5. In Sect. 6, we analyse the potential in the disc inside in detail, and in particular, the power-law contribution. Since all series involved converge rapidly, we are able to derive reliable approximations for the potential; this is done in Sect. 7. We discuss in Sect. 8 the case of discs with no inner and/or outer edge. The paper ends with a few concluding remarks. 2. The mid-plane potential in power-law discs from an Ordinary Differential Equation (ODE) Following Huré & Hersant (2007) (hereafter Paper I), the mid-plane potential ψ due to a flat power-law disc satisfies the ODE: where ̟ = R/a out is the cylindrical radius in units of the radius a out of the disc outer edge, s is the power-law index of the surface density Σ, namely (generally, s < 0 in astrophysical discs): and S(̟) is the piecewise defined function. Depending on the position in the disc (see Fig. 1), we have: where ∆ = a in /a out < 1 is the axis ratio, a in is the radius of the inner edge, ψ out = 2πGΣ out a out is a positive constant, Σ out is the surface density at the disc outer edge, K is the complete elliptic integral of the first kind: and G is the gravitation constant. The above ODE can in principle be solved in the entire radial domain since boundary conditions (both ψ and the associated acceleration −d ̟ ψ) are known precisely at ̟ = 0 and at ̟ = ∞ for power law distributions. Although S is singular at the two disc edges (i.e. for ̟ ∈ {∆, 1}), Eq. (1) is far more tractable in computing ψ than the integral form (e.g. Durand 1953): whose integrand is logarithmically singular everywhere inside the disc, or than the Poisson equation, which involves vertical gradients. Formal solution of the ODE A formal solution of Eq. (1) is found by setting (e.g. Rybicki & Lightman 1979): The exact derivative ofψ is: where we recognise, inside brackets, the function S. Therefore, we have: whose formal solution forψ is of the form: Back-substitutingψ andS from Eqs.(6) and (7), we find the general expression for the mid-plane potential: This solution is fully determined in the entire spatial domain or part of it as soon as the potential is known at a given normalised radius ̟ 0 . 
We observe that, for s = −1, ψ(̟) is the mixture of a power law of the radius (the first term in the right-hand-side) with exponent (e.g. Bisnovatyi-Kogan 1975;Evans & Read 1998): and a complicated function of the radius R (the definite integral). In the following, we shall analyse Eq. (11) analytically in terms of infinite series by considering in Eq. (3) the expansion of K over its modulus (e.g. Gradshteyn & Ryzhik 1965): γ n x 2n with 0 ≤ x ≤ 1, Outer edge At the outer edge, the potential is calculated in a similar manner, but using the variable u = a/R ≤ 1. For ̟ = 1, Eq. (5) writes 1 : Replacing K(u) by its series representation yields: 1 In contrast with the gravitational acceleration (the gradient of ψ), the potential is generally finite at the edges. 2 In particular, we use the transformation: where and by assuming ∆ = 0 (see below). As for ψ(∆) and for the same reasons, ψ(1) is also a converging series. Special values of s If the power law exponent s is such that: at a certain rank n ≡ n ∆ , then ψ(∆) must be, in practice, written in a slightly different form. This happens for s ∈ E ∆ with: Since lim for x > 0 and any q, we have: Then, if s ∈ E ∆ , the potential at the disc inner edge is given by: In a similar way, if s is such that: 2n + s + 2 = 0 (26) at a rank n ≡ n 1 , then one term in Eq. (19) must be treated separately. This happens for s ∈ E 1 where: From Eq. (23), we have: We note that s cannot belong simultaneously to set E ∆ and to set E 1 . Cases with ∆ = 0 The case ∆ = 0 occurs i) when the disc has no inner hole (i.e. a in = 0) but finite size, and/or ii) when the disc has an inner edge but is infinitely extended (i.e. a out → ∞). In the first case, ψ(∆) (denoted ψ c in Paper I) becomes the potential at the origin of coordinates and has infinite value as soon as 1 + s ≤ 0. In the second case, ψ(1) represents the gravitational potential at infinity. It diverges if s + 1 ≥ 0. Figure 3 summarises the ranges of s where the edge surface density, edge potential, and total disc mass are finite. We note that only discs having either i) a in = 0, a out = ∞ with s > 0, or ii) a in > 0, a out → ∞ with s ≤ −2 are physically meaningful since they are characterised by a finite surface density, a finite total mass and a finite potential. Mestel discs do not belong to these categories (e.g. Mestel 1963;Hunter et al. 1984 Solution of the ODE in the form of series We see from Eqs. (3) and (13) that the function S can be easily expressed as a series. We have: By inserting this general expression into Eq. (11), we find for s / ∈ E ∆ ∪ E 1 and ∆ = 0: where the coefficients a n and b n are respectively given by: Since ψ(∆) and ψ(1) are available (see Sect. 4), we use ̟ 0 = ∆ and ̟ 0 = 1 to simplify Eq. (30). Using Eqs. (16) and (19), we thus have: where: with: We note that A is a function of s. Figure 4 compares the total potential ψ(̟) with the power law contribution (i.e. the term A̟ 1+s ) for three typical values of the exponent s in a disc of axis ratio ∆ = 0.1. We clearly see that, in a finite size disc: i) the gravitational potential is not a power-law function of the radius, ii) a power-law contribution is present inside the disc only, and iii) the power law is not the dominant part of the potential. As expected, spatial self-similarity is broken due to edges. We note that, outside the disc (i.e. in regions I and III), the series coincides with the multi-pole expansions. 
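The series representation discussed above rests on expanding the complete elliptic integral K over its modulus, so a short numerical sketch of that expansion may be useful. The coefficient formula used here, K(x) = (π/2) Σ_n γ_n x^(2n) with γ_n = [(2n)!/(2^(2n)(n!)²)]², is the standard Gradshteyn & Ryzhik form; it is stated as an assumption because the paper's own normalisation of γ_n is not reproduced in the extracted text, and SciPy is used only as a cross-check.

```python
# Sketch: series expansion of the complete elliptic integral of the first kind,
# K(x) = (pi/2) * sum_n gamma_n * x^(2n), gamma_n = [(2n)! / (2^(2n) (n!)^2)]^2
# (standard Gradshteyn & Ryzhik form, assumed here; see the note above).
from math import comb, pi
from scipy.special import ellipk   # note: SciPy's ellipk takes the parameter m = x**2

def gamma_n(n):
    c = comb(2 * n, n) / 4 ** n    # equals (2n)! / (2^(2n) * (n!)^2)
    return c * c

def K_series(x, terms=200):
    return 0.5 * pi * sum(gamma_n(n) * x ** (2 * n) for n in range(terms))

x = 0.5
print(K_series(x), ellipk(x ** 2))   # both ~1.6858; convergence degrades as x -> 1
```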
Potential inside the disc The determination of the gravitational potential is usually straightforward outside the distribution where different kinds of expansions are efficient in practice (Kellogg 1929). In contrast, it is problematic inside matter where the classical multi-pole approach fails to converge rapidly (e.g. Clement 1974;Stone & Norman 1992). Converging series In region II, we have b n = J n ∆ 2n+s+2 . If we set: Figure 2 shows the term a n versus n for s = −1.5 (in this case, I n = J n ). As a consequence, the three series involved in Eq. (33) converge rapidly since i) I n and J n both vary as 1/n 2 at large n, ii) all terms are positive for large n, and iii) ̟ ≤ 1 and X ≤ 1. This is interesting for the truncation of series and the construction of reliable approximations (see Sect. 7). (Hayashi 1981), and the α and β-viscosity discs where s ≈ −0.6 depending on models (Shakura & Sunyaev 1973;Collin-Souffrin & Dumont 1990;Dubrulle 1992;Richard & Zahn 1999). The coefficient A The coefficient A is plotted in Fig. 5. It is symmetric with respect to s = − 3 2 since: c n γ n = − 4(4n + 1) (4n − 2s ′ + 1)(4n + 2s ′ + 1) , where s ′ = s + 3 2 . For certain integer values of |s|, A is strictly zero. This is in particular the case for s = −3 and s = 0 (see Appendix A). As Eq. (34) shows, A rises as soon as the exponent s is such that either 2n − s − 1 or 2n + s + 2 is small (see Sect. 4). Even, if n = n ∆ (or n = n 1 ), the coefficient A apparently contains a singular term, namely I n∆ ̟ 1+s (resp. J n1 ̟ 1+s ); however, this singularity exactly cancels with the term a n∆ ̟ 2n∆ (resp. b n1 ̟ −2n1−1 ). In practice, when s ∈ E ∆ , Eq. (33) is no longer valid. Instead, we have: where Similarly, if s ∈ E 1 , the potential writes where A few values of A, A ∆ , and A 1 are listed in Appendix B. We note that A ∆ (or A 1 ) differs only from A by the term I n∆ (resp. J n1 ). Convergence acceleration Once s is given, the coefficients A, A ∆ , and A 1 can be easily determined at the required accuracy. It is also possible to improve the convergence rate of the associated series. This accelerates the computation of the coefficients and makes their dependence with the exponent s more explicit. Convergence acceleration is performed by using the properties of the definite integrals of the complete elliptic integrals of the first and second kinds. The demonstration reported in the Appendix C yields, for s / ∈ E ∆ ∪ E 1 : where and C is the Catalan constant (half the area under the function K). Numerically, the constant in A is 1 − 6C π ≈ −1.4310555380011220. (45) Figure 2 shows the coefficient d n for s = −1.5 (in this case, d n = e n ). It follows that, for large n, d n and e n behave like ∼ 1/n 4 asymptotically, and so A approaches its converged value far more rapidly than by means of Eq. (34). For s ∈ E ∆ , we then find: instead of Eq. (40), and for s ∈ E 1 , this is: instead of Eq. (42). Depending on the exponent s of interest, a good approximation for A, A ∆ or A 1 can be obtained by considering only the largest terms in the sum, i.e. all terms up to the rank n ≈ 1 2 (1 + s) or n ≈ − 1 2 (s+ 2). For astrophysical discs, s is around −1 meaning that we retain only the first term in Eq. (43). We then find the following approximation: whose accuracy is better than 3% for −2.7 s −0.3 as Fig. 6 shows. For s ∈ {−2, −1}, we find from Eqs. (46) and (47): which is in good agreement with the converged value given in Table B. 
Approximate formulae inside the disc Equations (33), (39) and (41) contain three rapidly converging series that can be truncated to derive reliable approximations for the potential. For s ≈ −1, only a few terms can be considered (see Sect. 6). Although many truncations are possible, we have noticed that the most accurate approximations of ψ for discs 3 are obtained provided the coefficient A (or A ∆ or A 1 depending on s) takes its converged value. Under these circumstances, the N -order approximation for the potential in region II becomes: which is, in its asymptotic limit: Zero-order approximation As argued in Sect. 6.3, a reliable formula for the potential in astrophysical discs (for which s ≈ −1.5 ± 1) is obtained by considering only the terms a 0 and b 0 /̟, in addition to 3 Here, discs are supposed to be objects of axis ratio ∆ 0.1. the power law. At the lowest order, we thus have 4 : where, in this case, A 1 = A ∆ ≈ −1.386 (see Appendix B). Figure 7 compares this zero-order approximation with the exact potential for typical disc parameters. It follows that the relative deviation |1 − ψ app. /ψ| does not exceed 10%, the deviation being the largest close to the edges. It is worth noting that the accuracy remains of the same order if coefficient A is determined by Eq. (48). This is convenient when the explicit dependence of ψ on s is required. Under this hypothesis, the potential becomes (see note 4): (53) Figure 8 shows the accuracy of this formula in the (̟, s)−plane. We see that the relative deviation of ∼ 10% observed previously for s = −1.5 holds globally for s roughly in the range 5 [−3, 0]. This agrees with the fact that Eq. (48) produces values of A within a few percents for this range of exponents. The deviation can be reduced at the inner and outer edges provided additional terms are included (see below). Higher orders If necessary, more accurate expressions are obtained by accounting gradually for following terms (each acting as a smaller and smaller correction). For N = 1, we have: and for N = 2, this is: and so on. We note that Eq. (33) is particularly well suited to numerical computation since ψ can be determined by means of a recurrence procedure. A possible algorithm (not including the treatment of singular cases where s ∈ E ∆ ∪E 1 ) is proposed in the Appendix D. Finite disc without inner hole If the disc has no inner edge but a finite size (i.e. a in = 0 and a out = ∞), then the ODE is (see Huré & Hersant 2007): It can be verified that the solution is still described by Eqs. (33) and (43), but the coefficients a n and b n are: a n = I n , in region II 0, in region III and b n = 0, in region II −J n , in region III 5 This range of exponents should be appropriate for most astrophysical applications (see the introduction). Infinitely extended disc with inner hole If the disc is infinite but has an inner edge (i.e. a in > 0 and a out → ∞), then where is the new space variable, and ψ in = 2πGΣ in a in is a constant (this is not the potential at the inner edge). The analogue of Eq. (33) is: where A is still given by Eq. (34), a n = −I n , in region I 0, in region II and b n = 0, in region I J n , in region II. Infinitely extended disc If the disc is infinite, the ODE become homogeneous: and the solution is a power law: where R 0 is some reference radius. In this case only, a selfsimilar surface density can rigorously be associated with a self-similar potential. The presence of edges destroys this property. We note that, for s = −1 (i.e. 
Mestel's disc), the derivation of the ODE requires a careful treatment. The integral form, i.e. Eq. (5), gives: We then have This expression is compatible with Eq. (56a) when a out → ∞ (in this case, region III no longer exists and we have S = ψ out /̟). Concluding remarks In this paper, we have derived an exact expression for the gravitational potential in the plane of flat power-law discs as a solution of the ODE reported in Paper I. This expression is valid over the entire spatial domain and takes into account finite size effects. Inside the disc (the most difficult case to treat in general), it consists of three terms of comparable magnitude: a power law of the cylindrical radius R with index 1+s (where s is the exponent of the surface density) and two series of R and 1/R. In terms of convergence, our expression is by far superior to the multi-pole expansion method. Reliable approximations for the potential can be produced by performing fully controlled truncations. We have shown that the potential can be expressed by means of a simple function of R and s, which is valid to within a few percents in the range of exponents −3 s 0. This formula should be sufficiently accurate for most astrophysical applications. If necessary, more accurate formulae can be developped by including successive terms. These results should help in investigating various phenomena where disc gravity plays a significant role. An interesting point concerns the case of discs for which the surface density is not a power-law function. As shown in Paper I, it is easy to reproduce numerically the potential when the profile Σ(R) is a mixture of power laws. From an analytical point of view however, the construction of a reliable formula for ψ as compact as the one obtained here is not guaranteed at all. For instance, for an expansion of the form: where s is an integer, each of the M + 1 series {a n , b n } should be truncated at a rank N n ∆ ∼ (s + 1)/2 (see Eqs. (33) and (50)), which corresponds to an approximate formula for ψ containing about 2M (M + 1) terms. The number of terms to consider can become prohibitively large when several power laws are required.
2008-08-12T12:46:11.000Z
2008-08-12T00:00:00.000
{ "year": 2008, "sha1": "adf557177833bcaf3bf1291edcb60c6fc533a3b6", "oa_license": null, "oa_url": "https://www.aanda.org/articles/aa/pdf/2008/41/aa09682-08.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "adf557177833bcaf3bf1291edcb60c6fc533a3b6", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
233425980
pes2o/s2orc
v3-fos-license
Effect of Ni-Cr Alloy Surface Abrasive Blasting on Its Wettability by Liquid Ceramics An adequate surface is essential in ensuring a solid bond between the metal and dental ceramics for metal framework wettability. This work is aimed at investigating the effect of variable abrasive blasting parameters on Ni-Cr alloy surface’s ability to be wetted with liquid ceramics at elevated temperatures. One-hundred and sixty-eight samples were divided into 12 groups (n = 14), which were sandblasted using variable parameters: type of abrasive (Al2O3 and SiC), the grain size of the abrasive (50, 110, and 250 µm), and processing pressure (400 and 600 kPa). After treatment, the samples were cleaned in an ultrasonic cleaner and dried under compressed air. Dental ceramics were applied to the prepared surfaces via drops, and the wettability was tested in a vacuum oven at temperatures in the range of 850–1000 °C. The results were statistically analyzed using ANOVA (α = 0.05). For all surfaces, the contact angles were less than 90° at temperatures below 875 °C. For Al2O3, the best wettability was observed for the smallest particles and, for SiC, the largest particles. The ability to wet the surface of a Ni-Cr alloy is related to its sandblasting properties, such as roughness or the percentage of embedded abrasive particles. It should not be the only factor determining the selection of abrasive blasting parameters when creating a prosthetic restoration. Introduction Metal-ceramic restorations in the form of crowns and bridges are widely used in dental prosthetics, where the ceramic material is fired onto a metal substructure. Such prostheses are characterized by their pleasing aesthetics and durability, reaching ten years of use [1][2][3][4]. Thanks to ceramics' relatively smooth surface, plaque does not stick to its surface [5]. The metal-ceramic connection is widely tested because of differences between the properties of the materials [6][7][8][9]. In some cases, ceramics crack or chip from the metal surface [10][11][12][13]. Such damage is difficult to repair; therefore, a solution is sought to ensure the most durable connection between the materials. There are several mechanisms in the connection between metal and ceramics. The first is to ensure that the ceramics are mechanically attached to irregularities in the metal surface. During firing, the semi-liquid ceramic flows into grooves that result from abrasive blasting of the alloy surface. This treatment's parameters are fundamental because they affect the size of the created unevenness and may affect the joint's durability and strength [14]. The use of abrasive particles that are too small or too large may cause insufficient surface roughness for the ceramic to attach well enough. Particles that are too small will cause the width and depth of unevenness to be too small, and ceramics with a low viscosity will not flow into them and will easily chip from the metal surface at a later stage. Particles that are too large will cause the width and depth of the unevenness to be too large, which may result in insufficient attaching [15]. Another mechanism that is said to be part of the metal-ceramic connection is the chemical bond. There are reports in the literature that diffusion of elements and the formation of chemical bonds between materials occurs at the metal-ceramic interface [9]. The last mechanism concerns the connection that provides the difference in the coefficients of thermal expansion (CTE) between the materials [8,9]. 
The difference in thermal shrinkage between the materials results in the creation of compressive stress in the ceramic during cooling, which increases the joint's strength. The clamping of ceramics occurs, not only on the prosthetic element, but also on the unevenness. Their sizes and shapes are given by the abrasive blasting effect on the joint's quality. All the described mechanisms are components of the metal-ceramic connection. However, the mechanical attaching of ceramics is the most important mechanism. Appropriate metal surface treatment increases the surface and its influence on a restoration's strength is visible [16]. Abrasive blasting of a metal affects the surface conditions, the parameters of which are roughness and wettability or the percentage of embedded abrasive particles [1]. Analysis of the wettability of the surface should be crucial in creating a restoration. However, dental ceramics change their characteristics from hydrophilic to hydrophobic under the influence increases in temperature. When it comes into contact with a metal surface at room temperature, the ceramic is a mixture of powder and water. As the temperature increases during firing, the water in the material evaporates, and the characteristic of the ceramic changes. Therefore, the study of surface wettability by measuring liquids gives incomplete information on its influence on the connection. The purpose of this study was to analyze the impact of various parameters of abrasive blasting on the wettability of Ni-Cr alloy surfaces by liquid ceramics at varying temperatures. Materials and Methods One-hundred and sixty-eight Heraenium ® NA nickel-chromium alloy samples (Heraeus Kulzer, Hanau, Germany) were obtained commercially as ready-made elements for prosthetic works, with cylindrical shapes with a diameter of 8 mm and height of 15 mm. The chemical composition is presented in Table 1. The alloy's chemical composition was determined by the X-ray fluorescent analysis method using an SRS300 spectrometer (SIEMENS, Berlin, Germany). Samples were divided into two groups and were abrasion blasted (Alox 2001, Effegi Brega, Sarmato, Italy) using alumina (Al 2 O 3 ) or silicon carbide (SiC) for 20 s, with a nozzle inclination of 45 • and at a distance of 15 mm from the surface of the material. Every group of samples was divided into six subgroups (n = 14). The groups were distinguished by the abrasive blasting parameters where the abrasive particle size and processing pressure were the variables. Designations of the samples are presented in Table 2. After abrasive blasting, all samples were cleaned in an ultrasonic cleaner (Emmi-55HC-Q, Emag, Mörfelden-Walldorf, Germany) with deionized water for eight minutes to remove loose abrasive particles. Then the surface was dried under compressed air. Dental ceramics IPS Classic (Ivoclar Vivadent, Schaan, Lichtenstein) were dropped onto the prepared surfaces, and the surface wettability was tested in a tube furnace designed for this activity with the possibility of connecting a camera. Measurements were made every 25 • C in the temperature range 850-1000 • C based on sample photos, which were used to determine the values of contact angles by measuring the geometric features by drop shape analysis. Contact angles were calculated based on images made at different temperatures after various surface treatments. The method of measuring the angle was taken from thé Smielak et al. study [17]. 
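For clarity, the full factorial layout of the twelve treatment groups described above can be enumerated as in the short sketch below; the group labels are illustrative only, and the sample designations of Table 2 are not reproduced here.

```python
# Sketch of the 2 (abrasive) x 3 (grain size) x 2 (pressure) design, n = 14 samples per group.
from itertools import product

abrasives = ["Al2O3", "SiC"]
grain_sizes_um = [50, 110, 250]
pressures_kpa = [400, 600]

groups = list(product(abrasives, grain_sizes_um, pressures_kpa))
print(len(groups), "groups,", len(groups) * 14, "samples in total")   # 12 groups, 168 samples
for abrasive, size, pressure in groups:
    print(f"{abrasive}, {size} um, {pressure} kPa (n = 14)")
```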
The contact wetting angle was determined according to Young's equation at the three-phase contact point [17] for each analyzed temperature, according to the formula

cos θ = (σSV − σSL) / σLV,

where θ is the contact angle, σSV is the surface tension at the solid-gas interface, σSL is the surface tension at the solid-liquid interface, and σLV is the surface tension at the liquid-gas interface. The value of the contact angle was measured on both the left and right sides of the drop according to the geometric parameters of the specimens [17]. Additionally, the relative wetting force was calculated for the performed experiments by relating the wetting forces of the individual groups to a reference group (Al2O3, 400 kPa, 110 µm abrasive). This group was selected as the reference because these treatment parameters are assumed to be the best for the metal-ceramic connection in dentistry [14,18]. The wetting force Fc, which takes into account the surface tension σLV, the contact angle θ, and the circumference of the sample Op, results from the relationship:

Fc = σLV · Op · cos θ.   (1)

Thus, for the extreme contact angles obtained in the experiments, it is possible to determine the relative wetting force as the ratio of the cosines of the contact angles:

F1/F0 = cos θ1 / cos θ0,   (2)

where the index 0 denotes the reference group. Statistical analyses of the results were conducted using the Statistica statistical software. A 2-factor ANOVA and a post hoc Tukey's test were conducted (α = 0.05). Figure 1 shows photos of a ceramic drop on the alloy samples, which were used to determine the contact angles.

Results

The surface wettability measurement results are presented in Tables 3 and 4 and Figures 2 and 3. The results show the effect of the abrasive blasting parameters on the contact angles of the ceramics at elevated temperatures. The chart analysis shows that the contact angle decreases as the temperature increases in every case, and thus the wettability of the treated surface by liquid ceramics increases (Figure 3). Moreover, it can be seen that the trends of substrate wettability with temperature differ depending on the abrasive blasting parameters (Figure 2). The statistical analysis of the measurement results shows that the only important factor affecting the surface wettability at temperatures close to the firing temperatures is the size of the abrasive particles (Figures 4 and 5). For the Al2O3 abrasive, the effect of particle size interacts with the treatment pressure, and for larger particle sizes the higher treatment pressure is beneficial. The above graphs also allow the temperature at which the liquid ceramics begin to wet the treated surface to be determined. The transition temperatures from the non-wetting to the wetting state are presented in Table 5. From these data it can be seen that a contact angle below 90° (θ < 90°, wetting) is achieved for most abrasive blasting parameters below a temperature of 850 °C. Deviating values are visible for Al2O3 with the large particles at low pressure (400 kPa) and the small particles at high pressure (600 kPa). The influence of temperature on the relative wetting force, calculated according to Equations (1) and (2), is presented in Figure 5. A significant increase in wettability was observed for the large Al2O3 abrasive at a pressure of 400 kPa. In the case of the SiC abrasive, an increase in the wetting force with respect to the reference parameters was observed for small abrasive particles. The relative wetting force was compared to samples treated with the 110 µm abrasive at 400 kPa pressure.
This choice was dictated because these parameters were considered the most favorable for the metal-ceramic connection [14,18]. A horizontal line with the value of F 0 /F 1 represents this treatment variant. Figure 6a shows that for the samples treated with Al 2 O 3 , relative wetting force treated with 50 µm abrasive and 400 kPa is more significant than the reference variant. In other variants of the treatments, the relative wetting force is lower. The situation is slightly different for samples treated with silicon carbide. Samples treated according to variants 50 µm/400 kPa, 110 µm/400 kPa, and 50 µm/600 kPa, from 890 • C, have the relative wetting force more remarkable than the reference sample (110 µm/400 kPa, Al 2 O 3 ) in the entire temperature range (Figure 6b). Discussion The research of metal surface wettability with ceramics in prosthetics is scarce, and there are few literature reports on this topic. They mainly concern the study on the wettability of zirconium oxide [17,19,20]. In our tests, Ni-Cr alloy's wettability was analyzed, which is widely used in the formation of restorations in the form of prosthetic crowns and bridges. The research results show that the metal surface's wettability with liquid ceramics depends on the abrasive blasting parameters. It is influenced by both the pressure and the type and size of the abrasive used for treatment. The research byŚmielak et al. [17], carried out on zirconium oxide, shows that the wettability also depends on the type of processing (milling, grinding, abrasive blasting). These authors showed that zirconium oxide's wettability with liquid ceramics increases when the elements are surface treated after shaping (milling). It is related to the change of the material surface condition in relation to the subjected only to milling. The treatment parameters also affect the treated surface condition [15], e.g., its roughness and surface free energy. Figures 2-5 show different contact angles, depending on the abrasive blasting parameters used. As already mentioned, the metal substructure surface's wettability with liquid ceramics affects this connection's quality. The optimal treatment, i.e., ensuring the greatest metal-ceramic bond strength, is abrasive blasting with aluminum oxide of 110 µm under a pressure of 400 kPa [11]. The presented research shows that these parameters' surface wettability is not the highest (Table 3, Figure 2). In principle, the best ceramic wettability can be observed for the surface treated with the smallest (Al 2 O 3 , 50 µm, 400 kPa and Al 2 O 3 , 50 µm, 600 kPa) and largest (Al 2 O 3 , 250 µm, 600 kPa) abrasive particles (Table 3). This is seen in Figure 5, and shows the relative wetting force, which is also greater for these parameters. It can also be seen that the relative wetting force does not reach the highest value for the parameters that, according to literature reports, are optimal for the strength of the metal-ceramic connection. Therefore, it should be assumed that the strength of the connection, apart from wettability, is influenced by other factors. One of these factors may be the amount of embedded abrasive in the metal surface. Research has shown that different amounts of abrasive material are embedded in treated surfaces [15,21,22]. It is not easy to define this impact clearly. 
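A small sketch of how measured contact angles translate into the relative wetting force plotted in Figures 5 and 6 is given below. Since the liquid surface tension and the sample circumference are the same for all groups, the wetting-force relation from the Methods reduces to a ratio of cosines; the angle values in the example are placeholders, not measured data.

```python
# Sketch: relative wetting force from contact angles, F_test / F_ref = cos(theta_test) / cos(theta_ref),
# assuming identical sigma_LV and sample circumference for both groups (as in the Methods).
from math import cos, radians

def relative_wetting_force(theta_test_deg, theta_ref_deg):
    return cos(radians(theta_test_deg)) / cos(radians(theta_ref_deg))

theta_ref = 80.0    # hypothetical angle for the reference group (Al2O3, 110 um, 400 kPa)
theta_test = 65.0   # hypothetical angle for a test group at the same temperature
print(relative_wetting_force(theta_test, theta_ref))   # > 1: stronger wetting than the reference
```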
On the one hand, the embedded sharp-edged abrasive particles may be the points of fracture in the ceramic, which may contribute to a reduction in strength, and, on the other hand, the dissolution of alumina by liquid ceramics may increase in strength. However, these are only assumptions, and further research is needed to confirm this. By analyzing the relative wettability force, it can be assumed that the bond strength after treatment with silicon carbide should be better. However, as previously noted, this is not the only factor. In prosthetics silicon carbide is not used for processing. There are no reports on the strength of the metal-ceramic connection after such treatment, so these results cannot be compared to those in the case of aluminum oxide treatments. The abrasive type's effect, the dependence of the samples' wettability after treatment with aluminum oxide and silicon carbide, and 600 kPa pressure are similar for 110 and 250 µm particles. Although the contact angles for the silicon carbide treatment are slightly more significant, the differences between the angles are slight. The wettability for 50 µm particle processing is somewhat different, and the chart's different nature is visible. The change in contact angle for aluminum oxide is much greater in the same temperature range: from about 98 • at 850 • C to about 54 • at 1000 • C, while for silicon carbide in the same temperature range, it is from about 72 • to about 64 • . As for the 400 kPa pressure treatment, the nature of the wettability changes and the temperature change is similar. Still, for silicon carbide, the contact angles are smaller, and the nature of the changes also depends on the particle size. Additionally, in this case, the nature of the curve for the 50 µm abrasive differs from the other two particle sizes. Considering that with the same particle, the surface roughness should be similar, it should rather not be associated with the surface roughness after treatment. Perhaps it is related to the amount of embedded abrasive particles, which, for the same particle size does not have to be analogous for different materials. Silicon carbide particle is a harder and brittle material than aluminum oxide; therefore, particles hitting the treated surface are more easily crushed and bounced off. The degree of crushing and rebound is related to the weight of the individual particles and the operating pressure, affecting the incident's energy abrasive particles. Clarification of this issue requires a more detailed examination of the amount of embedded abrasive particles after treatment. The explanation can also be obtained by modeling the phenomena occurring during abrasive particles' impact on the treated surface. In summary, the abrasive blasting parameters influence the Ni-Cr alloy surface's wettability with ceramics at elevated temperatures. The surface wettability is influenced by both the surface roughness and the amount of embedded abrasive particles. However, it is not clear to what extent the surface roughness affects the contact angles and the amount of embedded abrasive particles. Since there is a dependence of wetting on the abrasive blasting parameters and the type of abrasive, it seems that the ceramic's firing temperature is not a constant value. It should be selected each time, depending on the type of surface treatment performed, because of the good flow of the ceramic in the surface's unevenness. Thus, wet the surface of the metal with liquid ceramics is needed for a good connection. 
In practice, the firing of dental ceramics on a metal substructure takes place at temperatures of around 920 °C. These are not the temperatures at which the wettability is best. The presented research shows that with increasing temperature the contact angle values decrease and the wettability of the sample surface by liquid ceramics increases. Considering wettability alone, the firing temperatures of the ceramics should therefore be higher. However, restrictions on the firing temperature apply: it should be taken into account that the quality of a prosthetic restoration is influenced by other factors that may be affected negatively as the firing temperature increases (changes of the alloy structure, high-temperature corrosion, etc.). The temperatures used in practice therefore seem to be optimal. The wettability angle θ is less than 90° at 880 °C for the aluminum oxide treatment and at 850 °C for the silicon carbide treatment. Therefore, it can be seen that, at practically relevant firing temperatures, the liquid ceramic wets the alloy surface (angle θ much less than 90°). The obtained results concern only the Ni-Cr alloy and cannot be transferred to other materials. Literature reports show that, for example, for zirconium oxide at these temperatures there was no surface wetting because the contact angles were higher than 90° [17]. This means that the metal surface is wetted by liquid ceramics more readily than zirconium oxide is.

Conclusions

The research shows that the abrasive blasting parameters, such as the treatment pressure and the size of the abrasive particles, affect the wettability of the metal surface at high temperatures. Most likely, this is related to the roughness of the tested surfaces or to the percentage of embedded abrasive particles. However, there is a significant correlation between the wettability and the size of the abrasive particles used. The wettability of the alloy surface by liquid ceramics increases with increasing temperature, and each tested set of abrasive blasting parameters provides good wettability of the alloy surface (θ < 90°).

Data Availability Statement: The data presented in this study are available on request from the corresponding author.

Conflicts of Interest: The authors declare no conflict of interest.
2021-04-29T05:18:54.697Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "42116eb7a9b69df01baa63114dc5d3ea215b72e6", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/14/8/2007/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "42116eb7a9b69df01baa63114dc5d3ea215b72e6", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
4825684
pes2o/s2orc
v3-fos-license
Liver-Enriched Gene 1, a Glycosylated Secretory Protein, Binds to FGFR and Mediates an Anti-stress Pathway to Protect Liver Development in Zebrafish Unlike mammals and birds, teleost fish undergo external embryogenesis, and therefore their embryos are constantly challenged by stresses from their living environment. These stresses, when becoming too harsh, will cause arrest of cell proliferation, abnormal cell death or senescence. Such organisms have to evolve a sophisticated anti-stress mechanism to protect the process of embryogenesis/organogenesis. However, very few signaling molecule(s) mediating such activity have been identified. liver-enriched gene 1 (leg1) is an uncharacterized gene that encodes a novel secretory protein containing a single domain DUF781 (domain of unknown function 781) that is well conserved in vertebrates. In the zebrafish genome, there are two copies of leg1, namely leg1a and leg1b. leg1a and leg1b are closely linked on chromosome 20 and share high homology, but are differentially expressed. In this report, we generated two leg1a mutant alleles using the TALEN technique, then characterized liver development in the mutants. We show that a leg1a mutant exhibits a stress-dependent small liver phenotype that can be prevented by chemicals blocking the production of reactive oxygen species. Further studies reveal that Leg1a binds to FGFR3 and mediates a novel anti-stress pathway to protect liver development through enhancing Erk activity. More importantly, we show that the binding of Leg1a to FGFR relies on the glycosylation at the 70th asparagine (Asn70 or N70), and mutating the Asn70 to Ala70 compromised Leg1’s function in liver development. Therefore, Leg1 plays a unique role in protecting liver development under different stress conditions by serving as a secreted signaling molecule/modulator. Introduction The process of liver development includes 1) the specification of hepatoblasts from the endoderm, 2) proliferation of hepatoblasts to form the liver primordium (liver bud), and 3) differentiation and proliferation of hepatocytes to form the embryonic liver [1][2][3][4][5]. Liver organogenesis is not only controlled by intrinsic transcription factors such as FoxA factors [6], GATA factors [7], Hhex [8] and Prox1 [9] but also by secreted signaling molecules [10] including FGF [11], BMP [12], Wnt [13] and RA [14] produced by neighboring mesodermal cells/tissues. Strikingly, studies of mouse, chick/quail, Xenopus and zebrafish have shown that the molecular events controlling liver development are robustly conserved across these different species although evolutionally, these species are distantly related, especially when considering the obvious anatomic differences in organ initiation and patterning [1][2][3][4] and the differences in the circumstances of their embryogenesis. Unlike mammals and birds, teleost fish complete the process of embryogenesis externally. To cope with the stresses brought about by environmental changes, teleost fish have to evolve anti-stress mechanism(s) to protect the process of embryogenesis/ organogenesis. However, the molecule(s) mediating such activity is(are) currently unknown. Therefore, it is of great interest to determine whether other such factors, in addition to the aforementioned common factors, protect liver development during external embryogenesis in teleost fish. 
leg1 (liver enriched gene 1) is an evolutionally conserved gene in vertebrates that encodes a novel secreted protein Leg1, which contains only a domain of unknown function 781 (DUF781) [15][16][17]. In zebrafish, there are two copies of the leg1 gene, namely leg1a and leg1b, which are closely linked on chromosome 20 [15]. Previous reports showed that knockdown of total Leg1 results in defective liver development, and the expression of leg1 is modulated by hypoxia conditions [18]. On the basis of the detailed analysis of leg1a mutants generated by the TALEN method, we provide strong evidence to demonstrate that Leg1 functions probably as a novel signaling molecule/modulator to protect liver development through Erk phosphorylation under stress conditions, and glycosylation at N 70 in Leg1a is essential for this function. Loss-of-function of Leg1aconfers a small liver phenotype under different stress conditions We reported previously that leg1a but not leg1b was the predominant form expressed during the embryonic stage in zebrafish and that knockdown of leg1a resulted in a small liver phenotype [15]. To unequivocally prove the role of leg1a in liver development, we generated two leg1a mutant alleles, one with a 13-bp insertion (leg1a zju1 ) and another with a 12-bp deletion (leg1a zju2 ) (leg1b is intact in these two leg1a alleles), via the TALEN technique [19] by targeting exon 1 of leg1a (Fig 1A). To our surprise, unlike the leg1a morphants [15], the leg1a homozygous mutant obtained from a cross between either leg1a zju1/+ or leg1a zju2/+ heterozygous male and female did not show an obvious small liver phenotype at 3.5 days post fertilization (dpf) when examined with a liver-specific molecular marker fatty acid binding protein 10a (fabp10a) The leg1a zju1 homozygotes (-/-) obtained from a cross between heterozygous (+/-) parents showed normal liver development (B). Each dot represents the liver size (measured based on the signal area of fabp10a) of a single embryo. Three independent experiments were carried and a representative one is shown here. Total Leg1 was detected in unfertilized eggs (C) and in two independent samples of 4-dpf WT (+/+) and mutant (-/-) embryos collected at different dates (D). (E-G) The maternal-zygotic leg1a zju1 homozygotes (mu) obtained from a cross between homozygous parents lacked Leg1 at 1 dpf, but expressed Leg1b at 3.5 dpf and 7 dpf (E), and showed a small liver phenotype on March 13, an intermediate-sized liver on April 16, and a normal sized liver on August 8, 2015. (F). Recordings of liver phenotype in 32 cases from December 30 2013 to October 4 2015 showed majority of the maternal-zygotic leg1a zju1 homozygotes (mu) exhibited a small liver in 14 cases recorded in cold seasons but a big variation in 18 cases recorded in warm/hot seasons (G). ***, p<0.001, N.S., not significant. Western blot was repeated three times for (C), five times for (D) and (E). ( Fig 1B), and both leg1a zju1 and leg1a zju2 homozygous mutants could grow to adulthood and were fertile. We examined the total Leg1 levels by western blot analysis and found no drastic difference between unfertilized eggs from wild-type (WT) and leg1a zju1/+ heterozygous females ( Fig 1C). In fact, the leg1a zju1 homozygous mutant obtained from leg1a zju1/+ crosses retained, though showing variations, a considerable level of Leg1 at 4 dpf (Fig 1D), suggesting that maternal Leg1 compensated for the need for Leg1a during early hepatogenesis [20]. 
The leg1a zju1 homozygous mutant was propagated and allowed to produce progenies. We determined that such leg1a zju1 homozygous progenies (maternal-zygotic mutants) lacked Leg1 (Fig 1E) at 1 dpf but started to express the Leg1b homolog at 3.5 and 7 dpf. Surprisingly, whole-mount in situ hybridization (WISH) using the fabp10a probe revealed that the maternal-zygotic mutant exhibited a small liver phenotype in a season-dependent manner (Fig 1F, S1A Fig). For example, majority of the maternal-zygotic mutants exhibited a small liver phenotype in 14 cases recorded during the cold season whereas the mutant liver showed a great variation in sizes ranging from normal to small in 18 cases recorded during the warm/hot seasons (Fig 1G, S1B and S1C Fig). These results suggest that liver development in the maternal-zygotic leg1a zju1 mutant is amenable to its living environment. We tested this hypothesis by growing fish in different mild stress conditions. Growing maternal-zygotic leg1a zju1 mutants in relative high temperature (32°C) and high density (200 embryos per 10-cm diameter Petri dish) (Fig 2A) or briefly treating the maternal-zygotic leg1a zju1 mutants at 24 hpf with 2.5 mJ/cm 2 ultraviolet (UV) irradiation (UV25) (Fig 2B, S2A and S2B Fig) sharply increased the proportion of the mutant embryos displaying the small liver phenotype at 3.5 dpf. High density alone also caused a small liver phenotype to the maternal-zygotic leg1a zju1 mutants (S2C Fig). In addition, we found that incubating the zygotic leg1a zju1 mutant in the egg water containing a mild but not lethal dose of H 2 O 2 (0.5mM) also led to the small liver phenotype (Fig 2C). Interestingly, the maternalzygotic leg1a zju1 mutant embryos did not exhibit a small liver phenotype at 3.5 dpf when they were grown in the egg water containing 0.5% or 1% ethanol starting at 24 dpf (S2D Fig), a concentration range not causing overall abnormality [21]. UV25 treatment also enhanced the small liver phenotype in leg1a zju2 , another mutant allele of the leg1a gene (S2E Fig). UV treatment, high temperature, andH 2 O 2 treatment all would lead to oxidative stress [22]. To find out whether the maternal-zygotic leg1a zju1 mutant is compromised in scavenging ROS caused the oxidative stress, we compared the ROS level at different time points between the UV25 treated WT and maternal-zygotic leg1a zju1 embryos by DCFH-DA [23]. The result showed that the maternal-zygotic leg1a zju1 embryos accumulated a higher ROS level at all time points examined within the first hour after UV treatment( Fig 2D). We wondered whether the development of the small liver phenotype in leg1a mutants could be prevented by blocking the production of reactive oxygen species (ROS). Diphenyleneiodonium (DPI) and apocynin (APO) are two specific inhibitors of the Duox/Nox enzyme often used to block the production of ROS [24,25]. We treated the maternal-zygotic leg1a zju1 mutants with DPI or APO one hour prior to the UV25 treatment and found that both DPI and APO prevented the mutants from developing the small liver phenotype (Fig 2E and 2F). Leg1a protects liver development under stress conditions Liver, exocrine pancreas and intestine are all derived from the endoderm [26]. Previous genetic screening found that mutants with defects in liver development often showed defective development of the exocrine pancreas and/or intestine [27,28], likely because liver and exocrine pancreas share common progenitors [29,30]. 
Leg1a expression is enriched in the embryonic liver, but meanwhile, Leg1a is also a secretory protein [15]. Considering this fact, we wanted to Fig 2. Liver development in the maternal-zygotic leg1a zju1 mutant is amenable to different stresses. (A and B) When growing in high temperature (32°C) and high density (200 embryos per 10-cm diameter Petri dish) (HH) starting from 10 hpf till to 3.5 dpf(A) or briefly treated with 2.5 mJ UV/cm 2 (UV25) at 24 hpf(B) the maternal-zygotic leg1a zju1 embryos (mu) consistently exhibited a more severe small liver phenotype when compared with the untreated mutant embryos. (C) Comparison of liver development in WT and the maternalzygotic leg1a zju1 (mu) embryos treated with 0.05, 0.5 and 5 mM H 2 O 2 at 24 hpf for half an hour. (D) Upon UV25 treatment, the maternal-zygotic leg1a zju1 (mu) embryos accumulated a higher level of ROS when compared to the treated WT or untreated mutant embryos within 60 min. (E and F) Incubation with APO (E) and DPI (F), two inhibitors of Duox/Nox enzyme for O 2 biosynthesis, prior to UV25 treatment prevented the effect of UV on liver development in the maternal-zygotic leg1a zju1 (mu) embryos. Liver size was measured at determine whether leg1a zju1 also affects development of the pancreas and other digestive organs. We used fabp10a, trypsin, insulin and fabp2a probes in WISH to mark the liver, exocrine pancreas, endocrine pancreas and intestine, respectively. Interestingly, it appeared that UV25 treatment drastically reduced the liver size, but only subtly affected the exocrine pancreas development and did not show observable effect on the intestinal tube development in the maternal-zygotic leg1a zju1 mutant ( Fig 3A). We then used prox1 and hhex, two earlier hepatic markers, in WISH to examine the effect of UV25 treatment on liver bud formation at 30 hpf [13]. The result showed that UV25 treatment halted liver bud formation in most of the mutant embryos but not the WT (Fig 3B). A TUNEL assay did not reveal any obvious differences in the apoptotic activity between the UV25-treated WT and mutant liver cells (S3 Fig, total 12cryosections from 6 embryos examined). Immunostaining of phosphorylated histone 3 (PH3, a molecular marker for cell proliferation) showed that leg1a zju1 liver cells (defined by immunostaining of the hepatic marker Betaine homocysteine S-methyltransferase (BHMT), in red) contained significantly fewer (p<0.05) PH3 positive cells (12 of 604 total cells counted or 2.4%, data obtained from 6 embryos) when compared with those in the WT (30 of 684 total cells counted or 4.43%, data obtained from 6 embryos) at 54 hpf after UV25 treatment (Fig 3C and 3D). Therefore, the maternal-zygotic leg1a zju1 mutant develops a small liver phenotype under UV stress due to cell cycle arrest. Leg1a activates the Erk pathway to promote liver development One possible explanation for the inhibitory effect of UV25 treatment on liver development in the maternal-zygotic leg1a zju1 mutant is that UV25 treatment induces the expression of Leg1. However, we observed that the UV-or H 2 O 2 -treatment of the WT embryos at 24 hpf did not cause significant changes to the levels of total leg1 transcripts at 3 and 6 hours after treatment (S4 Fig). UV25 treatment of the WT embryos at 24 hpf neither affected the level of total Leg1 protein at 3, 6, 9 and 12 hours after treatment ( Fig 4A). 
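As an aside on the hepatocyte proliferation counts quoted above (12 PH3-positive cells out of 604 in the mutant versus 30 out of 684 in the WT), a count comparison of this kind could be checked with a simple contingency-table test, as sketched below. The choice of Fisher's exact test is an assumption made for illustration; the text only states that the difference is significant (p < 0.05).

```python
# Sketch: comparing PH3-positive hepatocyte counts between genotypes with Fisher's exact test.
from scipy.stats import fisher_exact

mutant = (12, 604 - 12)      # PH3-positive, PH3-negative hepatocytes (leg1a mutant, UV25-treated)
wild_type = (30, 684 - 30)   # same for wild type
odds_ratio, p_value = fisher_exact([mutant, wild_type])
print(odds_ratio, p_value)   # consistent with the reported difference if p_value < 0.05
```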
In zebrafish, 24-34 hpf is a crucial stage for hepatogenesis when signaling molecules including FGF, BMP, Wnt2bb and RA orchestrate the initiation of the liver bud [13,[31][32][33]. Based on all of the above, we speculated that the signaling pathway promoting cell proliferation is probably impaired due to loss of the maternal-zygotic Leg1a. This prompted us to investigate whether Leg1, being a secretory protein, is involved in known signaling pathways. We treated 24-hpf WT and maternal-zygotic leg1a zju1 mutant embryos with UV25 and found that UV25 treatment up-regulated the level of p-Erk in WT but showed an inhibitory effect on the mutant at 6 hours post treatment (i.e. at 30 hpf) (Fig 4B). Notably, UV25 treatment did not affect the Bmp signaling as indicated by the level of pSmad1/5/8 (Fig 4B). Importantly, upon UV25 treatment, Leg1a over-expression by leg1a mRNA injection at the one-cell stage increased the level of p-Erk but did not alter the level of pSmad1/5/8 (Fig 4C). Considering that the activation of the expression of Bmp2 by heatshock of Tg(hsp70l:bmp2b) [34] embryos at 18 or 24 hpf increased only the level of pSmad1/5/8 and not that of p-Erk (Fig 4D and 4E), we speculated that Leg1 does not signal through the Bmp pathway but probably through the Erk signaling pathway to protect liver development under stress conditions. To test whether Leg1 acts through the Erk-signaling pathway to protect liver development, we generated a constitutively active form of Erk mutant (caErk) by substituting L 84 to P 84 (L84P), S 162 to D 162 (S162D), and D 330 to N 330 (D330N) simultaneously [35]. It has been shown that over-activating Erk signaling at the early stage (up to 80% epiboly) negatively regulates the endoderm formation [36]. Indeed, we found that injection of caErk mRNA into one-cell stage embryos caused a small liver both in WT and mutant embryos (S5A Fig). To overcome the effect of Erk-signaling on early embryogenesis we injected caErk mRNA or fgf8 mRNA into the yolk at 22 hpf and treated the embryos with UV25 at 24 hpf. The effectiveness of this way of injection is demonstrated by the fact that the injected Cy3-labeled oligo-dT can successfully reach the prospective liver bud region (S5B Fig). We found that such injection rescued the mutant liver development to a great extent.

Fig 3. Loss of maternal-zygotic Leg1a affects the liver development. (A) Embryos were treated with or without UV25 at 24 hpf, and WISH was performed to assess the development of the liver (fabp10a), exocrine (trypsin) and endocrine (insulin) pancreas and intestine (fabp2a) in the maternal-zygotic leg1a zju1 (mu) embryos at 3.5 dpf. At least three independent WISH experiments were performed, each time with 24-31 embryos for each sample, and representative embryos are shown. (B) WISH using prox1 and hhex to examine liver bud formation at 30 hpf in embryos treated with or without UV25 treatment at 24 hpf. Representative pictures in each group are presented. The number of embryos exhibiting the phenotype over total embryos examined is shown in the bracket. Red arrow: liver bud, blue arrow: pancreatic bud.
(C and D) Images (C) and statistical analysis (D) of PH3 immunostaining to compare cell proliferation of hepatocytes in WT and maternal-zygotic leg1a zju1 embryos (mu) after UV25 treatment. BHMT is an enzyme highly expressed in the liver and was used to mark out the hepatocytes. DAPI was used to stain the nuclei. Red arrows indicate PH3 positive cells in the liver. PH3 positive cells in the neural tube in WT and the mutant were also recorded in parallel. *, p<0.05. in, intestinal tube, lv, liver, nt, neural tube.

Fig 4. (A) WT and maternal-zygotic leg1a zju1 embryos were treated with UV25 at 24 hpf. UV25 treated embryos were harvested for total protein extraction for western blot analysis of Leg1 at 3, 6, 9 and 12 hours post treatment. (B) 24-hpf WT and maternal-zygotic leg1a zju1 mutant (mu) embryos were treated with or without UV25. Total protein was extracted at 6 hours post treatment (hpt) and subjected to western blot analysis of p-Erk, total Erk, and pSmad 1/5/8. (C) leg1a zju1 embryos at the one-cell stage were injected with or without leg1a mRNA, then treated with or without UV25 at 1 hpf. Total protein was extracted at 6 hours post treatment (hpt) and subjected to western blot analysis of p-Erk, total Erk, and pSmad 1/5/8. (D and E) Over-expression of Bmp2a does not activate the phosphorylation of Erk. Tg(hsp70l:bmp2b) embryos were heat-shocked at 18 hpf (D) or 24 hpf (E). Total protein was extracted from embryos 6 or 12 hours post heatshock and subjected to western blot analysis of pSmad1/5/8, p-Erk and total Erk. Over-expression of Bmp2 by heatshock increased only the level of pSmad1/5/8 and not that of p-Erk. CK, wild type embryos; caBMP, Tg(hsp70l:bmp2b) embryos. (F) TRE-caErk plasmid was injected into WT and mutant (mu) embryos at the one-cell stage. The injected embryos were treated with UV25 at 24 hpf followed immediately by addition of the drug Dox (final concentration 30 μg/mL). The liver development in these embryos was examined with the fabp10a probe at 3.5 dpf.

The result showed that induction of the caErk expression between 24 hpf and 33 hpf achieved a significant rate of rescue of the liver growth in the maternal-zygotic leg1a zju1 mutant (Fig 4F).

Leg1 is modified by glycosylation at N 70

We showed previously that Leg1 is a classical secretory protein [15]. Because glycosylation is a common modification for a secretory protein [39], we checked whether Leg1 is also modified by glycosylation. There are two types of glycosylation, N-glycosylation and O-glycosylation [40]. N-glycosylation can be cleaved by PNGase F [41] whereas O-glycosylation can be cleaved by the combination of endo-α-N-acetylgalactosaminidase plus neuraminidase [42]. We previously reported that adult fish serum contains a high level of Leg1 [15]. We used these enzymes to treat the serum protein and also total protein extracted from embryos at 3 dpf, respectively, and found that only PNGase F treatment caused a band shift (Fig 5A and 5B). Because both leg1a and leg1b are expressed in the adult liver to produce the serum Leg1 [15], the fact that PNGase F treatment caused a clear band shift to total Leg1 protein from the serum suggests that both Leg1a and Leg1b are modified by N-glycosylation. To confirm this hypothesis, leg1a and leg1b were cloned into the expression vector PCS2 + , and the obtained plasmids were used to transfect the human liver cancer cell line HepG2. PNGase F treatment caused a band shift to both the expressed Leg1a and Leg1b in HepG2 (Fig 5C).
To determine which amino acid residue is glycosylated in Leg1a and Leg1b, we used a webbased platform, NetNGly (http://www.cbs.dtu.dk/services/NetNGlyc/), to predict the site of modification(s) based on the Asn-Xaa-Ser/Thr motif [43]. The prediction showed that the 70 th asparagine (N 70 ) was a putative glycosylation site for both Leg1a and Leg1b (Fig 5D and 5E). For Leg1a, N 298 was also predicted to be a candidate site for glycosylation ( Fig 5D). We then mutated N 70 and N 298 in Leg1a to alanine (A) to obtain the leg1a N70A and leg1a N298A plasmids. We transfected the leg1a WT plasmid and the two leg1a N70A and leg1a N298A mutant plasmids into the HepG2 cell line, respectively, and performed western blot analysis of Leg1 in the extracted total protein at 24 hours post transfection. The result showed that leg1a N70A produced a product with a mobility like that of Leg1a treated with PNGse F, whereas leg1a N298A produced a product with a mobility like that by the leg1a WT plasmid (Fig 5F). We also mutated the N 70 to A 70 in Leg1b and found that Leg1b N70A was no longer sensitive to PNGase F treatment and exhibited a mobility like that of Leg1b treated with PNGase F (Fig 5G). In addition, we injected leg1a N70A and leg1b N70A mRNA and their respective WT control mRNA into zebrafish embryos at the one-cell stage and extracted total protein at 9 hours post injection. Western blot analysis of the protein samples showed that both Leg1a N70A and Leg1b N70A exhibited a mobility like that of Leg1a or Leg1b treated with PNGase F (Fig 5H). All of these results demonstrated that N 70 was the only N-linked glycosylation site for both Leg1a and Leg1b. ability of Leg1a N70A and Leg1b N70A , WT leg1a, WT leg1b, leg1a N70A or leg1b N70A plasmid was each co-transfected with HA-tagged rnasel1 plasmid into HepG2 cells. Rnasel1 (NCBI accession no. AI476973) encodes a known secretory protein Rnasel1 [45] and was used as a control here. Total proteins were extracted from the culture medium and the cell pellet, respectively. Western blot analysis of the protein samples showed that Leg1a, Leg1b, Leg1a N70A and Leg1b N70A were all detected in the cell pellet fraction (Fig 6A). Leg1a, Leg1b and Leg1a N70A were also detected in the culture medium fraction (Fig 6A), whereas no Leg1b N70A was detected ( Fig 6A). Meanwhile, we noticed that the secretion of HA-Rnase1l in the cells expressing Leg1b N70A was also greatly reduced ( Fig 6A). To determine where the un-secreted Leg1b N70A was located in the protein trafficking route, we co-immunostained Leg1 with ER and Golgi markers, respectively. Consistent with western blot analysis, Leg1a and Leg1a N70A were secreted normally (Fig 6B), as was the WT Leg1b, which was hardly detectable in the leg1b plasmid-transfected cells (Fig 6C and 6D, upper panels). In contrast, Leg1b N70A nicely co-localized with the cis-Golgi indicated by cis-and medial-Golgi marker Giantin (Fig 6C, lower panels) but not with the ER marker PDI [46,47] (Fig 6D, lower panels). Strikingly, cells transfected with the leg1b N70A plasmid appeared to harbor more cis-Golgi components (revealed by Giantin staining) than those in the WT leg1b-transfected cells (Fig 6C), indicating that the Leg1b N70A mutant protein is retained in the cis-Golgi apparatus, which caused a traffic jam in the cells such that the secretion of Rnase1l was also severely blocked in these cells (Fig 6A). 
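The NetNGlyc step above amounts to scanning the protein sequence for the Asn-Xaa-Ser/Thr sequon [43]. As a rough illustration of that scan (not a replacement for NetNGlyc, which additionally scores candidate sites with a trained model), the following sketch finds candidate sequons in a protein sequence. The example sequence is invented, and the exclusion of proline at the Xaa position is a commonly used refinement that we assume here.

import re

def find_nglyc_sequons(seq: str):
    """Return 1-based positions of Asn residues in N-X-S/T sequons.

    The Xaa != Pro restriction is a common refinement and is an
    assumption here; NetNGlyc additionally scores each candidate site,
    which this sketch does not attempt to reproduce.
    """
    seq = seq.upper()
    hits = []
    # Lookaheads keep matches overlapping so consecutive sequons are found.
    for m in re.finditer(r"N(?=[^P])(?=.[ST])", seq):
        hits.append(m.start() + 1)  # 1-based, as in "N70"
    return hits

# Hypothetical example sequence (not the real Leg1a or Leg1b sequence).
example = "MKLLAVNCSDQTNASWPNNTT"
print(find_nglyc_sequons(example))  # -> [7, 13, 18, 19]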
The fact that accumulation of Leg1b N70A mutant protein in the cis-Golgi but not in the ER might explain why we did not observe an activation of the markers (including Bip, Chop, and p-eIF2a) for the ER-stress response either in the cultured cells ( Leg1-FGFR-Erk Signaling Protects Liver Development N 70 -glycosylation is required for Leg1a to protect liver development under stress condition Next, we tested whether N 70 glycosylation in Leg1a is required to promote liver development by injecting leg1a andleg1a N70A mRNA, respectively, into the maternal-zygotic leg1a zju1 mutant embryos at the one-cell stage (S8 Fig). These injected embryos were briefly treated with UV25. WISH analysis using the fabp10a probe showed that leg1a N70A mRNA injection failed to rescue the mutant liver development (Fig 7A). In fact, Leg1a N70A was greatly compromised in promoting Erk phosphorylation under UV25 (Fig 7B). Leg1a interacts with FGFR3 that depends on Leg1a N 70 -glycosylation FGFis a key effector of the Erk signaling pathway. We wanted to determine whether Leg1 interacts with the FGF receptor (FGFR) to activate the phosphorylation of Erk. Extracted serum protein containing Leg1a and Leg1b (total Leg1) was incubated with human 293T cells transfected with a plasmid expressing FLAG-tagged FGFR3. Co-immunoprecipitation (Co-IP) analysis showed that Leg1 interacted with FGFR3 ( Fig 7C). To determine whether N 70 -glycosylation is necessary for Leg1 to bind to FGFR3, we treated total serum proteins (containing both Leg1a and Leg1b) with PNGase F to get a mixture of Leg1 and de-glycosylated Leg1 under the undenaturized condition (Fig 7C, left panels). The mixture of Leg1 plus de-glycosylated Leg1 was incubated with 293T cells expressing FGFR3. Co-IP analysis showed that only Leg1 but not the de-glycosylated Leg1 interacted with FGFR3 (Fig 7C, right panels). We also overexpressed Leg1a and Leg1a N70A in 293T cells (Fig 7D, left panels) and harvested the culture medium containing Leg1a or Leg1a N70A to incubate with 293T cells overexpressing FGFR3, respectively. Co-IP showed that Leg1a but not Leg1a N70A interacted with FGFR3 (Fig 7D, right panels). In zebrafish, FGFR3 was co-immunoprecipitated by the Leg1 antibody when Leg1a and FGFR3 were co-expressed by their mRNA co-injection (Fig 7E). The zebrafish transgenic line Tg(hsp70:dnfgfr1-gfp) expresses the dominant-negative Fgfr1 (dn-Fgfr1) by the hsp70heakshock promoter [48]. The expressed dn-Fgfr1, whose tyrosine kinase domain is replaced by GFP (as a reporter of the transgenic embryos), can form heterodimer with all FGFR subtypes so that to block the FGF signaling. When this line was treated with UV25 only we found that the level of p-Erk was increased (S9 Fig). However, when the embryos were heat-shocked at 22 hpf to express dn-Fgfr1 followed by treatment with UV25 at 24 hpf, the effect of UV25 on activation of p-Erkin the GFP + embryos was down-regulated to a similar level to that observed in the UV25 untreated GFP + embryos. This result further suggests that the activation of Erk by UV25-treatment is through the FGF pathway, Compensatory mechanism is activated in leg1a zju1 mutant Considering the importance of the liver for a living organism and the viable and fertile nature of theleg1a mutant, we wondered whether the small liver in the leg1a mutant would be recovered to normal during later growing stages. We treated WT and maternal-zygotic leg1a zju1 mutant with UV25 at 24hpf, and check the liver size at 3.5dpf and 10dpf. 
While, as expected, almost all the maternal-zygotic leg1a zju1 mutant displayed a small liver compared to the WT at 3.5dpf, the liver sizes in the mutants were recovered to normal at 10dpf (Fig 8A). However, the maternal-zygotic leg1a zju1 mutant exhibited a lower survival rate~32% (28/87) when compared to 66% (50/76) for the WT counted at 10 dpf. Examining the expression of leg1b, the homolog of leg1a, in the leg1a zju1 mutant revealed that the levels of leg1b transcripts were up-regulated both in zygotic homozygous mutant at 4 dpf( Fig 8B) and maternal-zygotic mutant at 3-, 5-, and 7-dpf ( Fig 8C). WISH using the leg1 probe also showed that the total leg1 transcripts in the maternal-zygotic leg1a zju1 mutant was enriched in the liver (S10B Fig).These data suggest that the compensatory mechanism [49] is activated in the leg1a zju1 mutant to support the liver development at the later stages. At the adult stage, although the liver to body ratio of the leg1a zju1 mutant fish did not show significant difference to that of the WT fish ( Fig 8D) the leg1a zju1 mutant fish exhibited a shorter stature and higher mortality compared to the WT fish ( Fig 8E and 8F), suggesting that the Leg1a anti-stress pathway also functions in the adult fish and that theLeg1bonly partially compensates for the function of Leg1a. Discussion In addition to the precise spatial and temporal control of genetic programs instructing oganogenesis, successful completion of organogenesis also relies on the maintenance of an optimal environment through the elimination or neutralization of the stress-induced harmful reagents, and how this is achieved is of tremendous interest in the field of developmental biology [50]. Although undergoing external embryogenesis, teleost fish harbor a robust genetic program dictating liver development as long as any environmental change, including temperature or natural UV irradiation, is not detrimental. It is therefore of interest to explore the mechanism(s) behind this phenomenon. We showed that Leg1 plays a unique role in protecting liver development under different stress conditions by serving as a secretory signaling molecule/modulator to activate the Erk pathway. This finding may explain the adaption of teleost fish in coping with environmental changes. Leg1a interacts with FGFR and glycosylation at N 70 is crucial for Leg1 to interact with FGFR3 and to protect liver development under stress conditions. (A and B) Maternal-zygotic leg1a zju1 embryos (mu) at one-cell stage were injected with leg1a or leg1b N70A mRNA. Injected embryos were treated with UV25 at 24 hpf and allowed to grow to 3.5 dpf for comparison of liver development by WISH using fabp10a probe (A) or to 30 hpf for protein extraction for western blot analysis of p-Erk and total Erk (B). A representative data set of three independent experiments was shown. ***, p<0.001, N.S., not significant. (C and D) Leg1 but not de-glycosylated Leg1 interacts with FGFR3. 293T cells over-expressing FLAG-tagged FGFR3 were incubated with total serum containing Leg1 (serum) or total serum Leg1 partially de-glycosylated by PNGase F (serum+P) (C), with supernatant containing secreted Leg1a or Leg1a N70A respectively from leg1a or leg1a N70A plasmids transfected 293T cells (D). In (C), black arrow, total Leg1, red arrow, de-glycosylated Leg1. An anti-FLAG antibody was used to perform the Co-IP, FGFR3 was detected with the anti-FLAG antibody and Leg1 with the Leg1 antibody. (E) Co-IP assay of interaction between Leg1a and FGFR3 in zebrafish. 
200pg leg1 mRNA and 400 pgfgfr3 mRNA were co-injected into one-cell stage embryos. Total protein was harvested 7 hours after injection and was subjected to Co-IP using the Leg1 antibody or mouse IGG antibody. FGFR3 was detected by an anti-FRFR3 antibody. Western blot was repeated three times for (B), four times for (C), three times for (D) and (E). doi:10.1371/journal.pgen.1005881.g007 Leg1-FGFR-Erk Signaling Protects Liver Development The process of liver organogenesis is governed by key transcription factors (e.g., HNF, GATA, Prox and Hhex) and signaling molecules (e.g. FGF, Bmp and Wnt) [1][2][3]10]. Meanwhile, each of these stages has to deal with the oxidative stress constantly imposed intrinsically or externally. In zebrafish, hepatoblasts are specified at around 24 hpf and start to form the liver primordium at around 30 hpf. Both FGF and Bmp play crucial roles during this period [10,31,32]. In general, FGF acts through the FGFR-RAS-ERK signaling pathway [51], Bmp through activation of Smad1/5/8 phosphorylation [52] and Wnt2bb through the β-catenin-TCF pathway to control organ/tissue development, respectively [53]. It is envisaged that molecules mediating anti-oxidative stress during liver organogenesis might act as a tuner of the pathways controlling cell proliferation or elimination. Based on the facts that 1) Leg1a expression is enriched in the yolk syncytial layer between 24-48 hpf(S10A Fig)and this layer is directly exposed to external stress such as UV irradiation or low level of oxygen, 2) Leg1a expression is obviously enriched in the embryonic liver at 48 hpf, 3) Leg1a is a secretory protein, 4) the leg1a zju1 maternal-zygotic mutant exhibited a small liver only under the cold season, UV irradiation, high temperature, or H 2 O 2 treatment, and 5) the small liver phenotype was rescued by the antioxidant chemicals DPI and APO, we conclude that Leg1a defines a novel anti- stress pathway to protect the liver development. Besides, we noticed that the leg1a zju1 maternal-zygotic adults displayed a shortened body length and reduced survival rate, suggesting that the Leg1-meidated anti-stress pathway is also necessary for wellbeing of an adult fish. Then, the question is how there is a season in a fish facility which is maintained at relatively constant temperature throughout the year? Since the small liver exhibited by the maternalzygotic leg1a zju1 mutant is ROS-dependent we speculate that the difference in the oxygen content in the fish water in different seasons might be the cause of the stress-related phenotype although the temperature is maintained in the facility. We know that the oxygen content in the water is related to atmospheric pressure and that the atmospheric pressure is higher in the cold seasons and lower in the warm seasons. We checked the weather record between Dec 30, 2013 to Jan 30, 2015 in Hangzhou and plotted the liver size against the record of atmospheric pressure. The small liver phenotype in leg1a mutant nicely correlates with high atmospheric pressure in the cold seasons (S1D Fig). However, we cannot exclude other possibilities at this moment. Utilizing specific morpholinos (MOs), we previously showed that leg1a is required for liver bud outgrowth [15]. However, zygotic leg1a zju1 mutants do not show this phenotype, and maternal-zygotic leg1a zju1 mutant phenotype is milder than the phenotype generated by the leg1a-MO. 
The discrepancy between the phenotype caused by MO-injection and the phenotype exhibited by a loss-of-function mutant is indeed a concern in the zebrafish community. The explanations for the discrepancy observed could be: 1) previous used leg1-MO might have yielded an off-target effect on the liver development in the morphants, this fits with the observation that leg1a or leg1b mRNA or even combination of leg1a and leg1b mRNA only partially rescued the morphant small liver phenotype [15]; 2) injecting morpholino itself works as a stress cue to induce the small liver phenotype when Leg1 is knocked down; and 3) since the leg1b gene is still intact in the leg1a zju1 mutant, the mild phenotype exhibited by the maternalzygotic leg1a zju1 mutant might be due to the functional compensation by Leg1b [49]. To narrow down the possibilities, we tried to get the leg1a and leg1b double knockout mutant, however, failed in obtaining such double mutant. We also injected standard control morpholino (ST-MO, derived from human β-globin antisense morpholino) into the leg1a zju1 mutant embryos and found that ST-MO did not enhance the leg1a zju1 mutant phenotype (S11 Fig). We then compared the expression of leg1b in the WT and leg1a zju1 and found that the expression of leg1b is elevated in the leg1a zju1 mutant embryos. These data suggest that the expression of leg1b is mobilized to compensate, at least partially, for the loss of function of Leg1a in the leg1a zju1 mutant. Mechanistically, it appears that Leg1a does not signal through the Bmp pathway because Leg1a over-expression does not promote the phosphorylation of Smad1/5/8 as done by the over-expression of Bmp at 18 or 24 hpf. However, Leg1a over-expression does promote the phosphorylation of Erk upon UV25 treatment. Furthermore, UV-treatment caused up-regulation of the phosphorylation of Erk in WT but not in the maternal-zygotic leg1a zju1 mutant. In addition, as revealed by immunostaining, it appeared that more p-Erk cells were in the WT endoderm than that in the maternal-zygotic leg1a zju1 mutant after UV25 treatment (S12 Fig). The intriguing question is why Leg1a promotes Erk phosphorylation after UV exposure. We speculate that it maybe because UV causes certain modification or conformation change to Leg1a that facilitates the interaction between Leg1a and Fgfr3 to promote Erk phosphorylation. Nevertheless, these data suggest that Leg1a signals through the Erk pathway. Since FGF is a key effector of the Erk pathway and is essential for liver development, our data suggest that there might be a crosstalk between the FGF and Leg1-meidated anti-stress signaling pathways. Therefore, it is of great interest to determine how Leg1 promotes Erk phosphorylation in the future. For example, being a secretory protein, does Leg1a have its own receptor or shares the FGF receptor to mediate its activity? If Leg1a does share the FGF receptor with FGF, then which type of receptor do they share? Or does Leg1a simply serve as an agonist to facilitate the binding of FGF to its receptor? Leg1 is an evolutionally conserved protein across the vertebrates [15]. A recent report showed that Leg1 homologs in monotreme is highly expressed in monotreme milk and appears to be modified by N-glycosylation [17]. This implies that the tissue expression specificity and function of Leg1 might vary among different animal species. Here we showed that zebrafish Leg1a is glycosylated at N 70 . 
Although this glycosylation modification is not essential for the secretion of Leg1a, it is important for Leg1a in the promotion of liver development, for the phosphorylation of Erk and interaction with FGFR3. All available data have suggest that Leg1a is a novel signaling molecule/modulator, which has urged us to identify more downstream signaling molecules involved in this pathway, which may ultimately reveal the importance of this pathway in the evolution of vertebrates. Ethics statement All animal procedures were performed in full accordance to the requirement by 'Regulation for the Use of Experimental Animals in Zhejiang Province'. This work is specifically approved by the Animal Ethics Committee in the School of Medicine, Zhejiang University (ETHICS CODE Permit NO. ZJU2011-1-11-009Y, issued by the Animal Ethics Committee in the School of Medicine, Zhejiang University). Fish lines and maintenance The zebrafish (Danio rerio) AB strain was used as WT in this study. To generate the leg1a mutant, we constructed a TALEN vector against the first exon of the leg1a gene (Fig 1A) according to the "Unit Assembly" protocol [19]. The TALEN mRNA was synthesized using the SP6 mMESSAGEmMACHINE Kit (Ambion) and was injected into the WT embryos at onecell stage. These embryos were bred to the adulthood as founders to mate with a WT fish. Eight embryos from each cross were genotyped using the primer pair leg1a 4244 Fw (CTTACAAGT TACAGCAGCTCC) and legg1a 7748 Rv (CACAACGGACCAGTACATCG) followed by the second primer pair TALEN ID fw (CTCCCAGAGGATGACCATGT) and TALEN ID Rv (ACTCCAGAGCGGATTCTCCT) to identify leg1a mutants, and the rest embryos were bred to adulthood for identification of individuals carrying the mutation. The Tg(hsp70:dnfgfr1-gfp) and Tg(hsp70l:bmp2b) fish lines were obtained from Dr Feng Liu. Fish was raised and maintained in the fish facility (Ai-Sheng Zebrafish Facility Manufacturer Company, Beijing, China) in Zhejiang University according to the standard procedure. Cell lines and plasmid transfection HepG2 cells were grown in the DMEM medium (high glucose, GIBCO), supplemented with 10% newborn calf serum (NBCS, GIBCO). Plasmids were transfected into cells mediated with lipofectamine 2000 (InVitrogen) according to the manufacturer's instruction. Total protein was extracted 24 hours after transfection and was subjected to western blotting analysis. Plasmid construction and mRNA in vitro transcription The ORF region of leg1and erk was cloned into PCS2+ vector. All leg1and erk point mutations were generated by site-directed mutagenesis. The primers for leg1 mutant used in the PCR reaction were designed by the webtoolPrimerX (http://bioinformatics.org/primerx/index.htm). The sequences of primers are listed in S1 Table. All primers used for erk point mutation was designed as previously described [35]. mRNAs were obtained via in vitro transcription using the mMessagemMachine (Ambion) according to the manufacturer's instruction. Whole-mount in situ hybridization (WISH) and liver size measurement WISH was performed as previously described [27]. prox1, hhex, fabp10a (liver fatty acid binding protein 10), insulin and trypsin, fabp2 (intestinal fatty acid binding protein 2) were cloned into expression vectors, respectively [27,31]. Corresponding probes were synthesized via in vitro transcription and were labeled with digoxigenin (DIG, Roche, Diagnostics). Liver size was measured as previously described [54]. 
Briefly, liver was marked out after WISH suing the fabp10a probe, and imaged by Nikon AZ100 from left lateral view after aligning two eyes of the embryo vertically. The fabp10a signal area in each image was calculated by Nikon image system (NIS-elements D v3.0)and used as the index of the liver size. Glycan cleavage PNGase F (NEB, P0704) was used to cleave N-linked glycosylation, and a combination of Endo-alpha-α-Acetylgalactosaminidase (NEB, P0733) and neuraminidase (NEB, P0720) was used to cleave O-linked glycosylation. All the enzyme treatment was performed according to the manufacturer's instruction. UV and drug treatment Embryos were treated with different dosage of UV energy supplied by Ultraviolet Crosslinker (UVP, CL-1000) at 24 hpf and then allowed to grow in the egg water. For H 2 O 2 treatment, 24 hpf old embryos were treated with different concentration of H 2 O 2 for half an hour. For APO and DPI treatment, embryos were incubated with 0.5 μM APO (Sigma,W508454) or 10μM DPI (Sigma, D2926) for 1 hour at 23 hpf, followed by UV25 treatment, and then allowed to grow in normal egg water. Embryos injected with TRE-caErk plasmid were treated with Dox at 24hpf, and replaced with fresh egg water at 30hpf (6hpt) and 33hpf (9hpf), respectively. Measurement of ROS level ROS content measurement was performed as described previously [56] with some modifications. Briefly, embryos were sunk in 100μl of 10μM DCFH-DA(Beyotime, S0033) solution for one hour prior to UV25 treatment. For each sample, embryos were divided into four groups (containing three embryos in each group) and placed in a 96-well plate. After UV25 treatment the fluorescence signal was measured at a 10 min interval for one hour on a Synergy H1 Reader (Biotek) (excitation 485 nm, emission 560 nm). Co-immunoprecipitation (Co-IP) For studying the interaction between Leg1 and FGFR3 in the cell culture system, Leg1 sample was prepared either by diluting 30μl of the fish serum with 1000μl of serum-free DMEM media (Gbico) or by transfecting 293T cells with the leg1 plasmid (cloned into the PCS2 + expression vector) and collecting the culture media 30 hrs after transfection. The Leg1 samples were incubated with the 293T cells expressing FGFR3 (cells transfected with the FGFR3 plasmid in the PLX304 expression vector, the vector is provided by Dr Bing Zhao) at 4°C for one hour. After incubation, the cells were washed with PBS for three times, and lysed with NP40 lysis buffer (50mM Tris-Hcl, PH 8.0, 150mM NaCl, 1% NP40, 2mM EDTA). For studying the interaction between Leg1 and FGFR3 in the embryos, embryos injected with leg1a and fgfr3 mRNA at one cell stage were harvested at 7hpf and lysed with NP40 lysis buffer. All lysates were incubated with Leg1 antibody or Flag antibody at 4°C overnight, followed by incubation with Protein A/ G argrose beads (beyotime, Cat.No.P2012) for further 2 hrs. The beads were washed with cold PBS for three times and eluted by 100mM PH2.2 glycine. The elution was subjected to western blot analysis. Quantitative Real Time-PCR (qPCR) More than 50 embryos were pooled for total RNA extraction. Reverse transcription was performed by SuperScript II Reverse Transcriptase (Invitrogen, 18064-014) according to the manufacturer's protocol. The transcribed cDNA was used as the template in qPCR with SYBR Green Master Mix (Vazyme). The CFX96 real time system (Bio-Rad) was used to obtain the threshold cycle (C t ) value, and the relative expression of each gene was determined after being normalized to the actin gene. 
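The qPCR paragraph above states only that each gene's expression was normalized to the actin gene; a common way to carry this out is the 2^-ΔΔCt calculation sketched below. Treating the normalization as 2^-ΔΔCt is our assumption (the paper does not name the method), and the Ct values in the example are invented.

def relative_expression(ct_gene, ct_actin, ct_gene_ref, ct_actin_ref):
    """Relative expression by the 2^-ddCt method (an assumed choice here).

    ct_gene / ct_actin: threshold cycles of the target gene and actin in
    the sample of interest; *_ref: the same quantities in the reference
    (e.g. WT) sample.
    """
    d_ct_sample = ct_gene - ct_actin        # normalize to actin
    d_ct_ref = ct_gene_ref - ct_actin_ref
    dd_ct = d_ct_sample - d_ct_ref          # compare to the reference sample
    return 2.0 ** (-dd_ct)

# Invented example: leg1b in the mutant versus WT, normalized to actin.
fold_change = relative_expression(ct_gene=22.1, ct_actin=17.0,
                                  ct_gene_ref=24.3, ct_actin_ref=17.2)
print(f"leg1b fold change vs WT: {fold_change:.2f}")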
Primer pairs used are listed in S2 Table.

Statistics

Considering the relatively small sample sizes and the skewed phenotype distribution among individuals in this study, the conventional statistical presentation of a mean with standard error/deviation is apparently not suitable for the liver size measurement data. Instead, quartiles are more intuitive for presenting data with relatively small sample sizes and skewed distributions [57]. Therefore, we used the quartile boxplot to present our data [58]. The box plot was drawn by ggplot2 [59]. Survival ratio statistical analyses were carried out by the Chi-squared test. Other statistical analyses were performed with the Student's T-test. *, p<0.05, **, p<0.01, ***, p<0.001, N.S., no significant difference.

Supporting Information

S1 Images showing an example of determining the liver size by WISH using the fabp10a probe. WT: wild type; mu: leg1a zju1 mutant; mu+UV25: leg1a zju1 mutant treated with UV25. (B) WT and maternal-zygotic leg1a zju1 (mu) embryos were treated with 1 mJ/cm 2 (UV10), 2.5 mJ/cm 2 (UV25) and 5 mJ/cm 2 (UV50) UV at 24 hpf and grew to 3.5 dpf for WISH analysis of liver development. (C) Comparison of liver sizes between the WT and maternal-zygotic leg1a zju1 (mu) embryos growing in a high density condition (200 embryos per 10-cm diameter Petri dish). (D) Growing the maternal-zygotic leg1a zju1 (mu) embryos in the egg water containing 0.5% or 1% ethanol did not cause a small liver phenotype. (E) Upon UV25 treatment the maternal-zygotic leg1a zju2 embryos also exhibited a small liver phenotype at 3.5 dpf. *, p<0.05, **, p<0.01, ***, p<0.001, N.S., no significance. (TIFF)

S3 Fig. leg1a zju1 mutant does not suffer from elevated apoptosis upon UV25 treatment. Images of TUNEL assay in the 54-hpf WT and maternal-zygotic leg1a zju1 embryos (mu) after UV25 treatment at 24 hpf. No abnormal apoptotic activity was observed near the endodermal region including liver (lv) and intestine (in) in the maternal-zygotic leg1a zju1 embryos (mu) compared to the WT. 12 sections from six embryos for each genotype were examined. nc, notochord, nt, neural tube.

RTTA expression was driven by the β-actin gene promoter. Dox binds to RTTA and the Dox-RTTA complex binds to the TRE promoter to drive the expression of caErk. (B) 10 pg Tre-caErk plasmid DNA was injected into one-cell stage maternal-zygotic leg1a zju1 embryos (mu). These embryos were treated with Dox at 6 hpf and total protein was harvested at 12 hpf. The protein samples were subjected to western blot analysis. The total Erk versus Tubulin ratios are shown on the right. Dox, doxycycline. Error bar stands for the standard error. ***, p<0.001. Western blot was repeated three times. (C) Images of representative 3.5-dpf embryos after WISH using the fabp10a probe. Embryos were first injected with Tre-caErk plasmid at the one-cell stage, then treated with UV25 at 24 hpf and followed by Dox treatment for 6 hours (6 hpt) or 9 hours (9 hpt). After Dox treatment, embryos were transferred to the normal egg water to grow to 3.5 dpf.

(A) HepG2 cells were transfected with the leg1b, leg1b N70A , and the vector plasmid DNA. Total protein was extracted 30 hours post transfection and subjected to western analysis of Bip, Chop, phosphorylated eIF2α (p-eIF2α), and total eIF2α. These ER-stress response markers were not activated by the hypoglycosylated Leg1b N70A . Vector, the PCS2 + vector transfected cell.
(B) Western blot analysis of Bip, Chop, phosphorylated eIF2α (p-eIF2α), and total eIF2α in the WT and maternal-zygotic leg1a zju1 mutant embryos at 2 dpf and 3 dpf. (C) qPCR analysis of the transcript levels of ER-stress response markers including atf6, bip, perk, chop, ire1a, and grp94 in 3 dpf WT and maternal-zygotic leg1a zju1 mutant embryos. Error bar stands for the standard error. Primers for analyzing these ER stress markers were as previously reported (S2 Table). Western blot was repeated three times each for A and B. (TIFF)

S8 Fig. N 70 glycosylation is required for Leg1a to protect liver development. Corresponding to Fig 7A and 7B. Western blot analysis of Leg1a or Leg1a N70A protein in 3 dpf old maternal-zygotic leg1a zju1 mutant embryos injected with leg1a (mu+1a) or leg1a N70A (mu+1a N70A ) mRNA at the one-cell stage. Protein samples from the WT and maternal-zygotic leg1a zju1 mutant (mu) embryos were used as controls. Western blot was repeated three times. (TIFF)

S9 Fig. Blocking the FGFR activity attenuates the activation of Erk by UV25 treatment. Tg(hsp70:dnfgfr1-gfp) embryos were heatshocked at 22 hpf to induce the expression of dominant negative FGFR1 (dn-Fgfr1). GFP signal was used to distinguish the dn-Fgfr-expressed (GFP+) and non-dn-Fgfr-expressed (GFP-) embryos. Embryos were treated with UV25 at 24 hpf, and total protein was extracted from embryos at 30 hpf (6 h post treatment) and was subjected to western analysis of the level of p-Erk. Tubulin was used as a loading control. H.S., heatshock. *, p<0.05, **, p<0.01, N.S., no significance. Western blot was repeated two times. (TIFF)

S10 Fig. WISH analysis of total leg1 expression in the WT embryo at 27 hpf (A, arrow points to the endoderm region giving rise to the liver primordium) and in the WT and maternal-zygotic leg1a zju1 mutant embryos at 7 dpf (B, arrow points to the liver). n = 25. (TIFF)

S11 Fig. Morpholino injection does not cause a small liver phenotype to the maternal-zygotic leg1a zju1 mutant. One nanolitre of 0.5 mM standard control morpholino (ST-MO) was injected into one-cell stage embryos. The liver development was examined at 3.5 dpf using the fabp10a probe. N.S., no significance. (TIFF)

S12 Fig. Immunostaining of p-Erk in the WT and maternal-zygotic leg1a zju1 mutant embryos at 27 hpf. Serial cryosections (S1 to S4) from three WT embryos (WT-1, WT-2 and WT-3) and three mutant embryos (mu-1, mu-2 and mu-3) treated with UV25 at 24 hpf were shown. DAPI was used to stain the nuclei. (TIFF)
2016-05-17T07:37:24.656Z
2016-02-01T00:00:00.000
{ "year": 2016, "sha1": "be2543bebe7d113e3d07197818ad834eb66d147d", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosgenetics/article/file?id=10.1371/journal.pgen.1005881&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "be2543bebe7d113e3d07197818ad834eb66d147d", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
39414429
pes2o/s2orc
v3-fos-license
An Efficient FeCl 3 Catalyzed Synthesis of N , N ’-Diarylformamidines An efficient FeCl3 catalyzed synthesis of N,N’-diarylformamidines using triethylorthoformate (1 equivalent) and primary aryl amines (2 equivalents) at ambient temperature has been described. This methodology provides an ecofriendly and simple procedure without using any hazardous and expensive chemicals. Introduction Formamidines have structural similarity to the imidazole ring, a part of the histamine molecule, are supposed to possess enormous biological activities.The biochemical aims of formamidines include monoamine oxidase inhibitor [1,2], adrenergic, neurochemical receptors [3][4][5][6][7][8] and prostaglandin E2 synthesis [9].Formamidines are also noted for their complexation with transition metals [10,11] and usage as auxiliaries in asymmetric synthesis [12,13], electrophiles [14].The utility of formamidines as support linkers in solid phase synthesis [15] is now well established in the field of organic synthesis.Formamidines are now vastly used for the preparation of imidazolium salts which are the precursor for the synthesis of N-Heterocyclic carbenes [16].Moreover, formamidines are useful subject of interest to the physical chemists for dynamic NMR study [17].There have also been reported some cryoscopic molecular weight determination experiments utilizing the molecular association property of diarylformamidines in benzene solution [18]. Results and Discussion There are only a few reports [16,[19][20][21][22][23][24][25][26] in the literature for the synthesis of formamidines specially using triethyl orthoformate and amines.However, there is still scope for further improvement in this field since most of the reported methods suffer from long reaction times, elevated temperature or use of toxic and expensive reagents.Very recently, Sadek et al. [26] reported the synthesis of diarylformamidines using ceric ammonium nitrate (CAN) in water.But it is well known that CAN is a toxic and strong oxidizing reagent and especially in water it shows strong acidic property to affect many sensitive functional groups.So, a mild and efficient method is still desirable.We report herein an efficient FeCl 3 catalyzed synthesis of N,N'-diarylformamidines using triethylorthoformate (1 equivalent) and primary aryl amines (2 equivalents) at ambient temperature.Compared to other methods this method is much more environment friendly due to not using any toxic chemicals.In a preliminary experiment, a solution of aniline (1a) (2 mmol) and triethyl orthoformate (1 mmol) in the presence of a catalytic amount of FeCl 3 (10 mol%) in toluene (10 mL) was stirred for 3 h at room temperature.Solvent was removed and the solid mass obtained was purified by column chromatography over silica gel to afford pure formamidine 1b in excellent yield (Scheme 1). Thus, a series of diarylformamidines have been synthesized using the reaction conditions and the results are summarized in Table 1.All the products were characterized by spectral and analytical studies and were compared with the reported data (10,13b,13d,13f-h).The probable mechanism of the formation of the product may be suggested with the line of the report by Sadek et al. [26] (Scheme 2).It is proposed that FeCl 3 as a Lewis acid activates ethoxy groups and enhances the C-O bond cleavage to generate a stable carbocation which facilitates the subsequent nucleophilic displacements by aromatic amines. 
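As a small worked example of the amounts used in the preliminary experiment above (aniline 2 mmol, triethyl orthoformate 1 mmol, FeCl3 10 mol%), the following sketch converts those quantities into masses. The molar masses are rounded literature values, the 10 mol% is taken relative to the orthoformate (an assumption, since the basis is not stated explicitly), and the helper function is ours rather than part of the paper.

# Rounded molar masses in g/mol (approximate literature values).
MOLAR_MASS = {
    "aniline": 93.13,
    "triethyl orthoformate": 148.20,
    "FeCl3": 162.20,
}

def mass_mg(compound, mmol):
    """Mass in milligrams for a given amount in mmol."""
    return MOLAR_MASS[compound] * mmol

amounts_mmol = {
    "aniline": 2.0,                  # 2 equivalents
    "triethyl orthoformate": 1.0,    # 1 equivalent
    "FeCl3": 0.10 * 1.0,             # 10 mol%, assumed relative to the orthoformate
}

for compound, mmol in amounts_mmol.items():
    print(f"{compound}: {mmol:.2f} mmol = {mass_mg(compound, mmol):.1f} mg")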
Conclusion

In conclusion, we have developed a mild and efficient method for the direct conversion of primary aryl amines to N,N'-diarylformamidines using a catalytic amount of FeCl3. This simple method avoids the use of toxic and expensive reagents.

General Procedures

All melting points were taken on a Gallenkamp melting point apparatus and are uncorrected. The 1 H and 13 C NMR spectra were recorded in CDCl 3 using TMS on 300 and 75 MHz spectrometers, and IR spectra were recorded using a Shimadzu instrument. High-resolution mass spectra were obtained using a QTof Micro YA263 instrument. Toluene was dried over sodium. Chloroform was freshly distilled from phosphorus pentoxide. Petroleum ether of boiling range 60˚C-80˚C and silica gel of 60-120 mesh were used for column chromatography.

Table 1. Synthesis of N,N'-diarylformamidines. a Yields refer to pure isolated products. b Refluxed in toluene.
2017-09-09T21:15:19.450Z
2013-02-25T00:00:00.000
{ "year": 2013, "sha1": "be4a75b3466e12ab3aedd408bcd11f00ab560737", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=28071", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "be4a75b3466e12ab3aedd408bcd11f00ab560737", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
262575185
pes2o/s2orc
v3-fos-license
Outcome Evaluation of Burn Injury Management: A Study of Selective Traditional Home Remedies Background Clinicians classify burns as epidermal, partial thickness (superficial and deep), or full thickness, according to the depth of tissue damage. Although skin is considered the largest organ in the human body, studies investigating burns, their types, and their management has revealed that the background knowledge of burn aid the community possesses remains unsatisfactory. Thus, in this study, we aimed to evaluate the effect of various traditional home remedies, taking into account the type of burns and the nature of the remedies used from a cosmetic point of view. Materials and methods This is an original retrospective study conducted at Dr. Soliman Fakeeh Hospital in Jeddah from June through December 2022. Using the Vancouver Scar Scale (VSS), eligible patients who met our inclusion criteria were invited to participate in the study after a review of their patient history, an assessment of basic vital signs, and a physical examination. Results Fifty-two participants met our inclusion criteria and successfully completed the study. A total of 80 wounds of varying severity in various locations were evaluated. Participants were divided into three categories according to VSS scores indicating good, intermediate, or poor healing. None of the eight cases treated with water resulted in poor healing. However, tomato paste resulted in poor healing for six cases (60%) but moderate and good healing for two cases (20%). Conclusion The safest and most effective initial management for burns among all the reviewed remedies was the application of cool running water, followed by seeking medical attention for evaluation and proper treatment, whereas tomato paste had a markedly poor effect. Introduction The skin is considered the largest organ in the human body; it serves as a protective barrier in addition to its many other vital functions for survival [1].Burns are potentially harmful lesions that can occur at any time and have a variety of negative repercussions, including physical and occupational harm, loss of functionality, cosmetic deformity, and psychosocial harm [2].The term "burn" is defined as an injury to the skin or other organic tissue that is primarily caused by thermal or other acute trauma.It occurs when one or more of the skin cells or other tissues are destroyed by hot liquids (scalds), hot solids (burns of contact), or flames (burns of fire) [3]. Describing burns as first, second, or third-degree alone does not adequately convey the significance of a burn injury.Due to the ambiguous and inconsistent interpretation of these words, they may be misleading [1].Burns are also classified as epidermal, partial thickness (superficial and deep), or full thickness according to the depth of tissue damage.Fourth-degree burns extend beneath the subcutaneous tissues and involve fascia and/or muscle [4].Because they cause an estimated 180,000 deaths each year, burns are a major public health concern worldwide.Almost two-thirds of these occurrences occur in the WHO regions of Africa and Southeast Asia, with the majority occurring in low-and middle-income countries [5].The observed situation is made worse by the fact that Saudi Arabia has weak burn first-aid practices and a significant prevalence of traditional home remedies [6,7]. 
Saudi Arabian respondents to a study conducted in Majmaah reported having limited knowledge of first aid despite possessing bachelor's degrees and being generally well-educated [7].Another study conducted in the Al-Baha region showed that the majority of participants (73.6%) had inadequate knowledge of first aid for burns, while only 26.4% had adequate knowledge [8].In contrast, a 2011 survey of New South Wales residents conducted to assess their knowledge of burns and first aid revealed that only a minority were aware of correct burn first-aid procedures [9].A study conducted in Kumasi utilizing 85 different substances, such as sand, muddy water, starch, corn dough, cow dung, egg white, calamine lotion, gentian violet, ointments, creams, and toothpaste, revealed diverse knowledge of first-aid techniques and administration [10]. Various limitations regarding first aid and initial care for burn patients have also been identified.Consequently, patients encounter numerous challenges during recovery, including more comorbidities, higher mortality rates, and longer hospital stays [11].The early management of burn cases represents one of the biggest challenges and definitely reflects the degree of morbidity and mortality [10].Cooling the burn surface is one of the most traditional forms of care and has been used for decades.However, new management strategies have been advised over time based on advanced assessment, burn site, and burn degree [12,13]. A study conducted in Vietnam revealed that only a minority of physicians had participated in emergency burn management training courses [11].Another study carried out in a hospital in Lahore, Pakistan, showed that few of the parents who presented to the ER department with their children suffering from burn injuries had previous knowledge of how to manage burns [14].In Northern Australia, a study showed that a limited number of total cases were less likely to perform burn first-aid interventions [9].A 2017 study in Al-Madinah City, Saudi Arabia, revealed that parental knowledge of burn, injury, and fracture first aid is inadequate [15].Another study conducted in 2018 in Riyadh, Saudi Arabia, concluded that the level of knowledge and awareness among parents regarding burn first aid was insufficient, finding that only 6% of 300 parents had an adequate awareness level of burn first aid.All the others, which represented the majority of the parents, relied on inappropriate myths to manage burns [16]. Studies about burns, their types, and methods of management have revealed the truth about the community's background knowledge of burn aid, exposing that the average level of knowledge is unsatisfactory and calls for improvement [17].While the body of literature on first-aid knowledge and the management of particular populations is expanding, no published studies have sought to assess the effectiveness of appropriate burn management and its influence on outcomes.In this study, we evaluated initial patient management practices in Jeddah, Saudi Arabia, taking into account the type of burns and the nature of the remedies used. Materials and methods This is an original retrospective study conducted at one of the largest private hospitals in the Middle East, Dr. Soliman Fakeeh Hospital in Jeddah, on patients with different types and levels of burns who presented to outpatient clinics.An ethical approval request was submitted and approved by the institutional review board of Dr. 
Soliman Fakeeh Hospital (Approval No.: 232/IRB/2021), and from June through December 2022, a total of 68 people were recruited. All participants gave their explicit written consent prior to the collection of data after being fully informed of the objectives of the study and assured that only the authors would have access to the information.

Inclusion/exclusion criteria and timeline

All patients who presented to the outpatient clinic were evaluated for eligibility for this study by the resident staff. Eligible patients were then invited to participate in the study. Patients with a burn incident of any type who presented to Dr. Soliman Fakeeh Hospital during the six-month period from June through December 2022 and who used any type of remedy as a first intervention before seeking medical advice were included in the study. We excluded patients with a history of chronic illness, such as autoimmune or systemic diseases, which might have affected the course of wound healing and the overall outcome. Their evaluation included a thorough review of their patient history, an assessment of basic vital signs, and a physical examination (Figure 1).

Assessment basis

The assessment was not centered on demographic characteristics but rather on study-specific variables, such as the degree and type of burn, the time of incidence, the time since the first home intervention, the nature of intervention, and the frequency of application, as well as the results for the Vancouver Scar Scale (VSS) shown in Table 1. The scale considers four physical characteristics of scars: vascularity, height (thickness), pliability, and pigmentation. Thus, the lowest possible score is 0 (for normal skin), and the highest potential value is 13. Due to a lack of data (physicians do not typically document patients' own management of their burns, but only the management provided once the patient seeks hospital consultation), the authors chose to create digitalized checklists (solely for organizational purposes) to evaluate the patients' knowledge and management at the time of occurrence and while the study was being conducted.

Statistical analysis

Data were entered into Microsoft Excel (Microsoft, Redmond, Washington), coded, and reviewed for accuracy prior to being exported to SPSS version 25.0 (IBM Inc., Armonk, New York) for analysis. The frequency and percentages of the categorical variables were utilized for the descriptive analysis. The quantitative variables were described using frequencies and percentages. The Chi-squared test was utilized to examine the association between the variable of interest (VSS) and the other independent variables (initial intervention, type of burns, and degree of burns). The Pearson correlation coefficient was employed to examine the link between VSS and each of the following: frequency of application and duration of application using the same modality. Utilizing linear regression, the factors impacting burn healing were predicted. An interval of 95% confidence was utilized, and a p-value of less than .05 was deemed statistically significant.
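To make the scoring and analysis steps above concrete, the sketch below sums the four VSS components into a total score, bins totals into the good/intermediate/poor healing categories used later in the Results (0-3, 4-8, 9-13), and runs a Chi-squared test of association on a remedy-by-category contingency table. The wound data are invented for illustration, the helper functions are ours, and the study itself used SPSS rather than Python.

from scipy.stats import chi2_contingency

def vss_total(vascularity, height, pliability, pigmentation):
    """Total Vancouver Scar Scale score (0 = normal skin, 13 = worst)."""
    return vascularity + height + pliability + pigmentation

def healing_category(score):
    """Cut-offs as used in the Results: 0-3 good, 4-8 intermediate, 9-13 poor."""
    if score <= 3:
        return "good"
    if score <= 8:
        return "intermediate"
    return "poor"

# Invented wounds: (remedy, vascularity, height, pliability, pigmentation).
wounds = [
    ("water", 1, 0, 1, 1), ("water", 2, 1, 1, 1),
    ("tomato paste", 3, 2, 3, 2), ("tomato paste", 2, 3, 4, 2),
    ("honey", 1, 1, 2, 1), ("honey", 2, 1, 2, 2),
]

# Build a remedy x category contingency table, then test for association.
remedies = sorted({w[0] for w in wounds})
categories = ["good", "intermediate", "poor"]
table = [[0] * len(categories) for _ in remedies]
for remedy, *scores in wounds:
    cat = healing_category(vss_total(*scores))
    table[remedies.index(remedy)][categories.index(cat)] += 1

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")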
Results A total of 68 participants were eligible for the study and included, but 16 patients were ultimately dropped for a variety of reasons, including failure to follow up, refusal to continue the study, and the use of a combination of creams or ointments in addition to the initial modality.This left 52 participants who met our inclusion criteria and successfully completed the study.Male patients made up 53.8% of the population, while female patients made up 46.2%; the mean age was 40.9 years (SD=17.1)( TABLE 2: Baseline demographic characteristics (N=52) A total of 80 wounds of varying severity and in various locations were evaluated.Wounds were divided into four distinct categories according to the degree of the burn: epidermal, superficial partial thickness, deep partial thickness, and full thickness.Patients with multiple wounds of varying severity were also grouped.Participants were divided into three categories according to VSS scores.In the first group, wound scores of 0-3 indicated good healing, scores of 4-8 indicated intermediate healing, and scores of 9-13 indicated poor healing.The majority of participants fell into the 2nd-degree category, with 25% having superficial partial thickness and 43.8% having deep partial thickness.Among the participants, thermal contact was the leading cause of burns.Ice had been applied to 8.8% of wounds, tomato paste to 12.5%, honey to 12.55%, olive oil to 7.5%, milk to 7.5%, yogurt to 10%, toothpaste to 12.5%, coffee grounds to 17.5%, and water to 11.3%.The majority of burns were either well or moderately healed (42.5% and 45%, respectively).Their total mean VSS score was 4.5, the mean frequency of using the initial substance was 2.8 times, and the average number of days prior to seeking medical help was 3.5 (Table 3). VSS -Vancouver Scar Scale The association between the remedies used for burn therapy and the healing categories (excellent, moderate, and poor) is discussed in (Table 4).The correlation is statistically significant (Chi=5.36,p=.021).None of the eight cases treated with water resulted in poor healing, whereas tomato paste resulted in poor healing for six cases (60%) and moderate and good healing for two cases (20%). Discussion The efficacy of home remedies for treating multiple injuries, including burns, has been widely studied, but their definitive effect on the outcome "cosmetic wise" has never been determined, as burns are one of the most common injuries in the world and place a significant burden on healthcare systems.Our research was conducted in Saudi Arabia, where burn injuries are prevalent by nature.Some studies have suggested that socioeconomic status and cultural background are associated with the prevalence of burn injuries, which makes the location of our study of particular interest due to its understudied cultural and social environment [17,18]. 
Once the type of burn has been identified, immediate action must be taken.The efficacy of exposing a burn to cool running water has been repeatedly demonstrated as the most important initial step [3,19,20].Following the administration of first aid, the definitive treatment for less severe burns consists of antibiotic coverage and analgesics, whereas more severe cases necessitate additional treatments, including possible skin grafts and substitutes.In numerous cultures, it is widely believed that honey, olive oil, tomato paste, toothpaste, and coffee grounds, among other substances, can aid in the treatment of burn wounds.In the most severe cases, the use of alternative medicines may cause infection, sepsis, or even death [10,[21][22][23]. Numerous studies have been conducted on the efficacy of these common home remedies, and they are constantly being evaluated for their potential benefits.In this context, we cannot completely rule out the efficacy of some of these substances, as they have their own special proprieties.Olive oil, for example, has antifungal and antibiotic properties [17,24,25], while honey has some intriguing properties that may speed up the healing process, reduce scar formation, and lower the risk of infection [2,26]. Applying water as quickly as possible and at the proper temperature produced the best results in our study.It is a known fact that victims of a burn benefit from the prompt removal of the cause and cooling of the injured area.Reducing the elevated temperature of the burned tissue improves the physiological response.It also provides important palliative care.Using ice as the primary cooling agent, on the other hand, can aggravate an injury by restricting blood flow to the affected area (cold-induced vasoconstriction) [12].Similar effects were observed with milk and yogurt, among other modalities.While honey, olive oil, coffee grounds, and toothpaste yielded moderately satisfactory results, tomato paste produced the least desirable results. Study limitations The evaluation of the data was based on the wounds that were present in the hospital at the time of the study, which provided a broad notion of the effects of only a few types of remedies applied by various patients.Another crucial issue that needs to be addressed is the fact that in this study, comprehensive observation of all cases was not conducted, which means that numerous non-modifiable and other modifiable elements, such as the application method and commitment from participants, may have had an impact on the outcome, not to mention that this study relied on the expertise of experts in VSS assessment; thus, there may be some variance in the results.Therefore, we recommend broadening the investigation to include other types of materials at other public and private institutions. Conclusions Our findings indicate that the safest and most effective initial management technique for burns among all the remedies reviewed is the application of cool running water, followed by seeking medical attention for evaluation and proper treatment.The most markedly poor effect resulted from the application of tomato paste. 
Our reliance on healthcare providers, hospital emergency rooms, and local urgent care units has detracted from our understanding of the importance of having a community that is educated about the first-aid management of burns and emergency home incidents. Of course, it is inadvisable to risk treatment with ineffective home remedies; however, cool running water, the simplest and most readily available modality, has been repeatedly demonstrated to be the most effective. Burn accidents are among the most common injuries in Saudi Arabia and the world, which is why we must take precautionary and preventative measures and educate the public about the significance of primary interventions that the victim or a bystander can carry out until professional medical assistance is available.

TABLE 4: Relationship between initial treatment materials and burn healing. *Statistically significant; #Fisher's exact test
Central Upwind Scheme for Solving Multivariate Cell Population Balance Models

Microbial cultures are comprised of heterogeneous cells that differ according to their size and intracellular concentrations of DNA, proteins and other constituents. Because of the level of detail they include, multi-variable cell population balance models (PBMs) offer the most general way to describe the complicated phenomena associated with cell growth, substrate consumption and product formation. For that reason, solving and understanding such models is essential to predict and control cell growth in processes of biotechnological interest. Such models typically consist of a partial integro-differential equation describing cell growth and an ordinary integro-differential equation representing substrate consumption. However, the involved mathematical complexities make their numerical solution challenging for any given numerical scheme. In this article, the central upwind scheme is applied to solve single-variate and bivariate cell population balance models considering equal and unequal partitioning of cellular materials. The validity of the developed algorithms is verified through several case studies. It was found that the suggested scheme is more reliable and effective.

Introduction

In mammalian cell culture, individual cells exhibit heterogeneity due to differences in their cellular metabolism and cell-cycle dynamics [1]. During the step-by-step cell cycle, each cell of the population grows to a certain size (approximately double its original size) and then divides into two identical daughter cells. Cell division is thus an exponential process: the two daughter cells further divide into four daughter cells, then four into eight, and so on [2]. Environmental factors such as oxygen, pH, temperature and substrate concentration greatly affect the cell growth rate. At any point in time t, in a heterogeneous population, different cells exist at different stages of the cell cycle. These cells can be differentiated according to their size, DNA and RNA contents, protein contents and other intracellular properties. Thus, accurate mathematical models are needed that account for the heterogeneities present at the single-cell level.
Various mathematical models exist in the literature for describing the dynamics of biological systems. Most of these models consider the cell population as a "continuum" or lumped biophase, thus assuming that it behaves as a homogeneous entity. Cell population balance models (PBMs) are the only models that take into account the fact that cell properties, such as protein content and DNA content, are distributed among the cells of a population. Moreover, they have the capability to effectively describe the internal chemical structure of the single cell by incorporating intracellular chemical reactions involving multiple chemical species. Therefore, these models provide the most accurate way of describing the complicated phenomena associated with cell growth, nutrient uptake and/or product formation in microbial populations. They typically consist of multidimensional integro-partial differential equations describing the dynamics of the state distribution function, nonlinearly coupled with integro-ordinary differential equations accounting for substrate consumption and/or product formation. Because of their obvious mathematical complexity, the numerical solution of such models is challenging for any given numerical scheme. In the mid-1960s, these models were used for the first time to simulate cell dynamics [3] [4]. Cell PBMs are either single-variable or multi-variable, depending on the number of variables involved in the system. Multi-variable models explain the complicated phenomena of cell development, product formation and substrate consumption better than single-variable models. Despite these advantages, bioreactor PBMs are associated with many difficulties. First, for the majority of cell systems the growth rate function, the cell division rate and the partitioning probability density function are not known; this can be overcome by flow cytometry analysis. On-line flow cytometry, when combined with suitable cell population models, yields a computer-based system that provides control of the cellular distribution. Second, a great computational cost is required to numerically solve multi-variable cell PBMs.

Initially, Hulburt and Katz [5] introduced population balance models in chemical engineering, and these models were later developed further by Randolph and Larson [6]. Over time, their applied nature made them a vital part of research. The large computational time and the unavailability of exact solutions were the primary obstacles in the development of these balances; analytical solutions are possible only in simplified situations. Thus, researchers turned to numerical solutions instead of exact ones. A large number of numerical techniques are available for approximating population balance equations (PBEs) in chemical and biological engineering, such as the weighted residual method [7], Monte Carlo simulation [8], the method of moments [9] [10], the finite difference method [11]-[13], spectral methods [14], the finite element method [15] [16], and high-resolution finite volume schemes [17] [18].
In this article, the high-resolution non-oscillatory central-upwind scheme is applied to solve single-variate and bivariate cell population balance models considering equal and unequal partitioning of mother cells [19] [20]. The basic idea of such schemes is that they use information about the local propagation speeds, and the approximate solution is obtained in the form of cell averages. Further, such schemes have an upwind nature, as they take care of the directions of wave propagation by measuring the one-sided local speeds. Several case studies are carried out, and the results of the central-upwind scheme are compared with those obtained from the first-order upwind scheme.

Single-Variate Cell Population Balance Model

A single-variate cell population balance model, coupled with the nutrient (substrate) balance, is considered in this section. Let N(x, t) be the function that represents the state of the entire population, so that N(x, t) dx gives the number of cells with an amount of biomass between x and x + dx at time t. The zero moment N_0(t) and the first moment N_1(t) respectively describe the total number of cells per unit bio-volume and the biomass concentration; they are defined as [13]

N_0(t) = ∫ N(x, t) dx ,    N_1(t) = ∫ x N(x, t) dx ,

with the integrals taken over the computational domain [x_min, x_max], and the number density function is obtained by normalizing N(x, t) with N_0(t). The amount of biochemical components is conserved at cell partitioning; in particular, daughter cells cannot have larger amounts of biochemical components than the parent cells. Thus, the partitioning probability density function P(x, y, s) must be zero for all states of daughter cells that are larger than the state of the parent cell. Furthermore, the probability of a dividing cell with physiological state y producing a daughter cell of state x must be equal to the probability of producing a daughter cell of state y − x, i.e. P(x, y, s) = P(y − x, y, s).

Due to the above-mentioned hypotheses and process description, the dynamics of the state distribution function N(x, t) are described by the general cell population balance equation [13]

∂N(x, t)/∂t + ∂[r(x, s) N(x, t)]/∂x = Q(x, t),

where the source term is

Q(x, t) = −Γ(x, s) N(x, t) − D N(x, t) + 2 ∫ Γ(y, s) N(y, t) P(x, y, s) dy,

with Γ(x, s) the cell division rate, r(x, s) the single-cell growth rate and D the dilution rate, together with appropriate initial conditions. Let us define a boundary b of the state space as a point where at least one quantity of biochemical components attains either its minimum value or its maximum value; boundary conditions are then imposed at these points as given in [13]. Equation (8) has three terms: the first term represents accumulation, the second term accounts for growth into larger cells, and the third term is the source term. In the source term (cf. Equation (8)), the first contribution represents the loss of cells due to partitioning, which leads to the birth of daughter cells; the second contribution is the dilution term; and the integral birth term is multiplied by the factor 2 to represent the division of a parent cell into two daughter cells. Equal partitioning, where the mass of a mother cell is divided equally between two daughter cells, can be described mathematically by replacing the partitioning probability density function with a Dirac delta function, P(x, y, s) = δ(x − y/2).
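To make the effect of this equal-partitioning assumption explicit, the short derivation below is our own added sketch (not reproduced from the paper), assuming only the delta-function form P(x, y, s) = δ(x − y/2) stated above; it shows how the birth integral collapses:

```latex
% Added sketch: reduction of the birth term under equal partitioning.
% Substitute P(x,y,s) = \delta(x - y/2) and change variables u = y/2, dy = 2 du:
2\int \Gamma(y,s)\,N(y,t)\,\delta\!\left(x-\tfrac{y}{2}\right)\mathrm{d}y
  \;=\; 2\int \Gamma(2u,s)\,N(2u,t)\,\delta(x-u)\,2\,\mathrm{d}u
  \;=\; 4\,\Gamma(2x,s)\,N(2x,t).
```

Only cells of mass 2x can produce daughters of mass x, which is why the resulting birth term is nonzero only for x in [x_min/2, x_max/2], as noted below.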
Under this assumption, Equation (8) reduces to [13]

∂N(x, t)/∂t + ∂[r(x, s) N(x, t)]/∂x = −[Γ(x, s) + D] N(x, t) + 4 Γ(2x, s) N(2x, t),

and the integral term disappears. Since the mass of a dividing cell lies between x_min and x_max, the birth term in Equation (12) is nonzero only on the domain [x_min/2, x_max/2]. The substrate consumption in time is described by an ordinary integro-differential equation of the form [13]

ds/dt = D (s_f − s) − ∫ q(x, s) N(x, t) dx.

The first term in Equation (13) denotes the inlet minus outlet rates from the reactor, and the integral term describes the rate of loss of substrate due to cell growth. Here, q(x, s) denotes the consumption rate and s_f is the saturation (feed) concentration. In this model, the coupling between the cell balance and the substrate balance is the only source of nonlinearity; for a constant abiotic environment, the problem becomes linear.

Numerical Scheme for the Single-Variate Cell PBM

Here, the semi-discrete central-upwind scheme is derived to numerically approximate the PBE model in Equation (7) [19] [20]. Before applying the scheme, the computational domain is discretized: let n be the number of discretization points, Δx the constant width of each grid interval, x_i the cell centers, and x_{i±1/2} the cell boundaries. After integrating Equation (7) over each interval [x_{i−1/2}, x_{i+1/2}], a semi-discrete evolution equation is obtained for the cell averages N_i(t), with numerical fluxes at the cell interfaces given by the central-upwind formulas of [19]. Here, N+ and N− denote the right- and left-sided interface values of a piecewise linear reconstruction of N. The slopes of this reconstruction are first-order approximations of ∂N(x, t)/∂x and are calculated using a nonlinear limiter that ensures the non-oscillatory nature of the reconstruction [19] [20]; the computation of these slopes is given by a family of discrete derivatives combined through the min-mod nonlinear limiter MM, with Δ denoting central differencing. The local one-sided speeds at the cell interfaces are evaluated as in [19]. The second-order TVD Runge-Kutta scheme is used to solve Equation (18) and achieve second-order accuracy in time; it updates N in two stages.

Bivariate Cell Population Balance Model

This section considers the bivariate cell population balance model for equal partitioning, in which two property coordinates, x and y, are used. The evolution of the state distribution function N(x, y, t) is governed by a balance equation of the same form as Equation (7), with a growth flux in each of the two coordinate directions and a source term Q(x, y, t); an initial condition specifies N(x, y, 0), and boundary conditions are imposed on the boundary of the rectangular state space. The above cell population balance model is again coupled with the mass balance of the substrates [13].

Numerical Scheme for the Bivariate Cell PBM

Let n_x and n_y represent large integers giving the number of grid cells in the x- and y-directions, respectively. A Cartesian grid is assumed on a rectangular domain [x_min, x_max] × [y_min, y_max], with cells C_{i,j} of widths Δx and Δy, where 1 ≤ i ≤ n_x and 1 ≤ j ≤ n_y. At any time t, the cell-averaged values N_{i,j}(t) of the conserved variables are obtained by averaging N over C_{i,j}. A piecewise linear interpolation is built from these averages as described in [19] [20], and on integrating Equation (26) over the control volume C_{i,j}, the two-dimensional semi-discrete scheme is obtained, with one-sided local speeds computed in each coordinate direction [19] [20].

Numerical Case Studies

In this section, a few single-variate and bivariate case studies are considered, and the suggested numerical schemes are analyzed qualitatively and quantitatively.

Test Problem 1: Single-Variate Case

The following assumptions are made for this problem:
• The Gaussian division probability density function is taken as the initial distribution.
• Due to the constant abiotic environment, Equation (13) for substrate consumption is not considered.
• Growth rate functions are considered as: 1) constant growth rate, 2) linear growth rate, and 3) quadratic growth rate, where m_0 is the average mass at time t and U_d is the average doubling time. In the case of unequal partitioning, the number density function does not show periodic behavior; the partitioning mechanism is then described by a beta distribution [13] [21], and the division function is given as in [13], where f is the division probability function, a truncated Gaussian distribution, and it is assumed that a balanced growth state with time-independent mass is reached [21].

In the cases of constant, linear and quadratic growth rates, the doubling times for equal and unequal partitioning are taken to be U_d = 5 h, U_d = 2 h and U_d = 5 h, and the simulation times are 30 h, 20 h and 30 h, respectively. Figures 1-4 show, for the constant growth rate, the state distribution function and the number density function in two and three dimensions obtained by comparing the central-upwind scheme with the first-order backward difference method. Figures 5-8 describe the linear growth case obtained from the same comparison, and Figures 9-12 show the quadratic growth case. The parameter values are given in Table 1. From the figures, it is clear that in both the equal and unequal partitioning cases the solution evolves fastest for quadratic growth, slowest for linear growth, and in between for constant growth. It is also observed that when the time doubles, the mass concentration doubles, and that the first-order scheme is more diffusive while the central scheme remains accurate. The results verify that the central scheme can capture such profiles more efficiently and accurately.

Test Problem 2: Single-Variate Case

In this problem, the following assumptions are taken into account, and the other parameter values are listed in Table 2. In this case the growth rates also depend on the substrate concentration; therefore, Equations (7) and (13) are solved together.
• The growth rate is taken as in [13]-[15].

The numerical test problems verify the usefulness of the suggested scheme, which is computationally efficient, accurate, and easily applicable in comparison with higher-order schemes.

Figure captions (Test problem 1, single-variate case): Figure 2, state distribution functions for constant growth rate; Figure 3, number density functions for constant growth rate; Figure 4, number density functions for constant growth rate in 3-D; Figure 6, state distribution functions for linear growth rate; Figure 7, number density functions for linear growth rate; Figure 8, number density functions for linear growth rate in 3-D.
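As a complement to the semi-discrete central-upwind scheme summarized in the preceding sections, the Python sketch below illustrates the main ingredients for the growth (advection) term of the single-variate equation: a min-mod limited piecewise linear reconstruction, one-sided local speeds, and a central-upwind interface flux. This is a minimal illustrative sketch under our own assumptions about the grid, growth rate and source term; it is not the authors' implementation, and the precise flux and limiter formulas used in the paper are those of [19] [20]. In practice, the right-hand side returned by such a function would be advanced in time with the two-stage TVD Runge-Kutta method mentioned above.

```python
# Minimal sketch (not the authors' code) of a semi-discrete central-upwind
# update for the growth term of a 1D cell PBE, dN/dt + d(r(x) N)/dx = Q.
# Grid, growth rate and source term are illustrative assumptions only.
import numpy as np

def minmod(a, b):
    """Min-mod limiter used for non-oscillatory piecewise linear slopes."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def central_upwind_rhs(N, x, dx, r, Q):
    """Semi-discrete right-hand side dN_i/dt for the cell averages N_i."""
    # Limited slopes and one-sided interface values of the reconstruction.
    dN = np.diff(N)
    s = np.zeros_like(N)
    s[1:-1] = minmod(dN[:-1] / dx, dN[1:] / dx)
    N_left = N[:-1] + 0.5 * dx * s[:-1]    # N^- at interface i+1/2
    N_right = N[1:] - 0.5 * dx * s[1:]     # N^+ at interface i+1/2

    # One-sided local speeds from the flux derivative f'(N) = r(x_{i+1/2}).
    r_face = r(0.5 * (x[:-1] + x[1:]))
    a_plus = np.maximum(r_face, 0.0)
    a_minus = np.minimum(r_face, 0.0)

    # Central-upwind numerical flux (Kurganov-type form).
    fl, fr = r_face * N_left, r_face * N_right
    denom = np.where(a_plus - a_minus > 1e-14, a_plus - a_minus, 1.0)
    H = (a_plus * fl - a_minus * fr) / denom \
        + (a_plus * a_minus / denom) * (N_right - N_left)

    rhs = np.zeros_like(N)
    rhs[1:-1] = -(H[1:] - H[:-1]) / dx + Q(x, N)[1:-1]
    return rhs

# Example usage with an illustrative linear growth rate and no source term.
x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
N = np.exp(-200.0 * (x - 0.3) ** 2)                 # initial distribution
rhs = central_upwind_rhs(N, x, dx, r=lambda m: 0.1 * m, Q=lambda x, N: 0.0 * N)
```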
Grounded Ambitions: A Lean Approach for Assessing Beachability in Concept Design

Littoral operations have become an increasing interest for defense stakeholders over the last several decades. Many navies currently operate ship-to-shore assets that are designed to travel shorter distances exclusively in the littorals between a ship and the beach. New concepts are being designed to transit much longer distances from shore to shore in both blue water and littoral regions. This Concept of Employment (CONEMP) drives these ships to displacements that are orders of magnitude larger. Compared to smaller vessels, where seakeeping and maneuverability performance in the surf zone are a significant area of interest, larger vessels have a comparatively greater risk with respect to the ability of the ship to get far enough up a beach to safely deliver assets and then get off the beach. This research presents the foundation for a new simulation tool to analyze how far up the beach a ship will be able to get given loading condition, initial speed, beach condition, and hull shape. The focus of this research is to provide a low computational-cost method for analyzing the beachability of a ship that still considers the dominating physical phenomena of grounding at early stages of design. The tool will need much faster turnaround times than high-fidelity Reynolds-Averaged Navier-Stokes (RANS) or Finite Element Analysis (FEA) simulations to support the rapid and evolving environment of concept design timelines.

INTRODUCTION

Landing ships and craft play a critical role in delivering people and supplies to areas with limited infrastructure, during both war and peacetime. They have been the vanguard in many operations, such as the Normandy landings during WWII and Incheon Bay during Korea, as well as being some of the first on the scene during disaster relief efforts such as Operation Sea Angel following the 1991 Bangladesh cyclone and Operation Unified Assistance following the 2004 Indian Ocean tsunami (Lewis, 2023; Smith, 1995; Tsunami aid: Who's giving what, 2009). Landing ships and craft enabled each of these operations by facilitating the movement of people and supplies in a way that was unachievable by air and ground means. Landing craft remain a major interest to the world's navies, with most major navies having a significant amphibious force in their fleet (Baker, 2023). Amphibious landing operations conducted by the Allies during WWII often saw the use of Landing Ship Tanks (LSTs), which were ocean-going ships that would carry heavy equipment such as main battle tanks and smaller landing craft. LSTs could travel long distances in shore-to-shore scenarios that smaller landing craft cannot, as shown in Figure 1. LSTs were not only important for large amphibious operations, but also for supplying troops in areas with no infrastructure, such as the Pacific Islands, as shown in Figure 2. The director of the Southwest Pacific forces during WWII, Daniel E.
Barbey, said of the LST, "Without these ships there would have been no Southwest Pacific Force. Without these ships the major amphibious invasions of Europe and the Pacific could not have been undertaken" (Barbey, 1969). D-Day, in which over 4,000 landing craft of various types were deployed, was delayed in part due to the required 230 LSTs that were not yet ready, as they were necessary to deploy the five sea assault divisions and heavy armor companies (Koenig & Doerry, 2018). Following WWII, changing operational needs drove the amphibious force away from the LST to the Landing Ship Dock (LSD), a type of ship that can carry several coastal, short-range landing craft and does not beach itself, unlike the LSTs. The advantages of the LSDs are that they are generally faster than LSTs and can be outfitted for multiple roles when compared to the more restricted LST (Hope, 1991). Across the world, LSDs and LSD-type ships like Landing Platform Docks (LPDs) and Landing Helicopter Docks (LHDs) are either being designed or acquired by many navies due to the flexibility that these types of ships offer (Keane et al., 2009). The last major LSTs designed by the United States were built in the mid to late 1960s and have since all been decommissioned from the US fleet.

As a result, and as demonstrated by the current United States amphibious fleet, most existing vessels that physically land on the beach are small craft like the Landing Craft Air Cushion (LCAC), Landing Craft Utility (LCU) and Assault Amphibious Vehicle (AAV) that operate between the larger LSDs, LPDs, and LHDs and the beach. The LCAC, an unconventional landing craft shown in Figure 3, has an operational range of 200-300 nm (Naval Sea Systems Command, 2021). The average concept of employment (CONEMP), a use-case-oriented description of a system, for these craft only necessitates shorter transit distances near the coast. This, in addition to host ship requirements, has driven the size of these ship-to-shore craft to be much smaller than the LSTs of WWII. LCUs can generally be better categorized as boats due to their operational profile between the coast and the host ship (Raunek, 2022). Because boats are generally smaller, a major focus during their design is on performance in a near-beach seaway, as the waves can have a significant impact on their ability to safely arrive and land on the beach. This has prompted most of the research and development for landing craft to be focused either on small craft, with an emphasis on how they handle in the surf zone, or on very large ships that do not contact the beach directly. For purposes of this paper, large ships will be considered as those that must transit the open ocean and usually have a displacement greater than 1000 tonnes. The flexibility of the LSD-type ships is not without its drawbacks. Reliance on multiple smaller craft to bring equipment and supplies to the shore functionally limits the missions and quantities the ship can deliver. To overcome this limitation, there has been a push in recent years to develop a larger beaching vessel, operationally similar to the LST, that can land heavy equipment as well as transit the open ocean. The ability to perform both of these functions means that a large landing ship will be able to operate independently of ports and landing craft. This falls in line with the concept of the fast-moving, self-supplied marine littoral regiment (MLR), which will provide mobility and flexibility for littoral combat operations (U.S.
Marine Corps, n.d.). In addition to supporting an MLR, a large landing ship will be able to deliver supplies and troops across the area of operations with minimal infrastructure support and aid the Naval Construction Force (SEABEEs) in establishing infrastructure. An artist's rendering is shown in Figure 4 (Harper, 2022).

Due to the shift towards LSD-type ships and small beaching craft, a knowledge gap exists for large, LST-like landing ship design, which makes the acquisition of these ships particularly high-risk. Additionally, since WWII, ground vehicle weights have increased, with the heaviest vehicles, main battle tanks, increasing from 30.3 tons during WWII, to 61 tons when the M1 Abrams was first developed, to the roughly 75 tons of the current Abrams (Larson, 2021). This increase in weight creates design challenges in landing these heavy vehicles, posing risk to new landing craft designs. Increased costs and complexities since WWII also mean that ships are harder to build and replace, with Fletcher-class destroyers costing $102 million in current USD per ship while the Arleigh Burke Flight III costs $1.45 billion per ship (Hill, 2023). Increased cost means that high-risk factors, such as a beaching ship's ability to get up the beach, need to be more heavily scrutinized in the early design stages.

To address knowledge gaps, larger ship acquisition costs, and increasingly short timeframes, modeling and simulation is being implemented across the ship design space to maximize ship performance with respect to mission requirements (Cole, 2022). Modeling and simulation, a capability that did not exist in a state comparable to today's when LSTs were designed, provides a low-cost means to address potential risks in ship designs. To pare down risk and address the knowledge gaps specific to large landing ship beaching, this paper outlines the theory for developing a simulation tool for the large ship beaching problem.

If there were a tool that filled the identified knowledge gap, it would provide a critical new capability in the ship concept design space. Currently, the impacts of ship design characteristics on hydrodynamic performance, like resistance and seakeeping, are well known, and there exists an array of available ship design analysis tools to determine the seakeeping and resistance performance of a ship. This is not the case for beaching analysis; therefore, there is a need to develop an analysis method that accurately assesses the beaching problem and determines the relationship between ship design characteristics and beaching performance. With a method available to assess beaching characteristics quantitatively, a trade-off in performance across these three areas of hydrodynamic design can be determined, driving ship design forward and delivering well-rounded ships to complete the specified mission.
Application of this tool in the design spiral, paired with beaching Top Level Requirements (TLRs), will be a driving factor for hull shaping and displacement. Using provided requirements for the beaching environment, ramp angle, ramp length, and fording depth, a designer would be able to assess bow shaping impacts and determine the necessary ship conditions for beaching, such as trim and displacement. These discoveries would inform the amount of payload the vessel would be able to carry, the arrangement and amount of ballast tanks needed to meet a required trim, and the opportune bow shape for a beaching operation. These impacts are critical components of the design of beaching ships, and this tool provides a way for ship designers to identify risk and communicate the capabilities of concept designs in a quantitative and comprehensive manner.

BACKGROUND

The United States Defense Acquisition Process can be summarized into multiple milestones, as shown in Figure 5 (Drezner, et al., 2011). Discussion in this paper is pre-Milestone B, with a focus on Milestone A, Material Solution Analysis. According to Defense Acquisition University (DAU), "Phase activity will focus on identification and analysis of alternatives, measures of effectiveness, key trades between cost and capability, life-cycle cost, schedule, concepts of operations, and overall risk" (DOD Instruction 5000.84 "Analysis of Alternatives", 2020).

During preliminary design, naval architects must deal with several technical challenges under tight timelines and budgets. Despite only 5% of costs being expended in preliminary design, 60%-80% of lifecycle costs are usually determined by decisions made in this stage; visually, this is demonstrated in Figure 6. Being able to perform analysis early in the design process is a necessary way to buy down risk and lower program costs (Gaspar, 2011).

Figure 6: Overall Ship Cost by Milestone (Gaspar, 2011)

Due to the numerous, interrelated aspects of ship design, naval architects often refer to the design process as the design spiral shown in Figure 7 (Gaspar, 2011). Starting very broad and assessing one aspect of the design at a time, the design should eventually converge to meet requirements. It is expected that as the design changes, setbacks will occur, and specific aspects of the design will need to be assessed multiple times. For example, as displacement increases, draft increases and the resistance of the hull form must be re-assessed. If resistance increases too much, larger engines must be selected to meet speed requirements, which will again increase displacement and have additional impacts on ship performance and cost. To add more complications, requirements at this stage of design are often fluid, and design teams must adapt quickly and be flexible to these changes.

Concept design makes up various aspects of the solutions analysis and concept refinement steps shown in Figure 8 (Schank, et al., 2014). Project durations often range from small 3-month excursions to large multiyear efforts, with each type of project having its own complexities and complications. In comparison to other design stages, this is a relatively short period of time within "typical" USN ship design cycles, shown in Figure 9.
Examples of the work naval architects do include integration studies, Analysis of Alternatives (AoAs), Requirements Evaluation Teams (RETs), and full ship concept designs, sometimes referred to as indicative designs. During the CVX AoA, roughly 70 ship studies were developed and evaluated (Raber & Perin, 2000). The magnitude of these design programs creates a dire need for tools that deliver fast, accurate results. Fortunately, developments in recent years have made analysis in many areas of ship design quite agile. In the example of an AoA, it is necessary for technical experts to have the ability to assess several designs against Key Performance Parameters (KPPs). Design Space Exploration (DSE) is another growing area of interest in naval architecture, in which a full range of parameters is varied in order to study a wider scope and gather quantifiable data to inform decisions (Robertson, et al., 2022). DSE is increasingly valuable in situations where there may be design bias or a lack of clarity in requirements. An effort conducted at Naval Surface Warfare Center (NSWC) Carderock Division generated 2,916 individual hull forms to evaluate the impacts of varying hull characteristics on resistance, stability, and seakeeping (Strickland, Devine, & Holbert, 2018). Doing such analysis on hull shape provides valuable insight as to the preferred design for resistance, seakeeping, or other metrics such as beachability. This makes DSE yet another use case for a beaching tool that is capable of accurate results and short run times. The concept design environment can be fluid and fast-paced, and it often necessitates that naval architects be able to quickly run analyses, often with turnaround times of less than 24 hours, or against hundreds to thousands of design points. Although concept design is a small portion of the overall ship design life cycle, it informs and drives the rest of the cycle, making it a critical step in ship acquisition.

Figure 9: Ship Design Lifecycle (Schank, et al., 2014)

As mentioned previously, a majority of the beaching vessels currently in service are small craft (less than 1000 tons displacement) that can be carried by larger ocean-going ships. Extensive research has been done, and is currently being executed, to characterize the performance of these craft. The University of Iowa, with the support of the Office of Naval Research (ONR), has recently executed research on the surf zone dynamics and beachability of a small, single-operator craft called the Quadski, in which they experimentally and numerically defined the dynamics of the craft while beaching (Behra, 2020; Yamashita, et al.
2022). The Quadski work considers both the hydrodynamic and ground reactions when analyzing craft dynamics, with both interactions being major focuses of this paper. However, since the Quadski is small compared to the operational environment, there is a major concern with the seakeeping capabilities of the craft. It also has wheels that enable the craft to more easily get up a beach. This means that a majority of the research has been allocated to the hydrodynamic reactions of the craft with respect to waves in the near-beach zone. In addition, the grounding work that has been completed uses a coupled CFD-MBD (computational fluid dynamics coupled with multibody dynamics) approach to define ground reactions, which is computationally expensive and complex. As shown in Figure 10 (Yamashita, et al., 2022), this approach requires a well-defined mesh that is recalculated at every time step to account for soil deformation, driving up computation costs. The same group also worked on a lower-fidelity solution which uses data from higher-fidelity models to drive outputs; however, this means that large changes in the design will require additional simulations with the high-fidelity model (Yamashita, et al., 2023). Since concept design often requires significant design changes, this approach is not a viable method. In summary, due to the high computational costs and long setup and run times of high-fidelity codes, as well as a heavy focus on seakeeping, small craft beaching research is not applicable to pre-Milestone B applications or for scaling to larger ship sizes.

Separate work has been done to characterize the deformation of saturated soil using LS-DYNA, a commercial structural deformation code that has large material libraries and experimental validation, making it ideal for quick simulation development (Flores-Johnson, et al., 2016). However, these studies and LS-DYNA solve material interactions on the grain scale, which leads to high computational costs and heavy reliance on user knowledge for setup (Flores-Johnson, et al., 2016; Sturt, et al., 2021). LS-DYNA has been used in applications of saturated sand deformation, making it applicable to the beaching problem; however, it does not calculate hydrodynamic interactions, making it difficult to use in landing vessel applications (Flores-Johnson, et al., 2016). Additional research has been done on the effects of ship-structure interactions for larger commercial ships, generally regarding the effect of grounding of the ship or the effect of a ship striking an offshore platform. Many of these studies used Finite Element Analysis (FEA) approaches within LS-DYNA to quantitatively define the effect of ship strikes on the sea floor as well as on offshore platforms (Nguyen, et al., 2011; Yu, et al., 2016). Like other LS-DYNA codes, the main issue with these methods is that they require long setup times, large amounts of computational resources, and high user expertise, making them unsuitable for a concept design problem. Research done by Hansen, et al.
(1995) attempted to define how far a ship will travel up a beach during grounding. They utilized a pressure method to calculate the soil reaction, which was experimentally verified as shown in Figure 11 (Sterndorff & Pedersen, 1996). This method, while less computationally complex than LS-DYNA, still required a significant amount of computational resources and would require additional experimental data to support the analysis of vessels with other than simple, conventional bows.

In other technical disciplines related to naval architecture, there are concept design tools, such as Total Ship Drag (TSD) for resistance prediction and the Ship Motion Program (SMP) for seakeeping evaluation, that allow naval architects to evaluate concepts rapidly (Wilson, et al., 2011). These concept design tools, while physics-based, generally have less fidelity than some of the more computationally intensive tools; however, they provide a reference point to compare multiple designs and inform the stakeholder on the best path forward. These tools have quick setups and run times, which make them appealing for the fast-paced environment of pre-Milestone B work, but they lack both accuracy and flexibility compared to more expensive solutions.

TSD is a low-fidelity tool that provides resistance predictions within seconds by reducing the physical problem using potential flow and Thin Ship Theory assumptions. A critical component of TSD that makes it well suited for early-stage design work is that it is relatively easy to use. For example, the simulation is relatively insensitive to the input mesh, removing the need for sensitivity studies and allowing for reduced setup times. The low cost of TSD makes this tool ideal for evaluating a large trade space of concepts. The validation of TSD explains that it "does a good job of providing quick and reasonably accurate evaluations for typical US Navy hull forms" (Wilson, et al., 2011).

The studies previously mentioned demonstrate that work is being done to evaluate beaching in a high-fidelity environment; however, there is a need for beaching analysis programs equivalent to TSD. Only one known method exists for evaluating beaching in a low-fidelity realm: a quasi-static solution, solving an energy balance equation. Based on a paper written by Pedersen (1995), this method considers a two-phased approach in which the first phase considers a ship with velocity V contacting a sloped beach and trimming about a prescribed contact point until the ship's trim is equal to the beach slope, and a second phase then considers the hull sliding up the beach with the entire keel in contact with the beach. The second phase is only entered in this calculation if there is kinetic energy remaining after the ship reaches the trim of the beach. This model is limited in that the flat, sloped beach is considered not to deform, which limits the effect of the beach on the ship to a simple frictional force in accordance with the Amontons-Coulomb law, using a constant kinetic friction coefficient (Pedersen, 1995).
The major limitations of this method are the two-phased quasi-static approach and the severely limited consideration of hull shape on beach deformation. The quasi-static approach limits the action to large time-step phases in which the physics is simplified using different methods in each phase. In both phases, this method ignores the interactions between the relevant physical features. For example, any translation of the hull up the beach as it rotates about its contact point in phase I is neglected. Additionally, ignoring hull shaping would yield the same result for two hulls approaching the beach with the same velocity and mass, one having a broad blunt bow with a bulb and another with a skinny sharp bow, so long as they had the same contact point. Considering the large-scale impacts of bow shaping on other hydrodynamics, like fluid resistance, this is a major shortcoming of the current strategy and severely limits designers' ability to compare beaching results across a trade space of differing hulls, a common activity in pre-Milestone B. Likewise, ignoring the deformation of sand is an overly broad simplification of the interaction and has resulted in low confidence in the results. Additionally, no validation of this method has ever been performed. Given this evaluation method, any engineer would be ill-prepared to answer the questions they are currently being asked.

PROPOSED APPROACH

Language like beaching, beachability, and grounding needs definition before proceeding. For the purposes of this paper, beaching and grounding are synonymous in defining the act of a vessel impacting, riding up on, and embedding into a beach. This maneuver is intentional and a critical part of the prospective vessel's CONEMP; it does not include accidental grounding (i.e., on a sandbar). Beaching and grounding start when the hull, likely near the intersection of the bow and keel, contacts the beach and end when the vessel comes to a static equilibrium. Beachability is a qualitative measure of how reliably and safely a ship can deliver a payload from ship to shore.

Beachability is generally quantitatively measured as an achievable fording depth or a required ramp length or angle. Factors such as the final position of a ship after beaching can be used to inform ramp design or evaluate against current ramp characteristics. Ramp angle defines the angle from the horizontal at which the ramp contacts the beach. Ramp length defines how far, in any dimension, the ramp can extend from the hinge point. Fording depth represents the distance from the point where the ramp contacts the beach to the water free surface; it functionally represents the maximum depth of water the payloads will encounter during the beaching operation. Different payloads will have different requirements for each of these criteria. For example, vehicles with a large separation between axles might have a minimum ramp length or maximum ramp angle to prevent chassis contact with the hinge point of the ramp. Other payloads, such as vehicles that cannot have their exhaust stacks submerged without risking downflooding into the engine, might have fording depth requirements. Additionally, ramp length, ramp angle, and fording depth can have significant impacts on the duration of the beaching mission, which can be a critical component of performance.
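To make these ramp-related quantities concrete, the short Python sketch below computes the fording depth as the vertical distance from the waterline to the intersection of a ramp line and the beach line, in the spirit of the ramp-intersection calculation described later under Future Work. It is a minimal 2-D geometric sketch under our own simplifying assumptions (straight ramp, planar beach); the function name, inputs and example numbers are illustrative, not part of the proposed tool.

```python
# Minimal 2-D geometric sketch (illustrative only, not the proposed tool):
# fording depth as the vertical distance from the waterline (z = 0) to the
# point where a straight ramp meets a planar beach.
import math

def fording_depth(hinge_x, hinge_z, ramp_angle_deg, beach_slope_deg, beach_x0=0.0):
    """Return (x, z, depth) at the ramp/beach intersection.

    hinge_x, hinge_z : ramp hinge point (x positive toward the beach, z = 0 at waterline)
    ramp_angle_deg   : ramp angle below horizontal
    beach_slope_deg  : beach slope above horizontal
    beach_x0         : x where the beach plane crosses the waterline
    """
    m_ramp = -math.tan(math.radians(ramp_angle_deg))    # ramp drops going forward
    m_beach = math.tan(math.radians(beach_slope_deg))   # beach rises going forward
    # Solve hinge_z + m_ramp*(x - hinge_x) = m_beach*(x - beach_x0)
    x = (hinge_z - m_ramp * hinge_x + m_beach * beach_x0) / (m_beach - m_ramp)
    z = m_beach * (x - beach_x0)
    return x, z, max(0.0, -z)     # depth is zero if the contact point is above water

# Example: hinge 2 m above the waterline, 5 m short of the beach's waterline crossing.
print(fording_depth(hinge_x=-5.0, hinge_z=2.0, ramp_angle_deg=15.0, beach_slope_deg=3.0))
```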
Assumptions

The proposed tool will consider a simplification of the physics of beaching to reduce runtime and fit within the discussed concept design framework. The considered physics include kinetic energy transformed to potential energy as the vessel moves up the beach, energy losses from deformation of the beach, and other hydrostatic losses. These aspects of the physics have to take hull shape into account. The problem needs to be solved as a time series so that small changes in the state of the ship, and the interaction of these changes over time, are captured. The theoretical assumptions taken to reduce the computational complexity of this problem are listed below. Should future Verification and Validation (V&V) efforts determine that any of these assumptions have significant detrimental impacts on the simulation, modifications will be made to remove or modify that assumption.

• The origin will be at the intersection of the stem and the waterline when the ship contacts the beach, with the positive X axis traveling away from the bow of the ship; that means the stern will be at a negative length from the origin at t = 0. The Y axis will be 0 on the centerline, negative to port and positive to starboard; this assumes that the contact point is on the centerline. The Z axis travels from keel to weather deck with Z = 0 at the waterline. The coordinate system can be seen in Figure 12.
• Sand friction will be calculated as a solid-on-solid interaction;
• The beach surface is a flat plane with a specified slope about the Y axis;
• The representative beach geometry will not undergo deformation as the simulation progresses;
• Displaced sand moves vertically, and once it is moved it is no longer considered in the simulation;
• The coefficient of wetted frictional drag, C_F, is calculated via the ITTC '57 equation (Zubaly, 1996);
• The coefficient of wetted residual drag, C_R, will be provided via external, supplementary analysis;
• The shallow water effect can be ignored at reasonably low speeds;
• The water is calm; environmental and ship-generated waves can be ignored;
• The ship is not powered forward after contact with the beach;
• Ship motion is restricted to three degrees of freedom: surge, trim, and heave;
• The ship will rotate about a traditionally calculated center of flotation (CF);
• The simulation ignores the effects of a third fluid, air.

The primary objective of this model is to consider the interaction of the hull form with the beach. Simplification of the beach definition, deformation, and interaction with the hull will be critical to the design purpose of this tool. Modeling the true physics of sand that can become saturated due to fluid interactions with the hull is an unreasonable problem to solve with current technology due to the sheer number of interactions (Goodman, 2017). The proposed solution is to use first principles and basic definitions of fluid and solid interactions to reduce the complexity of the problem.
While a major component of the hull-beach interaction is the displacement of sand, capturing the true mechanics of this would likely require complex and computationally expensive meshes like the one shown in Figure 10. In order to account for the beach deformation in a computationally feasible manner, the beach will be assumed to be a dense fluid of constant shape and location. As a fluid, there is a simple computation to determine the normal force from the beach. This assumption is supported by Kang et al. (2018) in Equation 1, which relates the buoyant force F_ZB of granular media to ρ, the density of sand; g, gravity; and V, the volume of displaced sand. The relation relies on an experimentally determined coefficient, K (Kang et al., 2018). This assumption allows a computationally efficient method for accounting for the normal force caused by the beach.

F_ZB = K ρ g V    Equation [1]

A difficult component of treating the sandy medium as a fluid is that it would complicate the computation of the frictional force between the beach and the hull. The classic computation of force, shown in Equation 2, requires an empirically determined coefficient (Zubaly, 1996). A literature review reveals that no coefficients of frictional drag exist for a sandy beach handled as a fluid. Additionally, existing research has shown that the friction between a solid and a saturated granular medium adheres to the Amontons-Coulomb law, which requires a direct relation between the frictional force and the normal force on the medium (Divoux & Géminard, 2007). For these reasons, the most reasonable approach is assumed to be to calculate the frictional force of the beach on the hull as if the two act as solids in contact. This assumption not only supports the Amontons-Coulomb law but also allows this simulation to utilize existing empirically derived frictional coefficients of sand, which are only relevant for the solid-on-solid interaction.

Force = Coefficient × Density × Surface Area × Velocity² / 2    Equation [2]

An additional complicated component of the beach problem is how to define the beach geometry, particularly because the sandy beach is handled as if it acts as both a fluid and a solid. In order to simplify the problem, the initial geometry of the beach is assumed to be a flat plane at an average slope representative of the desired operational area. This assumption seems reasonable considering that beach topography changes relatively rapidly over time when compared to acquisition timelines; modeling specific beach geometry is therefore unnecessary and computationally costly. It is known that as the hull displaces sand, some will rise above the presumed surface of the beach, and this deformation would be influenced by the topography of the beach. If the beach is assumed to be flat, then it must also be assumed that the sand is dissipated after it is deformed. With these assumptions, a geometrically varied and physically complex composite material is simplified to three inputs: a coefficient of friction, a beach slope, and a density.
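As a small illustration of Equation 1 and the solid-on-solid friction assumption described above, the Python sketch below evaluates the beach buoyant (normal) force and the associated Amontons-Coulomb friction force. The numerical values of K, the sand density, the displaced volume, and the friction coefficient are placeholders for illustration, not calibrated inputs of the proposed tool.

```python
# Illustrative sketch of the beach reaction terms described above (Equation 1
# and the Amontons-Coulomb friction assumption). All numbers are placeholders.
G = 9.81  # m/s^2

def beach_normal_force(k, rho_sand, displaced_sand_volume):
    """Equation 1: buoyant-type normal force from the beach, F_ZB = K*rho*g*V."""
    return k * rho_sand * G * displaced_sand_volume

def beach_friction_force(mu, normal_force):
    """Amontons-Coulomb friction: proportional to the normal force."""
    return mu * normal_force

# Example with hypothetical values: K = 5, saturated sand ~1900 kg/m^3,
# 2.0 m^3 of displaced sand, and a friction coefficient of 0.4.
n = beach_normal_force(k=5.0, rho_sand=1900.0, displaced_sand_volume=2.0)
f = beach_friction_force(mu=0.4, normal_force=n)
print(f"normal force ~ {n/1e3:.0f} kN, friction force ~ {f/1e3:.0f} kN")
```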
While the physics of the interaction between the hull and the beach is considered primary in this simulation, the hydrodynamic and aerodynamic physics are considered secondary. Considering the relatively slow speeds associated with the beaching mission and the relatively large size of the ships of focus, fluid dynamic impacts are assumed not to be critical to consider at a high level of fidelity. Air resistance and wind heeling are assumed to be reasonably small and can be ignored. Additionally, hydrodynamic effects and problems are generally complicated to implement and expensive to run. Assuming forward approach speeds are less than roughly 5 knots, it should be reasonable to assume that hydrodynamics associated with forward speed and momentum can be neglected or simplified. For the purposes of calm-water resistance, it is assumed that the coefficient of residual drag is constant throughout the simulation. With the availability of reliable resistance tools, the residual resistance coefficient, C_R, will be provided externally for a single initial speed condition. The frictional resistance coefficient, C_F, will be approximated with the ITTC '57 equation. Sinkage due to sea floor interactions will be neglected at this level of detail due to a lack of inexpensive and reliable computational methods and for consistency with the already neglected seafloor topography.

As discussed at length in the background and introduction to this paper, the majority of recent research prior to this study has been invested in the interaction of small bodies with near-beach surf zone waves. Previous studies underline how difficult the seakeeping problem in the surf zone is to simulate. In order to simplify the proposed simulation presented in this study, it is assumed that the ships used in this model are reasonably large (i.e., greater than 1000 tonnes), so that significant wave-induced motions can be ignored. Additionally, it is assumed that wave-induced motions do not have significant impacts on the final location of the ship with respect to the beach. Neglecting seaway-induced motions enables the simulation to be fixed in roll, sway, and yaw, which offers additional computational cost savings and is relevant since the beach and hull geometry are assumed to be symmetric across the centerline, y = 0.

Proposed Solution

The backbone of this beachability tool is a time-domain solution, executed as a series of computations at discrete steps through time and space. The tool will be implemented in Python in a modular format such that future levels of detail and fidelity can easily be added, based upon need and resources.

The hull form will be modelled as a coarse surface mesh, and the fluid free surface will be modelled as a z-plane at z = 0, with the hull located such that the baseline is at the given load condition waterline. The code will begin at time = 0 and i = 0 at the moment of impact, with i being the time iterator and time a scalar in seconds. The hull form will move forward in the x-direction through the stationary free surface (water) and into the static beach surface. At each time step, the transformation of kinetic energy will be determined as given in Equation 3, until the ship reaches zero kinetic energy. Based on Equation 4, the ship will have achieved a static condition, zero velocity, on the beach at this final iteration.
The kinetic energy will be determined as the remainder of the kinetic energy at each iteration after considering the change in potential energy of the ship, as defined in Equation 5; the work done on the wetted hull by the water, as defined in Equation 6; the work done by friction with the beach, as defined in Equation 7; and the work done by deforming the beach medium, as defined in Equation 8. The change in potential energy varies with the heave, z, of the ship at each time step, together with the constants gravity, g, and ship mass, m.

Work done by the water, as defined in Equation 6, is the force caused by the fluid multiplied by the distance the ship travelled in the direction of the resisting force, Δx. The resistance on the hull caused by the fluid is proportional to the fluid density, the wetted surface area, the square of the velocity, and the coefficient of total drag, C_T. The wetted surface area will be computed at each iteration. The coefficient of total drag is the sum of the residual, C_R, and frictional, C_F, coefficients of drag per Equation 9. As stated in the assumptions, the residual coefficient of drag will be provided by an external simulation. The frictional coefficient of drag will be computed via the ITTC '57 approximation in Equation 10, which is related to the Reynolds number, defined with respect to the length on the waterline, the ship velocity, and the dynamic viscosity, as defined in Equation 11 (Zubaly, 1996).

Work done by the friction with the beach will be calculated as a friction coefficient times a normal force in Equation 7. The normal force is assumed to be equal to the weight of the vessel that the beach will support. Based on the assumption that the sandy beach acts as a dense fluid, this weight is determined using the assumption that the deformed beach provides a buoyant force according to Archimedes' principle (Kang et al., 2018). This force will be determined at each time step via Equation 12 as the mass of the displaced beach times gravity; in Equation 12, the displaced mass is the density of sand times the volume of displaced sand, ∇.
As stated in the assumptions, deformation of the beach will not be explicitly modelled geometrically; however, the simulation will still consider the work done in moving the displaced sand. To approximate this deformation work, the change in potential energy of the displaced sand will be determined at each iteration via Equation 8. The mass of sand that is displaced will be determined at each time step as the difference in the total mass of beach the ship displaces relative to the previous time step, as expressed by Equation 13. In order to determine the change in potential energy of the sand, a segmented approach will be used, iterating over the displaced mass and calculating the vertical distance required for that mass to be entirely above the beach plane, as shown in Figure 13. Expanding and transforming Equation 3 by inserting Equations 4, 5, 6, 7, and 8 results in Equation 14, which allows Equation 3 to be rewritten in terms of the ship velocity; this equation represents the proposed total energy conservation considered at each time step during the simulation.

An additional system of equations is required at each time step to determine the location in space of the hull form as it pitches and heaves. The pitch and heave of the hull will be calculated with moment and force balance equations, Equation 15 and Equation 19 respectively. The moment balance equation, Equation 15, requires the sum of the moments caused by the buoyant force from the water, the buoyant force from the beach, and the weight of the ship to equal zero. It is assumed that the sum of the mass of displaced water and the mass of displaced sand is equivalent to the mass of the ship, as shown in Equation 19. It is assumed that the ship rotates exclusively about the center of flotation, CF, with the moment arms calculated as the difference between each center of action and the center of flotation in Equations 16, 17, and 18. The centers of action for buoyancy due to the water and the beach are taken to be the respective centers of volume, and the center of action of the mass of the ship is the center of gravity, which is a required input to the simulation.
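To illustrate the energy bookkeeping described above (Equations 3 through 14), the following Python sketch performs one time-step update of the kinetic energy and velocity. It is a minimal sketch under the stated assumptions, not the actual tool: the wetted surface, displaced-sand potential energy change, heave increment, and all numerical inputs are placeholders supplied by the caller, and the ITTC '57 line is the only standard formula used.

```python
# Minimal sketch (not the proposed tool) of one time step of the energy balance:
# KE_{i+1} = KE_i - (dPE_ship + W_water + W_friction + W_sand).
# Geometric quantities (wetted area, displaced sand, heave) are assumed to come
# from the hydrostatic/mesh step and are placeholders here.
import math

RHO_WATER = 1025.0   # kg/m^3, seawater
NU_WATER = 1.19e-6   # m^2/s, kinematic viscosity
G = 9.81             # m/s^2

def ittc57_cf(speed, lwl):
    """ITTC '57 frictional resistance coefficient."""
    re = max(speed * lwl / NU_WATER, 1.0e3)
    return 0.075 / (math.log10(re) - 2.0) ** 2

def step_kinetic_energy(ke, speed, dt, ship_mass, lwl, wetted_area, cr,
                        dz_ship, mu, normal_force, sand_pe_change):
    """Return (KE, speed) after one time step of the energy balance."""
    dx = speed * dt                                   # distance travelled this step
    ct = cr + ittc57_cf(speed, lwl)                   # analogue of Equation 9
    water_force = 0.5 * RHO_WATER * wetted_area * speed ** 2 * ct
    w_water = water_force * dx                        # analogue of Equation 6
    w_friction = mu * normal_force * dx               # friction work (Equation 7 term)
    d_pe_ship = ship_mass * G * dz_ship               # analogue of Equation 5
    ke_next = max(ke - (d_pe_ship + w_water + w_friction + sand_pe_change), 0.0)
    speed_next = math.sqrt(2.0 * ke_next / ship_mass)
    return ke_next, speed_next

# Example with placeholder numbers: a 2000 t ship at 2.5 m/s.
m = 2.0e6
ke, v = 0.5 * m * 2.5 ** 2, 2.5
ke, v = step_kinetic_energy(ke, v, dt=0.1, ship_mass=m, lwl=80.0, wetted_area=900.0,
                            cr=1.5e-3, dz_ship=0.002, mu=0.4, normal_force=2.0e5,
                            sand_pe_change=5.0e3)
print(f"KE = {ke:.3e} J, speed = {v:.2f} m/s")
```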
Algorithms and Methods

The initial information required at the start of the simulation, at which point the ship has just initiated contact with the beach and i = 0, includes the hull geometry, the loading condition (displacement and center of gravity), the initial speed, and the beach definition (slope, density, and friction coefficient). The hull geometry is proposed to be a coarse surface mesh in the form of a PLY file. CAPSTONE, an HPCMP CREATE product, can easily create simple surface meshes from many of the classic geometry file types used in naval architecture. PLY is accepted by the Python mesh libraries Trimesh and VEDO (Haggerty, et al., 2019; Musy, et al., 2021). These open-source Python libraries are proposed for handling mesh intersections. Working in Python allows for plug-ins to the Leading Edge Architecture for Prototyping Systems (LEAPS) tool suite (Shaeffer, et al., 2020). Longevity of tools in the concept design space relies on integration with existing products or projects. Python is a well-known language commonly used in the naval architecture realm; therefore, a Python-based tool will allow for integration into many existing projects.

The objective of this simulation is to output the ship position when the ship reaches a static condition. The simulation will iterate over time and space to determine the ship velocity and hull position at each time step until the ship comes to a static equilibrium (velocity equal to zero), which is represented by the bottom left red triangle in Figure 15. Given this information, ramp characteristics and beachability can be determined easily. At the start of each time step, the new hydrostatic condition of the ship is calculated based on the workflow in Figure 16. Given the updated hydrostatic condition, the change in velocity at each step is calculated via the conservation of energy given in Equation 3 and Equation 14. The simulation will continue to iterate until the ship velocity approaches very near to zero and the ship is assumed to have reached static equilibrium. Separate trim and heave iterations occur at every time step; these loops are separate but also dependent, which can be seen where the output of the force iteration returns to a moment calculation box on the right in Figure 16. The result is an iterative solution for the ship's hydrostatic condition at each time step. In order to reduce computational time, the step size used in the iterative solution will be adaptive, dependent on how far the solution is from equilibrium. A critical component of this calculation is that the hull geometry directly drives the outcome, which is a notable improvement over the existing state-of-the-art beachability analysis in early-stage design discussed previously.

Following the flow chart in Figure 16, starting in the top left, these loops require information about the hull geometry, beach surface, wetted surface, and hull characteristics. The tool queries the mesh intersections to calculate wetted and beached volumes; the beached volume can be seen in yellow in Figures 17 and 18. Multiplying these volumes by their respective densities and distances to the center of rotation, CF, gives the moment balance equation seen in Equation 15. The assumed convention is that a counterclockwise moment, with the ship approaching the beach from the left, is positive, as seen in Figure 17. It is clear that a positive moment results in bow-up trim and a negative moment results in bow-down trim. The force and resulting moment are predicated on treating sand as a dense fluid that acts as a buoyant force on the hull through the beached volume's centroid (Kang, et al.
2018). The trim angle will be adjusted as shown in the top right of the flow chart in Figure 16 until the moment is balanced to zero, within a given tolerance, using Equation 15. The updated trimmed position will provide new wetted and beached volumes to the force calculation in the middle of Figure 16. Equation 19 will then balance the forces in the z direction. With a positive force convention being up, a positive net force produces positive heave and a negative net force produces negative heave. These heaving forces are shown in Figure 18. After the force balance converges (∑F ≈ 0), the moment is calculated once more. If the moment balance also holds (∑M ≈ 0), the hull mesh is queried.

CONCLUSIONS
As discussed in this paper, beaching has had a long and critical history in navies undertaking amphibious operations around the globe. Delivering reinforcements and resources is vital to successfully providing disaster relief and maintaining a forward position in war. Both large and small beaching vessels are necessary to accomplish this during times when port infrastructure is contested or not available. Following WWII, the focus regarding beaching has shifted toward smaller craft. However, due to advances in technology and an uncertain warfighting environment, much is unknown as to the needs of the future. Looking objectively at the current amphibious fleet, a lack of knowledge in the area of large beaching ships has been identified. This gap is to be addressed through the development of a low-fidelity beaching analysis tool.

With such a tool, naval architects will be able to quickly and accurately assess the risks and costs associated with designing a large beaching ship. For the development of this tool, a first-principles approach has been taken to develop a time-series-based simulation using conservation of energy. In order to evaluate a bounded and simplified trade space, the critical physical components of the beaching problem needed to be identified and fully considered. The interactions that will be evaluated or simplified in this tool are primarily focused on the beach-ship problem. Confirming these simplifications and assumptions is difficult due to a lack of Subject Matter Experts (SMEs) in the beaching problem domain and of an existing body of knowledge. Specifically, little research has been aimed at the hull form and beach interaction problem compared to the seaway-induced motions problem. Due to the lack of available expertise, a verification and validation (V&V) effort will be required to provide confident use of this tool within ship acquisition frameworks. As previously noted, decisions made early in the design, especially at the preliminary design stage, can ultimately determine the future success of a ship acquisition program.

Future work
The effort to develop the simulation discussed in this paper is intended to be completed by the end of FY24 (September 2024). A validation effort is planned to begin in late FY24 and be completed in FY25. Due to a lack of higher-fidelity simulation data, model testing is being pursued. Utilizing modeling expertise at NSWC Carderock Division and a partnership with the model test basin at the Engineering Research and Development Center (ERDC) (ERDC Overview, n.d.), testing results can be obtained that a beaching tool should be able to emulate within some tolerance. Planning for this level of model testing began in early Calendar Year (CY) 24. Scaling, as well as other unidentified topics, is of concern and will be addressed as planning is refined and as resources and time are available.
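Stepping back to the Algorithms and Methods workflow (Figure 16), the sketch below illustrates the two ingredients it relies on: clipping the hull mesh against the waterplane or beach plane with the trimesh library named above, and the separate-but-dependent trim and heave loops. The plane definitions, gains, and tolerances are illustrative assumptions, and the moment and force residual callables are stand-ins for the Equation 15 and Equation 19 balances, not the tool's actual implementation.

```python
# Mesh-volume queries and trim/heave settling loop, sketched under the
# assumptions stated above.
import numpy as np
import trimesh

def clipped_volume(mesh, plane_origin, plane_normal):
    """Volume and centroid of the part of `mesh` on the side the plane normal
    points toward (e.g. below the waterline), capped so it is watertight.
    Assumes a trimesh version where slice_mesh_plane supports cap=True."""
    part = trimesh.intersections.slice_mesh_plane(
        mesh, plane_normal=plane_normal, plane_origin=plane_origin, cap=True)
    if part is None or len(part.faces) == 0:
        return 0.0, np.zeros(3)
    return part.volume, part.center_mass

def settle_hydrostatics(moment_at, force_at, trim=0.0, heave=0.0,
                        tol_m=1e3, tol_f=1e3, max_iter=500):
    """Iterate trim (about the centre of flotation) and heave until both the
    moment and vertical-force residuals are within tolerance. The step is
    proportional to the residual, i.e. adaptive to the distance from equilibrium."""
    k_trim, k_heave = 1e-9, 1e-8            # illustrative gains
    for _ in range(max_iter):
        # Inner trim loop: positive moment -> bow-up trim.
        for _ in range(200):
            m = moment_at(trim, heave)
            if abs(m) <= tol_m:
                break
            trim += k_trim * m
        # Heave step: positive net force -> positive (upward) heave.
        f = force_at(trim, heave)
        heave += k_heave * f
        # Converged only when the re-checked moment and force are both ~ 0.
        if abs(f) < tol_f and abs(moment_at(trim, heave)) < tol_m:
            return trim, heave
    raise RuntimeError("hydrostatic balance did not converge")

# Usage sketch: at each time step, transform the hull mesh by the current trim
# and heave, call clipped_volume() against the waterplane and the beach plane
# to get wetted and beached volumes/centroids, feed those into the Equation 15
# and Equation 19 residuals, and pass the residual functions to settle_hydrostatics().
```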
Additionally, future efforts aim at considering developmental improvements to the simulation based on time and resources.There are two notable features which have been identified.Firstly, the development of a method to objectively assess and compare the beachability of different concepts and enable automated design space exploration.The primary measure for beachability is the ability of the ship in its beached position to deliver its payload to the beach.This payload can vary depending on CONEMP.Different types of payloads have different requirements usually revolving around fording depth, ramp angle, and ramp length.These ramp characteristics that will enable objective assessment of beachability are included in the flow chart in Figure 15 with the path once = 0 is satisfied.The approach is to find the intersection of a ramp line, starting at the hinge point with slope of ramp angle, and beach line, with the beach slope.Then, the vertical distance from the waterline to that intersection point has to be measured and reported as the fording depth.This final calculation has not been discussed as part of the current effort of the tool.This information is critical to the beaching design problem and can be calculated external to the tool, based on the solved final ship position.Integration in the tool is intended to reduce additional steps, total time, and errors. Secondly, since these ships eventually need to extract themselves from the beach, it would be useful to integrate both beaching and extraction into the same tool.There are two possible methods for a vessel to get off the beach: under propeller load, utilizing the assistance of kedge anchors, or a combination of both.The force that the propeller and kedge anchor must overcome to get off the beach will be a result of wetted drag from water, skin drag from beach, etc...This will require additional investigation, however, it is likely to have an overlap with planned capabilities within the base version of the tool. In later developments of the tool, adding options for specifying ship thrust throughout the simulation, seaway conditions, and beach approach angle are all areas of interest.In practice, many beaching vessels continue to generate thrust from the propellers after impact with the beach in order to increase the distance they can travel up the beach.In order to assess this, additional sources of power will need to be included in the energy balance.For seaway conditions and approach angles, a 6 degree of freedom (DOF) analysis will be required and assumptions of symmetry will have to be overcome.This adds more complexity to the simulation, however, is not unusual for tools to offer both 3-DOF and 6-DOF analysis options.This will be further considered and researched as development continues. 
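The ramp-characteristic calculation described above, intersecting the ramp line from the hinge point with the beach line and then measuring the vertical drop from the waterline, reduces to a small piece of 2-D geometry. In the sketch below the hinge location, ramp angle, beach slope, and waterline height are made-up numbers for illustration, not values from any design.

```python
# Ramp/beach intersection and fording depth, under the illustrative geometry
# assumptions stated above.
import math

def ramp_beach_intersection(hinge_x, hinge_z, ramp_angle_deg,
                            beach_x0, beach_z0, beach_slope):
    """Intersect z = hinge_z - tan(ramp)*(x - hinge_x) (ramp sloping down toward
    the beach) with the beach line z = beach_z0 + beach_slope*(x - beach_x0)."""
    m_ramp = -math.tan(math.radians(ramp_angle_deg))
    x = (beach_z0 - hinge_z + m_ramp * hinge_x - beach_slope * beach_x0) \
        / (m_ramp - beach_slope)
    z = hinge_z + m_ramp * (x - hinge_x)
    return x, z

def fording_depth(z_intersection, z_waterline=0.0):
    """Vertical distance from the waterline down to the ramp/beach contact point
    (zero if the contact point is above the water)."""
    return max(0.0, z_waterline - z_intersection)

x_c, z_c = ramp_beach_intersection(hinge_x=5.0, hinge_z=1.5, ramp_angle_deg=12.0,
                                   beach_x0=0.0, beach_z0=-2.0, beach_slope=0.05)
print(f"contact at x = {x_c:.2f} m, fording depth = {fording_depth(z_c):.2f} m")
```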
Rather than just adding additional features, future developments will also attempt to improve fidelity while retaining rapid assessment capability. This means exploring other potential simplifications and existing theory, such as resistive force theory (RFT), to achieve more defined ground reaction forces. RFT is a theory that uses linear superposition of experimental results to generate lift and drag forces on a body independent of the body's orientation (Zhang & Goldman, 2014). The advantage of using RFT for this project is that it can be computationally inexpensive while still being relatively high fidelity, since it relies on experimental data to drive responses. Additionally, there are many well-documented sources about its use in different environments, since the theory was developed in the 1950s (Marcotte, 2016). There are a few potential disadvantages of RFT. It has only been used to predict interactions at very low Reynolds numbers (Re ~ 10^-2), whereas beaching will mostly occur at much higher Reynolds numbers (Re ~ 10^5 and above), so whether RFT can be reasonably scaled up will need to be explored (Rodenborn, et al., 2013, Zubaly, 1996). Most RFT tools also require experimental testing to quantitatively determine the reactions of the granular material, and that is an expensive and lengthy process. Some work has been done to empirically define these reactions, which may be useful for this project (Marcotte, 2016). RFT is just one of many potential methods for improving fidelity that could be explored. Hopefully, this paper will encourage conversation and gather attention from experts in the field with knowledge of other applicable theories. Truly and accurately solving the beaching problem could have wide impacts and will require collaboration from various disciplines beyond naval architecture.

Figure 10: Boundary Mesh of the Wheel of the Quad Ski (Yamashita, et al., 2022)
Figure 12: Coordinate System for beach modeling
Figure 16: Trim and Heave Iteration Flow Chart
Figure 17: Trimming Moment
Figure 18: Heaving Force
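To make the linear-superposition idea behind the RFT discussion above a little more tangible, the sketch below splits an intruding surface into small elements, assigns each element a stress from a response function of its orientation and depth, and sums the contributions. The stress function here is a crude placeholder, not calibrated granular data, and nothing in this sketch should be read as the calibrated RFT formulation cited above.

```python
# Schematic per-element superposition in the spirit of RFT; all response
# functions and numbers are placeholders for illustration only.
import numpy as np

def placeholder_stress(angle_of_attack, depth, k=1.0e5):
    """Illustrative per-area stress [Pa]: grows with depth and with how face-on
    the element is to its motion. Real RFT would interpolate experimentally
    determined stress maps here."""
    return k * max(depth, 0.0) * abs(np.sin(angle_of_attack))

def rft_reaction(elements):
    """Sum per-element contributions. elements = [(area, angle_of_attack,
    depth, unit_normal), ...]; returns the total reaction force vector."""
    total = np.zeros(3)
    for area, aoa, depth, normal in elements:
        total += placeholder_stress(aoa, depth) * area * np.asarray(normal, float)
    return total

# Example: three small plates of a grounded hull section pressed into the sand.
plates = [(0.5, np.radians(20), 0.3, [0.0, 0.0, 1.0]),
          (0.5, np.radians(35), 0.5, [0.0, 0.0, 1.0]),
          (0.5, np.radians(10), 0.1, [0.1, 0.0, 0.99])]
print("approximate reaction force [N]:", rft_reaction(plates))
```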
2024-05-23T15:05:40.112Z
2024-05-21T00:00:00.000
{ "year": 2024, "sha1": "f14d3d6ff0de49ed45399d42d5f483078293d747", "oa_license": "CCBY", "oa_url": "https://proceedings.open.tudelft.nl/imdc24/article/download/859/841", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "fac9c1383accc24d876af62ba2209344fdbf97c3", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [] }
54773936
pes2o/s2orc
v3-fos-license
THERMODYNAMIC ANALYSIS OF FLUID FLOW IN CHANNELS WITH WAVY SINUSOIDAL WALLS Entropy generation in channels with non-uniform cross-section that can be found in many fluid flow systems is an important concern from the thermodynamic design point of view. In this regard, the entropy generation in channels with periodic wavy sinusoidal walls has been considered in the present study. The flow is assumed to be 2-D steady laminar and the main parameters considered are the Reynolds number, height ratio Hmin/Hmax and module length ratio L/a. The fluid enters the channel with uniform axial velocity and temperature. The wall of the channel is assumed to be at uniform temperature which is different that of the fluid at the inlet of the channel. The distribution of the entropy generation as well as the total entropy generation has been studied numerically. It is found that the Reynolds number and the geometric parameters, height ratio and module length ratio have significant effect on both the local concentrations of entropy generation as well as the total entropy generation in the channel. Flow separation and re-circulation size, strength and location of flow are found to be major concern in determining the local entropy generation. Introduction Entropy generation has been the subject of investigation in many engineering applications for the purpose of improving the second law efficiency and providing sustainability.Since the entropy generation is related to not only the thermo-physical properties of materials but also the geometry of the thermal system, study of entropy generation is carried out on individual bases.It is known that any design modification made in a thermal system to enhance for example the heat transfer directly affects the entropy generation within the system.Among the many investigations that can be found in the literature in this regard, Sahin [1,2] studied the effect of the cross-sectional geometry of a fluid flow duct on the entropy generation in an effort to minimize the entropy generation. Heat transfer enhancement has been an extremely important consideration for heat exchanger applications that can be found in numerous engineering processes ranging from micro size medical to huge power plant systems.It is very common to use wavy channels to enhance the heat transfer in particularly compact heat exchangers.The fluid flow and heat transfer characteristics in wavy channel for the case of 2-D steady laminar fluid flow have been studied by Bahaidarah et al. [3].The subject of the present work is to investigate the entropy generation for the same geometry and study the effects of various geometric parameters of the wavy sinusoidal channel on the entropy generation. The subject of heat transfer enhancement is extremely important for heat exchanger applications.Numerous publications have been devoted to the study of creative ways of increasing the heat transfer rate in compact heat exchangers [4].The symmetric corrugated or wavy-walled channel is one of several devices utilized for enhancing the heat and mass transfer efficiency.The characteristics of the flow and heat transfer in a channel with such a configuration have been the subject of several investigations including Nishimure et al. [5], Nishimura et al. [6], Ali and Ramadhyani [7], Wang and Vanka [8], Stone and Vanka [9], and Niceno and Nobile [10].However, none of these works have included the second law analysis or the entropy generation in the channel. 
On the other hand entropy generation in channel flow has been studied by many researchers.Abbasi et al. [11] analyzed the entropy generation in Poiseuille-Benard channel flow with the use of the classic Boussinesq incompressible approximation.They found that the maximum entropy generation is localized at areas where heat exchanged between the walls and the flow is the maximum.They observed no significant entropy production in the main flow. Haddad et al. [12] studied the entropy production due to laminar forced convection in the entrance region of a concentric cylindrical annulus.They found that the entropy generation is inversely proportional to both Reynolds number (Re) and the dimensionless entrance temperature.They also observed that increasing Eckert number and/or the radius ratio will increase the entropy generation. Ko and Cheng [13] investigated the developing laminar forced convection and entropy generation in a wavy channel.They considered the effects of aspect ratio (W/H) and the Reynolds number on entropy generation.They found that the case of W/H = 1 provided the minimal entropy generation.In addition, the higher Re is found to be beneficial for obtaining the lower values of the total resultant entropy generation in the flow field.Accordingly, the case with W/H = 1 and higher Re is suggested to be used so that the irreversibility resulted from the developing laminar forced convection in the wavy channel could be the least.In another paper, Ko [14] studied the effects of corrugation angle on the developing laminar forced convection and entropy generation in a wavy channel.He analyzed flow characteristics including re-circulating flows, secondary vortices, temperature distributions, and friction factor as well as Nusselt number.He discussed the effects of corrugation angle on the distributions and magnitudes of local entropy generation resulted from frictional irreversibility and heat transfer irreversibility. Heat transfer and fluid flow characteristics inside a wavy walled enclosure were studied numerically by Mahmud and Sadrul Islam [15].They applied the second law of thermodynamics to predict the nature of irreversibility in terms of entropy generation.They carried out simulation for a range of wave ratio, aspect ratio and angle of inclination. Mahmud and Fraser [16] studied the entropy generation inside a cavity made of two horizontal straight walls and two vertical wavy walls of sinusoidal shape for the case of laminar natural convection.In their analysis the horizontal straight walls are kept adiabatic, while the vertical wavy walls are isothermal but kept at different temperatures. Ko [17] investigated numerically the entropy generation in a double-sine duct, which is frequently used in plate heat exchangers.He concluded that the entropy generation, in cases with larger Re and smaller heat transfer is dominated by the entropy generation due to frictional irreversibility; whereas the entropy generation is dominated by the entropy generation due to heat transfer irreversibility in cases with smaller Re and larger heat transfer. The purpose of the present study is to examine the effects of geometric parameters on the 2-D developing fluid flow and heat transfer characteristics as well as the entropy generation in symmetric wavy channels and steady laminar flow.The flow is assumed to stay laminar and stable throughout the wavy channel, i. e. 
no bifurcations occur within the Re range considered in the present paper.Vortex instability and travelling wave instability that would normally occur for the case of high Re are not considered and can be found in other works such as refs.[18,19]. Geometric configurations The geometric configuration considered is the sinusoidal channel as shown in fig. 1.The governing independent parameters influencing the fluid flow and heat transfer through a periodic array of wavy passage are the Re, height ratio (H min /H max ), and module length ratio (L/a).Table 1 shows all configurations considered in this study.Each case is assigned a unique name and is studied for Re values of 25, 100, and 400. Since the boundaries of the physical domain are irregular or represent complex geometries, a body-fitted grid or non-orthogonal grid system was developed to generate the grid for the domain of interest.The grids shown in fig. 2 were generated using algebraic grid-generation techniques.The physical domains illustrated in fig. 1 can be discretized using the algebraic sheared transformation.The x co-ordinate was discretized into equally spaced points.The y co-ordinate was discretized into equally spaced points at each x location by the normalizing transformation technique as: where Y(x) is the upper boundary, which is the sine function for the sinusoidal shaped channel.This kind of grid-generation technique produces regular or structured grids.The reader is referred to Hoffman [20] for further details.The computational domain is divided into three individual regions.Those regions are the entry region, the periodic wavy modules, and the exit region.A uniform orthogonal grid is used for both the entry and exit regions.The grid distribution shown in fig. 2 can be repeated successively to generate the domain of periodic wavy modules.In this study, six consecutive modules are included in the computational domain.The scalar and velocity variables are stored at staggered grid locations.All thermo-physical properties are stored at the main grid locations, while the velocity variables are stored at the interface of each control volume. Mathematical formulation Consider the 2-D periodic sinusoidal cross-sectional channel, as shown in fig. 1.The thickness of the channel is neglected and the thermal boundary condition on the surface of the channel is assumed to be uniform wall temperature T w .Fluid enters the channel with uniform axial velocity u = U in and uniform temperature T in .Both hydrodynamic and thermal boundary layers start developing.The purpose is to study the volumetric entropy generation rate distribution throughout the fluid in the channel.This requires solution of velocity and temperature fields in the fluid.The governing equations and the boundary conditions for this steady problem are [21]: -momentum -entropy generation rate where the dissipation function F is given by: The first term in eq. ( 6) represents the entropy generation due to heat conduction in the radial and axial directions.The last term, on the other hand, accounts for the fluid friction contribution to the entropy generation. Boundary conditions A no-slip boundary condition was assigned at the walls, where both velocity components are set to zero (e. 
g., u = v = 0).The channel was subjected to a constant wall temperature (T = T w ) condition.A uniform inlet velocity profile was assigned at the inlet boundary condition (u = U in ).A constant inlet temperature (T = T in ), different than the wall temperature, was assigned at the channel inlet.The streamwise gradients of all variables were set to zero at the outlet boundary to attain a fully developed state in which no change takes place in the flow direction. Convergence criteria The discretization equations obtained by integrating the governing partial differential equations resulted in a set of linear algebraic equations for each variable which need to be solved iteratively.The set of linear algebraic equations were solved sequentially within each iteration.A set of these equations was solved by using the line-by-line method, which is a combination of the tridiagonal matrix algorithm and the Gauss-Siedel procedure.Convergence could be declared if the maximum of the absolute value of the mass residues was less than a very small number e (e. g., 10 -5 ).In this study, convergence was declared by monitoring the sum of the residues at each node.Since the magnitude of u x and u h are not known a priori, monitoring the relative residuals is more meaningful.The relative convergence criteria for u x and u h are defined as: In the pressure equation, it is appropriate to check for mass imbalance in the continuity equation.The convergence criterion for pressure was defined as: The convergence criterion for temperature was defined as: R bT a T b nodes The numerical iteration criterion required that the normalized residuals of mass, momentum, and energy be less than 10 -6 for all cases considered in this study. Validations The developed code was validated by reproducing solutions for some benchmark problems.The fluid flow and heat transfer in a parallel-plate channel subjected to constant wall temperature was predicted.As expected from classical results for this problem, the flow will develop in the entrance region until it reaches fully developed condition, at which no further changes in velocity profile take place in the streamwise direction.Since the gradient of pressure in the fully developed region is constant, the velocity profile is parabolic, with the point of maximum velocity located along the centerline and equal to 1.5 times the mean velocity.The Nusselt number (Nu) for the fully developed region between two parallel plates subjected to constant wall temperature is 7.56, which agrees favorably with the Nu 7.54 mentioned by many authors, such as Incropera and DeWitt [22]. As mentioned earlier, Wang and Vanka [8] studied numerically the 2-D steady and time-dependent fluid flow and heat transfer through a periodic sinusoidal-shaped channel for fluid with a Prandtl number of 0.7 for one set of geometric parameters, H min /H max = 0.3 and L/a = 8.They presented a comparison of the calculated separation and re-attachment points with the experimental data of Nishimura et al. 
[6].They also presented the Nu distribution along the walls of the sine-shaped channel.Figure 3 shows the local Nu presented by Wang and Vanka [8] for a single periodically fully developed (PDF) module and the developing flow results generated in the present work for six consecutive modules.Disregarding the first module, the next five modules show that the flow has reached the fully developed condition as they have the same behavior and the results of a PDF can fit to any one of them.The local Nu is given by: Nu av w b , i wall Niceno and Nobile [10] studied numerically the same flow problem by means of an unstructured co-volume method for the same set of geometric parameters, but for two different geometric configurations, i. e., sinusoidal-shaped and arc-shaped channels.The flow and temperature fields were studied under the assumption of fully developed flow, which means that the flow repeats itself from module to module, and the heat transfer coefficient has reached its asymptotic value.Based on this assumption, Niceno and Nobile [10] analyzed only one module of the geometry.However, in this work, six consecutive modules were considered.The fully developed condition could be reached at the second or the third module and the results are comparable to those of Niceno and Nobile [10].Figure 4 shows the results of the friction factors (f) obtained for sine-shaped geometry for the fourth module as a function of Re.The result obtained with the developed code agrees with the numerical results of Niceno and Nobile [10] and the experimental observations of Nishimura et al. [6].The friction factor was computed based on its standard definition: where MI and MO stand for module inlet and module outlet, respectively, P m is the mean pressure, H av -the average channel height (H av = H max /2 + H min /2), and u av -the average velocity in a single module in the channel.The Re is defined as: Grid independence Structured symmetric grids were used for the computations to ensure symmetric solutions.A grid refinement study was performed in order to assess the accuracy of the results.Table 2 gives a summary of the grid independence test for both sine-shaped and arc-shaped channels at different Re.It can be seen from the results that the values of friction factor (f) and Nu obtained at different grids vary by less than 1.8%, thus demonstrating the adequacy of the grid adopted and the numerical accuracy of the method.All calculations presented here were obtained with the finest grid (6,561 grid points). Results and discussion The computational domain consists of six modules of sinusoidal shape wavy channel.The effect of Re, H min /H max , and L/a on the local as well as total entropy generation is discussed.The effect of each of these parameters on the velocity profile, streamline, normalized temperature field, normalized pressure drop, and module average Nu indicated that in most of the cases considered periodically fully developed profiles are attained downstream of the first two or three modules [3].Thus, one module (the fourth module from the inlet) has been selected as a representative module to discuss the details of entropy generation. The streamlines in the fourth module from the inlet are shown in fig. 
5 for fixed height ratio H min /H max = 0.3 and module length ratio L/a = = 8.As Re is increased the flow separation and circulation increases and moves towards the exit of the module.This indicates that the velocity gradients attain greater values and their position moves as the Re is increased.For low Re (Re = 25) the circulation ceases and no separation is observed.In all cases the flow symmetry is observed as a result of symmetrical geometric configuration.For the case of low Re the velocity gradients are maximum at the inlet and exit locations where the cross-sectional areas are minimum.For larger Re, however, larger velocity gradients may be obtained within the module away from the inlet and exit locations as a result of the flow detachment and re-circulation. The effect of Re on the temperature variation within the fourth module is shown in fig. 7. Dimensionless isotherms are shown for fixed height ratio H min /H max = 0.3 and module length ratio L/a = 8.For low Re the temperature gradients are concentrated around the inlet region.In most of the remaining domain the gradients are small.As the Re is increased however the temperature gradients increase in the whole domain except along the flow centerline where the gradients are minimal as a result of insufficient time for the temperature penetration.Next to this centerline region the temperature gradients are relatively high.Temperature gradients along the surface of the channel near the exit location also increase as the Re is increased. The velocity and temperature gradients shown in figs.5 and 6 indicate the location and the strength of the entropy generation in the module.The variation of the entropy generation in the fourth module for the same height ratio and module length ratio as given in those figures is shown in fig. 7.As can be seen from fig. 7(a), the entropy generation in the entire domain is very small and uniform.There are no concentrations of the entropy generation is observed.This is due to the uniform distribution of the velocity gradients with no flow separation and re-circulation and low temperature gradients.As the Re is increased the entropy generation rate around the inlet and exit locations increase, figs.7(b) and 7(c).Along the centerline, the entropy generation is minimum within a certain region of flow thickness depending of the Re.This region of flow thickness decreases as the Re is increased.It should be also noted that there is a region of low entropy generation in both sides of the channel near the wall where the cross-section is large.This is due to the low velocity and temperature gradients due to the flow re-circulation activity taking place further downstream.As the Re is increased further, the concentration of the entropy generation increases around the thin centerline fluid jet and near the wall of the channel around the exit location.The symmetrical distribution of the entropy generation is also noted. The geometric parameters, namely the height ratio H min /H max and module length ratio L/a, have a considerable influence on the distribution of entropy generation.Figure 8 shows the effect of the height ratio H min /H max on the entropy generation in the channel.As the height ratio is increased the distribution of the entropy generation increases and becomes more uniform throughout the major part of the domain. 
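The entropy-generation maps discussed above are, in essence, post-processing of the computed velocity and temperature fields. A hedged sketch of that step for a 2-D field on a uniform grid is given below; the conduction and viscous-dissipation terms follow the standard volumetric form paraphrased from eq. (6), and the sample fields, grid, and fluid properties are placeholders rather than the study's data.

```python
# Local volumetric entropy generation from 2-D velocity and temperature fields,
# under the illustrative assumptions stated above.
import numpy as np

def local_entropy_generation(u, v, T, dx, dy, k=0.6, mu=1e-3):
    """S_gen''' = k/T^2 * |grad T|^2 + mu/T * Phi, with the 2-D dissipation
    function Phi = 2[(du/dx)^2 + (dv/dy)^2] + (du/dy + dv/dx)^2."""
    dTdy, dTdx = np.gradient(T, dy, dx)
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    phi = 2.0 * (dudx**2 + dvdy**2) + (dudy + dvdx)**2
    return k / T**2 * (dTdx**2 + dTdy**2) + mu / T * phi

# Tiny synthetic example: a sheared, heated layer.
y, x = np.mgrid[0:1:41j, 0:4:161j]
u, v = 1.5 * y * (2 - y), np.zeros_like(y)   # parabolic-like streamwise profile
T = 300.0 + 20.0 * y                         # linear wall-to-wall temperature
S = local_entropy_generation(u, v, T, dx=4 / 160, dy=1 / 40)
print("peak local entropy generation:", S.max())
```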
Figure 9 shows the effect of the length ratio on the distribution of the entropy generation. For the lower length ratio, the entropy generation is concentrated in the regions close to the inlet and the exit. This is due to the increase of the internal volume of the module and the decrease of the velocity gradients within the module. Thermal and velocity gradients in this case occur around the inlet and exit locations, where the flow cross-section is the minimum. On the other hand, when the length ratio is increased, the entropy generation within the module volume increases in most parts of the domain except along the axis, where the fluid flow attains high speed with smaller velocity gradients. The maximum entropy generation is observed along the wall surface of the channel, with the exception of the location upstream of the region where flow separation and recirculation is observed. It should be noted that in the limit where the height ratio approaches unity, the channel geometry approaches that of a straight channel, in which the entropy generation is concentrated near the channel wall and ceases along the centerline of the channel.

Conclusions
Entropy generation in a channel with periodic sinusoidal walls is considered in the present study. The flow is considered to be laminar and the thermal boundary condition is assumed to be uniform wall temperature. The conclusions obtained from the present work can be summarized as follows.

The fluid attains a steady periodic laminar flow after a developing region that extends through one or two modular segments of the channel.

Re has a considerable effect on the local and overall entropy generation rate in the channel. In general, the entropy generation rate is concentrated around the inlet and the outlet sections of the periodic modules of the channel. As Re increases, flow separation and re-circulation occur in the module, resulting in local concentrations of the entropy generation within the channel.

The height ratio of the channel mainly affects the distribution of the entropy generation in the channel module. As the height ratio is increased, the distribution of the entropy generation becomes more uniform in the axial direction. The entropy generation rate in the transversal direction shows considerable fluctuations whose magnitude depends on the strength of the re-circulation of fluid flow in the module.

The module length ratio also affects the distribution of the entropy generation rate in the module volume, although its effect on the overall entropy generation is minimal. As the length ratio is decreased, the re-circulation of fluid flow increases, causing higher concentrations of local entropy generation in the channel module.

The distribution of the entropy generation rate in the wavy sinusoidal channel is important in designing the geometric configuration of fluid flow passages used in many engineering devices such as compact heat exchangers.

Figure 3: Local Nusselt number along the walls of a sine-shaped channel
Figure 5: Effect of Re on the streamlines for sinusoidal channel configuration, fourth module, H_min/H_max = 0.3 and L/a = 8
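Returning to the algebraic grid generation described in the Geometric configurations section, the sheared transformation (equally spaced x stations, with the y points at each station scaled by the local wall position Y(x)) can be sketched as follows. The particular sinusoidal wall function, channel heights, module length, and grid counts are illustrative assumptions; the study's actual H_min/H_max and L/a combinations are those of Table 1.

```python
# Algebraic sheared grid for one wavy module, under the illustrative geometry
# assumptions stated above.
import numpy as np

H_max, H_min, L = 1.0, 0.3, 4.0           # example module geometry
nx, ny = 81, 41                           # grid points per module

def half_height(x):
    """Local channel half-height: minimum at the module inlet/exit, maximum at
    mid-module, varying sinusoidally (lower wall mirrored at -Y)."""
    return 0.25 * (H_max + H_min) - 0.25 * (H_max - H_min) * np.cos(2 * np.pi * x / L)

x = np.linspace(0.0, L, nx)               # equally spaced streamwise stations
eta = np.linspace(-1.0, 1.0, ny)          # normalized wall-to-wall coordinate
X, ETA = np.meshgrid(x, eta, indexing="ij")
Y = ETA * half_height(X)                  # structured, body-fitted grid y_ij
```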
2018-12-05T15:54:04.995Z
2013-01-01T00:00:00.000
{ "year": 2013, "sha1": "955869d1535380607e080269aa3d1f1ef5af0418", "oa_license": null, "oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0354-98361200200B", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "955869d1535380607e080269aa3d1f1ef5af0418", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
208513110
pes2o/s2orc
v3-fos-license
Data processing over single-port homodyne detection to realize super-resolution and super-sensitivity Performing homodyne detection at one port of squeezed-state light interferometer and then binarzing measurement data are important to achieve super-resolving and super-sensitive phase measurements. Here we propose a new data-processing technique by dividing the measurement quadrature into three bins (equivalent to a multi-outcome measurement), which leads to a higher improvement in the phase resolution and the phase sensitivity under realistic experimental condition. Furthermore, we develop a new phase-estimation protocol based on a combination of the inversion estimators of each outcome and show that the estimator can saturate the Cramér-Rao lower bound, similar to asymptotically unbiased maximum likelihood estimator. I. INTRODUCTION Optimal measurement scheme followed by a proper data processing is important to realize high-precision and highresolution phase measurements [1][2][3]. For the commonly used intensity measurement over quasi-classical coherent states, the achievable phase sensitivity is subject to the shot-noise limit (SNL) δθ ∼ O(1/ √n ), wheren is the number of particles of the input state. Furthermore, the intensity measurement at the output-port of the coherent-state light interferometer gives rise to an oscillatory interferometric signal ∝ sin 2 (θ/2) or cos 2 (θ/2), which exhibits the fringe resolution λ/2 determined by wavelength of the incident light λ. This is often referred to the classical resolution limit of interferometer, or the Rayleigh resolution criterion in optical imaging [4]. These two classical limits in the sensitivity and the resolution can be surpassed with non-classical states of the light [5,6] such as the N-photon NOON state (|N, 0 a,b + |0, N a,b )/ √ 2. This is a maximally entangled state with all the particles being either in the mode a or all in the mode b, leading to the super-sensitivity δθ ∼ O(1/N) and the super-resolution λ/(2N) [3][4][5][6][7]. However, the NOON states are difficult to prepare and are fragile to the loss-induced decoherence [8][9][10]. Recently, several important progresses have been reported. The first one is the achievement of super-resolution by feeding the interferometer with a coherent laser, followed by coincidence photon counting [11], parity detection [12,13], and homodyne detection with a proper data processing [14]. Specially, Distante et al [14] detect the field quadrature at one port of coherent-state light interferometer and then binarize the measurement data p ∈ (−∞, +∞) into two bins p ∈ [−a, a] and p [−a, a], which results in a deterministic and robust super-resolution with classical states of the light. The second progress is the recent theoretical proposal and experimental demonstration [15] that feeding a coherent state and a squeezed vacuum state into the two input ports * Electronic address: aixichen@zstu.edu.cn † Electronic address: wenyang@csrc.ac.cn ‡ Electronic address: grjin@zstu.edu.cn of the interferometer followed by the same data processing over the single-port homodyne detection, which can realize deterministic super-resolution and super-sensitivity simultaneously with Gaussian states of light and Gaussian measurements [15]. This result may provide a powerful and efficient way to enhance the sensitivity of gravitational wave detectors [16,17] and that of correlation interferometry [18]. The data-processing method proposed by Refs. 
[14,15] is equivalent to a binary-outcome measurement [19][20][21], where the outcome "0" corresponds to p ∈ [−a, a] and the outcome "∅" for p [−a, a]. To infer an unknown phase shift, the simplest protocol of the phase estimation has been used by inverting the averaged signal [14,15]. The advantage of the inversion estimator is that it has a relatively simple analytical expression and its sensitivity follows the simple error-propagation formula [14,15]. Moreover, for any binary-outcome measurement, it has been shown that the inversion estimator asymptotically saturates the Cramér-Rao lower bound (CRB) [19][20][21]. However, the binarization of measurement data and the inversion estimator suffer from a serious drawback, i.e., they do not take into account all the information from the measurement [22,23]. Consequently, they tend to degrade the achievable sensitivity significantly, e.g., at θ = 0, the sensitivity diverges [14,15], so the inversion estimator cannot infer the true value of phase shift in the vicinity θ ∼ 0. In this paper, we propose a new strategy capable of further improving both the resolution and the sensitivity using the experimental setup similar to Schafermeier et al [15]. Our strategy consists of two essential ingredients. The first one is to divide the measurement data into three bins: (−∞, −a), [−a, a], and (a, ∞), corresponding to three outcomes "−", "0", and "+" , respectively. This is equivalent to a three-outcome measurement and enjoys two advantages over the previous binaryoutcome case [15]: (i) The divergence of phase sensitivity at θ = 0 is removed, which is useful for estimating a small phase shift; (ii) Higher improvement in the resolution and the sensitivity is achievable under realistic experimental parameters. The second ingredient is a composite estimator based on a linear combination of the inversion estimators associated with each measurement outcome. This estimator takes into account available information from all the measurement outcomes of a general multi-outcome measurement, so it is capable of saturating the CRB asymptotically. Therefore, this composite estimator enjoys the good merits of the inversion estimator (i.e., the simplicity) and the well-known maximum-likelihood estimator (i.e., unbiasedness and asymptotic optimality in the sensitivity). In addition to the squeezed-state light inteferometry, our estimation protocol may also be applicable to other kinds of multi-outcome measurements. II. SINGLE-PORT HOMODYNE DETECTION WITHOUT DATA-PROCESSING As depicted by Fig. 1(a), we consider the homodyne detection at one port of the interferometer that fed by a coherent state |α 0 and a squeezed vacuum |ξ 0 (i.e., the so-called squeezed-state interferometer) [24,25]. To enlarge available information about the phase shift θ, the field amplitudes are chosen as α 0 ∈ R and ξ 0 = −r ∈ R (i.e., arg α 0 = 0 and arg ξ 0 = π); See Refs. [26][27][28] and also the Appendix. The total number of photons injected from the two input ports is given byn = α 2 0 + sinh 2 r. Furthermore, the Wigner function of the input state is given by [29] W in (α, β) = W |α 0 (α)W |ξ 0 (β) with ̺ (≤ 1) and e −r describing the purity and the squeeze parameter of |ξ 0 . The Wigner function of the output state takes the same form with the input state W out (α, β; θ) = W in (α θ ,β θ ) [30][31][32], where the variables (α, β) have been replaced by (α θ ,β θ ); see the Appendix. 
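A hedged numerical illustration of the product-form input Wigner function just described is given below: one factor for the coherent state and one for the squeezed vacuum, each over its own mode's phase-space variables. The quadrature convention (ħ = 1, x = (a + a†)/√2) and the pure-state squeezed form are textbook assumptions made for illustration; the paper's expression additionally carries the purity parameter ρ.

```python
# Coherent and squeezed-vacuum Wigner functions and the photon budget
# n_bar = alpha0^2 + sinh^2(r), under the conventions stated above.
import numpy as np

alpha0, r = 3.0, 0.75

def wigner_coherent(x, p):
    """W(x,p) of |alpha0> with real alpha0: a unit Gaussian displaced along x."""
    return np.exp(-(x - np.sqrt(2) * alpha0) ** 2 - p ** 2) / np.pi

def wigner_squeezed_vacuum(x, p):
    """W(x,p) of a pure squeezed vacuum with reduced noise in the x quadrature."""
    return np.exp(-np.exp(2 * r) * x ** 2 - np.exp(-2 * r) * p ** 2) / np.pi

x = np.linspace(-8, 8, 801)
p = np.linspace(-8, 8, 801)
X, P = np.meshgrid(x, p)

def mean_photons(w):
    """<n> = (<x^2> + <p^2> - 1)/2 extracted from a single-mode Wigner function."""
    x2 = np.trapz(np.trapz(X**2 * w, x, axis=1), p)
    p2 = np.trapz(np.trapz(P**2 * w, x, axis=1), p)
    return 0.5 * (x2 + p2 - 1.0)

n_coh = mean_photons(wigner_coherent(X, P))        # ~ alpha0^2
n_sq = mean_photons(wigner_squeezed_vacuum(X, P))  # ~ sinh(r)^2
print(f"n_bar = {n_coh + n_sq:.3f}  (alpha0^2 + sinh^2 r = "
      f"{alpha0**2 + np.sinh(r)**2:.3f})")
```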
Integrating the Wigner function over {x a , x b , p b }, we obtain the conditional probability for detecting a measurement quadrature p ∈ (−∞, ∞), where, for brevity, we omit the subscript "a" in the quadrature p a , and introduce Note that Eq. (3) holds for the homodyne detection at one port of the interferometer fed by the input |α 0 ⊗ |ξ . Here |ξ could be arbitrary gaussian state of light, withμ andν to be determined by |ξ . As the simplest case, the coherent-state input |α 0 ⊗ |0 corresponds toμ =ν = ̺ = 1 and hence η θ = 1, in agreement with our previous result [19]. In Fig. 1(b), we show density plot of P(p|θ) against the phase shift θ and the measurement quadrature p, where the red dashed line is given by p = −α 0 sin(θ)/2. This equation takes the same form with that of the signal which shows the full width at half maximum (FWHM) = 2π/3, and hence the Rayleigh limit in fringe resolution [14,15]. According to Refs. [33][34][35][36], the ultimate phase estimation precision is determined by the CFI: where P ′ ≡ ∂P/∂θ. When the coherent-state component dominates over the squeezed vacuum, maximum of the CFI occurs at θ = 0, i.e., F (0) = α 2 0 /η 0 =να 2 0 ≃ e 2rn , which yields a subshot-noise sensitivity: This is the best sensitivity attained from the single-port homodyne detection in the limit α 2 0 ≫ sinh 2 r, coincident with the intensity-difference measurement [24,25]. III. BINARY-OUTCOME HOMODYNE DETECTION To improve the resolution, one can separate the measured data into two bins [14]: p ∈ [−a, a] as an outcome, denoted by "0", and p [−a, a] as an another outcome "∅", with the bin size 2a. Using Eq. (3), it is easy to obtain the conditional probabilities of the outcomes, denotes a generalized error function, and with η θ being defined in Eq. (4). The above data-processing method is equivalent to a binary-outcome measurement [32], with the observableΠ = µ 0Π0 + µ ∅Π∅ , whereΠ 0 = +a −a |p p|d p andΠ ∅ =1 −Π 0 . Obviously, the output signal is given by where we have used the relation Π k (θ) = P k (θ) for k = 0 and ∅. Following Schafermeier et al [15], in Fig. 1(c), we choose the eigenvalues µ ∅ = 0 and µ 0 = 1/erf( √ 2ae r ) to show the signal as a function of θ (see the red solid line), which shows Π (0) = 1. This treatment is useful to determine the FWHM of the signal and hence the resolution, as depicted by the vertical lines of Fig. 1(c). In Fig. 2(a), we show numerical results of the FWHM as functions of the bin size a and the squeezing parameter e −r . Similar to Ref. [15], one can note that the improvement of the FWHM compared to the Rayleigh criterion 2π/3 (i.e., the ratio 2π/3 FWHM ) increases as a → 0 and r → ∞. For a given and finite number of photonsn, this means that a better resolution beyond the Rayleigh criterion (i.e., the super-resolution) can be obtained when a, α 0 → 0. Independent on µ 0 and µ ∅ , the phase sensitivity of the binary-outcome measurement is given by where ∆Π = Π2 − Π 2 and P ′ 0 = ∂P 0 /∂θ. On the other hand, the CFI of this binary-outcome measurement is given by [19] F bin (θ) = where, in the last step, we have used the normalization relation P 0 (θ) + P ∅ (θ) = 1. The above results indicate that the phase uncertainty predicted by the error-propagation δθ bin always saturates the CRB 1/ F bin (θ), which holds for any binary-outcome measurement [19][20][21]. As illustrated by the blue dashed line of Fig. 
1(d), one can see that the sensitivity reaches its maximum at the optimal working point θ min (the vertical lines) and the best sensitivity δθ bin,min ≡ δθ bin (θ min ) can beat the SNL (= 1/ √n ). Similar to Ref. [15], in Fig. 2(b), we show the improvement in the sensitivity δθ bin,min /SNL as functions of the bin size a and the squeezing parameter e −r . For a givenn = 100, the best sensitivity can reach 4dB when a = 0.5 and e −r = 0.2 (i.e., sinh 2 r/n ≈ 0.06). From the squares of Fig. 3, one can also find that the FWHM scales as (2π/3)/ √n and the best sensitivity δθ bin,min ∼ 0.75/n 0.54 , with the scaling better than the SNL (i.e., the super-sensitivity). Specially, a 22-fold improvement in the phase resolution and a 1.7-fold improvement in the sensitivity can be obtained with a = 0.5, α 2 0 = 427, and sinh 2 r = 0.687 (i.e., e −r = 0.47) [15]. Normally, the data processing over the measurement quadrature p ∈ (−∞, ∞) can increase the resolution, at the cost of reduced phase sensitivity. In this sense, the ultimate phase sensitivity obtained from the single-port homodyne measurement without any data-processing (i.e., δθ CRB,min ) is the best sensitivity of the binary-outcome measurement in the limit a → ∞ [15]. From Fig. 3, one can see δθ bin,min > δθ CRB,min (the thick solid line). More importantly, δθ bin diverges at θ = 0 and therefore no phase information can be inferred for a small phase shift θ ∼ 0. To avoid this problem, we present a new data-processing technique (equivalent to a multi-outcome measurement), based upon the experimental setup similar to Schafermeier et al [15]. to obtain a better resolution and an enhanced sensitivity [see below Figs. 2(c) and (d)]. For a general multi-outcome measurement, the averaged signal can be obtained by taking expectation value ofΠ with respect to a phase-encoded stateρ(θ), namely where µ k and P k (θ) = Π k = Tr[ρ(θ)Π k ] denote the eigenvalue and the conditional probability associated with the kth outcome. With N independent measurements, one records the occurrence number of each outcome N k at given θ ∈ (−π, π). As N ≫ 1, the conditional probabilities can be measured by the occurrence frequencies, due to P k (θ) ≈ N k /N. For the multi-outcome homodyne measurement, we numerical simulate P 0 (θ) and P ± (θ) using M replicas of N random numbers [32]. As illustrated by the solid circles of Fig. 4(a) and (b), one can note that statistical average of the occurrence frequencies N 0 /N and N ± /N, fitted as P (fit) 0 (θ) and P (fit) ± (θ), show good agreement with their analytical results. Once all phase-dependent {P k (θ)} and hence Π (θ) are known, one can infer θ via the inversion estimator θ inv = g −1 ( k µ k N k /N), where g −1 denotes the inverse function of g(θ) = Π (θ) . This protocol of phase estimation is commonly used in experiments, since its performance simply follows the error-propagation formula. However, the inversion estimator based on the averaged signal does not take into account all of the available information, especially the fluctuations in the measurement observable at the output ports [22]. To improve the phase information, one can adopt data-processing techniques such as maximal likelihood estimation or Bayesian estimation [23], which saturates the CRB [33][34][35][36]: where F mul (θ) = k f k (θ), being a sum of the CFI of each outcome, with The phase-dependent {P k (θ)} and hence { f k (θ)} can be obtained in principle, at least, from the interferometric calibration, where the value of θ is known and tunable. 
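To make the three-bin processing concrete, the sketch below models the quadrature as a Gaussian whose mean follows -α0 sin(θ)/2, bins it into the outcomes "−", "0", "+", and sums the per-outcome Fisher information f_k(θ). The Gaussian width, amplitude, and bin size are illustrative assumptions (the θ-dependence of η_θ is simplified away); the point of the sketch is simply that F_mul remains finite at θ = 0, unlike the binary error-propagation sensitivity.

```python
# Three-outcome probabilities and summed classical Fisher information, under
# the simplified Gaussian quadrature model stated above.
import numpy as np
from math import erf, sqrt, sin

alpha0, r, a = 20.0, 0.75, 0.1
sigma = 0.5 * np.exp(-r)                       # illustrative quadrature std dev

def cdf(x, mu):
    return 0.5 * (1.0 + erf((x - mu) / (sqrt(2) * sigma)))

def outcome_probs(theta):
    """P_-, P_0, P_+ for the bins (-inf, -a), [-a, a], (a, inf)."""
    mu = -0.5 * alpha0 * sin(theta)
    p_minus = cdf(-a, mu)
    p_zero = cdf(a, mu) - p_minus
    return np.array([p_minus, p_zero, 1.0 - p_minus - p_zero])

def fisher_multi(theta, dtheta=1e-5):
    """F_mul(theta) = sum_k P_k'(theta)^2 / P_k(theta), by central differences."""
    P = outcome_probs(theta)
    dP = (outcome_probs(theta + dtheta) - outcome_probs(theta - dtheta)) / (2 * dtheta)
    return float(np.sum(np.where(P > 1e-12, dP**2 / P, 0.0)))

for th in (0.0, 0.05, 0.10):
    F = fisher_multi(th)
    print(f"theta = {th:4.2f}:  F_mul = {F:8.1f},  CRB sensitivity = {1/np.sqrt(F):.4f} rad")
```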
In Fig. 1(d), we show the sensitivity per measurement δθ mul ≡ √ N∆θ mul as a function of θ (the red line). The best sensitivity occurs at θ = 0 and hence δθ mul, min ≡ 1/ F mul (0). The improvement of δθ mul, min compared with the SNL is depicted in Fig. 2(c), which shows larger quantum-enhancement region than that of δθ bin, min . In Fig. 2(d), we show the scaling of the best sensitivity − log δθ mul, min logn against the bin size a and the squeezing parameter e −r , where the solid line implies the SNL. For a givenn ≫ 1, one can find that the scaling can even reach the Heisenberg limit as α 0 , a → 0. In Fig. 3, we show the scaling of δθ mul, min and compare it with δθ bin, min , using the parameters sinh 2 r = 0.687 and ̺ = 0.58. To optimize the performance, we choose the bin size a = 0.1 for the multi-outcome measurement; While for the binary-outcome case, we take a = 0.5 [15]. One can find that numerical results of δθ mul, min (the solid circles) can be well fitted as 1.1e −r / √n , better than that of δθ bin, min (the squares). This result almost approaches the best sensitivity of the single-port homodyne measurement without any dataprocessing (the thick line). From the inset, one can also note that the signal becomes further narrowing in a comparison with that of Ref. [15]. For instance, a 38-fold improvement in the resolution and a 1.9-fold improvement in the sensitivity is achievable with the realistic experimental parameters [15]: a = 0.1, α 2 0 = 427, and sinh 2 r = 0.687. To saturate the CRB, we adopt two estimation protocols based on the single-port homodyne detection in the squeezedstate interferometer. The first one is maximum-likelihood estimation. It is well known that the MLE is unbiased and can saturate the CRB when N ≫ 1 (see e.g. Ref. [33]). Numerically, the estimator θ mle can be determined by maximizing the likelihood function (i.e., a multinomial distribution): where N k = N k (θ 0 ) denotes the occurrence number of each outcome at a given true value of phase shift θ 0 , and P (fit) k (θ) is a fit of the averaged occurrence frequency. To speed up numerical simulations, we directly use the analytical results of P k (θ). For large enough N, the phase distribution can be well approximated by a Gaussian [21]: where σ is 68.3% confidence interval of the Gaussian around θ mle , determined by In Fig. 4(c), we plot the averaged phase uncertainty per measurement √ Nσ (see the circles) and its standard derivation (the bars) for each given θ 0 , using M replicas of N random numbers. One can find that the circles follows the blue solid line (i.e., δθ mul ). Furthermore, from Fig. 4(d), one can find that standard derivation of θ mle (the bars) is larger than averaged value of the error (θ mle − θ 0 ), indicating that θ mle is unbiased [23]. A new phase-estimation protocol can be obtained from a convex combination of the CFI of each outcome f k (θ). First, we define the inversion estimator of each outcome θ inv,k = P −1 k (N k /N) by inverting the equation P k (θ) = N k /N. Next, we construct a composite phase estimator with the weight determined by f k (θ), where f k (θ) has been defined by Eq. (17), with k = 0, ± for the multi-outcome homodyne measurement. Obviously, this result is physically intuitive. For example, if the CFI of the outcome k = 0 dominates over that of the others (so that θ inv,0 is much more reliable than θ inv,± ), then the above equation reduces to θ est ≈ θ inv,0 . 
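The composite estimator just defined can be exercised end-to-end on simulated counts, as sketched below: draw multinomial outcome counts at a true phase, invert each outcome's probability curve separately, and combine the three inversion estimates with weights proportional to their Fisher-information contributions. The same illustrative Gaussian quadrature model and parameters as in the previous sketch are assumed, and the phase is restricted to non-negative values because the symmetric "0" outcome alone cannot distinguish ±θ.

```python
# Composite (Fisher-information-weighted) inversion estimator on simulated data,
# under the illustrative model assumptions stated above.
import numpy as np
from math import erf, sqrt, sin

alpha0, r, a, N = 20.0, 0.75, 0.1, 10_000
sigma = 0.5 * np.exp(-r)
rng = np.random.default_rng(1)

def outcome_probs(theta):
    mu = -0.5 * alpha0 * sin(theta)
    c = lambda x: 0.5 * (1.0 + erf((x - mu) / (sqrt(2) * sigma)))
    p_minus, p_zero = c(-a), c(a) - c(-a)
    return np.array([p_minus, p_zero, 1.0 - p_minus - p_zero])

theta0 = 0.02                                   # true phase to be estimated
counts = rng.multinomial(N, outcome_probs(theta0))

grid = np.linspace(0.0, 0.2, 2001)              # assumed-known sign of theta
P_grid = np.array([outcome_probs(t) for t in grid])           # shape (T, 3)

# Per-outcome inversion estimators: theta_inv,k = P_k^{-1}(N_k / N) by lookup.
theta_inv = np.array([grid[np.argmin(np.abs(P_grid[:, k] - counts[k] / N))]
                      for k in range(3)])

# Weights proportional to each outcome's Fisher information f_k, evaluated
# near the rough estimate.
t_ref = float(theta_inv.mean())
dP = (outcome_probs(t_ref + 1e-5) - outcome_probs(t_ref - 1e-5)) / 2e-5
f_k = dP**2 / outcome_probs(t_ref)
w = f_k / f_k.sum()
theta_est = float(np.dot(w, theta_inv))
print(f"true theta = {theta0}, composite estimate = {theta_est:.4f}, "
      f"per-outcome inversions = {np.round(theta_inv, 4)}")
```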
Furthermore, this estimator enjoys the good merits of the inversion estimator (i.e., the simplicity) and the well-known maximum-likelihood estimator (i.e., unbiasedness and asymptotic optimality in the sensitivity). In Fig. 4(e) and (f), we numerically obtain the estimators {θ (1) est , θ (2) est , · · · , θ (M) est } using M replicas of N random numbers at each given θ 0 . Unlike the MLE, the performance of θ est is simply determined by the root-mean-square fluctuation where (· · · ) s ≡ M i=1 (· · · )/M denotes the statistical average. As shown in Fig. 4(e) and (f), one can find that the averaged phase uncertainty per measurement √ Nσ est almost follows the CRB δθ mul and the bias θ est s − θ 0 is almost vanishing, similar to the MLE. It should be mentioned that the dashed lines in Fig. 4(c) and (e) show the sensitivity of the binary-outcome scheme δθ bin , which can beat the SNL if one takes a = 0.5 (see Ref. [15]). Based on Eq. (15), one can also investigate the performance of the simplest inversion estimation θ inv , which depends on the choice of the eigenvalues µ k [32]. When µ + = µ − , it is simply given by δθ bin . For other choices of {µ k }, the performance of θ inv cannot outperform that of the MLE and hence the new estimator θ est , as predicted by the Crámer-Rao inequality [33][34][35][36]. Finally, in addition to the squeezed-state light interferometry, we believe that our estimation protocol may also be applicable to other kinds of multi-outcome measurements (e.g., intensity-difference measurement over the twin-Fock states [37], which will be shown elsewhere). V. CONCLUSION In summary, we have proposed a new data-processing method for the homodyne detection at one port of squeezedstate light interferometer, where the measurement quadrature are divided into three bins: p ∈ (−∞, −a), [−a, a], and (a, ∞), corresponding to a multi-outcome measurement. Compared with previous binary-outcome case [15], we show that (i) the divergence of phase sensitivity at θ = 0 can be removed, which is useful for estimating a small phase shift; (ii) Higher improvement in the resolution and the sensitivity is achievable with the realistic experimental parameters. For instance, we obtain a 38-fold improvement in the resolution with the average number of photonsn ∼ 427, while the sensitivity ∼ 1.1e −r / √n , almost approaching the best sensitivity of the single-port homodyne measurement without any data-processing. Furthermore, a new phase-estimation protocol has been developed based on a combination of the inversion estimators of each outcome. Similar to the well-known maximum-likelihood estimator, we show that the estimator is unbiased and its uncertainty can saturate the Cramér-Rao bound of phase sensitivity. Our estimation protocol may also be applicable to other kinds of multi-outcome measurements.
2019-11-29T00:59:37.000Z
2019-11-29T00:00:00.000
{ "year": 2019, "sha1": "cf25c7ee303a6a2382d57307b2eab154a82f3683", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1911.12912", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "cd087b40c1e483706d9284713e7ca1b6d99314ca", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
2364673
pes2o/s2orc
v3-fos-license
Red and Processed Meat and Colorectal Cancer Incidence: Meta-Analysis of Prospective Studies Background The evidence that red and processed meat influences colorectal carcinogenesis was judged convincing in the 2007 World Cancer Research Fund/American Institute of Cancer Research report. Since then, ten prospective studies have published new results. Here we update the evidence from prospective studies and explore whether there is a non-linear association of red and processed meats with colorectal cancer risk. Methods and Findings Relevant prospective studies were identified in PubMed until March 2011. For each study, relative risks and 95% confidence intervals (CI) were extracted and pooled with a random-effects model, weighting for the inverse of the variance, in highest versus lowest intake comparison, and dose-response meta-analyses. Red and processed meats intake was associated with increased colorectal cancer risk. The summary relative risk (RR) of colorectal cancer for the highest versus the lowest intake was 1.22 (95% CI  = 1.11−1.34) and the RR for every 100 g/day increase was 1.14 (95% CI  = 1.04−1.24). Non-linear dose-response meta-analyses revealed that colorectal cancer risk increases approximately linearly with increasing intake of red and processed meats up to approximately 140 g/day, where the curve approaches its plateau. The associations were similar for colon and rectal cancer risk. When analyzed separately, colorectal cancer risk was related to intake of fresh red meat (RR for 100 g/day increase  = 1.17, 95% CI  = 1.05−1.31) and processed meat (RR for 50 g/day increase  = 1.18, 95% CI  = 1.10−1.28). Similar results were observed for colon cancer, but for rectal cancer, no significant associations were observed. Conclusions High intake of red and processed meat is associated with significant increased risk of colorectal, colon and rectal cancers. The overall evidence of prospective studies supports limiting red and processed meat consumption as one of the dietary recommendations for the prevention of colorectal cancer. Introduction Colorectal cancer is the third most frequently diagnosed cancer worldwide, accounting for more than one million cases and 600 000 deaths every year. Incidence rates are highest in North America, Western Europe, Australia/New Zealand, and in Asian countries that have experienced nutrition transition, such as Japan, Singapore, and North-Korea [1]. Incidence rates are stable or decreasing in long-standing economically developed countries, while they continue to increase in economically transitioning countries. Recent declines in mortality from colorectal cancer have been observed in North America and Japan, possibly due to primary prevention (surveillance and screening) and improved treatment [2]. Decreasing trends in colorectal cancer mortality have also been observed in most Western European countries [3]. The role of environmental and lifestyle factors on colorectal carcinogenesis is indicated by the increase in colorectal cancer incidence in parallel with economic development and adoption of a western lifestyle [4], as well as by the results of migration studies that demonstrate a greater lifetime incidence of colorectal cancer among immigrants to high-incidence, industrialized countries compared to residents remaining in low-incidence countries [5]. Screening and surveillance of adenomatous polyps, a precursor of colorectal cancer, is currently the cornerstone for primary prevention of colorectal cancer [6]. 
However, understanding the role of environmental factors in colorectal carcinogenesis may inform additional primary prevention strategies that can further reduce risk. Several plausible biological mechanisms have been suggested to explain the association of red and processed meats with colorectal cancer [7][8][9]. These include the potential mutagenic effect of heterocyclic amines (HCA) contained in meat cooked at high temperature [10], but this is not specific of red and processed meats since HCA's are also formed in poultry. A second mechanism involves endogenous formation in the gastrointestinal tract of N-nitroso compounds, many of which are carcinogenic. Red meat but not white meat intake shows a dose-response relation with the endogenous formation of nitroso compounds in humans [11]. This has been explained by the abundant presence of heme in red meat that can readily become nitrosylated and act as a nitrosating agent [12,13]. Nitrites or nitrates added to meat for preservation could increase exogenous exposure to nitrosamines, N-nitroso compounds, and their precursors; meats cured with nitrite have the same effect as fresh red meat on endogenous nitrosation [14]. In the 2007 World Cancer Research Fund and American Institute of Cancer Research (WCRF/AICR) report ''Food, Nutrition, Physical Activity, and the Prevention of Cancer: a Global Perspective'', an international panel of experts based on an extensive review of the existing evidence concluded that high intake of red and processed meat convincingly increases the risk of colorectal cancer [15]. However two recent reviews of prospective studies concluded that the available epidemiologic evidence is not sufficient to support an independent positive association between red meat or processed meat consumption and colorectal cancer, because the likely influence of confounding by other dietary and lifestyle factors, the weak magnitude of the observed association, and its variability by gender and cancer subsite [16,17]. Indeed, a positive association has been suggested in most but not all epidemiologic studies [15], and in some well conducted prospective studies, the association between red and processed meat and colorectal cancer was attenuated after better adjustment for potential confounders [18]. Since then, new results from ten prospective studies [19][20][21][22][23][24][25][26][27][28] have been published. This included studies in Asian populations [20,25,27,28], a Canadian breast cancer screening cohort [24], a US multi-ethnic cohort [26], and four American cohorts [19,[21][22][23].We have focused our review on prospective studies, because case-control studies are more liable to recall and selection bias, and randomized controlled trials on red and processed meats and colorectal cancer are considered not feasible. The data on the relation of red and processed meats and colorectal cancers are summarized in highest versus lowest meta-analyses. Because stronger causal inference can be drawn from dose-response associations, we also conduct linear dose-response analyses. None of the previous meta-analyses have examined the shape of the dose-response relationship; we further explore whether there is a non-linear dose-response relationship between red and processed meats intake and colorectal cancer risk. 
Data sources and search We performed a systematic search for publications on red and processed meat and colorectal cancer in PubMed, without any language restriction, from 1966 to 31 March 2011, using the search strategy implemented for the WCRF/AICR report [15] (Text S1). The medical subject headings and text words covered a broad range of factors on foods and food components, physical activity, and anthropometry. We also hand-searched reference lists from retrieved articles, reviews, and meta-analysis papers. The complete protocol and full search strategy used is available at http://www.dietandcancerreport.org/cu/ [29]. Inclusion criteria and data extraction Studies were included if they reported estimates of the association of red meats, processed meats, or both with colorectal, colon, or rectal cancer risk. "Red meat" was described in most studies as the intake of beef, veal, pork, mutton and lamb. "Processed meat" was defined as the total intake of ham, bacon, sausages, and cured or preserved meats. Here, "red and processed meats" is used to denote the food item that combines both "red meats" and "processed meats" into a single item in the studies identified in the search. To be included in the dose-response meta-analyses, the numbers of cases and the denominators in the cohort studies, or the information required to derive them using standard methods [30], had to be reported. Other data extracted were study characteristics, cancer outcome, description of the meat item, method of dietary assessment, and adjustment factors. When multiple articles on the same study were found, the selection of results for the meta-analysis was based on longer follow-up, more cases identified, and completeness of the information required to do the meta-analysis. The search, study selection, and data extraction were conducted by several reviewers (led by EK) at Wageningen University, The Netherlands, up to June 2006, and by two reviewers (DSMC and RL, led by TN) at Imperial College London from June 2006 to March 2011. Statistical analysis Relative risk estimates were pooled using fixed-effects and random-effects models. We present the results from the random-effects meta-analysis, which accounts for between-study heterogeneity [31], unless otherwise specified. We conducted meta-analyses for red and processed meats, combined and separately, using the description of the meat items given in the articles. In highest versus lowest meta-analyses (the comparison of the highest intake level to the lowest intake level), the relative risk (RR) estimate from each study was weighted by the inverse of the variance to calculate summary relative risks (RR) and 95% confidence intervals (CI). In linear dose-response meta-analyses, we pooled the relative risk estimates per unit of intake increase (with their standard errors) reported in the studies, or computed by us from the categorical data using generalized least-squares for trend estimation [32]. When intake was expressed in "times" or "servings" of intake, we converted it into grams (g) using 120 g as a standard portion size for red and processed meat combined and for red meat, while 50 g was assumed as the standard portion size for processed meat, as in the WCRF/AICR report [15]. Means or medians of the intake categories were used when reported in the articles; if not reported, midpoints were assigned to the relative risk of the corresponding category.
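To make the pooling step just described concrete, here is a minimal illustrative sketch in Python (not the authors' analysis, which was run in Stata) of inverse-variance weighting with a DerSimonian-Laird random-effects model; the study relative risks and confidence intervals are hypothetical values invented for demonstration. The remaining unit-conversion details of the published methods continue below.

# Illustrative sketch of inverse-variance pooling with a DerSimonian-Laird
# random-effects model; study values are hypothetical, not data from this paper.
import numpy as np

def pool_random_effects(rr, ci_low, ci_high):
    """Pool study-level relative risks (with 95% CIs) on the log scale."""
    log_rr = np.log(rr)
    # Standard error recovered from the CI width: (ln(upper) - ln(lower)) / (2 * 1.96)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    w = 1.0 / se**2                                   # inverse-variance (fixed-effect) weights

    # Cochran's Q and the DerSimonian-Laird between-study variance tau^2
    pooled_fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - pooled_fixed) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(rr) - 1)) / c)

    # Random-effects weights add tau^2 to each study's variance
    w_re = 1.0 / (se**2 + tau2)
    pooled = np.sum(w_re * log_rr) / np.sum(w_re)
    pooled_se = np.sqrt(1.0 / np.sum(w_re))
    summary_rr = np.exp(pooled)
    ci = np.exp([pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se])
    return summary_rr, ci

# Hypothetical highest-versus-lowest RRs from three studies
summary, ci95 = pool_random_effects(
    rr=np.array([1.30, 1.10, 1.25]),
    ci_low=np.array([1.05, 0.90, 1.02]),
    ci_high=np.array([1.61, 1.34, 1.53]),
)
print(summary, ci95)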
Zero consumption was used as the boundary when the lowest category was open-ended; when the highest category was open-ended, we used the amplitude of the nearest lower category. For studies reporting intakes in grams/1000 kcal/day [22,26,33], the intake in grams/day was estimated using the average energy intake reported in the article. When a study provided results by gender, we first pooled these estimates using a fixed-effects model and included the pooled value in the meta-analysis. One study provided results for distal and proximal colon cancer [34] and we derived the relative risk for colon cancer using the same procedure. We also conducted meta-analyses stratified by cancer sub-site, gender, and geographic area. Dose-response relationships were expressed per increment of intake of 100 grams per day for red and processed meat, and 50 grams per day for processed meat, as in previous meta-analyses [15,35]. To assess heterogeneity, we computed the Cochran Q test and the I² statistic [36]. Sources of heterogeneity were explored in stratified analyses and by linear meta-regression, with gender, geographic area, year of publication, length of follow-up, and adjustment for confounders as potential explanatory factors. We also explored whether heterogeneity of results was explained by the studies in which a standard portion size was used to convert times/servings per day to grams per day, and by method of dietary assessment. Small-study and publication bias were examined visually in funnel plots for asymmetry and by Egger's test [37]. The influence of each individual study on the summary RR was examined by excluding each study in turn from the pooled estimate [38]. We further examined the potential non-linear dose-response relationship between red and processed meats and colorectal cancer using fractional polynomial models [39]. We determined the best-fitting second-order fractional polynomial regression model, defined as the one with the lowest deviance. Non-linearity was tested using the likelihood ratio test [40]. All analyses were conducted using Stata version 9.2 (StataCorp. 2005. Stata Statistical Software: Release 9. College Station, TX: StataCorp LP). P < 0.05 was considered statistically significant. Results of search and study selection Forty-two articles from 28 prospective studies that examined the relationship of red and/or processed meat intakes and colorectal, colon, and rectal cancer incidence were identified (Figure 1). Eight articles were excluded [41][42][43][44][45][46][47][48] because other articles of the same cohort studies with more cases [49][50][51] or with the information required in the meta-analysis were already included [18,52,53]. We could not include the UK Dietary Cohort Consortium [42], as data from two of the seven component cohorts were in another cohort consortium that was included in the meta-analysis because it had more cancer cases [50]. Hence, 24 prospective studies (2 case-cohort, 3 nested case-control and 19 cohort studies) were included in the highest versus lowest meta-analyses, of which 21 studies provided enough information to be included in the dose-response meta-analyses. Characteristics of the study cohorts There were 13 cohorts of men and women, three male cohorts, and eight female cohorts. Twelve studies were from North America, including a multiethnic cohort. The European Prospective Investigation into Cancer and Nutrition (EPIC) study involved ten European countries.
Table 1. Summary relative risks of meta-analyses of red and processed meats, red meat and processed meat, and colorectal cancer for all studies and by subgroups.
The remaining were two studies each from Finland, the Netherlands, and Japan, one study each from Australia, Canada, Sweden, and China, and a Singaporean study with Chinese participants. In all studies, relative risk estimates were adjusted for age and sex, and all except two adjusted for total energy intake. More than half of the study results were adjusted for body mass index (BMI), smoking, alcohol consumption, or physical activity, and close to half controlled for dairy food or calcium intake, socioeconomic status, family history of colorectal cancer, or plant food or folate intake. In some studies, the estimates were controlled for use of non-steroidal anti-inflammatory drugs, or for fish or white meat intake. The main characteristics of studies included in the dose-response meta-analysis are shown in Table S1. Study results not included in the dose-response meta-analysis are detailed in Table S2. Intake of red and processed meats was significantly associated with an increased risk of colon cancer (RR for 100 g/day increase = 1.25, 95% CI = 1.10−1.43) (8 studies, 5426 cases), with significant heterogeneity between studies (I² = 60%, P = 0.02). Meta-regression analysis showed that studies adjusted for age and energy only [55,58] reported stronger associations than the more adjusted studies [9,21,24,34,52,54] (P = 0.03). Red and processed meats intake was significantly associated with rectal cancer (RR for 100 g/day increase = 1.31, 95% CI = 1.13−1.52) (5 studies, 2091 cases). In influence analysis, the statistical significance of the associations with colorectal, colon, and rectal cancers remained when each study was excluded in turn. There was evidence of a non-linear association of red and processed meats and colorectal cancer (P = 0.03). Visual inspection of the curve (Figure 3) suggests that the risk increases linearly up to approximately 140 g/day of intake. Above that intake level, the risk increase is less pronounced. No significant associations were observed for proximal and distal colon cancers in the meta-analysis of the two [34,54] out of the five studies [21,34,50,54,55] identified in the search (Table 1). The summary RRs for the highest versus lowest red meat intake comparison were 1.10 (95% CI = 1.00−1.21), 1.18 (95% CI = 1.04−1.35), and 1.14 (95% CI = 0.83−1.56) for colorectal, colon, and rectal cancer, respectively (Table 1). The mean of the highest category of red meat intake ranged from 26 to 197 grams per day in the studies. In dose-response meta-analyses, red meat was statistically significantly associated with increased risk of colorectal cancer (RR for 100 g/day increase = 1.17, 95% CI = 1.05−1.31) (8 studies, 4314 cases) and colon cancer (RR for 100 g/day increase = 1.17, 95% CI = 1.02−1.33) (10 studies, 3561 cases) (Table 1, Figure 4). No significant association was observed with rectal cancer (RR for 100 g/day increase = 1.18, 95% CI = 0.98−1.42) (7 studies, 1477 cases). Influence analyses did not suggest strong influence from any of the individual studies on the summary estimates. For proximal and distal colon cancers, no association was observed when combining the two studies identified [34,50] (Table 1). No significant associations were observed for proximal and distal colon cancers in the meta-analysis of the two [34,54] out of five studies [21,28,34,50,54] identified in the search (Table 1).
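As an aside for readers unfamiliar with the heterogeneity and asymmetry statistics used throughout these results (Cochran's Q, the I² statistic, Egger's test), the following minimal Python sketch shows how they can be computed from study-level log relative risks and their standard errors; the inputs are hypothetical, and this is an illustration only, not the Stata code behind the published analyses.

# Illustrative computation of Cochran's Q, I^2, and Egger's regression test
# from hypothetical study-level log relative risks and standard errors.
import numpy as np
import statsmodels.api as sm

def heterogeneity_and_egger(log_rr, se):
    w = 1.0 / se**2                                    # inverse-variance weights
    pooled = np.sum(w * log_rr) / np.sum(w)
    q = float(np.sum(w * (log_rr - pooled) ** 2))      # Cochran's Q
    df = len(log_rr) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0  # I^2 as a percentage

    # Egger's test: regress the standardized effect (log RR / SE) on precision (1 / SE);
    # an intercept that differs from zero suggests small-study/publication bias.
    precision = 1.0 / se
    fit = sm.OLS(log_rr / se, sm.add_constant(precision)).fit()
    egger_intercept, egger_p = fit.params[0], fit.pvalues[0]
    return q, i2, egger_intercept, egger_p

# Hypothetical inputs for five studies
log_rr = np.log(np.array([1.30, 1.10, 1.25, 1.05, 1.40]))
se = np.array([0.10, 0.12, 0.09, 0.15, 0.20])
print(heterogeneity_and_egger(log_rr, se))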
Small study or publication bias In the analyses, no evidence of small-study or publication bias was detected by visual inspection of the funnel plots. P for Egger's test ranged from 0.13 to 0.98 in the different analyses. The only evidence of publication bias was in the studies on processed meat and colon cancer, which suggested that small studies with inverse associations were missing (Egger's test P = 0.06). Table 1 shows the results of the dose-response meta-analyses by gender and geographic area. In most strata the number of studies was low, and in some there was significant evidence of heterogeneity. Principal findings The accumulated evidence from prospective studies supports that red and processed meats intake is associated with increased risk of colorectal, colon, and rectal cancers. The risk increase in colorectal cancer estimated in linear dose-response models was 14% for every 100 g/day increase of total red and processed meats, 25% in colon cancer, and 31% in rectal cancer. These results are consistent with those of the highest versus lowest meta-analyses. In non-linear models, colorectal cancer risk appears to increase almost linearly with increasing intake of red and processed meats up to approximately 140 g/day. Above this level, the risk increase is less pronounced. Red meat intake (assessed separately from processed meat) was associated with increased risk of colorectal and colon cancers, but the association with rectal cancer was not statistically significant. Similarly, processed meat intake was related to risk of colorectal and colon cancers, but not to rectal cancer. The lack of association with rectal cancer contrasts with the results observed when red and processed meats were combined into a single food item, where similar associations with colon and rectal cancers were observed. This may be due to a lower number of studies in the analyses of rectal cancer than in those of colorectal and colon cancers. Our estimates are consistent with those reported in the 2007 WCRF/AICR expert report [15], where the risk increase of colon cancer was 37% for every 100 g/day increase in red and processed meats, and the risk increase of colorectal cancer was 29% for every 100 g/day increase in red meat and 21% for every 50 g/day increase in processed meat. Selective reporting or publication bias Some articles [20,28,59,62,66] could not be included in the dose-response meta-analysis because of insufficient information, but the dose-response meta-analyses were consistent with the highest versus lowest meta-analyses that included these studies, which suggests that the exclusions from the dose-response meta-analyses did not bias our results. Two cohort studies could not be included in the meta-analysis. These studies reported positive but non-significant associations between fried sausage [67] and pork [68] and colon cancer. We could not include the results of the UK Dietary Cohort Consortium [42], which reported no association of red and/or processed meat with colorectal cancer. No evidence of publication bias emerged from visual inspection of funnel plots and Egger's tests in the analyses conducted, except for processed meat and colon cancer, where there was a suggestion that small studies with inverse associations were missing. Since the larger studies in the analysis have produced consistent results, it is unlikely that the missing studies would affect the association observed.
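A small arithmetic aside (not a result reported by the authors): because the linear dose-response estimates above are fitted on the log scale, an RR expressed per 100 g/day can be rescaled to any other increment by exponentiation, which is how estimates quoted per 100 g/day and per 50 g/day can be compared on a common footing.

# Rescaling a log-linear dose-response RR to a different intake increment.
# Illustration only, using the pooled colorectal estimate of 1.14 per 100 g/day quoted above.
rr_per_100g = 1.14
rr_per_50g = rr_per_100g ** (50 / 100)    # about 1.07 per 50 g/day
rr_per_150g = rr_per_100g ** (150 / 100)  # about 1.22 per 150 g/day
print(round(rr_per_50g, 2), round(rr_per_150g, 2))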
Exploration of heterogeneity There was evidence of heterogeneity between studies on red and processed meats and colorectal cancer, which was significantly explained by intake unit conversion in the meta-regression analysis. The summary risk estimate was lower in the studies for which we used a standard portion size in the unit conversion, compared to other studies. The approximation may have attenuated the association, and the real association may be stronger than shown in our estimates. Meta-regression analysis indicated that level of adjustment partially explained the heterogeneity between studies on colon cancer. Studies adjusted for age and energy only (the Nurses' Health Study, NHS [58], and the Health Professionals Follow-up Study, HPFS [55]) showed a stronger association than studies with a higher level of adjustment. However, after the exclusion of the studies adjusted only for age and energy intake from the analysis, moderate unexplained heterogeneity persisted. In a more recent article on the NHS and the HPFS, the associations of red meat and processed meat with colon cancer were attenuated after better adjustment for confounders and longer follow-up [18]. Nevertheless, in another recent article on the NHS, women who consumed one serving of red or processed meat daily for 40 years had a 20% increased risk of colon cancer compared with women who did not eat any red or processed meat [48]. This estimate is consistent with the results of our meta-analysis. Although we cannot rule out residual confounding, most studies included in the meta-analyses adjusted results for smoking, alcohol consumption, BMI and physical activity [18,[22][23][24]26,27,50,51,54,56,57,64] in addition to age, sex and energy; in several cohort studies the multivariate-adjusted models also included folate intake [18,24,34,50], and other studies additionally adjusted for aspirin or other anti-inflammatory drug use [23,25,53,54]. Several potential confounders were not included in the final statistical models in some studies because, as the authors reported, their inclusion in the model did not substantially modify the relative risk estimates [19,33,49,52,60,63]. Implications The remaining question is whether there is substantial potential for primary prevention of colorectal cancer through limiting the intake of red meat and processed meat in high meat consumers. At a population level, the preventability estimates for red meat intake and colorectal cancer were 5% in the US and the UK, and 7% in Brazil and China, where 26%, 25%, 45% and 37% of the respective populations were estimated to consume more than 80 g of red meat per day [69]. Dietary and lifestyle factors are usually interrelated, and it is likely that a change in a habit that is considered detrimental, such as high intake of red meat, will be accompanied by other healthful changes. In the large prospective cohort of American nurses (NHS), it was estimated that women who consumed high amounts of red and processed meat, did not exercise, had a low folate intake, and had a consistent excess in body weight experienced over 3.5 times the cumulative incidence of colon cancer by age 70 years compared with women who maintained a low-risk lifestyle and diet (defined as consuming low amounts of red and processed meat, exercising regularly, consuming 400 µg/day of folate, and maintaining a low relative body weight) [48].
Under different scenarios for red meat consumption, reduction of physical inactivity, obesity, alcohol consumption, early adulthood cigarette smoking, and low intake of folic acid from supplements, the population attributable risk of colon cancer for the combined modifiable risk factors ranged from 39% to 55% of cancers in an American cohort of middle-aged men [70]. The preventability of colorectal cancer in the United Kingdom through reduced consumption of red meat, increased fruit and vegetables, increased physical activity, limited alcohol consumption and weight control was estimated to be 31.5% of colorectal cancers in men and 18.4% in women [15,71]. The preventability of colorectal cancer through increasing intake of foods containing fiber and reducing red and processed meat intake, alcohol consumption, physical inactivity and body fatness was estimated to be close to 40% in the USA, UK and Brazil, and 17% in China [69]. Measurement error might have attenuated the relative risk estimates in the individual studies on which the estimates are based, as well as in our meta-analyses. Conclusions The current evidence from prospective studies supports limiting the amount of red and processed meat among high consumers for colorectal cancer prevention. Primary prevention of colorectal cancer should emphasize modification of multiple diet and lifestyle factors. Supporting Information Table S1 Main characteristics of the prospective studies included in the dose-response meta-analyses. (DOC)
2014-10-01T00:00:00.000Z
2011-06-06T00:00:00.000
{ "year": 2011, "sha1": "47a0c583dd589f99fd98624ce5dd65e1e783fd1c", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0020456&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "43f7e3054b6a9be02f918f350c75b1211bf14a5e", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
266531638
pes2o/s2orc
v3-fos-license
Coming of age in a pandemic era: The interdependence of life spheres through the lens of social integration of care leavers in Quebec during the COVID-19 pandemic This paper explores how the COVID-19 pandemic affected care leavers in Quebec, a social group already facing obstacles to social integration. INTRODUCTION Since 2020, the world's population has faced major economic and social disruptions due to the COVID-19 pandemic. For youth leaving care, this pandemic has been an additional obstacle on their paths to achieving their aspirations. Integration into the workplace has become more precarious for youth in care owing to job losses (Greeson et al., 2022) and difficulties in finding a job, especially during the early months of the pandemic. This situation has tended to improve over time, partly owing to vaccination campaigns (Rosenberg et al., 2022). For those who are studying, employment precariousness sparks fears that they will be unable to pursue their education because they cannot afford the tuition fees (Ruff et al., 2022). In addition, reduced access to their families and social environments, as well as to professionals who offered support before the pandemic, created a sense of isolation for these youth and a weakening of their safety net of social support (Roberts et al., 2021; Ruff & Linville, 2021). The social and work integration difficulties care leavers face, together with their lack of preparation for independent life (Goyette, 2010), underscore the importance of a robust support network as a key element for protecting their well-being, since such a network seems to be a necessary lever to protect these youth (Goyette, 2012). In the Canadian province of Quebec, research shows that 5.5% of all children and youth (aged 0-17) were placed in out-of-home care during the last two decades (Esposito et al., 2023). When they leave their care settings around the legal age of majority (18 years old), they face many challenges in their transition to adulthood (Goyette & Turcotte, 2004; Mann-Feder & Goyette, 2019; van Breda et al., 2020). Acknowledging the challenges these youth face when leaving care, most Canadian provinces have extended care and services beyond the age of majority, and since the onset of the pandemic a moratorium was requested to ensure that youth continue to have access to the services they received under the child protection system, which did not happen in Quebec (Goyette et al., 2020). In this context, this article aims to highlight the interconnected nature of the challenges faced by these young people in several spheres of their lives during the pandemic. More specifically, this article seeks to show the interrelated dynamics of socio-professional integration (education and employment) and social support experienced by these youth. OUT-OF-HOME CARE AND TRANSITION TO ADULTHOOD IN QUEBEC Quebec's child protection system is governed by the province's Youth Protection Act (YPA), which aims to safeguard children whose security or development is, or may be, at risk by regulating government intervention in private family life. According to the act, placing a child outside of their home is considered an exceptional measure, and the goal should be family reunification (YPA, 2007, Section 2). When family reunification is unfeasible within a specific timeframe, child protection workers must develop alternative permanency plans that meet each child's specific needs (Ministère de la Santé et des Services Sociaux (MSSS), 2016, p.
3).According to Section 4 of YPA (2007), the decision should focus on providing permanent stable living conditions and relationships that meet the child's age and developmental needs while ensuring continuity of care.These alternatives may include adoption, tutorship, placement until the age of majority (with a foster family, extended family or a significant third party; in residential settings in a resource offering specific care) (MSSS, 2016, p. 3). Underlying the importance of supporting care leavers in their transition to adulthood, the provincial Special Commission on the Rights of the Child and Youth Protection (2021) recommended setting up a post-placement program to support young people up to age 25 in their transition to autonomy (p.275).The recent introduction of Bill 15 by Quebec's legislature (Assemblée nationale du Québec, 2022) requires planning the transition to adulthood of youth leaving care 2 years before their 18th birthday as well as offering them the possibility to extend their stay in care if need be. Indeed, challenges faced by care leavers in Quebec have been extensively documented.When leaving care, they must independently build stability in various spheres of their lives, namely academic, residential, relational, work-related, financial or even health-related (Häggman-Laitila et al., 2018).Care leavers, whose issues are mirrored in several Western countries (Goyette et al., 2007), are particularly likely to face problems of integration in the workforce (Cameron et al., 2018).They must also contend with difficulties in pursuing their academic career.Data from a representative sample of care-leavers in Quebec (Goyette & Blanchet, 2022) show that they tend to lag significantly behind in their education and are at much higher risk of dropping out than youth from the general population.In addition, a significant proportion of these youth experience housing instability or homelessness (see Mech's meta-analysis, Mech, 2001).In Quebec, 33% of former foster care youth will experience at least one episode of visible homelessness by the time they reach age 21, while estimates among the general population are at 0.9% (Goyette et al., 2022). Despite the importance of the stability and longevity of social relationships (Best & Blakeslee, 2020), youth in care find it difficult to develop and maintain a social support network.Placement experiences can weaken ties with significant others, which in turn can reduce the number of friendships or cause them to refrain from seeking help from their families (Frechon & Lacroix, 2020).For example, frequent travel during placement increases youth's distance from family and friends, hindering their chances of forming meaningful relationships (Robin et al., 2015).In addition, youth placed in a transitional living program were more likely to have both formal support as well as a foster family member in their support network when compared to those not placed in the same program (Rosenberg, 2019).Since an important factor for developing a strong support network after leaving care is the quality of relationships established with family members and friends (Parent et al., 2016), there is clearly a disparity in access to support for care leavers entering adulthood. 
THEORETICAL FRAMEWORK: SOCIAL VULNERABILITY AS A DOUBLE DROPOUT PROCESS In this research, we used Castel's zones of vulnerability model (Castel, 1994) to better understand the social vulnerability of youth with former foster care experience in the context of the pandemic and by considering the dynamic between socio-professional and support network levels of fragilization.According to Oris (2017), « the concept of vulnerability emerged from the study of natural disasters » (p.1), capturing on the one hand the interaction of multiple factors in this phenomenon, and on the other hand, the variable and unequal impacts of a crisis depending on groups' coping capacity (Martin, 2019).Since this crisis had a variable impact on precarious populations, it seems relevant to explore the place of precarious working conditions and the weakening of social supports on the social vulnerability dynamic (Paugam, 2009).Castel's model distinctive feature is that weak and unstable integration into the main mechanisms of resource distribution in contemporary society places people in a situation of uncertainty and high exposure to the risk of poverty and ultimately social exclusion (Ranci & Migliavacca, 2010). Castel's zones of vulnerability model (Castel, 1994) show how vulnerability is a dynamic condition that results from a 'dual process of disengagement: from work and from relational integration ' (p. 13).This dual notion suggests that individuals' social integration relies on both work and social support and protection (Martin, 2019).The relationship to work is examined as a sphere of socio-occupational integration that includes both work and studies since they are interrelated and imply a specific source of protection.In contrast, the relational sphere includes young people's network of proximity, what Martin (2019) calls 'relational capital', which in some cases also includes their formal support network (e.g.street workers). Castel's (Castel, 1994) dynamic model encompasses four zones or spheres of integration: a) integration (people integrated into the labour market and into a support network), b) assistance (people distant from the labour market but integrated into a support network), c) disaffiliation (people distant from the labour market and isolated) and d) vulnerability (people in a precarious work situation and in fragile relationships).Zone d can be seen as a tipping point towards disaffiliation or social integration.Note that this model does not construe the process of exclusion as a fixed component: while it is possible to slip into disaffiliation, it is also possible to emerge from it. METHODOLOGY We adopted a qualitative approach to our research design, to understand the processes involved in youth lives.Considering youth's lives as processes allows us to consider multiple factors that influence their lives in a non-linear way (Longo, 2016).More specifically, two groups were formed based on three differentiation criteria.The first group, which we named 'Group A' (N = 24) was composed of youth who reported not having obtained a high school diploma, having experienced at least one episode of homelessness and having experienced at least one mental health problem.The second group, which we named 'Group B' (N = 24), included youth who reported having graduated from high school, not having experienced homelessness and not having experienced any mental health problems. 
A few reasons explain our research design rationale for this project.Not completing high school is associated with socioeconomic disadvantage (Campbell, 2015), while poverty and mental health problems are mutually reinforcing (Ridley et al., 2020), suggesting that youth entering the pandemic with mental health issues and no high school degree could have a harder time navigating challenges during the pandemic.Finally, stable housing is an important determinant of quality of life, including relationships with friends and family (Baumstarck et al., 2015) as well as access to a high school degree and work experience (Goyette & Blanchet, 2018). Considering links between high school completion, homelessness and mental health among former foster care youth in Quebec (Goyette et al., 2021), entering the pandemic with previous homelessness experience might give youth less support to cope with the challenges they face during the pandemic.Thus, creating two groups differentiated by their level of vulnerability allows us to better understand how the pandemic affected care-leavers' experiences-from their own point of view-in the academic, professional and social relational dimensions of their lives. Participants In this study conducted between June and November 2020, 48 youth, aged between 19 and 21 years old when they took part in it, were interviewed.Overall, both groups' participants share similar characteristics.Most participants in both groups identified as being white youth (20 for Group A and 21 for Group B) as well as francophone (24 for Group A and 23 for Group B).An equal amount from both groups come from relatively rural regions of the province (18 for both groups) compared with urban centres (6 for both groups).In Group A, 16 participants identified as women and 7 as men, while in Group B, 12 identified as women and 11 as men.No data are available concerning religious affiliation as well as sexual orientation of the participants. Recruitment Youth were recruited as part of a subsample of the Étude longitudinale sur le devenir des jeunes placés au Québec et en France (EDJeP), a national longitudinal study on care-leavers in Quebec.They were selected based on the responses provided to the quantitative questionnaire administered during the second wave of that study.In compliance with ethical standards, only participants who consented to be contacted again to participate in other related research projects led by the principal investigator were called.Approval from the Research Ethics Board of the Institut universitaire jeunes en difficulté was obtained for both the original EDJeP study (MP-CJMIU-16-02) as well as for the conduct of this research with a subsample of EDJeP (MP-CJM-IU-16-22). Data collection and analysis Interviews were conducted exclusively by ZOOM or telephone, and consent was obtained verbally and recorded.Interviews lasted an average of 90 min and followed a semi-structured format, covering several dimensions of their life experiences during the pandemic (education, employment and income, housing situation, social and family relationships, mental health and access to services).Each participant received $30 in compensation at the end of the interview.Interviewers used a logbook to document the interview context and to record their personal analysis.All interviews were audio-recorded and transcribed in full.To ensure confidentiality and anonymity, participants' names were replaced with fictitious first names. 
An in-depth thematic content analysis was performed using NVivo data processing software, following the data reduction method proposed by Paillé and Mucchielli (2012). Each interview was coded according to theoretically defined themes and emergent themes and later organized into categorical trees. Key findings were consolidated into an analysis memo. Subsequently, as Huberman and Miles (1991) recommend, content was validated by two other members of the research team with diverse academic backgrounds, to ensure that all team members shared the same interpretations. Disagreements were thoroughly discussed, and a mutually satisfactory conclusion was reached regarding which interpretation to adopt. Following this, a comparative analysis was performed following Cécile Vigour's (2005) three steps: (1) gathering and contextualizing the information, (2) interpreting the similarities and differences between both groups and (3) presenting the findings of the comparative research. Finally, an analysis was conducted to position the trajectories of both groups of youth within Castel's distinct vulnerability zones. This enabled the identification of four specific paths that resulted from the pandemic, located between the two extremes of the spectrum: (1) strong integration path, (2) integration under stress path, (3) risk of disaffiliation path and (4) path towards disaffiliation. RESULTS Our results are presented in the following two subsections. The first section presents how participants from both groups lived different experiences during the pandemic. It allows us to understand how entering the pandemic with different levels of vulnerability affected their experiences. The second section categorises participants into four different thematic groups based on those experiences using Castel's (1994) model. This allows us to understand how the pandemic seems to have influenced their paths either towards social integration or disaffiliation. Disparate effects of the pandemic among groups The results revealed that the pandemic exacerbated the difficulties foster care leavers faced in various spheres of life, including their educational and career paths as well as their relational sphere. As we will see below, different levels of pre-pandemic vulnerability led to different life experiences for participants in Group A and Group B. Youth's educational trajectories during the pandemic Comparing interviews between participants from both groups suggests that educational careers were disproportionately affected according to the pre-pandemic needs and obstacles they already had. For Group A participants, the pandemic played a major role in their ability to carry out their plans. Only two thirds of them were studying when the pandemic started, and most of them were returning to high school. The first lockdown in Quebec resulted in an interruption of classes followed by the implementation of remote study measures in most cases.
Almost none of the participants from Group A said that they had the material or psychosocial resources to study remotely.Many did not have enough space to study at home.Some had to purchase their own equipment (one even had to take a loan to buy a computer), which was a major challenge given their financial precariousness.Some had to use their cell phones to attend classes due to the unavailability of a computer or Wi-Fi connection.Some participants recalled struggling to master technology and software used by their teachers.Because it's too complicated, and with time the motivation's gone.So that… me, instead… because the teachers y'know are also mixed up.They don't really know how it works.And sometimes, because I did exams, and even with the exams it was complicated, like, I had to make an appointment, and then it didn't work.And then I would show up at school and then I had to go back home, because that wasn't what they had heard with the teacher, and like… communication is really not there! [Lou] These challenges were exacerbated for those who also told us they already had learning disabilities [e.g.attention deficit hyperactivity disorder (ADHD)], which in turn was further enhanced by difficult access to the appropriate medication.This led many participants to put their studies on hold and wait for the pandemic to pass.In addition, motivation to pursue their studies was not enough for many participants.Louise explains that her inability to find stable employment, difficult access to basic services and a lack of support network during that time forced her to suspend her plans. Before the pandemic, yes, I was in college.But it closed, so I dropped out a bit due to that.But otherwise, I was on a good track anyway before COVID. […] I was in CEGEP [College of general and professional education in the province of Quebec] and I was doing parameters.It had been about two months, and then COVID came along and screwed everything up. […] I was down.I would say that I felt more alone. The loneliness was heavy. [Louise] These examples illustrate how the pandemic seemed to create an additive effect on many challenges participants already faced, making their study projects an uphill battle.It is therefore not uncommon for some of them to stress the importance of support to help them pursue their studies.For example, one participant stated that having a street worker could help him resume his plan, which was based on a positive experience.Another mentioned that his employers' strong encouragement for him to return to school gave him the motivation to not give up. So just to give you an example, they [the employer] told me, because at first I wasn't sure if I was going to go back to school, they said: "If you go back to school, we'll give you a raise." [Gabriel] Among Group A participants who were not studying, most indicated they would like to finish high school, but their situation prevented them from doing so, having neither the material nor the social conditions to carry out their plans.Their situation seemed to be indirectly impeded by the pandemic. The situation for Group B participants seems different.Most of them said that they were already studying in college or university or began when the pandemic began.Consequences were mostly felt due to the shift to a distance education, reducing their motivation to pursue. When we learned that it was still online at the beginning of the school year, I was a little discouraged at first, so I said "well, do I continue?"but no, I'm going to continue now. 
[Inaya] Although most participants said that they had the necessary equipment to follow online courses (computer, internet connection, etc.), some of them mentioned that the closure of campuses prevented them from accessing certain essential resources, both material and pedagogical.For example, one participant had difficulty accessing essential software, while another said she and her fellow students could not access spaces for experiments, such as laboratories.As explained by Juliette, they felt that these conditions reduced the quality of their learning experience. As one of my courses is just online, so the labs are, we watch videos instead of really practicing with the teacher and the animals, but it's still different, because it's all at a distance, so the teacher gives one hour of class, and then the rest of the course is basically everyone for him or herself, we do the work assigned. [Juliette] Participants also explained that studying in non-designated workspaces (such as a bedroom) leads to an increase in procrastination as well as fatigue from spending long hours in front of a computer. Well, I'm definitely in a different work environment, which is my room, with a whole bunch of distractions that come with it.So of course there, I was a little less motivated because I had access to my video game consoles, I had access to TV, things I don't have when I'm in a classroom, so it was a little harder to find the motivation, but I found it anyway. [Lucas] Note that despite all the negative experiences described above, some Group B participants still mentioned positive consequences of the educational measures resulting from the pandemic (e.g.being able to better balance school and work because of not having to travel, being able to sleep longer). Youth's occupational trajectories during the pandemic Interviews also clarify how the pandemic exacerbated difficulties that participants face in their occupational paths.We also observe disproportionate consequences felt between both groups.Group A participants seemed to be the most affected: almost all of them reported having to deal with disruptions in their occupational paths, manifested by them cumulating a series of jobs during the pandemic.Many said that looking for a job and holding onto one was quite challenging, which generated uncertainties for them regarding their future careers.Thus, many participants reported being afraid of losing their jobs, and of not being able to carry out their plans. The fact that I'm going to an apartment, I really can't lose my job.So, that's it… At first, I thought about going into grocery stores, because it's obvious that grocery stores are the last thing to close, except that I wasn't the only one who had that mentality, and I'll tell you that live grocery stores are full of employees.They are full.I've been to all the grocery stores in the region, they're all full!And either they don't want to hire new people so as not to contaminate the employees…it's really gotten intense now. 
[Gabriel] For many of them, the employment experience resembled a quest to survive, pushing them to string together work experiences and even take on jobs on the 'black' market in order to cover their living expenses.Total precariousness and strong uncertainties in the realization of life plans seem to be among the many challenges that fully occupy their complicated daily lives during a pandemic.For many, these concerns coexisted with physical or mental health problems or dilemmas related to parenthood.Jade explains that she had to choose between taking the risk of working and subsequently, losing her job or staying on social assistance to be able to meet her child's basic needs. And I don't want to get caught or lose my job because of [COVID].I have a child.Again, it comes back to my job as a mom.If I have a job, I have more [welfare] and it will be hard to get my disability back.So we agree that I'm not going to start messing around with that. [Jade] The reality seems quite different for Group B participants.They reported that they had either a stable employment or education and felt they had enough support-either from friends, family or financial aid-to pursue their life plans.Some were already working for several years, while others recalled not worrying about finding a job. Because, let's say, every time I've had a job, someone always comes up to me and says, "Oh, are you free on such and such a day?" and then I get offered a job.That's what happened with my last three jobs, so I didn't have any problems with that. [Maël] Although they seemed to demonstrate stronger resilience during the pandemic, this does not mean that they were not affected by it.Although few in number, some mentioned their fear of losing their job or even having experienced depressive episodes marked by the discontinuity of their work experience. I keep losing my job, getting a job, losing a job again.I don't know if I'm gonna keep the job I just got… It's kinda hard to know.And [sigh] planning to go out and give out resumes online and all that, it really stresses me out.I don't know why, and sometimes it leads to depression instead of… [Nina] Similarly, among those who reported difficulties along their trajectories from the outset, some indicated being able to adapt by mobilizing strategies to pursue their aspirations, either by turning to informal work experiences or by relying on social assistance.Added to this, the disparities observed raise the question of social support as an important lever for resilience and stabilization of life paths. Youth's social relations during the pandemic Both groups seemed to have been differently impacted in their social relations during the pandemic.Group A participants seem to have also faced heavier consequences in this regard.Several of them told us that they felt isolated because they had very few or no people to rely on during that time.They recount stories of major relationship breakdowns, be it deterioration of family relationships, romantic break-ups or death.While several Group A participants told us they had no relations with close family members before the pandemic, others indicated they felt strong pressures on their fragile relationships.Gabriel explains how spending all his time at home with his mother exacerbated their already toxic relationship. 
Let's say I told you that I lived here 60 hours a week here and we had 20 conflicts, well now we spend 120 [Hours a week together], so we have double that.Yeah, it's the same war all the time, so… usually, let's say we had a dispute, before the pandemic, that I could leave, well during the pandemic, I couldn't! [Gabriel] Additionally, most Group A participants declared they had limited interaction with their extended family members prior to the pandemic.Those who maintained contact with extended family members (or re-established contact) report that the pandemic created additional pressure on those relationships, weakening them.Lou's example shows how physical distancing measures nullified all her previous efforts to reconnect with her extended family members. Well, it's because I actually grew up and I wasn't really close to my family, or my parents, and because of the [Director of Youth Protection] and everything, and when I fell into an apartment, I got back in touch with them all.I wanted to change my life and I wanted to get to know my family and to reconnect with them, but with the Covid, the ties were broken and… I'm back to not talking to them… [Lou] Some Group A participants also expressed how the pandemic caused a reduction in their social circles, in some cases motivated by a desire to avoid negative relationships.Distancing from certain individuals gave them time to reflect on those relationships.In addition, they explained that school closures made it difficult to maintain relationships with their peers.This led them to express a profound sense of loneliness during the pandemic.Léa explained how this reactivated past isolation traumas from her foster care experience. I spent three years in a youth center locked up between four concrete walls, not even a window, five feet by five feet.So I'm a person who hates loneliness, we can agree that it's not my strong point!I need… Well, it's because I've been so… I call it sequestration in the youth centers, sorry, but you were really sequestered.In the long run, you create a little fear of solitude, and that's why… that's what it is, it just makes you… that's what it is, I feel locked in, I have nothing to do, I'm bored, it makes me depressed. [Léa] Many Group A participants had only a fragile network, consisting of front-line services and community organizations, as well as friends (many of them also in vulnerable situations) who were unable to provide important support for them due to service interruptions and their own vulnerabilities.For them, service interruptions also meant being cut off from their primary support network. I don't talk to my family much anymore.For me, [community housing resource] were like family.The fact that the pandemic meant that I couldn't go and see them anymore, it was a bit heartbreaking because I was tempted to go and see them.But it was a really good place I think.You feel good when you're there. 
[Louise] In contrast, Group B participants indicated that they had contact and relationships with their close family members before the pandemic began. They also reported no apparent long-term consequences of the pandemic on those relationships, some even reporting that spending more time together actually strengthened them. They said that they mostly had little contact with members of their extended family before the pandemic began. When they did keep contact, it was usually with their grandparents. However, COVID-19 restrictions in place at the time and fear of contamination made it hard for them to maintain this contact. I often go see my grandmother outside on her patio two meters away. Because you know, older people are more likely to… [be vulnerable.] Yeah, I don't take the risk of getting closer. [Gabriel] Group B participants initially indicated that having strong family relationships meant they were not really affected by the strains induced by the pandemic. Moreover, most of them reported limited consequences of the pandemic on their friendships because of their use of virtual communication technologies. It seems that while school closures made maintaining in-person contact with friends harder, virtual communication with friends helped preserve those friendships. During school, we called each other a lot, a lot, a lot, since we were doing homework together, that kind of thing. When the summer started we talked less and obviously with the pandemic, we did a lot less activities to avoid contact. So I spent a lot more time at home or at work than with my friends this summer, more than normal. But with all the social networks that exist today, I didn't feel like I said cut off from the rest of the world, that kind of stuff. [Lucas] Finally, we should also note that no Group B participants mentioned feelings of loneliness during the pandemic. Impacts seen from Castel's model of zones of vulnerability While we previously highlighted the disproportionate consequences of the pandemic on youth who already faced significant challenges, this section goes further by analysing the results based on Castel's zones of vulnerability model (1994). Considering the effects of the pandemic on educational and occupational trajectories, as well as on social relations, yielded the identification of four distinct 'paths' of integration that fall between the two poles of the social integration-disaffiliation continuum (see Figure 1): (1) Strong integration path, (2) Integration under stress path, (3) Risk of disaffiliation path and (4) Path towards disaffiliation. At one end of the continuum, we have 'social integration', meaning the individual is inserted in an educational and/or professional project as well as in a social support network. At the opposite end of the continuum, we have 'social disaffiliation', meaning the individual is not at all inserted in an educational and/or professional project and is not inserted at all in a social support network. Our results suggest that social support is central to inter-group variability in determining the characteristics of participants' paths, as is demonstrated by the details concerning the four integration paths below.
FIGURE 1 Illustration of a continuum of social integration for care leavers during a pandemic. Each arrow represents a direction towards both extremes (total integration and total disaffiliation). The coloured rectangles represent each of the four paths identified in this study as well as their position on the continuum.
Strong integration path Participants in this path, who are all in Group B (n = 16), seem to have relatively strongly established career and academic plans, despite the challenges encountered in their studies and the reduction in motivation linked to the pandemic. This strong social integration can be partly explained by the strong ties they maintain with their social circle and the support they receive from it. While they may have encountered obstacles, results suggest that the support network could be a protective element in overcoming these obstacles. Risk of disaffiliation path Some participants from Group A (n = 6) seem to be on a path suggesting risks of disaffiliation due to interruptions in their studies during the pandemic, causing some to find it difficult to continue their courses at a distance, compounded by the lack of material resources (e.g. computer equipment). Their work integration plan also seems impeded by the health measures that caused layoffs and job instability. For these young people, results suggest that the pandemic has weakened their social relationships considerably, leading in some cases to the suspension of relationships with members of their social circle. Path towards disaffiliation Several participants from Group A (n = 9) who seem to have experienced isolation could be heading towards social disaffiliation, if they are not already disaffiliated. They seem to be the most socially isolated and to lack an educational plan, which could be due to a lack of resources to invest in or resume their studies. They also seem to suffer from a high level of insecurity, characterized by an insufficient income and number of working hours, which leads some of the youth to report seeking work on the black market. The lack of a robust support network for these young people to navigate through the crisis is also evident in their dwindling connections with individuals in their social circle. Furthermore, when they receive support, it is typically provided by a significant other with whom they seem to have a dependent romantic relationship. Integration under stress path Participants from both Group A (n = 9) and Group B (n = 8) following this path show signs of precariousness. More concretely, this path is manifested by difficulties in pursuing studies, which may lead to dropout. It is also characterized by employment and financial insecurity as well as uncertainty about their life plans. However, this seems to be counterbalanced by the presence of a protective support network that enables them to continue pursuing their work and educational integration path and thus reduce the risk of isolation or even social disaffiliation. This support network also seems strong, as illustrated by its resistance to the pressures that social distancing measures placed on those relationships.
DISCUSSION AND CONCLUSION: THE IMPORTANCE OF SOCIAL SUPPORT IN PURSUING ASPIRATIONS AFTER LEAVING CARE This article seeks to understand how the COVID-19 pandemic affected the lives of youth with former foster care experience. Results suggest that the pandemic exacerbated pre-existing vulnerabilities of many of them in terms of their schooling, their integration into the workforce and their relational environment, in line with several studies that have examined the impacts of the pandemic (e.g. Greeson, Jaffee, & Wasch, 2020; Greeson, Jaffee, Wasch, & Gyourko, 2020; Lotan et al., 2020). However, results suggest that some participants (Group A) were more severely affected, many being exposed to difficulties in their schooling or in their integration into the workforce, which weakened their employment and financial situation. Nevertheless, the most important factor that seems to influence their ability to cope with the health crisis is the strength of their social support network. Indeed, when participants from both groups had access to support from both formal and informal groups, they reported having better capabilities to pursue their integration paths and resist the downward pressure imposed by the pandemic's restrictions. A heightened difficulty in accessing formal or informal support (Roberts et al., 2021) has led to isolation, consistent with the findings of other studies (e.g. Roberts et al., 2021; Ruff & Linville, 2021). In line with Castel's (1994) model, examining the educational and occupational paths of young people through a lens of social integration provides a more comprehensive understanding of how the pandemic affected them. This requires understanding how 'relational capital' generates the conditions for social integration (Martin, 2019). Thus, social relationships as a source of support, from both formal and informal networks, appear to play a central role in care leavers' integration paths. As Roberts et al. (2021) point out, isolation tends to increase among individuals who lack access to family or social support, and weak ties can lead to relationship breakdowns. Furthermore, results show that the ability to draw on a social support network is a real asset in enabling youth to better cope with the challenges they face as they transition to adulthood (Hedenstrom, 2021; Jones, 2014). They underline the need to understand how an individual's multiple life spheres interact with each other over the course of their life. However, while this study helps us to better understand how youth's multiple life spheres interact with each other during their lives, it has some limitations. First, considering the exploratory nature of this study, findings may not be transferable to all care leavers. Second, this study took place after the first few months of the pandemic and before multiple later periods that came with even harsher restrictions in Quebec, which, considering our results, had the potential to exacerbate even further the vulnerabilities that participants faced. Finally, the study solely focused on the perspectives of youth.
Despite these limitations, this article has several research and practical implications. It expands the existing literature on how the pandemic affected care leavers' life courses across multiple dimensions of their lives. Going further, future research should consider how gender identities, ethnic and cultural origins, as well as geographic areas of residence, also influenced young people's experience of the pandemic. This could enable a better understanding of the complex interplay of these dimensions for care leavers situated at their intersection. In addition, exploring the viewpoints of social and health service professionals and decision-makers would allow us to apprehend the intricate dynamics involved in providing support during times of crisis.

On a practical level, the results of this research provide insight for professionals working with care leavers into the challenges they faced during this crisis. In this regard, and in line with recommendations to recognize youth's right to express themselves and inform policymakers (CSDEPJ, 2021), the results invite professionals to actively advocate for the development of tools and strategies that foster youth participation throughout the intervention process. This could lead to the implementation of preventive measures and a support network for care leavers, addressing their needs upon leaving care and empowering them to effectively pursue their aspirations.
Gene Transfer to Chicks Using Lentiviral Vectors Administered via the Embryonic Chorioallantoic Membrane The lack of affordable techniques for gene transfer in birds has inhibited the advancement of molecular studies in avian species. Here we demonstrate a new approach for introducing genes into chicken somatic tissues by administration of a lentiviral vector, derived from the feline immunodeficiency virus (FIV), into the chorioallantoic membrane (CAM) of chick embryos on embryonic day 11. The FIV-derived vectors carried yellow fluorescent protein (YFP) or recombinant alpha-melanocyte-stimulating hormone (α-MSH) genes, driven by the cytomegalovirus (CMV) promoter. Transgene expression, detected in chicks 2 days after hatch by quantitative real-time PCR, was mostly observed in the liver and spleen. Lower expression levels were also detected in the brain, kidney, heart and breast muscle. Immunofluorescence and flow cytometry analyses confirmed transgene expression in chick tissues at the protein level, demonstrating a transduction efficiency of ∼0.46% of liver cells. Integration of the viral vector into the chicken genome was demonstrated using genomic repetitive (CR1)-PCR amplification. Viability and stability of the transduced cells was confirmed using terminal deoxynucleotidyl transferase (dUTP) nick end labeling (TUNEL) assay, immunostaining with anti-proliferating cell nuclear antigen (anti-PCNA), and detection of transgene expression 51 days post transduction. Our approach led to only 9% drop in hatching efficiency compared to non-injected embryos, and all of the hatched chicks expressed the transgenes. We suggest that the transduction efficiency of FIV vectors combined with the accessibility of the CAM vasculature as a delivery route comprise a new powerful and practical approach for gene delivery into somatic tissues of chickens. Most relevant is the efficient transduction of the liver, which specializes in the production and secretion of proteins, thereby providing an optimal target for prolonged study of secreted hormones and peptides. Introduction For several decades now, great effort has been invested in producing transgenic chickens [1][2][3]. Inherited biological and anatomical obstacles to avian transgenesis, arising from the unique anatomy of the avian reproductive system and a low rate of genomic incorporation of foreign DNA, have prevented the adaptation of protocols routinely used in mice. Therefore, alternative approaches were developed for chicken transgenesis, such as: (i) infection of primordial germ cells by viral injection into the subgerminal cavity of the newly laid egg [3][4][5], or at a later stage of development, upon primordial germ cell migration to the gonads through the circulation on embryonic day 2.5 (E2.5) [6]; (ii) injection of in vitro-modified embryonic stem cells or primordial germ cells, either employing non-viral vector systems, which allow insertion of large DNA fragments [7][8][9][10], or by utilizing viral vectors [11]. However, production of transgenic chickens using these approaches is much less efficient and more complex than transgenesis in other model animals, thereby preventing their routine use for research purposes. In contrast to the high complexity of the existing techniques for stable transgenesis in adult birds, transient transgenesis in chick embryos is widely used for developmental biology studies (for review see [12]). However, these approaches are not compatible with long term development and hatch. 
Among the reported viruses used for transduction of chicken cells, lentiviruses appear to be the most efficient [4,11]. Efficacy of lentiviral vectors in several clinical trials has been recently reported, such as use for gene-therapy studies in human cancer patients [13]. Lentiviral vectors are considered the preferred vector system for gene therapy due to their competence in transducing a wide variety of cell types, their unique ability to integrate into the genome of both dividing and non-dividing cells, and their considerable resistance to gene silencing, resulting in stable and long-term transgene expression [14][15][16][17][18]. The chorioallantoic membrane (CAM) is the site of respiratory gas exchange, calcium transport from the eggshell, acid-base homeostasis in the embryo, and ion and H 2 O reabsorption from the allantoic fluid. It consists of fused allantois and chorion membranes and is rich in blood vessels. Its position proximal to the shell membrane renders the CAM vasculature highly attractive for a variety of research purposes, such as the delivery of cells or chemicals to test tumor chemosensitivity [19], to study angiogenesis and metastasis [20], to evaluate drug-delivery systems in preclinical studies [21], and to assess the safety of cosmetic formulations [22]. In the current study, we present a new approach to gene delivery into somatic tissues of chickens via administration of lentiviral particles carrying either yellow fluorescent protein (YFP) or recombinant alpha-melanocyte-stimulating hormone (a-MSH) genes, into the CAM on embryonic day 11 (E11). Analysis of posthatch chicks showed that all of them expressed the transgene in various tissues, with highest levels of expression in the liver and spleen and lower levels in the brain, kidney, heart and breast muscle. The combination of a simple injection into the embryonic CAM and the use of an advanced feline immunodeficiency virus (FIV)-derived vector system comprise a unique and powerful method for gene delivery into somatic tissues of chicks. Ethics Statement All procedures were carried out in accordance with the National Institutes of Health Guidelines on the Care and Use of Animals and approved by the ''Animal Experimentation Ethics Committee'' of the ARO, Volcani Center (Protocol #356-0479-06). Eggs, Incubation and Hatching Conditions Fertile White Leghorn eggs were purchased from a local husbandry (Wolf-Weisman, Sitriya, Israel). Incubation was performed in a standard egg incubator, at 37.8uC and 56% relative humidity (RH). Eggs were incubated with their narrow end facing down, and rotated 90u once per hour. On E18, eggs were transferred to the hatching compartment in the incubator and incubation was continued at 37.8uC and 70% RH. Hatchability was 90% for untreated eggs. pLionII-a-MSH was constructed by digesting pLionII (Addgene, plasmid #1730) with EcoRV and inserting, downstream of the CMV promoter, a blunted BamHI fragment containing the sequence encoding human a-MSH from the plasmid pACTH1-17 (kindly donated by Dr. M.L. Hedley [23]). The a-MSH coding sequences in this construct are composed of selected segments of the human pro-opiomelanocortin (POMC) gene (signal peptide, sorting peptide, partial junction peptide, a-MSH-encoding sequence and a 12 base-pairs (bp) sequence encoding the a-MSH amidation signal [23]). The full sequence of pLionII-a-MSH (pLionII-pACTH1-17) was submitted to GenBank under accession number: BankIt1497321 seq JQ086322. 
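As a small illustration of the cloning logic described above (blunt insertion of a BamHI fragment into the EcoRV site of pLionII downstream of the CMV promoter), the following Python sketch locates restriction-enzyme recognition sites in a sequence string. Only the recognition motifs (EcoRV: GATATC; BamHI: GGATCC) are standard facts; the plasmid sequence in the example is a made-up placeholder, not the actual pLionII sequence.

```python
# In-silico check of restriction sites relevant to the cloning strategy above.
# The plasmid sequence here is a short placeholder, not the real pLionII sequence.

RECOGNITION_SITES = {
    "EcoRV": "GATATC",   # blunt cutter used to open pLionII
    "BamHI": "GGATCC",   # the insert was excised as a (blunted) BamHI fragment
}

def find_sites(sequence, motif):
    """Return 0-based positions of every occurrence of a recognition motif."""
    sequence, motif = sequence.upper(), motif.upper()
    positions, start = [], 0
    while (hit := sequence.find(motif, start)) != -1:
        positions.append(hit)
        start = hit + 1
    return positions

placeholder_plasmid = "AAGGATCCTTGCAGGATATCGGTACCGGATCCAA"  # assumed toy sequence
for enzyme, motif in RECOGNITION_SITES.items():
    print(enzyme, find_sites(placeholder_plasmid, motif))
```

In practice such checks are normally done with a plasmid-editor tool, but a plain string search is enough to confirm that a single EcoRV site is available for blunt insertion.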
In vitro Transduction of Chicken Cells in Primary Cultures with FIV-YFP Liver, spleen, kidney, brain, heart and breast muscle (pectoralis) were excised from E11 chicken embryos and dissociated with 2 mg/ml collagenase II, 0.15 mg/ml DNase I, 100 mg/ml streptomycin and 100 U/ml penicillin in HBS (20 mM Hepes, 137 mM NaCl, 3 mM KCl, pH 7.4) for 20 min at 37uC, with pipetting every 5 min. Dissociated cells were filtered through a 70 mm cell strainer (BD Falcon, Bactlab Diagnostics Ltd, Caesarea, Israel) to enrich for a single-cell suspension, and treated with red blood cell (RBC) lysing buffer (Sigma-Aldrich). Enrichment against fibroblasts was performed by pre-plating the cells for 20 min in a standard 24-well plate (Corning) before transferring the nonattached cells to new plates coated with 0.01% (w/v) calf skin collagen solution (Sigma-Aldrich). All cell types were cultured in DMEM supplemented with 10% FBS. Liver culture medium was also supplemented with ITS (Sigma-Aldrich), containing 5 mg/ml recombinant human insulin, 5 mg/ml human transferrin and 5 ng/ ml sodium selenite. Twenty-four hours after plating, cells were transduced by adding 300 ml of medium containing 2610 4 TU of FIV-YFP, supplemented with polybrene (8 mg/ml; Sigma-Aldrich) for 24 h. Two days after transduction, cells were washed with PBS and fixed with 4% (w/v) paraformaldehyde (PFA) for 10 min or analyzed by flow cytometry. Digital images were taken using a Nikon Eclipse TS100 microscope, equipped with an Olympus DP72 camera and Olympus DP controller software. Estimation of Transduction Efficiency by Flow Cytometry To estimate the percentage of YFP-expressing cells in cell culture and chicken tissues, cells were analyzed using the LSRII flow cytometer with FACSDiva software (BD Biosciences, Ness-Ziona, Israel). Cultured cells (.1.2610 5 ) were trypsinized, washed with PBS and immediately analyzed. For the liver tissue, pieces of excised liver were minced with scissors and a single-cell suspension was generated as described above and immediately analyzed. Nucleic Acid Extraction DNA was extracted from cells in culture or tissues using DNA lysis buffer (10 mM Tris, 10 mM EDTA, 0.5% SDS, 200 mg/ml proteinase K) at a ratio of 0.5 ml per well or per 0.05 g tissue, and placed at 55uC in a rotating shaker for 5 h or overnight, respectively. RNA was eliminated by incubation of the DNA samples with RNase at a final concentration of 25 mg/ml at 37uC for 1 h, followed by extraction with phenol-chloroform and ethanol precipitation. Total RNA was extracted using RNAzol B solution (Tel-Test Inc., Talron, Rehovot, Israel) according to the manufacturer's instructions. Briefly, 0.5 g of tissue, or subconfluent culture from a six-well plate, was homogenized in 0.5 ml RNAzol B solution using a Polytron PT3000 homogenizer (Kinematika, Labotal, Jerusalem, Israel). After centrifugation, the RNA in the upper phase was re-extracted with a phenol-chloroform solution and precipitated with ethanol. Turbo DNA-free kit was used for DNA elimination according to the manufacturer's protocol (Ambion, Agenteck, Tel-Aviv, Israel). The concentration and integrity of the extracted RNA were determined by spectrophotometry and gel electrophoresis, respectively. Figure S1), and this amplification is therefore specific for the recombinant a-MSH. 
The cycling protocol was: 94uC denaturation for 3 min, followed by five cycles of 94uC denaturation (30 s), 65uC annealing (60 s) and 72uC extension (30 s), another five cycles as above but with annealing temperature of 60uC, and an additional 25 cycles with annealing temperature of 55uC. RT reactions were carried out using 2 mg total RNA as template and a high-capacity cDNA reverse transcription kit (Applied Biosystems, Agentek, Yavne, Israel) according to the manufacturer's protocol. PCR and RT-PCR Detection of FIV Integration into the Chicken Genome using Repetitive DNA (CR1) PCR Genomic DNA was prepared 14 days after in vitro transduction of primary cell cultures from E11 embryo liver and muscle tissues, with FIV-YFP at multiplicity of infection (MOI) of one. The initial PCR was performed in a 20 ml reaction volume, using 100 ng template genomic DNA with one of the following CR1 primers: CR1-1 F: 59-TGGTTGGGTTGGAAGGGACC-39, R: 59 GGTCCCTTCCAACCCAACCA-39; CR1-3 F: 59-TCCATGGCCTTGGGCACATC-3, R: 59-GATGTGCC-CAAGGCCATGGA-39, in combination with one of the long terminal repeat (LTR)-specific primers encoded by FIV-YFP: right LTR F: 59-GGAGTCTCTTTGTTGAGGAC-39, left LTR R: 59-CGAAGTTCTCGGCCCGGATTCC-39. Control reactions contained the same DNA templates and LTR-specific primers but lacked the CR1 primers. Altogether, for each template DNA, eight CR1-LTR reactions for the first PCR and two control LTR reactions were performed. Additional controls consisted of DNA template from non-transduced liver and muscle cells. The protocol of amplification was as described above but elongation was extended to 8 min at 68uC to allow for long range amplification, using the BIO-X-ACT Long DNA Polymerase (Bioline, Origolab, Jerusalem, Israel). A second, ''nested'' PCR was carried out with 1 ml of a 1:1000 dilution of the first PCR product as template and nested primers for the left and right FIV-LTRs: left nested LTR F: 59-GGAGTCTCTTTGTTGAGGAC-39, R: 59-ATTCCGA-GACCTCACAGGTA-39 and right nested LTR F: 59-CTCCCTTGAGGCTCCCACAG-39, R : 5 9-CGAAGTTCTCGGCCCGGATTC-39. qPCR qPCR was performed using Fast SYBR Green Master Mix (Applied Biosystems) according to the manufacturer's protocol. The cDNA templates of the indicated tissues (2 ml) were used with primers specific for the recombinant sequence of a-MSH (F: 59-TTGCTGGCCTTGCTGCTT-39 and R: 59-GCACTCCAG-CAGGTTGCTTT-39), resulting in a 101 bp fragment. The forward primer was designed according to the adjacent virusderived sequence, placed upstream of the ATG signal. Therefore, these primers were specific to the exogenous a-MSH sequence (the position of the primers is illustrated in Figure S1). The YFP primers used were F: 59-TCAGCTCGATGCGGTTCAC-39 and R: 59-GTCCAGGAGCGCACCATCT-39, giving rise to a 99-bp amplicon. Gene expression was normalized to a housekeeping gene encoding chicken ribosomal 17 S protein (accession number X07257). The specific primers F: 59-GACCCGGACACCAAG-GAAAT-39 and R: 59-GCGGCGTTTTGAAGTTCATC-39 give rise to a 100 bp product. Standard curve slope for these primers was -3.331 and R 2 was 0.994. Another set of primers for the 17 S ribosomal protein gene (F: 59-AAGCTGCAGGAGGAGGA-GAGG-39 and R: 59-GGTTGGACAGGCTGCCGAAGT-39, giving a 136-bp amplicon, gave similar results (not shown). Normalization with GAPDH (F: 59-GCACCAC-CAACTGCCTGG-39 and R: 59-CTGTGTGGCTGTGATGG-CAT-39 giving a 100 bp amplicon) gave essentially similar results, but expression of GAPDH was lower in the spleen than in the liver (not shown). 
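Given the number of primer pairs listed in this section, a quick sanity check of GC content and approximate melting temperature can be useful. The sketch below applies the simple Wallace rule (2°C per A/T, 4°C per G/C), which is only a crude approximation compared with nearest-neighbour thermodynamic models; the sequences are copied from the primers listed above.

```python
# Rough primer QC: GC fraction and Wallace-rule Tm (2*(A+T) + 4*(G+C)).
# A coarse approximation only; real assay design would use a nearest-neighbour model.

PRIMERS = {
    "CR1-1 F":      "TGGTTGGGTTGGAAGGGACC",
    "right LTR F":  "GGAGTCTCTTTGTTGAGGAC",
    "a-MSH qPCR F": "TTGCTGGCCTTGCTGCTT",
    "a-MSH qPCR R": "GCACTCCAGCAGGTTGCTTT",
}

def gc_fraction(seq):
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

for name, seq in PRIMERS.items():
    print(f"{name:>14}: len={len(seq)}  GC={gc_fraction(seq):.0%}  Tm~{wallace_tm(seq)} C")
```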
PCRs were performed using the StepOnePlus TM Real-Time PCR System (Applied Biosystems) with the following cycling protocol: 95uC denaturation for 10 min, followed by 40 cycles of 95uC denaturation (15us), 60uC annealing (40us) and 72uC extension (30 s). At the end of the real-time PCR, a melting curve was determined to verify the presence of a single amplicon. Relative quantification was calculated using the 2 2DCt method. All experiments were run in triplicates and repeated until variations between repeats were below 10%. The PCR products were purified and fragment identity was determined by sequencing. Immunofluorescence Tissue samples were fixed in 4% PFA in PBS overnight at 4uC. For analysis of paraffin sections, tissues were embedded in paraffin and sliced into 10-mm sections using a Leica RM2255 microtome. Slides were treated with 3% H 2 O 2 in PBS for 30 min at room temperature, blocked with 10% goat serum in PBS for 1 h, and incubated with rabbit anti-GFP (diluted 1:500, Invitrogen, Rhenium, Israel) at 4uC overnight. Following washes in PBS-0.05% Tween (PBS-T), slides were incubated with goat anti-rabbit Alexa 488 antibody (diluted 1:500, Invitrogen) at room temperature for 1 h, and washed with PBS-T. For labeling of smooth muscle actin (SMA), slides were similarly incubated with mouse anti-SMA antibody (diluted 1:200, DAKO, Enco, Petach-Tikvah, Israel), followed by goat anti-mouse Alexa 594 antibody (diluted 1:500, Invitrogen). Finally, sections were stained with 49,6diamidino-2-phenylindole (DAPI) and digital images were taken using a Nikon Eclipse e400 upright microscope, equipped with an Olympus DP72 camera and Olympus DP Controller software. Confocal images were obtained using an Olympus IX81 inverted laser scanning microscope with Fluoview-500 software. For whole-mount immunofluorescence analysis of excised tissue pieces (sized 5-10 mm), double-labeling was performed according to Alanentalo et al. [24], except that rabbit anti-GFP (diluted 1:700, Invitrogen) and mouse anti-SMA (diluted 1:200) antibodies were used, followed by goat anti-rabbit Alexa 488 or anti-mouse Alexa 594 antibodies (both diluted 1:700). Clearing of the doublelabeled pieces of liver tissue with benzyl alcohol and benzyl benzoate (Sigma-Aldrich) was performed as described previously [24]. Images were taken using the Olympus SZX16 epi-fluorescent stereomicroscope equipped with an Olympus DP72 camera. Analysis of Cell Viability and Proliferation Apoptosis of YFP-expressing cells was examined in paraffin sections using the In Situ Cell Death Detection Kit, TMR red (Roche Diagnostics; Dyn Diagnostics Ltd., Caesarea, Israel), which stains nicked DNA by end labeling of terminal deoxynucleotidyl transferase dUTP (TUNEL). Cell proliferation was demonstrated by immunofluorescence using anti-proliferating cell nuclear antigen (anti-PCNA, diluted 1:200, Dako) followed by staining with goat anti-mouse Alexa 594 antibody (diluted 1:500). Detection of a-MSH Peptide by Radioimmunoassay (RIA) Production of a-MSH peptide by HEK293T cells was examined 10 days after transduction with FIV-a-MSH vectors, using a RIA kit (EURIA-a-MSH; Euro Diagnostica AB, Malmo, Sweden) according to the manufacturer's protocol. HEK293T cells transduced with FIV-YFP vectors served as a control. 
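For the relative quantification step described above (normalization of target Ct values to the 17S ribosomal protein reference, with reactions run in triplicate), a minimal sketch of the calculation is shown below. The Ct values used are illustrative only and are not data from the study.

```python
# Minimal sketch of 2^-deltaCt relative quantification, normalizing a target
# gene to a housekeeping gene. Ct values are invented for illustration.
from statistics import mean

def relative_expression(ct_target, ct_reference):
    """2^-(mean Ct_target - mean Ct_reference)."""
    delta_ct = mean(ct_target) - mean(ct_reference)
    return 2.0 ** (-delta_ct)

# Hypothetical triplicate Ct values for YFP and the 17S ribosomal reference
# in a single liver sample.
yfp_ct = [27.8, 27.9, 28.1]
ribo17s_ct = [19.5, 19.6, 19.4]
print(f"Relative YFP expression (arbitrary units): "
      f"{relative_expression(yfp_ct, ribo17s_ct):.5f}")
```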
Conditioned medium of 10 cm culture dish, incubated for 24 h with 6 ml of Opti-MEM culture medium (Gibco, Rhenium), was collected and cell extracts were prepared using 300 ml of CHAPS lysis buffer (1% w/v CHAPS, 30 mM Tris pH 7.5, 150 mM NaCl, Complete protease inhibitor cocktail from Roche Diagnostics, 50 mg/ml DNase and 10 mg/ml RNase. After 10 min incubation on ice, cell debris was pelleted by centrifugation at 10,000 g for 10 min. Supernatant was kept at 220uC until assay. Cell extracts and conditioned media were analyzed in 100 ml samples diluted 3-and 10-fold (using the supplied dilution buffer) and results were calibrated with the a-MSH titration curve supplied with the RIA kit. Statistical Analysis Statistical analyses were performed by one-way ANOVA and Tukey-Kramer honestly significant difference test. Comparison of hatching rates of the different experimental treatments was performed using the chi-squared test. Transgene Expression in Transduced Cells of Embryonic Primary Cultures To test the efficacy of FIV particles for gene delivery into somatic cells of chick embryos, primary cultures of liver, spleen, kidney, brain, heart and muscle tissues were prepared from E11 chick embryos, and transduced with FIV-YFP (6.5610 4 TU/ml). Expression of YFP was demonstrated 48 h later in all transduced cell types (Figure 1). Quantification of the results by flow cytometry (FACS) analysis revealed 1.3, 1.2, 2.8, 2.7, 2 and 4.7% YFPexpressing cells in the liver, spleen, kidney, brain, muscle and heart primary cultures, respectively ( Figure S2). These results provide a first demonstration of FIV particle transduction of various types of primary cultured cells derived from chick embryos in vitro. In vivo Transduction of Chicken Embryonic Tissues with Lentiviral Particles The procedure used for the introduction of foreign genes into chicken tissues in ovo via administration of lentiviral particles is presented in detail in Figure 2. Recombinant lentiviral particles, harboring genes of interest, are injected into a selected CAM vein through the eggshell membrane after removal of a small piece of the calcified shell. The shell is re-sealed and embryos are incubated to hatching. Hatchability rate was reduced by an average of 9% in PBS or viral injected eggs, as compared to nontreated eggs following this procedure ( Figure 2D). Transgene Expression in Tissues of Hatched Chicks Following in ovo Transduction with FIV-YFP Following in ovo transduction with FIV-YFP (Figure 2), various tissues from post-hatch chicks were analyzed for reporter gene expression by immunostaining using anti-GFP antibody (which also recognizes YFP that differs from GFP only by a Y66W substitution). As demonstrated in Figure 3, clusters of YFPexpressing cells were detected by whole-mount immunostaining on pieces of liver tissue excised on days 2 and 40 post-hatch ( Figure 3B & C, respectively) from FIV-YFP-treated chicks, but not from control chicks treated with PBS ( Figure 3A) or with FIV particles, carrying another cDNA (a-MSH, data not shown). Similar clusters of YFP-expressing cells were also detected by immunostaining of paraffin sections of liver from 2-day-old chicks treated with FIV-YFP ( Figure 3D & E). Images show YFP staining in cells with characteristic hepatocyte morphology ( Figure 3D) as well as cells associated with blood vessels ( Figure 3E). Transduction of cells associated with blood vessels in the liver was confirmed by doublestaining with anti-GFP and anti-SMA antibodies ( Figure 3H). 
An additional view of YFP-expressing cell clusters, which predominantly have characteristic hepatocyte morphology, is demonstrated in 3D images of liver tissue, stained with anti-GFP (Movie S1) or double stained with anti-GFP and anti-SMA antibodies (Movie S2). Another demonstration of the typical morphology of hepatocytes is provided in Figure S3. These images show that both GFP-positive and negative cells seem similar, as well as hepatic tissue from control non-transduced chicks. YFP-expressing cells were also observed in spleen sections of FIV-treated chicks ( Figure 3F & G). A higher frequency of YFPexpressing cells was observed, but in small clusters of two or three cells each. Examination of paraffin sections from kidney, brain, heart and muscle tissues revealed sporadic YFP expressing cells (data not shown). Yet, the appearance of YFP expression in these tissues was significantly lower compared to liver and spleen. To estimate the number of cells in a representative YFP positive cluster, several serial sections of paraffin-embedded liver sections were analyzed (Figure 4). The signals were observed at the same position in several serial sections indicating the specificity of the detecting antibody and the three-dimensional structure of the cluster. Given the estimation of 12mm diameter for a chicken liver cell [25], it can be assumed that most of the nuclei in each of the consecutive 10 mm tissue slices ( Figure 4E, arrowheads) represent different cells in the cluster. For a rough estimation of the number of cells in a cluster, the numbers of transduced cells' nuclei per section in Figure 4A-E were 9, 10, 11, 12, & 8, respectively, resulting in 50 nuclei of YFP-expressing cells. To estimate the proportion of transduced cells in the liver, liver cells were prepared from excised tissue samples and analyzed by flow cytometry. As shown in Figure 5, the average percentage of transduced cells was 0.4660.19 (n = 3), a proportion similar to that reported in mice following lentiviral transduction via tail injection [26]. Further examinations of liver sections were performed to verify the viability of the transduced cells ( Figure 6). The possibility of cell death through apoptosis was examined using TUNEL assay. As demonstrated in Figure 6A & B, only very few TUNEL-positive signals were obtained in sections of either FIV-YFP transduced chicks or controls, respectively. None of these appeared to colocalize with YFP-expressing cells. The images in Figure 6 represent one example of 10 slides that were analyzed. Immunostaining using anti-PCNA antibody, which is directed against the auxiliary protein of DNA polymerase delta, was performed to indicate cell proliferation. As shown in a representative image in Figure 6C, some PCNA signals were found to colocalize with YFP fluorescence. The PCNA and TUNEL analyses confirmed viability of the YFP-transduced cells in the hatching chicks. These results are compatible with the detection of YFP signals in pieces of liver excised from 40-day-old chickens (51 days post transduction), analyzed by whole-mount immunostaining ( Figure 3C). Taken together, these data demonstrate that in ovo injection of lentiviral particles leads to stable transduction of somatic tissues that can be detected post-hatch, primarily in the liver and spleen. 
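The cluster-size estimate described above can be reproduced as a short worked calculation. The per-section nuclei counts, the 10 µm section thickness and the ~12 µm hepatocyte diameter are the values quoted in the text (units read here as micrometres); the sketch simply sums the counts and notes why each nucleus is treated as a distinct cell.

```python
# Worked example of the cluster-size estimate from serial liver sections.
# Per-section nuclei counts are those reported for Figure 4A-E; section thickness
# and hepatocyte diameter follow the values quoted in the text.

nuclei_per_section = [9, 10, 11, 12, 8]   # YFP-positive nuclei in consecutive sections
section_thickness_um = 10.0               # paraffin section thickness
hepatocyte_diameter_um = 12.0             # approximate chicken hepatocyte diameter

total_nuclei = sum(nuclei_per_section)                              # -> 50
cluster_depth_um = len(nuclei_per_section) * section_thickness_um   # -> 50 um

print(f"Nuclei counted across sections: {total_nuclei}")
print(f"Depth spanned by the cluster:   {cluster_depth_um:.0f} um")

# Because the section thickness (~10 um) is close to the hepatocyte diameter
# (~12 um), most nuclei are assumed to appear in only one section, so the summed
# count (~50) is taken as a rough estimate of distinct cells in the cluster.
```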
Transgene Expression Following Transduction with FIV-a-MSH in vitro and in ovo

Once we had successfully observed YFP expression using the CAM-injection approach, we set out to validate our findings by analyzing the expression of a functional gene, encoding the secreted a-MSH peptide. This peptide has pleiotropic effects on energy homeostasis, demonstrated mainly in mammalian species [27]. The presence of a-MSH was first analyzed at the DNA and mRNA levels following transduction of HEK293T cells in culture with a FIV vector carrying a-MSH (FIV-a-MSH). The a-MSH sequence was detected in genomic DNA samples extracted from cells harvested 24 and 72 h after transduction, using primers specific for the recombinant a-MSH (Figure 7A). A further indication of the specificity of the PCR results was obtained by purification and sequencing of the PCR product (not shown). Expression of a-MSH mRNA in the transduced cells was next examined by RT-PCR. Transcripts of the a-MSH-encoding sequence were detected in transduced HEK293T cell cultures 20 days post-transduction (Figure 7B). The controls included: cDNA with no RT enzyme, a reaction lacking template cDNA, and cDNA from non-transduced cells. GAPDH-specific primers were used as an additional control to demonstrate the integrity of the mRNA samples of the transduced and non-transduced cells (Figure 7C). Next, production and secretion of a bioactive a-MSH peptide was assessed by RIA (Figure 7D). Cell extracts and conditioned media of HEK293T cells transduced with FIV-a-MSH produced RIA-detectable peptide (852 and 1012 ng/ml, respectively). In contrast, no RIA signal could be obtained in control cells transduced with FIV-YFP (Figure 7D).

[Figure 2. Schematic representation of the gene-transfer procedure used to introduce foreign genes into chicken tissues. A. A gene of interest (such as YFP or a-MSH) is subcloned into the FIV-derived plasmid, pLionII, and the recombinant pLionII plasmids are used as part of a three-plasmid system to produce the corresponding viral particles (such as FIV-YFP or FIV-a-MSH). B. E11 embryos are illuminated in a dark room and a prominent blood vessel of the CAM, approximately 1 cm below the air sac, is marked with a pencil. An oval window of approximately 5×3 mm is carefully drilled into the eggshell around the marked blood vessel, using a Dremel 300JD multitool with an aluminum-oxide grinding stone (Dremel, Polack Supply, Haifa, Israel), and the drilled shell piece is gently removed using fine forceps. Special care is taken to avoid damaging the underlying eggshell membrane. The egg is then stabilized vertically and recombinant lentiviral particles (2×10^6 TU in 100 µl PBS) are injected through the transparent eggshell membrane into the blood vessel, using a 30 G needle. C. After hot-glue sealing of the eggshell window [54], eggs are returned to the incubator until hatch. Scattered clusters of YFP-expressing cells in the liver are illustrated in green. D. The table indicates the hatching rates determined for control non-injected chicks and chicks injected with PBS or viral particles. The hatching rate of injected embryos was 81.8%, which, compared with the 90% hatching rate of the un-injected controls, represents a 9% reduction in hatch. WPRE, woodchuck hepatitis post-transcriptional regulatory element; pCMV, cytomegalovirus promoter. doi:10.1371/journal.pone.0036531.g002]
These results indicate our ability to produce active FIV-a-MSH particles, which are capable of directing the production and secretion of a-MSH in transduced cells. Following the confirmation of a-MSH expression, production and secretion in cell culture, chick embryos were treated with the same viral particles in ovo. Samples from liver, spleen, kidney, brain, heart and muscle were excised two days post hatch and subjected to qPCR using primers that specifically recognize the recombinant a-MSH transcripts and not those of the endogenous POMC ( Figure S1). The transgene mRNA was detected in the FIV-a-MSH treated chicks (Figure 8). The level of expression was highest in the spleen and liver (48.4617.6 and 19.363.3, respectively, in arbitrary units). Lower expression was observed in the kidney (2.360.9) and brain (1.360.2), while no expression could be detected in the heart or muscle. No expression of recombinant a-MSH was detected in tissues collected from control chicks treated with FIV-YFP virus ( Figure 8). Moreover, liver samples that were taken from 40 days old chickens (51 days after injection) also demonstrated a-MSH expression, indicating the stability of the transduction. Similar results were obtained also with tissues excised at day 2-post hatch from FIV-YFP treated chicks using YFP-specific primers (Figure 8, insert) with higher relative expression in the liver and spleen. Demonstration of Transgene Integration into the Chicken Genome Using CR1-PCR Primers matching the consensus sequences of the repetitive DNA elements were originally used by Coullin et al. [28], for primed in-situ labeling analysis. We employed the same primer sequences to detect transgene integration into the chick genome ( Figure 9). For preparation of genomic DNA samples, liver and muscle cell cultures of E11 chick embryos were transduced with FIV-YFP, and 14 days later cells were harvested for DNA isolation. As schematically demonstrated in Figure 9A, the amplification of virus-derived LTR sequences in the second step of the PCR critically depended on amplification by the CR1 and LTR primers in the first step. As demonstrated in Figure 9B, PCR product of the expected size was obtained using the nested primers only in genomic DNA samples of the FIV-YFP transduced liver and muscle cell cultures, and not in the non-transduced cell culture controls. Therefore, this analysis demonstrates that transduction of chicken cells results in integration of the FIV-associated transgene into the host cell genome. Altogether, these data demonstrate the feasibility of our simple and original manipulation in chick embryos for the introduction of foreign genes into chicken somatic tissues, primarily liver and spleen. Discussion We present a novel approach for the introduction of foreign genes into somatic tissues of chickens, by injection of FIV-derived lentiviral particles into the CAM vasculature of embryos. Recent progress has been reported in the successful production of transgenic chicken lines [3][4][5][6]9,29]. However, these approaches are highly complicated and costly [1,3], preventing their routine laboratory use for studies of gene function. The advantages of the technique described herein stems from its simplicity and high rates of transgene transfer and chick hatchability. 
The well-documented advantages of lentiviral vectors, particularly their ability to integrate into the host-cell genome and their resistance to downregulation by the endogenous immune system or other cellular mechanisms [30,31], strongly support the potential of this approach for gene transfer in chickens. This technique provides the first affordable tool for constitutive production of secreted proteins and peptides from the liver, for the study of their long-term effects. This technique is expected to advance molecular-level avian endocrinology research and enable identification of target proteins with importance to agriculture-oriented research, developmental biology and evolution, among other fields. For evolution studies in particular, the chicken provides an important perspective due to its evolutionary position between reptiles and other vertebrates.

[Figure 3. Immunofluorescence analyses of YFP expression in the liver and spleen of post-hatch chicks following in ovo FIV administration. Whole-mount immunostaining with anti-GFP antibody (green) and anti-SMA antibody (red) was performed on pieces of livers from 2-day-old chicks treated with either PBS (A) or FIV-YFP (B), and from 40-day-old FIV-YFP-treated chicks (C). Images were obtained using an epifluorescent stereomicroscope. Scale bar = 200 µm for A and C, and 0.5 mm for B. Immunostaining of paraffin sections with anti-GFP antibody was performed on liver (D & E) and spleen (F & G) tissues from 2-day-old chicks treated with FIV-YFP. For each of these sections, DAPI staining (blue) is shown in the corresponding images (D'-G') to indicate cell nuclei. Arrowheads mark nuclei of YFP-expressing cells with splenocyte morphology (F'). Merged YFP and DAPI staining is shown in the corresponding D''-G''. Images were obtained by confocal microscopy except for E, E' and E'', which were obtained with an epi-fluorescent microscope. The arrow in E'' indicates a YFP-expressing cell with endothelial morphology located next to transduced cells with hepatocyte morphology. H. Confocal images of a whole-mount immunostained liver sample, using both anti-GFP (H) and anti-SMA (H') antibodies, confirmed the association of some of the YFP-expressing cells with blood vessels, which appear in yellow in the merged image (H''). Scale bar = 20 µm (D-G) and 50 µm (H). doi:10.1371/journal.pone.0036531.g003]

Although several types of lentiviruses, similarly pseudotyped with VSV-G, have been previously used in chick embryos [4], the use of FIV-based vectors for the delivery of foreign genes into chicken tissues is demonstrated here for the first time. Therefore, we first characterized the susceptibility of several chick embryonic tissues in culture to FIV transduction. Cells derived from embryonic liver, spleen, kidney, heart, brain and breast muscle tissues were transduced with a low dose of FIV-YFP, and the analysis indicated for the first time the susceptibility of the various chick cell types examined to transduction by FIV-derived particles. Injection of the lentiviral vectors into E11 chick embryos through the CAM vasculature resulted in a more restricted tissue-specific expression profile, with significantly higher relative levels of YFP and a-MSH expression in the liver and spleen, as detected by immunostaining for YFP, and by qPCR for both YFP and a-MSH. This transgene-expression profile is in accordance with findings in rodents following lentiviral administration by tail vein injection to adult and neonatal mice [26,32,33].
Given the similar tropism for FIV transduction of these cell types in vitro, it seems logical to assume that the observed profile of tissue transduction following lentivirus application in vivo does not reflect viral tropism, but probably structural differences in the organization of the vasculature of the relevant tissues in the animal. Such a structural explanation could be provided by the fenestrated capillaries characteristic of the spleen, liver and myeloid bone marrow [34], which seem wide enough to facilitate penetration of the lentiviral particles (approximately 100 nm in diameter) [35] from the circulation to the cells of these tissues. Relatively high efficiency of lentiviral transduction in the liver was demonstrated in a variety of ways: (i) whole-mount immunostaining of pieces of liver tissue from 2-and 40-day-old chickens, (ii) immunostaning of paraffin sections, (iii) 3D confocal imaging, (iv) flow cytometry, and (v) qPCR. This relatively high efficiency of transduction is important, since the liver is among the largest tissues in the body and is highly specialized in processing secreted proteins. Therefore, manipulation of the chicken's liver by introducing genes encoding secreted proteins will provide a highly useful tool for endocrinological and other studies. Moreover, the pattern of transduction of the liver tissue, characterized by cell clusters, might provide a unique tool to better understand the process of liver regeneration, as a potential model for therapeutically oriented studies of liver diseases [36]. The immunofluorescence analyses, with whole-mount specimens and paraffin sections, indicated transduction of cells with characteristic hepatic morphology as well as with characteristics of other cell types associated with blood vessels. The presence of YFP in cells associated with blood vessels, in addition to hepatocytes, was expected, since viruses were injected into the chick circulation. While clusters of transduced cells in the liver of 2-day-old chicks were estimated to consist of about 50 cells, clusters in the spleen were much smaller, with only a few cells each. This difference is compatible with the different cell-multiplication rates in these tissues during the period of chick development between E11 and day 2-after hatch [37], and support the hypothesis that the appearance of transduced cells in clusters means that each infected cell transmits the transgene to its daughter cells. The smaller clusters of transduced cells in the spleen were seen at a higher density in each slide (not shown), compatible with the similar overall range of expression level obtained in the qPCR of the spleen and liver with the YFP and a-MSH specific primers. Furthermore, flow cytometry analysis demonstrated 0.4660.19% infected cells in the liver, similar to the frequencies reported following injection of lentiviruses into mice tails [26]. The pACTH1-17 expression cassette used in the present study to generate the pLionII-a-MSH construct and the FIV-a-MSH lentiviral particles has been used previously to direct the production and secretion of a bioactive a-MSH peptide in rodents [23,38,39]. The production of a bioactive peptide was demonstrated in those reports through a-MSH protective effect against liver fibrosis [39], ocular autoimmune diseases [38], as well as experimental encephalomyelitis [23]. Here we demonstrate that FIV-a-MSH particles can transduce HEK293T cells, leading to the detection of exogenous a-MSH sequences by PCR, RT-PCR, and qPCR. 
In addition, we provide the first RIA identification of an active a-MSH product of the FIV-a-MSH. RIA detection of a-MSH peptides in both cell lysate and conditioned medium of HEK293T cells, transduced with FIV-a-MSH, demonstrate that a-MSH production and secretion can be directed through FIV transduction. In mammals, a-MSH has been shown to participate in the control of energy homeostasis, as it is a primary target of the satiety hormone leptin [27], and to be involved in pigmentation and regulation of the immune response [40,41]. In chickens, data on the physiological roles of a-MSH were obtained in our and others' laboratories, mainly by characterization of short-term physiological effects following a single administration of a-MSH or its synthetic analogs to young chicks [42][43][44][45][46][47][48]. Therefore, we strongly believe that the technique presented here will dramatically advance the study of a-MSH in chickens, as a proof of concept, by providing an assay system for long-term studies in vivo. Integration of FIV-derived cDNA into the host cell genome was demonstrated using a CR1-PCR approach, which makes use of repetitive genomic sequences to amplify the ends of the viral LTR sequences. This approach is based on the Alu-PCR protocols used in our previous study to demonstrate genomic integration of FIVderived sequences in mammals [16], and is applied here to the chicken genome for the first time, by making use of previously characterized CR1 consensus sequences [28]. The CR1 repeats are the most abundant repeat family in avian species, belonging to long interspersed nuclear elements with more than 200,000 copies, accounting for about 80% of the chicken interspersed repeats [49]. They are significantly less abundant than the Alu sequences in the human genome, estimated to be about a million per haploid genome [50]. The fact that this approach gave the expected results despite this differential abundance might support the speculation that the spread of CR1 sequences, and possibly also the integration of viral vectors, has some bias in favor of sites of transcriptionally active chromatin. Notably, demonstration of transgene integration into the chick genome supports the stability and robustness of our in ovo transduction approach, and is in accordance with our Future use of lentiviral particles to deliver transgenes using our approach might entail using liver-specific promoters, such as the human a-antitrypsin (hAAT) [51], to restrict transgene expression to the liver. The advantages of using these promoters are twofold: first, it reduces possible undesired effects due to expression in other tissues, and second, their nonviral origin reduces the chances of downregulation of the transgene [32], which was reflected to some extent in the current study by the lower qPCR signals obtained for the livers of 40-day-old vs. 2-day-old chickens. We have previously demonstrated prolonged and stable hAAT-driven gene expression in liver cell lines and murine livers using FIV vectors [16,33]. In addition, we have reported efficient hAAT-driven gene expression in chicken liver using a naked DNA-delivery system [52]: by using the hydrodynamics-based gene-transfer method to direct gene expression in chick liver, we detected physiologically significant levels of transgene (human coagulation factor IX) in the circulation using the hAAT promoter [52]. 
This naked DNA-transfer technique does not involve DNA integration into the host-cell genome, and the introduced plasmid persisted in its original episomal form [53]. Nevertheless, that report indicated that the hAAT promoter can be used to drive expression of foreign genes in chickens.

In summary, we have established a new approach for the transfer of genes to chickens. The use of lentiviral vectors enables integration of transgenes into the host-cell genome, while injection into the CAM vasculature provides unique accessibility to the chick embryo. The technique is best suited for the study of endocrinology in avian species, by directing genes encoding secreted hormones and peptides to the liver tissue. This procedure should also be useful for experiments in developmental biology, enabling the monitoring of small subpopulations of transgenic cells within a tissue. In addition, it should be possible to co-transduce a gene of interest and a reporter gene and study the effects of this manipulation in a background of non-transduced cells, on either cell morphology as seen under a microscope, or on gene expression as detected by double-immunostaining. Given the great difficulty involved in gene delivery to birds, the ability to introduce genes coding for secreted proteins into the liver is expected to be highly useful and provide new opportunities for agricultural and academic research in these animals. Being highly specialized in processing secreted proteins and among the largest organs of the body, the liver is an optimal target for producing and secreting exogenous proteins of interest for the study of gene function in chickens.

[Figure 9. CR1-PCR analysis for genomic integration of YFP in chicken cells transduced with FIV-YFP vectors. A. Schematic representation of the two-step PCR approach using the CR1- and LTR-specific primers. Diluted (1:1000) mixes from the first long PCRs, employing CR1 and LTR primers, were used as templates for short nested PCRs with nested LTR primers. In the absence of genomic integration, no signals are expected in the nested PCR. This is indicated by the control long PCR with no CR1 primers. B. Nested PCR products separated on a 2% agarose gel, obtained using as templates the first long-PCR DNAs, which were prepared 14 days after in vitro transduction with FIV-YFP, from E11 liver and muscle cells. The primers used for the first long PCR are indicated: the LTR primers were 5'LTR (5'L) and 3'LTR (3'L); the CR1 primers were CR1-1F (C1F), CR1-1R (C1R), CR1-3F (C3F) and CR1-3R (C3R). The nested primers are indicated in Materials and Methods. Neither the controls without CR1 primers, nor the control using DNA from non-transduced cell cultures, gave a signal. The expected fragment sizes were 120 bp for the 5'LTR and 110 bp for the 3'LTR. The products were confirmed by sequencing. These results indicate genomic integration of FIV-YFP-derived cDNA in the host chicken cells. M, molecular weight markers. doi:10.1371/journal.pone.0036531.g009]

Supporting Information

Figure S1. Position of primers used for a-MSH amplification. The positions of the qPCR and RT-PCR primers are shown relative to the sequence of the recombinant a-MSH gene. Note that for both primer sets, the forward primers match the pLionII backbone sequence, thus rendering the PCR amplification specific for recombinant a-MSH only.

Figure S3. Immunofluorescence analysis of YFP expression in the liver of post-hatch chicks, following in ovo administration of vehicle or viral particles encoding YFP.
Paraffin sections of chick liver tissues were subjected to immunofluorescence using anti-GFP antibody (green). Similar tissue and cell morphology was observed in both YFP-expressing cells and non-transduced cells (A), as well as in liver cells of vehicle-treated chicks (B). Scale bar = 100 µm. (TIF)

Movie S1. 3D confocal illustration of YFP-positive hepatocytes: whole-mount staining with anti-GFP antibody. Nuclei were stained with DAPI. (AVI)

Movie S2. 3D confocal illustration of YFP-positive cells in the liver, demonstrating their association with blood vessels: whole-mount staining with anti-GFP and anti-SMA antibodies. Nuclei were stained with DAPI. (AVI)
Bioinformatics analysis identifies ferroptosis-related genes in the regulatory mechanism of myocardial infarction

Since ferroptosis is considered to be a notable cause of cardiomyocyte death, inhibiting ferroptosis has become a novel strategy for reducing cardiac cell death and improving cardiopathic conditions. Therefore, the aim of the present study was to search for ferroptosis-related hub genes and determine their diagnostic value in myocardial infarction (MI), to aid in the diagnosis and treatment of the disease. A total of 10,286 DEGs were identified, including 6,822 upregulated and 3,464 downregulated genes in patients with MI compared with healthy controls. After overlapping with ferroptosis-related genes, 128 ferroptosis-related DEGs were obtained. WGCNA identified eight functional modules, of which the blue module had the strongest correlation with MI. Blue-module genes and ferroptosis-related differentially expressed genes were overlapped to obtain 20 ferroptosis-related genes associated with MI. GO and KEGG analyses showed that these genes were mainly enriched in cellular response to chemical stress, transferase complex (transferring phosphorus-containing groups), protein serine/threonine kinase activity and the FoxO signaling pathway. Hub genes were obtained from the 20 ferroptosis-related genes through the PPI network. The expression of the hub genes was found to be downregulated in the MI group. Finally, miRNA-hub gene and TF-hub gene networks were constructed. The GSE141512 dataset and RT-qPCR assays on patient blood samples were used to confirm these results. The results showed that ATM, PIK3CA, MAPK8, KRAS and SIRT1 may play key roles in the development of MI, and could therefore be novel markers or targets for the diagnosis or treatment of MI.

Introduction

Myocardial infarction (MI) is a severe disease that occurs globally; from 2002 to 2015, the incidence of MI was ~242/100,000 individuals per year (1). According to the universal definition of MI (2), it may be divided into five types and is primarily induced by acute myocardial ischemia resulting from several factors. For example, the rupture of acute atherosclerotic plaques leads to ischemic myocardial damage due to the mismatch between oxygen supply and demand (3). Based on the existing clinical guidelines (4,5), clearing blocked vessels and reducing thrombotic obstruction with drugs as quickly as possible are the two most important treatment options. However, for vessels that are difficult to clear, and when MI is caused by microvascular lesions, only conservative drug treatment can be used and recurrent attacks are more probable (6). Consequently, it is important to identify novel therapeutic targets to reduce MI.

Ferroptosis is a process in which unsaturated fatty acids, highly expressed on the cell membrane, are subject to lipid peroxidation by Fe2+ ions and lipoxygenase, thereby inducing cell death. It is also hallmarked by a decrease in the expression of the glutathione-dependent antioxidant system and glutathione peroxidase 4 (GPX4) enzymes (7). Ferroptosis is involved in tumor cell death, neurodegenerative diseases, renal failure and cardiac ischemic injury (8)(9)(10). MI is a severe type of ischemic heart disease in which ferroptosis plays a central role (11).
At present, studies on the mechanism of ferroptosis in MI have primarily focused on endoplasmic reticulum stress, reactive oxygen species (ROS) generation, GPX4 and the autophagy-dependent ferroptosis pathway (11)(12)(13)(14)(15). Several studies have concluded that the inhibition of cardiomyocyte ferroptosis is a potentially important target for MI treatment. For example, treatment of an MI mouse model using ferrostatin-1 (an inhibitor of ferroptosis) or dexrazoxane (an iron-chelating agent) can reduce MI scar areas and myocardial enzyme activity (16). In addition, baicalin has been shown to prevent MI by inhibiting long-chain-fatty-acid-CoA ligase 4-mediated ferroptosis (17). Moreover, other drugs, such as piperonylamine and artesunate, have also inhibited ferroptosis and represent potential drugs for the treatment of related diseases (18,19). Since ferroptosis plays a key role in MI, in the present study genes associated with both MI and ferroptosis were identified. These genes may be useful for identifying putative therapeutic targets or providing a theoretical basis for understanding the molecular pathology of MI. Furthermore, microRNAs (miRNAs/miRs), transcription factors (TFs) and targeted drugs were analyzed in the context of the above genes, and the differential expression of these genes was verified using a separate dataset and clinical specimens. The present study provides a basis for further research exploring the potential therapeutic targets and regulatory mechanisms of MI, and also suggests a new treatment strategy.

Materials and methods

Data sources. The transcriptome data of the current study were obtained from two datasets of the Gene Expression Omnibus (GEO) database (http://www.ncbi.nlm.nih.gov/geo/) (22), GSE59867 (20) and GSE141512 (21). The GSE59867 dataset consisted of 46 controls and 390 MI samples, and was used as a training set. The GSE141512 dataset consisted of 6 controls and 6 MI samples, and was used as an external validation set. The ferroptosis-related genes were extracted from the FerrDb database (http://www.zhounan.org/ferrdb). After removing the duplicated genes of the three subgroups of ferroptosis gene sets, a total of 259 genes were obtained (23).

Acquisition of differentially expressed genes (DEGs). All the microarray data after normalization were analyzed using R 4.1.0 software (24). The R package 'limma' was used to identify differentially expressed mRNAs between MI and control samples, with an adjusted P-value <0.05 as the threshold (25). A heatmap cluster and volcano plot of the DEGs were created using the 'ggplot2' package in R. Furthermore, by intersecting the DEGs with ferroptosis-related genes, the ferroptosis-related DEGs were obtained, and a heat map of the ferroptosis-related DEGs was created using the 'pheatmap' package (26).

Gene set enrichment analysis (GSEA). The potential biological functions of the DEGs were explored using the GSEA method and annotated using the Gene Ontology (GO) (http://geneontology.org/) and Kyoto Encyclopedia of Genes and Genomes (KEGG) (https://www.kegg.jp/) databases. In GSEA, a false discovery rate (FDR) of <0.05 was considered to indicate DEGs that were significantly enriched.

Weighted gene co-expression network analysis (WGCNA). The GEO expression file was used for WGCNA using the WGCNA R package (27). First, samples were clustered to assess the presence of any outliers.
Then, the automatic network construction function was used to obtain the co-expression network. The 'pick Soft Threshold' function was used (set to 15) to calculate the soft thresholding power β. Furthermore, the matrix data were then transformed into an adjacency matrix, hierarchical clustering and the 'dynamic Tree Cut' function were used to detect modules. After completing the calculation of module eigengene (ME) and merging similar modules in the clustering tree according to ME, a hierarchical clustering dendrogram was drawn. Modules were combined with phenotypic data to calculate gene significance (GS) and module significance (MS) to measure the significance of genes and clinical information and analyze the correlation between modules and clinical features. Then which modules are most relevant to MI was revealed. Functional annotation and pathway enrichment analysis. To reveal the functions of DEGs, GO annotation (28) and KEGG enrichment (29) analysis were conducted using the 'cluster profile' package. GO enrichment results of 'biological process' (BP), 'cellular component' (CC) and 'molecular function' (MF) were obtained. KEGG pathway analysis was used to describe gene function at the genomic and molecular levels and reveal the associated genes. P<0.05 was considered to indicate a statistically significant difference. Protein-protein interaction (PPI) network construction. The PPI network was constructed using the STRING database (30). The confidence score was set at 0.4 for the PPI analysis and was considered statistically significant. Cytoscape 3.8.2 was used to visualize the PPI network (31). Cytoscape plugin, MCODE, was used to screen the significant modules in the PPI network. Validation of hub genes. Receiver operating characteristic (ROC) curve analysis was performed using the pROC package (32) to evaluate the diagnostic value of the hub genes for MI. ROC curve analysis, which yields indictors of accuracy, such as the area under the curve (AUC), provides the basic principle and rationale for distinguishing between the specificity and sensitivity of diagnostic performance. Analysis of interaction effect and functional similarity for hub genes. The 'ggpubr' package was used to perform Spearman's correlation analysis on hub genes. The 'ggpubr' was a flexible package for data visualization based on 'ggplot2' package in R (33). Moreover, the functional similarity among proteins was evaluated using the geometric mean of semantic similarities in CCs and MFs through the GOSemSim package (34). Functional similarity measures the strength of the relationship between each protein and its partners by considering the function and location of proteins. Construction of gene-drug interaction network and regulatory network of hub genes . In order to explore the potential therapeutic drugs for MI, DEGs were uploaded to the CMAP database (https: //www.complement.us/cmap) (34), and relevant drugs associated with MI treatment were identified. Then, drugs targeting proteins encoded by hub genes were identified using the through the Comparative Toxicogenomics Database (CTD) (35). MiRNet database (36) was used to predict the TFs and miRNAs of hub genes. Hub genes and their TFs and miRNAs were integrated into a regulatory network, and visualized using Cytoscape software. Sample collection. The present study was approved by the Ethics Committee of Dezhou Municipal Hospital (Dezhou, China; approval no. 2022-L-06; January 17, 2022) and complied with The Declaration of Helsinki. 
Written informed consent was obtained from all subjects. A total of 5 patients with MI and 5 patients with stable angina pectoris/chronic coronary syndromes (CCS) were enrolled at Dezhou Municipal Hospital (Dezhou, China) between February 2022 and March 2022, and blood draws were completed at the hospital. The diagnoses of the patients followed the latest diagnostic guidelines. The diagnosis of MI was in accordance with the Fourth Universal Definition of Myocardial Infarction (2018) (3). MI is diagnosed when there is clinical evidence of acute myocardial ischemia and a rise or fall of cardiac troponin T values with at least one value exceeding the 99th percentile upper reference limit, together with at least one of the following: i) Symptoms of myocardial ischemia; ii) changes on an electrocardiogram indicating new ischemia; iii) development of pathological Q waves on an electrocardiogram; iv) new loss of viable myocardium or new regional wall motion abnormality evidenced by imaging; and v) coronary thrombus evidenced by angiography or autopsy. CCS is diagnosed when the following three characteristics are met simultaneously: i) Retrosternal discomfort (its nature and duration have typical characteristics) (37); ii) the discomfort is provoked by exertion or emotional stress; and iii) it is relieved by rest or nitrates. In addition to meeting these criteria, serum cardiac troponin I (cTnI) and myocardial enzymes were negative in these patients (38,39). Subjects diagnosed with CCS were considered to be the control group. Peripheral blood collection was completed within 12 h after admission. Inclusion criteria: i) Patients met the diagnostic criteria for MI or CCS; ii) patients were aged between 40-80 years old (sex was not limited); and iii) patient hemodynamics were stable, and there was no evident abnormality in liver and kidney function. Exclusion criteria: i) Patients with acute decompensation of chronic heart failure, symptomatic hypotension (systolic blood pressure <90 mmHg) or an expected survival period of <3 months; ii) patients with abnormal liver and kidney function or serious primary disease (for example, acute exacerbation of chronic obstructive pulmonary disease, diabetic ketoacidosis or multiple tumor metastases); iii) pregnant and lactating women; iv) patients who had previously been found to be allergic to the experimental drug; or v) the presence of factors that can increase mortality, such as severe arrhythmia, pulmonary embolism, cardiogenic shock or obvious infection. RNA extraction and reverse transcription-quantitative PCR (RT-qPCR). Total RNA from peripheral blood was extracted using the SPEAKeasy Serum/Plasma RNA kit (Shandong Sparkjade Biotechnology Co., Ltd.) at low temperature, according to the manufacturer's protocol. A Nano400 spectrophotometer (Hangzhou Allsheng Instruments Co., Ltd.) was used to check the concentration and purity of the extracted RNA, with the A260/A280 ratio between 1.8 and 2.0. cDNA synthesis was conducted using HiScript II Q RT SuperMix (Vazyme Biotech Co., Ltd.) according to the manufacturer's protocol. Using GAPDH as a reference, RT-qPCR was performed with ChamQ Universal SYBR qPCR Master Mix (cat. no. R311-02; Vazyme Biotech Co., Ltd.) in the CFX96 Touch Real-Time PCR Detection System (Bio-Rad Laboratories, Inc.). Primer sequences (TsingKe Biological Technology) for the reference and candidate genes are shown in Table I. The thermocycling protocol was as follows: 50˚C for 3 min, 95˚C for 2 min, followed by 40 cycles of 95˚C for 10 sec and 60˚C for 10 sec.
The 2^-ΔΔCq method was applied to calculate the relative expression level of mRNA (40). Statistical analysis. SAS 9.4 (SAS Institute, Inc.) was used to analyze the clinical data, and the measurement data were first tested for normality. Normally distributed data from the two groups were compared using the independent Student's t-test; non-normally distributed data were expressed as the median (Q1-Q3) and compared using the Wilcoxon rank-sum test. Enumeration data were compared between the two groups using the χ2 test; if the theoretical frequency was too small, Fisher's exact probability method was used. All experiments were performed three times, and the results are expressed as the mean ± standard error of the mean. GraphPad Prism 9 (GraphPad Software, Inc.) was used to analyze the data. Statistically significant differences between the MI group and controls were examined using the independent Student's t-test. P<0.05 was considered to indicate a statistically significant difference. Results Analysis of ferroptosis-related DEGs. A total of 259 ferroptosis-related genes were extracted from the FerrDb database. After intersecting them with the DEGs, a total of 128 ferroptosis-related DEGs were found. The Venn diagram of the ferroptosis-related DEGs is shown in Fig. 3A, and their expression heatmap in Fig. 3B. Weighted co-expression network construction and identification of key modules. Euclidean distance of the expression data was used to perform hierarchical clustering. There were no outliers to remove (Fig. 4A). The soft threshold was set to 15 to construct a scale-free network (Fig. 4B). Next, eight modules were identified based on average hierarchical clustering and dynamic tree cutting (Fig. 4C). The blue module was the module most relevant to MI (Fig. 4D). Thus, a total of 1,547 genes in this module were selected for further analysis. By intersecting the ferroptosis-related DEGs with the genes in this module, 20 ferroptosis-related genes associated with MI were obtained. PPI network construction and module analysis. To further study the interaction of the 20 ferroptosis-related genes, a PPI network was constructed using the STRING database. A total of six of the 20 genes were not related to other molecules and did not form a molecular network. With a confidence of >0.4 and hiding the disconnected nodes, a visualized PPI network was created using Cytoscape (Fig. 7A). Using the MCODE plugin, five genes in the key module were selected as hub genes, namely ATM, PIK3CA, MAPK8, KRAS and SIRT1 (Fig. 7B). Analysis of interaction effect and functional similarity of hub genes. Analysis of the interactome of the hub genes revealed that ATM and PIK3CA had the highest correlation (Fig. 8A). Proteins were ranked by their average functional similarity to the other proteins within the interactome. ATM, MAPK8 and PIK3CA were the three top-ranked proteins potentially playing key roles in MI (Fig. 8B). Multi-factor regulation network construction. Based on the results from the miRNet database, miRNA-hub gene (Fig. 9A) and TF-hub gene (Fig. 9B) networks were constructed using Cytoscape software. In order to facilitate the selection of key miRNAs, miRNAs targeting ≥3 hub genes were selected for network analysis. Finally, the network included five hub genes, 43 miRNAs and 34 TFs. Drug prediction. The Connectivity Map (CMap) database was used to search for potential drugs associated with MI (41,42). Based on the interaction information of genes and drugs in the CTD database, the association between potential drugs and hub genes was obtained. Among them, dorsomorphin is a small-molecule drug that may act on all five hub genes (Fig. 10). Evaluation of the diagnostic performance of hub genes in GSE59867.
The expression of hub genes in MI and control samples was examined, and it was found that the expression of the hub genes was downregulated in MI (Fig. 11A). The diagnostic values of the hub genes were further evaluated using ROC curves. It was found that ATM, PIK3CA and MAPK8 had high accuracy, with AUC values of >0.7 (Fig. 11B). Expression of hub genes in GSE141512. The expression of the hub genes was verified in the GSE141512 dataset, and it was found that the expression of all hub genes was downregulated in the MI group compared with the control group. ATM, PIK3CA and SIRT1 showed significant differences in expression between the MI and control groups (Fig. 12). Baseline characteristics of study subjects. In patients with CCS, there is no necrosis of the myocardium. Therefore, the CCS group was used as the normal control group. Moreover, the basic conditions of the MI and CCS groups, such as age, past medical history and medication history, were similar. A total of 10 participants were recruited in the present study and were separated into two groups, MI (n=5) and controls (n=5). Comparison between groups showed that the levels of high-sensitivity cTnI, creatine kinase-MB and low-density lipoprotein-cholesterol were significantly different between the two groups (P<0.05). The demographic and clinical features, medication history and laboratory data of all participants are shown in Table II. Validation of the hub genes. RT-qPCR was used to detect the transcriptional changes of all overlapping hub genes in peripheral blood from the controls and patients with MI. The results indicated that the expression levels of all hub genes were decreased in the MI group in comparison with those in the controls (Fig. 13). Discussion In the current study, five hub genes associated with ferroptosis in patients with MI were screened by comprehensive bioinformatics analysis, namely ATM, KRAS, MAPK8, PIK3CA and SIRT1. miRNAs and transcription factors targeting the hub genes were selected to construct the corresponding regulatory networks, and potential therapeutic drugs for MI targeting the hub genes were identified. Subsequently, using the GSE141512 validation set, the five hub genes were all confirmed as lowly expressed genes in the MI group. Of these, the inter-group differences of ATM, PIK3CA and SIRT1 were statistically significant. Finally, RT-qPCR verified that the expression of these genes was decreased in patients with MI compared with patients with CCS. GSEA enrichment analysis was also performed for the 10,286 identified DEGs. The intersection of the DEGs and ferroptosis-related genes revealed 128 ferroptosis-related DEGs. Intersecting these with the candidate genes for MI screened using WGCNA, 20 ferroptosis-related genes were identified. Next, the 20 genes were subjected to GO and KEGG enrichment analysis. GO analysis revealed that 'cellular response to chemical stress' and 'response to oxidative stress' were the most significant BPs, while the 20 most influential genes had roles in 'peptidyl-serine modification', 'protein serine/threonine kinase activity' and 'regulation of TOR signaling'. KEGG analysis indicated that these genes were mainly enriched in 'FoxO signaling pathway', 'Autophagy-animal', 'Apoptosis' and 'Longevity regulating pathway-multiple species'. Furthermore, a recent study has shown that the sources of cellular stress damage can be divided into physicochemical (for example, radiation or toxins) and pathological (for example, hypoxia and infection) (43).
Cell stress can cause rapid ROS accumulation, which further aggravates myocardial injury (44). ROS are involved in a variety of coronary diseases that occur under oxidative stress (45). They can affect DNA integrity by inducing mutations, modify protein structure by acting on enzymes and cause lipid peroxidation (46,47). Lipid peroxidation is involved in apoptosis, autophagy and ferroptosis, which results in cardiomyocyte dysfunction and death. The underlying mechanism involves excessive ROS attacking biological membranes, which induces a lipid peroxidation chain reaction and subsequently causes various types of cell death (48). To the best of our knowledge, there are no reports on the FoxO signaling pathway and ferroptosis, and most reports on mammalian target of rapamycin (mTOR) signaling and ferroptosis have focused on tumors. The mTOR and GPX4 signaling pathways mutually regulate autophagy-dependent ferroptosis of pancreatic cancer cells (49). Baba et al (50) demonstrated a protective effect of rapamycin-targeted therapy on iron excess and ferroptosis of cardiomyocytes using mTOR-knockout mice and found that it inhibited ROS production. In summary, previous studies on the biological processes identified here in MI indicate that the 20 key ferroptosis-related genes identified in the present study may affect the occurrence of ferroptosis by regulating ROS production and ultimately MI (51-53). To further explore the key genes affecting MI, the core modules were screened by PPI network analysis and five hub genes associated with ferroptosis were obtained, namely ATM, PIK3CA, SIRT1, KRAS and MAPK8. Low expression of ATM, PIK3CA and SIRT1 was observed in the GSE141512 validation set. In addition, low expression of the five genes was also observed in serum samples collected from patients with MI compared with the CCS (control) group. Reduced expression of ATM may protect cells from ferroptosis induced by a GPX4 inhibitor at different concentrations. With respect to the underlying mechanism, ATM inhibition may rescue cells from ferroptosis by increasing the expression of iron regulators involved in iron storage and export. The coordinated changes of these iron regulators during ATM inhibition result in the reduction of labile iron, preventing iron-dependent ferroptosis (54,55). ATM is an important kinase in the response to DNA damage, and one of its downstream targets, p53, is associated with the regulation of ferroptosis (56). PIK3CA plays an important role in cell growth and survival, and it reduces the inflammatory response following MI through pyruvate dehydrogenase kinase 1/AKT signal transduction (57). The PIK3CA gene, regulated by miR-375, is a key gene involved in the MI disease module (58). The expression of SIRT1 decreases gradually after MI, and SIRT1 inhibits ferroptosis-induced cardiomyocyte death through the p53/SLC7A11 axis. An increase in SIRT1 contributes to enhanced cardiomyocyte viability and reduced ferroptosis-induced cell death in vitro (13). MAPK8 has been shown to play an important role in the occurrence of recurrent cardiovascular events (59). There are numerous reports on the KRAS gene and tumor-associated diseases (60)(61)(62), suggesting that KRAS promotes tumor progression; however, there are few reports on the role of the KRAS gene in MI. Cells undergoing ferroptosis release KRAS (63), but the relationship of MI with the KRAS gene requires further study. ROC curve analysis can be used to evaluate MI biomarkers. The AUC values for ATM, PIK3CA and MAPK8 were all >0.7.
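As an illustration of this kind of per-gene ROC evaluation with the pROC package, a minimal R sketch is given below. It is only an assumed example: 'expr_atm' (expression values of one hub gene across samples) and 'group' (a control/MI factor) are hypothetical objects, not data from the study.

library(pROC)
# Build the ROC curve for a single hub gene; cases are the MI samples
roc_atm <- roc(response = group, predictor = expr_atm,
               levels = c("control", "MI"))
auc(roc_atm)                     # area under the curve, compared against the 0.7 cut-off
plot(roc_atm, print.auc = TRUE)  # ROC curve with the AUC printed on the plot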
Of these, ATM presented the best discrimination performance, with an AUC of 0.790. TF-target and miRNA-target networks relevant to the hub genes were constructed, which highlighted 43 miRNAs and 34 TFs. Finally, the potential small-molecule drugs that could reverse this disease were investigated using the CMap database, which revealed 36 potential small-molecule drugs. Among them, dorsomorphin is a small-molecule drug that can act on all five hub genes. Dorsomorphin is a selective inhibitor of AMP-activated protein kinase (64), which has not been well-studied in MI. Therefore, its role should be the subject of future studies. The present study had the following strengths: i) The GSE59867 dataset included data from 390 MI samples and 46 healthy individuals, with a large sample size and high reliability of results; ii) verification was done twice, using the GSE141512 dataset and by collecting and evaluating patient blood samples; and iii) for the first time, bioinformatics analysis was used to identify the hub genes of ferroptosis and MI. However, the identified marker genes and pathways require further verification to provide conclusive evidence for targeted therapy; if the protein expression of these genes can be further analyzed, more evidence will be available to determine the effect of each gene on ferroptosis and MI. In conclusion, the current study identified five putative genes relevant to ferroptosis and numerous genes associated with MI, which provides a basis for exploring the regulatory and intervening mechanisms of MI. Acknowledgements Not applicable. Funding Funding for the present study was obtained from the Shandong Province Famous and Old Traditional Chinese Medicine Expert Inheritance Studio Construction Project (grant no. 201992), the Shandong Traditional Chinese Medicine Science and Technology Project (grant nos. 2020Q010 and 2021M180) and the Natural Science Foundation of Shandong Province (grant no. ZR2021LZY038). Availability of data and materials The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Authors' contributions YHJ, WZW and YTX confirm the authenticity of all the raw data. YHJ, WZW and YTX were responsible for the conceptualization, methodology and design of the research, as well as writing and preparing the original draft. SYW and ZW were responsible for the bioinformatic data collection and analysis. JZ, YL, LZ and CL were responsible for the experimental data acquisition and analysis. JZ and YL were responsible for the software validation and result interpretation. CL was responsible for the figure preparation. All authors have read and approved the final manuscript. Ethics approval and consent to participate The present study was approved by the Ethics Committee of Dezhou Municipal Hospital (Dezhou, China; approval no. 2022-L-06; January 17, 2022) and complied with The Declaration of Helsinki. Written informed consent was obtained from all subjects. Patient consent for publication Not applicable. Table II footnotes: a) results are expressed as the mean ± standard deviation, with P-values obtained using the independent t-test; b) results are expressed as the median (quartile 1-quartile 3), with P-values obtained using the Mann-Whitney U test; c) results are expressed as the number of cases (composition ratio, %), with P-values obtained using the χ2 test or Fisher's exact test if the theoretical frequency of the variable was too small. MI, myocardial infarction; SBP, systolic blood pressure; DBP, diastolic blood pressure; LVEF, left ventricular ejection fraction; hs-cTnI, high-sensitivity cardiac troponin I; CKMB, creatine kinase-MB; NT-pro-BNP, N-terminal pro-B-type natriuretic peptide; TC, total cholesterol; TG, triglyceride; LDL-C, low-density lipoprotein-cholesterol; HDL-C, high-density lipoprotein-cholesterol; ACEI, angiotensin converting enzyme inhibitors; ARB, angiotensin II receptor blockers; CCB, calcium channel blockers.
2022-11-10T17:23:09.664Z
2022-11-08T00:00:00.000
{ "year": 2022, "sha1": "cf8cf0250b99cfaf43b093112c4f17010837099f", "oa_license": "CCBYNCND", "oa_url": "https://www.spandidos-publications.com/10.3892/etm.2022.11684/download", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "587122cb0362773345a5a9e1165eb3204c1ba84a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
237893314
pes2o/s2orc
v3-fos-license
Identification of A New Biomarker For Herpes Zoster Infection in Rheumatoid Arthritis Herpes zoster (HZ) is known as a side effect of using biologics in rheumatoid arthritis (RA). The incidence of this side effect may differ depending on genetic factors, because susceptibility to HZ infection varies by race. Here, we analyzed the statistical relationships of whole genome single nucleotide polymorphisms (SNPs) with HZ infection in biologics-treated RA patients. The subjects were 321 Japanese female patients (including 56 herpes virus infected patients) of RA using biologics. The relationships of 302,814 SNPs with HZ infection were analyzed using case-control analyses by Fisher's exact tests. We picked up SNPs (P < 10^-8) significantly associated with HZ infection. Then, herpes infection was compared among the genotypes using a multivariate logistic regression analysis adjusted for onset age of RA. Rs10774580, located in the 2'-5'-oligoadenylate synthetase like gene (OASL), was significantly associated with herpes virus infection. The minor allele homozygous carrier was positively associated with herpes virus infection in multivariate analysis. We for the first time showed a significant relationship between a genetic factor and HZ infection among RA patients. Rs10774580 may be one of the biomarkers for HZ infection. Introduction Rheumatoid arthritis (RA) is a progressive autoimmune disease well defined by widely accepted symptoms such as chronic joint inflammation and structural damage 1. In treatment for RA at present, using biological agents such as tumor necrosis factor (TNF), interleukin-6 (IL-6) and cytotoxic T-lymphocyte-associated protein 4 (CTLA-4) blockades is extremely useful because these agents specifically inhibit immune responses and inflammation. On the other hand, because these agents are immunosuppressants, infectious diseases become a significant problem in treatment for RA. Herpes zoster (HZ) is one of the most common viral infections in treatment for RA with immunosuppressants such as biological agents. In fact, it has been reported that RA patients have a higher risk of HZ compared with the general population 2, 3, 4. Therefore, a number of studies have analyzed the relationships of the incidence of HZ with various possible risk factors. As a result, aging, high disease activity, and corticosteroid, methotrexate and biological agent use were reported as risks for HZ 5. These studies, however, have not taken genetic factors into consideration. On the other hand, several studies have identified genetic loci associated with the onset of RA 6. It has also been reported that the effectiveness of biologic agents can be predicted by a combination of single nucleotide polymorphisms (SNPs) 7. These studies used genome-wide association studies (GWAS) in order to identify the genetic factors. Thus, conducting a GWAS is thought to be valuable in order to identify unknown genetic factors associated with HZ in RA. In this study, in order to identify SNPs associated with HZ infection, we analyzed the statistical relationships of whole genome SNPs with HZ infection among biological agent-treated RA patients. Results The basic characteristics of the patients are presented in Table 1. The total patients were 320, aged 45.5 ± 13.9 years (mean ± SD). Only one SNP was identified that was significantly associated with HZ infection (Fig. 1). The SNP was rs10774580 in the 2'-5'-oligoadenylate synthetase like gene (OASL). Table 2 presents the relationships of OASL genotype and onset age of RA with HZ infection.
The minor allele homozygotes of rs10774580 and the onset age of RA (≥ 65 years) were positively associated with HZ infection. The adjusted OR for the minor allele homozygotes was 15.6 (95% CI 3.9-61.4). The adjusted OR for the onset age (≥ 65 years) was 2.6 (95% CI 1.1-6.4). Discussion To our knowledge, our study is the first to analyze the relationships of whole genome SNPs with HZ infection among bDMARDs-treated RA patients and to identify a SNP as one of the biomarkers for HZ infection. It has been reported that aging, high disease activity, corticosteroid use and the use of methotrexate are risk factors for HZ infection in RA patients 5. In addition, it has been reported that susceptibility to HZ infection in RA varies by race 8. In fact, Japanese and Taiwanese patients have a higher risk compared to American and European patients 9. In this regard, genetic background may also affect the susceptibility. However, because there are no studies that take into account genetic polymorphisms such as SNPs, it is likely that genetic polymorphisms associated with the susceptibility were overlooked. Therefore, we conducted a GWAS, which is a powerful tool to collectively identify SNPs associated with the susceptibility. In our study, rs10774580, located in an intron region of the OASL gene, was significantly associated with HZ infection. The minor allele homozygotes were positively associated with HZ infection. Human OASL has an antiviral activity against RNA viruses 10,11. On the other hand, OASL inhibits type I interferon (IFN) induction during DNA virus infection such as herpes simplex, vaccinia and adenovirus 12. This is because OASL binds to cyclic GMP-AMP synthase (cGAS), known as a DNA sensor, and inhibits cyclic GMP-AMP (cGAMP) synthesis in the cGAS-STING (stimulator of interferon genes) pathway sensing the majority of DNA viruses 12. Inhibiting IFN induction leads to enhanced DNA virus replication. Therefore, rs10774580 may affect the transcription of OASL, because this SNP is an intronic variant without amino acid substitution. As a result, OASL expression levels may vary among genotypes, which could then cause differences in susceptibility. Interestingly, several previous studies revealed that using Janus kinase (JAK) inhibitors increased the risk of HZ infection compared to bDMARDs 13,14. In this regard, it is unclear whether rs10774580 is also associated with HZ infection in JAK inhibitor-treated RA patients. Thus, further analyses are needed in JAK inhibitor-treated RA patients. This study has several limitations. First, rs10774580 was identified by the result of a GWAS among Japanese RA patients. It is well known that allele frequencies of most SNPs vary in different ethnic groups. The allele frequency of rs10774580 we identified also varied compared with the allele frequencies of other ethnic groups reported in the HapMap database (https://www.ncbi.nlm.nih.gov/snp). Therefore, rs10774580 may not be applicable to non-Japanese RA patients as a biomarker. A second limitation is that this study did not take into consideration the incidence of HZ in each patient. It is well known that some RA patients repeatedly develop HZ. Therefore, in order to identify other biomarkers, further studies taking the incidence of HZ into consideration are desired. Patients And Methods Patients. We recruited 321 Japanese female patients of RA receiving treatment with biological disease-modifying antirheumatic drugs (bDMARDs). They included 56 HZ infected patients.
Written informed consent to participate in this study was obtained from each patient. This study was approved by the ethical committee for analytical research on the human genome of the Matsubara Mayflower Hospital. All methods were performed in accordance with relevant guidelines and regulations. Genome-wide SNP genotyping. The patients' whole blood samples were used for DNA extraction at Mitsubishi BCL Inc. Genome-wide SNP genotyping was performed at deCODE genetics Inc. (Reykjavik, Iceland) using Illumina HumanHap300K chip technology (Illumina Corp., San Diego, CA, USA). After genotyping, 302,814 of 317,503 SNPs remained once SNPs with call rates < 90% or minor allele frequency < 1% had been excluded, and these were used in the case-control analysis described below. Statistical analysis. We used case-control analysis to analyze the relationship of the 302,814 SNPs with onset of HZ by Fisher's exact tests using SVS 8.1.1 (Golden Helix Inc.). After the case-control analyses, we picked up SNPs significantly associated with HZ infection. Univariate and multivariate logistic regression analyses were used to examine the effects of the SNP and onset age of RA on the risk for HZ infection. The logistic regression analyses were carried out using EZR 15 (Saitama Medical Center, Jichi Medical University, Saitama, Japan). EZR is a graphical user interface for R (The R Foundation for Statistical Computing, Vienna, Austria, version 2.13.0). P values < 10^-8 were considered significant in the case-control analysis. P values < 0.05 were also considered significant in the logistic regression analyses. Conclusion This is the first report of a significant association between a genetic factor and HZ infection among bDMARDs-treated RA patients. As the result of a GWAS, we showed that rs10774580 in the OASL gene was significantly associated with HZ infection. Therefore, this SNP may be one of the biomarkers for predicting HZ infection among RA patients before using biologics. Table 1 note: values are mean ± SD or number of patients. Figure 1. Manhattan plot showing the results of Fisher's exact tests.
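To make the two analysis steps in the Methods concrete, the following is a minimal R sketch of a per-SNP allelic case-control test and of the adjusted logistic model. It is illustrative only: the allele counts and the 'pheno' data frame (with assumed columns hz, genotype and onset65) are hypothetical and are not taken from the study; in the study itself the case-control tests were run in SVS and the regression in EZR, which restate the same underlying tests.

# 2 x 2 table of minor/major allele counts in HZ-infected vs. non-infected patients (made-up numbers)
allele_counts <- matrix(c(30, 82, 26, 504), nrow = 2,
                        dimnames = list(c("minor", "major"), c("HZ", "noHZ")))
fisher.test(allele_counts)$p.value        # compared against the genome-wide 1e-8 threshold
# Multivariate logistic regression: HZ infection vs. genotype, adjusted for onset age >= 65 years
fit <- glm(hz ~ genotype + onset65, family = binomial, data = pheno)
exp(cbind(OR = coef(fit), confint(fit)))  # adjusted odds ratios with 95% confidence intervals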
2022-08-17T13:29:13.692Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "ba0f1d1f5f7b33df17cdea83b89ad29acd30f5ea", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-651045/v1.pdf?c=1631899636000", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "ba0f1d1f5f7b33df17cdea83b89ad29acd30f5ea", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
219574777
pes2o/s2orc
v3-fos-license
Microbiology BACTERIA 98 STRUCTURE 98 TOXINS 98 COCCI AND BACILLI 98 METABOLISM 98 FLAGELLA 99 CAPSULE 99 MULTIPLICATION 99 STREPTOCOCCI 99 STAPHYLOCOCCI 100 BACILLUS ANTHRACIS 101 BACILLUS CEREUS 101 CLOSTRIDIUM TETANI 101 CLOSTRIDIUM BOTULINUM 101 CLOSTRIDIUM DIFFICILE 101 CLOSTRIDIUM PERFRINGENS 102 CORYNEBACTERIUM DIPHTHERIA 102 LISTERIA MONOCYTOGENES 102 NEISSERIA MENINGITIDIS (MENINGOCOCCI) 102 NEISSERIA GONORRHEA (GONOCOCCI) 103 ESCHERICHIA COLI 103 KLEBSIELLA PNEUMONIAE 103 SALMONELLA 103 PSEUDOMONAS AERUGINOSA 103 VIBRIO CHOLERA 104 MISCELLANEOUS ENTERICS 104 HAEMOPHILUS INFLUENZAE 104 HAEMOPHILUS DUCREYI 104 HAEMOPHILUS (GARDNERELLA) VAGINALIS 105 LEGIONELLA PNEUMOPHILA 105 BORDETELLA PERTUSSIS 105 FACULTATIVE INTRACELLULAR ORGANISMS 105 CHLAMYDIA 106 RICKETTSIA 106 TREPONEMA PALLIDUM 106 ADULTHOOD SYPHILIS 107 BORRELIA BURGDORFERI 107 LEPTOSPIRA INTERROGANS 108 MYCOBACTERIUM LEPRAE 108 MYCOPLASMA 109 PASTEURELLA MULTOCIDA 109 VIRUSES 109 ORTHOMYXOVIRUS 109 PARAMYXOVIRUS 109 HUMAN IMMUNODEFICIENCY VIRUS (HIV) 110 HEPATITIS VIRUSES 111 HERPES VIRUSES 112 HUMAN HERPES VIRUS-6 (HHV-6) 113 EPSTEIN-BARR VIRUS (EBV) 113 RABIES 113 OTHER VIRUSES 113 FUNGI 114 TINEA CAPITIS (FIG. 7.5) 114 TINEA CORPORIS (FIG. 7.6) 114 TINEA CRURIS 114 TINEA PEDIS 115 TINEA UNGUIUM 115 TINEA VERSICOLOR (FIG. 7.7) 115 CANDIDIASIS 116 CRYPTOCOCCUS NEOFORMANS 117 COCCIDIODES IMMITIS 117 MISCELLANEOUS FUNGAL INFECTIONS 117 PARASITES 117 ENTAMOEBA HISTOLYTICA 117 GIARDIA LAMBLIA 117 TRICHOMONAS VAGINALIS 118 PLASMODIUM 118 LEISHMANIA DONOVANI 118 AMEBAS CAUSING MENINGOENCEPHALITIS 118 TRYPANOSOMAS 118 HELMINTHS 118 4. Microaerophilic: Works only in the presence of small amounts of oxygen. These bacteria are deficient in catalase and peroxidase enzymes. 5. Heterotrophs: Bacteria using organic carbons for metabolism 6. Autotrophs: Bacteria using inorganic ammonium and sulfide for metabolism l Notes: 1. Aminoglycosides and Tetracyclines work AT 30S subunit of ribosomes; all other protein synthesis inhibitors work on 50S. 2. Facultative intracellular organisms, e.g., Salmonella, Listeria: They can live and metabolize normally inside macrophages after being phagocytosed. Multiplication l Transformation: Naked DNA of one cell attaches itself into another cell of close species. This is followed by entrance of the DNA into the cell and attaching to its genome, resulting in the transformation of the genetic characters of the recipient cell. l Conjugation: A sex pilus builds up between two cells like a bridge, to facilitate the transport of fertility (F) plasmid and antibiotic resistance genes. l Transduction: Bacteriophage (a virus that infects bacteria) transmits a piece of DNA from one bacterial cell into another. A bacteriophage holds on to the cell by its tail fibers and injects the DNA from its head all the way down into the cell. Types of bacteriophages are listed in Clinical picture: Muscle spasms, lockjaw, and risus sardonicus. The latter is a fixed smile due to muscle spasm, and is a sign of advanced disease. l Prevention: Vaccination, and tetanus toxoid is given every 10 years. l After exposure: Example: patient injured his foot by stepping on a rusty nail: 1. If last toxoid dose was within the last 5 years, no toxoid is needed. 2. If last toxoid dose was more than 5 years ago, give a new dose of toxoid. 3. If patient has never been immunized, give tetanus toxoid and immunoglobulin. Usually follows the use of broad-spectrum antibiotics, e.g., ampicillin, clindamycin. 
Patient presents with severe watery malodorous diarrhea and abdominal pain. As there is no postinfectious immunity l Notes: 1. Diphtheria toxins can only be produced in patients with iron deficiency. 2. Exotoxin of diphtheria works through its A portion by inhibiting elongation factor 2 (EF2) and protein synthesis. This is achieved by adenosine diphosphate (ADP) ribosylation. Listeria monocytogenes l Non-spore-forming, gram-positive bacillus, with unique features: 1. It is a facultative intracellular organism. 2. It is the only gram-positive organism capable of releasing endotoxins. 3. End-over-end motility (''tumbling'') l Clinical picture: Meningitis and sepsis. It is the third most common cause of meningitis in neonates (after Streptococcus agalactiae and E. coli). It also causes meningitis in immunocompromised patients (also see Chapter 3, Neuroanatomy). A motile H 2 S-releasing bacterium, with Vi antigen l Source: Food or water contaminated with animal feces. A famous source is undercooked eggs. Salmonella typhi is carried only by humans in the gallbladder; it is not carried by animals. l S. typhi: A facultative intracellular organism, causing typhoid (enteric) fever, where patient presents with stepladder fever, rosy spots on the abdomen, and right lower quadrant (RLQ) abdominal pain often confused with appendicitis. Treatment: ciprofloxacin or ceftriaxone. l S. cholera-suis: Causes bacteremia, which targets lungs, liver, or even the brain l S. enteritidis: Causes mucous or watery diarrhea. Treatment: Self-resolving. Antibiotics will prolong bacterial shedding. Pseudomonas aeruginosa l Gram-negative bacillus, which produces green pigment (fluorescin) and blue pigment (pyocyanin) and exotoxin A 1. Meningitis: Common in children between 6 months and 3 years of age. Antibiotics cause lysis of bacteria and release of antigens, inducing an immune reaction. This could be prevented by giving steroids 15 minutes prior to starting the antibiotics. 2. Epiglottitis: Fever, hyperextended neck (dogsniffing position), copious drooling of saliva and stridor. Do not attempt to examine pharynx (might cause laryngeal spasm). Diagnosis: Neck x-ray shows swollen epiglottis, also called thumbs-up sign. Examine airway in the operation room using a direct laryngoscope, to visualize the swollen cherry red epiglottis. 3. Septic arthritis: Mostly in children and usually affects a single joint l Prevention: HiB capsule vaccine (2, 4, 6, and 12 months) given in combination with DPT. The mechanism depends on stimulation of the T cells by the diphtheria toxin against the HiB capsule. where the x-ray looks much worse than how the patient presents; usually associated with diarrhea and altered mental status 1. Rocky Mountain spotted fever: Caused by Rickettsia rickettsii, transmitted by ticks (Dermacentor) Clinical picture: Fever, and petechial rash that starts in the palms and soles and creeps toward the trunk 2. Epidemic typhus: Caused by Rickettsia prowazekii, transmitted by ticks. Clinical picture: Fever, and rash that involves the whole body except palms and soles 3. Endemic typhus: Caused by Rickettsia typhi, transmitted by rats. Clinical picture: Fever, and rash that starts on the fifth day of fever. 4. Q fever: Caused by Coxiella burnetii, transmitted through contact with animals and animal products. Clinical picture: Fever and pneumonia, due to inhalation of endospores. It is the only rickettsia that does not cause rash. 5. Bartonella henselae: Causes cat-scratch disease. 
Clinical picture: Cat scratch, followed by fever, rash, and swollen tender pustular lymphadenopathy. Complication: Bacillary angiomatosis, which is proliferation of blood vessels, common in AIDS patients. 6. Ehrlichia canis: From dog licks, causing fever and rash. Peripheral smear shows numerous morulae inside the monocytes. 2. Bone: Sabre shins (inflamed bowed tibiae), Clutton joints (painless effusion), and destruction of medial proximal tibial metaphysis (Wimberger sign) Adulthood Syphilis l Primary (6 weeks): Characterized by painless chancre and painless lymphadenopathy. Chancre is a well-demarcated ulcer with indurated base, and it resolves spontaneously without scar formation. Rash is more prominent in palms and soles. 2. Condyloma lata: Wart-like lesions on moist surfaces. They are highly contagious lesions. l Latent: 25% of patients have relapse during that period l Tertiary: 1. Gummas: They occur in skin (painless) or bones (painful). 2. CVS: Injury to Vasa vasora, of aorta, leading to aortic aneurysm and aortic dissection. Also causes coronary obstruction and aortic regurgitation. 3. Neurosyphilis: l Multiple forms ranging from asymptomatic, to meningitis or even infarction l Tabes dorsalis: As explained in Chapter 3, Neuroanatomy, it targets the dorsal column (causing ataxia), and dorsal roots (causing loss of reflexes, pain, and temperature sensation) l General paresis of insane: Aphasia, confusion, and seizures l Argyll-Robertson pupil: Pupil that accommodates but never reacts to light 1. Cutaneous candidiasis: Common in skin folds, e.g., diaper rash. Lesions are red and moist with festooned edges, and are surrounded by papules known as satellite lesions (Fig. 7.8). 2. Oral candidiasis (thrush): Common in patients using inhaled steroids, and it could involve the tongue and the esophagus. Lesions are multiple white painful plaques. Prevention: Washing the mouth after using the steroid inhalers ( Fig. 7.9) 3. Genital candidiasis: It is the most common genital infection in females worldwide. Clinical picture: Whitish, milky vaginal discharge and curdlike patches.
2019-08-19T16:51:27.252Z
2008-09-25T00:00:00.000
{ "year": 2008, "sha1": "94e462ad67b81bdf3d435662412585eab4d347b7", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "f1ee4fd6f4036535683d40fad3144469fa219c35", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
13863180
pes2o/s2orc
v3-fos-license
Thrips species (Insecta: Thysanoptera) associated with Cowpea in Piauí, Brazil Thrips are still poorly known in cowpea, Vigna unguiculata (L.) Walp., in Piauí, despite their economic importance in this crop, which stands out as one of the major crops of the North and Northeast regions of Brazil. Thus, this study aimed to identify the thrips species associated with the crop in Teresina and Bom Jesus, Piauí, Brazil. From October 2007 to August 2008, cowpea inflorescences were sampled in the municipalities by the technique of simple bagging. After screening, thrips were preserved in AGA, mounted on permanent microscope slides and identified. The identified species were: Frankliniella brevicaulis Hood, 1937, F. insularis (Franklin, 1908), F. schultzei (Trybom, 1910), F. tritici (Fitch, 1855) and Haplothrips gowdeyi (Franklin, 1908). The slides are deposited at the entomological collection of the Departamento de Biologia, Universidade Federal do Piauí. A key to the species is provided. Frankliniella schultzei is recorded on several plant species in Brazil, and is considered a pest on cotton, eggplant, lettuce, melon, soybean, rose, tobacco, tomato and watermelon. Of the species collected in this survey, this is the one that may cause the greatest agricultural problems to the crop, either because of the numbers in which it was found or considering the wide range of other plant species in which it causes economic losses. Its agricultural importance worldwide stems from both feeding damage and the vectoring of tospoviruses (Hoddle et al. 2008). Only yellow specimens were collected. Frankliniella brevicaulis is widely distributed in the Neotropics, and is recorded in Brazil on banana (Monteiro et al. 1999), where it causes damage to the fruits in the form of brown rough punctures, which reduces the commercial value of the fruits (Fancelli 2004). Frankliniella insularis is widely distributed in Brazil, where it feeds on legumes (Mound & Marullo 1996), but it is not considered a pest. However, it can be considered a minor pest of leguminous crops in Central America, such as Cajanus spp. and Pachyrhizus spp. (Hoddle et al. 2008). Frankliniella tritici is well distributed in North America, associated with a wide range of flowering plant species, and is considered a pest of roses (Hoddle et al. 2008). In Brazil, the species is recorded only on wheat in Rio Grande do Sul (Monteiro 1999). Introduction Almost a hundred of the about 6,000 described thrips species (Mound & Morris 2007) are notorious for causing extensive crop damage by feeding on leaf tissue or by vectoring viral diseases (Reynaud 2010). In Brazil, 546 thrips species are currently known (Monteiro & Lima 2011), of which about 24 are considered harmful to cultivated plants (Monteiro 2002). Thrips are pests of cowpea, Vigna unguiculata (L.) Walp., in the state of Piauí, attacking flowers, causing flower abortion and, thus, huge economic losses by reducing crop productivity (Freire Filho et al. 2005). This crop is very important in northeastern Brazil, where, according to Freire Filho et al. (1999), it constitutes the main protein source for the population. However, thrips are poorly known on cowpea, despite their economic importance. Frankliniella schultzei (Trybom, 1910) is the only thrips species recorded on cowpea in northeastern Brazil, in the states of Rio Grande do Norte and Piauí (Chagas 1993, Fontes et al. 2011).
The aim of this research was to identify the thrips species on cowpea in two municipalities in the state of Piauí, Brazil. A key to the species is provided. Materials and Methods Thrips collections were performed weekly in October and December 2007 and January, February, July and August 2008 in Teresina, and in April 2008 in Bom Jesus, according to the flowering of cowpea. Samples were collected in experimental fields at Embrapa Meio-Norte, in a transition area between the Caatinga and pre-Amazon in Teresina (05° 05' 21" S, 42° 48' 07" W, 72 m altitude) and in an area of Cerrado in Bom Jesus (09° 04' 28" S, 44° 21' 31" W, 277 m altitude). The technique used to collect thrips was simple bagging (Waquil et al. 1986), in which cowpea inflorescences were removed and placed in clear plastic bags. After two hours of collection, the material was taken to the laboratory of Entomology of the Departamento de Biologia, Universidade Federal do Piauí, for screening. In the laboratory, the insects, still in plastic bags, were placed in a freezer at -5 °C for one hour before screening, to facilitate this stage. Then, fine-bristled brushes, under a stereomicroscope, were used to transfer the thrips to microtubes containing AGA (60% ethyl alcohol, glycerin and glacial acetic acid in the ratio 10:1:1, respectively). Slides were prepared according to the technique proposed by Mound & Marullo (1996) and Mound & Kibby (1998). Results and Discussion Five thrips species were identified, four belonging to the family Thripidae, Frankliniella brevicaulis Hood, 1937, F. insularis (Franklin, 1908), F. schultzei (Trybom, 1910) and F. tritici (Fitch, 1855), and one to the family Phlaeothripidae, Haplothrips gowdeyi (Franklin, 1908). All of them, except for Frankliniella schultzei, are recorded for the first time in the state of Piauí and on cowpea in Brazil. The material is deposited in the entomological collection of the Departamento de Biologia, Centro de Ciências da Natureza, Universidade Federal do Piauí. The number of specimens collected in each municipality can be visualized in Table 1. Knowing the thrips species that occur on cowpea is very important for this crop in the state of Piauí, and perhaps for the entire northeastern region. Table 1. Number of thrips collected in the municipalities.
2018-12-11T11:06:21.326Z
2013-03-01T00:00:00.000
{ "year": 2013, "sha1": "04aed948b4025d6534e27ae233d08c08c4f2c0cd", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/bn/a/z5vBwQ4y9r49VKYcBSwRbCm/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "04aed948b4025d6534e27ae233d08c08c4f2c0cd", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Biology" ] }
1133812
pes2o/s2orc
v3-fos-license
A novel method for spectrophotometric determination of pregabalin in pure form and in capsules Background Pregabalin, a γ-amino-n-butyric acid derivative, is an antiepileptic drug not yet official in any pharmacopeia, and the development of analytical procedures for this drug in bulk/formulation forms is a necessity. We herein report a new, simple, extraction-free, cost-effective, sensitive and reproducible spectrophotometric method for the determination of pregabalin. Results Pregabalin, as a primary amine, was reacted with ninhydrin in phosphate buffer pH 7.4 to form a blue-violet colored chromogen which could be measured spectrophotometrically at λmax 402.6 nm. The method was validated with respect to linearity, accuracy, precision and robustness. The method showed linearity in a wide concentration range of 50-1000 μg/mL with a good correlation coefficient (0.992). The limit of detection was found to be 6.0 μg/mL and the quantitation limit was 20.0 μg/mL. The suggested method was applied to the determination of the drug in capsules. No interference could be observed from the additives in the capsules. The percentage recovery was found to be 100.43 ± 1.24. Conclusion The developed method was successfully validated and applied to the determination of pregabalin in bulk and pharmaceutical formulations without any interference from common excipients. Hence, this method can be potentially useful for routine laboratory analysis of pregabalin. Background Pregabalin (PRG), (S)-3-(aminomethyl)-5-methylhexanoic acid (Figure 1), is an antiepileptic drug structurally related to the inhibitory neurotransmitter γ-aminobutyric acid (GABA). It was recently approved for adjunctive treatment of partial seizures in adults [1,2] in the United States and Europe, and for the treatment of neuropathic pain from post-herpetic neuralgia and diabetic neuropathy. Currently, there is no official analytical procedure for pregabalin in any pharmacopeia. Several reports are available in the literature for PRG determination based on chromatographic methods, i.e., gas chromatography-mass spectrometry (GC-MS), LC-MS-MS [3,4] and HPLC [5][6][7], coupled with varying detection techniques like tandem mass spectrometry [8], fluorometry [9] and enantiospecific analysis [10]. These methods may involve procedural variations including pre- and post-column derivatization [10]. Recently, a capillary electrophoresis and nuclear magnetic resonance technique was reported for PRG involving complexation with cyclodextrins [11]. All these are complex trace analysis techniques, most of which have been employed for PRG determination in biological fluid samples. However, routine analysis of the drug in bulk powder and pharmaceutical preparations in research laboratories and the pharmaceutical industry requires a relatively uncomplicated and more cost-effective method like UV/visible spectrophotometry or spectrofluorometry. Pregabalin, as such, has a poor UV/visible absorbance profile (Figure 2), and very few reported methods have relied on generation of a chromophoric product by reaction of the drug with a suitable reagent. Considering the limited literature reports available in this area [12][13][14], we found it very pertinent to investigate and develop a novel spectrophotometric method for determination of pregabalin in bulk powder and pharmaceutical preparations. Ninhydrin has been used as a chromogenic agent in spectrophotometric analysis of several amino acids, peptides and amines [15].
The present study describes the evaluation of ninhydrin as a chromogenic reagent in the development of a simple and rapid spectrophotometric method for the determination of PGB in its pharmaceutical dosage forms. The procedure does not involve any extraction step with any organic solvent and can be carried out directly in phosphate buffer pH 7.4, which makes it ideal for routine analysis of the drug in bulk or in pharmaceutical formulations. Method In our efforts to design a novel spectrophotometric method for quantification of pregabalin, we investigated its derivatization with ninhydrin (triketohydrindene hydrate) for generation of a chromophoric product. Figure 3 shows the UV/visible spectrum of the chromophoric derivative. The procedure involves formation of a purple colored product by reaction of pregabalin with ninhydrin on heating at a temperature of 70-75°C for 20 minutes. Reaction of ninhydrin 1 with amines, alpha-amino acids, peptides and proteins yields an aldehyde with one carbon atom less than the alpha-amino acid, carbon dioxide in stoichiometric amounts and varying amounts of ammonia, hydrindantin and a chromophoric compound known as Ruhemann's Purple (2-(3-hydroxy-1-oxo-1H-inden-2-ylimino)-2H-indene-1,3-dione). This pigment serves as the basis of detection and quantitative estimation of alpha-amino acids [13]. The mechanism proposed (Figure 4) for the reaction involves removal of a water molecule from ninhydrin hydrate 1 to generate 1,2,3-indantrione 2 in the first step, which then forms a Schiff's base with the amino group of pregabalin, resulting in the ketimine 3. Removal of the aldehyde RCHO generates an intermediate amine 4 (2-amino-1,3-indandione). Condensation of this intermediate amine with another molecule of ninhydrin follows, to form the expected chromophore 5 (Ruhemann's Purple). The rate-determining step in the entire sequence of the ninhydrin reaction is the nucleophilic-type displacement of a hydroxy group of ninhydrin hydrate by a non-protonated amino group. Effect of pH Of the buffers investigated (acetate buffer, phosphate buffer), colour development was noted in the case of phosphate buffer. The optimum buffer pH was found to be 7.4, and lower pH ranges resulted in insufficient colour development. Effect of reagent concentration The addition of 1.0 mL of ninhydrin solution (0.2% w/v) was sufficient to obtain maximum and reproducible absorbance values for the various concentration ranges of PGB. Smaller amounts resulted in incomplete reaction. Further increase in the concentration had no significant effect on complex formation, although absorbance increased slightly owing to the reagent background. Effects of temperature and heating time The effects of temperature and heating time on the formation of the coloured complex were also optimized. At room temperature, the addition of ninhydrin did not lead to the formation of any coloured product, and higher temperatures were required to accelerate the reaction. The colour intensity increased with increasing temperature, and maximum absorbance was obtained following heating on a water bath at a temperature of 70-75°C for 20 minutes. Further heating caused no appreciable change in the colour. The complex obtained was highly stable for more than 6 h. Validation The method was validated with respect to linearity and range, accuracy and precision, limit of detection (LOD) and limit of quantification (LOQ), selectivity and robustness.
The developed method was validated for the pure drug as well as for a marketed formulation of pregabalin (Pregabalin 75; Torrent Pharmaceuticals), and the various validation parameters are shown in Table 1. Linearity and range The regression plots showed compliance with Beer-Lambert's law (linearity) in the concentration range of 50 μg/mL-1000 μg/mL with a correlation coefficient (r²) of 0.992. The standard plot is given in Figure 5. Table 1 summarizes the performance data and statistical parameters for the proposed method, including concentration ranges, linear regression equation, correlation coefficient, molar absorptivity and Sandell sensitivity limit, and these indicate a good linearity over the working concentration ranges. Precision Precision was investigated by analyzing different concentrations of pregabalin (0.2-1.4 mg/mL) in three independent replicates on the same day (intra-day precision) and on three consecutive days (inter-day precision). The data are represented as relative standard deviation (RSD), and the results are shown in Table 2. Low relative standard deviation (RSD) values for intra-day and inter-day analysis indicate good precision of the method. Accuracy The accuracy of the method signifies the closeness of the measured value to the true value for the sample. To determine the accuracy of the proposed method, different levels of drug concentrations were prepared from independent stock solutions and analyzed. To provide additional support to the accuracy of the developed assay method, a standard addition method was employed, which involved the addition of different concentrations of pure drug to a known pre-analyzed dilution of the pure drug as well as of the formulation sample, and the total concentration was determined using the proposed method. Accuracy was assessed as the percentage relative error (Er) and mean % recovery. The percentage recovery of the added pure drug was calculated as: % recovery = [(Ct - Ci)/Ca] × 100, where Ct is the total drug concentration measured after standard addition; Ci, the drug concentration in the formulation sample; and Ca, the drug concentration added. The percentage relative error was calculated as: Er% = [(found - added)/added] × 100. Recovery values from the standard addition method followed for the bulk drug analysis ranged from 96.5 to 100.3% (Table 3). Recovery studies with the marketed formulation returned values ranging from 97.09 to 99.74% (Table 4). Interference Satisfactory values of the mean recovery ± SD, RSD% and Er% in recovery studies on the drug formulation (Table 4) revealed that there is no potential interference from the excipients listed by the manufacturer, i.e., talc, lactose monohydrate and maize starch. This may be attributed to the dependence of the reaction in the proposed method on the presence of a primary aliphatic amino group in the drug molecule, which is not present in any of these excipients. Limit of detection (LOD) and limit of quantitation (LOQ) The LOD and LOQ of the method were established using calibration standards (Table 1). LOD and LOQ were calculated as 3.3 σ/s and 10 σ/s, respectively, as per ICH definitions, where σ is the standard deviation of replicate determination values under the same conditions as the sample analysis in the absence of the analyte (blank determination), and s is the sensitivity, namely the slope of the calibration graph. Robustness Repeatability is based on the results of the method operating over a short interval of time under the same conditions.
Robustness was examined by evaluating the influence of small variations in different experimental conditions, such as heating temperature (± 2°C), working wavelength, and volume and concentration of reagents. Three replicate determinations at six different concentration levels of the drug were carried out. The within-day RSD values were found to be less than 0.6%, indicating that the proposed method has reasonable robustness. Stability The stability of the final sample solutions was examined by monitoring their absorbance values, and the responses were found to be stable for at least 6 hours at room temperature. Table 5 gives the results of the assay for pregabalin carried out on the marketed formulation by the proposed method and reveals close agreement between the results obtained by the proposed method and the label claim. The recovered drug content was found to be 99.48%. Analytical applications The results with the proposed method for the determination of pregabalin in its pharmaceutical formulation (Pregabalin75 capsules) suggest satisfactory recovery. Further, the standard addition technique followed to check the validity of the method gave good recoveries of the drug in the presence of the formulation, suggesting no interference from formulation excipients. Hence, this method can be recommended for adoption in the routine analysis of pregabalin in quality control laboratories. Conclusion The method proposed is simple, rapid, inexpensive and sensitive for the determination of pregabalin in bulk as well as in marketed form (capsules). There is no requirement for any sophisticated apparatus as in chromatographic methods. Omission of an extraction step with organic solvents is an added advantage. The method has been validated in terms of its sensitivity, simplicity, reproducibility, precision, accuracy and the stability of the coloured species for ≥ 6 h, suggesting its suitability for the routine analysis of PGB in pure form (in bulk analysis) as well as in pharmaceutical formulations without interference from excipients. Apparatus All absorption spectra were recorded using a Perkin Elmer Lambda 15 UV-visible spectrophotometer (Germany) with a scanning speed of 60 nm/min and a band width of 2.0 nm, equipped with 10 mm matched quartz cells. A CyberScan pH 510 (Eutech Instruments) pH meter was used for checking the pH of buffer solutions. Materials and reagents All chemicals and materials were of analytical grade and were purchased from Qualigens Fine Chemicals, Mumbai, India. All solutions were freshly prepared in double distilled water. Pure samples Pregabalin (PGB) pure grade was graciously provided as a gift sample by Vardhman Chemtech Limited, Derabassi, Punjab, India. Market samples Pregabalin75 capsules (label amount 75 mg PGB/capsule; Torrent Pharmaceuticals) were purchased from the market. Preparation of phosphate buffer pH 7.4 Phosphate buffer pH 7.4 was prepared by mixing 250 mL of 0.2 M potassium dihydrogen phosphate with 195.5 mL of 0.2 M NaOH and making up the volume to 1000 mL with distilled water. The pH of the buffer was adjusted to 7.4 using a precalibrated pH meter. Standard stock solutions A stock solution of pregabalin (2 mg/mL) was prepared by dissolving 200 mg of pregabalin in 100 mL of phosphate buffer (pH 7.4). Preparation of ninhydrin solution The 0.2% solution of ninhydrin was prepared by dissolving 200 mg of ninhydrin in 100 mL of ethanol and was kept in an amber colored bottle.
Standard plot
Different aliquots were taken from the stock solution (2 mg/mL) and diluted with phosphate buffer pH 7.4 to prepare a series of concentrations ranging from 50 to 1000 μg/mL of pregabalin. To 5.0 mL of these aliquots taken in stoppered tubes, 1.0 mL of ninhydrin solution (0.2% w/v) was added, and the mixtures were heated on a water bath at 70–75 °C for 20 minutes. The tubes were kept covered to avoid loss of solvent due to evaporation. After cooling the solutions to room temperature, the absorbance values were measured in triplicate at 402.6 nm against a mixture of 5.0 mL phosphate buffer (pH 7.4) and 1.0 mL of 0.2% ninhydrin as the reagent blank. The calibration graph was obtained by plotting the absorbance values at the λmax of the drug (402.6 nm) against the corresponding concentration values, and compliance with the Beer–Lambert law was assessed.

Analysis of pharmaceutical formulation
Preparation of capsule sample solution
The contents of twenty capsules were mixed and weighed accurately. Separate quantities of the powder equivalent to 30 mg, 60 mg and 90 mg of PGB were transferred into 100 mL volumetric flasks, dissolved in water and sonicated for 5 min; the volume was then made up with water, shaken well for 5 min and filtered into a dry flask. To 5.0 mL aliquots of the filtrate taken in stoppered tubes, 1.0 mL of ninhydrin solution (0.2% w/v) was added, and the solutions were heated on a water bath at 70–75 °C for 20 minutes. The solutions were cooled to room temperature and the absorbance values noted in triplicate at 402.6 nm against the reagent blank.
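To connect the calibration procedure above with the linearity and LOD/LOQ figures reported earlier, the sketch below fits a straight line to absorbance-versus-concentration pairs and applies the ICH 3.3σ/s and 10σ/s formulas. All numbers are invented placeholders rather than the measured data of this study, and the example assumes numpy is available.

```python
import numpy as np

# Hypothetical calibration data (concentration in ug/mL, absorbance at 402.6 nm)
conc = np.array([50, 200, 400, 600, 800, 1000], dtype=float)
absorb = np.array([0.045, 0.170, 0.338, 0.512, 0.672, 0.841])

# Least-squares fit: A = slope * C + intercept (Beer-Lambert linearity)
slope, intercept = np.polyfit(conc, absorb, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((absorb - pred) ** 2) / np.sum((absorb - absorb.mean()) ** 2)

# ICH limits from the standard deviation of replicate blank determinations (sigma)
sigma_blank = 0.0012  # hypothetical SD of reagent-blank absorbances
lod = 3.3 * sigma_blank / slope
loq = 10 * sigma_blank / slope

print(f"slope = {slope:.5f} AU per ug/mL, r2 = {r2:.4f}")
print(f"LOD = {lod:.2f} ug/mL, LOQ = {loq:.2f} ug/mL")
```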
Application of Soluplus to Improve the Flowability and Dissolution of Baicalein Phospholipid Complex In this study, a novel ternary complex system (TCS) composed of baicalein, phospholipids, and Soluplus was prepared to improve the flowability and dissolution for baicalein phospholipid complex (BPC). TCS was characterized using differential scanning calorimetry (DSC), infrared spectroscopy (IR), powder X-ray diffraction (PXRD), and scanning electron microscopy (SEM). The flowability, solubility, oil–water partition coefficient, in vitro dissolution, and in vivo pharmacokinetics of the system were also evaluated. DSC, IR, PXRD, and SEM data confirmed that the crystal form of baicalein disappeared in BPC and TCS. Furthermore, the angle of repose of TCS of 35° indicated an improvement in flowability, and solubility increased by approximately eight-fold in distilled water when TCS was compared with BPC (41.00 ± 4.89 μg/mL vs. 5.00 ± 0.16 μg/mL). Approximately 91.24% of TCS was released at the end of 60 min in 0.5% SDS (pH = 6.8), which suggested that TCS could improve the dissolution velocity and extent. Moreover, TCS exhibited a considerable enhancement in bioavailability with higher peak plasma concentration (25.55 μg/mL vs. 6.05 μg/mL) and increased AUC0–∞ (62.47 μg·h/mL vs. 50.48 μg·h/mL) with 123.75% relative bioavailability compared with BPC. Thus, Soluplus achieved the purpose of improving the flowability and solubility of baicalein phospholipid complexes. The application of Soluplus to phospholipid complexes has great potential. Introduction Baicalein ( Figure 1A) is a bioactive ingredient of Radix Scutellariae. Baicalein has been reported with various pharmacological effects, such as anti-cancer [1], anti-tumor [2], anti-inflammatory [3], anti-pathogen [4], and antioxidant functions [5]. Ma's research showed that baicalein plays a vital role in suppressing metastasis of breast cancer cells through downregulation of both SATB1 and Wnt/β-catenin [6]. Moreover, baicalein could regulate bone formation via the mTORCI pathway [7]. However, Wu reported that baicalein is a Biopharmaceuticals Classification System (BCS) class IV compound because of its low solubility (solubility of 0.052 mg/mL in water) and poor lipophilicity (Papp = 0.037 × 10 −6 cm/s) [8]. The poor solubility and permeability of baicalein limit its oral absorption and bioavailability. Given the phospholipid's excellent biocompatibility and unique amphiphilicity, a drugphospholipid complex has been employed as a technique to improve the oral absorption of the drugs which belong to BCS class III and IV [9][10][11]. Devendra Singh Rawat pointed out that the water/n-octanol solubility of baicalein was improved in baicalein phospholipid complex (BPC) [12]. Nevertheless, the drug-phospholipid complex's strong lipid solubility can have a disadvantageous influence on the drug dissolution rate. Moreover, the non-flowing character of BPC as a semi-solid station limits its application in solid preparation. Soluplus ( Figure 1B) has been used as an excipient in many reports and with other processing methods, such as spray drying [13][14][15][16]. Soluplus has a natural amphiphilic structure that makes it miscible with hydrophobic drugs and maintains its aqueous solubility because of vinyl acetate, vinyl caprolactam blocks, and ethylene glycol blocks [17,18]. Hua Yang applied Soluplus as a stabilizer in the preparation of a nanosuspension to improve the bioavailability of fenofibrate [19]. 
Alireza Homayouni adopted antisolvent precipitation and high-pressure homogenization techniques with Soluplus to enhance the dissolution of celecoxib [20]. Andres Lust prepared piroxicam and Soluplus into amorphous solid dispersions for qualitative and quantitative analyses of recrystallization during storage [21,22].

In the present study, a novel ternary complex system (TCS) composed of baicalein, phospholipids, and Soluplus was prepared to improve the flowability and dissolution in vitro of BPC. Due to its ameliorative flowability, the successful development of TCS has created favorable conditions for large-scale industrial production of baicalein. Likewise, the improvement in dissolution is also advantageous for the improvement of bioavailability. Based on these findings, we conclude that Soluplus achieved the purpose of improving the flowability and solubility of BPC. The incorporation of safe and effective excipients into the phospholipid complex provides a new idea for its better development. In our design, TCS was prepared via solvent evaporation with baicalein, phospholipids, and Soluplus. First, TCS was characterized using differential scanning calorimetry (DSC), infrared spectroscopy (IR), powder X-ray diffraction (PXRD), and scanning electron microscopy (SEM). Second, flowability, solubility, oil-water partition coefficient, and dissolution in vitro were evaluated. Finally, the baicalin content was detected in rats after oral administration to further assess bioavailability via High Performance Liquid Chromatography-Electrospray Ionization-Mass Spectrometry/Mass Spectrometry (HPLC-ESI-MS/MS) [23].

Materials
Baicalein and baicalin were purchased from Sichuan Weikeqi Biology Technique Co., Ltd. Soluplus was kindly gifted by BASF SE (Ludwigshafen, Germany). Icariin was purchased from the China Institute of Pharmaceutical and Biological Products and employed as an internal standard. All other chemical reagents used in the experiments were of analytical grade or better. Purified Milli-Q water was utilized in the experiments (Millipore, Billerica, MA, USA).
Preparation of BPC
The drug and phospholipid (1:2 mass ratio) were used to prepare BPC. In brief, baicalein and phospholipid (phosphatidylcholine) were dissolved in 10 mL of absolute ethanol. The solution was magnetically stirred for 0.5 h and then vacuum dried for 24 h to obtain the solid product, which was stored in a desiccator.

Preparation of TCS
The drug and phospholipid (BPC) with Soluplus were weighed at a mass ratio of 1:2:2 and dissolved in 10 mL of absolute ethanol. The first two components were mixed and dissolved by magnetic stirring for 0.5 h. The weighed Soluplus was then added to the above mixed solution and stirred until it dissolved completely. Further stirring for 0.5 h yielded the Soluplus complex with a mass ratio of 1:2:2. The complex was vacuum dried for 24 h to obtain the solid powder, which was then stored in a desiccator.

Preparation of Physical Mixture (PM) of TCS
PM was prepared by thoroughly mixing baicalein, phospholipid, and Soluplus (1:2:2 mass ratio) in a mortar for 10 min until a homogenous mixture was obtained. The sample was stored in a desiccator.

Characterization of the Sample
Differential Scanning Calorimetry (DSC)
The thermal properties of baicalein, BPC, TCS, and PM of TCS were studied using a differential scanning calorimeter (DSC 449F3, Netzsch, Selb, Germany) equipped with a thermal analysis system. Dry nitrogen was used as the purge gas (purge rate 40 mL/min). Appropriate amounts of the baicalein, BPC, TCS, and PM samples were weighed into aluminum pans. The samples were heated from 0 to 300 °C at a rate of 20 °C/min [24].

Infrared Spectroscopy (IR)
The IR spectra of baicalein, BPC, TCS, and PM of TCS were obtained using an IR spectrophotometer (Ominic, New York, NY, USA).
Samples were scanned over the wavenumber range of 4000–400 cm−1.

Powder X-ray Diffraction (PXRD)
The crystalline state of the prepared baicalein, BPC, TCS, and PM of TCS was acquired at room temperature with a Cu Kα radiation source at 40 kV and 25 mA via XRD (Bruker AXS, D8, Karlsruhe, Germany).

Scanning Electron Microscopy (SEM)
The granule morphology of baicalein, BPC, TCS, and PM of TCS was determined using an SEM (6390LV, Tokyo, Japan). The samples were sputter-coated (E-1010, Hitachi Ltd., Tokyo, Japan) with gold-palladium and then observed at different magnifications.

Flowability
After TCS was added into the hopper, it was allowed to flow uniformly through the funnel until the highest point of the cone was reached. The height (h) and radius (r) of the cone were then measured to calculate the angle of repose (θ).

Solubility
Water or n-octanol (5.0 mL) was added to excess baicalein, BPC, and TCS to determine solubility in a shaking bath at 37 °C for 24 h [25]. The liquids were shaken to equilibrium and centrifuged at 15,000 rpm for 10 min to remove excess baicalein and TCS. The supernatant was filtered through a 0.45 µm membrane, and a 10 µL aliquot of the resulting solution was injected into the HPLC system. Experiments were performed in triplicate.

Oil-Water Partition Coefficient Studies
After 2.0 mL of baicalein-, BPC-, or TCS-saturated n-octanol (water-saturated) was shaken to equilibrium, 2.0 mL of water (n-octanol-saturated) was added. The mixture was then agitated for 24 h, and the concentration in each phase was determined using HPLC after standing for layering. Experiments for each sample were performed in triplicate.

Chromatographic Conditions
All samples were analyzed using a Waters HPLC with a quaternary pump (Waters 2695 separation module, Waters 2996 PDA detector) on a Diamonsil C18 column (200 mm × 4.6 mm, 5 µm) at 276 nm, maintained at 35 °C. The mobile phase was methanol-0.1% formic acid (60:40, v/v), and the flow rate was 1.0 mL/min. The injection volume was 10 µL. The method was optimized from the protocol described by Zhang [26].

Dissolution
The rotating basket method with an automated dissolution apparatus (RCZ-8M, Shanghai, China) was used according to appendix XC of the Chinese Pharmacopoeia 2010 edition. The samples, which were equivalent to 10 mg of baicalein, were filled into hard gelatin capsules and placed in the dissolution vessel containing 900 mL of 0.5% SDS with phosphate buffer (pH 2.0 or 6.8). The vessel was maintained at 37 ± 0.5 °C and stirred at 100 rpm. Approximately 2.0 mL of sample was withdrawn from the dissolution medium and replaced by an equivalent volume of fresh medium at 5, 10, 20, 30, 45, and 60 min. The baicalein content was determined using HPLC after the sample was filtered through a 0.45 µm membrane. Experiments for each sample were performed in triplicate.

Pharmacokinetic Study In Vivo
Animals
A total of 24 male SD rats (SPF grade) weighing 220 ± 10 g were purchased from Benbu Yinuogui Biology Technique Ltd. The rats were kept in the Animal Research Center to acclimatize to the new environment. The rats were then randomly divided into four groups, namely baicalein, BPC, TCS, and PM of TCS.

Plasma Sample Preparations
All animal experimental protocols were approved by the Animal Care Committee of the Institute of Chinese Medicine in Jiangsu Province. The rats were fasted for 12 h and provided free access to water prior to the experiment. Each group was orally administered a single dose equivalent to 40 mg/kg baicalein [27].
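The flowability measurement described above reduces to a single trigonometric relation. A minimal sketch of that calculation is shown below, using made-up cone dimensions and the flow-classification thresholds quoted later in the results.

```python
import math

def angle_of_repose(height_cm, radius_cm):
    """Angle of repose from cone geometry: tan(theta) = h / r."""
    return math.degrees(math.atan(height_cm / radius_cm))

def classify_flow(theta_deg):
    # Thresholds as quoted in the text: <25 very good, 25-50 good, >50 poor
    if theta_deg < 25:
        return "very good flow"
    if theta_deg < 50:
        return "good flow"
    return "poor flow"

# Hypothetical cone measured after pouring TCS powder through the funnel
h, r = 2.1, 3.0  # cm
theta = angle_of_repose(h, r)
print(f"theta = {theta:.1f} deg -> {classify_flow(theta)}")  # ~35 deg -> good flow
```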
An aliquot of 300 µL of blood was taken from the eye ground veins at 0.12, 0.25, 0.50, 0.75, 1.0, 1.5, 2.0, 3.0, 5.0, 9.0, 13, and 24 h after oral administration. The supernatant was retained after centrifugation at 4500 rpm for 10 min. Plasma samples were collected and stored at −80 °C until further use.

Plasma Sample Handling
Baicalein and baicalin (corresponding to 5.75 and 5.06 µg/mL), 100 µL of plasma, 100 µL of mixed standard solution, and 10 µL of internal standard solution (5.60 µg/mL) were mixed and vortexed for 15 s. Approximately 0.5 mL of methanol and 100 µL of 1 M KH2PO4 were added and vortexed for another 3 min. The mixture was centrifuged at 15,000 rpm for 5 min prior to evaporation under nitrogen. The dried residue was dissolved in 100 µL of methanol-distilled water (1:1, v/v). After being centrifuged again at 15,000 rpm for 5 min, the supernatant was injected for HPLC-ESI-MS/MS analysis. The MS/MS system (Waters-2695-MicroMass Quattro Micro, Milford, CT, USA) was operated in positive mode and multiple reaction monitoring mode. The MS conditions were as follows: ion spray voltage of +5.5 kV; nitrogen as nebulizer gas, auxiliary gas, and curtain gas at 30, 60, and 10 psi, respectively; collision gas set at medium; and auxiliary gas temperature of 400 °C.

Analysis of Data
The maximum concentration (Cmax) and time to maximum concentration (Tmax) were directly determined from the concentration-time data. Other pharmacokinetic parameters (AUC0-24 h and t1/2) were analyzed using the pharmacokinetic program DAS 2.1.1. ANOVA was performed using SPSS 16.0 software.

Particle Morphology
The SEM images of baicalein, PM of TCS, BPC, and TCS are presented in Figure 2. Baicalein (Figure 2A) exhibited an almost rectangular crystal. TCS (Figure 2D) was flake-like in shape, which was a great change compared with baicalein, PM of TCS (Figure 2B), and BPC (Figure 2C). The disappearance of crystals in TCS indicated the complete miscibility of the drug, phospholipid, and Soluplus. PM of TCS also exhibited crystals, although they were wrapped by phospholipid and Soluplus.
Infrared Spectroscopy
Figure 3 shows the IR spectra of the samples, which confirmed the formation of TCS compared with the original drug based on the characteristic chemical bonds. PM of TCS (Figure 3B) exhibited a hydroxyl stretching band at 3478.68 cm−1 and a carbon-hydrogen band at 2919 cm−1. The spectrum of the baicalein phospholipid complex also showed the characteristics of the baicalein infrared spectrum and was consistent with the trend of the infrared spectrum of PM, with no significant difference, indicating that the formation of the phospholipid complex did not produce new chemical bonds between molecules. However, the intensity of both bands in TCS (Figure 3D) was weaker, and the overall trend of the curve and the characteristic peaks had undergone great changes because of the presence of Soluplus in the phospholipid complex; all results indicated TCS formation.

X-ray Diffraction Pattern
The X-ray diffraction patterns of baicalein, PM of TCS, TCS, and BPC are shown in Figure 4. The diffraction peaks of baicalein crystal were observed at the corresponding diffraction angles (2θ), indicating that the drug was present as a crystalline material. Characteristic baicalein peaks also appeared in PM, but disappeared in TCS and BPC. This result suggested that baicalein in TCS existed completely in the amorphous phase.

Differential Scanning Calorimetry
DSC can screen drug-excipient compatibility and provide information about the interactions between them. Figure 5 shows the DSC curves of baicalein (Figure 5A), PM of TCS (Figure 5B), BPC (Figure 5C), and TCS (Figure 5D). The DSC thermograms showed that baicalein (Figure 5A) had an endothermic peak at about 265 °C corresponding to the melting point of baicalein, which indicated its crystalline nature. The weaker peak appearing at the same temperature in Figure 5B indicates that baicalein still exists in crystalline form in PM, whereas Figure 5C,D showed a horizontal line. We speculated that the melting point of the drug-phospholipid complex may be changed so as to make it undetectable using DSC. These observations indicated that baicalein in TCS could exist in amorphous form due to the possible inhibitory effect of Soluplus on drug crystallization, which was consistent with the SEM results.
Flowability of TCS
The flow of powders as assessed by the angle of repose is based on inter-particle cohesion: values less than 25° are suggestive of "very good flow", values equal to or greater than 25° but less than 50° indicate "good flow", and values greater than 50° indicate "poor flow" [28]. The flowability of TCS was 35° according to the formula tan θ = h/r, qualifying it as having "good flow". By contrast, the flowability of BPC could not be measured due to its semi-solid state, and it was therefore defined as a non-flowing material. Soluplus as a carrier played a role in the solidification of BPC; TCS showed a significant improvement in flowability compared with BPC, thus achieving the purpose of this experiment.
Solubility and Oil-Water Partition Coefficient
Figure 6 displays the solubility data of baicalein, BPC, and TCS in distilled water and n-octanol, and Table 1 exhibits the log P values of baicalein, BPC, and TCS. The solubility of TCS (41 ± 4.89 µg/mL) in distilled water was higher than that of BPC (5.02 ± 0.09 µg/mL) (p < 0.01), whereas the solubility of TCS (230 ± 8.78 µg/mL) in n-octanol was slightly lower than that of BPC (260 ± 7.52 µg/mL). Moreover, TCS had a lower log P value than BPC (2.01 vs. 2.04). The increase in solubility in distilled water and the decrease in n-octanol may be caused by the natural hydrophilic structure of Soluplus. Therefore, TCS could enhance the water solubility of BPC.
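The apparent partition coefficient reported above is simply the log-ratio of the drug concentration in the n-octanol phase to that in the aqueous phase after equilibration. The sketch below illustrates the calculation with hypothetical HPLC-determined concentrations, not the measured values of this study.

```python
import math

def log_p(conc_octanol, conc_water):
    """Apparent partition coefficient: log P = log10(C_octanol / C_water)."""
    return math.log10(conc_octanol / conc_water)

# Hypothetical equilibrium concentrations (ug/mL) from the shake-flask experiment
print(round(log_p(230.0, 2.1), 2))  # e.g. ~2.04 for a BPC-like sample
print(round(log_p(205.0, 2.0), 2))  # e.g. ~2.01 for a TCS-like sample
```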
Dissolution
Figure 7 shows the cumulative dissolution for different proportions of BPC (in which the mass ratio of baicalein to phospholipid was 1:2) and Soluplus. Baicalein dissolved to almost 40% and 30% in 60 min in 0.5% SDS with phosphate buffer at pH 6.8 and pH 2.0, respectively, whereas TCS dissolved to nearly 90%, a considerably greater extent than baicalein. When the mass ratio of BPC to Soluplus was 1:2, the amount dissolved was higher than at a ratio of 1:1; however, no distinguishable enhancement in dissolution was exhibited at 1:4 compared with 1:2. Both dissolution media showed the same phenomenon. Based on the present results, we conclude that the optimal mass ratio of BPC to Soluplus is 1:2.

Figure 8 presents the dissolution profiles of baicalein, BPC, TCS, and PM of TCS in 0.5% SDS with phosphate buffer at pH 6.8 and pH 2.0. TCS exhibited higher dissolution than BPC at the end of 60 min in A and B (91.24% vs. 46.58% and 73.35% vs. 35.43%, respectively). Moreover, TCS reached 84.26% in 20 min, which was nine-fold higher than BPC (8.55%) in A, and reached 50.56% in B compared with BPC (8.30%). The BPC and baicalein groups showed almost no dissolution within the first 10 min, which indicated that the viscosity of the phospholipid complex could hinder the dissolution velocity and that the phospholipid decreased baicalein's water solubility.
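Because 2.0 mL of medium is withdrawn and replaced at each time point, the cumulative percentage released has to be corrected for the drug removed in earlier samples. A minimal sketch of that correction, with invented concentration readings, is shown below; the correction scheme is a standard assumption rather than a detail stated in the original protocol.

```python
def cumulative_release(conc_mg_per_ml, v_vessel=900.0, v_sample=2.0, dose_mg=10.0):
    """Cumulative % released, correcting for drug removed with each 2 mL sample."""
    released = []
    withdrawn = 0.0  # mg of drug removed in previous samples
    for c in conc_mg_per_ml:
        amount = c * v_vessel + withdrawn
        released.append(100.0 * amount / dose_mg)
        withdrawn += c * v_sample
    return released

# Hypothetical concentrations (mg/mL) measured at 5, 10, 20, 30, 45 and 60 min
conc = [0.002, 0.005, 0.0094, 0.0098, 0.0100, 0.0101]
print([round(x, 1) for x in cumulative_release(conc)])
```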
Bioavailability Analysis
Plasma concentration-time profiles are presented in Figure 9, and the corresponding pharmacokinetic parameters are summarized in Table 2. The present study showed only one peak, which was in accordance with the findings of previous studies [29], although other studies have reported a two-peak phenomenon [30,31]. Given that baicalein itself cannot be detected after oral administration and baicalin is predominant in the plasma when baicalein is administered orally, baicalein absorption can be assessed by measuring the concentrations of baicalin and other baicalein glycosides. TCS peaked at 0.63 h (25.55 µg/mL), showing considerable improvement (p < 0.01) compared with BPC, which peaked at 1.01 h (6.05 µg/mL). TCS exhibited a marked enhancement in oral bioavailability compared with BPC, with increases in AUC0-24 h (53.16 µg·h/mL vs. 38.40 µg·h/mL) (p < 0.05) and AUC0-∞ (62.47 µg·h/mL vs. 50.48 µg·h/mL) (p < 0.05). The relative bioavailability of TCS was approximately 123.75% compared with BPC, confirming the enhanced bioavailability of the complex. Similarly, a four-fold increase in Cmax (25.55 µg/mL vs. 6.05 µg/mL) was observed.
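The pharmacokinetic parameters quoted above (Cmax, Tmax, AUC by the trapezoidal rule and relative bioavailability) can be reproduced from a concentration-time profile with a few lines of code. The sketch below uses invented concentration values purely for illustration, not the data in Table 2, and assumes the same oral dose for both formulations.

```python
def pk_parameters(times_h, conc_ug_ml):
    """Non-compartmental Cmax, Tmax and AUC(0-t) by the linear trapezoidal rule."""
    cmax = max(conc_ug_ml)
    tmax = times_h[conc_ug_ml.index(cmax)]
    auc = sum((t2 - t1) * (c1 + c2) / 2.0
              for t1, t2, c1, c2 in zip(times_h, times_h[1:], conc_ug_ml, conc_ug_ml[1:]))
    return cmax, tmax, auc

# Hypothetical profiles at the sampling times used in the study (h)
t = [0.12, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0, 3.0, 5.0, 9.0, 13.0, 24.0]
tcs = [5.0, 12.0, 22.0, 25.5, 24.0, 18.0, 14.0, 9.0, 5.0, 2.5, 1.5, 0.5]
bpc = [1.0, 2.5, 4.5, 5.5, 6.0, 5.5, 5.0, 4.0, 3.0, 1.8, 1.2, 0.4]

cmax_t, tmax_t, auc_t = pk_parameters(t, tcs)
cmax_b, tmax_b, auc_b = pk_parameters(t, bpc)
print(f"TCS: Cmax={cmax_t} ug/mL, Tmax={tmax_t} h, AUC={auc_t:.1f} ug*h/mL")
print(f"Relative bioavailability = {100.0 * auc_t / auc_b:.1f} %")
```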
Soluplus has been adopted as a hydrophilic pharmaceutical excipient to improve solubility, in vitro dissolution, or in vivo bioavailability in previous studies, for example in solid dispersions [16], nanosuspensions [19], and self-emulsifying systems [32]. Soluplus has also been used as a stabilizer to prevent agglomeration and crystal growth by reducing the surface energy of fine particles [33]. In the present study, a considerable enhancement in the in vitro dissolution rate and extent and in the flowability of BPC was achieved by means of the application of Soluplus. The in vivo pharmacokinetic study showed that TCS could improve the Cmax and AUC0-∞ of BPC. All these results demonstrated that TCS may be applied to oral solid preparations of baicalein.

Conclusions
In our study, a novel TCS composed of baicalein, phospholipids, and Soluplus was successfully developed. The 35° angle of repose of TCS indicated an improvement in flowability, which met the industrial demand (θ < 40°). Moreover, TCS exhibited a marked enhancement in both the rate and extent of dissolution in vitro, as well as in the bioavailability parameters Cmax and AUC0-24 h, compared with BPC. The preparation method is simple and convenient, and it also opens a new pharmaceutical application for Soluplus as a safe and effective excipient. In conclusion, TCS is a promising approach to improve the flowability and dissolution of drug-phospholipid complexes.
Bacteriophage-host interactions as a platform to establish the role of phages in modulating the microbial composition of fermented foods Food fermentation relies on the activity of robust starter cultures, which are commonly comprised of lactic acid bacteria such as Lactococcus and Streptococcus thermophilus. While bacteriophage infection represents a persistent threat that may cause slowed or failed fermentations, their beneficial role in fermentations is also being appreciated. In order to develop robust starter cultures, it is important to understand how phages interact with and modulate the compositional landscape of these complex microbial communities. Both culture-dependent and -independent methods have been instrumental in defining individual phage-host interactions of many lactic acid bacteria (LAB). This knowledge needs to be integrated and expanded to obtain a full understanding of the overall complexity of such interactions pertinent to fermented foods through a combination of culturomics, metagenomics, and phageomics. With such knowledge, it is believed that factory-specific detection and monitoring systems may be developed to ensure robust and reliable fermentation practices. In this review, we explore/discuss phage-host interactions of LAB, the role of both virulent and temperate phages on the microbial composition, and the current knowledge of phageomes of fermented foods. INTRODUCTION It is estimated that the deliberate fermentation of foods and beverages as a means to extend their shelf-life has been practised for almost 13,000 years [1] .Early food fermentations were introduced at this time on every continent and fermentation substrates encompassed regionally and seasonally available raw materials, including animal milk, meats, fish, cereals, vegetables, legumes, seeds, roots, and fruits [2,3] .While the earliest fermentations were spontaneous and prone to quality variation and failure, modern industry has adapted these fermentations to facilitate large scale productions with highly reproducible outcomes.Consequently, an abundance of fermented foods is manufactured globally, and their combined commercial value is estimated at 30 billion dollars per annum [4] .In addition to the preservation of various products, fermentation can impart desirable organoleptic properties (i.e., textures, flavours, appearances, etc.) to the final product [5] .Furthermore, the contributions of fermented foods to satisfy human dietary requirements and support health are highlighted by the beneficial effects of live microorganisms (probiotics) and soluble factors released from inactivated probiotics (post-biotics), as well as by a wide range of macro-and micronutrients [6][7][8] . Food fermentation processes encompass biochemical transformations of various organic substrates to metabolites (i.e., lactic acid, alcohol, free fatty acids, ammonia, etc.) 
through the enzymatic activities of specific microorganisms [7] .One specific group of microorganisms is intrinsically associated with food fermentations, i.e., the lactic acid bacteria (LAB), which include genera such as Lactococcus, Leuconostoc, Pediococcus, Streptococcus, Enterococcus, Carnobacterium, Alkalibacterium, Lactobacillus, Lacticaseibacillus, Lactiplantibacillus, Levilactobacillus, Ligilactobacillus, Limosilactobacillus and Weissella [3] .Depending on their role, LAB can be divided into two groups: (1) starter LAB, which is primarily responsible for acidification; and (2) non-starter LAB (NSLAB) that typically play a role in the ripening and maturation process [9] .Since starter LAB initiate and control the overall fermentation process by reducing the pH of the raw starting material, the selection of robust and technologically appropriate starter strains is critical to obtain high-quality products [10] . Traditional and artisanal production systems commonly rely on the indigenous microbiota of the substrates or production vessels [11] .These "natural" starters are predicted to continuously evolve, and fermentation may be achieved through a process termed "back-slopping", where a portion of the previous fermentate is used to initiate the next round of fermentation [12] .This back-slopping approach is used in many regional production systems and utilises mixed strain starter (MSS) cultures whose composition is undefined [Table 1].Production systems that apply undefined MSS range from farmhouse, small-scale production systems to large-scale, modern industrial fermentations.In meat fermentations, members of the Lactobacillales and pediococci tend to be most abundantly present, while Staphylococcus carnosus and Micrococcus spp.may also be present.In vegetable fermentations, Lactiplantibacillus plantarum (among other Lactobacillales), Weisella and Pediococcus spp.are highly abundant [13] .In dairy fermentations, the microbiota is dependent on the fermentation temperature, being either mesophilic or thermophilic.Mesophilic fermentations typically incorporate strains of Lactococcus lactis or Lactococcus cremoris and, in some cases Leuconostoc and Lactobacillales.Thermophilic dairy fermentations typically include strains of S. thermophilus and members of the Lactobacillales.Artisanal production systems may also incorporate additional organisms (e.g., enterococci) through the application of unpasteurised milk or traditional production vessels.Undefined artisanal cultures that have been preserved (in place of the back-slopping approach) may be directly applied to initiate the fermentation, in which case the culture is not continuously evolving as the starter culture is derived from a master stock.From these undefined artisanal and mixed starter cultures, individual strains have been selectively isolated based on their desirable technological properties.These individual strains may be used in industrial fermentations in so-called "defined strain starter" (DSS) cultures in which a small number of strains (typically two to six strains) are combined to achieve products with specific organoleptic properties [Table 1] [14] .Such DSS cultures are widely applied in the production of Cheddar-style cheeses.Furthermore, individually isolated and characterised strains may be applied as adjunct cultures to (1) ensure rapid acidification of the substrate and/or (2) support cheese ripening [15,16] . 
Bacteriophages in fermented food Phages are viruses that specifically infect bacteria, and are considered the most abundant biological entities on earth that co-habit any ecological niche where bacteria exist [17] .Since DSS cultures comprise a small number of strains, bacteriophage (phage) infection of one or more strains in the culture may have a catastrophic impact on the production regime and the final properties of the product [18] .In contrast, phage infection of strains within complex mixed starter cultures is less likely to impact severely on the production regime, though product inconsistencies may occur.In the context of fermented vegetables, phage predation has been associated with the progression of the LAB landscape and which is central to the development of the flavour and aroma profile of these products [13] .LAB-infecting phages represent one of the most significant challenges in the dairy fermentation industry, with infection of starter cultures being a common cause of production delays or even complete arrest of the fermentation process [19] .Among phages of LAB, those that infect Lactococcus lactis, Lactococcus cremoris and Streptococcus thermophilus have been studied most extensively and will, therefore, be the primary focus of this review [11,20] . All currently known LAB phages belong to the Caudovirales order of tailed phages, which possess a doublestranded DNA-containing capsid [17] .Lactococcal phages are classified based on their morphology and nucleotide homology into 10 taxonomic groups, while an additional novel isolate called phage Nocturne116 was described recently [21,22] .Among the described lactococcal phage groups, three are most frequently encountered in industrial fermentations: the P335, Skunaviruses (formerly called 936) and the Ceduoviruses (formerly called c2) groups.Members of the Skunavirus and Ceduovirus groups are highly problematic groups as they are virulent phages, whereas members of the P335 group may be virulent or temperate [23,24] .While certain genomic regions of the Skunaviruses and Ceduoviruses are highly conserved (within a phage species), specific regions of diversification have also been reported, including those that encode hostbinding domains within their neck passage structure and tail proteins [25,26] .Conversely, the P335 phages are rather heterogenous and appear to possess a mosaic genome structure, likely as a result of genomic recombination between related phages [27,28] .Temperate phages may exist in the virulent state, or they may integrate their chromosomes into that of the host bacterial cell, in which state they are termed prophages. The decision between the lytic and lysogenic lifestyles of temperate phages is assumed to be dictated by the availability of host cells and environmental conditions/cues.While temperate phages can exist in a dormant, prophage state in the genomes of their host starter strains without adverse effects on fermentation, they present an ever-present risk to the fermentation process should they revert to the virulent state [29,30] . S. thermophilus phages are problematic in thermophilic production systems, and several studies have reported their global prevalence in and the corresponding impact on industrial food fermentations [31][32][33] .S. 
thermophilus phages were originally classified into two major groups based on their structural protein content and DNA packaging mechanisms [34] .These two phage groups were termed the cos and pac groups, and have recently been renamed the Moineauviruses and Brussowviruses, respectively [35] .In 2011, the novel phage isolate 5093 was identified as a representative of a new phage group, followed by the identification of two additional novel phage groups, i.e., the 987 and P738 groups [36][37][38] .While the Moineau-and Brussowviruses continue to be the most prevalent dairy streptococcal phage groups (69% and 29%, respectively), the emergence of novel phage groups, possibly through recombination with phages of other streptococcal species or LAB such as Lactococcus spp., highlights the ongoing need for phage monitoring. Moineauviruses are virulent phages, while members of the Brussowvirus group may be virulent or temperate [20] .While the incidence of prophage induction in S. thermophilus appears to be rather low (2%), many apparently complete prophages or their remnants are present in their chromosomes, facilitating genomic plasticity of their phages contributing to the mosaicism and diversification [33,34,[39][40][41] .Furthermore, it is noteworthy that recombination-driven genetic shuffling and exchange events of functional modules have been observed between lytic phages [32] . PHAGE-HOST INTERACTIONS To understand how phages influence the overall microbial community composition in fermented foods, it is important to consider the diversity and basis of phage-host interactions occurring in these communities.Phage infection commences with the recognition of, and adsorption to, a receptor on the host cell surface, often involving an initial reversible binding followed by irreversible binding and associated commitment to infection [42] .The initial reversible binding step is facilitated by the phage-encoded anti-receptor, which typically comprises one or more receptor binding proteins (RBPs) located at the distal end of the phage tail, commonly supported by auxiliary host binding proteins [26,43] .RBPs may recognise saccharidic, teichoic acid, and/or proteinaceous receptors [Table 2].Considerable research has been undertaken to understand phage-LAB host interactions, which has rendered them a paradigm for a diverse range of microorganisms, but particularly Gram-positive bacteria that are infected by tailed phages. Saccharidic receptors The interactions between lactococcal and streptococcal phages and their cognate host have been the focus of intense research scrutiny over the past three decades [56,[59][60][61] .The majority of streptococcal and lactococcal phages recognise saccharidic receptors: exopolysaccharide (EPS) or rhamnose-glucose polysaccharide (RGP), and cell wall polysaccharide structures (CWPS), respectively [56,62] .Dupont et al. 
[44] identified the role of the cwps gene cluster in lactococcal phage adsorption by means of random insertional mutagenesis of two Lactococcus strains (L.lactis IL1403 and Wg2).Through this approach, bacteriophage insensitive mutant derivatives exhibiting reduced phage adsorption capabilities were isolated.The cwps gene clusters of lactococci typically encompass a DNA region of 25-30 kbp [62][63][64][65] .Due to the ever-increasing number of available genome sequences, it has been possible to interrogate the functions and genetic diversity of these clusters [62][63][64][65] .The cwps gene cluster is responsible for the biosynthesis of two CWPS elements: the peptidoglycan-embedded rhamnan (whose biosynthetic machinery is encoded by the 5' portion of the cwps cluster) and surface-exposed polysaccharide pellicle (PSP; the biosynthesis of which is performed by enzymes that are encoded by the 3' portion of the cwps cluster).Of the 11 distinct groups of lactococcal phages, Skunavirus, P335, 1358, 949, P087, and 1706 phages have been demonstrated to utilise saccharidic host receptors [44][45][46][47][48][49]63,66] . These actococcal phages specifically bind to the PSP component of the CWPS on the host cell surface, mediated by the phage RBP.The genetic diversity of this gene cluster among lactococcal strains, particularly in the 3' region corresponding to distinct glycosyltransferase-encoding genes, is responsible for the biochemical diversity observed in the PSP between strains.Lactococcal strains can, therefore, be grouped based on differences in the genetic composition of the gene cluster responsible for CWPS biosynthesis [62][63][64][65] .There are currently four defined cwps genotypes designated by type A-D, with C types further subdivided into eight subtypes (C 1 -C 8 ) [62,64] .A recent comparative analysis encompassing over 400 lactococcal sequences (including an industrial strain collection) also indicates the presence of several additional cwps types (A-H) and subtypes, suggestive of a continually evolving genetic composition of this gene cluster [67] .Furthermore, the different genotypes defined among these loci correspond to distinct CWPS chemical structures, thereby facilitating the prediction of structural features of lactococcal CWPS, including the number and order of component monosaccharides in the PSP, the likelihood of an oligosaccharide or polymeric (PSP) side chain and the presence of chemical modifications of the component monosaccharides [62] . Four of the five streptococcal phage groups (all except the newest P738 group for which this has not yet been studied) utilise saccharidic receptors on S. thermophilus cell surfaces [38,68,69] .The polysaccharide structures produced by S. thermophilus are either the loosely cell wall-associated EPS or the more tightly cell wall-bound CWPS.The gene clusters responsible for the biosynthesis of S. thermophilus EPS and CWPS are eps and rgp, respectively.Moineauvirus and Brussowvirus phage RBPs have been determined to recognise the host RGP [54,55] , whereas the RBP of the more recently described 987 group phages was found to target EPS structures [36,38] .Based on the sequences of 167 S. thermophilus strains (many of which are industrial strains), it has recently been proposed that there are three RGP groups (A-C), with 18 further subtypes [67] . 
This would represent an expansion on and reclassification of the previously proposed rgp grouping into types A through to E [70] .There are also six defined eps types (A-F) [70] (for a detailed review of the genetic diversity of these loci see [67] ).The genetic diversity of the gene clusters (and subsequent biochemical diversity) responsible for the biosynthesis of these cell wall polysaccharide structures accounts, at least in part, for the highly specific nature of LAB phage-host interactions, i.e., the narrow host range observed for many LAB phages. Proteinaceous receptors While many LAB phages use saccharidic receptors to infect their bacterial hosts, lactococcal Ceduoviruses have been found to bind reversibly to a saccharidic moiety and irreversibly to a proteinaceous receptor. Based on comparative genomics and host specificity, Ceduoviruses are classified into two subgroups: the c2type and bIL67-type phages [71] .The proteinaceous receptor these phages bind to is either the phage infection protein (PIP; in the case of c2-type phages) or YjaE (for bIL67-type phages) [61] .The genes encoding these proteins are ubiquitous and well-conserved in lactococcal strains.Consequently, the host range of Ceduoviruses tends to be much broader than those of Skunaviruses, among other lactococcal phages [72] .However, some Ceduoviruses have been found to have a preference for certain CWPS types, demonstrating how this initial, reversible step of infection may still be crucial and subsequently restrict a phage's potential host range [27] . Teichoic acid receptors Teichoic acids are phosphodiester-linked co-polymers (of glycerol-or ribitol-phosphate and carbohydrates) that represent a ubiquitous component of the cell envelope of Gram-positive bacteria.There are two groups of teichoic acids: lipoteichoic acids (LTAs) and wall teichoic acids (WTAs).A number of Siphoviridae phages use WTAs or LTAs as an initial receptor through reversible binding during phage infection.Phages infecting various Bacillus, Listeria, and Staphylococcus species use WTAs as receptors, as WTAs are the most abundant surface molecule in the Gram-positive bacterial order Bacillales [42] .Although limited information exists pertaining to the host receptors of Lactobacillus phages, it has been demonstrated that phage LL-H which infects Lactobacillus delbrueckii ssp.lactis employs LTAs for the purpose of host recognition [57] . Phage anti-receptors Despite the genetic diversity exhibited by siphophages, the genome architecture and synteny of the functional module responsible for tail morphogenesis is well conserved, incorporating the following functions (in a 5' to 3' direction): the tail tape measure protein, the distal tail protein (Dit), and the tailassociated lysin (Tal).This region is typically followed by genes encoding the baseplate proteins, including the RBP, with auxiliary binding proteins in many cases [Figure 1].The RBP-encoding genes of many of these phages were initially identified through comparative genome analysis, and representative RBPs have been well characterised in both lactococcal and dairy streptococcal phages [70,71,73,74] .Owing to the vast number of LAB phages whose genomes have since been sequenced, knowledge pertaining to phage anti-receptors, as represented by both RBPs and auxiliary binding proteins, has been substantially expanded and is now well defined for many lactococcal and streptococcal phages, whereas those of other LAB genera still requires considerable research attention. 
Recognition of saccharidic receptors
The RBP of Skunaviruses was first identified in lactococcal phages sk1 and bIL170 using an in silico approach, and functionally assigned following isolation of recombinant phages encoding chimeric RBPs [74]. The 3-dimensional structure of the RBP of lactococcal phage p2 was shown to represent a homotrimer of three domains: the shoulders, neck, and head [75]. The head domain (C-terminal end) encompasses the actual receptor binding activity [76]. For Skunaviruses, comparative analysis of C-terminal sequences from various RBPs facilitated a phylogenetic grouping [63]. Currently, five rbp genotypes are defined that correlate well with the specific cwps genotype of their corresponding host(s) [65]. In addition to the receptor binding ability of the RBP C-terminal or head domain, a lactococcal virion may possess auxiliary CBM decorations on the Dit, major tail protein, and neck passage structure (NPS) that may all contribute to host cell binding. These CBMs facilitate specific host binding and indeed exhibit the same host specificity as their corresponding RBP, albeit, in some cases, with an apparently reduced affinity relative to this RBP [26].

The RBP of lactococcal P335 phages was identified for phages Tuc2009 and TP901-1, where it was observed to form part of a multi-protein complex called a baseplate and in which the RBP was identified as the lower baseplate protein (BppL) [66]. Certain P335 group phages recognise CWPS structures, although no direct correlation between RBP subgroups and host CWPS has been determined to date, likely due to the heterogeneity of the P335 phage group [23,64]. However, it has been suggested that P335 phages may have a preference for cwps type A strains over type C or B strains based on a study incorporating a selection of 39 lactococcal strains and 17 P335 phages isolated from whey samples derived from cheese production facilities across multiple continents [23].

Similar to lactococcal phages, a number of phage-tail proteins are involved in the host binding process among S. thermophilus phages. The Tal was originally thought to be the protein primarily responsible for host specificity, among additional genetic determinants yet to be identified [77]. The bona fide RBP was later identified downstream of the Tal-encoding gene, as well as other genes specifying auxiliary CBMs that appear to facilitate host binding [55]. The functionality of the individual CBMs, as well as the distal tail structure of streptococcal phage STP1 (Moineauvirus), has also been determined [55]. The specific saccharidic host receptor of 5093 phages is yet to be confirmed. However, an esterase-like domain is present in the C-terminal end of a putative RBP, being consistent with the finding that saccharidic components on the S.
thermophilus cell wall (such as the EPS and CWPS) incorporate phosphodiester-linked carbohydrate moieties [54,55] . Host adhesion is not limited to the RBP and is often aided by a number of auxiliary binding proteins found to contain additional CBMs in many lactococcal and streptococcal phages, including the: NPS, TpeX, BppA, and Dit.In certain Skunaviruses and P335 phages, the NPS forms a collar-whisker complex attached to the phage portal and contains a CBM involved in, but not essential to, host binding [78] .In addition, a TpeX has been identified in certain Skunaviruses and results in the presence of additional decorations along the tail.Through fluorescence binding assays, the CBMs of the NPS and TpeX have been determined to exhibit the same host specificity as the RBP (though with inferior affinity when compared to that of the RBP) and bind favourably towards the ends of the cell where cell division occurs [26] .Certain P335 phages also have an additional CBM located in an auxiliary binding protein known as the BppA [45,60] .Most lactococcal and streptococcal phages encode a Dit, which is either classified as classical or evolved.Evolved Dits are longer and contain an insertion of an additional CBM, which has been found to exhibit the same host specificity as the RBP [23,55,79] .5093 and 987 phage Dits are classical and do not incorporate CBM insertions.Lactococcal and streptococcal phages tend to possess a variety of CBMs (in addition to the RBP) in somewhat conserved combinations, and very rarely contain none or all possible auxiliary CBMs [26] .Beyond lactococci and dairy streptococci, limited studies have focused on the identification of the receptor moiety of LAB phages; however, it seems likely that many employ a saccaharidic receptors given their narrow host ranges.The phages of Leuconostoc, for example, are divided into two major groups that coincide with their host bacterial species, i.e., Leuconostoc mesenteroides and Leuconostoc pseudomesenteroides [80] .The receptor binding protein of these phages has been identified through the generation of phages harbouring chimeric RBPs and the host range and morphological alterations attributed to the "swapped" RBP domains. 
Recognition of proteinaceous receptors As mentioned above, Ceduovirus members are the only lactococcal phage members known to recognise and bind irreversibly to a proteinaceous receptor [50] .Relative to other lactococcal phages, there is limited information regarding the exact phage-encoded protein(s) responsible for host binding among members of the Ceduovirus group.While overall, there is limited sequence divergence among Ceduoviruses, a cluster of three genes displays reduced sequence similarity among members of this group.This cluster, which contains three late-expressed genes: l14, l15, and l16 in phage c2 and its equivalents ORF34, ORF35, and ORF36 in bIL67, has been suggested to be responsible for host recognition in Ceduovirus phages c2 and bIL67 [71] .This three-gene region is proposed to encode structural proteins responsible for binding to the host Pip or YjaE, however, the exact function of these genes is yet to be elucidated [61,71] .The non-LAB Bacillus subtilisinfecting phage SPP1 also uses a proteinaceous receptor for infection and has been thoroughly studied.SPP1, like Ceduovirus phages, first binds reversibly to a saccharidic receptor and then irreversibly to a cell surface-located proteinaceous receptor, YueB, which bears similarity to the lactococcal PIP [81,82] .The distal tail complex of SPP1 is well described and is structurally similar to the tail of lactococcal phages.Due to these similarities, SPP1 (despite infecting a non-LAB host) is a model for tailed phages that employ proteinaceous receptors [50] . Although much research has been dedicated in recent years to defining phage-host interactions between Siphoviridae and their Gram-positive hosts, in particular LAB, additional insights are required to fully understand these interactions at the molecular level.Also, due to the conserved nature of the tail structure of many Gram-positive host-infecting Siphoviridae phages, knowledge gained from understanding phagehost interactions in one group of bacteria (such as LAB) may be superimposed to better understand phagehost interactions involving other bacterial groups.One of these groups being Enterococcus strains, which enter the fermented food process (particularly that of cheese) either in the raw materials (such as milk) or at other stages of the manufacturing process.Little is known about the phages infecting these hosts in fermented foods, but they still may play a role in the microbial composition of these foods and should be further studied [83] .In addition, with respect to phage-host interactions within a microbial community, knowledge gained from understanding individual interactions may be applied and expanded to understand the network of phage-host interactions within more complex microbial communities across various environments. 
ROLE OF VIRULENT PHAGES IN MODULATING MICROBIAL COMPOSITION OF FOOD In industrial settings, virulent phages can represent a double-edged sword, depending on the environment and their host.For example, phages have been used as a non-chemical biocontrol tool to eradicate contaminating pathogenic species such as Listeria monocytogenes from many food products [84] .However, in the overall context of fermented food production, the presence of virulent phages that infect starter cultures is undesirable as they can cause slow or failed fermentation, with significant associated economic losses.In dairy fermentations, the impact of virulent phages on the fermentation process differs depending on the starter culture format, i.e., defined or undefined starter cultures, as well as the scale of fermentation, i.e., industrial scale vs. artisanal production systems. Erkus et al. [85] demonstrated that phage-sensitive strains are replaced by phage-insensitive strains of the same lineage, allowing for continued fermentation when using the complex Gouda cheese starter culture Ur.The composition of the original starter culture was dissected using culture-dependent and independent methods, allowing for community monitoring using genetic lineage-specific qPCR.The culture was able to maintain the relative composition of the different lineages, despite phage (originating from the original starter culture) pressure on individual strains.This was determined to be due to heterogeneity of the culture and, more specifically, variations in phage sensitivity of strains within and between lineages [85] . Phage attack can have a significant effect on both DSS and undefined starter cultures, although the impact and means to mitigate phage infection may vary.In DSS cultures, the specific composition, phenotypic, and behavioural characteristics of each strain that compromise the culture are known.If one or more of the strains in the DSS culture becomes infected by virulent phages, such strains may be readily replaced with phage-unrelated strains or phage-insensitive derivatives (possessing the same technological characteristics).This process ensures successful fermentation and product consistency/quality.Conversely, in undefined starter cultures, if the acidification of milk continues despite phage attack, the organoleptic properties or quality of the final product may be negatively impacted [86] .Phages infecting NSLABs present in the starter culture do not tend to impact acidification because these strains are typically utilised in fermentation specifically for the organoleptic properties they impart.In many cases, these inconsistencies are not observed until grading of the product occurs, at which point flavour, aroma, eye formation and surface ripening are evaluated as appropriate to the specific product.Such product inconsistencies are difficult to identify during the production process and can be costly to food producers through product down-grading. 
In addition to the negative impacts of virulent phage predation on microbial communities in fermentation, lytic phage also plays an important role in the composition and evolution of starter cultures by driving the genetic diversity of bacterial strains.For example, simple blends composed of representative strains from different genetic lineages (with varying phage sensitivity profiles) were created from the undefined culture Ur [18] .The relative abundance of the genetic lineages was monitored across sequential rounds of propagation in the absence or presence of phage pressure.Using this more defined version of a complex starter culture, the genetic lineages did not remain stable during sequential rounds of phage attack.However, phageresistant variants eventually arose from phage-sensitive genetic lineages, after which the cultures stabilised to the same relative composition as control blends without the presence of phage [18] .This study demonstrates how phages are key contributors driving the diversity among bacterial strains. Traditional and artisanal cheeses are produced based on starter cultures that typically consist of autochthonous bacteria already present in the fermentation materials (such as raw milk) or environment (e.g., fermentation vats).These cheeses are mostly produced using traditional production methods with fewer chemical and physical hurdles for phages to overcome, thereby allowing phage populations to emerge that are different when compared to those of modern fermentation facilities.For example, in Sicilian artisanal cheese facilities, it was shown that 16 of the 18 phages isolated from associated cheese whey and rennet samples belong to the 949 and P087 lactococcal group phages, which are rarely isolated from whey samples obtained from modern, large-scale cheese factories [11] .These phages are much more heat sensitive compared to the other more dominant lactococcal phages (such as Skunaviruses) and appear to thrive in this traditional fermentation environment due to the lack of pasteurisation [11] .While phages are a driving force of bacterial evolution, they are also continuously adapting and evolving in response to their hosts when the latter acquire resistance.Phages may mediate the transfer of genetic material via transduction (transfer of bacterial genetic material that has been packaged into the capsid of the phage), although this typically occurs at low frequencies.In contrast, temperate phages have the potential to contribute significantly to the transfer of genetic material from one strain to another and ultimately contribute to the evolution of a given bacterial species.In the ensuing section, we will explore the impact of temperate phages on starter bacterial species and culture composition in food fermentations. 
IMPACT OF TEMPERATE PHAGES ON STARTER CULTURE COMPOSITION Starter culture bacteria, including most Lactococcus species and Lacticaseibacillus rhamnosus and Lactiplantibacillus plantarum (vegetable fermentations) typically harbour one or more prophages in their genomes [87][88][89] .Prophage-mediated lysis of the culture may be considered an ambivalent phenomenon since it can confer both positive and negative effects on the associated fermentation product, i.e., culture lysis can cause incomplete/delayed acidification, while it may also cause the release of intracellular enzymes associated with flavour development.Also, interactions between host bacteria and prophages generate significant changes in bacterial chromosomes through the adoption and rearrangement of the functional module from prophages, resulting in the evolution of host bacteria as well as phages [28] . Induction of prophage(s) from starter strains Since the phenomenon of lysogeny in LAB was first reported in 1949 by Reiter [90] , the prevalence of lysogens in starter culture strains has been highlighted in many studies [87,[91][92][93] .For example, Terzaghi and Sandine [91] (1981) showed that all 45 tested lactococcal strains suffered from growth cessation and/or lysis following UV treatment along with the frequently observed release of phage particles.Regarding S. thermophilus, intact prophages or, more commonly, phage remnants (present as incomplete prophage genomes) are frequently observed, indicating that most S. thermophilus strains have been challenged by lysogenic phages [40] .Furthermore, various applications such as flow cytometry (FCM) have recently been developed to overcome the limitations of plaque-based methods, which are time-consuming and limited to infectious phages.Using FCM, detection of induced prophages is more precise without false-negative results [91,[94][95][96] .These findings highlighted the risk of prophage-carrying starter strains and led to a move away from traditional mixed starter fermentations for certain applications (such as Cheddar cheese production) where a consistent product profile is required. In contrast, Kelleher et al. [87] (2018) showed that only four out of 24 evaluated lactococcal strains, apparently possessing one or more intact prophages, formed intact phage particles following MitC (mitomycin C) exposure.During commercial fermentations, strains may be subjected to various stressful conditions such as high salt concentration, high (or low) temperatures, or extended exposure to low pH, though this does not seem to affect prophage stability, suggesting that prophage induction under production conditions does not appear to occur often [97] .In addition, lysis of starter culture cells during the ripening process is regarded as beneficial for flavour formation, as long as the acidification process is unaffected [98] .The release of intracellular enzymes from lysed cells and accessibility to substrates (i.e., casein and its peptide and amino acid breakdown products) promotes flavour development.Autolysis (and in some cases, prophage-mediated lysis) may be regarded as a critical step to achieving high-quality products.The correlation between "leaky" prophages and bacterial autolysis has been investigated with various starter strains.In particular, Husson-Kao et al. [99] (2000) proposed that the observed autolytic properties of a particular S. 
thermophilus strain are associated with the constitutive expression of phage genes, performing auxiliary roles for cell lysis.Furthermore, O'Sullivan et al. [100] (2000) proposed that the autolytic behaviour of lactococcal strains is associated with the presence of specific phage genes in the bacterial chromosomes, i.e., lysin-or holin-encoding genes.Notably, these lactococcal strains showed autolytic behaviour under the Cheddar cheese cooking temperature (38-40 °C), indicating the lysogenised starter strains can be used deliberately to improve the quality of products.These findings not only countered previous studies that present the undesirable aspects of prophages, but also highlighted the possibilities of prophages to be utilised in a positive manner for fermentation industries. Interactions between genomes of bacteria and prophages Even if prophages are dormant without the risk of excision, the presence of prophages may significantly influence host metabolism and genetic recombination.Many studies have highlighted the role of prophages as a reservoir of genetic variations, which facilitates the evolution of host bacteria resulting from the acquisition of prophage-derived anti-phage systems.These phage-derived defence mechanisms include adsorption inhibition, abortive infection (Abi), restriction-modification (R-M), or DNA injection blocking in L. lactis and S. thermophilus [20,28] [Figure 2].Ladero et al. [101] (1998) reported the superinfection immunity (Sii) displaying homo-immunity against superinfecting phage.This defence system blocks transcription of the lytic genes of homologous phages by expression of the repressor of their genetic switch following phagegenome entry into the cytoplasm.The repressor gene from Lacticaseibacillus phage A2 was identified, and the complete inhibition of phage infection against identical phage under the expression of the gene was observed when the phage genome was integrated into the bacterial host chromosome. Another phage defence mechanism termed superinfection exclusion (Sie) typically presents as a membraneassociated protein encoded by a gene in the lysogeny module.It is believed to provide resistance against heterogenous phages and block the initiation of superinfection by preventing DNA injection.The Sie 2009 system, encoded by lactococcal host strain UC509 harbouring temperate phage Tuc2009, is the bestunderstood phage exclusion system in LAB.Though its precise mode of action still remains unknown, its expression was found to cause DNA injection blocking [102] .However, lactococcal strains harbouring the sie 2009 gene are still sensitive against many tested phages, suggesting that full anti-phage activity requires high expression [103] .Furthermore, Ruiz-Cruz et al. [104] (2020) showed the prophage-carriage in Lactococcus provided resistance against various heterogenous phage groups, including Skunaviruses (or 936), P087, 949, as well as P335 phages. 
Abi systems prevent phage proliferation through the interference of an essential cellular activity such as DNA replication, transcription or protein synthesis before the completion of the phage infection cycle, resulting in host death and the production of few/no progeny phage particles [105] .Abi systems sense phage infection by the transcriptional and translational material of phages or their replication, before activating the cell-killing module of the Abi system.Various chromosomally-and plasmid-encoded Abi types (A-Z) have been studied, while Abi-encoding genes have also been identified as associated with prophages [106] .Kelleher et al. [87] (2018) reported that nine out of 30 assessed lactococcal strains possess one or more known Abi-encoding genes on their prophages.In addition, prophage-encoded Abi systems were also identified on genomes of Levilactobacillus brevis, Lacticaseibacillus paracasei, Limosilactobacillus fermentum, Lacticaseibacillus rhamnosus, Lactiplantibacillus plantarum, and Lactobacillus gasseri [107] .Furthermore, the prophage-derived AbiL 124 system exhibiting specific activity against phages infecting Latilactobacillus brevis and Lactococcus lactis demonstrated the potential of Abi systems to be used to generate novel phage- resistant starter strains. R-M systems protect host bacteria from the invasion of foreign DNA, such as phage infection, by cleaving invading DNA at specific sequences, which are protected in the resident DNA by methylation (except for Type IV system restricting incoming methylated DNA) [108] .Several lactococcal prophages were determined to harbour methylase genes which serve to protect the phage from endonucleolytic cleavage by host bacteria [87,109] .Furthermore, prophages encoding complete R-M systems confer protection to the host carrying the prophage, highlighting the potential symbiosis between the two entities [110] . These Abi, Sie, Sii or R-M systems encoded by prophages are presumed to enhance resistance against a variety of phages, thus providing fitness benefits to the host bacteria.Nevertheless, the homologous recombination between resident prophages and secondary infecting virulent phages contribute to the evolution of phages.In particular, the loss of lysogenizing functions of prophages by genomic rearrangement with infecting virulent phage genomes may result in the appearance of obligate lytic phages with the consequence of disruption of the fermentation process [30] .In addition, the metabolic burden of the lysogenized phages often impacts the fitness of the host strain despite the advantage to the host [111,112] .These findings highlight the ambivalent traits of prophages, and the importance of continuously expanding knowledge on the interrelationships between prophages and host bacteria in food fermentations. 
PHAGEOMES OF FERMENTED FOODS In the contemporary food fermentation industry, owing to increasing consumption and awareness of fermented foods, the establishment and implementation of a reliable and traceable manufacture system has been emphasised to achieve consistent, high-quality products.The crucial role of bacterial and/or fungal microbial communities in food fermentation processes has culminated in the generation of significant data pertaining to the microbiota of foods and food production environment and has been enhanced by recent developments in genome sequencing and meta-omics tools.In contrast, very few studies pertaining to the food production environments and their phageome, which represents the overall bacteriophage community of a given sample or environment, have been published [113] . To date, the presence of phages in fermented foods has been determined using several approaches, i.e., culture-dependent methods, direct detection, and metagenomic sequencing.Several studies employing classical microbiological approaches have described the evolving microbial landscape of fermented foods such as sauerkraut, natto, fermented cucumber and wine and highlighted the role of phages in the progression/disruption of the fermentation process [114] .While this approach has been very informative, it relies on the ability to culture and detect all microbial components in the food.It is now understood that culture-based approaches likely represent the dominant and culturable organisms but may not represent the complete population of bacterial and/or phages that may be present.Through analysis of metagenome data sets which capture entire microbial ecosystems, some phage-associated reads have been identified, though the extraction protocol had to be optimised in order to obtain a more complete image of phage prevalence, abundance and diversity [115] .Consequently, more targeted extraction methods for viral nucleic acid have been developed [116] .Viral metagenomics (or phageomics) has clearly increased our understanding of the prevalence, abundance, dynamics, and role of phages in a number of food fermentation processes.Recently, metagenomic sequencing of viral communities in kimchi and cheese surface has highlighted the viral diversity and its correlation with bacterial diversity [117,118] .Especially, Jung et al. [117] revealed that the viral communities in kimchi have a much more clear association with geographical origins than microbial communities, facilitating an in-depth understanding of fermented food ecosystems.Nevertheless, there remains an abundance of viral dark matter, which does not align with any reference virus sequences, obstructing the comprehensive understanding of the viral community.In addition, there are limited studies to date on phageome-specific extraction methods compared to the standard metagenome extractions to truly understand the potential benefits of a more targeted approach.While phageome studies of fermented foods are currently limited, it will be important to consider the sample preparation for phageome studies. 
The viral load and associated nucleic acid extract concentration can be low where the metagenome or direct phageome analysis approaches and identifying low abundance phages can be challenging.However, using enrichment approaches can lead to amplification of dominant members of the phageome.To overcome these challenges, qPCR systems to identify and quantify a range of phage species of the sample pre-and post-enrichment could be applied to track the changing population landscape to complement sequencing strategies.This represents an opportunity to expand and deepen the current understanding of the role, diversity and functionality of phages in food systems.To reduce the viral dark matter, not only an enrichment of viral sequences is required, but also the combinations with biological and molecular methods need to be improved. CONCLUSION AND FUTURE PERSPECTIVES Phages have maintained a prominent role in modulating the microbial composition of fermented foods.The "kill-the-winner" model of phage dynamics allows for the stabilisation of complex bacterial communities [119] .This hypothesis states that an increase in a particular bacterial strain within a microbial community will coincide with an increase of phages that can infect that strain, thereby reducing its abundance and preventing a single strain from dominating the community.Phages, therefore, play an essential role in maintaining the diversity and stability of the complex microbial communities necessary for the production of fermented foods.Culture-based methods, and more recently, molecular and genomicsbased methods have been instrumental in defining phage-host interactions among LAB.This knowledge can now be applied to better understand these interactions between phages and other lactococcal and streptococcal strains, as well as other important LAB such as lactobacilli.There is a diverse range of globally applied fermentation practices that utilise LAB, as well as other microbes.The microbes used during fermentation, whether autochthonous or introduced through starter cultures, vary depending on the geography, environmental conditions, climate, fermentation practices, and raw materials used.In addition, the demand for fermented foods is only increasing, demonstrating the importance of generating deeper insights into the microbial interactions in these communities.With an increasing demand for consistent, high-quality products, there is increasing pressure to employ robust and reliable fermentation practices.In particular, there is increasing interest in plant-based dairy alternatives, and in order to gain a better understanding of the phage-host interactions occurring in these unique environments, phageome and metagenome analysis should be utilised.Furthermore, there is an opportunity to expand current knowledge pertaining to the microbiota in fermented meat and vegetable products and to determine their contributions to metabolite production as well as product safety and quality [120,121] . 
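To make the "kill-the-winner" picture described at the start of this section more tangible, the following is a minimal Lotka-Volterra-style simulation of two bacterial strains, each attacked by its own strain-specific virulent phage. It is purely illustrative: the equations, rate constants, and initial densities are generic textbook-style assumptions, not values taken from any of the studies cited here.

```python
# Minimal "kill-the-winner" sketch: two bacterial strains competing for one
# shared resource pool, each preyed upon by its own strain-specific phage.
# All rates and initial values below are illustrative assumptions only.
import numpy as np

r = np.array([1.2, 1.0])          # growth rates of strains 1 and 2 (1/h)
K = 1e9                           # shared carrying capacity (cells/mL)
adsorb = np.array([2e-9, 2e-9])   # phage adsorption rate constants (mL/h)
burst = np.array([50.0, 50.0])    # burst sizes (phages per lysed cell)
decay = np.array([0.1, 0.1])      # phage decay rates (1/h)

B = np.array([1e6, 1e6])          # initial bacterial densities
P = np.array([1e4, 1e4])          # initial phage densities
dt, steps = 0.01, 20000           # simple Euler integration settings

for _ in range(steps):
    total = B.sum()
    dB = r * B * (1.0 - total / K) - adsorb * B * P   # growth minus lysis
    dP = burst * adsorb * B * P - decay * P           # phage gain minus decay
    B = np.clip(B + dt * dB, 0.0, None)
    P = np.clip(P + dt * dP, 0.0, None)

# Because the predation term scales with B_i * P_i, whichever strain blooms
# also feeds its own phage and is preferentially suppressed -- the qualitative
# content of the kill-the-winner hypothesis.
print("final bacterial densities:", B)
print("final phage densities:", P)
```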
With the expanding use of metagenome and phageome sequencing of fermented foods, we are only now beginning to uncover the true complexity of these microbial communities.By combining both phageome and metagenomic analyses, it will be possible to gain a better understanding of the ever-evolving phage-host interactions occurring in fermentation environments.Through the combination of culture and cultureindependent approaches, it is possible to achieve an in-depth, systems understanding of the genetic and functional diversity of microbial communities present in fermented foods.Analysis of CBMs in phageome data, particularly those found in phage RBPs, can be used to predict potential hosts of the phages present.A link between Skunavirus RBPs and host CWPS has already been defined (and may likely need to be expanded as more phages are isolated and sequenced) [26,63,65,67] .With continued phage-host interaction studies of LAB this predictive strategy presents a paradigm for the microbial interactions of a range of Gram-positive bacteria and their infecting tailed phages. Constant monitoring of the microbial community is essential in order to overcome the negative impacts of phages in fermentation and ensure the production of consistent, high-quality products.To date, phageome and metagenomic sequencing in fermentation have provided just a snapshot of the diversity in specific communities.These tools need to be expanded and used to monitor shifts in populations over time and with changing environmental conditions.Metagenomics and phageomics should be used as a tool to understand the phages (both virulent and temperate) and strains that are present in a specific fermentation factory.For instance, it may be possible to monitor the microbial community by monitoring specific genes, such as those encoding phage-host receptors or strain/genetic-lineage specific genes.This can then be linked to the phageome, where problematic phages associated with a specific factory can also be monitored and detected.By utilising culturomics, metagenomics, and phageomics (and incorporating transcriptomics, proteomics and metabolomics) in combination, factory-specific detection and enumeration strategies can be developed and utilised.This will allow for a better understanding of the phage-host interactions occurring in these complex microbial communities and for more reliable and robust fermentation practices. Authors' contributions Drafted initial manuscript: White K, Yu JH Involved in conceptualisation, review, and editing: van Sinderen D, Mahony J Reviewed and edited manuscript: Eraclio G, Dal Bello F, Nauta A All authors read and approved the final version of the manuscript. Figure 1 . Figure 1.Schematic representation of genes commonly present in the tail morphogenesis region of lambda-like Siphoviridae recognising saccharidic receptors.The late-expressed genes that are commonly shared by these phages and make up the tail include the: major tail protein (MTP; green), tape measure protein (TMP; blue), distal tail protein (Dit; orange), tail-associated lysin (Tal; yellow), and receptor binding protein (RBP; red).Additional carbohydrate-binding modules (CBMs) found in auxiliary binding proteins (grey) have been identified and characterised in certain phages, such as the: neck passage structure (NPS), major tail extension protein (TpeX), and accessory baseplate protein (BppA).It is noteworthy that in some cases (e.g., certain P335 lactococcal phages), the NPS-encoding gene is located downstream of the RBP-encoding gene. 
Figure 2 . Figure 2. Schematic representation of commonly occurring phage-host interactions.After a phage recognises and binds to a specific host receptor, a number of plasmid-, chromosomal-, and/or prophage-derived anti-phage systems may impede successful phage proliferation, such as: Sii, Sie, R-M, or Abi systems.Created with BioRender.com.
Neural Monte Carlo Renormalization Group The key idea behind the renormalization group (RG) transformation is that properties of physical systems with very different microscopic makeups can be characterized by a few universal parameters. However, finding the optimal RG transformation remains difficult due to the many possible choices of the weight factors in the RG procedure. Here we show, by identifying the conditional distribution in the restricted Boltzmann machine (RBM) and the weight factor distribution in the RG procedure, an optimal real-space RG transformation can be learned without prior knowledge of the physical system. This neural Monte Carlo RG algorithm allows for direct computation of the RG flow and critical exponents. This scheme naturally generates a transformation that maximizes the real-space mutual information between the coarse-grained region and the environment. Our results establish a solid connection between the RG transformation in physics and the deep architecture in machine learning, paving the way to further interdisciplinary research. I. INTRODUCTION The renormalization group (RG) [1] formalism provides a systematic method for quantitative analysis of critical phenomena. Among all the RG schemes, the realspace renormalization group (RSRG), first proposed by Kadanoff [2], is the most intuitive and natural way to perform RG transformations on lattice models [3]. These methods allow for a straightforward construction of the critical surface and calculation of the critical exponents using numerical methods such as Monte Carlo renormalization group (MCRG) [4][5][6]. However, the RSRG transformation typically generates long-range couplings not present in the original Hamiltonian and truncation is necessary to make the method manageable. From the physical point of view, we expect the range of the renormalized interactions of a physical lattice system near the fixed point should not increase. Finding the optimal way to coarse-grain the Hamiltonian to systematically eliminate the irrelevant degrees of freedom is crucial for the success of any RSRG scheme. The fundamental difficulty lies in the enormous degrees of freedom in choosing the weight factors for the RG transformation. Several attempts in the past have been made to find the optimal transformation. Swendsen proposes an optimal MCRG scheme by introducing variational parameters into the RG procedure [7]. Blöte et al. propose to modify the Hamiltonian and the weight factors such that the corrections to scaling are small [8]. Ron et al. propose to choose parameters such that the critical exponent of interest was nearly constant during the MCRG iterations [9]. However, it remains unclear how to determine the weight factors without prior knowledge of the system. The general guideline in searching for an optimal RG transformation is to identify and eliminate the irrelevant degrees of freedom in the RG flow while retaining the relevant ones. However, it is difficult a priori to determine which degrees of freedom should be eliminated. This resembles the question in machine learning (ML) on how to extract relevant features from raw data. Deep learning (DL) [10] using deep neural networks (DNN) has significantly improved machine's ability in many areas such as speech recognition [11], object recognition [12], Go and video game playing [13][14][15], as well as aided discoveries in various fields of physics [16][17][18][19][20]. 
Multiple layers of representation are used to learn distinct features directly from the training data. The similarity between the structure of the DNN and the course-graining schemes in statistical physics inspires many efforts to establish connection between variational RG [21] and unsupervised learning of DNN [22][23][24][25][26][27][28][29][30]. Here, we want to address a different question: how can we train an DNN to obtain an optimal RSRG transformation? This issue is partially addressed from the informational theoretical perspective [25,26], where an optimal RG transformation is obtained by maximizing the real-space mutual information (RSMI). However, the proposed RSMI algorithm requires a mutual information proxy in order to probe the effective temperature(coupling) of the system along the RG flow, rendering it less practical. A more direct and transparent method that enables direct computation of the corresponding RG flow and critical exponents is thus highly coveted. Here we present a scheme called neural Monte Carlo RG (NMCRG) that parametrizes the RG transformation in terms of a restricted Boltzmann machine (RBM) [31]. The optimal RG transformation can be learned by minimizing the Kullback-Leibler (KL) divergence between the system distribution and the marginal weight factor distribution (defined in Eq. (5)). This provides an explicit link between the RG transformation and the RBM, allowing us to use the modern ML techniques to find the optimal RG transformation. In addition, the scheme is readily integrated with the MCRG techniques to directly determine the effective couplings along the RG flow, and critical exponents. We demonstrate the accuracy of this approach on the two-and three-dimensional classical Ising models. We find the optimal transformation leads to an efficient RG flow to the fixed point with short-range renormalized couplings, and saturates the mutual information toward the upper bound. II. PARAMETRIZATION OF REAL-SPACE RENORMALIZATION GROUP Consider a generic lattice Hamiltonian, where the interactions S α are combinations of the original spins σ and the K α are the corresponding coupling constants. A general RG transformation [3,26] can be written as with parametrized weight factors, where µ = ±1 correspond to the renormalized spins in the renormalized Hamiltonian H (µ) = α K α S α (µ) with renormalized couplings K α . W ij are variational parameters to be optimized. In particular, if W ij are infinite in a local block of spins and zero everywhere else, then we recover the majority-rule transformation [4]. Importantly, this parameterization satisfies the so-called trace condition µ P (µ|σ) = 1, which is required to correctly reproduce thermodynamics [3,24,26]. To make connection with the RBM in the following discussion, we define the weight factor distribution as where Z = σ,µ e ij Wij σiµj . The weight factor Eq. (3) is then simply the condition distribution of the weight factor distribution, that is, we have P (µ|σ) = P (σ, µ)/ µ P (σ, µ). An RBM is a generative model that is a main staple deep learning tool to solve tasks that involve unsupervised learning [32,33]. Hidden layers of an RBM can extract meaningful features from the data [34]. In this regard, an RBM with fewer hidden variables than the visible variables resembles coarse-graining in RG, first pointed out by Mehta and Schwab [22]. However, their proposed mapping from the variational RG procedure to unsupervised training of a DNN does not satisfy the trace condition Eq. 
(4), and thus does not constitute a proper RG (See appendix for a detailed comparison). Here we propose a direct mapping between the RBM and the weight factors such that Eq. (4) is naturally satisfied. An RBM can be written in terms of weights W ij , hidden variables h j and visible variables v i as where Z RBM = v,h e ij Wij vihj . The empirical feature distributionp (h) can be extracted from the empirical distributionp(v) througĥ where Q(h|v) = Q(v, h)/ h Q(v, h) is the conditional distribution of the hidden variables, given the values of the visible variables [32]. The optimal parameters for the RBM are chosen by minimizing the KL divergence between the empirical distributionp(v) and the marginal distribution h Q(v, h), where D(p q) = σ p(σ) log(p(σ)/q(σ)) for two discrete distribution p(σ) and q(σ). Motivated by the similarity between Eqs. (2) and (7), we identify the conditional distribution Q(h|v) in the RBM with our parametrized weight factor P (µ|σ) and associate the hidden and visible variables in the RBM with the renormalized and original spins, respectively. In analogy to the optimization scheme of an RBM, we propose an optimal choice of the parameters in the weight factors by minimize the KL divergence between the system distribution and the marginal weight factor distribution which can be carried out using standard ML techniques. III. STOCHASTIC OPTIMIZATION FOR THE OPTIMAL CRITERION The optimization problem is solved by the stochastic gradient descent, where the parameters are updated through decrementing them in the direction of the gradient of the KL divergence. We replace the system distribution e H(σ) /Z by its empirical distributionp(σ) over Monte Carlo samples drawn from the Wolff algorithm [35] and write the KL divergence Eq. (9) as an expectation value over the empirical distribution The gradient G ij of the KL divergence Eq. (10) with respect to W ij can be derived as where F (σ) is the free energy defined as F (σ) = log µ e ij Wij σiµj . The first term in Eq. (11) is simply a sample average of the derivative of the free energy and can be readily computed. The second term is approximated using the contrastive divergence algorithm [36] (CD k ) where the expectation value is calculated from samples drawn from a Markov chain initialized with data distribution and implemented by Gibbs sampling with k Markov steps. We update the weights in the direction of negative gradients where the superscript of the weight W (k) indicates the number of training epochs we have descended the weight. We initialize W (0) randomly around zero. Along the gradient descent we obtain a sequence of weight factors, which can be used to compute critical exponents and renormalized couplings, to see what feature distribution (p (h) in Eq. (7)) the RBM is trying to learn. For translational-invariant systems, translational invariant parametrization of the weight factor distribution Eq. (5) can be achieved via convolution [37]. IV. TWO-DIMENSIONAL ISING MODEL To validate our scheme, we first consider the twodimensional (2D) Ising model, where σ i = ±1, K 1 is the nearest-neighbor coupling and S nn denotes the collection of nearest-neighbor inter-spin interactions. In the following, we consider a 2D lattice of size 32×32 with the periodic boundary condition. We analyze the optimal weight factors' ability to remove longrange interactions by directly calculating the renormalized couplings and extract critical exponents [38]. 
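Before turning to the results, a compact sketch of the optimization described in Sec. III may be useful. The code below is illustrative only and is not the authors' released implementation: it assumes ±1 spins, a dense rather than convolutional weight matrix, and arbitrary hyperparameters, and it updates W with one-step contrastive divergence (CD_1) so that the KL divergence of Eq. (10) decreases.

```python
# Minimal NMCRG training sketch: learn the weights W of the RBM-parametrized
# weight factor by CD_1 on pre-generated +/-1 Ising configurations "data"
# of shape (n_samples, n_visible).  Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def hidden_field(sigma, W):
    """Field (W^T sigma)_j acting on each renormalized spin mu_j."""
    return sigma @ W

def sample_pm1(field):
    """Sample +/-1 spins with P(+1) = sigmoid(2 * field)."""
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
    return np.where(rng.random(field.shape) < p_plus, 1.0, -1.0)

def cd1_gradient(sigma, W):
    """Gradient of Eq. (11), with the negative phase approximated by CD_1."""
    # positive phase: <sigma_i tanh((W^T sigma)_j)> over the data batch
    pos = sigma.T @ np.tanh(hidden_field(sigma, W)) / len(sigma)
    # one Gibbs step: sigma -> mu -> reconstructed sigma'
    mu = sample_pm1(hidden_field(sigma, W))
    sigma_rec = sample_pm1(mu @ W.T)
    neg = sigma_rec.T @ np.tanh(hidden_field(sigma_rec, W)) / len(sigma_rec)
    return pos - neg   # ascend the log-likelihood = descend the KL divergence

def train(data, n_hidden, epochs=50, lr=0.05, batch=100):
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    for _ in range(epochs):
        rng.shuffle(data)
        for start in range(0, len(data), batch):
            W += lr * cd1_gradient(data[start:start + batch], W)
    return W

# toy usage with random +/-1 "configurations" standing in for Wolff samples
toy_data = np.where(rng.random((1000, 64)) < 0.5, 1.0, -1.0)
W_opt = train(toy_data, n_hidden=16)
print(W_opt.shape)
```

For the translational-invariant parametrization used in the paper, the dense matrix multiplication above would be replaced by a convolution with a single strided filter shared across the lattice.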
The computational cost of finding the optimal representation takes seconds to several minutes with a single GPU computer. Figure 1 shows the weight factors along the optimization process (at 10th, 30th and 50th epochs corresponding to Fig. 2 (a)) learned with a translational invariant filter of size 8 × 8. The filters are initialized uniformly around zero. Localized features emerge after a few epochs of training and progressively aggregate toward the center, in agreement with the conventional wisdom that renormalized and original spins close to one another should couple more strongly than those further apart [39]. On the other hand, the RBM also picks up non-local correlations between the renormalized and original spins, where the interaction strength falls off exponentially with distance. We proceed to investigate the effect of the criterion of minimizing KL divergence to see what the machine is trying to learn. In Fig. 2 (a), we show the thermal critical exponents calculated from weight factors W (k) along the optimization flow. At the beginning of the training, the partially-optimized weight gives a poor estimate of the thermal critical exponent at the first step of RG transformation. After the 30th epoch, the value grows rapidly and converges to the exact value. In Fig. 2 (b,c), we use the weights obtained at each training epoch to calculate the renormalized coupling parameters along the training trajectory. The renormalized couplings, in machine-learning terms, completely describe the energy model underlying the empirical feature distribution (see Eq. (7)) extracted by the machine for the Ising empirical distribution. In Fig. 2 (b), we see that the interactions are dominated by nearest (K 1 ) and next-nearest (K 2 ) neighbor couplings. The values for the longer-range interactions flow progressively towards zero as shown in Fig. 2 (c). The trend shows that our optimal criterion aims to remove longer-range coupling parameters in the renormalized Hamiltonian. point. Slightly away from the critical point, the coupling parameters flow away to the infinite (zero) temperature trivial fixed points. Figure. 3(b) and (c), show the renormalized coupling parameters along the RG flow. The coupling parameters coarse-grained with the optimal weight factors reached K 1 = 0.3109(3), K 2 = 0.1051 (2) and K 3 = −0.0184(2) at the third RG step. The values for longer-range interactions are much suppressed compared to those obtained by the majority-rule transformation. Since the renormalized Hamiltonians should be dominated by short-range couplings, our learned weight factors are superior than those for the majority-rule transformation. Table I shows the critical exponents of the 2D Ising model computed using both the RBM and majority-rule transformations. Surprisingly, although the weights are learned without any prior knowledge of the model, the exponent is very close to the exact value at the first step of renormalization transformation giving y t = 1.000 (2), consistent with the exact value within the statistical error. Equally surprising is that the RBM trained on such small training data with only 10 4 samples can generalize well. In contrast, the majority-rule transformation gives y t = 0.975(3) at the first RG iteration. Even though the convergence for the thermal critical exponents looks extremely good, the scheme overestimates the magnetic critical exponents in the first RG step. The discrepancy in the magnetic exponents is also noted previously [7,40]. 
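For completeness, here is a sketch of how a trained filter is used to generate the coarse-grained configurations entering the MCRG analysis: each renormalized spin is sampled from the conditional distribution of Eq. (3) given the original spins it couples to. The version below is simplified, with each renormalized spin coupled only to its own b × b block (the learned filters in the text extend beyond a single block), and the filter values are placeholders; in the limit of large uniform weights the update reduces to the majority rule.

```python
# Sample renormalized spins mu ~ P(mu|sigma) of Eq. (3) for a block filter.
# Simplified, illustrative version: non-overlapping b x b blocks only.
import numpy as np

rng = np.random.default_rng(1)

def coarse_grain(sigma, W, b):
    """sigma: (L, L) array of +/-1 spins; W: (b, b) block filter; b: scale factor."""
    L = sigma.shape[0]
    Lp = L // b
    mu = np.empty((Lp, Lp))
    for I in range(Lp):
        for J in range(Lp):
            block = sigma[I * b:(I + 1) * b, J * b:(J + 1) * b]
            field = np.sum(W * block)                # sum_i W_i sigma_i
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
            mu[I, J] = 1.0 if rng.random() < p_plus else -1.0
    return mu

# usage: with very large uniform weights this reduces to the majority rule
sigma = np.where(rng.random((32, 32)) < 0.5, 1.0, -1.0)
mu = coarse_grain(sigma, W=np.full((2, 2), 10.0), b=2)
print(mu.shape)   # (16, 16)
```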
The weight factors considered in the literature are mostly short-range [41] (decimation and majority-rule transformations), i.e., they only couple one renormalized spin to a few original spins in the immediate vicinity. However, despite the seeming locality, these weight factors generally lead to an infinite proliferation of interactions upon renormalizing. With our proposed criterion, the learned weight factors contain non-local terms that act as counter terms, making the renormalization transformation more local; therefore, only a few short-range interactions are produced during the RG transformation. We note that the strategy along this line of transferring the complexity of the renormalized Hamiltonian to the weight factors has yielded the first exactly soluble RG transformation [39].

V. THREE-DIMENSIONAL ISING MODEL

The scheme can be easily generalized to higher dimensions as long as we can train an RBM to represent the optimal RG transformation. Table II shows the thermal critical exponents computed using optimal filters starting at a system size of 64 × 64 × 64. The trailing numbers in the parentheses indicate the linear size of the filters. The filters at the first (64 → 32) and second (32 → 16) steps are learned. The filters in the following RG steps (16 → 8 and 8 → 4) use the same filter obtained in the second step. We compare the results with the values obtained from the majority rule [42]. Only the first twenty couplings out of the total 53 couplings in Ref. [42] are used. The 2 × 2 × 2 optimal filter gives the exponent closest to the best Monte Carlo estimate y_t = 1.587 [43]. The 2 × 2 × 2 optimal filter is quite homogeneous, with an average value of 0.5254(2), which is very close to the optimal choice 0.4314 in Ref. [9]. The weight values at the second, third and fourth steps are 0.5057(9), 0.510(1) and 0.544(2), respectively.

VI. REAL-SPACE MUTUAL INFORMATION

We have now established that by parametrizing the weight factors as an RBM, we can learn the optimal RG transformation. On the other hand, the RSMI scheme argues that an optimal RG transformation can be obtained by maximizing the RSMI [25,26]. A natural question is how these two schemes are related. In particular, we would like to see if our optimal RG transformation also maximizes the RSMI. The RSMI measures the information that knowledge of the environment degrees of freedom E gives about the coarse-grained (hidden) degrees of freedom H. If E completely determines H, then the information gained is maximized and I(H; E) reduces to the self-information (the entropy) of the relevant degrees of freedom H, which itself is upper bounded by the logarithm of the number of possible configurations of H. Adopting the definition in Refs. [25,26], we consider a system described by a quadripartite distribution P(V, E, H, O) (Fig. 4(a)). We define the RSMI of the system as I(H; E), i.e., the mutual information between the hidden and environment random variables. The relevant distributions needed to compute I(H; E) are appropriate marginals of P(V, E, H, O). Here we consider a 4 × 4 Ising model with the periodic boundary condition, for which the RSMI can be computed exactly. We train a 3 × 3 filter on the system to obtain an optimal weight factor distribution. Figure 4(c) shows the evolution of the RSMI during training. Random initialization of the filters gives a zero RSMI, and as the training progresses, the RSMI saturates to the upper bound ln 2 ≈ 0.693. This shows clearly that the optimal weight factors obtained from our algorithm saturate the RSMI as proposed in Refs. [25,26].
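As an illustration of how I(H; E) is obtained by exact enumeration, the following sketch computes the mutual information between one renormalized spin and a distant "environment" patch for a small 1D Ising ring. The geometry, patch choice, coupling, and filter values are toy assumptions and do not reproduce the 4 × 4 lattice or the trained 3 × 3 filter of Fig. 4; only the bookkeeping (marginalizing the joint distribution and evaluating the mutual information) is the point.

```python
# Exact real-space mutual information I(H;E) for a toy 1D Ising ring
# (an illustrative stand-in for the 4x4 lattice of the text).
import itertools
import numpy as np

K = 0.44                      # nearest-neighbour coupling (toy value)
N = 8                         # ring length with periodic boundaries
V_sites = (0, 1)              # visible patch coupled to one hidden spin
E_sites = (4, 5)              # "environment" sites used for I(H;E)
Lam = np.array([1.0, 1.0])    # filter coupling the hidden spin to the patch

def boltzmann_weight(s):
    return np.exp(K * sum(s[i] * s[(i + 1) % N] for i in range(N)))

def p_hidden(mu, s):
    """Conditional P(mu | sigma) of the RBM-type weight factor."""
    field = Lam @ np.array([s[i] for i in V_sites])
    return np.exp(mu * field) / (2.0 * np.cosh(field))

# joint distribution P(mu, E), obtained by summing out all other spins
joint, Z = {}, 0.0
for s in itertools.product((-1, 1), repeat=N):
    w = boltzmann_weight(s)
    Z += w
    e = tuple(s[i] for i in E_sites)
    for mu in (-1, 1):
        joint[(mu, e)] = joint.get((mu, e), 0.0) + w * p_hidden(mu, s)
joint = {k: v / Z for k, v in joint.items()}

p_mu = {m: sum(v for (mm, _), v in joint.items() if mm == m) for m in (-1, 1)}
p_e = {}
for (_, e), v in joint.items():
    p_e[e] = p_e.get(e, 0.0) + v

I_HE = sum(v * np.log(v / (p_mu[m] * p_e[e]))
           for (m, e), v in joint.items() if v > 0)
print("I(H;E) =", I_HE, "   (upper bound ln 2 =", np.log(2.0), ")")
```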
However, our scheme allows for a direct calculation of the renormalized coupling parameters and critical exponents using the MCRG algorithms without resorting to proxy systems.

VII. CONCLUSIONS

We demonstrate a scheme based on an RBM that is capable of learning the optimal RG transformation from Monte Carlo samples. The similarity between the standard RBM and the weight factors means that we can take advantage of the progress in ML architectures and techniques to parameterize and train the filters for RG. This algorithm is flexible and can be easily applied to disordered systems [44]. Although we focus on the RBM with binary variables, for models with continuous variables such as XY or Heisenberg models, one can use Gaussian-Bernoulli RBMs to better model the RG transformation [45]. Generalization of the current scheme to quantum systems should be straightforward by the quantum-to-classical mapping of the d-dimensional quantum system to a d + 1-dimensional classical system [46]. It would be interesting to test the NMCRG scheme on fermionic systems to see how the fermionic sign manifests itself. Finally, we note that in the 2D Ising model, the filter has to reach the size of 8 × 8 to obtain reasonable critical exponents, while in the 3D case, a 2 × 2 × 2 filter suffices to give the best result. Whether this can be associated with the logarithmic correction in the 2D Ising model warrants further studies [47].

This work was supported by Ministry of Science and Technology (MOST) of Taiwan under Grants No. 108-2112-M-002-020-MY3 and No. 107-2112-M-002-016-MY3, and partly supported by National Center of Theoretical Science (NCTS) of Taiwan. We are grateful to the National Center for High-performance Computing for computer time and facilities. The code that generates data used in this paper is available at https://github.com/unixtomato/nmcrg.

Appendix A: Monte Carlo Renormalization Group

Here we summarize the MCRG method used to calculate the critical exponents and renormalized coupling parameters from Monte Carlo samples for a given filter [38]. To determine the critical exponents, we need to calculate the derivatives of the transformation, ∂K_α^(n)/∂K_β^(n-1), which are given by the solution of the linear equation [4]

∂⟨S_γ^(n)⟩/∂K_β^(n-1) = Σ_α [∂K_α^(n)/∂K_β^(n-1)] ∂⟨S_γ^(n)⟩/∂K_α^(n).

Here ⟨S_γ^(n)⟩ is the expectation of the spin combinations at the nth RG iteration. The derivatives of these expectation values of the spin combinations are obtained from the correlation functions

∂⟨S_γ^(n)⟩/∂K_β^(m) = ⟨S_γ^(n) S_β^(m)⟩ − ⟨S_γ^(n)⟩⟨S_β^(m)⟩. (A4)

Given a set of spin configurations sampled from some Hamiltonian H = Σ_α K_α S_α, we would like to infer back the coupling parameters of H. Define a specific spin-dependent expectation

⟨S_{α,l}⟩_l = (1/z_l) Σ_{σ_l} S_{α,l} e^{H_l},

where z_l = Σ_{σ_l} e^{H_l} and H_l = Σ_α K_α S_{α,l}, and S_{α,l} are the combinations of spins in S_α that include only σ_l. Here z_l and H_l, and hence ⟨S_{α,l}⟩_l, depend on the spins neighboring σ_l. The summation over σ_l can be carried out analytically and we obtain the formula

⟨S_{α,l}⟩_l = S̄_{α,l} tanh(Σ_β K_β S̄_{β,l}),

where S̄_{α,l} ≡ σ_l S_{α,l}. The correlation functions can then be written in another form in terms of these single-spin expectations, where m_α is the number of spins in the combination S_α. Introducing a second set of coupling parameters {K̃_α}, we define the corresponding expectations evaluated with these trial couplings. Figure 5 shows the couplings used for the calculation of the renormalized coupling parameters for the two-dimensional Ising model. The first seven even couplings in (a) are used to compute the thermal critical exponent. The odd couplings in (b) are used to compute the magnetic critical exponent.
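In practice, the linear equation above amounts to estimating two connected-correlation matrices from the sampled spin combinations and solving a small linear system for the stability matrix T_{αβ} = ∂K_α^(n)/∂K_β^(n-1); the thermal exponent then follows from its leading eigenvalue λ via y_t = ln λ / ln b. A schematic sketch, assuming the per-configuration values of the even spin combinations at two successive RG levels have already been measured and stored as arrays, is given below; the toy data at the end only demonstrate the call signature.

```python
# Schematic MCRG estimate of y_t from measured spin combinations.
# Producing S_n and S_nm1 (per-configuration values of the operators
# conjugate to the couplings at RG levels n and n-1) is assumed done elsewhere.
import numpy as np

def stability_matrix(S_n, S_nm1):
    """Solve  d<S_g^(n)>/dK_b^(n-1) = sum_a T_ab d<S_g^(n)>/dK_a^(n)  for T.

    S_n   : (n_samples, n_couplings) values of S_alpha at RG level n
    S_nm1 : (n_samples, n_couplings) values of S_beta  at RG level n-1
    """
    def cov(A, B):
        # connected correlation <A_g B_a> - <A_g><B_a>, shape (n_coup, n_coup)
        return (A.T @ B) / len(A) - np.outer(A.mean(0), B.mean(0))

    lhs = cov(S_n, S_n)       # d<S_g^(n)>/dK_a^(n)
    rhs = cov(S_n, S_nm1)     # d<S_g^(n)>/dK_b^(n-1)
    return np.linalg.solve(lhs, rhs)   # T_ab

def thermal_exponent(S_n, S_nm1, b=2):
    T = stability_matrix(S_n, S_nm1)
    lam_max = np.max(np.abs(np.linalg.eigvals(T)))
    return np.log(lam_max) / np.log(b)

# toy usage with fabricated numbers, just to show the interface
rng = np.random.default_rng(2)
S_nm1 = rng.normal(size=(5000, 7))
S_n = 0.6 * S_nm1 + 0.1 * rng.normal(size=(5000, 7))
print("y_t estimate (toy data):", thermal_exponent(S_n, S_nm1))
```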
Appendix B: Comparison with Other RBM-based Schemes RG transformation and Normalizing Condition Consider again a general RG transformation where P (µ|σ) is the weight factor. The weight factor is required to satisfy the trace condition µ P (µ|σ) = 1. We argue that the trace condition is indispensable, since the condition leads to the invariance of free energy under renormalization and the following fundamental relation where f (K) is the free energy density of the system in the thermodynamic limit. For K consisting of nearest-neighbor coupling and magnetic field, under suitable transformation, we could arrive at where y t and y h are the often soughtafter critical thermal and magnetic exponents. In the following, we review the schemes proposed in Refs. [22] and [25] and point out the shortcomings in each scheme. Variational RG and Mehta and Schwab's Mapping In Ref. [22], the weight factor is defined as Here H(σ) is the original Hamiltonian, e.g., H(σ) = K ij σ i σ j . The W ij 's are the variational parameters. The form of the weight factor does not satisfy the trace condition and, in general, it is not possible to choose the parameters W ij to satisfy the trace condition (B2). The fundamental relation (B3) is only approximated. We note that in the original procedure of variational renormalization group [21], the form of the weight factor is chosen with variational parameters such that for all values of variational parameters the weight factor must satisfy the trace condition. The variational parameters are used, instead, to optimize the lower bound of the approximated free energy density. Define a distribution of the weight factor with variational parameters W ij , P W (σ) = µ e ij Wij σiµj σ µ e ij Wij σiµj . (B5) In Ref. [22] the variational parameters are chosen to make as small as possible. This completely fixes the variational parameters, leaving no room for optimizing the lower bound free energy approximation. That is to say, the variational approximation in machine learning (B6) and the variational approximation of thevariational renormalization theory work at completely different levels. The rationale of the criterion (B6) for choosing the variational parameters is that it is a necessary but not sufficient condition for the trace condition to be satisfied The normalization factor σ µ e ij Wij σiµj is equal to the partition function for the original Hamiltonian, denoted as Z. Therefore the divergence (B6) is exactly zero. The criterion is not sufficient since when we have where the trace condition fails up to some unknown constant not necessarily equal to one. On the other hand, with the parametrized form of weight factor as in (B4), the renormalized Hamiltonian would then describe the marginal distribution P W (µ) of the RBM. Define P W (µ) to be (B8) The normalization factor σ µ e ij Wij σiµj is thus equal to the partition function, Z , for the renormalized Hamiltonian irrespective of the choice of the variational parameters W ij . Therefore In this respect, we can say that the hidden variables of the machine is described by the renormalized Hamiltonian. Real-space Mutual Information Algorithm In Ref. 
[25], the weight factor factorizes as P(µ|σ) = Π_j P_Λ(H_j|V_j), where H_j = {µ_j} consists of a single renormalized spin and V_j = {σ_j^1, σ_j^2} consists of two original spins in the case of a one-dimensional system (and a 2 × 2 block of spins in the case of a two-dimensional system); see Fig. 6. The local weight factor P_Λ(H_j|V_j) is parametrized by a set of variational parameters Λ. The variational parameters Λ are obtained through only a single copy of the local weight factor, and hence we omit the subscript j in the following. The parameters are chosen to make the mutual information between the renormalized spins H and the environment E as large as possible, where the distributions needed on the right-hand side are defined as above. Since P(E) is independent of Λ, we instead maximize a proxy A_Λ = Σ_{H,E} P_Λ(E, H) log (P_Λ(E, H)/P_Λ(H)) of the mutual information.
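For small blocks, the proxy can be estimated directly from joint samples of (H, E) by plugging in empirical frequencies. The sketch below is our own illustration of that bookkeeping, with an invented function name: A_Λ equals minus the conditional entropy of E given H, so maximizing it is equivalent to maximizing the mutual information once the Λ-independent term H(E) is dropped.

```python
from collections import Counter
import math

def proxy_A(samples):
    """Plug-in estimate of A = sum_{h,e} p(e,h) * log(p(e,h) / p(h)),
    i.e. minus the conditional entropy H(E|H); it differs from the mutual
    information I(H:E) only by the Lambda-independent term H(E).
    `samples` is an iterable of (h, e) pairs, each a hashable tuple of spins."""
    joint = Counter(samples)
    n = sum(joint.values())
    p_h = Counter()
    for (h, _), c in joint.items():
        p_h[h] += c
    a = 0.0
    for (h, e), c in joint.items():
        p_he = c / n
        a += p_he * math.log(p_he / (p_h[h] / n))
    return a
```

For larger environments such a direct tabulation becomes infeasible, which is where the further approximations discussed next come in.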
However, to evaluate the proxy A_Λ, further approximations have to be made. In order to perform quantitative analysis, the authors construct a "thermometer" function T(A_Λ) which maps the proxy A_Λ to the temperature. The thermometer works to extract the effective temperature of the renormalized system. To construct such a thermometer, it is required to generate sets of MC samples at different temperatures. For each set of samples, one can compute the proxy A_Λ and hence know the mapping from A_Λ to the temperature T for this set of samples. For a given type of system (e.g., Ising), we can write T(A_Λ) as T(T_0, L, b, l), where T_0 is the temperature of the initially prepared system, L is the initial system size, b is the scale factor, and l is the scaling length (l = 0 means the original system, l = 1 means one-step renormalization, and so on). We can then fit a function to these sets of samples and construct the thermometer. M. Koch-Janusz and Z. Ringel postulate a scaling function of the form f((L/b^l)^{1/ν}) related to the effective renormalized temperature T(T_0, L, b, l) as (T(T_0, L, b, l) − T_c)/(T_0 − T_c) = f((L/b^l)^{1/ν}), where T_c is the critical temperature of the original system. Finally one could collapse the plot of (T − T_c)/(T_0 − T_c) as a function of (L/b^l)^{1/ν} to estimate the value of ν and T_c.

Neural Monte Carlo Renormalization Group

In our work, we define the weight factor to be P_W(µ|σ) = e^{Σ_ij W_ij σ_i µ_j} / Σ_µ e^{Σ_ij W_ij σ_i µ_j} (B14), where, for a translationally invariant system, the variational parameters are shift invariant; that is, the weights associated with different renormalized spins j and j′ are related by a translation of the visible index, in the case of a one-dimensional system. The weight factor satisfies the trace condition for all values of the W_ij's. Let us define a joint distribution out of this weight factor, P_W(µ, σ) = e^{Σ_ij W_ij σ_i µ_j} / Σ_σ Σ_µ e^{Σ_ij W_ij σ_i µ_j} (B16). Here P_W(µ, σ) has exactly the same form as an RBM, and the weight factor can be viewed as the conditional distribution P_W(µ|σ) = P_W(µ, σ)/Σ_µ P_W(µ, σ). Consider one of the breakthroughs in the realm of deep learning, where Hinton introduced a greedy layer-wise unsupervised learning algorithm (see Sec. 2.3 of [32]). Denote by P_W(µ|σ) the posterior over µ associated with the trained RBM (we recall that σ is the observed input). This gives rise to a (feature) empirical distribution p′(µ) over the hidden variables µ when σ is sampled from the data empirical distribution p(σ): we have p′(µ) = Σ_σ P_W(µ|σ) p(σ) (B17). The samples of µ with empirical distribution p′(µ) become the input for another layer of RBM. We can view the RBM as extracting features µ from the inputs σ. Note the similarity between the RG transformation (B1) and the feature extraction process (B17). We could postulate that the input distribution p(σ) is determined by some Hamiltonian H(σ), where p(σ) = e^{H(σ)}/Z. We postulate that the posterior distribution P_W(µ|σ) of an RBM works as a weight factor to do the RG transformation: e^{H′(µ)} = Σ_σ P_W(µ|σ) e^{H(σ)}. Hence the feature extraction process (B17) becomes a necessary condition for the system to perform the RG transformation. In other words, the feature distribution extracted by the machine is described by the renormalized Hamiltonian. Now the variational parameters in the weight factor P_W(µ|σ) are free to change. All choices of parameters yield a well-defined RG transformation. The criterion for choosing the parameters is entirely arbitrary from the perspective of doing RG: we do not know a priori what weights W_ij could give a "nicer" RG flow.
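As a concrete reading of how the weight factor (B14) acts on Monte Carlo samples, the following sketch draws renormalized configurations from the factorized conditional, assuming ±1 spins and no bias terms; it is our illustration under those assumptions, not the authors' implementation, and the names are invented.

```python
import numpy as np

def renormalize(sigma_samples, W, rng=None):
    """Draw renormalized configurations mu ~ P_W(mu | sigma) for each sample.
    For +/-1 spins and the weight factor (B14), the conditional factorizes over
    the hidden units: P(mu_j = +1 | sigma) = sigmoid(2 * sum_i W_ij sigma_i).
    sigma_samples: (num_samples, n_visible) array of +/-1 spins.
    W: (n_visible, n_hidden) filter, e.g. a shift-invariant block filter."""
    rng = rng or np.random.default_rng()
    fields = sigma_samples @ W                    # (num_samples, n_hidden) local fields
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * fields))  # P(mu_j = +1 | sigma)
    return np.where(rng.random(p_plus.shape) < p_plus, 1, -1)
```

The resulting µ configurations can then be fed to the MCRG machinery of Appendix A to obtain the renormalized couplings and critical exponents.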
A nice RG flow, however, should bring the original Hamiltonian closer to the fixed point quickly. It should also remove long-range coupling parameters, for practical purposes of performing RG and, loosely speaking, for killing the irrelevant scaling fields. Critical exponents and the coupling parameters can be easily computed using the MCRG techniques described in the previous section. In the realm of machine learning, the weights of an RBM are chosen to make the divergence (B6) as small as possible. We note that this criterion is entirely machine-learning-theoretical. In contrast, in Ref. [22], the criterion also serves as a necessary condition for the weight factor to satisfy the trace condition, a notion which is RG-theoretical. In summary, our NMCRG scheme provides an ansatz for the weight factors in the RG transformation such that the trace condition is always satisfied and the optimal RG transformation can be learned. It also allows for a direct computation of the renormalized coupling parameters and critical exponents. As demonstrated in the main text, the NMCRG scheme also naturally saturates RSMI. The simplicity and flexibility of the scheme should find more applications in the future.
2020-10-13T01:00:44.359Z
2020-10-12T00:00:00.000
{ "year": 2020, "sha1": "63c21dfb2decdcff027f6498fb2e31bc7e81faa2", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevResearch.3.023230", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "63c21dfb2decdcff027f6498fb2e31bc7e81faa2", "s2fieldsofstudy": [ "Computer Science", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
267100904
pes2o/s2orc
v3-fos-license
Adrenal crisis mainly manifested as recurrent syncope secondary to tislelizumab: a case report and literature review As an immune checkpoint inhibitor (ICI), tislelizumab is an anti-programmed cell death protein 1 (PD-1) drug. With the extensive application of ICIs, there is an ever-increasing proportion of immune-related adverse events (irAEs) in clinical settings, some of which may even be life-threatening. Herein, we present a patient with tislelizumab-induced adrenal crisis. The main clinical manifestation was recurrent syncope accompanied by high-grade fever. Timely identification and hormone replacement therapy helped the patient overcome the crisis well. Finally, the patient discontinued tislelizumab and switched to antibody–drug conjugate (ADC) therapy. We report this case to improve our understanding of this situation, identify this kind of disease, and prevent adrenal crisis in time. Eventually, limiting toxicities reduces the interruption of immunotherapy. Since irAEs are multisystem damage with more non-specific symptoms, except for oncologists, general practitioners who endorse the need for taking a holistic approach to the patient should play a vital role in the management of cancer treatment. Introduction Nowadays, the treatment of multiple malignancies has been revolutionized by immune checkpoint inhibitors (ICIs), which prolong patients' long-term survival and produce durable remissions.ICIs are monoclonal antibodies that target two key signaling pathways related to T-cell activation and exhaustion by binding and inhibiting cytotoxic T lymphocyte antigen (CTLA)-4 or programmed death (PD)-1 and its ligand PD-L1 (1,2).However, ICIs may also demolish the maintenance of immunological tolerance to self-antigens (3), leading to immune-related adverse events (irAEs) in different organ systems, especially autoimmune-like manifestations targeting endocrine glands (4).These toxic effects are a major cause of onset, often leading to treatment discontinuation, and can have debilitating long-term consequences (1).Endocrine dysfunction is one of the most commonly reported irAEs in ICI clinical trials, including hypothyroidism, hyperthyroidism, hypophysitis, primary adrenal hypofunction (PAI), and type 1 diabetes (5). Little is known about severe adrenal insufficiency (AI) related to ICIs, with an incidence rate of ≤1% (6)(7)(8)(9).AI usually manifests as grade 1-2 irAEs, while adrenal crisis (AC) manifesting as grade 3-4 irAEs is rare.The presentation of AI is usually non-specific.The main clinical symptoms include fatigue, anorexia, and nausea, which may be misdiagnosed as complications of a malignant tumor.When AI is not recognized, misdiagnosis or delayed diagnosis may lead to life-threatening AC (10,11).A history of previous AC is a susceptible factor for patients with AI to experience AC again (10).Severe symptoms of adrenal crisis may lead to a decline in confidence and discontinuation of immunotherapy.Therefore, it is of great clinical significance to identify and treat AI in time. 
This case report describes a middle-aged man with non-invasive urothelial carcinoma who manifested AC characterized by recurrent syncopal episodes after treatment with a PD-1 inhibitor, tislelizumab.Syncope under the category of undifferentiated symptom diseases necessitates a significant investment of time, finances, and effort to pinpoint the precise etiology (12).We present this case to underscore the importance of pre-medication education and regular post-usage monitoring of relevant diagnostic parameters.Elevating the awareness of healthcare practitioners regarding adverse drug reactions contributes to minimizing the progression of such reactions, ultimately reducing the temporal and financial costs incurred by patients. Case description A 58-year-old male patient was admitted to our department due to recurring syncopal episodes for more than 3 months.He was also suffering from high fever, confusion, fatigue, anorexia, nausea, and vomiting.The patient's family once monitored his blood pressure after syncope with a systolic blood pressure of 50-60 mmHg and a blood glucose level of 4.6 mmol/L.In addition to physical symptoms, the patient was under great mental stress at the time of admission. Three years ago, he was diagnosed with urothelial carcinoma and underwent minimally invasive surgery.The tumor recurred 6 months after resection, and on May 20, 2022, he underwent complete transurethral resection of bladder tumor (TURBT).Histological diagnosis was low-grade non-invasive papillary urothelial carcinoma.Cancer tumor staging showed no metastasis or local invasion, and the last contrast-enhanced multislice computed tomography (CT) was normal.The patient received treatment with tislelizumab (approximately eight cycles).The first four cycles of immunotherapy were from May 25, 2022, to July 27, 2022, with the infusion of tislelizumab (200 mg, injection d1 3 weeks) without specific discomfort.Further details and information are presented in Figure 1.He has a smoking history for more than 30 years.Previous endocrine disorders were unclear. On admission, his body temperature (BT) was 35.5°C, and his blood pressure (BP) was 117/77 mmHg.His physical examination revealed increased breath sounds and a positive Murphy's sign.Combined with the results of previous laboratory examinations and clinical manifestations, we first consider the possibility of Adams-Stokes syndrome attack, viral myocarditis, transient ischemic attack (TIA), pulmonary embolism, sepsis, vasovagal syncope, insulinoma, PD-1-related adverse reactions, and so on. 
Laboratory data revealed a high level of high-sensitivity C-reactive protein.We further performed contrast-enhanced multislice CT, which showed interstitial infiltrates and exaggerated lung markings.The level of serum sodium was normal.Other abnormal data are shown in Table 1.Diurnal rhythm changes of serum adrenocorticotropic hormone (ACTH) and cortisol suggested extremely low cortisol and ACTH and inconsistent diurnal rhythm (Table 2), considering that the patient had hypoadrenalism.In view of the low level of ACTH and no abnormalities seen on adrenal CT, the diagnosis of central hypoadrenalism was confirmed.Given the history of immunotherapy, we considered the possibility of immune-related hypophysitis (irH).Consequently, we comprehensively evaluated the endocrine system of patients, as irH could involve the hypothalamic pituitary thyroid axis and gonadal axis in addition to the cumulative hypothalamic-pituitary-adrenal (HPA) axis.Thyroid function tests revealed that although thyroglobulin was slightly elevated, free triiodothyronine (FT3), free thyroxine (FT4), and thyroidstimulating hormone (TSH) were normal, and thyroid peroxidase and thyrotropin receptor antibodies were negative, suggesting normal pituitary thyroid function.The results of the sex hormone test showed that luteinizing hormone and progesterone were mildly elevated, and the remaining indexes were within normal limits, suggesting normal pituitary-gonadal function.To this point, the patient's etiology could be clarified, as irH triggered isolated adrenocorticotropic hormone deficiency (IAD).The common clinical manifestations of IAD were fatigue, nausea and vomiting, weight loss, hypoglycemia, hyponatremia, and refractory hypotension.The patient's symptoms were highly consistent with IAD. Finally, we performed pituitary MR imaging (Figure 2), which revealed a normal pituitary gland.The patient's family revealed that the patient experienced absolute low blood pressure (<100 mmHg) and hyperthermia with confusion during the syncopal episode; thus, adrenal crisis was the most reasonable diagnosis.After administration of hydrocortisone sodium succinate (0.15 g iv drip bid) and continuous fluid resuscitation, the patient's condition gradually improved.After discharge, he continued to be given prednisone 10 mg qd (8a) and 5 mg qd (5p) orally.After 3 months' follow-up, the patient did not have syncope again, and the symptoms of fever, fatigue, anorexia, nausea, and vomiting improved significantly.In addition, there was no recurrence of adrenal crisis or other immunerelated adverse symptoms during the follow-up period. Discussion Here, we introduced a case of adrenal crisis after treatment with PD-1 (tislelizumab), which was a 3-4 grade irAE related to PD-1.Several cases of immunotherapy-induced adrenal crisis have been reported, most of which manifested as high-grade fever, persistent hyponatremia, or acute abdomen, while recurrent syncope is rare, and non-specific symptoms made the diagnosis of diseases difficult.Due to enormous psychological pressure, despite its immense clinical benefits, the patient stopped treatment.Clinical physicians should develop an awareness of irAEs in order to identify the events timely and reduce incidences of discontinuation of ICIs. 
According to the American Society of Clinical Oncology (ASCO) Guideline ( 13), a routine endocrine examination should be taken to evaluate the endocrine gland or organ.In this case, we confirmed the diagnosis of central hypoadrenocorticism through endocrine examination.We then traced the patients' medical history to figure out the potential cause.The patient had no previous history of taking, inhaling, or injecting steroids and no history of opioid use.In addition to being treated with PD-1 for eight cycles, there were no other relevant reasons and incentives.Therefore, we considered whether there were PD-1-related adverse drug reactions.Among them, irH has attracted our attention, which is defined by the occurrence, in patients treated with ICIs, of functional defect of one or more pituitary axes with or without slight pituitary MRI abnormalities (14).The exact pathogenesis of irH is still unknown.CTLA-4 and PD-1/PDL-1-related hypophysitis are currently known to have different clinical features, which may suggest different underlying mechanisms.CTLA-4-related hypophysitis manifests as frequent impaired TSH and luteinizing hormone/follicle-stimulating hormone (LH/FSH) secretion accompanied by impaired ACTH secretion (15) and a greater propensity for type II hypersensitivity reactions associated with off-target effects of CTLA-4 in the pituitary (16).In contrast, PD-1-associated hypophysitis is less frequent (17), and most patients have a specific impairment of ACTH only, presenting as IAD (18).The pituitary gland of autopsy cases showed evidence of type IV hypersensitivity by cytotoxic T lymphocytes (16).Different clinical presentations are presented depending on the specific target gland axis of injury. Due to the lack of specific clinical manifestations and accurate onset time, the diagnosis of irH is difficult.At present, the diagnosis is mainly based on biochemical and imaging examinations.Specific immune biomarkers for its diagnosis are not currently available, with the most common biochemical evidence being a deficiency of pituitary hormones.Imaging can rely on pituitary MRI to provide diagnostic evidence: pituitary enlargement, stalk thickening, and enhancement with allogeneic or heterologous contrast media are present on MRI in 77% of patients with IRH, whereas 23%-33% of patients do not show abnormalities on MRI (16).Multiple studies have suggested that hypophysitis induced by PD-1/PD-L1 inhibitors may lack the typical pituitary enlargement compared to CTLA-4 inhibitors (19)(20)(21).Therefore, imaging studies showing a normal appearance of the pituitary gland do not rule out hypophysitis (22).In addition, the diagnosis of hypophysitis may lag a few weeks after imaging shows pituitary enlargement (23). 
According to the 2022 National Comprehensive Cancer Network (NCCN) guidelines for irH, MRI should be performed if the patient has symptoms during treatment (24). A recent study suggested that brain MRI after receiving ICI therapy should be compared with previous results to monitor changes in pituitary size, which may foreshadow impending anterior pituitary hormone dysfunction (22). An enlarged pituitary gland, as indicated by imaging studies, is important to exclude metastatic disease in addition to suspecting hypophysitis (23). Several studies showed that ICI-related central adrenocortical dysfunction appears to be permanent (22, 23, 25-27). However, most of these reports concerned central hypoadrenocorticism caused by another immune checkpoint inhibitor, a CTLA-4 drug. So far, there is a lack of histocytological evidence to prove whether pituitary-adrenal axis function can be restored (28, 29). To sum up, hormone replacement therapy should not be delayed by waiting for a pituitary gland MRI when an endocrine examination suggests central hypoadrenocorticism (30). The initial clinical manifestations of IAD lack specificity, which delays diagnosis and eventually progresses to adrenal crisis, threatening the patient's life. The main clinical manifestations of adrenal crisis are severe hypotension or hypovolemic shock, acute abdomen symptoms, vomiting, hyperthermia or hypothermia, and hypoglycemia. Among them, hypotension is a core symptom in the diagnosis of adrenal crisis. However, seemingly normal blood pressure cannot rule out a crisis. According to the adverse event evaluation criteria (Common Terminology Criteria for Adverse Events (CTCAE)), adrenocortical insufficiency usually presents as grade 1-2 irAEs, while grade 3-4 irAEs, especially adrenal crisis, are rarely reported. In a large meta-analysis study containing 160 clinical trials and 40,432 patients, Jingli et al. found that among patients using ICIs, the incidence of all-grade and severe-grade hypoadrenalism was 2.43% and 0.15%, respectively (6). Our case had grade 3-4 irAEs induced by tislelizumab and presented with adrenal crisis. The symptoms of our patient were unremarkable and could have been easily overlooked if an irAE had not been suspected. Although adrenal crisis is rare, it is a life-threatening side effect of ICIs that requires immediate recognition and treatment with intravenous glucocorticoids. Therefore, a deep understanding of irAEs as well as adrenal crisis, with early diagnosis and treatment, is significantly important. When an immunotherapy-related adrenal crisis occurs, an initial intravenous or intramuscular bolus of 100 mg hydrocortisone in addition to supportive fluid therapy is required, as well as a continuous intravenous infusion of 200 mg hydrocortisone q24h (daily) or an intravenous or intramuscular bolus of 50 mg hydrocortisone q6h (or 50 mg four times daily) (10).
FIGURE 2 Post-contrast T2-weighted MR image of the pituitary gland.
The recommended duration is 24-48 h until the patient can take oral hydrocortisone (11).Glucocorticoid replacement therapy should be the primary treatment when the patient's condition is stable.Our patient had no history of underlying endocrine diseases such as diabetes, so there were no specific restrictions on the dose of cortisol to be administered.In patients with diabetes, choosing the appropriate cortisol dose that in turn maximizes benefits and reduces associated side effects is a challenge.Considering that high-dose cortisol may aggravate the underlying disease or lead to new disease (31), the potential benefit of high-dose glucocorticoid treatment should be balanced against efficacy loss due to anticancer immunotherapy.Although this issue remains controversial (27), the dose of cortisol should be reduced appropriately.For adults, the oral maintenance dose of hydrocortisone needs to be 15-25 mg per day (32).The hydrocortisone dose should then be gradually reduced according to the patient's clinical manifestations, with close monitoring of blood pressure and recurrence of clinical symptoms (33). For patients who develop endocrine diseases that can be controlled using hormone replacement therapy, there is no need to discontinue ICIs despite grade 3-4 irAEs (34,35).Theoretically, during the treatment period of ICIs, at the same time as the immune system's reduced tolerance to triggering irAEs, its ability to recognize and kill cancer cells is enhanced, so the occurrence of irAEs may be a positive predictor of treatment response (22).Our patient underwent imaging examinations that showed no metastasis and recurrence of the tumor, indicating that this patient achieved a complete response to the treatment with tislelizumab.Meanwhile, several studies have shown a positive correlation between the development of irAEs as a result of ICI therapy and improved tumor response and survival.However, for grade 3-4 irAEs, lifethreatening side effects require urgent hospitalization for corresponding symptomatic supportive care.After adverse reaction disappearance, the restoration of ICIs requires consideration of many situations, such as previous tumor reactions, treatment duration, toxicity type and severity, toxicity resolution time, availability of alternative therapies, and patient's condition (13).Multiple studies have confirmed a significantly increased incidence of irAEs with the combination of ICIs, and it is not recommended to switch to a new ICI (27,(36)(37)(38).After a consult with an oncologist on this patient's condition, the oncologist recommended an antibody-drug conjugate (ADC) therapy.At our later follow-up visit, the patient decided to discontinue ICI therapy and switch to ADC to continue the antitumor treatment. 
ADCs are composed of monoclonal antibodies, cytotoxic payloads, and linkers (39,40).The efficacy and toxicities of an ADC as a cytotoxic therapy are contingent upon the critical contributions of each component (40,41).Although high specificity and low toxicity are expected for this novel compound, unpredictable toxicity still exists and demands prompt consideration (40,42).In the subsequent follow-up, our patient still exhibited adrenal insufficiency, but common side effects attributable to ADC drugs were not observed, such as thrombocytopenia, liver or ocular toxicity, and peripheral neuropathy (42).At present, a subset of clinical trials is underway for combined regimens involving ADC drugs and immune checkpoint inhibitors (43,44).Additionally, it underscores the importance of clinicians exercising caution with respect to the drug toxicities induced by the combined regimen. In addition, a growing number of clinical cases prove that endocrine diseases such as late-onset AC still occur after the termination of ICI therapy (45,46), which also proves that the anti-tumor effects of ICIs can be long-term in vivo and expressed (46,47).Therefore, it is recommended to be always alert to the possibility of irAEs even after the discontinuation of ICIs.Current medical examination methods cannot distinguish between immunological-and non-immunological-related causes and specific immunological biomarkers deprivation, making it difficult for clinicians to detect irAEs (48).Because of their specificity of presentation, atypical timing, and clinical coexisting with other diseases, irAEs may be more difficult to diagnose and identify (48)(49)(50).Especially for immune checkpoint inhibitors, risk factors predicting these events have yet to be determined.It is a challenge to predict who will develop severe or permanent toxicity (1).Before giving treatment to patients with PD-1/PD-L1 inhibitors, it is necessary to inquire about the history of endocrine diseases and autoimmune diseases in detail, conduct reasonable baseline screening, regularly monitor changes in endocrine indicators, increase vigilance against possible related symptoms and signs, and detect and promptly handle irAEs as soon as possible (22,51).Once the dose of hormones and the types of anti-tumor drugs are determined, it is necessary to further provide patients with knowledge of common adverse reactions to ICI and conduct regular follow-up visits.We believe that self-education and management of such patients play an important role in the progress of the disease (52).Timely identification limits toxicities while maximizing anti-tumor efficacy to reduce the interruption of immunotherapy. 
When reviewing the patient's history, it was found that the patient had skin manifestations of irAEs 5 months before admission.However, these failed to capture the attention of both the patient and clinicians.Despite that the most common organ system was involved (1,5), the initial warning symptom was ignored, resulting in consequential outcomes.According to the American Society of Clinical Oncology Clinical Practice Guideline, timely and latest education about immunotherapies should be provided throughout treatment and survivorship (34).However, our patient was only informed that this drug has therapeutic effects before treatment, accompanied by a spectrum of side effects affecting different organ systems.The detailed elaboration on these side effects was withheld due to multifarious factors.This case confirms significant deficiencies in our current approaches to education and management.This also poses a question: following a comprehensive explanation of the toxicities associated with ICIs and ADCs, which pharmaceutical approach do patients exhibit a greater inclination to for anti-tumor therapy?However, considering the preexistence of significant side effects, further inquiry at this juncture might compromise the objectivity of the responses from the patients. With the increase in clinical practice of tumor immunotherapy, the occurrence of immune-related adverse reactions will constantly increase.Specialists of different departments may receive referrals for patients suffering from specific symptoms of adverse events in their field of expertise.However, as irAEs are multisystem damage with more non-specific symptoms, except for specialists, general practitioners should play a greater role in the management of cancer treatment.Identifying and characterizing irAEs is a cornerstone in ascertaining the impact of cancer treatment on patients and healthcare professionals (53).Cancer survivors are often troubled by the long-term consequences of cancer and its treatment (54).Because primary care is an integrated and accessible healthcare service, most patients consult general practitioners initially once they have symptoms.Lower-grade irAEs can be identified and controlled so as to divert medical resource pressure and financial pressure away from tertiary healthcare toward primary healthcare.As gatekeepers to further services, general practitioners should play a greater role in improving the quality of care for cancer survivors. FIGURE 1 FIGURE 1Timeline of symptoms. TABLE 2 Diurnal rhythm changes of serum ACTH and cortisol.
2024-01-24T18:01:30.184Z
2024-01-16T00:00:00.000
{ "year": 2024, "sha1": "1488afce22bb4b3c7eace12093a35dc2cc09182e", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2023.1295310/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e2084dddaf759cb8348722f311c3c594041f7caa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
23588448
pes2o/s2orc
v3-fos-license
Autophagy inhibition through PI 3 K / Akt increases apoptosis by sodium selenite in NB 4 cells Selenium possesses the chemotherapeutic feature by inducing apoptosis in cancer cell with trivial side effects on normal cells. However, the mechanism in which is not clearly understood. Emerging evidence indicates the overlaps between the autophagy and the apoptosis. In this study, we have investigated the role of autophagy in selenium-induced apoptosis in NB4 cells. We find that autophagy is suppressed in NB4 cells treated by sodium selenite, as measured by electron microscope, acridine orange staining and western blot. Moreover, selenite combined with autophagy inhibitor contributes to the up-regulation of apoptosis, while the PI3K/Akt signaling pathway is downregulated. Consistently, when the inhibitor of PI3K was applied, the autophagic level significantly decreased. In summary, sodium selenite increases NB4 cell apoptosis by autophagy inhibition through PI3K/Akt, and the inhibition of autophagy contributes to the up-regulation of apoptosis. [BMB reports 2009; 42(9): 599-604] INTRODUCTION Autophagy is a regulated process that degrades and recycles cellular constituents, where parts of the cytoplasm or entire organelles are sequestered into double-membraned vesicles termed autophagic vacuoles or autophagosomes.These autophagosomes then fuse with lysosomes to mature into single-membraned autophagolysosomes, in which sequestered cytoplasmic components are degraded by lysosomal hydrolases (1,2). Although autophagy can serve as a protective mechanism against apoptosis and starvation by recycling macromolecules and removing damaged mitochondria and other organelles, excessive autophagy results in cell death with appearance of excessive autophagic vesicles (3,4).While apoptosis is called type I programmed cell death, autophagic cell death is named as type II programmed cell death.These two kinds of cell death are distinct from each other.However, more and more evidence shows that there is a cross-talk between them (5). Recent studies indicate that the specific function of autophagy depends on the certain circumstances, including cell types, cellular context, and the nature of treatment. Selenium is an essential trace element with the potential chemopreventive effects against various cancers (6)(7)(8)(9).Our previous work has shown that 20 μM sodium selenite can induce apoptosis in NB4 cells (10).However, whether autophagy is involved in selenite-induced apoptosis of NB4 cells remains unknown.Therefore, it is of potential clinical significance to better understand the molecular mechanisms regulating the autophagic pathway in selenite-induced apoptosis in NB4 cells. The serine/threonine kinase, Akt (protein kinase B), is a downstream effecter of PI3K.The activation of this pathway allows cells to inhibit apoptotic and autophagic programmed cell death, which may contribute to malignant transformation and tumor growth (11)(12)(13).Moreover, activated Akt also stimulates the mTOR/P70S6-kinase pathway, whose activation is required for the initiation of protein synthesis (14,15).Our former studies showed that the expression of p-Akt, down-stream factor of PI3K, decreased in selenite induced apoptosis in NB4 cells (16).Consequently, we postulated that the PI3K pathway plays an important role in the inhibition of autophagy and the increase of apoptosis in NB4 cells. 
In this study, we find that the PI3K-Akt pathway promotes autophagy in NB4 cells.And Autophagy inhibitors enhance the apoptosis ratio in NB4 cells treated by sodium selenite. Selenite increases apoptosis with inhibiting autophagy in NB4 cells Autophagy is inhibited in selenium-induced apoptosis in NB4 cells as measured by electron microscopy (EM), acridine orange (AO) staining and western blot. Autophagy is featured by the formations of autophagosome and autolysosome.Using EM, we identified that the number of double-membraned autophagosomes (indicated by arrows) decreased while empty vesicles increased after the treatment with selenite in NB4 cells (Fig. 1A).It indicates less autophagosome production with the inhibition of autophagy in a selenite induced time-dependent manner.AO is a fluorescent weak base that accumulates in acidic spaces, such as autolysosomes and lysosomes, which are called acid vesicular organelles AVO fluoresce bright red, whereas the cytoplasm and nucleolus fluoresce bright green and dim red.After the culture of NB4 cells with or without selenite, cells were incubated with AO.And using cells incubated with Bafilomycin A1, an autophagy inhibitor, as negative control.Using Laser scanning confocol microscope, we found that bright red dots faded in NB4 cells after the treatment of selenite, while the green areas maintained (Fig. 1B). LC3 exists in two forms, LC3-I and its proteolytic derivative of LC3-II (18 and 16 kDa, respectively), which are localized in the cytoplasm (LC3-I) or autophagosomal membranes (LC3-II).LC3-II thus can be used to estimate the abundance of autophagosomes before they are destroyed through the fusion with lysosomes.The amount of LC3-II is closely related with the number of autophagosomes and serves as a good indicator of autophagosome formation.As shown by the western blots in Fig. 1C, both LC3-I and II decrease in a time dependent manner in NB4 cells after the treatment of selenite. Beclin-1 expression is shown to be involved in the formation of preautophagosomal structures.Similar to LC3-II, the expression of beclin-1 decreased following the treatment by selenite in NB4 cells (Fig. 1C). Taken together, selenite induced apoptosis with inhibition of autophagy in NB4 cells. Inhibition of autophagy increases selenite-induced apoptosis Selenite induces apoptosis with inhibition of autophagy in NB4 cells.Therefore, we asked whether autophagy inhibition can increase the rate of apoptosis.We used two autophagy in-hibitors to pre-treat NB4 cells for 1.5 h, and then sodium selenite was added into the system. Both 3-MA and Baf A1 could inhibit autophagy at dose of 10 mM and 100 nM respectively in NB4 cells (Fig. 2A).Expressions of LC3 and Beclin 1 both decreased when NB4 cells treated with 3-MA.In Baf A1 treated cells, autophagy is inhibited before the fusion of autophagosomes and lysosomes.As a result, LC3-II aggregates on the autophagosomes, and can not be degraded through the fusion of autophagosomes and lysosomes.Therefore, in these cells, the increase of LC3-II expression indicates autophagy is suppressed. 
After confirming that the autophagy inhibitors can efficiently suppress autophagy in NB4 cells, we asked what their effects on apoptosis in NB4 cells were. We assessed the effects of autophagy inhibitors on caspase activation in NB4 cells by western blot. The results showed that the activated cleavage fragment of caspase-9 increased after exposure to autophagy inhibitors and the combination treatment. In parallel with the activation of caspase-9, there were also increases in the cleavage of effector caspases, namely caspase-3 and -7 (Fig. 2B). The activation of caspase-9 indicated that the mitochondria-mediated apoptotic pathway might be involved in autophagy inhibitor-induced cell apoptosis in NB4 cells. The flow cytometry results were consistent with this. When NB4 cells were treated with 3-MA or Baf A1 combined with selenite, the apoptosis ratio was significantly enhanced compared with the selenite-alone treatment group (Fig. 2C). Baf A1 alone could also efficiently trigger apoptosis. Taken together, inhibiting autophagy by 3-MA or Baf A1 can facilitate selenite-induced apoptosis in NB4 cells. PI3K/Akt/mTOR pathway promotes autophagy We had already shown that autophagy inhibition by 3-MA and Baf A1 can enhance selenite-induced apoptosis in NB4 cells. We further explored the mechanism behind this phenomenon. Previous studies showed that p-Akt decreased when NB4 cells were treated with sodium selenite. Additionally, accumulating evidence supports that the PI3K/Akt/mTOR signaling pathway is involved in autophagy (20)(21)(22)(23)(24). We investigated how this pathway functioned in inhibiting autophagy and promoting apoptosis by selenite in NB4 cells. As shown in Fig. 3A, Akt phosphorylation decreased sharply upon treatment with selenium, with or without autophagy inhibitor. Selenium could also inhibit p-p70S6K, the downstream effector of mTOR, which is regulated by p-Akt (Fig. 3B). Moreover, the autophagy inhibitors enhanced the extent of suppression by selenium. This indicates that suppression of the PI3K/Akt/mTOR signaling pathway is involved in autophagy inhibition and promotes apoptosis in NB4 cells treated by selenium. LY294002 inhibits autophagy through inhibiting Akt To further assess whether Akt inhibition contributes to the suppression of autophagy, the PI3K/Akt inhibitor LY294002 (LY) was added into this system. EM results show that the formation of autophagosomes decreased sharply after treatment with LY, compared to the control group (Fig. 4B). In addition, Beclin 1 and LC3 also decreased in NB4 cells treated with LY. This indicates that Akt inhibition can suppress autophagy (Fig. 4A). DISCUSSION As selenium is a promising clinical medicine in dealing with cancer, this study further investigated the mechanism of selenium-induced apoptosis in NB4 cells. Autophagy is suppressed in NB4 cells treated by sodium selenite in a time-dependent manner, and the combination treatment of autophagy inhibitors and selenium enhances the apoptosis ratio in NB4 cells. In addition, suppression of the PI3K/Akt signaling pathway contributes to the inhibition of autophagy in NB4 cells. These studies expand our understanding of the roles of autophagy in regulating apoptosis and provide important information on the molecular mechanisms of autophagy after selenium treatment in NB4 cells.
It was previously suggested that autophagy and apoptosis were distinct forms of cell death, but more and more recent data implies that there is a mechanistic overlap between these two types of cell death (3).Some observations indicate that autophagy plays a role in preventing cells from apoptosis through the sequestration of cytochrome c (25).Other evidence showed that active autophagy appeared to increase the tendency to undergo apoptosis (20,26,27).In our study, autophagy decreased in selenite-induced apoptosis in NB4 cells.Moreover, when autophagy was suppressed by 3-MA and Baf A1 in selenite-treated NB4 cells, apoptosis ratio was significantly enhanced compared to the selenite alone treatment group, suggesting autophagy could protect NB4 cells from death through antagonizing apoptosis induced by selenite.Autophagy serves a protective role in NB4 cells. The function of PI3K-Akt pathway and its links to autophagy and apoptosis are disputing problems.Some reports described that PI3K-Akt activation suppresses autophagy in mammalian cells (21)(22)(23)(24).However, emerging studies have pointed out that PI3K-Akt pathway positively regulates autophagy (20).In this study, PI3K-Akt was activated in an autophagic process, but suppressed in apoptosis.Many proteins, including PI3K, Akt, and mTOR, are related to both apoptosis and autophagy.However, the specific associations among these proteins are still not clear.As a result, it needs to be studied in the future of whether continuous expression of p-Akt could inhibit apoptosis in NB4 cells.In summary, PI3K/Akt signaling is augmented in autophagy, while it is suppressed in apoptosis. In conclusion, our results demonstrate that PI3K-Akt pathway promotes autophagy in NB4 cells.Autophagy facilitates the survival of NB4 cells.And the combined treatment of autophagy inhibitors and selenium raises the apoptosis ratio in NB4 cells than selenium treated alone. Cell culture NB4 cells were cultured in RPMI 1640 medium supplemented with 10% fetal bovine serum, 100 units/ml penicillin, and 100 μg/ml streptomycin at 37 o C in a humidified atmosphere with 5% CO2. Cell lysis and western blot analysis Approximately 1 × 10 7 Cells were collected, washed twice with ice-cold phosphate buffered saline (PBS), and lysed in Cell lysis RIPA Buffer for 5 min on ice and then subjected to sonication for 20 s.The lysate was centrifuged at 12,000 g for 20 min at 4 o C. The supernatant was collected, and protein concentration was determined by the Bradford assay.Equal amounts of protein were separated by 15% SDS-PAGE and transferred onto nitrocellulose membranes.The membranes were then blocked with Tris buffered saline-Tween-20 (TBST) containing 5% non-fat milk and incubated by primary antibodies overnight at 4 o C.After washing with TBST, membranes were incubated by secondary antibodies conjugated with HRP for 1 h at room temperature.After a second round of washing with TBST, the blots were probed with the ECL system. Flow cytometry analysis of apoptosis Cells were washed twice with ice-cold PBS, and fixed with 70% ethanol at 4 o C overnight.After washing with PBS, cells were incubated in 0.5 ml PBS containing 50 μg/ml RNase A for 30 min at 37 o C, and then added PI to achieve the final concentration of 50 μg/ml for 30 min on ice in the dark.The resultant cell suspension was then subjected to flow cytometry analysis (COULTER EPICS XL). 
Detection of acidic vesicular organelles with acridine orange staining To detect acidic vesicular organelles (AVO), treated cells were stained with 1 μM acridine orange for 15 min.Cell images were captured with laser scanning confocol microscope (LEICA TCS SP2 SE) excitation wavelength: 488 nm; emission wavelength: green light, 530 nm.Red light, 640 nm). Electron microscopy NB4 Cells were collected and fixed in 2.5% glutaraldelhyde for at least 3 h.Then cells were treated with 2% paraformaldehyde at room temperature for 60 min, 0.1% glutaraldelhyde in 0.1 M sodium cacodylate for 2 h, postfixed with 1% OsO4 for 1.5 h, after second washing, dehydrated with graded acetone, and was embedded in Quetol 812.Ultrathin sections were observed using a HITACHI H7100 electron microscope. Statistical methods All data and results presented were confirmed in at least three independent experiments.The data are expressed as means ±S.D with the statistical method of Student's t test.P < 0.05 was considered statistical significance. Fig. Fig. 1.Selenite inhibits autophagy in NB4 cells.(A) Electron microscopy pictures were taken of NB4 cells untreated (control) or treated with 20 μM selenite for 6 h, 12 h, 24 h respectively.Autophagosomes (arrow) and nucleus (N) are indicated.Scale bars, 3 μM.(B) After treatment with 20 μM selenite for 12 h, NB4 cells were stained with AO as described in the Material and Methods and detected by laser scanning confocol microscope.Non-treatment NB4 cells served as the positive control, with 100 nM Baf A1 treatment NB4 cells as the negative one.(C) After exposure to 20 μM selenite for 3 h, 6 h, 12 h, 24 h, NB4 cells were harvested and analyzed by Western blot with antibodies against Beclin-1and LC3. Fig. 2 . Fig. 2. inhibition of autophagy by 3-MA and Baf A1 increases selenite-induced apoptosis in NB4 cells.Cells were pretreated with 10 mM 3-MA or 100 nM Baf A1 for 1.5 h prior to the selenium treatment for 24 h.(A) Western-blot analysis of the effect of autophagy inhibition by 3-MA or Baf A1, using antibodies against Beclin1 and LC3.(B) Western-blot analysis of caspase cleavage treated by selenite, 3-MA or Baf A1, using antibodies against cleaved caspase-9, cleaved caspase-3 and cleaved caspase-7.(C) Flow cytometry analysis of the effect of autophagy inhibitor 3-MA and Baf A1 on selenite-induced cell apoptosis.Data are presented as the mean ± SD of triplicates.*P < 0.05. Fig. 3 . Fig. 3. Suppression of p-Akt and mTOR by autophagy inhibitor is enhanced with selenium.Cells were pretreated with 10 mM 3-MA or 100 nM Baf A1 for 1.5 h prior to selenium treatment for 24 h.(A) Western-blot analysis of the effect regarding p-Akt inhibition by selenium and its cotreatment with 3-MA and Baf A1. (B) Western-blot analysis of the effect regarding mTOR inhibition by selenium and its cotreatment with 3-MA and Baf A1.
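As a small illustration of the statistical comparison described in the methods above (means ± SD of triplicates compared with Student's t test, significance at P < 0.05), one could proceed as in the sketch below; the numbers are placeholders rather than data from the figures, and SciPy is assumed to be available.

```python
from scipy import stats

# Placeholder triplicate apoptosis ratios (%) for two hypothetical treatment groups
selenite_alone = [18.2, 20.1, 19.4]
selenite_plus_3ma = [31.5, 29.8, 33.0]

# Two-sample Student's t test; P < 0.05 taken as statistically significant
t_stat, p_value = stats.ttest_ind(selenite_alone, selenite_plus_3ma)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}, significant: {p_value < 0.05}")
```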
2018-04-03T01:46:51.340Z
2009-09-30T00:00:00.000
{ "year": 2009, "sha1": "20068a34f6cbc1af48528d578f5d78b91108b3b4", "oa_license": "CCBYNC", "oa_url": "http://society.kisti.re.kr/sv/SV_svpsbs03V.do?cn1=JAKO200910103475589&method=download", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "20068a34f6cbc1af48528d578f5d78b91108b3b4", "s2fieldsofstudy": [ "Chemistry", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
12602704
pes2o/s2orc
v3-fos-license
Adaptation to potassium starvation of wild-type and K+-transport mutant (trk1,2) of Saccharomyces cerevisiae: 2-dimensional gel electrophoresis-based proteomic approach Saccharomyces cerevisiae wild-type (BY4741) and the corresponding mutant lacking the plasma membrane main potassium uptake systems (trk1,trk2) were used to analyze the consequences of K+ starvation following a proteomic approach. In order to trigger high-affinity mode of potassium transport, cells were transferred to potassium-free medium. Protein profile was followed by two-dimensional (2-D) gels in samples taken at 0, 30, 60, 120, 180, and 300 min during starvation. We observed a general decrease of protein content during starvation that was especially drastic in the mutant strain as it was the case of an important number of proteins involved in glycolysis. On the contrary, we identified proteins related to stress response and alternative energetic metabolism that remained clearly present. Neural network-based analysis indicated that wild type was able to adapt much faster than the mutant to the stress process. We conclude that complete potassium starvation is a stressful process for yeast cells, especially for potassium transport mutants, and we propose that less stressing conditions should be used in order to study potassium homeostasis in yeast. Introduction Alkali metal cations, especially potassium and sodium play an important role in cell physiology and metabolism. Among organisms studied, the yeast Saccharomyces cerevisiae is still chosen as a model to elucidate homeostasis in eukaryotic cells because of the availability of the complete genome sequence (Goffeau et al. 1996), an in silico prediction of all transporters (Nelissen et al. 1997), a wide range of genetic tools to generate mutants, and availability of postgenomic tools such as proteomic databases. In yeasts cells, intracellular concentrations of Na + and K + are around 10-20 and 200-300 mM, respectively. K + is required for many physiological functions, such as cell volume and intracellular pH regulation, maintenance of stable potential across the plasma membrane, protein synthesis, and enzyme activation (Hoffman 1964;Ariño et al. 2010). Saccharomyces cerevisiae cells are able to grow in media containing a large range of K + , from 2 μM to 2 M, in all conditions internal K + remains quite constant that allows normal cell growth and division (Ramos et al. 1994;Haro and Rodríguez-Navarro 2002). Two different systems of K + uptake have been described in S. cerevisiae (Ramos and Rodriguez-Navarro 1986). A low-affinity mode of transport with a Km in the millimolar range, observed in cells cultured without K + limitation, and a high-affinity transport with a Km in the micromolar range observed in either K + -starved cells or cells growing in the presence of Na + . Full activity of the high-affinity K + transport is usually observed after growing the cells without K + limitation in minimal medium and then starving the cells during 4-5 h in the same medium lacking added K + (i.e., arginine phosphate medium; Rodriguez-Navarro and Ramos 1984). Active K + uptake is mediated by two membrane transporters, Trk1 and Trk2, Trk1 being the most important (Ko et al. 1990; Ramos et al. 1994). Deletion of both genes leads to a growth inhibition at low K + concentrations, hyperpolarization of plasma membrane, and observation of residual ectopic potassium transport (Madrid et al. 1998;. 
Those phenotypes appear to be due mainly to TRK1 deletion as the effect of TRK2 absence is almost negligible in most experimental conditions (Madrid et al. 1998). Two-dimensional (2-D) gel-based comparative proteomics analyses have been widely used to characterize yeast strains (Usaite et al. 2008;Karhumaa et al. 2009), growth phase (Bruckmann et al. 2009;Cheng et al. 2009;Massoni et al. 2009), or stress responses (Braconi et al. 2009). In our laboratory, we previously focused our proteomic analysis on the double mutant (DM) trk1,trk2 mutant growing without potassium limitation in exponential and stationary phase (Curto et al. 2010). It was observed that there were almost no differences between wild-type and DM strains at the exponential phase of growth. However, significant differences related mainly to glycolytic enzymes were found at stationary phase. In this study, a similar kind of analysis was used to characterize the same wild-type and DM trk1,trk2 in the extreme condition of potassium starvation. Statistically significant differences were observed in the protein 2-D profile, corresponding both to the mutations and/or potassium starvation. Spot intensity values were subjected to uni-and multivariant statistical analyses and a clustering test. Major and variable spots were mass spectrometry (MS) analyzed, and 73 protein species, corresponding to 49 unique gene products were identified. We conclude that potassium starvation is a very stressful condition to study potassium homeostasis, especially in the case of double trk1,2 mutant strain. 2-D protein profiles Strains were grown in translucent YNB-F media with no limiting potassium (50 mM KCl) until they reach an OD 600nm of around 1.9 in order to obtain high cell biomass but still in exponential phase . Parental strain BY4741 and DM trk1,trk2 were then transferred to medium without added potassium, samples of cells were taken during 5 h and proteins were extracted. Protein yield obtained after extraction plus TCA-acetone precipitation was evaluated. Cells of both strains kept full viable as monitored by colony forming units counts (9.3 × 10 6 ± 0.2 and 9.5 × 10 6 ± 0.3 at time zero in wild-type and DM strains, respectively, and 2.1 × 10 7 ± 0.3 and 1.1 × 10 7 ± 0.4 after 5-h starvation), but total protein yield decreased in function of starvation time from 37.55 to 34.20 μg eq. serum albumin bovine mg −1 dry weight for the parental strain and from 31.08 to 5.28 μg eq. SAB mg −1 dry weight for the mutant strain were obtained (Table 1). After 2-D electrophoresis and Coomassie staining of the gels, 106-271 consistent spots (present in the three replicates) were resolved in the 5-8 pH range and 10-90 kDa molecular weight (Mr) range (Fig. 1). A 2-D gel image analysis was performed using PDQuest software v 8.01 and, at all times studied, quantitative and qualitative differences in spot intensity were observed between parental and mutant strains. During potassium starvation and comparing to the parental strain at time 0, we observed an increased number of missing spots; thus, 33 spots were missed after 5-h starvation in the case of the parental strain and 162 spots in the case of the trk1,2 DM (Table 1). Interestingly, new additional spots were not found either in the wild type or in the mutant during the starvation process. Quantitatively, the same behavior was observed in both strains, most spots intensity tends to decrease and just few of them were increased during starvation. 
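A minimal sketch of the spot bookkeeping described above (consistent spots defined as those present in all three replicate gels, and "missing" spots counted relative to time 0) is given below; the spot identifiers are invented for illustration, since the actual matching was performed in PDQuest.

```python
# Hypothetical spot IDs detected in each of three replicate gels at two time points
reps_t0 = [{"s1", "s2", "s3", "s4"}, {"s1", "s2", "s3"}, {"s1", "s2", "s3", "s5"}]
reps_t300 = [{"s1", "s2"}, {"s1", "s2"}, {"s1", "s2", "s4"}]

consistent_t0 = set.intersection(*reps_t0)      # spots present in all three replicates
consistent_t300 = set.intersection(*reps_t300)

missing = consistent_t0 - consistent_t300       # spots lost during starvation
new = consistent_t300 - consistent_t0           # spots appearing during starvation
print(len(consistent_t0), len(consistent_t300), sorted(missing), sorted(new))
```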
After applying a two-way analysis of variance (ANOVA), 231 spots were assumed to be differentially accumulated between strains and 209 spots between the different sampling times (Fig. 2). A total of 61 spots were variable between strains, but not between sampling times, reflecting those proteins not affected by the experimental environment but by the mutation. On the other hand, 39 spots showed differences only between the different sampling times, reflecting those common responses to the treatment in wild-type and mutant strains. Most of the gel spots (170) presented significant differences between strains and time, showing that the experimental environment affects both wild-type and mutant strains, but in a different way. Moreover, 20 spots were invariable across strains and sampling times (Table S1). To obtain further information, different and additional statistical approaches were performed. First of all, a data reduction to the whole dataset by means of principal component analysis (PCA) analysis was applied. Of the potential 290 principal components (PC) extracted, the first six PCs accounted for 94.18% of the biological variability ( Table 2). The use of these components in a 2-D represen- tation (plotting PC1 and PC2) allowed the effective separation of samples into the different strains and sampling times (Fig. 3). In the DM, duration from 0 to 60 min were closely grouped in both plots, indicating similarity in the spot map. The correlation of each particular spot to PC 1 and 2 was determined from the loading matrix generated during the PCA (Tables S2 and S3). The five spots showing the highest correlation with each PC were determined. Of these spots, five (7, 9, 21, 62, and 68) were identified after MS analysis, corresponding to a co-chaperone protein, dihydrooronate dehydrogenase, glutaredoxin-1, S-adenosylmethyonine synthase, and ubiquitin-conjugating enzyme, respectively (see Discussion). Neural network-based analysis was performed employing Kohonen's Self Organizing Maps (SOM), known to be a powerful multivariate analysis method, with a mathematical basis completely different than PCA. Wild-type and DM strains were completely differentiated (Fig. 3). In the case of the wild type, samples coming from time 180 min (WT180) and 300 min (WT300) were in the same node, being this node closer to WT60 while WT30 and WT0 were more distant. In the case of DM, nodes grouping samples DM0, DM30, and DM60 were closer together while DM180 and DM300 were clearly more distant. MS analysis and protein identification Seventy-three variable spots from the most abundant were excised and analyzed by MALDI-TOF-TOF MS after trypsin digestion. Results obtained were compared to the UniProt database allowing the identification of 49 unique proteins. Resulting proteins were classified in functional categories and are presented in Table 3; complementary information including indications on location, number of molecules per cell, and the identified peptide sequence is available in Table S4. A good correlation between theoretical and experimental pI was obtained whereas some differences in Mr were observed. For some spots, a higher Mr was observed maybe due to the absence of mature form in UniProt database. We identified proteins with double experimental Mr, including spots 63 (superoxide dismutase, reported as homodimer), spot 16 (FK506 binding protein 1), and spot 38 (hydroxyacylglutathione hydrolase). 
On the other hand, there were spots with a much lower Mr experimental than theoretical, probably due to degraded proteins. This was the case for the spot 30 (glyceraldehyde-3-phosphate dehydrogenase 3), spot 70 (protein YLR301W), spot 73 (uracil phosphoribosyltransferase), spot 39 (inhibitory regulator protein Bud2), and spot 59 (Sok2). Finally, there was a group of spots corresponding to the same proteins but with different Mr, possibly due to the presence of isoforms. We identified in this group the glyceraldehyde-3-phosphate dehydrogenase 2 (spots 22-24), the glyceraldehyde-3-phosphate dehydrogenase 3 (spots 25-30), the hexokinase 2 (spots 33 and 34), the phosphoglycerate kinase (spots 47-56), and the uracil phosphoribosyltransferase (spots 71 and 72). Within the proteins identified, we found proteins related to metabolism (28 proteins), mainly involved in the glycolytic pathway (nine proteins), to stress response (nine proteins), protein fate (seven proteins), signaling (two proteins), or other functions (three proteins). The different pathways related to energy production were not altered equally during potassium starvation in the two yeast strains. The level of the nine proteins of the glycolysis tend to decrease radically in the mutant trk1,2 while their level remained constant in the parental strain, except in the case of two proteins (Hxk2 and Tdh2). On the contrary, proteins of the pentose phosphate pathway (Sol3) and the methylglyoxal pathway (Glo2) remained present during the starvation process although a tendency to smoothly decrease was observed in the methylglyoxal pathway in the case of the mutant strain. Interestingly, some biosynthetic pathways seem to remain active during potassium starvation, like the pyrimidine pathway (Ura1 and Fur1) that keeps a relative good level of proteins, and the amino acid biosynthesis-related proteins (Met22, Sam4, Sam1, and Lys9) that present a higher amount of proteins in the mutant strains during potassium starvation. Proteins related with DNA-repair system are found in both strain (Ubc2 and Mms2). In the case of the mutant, they were present with a two-fold factor increase. Proteins related with stress, especially with oxidative stress, presented the same behavior, Sod1 and Ccp1 were expressed in both strains, being more important the amount in the mutant strain. Other proteins such as Trx2 and Grx2 were only detectable in the mutant strain after 30-60 min of potassium starvation. Finally, other proteins showed a pattern difficult to explain with no homogenous behavior within a same functional category or between spots corresponding to the same single proteins (Hri1). In order to better understand the behavior of the different groups of proteins identified, differential spots were clustered employing Ward's minimum variance method over a Pearson distance-based dissimilarity matrix (Fig. 4), which has proved to be an accurate procedure for proteomics data (Meunier et al. 2007). Spots in the same tree were compared, employing a clustering method and a representation of quantitative variations between strains and times. While wild-type samples showed bigger distance between time 0 and the several starvation times, DM samples were closely grouped from 0 to 60 min suggesting a slower response of the mutant to the potassium starvation process. These results are in agreement with the PCA and SOM analyses shown above. 
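The hierarchical clustering of differential spots (Ward's minimum-variance linkage over a Pearson-distance dissimilarity matrix) can be reproduced with SciPy along the following lines. The input file is hypothetical, and cutting the tree into five groups simply mirrors the five major clusters discussed later; both are assumptions made for illustration.

```python
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical matrix: rows = differential spots, columns = mean normalized
# volume per strain/time condition (WT0, WT30, ..., DM300).
profiles = pd.read_csv("differential_spot_profiles.csv", index_col=0)

# Pearson-correlation-based dissimilarity between spot profiles.
dissim = 1.0 - profiles.T.corr(method="pearson")

# Ward's minimum-variance clustering on the condensed distance matrix.
Z = linkage(squareform(dissim.values, checks=False), method="ward")

# Cut the dendrogram into five groups of co-varying spots.
groups = fcluster(Z, t=5, criterion="maxclust")
print(pd.Series(groups, index=profiles.index).value_counts().sort_index())
```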
Evaluation of hexokinase and alcohol dehydrogenase activity in wild-type and trk1,2 cells upon potassium starvation To complement the data obtained by our proteomic analysis, we selected two examples of glycolytic activities corresponding to proteins identified above, the first one is hexokinase, which is an example of activity likely to decrease (due to the loss of Hxk2), and the second is alcohol dehydrogenase (Adh), which is a possible example of unaltered activity. As observed in Figure 5, the hexokinase activity versus glucose or fructose is very similar for wild-type and trk1,2 cells in the presence of potassium. Interestingly, deprivation of potassium did not decrease the amount of glucose phosphorylation activity, but it resulted in a relative increase in the preference for fructose in wild-type and, even more markedly, in Trk-deficient cells. Although apparently surprising, this result is compatible with the disappearance of Hxk2. There are three glucosephosphorylating enzymes in yeast: Hxk1, Hxk2, and Glk1. They differ in their V max for fructose and glucose, being the fructose/glucose ratio of 3 for Hxk1 and 1.2 for Hxk2 (Barnar 1975). Glk1 barely phosphorylates fructose (Lobo and Maitra 1977). We observe that after 240 min, there is an increase in the fructose/glucose phosphorylation ratio from 0.60 ± 0.03 to 0.73 ± 0.06 in wild-type cells, and 0.52 ± 0.04 to 0.82 ± 0.09 in the trk1,2 mutant (in which the disappearance of Hxk2 is more prominent; see Table S4). The transcriptomic profile shown in Figure 5 indicates that HXK1, encoding the most effective fructose-phosphorylating isoform, is greatly induced by potassium starvation, whereas HXK2 is not. The emergence of Hxk1 and the disappearance of Hxk2 would explain the increase in the ratio of phosphorylationfructose/glucose. This increase is probably less drastic than expected because Glk1, whose activity levels are normally lower, is also significantly induced. In contrast, the amount of alcohol dehydrogenase activity is not decreased by lack of potassium (see Fig. 5), in agreement with the stability of Adh1, the major Adh isoform in exponentially growing yeast (Leskovac et al. 2002). Discussion Saccharomyces cerevisiae is endowed with two genes coding for the main plasma membrane potassium transporters. These proteins are essential for the cell to grow at limiting potassium concentrations and mutants lacking the corresponding genes (TRK1 and TRK2) show defective growth and transport at low potassium. However, at high potassium (in the mM range) other no specific systems can transport enough amounts of the cation, thus allowing mutant cells to grow at rates similar to those in the wild type . A first proteomic study of trk1,2 mutants was recently published (Curto et al. 2010). It is important to notice that in that paper both wild-type and mutant cells were grown under nonlimiting potassium concentrations (50 mM KCl) and the authors concluded that most of the differences observed between parental and DM strains corresponded to proteins related to glycolysis and redox-homeostasis enzymes. Considering that full activity of the so-called highaffinity/high-velocity potassium transport process dependent upon the Trk1/2 system is usually observed after obtaining K + -starved cells by incubation in media without added potassium during 4-5 h (Rodríguez-Navarro and Ramos 1984;Bertl et al. 2003), we decided to study changes in the proteome of the wild type and DM during the starva- tion process. 
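Returning to the hexokinase measurements, the argument that losing Hxk2 while inducing Hxk1 and Glk1 should raise the fructose/glucose phosphorylation ratio can be made concrete with a toy mixing calculation. The isozyme ratios (about 3 for Hxk1, 1.2 for Hxk2, near 0 for Glk1) are the values cited in the text; the activity fractions below are invented solely to illustrate the direction of the shift, and the absolute numbers are not comparable to the measured 0.5-0.8 ratios, which were obtained at fixed substrate concentrations rather than at Vmax.

```python
# Fructose/glucose Vmax ratios of the three glucose-phosphorylating isozymes
# (values cited in the text).
ISOZYME_RATIO = {"Hxk1": 3.0, "Hxk2": 1.2, "Glk1": 0.0}

def mixture_ratio(activity_fractions):
    """Activity-weighted fructose/glucose ratio of an isozyme mixture."""
    return sum(frac * ISOZYME_RATIO[enzyme]
               for enzyme, frac in activity_fractions.items())

# Invented fractions of total glucose-phosphorylating activity (illustrative only).
before_starvation = {"Hxk1": 0.05, "Hxk2": 0.90, "Glk1": 0.05}  # Hxk2 dominates
after_starvation = {"Hxk1": 0.45, "Hxk2": 0.15, "Glk1": 0.40}   # Hxk2 lost, Hxk1/Glk1 induced

print("before:", round(mixture_ratio(before_starvation), 2))
print("after: ", round(mixture_ratio(after_starvation), 2))
# The ratio rises when Hxk2 is lost, but the rise is damped because the
# concurrently induced Glk1 barely phosphorylates fructose.
```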
The most important observation was the extraordinary decay in protein content and number of spots, observed during the 5 h of starvation with special relevance in the case of the mutant strain. It has to be considered that in 2D gels, only a fraction of the total proteome can be observed, mainly proteins that are the most abundant such as the housekeeping proteins (Buxbaum 2010). Previous work reported that K + deprivation during 24 h produced, in S. cerevisiae, a decrease in cell viability by inducing a programmed cell death process (Lauff and Santa-María 2010). Although this is a very important observation, in our conditions and after 5-h starvation we observed high decrease in protein content but no changes in cell viability. From these results, we conclude that potassium starvation is a very stressing process for the cells and it looks like 5-h incubation without potassium is excessive since it provokes a very important and general decay in many cellular processes. As mentioned above, 4-5 h K + -starvation is a general method used to induce full activity of the Trk1/2 system. However, we have recently shown that in the newly designed medium YNB translucent , used also in this work, adaptation to the highaffinity/high-velocity state is much faster. In fact, the higher affinity for Rb + (K + ) is observed after 30-min starvation and higher V max is reached after 2-h starvation. Therefore, we propose that this would be a much more rational way to obtain yeast cells expressing the high-affinity/high-velocity mode of transport. PCA analysis allowed a clear classification of samples. The plot of PC1 (50.7%) and PC2 (22.2%) shows differences between mutant and wild type and also between the different sampling times. This supports the above-indicated hypothesis in terms of the differences found between strains behavior during the adaptation to the lack of potassium. This idea was confirmed after the application of a SOM neural network, a methodology for the classification of the samples more powerful than PCA analyses. Wild-type strain adapted quicker to the new conditions, since samples taken at 0, 30, and 60 min were already separated. Adaptation seems to be already completed after 3 h since samples taken at 180 and 300 min group together. On the other hand, the DM needed more time to try to adapt to the new environmental condition, that is, samples at time 0, 30, and 60 min grouped together. In conclusion, the wild-type strain adapted and got stabilized faster to the stress condition while the mutant seems to have problems to sense or adapt to the absence of potassium. It is relevant that in the 2D gels we have identified most of the enzymes involved in glycolysis. During the starvation process, most of them enzymes were present in the wild type but in the mutant there was a fast decay. Our biochemical results on hexokinase activities are in agreement with this observation. The strong reduction in Hxk2 protein levels during starvation was not completely reflected in drastic changes in hexokinase activity. This apparent discrepancy could be explained by the induction of HXK1 and GLK1. The corresponding Hxk1 and Glk1 proteins, which would keep the capacity to phosphorylate glucose, are less abundant than Hxk2 and we did not identify them by proteomics; however, our transcriptomic results support that possibility. 
On the other hand enzymes involved in two other important energetic pathways were detected: pentose phosphate and methylglyoxal pathways; in general, proteins from both pathways remained present during the starvation process. In fact, the transcriptomic profile of GLO1 and GLO2, the two genes involved in detoxification of methylglyoxal, shows induction during starvation (not shown). It is tempting to speculate that the glycolysis pathway is more sensitive to low K + than the alternative pathways and for that reason it is more inhibited in the mutant. It has been reported that, on the one hand, potassium plays a crucial role in the activation of the glycolytic enzyme pyruvate kinase (see Page and Di Cera (2006) for a review) and, on the other hand, the mutant shows defective potassium transport. These facts may be related to the higher sensitivity of the glycolysis in the mutant. We have mentioned the importance of the stress induced by potassium starvation, especially in mutant cells. The fact that two ubiquitin enzymes related with DNA-repair system (Ubc2 and Mms2) were identified along the 5 h of the experiment is in agreement with this observation (Broomfield et al. 1998;Game and Chernikova 2009). Even more in the mutant, the amount of the two ubiquitin proteins was not only present, but significantly increased during potassium starvation. On the other hand, some important pathways seem to be unaffected by starvation. Two examples are the metabolism of some amino acids (methionine, lysine) and bases (pyrimidine ribonucleotides). The application of two different algorithms for sample classification, one of them based on recent algorithms based on neural networks, lead to the obtaining of complementary results increasing the discriminatory power of this analysis (Valledor and Jorrín 2011). Cluster analysis allowed a distance-based classification of the samples and spots reinforcing that idea. Five major groups of spots could be distinguished in the plot being relevant that most of the glycolytic proteins appear in groups I and II and show a completely different behavior in wild-type and mutant strain. It is conceivable to pose the question about how TRK mutation affects these metabolic processes. We have no definitive answer to this question but our results indicating a defective metabolic adaptation to the lack of potassium in the mutant may be taken as a clue on the relevant role of potassium fluxes and/or levels triggering adaptation. Unpublished results of our group show that wild-type and trk1,2 cells grown under nonlimiting KCl are able to adapt and reach a new internal K + stationary state when suspended in lower K + concentrations, requiring mutant cells higher external K + to keep similar internal amounts of the cation. In conclusion, the DM trk1,2 is still able to sense a decrease in external potassium but lacks the mechanism to properly adapt to this stress. A similar be-havior may explain the defective metabolic adaptation during starvation. In summary, the decrease in protein content during potassium starvation experiments lead to a global decrease of the basic cellular functions such as the cell energy production pathways, with a radical decrease of the glycolytic proteins that was more evident in the mutant. In the context of a general decrease of proteins, it is relevant that some cellular processes such as the pentose phosphate and methylglyoxal pathways were kept. 
These results indicate that conditions commonly used in the past to characterize adaptation to potassium (4-5 h in the absence of the cation) are too stressful for the cells and this should be considered in future studies on potassium homeostasis. In fact, the study of the proteome under less extreme potassium limitation is under way. This would allow to analyze differences between parental and mutant strains under more physiological conditions. 2-DE Immobilized pH gradients (IPG) strips (17 cm, 5-8 pH linear gradient; Bio-Rad) were passively rehydrated for 2 h with 500 μg of protein in 300 μL of IEF solubilization buffer (7 M urea; 2 M thiourea; 4% [w/v] CHAPS; 0.5% [v/v] IPG buffer 5-8, 20 mM DTT; and 0.01% [w/v] bromophenol blue). The strips were loaded onto a Bio-Rad Protean IEF Cell System and proteins were electrofocused at 20 • C with a first step of a gradual increase in the voltage (50-8000 V) and then reaching 60,000 Vh. Strips were immediately equilibrated according to Görg et al. (1987). Second dimension SDS-PAGE was performed on 12% polyacrylamide gels using Protean Dodeca Cell System (Bio-Rad). Gels were run first at 30 mA per gel for 15 min and then at 50 mA per gel until the dye front reached the bottom of the gel. Staining and image analysis Gels were stained twice with CBB G-250 (Bio-Rad) for 20 h following the method described by Mathesius et al. (2001). Images were acquired with a GS-800 calibrated densitometer (Bio-Rad) and analyzed with PDQuest 8.0.1 software (Bio-Rad) using 10-fold over background as a minimum criterion for presence/absence for the guided protein spot detection method. This criterion includes almost all spots of the gels and some staining artifacts and noise. A spot-by-spot visual validation of automated analysis was done thereafter to increase the reliability of the matching. Experimental pI was determined using a 5-8 linear scale over the total length of the IPG strip. Mr values were calculated by mobility comparisons with protein standards markers (SDS molecular weight standards, Broad range, Bio-Rad) run in a separate lane in the gel. Statistics Statistical analysis was performed following the recommendations proposed by Valledor and Jorrín (2011). In brief, spot volumes were preprocessed before statistical analyses. Spot volumes were first normalized as a proportion of the total spot intensities per gel (spot volume × 10 5 / gel spot volumes), and then the normalized volumes were log transformed to reduce the volume-variance dependency. Spot values passed the Levene's homoscedasticity and Kolmogorov-Smirnov normality tests. Differentially abundant spots were defined after applying a two-way ANOVA considering strain and sampling time as factors. False discovery rate (FDR) q-values were calculated with FDRtool package. Cut-off qvalue was set to allow less than one false positive in this study. The joint spot analysis was performed following three different multivariate approaches: PCA (centered normalized spot values, unrotated solution), heat map clustering (employing Euclidean distance and Ward's aggregation method), and a neural-network based SOM (centered values, 4 × 4, hexagonal topology). Three biological and one technical replicates were done for each time and strain. All of the statistical analyses were performed in R Environment v 2.12 (R Development Core Team 2011) employing its core functions and the packages gplots2, Kohonen, and FDRtool. 
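A minimal Python counterpart of the preprocessing and univariate testing described above (total-volume normalization, log transform, per-spot two-way ANOVA, multiple-testing correction) might look as follows. The original analysis was performed in R with FDRtool; Benjamini-Hochberg correction is used here only as a stand-in for its q-values, and the file and column names are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

# Hypothetical long-format table: spot_id, strain, time_min, replicate, volume,
# and the total spot volume of the gel each measurement came from.
df = pd.read_csv("spot_volumes_long.csv")

# Normalize each spot volume to its gel total, then log-transform to reduce
# the volume-variance dependency.
df["norm"] = df["volume"] * 1e5 / df["gel_total_volume"]
df["log_norm"] = np.log(df["norm"])

pvals = {}
for spot, sub in df.groupby("spot_id"):
    # Two-way ANOVA with strain and sampling time as factors (plus interaction).
    fit = smf.ols("log_norm ~ C(strain) * C(time_min)", data=sub).fit()
    anova = sm.stats.anova_lm(fit, typ=2)
    pvals[spot] = anova.loc["C(strain)", "PR(>F)"]

spot_ids = list(pvals)
reject, qvals, _, _ = multipletests([pvals[s] for s in spot_ids], method="fdr_bh")
print(f"{int(reject.sum())} spots differ between strains after FDR correction")
```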
MS analysis and protein identification

Spots were manually excised and transferred to 96-well plates. Spots were digested with bovine trypsin (sequencing grade, Roche Molecular Biochemicals) using an Ettan™ digester station (GE Healthcare Life Sciences). The digestion protocol used was that of Shevchenko et al. (1996), with minor variations. Briefly, spots were washed twice with water, destained by two 10-min incubations with 100% acetonitrile, and dried in vacuum (Savant SpeedVac) for 30 min. The samples were then reduced with 10 mM dithiothreitol in 25 mM ammonium bicarbonate for 30 min at 56 °C and subsequently alkylated with 55 mM iodoacetamide in 25 mM ammonium bicarbonate for 15 min in the dark. Finally, samples were digested with 12 μL of trypsin (12.5 ng/μL) in 25 mM ammonium bicarbonate (pH 8.5) overnight at 37 °C. After digestion, the supernatant was collected and 1 μL was spotted onto a MALDI target plate using the dry-droplet method with 0.4 μL of a 3 mg/mL α-cyano-4-hydroxy-trans-cinnamic acid matrix in 50% acetonitrile (ACN) and 0.1% trifluoroacetic acid (TFA). Samples were analyzed in a 4800 Proteomics Analyzer MALDI-TOF/TOF mass spectrometer (Applied Biosystems, Framingham, MA) in the m/z range 850-4000, with an accelerating voltage of 20 kV, in reflectron mode and with a delayed extraction set to 120 nsec. All MS spectra were internally calibrated with peptides from trypsin autolysis. MALDI-TOF/TOF MS analysis produces peptide mass fingerprints, and the peptides observed with a signal-to-noise ratio greater than 20 were collated and represented as a list of monoisotopic molecular weights. Proteins ambiguously identified by peptide mass fingerprints were subjected to MS-MS sequencing analysis: from the MS spectra, suitable precursors were selected for MS-MS analysis with collision-induced dissociation (CID; atmospheric gas was used) in 1 kV ion reflector mode and with a precursor mass window of ± 5 Da. The plate model and default calibration were optimized for processing the MS-MS spectra. For protein identification, the UniProt Knowledgebase Release 14.6 (UniProtKB/Swiss-Prot Release 56.6 of 16 December 2008, UniProtKB/TrEMBL Release 39.6 of 16 December 2008) was searched using the MASCOT search engine v.2.1 (Matrix Science; http://www.matrixscience.com) through the Global Protein Server Explorer software v3.6 from Applied Biosystems. The following parameters were allowed: taxonomy restricted to S. cerevisiae, one missed cleavage, 50-100 and 50-100 ppm mass tolerance for peptide mass fingerprinting (PMF) and MS-MS searches, respectively, 0.3 Da tolerance for MS-MS fragments, carbamidomethylation of cysteine as a fixed modification, and methionine oxidation as a variable modification. The parameters for the combined search (peptide mass fingerprint plus MS-MS spectra) were the same as described above. For all identified proteins, the probability scores were greater than the score fixed by MASCOT as significant, with a P-value less than 0.05.

Enzymatic activity determinations

Cultures were grown as above, except that cells were washed in K+-free translucent medium instead of milliQ water. Whole-cell lysates (25 mL of culture) were prepared by resuspending the cells in 500 μL of homogenization buffer (20 mM imidazole, pH 7.0). One volume of 0.5-mm zirconia/silica beads (Biospec Products, Inc.)
was added and cells were broken at 4 °C by vigorous shaking in a FastPrep-24 cell breaker (MP Biomedicals) five times (30 sec each, setting 5), with 1-min intervals on ice. After sedimentation at 1000 × g for 15 min at 4 °C, the cleared lysate was recovered and the protein concentration was determined by the Bradford assay. Hexokinase activity was determined as described in Gancedo et al. (1977), adding either 10 mM glucose or 25 mM fructose as substrate. Alcohol dehydrogenase activity was determined essentially as described in Ganzhorn et al. (1987) using 0.5 mM NAD+ and 100 mM ethanol. All enzymatic activity measurements were performed in 96-well microplates with a final volume of 300 μL. The reactions were monitored by following the changes in absorbance at 340 nm using a microplate-based UV spectrophotometer (Multiskan Ascent, ThermoLabsystems).

Supporting Information

Additional Supporting Information may be found online on Wiley Online Library.
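The hexokinase and alcohol dehydrogenase assays described above follow NAD(H) turnover at 340 nm, so converting the measured absorbance slopes into specific activities is a one-line application of the Beer-Lambert law. In the sketch below the NADH extinction coefficient is the standard 6220 M^-1 cm^-1, while the effective path length of a 300 μL well, the example slope and the protein amount are assumptions for illustration.

```python
EPS_NADH = 6220.0      # M^-1 cm^-1, molar absorptivity of NADH at 340 nm
PATH_LENGTH_CM = 0.8   # assumed effective path length of 300 uL in a 96-well plate
WELL_VOLUME_L = 300e-6

def specific_activity(slope_abs_per_min, protein_mg):
    """Specific activity in U/mg (umol of NAD(H) converted per min per mg protein)."""
    rate_molar = slope_abs_per_min / (EPS_NADH * PATH_LENGTH_CM)  # mol L^-1 min^-1
    umol_per_min = rate_molar * WELL_VOLUME_L * 1e6
    return umol_per_min / protein_mg

# Example: an ADH assay in which A340 rises as NAD+ is reduced to NADH.
print(f"{specific_activity(slope_abs_per_min=0.12, protein_mg=0.05):.3f} U/mg")
```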
Probing gluon polarization with pi0's in longitudinally polarized proton collisions at the RHIC-PHENIX experiment

This report presents the double helicity asymmetry in inclusive $\pi^0$ production in polarized proton-proton collisions at a center-of-mass energy ($\sqrt{s}$) of 200 GeV. The data were collected with the PHENIX detector at the Relativistic Heavy Ion Collider (RHIC) during the 2004 run. The data are compared to a next-to-leading order perturbative quantum chromodynamic (NLO pQCD) calculation.

Introduction

Polarized lepton-nucleon deep inelastic scattering (DIS) experiments over the past 20 years revealed that only ∼25% of the proton spin is carried by the quark spin. Therefore the gluon spin and orbital angular momentum must contribute to the rest of the proton spin. In polarized proton-proton collisions one can explore the gluon polarization directly using processes in which gluons participate. One of the promising probes is to measure the double longitudinal spin asymmetry (A_LL) in high-p_T particle production. The first measurement of A_LL in π0 production at RHIC during the 2003 run (run-3) has been published 1. The present report shows the latest results of π0 A_LL for the range of 1-5 GeV/c in transverse momentum (p_T) and from −0.35 to 0.35 in pseudorapidity (η), obtained during the 2004 run (run-4). A_LL is defined by the following formula,

A_LL = (σ_++ − σ_+−) / (σ_++ + σ_+−),  (1)

where σ is the cross section of the process of interest and ++ (+−) denotes that the variable is obtained in collisions with same (opposite) helicity beams. Taking into account the beam polarizations and the luminosity variations between the two possible spin orientations, equation 1 becomes

A_LL = (1 / (P_B1 P_B2)) × (N_++ − R N_+−) / (N_++ + R N_+−),  R = L_++ / L_+−,  (2)

where P_B1 and P_B2 are the beam polarizations, N is the yield (of π0 in this report), L is the integrated luminosity and R is what we call the relative luminosity. These P_B1(B2), N, and R were measured in the experiment.

Experimental setup

RHIC was operated with both proton beams polarized longitudinally at √s = 200 GeV. The machine performance in run-4 is compared briefly to run-3 in Table 1. For the double helicity asymmetry, the statistical figure of merit is expressed by P^2_B1 P^2_B2 L. In spite of the short run period in run-4, the figure of merit is larger due to the higher beam polarization. The beam polarization was measured by the proton-Carbon CNI polarimeter 2 constructed near IP12, away from PHENIX, where the systematic error of the beam polarization was 32%. This translates into a scaling error of 65% on the double helicity asymmetry. Since the stable direction of the beam polarization in RHIC is vertical, the beam polarization must be rotated before and after the collision point to obtain longitudinal polarization. The PHENIX local polarimeter 1 (an overview of PHENIX is found in 3) confirmed that the direction of the proton spin at the PHENIX collision point was more than 99% longitudinal. The relative luminosity, R, was evaluated using the beam-beam counter and the zero-degree calorimeter in PHENIX to δR = 5.8 × 10^−4, which corresponds to δA_LL = 1.8 × 10^−3 for a beam polarization of 40%.
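Equation 2 reduces to a few lines of arithmetic once the yields, relative luminosity and polarizations are known. The sketch below also propagates simple Poisson uncertainties on the two helicity-sorted yields; all numerical inputs are placeholders rather than the actual run-4 values, and background correction and systematic errors are ignored.

```python
import numpy as np

def double_helicity_asymmetry(n_same, n_opp, rel_lumi, pol_b1, pol_b2):
    """A_LL = (N++ - R*N+-) / (N++ + R*N+-) / (P_B1*P_B2), with sqrt(N) errors."""
    num = n_same - rel_lumi * n_opp
    den = n_same + rel_lumi * n_opp
    a_ll = num / den / (pol_b1 * pol_b2)
    # Propagate Poisson (sqrt(N)) uncertainties on the two yields.
    d_a = (2.0 * rel_lumi / den**2) * np.sqrt(n_opp**2 * n_same + n_same**2 * n_opp)
    return a_ll, d_a / (pol_b1 * pol_b2)

# Placeholder yields, relative luminosity and beam polarizations.
a_ll, d_a = double_helicity_asymmetry(n_same=1.002e6, n_opp=1.000e6,
                                      rel_lumi=1.0005, pol_b1=0.45, pol_b2=0.45)
print(f"A_LL = {a_ll:.4f} +/- {d_a:.4f}")
```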
π0 A_LL results

In the analysis, we did not subtract the background under the π0 peak directly. Instead, we calculated A_LL in the two-photon invariant-mass range of ±25 MeV around the π0 peak (A_LL^raw), which we call the "signal" region, and then corrected it by the A_LL of the background (A_LL^BG) to extract the A_LL of pure π0 (A_LL^π0) using

A_LL^π0 = (A_LL^raw − r A_LL^BG) / (1 − r),

where r is the fraction of the background in the "signal" range and is obtained by fitting. The A_LL of the background is evaluated using the mass range near the π0 peak. Table 2 shows the statistics of π0's within the "signal" mass window and the fraction of BG under the π0 peak. The systematic error of A_LL that is uncorrelated between bunches or fills can be evaluated by the "bunch shuffling" technique 1. We found this kind of systematic uncertainty to be negligible compared to the statistical error. The systematic error correlated over all bunches or fills mainly comes from the uncertainties on the beam polarization and the relative luminosity described above. Figure 1 and Table 3 show the run-4 results of π0 A_LL, as well as those from run-3 1 and their combination, with statistical errors. Two theory curves 4 are also drawn in the figure. The confidence level between the theory curves and our combined run-3 and run-4 data was calculated to be 21-24% for the GRSV-standard model and 0.0-6% for the GRSV-maximum model, taking into account the polarization scale uncertainty. Our results favor the GRSV-standard model.

Future plan

RHIC plans to operate with higher luminosity and polarization in the future. Figure 2 shows the expected π0 A_LL in run-5, whose proton run will start in February 2005, as well as in the next long pp run expected in 2006-7. The central values of those points follow the GRSV-standard model. Three pQCD theory curves are also shown in the figure. They are calculated with ∆g = +g (same as GRSV-max in Fig. 1), ∆g = −g and ∆g = 0 at the input scale (Q^2 = 0.4 GeV^2) 5. We can further constrain ∆g in run-5. However, the A_LL of π0 can be approximated by a quadratic function of ∆g/g, and it is hard to determine the sign of ∆g with low-p_T data alone due to the two-fold ambiguity of the quadratic function. One solution to this problem is to measure π0 A_LL in the higher p_T region, where the ambiguity becomes smaller (see the run-7 estimation in Fig. 2). The other way is to combine the π0 results with other channels, for example the A_LL of direct photons, which is a powerful probe and will be measured in future runs with higher statistics.

Summary

We reported the results of A_LL in π0 production in polarized proton-proton collisions at √s = 200 GeV, measured in 2004 with the PHENIX detector at RHIC. π0 A_LL was presented for 1-5 GeV/c in p_T and |η| < 0.35. The data were compared to NLO pQCD calculations and favor the GRSV-standard model for the gluon polarization. Expectations for future runs were also discussed.

Figure 1. π0 A_LL as a function of p_T. Figure 2. π0 A_LL as a function of p_T. Table 2. The statistics of π0's and the BG fraction. Table 3. π0 A_LL in run-4, run-3 and their combination.
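The background correction and the run-3/run-4 combination quoted above amount to the small helpers sketched below; the numbers in the example are placeholders, not the published asymmetries.

```python
import numpy as np

def background_corrected(a_raw, a_bg, r):
    """A_LL of pure pi0 from the 'signal'-window asymmetry, the background
    asymmetry and the fitted background fraction r under the peak."""
    return (a_raw - r * a_bg) / (1.0 - r)

def combine(values, errors):
    """Inverse-variance-weighted average of independent measurements,
    e.g. the run-3 and run-4 points in the same pT bin."""
    w = 1.0 / np.square(np.asarray(errors, dtype=float))
    mean = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
    return mean, 1.0 / np.sqrt(np.sum(w))

a_run4 = background_corrected(a_raw=-0.005, a_bg=0.010, r=0.15)
print(combine([a_run4, -0.009], [0.008, 0.007]))   # placeholder run-4/run-3 errors
```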
Defects engineering simultaneously enhances activity and recyclability of MOFs in selective hydrogenation of biomass The development of synthetic methodologies towards enhanced performance in biomass conversion is desirable due to the growing energy demand. Here we design two types of Ru impregnated MIL-100-Cr defect engineered metal-organic frameworks (Ru@DEMOFs) by incorporating defective ligands (DLs), aiming at highly efficient catalysts for biomass hydrogenation. Our results show that Ru@DEMOFs simultaneously exhibit boosted recyclability, selectivity and activity with the turnover frequency being about 10 times higher than the reported values of polymer supported Ru towards D-glucose hydrogenation. This work provides in-depth insights into (i) the evolution of various defects in the cationic framework upon DLs incorporation and Ru impregnation, (ii) the special effect of each type of defects on the electron density of Ru nanoparticles and activation of reactants, and (iii) the respective role of defects, confined Ru particles and metal single active sites in the catalytic performance of Ru@DEMOFs for D-glucose selective hydrogenation as well as their synergistic catalytic mechanism. / 57 All chemicals were purchased from commercial suppliers and used without further purification. All dried samples were stored under N2 in a glovebox. Catalysis experiment The 50 g (25 wt%, 1.543 mol/L) D-glucose aqueous solution without any catalyst, in presence of Ru NPs, MOFs and Ru impregnated MOF catalysts (1 g), respectively, was transferred into a 100 mL stainless-steel high-pressure reactor. Before starting reaction, the reactor was purged with H2 to 5.0 MPa, then degas to 1.0 MPa at room temperature, this process was repeated for three times to remove the air. The D-glucose aqueous solution, stirred with predetermined rates (600 or 800 rpm), was heated at the desired temperature (100, 120 or 140 ºC) under 5.0 MPa H2 for a predetermined reaction time (ranging from 90 to 180 minutes), and then cooled to room temperature. Tiny amounts of aliquots were taken out every half an hour during reaction via a dip-tube inserted into the solution to test the activity of these catalysts. The separated reaction solution, obtained after removing the heterogeneous catalysts by centrifugation, was analyzed by HPLC to determine the conversion of D-glucose, selectivity and yield of sorbitol. To test the reusability of these catalysts, the catalysts were separated from the reaction solution, then washed with deionized water and ethanol, respectively, and finally dried in a vacuum oven at 80 ºC. The obtained catalysts after reaction were reused directly for the next run of biomass hydrogenation of D-glucose to form sorbitol. Characterization methods Powder X-ray diffraction (PXRD): PXRD patterns of all samples were recorded on a Rigaku Smartlab (3 KW) equipment with a Ni filter using Cu-Kα radiation (λ = 1.542 Å). The patterns were collected in reflectance of Bragg-Brentano geometry over a range of 2θ = 5 -50º at room temperature. UHV-FTIR spectroscopy and XPS: The ultra-high vacuum Fourier transformed infrared spectroscopy (UHV-FTIRS) and X-ray photoelectron spectroscopy (XPS) measurements were conducted with a sophisticated UHV apparatus combing a state-of-the-art FTIR spectrometer (Bruker Vertex 80v) and a multichamber UHV system (Prevac) [2][3][4] . 
This dedicated apparatus allows performing both IR transmission experiments on nanostructured powders and infrared reflection-absorption spectroscopy (IRRAS) measurements on well-defined model catalysts (single crystals and supported thin films). Both the optical path inside the IR spectrometer and the space between the UHV chamber as well as the spectrometer were evacuated to exclude the ambient molecule adoption, ensuring superior sensitivity and stability. The MOF sample (approximately 200 mg) was first pressed into an inert metal mesh which was mounted on an especially designed sample holder, and then activated in the UHV chamber at 500 K to remove all contaminants. Exposure to carbon monoxide (CO) was achieved using a leak-valve-based directional doser connected to a tube of 2 mm in diameter, which is terminated 3 cm from the sample surface and 50 cm from the hot-cathode ionization gauge. The IR experiments were carried out at temperatures as low as 110 K. All UHV-FTIR spectra were collected with 1024 scans at a resolution of 4 cm -1 in transmission mode, using a spectrum of the clean sample as a background reference. The XPS experiments were carried out using a VG Scienta R4000 electron energy analyzer. The pass energy was fixed at 200 eV for all the measurements. A flood gun was applied to compensate for the charging effects. The binding energies were calibrated to the C1s line at 284.8 eV as a reference. The XP spectra were deconvoluted using the software Casa XPS with a Gaussian-Lorentzian mix function and Shirley background subtraction. D0 represents pristine MIL-100-Cr, while D1a′-c and D2a′-c represent defect engineered MIL-100-Cr incorporated DL1 and DL2, respectively, with different feeding ratios (z) of defective linker (DLx, x = 1: 3,5-pyridinedicarboxylate; x = 2: m-phthalate) to total ligands (TL = DLx + parent linkers), ranging from 5% to 50%; and Ru@D2a before and after catalysis for 12 runs indicate that they have relatively large particle sizes with good crystalline. These results confirm that the cationic framework of MIL-100-Cr has good tolerance to the incorporated DLx (x = 1: 3,5-pyridinedicarboxylate, x = 2: m-phthalate). For both types of DEMOF and Ru@DEMOFs, the fine peaks gradually disappeared and merged into broad bands along with increasing the feeding ratio (z) of DLx to TL (x = 1, z ≥ 30%; x = 2, z ≥ 50%), attributed to the decrease of particle size ( Supplementary Fig. 33) 6 . After Ru impregnation, the presence of broad bands accompanying the disappearance of fine peaks in the PXRD pattern of D2b illustrates that the process of Ru impregnation results in decreases of particle sizes of D2b 7 . Noticeably, all these Ru impregnated MOFs catalysts show overall lower thermal stability than the respective corresponding MOFs supporters, illuminating that the Ru impregnation process has critical effects on the framework of MIL-100-Cr. All TGA curves of D1a′, D2a′, Ru@D1a′ and Ru@D2a′ are quite similar to that of D0, illuminating that they maintain the framework of MIL-100-Cr. As shown in Supplementary Fig. 4a-d, the evolution trends of thermal stability of DEMOFs containing type-A defects, DEMOFs containing type-B defects, Ru NPs impregnated DEMOFs containing type-A defects and Ru NPs impregnated DEMOFs containing type-B defects with feeding ratio (z) ranging from 0% to 10% are consistent with that of feeding ratio (z) ranging from 0% to 50%, respectively (Supplementary Fig. 3). As shown in Supplementary Fig. 6a Fig. 
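The Shirley background subtraction used for the XPS deconvolution mentioned above is an iterative construction in which the background at each binding energy is proportional to the peak area accumulated on the low-binding-energy side. A self-contained sketch of that iteration is given below; the endpoint choice, convergence tolerance and synthetic test spectrum are assumptions, and the commercial Casa XPS implementation may differ in detail.

```python
import numpy as np

def shirley_background(energy, intensity, max_iter=100, tol=1e-6):
    """Iterative Shirley background for an XPS region.

    `energy` is binding energy in ascending order; the first and last points
    define the background endpoints."""
    e = np.asarray(energy, dtype=float)
    y = np.asarray(intensity, dtype=float)
    i_lo, i_hi = y[0], y[-1]              # low-/high-binding-energy endpoints
    bkg = np.full_like(y, i_lo)
    for _ in range(max_iter):
        signal = y - bkg
        # Running (trapezoidal) integral of the background-subtracted signal.
        cum = np.concatenate(([0.0],
                              np.cumsum(0.5 * (signal[1:] + signal[:-1]) * np.diff(e))))
        if cum[-1] <= 0:
            break
        new_bkg = i_lo + (i_hi - i_lo) * cum / cum[-1]
        if np.max(np.abs(new_bkg - bkg)) < tol * max(abs(i_hi - i_lo), 1.0):
            return new_bkg
        bkg = new_bkg
    return bkg

# Synthetic test spectrum: a Gaussian peak sitting on a Shirley-like step.
e = np.linspace(280.0, 292.0, 400)
y = 50 + 40 / (1 + np.exp(-(e - 285.0))) + 300 * np.exp(-0.5 * ((e - 285.5) / 0.8) ** 2)
peak_only = y - shirley_background(e, y)
```

The background-subtracted trace is what would then be fitted with the Gaussian-Lorentzian mix functions named in the methods.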
12 The ratios of the integrated peak areas of the F 1s band to that of C 1s band in the highresolution XPS spectra of Ru@D0 and Ru@D1a-c. The bands of F 1s are centered at the binding energy of ~684.4 eV ( Supplementary Fig. 11), and the ratios of the integral area of the F 1s band to that of C 1s band for the selected catalysts (Ru@D1a-c) are all lower than that of Ru@D0 ( Supplementary Fig. 12 The bands for Cr 3+ and Cr δ+ (δ < 3) cannot be distinguished in the high-resolution XPS spectra of Cr 2p, but the presence of Cr δ+ (δ ≤ 3) nodes could be clearly confirmed by UHV-FTIR spectra. compared to that of D0 and Ru@D0, respectively, don't result in the increase of mesopore, primarily attributed to a certain degree of blocking pores due to local disorder of these samples. / 57 Supplementary The STEM images (Supplementary Fig. 25) and the statistics of particle size distribution of Ru NPs ( Supplementary Fig. 26) show that the size evolution of confined dominate Ru NPs in these DEMOFs with feeding ratio (z) ranging from 0% to 10% is consistent with that of Ru@DEMOFs with higher content of DLx ( Supplementary Fig. 23, 24). NPs in Ru@D1a-c is lower than that in Ru@D2a-c with the same z of DLx, illuminating that the type-A defects can stabilize Ru NPs more efficiently than type-B defects against aggregation during catalytic reaction, mainly due to the stronger anchoring effect between confined Ru NPs and basic pyridyl-N atoms at type-A defects. The above results demonstrate that a rational tuning of defects can prevent the aggregation of Ru NPs embedded in Ru@DEMOFs. The average diameters of Ru NPs in both two kinds of Ru@DEMOFs, obtained from CO pulse chemisorption measurements, also increase upon increasing the feeding ratio of DLx (x = 1, 2) to TL, being consistent with that obtained from STEM measurements. The dispersions of Ru NPs in both two kinds of Ru@DEMOFs decrease along with increasing the feeding ratio of DLx (x = 1, 2) to TL, primarily attributed to the aggregation of defects with high concentrations. These results further confirm that the sizes and distributions of Ru NPs can be controllably adjusted by the type of introduced defects as well as their concentrations. In relation to that of Ru@D0, the main bands stemming from both CO-Cr 3+ (Supplementary Fig. 27a) and CO-Cr δ+ (Supplementary Fig. 27b) in D2a-c shift slightly to lower frequency with increasing z of DL2. These results demonstrate the formation of electron-enriched Cr δ+ defects via the partial reduction of pristine Cr 3+ -CUSs along with the incorporation of DL2. (d-f) in the Cr 3+ -related CO vibration region for Ru@D0 (d), Ru@D1a (e) and Ru@D2a (f). All samples were exposed to CO (0.01 mbar) at ~110 K, and then heated to the indicated temperatures. Prior to exposure, each sample was heated to 500 K to remove all adsorbed species. The binding energy of the Ru 3d is very close to that of the more intense C 1s peaks in the XPS spectra. However, the binding energy of Au 4f is non-overlapping with all elements of the framework, and thus can be used as a solid reference for a reliable analysis of the electronic structure changes. to TL, and that of these two kinds of Ru@DEMOFs with same z is comparable (Supplementary Table 6). Generally, the larger size of metal nanoparticles results in the smaller binding energy. On consideration of these two aspects, the binding energy of Au@D1c is expected to be comparable to that of Au@D2c, and both of which should be smaller than that of Au@D0. 
However, the XPS spectra ( Supplementary Fig. 29) show that the binding energies of the Au 4f7/2/4f5/2 doublet for both of the 39 / 57 Au@D1c (84.1/87.8 eV) and Au@D2c (84.3/88.0 eV) are higher than that of Au@D0 (83.9/87.6 eV), revealing that the embedded Au NPs in D1c and D2c DEMOFs are slightly positively charged. This finding is attributed to the electronic interaction between the embedded Au NPs and defective Cr δ+ -CUSs (δ < 3) acting as Lewis acid sites (electron acceptors) that lose one coordinating carboxyl in DEMOFs (see Fig. 2a-b). Furthermore, the binding energy of the Au 4f7/2/4f5/2 doublet in Au@D1c is lower than that of Au@D2c. indicating the additional electronic interaction between Au NPs and pyridyl-N atoms of DL1 in Au@D1c as Lewis base sites (electron donors) that are absent for the Au@D2c DEMOF. Overall, the above results confirm that the degree of charge transfer from metal NPs to the framework with type-B defects is larger than that to the framework containing type-A defects, and both of them are higher than that to the pristine framework. proposed synergistic catalytic mechanism of D-glucose selective hydrogenation to sorbitol for these two different kinds of Ru NPs impregnated DEMOFs (Fig. 1). As shown in Supplementary Fig. 31, the SEM images of all Ru@DEMOFs demonstrate the decrease of particle size along with increasing the feeding ratios (z) of DLx to TL (x =1, 2, z =10%, 30%, 50%). It is a main reason that the fine peaks in the PXRD of DEMOFs and Ru@DEMOFs gradually disappeared and merged into broad bands along with increasing the feeding ratio (z) of DLx to TL (x = 1, z ≥ 30%; x = 2, z ≥ 50%). / 57 The impregnated amount of Ru NPs plays a significant role in catalysis of D-glucose selective hydrogenation to sorbitol, consequently, the catalytic performances of Ru NPs impregnated D0, D1a and D2a with different loading amounts of Ru element ranging from 1 to 5 wt% towards the D-glucose selective hydrogenation, have been investigated with all the other reaction conditions being fixed. As shown in Supplementary Fig. 32, the yields of sorbitol reach the maximum values when D0, D1a and D2a were impregnated by RuCl3 precursor containing 2.5 wt% Ru element, named as Ru@D0, Ru@D1a and Ru@D2a, demonstrating that the optimal impregnated content of Ru NPs of these MOFs supporters is 2.5 wt%. Consequently, all the other Ru@DEMOFs catalysts were impregnated with Ru NPs by using RuCl3 precursor containing 2.5 wt% Ru element. As shown in Supplementary Fig. 33 The maximum sorbitol yields of the selected investigated catalysts Ru@D0, Ru@D1a and Ru@D2a can be raised along with the increase of applied stirring rate when all the other reaction conditions were fixed. However, considering the tolerance of these catalysts, the operation safety and economy, all catalytic reactions in this work were conducted at 800 rpm. Fig. 36 The curves of time-dependent D-glucose conversions (a) and selectivity to sorbitol (b) for the first four cycles of the reactions catalyzed by Ru@D0, Ru@D1a-c and Ru@D2a-c, respectively. Reaction conditions: D-glucose aqueous solution (25 wt%, 1.543 mol/L, 50 g), catalysts (1 g), hydrogen pressure (5 MPa), temperature (120 ºC) and stirring rate (800 rpm). Supplementary Fig. 36 shows the curves of time-dependent conversion of D-glucose and selectivity to sorbitol for the first four cycles of the reactions catalyzed by Ru@D0, Ru@D1a-c and Ru@D2a-c under the same optimized reaction conditions. 
The conversion of D-glucose and sorbitol selectivity for all these catalysts, except D1b obtaining the maximum sorbitol selectivity at 120 minutes, achieve the highest values after reacting for 150 minutes. Samples In order to gain an in-depth understanding of the reaction mechanism based on the role of the MIL-100-Cr MOF supporters, artificially implanted defects and impregnated Ru NPs as well as their synergetic catalytic effect on D-glucose selective hydrogenation to sorbitol, the catalytic performances of Ru NPs, pristine and defect engineered MIL-100-Cr supporters have also been investigated. The detailed discussions have been given in the section "Determination of the roles of each active species and their synergistic catalytic mechanism" of the main text.
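As a small numerical aside on the performance metrics used throughout this section, the sketch below shows how HPLC concentrations translate into conversion, selectivity and yield, and how a turnover frequency can be estimated once the amount of accessible Ru is known (for instance from the CO chemisorption dispersions mentioned earlier). The 2.5 wt% Ru loading and the 50 g of 25 wt% D-glucose feed are taken from the text; the residual concentrations, the dispersion and the reaction time are placeholders, and a surface-Ru-based TOF convention is assumed.

```python
M_GLUCOSE = 180.16   # g/mol
M_RU = 101.07        # g/mol

def conversion_selectivity_yield(c_glc0, c_glc, c_sorbitol):
    """Mole-balance metrics from HPLC concentrations (mol/L)."""
    conv = (c_glc0 - c_glc) / c_glc0
    sel = c_sorbitol / (c_glc0 - c_glc) if c_glc0 > c_glc else 0.0
    return conv, sel, conv * sel

def turnover_frequency(mol_converted, cat_mass_g, ru_wt_frac, dispersion, time_h):
    """Moles of glucose converted per mole of surface Ru per hour."""
    mol_surface_ru = cat_mass_g * ru_wt_frac / M_RU * dispersion
    return mol_converted / (mol_surface_ru * time_h)

# 50 g of 25 wt% feed contains ~12.5 g (~0.069 mol) of D-glucose.
mol_glc_initial = 12.5 / M_GLUCOSE
conv, sel, yld = conversion_selectivity_yield(1.543, 0.08, 1.42)  # placeholder HPLC values
print(f"conversion {conv:.1%}, selectivity {sel:.1%}, yield {yld:.1%}")

tof = turnover_frequency(mol_converted=mol_glc_initial * conv,
                         cat_mass_g=1.0, ru_wt_frac=0.025,
                         dispersion=0.40, time_h=2.5)             # placeholder dispersion/time
print(f"TOF ~ {tof:.0f} per hour")
```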
Improved biomechanical metrics of cerebral vasospasm identified via sensitivity analysis of a 1D cerebral circulation model Biomechanics Cerebral vasospasm (CVS) is a life-threatening condition that occurs in a large proportion of those affected by subarachnoid haemorrhage and stroke. CVS manifests itself as the progressive narrowing of intracranial arteries. It is usually diagnosed using Doppler ultrasound, which quantifies blood velocity changes in the affected vessels, but has low sensitivity when CVS affects the peripheral vasculature. The aim of this study was to identify alternative biomarkers that could be used to diagnose CVS. We used a 1D modelling approach to describe the properties of pulse waves that propagate through the cardiovascular system, which allowed the effects of different types of vasospasm on waveforms to be characterised at several locations within a simulated cerebral network. A sensitivity analysis empowered by the use of a Gaussian process statistical emulator was used to identify waveform features that may have strong cor- relations with vasospasm. We showed that the minimum rate of velocity change can be much more effective than blood velocity for stratifying typical manifestations of vasospasm and its progression. The results and methodology of this study have the potential not only to improve the diagnosis and monitoring of vasospasm, but also to be used in the diagnosis of many other cardiovascular diseases where car- diovascular waves can be decoded to provide disease characterisation. Introduction Among the complications of subarachnoid haemorrhage (SAH) and stroke, cerebral vasospasm (CVS) is the leading cause of cerebral ischemia which leads to death of cerebral tissue cells (Fehnel et al., 2014;Macdonald, 2016). CVS manifests itself in the gradual contraction of intracranial arteries, resulting in a narrowed lumen, and initiates days after SAH. The effects of cerebral ischemia typically become evident after one week. Each year, SAH occurs in between 8 and 10 out of every 100,000 adults, of whom approximately 50% go on to develop CVS, which often results in longlasting neurological impairment or death (British Society of Neurological Surgeons, 2006;Pluta et al., 2009). The mechanism behind SAH-induced CVS is not completely understood. Several pathological mechanisms have been suggested to explain CVS, including changes in vascular responsiveness, and inflammatory or immunological responses of the vascular wall (Lin et al., 2014). Currently, Transcranial Doppler ultrasound (TCD) is the most commonly used diagnostic tool (Kumar et al., 2016). In patients with CVS, TCD typically shows an increase in blood velocity from baseline values in the affected vessels (Aaslid et al., 1984;Fontana et al., 2015;Harders and Gilsbach, 1987). This biomechanical metric (henceforth referred to as a biomarker) effectively detects CVS in larger arteries, but its sensitivity decreases when CVS affects vessels concealed behind thick bone tissue in the peripheral intracranial vasculature. As a result, CVS is commonly diagnosed by excluding other possible causes and there is a need for more effective diagnostic biomarkers. Pulse waves in the cardiovascular system originate from periodic contraction of the heart, and propagate through the elastic vessels of the cardiovascular system. These waves reflect at sites of mechanical discontinuity, such as bends, bifurcations, or sudden narrowing, including those caused by vasospasm. 
Numerical models, in particular 1D and 0D-distributed models, have been https://doi.org/10.1016/j.jbiomech.2019.04.019 0021-9290/Crown Copyright Ó 2019 Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). successfully used to capture and describe the physics of propagation of these waves (Alastruey et al., 2007;Reymond et al., 2009). Several computational models have been proposed to better understand the cause-effect mechanisms governing CVS (Baek et al., 2007;Lodi and Ursino, 1999;Robinson et al., 2010). In these studies, CVS was assumed to affect only the middle cerebral artery (MCA) and the focus was on the currently used biomarker, blood flow velocity, showing its increase in the narrowing vessel. To the authors' knowledge, there is no evidence in the literature of studies that link CVS in different locations (including the peripheral circulation) to pulse waveform features measured proximally. The construction of patient-specific 1D cardiovascular system models is also challenging because the model inputs (anatomical parameters, boundary conditions, and initial conditions) are difficult or expensive to measure in the clinical setting. It is therefore important to ensure that biomarkers and model predictions are robust under a range of plausible model inputs. An extensive exploration of the model input-space is computationally intensive because the model must be run many times with different sets of inputs, but can be made tractable by replacing the mechanistic numerical model with a fast-running emulator. The most commonly used emulation methods are Gaussian process and polynomial chaos expansions (PCE) (Donders et al., 2015;O'Hagan, 2013). In a previous study, we used a Gaussian process (GP) to emulate a 1D cardiovascular system model to a high level of accuracy . We showed how the GP emulator could reduce the computational cost of model runs by 95%. A GP emulator can treat inputs and outputs explicitly as uncertain quantities, and so by determining the proportion of output variance that could be accounted for by each uncertain input we were able to calculate variance based sensitivity indices for each input and output of the model. The aims of the present study were to (i) develop a 1D numerical biomechanical model of blood flow in the cerebral circulation, (ii) build a GP emulator of this model, (iii) use the emulator to examine the effect of simulated CVS with varying severity and extension on the properties of the pulse waveform, and (iv) to identify biomarkers that show the greatest sensitivity to the presence and extent of CVS. In the wider context, the present study describes the use of sensitivity indices as a way to identify effective biomarkers, which is a novel approach that has the potential to result in clinically useful tools. Materials and methods The study consisted of the following steps: 1. Complete network model: implementation of an arterial network model including vessels typically affected by CVS (complete model). 2. Biomarker-pool selection: identification of pulse waveform descriptors. 3. Sensitivity analysis: development of a Gaussian process emulator using a reduced number of model runs evenly distributed across the input parameter-space. The resulting GP was then used to run a full sensitivity analysis (SA) of the complete model outputs of interest to the input parameters of interest, i.e. vessel radii reduction. 4. 
CVS simulation: The 1D model was reduced to include only the internal carotid artery (ICA) and its peripheral vessels (reduced model) and employed to simulate CVS with different degrees of severity. The model reduction was done to ensure a constant perfusion of the arteries affected by CVS, and to include the effect of vascular autoregulation. The reduced model included the left ICA, anterior cerebral artery (ACA), and the MCA with its peripheral ramifications. The results of this second set of simulations were then analysed to establish a relation between biomarkers and CVS properties w.r.t. traditional biomarker. Complete network model The numerical model used to simulate the propagation and reflection of pulse waveforms in typical vascular networks, together with its validation is described in Melis et al. (2017) and Melis (2017). Briefly, the model is based on the reduced 1D form of the general continuity and Navier-Stokes equations for incompressible flows within narrow straight elastic tubes. The model assumes a flat velocity profile to compute the convective term (which goes to zero) and a parabolic profile for viscous losses. The model outputs are pressure, flow rate, and velocity waveforms. These are time-and space-dependent signals with period T equal to the cardiac cycle (T ¼ 0:8 s). A constitutive equation taking into account the elastic wall behaviour is used to close the equation system whose mechanical properties are set to resemble physiological values for healthy subjects. The vascular network employed in this study is made of 61 vessels connected in a tree-like configuration (Fig. 1). It starts from the ascending aorta and includes the entire circle of Willis and a detailed description of the MCA daughter branches. The mechanical properties of the arteries were taken from Reymond et al. Table 1). The circle of Willis network from Alastruey et al. (2007) was extended to include the detailed anatomical description of MCA branches with data from Gibo et al. (1981) and Tanriover et al. (2003). The numerical solution of the complete network model was computed by means of openBF (Melis, 2018), a finite-volume solver for 1D networks of elastic vessels. Pressure, flow, and velocity waveforms were obtained along five equispaced locations in each vessel. The inlet boundary condition at ascending aorta was set as a typical volumetric flow rate waveform taken from Alastruey et al. (2007) (Fig. 2(a)), and the outlet boundary conditions were set by coupling each 1D model outlet with a three-element Windkessel model of the peripheral vasculature, with typical peripheral resistance and compliance values taken from Alastruey et al. (2007). In order to reduce unrealistic reflections, the first Windkessel resistance value R 1 was changed at each time step to match the 1D segment impedance. The second resistance R 2 was then calculated so that R 1 þR 2 ¼R p , where R p was the peripheral resistance set to a constant value that matched typical resistances Table 1 Cerebral circulation mechanical properties. Posterior communicating artery (PCoA), middle cerebral artery (MCA), anterior cerebral artery (ACA), posterior cerebral artery (PCA), anterior communicating artery (ACoA). ID Artery of the distal capillary network at that location. The vessel impedance was then computed as q  c=A 0 , where q was the constant blood density, c the pulse wave velocity (which is a function of the lumen diameter A ¼ Aðx; tÞ and vessel compliance), and A 0 the constant reference lumen diameter (Table 1). 
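The outlet coupling just described (first Windkessel resistance matched to the characteristic impedance of the terminal 1D segment, with R1 + R2 = Rp) amounts to the small calculation sketched below. The numerical values for the terminal vessel are placeholders rather than parameters from Table 1, and the reference lumen area is used instead of the instantaneous one.

```python
import math

RHO_BLOOD = 1060.0   # kg/m^3, typical blood density

def windkessel_split(r_peripheral, wave_speed, lumen_radius):
    """Split the total peripheral resistance so that R1 matches the
    characteristic impedance Z = rho*c/A0 of the terminal 1D segment."""
    a0 = math.pi * lumen_radius**2
    z_char = RHO_BLOOD * wave_speed / a0
    r1 = min(z_char, r_peripheral)        # keep R2 non-negative
    return r1, r_peripheral - r1

# Placeholder terminal-vessel values: Rp in Pa*s/m^3, c in m/s, radius in m.
r1, r2 = windkessel_split(r_peripheral=5.0e9, wave_speed=8.0, lumen_radius=1.0e-3)
print(f"R1 = {r1:.3e}, R2 = {r2:.3e} Pa s m^-3")
```

In the model itself R1 is updated at every time step because the pulse wave velocity and lumen area vary over the cardiac cycle; the sketch uses reference values only.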
Biomarker-pool selection We selected potential biomarkers as waveform features that would be easy to extract in the clinical context, and those that are commonly used to calculate more complex metrics such as the augmentation and pulsatility indices. The biomarker-pool therefore consisted of minimum, maximum, and time average in one cardiac cycle (minðÁÞ; maxðÁÞ, and meanðÁÞ, respectively) of velocity (u) and pressure (P) waveforms, along with their first time derivatives (u t and P t , respectively). Sensitivity analysis SA was performed to identify the set of outputs most sensitive to a change of each single input of the numerical model . In a highly non-linear vascular model, waveforms may be sensitive to more than one model parameter or combinations of them. Hence, only outputs highly sensitive to changes in lumen radius were selected as CVS biomarkers. SA was performed considering for each vessel the lumen radius (R 0 ) and its length ('), the wall's Young's modulus (E), and the peripheral resistance and compliance of the Windkessel models at the outlets (R p , and C p ). We computed the sensitivity to R p because this remained constant throughout each model run, whereas the individual Windkessel resistances R 1 and R 2 changed at each time step. SA was done by means of the analysis of variance decomposition (equation 11 in Saltelli et al. (2010)) from which Sobol's sensitivity indices were computed (Saltelli et al., 2010;Sobol, 2001) using the method described in our earlier work , where the numerical model is replaced by a fast-running GP emulator Oakley and O'Hagan, 2004;O'Hagan, 2006). The GP emulator had a zero mean and a squared exponential kernel as described in our earlier paper , and the GP was fitted using the inputs and outputs from an initial set of 50 numerical simulations as described below. The sensitivity indices measure the contribution to variance in each model output (i.e., variance of a waveform biomarker) from the variance of each model input (first-order indices), or from the interaction of one input with other inputs (total-order indices). By definition, Sobol's indices are in the ½0; 1 interval; in the case of first-order indices, S i;j ¼ 0 indicates that none of the output j variance can be ascribed to input i, and S i;j ¼ 1 indicates that the output j variance depends only on input i. The interactions between inputs are taken into account by total-order indices, and the difference between total-and first-order indices returns higher-order indices, H j . For example, H i;j > 0 indicates that the output j variance is influenced by the interaction between input i and other input variables. The aim of SA in this case is to identify those model outputs sensitive to changes in radius (high firstorder indices w.r.t. input R 0 ) but not to input interactions. The model outputs satisfying these conditions were selected as CVS biomarkers; the SA was performed in four steps: All model inputs were initialised with typical reference values (Table 1) and then changed within AE50% of their reference values as such variation is plausible for the CVS scenario (Findlay et al., 2016). A homogeneous coverage of the input space was ensured by the use of Latin hypercube sampling method as explained in Melis et al. (2017). A set of 50 input parameter points was identified by the Latin hypercube sampling from uniform distributions of each model input in the range AE50% of typical reference values given in (Table 1). 
A set of numerical simulations run with these inputs was used to train the GP emulator. Latin hypercube sampling ensured an even coverage of the input space, and the number of training runs was based on our earlier experience. 2. The GP emulator was trained using normalised inputs and outputs from the training data, and its hyper-parameters were optimised via the stochastic gradient descent method. The number of GP hyper-parameters was twice the number of inputs. 3. The trained GP emulator was used to generate predictions on the O(d × 10^3) input set (where d is the number of inputs). 4. Sobol's sensitivity indices were computed from these outputs and converted to percent values. The outputs were ranked according to the values of their indices with respect to the radius. In addition, the higher-order indices were computed and used to ensure that only features sensitive to R0 were selected. The GP emulator was trained on a set of 50 1D numerical model runs, and was validated against an additional set of 200 model runs. To assess the error made by the GP in predicting numerical outcomes, the mean average prediction error (MAPE) was computed as MAPE = (100/N) Σ_i |Y_i − y_i| / Y_i, where N is the total number of input points, and Y_i and y_i are the i-th output from the numerical model and from the GP emulator, respectively.
CVS simulations
To simulate CVS, we used a reduced network model. The reduced model starts from the internal carotid artery and bifurcates into the ACA and the MCA (segments 18, 25, and 23 in Fig. 1, respectively). The inlet boundary condition at the ICA root was set as a typical volumetric flow rate from Alastruey et al. (2007) (Fig. 2(b)). The ACA was included to consider the effect of flow redistribution caused by the CVS, and it was closed by a three-element Windkessel model, whereas all the distal vasculature past the MCA was included to simulate the effect of CVS. The propagation and narrowing levels of vasospasm were decided in consultation with two fully trained Interventional Neuroradiologists from University Hospital of Tours, Tours, France. Various CVS types describing the progression of CVS were considered. These included either CVS spreading from the MCA towards the periphery (forward CVS) or from the periphery towards the MCA (backward CVS), in a symmetric or asymmetric configuration (Fig. 3). In all the configurations, the lumen diameter reduction was gradually increased in six stages from the baseline condition (0%) to severe vasospasm (60% of vessel narrowing). Narrowing was uniform across the lumen, with no change in vessel shape.
Complete network model and GP emulator validation
The complete network model was validated against results reported in Alastruey et al. (2007) to ensure correct parameterisation of the additional vessels. The waveforms calculated by the two models match at several locations within the system (Fig. 4a). The complete network model was further validated by comparing predicted velocity values time-averaged over the cardiac cycle in the MCA, in the presence of vasospasm, with published measurements from SAH patients (Sloan et al., 1989) (Fig. 4b). The complete network model is in good agreement with the literature. Discrepancies are smaller than 7% for lumen diameter reductions < 50%; for narrowing > 50%, the complete network model predictions diverge from the experimental measurements (30% difference at 60% narrowing). The GP emulator, trained upon 50 simulator runs, was validated against an additional set of 200 simulations.
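To make the emulator-based workflow above concrete, the following sketch reproduces its general shape on a toy scalar function standing in for the 1D solver: a Latin hypercube design over ±50% of reference inputs, a squared-exponential GP fitted to the training runs, validation via the relative prediction error, and Sobol indices computed from emulator predictions on a large design. It uses scikit-learn and SALib as stand-ins for the tools used in the paper; the toy "simulator", the reference values and all numbers are illustrative assumptions, not the study's actual implementation.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy "simulator": a scalar waveform feature as a function of (R0, E, Rp).
def simulator(x):
    r0, e, rp = x.T
    return 1.0 / r0**2 + 0.1 * e + 0.05 * rp * r0

ref = np.array([1.4e-3, 4.0e5, 5.0e9])       # assumed reference input values
lo, hi = 0.5 * ref, 1.5 * ref                 # +/-50% ranges

# 1. Latin hypercube design of 50 training points and the corresponding runs.
X_train = qmc.scale(qmc.LatinHypercube(d=3, seed=0).random(50), lo, hi)
y_train = simulator(X_train)

# 2. GP emulator with a squared-exponential (RBF) kernel on normalised inputs.
Xn = (X_train - lo) / (hi - lo)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.3] * 3),
                              normalize_y=True).fit(Xn, y_train)

# 3. Validation on 200 extra runs: mean relative prediction error (in %).
X_test = qmc.scale(qmc.LatinHypercube(d=3, seed=1).random(200), lo, hi)
y_test = simulator(X_test)
y_pred = gp.predict((X_test - lo) / (hi - lo))
mape = 100.0 * np.mean(np.abs(y_test - y_pred) / np.abs(y_test))
print(f"MAPE on test set: {mape:.2f}%")

# 4. Sobol indices from emulator predictions on a large Saltelli design.
problem = {"num_vars": 3, "names": ["R0", "E", "Rp"],
           "bounds": np.column_stack([lo, hi]).tolist()}
X_big = saltelli.sample(problem, 1024, calc_second_order=False)
Y_big = gp.predict((X_big - lo) / (hi - lo))
Si = sobol.analyze(problem, Y_big, calc_second_order=False)
print("first-order:", np.round(Si["S1"], 3), "total-order:", np.round(Si["ST"], 3))
```

The point of the construction is that the thousands of evaluations needed for the Sobol estimates come from the cheap emulator rather than from the full 1D solver, which is what makes the analysis tractable.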
The comparison of numerical results and GP predictions is reported in Fig. 4c, where the points scattered along the diagonal line of equality indicate a good agreement between simulations and emulator predictions. The MAPE scored on the test set was 3.53%, which is small with respect to the 300% whole-biomarker variation within the CVS range. The total simulation time using the complete network model for the 50-point dataset used for emulator training was equal to 31 h on a standard Linux workstation. The emulator training and the SA on the complete d × 10^3 dataset were performed in 10.4 s. The total computational time estimated to perform a complete Monte Carlo analysis using the complete network model would have been approximately 612 h. Therefore, the entire SA by means of the GP emulator took about 5% of the Monte Carlo computational time.
SA and CVS biomarker selection
The sensitivity analysis results are reported in terms of Sobol's sensitivity indices (Fig. 5a). The total- and first-order sensitivity indices were converted to percentages (i.e., the sensitivity indices were multiplied by 100) and reported in bar charts (Fig. 5) as plain and hatched bars, respectively. For instance, the SA indicates that the mean value of the pressure waveform is more sensitive to changes in peripheral resistance (Rp) than in peripheral compliance at the set heart rate (Cp in Fig. 5c). The velocity first time derivative is highly sensitive to changes in lumen radius (R0, Fig. 5b); in particular, the minimum value of the velocity time derivative is exclusively dependent on the changes in radius, as the difference between first- and total-order indices is 1%. Therefore, in the following analysis, the effectiveness of min(u_t) as a CVS biomarker is tested.
CVS simulations
Results are presented as percent changes of the selected CVS biomarkers with respect to the pre-vasospasm value (C%). For different levels of vessel narrowing, the C% was computed as C(x) = 100 · (x_CVS − x_REF)/x_REF, where x_CVS is the value of the CVS biomarker while the CVS is occurring and x_REF is the value of the CVS biomarker in the pre-vasospasm configuration. The results in terms of the time-average and minimum first time-derivative velocity biomarkers for proximal and distal CVS are reported in Fig. 6. The mean(u) biomarker is sensitive to small changes in MCA radius, as it increases by 50% for a 20% lumen radius decrease (Fig. 6a). The min(u_t) biomarker is less sensitive to proximal changes in radius, and it decreases by 50% only for severe CVS (lumen reduction > 50%). Conversely, in the case of peripheral CVS (Fig. 6b), the mean(u) biomarker slowly decreases as the lumen radius decreases, and a change of 50% is obtained for severe CVS (lumen reduction 60%). The min(u_t) biomarker change is more sensitive to peripheral CVS, as a 50% increase is obtained for a 30% lumen reduction in any CVS configuration. In the case of symmetric CVS (circle marker), the increase in min(u_t) is 75% for a 20% diameter reduction.
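A minimal sketch of how the waveform features and the C% metric defined above can be computed from a sampled velocity waveform; the synthetic waveform, sampling grid and amplitudes below are placeholders, not model outputs from the study.

```python
import numpy as np

T = 0.8                                    # cardiac period [s]
t = np.linspace(0.0, T, 400, endpoint=False)

def biomarkers(u, t):
    """Waveform features used as candidate CVS biomarkers:
    min/max/mean of the signal and of its first time derivative."""
    u_t = np.gradient(u, t)
    return {"min(u)": u.min(), "max(u)": u.max(), "mean(u)": u.mean(),
            "min(u_t)": u_t.min(), "max(u_t)": u_t.max(), "mean(u_t)": u_t.mean()}

def percent_change(x_cvs, x_ref):
    """C% = 100 * (x_CVS - x_REF) / x_REF: change of a biomarker
    relative to its pre-vasospasm value."""
    return 100.0 * (x_cvs - x_ref) / x_ref

# Synthetic velocity waveforms standing in for pre-vasospasm and CVS cases.
u_ref = 0.6 + 0.3 * np.sin(2 * np.pi * t / T) * np.exp(-3 * t / T)
u_cvs = 0.9 + 0.45 * np.sin(2 * np.pi * t / T) * np.exp(-3 * t / T)

ref, cvs = biomarkers(u_ref, t), biomarkers(u_cvs, t)
for name in ("mean(u)", "min(u_t)"):
    print(name, f"{percent_change(cvs[name], ref[name]):+.1f}%")
```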
In an attempt to identify biomarkers capable of detecting vasospasm from an early stage of inception, the velocity biomarkers (mean(u), min(u_t), mean(u_t), and max(u_t)) for a 10% lumen reduction in all CVS configurations are reported in Fig. 7. This comparison highlights how different biomarkers can be used for early detection of CVS occurring at different locations. The mean(u) and mean(u_t) biomarkers show a 20% increase in all the cases in which the MCA is affected by CVS (i.e., I, I + II, and I + II + III in Fig. 7(a, c)), but they only decrease by 8% in the case of peripheral CVS (i.e., the II + III and III cases). On the other hand, the min(u_t) biomarker increases by at least 10% in the peripheral CVS cases (Fig. 7b). The max(u_t) biomarker changes are lower than 5% in all the simulated cases (Fig. 7d).
Discussion
The aim of this study was to identify more efficient biomarkers capable of characterising different types of CVS and allowing its early detection. A 1D model of the cerebral circulation was developed to describe the physics of wave propagation through a typical vascular network and to predict the effects of vasospasm on waveforms. A pool of CVS biomarkers was identified through SA on waveform features. The biomarkers examined were pulse waveform features whose variation is caused by changes in mechanical properties due to CVS. A computationally efficient exploration of the parameter space in a 1D model of the cerebral circulation was performed by means of GP emulation. Our modelling approach showed that a CVS occurring at the measurement location caused an increase (more than 150% in severe cases) of mean(u), as expected and as validated by clinical observations. However, when the CVS occurred more peripherally (Fig. 8), mean(u) was only marginally affected by the vessel lumen reduction and was effective only for severe levels of vessel narrowing (it decreased by 47% when vessel narrowing was 50%). The decrease was due to the increase in peripheral resistance and, in turn, to the flow diversion towards the distal ICA. This demonstrates that mean(u) cannot be a good diagnostic predictor for CVS in distal arteries. Conversely, the minimum gradient of the velocity waveform (min(u_t)) was more sensitive to CVS affecting distal arteries. An increase (up to 155%) of this biomarker occurred for all the CVS configurations tested. Thus we propose that min(u_t) has potential diagnostic value for CVS in distal arteries. The influence of vessel narrowing on the decelerating part of the pulse waveform is what would be expected from simple biomechanical considerations. From the Moens-Korteweg equation (relating wave speed to lumen and other mechanical properties), a reduction in lumen diameter will result in a higher propagation speed of the reflected waves through the narrowed sections, which in turn will result in an 'earlier' superposition of the reflected wave on the incident waves and therefore in changes to the decelerating slope of the proximally measured waveform. The minimum of u_t occurs immediately after peak systole, when there is superposition of both incident and backward travelling waves.
[Fig. 4 caption: (a) Waveforms computed at the locations of Fig. 1 are compared with waveforms published in Alastruey et al. (2007) and computed in a network without the extended MCA description. (b) Comparison of velocity change in the MCA against percentage lumen diameter reduction for the current study (continuous line) and published experimental measurements (dashed line) (Sloan et al., 1989). (c) Comparison between the GP emulator predictions and the mechanistic model outputs: points lying along the line of equality indicate good agreement between the two methods.]
Backward waves are produced by mechanical discontinuities such as a change in radius, and so we would speculate that min(u_t) responds to changes in the backward waves relative to the incident wave. Further numerical and experimental work would be needed to establish if this is the case.
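To illustrate the Moens-Korteweg argument above, a short sketch follows; the wall thickness, Young's modulus and blood density are assumed values, and the thin-walled form of the relation used here is one common statement of it, not necessarily the closure used in the solver.

```python
import numpy as np

def moens_korteweg(E, h, d, rho=1060.0):
    """Pulse wave speed c = sqrt(E*h / (rho*d)) for a thin-walled elastic
    tube with Young's modulus E, wall thickness h and lumen diameter d."""
    return np.sqrt(E * h / (rho * d))

E, h, d0 = 4.0e5, 0.3e-3, 2.8e-3           # assumed MCA-like values
for narrowing in (0.0, 0.2, 0.4, 0.6):      # fraction of diameter reduction
    d = d0 * (1.0 - narrowing)
    print(f"{narrowing:.0%} narrowing -> c = {moens_korteweg(E, h, d):.2f} m/s")
```

With a fixed wall, the wave speed rises as the lumen narrows, so reflected waves travelling through narrowed segments arrive back at a proximal measurement site earlier; this is the qualitative mechanism invoked above for the change in min(u_t).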
Overall, our velocity biomarker results were in agreement with previously published experimental work where an increase in mean(u) was associated with a decrease in MCA lumen diameter. However, while the numerical model accurately predicted the velocity biomarkers, it lacks a detailed and mechanistic representation of the autoregulation system that is believed to play an important role in the response of the cardiovascular system after haemorrhagic events. In this study, cerebral autoregulation was simulated in an indirect fashion, by imposing a constant flow rate at the inlet of a reduced network. Therefore, a further study should include the entire vascular network (i.e., the whole circle of Willis and both sides of the cerebral circulation) and a dynamic representation of the autoregulation mechanism to ensure a direct estimation of the effects of an extended network on waveforms. This network could also be used to identify better measurement locations for monitoring early CVS onset and progression. In addition, the numerical results described in the present study can be used as hypotheses to inform experimental measurements and to validate the prediction accuracy of the newly identified biomarker.
[Fig. 7 caption: Velocity biomarkers for CVS onset (lumen reduction 10%). Depending on the CVS configuration, different velocity waveform features can be used. CVS including the main MCA branch (first generation, I) causes an increase in mean(u) and mean(u_t), while peripheral CVS (second and third generation, II and III) causes an increase in min(u_t).]
[Fig. 8 caption: Comparison between the current CVS biomarker (mean(u)) and the proposed biomarker (min(u_t)) for the peripheral CVS case. A 50% change in the min(u_t) biomarker occurs for small changes in lumen diameter (15%), whereas the mean(u) biomarker decreases by 50% only in the case of severe vasospasm (60% diameter reduction).]
The proposed mechanistic model can be used to describe pressure waveforms in transcranial arteries affected by CVS. This model was used to identify more sensitive CVS biomarkers than those currently used in clinical practice. In particular, the mean velocity measured by TCD does not detect distal CVS, whereas the minimum gradient of the velocity waveform is capable of differentiating between the location and severity of CVS. However, the model used in the present study assumes flat and parabolic velocity profiles for the convective term and viscous losses, respectively, and these assumptions may be an important factor that determines whether the CVS biomarkers identified in this study can be used effectively in the clinical setting. Nevertheless, this study, once clinically validated, has the potential to guide the development of better monitoring and diagnostic protocols and technologies for a more effective management of patients, not only in the context of vasospasm but also in the treatment of many other cardiovascular diseases, whose presence and progression may be decoded in a similar fashion to provide disease characterisation.
Conflict of interest
The authors declare no conflicts of interest.
The Novel Neuroprotective Compound KMS99220 Has an Early Anti-neuroinflammatory Effect via AMPK and HO-1, Independent of Nrf2 We have previously reported a novel synthetic compound KMS99220 that prevented degeneration of the nigral dopaminergic neurons and the associated motor deficits, suggesting a neuroprotective therapeutic utility for Parkinson's disease. Microglia are closely associated with neuroinflammation, which plays a key role in the pathogenesis of neurodegenerative diseases. In this study, we investigated the effects of KMS99220 on the signaling involving AMP-activated protein kinase (AMPK) and heme oxygenase-1 (HO-1), the enzymes thought to regulate inflammation. KMS99220 was shown to elevate the enzyme activity of purified AMPK, and phosphorylation of cellular AMPK in BV2 microglia. It increased the level of HO-1, and this was attenuated by AMPK inhibitors. KMS99220 lowered phosphorylation of IκB, nuclear translocation of NFκB, induction of inducible nitric oxide synthase, and generation of nitric oxide in BV2 cells that had been challenged with lipopolysaccharide. This anti-inflammatory response involved HO-1, because both its pharmacological inhibition and knockdown of its expression abolished the response. The AMPK inhibitors also reversed the anti-inflammatory effects of KMS99220. The induction of HO-1 by KMS99220 occurred within 1 h, and this appeared not to involve the transcription factor Nrf2, because Nrf2 knockdown did not affect the compound's HO-1 inducing- and anti-inflammatory effects in this time window. These findings indicated that KMS99220 leads to AMPK-induced HO-1 expression in microglia, which in turn plays an important role in early anti-inflammatory signaling. Together with its neuroprotective property, KMS99220 may serve as a feasible therapeutic agent against neuroinflammation and neurodegeneration. AMP-activated protein kinase (AMPK) is an enzyme involved in the regulation of cellular homeostasis and metabolic function. Accumulating evidence suggests that AMPK is also an important regulator of neuroinflammation. In microglial cells, direct pharmacological activation of AMPK lowered the lipopolysaccharide (LPS)-induced production of TNF-α, IL-6 and inducible NO synthase (iNOS) and nuclear translocation of NFκB [2,3]. In macrophages, overexpression of AMPK results in decreased inflammatory response, its knockdown leads to enhanced inflammatory response [4,5], and activation of its signaling downregulates the function of NFκB system [4,6]. Hence, AMPK is considered as a potential therapeutic target in neuroinflammation-related diseases. The phase-2 enzyme heme oxygenase-1 (HO-1) has also been shown to possess anti-inflammatory properties. Deficiency of HO-1 exhibited abnormalities including chronic inflammation in mice [7], increased secretion of pro-inflammatory cytokines in activated mouse splenocytes [8], and hyperinflammation in human [9,10]. HO-1 induction in macrophages has been shown to mediate the switch from the proinflammatory M1 phenotype to the anti-inflammatory M2 phenotype [11]. In microglia, induction of HO-1 expression using phytochemicals or chemical agents has shown to mediate the resolution of inflammatory response [12][13][14][15]. We recently synthesized a novel morpholine-containing chalcone compound KMS99220 (chemical structure shown in Fig. 7) that had a good pharmacokinetic profile and neuroprotective activity [16]. 
This compound exhibited excellent bioavailability and metabolic stability and no apparent side effect issues such as toxicity and cytochrome P450 inhibition. KMS99220 was shown to bind to the Keap1 protein, activate Nrf2, and induce expression of its target genes including HO-1 [16]. On the other hand, it has been reported that some chalcone compounds are anti-inflammatory [17][18][19] and can activate the AMPK pathway [20][21][22][23], and that AMPK can trigger HO-1 induction [24][25][26]. Taken together, we hypothesized that KMS99220, being a chalcone, might trigger AMPK activation and HO-1 expression in microglia, resulting in modulation of neuroinflammatory responses.
Synthesis of KMS99220
KMS99220 was organically synthesized according to the method we had previously published [16].
Cell cultures
BV2 mouse microglial cells [27] were grown in Dulbecco's modified Eagle's medium with 10% fetal bovine serum in the presence of 100 IU/L penicillin and 10 μg/ml streptomycin. Nrf2sh cells and GFPsh cells were produced and grown in culture media as previously reported by us [28]. The cells were maintained at 37 °C in 95% air and 5% CO2 in a humidified atmosphere.
AMPK kinase assay
The activity of the AMPK enzyme was determined in the presence or absence of KMS99220 using purified AMPK protein (service provided by Eurofins Scientific, Dundee, UK). The purified AMPK was first incubated with KMS99220 in an assay buffer containing 8 mM MOPS (pH 7.0), 0.2 mM EDTA, 200 μM AMP, and 100 μM of the substrate AMARAASAAALARRR. The reaction was initiated by the addition of Mg/ATP mix (final concentrations, 10 mM magnesium acetate and 45 μM [γ-33P]-ATP). After incubation for 40 min at room temperature, the reaction was stopped by the addition of phosphoric acid (final concentration, 0.5%). Ten μl of the reaction was then spotted onto a P30 filtermat and washed four times for 4 min in 0.425% phosphoric acid and once in methanol prior to drying and scintillation counting.
Cytotoxicity assays
Cell viability was assessed by determining the intracellular level of ATP using the CellTiter-Glo® kit as described before [29].
Griess assay
Cells were treated with various concentrations of KMS99220 together with 0.2 μg/ml LPS. After 24 h, 100 μl of culture medium was mixed with 50 μl of Griess reagent (1% sulfanilamide, 0.1% naphthylethylene diamine dihydrochloride and 2% phosphoric acid) and incubated at room temperature for 10 min. The nitrite level was measured at 540 nm with a microplate reader (SPECTRA MAX 340 pc; Molecular Devices, Menlo Park, CA, USA).
siRNA transfection
Cells were transiently transfected with siRNA (final concentration of 40 nM) against HO-1, Nrf2 or a control using Lipofectamine RNAiMax reagent according to the manufacturer's instructions. After 24 h, the cells were treated with KMS99220 or LPS, and then RT-PCR, western blot analysis and the Griess assay were conducted.
Data analyses
Statistical tests were carried out using PRISM (GraphPad Software, San Diego, CA, USA). A value of p<0.05 was considered statistically significant. Comparisons of three or more groups were analyzed by one-way ANOVA (analysis of variance) followed by post hoc Dunnett's multiple comparison tests.
KMS99220 activates the AMPK signaling pathway in microglia
We first examined whether our chalcone compound KMS99220 might activate AMPK. When purified AMPK protein was exposed to KMS99220, the activity of the enzyme was increased in a concentration-dependent manner, with a 17% elevation at 10 μM (Fig. 1A).
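For readers who do not use PRISM, the group comparison described in the Data analyses section above (one-way ANOVA followed by Dunnett's multiple comparison test against a control group) can be reproduced with open-source tools. The sketch below uses SciPy (scipy.stats.dunnett is only available in recent SciPy releases, roughly 1.11 or later) and entirely made-up example values; it is an illustration of the procedure, not the study's actual analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical normalised densitometry values (fold of untreated control).
control   = rng.normal(1.00, 0.10, size=6)
dose_1um  = rng.normal(1.60, 0.15, size=6)
dose_10um = rng.normal(2.40, 0.20, size=6)

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(control, dose_1um, dose_10um)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Dunnett's test: each treatment compared against the control group.
res = stats.dunnett(dose_1um, dose_10um, control=control)
for label, p in zip(("1 uM", "10 uM"), res.pvalue):
    print(f"{label} vs control: p = {p:.4f}{' *' if p < 0.05 else ''}")
```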
When the murine microglial BV2 cells were exposed to KMS99220, an increase in phosphorylated AMPK was observed in a manner dependent on the KMS99220 concentration (Fig. 1B). This occurred within 15 min of the exposure, after which the level was restored to that of the untreated control (Fig. 1C). In accordance with the previous report that activated AMPK translocates into the nucleus [30], the KMS99220 treatment resulted in accumulation of phospho-AMPK in the nucleus between 15 and 60 min (Fig. 1D). KMS99220 had no cytotoxicity in the concentration range tested (Fig. 1E).
AMPK is involved in microglial HO-1 induction by KMS99220
We tested if KMS99220 might induce HO-1 expression in microglial cells, and if so, whether this might involve AMPK. We performed RT-PCR, because we had previously demonstrated that the results of RT-PCR corresponded well to those of real-time RT-PCR for all genes investigated in the present study under our experimental conditions [18].
[Fig. 2 caption: AMPK is involved in HO-1 induction by KMS99220. (A) BV2 cells were treated with various concentrations of KMS99220 for 6 h or 24 h, and RT-PCR and western blot analysis were performed against HO-1. For quantitation, values obtained from densitometry were normalized against the respective loading controls (GAPDH or β-actin) and are expressed as induction fold of the untreated control, as indicated above each panel. (B) BV2 cells were pretreated with Ara A or Compound C for 1 h, KMS99220 was added to a final concentration of 10 μM, and the cells were further cultured for 6 h. RT-PCR was performed against HO-1 using GAPDH as a loading control. For quantitation, values obtained from densitometry were normalized against GAPDH and are expressed as fold of the KMS99220-treated control.]
As shown in Fig. 2A, KMS99220 dramatically and dose-dependently elevated the mRNA and protein levels of HO-1. On the other hand, when the cells were pretreated with the AMPK pharmacological inhibitors Ara A or Compound C, the KMS99220-induced HO-1 elevation was not as apparent (Fig. 2B). This suggested that AMPK played a role in the HO-1 induction.
KMS99220 suppresses NFκB signaling via HO-1 induction
Since KMS99220 induced HO-1 expression, it was possible that it might also suppress the signaling of the pro-inflammatory transcription factor NFκB. Western blot analysis showed that nuclear NFκB, which had been elevated upon exposure to the inflammagen LPS, was suppressed in the cells treated with KMS99220 (Fig. 3A). Since NFκB is known to be activated through phosphorylation and the subsequent degradation of its cytosolic inhibitor protein IκB [31], we asked if KMS99220 might also modulate this upstream step. As shown in Fig. 3B, KMS99220 indeed blocked the elevation of phospho-IκB. The decrease in the total IκB protein level after the LPS exposure, implicating degradation of phospho-IκB, was also not as apparent in the cells treated with KMS99220, supporting the notion that phosphorylation of IκB had been compromised by KMS99220. We tested whether this effect of KMS99220 on the IκB/NFκB system might be mediated by HO-1. For this, HO-1 expression was knocked down by transfection of BV2 cells with siRNA (Fig. 3C). As shown in Fig. 3D, in the cells whose HO-1 expression was obliterated, KMS99220 was no longer able to inhibit the LPS-induced IκB phosphorylation, and there was no significant difference between the LPS-alone and the LPS+KMS99220-treated cells.
This was in contrast to the cells transfected with the control siRNA, in which LPS+KMS99220 treatment lowered the level of phosphorylated IκB. This phenomenon was also confirmed using a pharmacological approach. When BV2 cells were pretreated with the HO-1 inhibitor SnPP, the inhibitory effect of KMS99220 on IκB phosphorylation was reversed, and this occurred in a dose-dependent manner (Fig. 3E).
KMS99220 suppresses LPS-induced iNOS expression via HO-1 induction
Because activated NFκB leads to production of various proinflammatory mediators including NO in microglia, and KMS99220 was found to suppress NFκB activation, we tested whether KMS99220 might also lower NO production. For this, we assessed expression of iNOS, the NO-synthesizing enzyme, and generation of NO in LPS-activated BV2 cells. As shown in Fig. 4A, the mRNA and protein levels of iNOS, which were elevated by LPS, were dose-dependently suppressed by the co-treatment with KMS99220. The generation of NO was also suppressed: the LPS-induced NO production was dose-dependently lowered by KMS99220, and 10 μM KMS99220 was able to completely block the increase (Fig. 4B). On the other hand, in the cells whose HO-1 expression was knocked down by transfection with HO-1 siRNA, the inhibitory effect of KMS99220 on iNOS expression was reversed, and there was no significant difference in the iNOS level between the LPS-alone and the LPS+KMS99220-treated cells (Fig. 4C). Pharmacological inhibition of HO-1 with SnPP also dose-dependently alleviated the downregulating effect of KMS99220 on iNOS, completely blocking the effect at 10 μM SnPP (Fig. 4D). Taken together, the suppression of iNOS expression by KMS99220 appeared to be mediated by HO-1.
AMPK mediates the anti-inflammatory effect of KMS99220
We asked whether AMPK might also play a role in the inhibitory effect of KMS99220 on the NO system. As shown in Fig. 5A, pretreatment with the AMPK inhibitor Ara A was able to dose-dependently reverse the inhibitory effect of KMS99220 on the LPS-induced iNOS expression. This correlated well with the amount of NO produced under this condition (Fig. 5B). Compound C, another pharmacological inhibitor of AMPK, showed a similar effect (Fig. 5C). Taken together with the finding that these AMPK inhibitors also suppress HO-1 induction by KMS99220 (Fig. 2), these results suggested that AMPK likely acts upstream of HO-1 in mediating the anti-inflammatory effect of KMS99220.
KMS99220 can exert early anti-inflammatory effects independent of Nrf2
Since the effect of KMS99220 on the IκB/NFκB system occurred within 1 h, and HO-1 mediated this response, the HO-1 induction would be expected to be increased at an early time point. As shown in Fig. 6A, the HO-1 mRNA level was indeed elevated within 1 h of the KMS99220 treatment. As the HO-1 gene is known to be under the control of the transcription factor Nrf2, it was possible that this early induction occurred via Nrf2. However, when we examined the expression levels of NQO1, GCLM and GCLC, whose gene expressions are also known to be under the control of Nrf2 signaling, they were not changed in this time frame (Fig. 6A). This suggested that a mechanism independent of Nrf2 signaling might be involved in the KMS99220-induced early expression of HO-1. To test this notion, we asked if the KMS99220 effect is still present in cells whose Nrf2 expression has been knocked down. For this, BV2 cells were transfected with Nrf2 siRNA and the obliteration of Nrf2 mRNA was confirmed (Fig. 6B).
When these cells were exposed to KMS99220 for only 1 h followed by a brief LPS challenge, the increase in the expression of HO-1 was evident compared to the LPS-alone control (Fig. 6C). The degree of HO-1 induction in the Nrf2 knockdown cells was not smaller than that in the control cells. This indicated that the early HO-1 induction observed after KMS99220 exposure indeed did not require Nrf2. Western blot analysis on the same samples against phosphorylated IκB revealed that the inhibition of IκB activation by KMS99220 still occurred in the absence of Nrf2 (Fig. 6C). As expected, the subsequent expression of iNOS was still downregulated in the Nrf2 knockdown cells in a manner not different from the control cells (Fig. 6D). For further confirmation, we performed the same test in BV2 microglial cells whose Nrf2 had been stably knocked down by introducing Nrf2sh [28] (Fig. 6E). Again, KMS99220 effectively induced HO-1 expression and inhibited IκB phosphorylation in the Nrf2 knockdown cells (Fig. 6F). In addition, the downregulation of iNOS expression still occurred in these cells (Fig. 6G). Taken together, these results indicated that the KMS99220-induced early induction of HO-1 and the subsequent anti-inflammatory response occurred independently of the presence of Nrf2.
DISCUSSION
With the increasing evidence that neuroinflammation plays a vital role in neurodegeneration, candidate drugs that target neuroinflammation toward therapy for neurodegenerative diseases are being actively sought. In the present study, we show that our novel morpholine-containing chalcone KMS99220, which was previously shown to possess a neuroprotective property with excellent pharmacological properties [16], also exerts anti-inflammatory effects on microglia, and that this is mediated by AMPK activation followed by HO-1 induction that occurs independent of the presence of Nrf2. The present study shows that KMS99220 can suppress activation of NFκB, the inflammation-associated transcription factor, expression of its downstream gene iNOS, and production of the inflammatory mediator NO. This inhibition of inflammatory signaling by KMS99220 appears to be mediated by HO-1 induction: KMS99220 is able to induce HO-1 gene expression, and the inhibitory effects of KMS99220 on iNOS expression and IκB activation are lost in both HO-1 siRNA-transfected cells and HO-1 inhibitor-treated cells. This is in line with the previous reports on the anti-inflammatory role of HO-1. The exact mechanism of the anti-inflammatory activities mediated by HO-1 remains unclear, but the enzymatic metabolites bilirubin and carbon monoxide are thought to be involved. For example, it has been reported that bilirubin inhibits iNOS expression and NO production in response to endotoxin in rats [32], and that carbon monoxide attenuates NO production and NFκB activation in LPS-induced endothelial cells [33]. We show in the present study that the early induction of HO-1 expression, which occurs within 1 h of KMS99220 exposure and is associated with the anti-inflammatory response, might take place independent of the transcription factor Nrf2. This was demonstrated by the finding that HO-1 expression and anti-inflammatory responses still occur in the absence of Nrf2. In addition, the expression levels of other Nrf2-dependent genes, such as NQO1, GCLM and GCLC, were not increased in this time frame. We show that the early expression of HO-1 is dependent on AMPK signaling.
KMS99220 was able to activate AMPK within 15 min and induce HO-1 expression, and inhibition of AMPK activity abolished the HO-1 induction. Other transcription factors reported to be involved in HO-1 induction include CREB [34,35] and FOXO1 [36,37], and they are reported to be activated by AMPK [38,39]. Therefore, it is possible to speculate that AMPK, upon activation by KMS99220 and subsequent translocation into the nucleus, acts on any of these transcription factors and leads to the early HO-1 expression. We have previously shown that KMS99220 is able to bind to Keap1, the inhibitory protein for Nrf2, and lead to Nrf2 activation [16].
[Fig. 6 caption: KMS99220 exerts early anti-inflammatory effects independent of Nrf2. (A) BV2 cells were treated with 10 μM KMS99220 for 0.5 or 1 h, and RT-PCR was performed against HO-1, NQO1, GCLM and GCLC using GAPDH as a loading control. (B-D) BV2 cells were transfected with siRNA for control or Nrf2 for 24 h. (B) RT-PCR against Nrf2 was performed to confirm the knockdown. (C) The cells were treated with 10 μM KMS99220 for 1 h, and then with 0.2 μg/ml LPS for 0.5 h. Western blot analysis was performed against HO-1 and p-IκB using β-actin as a loading control. (D) The Nrf2 knockdown cells were exposed to 10 μM KMS99220 and/or 0.2 μg/ml LPS for 6 h, and RT-PCR was performed against iNOS. (E) RT-PCR against Nrf2 was performed to confirm the knockdown in the GFPsh cells and Nrf2sh cells. (F) GFPsh cells and Nrf2sh cells were treated with 10 μM KMS99220 for 1 h, and then with 0.2 μg/ml LPS for 0.5 h. Western blot analysis was performed against HO-1 and p-IκB. (G) GFPsh cells and Nrf2sh cells were exposed to 10 μM KMS99220 and/or 0.2 μg/ml LPS for 6 h, and RT-PCR was performed against iNOS. For quantitation, values obtained from densitometry were normalized against the respective GAPDH or β-actin, and expressed as fold of the untreated (A), control siRNA (B), GFPsh (E), or respective LPS-treated control (C, D, F and G).]
Because HO-1 is a target gene of Nrf2, it was possible that the KMS99220-induced HO-1 elevation occurred via this signaling. However, we noted a discrepancy in the time course, in that the suppression of IκB phosphorylation and NFκB nuclear localization by KMS99220 took place faster than the increase in HO-1 resulting from Nrf2 activation. This led us to ask whether there might be another, earlier mechanism for activation of the anti-inflammatory response. The present study shows that AMPK activation and the resulting expression of HO-1 occur faster, within the same time window as the NFκB activation. Since KMS99220 also interacts with Keap1, it can be postulated that the KMS99220-induced HO-1 induction is biphasic, occurring first via AMPK activation and later via Nrf2 signaling (Fig. 7). KMS99220 appears to act as a direct activator of AMPK, because it led to increased enzyme activity of purified AMPK protein. Studies have suggested the utility of AMPK-activating compounds as therapeutic agents for neuroinflammatory disorders. In microglial cells, pharmacological activation of AMPK using the direct AMPK agonists 5-amino-4-imidazole carboxamide riboside and ENERGI-F704 lowered the LPS-induced production of TNF-α, IL-6 and iNOS and the nuclear translocation of NFκB [2,3]. In addition, activation of AMPK resulting from exposure to phytochemicals such as (+)-catechin, resveratrol, and lycopene has also been linked to suppression of microglial activation [13,40,41].
In macrophages, overexpression of AMPK results in decreased production of TNF-α and IL-6 after LPS exposure, whereas knockdown of AMPK expression leads to increased production of these proinflammatory cytokines [4]. In addition, macrophages generated from AMPKα1-deficient mice exhibited an enhanced inflammatory response [5], and AMPK signaling downregulates the function of the NFκB system [4,6]. In conclusion, our novel chalcone compound KMS99220 activates AMPK signaling, leading to downregulation of the inflammatory response in microglia, and this appears to be mediated by HO-1 that is induced Nrf2-independently downstream of AMPK at an earlier time point. Together with our previous finding that KMS99220 exhibits an excellent neuroprotective property and pharmacokinetic profile, the compound might be utilized as a potential candidate for therapy of neuroinflammation- and neurodegeneration-associated disorders.
An exploration of technology acceptance among nursing faculty teaching online for the first time at the onset of the COVID-19 pandemic Background The COVID-19 pandemic has brought to the forefront the importance for schools of nursing to use creative and innovative tools that are of high quality and accessible to learners. Faculty who may have been resistant to teaching online prior to the pandemic, no longer had the option to teach face-to-face and were mandated to teach online despite any apprehensions they may have had. Purpose The purpose of this study was to learn more about faculty attitudes and acceptance of teaching online by applying the Technology Acceptance Model to nursing faculty teaching online for the first time during the COVID-19 pandemic. Methods This descriptive-correlational study used an online survey tool to explore factors related to technology acceptance among nursing faculty teaching online for the first time during the COVID-19 pandemic. A sample of 87 full-time and part-time nursing faculty completed an adapted version of the Faculty Acceptance Survey. Results Findings from this study revealed an overall enjoyment of teaching online, confidence in online teaching skills and comfort with technology. However, findings also indicated struggles with workload balance, inferior interactions with students and the need for additional support. Conclusion Findings from this study demonstrate that nursing faculty are generally accepting of technology and positive outcomes are possible if identified concerns are addressed and positive feelings are fostered and supported. Introduction The resistance of faculty to online education is well documented (Ahmed & Ward, 2016;Gratz & Looney, 2020;Lloyd, Byrne, & McCoy, 2012;Mitchell, Parlamis, & Claiborne, 2015). Reasons for resistance include factors such as fear of the unknown, a discipline not being suited to online teaching, an absence of time for online course preparation, and a lack of skills or confidence in teaching online, as well as lack of formal training (Gratz & Looney, 2020;Mitchell et al., 2015). The COVID-19 pandemic has brought to the forefront the need for schools of nursing to use creative and innovative teaching tools and strategies that are of high quality and accessible to learners. Faculty who may have been resistant to teaching online prior to the pandemic, no longer had the option to teach face-to-face and were mandated to teach online despite any apprehensions they may have had. In as much as some nursing faculty may prefer face to face teaching/learning situations, the pandemic has spotlighted the significance of online teaching/learning. According to the Technology Acceptance Model (TAM) (Davis, 1989), there are 2 constructs that can determine faculties' willingness to teach in the online environment. These include perceived usefulness (PU) and the perceived ease of use (PEU) (Davis, 1989). PU corresponds with the belief that technology is needed to effectively carry out or enhance one's job functions. PEU is related to the degree of difficulty one anticipates from learning or using a technological tool. 
The TAM was later revised to the TAM2, which extended the TAM to include factors that influence PU (Venkatesh & Davis, 2000) such as social influence processes which corresponded with concepts such as subjective norms, voluntariness, and image and cognitive instrumental processes, which corresponded with concepts related to job relevance, output quality, result demonstrability, and perceived ease of use (Venkatesh & Davis, 2000). While the TAM and the TAM2 have been applied to a variety of areas in higher education, there are few studies that have focused on the TAM as it relates to teaching online in the area of higher education (Alsofyani, Aris, Eynon, & Majid, 2012;Gibson, Harris, & Colaric, 2008;Huang, Deggs, Jabor, & Machtmes, 2011), and even fewer that have explored the TAM or the TAM2 as it relates to faculty in nursing education (Tacy, 2018). Factors related to technology acceptance among nursing faculty is an important concept to study because online nursing programs continue to increase, and faculty cooperation is essential to the success of these programs (Blundell, Castaneda, & Lee, 2020;Walters, Grover, Turner, & Alexander, 2017). Furthermore, faculty in the area of higher education were compelled to teach online during the COVID-19 pandemic despite any previous resistance they may have had to this modality. These areas of resistance may have impacted their ability to teach online in an effective way or revealed other barriers otherwise unknown. Identifying factors that contribute to technology resistance among nursing faculty teaching online for the first time during the COVID-19 pandemic may identify barriers, resources and needed support related to technology acceptance that may assist nursing faculty in teaching online effectively. The purpose of this study was to learn more about faculty attitudes and acceptance of teaching online by exploring a variety of concepts related to PU and PEU among nursing faculty teaching online for the first time during the COVID-19 pandemic. This quantitative research extends the Technology Acceptance Model by applying it to nursing faculty in higher education. Review of literature According to Wingo, Ivankova, and Moss (2017), higher education faculty in the United States are increasingly being required to teach online however, there is unwillingness among faculty to accept online teaching. Among reasons for resistance were fear of change, concerns about reliability of technology systems, skepticism about students' outcomes and concerns about workload (Wingo et al., 2017). Wingo et al. (2017) highlighted that it is critical for institutions of higher education to foster faculty acceptance of online education methods through the use of training and support. Chow, Herold, Choo, and Chan (2012) noted that healthcare researchers are noticeably lagging in showing the usefulness of technology acceptance and cited the Technology Acceptance Model (TAM) as being predictive in its ability to bring to light the constructs that have an influence on the intentions of individuals to use technology. The TAM conceptualizes an individual's behavioral intention to use technology systems and is determined by two factors. First is the technology's perceived usefulness (PU), that is, "the extent to which an individual believes the technology system will enhance his/her work performance" (Venkatesh & Davis, 2000, p 187). 
Second is the perceived ease of use (PEU), that is, the extent to which an individual believes using a technology system will require little to no effort to be used accurately (Venkatesh & Davis, 2000). The TAM focuses mainly on behavioral intention and actual behavior (Chen, Yang, Tang, Haung, & Yu, 2008). Chen et al. (2008) suggest that behavioral intention is the most significant determinant of behavior. The authors further suggest that there are a few studies that have explored nurses' behavioral intentions toward web-based learning but that these studies lack a theoretical framework to explore the determinants of web-based learning. Furthermore, these studies do not address nursing faculty acceptance of technology. Among the few studies to explore technology acceptance among nursing faculty, Tacy (2018) noted that nursing faculty may experience stress when teaching traditional nursing courses in non-traditional ways due to the expectation of teaching, stimulating, and facilitating learning using technology. Tacy (2018) further suggests that it is technology and its integration into teaching that may create stress, which may affect nursing faculty attitudes toward the use of technology, consequently interfering with job performance and satisfaction. Hence it is important to identify stress related to the use of technology systems, or technostress, and to use effective stress reduction techniques to decrease the stress and improve the quality of teaching and learning (Tacy, 2018).
The TAM and TAM2
The TAM is a model that is used to determine how an individual's beliefs and values may impact their intention to use technology (Davis, 1989). Though originally applied to the area of business, since its development in 1989 the TAM has been widely used and noted for its applicability to a vast array of disciplines including business, education, and health care (Abdullah & Ward, 2016; Jokar, Noorhosseini, Allahyari, & Damalas, 2017; Pando-Garcia, Periañez-Cañadillas, & Charterina, 2016; Scherer, Siddiq, & Tondeur, 2019). The premise behind the TAM is that attitudes and beliefs predict intention, and intention predicts behavior (Cheng, 2019). The origins of the TAM are deeply rooted in the Theory of Reasoned Action (TRA), which holds that an individual's intention toward an action is significantly impacted by their beliefs as well as the consequences of that action (Ajzen & Fishbein, 1980; Teo, 2012). The TAM was further extended to demonstrate the relationship between specific factors that had the potential to influence technology acceptance, resulting in the development of the TAM2. Researchers found that PEU was significantly affected by computer self-efficacy both before and after use (Davis & Venkatesh, 1996), while PU was significantly affected by social influence processes and cognitive instrumental processes (Venkatesh & Davis, 2000). Social influence processes included factors such as subjective norms, voluntariness, experience and image, whereas cognitive instrumental processes included factors such as job relevance, output quality, result demonstrability, and perceived ease of use (Venkatesh & Davis, 2000). The TAM2 is a validated framework that has been successfully applied to a variety of disciplines to determine factors related to technology acceptance (Khoa, Ha, Nguyen, & Bich, 2020; Venkatesh & Davis, 2000; Wingo et al., 2017).
While this model was developed several years ago, recent studies applying it to a variety of healthcare and educational contexts demonstrate that it is still relevant today (Granić & Marangunić, 2019; Rahimi, Nadri, Afshar, & Timpka, 2018; Salloum, Alhamad, Al-Emran, Monem, & Shaalan, 2019). The flexibility of this model is also demonstrated in its applicability to modern constructs such as virtual reality and social network sites (Sagnier, Loup-Escande, Lourdeaux, Thouvenin, & Valléry, 2020; Weerasinghe & Hindagolla, 2018). The TAM2 is particularly suited to an investigation of faculty perceptions related to technology and teaching online, as it was developed to explore beliefs related to technology as well as beliefs about how the use of technology might affect an individual's role in their organization (Wingo et al., 2017). This study will answer the following research questions:
1. What are the beliefs and attitudes of nursing faculty teaching online for the first time during the COVID-19 pandemic related to the perceived ease of online education?
2. What are the beliefs and attitudes of nursing faculty teaching online for the first time during the COVID-19 pandemic related to the perceived usefulness of online education?
3. Do nursing faculty teaching online for the first time during the COVID-19 pandemic prefer teaching online compared to face-to-face teaching?
Study design
This descriptive-correlational study used an online survey tool to explore factors related to technology acceptance among nursing faculty teaching online for the first time during the COVID-19 pandemic. An Institutional Review Board (IRB) exemption was granted by Lehman College, City University of New York (CUNY) prior to the recruitment of participants. Completion of the survey tool implied consent to voluntarily participate in this study.
Sample
All full-time and part-time nursing faculty teaching online for the first time during the COVID-19 pandemic, which coincided with the Spring 2020 semester, were eligible to participate in this study. I recruited faculty virtually from the Sigma Theta Tau International (SIGMA) Nursing Honor Society Nurse Educator Forum, the American Association of Colleges of Nursing CONNECT Forum and several Facebook groups with a focus on Nursing Education. Faculty who did not meet the inclusion criteria were excluded from the study. A power analysis was conducted using a population size of 10,568 full-time nurse educators, as reported by the National League for Nursing (National League for Nursing, 2020). Although part-time educators were invited to participate, a population size for both full-time and part-time educators was not available. A confidence interval of 95% and a margin of error of 10% yielded a proposed sample size of 96 participants (a worked calculation is sketched below). Based on a literature review of previous studies related to the TAM and college educators, I sought a goal of 100 participants for this study (Alsofyani et al., 2012; Gibson et al., 2008; Huang et al., 2011). Participants were recruited between February 2021 and March 2021.
Instrument
The TAM2 scales of perceived usefulness, perceived ease of use, and behavioral intention were measured using items adapted from a 44-item survey exploring the predictive power of the TAM2 in relation to faculty intent to teach online, called the Faculty Acceptance Survey (Stewart, Bachman, & Johnson, 2010).
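As referenced in the Sample section above, a sample of roughly 96 from a population of 10,568 at a 95% confidence level and a 10% margin of error can be reproduced with Cochran's formula plus a finite-population correction (assuming p = 0.5); the study does not state which formula was used, so this is an illustrative assumption rather than the authors' exact calculation.

```python
import math

def sample_size(N, z=1.96, margin=0.10, p=0.5):
    """Cochran's sample size with a finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / N)                 # correct for population of size N
    return math.ceil(n)

print(sample_size(N=10_568))   # -> 96
```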
In addition to demographics, the survey measured the TAM2 constructs including PU and PEU as well as additional variables related to online teaching acceptance. Survey categories included Computer Use, Ease of Use, Perceived Usefulness, Faculty Motivations for Online and Traditional Instruction, Faculty Acceptance, Faculty Intent, and Faculty Support & Development Opportunities. The survey tool was originally piloted with a group of six college-level administrators and 121 faculty members employed at a large, public, open-enrollment university and demonstrated a high measure of consistency, with Cronbach's alphas of 0.63 (PEU), 0.95 (PU) and 0.82 (Facilitating Conditions) (Stewart et al., 2010). Results from that study demonstrated that the TAM2 was an effective framework to determine faculty intent to teach online, related beliefs about teaching online, and faculty acceptance. The survey was revised in this study so that it applied to various Learning Management Systems (LMS) and to faculty from different colleges/universities, with a focus on faculty motivations for teaching online. Content validity for the revised tool was determined by two nursing faculty research experts. The adapted survey was pilot tested with a group of 28 participants at a single site with faculty from a variety of disciplines prior to use in this study. Minor changes were made to make the survey more applicable to multi-site faculty from a single discipline, such as including questions about college/university demographics and learning management systems. In addition to demographic questions, the tool included Likert-style questions with a variety of scales with possible response ranges such as "Not Comfortable" to "Very Comfortable", "Not Useful" to "Very Useful", "Not Easy" to "Very Easy", and "Strongly Disagree" to "Strongly Agree". The revised tool demonstrated good internal consistency with a Cronbach's alpha of 0.937. The survey was distributed through LimeSurvey, a versatile online survey tool.
Data collection
An invitation to participate in this study was posted to various nurse educator forums including the Sigma Theta Tau International (SIGMA) Nursing Honor Society Nurse Educator Forum, the American Association of Colleges of Nursing CONNECT Forum and several Facebook groups with a focus on Nursing Education. The invitation explaining the purpose of the study, eligibility requirements and a link to the survey was posted on each platform twice a month (every 2 weeks) in February 2021 and March 2021.
Data analysis
Frequency distributions were evaluated for the PEU and PU variables. Frequency tables and bar plots were used to interpret the data. Percentage values for selected variables are reported below. The relationship between select variables and preference for teaching online was examined through a one-sided t-test.
Results
One hundred and twenty-six participants responded to the survey invitation. Of those, six respondents who taught online prior to Spring 2021 were removed. In the end, 120 responses were used in the final analysis. Demographic data for the study participants are listed in Table 1.
Perceived Ease of Use (PEU)
Three items assessed PEU using a four-point response scale ranging from 1 (not at all easy to use) to 4 (very easy to use). For these items, many participants found it somewhat easy to find educational resources online (39%), become more skillful in using educational technology (45%) and use their Learning Management System (LMS) (42%).
The relationship between age and PEU and PU variables were evaluated. Participants in the 20-40 year age range were statistically significantly more comfortable with using internet based social networking programs than older age groups (p < 0.001). Participants in lower age groups (<40 years) were statistically significantly more likely to agree that teaching online allowed them more time to dedicate to home responsibilities (p < 0.05). There were no other statistically significant differences between age and PEU and PU variables. Perceived Usefulness (PU) PU is comprised of a variety of factors including subjective norms, voluntariness, experience, image, job relevance, out-put quality and result demonstrability (Davis, 1989). Subjective norms Subjective norms refer to the extent to which an individual believes that others in the organization find value in technology (Venkatesh & Davis, 2000). This concept was captured under the category Faculty Support & Development Opportunities as this section focused on the value that the institution placed on preparing educators to teach online. Voluntariness In this study, online teaching was compulsory due to the COVID-19 pandemic, therefore voluntariness was not assessed. Experience Even though all participants were teaching fully online for the first time during the Spring 2021 academic semester, most of them (73%) had some experience with their Learning Management System prior to Spring 2020. The vast majority of participants felt comfortable using a computer (80%) and search engines such as Google (85%). Most participants also rated using a variety of software tools very often, such as Microsoft Word (89%), Microsoft PowerPoint (74%), the internet (97%) and email (97%). Image A large percentage (83%) of participants felt that it was important for online degree programs at their College/University to be recognized as being of high-quality. However, most participants (74%) also felt that an online degree was not as prestigious as a degree earned by taking face-toface classes. Job relevance Many participants felt that students who completed online degrees would have the same opportunities in the workforce as students who completed face-to-face degrees (62%) and to attend graduate school (76%). Almost all participants felt that it was important for students completing online degrees to have the same learning opportunities as face-to-face graduates (96%) and the same post-graduate opportunities as face-to-face graduates in terms of hiring opportunities and attending graduate school (93%). Output quality Almost all participants (99%) found educational technology such as their LMS and tools such as YouTube to be useful for content delivery. Almost all participants also found the tools in their Learning Management System (LMS) to be useful in helping to meet their learning objectives. Among the most useful tools were the features that allowed them to share their course Syllabus (94%), weblinks/media files (94%) assignments (95%), the Announcements feature (95%), send emails (91%) and the Grade Center (91%). While many participants (74%) felt that teaching online would make their teaching less effective than teaching face-to-face, 95% felt that online education was at least somewhat effective for student learning. Result demonstrability Most participants felt that teaching online left them with less time to dedicate to other teaching responsibilities (61%), research responsibilities (58%) and service responsibilities (55%). 
However, 49% of participants felt that teaching online allowed them more time to dedicate to home responsibilities compared to 46% that disagreed. Faculty motivations for online and traditional instruction Commuting related issues such as wear and tear on car, gas, and mileage was not a significant motivating factor for teaching online as only 50% of participants agreed with this statement and the other 50% either disagreed or neither agreed nor disagreed. Courses being scheduled at inconvenient locations also did not play a significant motivating factor with only 42% of participants agreeing with this statement. More participants were motivated to teach online because they enjoyed teaching online (45%), than those who did not have this motivation (22%) and 48% felt confident in their online teaching skills as opposed to 18% who were not motivated by confidence. Only 12% of participants were motivated to teach online by a belief that students would learn more in online classes than in hybrid or face-to-face classes and only 3% were motivated to teach online because of the financial incentives provided for online teaching. Additionally, only 21% felt fearful of teaching online and 28% felt excited. An additional 51% felt neither fearful, nor excited. In regard to teaching face-to-face, 92% of participants agreed that they enjoyed teaching face-to-face and 86% preferred it over teaching online due to the ability to interact with students. Most participants also felt that students desire (72%) and learn more in (61%) face-to-face classes versus online classes. However, there was no significant difference between those who felt that teaching online was frustrating and cumbersome (41% agree, 45% disagree, 14% neither agree nor disagree). Only 25% of participants felt that their student evaluations would suffer due to teaching online, but many participants felt that it was more difficult to communicate with (56%) and assess students effectively (62%) online. Most participants also felt they were more responsive to students in face-to-face classes (57%) and more motivated while teaching face-to-face classes (60%). A majority of participants did not mind commuting to school (62%) and felt they were scheduled to teach at convenient times (56%) and locations (61%). In addition, many participants found face-to-face classes easier to teach than online classes (66%) and felt that online teaching required more effort than face-toface teaching (76%). Faculty acceptance Most participants (76%) felt that students who completed online degrees would have the same opportunities to attend graduate school as students who completed face-to-face degrees, and 53% felt that students who complete online degrees would have the same opportunities in the workforce as students who complete face-to-face degrees. However, only 12% agreed that an online degree was as prestigious as a degree earned by taking face-to-face classes. Faculty intent to teach online A majority of participants expressed interest in teaching online (63%) and receiving additional training (68%). Most participants were also interested in receiving training from a certification program (66%) and having their online courses evaluated by peers (60%). Faculty support & development opportunities In terms of support and development opportunities, almost all participants (85%) found support services such as 24/7 LMS support and tutorials as important. 
Most participants also found additional services as important such as e-library resources (86%), library tutorials (78%), a virtual writing center (76%), a virtual advising center (79%), a virtual student services center (82%), and a virtual student with disabilities center (94%). Almost all participants (98%) also agreed that it was important for faculty to be trained in how to offer good online courses and that online degree programs at their College/University were recognized as being of high-quality (83%). Participants also felt that it was important for students completing online degrees to have the same learning (96%) and post-graduate (93%) opportunities as face-to-face graduates in terms of hiring opportunities and attending graduate school. Preference for teaching online To test whether nursing faculty preferred online teaching compared to face-to-face teaching, attention was restricted to those who expressed some level of agreement or disagreement with select variables. Participants scoring in the neutral category (4 = 'Neither agree nor disagree') were excluded from the analysis as well as those with missing data. We performed a one-sided t-test evaluating the following hypothesis: • H0: probability that online is preferred is ≤0.5 • H1: probability that online is preferred is >0.5 Small p-values (p < 0.05) led to rejection of the null hypothesis. p-Values less than 0.05 indicated a statistically significant preference for online teaching among those who were not neutral. Table 2 represents the variables with statistically significant results indicating a preference for online teaching among those not neutral. Discussion There is a lack of research that addresses nursing faculty's use of technology. The application of the TAM2 to nursing education provides an opportunity to expand teaching and learning capacity. As such, it is important to understand factors that contribute to nursing faculty acceptance of technology and online education. This study has identified several important findings among nursing faculty teaching online for the first time during the COVID-19 pandemic such as an overall enjoyment of teaching online, confidence in online teaching skills and comfort with technology such as Learning Management Systems (LMS). The following discussion will explore highlighted findings as they relate to the TAM2 scales. Perceived Ease of Use (PEU) One of the few studies conducted with nursing faculty related to technology acceptance found that educators perceived stress when they were unable to adapt to and use technology in a healthy manner (Tacy, 2018). Other studies among college educators found that stress related to PEU was typically due to technological barriers as well as the time it took to learn and use new technology (Bolliger & Wasilik, 2009;DeGagne & Walters, 2010;Green, Alejandro, & Brown, 2009). In addition to this, previous studies have identified computer self-efficacy and faculties' beliefs about their own computer skills and competence as a barrier to satisfaction with teaching online (Zhen, Garthwait, & Pratt, 2008). However, this current study found that nursing faculty were confident in their ability to use computers and were comfortable using technology and managing their learning management systems (LMS). In fact, all participants felt either somewhat comfortable, comfortable or very comfortable using a desktop or laptop computer and using internetbased search engines. 
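The preference analysis described above — exclude the neutral and missing responses, then test whether the proportion favouring online teaching exceeds 0.5 — can be sketched with an exact binomial test on the dichotomized counts. The counts below are hypothetical; the authors report using a one-sided t-test, for which this is a close analogue on the same hypothesis.

```python
from scipy.stats import binomtest  # requires scipy >= 1.7

# Hypothetical counts for one item after excluding "neither agree nor disagree"
# responses and missing data.
n_prefer_online = 42   # respondents agreeing with the online-favouring statement
n_total = 60           # all non-neutral respondents

# H0: P(online preferred) <= 0.5   vs   H1: P(online preferred) > 0.5
result = binomtest(n_prefer_online, n_total, p=0.5, alternative="greater")
print(f"proportion = {n_prefer_online / n_total:.2f}, one-sided p = {result.pvalue:.4f}")
# p < 0.05 would indicate a statistically significant preference for online teaching
# among the non-neutral respondents.
```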
Ninety-eight percent of respondents reported feeling some level of comfort using their LMS to teach online and 90% of respondents found some level of ease in finding online educational resources to assist with teaching. Perceived Usefulness (PU) Perceived usefulness is defined as the degree to which a person believes that using a particular technology will enhance his or her job performance (Davis, 1989). PU is comprised of a variety of factors including subjective norms, voluntariness, experience, image, job relevance, out-put quality and result demonstrability. In regard to PU, 90% of participants felt that educational technology was useful for content delivery, however, 74% felt that teaching online was less effective for them than teaching face-to-face. Additional variables related to PU are further discussed in the following sections. Preference for teaching online This study found that a statistically significantly greater percentage of nursing faculty enjoyed teaching online (p < 0.05). In addition, a statistically significant percentage of participants felt confident in their online teaching abilities (p < 0.001). These are important findings to note, as previous research has shown that faculty who were more confident about their technical skills were more willing to teach online (DeGagne & Walters, 2010;Green et al., 2009). Some of the variables that did not support a preference for teaching online were those related to workload. Regarding these variables, approximately 60% of respondents felt that teaching online allowed less time to dedicate to other teaching responsibilities, research, and service responsibilities. This finding is consistent with previous studies indicating that many faculty members felt teaching online required more time and effort than face-to-face teaching (Bacow, Bowen, Guthrie, Lack, & Long, 2012;DeGagne & Walters, 2010;Huang et al., 2011;Lloyd et al., 2012;Mason et al., 2010). In the TAM2, these variables related to the construct Result Demonstrability. Result demonstrability relates to the perceived tangible results or benefits that a technology offers (Venkatesh & Davis, 2000). Previous studies also identified dissatisfaction with other tangible benefits such as compensation where many faculty members felt they were not adequately compensated when teaching online (Bacow et al., 2012;DeGagne & Walters, 2010;Huang et al., 2011;Lloyd et al., 2012;Mason et al., 2010). This current study also found that nursing faculty did not feel there was adequate financial incentive for teaching online. In addition to time and workload, another variable related to the PU construct of Result Demonstrability was time with students. Nursing faculty felt that they were less interactive with and less responsive to students in online courses as opposed to those in face-to-face courses. This is a finding that has been reported in multiple studies by students enrolled in fully online courses (Murphy & Stewart, 2017;Sorensen & Donovan, 2017). In general nursing faculty felt it was easier to assess, teach and grade students in face-to-face classes as opposed to online courses. Another variable related to the construct Result Demonstrability that did not support a preference for teaching online was related to student evaluations. While only 25% of participants felt their student evaluations would suffer as a result of teaching online, only 25% felt their evaluations would improve. 
This finding was contradictory to other reported findings in this study that reported a high confidence in online teaching abilities; however, this is not a new discovery as previous research has shown that concerns about tenure, promotion and poor student evaluations were a concern among faculty teaching online in the area of higher education (Gaytan, 2009;Green et al., 2009;Mason et al., 2010;Orr, Williams, & Pennington, 2009). Participants in this study did not believe that students desired online courses more than face-to-face courses or that students learned more in online courses. These variables related to the PU construct Output Quality in the TAM2, which is concerned with how effective technology is in accomplishing specific tasks (Venkatesh & Davis, 2000). Previous studies in the area of higher education have connected this construct to the effectiveness and usability of the institutional Learning Management System (LMS), educational technology tools and students' ability to navigate technologies successfully (Bolliger & Wasilik, 2009;Green et al., 2009;Ward, Peters, & Shelley, 2010). This raises the question of whether students are provided with training related to taking online course or are adequately prepared to be successful in online classes. An area of need that was revealed in this study was the need and desire for faculty support and development opportunities. In the TAM2, these variables fall under the PU construct, Subjective Norms. Subjective norms refer to the extent to which an individual believes that others in the organization find value in technology (Venkatesh & Davis, 2000). Previous studies revealed that positive communications from administrators about the reasons for teaching online and institutional goals and policies that aligned with online education were supportive of positive subjective norms (Betts & Heaston, 2014;Huang et al., 2011;Maguire, 2009;Orr et al., 2009;Wang & Wang, 2009;Wickersham & McElhany, 2010). An additional factor that was associated with subjective norms was institutional support such as instructional design support and access to proctoring software (Chapman, 2011;Wickersham & McElhany, 2010). This finding was supported in this current study where an average of 96% of participants reported faculty support and development opportunities as important. In terms of the PU construct Image, most participants felt that an online degree was not as prestigious as a degree earned by taking face-toface classes. This finding is supported by previous findings that suggested faculty believed that online degrees and student outcomes were inferior to those of face-to-face degrees and programs (Allen & Seaman, 2012;Allen & Seaman, 2015;Bacow et al., 2012;McQuiggan, 2012). Implications for nursing education This study highlights the need for education and training in developing innovative ways to engage students in online courses. This study also suggests that support is needed for students taking online courses to help them be successful. Additionally, academic leaders such as Deans and Chairs should be aware of the impact that factors such as workload and course evaluations may have on faculty that are either new to or struggling with teaching online. These factors may also have a significant impact on tenure and promotion results. As a result, administrators might consider offering release time or reduced workloads to offset this barrier (Lloyd et al., 2012). 
Future research
There is an opportunity for further research into the impact that factors such as workload and student evaluations have on the attainment of tenure and promotion for faculty teaching primarily online. Additionally, there is a need to explore beliefs about the prestige of online degrees and courses, as this was identified as an area of concern among faculty. While the purpose of this study was to learn more about nursing faculty attitudes and acceptance related to teaching online, further studies might explore the relationship between demographic variables such as age and attitudes related to teaching online, as well as the availability of supportive resources at private vs. public schools.
Limitations
One significant limitation of this study was the limited ability to explore the TAM2 construct Voluntariness. Nursing faculty members who were teaching online during the COVID pandemic were compelled to teach online after physical campuses closed to minimize exposure and spread of the virus. In addition, faculty were given a very short period of time to adapt their courses to the online environment. This type of introduction to a new mode of teaching can be traumatic for faculty and can induce negative feelings due to barriers and circumstances beyond their own or their administrators' control. These feelings may have slightly biased responses. Also, the availability of resources for transitioning online may vary from institution to institution, which was not a concept explored in this study. An additional limitation of this study was the self-selection process of participants. It is possible that participants who felt very positive about their online experience may have chosen to participate, whereas those who felt less confident and successful in their first endeavor teaching online may have elected not to participate.
Conclusion
In order for online programs to be successful, faculty must support and buy in to the ideas and vision shared by academic leadership (Blundell et al., 2020; Walters et al., 2017). As online programs in the area of nursing education continue to increase, it is important to understand barriers identified by faculty who may have previously been resistant to teaching online. This study revealed encouraging data about the relatively high confidence and comfort that nursing faculty have with teaching online and overall positive attitudes about teaching online. Areas of concern were workload balance, inferior interactions with students and the need for additional support. Overall, these findings demonstrate that nursing faculty are generally accepting of technology and that positive outcomes are possible if identified concerns are addressed and positive feelings are fostered and supported.
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Survey items (each rated on a 1-5 scale):
- That the College/University online degree programs were recognized as being of high-quality
- That students completing online degrees had the same learning opportunities as face-to-face graduates
- That students completing online degrees had the same post-graduate opportunities as face-to-face graduates in terms of hiring opportunities and attending graduate school
2022-04-21T13:10:34.685Z
2022-04-20T00:00:00.000
{ "year": 2022, "sha1": "d4f6109a1c43c7343fefca4adb65e01eef0a2b0f", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.profnurs.2022.04.002", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "b7493e69fd75ef25346942f098f0ecb53cf61986", "s2fieldsofstudy": [ "Education", "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
119267405
pes2o/s2orc
v3-fos-license
VHE gamma-ray observations of transient and variable stellar objects with the MAGIC Telescopes
Galactic transients, X-ray and gamma-ray binaries provide a proper environment for particle acceleration. This leads to the production of gamma rays with energies reaching the GeV-TeV regime. MAGIC has carried out deep observations of different transient and variable stellar objects, of which we highlight four here: LSI+61 303, MWC 656, Cygnus X-1 and SN 2014J. We present the results of those observations, including long-term monitoring of Cygnus X-1 and LSI+61 303 (7 and 8 years, respectively). The former is one of the brightest X-ray sources and best studied microquasars across a broad range of wavelengths, whose steady and variable signal was studied by MAGIC within a multiwavelength scenario. The latest results on a unique object, MWC 656, are also shown in this presentation. This source is the first high-mass X-ray binary system detected that is composed of a black hole and a Be star. Finally, we report on the observations of SN 2014J, the nearest Type Ia SN of the last 40 years. Its proximity and early observation gave a remarkable opportunity to study important features of these powerful events.
Introduction
The MAGIC collaboration has developed a large-scale observational program to study gamma-ray binaries and to search for very high energy (VHE, E > 100 GeV) emission from X-ray binaries since the first telescope started taking data in 2004. X-ray binary systems are composed of a compact object, which can be a neutron star (NS) or a black hole (BH), that orbits a stellar companion. These systems can be split into High Mass X-ray Binaries (HMXBs) and Low Mass X-ray Binaries (LMXBs) according to the type of the stellar companion. In this work, we focus on HMXBs. These systems contain a young star of spectral type O or B and a compact object that accretes material from the companion star through an accretion disk or strong stellar wind. X-ray binaries that emit more energy in the gamma-ray range than in the X-ray range are classified as gamma-ray binaries. In this proceeding we will show the results for three variable binary systems: Cygnus X-1, MWC 656 and LS I +61°303. Nevertheless, X-ray binaries are not the only variable galactic events expected to emit in the VHE range. Other transient events, like supernovae (SNe), show a proper environment for VHE gamma-ray emission. In this proceeding, we present the results for SN 2014J, the nearest Type Ia SN in the last four decades. Type Ia SNe are extremely luminous stellar explosions whose origin is also a binary system where one of the members, a carbon-oxygen white dwarf, reaches the Chandrasekhar mass limit of 1.4 M☉. When this happens, the electron-degenerate core cannot support the increase in gravitational pressure, so a thermonuclear explosion of approximately 10⁵¹ erg takes place. After these powerful events no compact remnant is expected. All the observations presented in this proceeding were performed with the MAGIC telescopes, a stereoscopic system of two 17 m diameter Imaging Atmospheric Cherenkov Telescopes (IACTs) located in El Roque de los Muchachos on the Canary island of La Palma (28.8°N, 17.8°W, 2225 m a.s.l.). MAGIC was composed of just one stand-alone IACT until 2009, when MAGIC II was constructed.
Between summer 2011 and 2012 both telescopes underwent a major upgrade involving the trigger and readout system as well as the MAGIC I camera, enhancing the performance to achieve an integral sensitivity (E > 220 GeV) for sources with a Crab Nebula-like spectrum of 0.66±0.03% of the Crab Nebula flux in 50 hours of observation in stereoscopic observational mode [1]. For stand-alone mode (only one telescope operating) the sensitivity above 280 GeV is 1.6% Crab in 50 hours [2].
Cygnus X-1
Cygnus X-1 is one of the most studied and brightest HMXBs [3] of our galaxy, located in the Cygnus region at a distance of 2.15±0.2 kpc [4]. The binary system, composed of a black hole with mass ∼15 M☉ [5] and a 17-31 M☉ O9.7 Iab supergiant companion star [6], follows a circular orbit of 5.6 days [7]. It has been firmly established as a microquasar after the detection of a highly collimated (opening angle <2 deg) relativistic (v ≥ 0.6c) one-sided radio-emitting jet [8]. The X-ray studies (E < 20 keV) revealed that the system behaves as a typical black-hole transient system with the two distinguishable soft and hard states [9]. In 2006, MAGIC observed Cygnus X-1 for 40 hours between June and November with the stand-alone MAGIC telescope (MAGIC I). No significant excess for steady or variable emission was detected, except for one day. During the night of September 24, which was concurrent with the hard state of the source, an excess of approximately 4.1σ after trials was observed, where the significance (σ) was computed using equation 17 of [10]. The spectrum for that day followed a power law defined as dN/(dA dt dE) = (2.3 ± 0.6) × 10⁻¹² (E/1 TeV)^(−3.2±0.6) cm⁻² s⁻¹ TeV⁻¹. After this hint of activity in the gamma-ray regime, MAGIC has carried out observations from 2007 to 2014 focusing on the hard state. The source was observed for ∼80 hours (after data quality cuts) at a zenith range between 6 and 50 deg during 5 observation campaigns in 5 years. The first campaign was performed in stand-alone mode (with MAGIC I) between June and November 2007 with the tracking mode called wobble-mode [11]. The second one was carried out using only MAGIC I in the on-off tracking mode on July 2008. The campaign from June to October 2009 presents data taken in stand-alone and also stereo mode under wobble-mode. On September 2011, Cygnus X-1 was observed with both telescopes. Finally, on September 2014, the source was observed with the same conditions as the previous campaign but during its soft state. We performed searches for VHE emission on a daily basis (due to the variability of the source), as a function of the different X-ray states and for the full data sample. None of the conditions yielded a significant excess of photons from the source position.
MWC 656
MWC 656 is currently the only detected binary composed of a Be star and a BH [12], where the authors conclude that this source is a HMXB with a measured BH of 3.8-6.9 M☉. The system is located at a distance of 2.6±0.6 kpc [12] and from observed optical photometric modulation, the period of the orbit was determined to be 60.37±0.04 days [13] with the periastron at φ_per = 0.01±0.10 [12]. On July 2010, AGILE detected a gamma-ray flare locally coincident with the system, which triggered MAGIC observations [14]. In order to evaluate if the source emits in the VHE gamma-ray regime, MAGIC observed it during two campaigns in a zenith range between 22 and 51 deg.
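To see what a differential power law like the Cygnus X-1 spectrum quoted above implies for an integral photon flux, it can be integrated analytically. The sketch below is a back-of-the-envelope illustration only: it ignores the quoted uncertainties and any instrument response, and the 200 GeV threshold is an arbitrary choice.

```python
import numpy as np

# Best-fit differential spectrum from the 2006 Cygnus X-1 hint quoted above:
# dN/dE = N0 * (E / 1 TeV)**(-gamma)   [cm^-2 s^-1 TeV^-1]
N0 = 2.3e-12     # normalisation at 1 TeV
gamma = 3.2      # photon index (central value; the +/-0.6 uncertainty is ignored here)

def integral_flux(e_min_tev, e_max_tev=np.inf):
    """Analytic integral of the power law between e_min and e_max (energies in TeV)."""
    # For gamma > 1 the integral converges as e_max -> infinity.
    upper = 0.0 if np.isinf(e_max_tev) else e_max_tev ** (1.0 - gamma)
    return N0 / (gamma - 1.0) * (e_min_tev ** (1.0 - gamma) - upper)

# Illustrative integral photon flux above 200 GeV (= 0.2 TeV).
print(f"F(>0.2 TeV) ~ {integral_flux(0.2):.2e} cm^-2 s^-1")
```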
From May to June 2012, 21.3 hours of good quality data in stand-alone mode were taken with MAGIC II between orbital phases φ = 0.2 and φ = 1.0. On June 2013, the system was observed for ∼3.3 hours in stereoscopic mode in the orbital phase range φ = 0.0-0.1, just after the periastron. During this last observation epoch, XMM-Newton observed the source immediately following MAGIC observations for ∼1 hour on June 4th (φ = 0.8). No specific information in the X-ray energy range was available for the period of 2012. The system did not show a significant VHE gamma-ray excess in any epoch, either steady or on a daily basis. The integral flux upper limit (UL) for the entire data sample was set at 2.0 × 10⁻¹² cm⁻² s⁻¹ at 95% confidence level (CL) above 300 GeV with a photon index of Γ = 2.5. The data distributed along a phase binning width of 0.1 were also analyzed with no significant emission. The computed differential flux ULs between 245 GeV (energy threshold of the analysis) and 6.3 TeV at 95% CL, with five bins per decade of energy, are shown in Figure 1. Any potential steady VHE emission is far from a detectable level with any IACT, although the possibility of a detection cannot be ruled out in the case of a flare occurring at the level of the flux detected by AGILE. The spectral energy distribution (SED) includes EVN radio ULs [15], the AGILE energy flux [16] and Fermi-LAT data taken simultaneously with the AGILE observations [17].
LS I +61°303
LS I +61°303 is a member of the small group of gamma-ray binaries that has been detected in a very broad wavelength range, from radio up to VHE gamma rays. The system is composed of a Be star (spectral type B0Ve, [18]) with a circumstellar disk and an unidentified compact object (NS or BH). The orbit that the compact object performs presents an eccentricity of 0.54±0.03 and a period of 26.4960(28) days obtained by radio analysis [19]. The periastron phase has been set to φ_per = 0.23±0.03. The source was detected in HE gamma rays by Fermi-LAT with periodic outbursts around φ ∼ 0.3-0.45. In the VHE regime, the first detection was performed by MAGIC [20], and confirmed later on by VERITAS [21]; the periodic peak is detected at φ ∼ 0.6-0.7 (next to the apastron). Sporadic emission with significant flux at phases φ ∼ 0.8-1.0 has also been reported by MAGIC (∼4% Crab flux) [22] and was only detected once close to the periastron (φ = 0.081) by VERITAS [23]. The non-thermal emission presents orbit-to-orbit variability of its periodical outbursts, with a superorbital period of 1667±8 days detected in the radio band. This superorbital variability was also found in HE [24]. Despite the multiwavelength information, the nature of its VHE emission is not clearly defined yet. There are three proposed scenarios: the microquasar scenario, due to the observed extended jet-like radio-emitting structure [25]; the pulsar wind scenario, thanks to the rotating tail-like elongated morphology of overall size 5-10 mas obtained by VLBA [26]; and finally, LS I +61°303 was proposed to be the first binary system that holds a magnetar [27], after the Swift detection of two very short-timescale (<0.1 s), highly luminous (>10³⁷ erg s⁻¹) bursts from the source direction. In this flip-flop magnetar model the pulsar magnetosphere is disrupted close to the periastron due to the circumstellar disk of the Be star (propeller regime), suppressing VHE gamma-ray emission, while in phases next to the apastron the particles can be accelerated up to TeV energies by the rotation-powered pulsar (ejector regime).
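The MWC 656 differential upper limits described above are computed in logarithmically spaced bins, five per decade of energy, between 245 GeV and 6.3 TeV. A minimal sketch of generating such bin edges follows; the exact edges used in the published analysis may differ.

```python
import numpy as np

e_low, e_high = 0.245, 6.3        # TeV; analysis threshold and highest energy quoted above
bins_per_decade = 5

# Number of bins needed to cover the range at the requested logarithmic density.
n_bins = int(np.ceil(bins_per_decade * np.log10(e_high / e_low)))

# Bin edges equally spaced in log10(E): five edges per decade of energy.
edges = e_low * 10 ** (np.arange(n_bins + 1) / bins_per_decade)
edges = np.minimum(edges, e_high)  # clip the last edge to the upper bound

print(np.round(edges, 3))
```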
A higher mass-loss rate of the star implies a larger circumstellar disk and vice versa, so that the propeller regime can take place even close to the apastron. In order to confirm the superorbital variability in the VHE regime, MAGIC observed LS I +61°303 when the periodical VHE outburst happens, φ = 0.55-0.75, from August 2010 to September 2014 in stereoscopic mode (except January 2012, when mono observations with MAGIC II were performed). Archival MAGIC data (from 2006 to 2010, giving rise to a total 8-year campaign) and published VERITAS data have also been used to study this variability. The study of the viability of the pulsar wind scenario, and more specifically the flip-flop magnetar model, is carried out by searching for (anti-)correlation of the TeV emission and the mass-loss rate of the Be star. To achieve that goal MAGIC performed simultaneous observations with the optical telescope LIVERPOOL (also located in El Roque de los Muchachos) during orbital phases φ = 0.8-1.0, where sporadic TeV emission has previously been detected. The mass-loss rate of the star is correlated with the Hα line emission; thus the Pearson correlation coefficient and the probabilities of the correlation between the equivalent width (EW), full width at half maximum (FWHM) and the velocity of the Hα line and the TeV flux were calculated. Since LS I +61°303 presents yearly variability in the peak of the periodical outburst, spectral studies to find evidence for various mechanisms of gamma-ray production were carried out. Studies using the entire data set and studies using samples based on superorbital phase, orbital phase and flux level yield a spectral index compatible with 2.43±0.04 in general. Figure 2 shows the data folded in the superorbital period. The probability that the data describe a constant flux is a negligible value of 4.5 × 10⁻¹². Assuming a sinusoidal signal, the fit probability reaches 8%. The data have furthermore been fitted by a step function, resulting in a fit probability of 4.7 × 10⁻². This shows that the intensity distribution cannot be described only with a high and a low state, but that a gradient is needed. It can be concluded that there is a superorbital signature in the TeV emission of LS I +61°303 and that it is compatible with the ∼4.5 year radio modulation seen in other wavelengths. No (anti-)correlation has been found between the mass-loss rate of the star and the TeV emission. Nevertheless, the relation between these two parameters can be neither confirmed nor denied, since the timescales of the optical and TeV observations differ (minutes versus hours, respectively).
SN 2014J
On January 21st, 2014, the UCL Observatory detected SN 2014J at a distance of 3.6 Mpc in the starburst galaxy M82, which was classified by the Dual Imaging Spectrograph on the ARC 3.5 m telescope as a Type Ia SN. The proximity of this event gave SN 2014J the title of the nearest Type Ia SN in the last 42 years. Due to its proximity, a multiwavelength follow-up observation campaign was carried out. SN 2014J was observed with MAGIC from January 27th to 29th under moderate moonlight conditions and on February 1st and 2nd under dark-night conditions. In total, 6 hours of good-quality data were taken at medium zenith angles, between 40 and 52 deg. No signal from the direction of the source was detected. The integral UL for energies above 300 GeV was set to 1.50 × 10⁻¹² cm⁻² s⁻¹ with 95% CL.
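The comparison above between a constant, a sinusoid and a step function description of the orbit-by-orbit peak flux amounts to chi-square fits followed by fit-probability calculations. The sketch below illustrates that procedure for the constant and sinusoidal models with synthetic placeholder points; the real MAGIC/VERITAS fluxes are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

# Synthetic stand-in data: peak flux per orbit (arbitrary flux units) versus
# superorbital phase, with assumed measurement errors.
phase = np.array([0.05, 0.2, 0.35, 0.5, 0.65, 0.8, 0.95])
flux = np.array([1.2, 0.9, 0.5, 0.3, 0.5, 0.9, 1.1])
err = np.full_like(flux, 0.15)

def constant(x, c):
    return np.full_like(x, c)

def sinusoid(x, mean, amp, phi0):
    # One full oscillation over the superorbital cycle (phase 0-1).
    return mean + amp * np.cos(2 * np.pi * (x - phi0))

def fit_probability(model, p0):
    popt, _ = curve_fit(model, phase, flux, p0=p0, sigma=err, absolute_sigma=True)
    chisq = np.sum(((flux - model(phase, *popt)) / err) ** 2)
    ndf = len(flux) - len(popt)
    return chi2.sf(chisq, ndf)  # probability of a chi-square at least this large

print(f"constant fit probability: {fit_probability(constant, [flux.mean()]):.3g}")
print(f"sinusoid fit probability: {fit_probability(sinusoid, [flux.mean(), 0.4, 0.0]):.3g}")
```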
Taking into account the detection of the host galaxy, M82, reported by VERITAS [28] for energies above 700 GeV, (3.70 ± 0.8 (stat) ± 0.7 (syst)) × 10⁻¹³ cm⁻² s⁻¹, we have also established an integral UL for this energy range at 3.90 × 10⁻¹² cm⁻² s⁻¹ at the same CL. Daily flux ULs for energies above 300 and 700 GeV were also computed and are shown in Figure 3.
Figure 2: Peak flux emitted for each orbital period, for orbital phases 0.5-0.75, in terms of the superorbital phase as defined by radio [19]. MAGIC (magenta dots) and VERITAS (blue squares) points have been used in this analysis. The fits to a sinusoidal (solid red line), to a step function (solid green line) and to a constant (solid blue line) are also represented. The gray dashed line represents 10% of the Crab Nebula flux. The gray solid line marks the zero level, just as a reference.
Conclusions
MAGIC has performed several observation campaigns on transient and variable stellar objects. Cygnus X-1 was observed from 2007 to 2014 focusing on the hard X-ray spectral state, when a detection was expected according to the hint obtained previously by MAGIC. No excess of VHE gamma rays has been found in any period. MWC 656 was observed during the periastron passage, close to the reported AGILE emission, with no significant excess. Integral ULs at energies above 300 GeV have been set for the ∼25 hours of good-quality data available for this source. LS I +61°303 observations revealed that the source displays a superorbital variability consistent with the radio and HE results. In order to probe the flip-flop magnetar model, the (anti-)correlation between the mass-loss rate of the Be companion star and the TeV emission has been studied, although no clear correlation has been found. Observations of SN 2014J have also been presented in this work. No detection in the VHE regime by MAGIC can be reported.
Acknowledgments
We would like to thank the Instituto de Astrofísica de Canarias for the excellent working conditions at the Observatorio del Roque de los Muchachos in La Palma. The support of the German BMBF and MPG, the Italian INFN, the Swiss National Fund SNF, and the ERDF funds under the Spanish MINECO is gratefully acknowledged. This work was also supported by the CPAN
2015-10-11T21:01:00.000Z
2015-10-11T00:00:00.000
{ "year": 2015, "sha1": "cec64a845a7742e5d6e88fd24e2c51aff99ee142", "oa_license": "CCBYNCSA", "oa_url": "https://pos.sissa.it/236/732/pdf", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "75da8efca621c23331a698ddb27243982ae42523", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
10949831
pes2o/s2orc
v3-fos-license
IgE Reactivity of Blue Swimmer Crab (Portunus pelagicus) Tropomyosin, Por p 1, and Other Allergens; Cross-Reactivity with Black Tiger Prawn and Effects of Heating Shellfish allergy is a major cause of food-induced anaphylaxis, but the allergens are not well characterized. This study examined the effects of heating on blue swimmer crab (Portunus pelagicus) allergens in comparison with those of black tiger prawn (Penaeus monodon) by testing reactivity with shellfish-allergic subjects' serum IgE. Cooked extracts of both species showed markedly increased IgE reactivity by ELISA and immunoblotting, and clinical relevance of IgE reactivity was confirmed by basophil activation tests. Inhibition IgE ELISA and immunoblotting demonstrated cross-reactivity between the crab and prawn extracts, predominantly due to tropomyosin, but crab-specific IgE-reactivity was also observed. The major blue swimmer crab allergen tropomyosin, Por p 1, was cloned and sequenced, showing strong homology with tropomyosin of other crustacean species but also sequence variation within known and predicted linear IgE epitopes. These findings will advance more reliable diagnosis and management of potentially severe food allergy due to crustaceans. Introduction Shellfish play an important role in human nutrition and health, but can provoke serious IgE-mediated adverse reactions in susceptible individuals. Shellfish are a major cause of food-induced anaphylaxis [1][2][3]. Currently, there is no specific therapy for shellfish allergy, with only emergency treatment following accidental exposure [4]. Unlike most food allergies, allergy to shellfish is typically life-long and predominantly affects the adult population [5][6]. A major difficulty in managing shellfish allergy is the lack of reliable diagnostic assays due to limited knowledge of clinically relevant shellfish allergens. The shellfish group includes crustaceans (phylum arthropoda including prawns, lobsters and crabs) and molluscs (phylum mollusca including oysters, mussels and squid). In the few studies characterizing shellfish allergens to date, including our own, one of the most frequently recognized (major) allergens of species in both shellfish phyla is the abundant muscle protein tropomyosin (TM) [7][8][9][10][11]. Other identified allergens are also derived from muscle tissue: myosin light-chain, arginine kinase, sarcoplasmic Cabinding protein and troponin C [9,[11][12][13][14]. However, only a few species have been studied to date, mostly shrimp and prawn, with few reports on crab allergens [11,15]. Prior to our current report, only one crab allergen, TM from the crucifix crab (Charybdis feriatus), was published in the International Union of Immunolog-ical Societies (IUIS) allergen database (http://www.allergen.org/ index.php). Furthermore, there is little information on shellfish from the southern hemisphere or Asia-Pacific region. Patients frequently report clinical reactions to more than one shellfish species, but whether this is a result of multiple sensitivities or from IgE cross-reactivity between allergens of different shellfish species is unknown [5,16]. This information is vital for optimal management of shellfish allergy. Adding complexity, there are reports of altered stability and allergenicity of food proteins after processing [17][18][19]. Most members of the TM allergen family are highly heat-stable [20][21]. 
However, there is a paucity of information on the effects of heating on allergens within whole shellfish extracts [22], with most studies testing heated purified allergens. Heating can enhance allergenicity through several mechanisms including protein denaturation and exposure of new epitopes, aggregation and chemical modification such as the Maillard reaction [23]. We report here the characterization of allergenicity of a commonly eaten crustacean species, the blue swimmer crab (Portunus pelagicus), and in particular the identification of the major allergen Por p 1. Evidence of cross-species IgE reactivity with another commonly consumed species, the black tiger prawn (Penaeus monodon), was sought and the effect of heating on allergens of both species and their cross-reactivity was assessed. Clinically relevant IgE reactivity to the shellfish extracts was assessed by a whole blood basophil activation assay.
Ethics Statement
Informed written consent was obtained from all subjects, with ethics approvals from the Alfred Hospital Research Ethics Committee (Project number 192/07) and the Monash University Human Ethics Committee (MUHREC CF08/0225).
Study Population and Sera
Serum samples were obtained from twenty-four shellfish-allergic subjects (mean age 32±10.5 years; 13/24 female), seven non-atopic controls (mean age 40.3±12.3 years; 4/7 female) and one atopic non-shellfish-allergic subject (age 28 years, female). Allergic subjects were identified from the Alfred Hospital Allergy clinic seafood allergy database on the basis of clinical history of allergy to shellfish and positive shrimp-specific IgE (ImmunoCAP [Phadia Pty Ltd, Uppsala, Sweden] >0.35 kUA/L) (Table 1). Of these subjects, 18/24 (75%) were also positive for crab-specific IgE. Eight control subjects were selected on the basis of no clinical history of shellfish allergy; seven were non-atopic, i.e. had a negative skin prick test response to a panel of common aeroallergens, and one was atopic (Bahia grass pollen-sensitized).
Preparation of Shellfish Extracts
Fresh blue swimmer crab (Portunus pelagicus) and black tiger prawn (Penaeus monodon) were purchased at Prahran market (Melbourne, Australia). For raw crab (RC) and raw prawn (RP) extracts, the outer shell was removed and muscle collected. Finely cut muscle was blended with PBS pH 7.2 and left overnight at 4°C with constant mixing. The crude extract was centrifuged at 13,000 rpm at 4°C for 20 minutes. Supernatant was collected and filter sterilized before storage at −80°C in aliquots. For cooked crab (CC) and cooked prawn (CP) extracts, the outer shell was kept during the heating process (20 minutes immersed in boiling PBS) before removal and extract preparation as above. The protein concentration of each extract was determined using the Bradford assay kit (Bio-Rad Laboratories, Hercules, CA) using bovine gamma globulin as a standard.
IgE ELISA and Inhibition IgE ELISA
Wells of a 96-well EIA/RIA plate (Costar, St. Louis, MO) were coated with 100 µL extract (RC, CC, RP or CP; 1 mg/mL in PBS, or PBS alone for 'no-antigen' control wells), and incubated overnight at 4°C. All of the following incubations were performed for 1 hour unless otherwise stated and the plate was washed 4 times in 0.05% Tween 20/PBS (PBS-T) between steps. Blocking was performed using 5% skim milk powder diluted in PBS-T. Serum, 100 µL diluted 1:10 in 1% skim milk powder/PBS-T, was added to wells and then incubated for 3 hours at room temperature with shaking (45 rpm).
Rabbit anti-human IgE antibody (1:4000; Dako, Glostrup, Denmark) and goat anti-rabbit IgG-HRP (1:1000; Promega, Madison, WI) were added sequentially to wells and plates incubated at room temperature for 1 hour with gentle shaking. Plates were then washed 5 times in PBS-T, followed by 3 washes in PBS. IgE binding was detected using TMB (3,3′,5,5′-Tetramethylbenzidine) substrate (Invitrogen). After 5 minutes, the reaction was terminated using 1 M HCl and the absorbance (O.D.) at 450 nm measured by spectrophotometry (Thermo Fisher Scientific, Melbourne, Australia). Sera from seven non-atopic subjects were screened to determine the extent of nonspecific binding. 'No-antigen' background values were subtracted from test sera data. Two standard deviations above the mean O.D. 450 nm value for the non-atopic subject sera were used to determine the cut-off for positive IgE binding in shellfish-allergic subjects. For inhibition ELISA experiments, individual subject sera were first titrated (1:10-1:1280) for IgE reactivity with CC or CP to determine the concentration at which the O.D. 450 nm was <1 and within the linear phase of the titration curve. Using this dilution, serum samples were incubated with an equal volume of shellfish extract (RC, CC, RP or CP at 0.16, 0.8, 4, 20 and 100 µg/mL), purified recombinant TM from black tiger prawn (rPen m 1.0101; Kamath et al., unpublished) or purified recombinant TM from blue swimmer crab (rPor p 1) (at 0.048, 0.24, 1.2, 6, and 30 µg/mL) for 1 hour at room temperature and then tested for reactivity with either CC or CP extracts. Percent inhibition was calculated using the following equation: percent inhibition = 100 − [(O.D. 450 nm of serum with inhibitor / O.D. 450 nm of serum without inhibitor) × 100]. For comparison of inhibition by different extracts, the extract (inhibitor) concentration that caused 50% inhibition was calculated from dose-response curves. To assess non-specific inhibition by extracts, serum from a non-shellfish-allergic, atopic (Bahia grass pollen-sensitized) subject was incubated with the above inhibitors, and then tested for IgE reactivity with Bahia grass pollen extract in comparison with untreated serum.
IgE Immunoblot and Inhibition IgE Immunoblot
Proteins of each of the four extracts (RC, CC, RP, CP) were separated by SDS-PAGE as above with 3 µg/well loaded into a 10 or 15-well 4-12% Bis-Tris gel (NuPage), or 60 µg protein loaded into a 6 cm 2D well 4-12% Bis-Tris gel (NuPage). Proteins were transferred to a nitrocellulose membrane (0.45 µm; Thermo Scientific, Rockford IL) using an Xcell II blotting apparatus (Invitrogen) at a constant voltage of 30 V for 1 hour. Successful transfer of protein was evaluated by Coomassie brilliant blue staining of the gel as above. The membrane was blocked using 5% skim milk powder/PBS-T for 1 hour at room temperature with gentle rocking. Subject serum (1:40 in 5% SMP/PBS-T) was then applied to the membrane using a Miniblotter apparatus (Immunetics, Boston, MA). To detect IgE binding, the immunoblot was incubated sequentially with rabbit anti-human IgE antibody (Dako; 1:15000) and goat anti-rabbit IgG-HRP conjugated antibody (Promega; 1:15000) each for 1 hour at room temperature with gentle horizontal shaking. Following incubation with chemiluminescent peroxidase substrate (Sigma-Aldrich, St. Louis, MO), proteins were visualized using enhanced chemiluminescence technique [24].
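The positivity cut-off (two standard deviations above the mean non-atopic O.D.), the percent-inhibition formula and the 50% inhibition concentration described above are straightforward to compute; a minimal sketch follows. All O.D. values and concentrations below are made-up placeholders, and the 50% point is obtained by simple log-linear interpolation rather than the dose-response curve fitting the authors may have used.

```python
import numpy as np

# Hypothetical background-corrected O.D. 450 nm values for the non-atopic control sera.
control_od = np.array([0.04, 0.06, 0.05, 0.07, 0.05, 0.06, 0.04])

# Positivity cut-off: mean + 2 standard deviations of the control readings.
cutoff = control_od.mean() + 2 * control_od.std(ddof=1)
print(f"positive if O.D. > {cutoff:.3f}")

def percent_inhibition(od_with, od_without):
    """Percent inhibition = 100 - (O.D. with inhibitor / O.D. without inhibitor) x 100."""
    return 100.0 - (od_with / od_without) * 100.0

# Hypothetical dose-response data: inhibitor concentration (ug/mL) versus the O.D. of
# the same serum tested on the cooked crab extract, plus the uninhibited O.D.
conc = np.array([0.16, 0.8, 4.0, 20.0, 100.0])
od = np.array([0.85, 0.70, 0.45, 0.25, 0.12])
od_no_inhibitor = 0.95

inhibition = percent_inhibition(od, od_no_inhibitor)

# Concentration giving 50% inhibition, interpolated on a log10 concentration axis.
ic50 = 10 ** np.interp(50.0, inhibition, np.log10(conc))
print(f"50% inhibition at ~{ic50:.2f} ug/mL")
```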
TM bands were identified in parallel immunoblots using a rat anti-TM monoclonal antibody (mAb; 1:6000; Abcam, Cambridge, UK) followed by rabbit anti-mouse IgG-peroxidase antibody (1:80000; Sigma-Aldrich) and development as above. For inhibition IgE immunoblot experiments, serum samples (final concentration 1:40) were first incubated with whole crude extracts (RC, CC, RP or CP at 4, 20 and 100 mg/mL) or rPen m 1 (1.2 mg/mL) for 1 hour at room temperature and then tested for reactivity with the CC or CP extracts by immunoblotting as described above. In a separate experiment to directly compare rPor p 1 with rPen m 1, serum samples were incubated with rPor p 1 or rPen m 1 at 0.048, 0.24 and 1.2 mg/mL. A 'no inhibitor' control was also included. Complete protease inhibitor (Roche, Basel, Switzerland) was added to the diluent (one tablet in 50 mL of diluent) to prevent non-specific proteolytic degradation of IgE antibodies. Non-specific inhibition was assessed as described for the inhibition IgE ELISA above. Sequence Analysis of Blue Swimmer Crab Tropomyosin A band corresponding to the predominant IgE-reactive 39 kDa protein was excised from the SDS gel for mass spectrometric analysis. The band was de-stained, reduced, alkylated and digested with trypsin as reported previously [9]. Digested protein was injected into a DIONEX Ultimate 3000 liquid chromatography system (Germering, Germany) coupled with a QSTAR XL hybrid quadrupole-quadrupole/time-of-flight tandem mass spectrometer (QqToF-MS/MS; Applied Biosystems/MDS Sciex, Foster City, USA). The resultant tandem spectra were searched using the Matrix Science (Mascot) search engine (precursor and product ion mass tolerance set at 0.1 Da). For cloning and full sequencing of the crab TM, total RNA was extracted from crab pincer muscles using Trizol reagent (Invitrogen) following the manufacturer's instructions. Single stranded cDNA was reverse transcribed from the RNA using RT-PCR with a cDNA synthesis kit (Bioline, Sydney, Australia). Oligo(dT) primers were used for the reverse transcription. Due to the high amino acid sequence identity of the N-and C-terminal regions of TM among crustacean species, the primers were designed based on the amino acid sequence of Rock lobster TM, previously determined by our group (Rock lobster, Jasus lalandii, Genbank accession number JX860677.1). The primer pair used was BSC-TM (F) 59-GCCGGATCC-ATG-GACGCAATCAAGAAGAAGATGCAG-39 and BSC-TM (R) 59-GCGGAATTC-TTAGTAGCCAGACAGTTCG-39. The PCR included one cycle of denaturation at 94uC for 2 minutes, 35 cycles at 94uC for 30 seconds, annealing at 55uC for 45 seconds and elongation at 72uC for 1 minute, and a final elongation step at 72uC for 7 minutes. The amplified full length open reading frame was cloned into the sequencing vector, pCR 2.1 (Invitrogen) using the BamH1 and EcoR1 restriction sites for cloning, and the open reading frame for TM sequenced (Macrogen Inc, Seoul, South Korea) to obtain the construct pCR2.1_TM. Expression and Purification of Recombinant Blue Swimmer Crab Tropomyosin The open reading frame of TM was cross-cloned from the construct, pCR2.1_TM to the expression vector, pProEXHT-B using restriction sites for BamH1 and EcoR1 and ligation was performed using T4 DNA Ligase (Invitrogen, CA, USA). 
This expression plasmid construct was transformed into chemicalcompetent NM522 E.coli cells using heat-shock for 30 seconds, incubation in SOC medium at 37uC for 1 hour and grown overnight on Luria Bertani (LB) agar plates with 100 mg/mL ampicillin (Amresco, USA) at 37uC. Positive colonies were tested using PCR as described above and selected for protein expression. For recombinant protein expression, 5 mL of fresh overnight culture from a single colony was used to initiate growth in 1 L LB broth containing 100 mg/mL ampicillin. Recombinant protein expression was induced using 0.6 mM isopropylthio-b-galactoside (Amresco, USA). After expression, the culture was centrifuged at 3500 g for 10 minutes to obtain the bacterial pellet and subsequently resuspended in extraction buffer (25 mM Tris-HCl, 300 mM NaCl, 1 mM imidazole, pH 8). Recombinant blue swimmer crab TM containing the 6xHis tag was extracted from the E.coli cells using a French-Pressure Cell, purified using nickel charged metal-chelate affinity chromatography (GE Healthcare, USA) following the manufacturer's instructions and stored at 280uC until further use. The protein concentration of the purified protein was determined by absorbance at 280 nm using a nanodrop spectrophotometer (ND-1000, NanoDrop Technologies Inc., Wilmington, Delaware, USA). Whole Blood Basophil Activation Test Shellfish extracts were tested for basophil activation using our established methodology [25]. Briefly, heparinised peripheral blood samples (100 mL) from five shellfish-allergic subjects, a non-shellfish allergic atopic control and one non-atopic control were incubated with shellfish extracts (0.01-10 mg/ml) or rPen m 1 (0.001-1 mg/mL) for 20 minutes at 37uC and then basophil activation was assessed by flow cytometry by determining the percentage of viable (7-AAD negative), high IgE-positive cells expressing surface CD63. Anti-IgE antibody (cross-linking) and the bacterial peptide f-Met-Leu-Phe (fMLP) were used as positive controls (for IgE-dependent and -independent activation respectively), and stimulation buffer alone was used as a negative control. Statistical Analysis The Wilcoxon matched-pairs signed rank test was used to compare overall serum IgE reactivity between shellfish extracts, and Spearman's correlation test was used to assess correlation between individual specific IgE levels against different extracts or using different assays. Analyses were performed using GraphPad Prism version 5.04 for Windows (GraphPad, San Diego, CA). SDS-PAGE Analysis of Shellfish Extracts Analysis of raw and cooked shellfish extracts by SDS-PAGE and Coomassie brilliant blue staining ( Figure 1) revealed an array of proteins ranging from <6 to 188 kDa. A prominent protein band at 37-39 kDa was seen in all extracts, consistent with TM (34-39 kDa). Other bands were observed at positions consistent with the known shellfish allergens arginine kinase (<42 kDa), myosin light chain, sarcoplasmic calcium binding protein and troponin C (<21 kDa), but several other bands were also apparent at positions that do not correspond to known shellfish allergens. Some differences could be seen between the RC and RP extracts, most notably the band at 69 kDa seen strongly in the RC but only weakly in the RP. In addition, there was only one major protein band in the TM region in RC, whilst there were two bands in RP. More pronounced differences were seen when raw and cooked extracts of both species were compared. 
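The statistical comparisons named in the Statistical Analysis subsection above (Wilcoxon matched-pairs signed-rank test between extracts; Spearman correlation between assays) have direct equivalents in common statistics libraries. The sketch below uses toy numbers only, not the study data, and GraphPad Prism may handle ties and exact p-values slightly differently.

```python
import numpy as np
from scipy.stats import wilcoxon, spearmanr

# Hypothetical paired O.D. 450 nm readings for the same sera tested against the raw
# and cooked crab extracts.
raw_crab = np.array([0.10, 0.35, 0.20, 0.50, 0.15, 0.40, 0.30, 0.25])
cooked_crab = np.array([0.30, 0.80, 0.45, 1.10, 0.40, 0.90, 0.70, 0.55])

# Wilcoxon matched-pairs signed-rank test: non-parametric comparison of paired values.
stat, p_paired = wilcoxon(raw_crab, cooked_crab)
print(f"Wilcoxon: W = {stat:.1f}, p = {p_paired:.4f}")

# Spearman rank correlation, e.g. cooked-crab ELISA O.D. versus crab ImmunoCAP (kUA/L).
immunocap = np.array([0.7, 3.5, 1.2, 8.0, 0.9, 4.2, 2.8, 1.6])
rho, p_corr = spearmanr(cooked_crab, immunocap)
print(f"Spearman: rho = {rho:.2f}, p = {p_corr:.4f}")
```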
For both CC and CP extracts, the higher MW proteins seen in the raw extracts were not present, most likely due to protein degradation during the cooking process. This is supported by the appearance of lower (,35 kDa) MW proteins only present in the cooked extracts. The actual sizes of these lower proteins differed between the crab and prawn extracts. The MW of the prominent TM region band for the prawn extract decreased from 39 kDa to 37 kDa on cooking, but did not change for the CC extract, remaining at 39 kDa. ELISA for Serum IgE Reactivity to Shellfish Extracts Quantitation of serum IgE binding to the shellfish extracts by ELISA (Figure 2) showed that the cooked extracts have markedly higher IgE reactivity than the corresponding raw extracts. Median O.D. values for CC and RC were 0.86 and 0.41 respectively (CC vs RC p,0.001) and for CP and RP were 0.51 and 0.08 respectively (CP vs RP p,0.001). The RC extract was significantly more IgE reactive than RP (p,0.001), but there was no overall difference between the two cooked extracts. Of the 24 shellfishallergic subjects, 5 (21%) had positive IgE reactivity to RC, 15 (63%) to CC (including 4 of the 5 RC positives), none to RP, and 11 (46%) to CP. A similar pattern of reactivity was observed between the CC and CP extracts. All subjects who were positive to CP were also positive to CC, and of those positive to CC but not to CP, reactivity was only weak (10, 14, 15 and 16). These same four subjects had a negative crab ImmunoCAP. Overall there was a significant correlation between IgE levels by ELISA for the CC and CP and the relevant ImmunoCAP values (p,0.01), but not for the raw extracts. However, several subjects showed a lack of concordance of positive or negative result for ELISA with cooked extracts and ImmunoCAP. IgE Immunoblotting of Shellfish Extracts Sera from 12 subjects with IgE positivity to RC and/or CC by ELISA, and where sufficient serum was available, were used for immunoblotting to visualize IgE-reactive proteins in the shellfish extracts ( Figure 3). Immunoblotting showed markedly increased IgE binding to cooked compared with raw extracts, in terms of number of proteins recognized and intensity of binding. In particular there was increased IgE binding to proteins within the TM region (37-39 kDa); 9 (75%) subjects showed IgE binding within this region for CC (7 in RC) and 7 (58%) subjects for CP (3 in RP). The identity of the protein(s) within this region was confirmed as TM using an anti-TM mAb (data not shown). An increase in IgE-reactive lower MW proteins (,39 kDa) was observed in the CC extract and to a lesser extent in the CP extract. A protein of <62 kDa was recognized by 5/12 (42%) subjects in the RP extract but showed little or no reactivity in the RC or cooked extracts. For both the raw and cooked extracts, IgEreactive proteins were observed at 40 kDa and 20-28 kDa that could correspond to other documented allergens i.e. arginine kinase, and sarcoplasmic calcium binding protein, myosin light chain and troponin C, respectively. Sequence Analysis of Blue Swimmer Crab Tropomyosin Following analysis of allergenic proteins using IgE immunoblot, the highly IgE-reactive 39 kDa protein of the blue swimmer crab was identified as TM by peptide mass fingerprinting analysis ( Table 2). The blue swimmer crab TM was subsequently cloned and the complete sequence derived from cDNA and published in Genbank under accession number JX874982.1 (Figure 4). 
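Sequence identity figures of the kind reported in the next paragraph can be computed from aligned protein sequences with a few lines of code. The fragments below are toy stand-ins, not the real Por p 1 or Pen m 1 sequences, and a real comparison would first use a proper pairwise alignment.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity between two pre-aligned, equal-length protein sequences.

    An ungapped comparison is assumed here; a real analysis would normally run an
    alignment tool (e.g. BLAST or Clustal) before counting matching positions.
    """
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Toy fragments only -- not the real Por p 1 / Pen m 1 tropomyosin sequences.
frag_a = "MDAIKKKMQAMKLEKDNAMDRADTLEQQNKEANNRAEK"
frag_b = "MDAIKKKMQAMKLEKDNAMDRADTLEQQNKEANHRAEK"
print(f"identity: {percent_identity(frag_a, frag_b):.1f}%")
```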
Based on the allergen sequence and patient serum IgE reactivity (see below), this allergen has been designated Por p 1.0101 by the IUIS allergen nomenclature subcommittee (http://www.allergen.org/ index.php). The most similar TMs were from the American lobster (Homarus americanus) and the black tiger prawn (Penaeus monodon) which both showed 98% sequence identity. Sequence identity with other Portunus species TM, and the only crab TM, Cha f 1, listed within the IUIS allergen database was 92%. There were no amino acid differences between relevant regions in the blue swimmer crab TM and published linear IgE epitopes described for Penaeus aztecus (Pen a 1) [26][27] and Penaeus monodon (Pen m 1) [28] TMs, except for the last epitope (amino acid 266 to 280) where two amino acid substitutions were identified. Another region (amino acid 43 to 56) of known and predicted IgE epitope specificity for other crustacean species showed several amino acid differences for the blue swimmer crab (Figure 4). IgE Reactivity of Recombinant Blue Swimmer Crab Tropomyosin, rPor p 1 Recombinant blue swimmer crab TM, rPor p 1, was successfully expressed and purified with approximately 95% purity. Coomassie brilliant blue staining of SDS-PAGE of rPor p 1 under denaturing conditions showed a single band with a MW of approximately 41 kDa ( Figure 5A). The higher MW of the rPor p 1 band compared to that for the natural blue swimmer crab TM in whole extract is due to the attached linker chain and 6-histidine tag. When an immunoblot was probed with sera from shellfishallergic patients, rPor p 1 was shown to be highly IgE reactive ( Figure 5B). There was strong IgE binding to the 41 kDa band for 75% of the patient sera tested, but no reactivity by a control nonatopic donor serum. Sera that showed IgE reactivity with rPor p 1 were those that reacted with the prominent TM band in the IgE immunoblots of the whole blue swimmer crab extract (Figure 3). Additional IgE-reactive higher MW bands observed in the rPor p 1 preparation are most likely multimers of rPor p 1 since their reactivity paralleled that for the major 41 kDa band. Basophil Activation Test To assess biologically relevant shellfish allergen-specific IgE antibody reactivity, the ability of the different extracts to activate basophils from five shellfish-allergic subjects (7,8,19,22,24) was analyzed by flow cytometry. Activated basophils were identified by high IgE expression and up-regulation of surface CD63 ( Figure 6A, B). No non-allergen specific activation of basophils or toxicity was caused by the different shellfish extracts, as determined by incubating extracts with the basophils from a non-shellfish allergic, atopic subject and a non-atopic subject (data not shown). Dose-dependent basophil activation to the crab and prawn extracts was observed, with a range of sensitivities for the subjects, consistent with their different crab-and prawn-specific IgE reactivities by ImmunoCAP and our ELISA and immunoblotting assays ( Figure 6C). When subject basophil sensitivities were compared by examining the extract concentration required for 50% maximal stimulation, three subjects (19,22,24) showed markedly higher basophil activation by the cooked extracts than the raw extracts, with little difference between the two crustacean species. Subjects 7 and 8 showed lower basophil activation with similar sensitivity to the four extracts. rPen m 1 was shown to induce strong basophil activation in those subjects with high reactivity to the whole extracts ( Figure 6D). 
Inhibition IgE ELISA Inhibition ELISA was used to quantitate the degree of IgE cross-reactivity between the two shellfish species and the effects of cooking. No non-specific inhibition of IgE reactivity by the shellfish extracts, rPen m 1 or rPor p 1 was evident using serum IgE from a Bahia grass pollen-sensitized control subject and Bahia grass pollen extract (data not shown). Using shellfish-allergic subjects, both cooked extracts showed greater inhibition of IgE binding than the corresponding raw extracts (Table 3). For most subjects, the lowest concentration of cooked extract (0.16 mg/mL) resulted in .50% inhibition of IgE binding to both CC and CP. RC was a more efficient inhibitor of IgE binding to the cooked extracts than RP. Both rPen m 1 and rPor p 1 inhibited .50% serum IgE reactivity to CC and CP at the lowest concentration (0.048 mg/mL) for all except three subjects (11,19,24). In these cases, higher concentrations were required, but these were still well below the maximum tested except for serum 11. It was noted that this serum showed marked IgE reactivity to allergens other than TM by immunoblotting. Inhibition IgE Immunoblotting Inhibition immunoblotting was used to examine the proteins responsible for cross-reactivity of IgE antibodies between the different extracts, and in particular whether there were potentially unique allergens within the blue swimmer crab. Again, nonspecific inhibition by the extracts was excluded by testing for inhibition of the binding of serum IgE from a Bahia grass pollenallergic control subject to Bahia grass pollen extract (data not shown). Four shellfish-allergic subjects (5, 19, 21 and 24) with IgE reactivity to TM (39 kDa) were selected for this assay. For all subjects, rPen m 1 markedly inhibited IgE binding to the TM band, further confirming the identity of this protein as TM (Figure 7). The CC and CP extracts also strongly inhibited IgE binding to TM in both species. However, the raw extracts inhibited IgE binding to TM only at higher concentrations, with RC showing greater inhibition than RP. Similarly, reactivity with the low MW proteins in both CC and CP was inhibited by the cooked extracts and RC, but poorly with RP. rPen m 1 could also inhibit the reactivity to the low MW bands, suggesting that they were predominantly breakdown products of TM. When the IgE inhibitory reactivity of rPor p 1 and rPen m 1 were compared over a concentration range using two shellfishallergic patient sera, rPor p 1 showed slightly greater inhibition of IgE reactivity with the TM band in CC than rPen m 1, and rPen m 1 inhibited reactivity with CP TM slightly better than rPor p 1 (Figure 8). Neither recombinant TM preparation inhibited IgE reactivity with the higher MW bands. Discussion Shellfish allergy is a serious and increasing health issue, with current limitations in accurate diagnosis and management due to lack of information regarding clinically relevant allergens and IgE cross-reactivity between shellfish species. Crustaceans, especially crabs and prawns, are a common cause of shellfish allergy worldwide. This study examined the IgE-reactive components of a commonly eaten crustacean species, the blue swimmer crab, compared with those of a well characterized species, the black tiger prawn. In particular, the effects of heating on IgE reactivity and cross-reactivity of the crab allergens were investigated. In addition, the major TM allergen of the blue swimmer crab was cloned and sequenced and its IgE reactivity characterized. 
When shellfish-allergic subject serum IgE reactivity with whole extracts was compared, the raw crab extract showed greater IgE reactivity compared with the raw prawn extract by both ELISA and immunoblotting. Whether this was due to inherent differences between the proteins of the blue swimmer crab and black tiger prawn, or to differences in sensitizing species or route of sensitization is not clear. We and others have shown previously that inhalation during commercial processing can result in sensitization to seafood [29][30][31][32]. In the present study, although some subjects identified food handling as a cause of adverse reaction, this route could not be distinguished in all subjects and the majority reported ingestion-related allergic episodes. More strikingly, we found that cooked extracts were more IgE reactive than raw for both species. This may be due to the more common ingestion of cooked crab and prawn or to chemical modification of the crustacean proteins on heating as discussed below. That the IgE reactivity of the crustacean extracts observed in our study was clinically relevant was demonstrated by the functional, basophil activation test, again confirming higher allergenicity of the cooked extracts. IgE immunoblotting studies were performed to examine individual IgE-reactive proteins. As for other crustacean species, TM is a major allergen of the blue swimmer crab. Over 50% of shellfish-allergic subjects showed IgE-reactivity to proteins corre- sponding to TM in both raw and cooked crab extracts and TM identity was confirmed by TM-specific mAb reactivity and peptide mass fingerprinting of the highly IgE-reactive 39 kDa protein. We report here for the first time the cloning and full sequence analysis of the Portunus pelagicus TM, Por p 1. This revealed strong homology of the blue swimmer crab TM with other crustacean TMs, but regions of amino acid sequence difference at sites of known and predicted linear IgE epitopes support the need for crustacean species-specific diagnostic reagents. The purified rPor p 1 showed high IgE reactivity by direct immunoblotting, but in agreement with sequence differences, IgE inhibition immunoblotting showed differential reactivity of rPor p 1 and rPen m 1. When compared for inhibition of IgE reactivity with TM in shellfish extracts, rPor p 1 could better inhibit reactivity with the cooked blue swimmer crab TM and rPen m 1 was the better inhibitor of reactivity with black tiger prawn TM. Several other allergenic proteins of the blue swimmer crab were recognized at lower frequency. For some subjects, these proteins were recognized in the absence of TM reactivity. Testing of a larger subject cohort is required to assess the clinical importance and cross-reactivity of these proteins and hence define an appropriate panel of defined allergens for refined diagnostic assays. In view of potential food matrix-associated effects on the outcome of heating of allergens [33], we chose to examine heating of whole extracts rather than purified allergens. For both the crab and prawn species studied, ELISA and immunoblotting showed markedly increased IgE reactivity of cooked extracts compared with raw. In particular, IgE immunoblotting demonstrated increased IgE-reactivity of TM within the cooked extracts. 
Although all sera, including the non-atopic serum, bound weakly to the TM band in the CP extract, suggesting non-specific binding due, for example, to glycosylation, stronger binding of some shellfish-allergic sera was observed when this reactivity was used as a background reference. The higher non-specific reactivity of the CP extract in this assay is consistent with the higher cut-off value for the IgE ELISA for this extract. Several higher MW bands with smearing, also typically a feature of glycosylated proteins, were observed in the cooked extracts. Some of these may represent heat-modified TM multimers. In addition, a range of highly IgE-reactive lower MW proteins was observed following heating, presumably largely TM fragments since this was especially notable for sera that reacted with the single TM band in the raw extracts. Investigation of the mechanisms responsible for the increase in allergenicity of TM and other allergens within the whole shellfish extracts following cooking is warranted, as most shellfish is heat processed before consumption. Potential mechanisms include denaturation of protein with exposure of neo-epitopes and the Maillard reaction. In this heat-dependent reaction, sugars, both endogenous and exogenous, are non-enzymatically attached at different locations on the protein molecule, generating advanced glycation end products [23,34]. The Maillard reaction has not been well explored in the context of shellfish allergy, although it has been found to play a role in the IgE reactivity of other allergens, particularly peanut allergens [35,36]. Our findings support the inclusion of thermally-processed whole extract as well as defined allergen preparations in commercial diagnostic tests for shellfish allergy. Cross-reactivity between crustacean species is essential to understand in order for shellfish-allergic subjects to receive the best clinical advice on food avoidance. There are limited studies of crustacean cross-reactivity to date, often with small subject cohorts [15,37-39]. Our study showed that IgE cross-reactivity between the blue swimmer crab and black tiger prawn was high, especially between the cooked extracts. Cross-reactivity between the cooked extracts was symmetric, as both were able to effectively inhibit IgE binding to each other to a similar extent [40]. This means that the sensitizing species cannot be determined in most cases without an accurate clinical history. As shown by inhibition ELISA, except for serum 11, the capacity of cooked whole extracts to inhibit IgE binding to the cooked crab extract was similar to that for rPor p 1, consistent with allergenicity in the cooked blue swimmer crab being largely due to cross-reactive TM. TM has previously been documented as the major allergen of the black tiger prawn [9]. Our sequence analysis of the blue swimmer crab TM, Por p 1, and alignment with black tiger prawn TM, Pen m 1, provides a molecular basis for the high IgE cross-reactivity observed between these species in our study. The different reactivity seen with serum 11 is consistent with its stronger reactivity with a higher MW band than with TM by IgE immunoblotting.
Screening of shellfish-allergic subjects by IgE ELISA against raw and cooked extracts gave insight into whether the current diagnostic ImmunoCAP for crab is relevant in a southern hemisphere clinical setting. Although a double-blind placebocontrolled food challenge can confirm diagnosis of food allergy, this procedure has a high risk of anaphylaxis for adults with shellfish allergy who may have other comorbidities, and is not routinely performed in our hospital [41]. Similarly, skin prick testing of these patients is also not routine practice. For these reasons, in this study only ImmunoCAP data together with a careful clinical history were collected. In most cases, subjects were unable to identify clearly which individual crustacean species had provoked their clinical reaction. However, for the three subjects who did identify crab as an offending species (Table 1), only two of these had positive ImmunoCAP scores for crab-specific IgE. The third subject (14) had a negative crab ImmunoCAP but tested positive for both raw and cooked blue swimmer crab in our IgE ELISA. We found a significant correlation between the cooked, but not raw, crustacean extract ELISA results and ImmunoCAP scores, but in addition to the subject mentioned above, a small number of subjects with a negative crab ImmunoCAP result showed IgE reactivity to the crab extracts in both the IgE ELISA and immunoblot. This finding suggests either greater sensitivity of our assays, and/or lack of appropriate crab species or preparation method in the ImmunoCAP allergen preparation (Cancer pagurus or brown crab, a northern hemisphere species). All subjects with IgEreactivity to crab or prawn TM by immunoblot had moderate to high levels of allergen-specific IgE ($2.37 kU A /L) as determined by ImmunoCAP to shrimp and/or crab. These subjects were also more likely to have had severe allergic symptoms, such as angioedema and anaphylaxis, upon contact with shellfish. However, there were some subjects who had a strong Immuno-CAP result to shrimp and/or crab and clinical history of severe reactions but showed low or no IgE reactivity by ELISA or immunoblot. These subjects may have species-specific IgE with limited or no cross-reactivity with the crustacean species in our study [42][43], likely due to ingestion of different species. Previous studies have concluded that specific-IgE to TM is an accurate predictor of shrimp allergy [44][45]. Our study supports the importance of TM as a major allergen of the blue swimmer crab, but several other crab proteins were shown to be allergenic and subjects exhibited different allergen reactivity profiles, several with no TM IgE reactivity. Component-resolved allergen-microarray technology would allow the simultaneous screening of serum IgE reactivity to a full panel of shellfish allergens, including whole allergen extracts, purified native and recombinant allergens, allergen fragments, and cooked and raw preparations. This would be of great advantage for the sensitive and specific diagnosis of shellfish allergy, and more information regarding the correlation between allergen-sensitization and severity of clinical symptoms could be gathered.
2016-05-15T20:13:51.397Z
2013-06-19T00:00:00.000
{ "year": 2013, "sha1": "d2ce8da47abc1da9d3af7338f0a9301b4d57d599", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0067487&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2c141b2cd9acd525f5672882603e83c74fa35d79", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
11324508
pes2o/s2orc
v3-fos-license
Competence Matching in Collaborative Consortia for Service-Enhanced Products . To exploit new market challenges in manufacturing industries, collaborative environments permit that different stakeholders achieve value creation and differentiated products when addressing the design and development of products that include associated business services (service-enhanced products). In order to facilitate a suitable context for networks of SMEs in the domain of highly customized and service-enhanced products, the GloNet project addressed collaborative networks with different configurations and purposes. In this context, this paper is focused on how a collaborative consortia for service-enhanced products can be created having into account the requirements for a new product and the necessary normalization of organization’s competences. Introduction In manufacturing, to meet customers' demands, the current tendency is to achieve highly customized products. Therefore, the idea of service-enhanced products represents a growing trend due to the fact that consumers more and more demand not only the manufactured products, but also services associated to the physical products. Examples of such services associated to products can be: insurance to protect it, expertise to install it, support to maintain its operation during its life cycle, etc. [1,2]. Examples of service-enhanced products are: power plants, intelligent buildings, or complex technical infrastructures. As a consequence, if business services are associated to products in order to create and add value, then certainly that new business or collaboration opportunities can emerge. Hence, to achieve such services it is necessary to consider the involvement and collaboration of several stakeholders organized in collaborative enterprise networks [3]. Nevertheless, one obstacle that organizations typically face when creating a consortium to respond to a new business or collaboration opportunity is how to identify the needed competences to respond to the requirements. To achieve some solutions in this field, the GloNet project [4] (collaborative project funded under the Factories of the Future program of the European Commission) aimed "at designing, developing, and deploying an agile virtual enterprise environment for networks of SMEs involved in highly customized and service-enhanced products through end-to-end collaboration with customers and local suppliers". Prior researches addressed the product specification and design such as collaborative CAD systems [5,6]. In our previous work the requirements for a product specification system within a Coopetition environment like VBEs and goal oriented VOs [7] and how users can be guided through the specification process [8] were investigated. In this line, the main challenge that is presented in this paper is twofold: (i) the description of the service-enhanced product specification that considers the customers' requirements and a mapping with existing standards of products classification; and (ii) considering the previous requirements and specification (product goals) with the corresponding mapping of product classification, match it with VBE members' competences to create a consortium. Section 2 briefly summarizes the main outcome of the GloNet project, section 3 responds to the main challenges here presented and, section 3 concludes. GloNet Approach In GloNet, depending on the type of required support and objectives, collaborative networks with different configurations and purposes were addressed. 
Those networks are summarized in Table 1. Manufacturers Network Similar to a virtual organizations breeding environment (VBE), based on long-term alliances involving: manufacturers, project/product designers, service providers, and other relevant supporting stakeholders and entities. Customer Network Or "customer related community" -a part form the customer, it also includes, service providers, and a variety of local support entities. Although this network might not be fully structured, some principles are shared, such as geographical context, business domain, legal restrictions, culture, etc. Product Development Network Short-term VO related to the design and development of the physical product and its associated business services. This VO usually ends once the product is delivered. Product Servicing Network Long-term VO with the aim to support the involved stakeholders in the required business services during the product life-cycle. Service Cocreation Network Temporary VO with the aim of co-design and co-create innovative products and/or associated business services. For the above networks definitions, the adopted definitions for VBE and VO were: − "Virtual Organization Breeding Environment (VBE) is an association of organizations and related supporting institutions, adhering to a base long term cooperation agreement, and adopting common operating principles and infrastructures, with the main goal of increasing both their chances and preparedness towards collaboration in potential Virtual Organizations" [9,10]. − "Virtual Organization (VO) is a temporary alliance of organizations that come together to share skills or core competences and resources in order to better respond to collaboration opportunities and produce value-added services and products, and whose cooperation is supported by computer networks." [11]. The creation of a new service-enhanced product is therefore performed by a temporary goal-oriented network, in this case a product development network, whose formation is triggered by a new business opportunity brought in by the customer, and eventually found by a broker member of the VBE. The partners of the product development network (VO) are primarily selected from the manufacturers network (VBE) and if needed, additional partners, from the customer related "community" and other stakeholders from the geographical region of the customer, can be added. The creation of the product development network is prepared inside the manufacturers network. After receiving the product order from the customer the member in charge of all the necessary arrangements for the creation of the VO (i.e. the VO Planner) begins the formation procedure, that: starts with the detailed specification of the product order; continues with the selection of the partners according to their competences against the skills and capacities that are necessary; and ends with the establishment of the necessary agreements and contracts to regulate the VO during its operation phase [12,13]. Therefore, the product starts being developed under the supervision of the product development network and this network lasts the necessary time to have the product created. Meanwhile, new ideas for new business services might appear. These ideas need to be discussed and brainstormed with some of the product development network partners. Thus, after choosing the most appropriate partners to the mission, a service co-creation network is created. 
This network (also a VO) has as its main purpose the co-design and co-innovation of novel value-added services to enhance the product being developed by the product development network. Considering that numerous new business concepts may occur, also numerous service co-creation networks might be formed. Once the product and its services are created, the product development network dissolves. Fig. 1 illustrates the interplay among manufacturers' network, customer networks, and product development network. The GloNet project developed an ICT support environment system comprising a cloud-based platform supporting various collaboration spaces and a collaborative networking framework that enabling the specifications of service-enhanced products and management of collaborative networks, which gives support to the networks presented before (Fig. 2). The work presented in this paper relies on two core components: the Service-Enhanced Product Support and the Goal-Oriented Networks Management. Matching Competences for Consortia Creation One major challenge after the specification and design of a product development network, was how to map the specification of the service-enhanced product order with the corresponding required competences of the partners to be part of the VO. As a response to such challenge, the European standard NACE [14] was adopted as a match with organizations' competences. In this line, three main systems were designed and developed to provide an integrated solution: the VBE Member Management, of the VBE Management system; the Products and Services Specification System; and the VO Formation System. Fig. 3 , exemplifies the flow of the creation and operation phases of a serviceenhanced product. The creation phase corresponds to a product development network, and the operation and maintenance/monitoring (O&M) phase corresponds to a product servicing network. The dashed rectangle identifies the interactions of the mentioned three systems, and their description follows. VBE Member Management System A VBE's main aim is to provide commonality and support for interactions among its members [9]. Moreover, a VBE facilitates the collection and maintenance of members profile data, enabling refined selection criteria usage, comprising levels of trustworthiness, value system, and historical collaboration performance. As such, the establishment of a VBE requires a proper management system that should provide functionalities to manage the members' profiles and VBE's competences, to enable performance management, and to facilitate trust building amongst VBE members. A strategic objective of a VBE is to increase preparedness of each member of the alliance concerning fast configuration of VOs in response to market opportunities. The VBE members management system is intended to facilitate the VBE administrator with the management of the VBE members. It provides functionalities for VBE member admission and registration in addition to withdrawal. Furthermore, it contains functionalities that VBE members can use to list all other VBE members, and to permit searching for any VBE member and its profile. In both cases, the usage of these functionalities is conditioned by the permissions that the VBE members have. It is considered that the VBE administrator has fully access to all information of VBE members. The VBE members profile management permits the management of the VBE members' profiles. The profile contains a number of defining elements for each individual VBE member (e.g. 
business domain, name, address, capabilities, skills, etc.) about each VBE member. Moreover, the VBE member competences consists on the main part of the VBE member profile including the latest information related to each VBE member capabilities and capacities, and conspicuous information of their validity, making the VBE members involvement in actions and activities in the VBE eligible, and prepared to VO creation. In order to have a common understanding within the VBE, a taxonomy of competences is included, consisting of a hierarchical classification of the competences foreseen for the VBE (i.e. the competences that the VBE would like to fulfill in order to be as competitive as possible in its domain of operation). Each taxonomy class includes a related informal description. To make this taxonomy as broad as possible, the representation of competences is based on the European standard NACE [14,15]. Service-Enhanced Product Specification Tool The service-enhanced product specification system supports two main functionalities of product/service specification and product/service registration. This tool is designed and implemented to primarily assist the design & engineering phase of the serviceenhanced product lifecycle (PLC). Nevertheless, it may also be needed from time to time to specify new sub-products within the operation and maintenance phase of the PLC. The system facilitates specification of new customer-oriented service-enhanced products, and enables involvement of different stakeholders, e.g. equipment providers as well as the EPC designers. The service-enhanced product specification system primarily captures, handles, and manages the detailed meta-data about the product and all its sub-products/services, and assists users with their specification. The aims of the functionality provided in the product specification system are three-fold. The first goal is to support gradual specification of the products. This is needed to reflect the reality of products that are neither defined in one session and nor by one stakeholder. Therefore, detailed specifications that capture and transform customer requirements for a product into discrete sub-product specifications, can be gradually defined by the involved multi-stakeholders, using the developed service-enhanced product specification system. The second goal is to properly capture the classification of all relevant sub-products and enhancing services in a granular and modular manner in the product environment, e.g. distinguishing and capturing both the electrical and mechanical aspects of a sub-product. This will in turn support effective multiperspective retrieval/discovery of information related to sub-products, as well as creating their concise presentation, needed for common understanding among different related stakeholders. The third goal is to capture all the details related to sub-products in a reusable from. As such, the existing specifications of already introduced subproducts can be either fully or partially (e.g. at the level of certain detailed feature-kind) be reused for the specification of other sub-products. Fig. 4 demonstrates the overall flow for the specification process for complex products. Other than the above mentioned functionalities sub-products specified using the product specification system are semi-automatically classified using the PRODCOM [16] convention for classifications, that are pre-defined in the system. 
The choice of PRODCOM suits this case because it is an extension of the CPA and as such compatible with NACE [14]. Furthermore, PRODCOM itself is accepted within the EU as a standard extendable coding system. The NACE Rev. 2 coding system, as suggested by its description of "Statistical Classification of Economic Activities in the European Community", classifies the economical activities preformed to realize a product or service. This provides valuable input when a specification is being realized by the VO formation sub-system and organizations need to be selected for the VO. A PRODCOM code is constructed by further extending the NACE activity code, and therefore embeds in itself the activity that is preformed to realize the product. If a product class is associated with at least one PRODCOM code, we can then use this coding in order to find suitable organizations that can realize this product class by focusing on the NACE part of the PRODCOM code and looking for organizations that have announced this NACE code as one competence. Mapping of Service-Enhanced Product's Specification and Organizations Competences As suggested before, the recommendation of a set of organizations that can produce or support a certain class of products can be achieved by looking at the NACE code of their competences, when related to the PRODCOM code of the product. If the class of some product is not directly associated with a PRODCOM code, then sub-products of the product can be used, were their classes are coded with PRODCOM, so that the competence of organizations standardized through the NACE [14,15] coding convention can be used to build the link to that product. As such interlinking the subproducts of the product with the competences required from organizations to realize the sub-product, are established. As an example, the PRODCOM code "C27.11.43.80" refers to "Transformers, n.e.c., having a power handling capacity > 500 kVA" and its respective NACE code of "C27.11" refers to: "Manufacturing of electric motors, generators, transformers and electricity distribution and control apparatus". Now if a sub-product for instance is associated with this PRODCOM code "C27.11.43.80", then the suitable organizations that can realize and/or support this product can be identified by taking the NACE part of this PRODCOM coding which is "C27.11", and then looking for those organizations who have defined the NACE code "C27.11" as one of their competences. The above mechanism enables our system to recommend a set of organizations capable of realizing and supporting a complex product. The approach suggested by Resnik [17] was used in order to further rank the suitability of a NACE code used as organization competency for realizing a product associated with a PRODCOM code. Clearly in the case when a product is associated with more than one PRODCOM code, and the organizations are associated with more than one competence (i.e. NACE codes), which is rather a typical case, then the most overlap between the interlinked NACE and PRODCOM codes are desired, in other words the more matches between an organizations set of competencies and the PRODCOM codes defined for a product, the more that organization is suited for realizing and support that product. Such a large number of matches in fact indicate that the sub-product and the organization belong to the same sector. 
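Because a PRODCOM code embeds the NACE activity code of the process that realizes the product, the recommendation step described above reduces to prefix extraction plus competence overlap counting. The sketch below illustrates that logic only; the data structures, helper names, the second PRODCOM code and the example organizations are our assumptions, not GloNet's implementation.

```python
def nace_of(prodcom_code: str) -> str:
    """Extract the NACE activity part of a PRODCOM code,
    e.g. 'C27.11.43.80' -> 'C27.11'."""
    parts = prodcom_code.split(".")
    return ".".join(parts[:2])

def rank_organizations(product_prodcom_codes, organizations):
    """Rank organizations by how many of the product's required NACE
    activities appear among their declared competences."""
    required_nace = {nace_of(code) for code in product_prodcom_codes}
    ranked = []
    for org, competences in organizations.items():
        overlap = required_nace & set(competences)
        if overlap:
            ranked.append((org, len(overlap), sorted(overlap)))
    # Organizations sharing more NACE codes with the product come first
    return sorted(ranked, key=lambda item: item[1], reverse=True)

# Hypothetical sub-product (PRODCOM codes) and VBE member competences (NACE codes)
product = ["C27.11.43.80", "C27.12.10.30"]   # second code is invented for illustration
members = {
    "OrgA": ["C27.11", "C27.12"],            # electrical equipment manufacturer
    "OrgB": ["C27.11"],
    "OrgC": ["C25.11"],                      # unrelated competence
}
for org, score, codes in rank_organizations(product, members):
    print(org, score, codes)
```

With these example codes, OrgA ranks first because both required NACE activities appear among its declared competences, reflecting the idea that a larger overlap indicates the organization and the sub-product belong to the same sector.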
Consortia Creation The formation of the VO for service-enhanced product development mostly involves the plan and scheduling of the work order consistent with the service-enhanced product specification, and selecting the appropriated partners considering the existent VBE competences. Nevertheless, to finish the VO formation, and assist the VO partners in attaining agreements, negotiation functionalities are also essential [13]. The VO agreements will serve as foundations for the VO governing principles through its operation phase [12]. Fig. 5 exemplifies the main flow of the VO formation process. Nevertheless, as the focus of this work is on competences for consortia creation, this section highlights the three first services: New order characterization, VBE members competences analysis, and Potential VOs assemblage. As such, important services in the VO formation are: New Order Characterization: to assist the VO planner in the characterization of the new order for products or/and services. Considering the identified requirements of the new order, the characterization relies on the decomposition of the new order into goals considering the outcome of the product specification tool and the VBE competences taxonomy. VBE Members Competences Analysis: to support the VO planner in searching for the suitable partners for the VO. It allows checking how the New Order Characterization goals can be fulfilled considering the existing competences of the VBE members. Thus, the VO planner can properly detect the appropriated partners for the VO being created. The members are therefore searched within the VBE, the customer network, and also other relevant local entities or suppliers related with the VBE. Potential VOs Assemblage: to assist the VO planner in making the conceivable consortia compositions considering the information already provided by the New Order Characterization and the VBE Members Competences Analysis. In this step, the VO planner already has a list containing the potential partners that can deliver, or are able to complete, the defined goals. Moreover, the VO planner can make a VO risk assessment [18] using other VBE functionalities (such as evaluate the trustworthiness levels of partners [19], and their value systems alignment [20]). As mentioned, to finalize the VO formation, the VO planner starts the negotiation process [13] to achieve agreements among VO partners, and launch the VO. Conclusions One of the aims of the GloNet project was the design, development, and deploy of an agile enterprise environment for networks of SMEs involved in highly customized and service-enhanced products through collaboration with customers and local suppliers. One of the achievements was the specification of configurations and purposes of collaborative networks: product development network, service co-creation network, product servicing network, manufacturers' network, and customer network. This paper addresses the creation of a product development network focusing on three systems that were developed to permit its support, namely: the VBE Member Management, of the VBE Management system; the Products and Services Specification System; and the VO Formation System. The interaction of these three systems considered the normalization of the requirements of a new product and the necessary competences of organizations. 
Therefore, the two challenges addressed, (i) the service-enhanced product specification and its mapping to existing standards, and (ii) the consortium creation based on matching VBE members' competences, were both achieved. The approach was validated within the scope of the GloNet project in solar-industry and intelligent-buildings networks. As a general result, the solution was positively validated in terms of the developed models and prototypes. Nevertheless, when evolving towards commercial products, some improvements should be considered, namely in terms of user interfaces.
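To illustrate the "Potential VOs Assemblage" step of Section 3 in a runnable form, one simple reading is: given the goals of the new order (each expressed as a required competence) and the competences of candidate members, list the member combinations whose pooled competences cover every goal. This brute-force sketch is an assumption of ours for illustration, not the GloNet VO Formation System; all names and codes are invented.

```python
from itertools import combinations

def covering_consortia(goal_competences, members, max_size=3):
    """Return all member combinations (up to max_size) whose pooled
    competences cover every goal of the new order."""
    goals = set(goal_competences)
    consortia = []
    for size in range(1, max_size + 1):
        for combo in combinations(members, size):
            pooled = set().union(*(members[m] for m in combo))
            if goals <= pooled:
                consortia.append(combo)
    return consortia

# Hypothetical new-order goals (as NACE-coded competences) and VBE member competences
order_goals = {"C27.11", "C33.20", "M71.12"}   # manufacture, installation, engineering services
vbe_members = {
    "OrgA": {"C27.11", "C27.12"},
    "OrgB": {"C33.20"},
    "OrgC": {"M71.12", "C33.20"},
}
for consortium in covering_consortia(order_goals, vbe_members):
    print(consortium)
```

In practice the VO planner would further filter such candidate consortia using the trustworthiness, value-system alignment and risk assessments mentioned above.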
2018-04-03T00:26:30.025Z
2016-10-03T00:00:00.000
{ "year": 2016, "sha1": "29bbc08799ed14a229b4988ad7afaaa58bb3b317", "oa_license": "CCBY", "oa_url": "https://hal.inria.fr/hal-01614595/file/430868_1_En_30_Chapter.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "6708f9f0368dc0340e875fbc24a07695dfd67bb1", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business", "Computer Science" ] }
54761299
pes2o/s2orc
v3-fos-license
Pedagogic Perspectives on Chinese Characters Teaching for Latin American Students Chinese characters are one of the most representative components of Chinese language. However, due to its complexity, the teaching of the language has become an important research topic. With the expansion of this language, it is important to analyze and reconsider approaches to inspire and guide students with different cultural backgrounds, languages and learning habits, and highlight their advantages and disadvantages. Based on two beginner level groups of Chinese in Mexico, this report analyzes teaching strategies, pedagogical activities and students’ attitudes towards two professors, a local Mexican teacher and a Chinese teacher. After observing both classes we found significant differences on their approaches to teach Chinese characters. The Chinese teacher emphasized the importance of characters as a communication tool and therefore tried to develop accuracy and efficiency, while the Mexican teacher focused on knowledge about characters, the association with students’ own experiences and self-directed learning techniques. We conclude with making remarks about which of these teaching approaches are more suitable for the teaching context of Chinese language in Latin American countries like Mexico. pronunciation of the character.In the case of teaching characters to adults who are accustomed to using an orthographic rather than logographic system, there is a constant debate on how the characters should be taught, including the optimal time for character teaching (Ye, 2013), the efficiency of the use of visual and phonological mnemonics (Kuo & Hooper, 2004), radical analysis (Taft & Chung, 1999;Shen, 2007), effectiveness of stroke-order learning (Hsiaung, Chang, Chen, & Sung, 2017) and the use of electronic resources in character learning (Lam et al., 2001).Although comparisons of different teaching approaches have been made, results of these researches are only applicable to a few aspects in character teaching.They guide teachers in the effectiveness of different exercises and help teachers to make an informed decision in the design of the teaching plan based on the course objectives and the characteristics of the students.Nevertheless, few research has focused on the macroscopic aspects regarding pedagogic approaches and objectives in the process of character teaching.This article is based on observing classes of two first-level groups of Chinese language in the Confucius Institute of the National Autonomous University of Mexico (UNAM) and analyzes the process and results of different teaching methods employed by two teachers in order to know how they affect the acquisition of knowledge, obtainment of skills for character recognition and writing, as well as changes in student attitude.The aim of this article is to present the complexity and importance of research on Chinese character teaching and to highlight aspects that should be taken into account when taking an approach to the teaching of this writing system. Case study 2.1 Objects of analysis We observed pedagogical activities in two groups of first-level Chinese course from week 2 to week 8 in the spring of 2017 at the Confucius Institute, UNAM.Opinions and comments from students and teachers were collected.During this period, students began to learn characters related to basic daily communication.Both groups used the same textbook Hanyu Jiaocheng (Course on Chinese Language) Vol. 
1 Part one, which covers the basics of the language and elementary communicative skills such as shopping and ordering food, and requires students to master about 60 characters.An important aspect to be mentioned of this book is that it does not take into account the difficulty of characters.It is organized based on grammatical concepts and communication needs, so the teachers can modify the teaching sequence to reduce difficulty of the course.Group A had 23 students and was taught by a local Mexican teacher, while group B had 25 students and was taught by a teacher of Chinese nationality.Both teachers have more than 5 years of experience in teaching Chinese. Case description Below is a description of highlighted parts of both classes.The contents related directly to the teaching of characters will be analyzed especially in the following aspects. Teaching objectives and methods Through a brief interview both teachers were asked how they thought the teaching of characters should be and a summary of their answers is given below: Group A: The teacher believes that the learning process must be as active as possible, trying to arouse the interest and attract the attention of the students so as to allow learning in advanced stages to be simpler and self-directed, and to help students remember and give them a wider context.She mentioned, "I try to relieve stress from the classroom where we can talk freely about the characters so as to make students feel that the Chinese characters are not too difficult, because they were also hard for me when I was a student.In this way I try to give them a message that they can adopt whatever methods that suit them the best to learn characters."Group B: The teacher bases the class on her knowledge of the language structure and believes that it is important to learn characters in an orderly way and pay attention to details of the characters.She said, "The characters you learn in the first-level period will be the basis for later stages, so it is very important for students at this stage to know well how to write characters, including the basic strokes, stroke order, the radicals and their meaning, and the size and proportion of each component of the characters.If not, it will be difficult to get rid of bad habits later.Also, practicing writing ensures that you learn to read quickly."2.2.2 Ways to introduce and teach the characters Group A: In the beginning of week 2 the teacher explained the differences between orthographic and logographic writing systems, giving several examples of cultures employing logographic systems, particularly the Maya civilization as reference which developed in Mexico and Central America.The Maya people used glyphs to represent concepts which in turn were related to the sound of a syllable just like Mandarin Chinese.The teacher gave the example of the word montaña (mountain) (Figure 1) to show the similarities in the logic of the characters (a pictorial representation associated with a concept) and that the Chinese writing is not very complex compared to the Maya system, in which each symbol has many more details as can be seen on the character mountain.She also made use of the reference to the Maya writing system to elucidate another similitude between Chinese characters and Mayan hieroglyphics.The use of meaningful components or radicals to form new symbols reduces learning and writing difficulty as shown in Figure 2 where the component of the Mayan word ik' is inserted at the center of another ass.ccsenet.a way to remember and 
associate.Therefore, even if the presented explication did not match the historical origin of the character, it was still instrumental for students to remember and comprehend the logic of the characters, which motivated them to present different interpretations without being afraid of errors, for instance, when studying the character 文 (wen2, culture) students offered several interpretations: "文 has to do with the first characters that were written in tortoise shells, so the character for writing and culture is represented with a turtle.""Culture is what gives you identity.It is like a big tattoo that you have written on your chest.""People who had culture and knew how to write were those with a lot of money and ample clothes like the person that seemed walking in the character wearing a large robe."Some of the interpretations were based on the students' imagination, which is not necessarily the official origin of the characters, but were accepted as useful in order to understand the characters. In the exercise of writing sentences on the board the students made multiple mistakes in writing the characters.The teacher asked all students to look for the error and to correct it together.Group B: All the mistakes on shape, strokes and proportion were pointed out by the teacher, sometimes orally while checking the students' writing in class and sometimes marking the mistakes in the homework or dictation. Assessment results All students who regularly attended the class in both groups passed the assessments in the period when the class was observed, though a notable increase of writing speed and improvement of character quality could be seen in group B. Opinions and attitude of students Students were asked to write a reflection at the end of week 4 and week 8 about their learning experiences, including their opinions about the teaching methods, how much time they spend in learning Chinese per week, difficulties they encounter, progress they have made and overall attitude towards the course.At week 4, the reflection showed no big difference between two groups.One of the common difficulties mentioned by students was the characters, but most of them remained positive that they would be able to master them one day.However, at week 8, differences of students' attitude towards their own progress in characters could be noticed between group A and B. Students from group A still demonstrated strong interest and confidence in learning characters since they were accustomed to looking for information themselves while students from group B showed more complaints with 15 out of 25 mentioning that they might not have enough time to practice the characters and doubted whether they could learn to write the characters someday. 
Discussion The teaching results of both groups can be considered a success because the general objectives of the course were attained and most of the students understood the required knowledge as shown in the assessments and had a positive opinion towards both courses.However, it is easy to observe that there are great differences in the teaching approaches, which are highlighted by the educational customs and are worth discussing.We do not attempt to point out that the teaching methods of both teachers are representatives of local teachers or teachers of Chinese nationality.The observation allows us to analyze ways that can guide the teaching of Chinese characters and their advantages respectively.Throughout the courses we can see that two aspects of character were developed: ability to read and write and knowledge about reading and writing.The Chinese teacher in group B put a remarkable emphasis on the development of writing skills, sought to achieve the course objectives stated in terms of language acquisition and did not explicitly show the focus on the development of cross-cultural communication abilities, cultural knowledge or learning techniques, while the Mexican teacher in group A put more attention on knowledge about the characters themselves and the ability to analyze them, taking her own learning experience as a reference. In terms of foreign language acquisition a clear distinction has been made between language learning and language acquisition (Krashen, 1984), since metalinguistic knowledge is not so useful in developing students' communicative capacities as constant communication (including reception and production) of comprehensible messages in the target language.Although acquisition of the writing system follows very distinct mechanisms from those of spoken language, this communicative approach can be effective in reading and writing for several reasons.The memorization and recognition of characters is a process that requires constant practice.The brain should be able to access information on the shape of the characters and the procedure to write them at a relatively high speed.This process can be trained by means of continuous writing, like the teacher did in group B through repetition tasks and by writing characters on the board slowly while pronouncing the sound of the character with the students following her.In this way, students tried to reinforce the connection between the movements, the image of the character and its sound or meaning in their brains by means of Hebbian learning, in which signals that are triggered simultaneously or sequentially in the brain are reinforced, increasing the possibility to repeat the signal sequence in the future.This kind of training was completed during the learning of some characters in group B, where the learning of characters managed to keep pace with that of spoken language, and characters were written with great care in aesthetic aspects and with higher speed.However, a problem emerged as students progressed in the study of the language as can be seen in group B students' reflection at the end of week 8: 60% of the students mentioned the problem of study time, saying that they probably would not have enough time for practice in the future. 
The time problem in the study of characters is a fundamental problem and there have been experts who for this reason consider that the ability of handwriting is not imperative in the study of Chinese and Japanese because of the time it takes and the electronic tools that are currently available (Allen, 2008).It is a principal problem in the repetition-based teaching techniques as those analyzed in group B. Although the pace of progress in the spoken language of the first level is slow owing to the study of pronunciation and basic grammars, the students feel that they do not have sufficient time even to only memorize the most used characters.This strongly discourages students who have learned Chinese for only two months as can be seen in their comments.The teacher of group A focused more on learning knowledge about writing and acquiring skills to learn and recognize characters.Although it is clear that the main purpose of this knowledge is to obtain writing skills, the teacher did not spend too much time in or out of class on the practice of characters.However, the students achieved similar results to those in group B. There may be several reasons.First, the students are adults who know the way they learn effectively.Second, they have the ability to copy characters or make strokes since everyone knows how to write in Spanish.The training in stroke direction or copying characters may be more applicable to children who do not yet master the use of a pencil, but for adults who have limited time and know how to write, it does not necessarily need to spend so much time on this kind of practice or orientation. The knowledge imparted by the Mexican teacher did not directly help the students understand more words but arouse their interest and intention in details, linking characteristics of the characters to something with which the students are more familiar such as the Maya writing that is more close to students' life since it was introduced in primary education in Mexico.In addition, the rest of the information concerning characters were presented and discussed by the students themselves, making the class an active place of conversation where students could discuss history, their problem in searching information and even joke with one another.It helps to reduce classroom stress and is beneficial for language acquisition in both oral skills (Philips, 1992) and writing skills (Rai, Loschky, Harris, Peck, & Cook, 2010).The use of additional resources in group A encouraged students to be independent in their learning, which is conducive in language learning since the amount of information and skills to learn is high and one cannot depend totally on teachers, particularly in the teaching of Chinese in Mexico where class time is limited and class size is big, making it difficult for teachers to give individual attention. 
The discussion among peers and use of their own resources are methods that accord with constructivist principles of education which have demonstrated efficiency in knowledge transfer and motivation (Schunk, 2012).They are fundamental in character learning that requires analytical skills and a longer learning time than many other writing systems even if students only try to obtain reading skills.In this way the teacher of group A played the role of facilitator, orientating discussions while the students helped each other to understand the difficult parts of characters, creating a scaffolding structure to achieve utmost learning, according to Vygotskyan theories of social learning (1978). Another factor that affects the motivation and perception of the difficulty of characters for the students lies in the pressure to master a correct form of writing.The teacher in group B paid great attention to see if the characters were as correct as possible, directing students' attention to even aesthetic details of the characters.In addition, she exerted pressure on the weekly dictation, in which students always scored low.The dictation and continuous evaluation of results are a highly efficient method to consolidate concepts and abilities in long term memory, surpassing the efficiency of elaboration on concepts in many other cases (Karpicke, 2011).However, this efficiency can only be achieved when the students have already had the knowledge and comprehend it in an efficient manner.In the dictation of group B, many students did not have sufficient time to learn, resulting in bewilderment in their pace of progress.In group A, character exercises were carried out in teams and errors were marked by students themselves without receiving a score, which avoids students from perceiving a high difficulty in the learning process or hurting their self-esteem. In general the differences between both groups lie in how the teacher perceives the learning of characters based on their own experiences.The Chinese teacher learned the writing system when she was a child and has the knowledge of the language, culture and basic concepts of writing.The reading and writing skills allow her to transcribe what she means to communicate, which takes a long time to obtain.The Mexican teacher has studied a long time in Mexico, learning the spoken language, writing and culture at the same time, thus she believes that students do not necessarily have to spend all their time on practice this ability, and takes practicing characters as an opportunity to relax the class and talk about history and interact with students, which was not the focus of group B. Follow-up research on the use of these two approaches will be useful to identify students' learning outcomes. Conclusions The teaching of Chinese writing system can be developed in different ways.However, the system is not only an auxiliary of the language.If it were only a transcription tool, it might have been abandoned a long time ago, since it is a difficult system and takes time to master as can be noticed when students do repetition exercises.The Chinese characters are a fundamental element that gives identity to the language and constitutes a great part of the Asian culture. 
Therefore, learning the language is not merely a process of memorization and automation in the process of recognizing and writing the necessary symbols for the purpose of communication.Because of its logographic nature, each symbol and its components can be directly related to concepts at a sub-morphemic level that is not necessarily associated with a sound.The concepts reduce the difficulty in the learning process and help students to understand important elements of Chinese culture and history which in turn can become an additional tool for memorization and creativity in a practical way.The acquisition of this knowledge in the cases analyzed in this article not only helped to arouse students' personal interest in knowing more of this writing system, but was also conducive to reduce the complexity perceived by the students in the writing process.In Latin American countries like Mexico where students usually take a great interest in active class and attend Chinese classes out of interest in learning a new culture, an active learning approach where students learn and comprehend to seek connections with their own knowledge and resources may be more efficient and attractive in the long run. Figure
2018-12-12T16:40:47.516Z
2017-11-28T00:00:00.000
{ "year": 2017, "sha1": "4eca9c3065246158175c1ef3913ca58a396dd79d", "oa_license": "CCBY", "oa_url": "https://www.ccsenet.org/journal/index.php/ass/article/download/71115/39427", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4eca9c3065246158175c1ef3913ca58a396dd79d", "s2fieldsofstudy": [ "Education", "Linguistics" ], "extfieldsofstudy": [ "Sociology" ] }
262746580
pes2o/s2orc
v3-fos-license
Genome sequence of cluster A15 Gordonia terrae bacteriophage Nebulosus ABSTRACT The temperate Gordonia phage Nebulosus was isolated from soil on Gordonia terrae and is a siphovirus. The genome is 52,175 bp in length, has 62% GC content, and encodes 96 protein-coding genes. Nebulosus encodes a partitioning system, ParABS, which is likely involved in lysogeny maintenance. Studying Actinobacteriophage increases our knowledge of phage evolution, viral defense systems, and virus-host interactions (2,4,5).Phage Nebulosus was isolated using direct isolation from composted soil collected on 9/1/2023 in Old Town, Maine (44.915628N, 68.69072W) (6).A soil extract was prepared in peptone-yeast extract-cal cium (PYCa) medium, filtered on a 0.22-µM filter, plated with 0.5 mL of Gordonia terrae 3,612 on PYCa agar plates, and incubated at 30°C for 2 days.Plaques were purified after seven rounds of plaque purification using standard methods (6).On a lawn of G. terrae, Nebulosus forms 5-mm turbid plaques with three halo rings (Fig. 1).Nebulosus forms stable lysogens and is immune to cluster A15 phage ReMo superinfection (6).The particle morphology of Nebulosus was determined by negatively stained transmission electron microscopy of a single particle (Fig. 1).Nebulosus has a siphovirus morphology with a 120-nm-long noncontractile tail and a 46-nm-diameter icosahedral head. A DNA phenol-chloroform extraction method was performed on a high-titer lysate (8).DNA was prepared for sequencing using the NEBNext UltraII library preparation kit (New England BioLabs, Ipswitch, MA).Sequencing on an Illumina MiSeq platform produced 163,600 single-end, 150 bp reads.De novo assembly and checks for complete ness were performed using Newbler v2.9 and Consed v29 (9), yielding a 52,175-bp genome with a shotgun coverage of 223-fold.The genome has 62% GC content, and the genome ends with 10 bp, 3′ single-stranded overhangs (CGGGTGGTTA) (10).The genome shares >35% gene content with phages in cluster A in the Phamerator Actino_Draft database (version 521) and was assigned to subcluster A15 (4, 7). Nebulosus and all A15 cluster members encode a gene (gp47) downstream from the DNA polymerase I (gp46) with strong HHpred hits to a domain of unknown function (DUF6197) found in a Streptomyces kanamyceticus kanamycin biosynthetic gene cluster (19).Nebulosus and 10 other A15 members carry a second copy of this gene (gp72).There appears to have been an INDEL in gp72 as the other 10 A15 members carry instead a longer gene with two tandem DUF6197 domains and nearly identical amino acid sequence identity to Nebulosus gp72 at the C-terminus end. 
ACKNOWLEDGMENTS This research was made possible by the SEA-PHAGES program at the Howard Hughes Medical Institute. We thank Daniel Russell for sequencing services and assembling the Horseradish and Yummy genomes. We are grateful to Emma Brown of the University of Maine and Geoff Williams at the Brown Bioimaging Facility for providing electron microscopy.

FIG 1 (A) Genome map of Gordonia phage Nebulosus. The ruler represents the genome coordinates in units of kilobase pairs, and the colored boxes above and below the ruler represent genes transcribed in the forward and reverse directions, respectively. Genes were assigned to a phamily using Phamerator (7) in the Actino_draft database, and different phamilies are indicated by different colors. Predicted functions were centered above and below forward- and reverse-transcribed genes, respectively. (B) Electron micrograph of Nebulosus. (C) Nebulosus plaques on a lawn of G. terrae.
2023-09-27T06:17:53.894Z
2023-09-26T00:00:00.000
{ "year": 2023, "sha1": "e36f09983e1348e5178129756810c1b504be6ba6", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1128/mra.00699-23", "oa_status": "GOLD", "pdf_src": "ASMUSA", "pdf_hash": "7268b0c5de696969178cb3cc71a86cfb3fbfaa5c", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
125089622
pes2o/s2orc
v3-fos-license
INFLUENCE OF SCHOOL TYPE ON AVAILABILITY AND USE OF HISTORY AND GOVERNMENT INSTRUCTIONAL RESOURCES IN KENYA

Florence J. Chelimo (1), Dr. Emily J. Bomett (2) and Mrs. Edna Nyaga (3).
1. Doctoral student, Maasai Mara University, P.O. Box 40580-00100, Nairobi, Kenya.
2. Senior Lecturer, Department of Education Management & Policy Studies, Moi University, P.O. Box 3900-30100, Eldoret, Kenya.
3. Lecturer, Department of Curriculum Instruction and Education Media, Moi University, P.O. Box 3900-30100, Eldoret, Kenya.

Abstract: The purpose of the study was to investigate the influence of type of school on the use of History and Government instructional resources in Kenya, a case of secondary schools in Nakuru County. The objectives of the study were to: investigate the nature of the teachers teaching the subject; establish the ways of obtaining the instructional resources; and determine the difference between the two types of school in the availability and use of instructional resources. This study adopted a survey design in which public and private secondary schools in Nakuru County were involved. The target population was 1153. Stratified sampling was used to categorize schools into public and private. 102 History and Government students and 19 History and Government teachers were randomly sampled from the two categories of schools. Data was collected using questionnaires, observation and document analysis. The data collected was analyzed using descriptive statistics and presented using frequency tables, graphs and pie charts. From the study it was found that some teachers teaching the subject in private schools were untrained, while some were overloaded with lessons compared to those in public schools; moreover, public schools had more instructional resources, and these were used more often than in private schools. It was recommended that schools should employ teachers trained in the subject, that in-service courses be organized to equip teachers with more pedagogical skills in the making and use of instructional resources, and that the Quality Assurance and Standards Officers should ensure proper supervision of the use of resources in schools. It is hoped that the findings of this study will form a basis for the formulation of policies on strategies for effective use of instructional resources in secondary schools and provide solutions to the challenges being faced.

Instructional resources develop a sense of the causal relationships that characterize human development in society, and they help make learning permanent through the utilization of more than one sensory channel, hence maximum use of all the senses. They also add joy and interest to learning, thus enabling learners to pay attention in class as they write, look, listen and read while they think. They also encourage active learner involvement in a lesson, for they create a sense of participation and involvement in learning. Learners are also exposed to practical experience where they acquire skills, understand concepts and develop the powers of imagination, observation and reasoning. Chelule (2009) points out that it is the trainer's job to undertake all possible efforts to make learning more effective and interesting; this therefore makes it imperative to use instructional resources.
Every trainer should use the resources to improve the effectiveness of his or her work. Chelule (ibid) further argues that, however articulate a trainer may be, lack of appropriate instructional resources cannot only hamper attainment of learning objectives but also derail them, by creating confusion, misunderstanding and lack of interest in learning among trainees. Thompson (1962) in Isutsa (1996) points out that, in any instructional process, all learners' senses must be catered for using a variety of media. He states that "all man's knowledge of the external world and his relationship to it is obtained through tactual, auditory, visual, olfactory and proprioceptive senses." In the teaching of History and Government, Kochhar (1991) summarized the value of the use of instructional resources as follows: it makes learning real, vivid, interesting and enjoyable, enables learners to retain information learnt, and enables learners to utilize more than one sense. He contended that, apart from the textbook, instruction in History and Government is guided and made more interesting by the use of other teaching and learning resources like maps, pictures, charts and many others. So the teacher who has adequate and relevant teaching and learning materials will be more confident, effective and productive and thus able to implement instruction with ease. The government of Kenya has since independence initiated several improvements in the education system to make education more meaningful to the learners and also meet the nation's political and economic needs, such as the revision of the 8-4-4 curriculum, which included topics on contemporary issues like HIV/AIDS in History and Government. Several development plans have also shown that Kenya in the last three decades has allocated 30-40% of her national income to education, but 60-70% of these funds had been directed towards payment of teachers' salaries, hence provision of instructional resources was not given the consideration it deserved. This was also evident in the MOEST (2005) report on the Kenya education sector support programme 2005-2010: its summary of the yearly costs for secondary education had a lower allocation for learning materials than the other allocations. The Sessional Paper No. 55 by KIPPRA (2006) pointed out that in 2004/5 government expenditure on secondary education as a percentage of GDP and of the total education budget was 1.6% and 21.7% respectively. This financing is predominantly recurrent expenditure; only 6.5% of it goes to non-salary items. This therefore shows that less money was allocated for facilities and resources, so what is the state of instructional resources in public secondary schools? What about in private schools, which receive no grants from the government? Public schools get government support in different ways, while private ones are run by individuals and other organizations. The Kamunge report (1988) pointed out that the government would continue to encourage the development of private schools in order to provide more educational opportunities, but would ensure they adhere to the regulations laid down by the government for the provision of physical facilities, equipment and teachers. It is also necessary that parents who take their children to private schools take a greater role in ensuring that the schools provide quality education. KNEC (2008) recommended that schools should provide adequate reference materials for candidates.
The Koech report (1988) recommended that the government should encourage the establishment of good private educational institutions because they play an important role in the provision of education and training. The Government appreciates the fact that, for the country to achieve its set policy targets in all education subsectors, the private sector must be provided with appropriate incentives and an enabling environment to expand the provision of education services (Onsomu et al., 2006). It should, however, ensure the maintenance of acceptable standards, quality and relevance in education and training by requiring those who manage private schools to encourage the establishment of Parent Teacher Associations for the purpose of ensuring maintenance of high standards of education. Eshiwani (1986) argued that private schools have more serious problems, as some provide below-average facilities and teaching. This was corroborated by the findings of Nabwire (1998), who in her study on the availability and utilization of non-projected media resources in Geography indicated a noticeable difference between private and public secondary schools, stating that, on average, public schools were better endowed and had a higher incidence of utilization of Geography resources than private schools. It is with this knowledge that the study was undertaken in Nakuru County to find out whether the situation was similar in private secondary schools in the county. It is against this background that this study was undertaken to provide a detailed analysis of the availability and use of instructional resources, and the analysis was guided by the following research questions: How are the available instructional resources obtained? Are the available instructional resources being used? Does the school type influence the availability and use of instructional resources? The broad objective, therefore, was to investigate the influence of school type on the availability and use of instructional resources in History and Government in selected private and public secondary schools within Nakuru County. It was hoped that the findings, conclusions and recommendations of this study will go a long way in improving the quality of instruction in the subject in our secondary schools.

Statement of the Problem:- Education has been defined as the entire process of developing human abilities, potentialities and behaviours (MOEST, 2004). It is an organized and sustained instruction meant to transmit a variety of knowledge, skills, understanding and attitudes necessary for life. If this is to be achieved, then instruction ought to be effective, and it can never be so without the use of instructional resources, which are vital in any teaching and learning process. The contribution of a subject in shaping students' experiences and awareness of the world in which they live is affected by the manner in which the subject is presented. The question of instructional resources is important in any subject. A 1990 KIE evaluation report on the 8-4-4 system of education noted that schools lacked adequate learning resources and most teachers were reluctant to use the few available due to lack of competence. Despite attempts by scholars to show the relevance and importance of instructional materials, teachers still waver over their use.
Teaching of History and Government in Kenyan secondary schools has been criticised in regard to the use of instructional resources, such as reliance on only one type of resource, the textbook, a problem which might have led to unpopularity of the subject in some schools and lack of achievement of subject objectives. If this problem is not addressed, it might lead to a greater problem of students not selecting the subject, now that it is elective after form two. The teacher must make full use of all available instructional resources to facilitate maximum learning by the student, as creative use of instructional resources increases teachers' feeling that the students have learnt more and will retain it better, and this is likely to result in increased performance. The Master Plan in Education and Training 1997-2010 revealed that, even with the Government contribution through cost sharing, a number of public schools fail to meet their operational costs, such as salaries for non-teaching staff and learning materials. Moreover, there has been a lot of criticism of private schools in the provision of teaching and learning resources; as the Koech report (Republic of Kenya, 1999) pointed out, there has been an outcry over the growing number of private educational institutions that are providing substandard resources for learning. It is therefore evident that there are inadequate instructional resources in our public and private secondary schools, but the rate of inadequacy may differ with school type, thus the need for this study. Nakuru County has many private and public secondary schools; with this increasing number, the quality of services they offer should be looked into to ensure they also improve the provision of instructional resources. For secondary education is very important as it provides a vital link between basic education and the world of work on one hand and further training on the other (Onsomu et al., 2006).

Purpose of the Study:- The purpose of the study was to investigate the influence of school type on the availability and use of History and Government instructional resources in secondary schools.

Objectives of the Study:- The objectives of the study were to: identify the nature of the teachers teaching History and Government in secondary schools; investigate the ways of obtaining instructional resources for History and Government instruction in the secondary schools; and establish the difference made by type of school to the availability and use of History and Government instructional resources.

Significance of the Study:- The findings of this study form a basis for feedback on the status of teaching and learning resources in History and Government instruction to the curriculum developers, policy makers, teachers and all those involved in planning educational programmes. Additionally, the findings are useful in the evaluation of each category of secondary school, as well as revealing more grey areas on instructional resources for further research.

Materials and Methods:- This study was conducted in Nakuru County, one of the 47 counties of the Republic of Kenya. The research design used in this study was a survey. The study aimed at collecting information from respondents on the availability and use of History and Government instructional resources, and the data was obtained from questionnaires, observation and document analysis. This research design was intended to produce statistical information about aspects of education that interest policy makers and educators like the researcher (Orodho, 2009).
The study opted for a descriptive survey design because it aimed at describing the nature of instructional resources for History and Government in Nakuru County. Consequently, the instruments used to collect data (questionnaires, observation and document analysis) are suited to a survey design. The target population was 103 History and Government teachers and 1050 History and Government students. Stratified sampling involves dividing the population into homogeneous subgroups and taking a simple random sample in each subgroup, as each stratum shares a particular characteristic. It was used in this study, with the strata represented by private and public secondary schools; simple random sampling was then used to get respondents from each stratum. Reliability is the degree of consistency, that is, the accuracy of the estimate of the target attribute (Mertens, 2008). Content validity was used to validate the instruments of this study during piloting. In order to ensure the reliability of the instruments, internal consistency reliability was assessed when piloting the instrument, since it requires only a single administration of the test (Orodho, 2009). A high KR-20 coefficient of 0.80 was attained in this study, which indicated a homogeneous test. According to Too and Kafu (2010), a reliability index of 0.50 and above is acceptable. To achieve the objectives, descriptive data analysis was employed, whereby frequencies and percentages were calculated on the influence of school type on the availability and use of History and Government instructional resources in public and private secondary schools. Ngechu (2003) explains that descriptive statistics include the use of measures of central tendency to describe a sample or a group of individuals. The data indicated that a majority of the public schools had four teachers (45.5%). Most private schools had two History and Government teachers (66.7%). The few teachers in private schools are hindered from preparing effectively for instruction, since they have more lessons per week and thus may not have time to prepare and use instructional resources. Whenever there is a shortage of teachers in some private schools, they take long to replace them and meanwhile use teachers trained in other subjects to teach; this hinders the effective use of instructional resources, unlike in public schools where teachers are replaced immediately by the TSC or the school board.

Results and Discussion: Questionnaires on the training of teachers in History and Government were administered and the data shown in Figure 1.1 were collected. On analysis of these results, all the public school teachers were trained to teach History and Government, while 75% of the private school teachers were trained. Some private schools prefer employing untrained teachers because they are cheap, but in most public schools the Teachers Service Commission (TSC) deploys teachers who are trained in the subject they are deployed to teach. The challenge with untrained teachers is that they do not have the skills to avail and prepare relevant instructional resources for teaching, which limits their use, especially in private schools.
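As a side note on the reliability figure quoted in the methods above (KR-20 = 0.80), a minimal sketch of the Kuder-Richardson Formula 20 computation for dichotomously scored items is shown below; the small item matrix is hypothetical and is not the study's pilot data.

```python
# KR-20 = (k / (k - 1)) * (1 - sum(p_i * q_i) / var_total)
# where k is the number of items, p_i the proportion answering item i correctly,
# q_i = 1 - p_i, and var_total the variance of respondents' total scores.

def kr20(item_matrix):
    """item_matrix: list of respondents, each a list of 0/1 item scores."""
    k = len(item_matrix[0])
    n = len(item_matrix)
    p = [sum(row[i] for row in item_matrix) / n for i in range(k)]
    pq_sum = sum(pi * (1 - pi) for pi in p)
    totals = [sum(row) for row in item_matrix]
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / (n - 1)
    return (k / (k - 1)) * (1 - pq_sum / var_total)

# Hypothetical pilot responses (4 respondents x 5 items), for illustration only.
pilot = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 0],
]
print(f"KR-20 = {kr20(pilot):.2f}")
```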
Data was collected on the number of lessons taken per week, as shown in Table 1.2. Students and teachers should be encouraged to prepare learning resources; from the questionnaires administered to them, the results show that 84.8% and 87.2% of public and private school students respectively do not make learning resources. Among the public school teachers, 63.6% make their own teaching resources, while 66.7% of the private school teachers make their own teaching resources. These results indicate that students rarely make learning resources; this could be because of lack of encouragement by their teachers, while others could be satisfied with the textbooks. Some teachers may not be making their own resources; this could be due to lack of skills, heavy workloads or lack of funds to buy the materials. Students from both types of school mostly make charts as their learning resources, while the teachers mostly make charts and maps. Maps and charts are easy to prepare and the materials used are not expensive. From the data collected in the questionnaires administered to the students on the frequency of use of prepared learning resources, among the public school students 42.9% use the resources very often and another 42.9% use them often, while 50% of the private school students use the resources often, as shown in Table 2.3. Therefore, public school students use the resources they make more than private school students do, which shows that private school students make fewer resources than those in public schools. It also explains why private school students do not make many resources: even when they make them, they do not use them and hence are not motivated to prepare them. Data were also collected on how easy it is to obtain teaching resources, and the results are shown in Table 2.4. Most teachers from public schools pointed out that they obtain teaching resources for History and Government in the school easily compared to teachers in the private schools. In private schools, fees are the main source of funds, but in public schools there are various sources of funds, hence it is easier to acquire resources. In private schools the head teachers are the employers, so most teachers avoid close interaction with them and thus cannot follow up on the need for teaching resources, unlike in public schools, where teachers are freer with their head teachers and can follow up on the facilities they need, such as instructional resources. Public schools had more streams than private schools; this contributed to a high number of students in public schools taking History and Government. This in turn influenced the type of resources used, as class size influences the selection of the resource. The high number of students made the teachers rely mainly on textbooks. Factors like spacing also influence the selection of instructional resources. Public schools receive funds from the government and were thus able to avail more resources than private schools, which received no funds from the government but relied mainly on school fees to meet all the school expenses and therefore allocated a smaller percentage of funds to instructional resources. From the instruments administered to teachers on the use of teaching resources, the results shown in Tables 2.5 and 2.6 were collected.
Among the public school teachers, 50% strongly agreed that instructional materials are the best in teaching, 54.5% agreed that they usually achieve lesson objectives without them, and another 54.5% disagreed that teaching using instructional resources does not make any difference, that using the resources drags the syllabus so one cannot finish on time, and that they have enough experience so they do not need the materials. Some teachers agreed that they do not use instructional resources because these are for teachers on teaching practice. Among the private school teachers, 50% strongly agreed that instructional materials are the best in teaching, 80% disagreed that using the resources drags the syllabus so they cannot finish on time and that instructional resources are for teachers on teaching practice, while all teachers strongly disagreed that the training they received is not adequate to enable them to use a variety of resources.

Reasons for the Difference in the Two Categories of Schools:- Respondents indicated the type of school attended, as shown in Table 3.1. Among the public and private school student respondents, 89.4% and 85.4% respectively had never attended both a public and a private secondary school within Nakuru County. Of those who had, 80% of the public and 57.1% of the private school respondents pointed out that they used the instructional resources more in public than in private schools. Public schools are better endowed with resources because they do not rely on school fees as the only source of income, as private schools do. Reasons for the difference in the two categories of schools were given as shown in Figure 3.1. The public and private school teachers who said yes gave the wide variety of resources as their reason for usage, with a representation of 75% and 25% respectively. Questionnaires administered to teachers who had taught in both types of school gave the following results on the difference between the two, as shown in the corresponding table. Among the teacher respondents, 63.6% from public schools and 75% from private schools had never taught in both public and private schools. All the public school teachers who had taught in both noted a difference in the availability and use of instructional resources, while the private school teachers indicated no difference. The difference noted by the public school teachers was that private schools had fewer resources. From the observation checklist, the researcher also noticed that public schools had more instructional resources than private schools. The level of in-service training for teachers was also established, as shown in Figure 3.2. A majority of the public school teachers (54.5%) had never attended a History and Government in-service course since they began teaching History and Government in their schools, while all private school teachers had never attended any in-service course. Public schools are better at organizing in-service courses for their teachers, unlike private schools, which are reluctant to do so. Some teachers who teach History and Government in private schools are not trained to teach the subject and therefore cannot attend the in-service course. The in-service course was rated by the teachers as follows: among the public and private school teachers, 80% and 66.7% respectively pointed out that the in-service course was very useful in relation to teaching History and Government.
Teachers in both types of school acknowledge the importance of in-service courses.

Conclusions:- School type influenced both the availability and the use of instructional resources. This was because the two categories of schools were funded and managed differently. Public schools, which were funded by the government, had more funds to acquire resources where these were not directly supplied by the Government, and therefore had more resources. Private schools relied on school fees to pay salaries and provide material resources; therefore they could not manage to supply all the resources. As far as usage was concerned, resources for History and Government were found to be used more in public schools than in private ones. This could be due to their availability and variety; in addition, the public school teachers attended in-service courses, where they were equipped with skills in handling instructional resources. Private schools at times employ untrained teachers to teach History and Government; such teachers therefore have no competence in handling instructional resources even when resources are available. Therefore, there was a big difference in the availability and use of History and Government instructional resources between public and private schools, with public schools being better off than private schools.

Recommendations:-
1. Effective supervision by Quality Assurance and Standards Officers. The Quality Assurance and Standards Officers should carry out regular inspection of the private and public schools to check whether the recommended instructional resources are adequate and whether they are being used.
2. In-service training for teachers. History and Government teachers are advised to always attend workshops, seminars and vocational courses to keep them abreast of current developments in the subject as well as to refresh them. Regular in-service courses should therefore be organized for them, since teacher qualification is a significant determinant of quality of performance in the schools.
3. Employ trained teachers to teach the subject. Schools, especially private ones, should always employ teachers to teach the subjects they are trained in.
2019-04-22T13:04:06.515Z
2016-11-30T00:00:00.000
{ "year": 2016, "sha1": "e76085256cc33e0a9b316e7873a927fe5d28e6f2", "oa_license": "CCBY", "oa_url": "http://www.journalijar.com/uploads/946_IJAR-13411.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "29f44b6c88fe9609ca9e870a721ec7e0dda69e5b", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Mathematics" ] }
251334455
pes2o/s2orc
v3-fos-license
Jurnal Kimia Sains dan Aplikasi

Molecular Docking of Gallic Acid and Its Derivatives as Potential nNOS Inhibitors

The global prevalence of anxiety and depression has increased by 25% due to the impact of the COVID-19 pandemic. Depression can occur due to an increase in NO produced by the nNOS enzyme. Gallic acid and its derivatives can be obtained from nature and have various biological activities. This study aimed to determine the potential of gallic acid and its derivatives as nNOS inhibitors using the molecular docking method, with binding energy values, RMSD values, and specific binding to amino acid residues as parameters. The results showed that gallic acid, 4-O-methyl gallic acid, and epigallocatechin gallate had binding energies of −1.87, −2.36, and −0.12 kcal/mol, respectively. Compared to the standard ligand, which had a binding energy of −2.84 kcal/mol, gallic acid 4-O-(6-galloyl glucoside) had a binding energy of −4.12 kcal/mol. Based on these results, gallic acid 4-O-(6-galloyl glucoside) can potentially inhibit nNOS.

Introduction
As the world health authority, the World Health Organization (WHO) declared Coronavirus Disease 2019 (COVID-19) a global pandemic on 11 March 2020. Millions of people worldwide have been affected by the COVID-19 pandemic [1]. According to a scientific report released by WHO in March 2022, the global prevalence of anxiety and depression increased by 25% due to the impact of the COVID-19 pandemic. Meanwhile, mental disorders and depression in Indonesia increased by 6.5% nationally, mainly due to social restrictions and job loss. This increase in depression rates has prompted countries worldwide to include mental health and psychosocial support as part of their COVID-19 response plans [2]. Psychological factors such as stress, depression, loneliness, and unhealthy behaviors can interfere with the immune system's response to vaccines. Inflammation is one of the body's responses after vaccination and is characterized by fever, muscle aches, and fatigue. This is a good sign because the increase in temperature creates a less favorable environment for the virus and stimulates the body to produce more immune cells. However, based on previous studies, the inflammatory response to vaccination is higher and lasts longer in individuals with depression [3]. This chronic inflammation prevents vaccines from working optimally in forming the antibodies needed by the body [4]. Although numerous varieties of antidepressants are available, some negative effects persist. For instance, selective serotonin reuptake inhibitors (SSRIs) such as fluoxetine, most commonly prescribed to treat severe depressive disorder, have some side effects. Common side effects include anxiety, nausea, vomiting, dry mouth, dizziness, and visual disturbances. Some patients also have serious side effects, such as sexual dysfunction, suicidal ideation, tachycardia, insomnia, and anorexia [5]. Therefore, with regard to tolerability and safety, researchers are increasingly concerned with discovering and developing antidepressants from traditional herbal medicines [6]. Depression can occur through several mechanisms. The monoamine hypothesis has long been used in developing antidepressants [7]. The increase in NO produced during neuroinflammation can lead to brain dysfunction and diseases such as depression and anxiety [8].
NO is synthesized from L-arginine by three isoforms of the enzyme nitric oxide synthase (NOS), namely neuronal (nNOS), endothelial (eNOS), and inducible (iNOS). The nNOS enzyme is prevalent in several brain regions involved in stress and depression, including the hippocampus, hypothalamus, locus coeruleus, and dorsal raphe nuclei. Reducing NO levels by blocking NOS enzymes in the brain can produce an antidepressant effect [9]. Gallic acid has various bioactivities, such as antioxidant, anti-inflammatory, anti-microbial, anticancer, antineoplastic, antiviral, and antidepressant effects, the last indicating that it can increase serotonin levels [10]. Gallic acid derivative compounds have high antidepressant-like activity if they have higher lipophilicity, as indicated by a methyl group [11]. Gallic acid derivatives such as epigallocatechin are commonly found in tea leaves, such as green tea, and affect anxiety and depression [12]. The presence of a galloyl group is known to confer activity in inhibiting NO production [13]. This study aimed to predict the activity of gallic acid and its derivatives as antidepressants through the nNOS mechanism using the molecular docking method. Based on binding energy, this method can predict bond conformation in terms of position, type, and affinity. This prediction plays a role in developing compounds that are assumed to have a biological activity, to be employed as reference compounds for further drug development [14].

Tools and Materials
In this study, the hardware used was a Linux-based operating system (OpenSUSE Leap 15.0) on an Intel(R) Core(TM) i3-7100 CPU @ 3.90 GHz with 8 GB of RAM (64-bit). The software comprised AutoDock 4.2.6 [15], UCSF Chimera [16], GaussView 5.0.8 [17], Gaussian 09W [18], Open Babel GUI [19], and Discovery Studio Visualizer [20], which were accessed via hardware facilities at the Computational Chemistry Laboratory, Faculty of Mathematics and Natural Sciences, Gadjah Mada University. The materials needed in this study include the molecular structures of gallic acid derivatives and the macromolecular structure of neuronal nitric oxide synthase (nNOS) with PDB ID code 6AV2, obtained through the RCSB.org database and shown in Figure 1.

Bioavailability analysis and preparation of standard ligand, proposed ligands, and protein
The proposed gallic acid derivatives were analyzed for properties related to their feasibility as oral drugs based on Lipinski's rules using Molsoft Drug-Likeness (https://molsoft.com/mprop/). Furthermore, the proposed gallic acid derivatives were sketched in GaussView, and geometric optimization was done in Gaussian using the DFT B3LYP method with a 6-311++G(d,p) basis set. The B3LYP method has been reported to perform well on organic molecules. The basis set was augmented with diffuse functions (++), which allow the calculation of electrons with a probability of being far from the nucleus, and with polarization functions (d,p), which provide better flexibility for the deformed wave function, thus giving more accurate geometries and vibrational frequencies [21]. The optimized ligand molecules were then converted to .mol2 format files using the Open Babel GUI, making the ligands ready for the following step. The nNOS macromolecule previously downloaded from RCSB.org was then prepared using UCSF Chimera by separating the protein and its standard ligand. Chain A was selected from the nNOS (6AV2) protein, and non-standard residues were removed.
The charges and hydrogen atoms were then added using the Dock Prep feature to obtain a ligand-free protein, saved in .pdb format with water molecules removed. The standard ligand was also prepared and saved in .mol2 file format.

Docking of protein and standard ligand
The docking of the protein and the standard ligand was performed using the AutoDock4 Tools software. The grid box size was 60 × 60 × 60 Å³ at coordinates x = 122.02; y = 244.24; z = 359.302, with a grid spacing of 0.375 Å. The catalytic residue GLU 597 served as the main amino acid target in the nNOS molecular docking process. The AutoDock4 parameters were set at 100 GA runs, 2,500,000 energy evaluations, and a population size of 150. The molecular docking simulation was done using the Lamarckian Genetic Algorithm with Local Search (GALS). The docking method is considered valid when the redocking of the standard ligand (code: BY7) into the nNOS receptor shows an RMSD value of ≤2 Å, so that the docking method can then be applied to the proposed ligands [22]. The best conformation of the standard ligand was determined by the lowest binding energy (BE) value and RMSD ≤ 2 Å. The standard ligand (BY7) redocking was performed to determine the target protein's active site. The conformation and position of the docked standard ligand served as a reference for the subsequent docking of the proposed ligands. The standard ligand redocking had an RMSD value of 0.61 Å, which means that the docking method employed in this study is valid and the parameter settings used fulfilled the validation criteria. Therefore, the method and parameter settings can be used to dock the proposed ligand molecules. The overlay of the standard ligand before and after docking is shown in Figure 2. The docking of the gallic acid-derived ligand molecules was achieved by equating the conformation and position coordinates of the gallic acid-derived ligands with those of the standard ligand using the DejaVu GUI feature in AutoDock4 Tools. Gallic acid-derived ligands were saved as .pdbqt files. Furthermore, the docking process for gallic acid-derived ligands on the macromolecule used the same parameters, grid box size, and grid box coordinates as the standard ligand. The best conformation of each proposed ligand was selected based on the lowest binding energy value, an RMSD lower than 2 Å, and the interaction formed with the Glu 597 residue. The conformational results of the ligand-receptor complexes were stored in .pdb format. Docking results in .pdb format were then visualized using the Discovery Studio Visualizer (DSV) software. Visualization was done in 3D and 2D diagrams to view the protein-ligand interactions.

Analysis of potential inhibitory properties
The potential antidepressant properties of the proposed ligands were analyzed using the molecular docking results from AutoDock4. Each ligand was docked into the binding pocket of the nNOS protein. The parameters used for the analysis were the binding energy, the RMSD value, and the interactions formed with the amino acid residues of the receptor. The binding energy demonstrates the stability of the ligand bound to the protein and also indicates whether binding can occur spontaneously: the lower the binding energy value, the stronger and more stable the binding. The types of hydrogen bond formed were used to analyze the interaction mechanism. The RMSD was also reviewed, since it quantifies how much the protein-ligand interaction in the crystal structure changes before and after docking. The docking method and parameters are valid if the RMSD value is ≤2 Å [23].
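As a rough illustration of the redocking check described above, the sketch below computes a coordinate RMSD between two poses of the same ligand, assuming matched atom ordering; the coordinates are hypothetical placeholders and this is not the validation script actually used in the study.

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two equally ordered coordinate lists (angstroms)."""
    assert len(coords_a) == len(coords_b), "atom lists must be matched and ordered identically"
    sq = sum((xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2
             for (xa, ya, za), (xb, yb, zb) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

# Hypothetical crystallographic vs. redocked poses (three atoms only, for illustration).
crystal  = [(122.1, 244.3, 359.2), (123.0, 245.1, 358.7), (121.5, 243.8, 360.0)]
redocked = [(122.3, 244.5, 359.0), (123.2, 245.0, 358.9), (121.7, 243.9, 359.8)]

value = rmsd(crystal, redocked)
print(f"RMSD = {value:.2f} Å")
print("Docking protocol considered valid:", value <= 2.0)
```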
Bioavailability analysis of the proposed ligands as oral drugs
This research tested four proposed ligands of gallic acid derivatives docked to the nNOS protein. The proposed ligands include gallic acid, 4-O-methyl gallic acid, epigallocatechin gallate, and gallic acid 4-O-(6-galloyl glucoside). The bioavailability analysis as oral drugs based on Lipinski's rules using Molsoft Drug-Likeness is shown in Table 2. The drug-likeness model scores of the proposed ligands are shown in Figure 3. Lipinski's rule is essential in developing and discovering a candidate orally administered drug compound. The five rules (rule of five) of Lipinski include a molecular weight of not more than 500 Daltons, lipophilicity with a log P value of not more than 5, not more than 5 hydrogen bond donors, not more than 10 hydrogen bond acceptors, and a molar refractivity between 40 and 130 [24]. Molecules with molecular weights greater than 500 Daltons have difficulty penetrating the digestive and epidermal membranes [22]. The log P value describes the lipophilicity of the compound [25]. A compound with a log P value greater than five may potentially be toxic due to its low solubility in water; the substance is therefore difficult to eliminate and may accumulate in the body [26]. The absorption of a compound is determined by its solubility and permeability. The number of hydrogen bonds is an essential parameter in drug permeability [27]. The value of the drug-likeness score represents the structural similarity of the proposed ligands to drug compounds discovered previously and contained in the system database. Molsoft's analysis calculated the drug-likeness model score, representing the combined effect of physicochemical, pharmacokinetic, and pharmacodynamic properties. The blue curve indicates that a compound is categorized as having drug-like properties, while the green curve indicates that it does not show drug-like properties [27]. Epigallocatechin gallate and gallic acid 4-O-(6-galloyl glucoside) did not meet the requirements for hydrogen bond donors and acceptors stipulated in Lipinski's rule for an oral drug. This indicates that both compounds may have poor absorption. On the other hand, gallic acid and 4-O-methyl gallic acid comply with Lipinski's rule. However, both compounds are not categorized as oral drug-like, since their drug-likeness scores are less than 0. It can be said that all the proposed ligands have low oral bioavailability. Theoretically, 100% bioavailability in the blood can be achieved if the active compound is administered directly by the intravenous route [28]. The formation of micro-sized particles also increases bioavailability, so further research is required to develop this drug compound [29].

Molecular Docking
Interaction studies were done by docking gallic acid molecules and their derivatives as alternative potential antidepressants. This study was carried out by docking the molecular structures of gallic acid and several of its derivatives as ligands to the nNOS protein (PDB code: 6AV2). The proposed ligands were geometrically optimized to obtain a stable geometry for each compound's structure, and the molecules were docked to the prepared receptor. The result for each docked proposed ligand was taken based on the lowest binding energy value and a Root Mean Square Deviation (RMSD) ≤ 2 Å. The RMSD value represents the change in conformation, or the distance from the reference molecular structure to the molecular structure resulting from a positional change.
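A minimal sketch of the rule-of-five screen described above is shown below; the thresholds follow the rules listed in the text, while the example property values are hypothetical placeholders, not the Molsoft output for these ligands (molar refractivity is omitted because it is not reported per compound here).

```python
def lipinski_violations(mw, logp, hbd, hba):
    """Count violations of Lipinski's rule of five as summarized in the text."""
    rules = [
        mw   <= 500,   # molecular weight <= 500 Da
        logp <= 5,     # log P <= 5
        hbd  <= 5,     # hydrogen-bond donors <= 5
        hba  <= 10,    # hydrogen-bond acceptors <= 10
    ]
    return sum(not ok for ok in rules)

# Hypothetical property values for illustration only.
candidates = {
    "gallic acid":              dict(mw=170.1, logp=0.7, hbd=4, hba=5),
    "epigallocatechin gallate": dict(mw=458.4, logp=1.2, hbd=8, hba=11),
}
for name, props in candidates.items():
    v = lipinski_violations(**props)
    print(f"{name}: {v} violation(s) of the rule of five")
```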
The smaller the RMSD value, the closer the position of the standard ligand resulting from molecular docking is to the standard crystallographic ligand [30].

Gallic acid and its derivatives as nNOS inhibitors
High levels of NO in the brain can be lowered by nNOS inhibition. The standard and proposed ligands were docked to the nNOS protein, and the results were analyzed to determine nNOS inhibition. The molecular docking results of the standard and proposed ligands are shown in Table 3. According to these results, gallic acid 4-O-(6-galloyl glucoside) had a binding energy value of −4.12 kcal/mol, followed by 4-O-methyl gallic acid at −2.36 kcal/mol, gallic acid at −1.87 kcal/mol, and epigallocatechin gallate at −0.12 kcal/mol. Gallic acid 4-O-(6-galloyl glucoside) exhibited a more negative binding energy than the other proposed ligands and the standard ligand. Therefore, the interactions of gallic acid 4-O-(6-galloyl glucoside) were compared with those of the standard ligand to examine the possible interactions. The interactions between the residues and the standard and proposed ligands were visualized in 2D and 3D using the Discovery Studio Visualizer. The following discussion compares the interactions of the standard ligand and gallic acid 4-O-(6-galloyl glucoside) with the nNOS receptor, as it was previously mentioned that gallic acid 4-O-(6-galloyl glucoside) had a more negative binding energy than the standard ligand. A comparison of the interactions of the standard ligand with those of gallic acid 4-O-(6-galloyl glucoside) is shown in Figure 4. According to the results of standard ligand binding in Figure 4, interactions with nine amino acid residues were formed, including GLU 597, TRP 592, ASP 602, PHE 589, PRO 570, ARG 608, ARG 486, and MET 594. These results were used to compare the bound amino acid residues and the types of bond formed with the proposed gallic acid derivative ligands in inhibiting the nNOS protein. All proposed ligands had the same interaction with residue GLU 597. The docking results of the proposed compound gallic acid 4-O-(6-galloyl glucoside) in Figure 5 show interactions with residues including GLU 597, TRP 592, ARG 608, GLN 483, GLY 591, and PRO 570. The proposed compound gallic acid 4-O-(6-galloyl glucoside) showed more hydrogen bonds to the residues than the standard ligand, indicating that the interaction was formed in a more stable binding site area. The visualization results in Figure 5 demonstrated the similarity of the bound amino acid residues to those of the standard ligand, such as GLU 597 and TRP 592. Both residues had the same hydrogen bonding type as with the standard ligand. Further analysis in Figure 5 revealed that the interaction of gallic acid 4-O-(6-galloyl glucoside) with the nNOS receptor resulted in seven hydrogen bonds, of which two were bound to the catalytic residue GLU 597. In contrast, the standard ligand in Figure 4 formed only one hydrogen bond with the GLU 597 residue. Since GLU 597 is a catalytic residue directly involved in the catalyzed reaction, the specific bond on that residue needs to be considered [31]. Furthermore, gallic acid 4-O-(6-galloyl glucoside) produced more hydrogen bonds than the standard ligand, leading to higher conformational stability when interacting with the nNOS receptor. Another similar residue that interacted was ARG 608. The interaction type of ARG 608 was an unfavorable positive-positive interaction with the standard ligand but a hydrogen bond with gallic acid 4-O-(6-galloyl glucoside).
The type of interaction formed at residue ARG 608 thus differed between the standard ligand and gallic acid 4-O-(6-galloyl glucoside); however, the hydrogen bonding of gallic acid 4-O-(6-galloyl glucoside) is the better type of interaction. When reviewed based on the similarity of the interactions formed by gallic acid 4-O-(6-galloyl glucoside) and by the standard ligand with the receptor, gallic acid 4-O-(6-galloyl glucoside) showed the same types of interaction and the same interacting residues. Therefore, gallic acid 4-O-(6-galloyl glucoside) had nearly the same binding position as the standard ligand. The similarity of the docking position is emphasized in Figure 6 when viewed from the same visualization angle. Gallic acid 4-O-(6-galloyl glucoside) is expected to exhibit the same biological activity as the standard ligand since it was bound to identical amino acid residues. Hydrogen bonding is a specific interaction that is essential in the ligand-receptor interaction process, since it contributes to increasing the compound's affinity for the receptor. Hydrogen bonds may form through electrostatic interactions or through hydrogen donors and acceptors. This is in line with the hydrogen bonding in the crystal structure and the docking validation results, in which the GLU 597 residue is predicted to have a stable interaction. The analysis showed that gallic acid and its derivatives 4-O-methyl gallic acid and epigallocatechin gallate had specific bonds with the GLU 597 residue. However, these compounds have not demonstrated good inhibitory potential against the nNOS protein because they have higher binding energies than the nNOS standard ligand. The three compounds have not shown potential biological activity as antidepressants. Gallic acid 4-O-(6-galloyl glucoside) had better inhibitory activity when bound to the nNOS receptor than the standard ligand and the other proposed gallic acid derivative ligands. According to the study, gallic acid 4-O-(6-galloyl glucoside) deserves further research on its synthesis, and molecular dynamics simulations should be conducted to determine how the compound interacts with water in the body and to evaluate its activity in vivo or in vitro.

Conclusion
According to the study on gallic acid and its derivatives, gallic acid 4-O-(6-galloyl glucoside) exhibited nNOS inhibitory activity because it has a more negative binding energy than the standard ligand and a specific bond with residue GLU 597.
2022-08-05T15:21:30.283Z
2022-05-17T00:00:00.000
{ "year": 2022, "sha1": "fd8dc488663eb827c576e27d30e93635769d88e8", "oa_license": "CCBYSA", "oa_url": "https://ejournal.undip.ac.id/index.php/ksa/article/download/42223/22001", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "003e21bca7811695b0bfcbb19f4c93f092ee7c8e", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [] }
264783360
pes2o/s2orc
v3-fos-license
Ankylosis of the hips and knees due to sickle cell disease

This is a case report of a 29-year-old Saudi male with sickle cell disease (SCD) with severe stiffness of his joints, mainly both knees and hips, secondary to complications of SCD. He was severely crippled: unable to sit, stand or walk, and had been bedridden for 8 years when he presented to us. Radiographs showed fusion of both knees and hips. There was no evidence of active osteomyelitis on gallium scan. The patient's hemoglobin S was decreased to levels below 30% by exchange transfusion. Bilateral total hip replacement, as well as unilateral total knee replacement, was carried out to improve his level of function. There is only one previously reported case of such severe and multiple joint complications in a single patient suffering from SCD. The increased life expectancy that medical advances have offered to sickle-cell patients has led to the appearance of sickle-cell-related complications which were previously only seen rarely. These complications were successfully managed and the patient was able to move and transfer using a wheelchair.

Introduction
Sickle cell disease (SCD) is an autosomal recessive genetic disorder characterized primarily by chronic anemia and periodic episodes of pain, affecting millions throughout the world 1 . SCD patients are at increased risk of bony complications of the disease, such as osteomyelitis, osteonecrosis, osteopenia and ankylosis 2 . SCD is an important public health concern in the Kingdom of Saudi Arabia (KSA), particularly in the Eastern and South West provinces 3 . Painful crises and avascular necrosis of the femoral head are common complications observed in these regions 3 . This case presentation is that of a patient with severe bony complications resulting from his SCD. Total bilateral hip and unilateral knee arthroplasty were performed to correct hip and knee ankylosis secondary to SCD. To our knowledge, total bilateral hip and unilateral knee arthroplasty has not been described previously in the literature except in one case of an African male with a similar presentation 4 .

Case report
A 29-year-old Saudi male with SCD was referred from the general surgery service to improve his poor physical condition. He was severely crippled and bed bound for 8 years with severe bilateral knee and hip dysfunction secondary to complications of SCD. Past history included an admission to hospital 8 years before for fever, swelling and severe pain of the right knee. Needle aspiration, as well as irrigation and debridement, was performed and the patient was diagnosed with septic arthritis. Since then he had developed progressive joint stiffness involving the dorsal and lumbar spines as well as the lower extremities. On physical examination his knee range of motion (ROM) bilaterally was almost nonexistent, with the knees held in full extension. His hip ROM was also severely limited, with no measurable movement. Loss of hip flexion was noted to result in the most severe functional loss, with the hips fixed in full extension, 15° of adduction and 50° of external rotation. He required a 2-person assist as well as a walker to weight bear; however, he was unable to mobilize. Serial radiological studies were done. Radiological investigations demonstrated severe avascular necrosis and ankylosis of the hips (Figure 1) and severe erosion of the articular surfaces of the knees as well as ankylosis (Figure 2). Computed tomography (CT) scans of the hips and knees showed findings similar to those of the X-rays.
Shoulder and spine X-rays were done for further assessment. A gallium bone scan demonstrated no evidence of active osteomyelitis. Our patient had a proper preoperative evaluation and blood transfusion to prevent adverse outcomes and sickle cell complications postoperatively 5 . We took him to surgery for bilateral cementless total hip replacement (THR) in one session. The patient was operated on with the aim of providing movement at the hips and knees and of recovering his ability to sit, stand, transfer and balance, and to improve personal hygiene. Bilateral THR was done successfully (Figure 3). The procedures were performed utilizing the Hardinge lateral approach. Particular attention was paid to padding bony prominences and to skin care in view of his poor skin condition. Intraoperatively, the bone was noted to be very fragile. The patient was kept in the intensive care unit for two days postoperatively for observation and pain control. Following the bilateral THR, a left total knee replacement (TKR) was performed one month later (Figure 4). Intraoperatively, the bone quality was noted to be severely deficient. The quadriceps muscle was adherent to the femur and atrophied; therefore, a V-Y quadricepsplasty was done. Soft tissue was mobilized carefully around the joint prior to component insertion. Intraoperatively, knee flexion approached 50°. Rehabilitation after the left TKR was severely restricted secondary to poor bone quality. This prevented adequate ROM exercises and function, and ROM gains at the knee were minimal. Poor bone quality also limited the patient's ability to progress to functional lower extremity weight-bearing activities. Postoperatively, rehabilitation, patient education, transfer training and functional rehabilitation were carried out. This allowed the patient to transfer safely and independently from bed to wheelchair without much pain, and improved his quality of life by changing his functional ability and allowing him the freedom to mobilize independently with a wheelchair.

Discussion
Unlike normal red blood cells (RBC), sickled cells, due to their abnormal morphology, are unable to negotiate small blood vessels, resulting in arterial occlusion and subsequent ischemia. This process can ultimately damage tissues and vital organs. The prevalence of avascular necrosis (AVN) in sickle-cell-disease patients is increasing, especially with the increased life expectancy in these patients. It is believed that the AVN results from repeated episodes of localized epiphyseal, metaphyseal and diaphyseal bone marrow infarction, resulting in ingrowth of new bone as well as diffuse sclerosis resulting from vascular occlusion by sludging of sickle cells in the sinusoids. In general, bilateral simultaneous THR has demonstrated better functional outcome than the staged procedure in patients with the same condition, with no significant increase in the rate of dislocation or thromboembolic events 6 .

A similar case was published in 2000: a Congolese sickle cell patient was referred to Belgium for management of severe stiffness of all his major joints. That patient was initially managed by a staged bilateral Girdlestone procedure, with a subsequent surgical site infection of the left hip. The infection was managed with appropriate antibiotics and debridement. Once the infection was treated, the patient had a staged bilateral THR procedure. The patient regained his ability to walk with crutches after the surgeries 4 . Another case was reported in 2008: a 25-year-old female with severe ankylosis of her hips and knees secondary to rheumatoid arthritis, whose function was severely limited. She underwent staged bilateral THR, followed by staged bilateral TKR; the functional outcome was excellent in that case as well 7 . The increased life expectancy that medical advances have offered to sickle-cell patients has led to the appearance of sickle-cell-related complications which were previously only seen rarely. Orthopedic surgeons should be aware of the optimal management options as well as the possible operative and postoperative complications in patients with this disease.

Consent
Written consent was obtained from the patient and the next of kin for the publication of the clinical details and clinical images related to the case report.

Author contributions
Both authors contributed equally to the study; the corresponding author wrote the manuscript.

Competing interests
No competing interests were disclosed.

Grant information
The author(s) declared that no grants were involved in supporting this work.
2016-05-12T22:15:10.714Z
2012-10-19T00:00:00.000
{ "year": 2012, "sha1": "900cd6b1c09843beb7c54ef2645c1cfe9e4b0fdc", "oa_license": "CCBY", "oa_url": "https://f1000research.com/articles/1-32/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "78b39d1193de418893e148994a2323a4a6148b49", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
267031619
pes2o/s2orc
v3-fos-license
Interactions of obesity, body shape, diabetes and sex steroids with respect to prostate cancer risk in the UK Biobank cohort

Abstract
Background: Obesity and diabetes are associated inversely with low-grade prostate cancer risk and affect steroid hormone synthesis, but whether they modify each other's impact on prostate cancer risk remains unknown.
Methods: We examined the independent associations of diabetes, body mass index (BMI), 'a body shape index' (ABSI), hip index (HI), circulating testosterone, sex hormone binding globulin (SHBG) (per one standard deviation increase) and oestradiol ≥175 pmol/L with total prostate cancer risk using multivariable Cox proportional hazards models for UK Biobank men. We evaluated multiplicative interactions (p MI) and additive interactions (relative excess risk from interaction (p RERI), attributable proportion (p AP), synergy index (p SI)) with obesity (BMI ≥30 kg/m2) and diabetes.
Results: During a mean follow-up of 10.3 years, 9417 incident prostate cancers were diagnosed in 195,813 men. Diabetes and BMI were associated more strongly inversely with prostate cancer risk when occurring together (p MI = 0.0003, p RERI = 0.032, p AP = 0.020, p SI = 0.002). ABSI was associated positively in obese men (HR = 1.081; 95% CI = 1.030–1.135) and men with diabetes (HR = 1.114; 95% CI = 1.021–1.216). The inverse associations with obesity and diabetes were attenuated for high-ABSI ≥79.8 (p MI = 0.022, p RERI = 0.008, p AP = 0.005, p SI <0.0001 obesity; p MI = 0.017, p RERI = 0.047, p AP = 0.025, p SI = 0.0005 diabetes). HI was associated inversely in men overall (HR = 0.967; 95% CI = 0.947–0.988). Free testosterone (FT) was associated most strongly positively in normal weight men (HR = 1.098; 95% CI = 1.045–1.153) and men with diabetes (HR = 1.189; 95% CI = 1.081–1.308). Oestradiol was associated inversely in obese men (HR = 0.805; 95% CI = 0.682–0.951). The inverse association with obesity was stronger for high-FT ≥243 pmol/L (p RERI = 0.040, p AP = 0.031, p SI = 0.002) and high-oestradiol (p RERI = 0.030, p AP = 0.012, p SI <0.0001). The inverse association with diabetes was attenuated for high-FT (p MI = 0.008, p RERI = 0.015, p AP = 0.009, p SI = 0.0006). SHBG was associated inversely in men overall (HR = 0.918; 95% CI = 0.895–0.941), more strongly for high-HI ≥49.1 (p MI = 0.024).
Conclusions: Obesity and diabetes showed synergistic inverse associations with prostate cancer risk, likely involving testosterone reduction for diabetes and oestrogen generation for obesity, which were attenuated for high-ABSI. HI and SHBG showed synergistic inverse associations with prostate cancer risk.

| INTRODUCTION
Contrary to the conventional expectation that obesity and diabetes are associated with higher risk of cancer, 1,2 they are both associated with lower risk of low-grade prostate cancer. 3,4 Given that obesity is associated with higher risk of diabetes, a question arises whether diabetes development is required for obesity to influence prostate cancer risk. Further, low-grade prostate cancer is usually androgen-dependent 5 and both obesity and diabetes are associated with lower androgen levels. 6,7 A question, therefore, arises whether androgen suppression is involved in the inverse associations of obesity and diabetes with prostate cancer risk. Furthermore, obesity is associated with higher oestrogen levels, 6 raising the question of oestrogen involvement in prostate cancer risk.
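The abstract above reports additive interaction measures (RERI, AP, SI) alongside multiplicative interaction p values. As a point of reference, a minimal sketch of how these quantities are conventionally derived from the three joint-exposure hazard ratios is shown below, using the standard epidemiological definitions; the hazard ratio values are placeholders and are not estimates from the paper, and the paper's own implementation (including confidence intervals and p values) is not reproduced here.

```python
def additive_interaction(hr10, hr01, hr11):
    """Additive interaction measures on the hazard-ratio scale.

    hr10: HR for exposure A alone; hr01: HR for exposure B alone;
    hr11: HR for joint exposure, all relative to the doubly unexposed group.
    """
    reri = hr11 - hr10 - hr01 + 1                    # relative excess risk due to interaction
    ap = reri / hr11                                 # attributable proportion due to interaction
    si = (hr11 - 1) / ((hr10 - 1) + (hr01 - 1))      # synergy index
    return reri, ap, si

# Placeholder hazard ratios for two exposures (illustration only).
reri, ap, si = additive_interaction(hr10=0.90, hr01=0.85, hr11=0.70)
print(f"RERI = {reri:.2f}, AP = {ap:.2f}, SI = {si:.2f}")
```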
Absolute abdominal size evaluated with waist circumference (WC) is associated strongly positively with overall size evaluated with body mass index (BMI) and both reflect general obesity.8 Correspondingly, both WC and BMI have been associated inversely with prostate cancer risk in UK Biobank.9 Relative abdominal size, however, evaluated with 'a body shape index' (ABSI), has not been associated with prostate cancer risk in men overall,8,10 despite an inverse association of ABSI with circulating total testosterone (TT) and free testosterone (FT).6 This is not paradoxical, because WC and ABSI define abdominal obesity differently. While WC is a better measure of abdominal fat quantity compared to BMI, which reflects overall fat quantity, ABSI reflects fat distribution.11 ABSI compares the abdominal size of a given individual with the average abdominal size of individuals with the same BMI and height and is thus uncorrelated with BMI12 and complements rather than replaces BMI. As the factors contributing to fat accumulation can differ from the factors contributing to fat distribution, it is important to clarify the association of ABSI with prostate cancer risk.

Hip circumference (HC), similarly to WC, reflects fat accumulation and is correlated strongly positively with BMI,8 while hip index (HI), in analogy to ABSI, reflects fat distribution.11 HI compares hip size of a given individual with the average hip size of individuals with the same BMI and height and is thus uncorrelated with BMI.13 Our previous cancer-wide study in UK Biobank suggested an inverse association of HI with prostate cancer risk.8 Clarifying associations with HI is important because aromatase levels are highest in gluteofemoral adipose tissue14 and, correspondingly, circulating oestrogen levels are higher for higher HI.6

In this study, we used data from UK Biobank with additional follow-up time compared to our previous study,8 to clarify the relationships between BMI, ABSI, HI, diabetes, circulating sex steroids, sex hormone binding globulin (SHBG) and prostate cancer risk in men overall and according to BMI categories and diabetes status.

| Study population

UK Biobank has recruited and is following-up half a million participants aged 40-70 years at baseline (between 2006 and 2010), living within 40 km of an assessment centre in England, Scotland and Wales, and registered with the National Health Service.15 We restricted this study to men with self-reported white ancestry, as other ethnic groups were limited. We additionally excluded 33,243 (14.5%) men with prevalent cancer at recruitment, defined as in Christakoudi et al.,8 missing anthropometric measurements, mismatch between the genetic and self-reported sex, sex steroid treatment or prostate surgery (see Table S1 for details on exclusions and Table S2 for lists of excluded medications).
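A minimal sketch of how such sequential exclusions could be applied is shown below, purely for illustration; the data frame and column names (ukb, prevalent_cancer, genetic_sex and so on) are hypothetical placeholders rather than UK Biobank field identifiers, and the actual exclusion definitions follow Table S1 and Table S2.

```r
# Illustrative cohort construction (hypothetical column names, not UK Biobank field IDs)
library(dplyr)

cohort <- ukb %>%
  filter(
    sex == "Male",
    ethnicity == "White",                 # self-reported white ancestry
    !prevalent_cancer,                    # no cancer diagnosis before recruitment
    !is.na(waist_cm), !is.na(hip_cm),
    !is.na(weight_kg), !is.na(height_cm), # complete anthropometry
    genetic_sex == reported_sex,          # no genetic/self-reported sex mismatch
    !sex_steroid_treatment,               # no sex steroid medication (Table S2)
    !prostate_surgery                     # no prior prostate surgery
  )

nrow(cohort)  # should approach the 195,813 men analysed
```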
The outcome of interest was total prostate cancer incidence, defined as the first primary prostate cancer diagnosed after recruitment, with code C61 from the 10th version of the International Statistical Classification of Diseases (ICD10) and malignant behaviour (behavioural code 3 or 5), with no separation by grade or aggressiveness at diagnosis, as this information is not available in UK Biobank. Follow-up was censored at the date of diagnosis for first primary incident cancers in locations other than the prostate, defined as in Christakoudi et al.,8 excluding non-melanoma skin cancers but including skin squamous-cell carcinomas. Follow-up was censored for all men remaining cancer-free at the date of the last complete cancer registry (31 March 2020 for England and Scotland, 31 December 2016 for Wales), or at the date of death, if earlier.

| Anthropometric indices and diabetes status

Anthropometric measurements were obtained by dedicated technicians according to established protocols.16 WC was measured at the natural indent or the umbilicus and HC at the widest point. We calculated ABSI with coefficients from the National Health and Nutrition Examination Survey (NHANES)12 but HI with coefficients derived for UK Biobank men,11 as HI calculated with coefficients from NHANES13 was correlated inversely with BMI in UK Biobank men.8 Diabetes status at recruitment was based on self-reported information about diabetes mellitus (without distinction between type 1 and type 2), insulin treatment, treatment with antidiabetic medications (listed in Table S2)17 or glycated haemoglobin HbA1c ≥48 mmol/mol (see further details in Appendix S1).

| Biomarker measurements

Blood samples were obtained throughout the day with no requirement for fasting. Biomarker measurements were performed by UK Biobank. Serum levels of oestradiol, testosterone and SHBG were measured with chemiluminescent immunoassays (competitive binding for sex steroids, two-step sandwich for SHBG) on a Beckman Coulter DXI 800 analyser.18 We calculated FT with law-of-mass-action equations,19 using measured albumin. We used HbA1c as a biomarker of glucose metabolism, as this is not affected by fasting and provides information on glucose status over the past 3 months. HbA1c was measured in red blood cells with high-performance liquid chromatography on a Bio-Rad VARIANT II Turbo analyser. To accommodate undetected values for oestradiol in men (91%), we considered oestradiol dichotomised at the lowest detected level (175 pmol/L). For testosterone, SHBG and HbA1c, values below or above the limits of detection were few and were set to either half the lowest detected level or the upper limit value, correspondingly.
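The index formulas were dropped in this extraction. As a reference point only: the NHANES-based ABSI of reference 12 is conventionally WC / (BMI^(2/3) × height^(1/2)), with WC and height in metres, and HI takes an analogous allometric form, HC × height^a × weight^(−b), with cohort-specific exponents; the exact coefficients and scaling used here are those of the cited references and are not reproduced. The R sketch below shows the conventional ABSI calculation and the biomarker handling described in the text (oestradiol dichotomised at 175 pmol/L; sparse out-of-range values truncated); column names and the assay detection limit are hypothetical.

```r
# Conventional ABSI (assumed NHANES form; the paper's exact coefficients/scaling may differ)
absi <- function(waist_m, weight_kg, height_m) {
  bmi <- weight_kg / height_m^2
  waist_m / (bmi^(2 / 3) * sqrt(height_m))
}

cohort$bmi  <- with(cohort, weight_kg / (height_cm / 100)^2)
cohort$absi <- with(cohort, absi(waist_cm / 100, weight_kg, height_cm / 100))

# Oestradiol: ~91% of values undetected, so dichotomise at the lowest detected level
cohort$oestradiol_high <- cohort$oestradiol >= 175                # pmol/L

# Testosterone/SHBG/HbA1c: set rare below-detection values to half the lowest detected level
half_lod <- function(x, lod) ifelse(!is.na(x) & x < lod, lod / 2, x)
cohort$testosterone <- half_lod(cohort$testosterone, lod = 0.35)  # hypothetical assay limit

# z-scores (value minus mean, divided by SD) used for continuous exposures
zscore <- function(x) (x - mean(x, na.rm = TRUE)) / sd(x, na.rm = TRUE)
cohort$bmi_z  <- zscore(cohort$bmi)
cohort$absi_z <- zscore(cohort$absi)
```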
| Definition and selection of covariates In addition to age at recruitment, we selected the following potential confounders a priori, based on literature reports of their associations with the exposures and the outcome (see related references in Table S3): height, weight change within the year preceding recruitment (weight loss, stable weight, weight gain), smoking status (never, former occasional, former regular, current), alcohol consumption (≤3 times/ month, ≤4 times/week, daily), physical activity (less active, moderately active, active), Townsend deprivation index quintiles (as a proxy of socio-economic status) and family history of cancer (none, breast/lung/bowel, prostate).Given that we examined associations with biomarkers in blood, we also considered fasting time (0-2 h, 3-4 h, ≥5 h) and time of blood collection (<12:00, 12:00 to <16:00, ≥16:00) (see further details on definitions in Appendix S1).We then examined their pairwise associations with the exposures (using linear regression models for anthropometric indices, testosterone, and SHBG, and logistic regression models for oestradiol) and with the outcome, prostate cancer risk (using Cox proportional hazards models) (Figure S1).We excluded physical activity and fasting time from the final list of covariates, as these were not associated with prostate cancer risk.We retained the remaining covariates because these were associated with at least some of the exposures, as well as with prostate cancer risk.We used only family history of prostate cancer as family history of other cancers was not associated with prostate cancer risk. | Statistical analysis We used Stata-13 for the statistical analyses and R version 4.1.3 20for data management. In the main analyses, we used BMI, ABSI and HI on a standardised continuous scale (z-scores, value minus mean, divided by standard deviation, SD).We estimated hazard ratios (HR) and 95% confidence intervals (CI) with delayed-entry Cox proportional hazards models, which are conditional on surviving free of cancer to cohort recruitment.The underlying time scale was age, with origin at the date of birth, entry time was the date at recruitment, and exit time was the date of diagnosis of the first incident cancer, or death, or last complete follow-up, whichever occurred first.We first examined a model including BMI, ABSI, HI (per one SD increment, HR per_SD ) and diabetes (yes vs. 
no, HR Yes_No) as exposures. We then examined models additionally including either FT and SHBG jointly as exposures, or oestradiol or TT individually. All models were stratified by age at recruitment, region of the assessment centre, and family history of prostate cancer, and adjusted for height (z-scores), recent weight change, smoking status, alcohol consumption, Townsend deprivation index and time of sample collection. We tested the proportional hazards assumption based on Schoenfeld residuals and established that family history of prostate cancer must be used as a stratifying variable for the assumption to be valid. Missingness for covariates and diabetes status was very low (<1% for all, except <2% for recent weight change). Nevertheless, we performed multiple sequential imputations with chained equations (function mi impute in Stata-13, m = 5 imputed datasets) using multinomial logistic regression models (for recent weight change, smoking status, alcohol consumption, Townsend deprivation index quintiles and time of sample collection) or a logistic regression model (for diabetes status), with stratification for region and adjustment for age at recruitment, BMI, ABSI and family history of prostate cancer. We derived the estimates of coefficients and standard errors with Rubin's combination rules (function mi estimate in Stata-13).21 Tests of statistical significance were two-sided.

To explore heterogeneity by BMI and diabetes status, we examined groups of men according to World Health Organisation categories of BMI (normal weight BMI <25 kg/m2; overweight BMI = 25 to <30 kg/m2; obese BMI ≥30 kg/m2) and groups of men with and without diabetes. Only for subgroup analysis, we considered men with unknown diabetes status as no-diabetes because group size was not allowed to vary between the imputed datasets. We evaluated multiplicative interactions with BMI on a continuous scale or with diabetes status with a Wald test for the corresponding interaction term, included for each anthropometric index or biomarker individually in the fully adjusted model (p MI). We additionally calculated the Relative Excess Risk from Interaction with obesity or diabetes status (RERI; ranges between −infinity and +infinity; null value 0), the attributable proportion due to interaction (AP; ranges between −1 and +1; null value 0), and the ratio between combined and individual effects (synergy index, SI; ranges between 0 and +infinity; null value 1)22 from fully adjusted models including a cross-classification of each body shape index or biomarker individually (dichotomised as high/low with respect to ≥median, or ≥175 pmol/L for oestradiol detection) with either obese (BMI ≥30 kg/m2, yes/no) or diabetes (yes/no), or a cross-classification between obese and diabetes. To ensure that all three measures of interaction on the additive scale are directionally consistent with each other, we used as reference the cross-classification category with the lowest observed prostate cancer risk, as recommended in.23 Thus, in the equations below, HR_High-High indicates the cross-classification category expected to have the highest prostate cancer risk when the two examined factors were coded as the opposite states to those in the reference cross-classification category with the observed lowest risk. This approach defined RERI, AP and SI with respect to excess risk, in the standard form:

RERI = HR_High-High − HR_High-Low − HR_Low-High + 1
AP = RERI / HR_High-High
SI = (HR_High-High − 1) / [(HR_High-Low − 1) + (HR_Low-High − 1)]
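As a worked illustration of these additive-interaction measures, the point estimates follow directly from the three non-reference hazard ratios of the cross-classified model; this is only a sketch, since the paper obtained confidence intervals with a delta-method equivalent under multiple imputation in Stata, which is not reproduced here.

```r
# Point estimates of RERI, AP and SI from a cross-classified model.
# hr_hh: category coded opposite to the lowest-risk reference on both factors;
# hr_hl / hr_lh: categories differing from the reference on one factor only.
additive_interaction <- function(hr_hh, hr_hl, hr_lh) {
  reri <- hr_hh - hr_hl - hr_lh + 1
  ap   <- reri / hr_hh
  si   <- (hr_hh - 1) / ((hr_hl - 1) + (hr_lh - 1))
  c(RERI = reri, AP = ap, SI = si)
}

# Example with made-up hazard ratios (not values from the paper):
additive_interaction(hr_hh = 1.30, hr_hl = 1.20, hr_lh = 1.15)
#> RERI approx. -0.05, AP approx. -0.04, SI approx. 0.86 (sub-additive, i.e. a negative interaction)
```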
We calculated confidence intervals and p-values (p RERI, p AP, p SI) with the multiple imputations equivalent of the delta method for non-linear combinations (mi estimate in Stata-13).21 For SI, we calculated confidence intervals for log-transformed SI and transformed these back to the SI scale, similarly to the approach used in function reri, which we could not use because it is only available in the latest release of Stata.24

To explore potential non-linearity, we examined quintile categories of ABSI, HI, FT, TT and SHBG, decile categories of FT and TT, and detailed categories of BMI as defined in reference 25, and tested non-linearity with the Wald test for the non-linear (second spline) term from fully adjusted models including restricted cubic splines for the exposure of interest (knots at ±2SD and the mean).

For comparison with traditional body shape measures, we examined associations of WC and HC individually with total prostate cancer risk, adjusting for height, diabetes status and covariates.

| Sensitivity analyses

To explore the influence of covariates, we compared a minimally adjusted model (including BMI, ABSI, HI and height, and stratified by age at recruitment) with a model additionally stratified by region and family history of prostate cancer and further adjusted for covariates, and with the model additionally including diabetes status (the main fully adjusted model), and calculated adjustment differences in HR estimates for BMI, ABSI and HI compared to the minimally adjusted model. To examine the influence of adjustment for biomarkers, we calculated adjustment differences in HR estimates for BMI, ABSI, HI and diabetes status, comparing fully adjusted models with and without biomarkers, restricted to men with available biomarker measurements, and for HbA1c, also to men without known diabetes. To explore reverse causality, we excluded from the fully adjusted model men with less than 2 years of follow-up, lagged the entry time with 2 years, and calculated lag differences in HR estimates for BMI, ABSI, HI and diabetes status compared to the fully adjusted model including all men. We have highlighted adjustment or lag differences ≥2%.
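To make the main model concrete, a minimal R sketch of the delayed-entry Cox regression described above is given below; the paper used Stata-13, so this is an illustration under assumed, hypothetical variable names rather than the authors' code. Age is the underlying time scale, with entry at the age at recruitment, and stratification and adjustment follow the Methods.

```r
library(survival)

# Delayed-entry (left-truncated) Cox model with age as the time scale
fit <- coxph(
  Surv(age_at_recruitment, age_at_exit, prostate_cancer) ~
    bmi_z + absi_z + hi_z + diabetes +
    height_z + weight_change + smoking + alcohol +
    townsend_quintile + collection_time +
    strata(age_group, region, family_history_prostate),
  data = cohort
)

summary(fit)   # HR per SD for BMI, ABSI and HI; HR yes vs. no for diabetes
cox.zph(fit)   # Schoenfeld-residual check of the proportional hazards assumption
```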
| Cohort characteristics During a mean follow-up of 10.3 years, 9417 incident prostate cancers were ascertained in 195,813 men with mean BMI = 27.8 kg/m 2 and mean ABSI = 79.8(Table 1).Less than 1% of men (n = 442) were underweight (BMI <18.5 kg/ m 2 ), so we included these in the normal weight category.Half of men were overweight, one quarter were obese, and 13,987 (7.1%) had diabetes.There were large differences in WC and HC between normal weight and obese men, but differences in ABSI and HI were minimal.Men with diabetes had higher BMI and larger ABSI compared to men without diabetes, with little difference in HI.Obese men were more likely to have diabetes compared to normal weight men.TT, FT and SHBG were lower in obese men and men with diabetes compared to normal weight men and men without diabetes, correspondingly, while oestradiol was higher in obese men, with minimal difference according to diabetes status (Table 1).Obese men and men with diabetes were older at recruitment, had higher Townsend deprivation index and were more likely to be former regular smokers but less likely to consume alcohol or to have family history of prostate cancer compared to, correspondingly, non-obese men and men without diabetes (Table S4).Obese men, however, were more likely to have gained weight during the year preceding recruitment compared to non-obese men, while men with diabetes were more likely to have lost weight compared to men without diabetes (Table S4). | Associations of diabetes and anthropometric indices with prostate cancer risk In men overall, diabetes was associated inversely with prostate cancer risk (HR Yes_No = 0.772, 95% CI = 0.709-0.842),more strongly in obese men (HR Yes_No = 0.691, 95% CI = 0.604-0.790)(Figure 1).Although BMI was also associated inversely in men overall (HR per_SD = 0.959; 95% CI = 0.937 to 0.982) and more strongly in obese men (HR per_SD = 0.905; 95% CI = 0.847-0.967), it was associated positively in normal weight men (HR per_SD = 1.186, 95% CI = 1.047-1.344),with no evidence for association in overweight men.The inverse association with BMI was stronger in men with diabetes (HR per_SD = 0.832, 95% CI = 0.768-0.902)than in men without diabetes (p MI = 0.0003 for an inverse multiplicative interaction between BMI (z-scores) and diabetes).Prostate cancer risk was lowest when obese and diabetes occurred together and the risk for non-obese without diabetes (expected to be the highest-risk group) was lower than the additive individual effects of non-obese and no-diabetes (RERI NonObese_NoDiabetes = −0.248;95% CI = −0.474 to −0.022; p RERI = 0.032; p AR = 0.020; p SI = 0.002) (Figure 1), such that the inverse association was strongest when obesity and diabetes occurred together.ABSI was associated positively with prostate cancer risk but only in obese men (HR per_SD = 1.081, 95% CI = 1.030 to 1.135, p MI = 0.022 for a positive multiplicative interaction with BMI) and in men with diabetes (HR per_SD = 1.114, 95% CI = 1.021 to 1.216, p MI = 0.017 for a positive multiplicative interaction with diabetes) (Figure 1).Prostate cancer risk was lowest when low-ABSI <79.8 and obese occurred together and the risk for high-ABSI ≥79.8 and non-obese (expected to be the highest-risk group) was lower than the additive individual effects of high-ABSI and non-obese (RERI HighABSI_NonObese = −0.156;95% CI = −0.271 to −0.041; p RERI = 0.008; p AR = 0.005; p SI <0.0001).The association and interaction patterns were similar when considering diabetes instead of obesity (RERI 
HighABSI_NoDiabetes = −0.229; 95% CI = −0.455 to −0.003; p RERI = 0.047; p AP = 0.025; p SI = 0.0005), such that the inverse associations with both obesity and diabetes were attenuated for high-ABSI. HI was associated inversely with prostate cancer risk in men overall (HR per_SD = 0.967; 95% CI = 0.947-0.988), without strong evidence for interactions with BMI or diabetes status (Figure 1).

TABLE 1 note: comparisons between BMI categories and between diabetes status groups were performed with one-way ANOVA for anthropometric indices and log-transformed biomarkers and chi-square tests for categorical variables and oestradiol detection. All differences were significant at p < 0.0001, except p = 0.016 for HI and p = 0.003 for oestradiol detection comparing diabetes yes versus no. Rates are incident prostate cancer cases per 1,000,000 person-years of follow-up in each group; oestradiol is classed as detected at ≥175 pmol/L.

FIGURE 1 Associations of diabetes and anthropometric indices with prostate cancer risk. Estimates are from Cox proportional hazards models in men overall, including diabetes (no/yes) and BMI, ABSI and HI (z-scores) as exposures, stratified by age at recruitment, region of the assessment centre and family history of prostate cancer, and adjusted for height, recent weight change, smoking status, alcohol consumption, Townsend deprivation index and time of blood collection. Low/high groups are cross-classifications of each body shape index, dichotomised at the median (ABSI ≥79.8; HI ≥49.1), with obesity (BMI ≥30 kg/m2) or diabetes, or a cross-classification between obesity and diabetes. MI, multiplicative interaction; RERI, relative excess risk from interaction; AP, attributable proportion due to interaction; SI, synergy index; NW, BMI <25 kg/m2; OW, BMI 25 to <30 kg/m2.

| Associations of sex steroids with prostate cancer risk

FT was associated positively with prostate cancer risk in men overall (HR per_SD = 1.067; 95% CI = 1.041 to 1.093), but most strongly in normal weight men (HR per_SD = 1.098; 95% CI = 1.045 to 1.153), with less evidence for association in obese men, although without evidence for a multiplicative interaction with BMI (p MI = 0.443, Figure 2). Prostate cancer risk, however, was lowest when high-FT ≥243 pmol/L and obese occurred together and the risk for
low-FT and non-obese (expected to be the highest-risk category) was lower than the additive individual effects of low-FT and non-obese (RERI LowFT_NonObese = −0.127;95% CI = −0.249 to −0.006; p RERI = 0.040; p AR = 0.031; p SI = 0.002), such that the inverse association with obese was stronger for high-FT.The positive association with FT was stronger in men with diabetes (HR per_SD = 1.189; 95% CI = 1.081 to 1.308) than in men without diabetes (p MI = 0.008 for a positive multiplicative interaction with diabetes).Prostate cancer risk was lowest when low-FT and diabetes occurred together and the risk for high-FT without diabetes (expected to be the highest-risk category) was lower than the additive individual effects of high-FT and no-diabetes (RERI HighFT_NoDiabetes = −0.322;95% CI = −0.580 to −0.064; p RERI = 0.015; p AR = 0.009; p SI = 0.0006), such that the inverse association with diabetes was attenuated for high-FT (Figure 2).TT was similarly associated positively with prostate cancer risk only in normal weight men and in men with diabetes, although there was less evidence for interactions with obese or diabetes (Figure S2). Oestradiol was associated inversely with prostate cancer risk only in obese men (HR Yes_No = 0.805; 95% CI = 0.682 to 0.951 for oestradiol ≥175 pmol/L), although without evidence for a multiplicative interaction with BMI (p MI = 0.408) (Figure 2).Prostate cancer risk was lowest when high-oestradiol and obese occurred together, and the risk for low-oestradiol and non-obese (expected to be the highest-risk category) was lower than the additive individual effects of low-oestradiol and non-obese (RERI LowOestradiol_NonObese = −0.268;95% CI = −0.509 to −0.027; p RERI = 0.030; p AP = 0.012; p SI <0.0001), such that the inverse association with obesity was stronger for high-oestradiol.There was no evidence for interaction of oestradiol with diabetes (Figure 2). Further adjustment of models for oestradiol for FT and SHBG and adjustment of models for FT for oestradiol made no material difference to the estimates (Figure S3). SHBG was associated inversely with prostate cancer risk in men overall (HR per_SD = 0.918; 95% CI = 0.895 to 0.941), with little evidence for interactions with obesity or diabetes (Figure 2), but with some evidence for an inverse multiplicative interaction with HI (p MI = 0.024) (Figure S2). | Non-linearity of the associations with prostate cancer risk There was strong evidence for non-linearity of the association of BMI with prostate cancer risk in men overall (p non-linearity <0.0001), with similar risk for BMI between 21.0 and 30.0 kg/m 2 and lower risk for obese BMI (Figure 3).Although the risk was also lower for very low BMI (<21 kg/m 2 ), this association was partly attenuated after removing current smokers.There was no evidence for non-linearity of the positive association with ABSI in obese men.In men overall, prostate cancer risk was similarly higher for all ABSI quintiles compared to the lowest but was lower only for the highest HI quintile (p non-linearity = 0.049).There was little evidence for nonlinearity of the inverse association with SHBG.The positive associations with FT and TT plateaued at high levels in men overall, with higher risk for all deciles compared to the lowest (p non-linearity = 0.017 for FT; p non-linearity = 0.003 for TT), but with less evidence for non-linearity in normal weight men (Figure 3). 
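The non-linearity tests reported here (Wald test on the non-linear, second spline term, with knots at ±2 SD and the mean) can be sketched as below. This uses the restricted cubic spline basis from the Hmisc package with a Cox model in R, rather than the paper's Stata implementation, and the variable names are hypothetical; covariates other than those shown are omitted for brevity.

```r
library(survival)
library(Hmisc)

# Restricted cubic spline basis with three knots at the mean and +/- 2 SD (z-score scale):
# inclx = TRUE returns the linear term plus one non-linear (second spline) term.
sp <- rcspline.eval(cohort$bmi_z, knots = c(-2, 0, 2), inclx = TRUE)
cohort$bmi_lin    <- sp[, 1]
cohort$bmi_nonlin <- sp[, 2]

fit_spline <- coxph(
  Surv(age_at_recruitment, age_at_exit, prostate_cancer) ~
    bmi_lin + bmi_nonlin + absi_z + hi_z + diabetes +
    strata(age_group, region, family_history_prostate),
  data = cohort
)

# Wald test for the non-linear (second spline) term
z_stat <- coef(fit_spline)["bmi_nonlin"] /
  sqrt(vcov(fit_spline)["bmi_nonlin", "bmi_nonlin"])
p_nonlinearity <- 2 * pnorm(-abs(z_stat))
```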
FIGURE 2 Associations of circulating sex steroids and SHBG with prostate cancer risk. Estimates are from Cox proportional hazards models in men overall with available total and free testosterone and SHBG measurements (n = 168,091) or with available oestradiol measurement (n = 171,858), including either free testosterone and SHBG jointly (z-scores following log-transformation) or oestradiol individually (≥175 pmol/L yes/no), stratified by age at recruitment, region of the assessment centre and family history of prostate cancer, and adjusted for diabetes status, BMI, ABSI, HI, height (z-scores), recent weight change, smoking status, alcohol consumption, Townsend deprivation index and time of blood collection. Low/high groups are cross-classifications of each biomarker, dichotomised at the median (free testosterone ≥243 pmol/L; SHBG ≥37.1 nmol/L) or at the lowest detected level for oestradiol (≥175 pmol/L), with obesity or diabetes. MI, multiplicative interaction; RERI, relative excess risk from interaction; AP, attributable proportion due to interaction; SI, synergy index. Associations and interactions with total testosterone and multiplicative interactions of sex steroids and SHBG with body shape indices are shown in Figure S2.

FIGURE 3 Associations of anthropometric index and biomarker categories with prostate cancer risk. Estimates are from Cox proportional hazards models specified as in Figure 1, shown for detailed BMI categories (G1-G8, as defined in reference 25) in men overall (n = 195,813) and excluding current smokers (n = 171,525); for ABSI and HI quintiles in men overall and, for ABSI, additionally in obese men (n = 49,863); for SHBG quintiles adjusted for free testosterone (n = 168,091); and for free testosterone (adjusted for SHBG) and total testosterone (unadjusted for SHBG) as deciles in men overall or quintiles in normal weight men (n = 42,011). p non-linearity is from the Wald test for the non-linear (second spline) term from fully adjusted models including restricted cubic splines for the exposure of interest (knots at ±2SD and the mean).

| Comparisons with waist and hip circumferences

Both WC and HC showed inverse associations with prostate cancer risk, although weaker for WC than for HC (Figure 4). For both WC and HC, prostate cancer risk was lower specifically for the highest compared to the lowest quintile (p non-linearity = 0.001 for WC; p non-linearity = 0.0006 for HC). Also, for both WC and HC, the inverse association was stronger in men with diabetes than in men without diabetes, with evidence for an inverse multiplicative
interaction with diabetes (p MI = 0.006 for WC, p MI = 0.0003 for HC), and with the lowest prostate cancer risk when high-WC ≥96 cm or high-HC ≥103 cm occurred together with diabetes. Only for HC, however, there was consistent evidence for an additive interaction with diabetes, with a lower risk for low-HC without diabetes (expected to be the highest-risk category) than the additive individual effects of low-HC and no-diabetes (RERI LowHC_NoDiabetes = −0.241; 95% CI = −0.463 to −0.020; p RERI = 0.033; p AP = 0.024; p SI = 0.003) (Figure 4).
FIGURE 4 Associations of waist and hip circumferences with prostate cancer risk. Estimates are from Cox proportional hazards models including waist or hip circumference individually (z-scores) as exposure, stratified by age at recruitment, region of the assessment centre and family history of prostate cancer, and adjusted for diabetes status, height, recent weight change, smoking status, alcohol consumption, Townsend deprivation index and time of blood collection. Quintile cut-offs: 88, 93, 99 and 105 cm for waist circumference; 98, 101, 104 and 109 cm for hip circumference. Low/high groups are cross-classifications of waist or hip circumference, dichotomised at the median (waist ≥96 cm; hip ≥103 cm), with diabetes. p non-linearity is from the Wald test for the non-linear (second spline) term from fully adjusted models including restricted cubic splines for waist or hip circumference (knots at ±2SD and the mean).

| Sensitivity analyses

The associations of BMI with prostate cancer risk were attenuated in the fully adjusted compared to the minimally adjusted models, with mainly the adjustment for covariates attenuating the positive association in normal weight men but mainly the adjustment for diabetes status attenuating the inverse association in obese men (Table S5). Restricting the analysis to men with at least 2 years of follow-up had little influence on association estimates. Adding sex steroids and SHBG to the fully adjusted main models had little influence, except for some attenuation of the inverse association with BMI in men with diabetes after adjustment for TT. In men without diabetes, adjustment for HbA1c had no material influence on association estimates and there was little evidence for association of HbA1c with prostate cancer risk (HR per_SD = 0.983; 95% CI = 0.961 to 1.006 in men overall; HR per_SD = 0.974; 95% CI = 0.928 to 1.022 in obese men), even for pre-diabetic levels (HR = 0.968; 95% CI = 0.864 to 1.085 for HbA1c ≥42 to <48 compared to HbA1c <42 mmol/mol) (Table S5).

| DISCUSSION

In this study, obesity and diabetes were associated more strongly with lower total prostate cancer risk when occurring
A recent umbrella meta-analysis concluded that the evidence supporting an inverse association of BMI with prostate cancer risk was suggestive for low-grade (4 studies) and weak for non-advanced prostate cancer (14 studies) but there was a null association for total prostate cancer risk. 4 further meta-analysis, including 17 prospective studies, similarly reported an inverse association for nonaggressive prostate cancer but with significant heterogeneity. 26The main outlier reporting a positive association, however, was a cohort study in high-risk men undergoing a baseline prostate biopsy on suspicion of prostate cancer. 27Although we could not discriminate prostate cancers by stage or grade in our study, our findings are in agreement with a study in the European Prospective Investigation into Cancer and Nutrition (EPIC) cohort, reporting inverse associations of BMI with total prostate cancer, as well as with localised and low-intermediate grade, 28 suggesting that the larger part of prostate cancers in UK Biobank men are localised and lowgrade.Our findings of an inverse association with BMI only in obese men, with strong evidence for non-linearity, differ from a dose-response meta-analysis of 12 earlier prospective studies, which found no evidence of non-linearity for localised prostate cancer. 29Our findings, however, are consistent with more recent large studies in European men reporting inverse associations with total and low-intermediate grade prostate cancer for BMI above 27 kg/m 2 , 28,30 and with a previous UK Biobank study reporting lower risk for obese but not for overweight compared to normal weight men. 9 Furthermore, a large study in Swedish men reported lower risk of total and low-intermediate grade prostate cancer for BMI <22.5 kg/m 2 , 30 in agreement with the lower risk for BMI <21 kg/m 2 in our study.The latter association, however, was partly attenuated in our study after removing current smokers and may thus reflect some influence of smoking, which is associated inversely with prostate cancer risk. 31Regarding the inverse association with diabetes, a meta-analysis including 45 studies (29 cohort, 16 case-control) concluded that this was supported by strong evidence, with no difference between cohort and case-control studies. 3Consistent with our findings, lower prostate cancer risk has previously been reported for UK Biobank men only for high HbA1c (at least 42 mmol/mol), 32,33 suggesting that sustained hyperglycaemia is required.A novel contribution of our study is showing that obesity and diabetes facilitate each other for their inverse associations with prostate cancer risk. Our findings of a null association of ABSI with prostate cancer risk in men overall are consistent with a previous report for UK Biobank including all ethnicities, 10 but differ from an inverse association reported for the Australian and New Zealand Diabetes and Cancer Collaboration of prospective studies. 34The inverse association with ABSI in the latter report, however, was prominent only for ever smokers, comprising over 60% of men compared to 40% for ever regular smokers in UK Biobank, and could thus reflect the influence of smoking, which is associated with lower prostate cancer risk 31 and higher visceral adiposity. 35,36A larger number of previous studies have examined associations with WC, with a null association reported by an umbrella meta-analysis for total prostate cancer risk and for non-advanced stage and a positive association only for advanced stage. 
4In UK Biobank and EPIC, however, the association pattern of WC with total and low-intermediate grade prostate cancer resembled BMI, with an inverse association more specifically for high WC, consistent with the findings of our study, and a positive association only for high-grade prostate cancer, 9,28,33 corresponding to the strong positive correlation between BMI and WC.Thus, when the influence of factors related to fat distribution differs from the influence of factors related to fat accumulation with respect to the outcome of interest, WC provides similar information to BMI, while ABSI can reflect the differences because it is not correlated with BMI.To our knowledge, associations of HI with prostate cancer risk have not been examined by other authors.The suggestive inverse association from our earlier study in UK Biobank 8 was statistically significant in this study, which included a larger number of cases.Few studies have also examined associations with HC, but findings in EPIC were similar to our study, with the association pattern of HC resembling BMI and WC, 28 corresponding to their strong positive correlations. Our findings are further in agreement with a previous study of prostate cancer risk in UK Biobank, reporting positive associations with FT and inverse with SHBG based on quintile categories and fewer cases. 37They are also in agreement with a recent pooled analysis of 25 nested casecontrol studies and a Mendelian Randomisation analysis by the Endogenous Hormones, Nutritional Biomarkers and Prostate Cancer Collaborative Group. 38This showed positive associations of FT with total (15,000 cases), as well as with low grade, and nonaggressive prostate cancer, clearer in normal weight men and stronger in men with diabetes, although an inverse association with SHBG was noted only in the observational and not in the MR analysis and an inverse association with TT was noted in the observational analysis, 38 likely influenced by the strong positive association of TT with SHBG, which we have previously described. 6FT has additionally been associated with higher risk of ERG-positive prostate cancers, which have an androgen-induced fusion between the androgenregulated TMPRSS2 gene and the oncogene ERG. 39The null association of oestradiol with prostate cancer risk in men overall reported in our study is consistent with previous smaller-scale studies. 40,41The inverse association in obese men, however, is unexpected because it is currently believed, based on animal studies and prostate cancer cell lines, that oestrogens cooperate with androgens to facilitate prostate cancer development and A-ring hydroxylated oestrogens lead to DNA adducts, DNA oxidative damage and lipid peroxidation. 42Nevertheless, no clear positive association of circulating oestradiol with prostate cancer risk has been demonstrated in prospective studies to date and previous studies have not examined oestradiol specifically in obese men. 
Our findings support an involvement of testosterone suppression in the inverse association with diabetes, as FT, which is considered the active and biologically relevant part of testosterone, was lower in men with diabetes and high FT attenuated the inverse association with diabetes, such that the risk was lower in men with diabetes only when FT was low. This disagrees with suggestions that lower prostate cancer risk in individuals with diabetes simply reflects delayed diagnosis43 and disagrees with smaller-scale studies, which have found limited evidence for inverse associations of diabetes or hyperglycaemia with testosterone levels and have thus considered testosterone involvement unlikely.44,45 In the considerably larger UK Biobank, however, our current study and a previous study show lower TT and FT levels for diabetes (based on self-reported diagnosis and antidiabetic treatment) and for HbA1c ≥42 mmol/mol.33 Low testosterone has also been associated not only with higher insulin resistance in cross-sectional studies but with higher risk of developing insulin resistance in prospective studies.45 Although testosterone treatment in men with low testosterone can reduce glycaemia,46 glycaemia improvements are not consistent across studies and routine testosterone testing and treatment is not recommended in men with type 2 diabetes without clinical symptoms of testosterone deficiency.47 Our study indicates that low testosterone is required for the inverse association of diabetes with prostate cancer risk, suggesting a mechanistic role of diabetes in risk reduction, at least for low-grade prostate cancers.

The role of testosterone in the inverse association of obesity with prostate cancer risk appears more complicated. Although TT and FT are lower in obesity, high testosterone facilitated rather than attenuated the inverse association with obesity, as did high oestradiol, suggesting that testosterone acts as a substrate for adipose tissue aromatase and oestradiol generation. In accordance, TT and FT were correlated strongly positively with each other and both were correlated positively with oestradiol.6 It would be important, however, to clarify whether testosterone and oestradiol strengthen the inverse association with obesity specifically for low-grade prostate cancer and whether their relationship with high-grade prostate cancer is different. Notably, in a large, pooled analysis, FT was associated positively with aggressive prostate cancer only in men younger than 60 years at blood collection but inversely in older men.38 Furthermore, while androgen receptor positivity increases in advanced and metastatic prostate cancers, oestrogen receptor ERα expression peaks for high-grade prostatic intraepithelial neoplasia and is reduced for metastatic prostate cancer.48 Differences in sex steroid metabolism in prostate tissues of obese and non-obese men may also be relevant, as oestrogen receptor ERα and aromatase expression is lower in the stroma of non-cancerous prostate acini from obese compared to non-obese prostate cancer patients.49 The attenuation by ABSI of the inverse associations of obesity and diabetes with total prostate cancer risk would not involve testosterone, because high ABSI is associated with lower TT and FT6 and testosterone administration reduces ABSI in men.50
This could, however, reflect the positive association of waist size with high-grade prostate cancers, which are more common among obese men,1,4 highlighting the need to clarify whether ABSI is associated positively with low-grade prostate cancers. The inverse association with HI is unlikely to involve oestradiol, because HI was associated positively with oestradiol mainly in obese men,6 while the inverse association of HI with total prostate cancer risk did not show heterogeneity between BMI categories. Furthermore, HI reflects differences in gluteofemoral fat mainly in obese men but, in non-obese men, reflects differences in gluteofemoral lean mass.11 Similarly, the inverse association of SHBG with prostate cancer risk did not show heterogeneity according to BMI. It is unclear, therefore, what mechanism underlies the synergistic inverse associations of HI and SHBG with total prostate cancer risk.

A clear strength of our study is the prospective cohort design and the large sample size for incident prostate cancer cases and for biomarker measurements, which permitted examining subgroups and cross-classifications. Anthropometry was performed by trained personnel according to standardised protocols, avoiding bias from self-reporting. We chose ABSI and HI as the appropriate indices of body shape, since by design they effectively account for the high correlations of WC and HC with height, weight and BMI, which is not true for popular indices of central body obesity such as the waist-to-height ratio, body roundness index, conicity index and weight-adjusted waist index, among others, which retain correlations or even introduce stronger correlations with height or BMI.51 Unified quality control procedures were applied to biomarker measurements, minimising measurement errors. Information for major lifestyle factors was available, permitting adjustment and minimising confounding. Some information for oestradiol levels, albeit limited, was also available for most of the cohort.

A major limitation of our study is the low sensitivity of the oestradiol assay, which permitted identification only of the top tenth of the distribution. The dichotomisation led to loss of information and prevented calculation of free oestradiol. Like other observational studies, we did not have FT measurements and relied on law-of-mass-action equations, which may not be valid for obesity or diabetes. Blood samples were collected with no requirement for fasting and throughout the day, which could have contributed to a larger variability, although we have adjusted for time of blood collection and found no association of fasting time with prostate cancer risk. We had information for exposures only at baseline and could not account for changes during follow-up. Importantly, we did not have information for prostate cancer grade and stage. We could not separate type 1 from type 2 diabetes either, although hyperglycaemia-related pathways would be relevant to both types. The definition of diabetes had to rely on self-reported information, which may be incomplete, and on a single HbA1c measurement. We were unable to clarify ethnic differences, which would be important because sex steroid levels and associations of diabetes with prostate cancer risk differ between ethnic groups.3,52 Last, UK Biobank is not representative of the general population and includes participants with healthier lifestyle.53
In conclusion, our study showed synergistic inverse associations of obesity and diabetes with total prostate cancer risk, attenuated in men with high-ABSI and, for diabetes, additionally attenuated in men with high-FT. The inverse association with obesity, however, was apparently facilitated by high-oestradiol and by high testosterone, likely as a precursor for oestradiol synthesis via adipose tissue aromatisation. HI and SHBG showed synergistic inverse associations with total prostate cancer risk with unclear mechanism.

TABLE 1 Anthropometric characteristics and biomarker levels by BMI and diabetes category.
Brainstem networks construct threat probability and prediction error from neuronal building blocks When faced with potential threat we must estimate its probability, respond advantageously, and leverage experience to update future estimates. Threat estimation is the proposed domain of the forebrain, while behaviour is elicited by the brainstem. Yet, the brainstem is also a source of prediction error, a learning signal to acquire and update threat estimates. Neuropixels probes allowed us to record single-unit activity across a 21-region brainstem axis in rats receiving probabilistic fear discrimination with foot shock outcome. Against a backdrop of diffuse behaviour signaling, a brainstem network with a dorsal hub signaled threat probability. Neuronal function remapping during the outcome period gave rise to brainstem networks signaling prediction error and shock on multiple timescales. The results reveal brainstem networks construct threat probability, behaviour, and prediction error signals from neuronal building blocks. This manuscript examines single unit spiking activity in the rat brainstem in order to address the question of whether and how the brainstem may be involved in threat assessment and prediction. Briefly, traditional interpretations of how the brain responds to threats have suggested that the forebrain and neocortex are primarily responsible for assessing threat probability and prediction error, while the brainstem is involved in the associated behavior. This view has been challenged by recent literature providing evidence that indeed the brainstem is also involved in assessing threat probability and prediction error. Here, the authors use Neuropixels probes to capture spiking activity from a large population of neurons in the rat brainstem as the rats engage in a probabilistic fear discrimination task. Their results demonstrate that different populations of neurons, or subnetworks, are involved in assessing threat probability, modulating behavior, and registering outcomes and prediction errors. The results therefore provide additional evidence that threat assessment may be a brain-wide phenomenon, rather than one that is restricted to individual brain regions like the forebrain. The data presented and the results provide strong evidence for this conclusion, as fundamentally they show that individual neurons and clusters of neurons exhibit clear modulation in their activity that is related to threat uncertainty and outcome. As such, it appears that their conclusions are reasonably supported, although there are some suggestions that may improve the overall strength of the conclusions. One issue that could be addressed is how clusters of neurons that comprise the various subnetworks are constructed, particularly across animals. The authors effectively pool all of the spiking responses from all neurons together before performing a clustering analysis. Based on this approach, they find clusters that differentially respond to threat uncertainty and behavior and to outcome and prediction error. However, it is not clear if such subnetworks or differential activity are present within individual animals, or whether this may simply reflect the possibility that different animals exhibit differential neural activity for threat probability or behavior, for example. In other words, is it possible that the subnetworks identified in figure 2 or figure 4, for example, are simply reflecting a clustering between different animals. Does k1, for example, all come from one animal? 
Do these same clusters exist within individuals? The authors suggest that their data demonstrate that there is a cue subnetwork that exhibits dynamic probability to behavior signaling, while there is a cue supra-network that exhibits sustained behavioral signaling. To show this, they regress the activity of each network with either probability or behavior, and then perform a PCA analysis on the regression weights. The conclusion regarding the dynamic changes are largely based on the changes observed in the PC2 weight. However, the regression weights themselves would suggest that this distinction is not so clear (Fig 3a). There are several clusters within the subnetwork that exhibit a sustained and clear response to either probability or behavior without evidence of this dynamic. Similarly, there are several clusters in the supranetwork that exhibit preferential signaling of probability. Similarly, the anatomic distinction between the subnetwork and supranetwork also appears less clear than the authors would suggest. This is presented in figure 3E, but here there appears to be a fair amount of overlap between anatomic regions between the two networks. In the second half of the manuscript, the authors then focus on the neural responses to the shock, and whether neurons encode shock outcome versus prediction error. This is a nice demonstration that indeed brainstem activity is involved in prediction error. A key question, however, is whether the same clusters involved in threat probability and behavior are also involved in outcome and prediction error. The authors identify clusters independently for outcome and prediction error (the same criticism regarding individual animals versus the pooled group of neurons applies here) and then ask how much overlap there is between neurons in one subnetwork versus another using a Chi squared test. Based on this, they conclude that the cue subnetwork is distinct from the tonic outcome network. However, it appears that there are more neurons in the cue network that are involved in the tonic outcome network (190) compared to those involved in the phasic outcome network (169). It is true that it is less likely that tonic outcome neurons are involved in the cue subnetwork, but looking at the relationship in the opposite direction would lead to the opposite conclusion. Perhaps there may be a clearer way to demonstrate the relative contributions of each subnetwork and cluster to both probability/behavior and outcome/prediction. An interesting result is the differential response to unexpected shock, expected shock, and then omission. The authors identify the omissions during the uncertain trails as unexpected. However, given the shock schedule used in the task, one would think that these omissions should actually be classified as expected, similar to the safety trials, since they happen much more frequently than the shocks. Finally, some of the anatomic conclusions are determined based on their method for localization. This involves locating the tip of the probe and the entry in 3D space, and then using the Allen atlas to then estimate the locations of each electrode. Since the authors have histology, is it possible to identify the actual location of each probe, and therefore each electrode, directly from the individual rat brainstem based on the histological sices? Reviewer #2 (Remarks to the Author): This is an interesting study describing the role of brainstem in construction of threat probability, behaviour, and prediction error. 
To this end the authors used neuropixels probes to record single-unit activity across a 21-region brainstem axis during probabilistic fear discrimination. They identified a dorsally-based brainstem network rapidly signaled threat probability. The article is clear and well written, but the work does not completely support the conclusions and claims. The experiments and analysis seem carefully performed; yet the methodology is not always detailed and there are some questions which need to be addressed: 1) In the experimental setup and anatomical location of the Neuropixels (Fig 1) could the authors add a figure that describes the suppression ratio that is used as fear behaviour measure throughout the paper? It remains very abstract. Also, I think using the suppression of reward seeking behaviour (the rats searching for food) is suboptimal to measure fear, because it implies reward circuits. Please comment why freezing is not used for fear. 2) The values that were clustered (e.g. waveform?,..) presented in Fig 2B are not clear; could you please add precisions in the text and Figure 2? How the number of clusters was found (was it a number that was chosen before, and if so on which criteria?) was also not clear; please add this information to better understand the number of 21 clusters. On which trials the PCA was made (are all trials included in one PCA, and if so, how were the curves for the single experimental conditions computed?). Please add precision in the text or methods. In these clusters, some of the smaller clusters look quite similar in their firing z-score. For example, cluster 1+5, cluster 2+8 or cluster 6+7. There might have been over clustering of the data? Please explain and discuss how baseline changes are likely to change the outcome of the z-score. 3) Data (clusters) presented in the Fig 2C are interpreted as reflecting different threat probability. However, none of the clusters shows an important difference in firing towards the uncertain or the safe stimulus. It could be that the rats simply did not understand the association of the 'uncertain' tone to the shock? Please discuss. 4) Clusters (about firing latency to danger) presented in Fig 2D seem to contain two groups of cells with different latencies. Maybe these clusters contain different subclusters? Please discuss. Figure 2E show cue firing correlation between clusters. Clusters 1-8 are shown to have very high correlation. But if there was overclustering in these clusters, the correlation would only indicate that some clusters belong together. How did you checked for overclustering? 5) 6) The clustering ( Fig 2G) might be explained by the firing onset corresponding to the location of the most cells in the descending pathways from cranial to caudal. Indeed, the danger firing latency in 2D might be dependent on the location of the cells in the brainstem. In figure 2G the clusters that contained more cranial neurons had shorter firing latencies (e.g., cluster 1, cluster 10). Please discuss. 7) The authors tried to explain the firing of the single clusters over time in Fig 3 by doing a linear model, in which the threat probability and the behaviour were used as regressors. Please add methodology details for this model. Do the authors validate if the different firing rates in the different threat probability conditions are not representing the reaction to the different tones that were played in the different conditions? 
8) Some of the clusters in Fig 3A seem to be correlating with both threat probability and behaviour regressors. Even if the curves representing the regressors are not identical, they seem to be mirrored around the x axis in some clusters (k6,12,19,20). Why you state that the firing that is explained by behaviour should be not affected by the threat probability? 9) In Figure 3D, the larger network is disabled and the small one still intact. The beta for the cue firing is smaller in the PC1, which means that the beta for the cue firing is also dependent on the larger network. In the text, the authors state that cue firing would be entirely dependent on the small network. Why do you think then that it was 'entirely' dependent? 10) Brainstem location of the neurons in the larger network containing cluster 9 to 21 and of the smaller cluster is presented in Figure 3E. It looks like the lPAG is more present in the smaller network. In the text, the authors state this was a proof that there is a more dorsal network that computes fast threat probability and a more ventral network that computes behavioural reaction and slow threat probability. Could you be more precise? This statement is too large. First, it is known that there is behaviour computed in both ventrolateral and dorsal PAG. Second, the higher contribution of lPAG neurons in the smaller network could also be due to the location of the neuropixel probe in the brainstem. In Figure 1A it looks as if the Neuropixel was implanted more lateral, touching the lateral PAG directly, but not the ventral part of the brainstem. Therefore, the neurons recorded from the lPAG were closer to the probe and maybe easier to identify in the data than the ones in the ventral areas. Therefore, they might be overrepresented in the network. Thus, is it valid to draw a conclusion of the location of a network on this? 11) In Figure 4 is presented the cluster firing regarding different conditions: Predicted shock, unpredicted shock, unpredicted omission, and predicted omission. The unpredicted omission occurs in the trial condition where the rat never experiences a shock following the tone. Therefore, the rat did not associate the tone to a shock in first place. If there is no expectation of a shock, how can the shock be omitted? Could it be that the reactions is to the 'safety' condition rather as a pure reaction to the tone as a negative prediction error? Please precise from which trial the 'unpredicted foot shock' originates. If the unpredicted foot shock is the foot shock from the uncertainty condition, it is not a completely unpredicted foot shock. Therefore, the positive predication error signal might be not pure in this condition. Please discuss. In Figure 4B the duration was long, 10s (after the offset). Did the authors check if the firing of the clusters could also be related to motion? 12) Linear regression model to explain the weights of either the shock or a prediction error in the clusters firing is presented in Figure 5. Could the authors add a regressor estimating the impact of motion? (to compare with the reaction to the shock?) Introduction to the revised manuscript We are grateful for the thorough and thoughtful critiques from both reviewers. As we revised the manuscript, we meticulously checked our code to verify analyses. During this process we uncovered a simple, yet critical error in our trial averaging. We apologize in advance for this very detailed error description, but feel this is owed and necessary. 
Our discrimination sessions consist of 16 trials (4 danger, 2 uncertainty shock, 6 uncertainty omission, and 4 safety). Trial type order is randomized so that every session is unique. To analyze firing and behavior across all sessions, we must first standardize the trial order. Above is an example of how this works. The three columns on the left show a hypothetical trial order for one recording session. This is the specific order that rat experienced for that session, and due to random trial selection, no other rat will receive this specific trial order. For each of the 16 session trials we determine the type (of 4 possible) and the number of occurrences for that type. We then rearrange the trial types in a standardized order for analysis. Danger trials are first, uncertainty shock second, uncertainty omission third, and safety last. Trial number is maintained within each trial type. Danger trial 1 is first, trial 2 second, etc. We perform this standardization for every session, which results in a 3D matrix: trial types are rows (X), time bins are columns (Y), and sessions are stacked layers (Z). Many analyses require us to average all trials of a single type. This is where the error occurred. To find mean danger firing, we average rows 1:4. For uncertainty shock, rows 5:6; uncertainty omission, rows 7:12; and for safety, rows 13:16. However, when we checked our code a '1' was omitted from safety averaging. In error, we averaged rows 3:16 instead of 13:16. What we thought was mean safety firing was actually mean firing for ALL cues, except the first two danger cues. This error profoundly affected reported safety firing. Having identified the error, we reran all analyses. The correction made obvious impacts on our results. All manuscript figures and supplements have been completely remade to reflect the accurate safety firing data. Correcting safety firing clarified and solidified our findings. Here are some of the biggest changes:  Brainstem single units showed markedly less firing to safety than either uncertainty or danger during cue presentation ( Fig. 2A).  Functional populations showed discriminative firing that clearly distinguished danger from uncertainty and uncertainty from safety (Fig. 2B).  Principal components analysis revealed differential cue firing to be the primary low dimensional feature, with PC weights for uncertainty falling almost exactly between danger and safety (Fig. 2C).  Behaviour (Fig. 3B, PC1) and threat probability (Fig. 3B, PC2) signaling were now more separate low-dimensional firing features with clearer relationships to the supra and subnetworks. The cue subnetwork contributed more greatly to threat probability signaling (Fig. 3C) while the cue supranetwork contributed more greatly to fear behaviour signaling (Fig. 3D).  Brainstem single units showed almost no firing changes to safety during the outcome period (Fig. 4A).  Functional populations showed little safety firing during the outcome period, and those that did fire did so in a manner opposing that for uncertainty shock (Fig. 4B).  Principal components found minimal and negative PC weights for safety (Fig. 4C). Given this relatively simple error, we were concerned about additional errors. Particularly because many of our analyses are somewhat complex: principal components and regression combined with cluster-specific shuffling. We triple checked all code and found no additional errors. 
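To make the standardization and averaging step described above concrete, here is a minimal sketch (illustrative variable names, not the original analysis code); firing is the 3D matrix described above, with the 16 standardized trials as rows, time bins as columns, and sessions stacked along the third dimension.

% Standardized trial order: rows 1:4 danger, 5:6 uncertainty shock,
% 7:12 uncertainty omission, 13:16 safety.
dangerMean      = squeeze(mean(firing( 1:4 , :, :), 1));
uncShockMean    = squeeze(mean(firing( 5:6 , :, :), 1));
uncOmissionMean = squeeze(mean(firing( 7:12, :, :), 1));
safetyMean      = squeeze(mean(firing(13:16, :, :), 1));   % correct safety average
% The error averaged rows 3:16 here, i.e. every cue except the first two
% danger trials, which is what inflated the originally reported safety firing:
% safetyMeanBuggy = squeeze(mean(firing(3:16, :, :), 1));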
Looking back, it seems that we were so concerned with nailing down our more complex analyses, that we overlooked one of the simplest errors that could be made. We apologize for this error. If a more detailed explanation is needed and/or if the reviewers would like to be walked through portions of the analyses -we are more than happy to comply. Reviewer #1 This manuscript examines single unit spiking activity in the rat brainstem in order to address the question of whether and how the brainstem may be involved in threat assessment and prediction. Briefly, traditional interpretations of how the brain responds to threats have suggested that the forebrain and neocortex are primarily responsible for assessing threat probability and prediction error, while the brainstem is involved in the associated behavior. This view has been challenged by recent literature providing evidence that indeed the brainstem is also involved in assessing threat probability and prediction error. Here, the authors use Neuropixels probes to capture spiking activity from a large population of neurons in the rat brainstem as the rats engage in a probabilistic fear discrimination task. Their results demonstrate that different populations of neurons, or subnetworks, are involved in assessing threat probability, modulating behavior, and registering outcomes and prediction errors. The results therefore provide additional evidence that threat assessment may be a brain-wide phenomenon, rather than one that is restricted to individual brain regions like the forebrain. The data presented and the results provide strong evidence for this conclusion, as fundamentally they show that individual neurons and clusters of neurons exhibit clear modulation in their activity that is related to threat uncertainty and outcome. As such, it appears that their conclusions are reasonably supported, although there are some suggestions that may improve the overall strength of the conclusions. We appreciate your time and feedback. We address each suggestion below and in the revised manuscript. One issue that could be addressed is how clusters of neurons that comprise the various subnetworks are constructed, particularly across animals. The authors effectively pool all of the spiking responses from all neurons together before performing a clustering analysis. Based on this approach, they find clusters that differentially respond to threat uncertainty and behavior and to outcome and prediction error. However, it is not clear if such subnetworks or differential activity are present within individual animals, or whether this may simply reflect the possibility that different animals exhibit differential neural activity for threat probability or behavior, for example. In other words, is it possible that the subnetworks identified in figure 2 or figure 4, for example, are simply reflecting a clustering between different animals. Does k1, for example, all come from one animal? Do these same clusters exist within individuals? To address this, we have made a supplemental figure (Fig S4 & below) comparing individual identity to cluster identity across all 1,812 neurons. For the most part, cluster neurons were distributed across rats. The one exception was cluster 3, which appeared to largely come from Rat #9. Fewer units came from subjects 4, 5 and 6, the rats that completed very few recording sessions (Fig S1). 
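As a small illustration of the check reported above, the cluster-by-rat breakdown can be tabulated directly from the unit labels (a sketch with assumed variable names, not the authors' code):

% ratID and clusterID are vectors with one entry per recorded unit (1,812 total).
counts = crosstab(clusterID, ratID);   % rows = clusters, columns = rats
% A cluster drawn mostly from one animal (e.g., cluster 3 and Rat #9) appears
% as a single dominant column within its row of this table.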
The authors suggest that their data demonstrate that there is a cue subnetwork that exhibits dynamic probability to behavior signaling, while there is a cue supra-network that exhibits sustained behavioral signaling. To show this, they regress the activity of each network with either probability or behavior, and then perform a PCA analysis on the regression weights. The conclusion regarding the dynamic changes are largely based on the changes observed in the PC2 weight. However, the regression weights themselves would suggest that this distinction is not so clear (Fig 3a). There are several clusters within the subnetwork that exhibit a sustained and clear response to either probability or behavior without evidence of this dynamic. Similarly, there are several clusters in the supranetwork that exhibit preferential signaling of probability. We agree it is more accurate to say that clusters that are stable and dynamic in terms of firing give rise to information that is dynamic. In the revised manuscript we explicitly point out the variety of signaling patterns -both for their temporal properties and type of information signaled [Lines 91-94]. We end the cue regression section by directly pointing out that clusters with varying properties give rise to consistent signals for behaviour and threat probability [Lines 121-125]. We also feel this better supports our 'construction' framing in the manuscript. These brainstem-wide signals we observe are not the product of a single functional neuron type working identically across many regions. Instead, many disparate functional neuron types signaling unique aspects of threat and behavior combine to construct a signal greater than themselves. Similarly, the anatomic distinction between the subnetwork and supranetwork also appears less clear than the authors would suggest. This is presented in figure 3E, but here there appears to be a fair amount of overlap between anatomic regions between the two networks. In the second half of the manuscript, the authors then focus on the neural responses to the shock, and whether neurons encode shock outcome versus prediction error. This is a nice demonstration that indeed brainstem activity is involved in prediction error. A key question, however, is whether the same clusters involved in threat probability and behavior are also involved in outcome and prediction error. The authors identify clusters independently for outcome and prediction error (the same criticism regarding individual animals versus the pooled group of neurons applies here) and then ask how much overlap there is between neurons in one subnetwork versus another using a Chi squared test. Based on this, they conclude that the cue subnetwork is distinct from the tonic outcome network. However, it appears that there are more neurons in the cue network that are involved in the tonic outcome network (190) compared to those involved in the phasic outcome network (169). It is true that it is less likely that tonic outcome neurons are involved in the cue subnetwork, but looking at the relationship in the opposite direction would lead to the opposite conclusion. Perhaps there may be a clearer way to demonstrate the relative contributions of each subnetwork and cluster to both probability/behavior and outcome/prediction. An interesting result is the differential response to unexpected shock, expected shock, and then omission. The authors identify the omissions during the uncertain trails as unexpected. 
However, given the shock schedule used in the task, one would think that these omissions should actually be classified as expected, similar to the safety trials, since they happen much more frequently than the shocks. This is valid point. With foot shock being the rarer of the two outcomes following the uncertainty cue, it might be more 'surprising'. Further, there is evidence that midbrain dopamine -which unambiguously signal signed reward prediction errors -are additionally sensitive to rare outcomes (Rothenhoefer et al., 2021). We see the larger point of your observation, though. Having the uncertainty cue followed by shock on 50% of trials would have made for a more balanced experimental design. In fact, the first time we tried this discrimination procedure that is exactly what we did (Berg et al. 2014 European Journal of Neuroscience). Unexpectedly, we found that a cue predicting foot shock on 50% of trials supported levels of nose poke suppression that were nearly equal to those for a 100% cue. Evolutionarily this makes sense. The goal of defensive systems is to serve, not be precisely afraid. However, in our experiments we want to see complete discrimination of our three cues. To practically achieve this, we needed to reduce the foot shock probability of the uncertainty cue to 25%. We have included this rationale in the revised manuscript [Lines 42-43]. Finally, some of the anatomic conclusions are determined based on their method for localization. This involves locating the tip of the probe and the entry in 3D space, and then using the Allen atlas to then estimate the locations of each electrode. Since the authors have histology, is it possible to identify the actual location of each probe, and therefore each electrode, directly from the individual rat brainstem based on the histological sices? Figure S2. The detailed path for each individual also allows us to some sanity checks. For example, single units should be sparser in regions dense with fibers of passage. We observe this pattern in single rats and across the complete data set. Reviewer #2 This is an interesting study describing the role of brainstem in construction of threat probability, behaviour, and prediction error. To this end the authors used neuropixels probes to record single-unit activity across a 21-region brainstem axis during probabilistic fear discrimination. They identified a dorsally-based brainstem network rapidly signaled threat probability. The article is clear and well written, but the work does not completely support the conclusions and claims. The experiments and analysis seem carefully performed; yet the methodology is not always detailed and there are some questions which need to be addressed: We appreciate your thoughtful feedback. We address each question in turn and note manuscript revisions. 1) In the experimental setup and anatomical location of the Neuropixels (Fig 1) could the authors add a figure that describes the suppression ratio that is used as fear behaviour measure throughout the paper? It remains very abstract. Also, I think using the suppression of reward seeking behaviour (the rats searching for food) is suboptimal to measure fear, because it implies reward circuits. Please comment why freezing is not used for fear. We apologize for omitting information about suppression ratio. The revised manuscript includes more information about the use of suppression ratios in the Results section [Lines 46-49] and complete information in Methods [Lines 459-469]. 
Briefly, suppression ratios are calculated from baseline and cue nose poke rates: (baseline poke rate - cue poke rate) / (baseline poke rate + cue poke rate). A suppression ratio of '1' indicates complete suppression of nose poking during cue presentation relative to baseline. A suppression ratio of '0' indicates equivalent nose poke rates during baseline and cue presentation. Gradations between 1 and 0 indicate intermediate levels of nose poke suppression during cue presentation relative to baseline. 2) The values that were clustered (e.g. waveform?,..) presented in Fig 2B are not clear; could you please add precisions in the text and Figure 2? We have updated the manuscript to specifically state the values used for clustering: "K-means clustering for mean, single-unit firing in 1-s bins from 2-s prior to 2-s following cue presentation (danger, uncertainty, and safety)" [Lines 57-58]. How the number of clusters was found (was it a number that was chosen before, and if so on which criteria?) was also not clear; please add this information to better understand the number of 21 clusters. We identified '21' clusters by systematically increasing the cluster number from 1 to 30. Cluster number was optimized to produce the fewest number of clusters and the smallest mean Euclidean distance of each cluster member from its centroid. For example, 1 cluster would produce the fewest number of clusters (all units belong to a single cluster), but the mean Euclidean distance of each cluster member from the centroid would be very large. 30 clusters would mean that each unit would have a very short Euclidean distance from its centroid, but that we would be dividing like neurons into separate types. A sign of overclustering is when clusters contain a single unit. For the k-means clustering of mean single-unit cue firing, 22 was the first number that produced a cluster with a single unit. Examining the cluster firing means, 21 appeared to capture consistent and unique clusters. However, we concede that selecting any cluster number, no matter how reasonable, involves judgment. In the manuscript we are careful to never claim that there are exactly 21 clusters, only that the brainstem can be functionally divided into at least 21 clusters. (A short illustrative sketch of this cluster-number sweep is given below.) On which trials the PCA was made (are all trials included in one PCA, and if so, how were the curves for the single experimental conditions computed?). Please add precision in the text or methods. PCA was performed on mean, single-unit cue firing across all trials. This is now stated on Lines 57-58. In these clusters, some of the smaller clusters look quite similar in their firing z-score. For example, cluster 1+5, cluster 2+8 or cluster 6+7. There might have been over clustering of the data? Please explain and discuss how baseline changes are likely to change the outcome of the z-score. Your point is taken about the similarity of clusters. (Note that the cluster identities changed following the safety firing correction.) Clusters 1+5 and 2+6 appeared to be similar in terms of population firing. However, comparing these clusters on other aspects of firing reveals differences. Cluster 1 neurons had significantly stronger firing relationships to the cue subnetwork clusters than did cluster 5. For clusters 2 and 6, cluster 2 had shorter and less variable latencies. This pattern held for many clusters that looked similar in terms of population firing. Despite this similarity, no two clusters are identical when considering all aspects of firing (PC1 information, danger latency, cluster-cluster correlations and unit-cluster correlations).
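The cluster-number sweep described above can be sketched as follows, assuming Z holds one row per unit (mean Z-scored firing in the 1-s cue bins); the variable names and the number of k-means replicates are illustrative, not taken from the authors' code.

% Sweep candidate cluster counts and record (i) the mean Euclidean distance of each
% unit from its assigned centroid and (ii) whether any cluster holds a single unit.
maxK = 30;
meanDist     = nan(maxK, 1);
hasSingleton = false(maxK, 1);
for k = 1:maxK
    [idx, C] = kmeans(Z, k, 'Replicates', 20);        % Statistics Toolbox
    d = sqrt(sum((Z - C(idx, :)).^2, 2));              % distance of each unit to its centroid
    meanDist(k) = mean(d);
    hasSingleton(k) = any(accumarray(idx, 1) == 1);    % flag single-unit clusters
end
% Take the largest k before singleton clusters first appear (22 in the manuscript),
% then inspect the cluster firing means around that value.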
3) Data (clusters) presented in Fig 2C are interpreted as reflecting different threat probability. However, none of the clusters shows an important difference in firing towards the uncertain or the safe stimulus. It could be that the rats simply did not understand the association of the 'uncertain' tone to the shock? Please discuss. Behaviorally, we know that rats understood the difference between the cues. Nose poke suppression differed between each cue pair. The data presented in Fig 2C reflect low dimensional firing information present across all neurons. Cue presentation was evident in this information, as all cues had positive PC1 weights. Clearly, danger had the most positive weight and was 'treated' differently from uncertainty and safety. Importantly, the PC1 weights for uncertainty and safety are not completely overlapping; they are just very close. This pattern would be expected of a signal for threat probability. After all, shock occurs on 0% of safety trials and only 25% of uncertainty trials. These should be treated more similarly than a danger trial, on which 100% of trials end in foot shock. Of course, principal components for firing rates cannot determine whether firing reflects threat probability. This is why we later use linear regression. 4) Clusters (about firing latency to danger) presented in Fig 2D seem to contain two groups of cells with different latencies. Maybe these clusters contain different subclusters? Please discuss. We wondered about this too. However, we think this is more a product of our analysis approach. To determine latency, we broke down cue firing into 1 ms bins (10,000 bins) and then looked for the most abrupt change in firing rate slope between adjacent bins. For phasically responsive neurons this worked well, identifying the first deflection from baseline. However, for tonically responsive neurons or very minimally responsive neurons, none of these deflections was larger than the deflection at the very end of the cue, which compared the final value to zero. Continuing to look beyond cue offset was not fair, as danger trials would always end in shock presentation, which would reveal firing for a totally different reason. Thus, single units with latency values of 10,000 are those that did not show an abrupt deflection during cue presentation. 5) Figure 2E shows cue firing correlation between clusters. Clusters 1-8 are shown to have very high correlation. But if there was overclustering in these clusters, the correlation would only indicate that some clusters belong together. How did you check for overclustering? Clusters 1-8 indeed have high correlation coefficients. In theory, this could indicate that they are of the same functional type and that we have overclustered. However, these clusters differed in other ways. For example, above it was noted that clusters 1+5 and 2+6 showed somewhat similar mean cue firing patterns. However, when firing latency was compared, differences emerged. The same is the case when examining correlated firing. Cluster 1 was superior to cluster 5 in both cluster-cluster and unit-cluster correlations (Fig. 2E). Clusters 2 and 6 show similar cluster x cluster correlations, but cluster 2 neurons show stronger unit x cluster correlations.
We feel that when considering all aspects of firing (mean cue firing, PC1 contribution, danger firing latency, cluster x cluster firing correlations and unit x cluster firing correlations), 21 clusters holds up well. Still, we acknowledge and carefully state that the brainstem does not contain exactly 21 functional neuronal types, but at least 21 types. 6) The clustering (Fig 2G) might be explained by the firing onset corresponding to the location of the most cells in the descending pathways from cranial to caudal. Indeed, the danger firing latency in 2D might be dependent on the location of the cells in the brainstem. In figure 2G the clusters that contained more cranial neurons had shorter firing latencies (e.g., cluster 1, cluster 10). Please discuss. We agree. This was part of the rationale for describing the superior colliculus and periaqueductal gray as hubs for the cue subnetwork. Not only did cluster 1 neurons have shorter firing latencies, but they also contributed more greatly to PC1 firing information, and the activity of each individual neuron was better correlated with the cue subnetwork. We describe these combined features on Lines 79-82, then relate this to anatomy on Lines 83-85. 7) The authors tried to explain the firing of the single clusters over time in Fig 3 by doing a linear model, in which the threat probability and the behaviour were used as regressors. Please add methodology details for this model. Do the authors validate if the different firing rates in the different threat probability conditions are not representing the reaction to the different tones that were played in the different conditions? Linear regression is described in the supplemental methods. However, we understand that descriptions alone can make it difficult to determine what exactly regression is doing. To better describe our regression approach, we provide an example regression from the manuscript. In this example, linear regression for Z firing of cluster 1 neurons is performed to determine whether firing is better explained by threat probability or behaviour. The 16 session trials are organized by type (danger vs. uncertainty vs. safety) and mean Z firing is determined for each individual trial. The MATLAB regress function requires that a constant '1' be included in the linear regression. The two regressors are the shock probability specific to the cue (1, 0.25 or 0) and the mean suppression ratio observed to each cue. The output of linear regression is a beta coefficient providing the direction and magnitude of the relationship between each regressor and firing. In this example, both regressors are positive, indicating a positive relationship with firing, but the threat probability regressor is of greater magnitude. Thus, cluster 1 firing in the first 1 s of cue presentation is best captured by threat probability.
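To illustrate the regression just described, here is a minimal sketch of one such fit, assuming zFiring is a 16 x 1 vector of mean Z-scored firing per trial and suppression is a 16 x 1 vector holding the mean suppression ratio observed to each trial's cue; the variable names are illustrative and this is not the authors' code.

% Design matrix for one session's 16 trials in standardized order
% (4 danger, 2 uncertainty shock, 6 uncertainty omission, 4 safety).
shockProb = [ones(4,1); 0.25*ones(8,1); zeros(4,1)];   % cue-specific shock probability
X = [ones(16,1), shockProb, suppression];              % regress requires the constant term
b = regress(zFiring, X);                               % Statistics Toolbox
probBeta  = b(2);   % direction/magnitude of the threat-probability relationship
behavBeta = b(3);   % direction/magnitude of the behaviour relationship
% For cluster 1 in the first 1-s cue bin, probBeta exceeded behavBeta,
% i.e., firing was better captured by threat probability than by behaviour.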
8) Some of the clusters in Fig 3A seem to be correlating with both threat probability and behaviour regressors. Even if the curves representing the regressors are not identical, they seem to be mirrored around the x axis in some clusters (k6, 12, 19, 20). Why do you state that the firing that is explained by behaviour should not be affected by the threat probability? 9) In Figure 3D, the larger network is disabled and the small one still intact. The beta for the cue firing is smaller in the PC1, which means that the beta for the cue firing is also dependent on the larger network. In the text, the authors state that cue firing would be entirely dependent on the small network. Why do you think then that it was 'entirely' dependent? We agree that 'entirely' dependent is an overstatement. We have removed this language and instead state that behaviour signaling depended more on the supranetwork, whereas threat probability signaling depended more on the cue subnetwork [Lines 111-117]. 10) Brainstem location of the neurons in the larger network containing cluster 9 to 21 and of the smaller cluster is presented in Figure 3E. It looks like the lPAG is more present in the smaller network. In the text, the authors state this was a proof that there is a more dorsal network that computes fast threat probability and a more ventral network that computes behavioural reaction and slow threat probability. Could you be more precise? This statement is too large. First, it is known that there is behaviour computed in both ventrolateral and dorsal PAG. Second, the higher contribution of lPAG neurons in the smaller network could also be due to the location of the neuropixel probe in the brainstem. In Figure 1A it looks as if the Neuropixel was implanted more lateral, touching the lateral PAG directly, but not the ventral part of the brainstem. Therefore, the neurons recorded from the lPAG were closer to the probe and maybe easier to identify in the data than the ones in the ventral areas. Therefore, they might be overrepresented in the network. Thus, is it valid to draw a conclusion of the location of a network on this? We agree. Further, correcting the safety firing somewhat shifted the location of the clusters contributing to the cue subnetwork and cue supranetwork. It is now the case that anatomical biases were only apparent for the cue subnetwork, and they were subtle [Lines 118-121]. In regards to the higher concentration of lPAG neurons in the subnetwork, we do not think this is due to a 'higher' location on the probe. The highest probe locations were actually in retrosplenial and visual cortex, although we did not collect activity from those portions of the probe because they were not the focus of our study. Further, the region from which we obtained the most neurons was the paramedian raphe, one of the most ventral regions we recorded from. On top of this, when we map out the electrode path through the brain we start by identifying the tip (the most ventral location) and then the top (the most dorsal location). Once the extremes are identified we fit a linear path, then inspect the entire path to see if areas of low unit yield correspond to fiber tracts, which they did. 11) In Figure 4 is presented the cluster firing regarding different conditions: Predicted shock, unpredicted shock, unpredicted omission, and predicted omission. The unpredicted omission occurs in the trial condition where the rat never experiences a shock following the tone. Therefore, the rat did not associate the tone to a shock in the first place. If there is no expectation of a shock, how can the shock be omitted? Could it be that the reaction is to the 'safety' condition rather than a pure reaction to the tone as a negative prediction error? Please precise from which trial the 'unpredicted foot shock' originates. If the unpredicted foot shock is the foot shock from the uncertainty condition, it is not a completely unpredicted foot shock. Therefore, the positive prediction error signal might not be pure in this condition. Please discuss. In Figure 4B the duration was long, 10 s (after the offset). Did the authors check if the firing of the clusters could also be related to motion? We are careful not to use the term 'unpredicted' anywhere in the manuscript.
Instead we use 'surprising'. The uncertainty cue predicts both shock and omission. However, when the uncertainty cue is presented, the rat does not know which trial type is occurring. This is because the order of uncertainty shock and uncertainty omission trials is randomly determined every session. Receipt of foot shock is 'surprising' because there was a 75% chance omission would occur. Receipt of omission is also 'surprising', though less so, because there was a 25% chance shock would occur. This is in contrast to 'predicted' shock on danger trials. Every time the rat hears the danger cue, it receives a foot shock. The shock is fully predicted. The same logic applies to the safety cue. Every time the rat hears the safety cue, it never receives a foot shock. The absence of shock is fully predicted. So we agree that foot shock presentation on uncertainty trials is not 'unpredicted' and we never make that claim. Foot shock presentation on uncertainty trials is 'surprising' because shock omission is also possible. We address motion below. 12) Linear regression model to explain the weights of either the shock or a prediction error in the clusters firing is presented in Figure 5. Could the authors add a regressor estimating the impact of motion? (to compare with the reaction to the shock?) Our Neuropixels recording rig is not equipped to quantify locomotion. However, a recent study from a talented graduate student in the lab (Amanda Chu) set out to construct complete temporal ethograms of behavior during our fear discrimination procedure. In brief, she trained 24 rats (12 females) in our danger, uncertainty, safety discrimination, and recorded and hand-scored behavior frames prior to and following cue presentation. Nine total behaviors were quantified but here I am showing locomotion. Locomote was defined as propelling the body forward by movement of the legs. The line graph on the left shows mean +/- SEM % locomote behavior. We observe danger-specific increases in locomotion; however, these increases are only apparent towards the end of cue presentation. The bar graph on the right shows cue means (danger, red; uncertainty, purple; and safety, blue) and the points are individual rats. During late cue presentation there was greater locomotion to danger compared to baseline, as well as to danger compared to safety. Assuming the rats in this study showed the same pattern, late cue firing could in part reflect (loco)motion. Indeed, cue cluster k15 only increases firing during late danger presentation, perhaps signaling locomotion. However, we have many clusters that preferentially fire during early cue presentation (k1, k3, k5, k12, etc.). The activity of these clusters cannot reflect locomotion. 13) It is not clear how the authors reached the conclusion that the brainstem is constructing threat probability. Indeed, in the figure 1, there seems to be no significant firing difference to uncertain foot shock firing and safety firing. Could it be that the rats did not understand the association between the insecure foot shock and the tone? Even if the signalling would represent threat probability signalling, it is not proof for the construction of the signalling in the brainstem (the signals could be simply conducted by the brainstem). We see how this was interpreted in the original manuscript. Once we corrected the safety averaging error it was clear that differential firing to safety and uncertainty is just as robust as differential firing to danger and uncertainty.
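One simple way to formalize the 'surprising' versus 'predicted' outcomes discussed in the responses above is a signed prediction error, outcome minus the cue-specific shock probability; this Rescorla-Wagner-style framing is our illustration of the logic, not an equation taken from the manuscript:

\[ \delta = \text{outcome} - p(\text{shock} \mid \text{cue}) \]

Under this framing, danger shock gives 1 - 1 = 0, uncertainty shock gives 1 - 0.25 = +0.75, uncertainty omission gives 0 - 0.25 = -0.25, and safety omission gives 0 - 0 = 0, so only the uncertainty outcomes carry a non-zero prediction error.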
The authors have addressed the major concerns raised in the initial review. Importantly, they have also revised their analyses to account for a bug in their code, which has primarily affected the responses during the safety trials. This has not changed the overall conclusions of the manuscript. In terms of my specific comments, I appreciate the responses offered by the authors and the clarifications that were made. Reviewer #2 (Remarks to the Author): The authors have submitted a revised version of their manuscript on threat probability, fear behaviour and aversive prediction error in the brainstem. The authors have adequately addressed part of my concerns. However, some important points still need to be addressed. From the first to the second version of the paper, the clustering was changed drastically! The reason for this does not seem to be adding more neurons to the analysis, as the numbers of animals and neurons have remained equal. The number of the overall clusters remains the same, but the sizes and response shapes of the single clusters varied a lot, as did the PC1 contribution in 2C. I find this quite curious and would be interested to find out which modifications were carried out (in detail) from the 1rst to the 2nd version. Fig 2 E. After reclustering, clusters 6-9 and clusters 1-2 seem very correlated to each other. Maybe you could adjust the color scale of 2E and show R2 values above 0.5 to make differences between clusters more visible. From the first to the second version of the paper, the clustering was changed drastically! The reason for this does not seem to be adding more neurons to the analysis, as the numbers of animals and neurons have remained equal. The number of the overall clusters remains the same, but the sizes and response shapes of the single clusters varied a lot, as did the PC1 contribution in 2C. I find this quite curious and would be interested to find out which modifications were carried out (in detail) from the 1rst to the 2nd version. The reason for the differences in the individual clusters is the corrected firing activity of each neuron during the safety cue between the 1 st and 2 nd versions. In the first version, firing during the safety cue accidentally included firing during all firing types rather than just the safety cue. The clustering analysis used firing activity of each neuron, during all three cues, meaning that the specific activity patterns of each cluster were altered by correcting firing during the safety cue. As you pointed out however, the total number of clusters that best fit the data remained the same, as does the common feature of many of the clusters in that they are threat sensitive by differentiating between the cues. The altered firing activity during the safety cue, and resulting clustering analysis, therefore also changed the precise contribution to PC1 of each cluster, mostly seen for cluster 2. In version 2, contribution to PC1 is also shown as mean contribution per unit, rather than overall contribution of the cluster. We feel this best reflects PC1 contribution of the neurons within each cluster to PC1 and better accounts for different numbers of neurons within each cluster. Fig 2 E. After reclustering, clusters 6-9 and clusters 1-2 seem very correlated to each other. Maybe you could adjust the color scale of 2E and show R2 values above 0.5 to make differences between clusters more visible. Your observations are correct. Clusters 1-2, as well as clusters 6-9 show strong positive correlations with one another. 
The clear outlier is cluster 3. This was the only cluster to not show ordered cue firing (|danger > uncertainty > safety|). It makes sense that this cluster would not correlate well with most others. Certainly, not all cue clusters are equally correlated. We have edited the manuscript to make this clear. That said, much more is going on than just clusters 1-2 and 6-9 being correlated. Cluster 1 strongly correlates with the majority of clusters: 2, 4-8. Cluster 2 strongly correlates with 4, and 6-9. This is almost certainly why clusters 1 and 2 were found to be hubs. This might also suggest an additional level of organization within the cue subnetwork, with clusters 1 and 2 directing smaller networks within. These observations are now clearly stated on Lines 79-83. Fig 4. The clusters described in these figures have also been reclustered since the first version of the paper. How were they reclustered and why? All relevant firing analyses were redone after correction of the safety firing, including the clustering analysis for the cue and outcome periods. Although the specific firing patterns of the clusters did vary as a result of this analysis, PC1 across all the clusters remained similar, with the exception of activity during the safety cue. This change in outcome firing activity was also the result of the corrected safety firing and of the analyses updated to reflect it. In the previous version, safety cue firing included activity from all cues, inflating firing activity to the safety cue in the outcome period. Rather, brainstem neurons showed very little change in firing during the safety outcome period, with the corrected analysis revealing opposing firing patterns for shock and prediction error in the way shown in the revised manuscript.
2022-10-20T13:50:36.256Z
2022-10-19T00:00:00.000
{ "year": 2022, "sha1": "eef9ecb7681b1cd43fbc4b67ca444a5ef3457513", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-022-34021-1.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a51d622dffc693fdedffdff501336d2c4992efc0", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
235247156
pes2o/s2orc
v3-fos-license
Antimicrobial activity of bacteria isolated from tissue of the coral Palythoa caribaeorum (Zoantharia: Sphenopidae) from Paraíba, Brazil coastal reefs Introduction: The coral-associated bacteria with antimicrobial activity may be important to promote the health of their host through various interactions, and may be explored as a source of new bioactive compounds. Objective: To analyze the antimicrobial activity of bacteria associated with the zoanthid Palythoa caribaeorum from the coral reefs of Carapibus, Paraiba state, Brazil. Methods: The phylogenetic analysis of the bacteria was conducted based on partial sequences of the 16S rRNA gene using molecular and bioinformatics tools. The anti-microbial activity of the 49 isolates was tested against four bacterial strains and one yeast strain: Bacillus cereus (CCT0198), Escherichia coli (ATCC 25922), Staphylococcus aureus (ATCC 25923), Pseudomonas aeruginosa and Candida albicans (ATCC 10231). The antibiosis and antibiogram assays were conducted and the Minimal Inhibitory Concentration (MIC) was determined by the microdilution method. Results: The bacterial isolates belonged to Firmicutes phylum (84 % of the isolates) and the Proteobacteria phylum (16 % of the isolates). Among the 49 isolates five genera were found, with the Bacillus genus being the most abundant (82 % of the iso-lates), followed by Vibrio (10 %), Pseudomonas (4 %), Staphylococcus (2 %) and Alteromonas (2 %). Antibiosis test revealed that 16 isolates (33 %) showed antimicrobial activity against one or more of five tested reference strains. The highest number of antagonistic bacteria were found in the Bacillus genus (12 isolates), followed by Vibrio (three isolates) and Pseudomonas (one isolate) genera. The B. subtilis NC8 was the only isolate that inhibited all tested strains in the antibiosis assay. However, antibiogram test with post-culture cell-free supernatant of NC8 isolate showed the inhibition of only B. cereus , S. aureus and C. albicans , and the lyophilized and dialyzed material of this isolate inhibited only B. cereus. The lyophilized material showed bacteriostatic activity against B. cereus , with a MIC value of 125 μg/μl, and in the cytotoxicity assay, the hemolysis value was of 4.8 %, indicating its low cytotoxicity. Conclusions: The results show the antimicrobial potential of some bacterial isolates associated with the P. caribaeourum tissue, especially those belonged to Bacillus genus. Reef environments are restricted to the tropical regions and they are spread over 3 000 km along the coast in Brazil, showing high rate of endemic coral species (Francini-Filho et al., 2013;Leão et al., 2016).Coral reefs are distributed in the state of Paraiba over the entire coastal stretch of 138 km (Costa, Sassi, Costa, & Brito, 2007). DOI 10.15517/rbt.v69i2.40809Palythoa caribaeorum is a species of typically sessile colonial zoanthid found frequently in coral reefs along the coast of the Atlantic Ocean and oceanic islands, being one of the most representative species of the several coral reefs of Brazil, Caribbean and Florida.Zoanthids, such as P. caribaeorum, may occupy large surface area of disturbed reefs since their high physiological tolerance and reproductive rates (Francini-Filho et al., 2013;Silva et al., 2015;Durante, Cruz, & Lotufo, 2018). In various regions of Northeastern Brazil, including the Paraiba coast, P. 
caribaeorum is one of the most abundant zoanthid in the reef environments (Costa, Sassi, Gorlach-Lira, Lajeunesse, & Fitt, 2013;Melo, Lins, & Eloy, 2014;Araújo, Gorlach-Lira, Medeiros, & Sassi, 2015;Silva, 2015).The occupational success of this species is mainly due to competitive strategies and rapid growth, even in unfavorable conditions such as high sedimentation (Castro, Segal, Negrão, & Calderon, 2012).The microbiota associated with P. caribaeorum is still little known.According to Pereira, Palermo, Carlos, & Ottoboni (2017), the Alphaproteobacteria were abundant in the mucus of this species, while Silva (2015) revealed that the majority of bacterial isolates from P. caribaeorum tissue belonged to the Bacilli class of Firmicutes phylum, followed by the Gammaproteobacteria. Recent works has demonstrated the importance of the bacterial community for the health, development and resilience of various species of corals and zoanthids (Pham, Wiese, Wenzel-Storjohann, & Imhoff, 2016;Pereira et al., 2017).According to these studies, the antimicrobial activity performed by associated bacteria promotes the health of its host through ecological interactions, and may represent a source for obtaining new bioactive compounds, which can be used in the production of new drugs.Bacillus was found to be one of the leading genera with antimicrobial activity among bacteria associated with corals (Pham et al., 2016;Pereira et al., 2017).The bioactive compounds with antimicrobial properties of marine Bacillus species have been extensively reviewed by Mondol, Sin, & Islam (2013). Since several studies (Li et al., 2011;Li et al., 2012;Pereira et al., 2017;Mickymaray et al., 2018) show that bacteria associated with corals might be promissory producers of bioactive compounds, we aimed in this work to perform phylogenetic analysis and to analyze antimicrobial activity of bacteria isolated from the tissue of P. caribaeorum from the reefs of Carapibus, Paraiba state, Brazil. DNA extraction and amplification: Bacterial isolates were incubated in Brain and Heart Infusion Broth (BHI) at 37 ºC for 48 hours.The extraction of genomic DNA from bacterial isolates was performed using the KIT HiPura TM Miniprep (HiMedia), according to the manufacturer's instructions.The 16S rRNA gene was amplified in the thermocycler (Primus, USA) using the following universal primers (50 pmol): forward 26F: 5′-GAG TTT GAT CMT GGC TCA G -3′ and reverse 1492R: 5′ -ACG GCT ACC TTG TTA CGA CTT -3′ (Lane, 1991), 200 ng of genomic DNA and the Master Mix PCR kit (Promega), according to the manufacturer's instructions.The amplification and purification of the 16S rRNA were done as described by Silva (2015). 
Phylogenetic sequencing and analysis: The sequencing of the samples was carried out at the Federal University of Pernambuco Sequencing Platform, Recife, Brazil, using the automatic sequencer ABI-PRISM 3100 Genetic Analyzer (Applied Biosystems).The generated sequences were submitted to a query for similarity with the data deposited in the GenBank accessed through the NCBI (National Center for Biotechnology Information) using the program BLAST-"Basic Local Alignment Search Tools" (Altschul et al., 1997).Sequences with more than 97 % similarity were considered valid.The multiple alignment of the sequences and the construction of the phylogenetic tree were performed using the MEGA version 6 program (Tamura, Stecher, Peterson, Filipski, & Kumar, 2013).The sequences of antagonistic isolates used to construct the phylogenetic tree were deposited in the NCBI sequence database (GenBank access numbers: MT071323-MT071338). Antibiosis test: The antimicrobial activity of isolates was analyzed against the following standard strains: Escherichia coli (ATCC 25922), Pseudomonas aeruginosa, Bacillus cereus (CCT0198), Staphylococcus aureus (ATCC 25923) and Candida albicans (ATCC 10231).The isolates and standard strains were grown in the Brain and Heart Infusion Agar (HiMedia) at 37 ºC for 48 hours.The B. cereus was incubated for 24 hours due to its rapid formation of endospores.For the antibiosis test, the cross-streak method was used, where each tested isolate was inoculated in a central line of a Petri dish containing Mueller Hinton Agar and incubated for 48 hours at 37 ºC.After this period, the standard strains were inoculated perpendicularly to the central streak culture.The cultures were analyzed after 24 hours of incubation in order to verify possible inhibition of the growth of standard strains. Antibiogram test: The diffusion method in solid medium (antibiogram) on Mueller-Hinton Agar (HiMedia) was used to evaluate antimicrobial activity of the cell-free supernatant, lyophilized material and dialysate of the NC8 isolate against the five standard strains mentioned above.The NC8 isolate was grown in Marine Broth (sea water 1 l, peptone 5 g, yeast extract 2 g) and Mueller Hinton Broth (HiMedia) at 37 ºC for 48 h, and after incubation a 1.5 ml aliquot of the culture was centrifuged for 10 minutes at 12 000 rpm. 50 µl aliquots of cell-free supernatant were placed in the wells on Mueller Hinton Agar previously inoculated with standard strains.After the incubation period for 24 hours at 37 o C, the diameter of the inhibition halo was measured.All analyzes were done in duplicate.An antibiogram test was also conducted using the lyophilized material (1.0 g/ml) obtained after lyophilization process of 400 ml of NC8 isolate supernatant The lyophilized material was also subjected to dialysis with a cellulose membrane with a flat width of 10 mm and 6 mm in diameter (Sigma), that retain most proteins of molecular weight 12 000 or greater, obtaining the material with concentration of 0.01 g/ml. Determination of Minimum Inhibitory Concentration (MIC): The antimicrobial susceptibility test performed for the NC8 isolate was based on the reference method for broth microdilution tests for aerobic growth bacteria (M27-A6) (NCCLS, 2003).The MIC of the lyophilized material of the NC8 isolate was determined against the standard strain B. 
cereus in a 96-well microplate, using BHI broth and nine dilutions (1.95-500 μg/μl) of the lyophilized material.Each well received 10 μl (3 x 10 8 CFU/ml) of the standard strain cell suspension of B. cereus.The aliquots of 10 μl of antibiotic streptomycin sulfate (0.1 g/ml) was used as a control for the relative evaluation of the level of inhibition of the tested samples.Controls were also carried out for the viability of the tested microorganism and the sterility of the culture medium.The test was performed in triplicate.The microplates were incubated at 37 °C in the Thermo Scientific™ Multiskan™ GO Microplate Spectrophotometer and the optical density measurements (540 nm) were recorded every 1hour during 24 hours of incubation.The Minimum Bactericidal Concentration (MBC) was determined using 10 μl aliquots collected from each CIM assay well and inoculated on Mueller-Hinton agar medium.After incubation at 37 °C for 24 hours the growth of bacterial colonies was observed.The CBM value was considered as the lowest concentration of lyophilized material in which microbial growth was not detected. Hemolysis test: Hemolytic activity was measured by determining human erythrocyte lysis (hRBCs), provided by the hospital of the Federal University of Rio Grande do Norte, Natal, Brazil.Hemolytic activity was tested by incubating the material subjected to lyophilization and dialysis (0.01 g/ml) with erythrocytes at 2 % of the O-group washed three times with PBS (phosphate buffered saline), pH 7.2.Saline solution (NaCl 0.9 %) was used as a negative control and Triton X-100 (1 %) as a positive control.The samples were incubated for 4.8 and 12 h at 37 °C and then centrifuged at 2 500 rpm for 5 min.Hemolysis was measured by spectrophotometry at a wavelength of 540 nm in 96 wells microplate, using 200 μl of samples.All tests were performed in triplicate and expressed as a percentage (Ahmad, Khan, Manzoor, & Khan, 2010). Phylogenetic analysis of bacteria: The bacterial isolates obtained from healthy (19 isolates) and necrotic tissue (30 isolates) of the zoanthid P. caribaeorum were identified on the basis of partial sequences of 16S rRNA.The isolates showed 98-100 % similarity with the sequences deposited in the GenBank and belonged to the phyla: Firmicutes with 84 % and Proteobacteria with 16 % of the isolates (Table 1).The Firmicutes phylum was represented by two families, Bacillaceae and Staphylococcaceae, distributed in Bacillus and Staphylococcus genera, respectively (Table 1).The isolates of Proteobacteria phylum belonged to the families of Pseudomonadaceae, Vibrionaceae and Alteromonadaceae, distributed in genera of Pseudomonas, Vibrio and Alteromonas, respectively (Table 1). Antibiosis: The antibiosis test revealed that among 49 tested isolates, 16 (33 %) exhibited antagonistic activity against at least one of the five standard strains tested (Fig. 1).Among the healthy tissue isolates, only two (Bacillus sp.PS1, Vibrio sp.PS11) showed antagonistic activity, while 14 isolates from the necrotic tissue were positive in this test, with 11 isolates belonging to the Bacillus genus, 2 to the Vibrio genus and 1 to the Pseudomonas genus. Among the tested microorganisms, C. albicans and E. coli were more sensitive to the antimicrobial action of most isolates.C. albicans growth was inhibited by 13 isolates, and among them 10 isolates were Bacillus spp.and three isolates Vibrio spp.The E. 
coli growth was inhibited by seven isolates of Bacillus spp., two of Vibrio spp.and one isolate of Pseudomonas sp.Among these antagonistic isolates, eight were obtained from necrotic and two from healthy tissue of P. caribaeorum. The growth of S. aureus was inhibited by four Bacillus isolates of the necrotic tissue of the zoanthid, and P. aeruginosa growth was weakly inhibited by two isolates of Bacillus spp.The B. cereus was inhibited only by the isolate Bacillus subtilis NC8.Among the bacterial isolates tested in the antibiosis assay, the NC8 isolate was the only one that showed growth inhibition of all standard strains in the antibiosis test. Antimicrobial activity of Bacillus subtilis NC8: The cell-free supernatant of the NC8 isolate showed antimicrobial action against standard strains of B. cereus, S. aureus and C. albicans (Table 2).However, the activity of the lyophilized material (1.0 g/ml) and material subjected to dialysis (0.01 g/ml) showed antibacterial activity in the antibiogram test only against B. cereus, with no antimicrobial action against other standard strains.Therefore, the tests to determine the MIC and MBC were conducted only against the strain of B. cereus using lyophilized material. On the basis of the microdilution test, the MIC value of lyophilized material was 125 μg/ μl (Fig. 2).The optical density values (540 nm) revealed that the B. cereus did not show growth during 24 hours of the test in the presence of 500.0 μg/μl, 250.0 μg/μl and 125.0 μg/μl lyophilized material (OD 540 nm: 0.09) (Fig. 2).Concentrations of 62.5 μg/μl, 31.25 and 15.63 μg/μl inhibited the growth of B. cereus up to 6 hours of incubation, and after this period the growth of the isolate was detected The result of the MBC test showed the microbial growth in all concentrations of lyophilized material (500 μg/μl-1.96μg/μl) tested, demonstrating that the antimicrobial activity of the lyophilized material of NC8 isolate had bacteriostatic activity. The hemolytic rate of the lyophilized material subjected to dialysis was 4.8 % in the concentration of 0.01 mg/ml during the 12 hours of incubation (Fig. 3).Hemolysis obtained by Triton X-100 (1 %) (positive control) was considered 100 % hemolysis. DISCUSSION In the coastal reefs of Carapibus Beach of Conde (Paraiba state, Brazil), the colonies of P. caribaeorum are wide spread and some colonies are affected by a tissue necrosis (Silva, 2015).Among the bacteria associated with the tissue of P. caribaeorum the Bacillus genus was the most abundant (84 % of the isolates), followed by genera of Vibrio, Pseudomonas, Staphylococcus and Alteromonas. When studying the biodiversity of bacteria associated with the soft coral Alcionium digitatum, abundant in the Baltic Sea, Pham et al. 
( 2016) also identified the genus Bacillus as the most abundant and diverse group, with 17 species.The genus Bacillus was also found in other species of corals, however, in smaller proportions in relation to other taxa found.For example, Eiahwany, Ghozlan, Eisharif, & Sabry (2013) found that bacteria associated with soft coral Sarcophyton glaucum the Red Sea reefs were representatives of the Gammaproteobacteria, Actinobacteria and Firmicutes, and 11 species of Bacillus were found.The authors reported a high proportion of bacteria with antimicrobial and antifungal activities, especially those belonging to the Bacillus genus that showed higher antimicrobial activity.They suggested that these bacteria may play an important protective role, helping their host in the defense against marine pathogens. In our study, the class of gram-negative bacteria Gammaproteobacteria was found in a smaller proportion (16 %), however, other works has reported that Gammaproteobacteria dominate the microbial community, although there is a variation of genera associated with zoanthids and corals (Moreira et al., 2014;Pereira et al., 2017). In our study, some isolates, mostly from the necrotic tissue of P. caribaeorum, were phylogenetically related to such as V. campbellii, known to be a pathogen of aquatic organisms.Among the isolates were also found Staphylococcus epidermidis, disease-causing species in humans, and P. stutzeri, considered an opportunistic pathogen in clinical settings. Potentially pathogenic bacteria for humans have also been found in the microbiota of P. caribaeorum on the coral reef of Ponta Verde in Maceio, Alagoa state, Brazil, exposed to untreated sewage dumping (Paulino, 2017).This work reported changes in the microbial community associated with P. caribaeorum due to anthropogenic effect, studied, showing that about 25 % of sequences obtained by pyrosequencing techniques belonged to the Streptococcus, Staphylococcus and Propionibacterium genera. The antimicrobial activity was evidenced in 33 % of the tested bacterial isolates, and higher number of antagonistic bacteria was obtained from the necrotic tissue of P. caribaeorum.The genus Bacillus presented a greater number of isolates with antimicrobial activity, and among them the B. subtilis NC8 was the only one that inhibited the growth of the five standard strains in antibiosis test.The hemolysis assay of the lyophilized and dialyzed material of the NC8 isolate was less than 5 %, indicating that the antimicrobial compounds has probably no cytotoxic activity, that is very important characteristic for the potential use of this isolate for the production of antimicrobials. Several studies report that marine Bacillus species with potential action and broad antimicrobial spectrum were isolated from several species of corals (Pham et al., 2016;Pereira et al., 2017).Li et al. (2011) studied the bogorol A, a new peptide antibiotic produced by B. subtilis isolated from a reef in Papua, New Guinea, and found its high activity against methicillinresistant S. aureus (MRSA), vancomycin resistant enterococcus (VRE) and E. coli.In a later study, Li et al. (2012) reported that the amide group C-12 amikoumarkin produced by marine bacterium B. subtilis from the Red Sea exhibited antimicrobial activity against B. subtilis, S. aureus and Laribacter hongkongensis. 
Various species of Bacillus produce bacteriocins; for example, Bacillus sp. SM01, isolated from mangrove sediments, produces the bacteriocin Bac-SM01 with broad antimicrobial activity, strongly inhibiting the growth of methicillin-resistant S. aureus (MRSA), Acinetobacter baumannii, P. aeruginosa and E. coli (Mickymaray et al., 2018).

Antimicrobial compounds are promising sources for the production of new drugs and can be used to fight infectious diseases. In our study, various marine isolates showed antimicrobial activity against a range of pathogenic microorganisms, and in particular the isolate B. subtilis NC8 showed a potential to be explored in future studies.

Fig. 1. Phylogenetic tree of antagonistic bacteria isolated from the healthy and necrotic tissue of Palythoa caribaeorum and from bacterial strains of GenBank, based on the comparison of 16S rRNA sequences using neighbor-joining analysis and the Tamura 3-parameter model. The bootstrap values shown in the tree were obtained based on 1 000 replicates. Accession numbers of GenBank strains are shown in parentheses.

Fig. 2. Growth kinetics of the standard strain of B. cereus in the presence of lyophilized material (1.96-500 μg/ml) of the B. subtilis NC8 isolate in the microdilution test. Growth was measured by spectrophotometry at 540 nm in a 96-well microplate during 24 hours of incubation. Control: culture medium without the addition of lyophilized material.

Fig. 3. Hemolytic activity of the material subjected to lyophilization and dialysis (0.01 g/ml) obtained from isolate B. subtilis NC8. Hemolysis of a 2 % suspension of O-group erythrocytes was measured by spectrophotometry at 540 nm in a 96-well microplate after 4, 8 and 12 hours of incubation. Positive control: 1 % Triton X-100; negative control: 0.9 % NaCl.

TABLE 1. Classification of bacterial isolates from the tissue of P. caribaeorum based on partial 16S rRNA sequences. The E values were 0.0 and the maximal identity 98-100 %. NNR: GenBank access code.

TABLE 2. Antibiogram test of cell-free supernatant and of lyophilized and dialyzed material from the B. subtilis NC8 isolate (OD 540 nm: 0.14-0.19). Concentrations below 15.63 μg/μl of lyophilized material did not reduce the growth of B. cereus.
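The MIC and hemolysis values reported above follow directly from the OD540 readings, so a small computational sketch may help make the two calculations explicit. This is not code from the study; the OD values, the no-growth threshold and the function names are illustrative assumptions, chosen so that the example reproduces the reported MIC of 125 μg/μl and a hemolysis rate near 4.8 %.

```python
# Minimal sketch (not from the paper's methods): reading an MIC off microdilution
# OD540 data, and computing a hemolysis percentage against the controls.
# All numeric readings and the growth threshold are illustrative assumptions.

def mic_from_od(od_by_conc, blank_od=0.09, threshold=0.05):
    """Return the lowest concentration whose 24 h OD540 stays within
    `threshold` of the no-growth blank, i.e. shows no detectable growth."""
    no_growth = [c for c, od in od_by_conc.items() if od - blank_od <= threshold]
    return min(no_growth) if no_growth else None

def hemolysis_percent(od_sample, od_negative, od_positive):
    """Percent hemolysis relative to the Triton X-100 positive control (100 %)."""
    return 100.0 * (od_sample - od_negative) / (od_positive - od_negative)

if __name__ == "__main__":
    # Hypothetical OD540 values at 24 h for two-fold dilutions of lyophilized material (ug/ul)
    od24 = {500.0: 0.09, 250.0: 0.09, 125.0: 0.09, 62.5: 0.35, 31.25: 0.48, 15.63: 0.55}
    print("MIC (ug/ul):", mic_from_od(od24))  # -> 125.0, matching the value reported in the text
    # Hypothetical plate readings: sample, saline negative control, Triton X-100 positive control
    print("Hemolysis (%):", round(hemolysis_percent(0.06, 0.02, 0.85), 1))  # -> ~4.8
```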
2021-05-29T19:58:13.253Z
2021-02-19T00:00:00.000
{ "year": 2021, "sha1": "01723ea98d0acc508617d3843a3c2c5c67d7881a", "oa_license": "CCBY", "oa_url": "https://revistas.ucr.ac.cr/index.php/rbt/article/download/40809/46010", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "98c90165af755eb864fdfe754be4ff220cb56ee5", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
139869450
pes2o/s2orc
v3-fos-license
The Effect of Zinc Stearate Modification on Aging Characteristics of Asphalt Binder

Different types of modifiers have been investigated to improve the aging characteristics of asphalt binders. In this research, the effect of zinc stearate modification on the aging resistance of asphalt binder has been studied. For this purpose, the viscous flow properties and other rheological characteristics of neat and modified asphalt binders have been evaluated using conventional experimental tests and Superpave protocol tests. In order to evaluate the rheological behavior of asphalt binder in the presence of zinc stearate, a neat asphalt binder sample and three modified samples containing 0.5, 1 and 2 wt% of zinc stearate were aged by means of the Rolling Thin Film Oven Test (RTFOT). The Brookfield rotational viscosity (RV) test was carried out at six different temperatures from 100 to 200 °C. Furthermore, the dynamic shear rheometer (DSR) was used to evaluate the complex modulus (G*) and phase angle (δ) of all neat and modified samples at 10, 20, 30, 40, 50 and 60 °C in frequency sweep mode. To evaluate the effect of adding zinc stearate using the viscosity results, the ratio of the asphalt binder's viscosity after aging to its viscosity before aging was defined as the viscosity aging index (VAI). Results showed that adding zinc stearate to neat asphalt binder reduces the viscosity aging index. Results of the frequency sweep test indicated that zinc stearate causes the complex shear modulus values of the aged asphalt binder to approach those of the neat state, which means the effect of zinc stearate modification is positive for improving the aging resistance of asphalt binder.

Introduction

Aging of asphalt binder during its life cycle, from mixing with aggregates to full deterioration, has always been one of the main concerns of researchers in the field of pavement materials and design [1,2]. As a result of this phenomenon, the initial rheological characteristics of the asphalt binder change, which in some cases improves and in other cases weakens the performance of the asphalt pavement [3]. For example, loss of volatile and light oils at high service temperatures leads to hardening of the asphalt binder due to aging and increases the rutting resistance of asphalt binders. However, at low service temperatures the asphalt binder becomes harder and more brittle and the risk of cracking increases [4]. Generally, controlling and decreasing the effect of aging on the rheological characteristics of unaged asphalt binder has been the topic of numerous studies in the field of asphalt binder modification. Numerous modifiers have been investigated to increase the aging resistance of asphalt binder. Some of these modifiers have been used as rejuvenators of recycled asphalt mixtures and others as anti-aging agents to enhance the rheological and mechanical performance of asphalt binder. Zinc dialkyldithiophosphate (ZDDP), carbon black, octabenzone (UV531) and bumetrizole (UV326) are some well-known substances that have been used as anti-aging agents. However, the effect of these anti-aging agents on the properties of asphalt binders is not significant [5]. Nanotechnology has also gradually penetrated the field of asphalt modification [6]. Fini et al. showed that the addition of a bio-binder derived from waste swine manure can reduce the viscosity of the asphalt binder. Furthermore, they indicated that bio-binder has the potential to improve the thermal cracking performance of conventional asphalt binders [7].
In this research, zinc stearate was used as an additive to enhance the anti-aging properties of the original asphalt binder. For this purpose, one neat asphalt binder sample and three modified ones containing 0.5, 1 and 2 wt% of zinc stearate were aged by means of the rolling thin film oven test (RTFOT), and then two classical tests, the penetration test and the softening point test, were performed. The Brookfield rotational viscosity (RV) test was also carried out on these samples. Generally, the viscosity of aged asphalt binder is greater than that of unaged asphalt binder. In this paper, the viscosity index has been calculated by dividing the viscosity of the aged asphalt by the viscosity of the unaged asphalt; because of the loss of volatile and light oils during aging, this index is normally somewhat larger than one. Therefore, observing the viscosity index is a suitable way of assessing the effect of zinc stearate on the aging characteristics of asphalt binder. In addition, investigation of the rheological behavior of the modified asphalt binder can provide useful information on the behavior of asphalt binder in the presence of zinc stearate. For this purpose, the dynamic shear rheometer (DSR) was used at test temperatures between 10 and 60 °C.

Materials

In this study, samples were prepared using asphalt binder with penetration grade 60/70 (the common asphalt binder in Iran) acquired from the Tehran Petroleum Refinery of Pasargad Oil Company, located in Tehran, Iran. Zinc stearate (ZS) is a micronized, hydrophilic white powder that, in a formal sense, is a soap. This industrial-grade material, derived from the reaction of fatty acids with zinc metal, is widely used in industry. Stearates are used as lubricants in the polymer industries. Polymers are long-chain molecules that are viscous and sticky at their high melting points. The friction forces between polymer-polymer, polymer-metal, polymer-filler, filler-filler and filler-metal surfaces make polymer flow difficult. The appropriate solution to this problem is reducing the coefficient of surface friction. Among several types of lubricants, zinc stearate is an important additive in polymer production processes; it is used as an internal lubricant for molding. Figure 1 shows the appearance at room temperature and the molecular structure of zinc stearate.

Preparation of the samples

In this research, there are four different asphalt samples, BASE, ZS05 (0.5%), ZS1 (1%) and ZS2 (2%), which were fabricated at 140 °C by mixing neat asphalt binder with the specified percentage of zinc stearate for 45 min using a low-shear mixer. To simulate the short-term aging (hardening or oxidation) characteristics of the asphalt binder samples, the RTFO test was used. Therefore, two versions of each asphalt binder, neat and RTFO-aged, were obtained. In the RTFO test, the asphalt binder specimens are heated inside the main container so that the temperature does not exceed 150 °C. Then 35 g of asphalt, weighed to a precision of 0.001 g, is poured into each of eight sample bottles. The asphalt binder samples are heated in a 163 °C oven until they are completely fluid and pourable. The samples are kept in the oven, at temperature, with an air flow of 400 ml/min and the carriage rotating, for 85 min. When the mass-change bottles have cooled to a safe handling temperature, their weights are measured to the nearest 0.001 g and recorded.
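The RTFO mass-change measurement described above reduces to a simple before/after weight comparison. The short sketch below is not from the paper; the bottle weights are hypothetical and only illustrate how a percentage mass change (loss of volatiles) is typically computed from the recorded weights.

```python
# Illustrative sketch (not from the paper): percent mass change of an RTFO bottle.
# All weights are hypothetical; a negative result indicates mass loss (volatiles driven off).

def rtfo_mass_change_percent(bottle_g, bottle_plus_binder_before_g, bottle_plus_binder_after_g):
    """Percent mass change of the binder after RTFO aging."""
    before = bottle_plus_binder_before_g - bottle_g   # binder mass before aging
    after = bottle_plus_binder_after_g - bottle_g     # binder mass after aging
    return 100.0 * (after - before) / before

print(round(rtfo_mass_change_percent(165.000, 200.012, 199.934), 3))  # -> -0.223 (% mass loss)
```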
Test methods

In this study, the most important classical tests, the penetration test (ASTM D5) and the softening point test (ASTM D36), which are well known and widely used by consultants and contractors, were performed on all samples. These experiments were carried out on unaged samples. In addition to the conventional tests, the Superpave protocol experiments, including the rolling thin film oven test (RTFOT) and rotational viscosity (RV), were carried out; the RV test was performed at 100, 120, 135, 160, 180 and 200 °C. Furthermore, the dynamic shear rheometer (DSR) was used to evaluate the complex modulus (G*) and phase angle (δ) of all neat and modified samples in frequency sweep mode from 0.1 to 100 Hz. The diameter of the test sample was 25 mm with a thickness of 1 mm. Both experiments were carried out on aged and unaged neat and modified samples.

Table 1 shows the results of the conventional experimental tests. As can be seen, adding zinc stearate has no significant effect on the physical characteristics of the asphalt binder, including its penetration grade and softening point. Therefore, it can be concluded that the addition of stearates does not change the physical properties of asphalt binder, including consistency and heat sensitivity.

Figure 2 depicts the results of the rotational viscosity test in terms of the viscosity aging index (VAI). This index is calculated according to the following equation (a short computational sketch of this index is given after the Conclusion section):

Viscosity Aging Index (VAI) = Viscosity of aged binder / Viscosity of unaged binder   (1)

Figure 2. Viscosity versus temperature for neat and zinc stearate modified asphalt binder

Figure 2 illustrates that the addition of zinc stearate can reduce the viscosity aging index. It is apparent that increasing the percentage of ZS causes a further reduction of the VAI; for 2% ZS this index becomes less than 1. These results show that introducing ZS into neat asphalt binder can control and reduce the effects of aging and cracking. On this basis, it can be concluded that ZS as a modifier can enhance the aging characteristics of asphalt binder.

The dynamic shear rheometer (DSR) was used to evaluate the complex modulus (G*) and phase angle (δ) of all neat and modified samples at 10, 20, 30, 40, 50 and 60 °C in frequency sweep mode. For example, in figure 5, which is related to the addition of 1% by weight of stearates to neat asphalt binder, it is clearly visible that for the three temperatures of 10, 20 and 30 °C the complex shear modulus of the neat sample and the modified samples containing 0.5% ZS approach each other, which confirms the results of the rotational viscosity test. Furthermore, the set of figures makes it clear that the addition of stearates, particularly at 1% and 2%, shifts the curves upwards, which means an increase in the stiffness and the complex shear modulus of the modified sample relative to the neat asphalt binder.

Conclusion

The effect of zinc stearate as a modifier on the aging resistance of asphalt binder was discussed in this paper. The following conclusions can be drawn:
 Zinc stearate as a modifier does not change the physical properties of asphalt binder, including consistency and heat sensitivity.
 Addition of zinc stearate reduces the viscosity aging index, which indicates a positive effect of this modifier on the aging characteristics of asphalt binders.
 Results of the frequency sweep test indicated that zinc stearate causes the complex shear modulus values of the aged asphalt binder to approach those of the neat state, which means the effect of zinc stearate modification is positive for improving the aging resistance of asphalt binder.
 The complex shear modulus results illustrate that adding zinc stearate, especially at 1 and 2% by weight, increases the asphalt binder stiffness.
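To make Equation (1) above concrete, the following small sketch computes the viscosity aging index from paired rotational viscosity measurements at each test temperature. It is not code from the study; the viscosity values, the temperature grid and the function name are hypothetical placeholders, and a VAI at or below 1 is simply flagged the way the paper interprets it (aging has not increased the viscosity).

```python
# Illustrative sketch (not from the paper): viscosity aging index (VAI) per Equation (1).
# VAI = viscosity of RTFOT-aged binder / viscosity of unaged binder at the same temperature.
# The viscosity values below are hypothetical placeholders in Pa.s.

def viscosity_aging_index(unaged_visc, aged_visc):
    """Return {temperature: VAI} for the temperatures present in both data sets."""
    return {t: aged_visc[t] / unaged_visc[t] for t in unaged_visc if t in aged_visc}

unaged = {100: 3.20, 135: 0.45, 200: 0.05}   # hypothetical RV readings, unaged binder
aged   = {100: 3.05, 135: 0.49, 200: 0.06}   # hypothetical RV readings after RTFOT

for temp_c, vai in viscosity_aging_index(unaged, aged).items():
    trend = "not increased by aging" if vai <= 1.0 else "increased by aging"
    print(f"{temp_c} degC: VAI = {vai:.2f} (viscosity {trend})")
```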
2019-04-30T13:08:32.620Z
2018-10-26T00:00:00.000
{ "year": 2018, "sha1": "0e1847ec5d20d59a7c1a03625414c69db8850938", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/416/1/012083", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "e6faf0df6340f2ba38c54dfc53fe4f02f4ce2120", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
253344195
pes2o/s2orc
v3-fos-license
Metformin and kidney; overview on current concepts

Type 2 diabetes (T2DM) is a chronic disorder characterized by hyperglycemia due to insulin resistance of cells. T2DM can cause many micro- or macrovascular complications. Metformin, a biguanide derivative, has multiple benefits beyond its anti-hyperglycemic effect, including improvement of blood cholesterol levels and blood pressure and reduction of the vascular complications accompanying T2DM. It is proposed that metformin acts via adenosine monophosphate-activated protein kinase (AMPK)-dependent or -independent approaches. The mechanisms by which metformin regulates glycemic level in T2DM are complex. In addition to its peripheral effects on insulin resistance and glycogenesis, metformin has a direct beneficial effect on beta-cell secretion. A large part of the metabolic advantages of metformin can be related to effects on gastrointestinal glucose uptake and the interaction of metformin with numerous new targets for glucose lowering in the gastrointestinal tract, including the incretin receptors, bile salt transporters and the gut microbiota.

Introduction

Galega officinalis has been consumed as a natural treatment against diabetes since the Middle Ages. Galega officinalis contains the phytochemicals galegine and guanidine, which both have anti-hyperglycemic effects but lead to side effects. The study of galegine, guanidine and related molecules (such as biguanide) led to the development of oral antidiabetic medicines including metformin. Biguanides such as metformin consist of two guanidine molecules combined with the loss of an ammonia molecule. Unlike other biguanides (phenformin and buformin), metformin is a relatively safe drug, with known pharmacokinetics and manageable toxicities (1).

Search strategy

For this review, we searched Web of Science, EBSCO, Scopus, PubMed/Medline, DOAJ (Directory of Open Access Journals), Embase, and Google Scholar, using various keywords including: type 2 diabetes, hyperglycemia, metformin, biguanides, gut microbiota, diabetic nephropathy, glycogenesis and glycolysis.

Diabetes

Insulin resistance and beta-cell dysfunction are two essential mechanistic elements of diabetes. Type 1 diabetes (T1DM) occurs when the pancreas fails to generate sufficient insulin for glucose metabolism. Type 2 diabetes (T2DM) is a chronic disorder characterized by hyperglycemia due to insulin resistance of cells. T2DM can cause many micro- or macrovascular complications. Metformin, a biguanide derivative, has multiple benefits beyond its anti-hyperglycemic effect, including improvement of blood cholesterol levels and blood pressure and reduction of the vascular complications accompanying T2DM.

Diabetic nephropathy

The main pathway of metformin removal is transfer of metformin from the peritubular capillaries to the renal tubular lumen in the kidney. Renal illness is very common in T2DM, since moderate renal functional damage (estimated glomerular filtration rate [eGFR] <60 mL/min) occurs in almost one quarter of T2DM patients. Patients with advanced renal damage are at greater risk of hypoglycemia. Insulin and some of the incretin hormones (anti-hyperglycemic drugs) are removed more gradually with renal elimination. Thus, dose reduction and careful assessment of consequences (glucose level and edema) may be essential (2). It has been suggested that metformin could inhibit fibrosis, which has an important role in the advancement of diabetic nephropathy (3).
In some recommendations, 30 < eGFR < 45 mL/min (serum creatinine around 2 mg/dL) could be regarded as a stopping point for metformin administration in patients with recognized renal damage.

Implication for health policy/practice/research/medical education
The blood sugar-lowering effect of metformin is closely associated with its ability to reduce hepatic glycogenesis, improve insulin sensitivity, improve beta-cell function, and modulate gastrointestinal glucose absorption.

In spite of old concerns about the danger of lactic acidosis in patients with kidney damage, the United States Food and Drug Administration (FDA) has lately relaxed its recommendations about administering metformin. The latest recommendations from the FDA will further help encourage the administration of metformin at 30 < eGFR < 45 mL/min, with dose reduction and with caution, to decrease glucose levels in diabetic nephropathy patients.

Pharmacokinetics and pharmaceutics of metformin

Metformin hydrochloride (dimethylbiguanide) is usually prepared from the reaction between dimethylamine hydrochloride and dicyandiamide through a simple cyano addition reaction at a high temperature of about 130 °C (4). Metformin has an acid dissociation constant (pKa) value of about 11.5; thus it occurs as a hydrophilic cationic species with high water solubility at physiological pH values and cannot pass through cell membranes quickly due to its low lipophilicity. Currently, investigators in diverse studies are trying to formulate more lipophilic derivatives of metformin with improved bioavailability (5). Metformin is broadly delivered into body tissues by transporters including organic cation transporters (OCTs), multidrug and toxin extrusion transporters and the plasma membrane monoamine transporter (5). Metformin has usually been supposed to act in the liver to reduce hepatic glucose generation. The uptake of metformin in the liver is mediated chiefly by OCT1 (gene SLC22A1) and possibly by OCT3 (gene SLC22A3). Thus, decreased transport by OCTs can decrease metformin efficiency. In OCT1-impaired persons such as native South American Indians, due to certain polymorphisms in the OCT1 gene, the metformin level in the liver was significantly lower than in most East Asian and Oceanian individuals. Consequently, it is assumed that OCTs are vital for the hepatic uptake of metformin (5,6).

Mechanisms of action of metformin versus hyperglycemia

The blood sugar-lowering effect of metformin is closely associated with its ability to reduce hepatic glycogenesis, improve insulin sensitivity, improve beta-cell function, and modulate gastrointestinal glucose absorption (7).

1- Glycogenesis and glycolysis by mitochondrial action

Mitochondrial complex I is regarded as an important drug target for producing the desired therapeutic effect in diabetes. Mitochondrial complex I inhibition is capable of lessening hyperglycemia in experimental diabetic models by stimulating glycolysis and glucose consumption and by reducing hepatic glycogenesis. It is proposed that metformin can act on the liver via adenosine monophosphate-activated protein kinase (AMPK)-dependent or -independent approaches (7). In the AMPK-dependent approach, metformin, by increasing adenosine triphosphate (ATP) consumption, increases the adenosine monophosphate (AMP) to ATP ratio.
This increase in the AMP:ATP ratio prompts the activation of AMPK, which has a variety of effects, including enhancing insulin sensitivity (via fat and glucose metabolism), reducing 3'-5'-cyclic adenosine monophosphate (cAMP), and thus reducing the expression of gluconeogenic genes. Additionally, the AMP analog 5-aminoimidazole-4-carboxamide ribonucleoside increases AMPK-dependent glucose uptake through glucose transporter type 4 (GLUT-4) translocation mediated by the phosphatidylinositol 3-kinase pathway, mimicking the effects of extensive exercise (8). The AMPK-independent effects of metformin on the liver are summarized in a report (9); some of these include inhibition of mitochondrial respiration, inhibition of gluconeogenesis (through reduction of glucagon secretion and inhibition of fructose-1,6-bisphosphatase generation by AMP) and inhibition of inflammation (9,10). Suppression of complex I by metformin via the AMPK-independent approach may be attributed to a shift in the cellular nicotinamide adenine dinucleotide to NADH ratio through a new target, mitochondrial glycerol-3-phosphate dehydrogenase (mGPD) (11,12).

2- Insulin sensitivity

Metformin is broadly used as an insulin sensitizer for its anti-diabetic properties, via the stimulation of glucose uptake by peripheral tissues. The improvement in insulin sensitivity by metformin could be attributed to its beneficial effects on insulin receptor expression and tyrosine kinase activity, inhibition of glycogen synthesis, phosphorylation of acetyl-CoA carboxylase and an increase in the activity of GLUT4 glucose transporters (13). Tyrosine kinase activity in the cells is involved in cell-cycle regulation and aspects of gene transcription. Phosphorylation of the acetyl-CoA carboxylase isoforms prevents fat synthesis and elevates fat oxidation, thus decreasing fat stores and increasing insulin sensitivity in the liver (7).

3- Effect on beta cells and insulin secretion

In 2005, the Diabetes Prevention Program (DPP) study showed that metformin slightly enhances insulin secretion, accompanied by an improvement in insulin sensitivity. Some well-known trials using metformin are shown in Table 1. It is currently understood that metformin influences essential functions of the pancreatic beta cell, categorized as insulin release, proliferation, transcriptional regulation and protective effects against toxicity and apoptosis (viability). It can also regulate transcriptional expression, especially by increasing the glucagon-like peptide-1 receptor (GLP-1R), while it did not increase plasma levels of the other incretin hormones (such as glucose-dependent insulinotropic polypeptide or peptide YY). Metformin can prevent the functional, biochemical and structural irregularities in glucose-stimulated insulin secretion in human islets caused by chronic exposure to high glucose. Moreover, metformin protects beta cells against lipotoxicity, glucotoxicity and palmitic acid (PA)-induced apoptosis (14,15). It has been shown that metformin exposure in pregnancy modifies the initial steps of beta cell development, increasing the total number of pancreatic progenitors, and these changes ultimately result in a greater beta cell endowment for the child (16).

4- Gut tract effects of metformin

The liver has been regarded as the key site of action for metformin, through several hepatic pathways for reducing serum glucose, including inhibition of mitochondrial complex I via activation of AMPK, weakening of glucagon signaling and, more recently, inhibition of mGPD (17).
Although the liver is regarded as the key site of metformin action, new experimental investigations also point to the gut as a main site of activity, because oral administration of metformin is more efficient than intravenous administration. Several actions of metformin within the gut include increased glucose uptake, generation of lactate, secretion of GLP-1, effects on the biochemical signaling between the gut and the central nervous system, bile acid metabolism and the gut microorganisms. The effect on bile acid metabolism reduces serum cholesterol concentrations by increasing bile acid synthesis from cholesterol (18). A primary effect of metformin is to stimulate the levels of certain bacteria and thereby enrich the microbiota milieu. It has been shown that the mucin-degrading Akkermansia muciniphila and several butyrate-producing bacteria were positively associated with metformin administration. The prescription of metformin in people with diabetes appears to favorably alter their gut microbiome, resulting in improved glucose metabolism. It is being tested how to restore balance in the gut microbiota to prevent disease onset (19). Lately, novel metformin formulations have been developed for possible improvements in efficacy and tolerability. Besides the common formulation, extended-release (ER) metformin, designed around the time course of digestion, is now available. Metformin ER may be better tolerated by diabetic nephropathy patients at higher risk of lactic acidosis, owing to fewer side effects (20). An ER tablet of metformin is prepared by engineering a composition comprising metformin with a mixture of hydrophobic and hydrophilic polymers (21).

Advantages and adverse effects

The most prevalent beneficial and adverse effects of metformin are listed in Table 2. One of the important side effects of metformin is gastrointestinal upset (diarrhea, vomiting and stomach irritation), which can occur in up to half of patients using metformin. The Diabetes Prevention Program Outcomes Study (DPPOS) trial described an increased risk of vitamin B12 deficiency with long-term metformin therapy (23).

Conclusion

The blood sugar-lowering effect of metformin is closely associated with its ability to reduce hepatic glycogenesis, improve insulin sensitivity, improve beta-cell function, and modulate gastrointestinal glucose absorption. It is suggested that the gut-based pharmacology of metformin provides fresh therapeutic approaches for managing diabetes and related complications.
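As a back-of-the-envelope check on the pharmacokinetics section above, which states that metformin (pKa about 11.5) occurs as a cationic species at physiological pH, the Henderson-Hasselbalch relation for a monoprotic base can be evaluated directly. This is an illustrative calculation, not part of the review; the pH values used are generic assumptions.

```python
# Illustrative check (not from the review): fraction of a monoprotic base that is
# protonated (cationic) at a given pH, from the Henderson-Hasselbalch relation.

def protonated_fraction(pka, ph):
    """Fraction present as BH+ for a base whose conjugate acid has the given pKa."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

for ph in (1.5, 7.4, 8.0):  # stomach-like, plasma, intestinal-like pH (assumed values)
    frac = protonated_fraction(11.5, ph)
    print(f"pH {ph}: {100 * frac:.4f} % cationic")
# At pH 7.4 the result is >99.99 %, consistent with the statement that metformin
# exists almost entirely as the hydrophilic cation at physiological pH.
```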
2022-11-05T15:44:20.916Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "1c9abfa176102c0e1d2488c1ea5879bcf5af4dd3", "oa_license": "CCBY", "oa_url": "https://jrenendo.com/PDF/jre-8-e21064.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "bdca278bc35b64bdf9a510759a3bfec850198aaa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }