id | source | version | text | added | created | metadata
|---|---|---|---|---|---|---|
255815119 | pes2o/s2orc | v3-fos-license | Dimethyl itaconate alleviates the pyroptosis of macrophages through oxidative stress
Macrophages are involved in the pathophysiology of many diseases as critical cells of the innate immune system. Pyroptosis is a lytic form of macrophage death that releases the cell's proinflammatory contents, thereby contributing to defense against infection. Dimethyl itaconate (DI) is an analog of itaconic acid with anti-inflammatory effects. However, the effect of DI on macrophage pyroptosis has not been clearly elucidated. Thus, the present study aimed to analyze the effect of DI treatment on a macrophage pyroptosis model induced by lipopolysaccharide (LPS) plus adenosine triphosphate (ATP). The results showed that 0.25 mM DI ameliorated macrophage pyroptosis and downregulated interleukin (IL)-1β expression. Real-time quantitative polymerase chain reaction (RT-qPCR) was then used to confirm the RNA-sequencing findings of upregulated oxidative stress-related genes (Gclc and Gss) and downregulated inflammation-related genes (IL-12β and IL-1β). In addition, Gene Ontology (GO) enrichment analysis showed that the differential genes were associated with transcriptional regulation and DNA replication. Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment showed that signaling pathways, such as the tumor necrosis factor (TNF), Jak, Toll-like receptor, and IL-17 pathways, were altered after DI treatment. N-acetyl-L-cysteine (NAC) reversed the effect of DI on LPS + ATP-induced macrophage pyroptosis and upregulated IL-1β expression. The oxidative stress-related protein Nrf2 is involved in the DI-mediated regulation of macrophage pyroptosis. Taken together, these findings suggest that DI alleviates the pyroptosis of macrophages through oxidative stress.
Background
Macrophages are essential cells of the innate immune system and play critical roles in diseases such as sepsis [1]. Pyroptosis is a form of cell death distinct from apoptosis and necrosis. It is a general innate immune effector mechanism that contributes to the inflammatory reaction in bacterial infections and various noninfectious diseases [2][3][4]. It is characterized by cell swelling, the formation of pores in the plasma membrane, and the release of proinflammatory cytokines, including interleukin (IL)-1β and IL-18. Thus, pyroptosis exerts a dual effect: it protects the body from microbial infections and endogenous hazards, while its excessive activation leads to pathological inflammation [5]. Previous studies [6][7][8] have shown that macrophage pyroptosis is involved in the development of sepsis and that regulation of pyroptosis may offer novel therapeutic approaches to sepsis.
Itaconic acid is a metabolite produced by activated immune cells, especially macrophages. Its primary effect on cellular metabolism during macrophage activation has been attributed to the inhibition of succinate dehydrogenase (SDH) [9,10]. In addition, itaconic acid attenuates reperfusion injury through SDH inhibition and induces an antioxidant stress response [11]. It has a variety of anti-inflammatory, antioxidant, and immunomodulatory effects [12]. Itaconic acid and its membrane-permeable derivative, dimethyl itaconate (DI), selectively inhibit a subset of cytokines [9], including IL-6 and IL-12. A recent study showed that DI improved the survival rate, decreased serum levels of tumor necrosis factor-alpha (TNF-α) and IL-6, and ameliorated lung injury in septic mice. DI also suppressed the lipopolysaccharide (LPS)-induced production of TNF-α, IL-6, and nitric oxide synthase 2 in bone marrow-derived macrophages (BMDMs) [13].
Oxidative stress refers to the imbalance between oxidation and antioxidation that arises when the body is exposed to harmful stimuli [14]. Cardiovascular, neurodegenerative, metabolic, and inflammatory diseases are known to be associated with oxidative stress [15], and the resulting reactive oxygen species (ROS) are considered a driving force of pyroptosis [16]. A study revealed that mitochondrial ROS promote macrophage pyroptosis by inducing gasdermin D oxidation [17].
However, the role and mechanism of DI in macrophage pyroptosis have not yet been clarified. Therefore, in this study, they were analyzed using an LPS + adenosine triphosphate (ATP)-induced pyroptosis model in BMDMs from C57BL/6 mice pretreated with DI.
BMDM isolation, culture, and treatment
Male C57BL/6 mice aged 6-8 weeks were purchased from the Zhejiang Academy of Medical Sciences, Hangzhou, China. Following euthanasia by cervical dislocation, the absence of a heartbeat was confirmed in each animal in accordance with the approved Zhejiang Academy of Medical Sciences protocol. BMDMs were flushed from the bilateral femurs of mice using DMEM (Genom, Hangzhou, China). BMDMs were cultured in DMEM supplemented with 50 ng/mL mouse recombinant macrophage colony-stimulating factor, 10% fetal bovine serum (FBS), penicillin (100 U/mL), and streptomycin (100 µM) in a humidified atmosphere containing 5% CO2 at 37 °C. After 7 days of culture, the cells were divided into the following groups: vehicle; DI (0.25 mM, Sigma, USA) + vehicle; dimethyl sulfoxide (DMSO, Sigma, USA) + LPS (500 ng/mL for 4 h, Sigma, USA) + ATP (5 mM for 1 h, Sigma, USA); DI (0.25 mM, pre-treatment for 2 h) + LPS + ATP; and NAC (N-acetyl-L-cysteine, 1 mM, Sigma, USA) or ML385 (10 µM, Selleck, China) co-treated for 2 h with DI (0.25 mM) + LPS + ATP. The concentrations used were as previously described [18,19]. The Ethics Committee of the Zhejiang Academy of Medical Sciences approved the experimental protocol. All animal experiments complied with the ARRIVE guidelines [20].
Cell viability assay
For the cell viability assay, 5 × 10^3 cells/well were seeded in 0.1 mL of DMEM supplemented with 10% FBS in a 96-well plate and cultured for 24 h, followed by treatment with increasing concentrations of DI (0.03125, 0.0625, 0.125, and 0.25 mM) for 24 h. Then, 10 µL of cell counting kit-8 reagent (CCK-8, 7Sea Pharmatech Co., Ltd., Shanghai, China) was added to each well and incubated at 37 °C for an additional 2 h. The absorbance was measured at 450 nm on a microplate reader (Thermo Scientific, San Jose, CA, USA).
Propidium iodide (PI)-stained fluorescence microscopy
Cell mortality in each group was assessed via PI (BD Biosciences, USA) staining. The cells were seeded in six-well plates at a density of 5 × 10^5 cells/mL. The different groups were treated as described above and then incubated with 5 µL of PI for 10 min at room temperature in the dark. Subsequently, the cells were examined under an inverted fluorescence microscope (Nikon, Japan). Red fluorescence indicated PI-positive cells.
Enzyme-linked immunosorbent assay (ELISA)
Cell-free supernatants were collected from each group and stored at -80 °C. An ELISA kit for IL-1β (Thermo Scientific, USA) was used following the manufacturer's protocol.
ROS detection
ROS levels in the DI + LPS + ATP and NAC + DI + LPS + ATP groups were detected using dichlorodihydrofluorescein diacetate (DCFH-DA, Beyotime, China). Briefly, the cells were cultured in 96-well plates, treated as described above, and incubated with 10 µM DCFH-DA for 30 min at 37 °C. After washing with DMEM three times, the fluorescence intensity of ROS was detected with a fluorescence microplate reader (Thermo Scientific, San Jose, CA, USA) at a 488 nm excitation wavelength and a 520 nm emission wavelength. ROS levels were expressed as fluorescence values.
Real-time quantitative polymerase chain reaction (RT-qPCR) for mRNA expression
Total RNA was extracted from each group using an RNA Rapid Extraction Kit (Yishan Biotechnology, Shanghai, China) and reverse transcribed into complementary DNA (cDNA) using the ReverTra Ace qPCR RT kit (Toyobo, Osaka, Japan). Subsequently, RT-qPCR was performed using SYBR Green real-time PCR master mix (Toyobo) on a LightCycler 480 (Roche, Germany). GAPDH (glyceraldehyde-3-phosphate dehydrogenase) served as an internal control. The primers used are listed in Table 1.
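The quantification formula is not stated in the text; as a rough illustration, the sketch below assumes the common 2^-ΔΔCt method for relative expression with GAPDH as the reference gene. All Ct values are hypothetical.

```python
# Minimal sketch of relative expression via the 2^-ddCt method (an assumption;
# the paper only states that GAPDH served as the internal control).

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene in a treated sample versus the control group."""
    d_ct_sample = ct_target - ct_ref            # normalize target to GAPDH
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values for IL-1b (treated vs. vehicle):
print(relative_expression(22.1, 18.0, 24.5, 18.2))  # ~4.6-fold higher than control
```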
Library construction and sequencing
Total RNA was extracted from the DMSO + LPS + ATP and DI + LPS + ATP groups using TRIzol (Thermo Fisher, USA). The experiments were performed on independent cultures from three mice. mRNA was specifically captured using Dynabeads Oligo (dT) 25-61005 (Thermo Fisher, USA) and fragmented using the NEBNext Ultra RNA Library Prep Kit for Illumina (NEB, USA). cDNA was synthesized using reverse transcriptase (Invitrogen SuperScript II Reverse Transcriptase, USA), and the library was constructed and sequenced. The processed clean reads were aligned to the reference genome, and transcripts were annotated and quantified using StringTie (2016) and gffcompare. Finally, gene expression was obtained as fragments per kilobase of exon model per million mapped reads (FPKM) and evaluated using ballgown.
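For orientation, the FPKM metric reported here can be written as a one-line formula; the snippet below illustrates the definition only and is not the StringTie/ballgown implementation. All numbers are hypothetical.

```python
def fpkm(fragments, exon_length_bp, total_mapped_fragments):
    """Fragments per kilobase of exon model per million mapped reads."""
    return fragments * 1e9 / (exon_length_bp * total_mapped_fragments)

# Hypothetical example: 500 fragments on a 2,000-bp exon model
# in a library with 30 million mapped fragments.
print(fpkm(500, 2_000, 30_000_000))  # ~8.33
```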
Analysis of differential transcripts
Differentially expressed mRNAs were selected with fold-change > 2 or fold-change < 0.5 and P-value < 0.05 using the R packages edgeR or DESeq2, followed by Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses to characterize the functions of the differentially expressed mRNAs.
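As a minimal sketch of this selection step (not the edgeR/DESeq2 pipeline itself), the filter below keeps genes with fold-change > 2 or < 0.5, i.e., |log2 fold-change| > 1, and P < 0.05; the results table and its column names are hypothetical.

```python
import pandas as pd

# Hypothetical DESeq2-style results table.
res = pd.DataFrame({
    "gene": ["Gclc", "Gss", "Il1b", "Il12b", "Actb"],
    "log2FoldChange": [2.3, 1.8, -2.9, -3.4, 0.1],
    "pvalue": [0.001, 0.004, 0.0002, 0.0001, 0.80],
})

# fold-change > 2 or < 0.5 is equivalent to |log2FC| > 1
degs = res[(res["log2FoldChange"].abs() > 1) & (res["pvalue"] < 0.05)]
n_up = (degs["log2FoldChange"] > 0).sum()
n_down = (degs["log2FoldChange"] < 0).sum()
print(f"{len(degs)} DEGs: {n_up} up, {n_down} down")  # 4 DEGs: 2 up, 2 down
```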
GO functional class and pathway enrichment analysis
The GO database summarizes the distribution of significantly differential genes across enriched GO terms in the biological process, cellular component, and molecular function categories, presented in the form of bar charts. KEGG is a database for the systematic analysis of correlations between genes and their encoded products, gene function, and genomic information [21]. Pathways significantly enriched in the differentially expressed genes were also identified.
Statistical analysis
Data were processed using GraphPad Prism version 7 and presented as mean ± standard deviation (SD) unless stated otherwise. Multigroup comparisons of means were carried out by one-way analysis of variance (ANOVA), with post hoc contrasts performed using Tukey's multiple comparisons test. A paired t-test was used for comparisons between two groups. P < 0.05 indicated a statistically significant difference.
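The paper reports these tests from GraphPad Prism; the sketch below reproduces the same ANOVA-plus-Tukey logic in Python with hypothetical group values, purely for illustration.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical cell-mortality values (%) for three groups, n = 3 each.
vehicle = [2.8, 3.0, 3.1]
lps_atp = [47.0, 50.3, 53.5]
di_lps_atp = [26.5, 28.8, 31.1]

f_stat, p_val = stats.f_oneway(vehicle, lps_atp, di_lps_atp)  # one-way ANOVA
print(f"ANOVA: F = {f_stat:.1f}, p = {p_val:.4g}")

# Tukey's multiple comparisons test on all pairwise group contrasts.
values = np.concatenate([vehicle, lps_atp, di_lps_atp])
groups = ["vehicle"] * 3 + ["LPS+ATP"] * 3 + ["DI+LPS+ATP"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```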
Effect of DI on BMDM cell viability
The BMDM cells were treated with different concentrations of DI (0.03125, 0.0625, 0.125, and 0.25 mM) for 24 h. The CCK-8 assay showed that DI-treated groups did not differ in cell viability compared to the vehicle group (P > 0.05, Fig. 1).
DI ameliorated cell mortality of BMDMs activated by LPS + ATP
Cell mortality in each group was detected by staining the cells with PI (Fig. 2a). As shown in Fig. 2b, the cell mortality of the DMSO + LPS + ATP group was 50.27 ± 3.70% versus 2.97 ± 0.13% in the vehicle group, a significant increase (****P < 0.0001), while that of the DI treatment group was 28.80 ± 2.30%, a significant decrease compared to the DMSO + LPS + ATP group (####P < 0.0001).
DI decreased the level of IL-1β in BMDMs
IL-1β levels in BMDMs were detected by ELISA and RT-qPCR. As shown in Fig. 3a, LPS + ATP-induced pyroptosis of BMDMs increased IL-1β levels in the supernatant, while DI treatment reduced the IL-1β concentration. Similarly, DI treatment decreased the mRNA expression of IL-1β (Fig. 3b). These findings demonstrate that DI decreases the level of IL-1β in BMDMs.
mRNA sequencing of the effect of DI treatment on LPS + ATP-induced pyroptosis in BMDMs
Comparative analysis of the two groups based on mRNA sequencing identified 2040 differentially expressed genes (DEGs), including 983 upregulated and 1057 downregulated genes (Fig. 4a, b). The top five upregulated DEGs with the highest significance were Gclc, Ednrb, Gss, Acss2, and Layn, and the top five downregulated DEGs were Edn1, Fscn1, IL-12β, IL-1β, and Saa3. Fig. 4c shows FPKM values from the mRNA sequencing results (Additional file 1: Table S1). Real-time PCR was used to verify the expression of these genes (Fig. 4d), and the expression trends of all ten genes were consistent with the sequencing results. All differences were statistically significant (*P < 0.05).
GO enrichment of DEGs after DI treatment
As shown in Fig. 5, the biological process GO terms enriched among the DEGs included signal transduction, biological process, regulation of transcription (DNA-templated), positive regulation of transcription by RNA polymerase II, and cell differentiation. Notably, DEGs were also enriched in the oxidation-reduction biological process. The mainly enriched cellular component terms were membrane, cytoplasm, nucleus, integral component of membrane, and cytosol. The mainly enriched molecular function terms were protein binding, metal ion binding, molecular function, identical protein binding, and nucleotide binding.
KEGG enrichment of genes after DI treatment
KEGG enrichment was used to explore pathway changes after DI treatment. The mainly enriched KEGG pathways were cytokine-cytokine receptor interaction, malaria, fluid shear stress and atherosclerosis, the TNF signaling pathway, and the Jak-STAT signaling pathway (Fig. 6). Thus, DI mainly affects the expression of inflammatory signaling pathways.
NAC reversed the DI effect on the LPS + ATP-induced pyroptosis of BMDMs
Based on the sequencing results, we found that DI significantly upregulated the oxidation-reduction-related genes Gclc and Gss, and the differential genes in the GO analysis were also enriched in the oxidation-reduction biological process. We therefore speculated that the oxidation-reduction process plays an essential role in the effect of DI and treated the DI-protected macrophage pyroptosis model with NAC. Next, we assessed cell mortality by PI staining (Fig. 7a) and ROS levels by DCFH-DA in the DI + LPS + ATP and NAC + DI + LPS + ATP groups (Fig. 7c). As shown in Fig. 7b, the cell mortality of the NAC + DI + LPS + ATP group (43.5 ± 0.64%) was significantly increased compared to that of the DI + LPS + ATP group (27.67 ± 0.41%) (####P < 0.0001). The level of ROS in the NAC + DI + LPS + ATP group was significantly decreased compared to that of the DI + LPS + ATP group (##P < 0.005). The expression of IL-1β in BMDMs was then detected by ELISA (Fig. 7d); compared to the DI + LPS + ATP group, the level of IL-1β in the NAC + DI + LPS + ATP group increased significantly. These findings indicate that NAC reversed the DI effect on the LPS + ATP-induced pyroptosis of BMDMs.
ML385 reversed the DI effect on the LPS + ATP-induced pyroptosis of BMDMs
Among the upregulated DEGs with the highest significance, genes such as Gss, Gclc, and Hmox1 [22,23] have been suggested to be regulated by NF-E2-related factor 2 (Nrf2), an important transcription factor in the cellular response to oxidative stress. Thus, we co-treated the macrophage pyroptosis model with ML385 (an Nrf2 inhibitor) and DI. As shown in Fig. 8, ML385 similarly reversed the effects of DI on cell mortality (****P < 0.0001) and IL-1β levels (***P < 0.0005) in macrophage pyroptosis. In summary, these results indicate that the oxidative stress-related protein Nrf2 is involved in the DI-mediated regulation of macrophage pyroptosis.
Discussion
In the current study, we demonstrated that DI reduces the cell mortality of LPS + ATP-induced macrophage pyroptosis and the level of the inflammatory factor IL-1β, while NAC reverses this protective effect. We also used high-throughput sequencing to compare DI-treated macrophage pyroptosis with the untreated pyroptosis model and found that the upregulated differential genes (Gclc and Gss) were mainly associated with oxidation-reduction, while the downregulated differential genes (IL-1β and IL-12β) were associated with inflammatory responses.
In the GO and KEGG enrichment analyses, we found that DI affects biological processes, including oxidation-reduction, and inflammatory signaling pathways.
We also found that the oxidative stress-related protein Nrf2 is involved in the DI-mediated regulation of macrophage pyroptosis. Thus, DI upregulated Gss and Gclc at the transcriptional level, activating the antioxidant stress response and decreasing the level of IL-1β, thereby inhibiting macrophage pyroptosis. Macrophage pyroptosis is involved in various inflammation-related diseases, such as psoriasis [24], osteoarthritis [25], and sepsis [8], and the associated inflammatory response can be attenuated by inhibiting macrophage pyroptosis. Studies have shown that the regulation of macrophage pyroptosis involves both Caspase-1-dependent classical and Caspase-1-independent non-classical pyroptosis pathways, which are complex signaling pathways. Moreover, the Keap1/Nrf2/HO-1 pathway can prevent pulmonary ischemia-reperfusion injury by reducing oxidative stress and promoting antioxidant enzyme activity to inhibit alveolar macrophage pyroptosis [26]. In addition, inhibition of the TNF-α/HMGB1 inflammatory signaling pathway suppresses macrophage pyroptosis to improve liver and kidney function during acute kidney injury and acute liver failure [27].
Pyroptosis is one mode of macrophage death, alongside apoptosis and necrosis. A previous study showed that a high dose of the itaconate surrogate 4-octyl itaconate (4-OI) induces apoptotic cell death independently of the classical inflammasome pathway [28]. Currently, there are no studies confirming an association of DI with apoptosis or necrosis. It has been demonstrated that itaconic acid inhibits the activation of the NLRP3 inflammasome in macrophages and reduces the level of IL-1β, which is negatively correlated with the level of intracellularly accumulated itaconic acid. DI is a cell-permeable itaconic acid analogue that is not metabolized to itaconic acid intracellularly but is strongly electrophilic and can downregulate the level of IL-1β [29]. The current study also confirmed that DI reduces IL-1β levels and alleviates cell death in macrophage pyroptosis. Nonetheless, additional studies are required to investigate whether DI is associated with apoptosis and necrosis.
Among the differentially upregulated genes, Gclc and Gss were associated with the redox response. Gclc and Gss are key enzymes in the synthesis of glutathione (GSH), which has an antioxidant effect that maintains cellular redox stability and prevents massive cell death [17]. Among the differentially downregulated genes, IL-1β and IL-12β are inflammatory factors. IL-1β is an upstream pro-inflammatory cytokine, and some studies reported that blocking IL-1β also reduces the immunosuppression [30] that occurs in late sepsis. IL-12β is a cytokine of the IL-12 family and a key pro-inflammatory cytokine produced by macrophages [31]. Therefore, DI may improve the inflammatory response in sepsis by increasing anti-oxidative stress-related factors and decreasing the expression of inflammatory factors in macrophages at the transcriptional level.
In addition, the differentially enriched genes in the GO enrichment analysis of biological processes included the oxidation-reduction process. We therefore compared cell death and IL-1β levels in the DI + LPS + ATP and NAC + DI + LPS + ATP groups by PI staining and ELISA and found that the number of cell deaths and the level of IL-1β were increased by NAC + DI co-treatment. NAC is an antioxidant, but in the presence of redox-active transition metals it can cause biological damage via thiol oxidation by the metal ion, followed by the generation of superoxide, H2O2, and •OH. NAC exerts diverse, complex effects that are largely associated with maintaining intracellular glutathione (GSH) levels [32]. Besides, studies [22,23] showed that Nrf2 regulates genes such as Gss, Gclc, and Hmox1. We co-treated the macrophage pyroptosis model with an Nrf2 inhibitor and DI and found an effect similar to that of NAC. In our study, NAC reduced the level of ROS in the DI + LPS + ATP group and reversed the DI effect on macrophage pyroptosis, and the oxidative stress-related protein Nrf2 is involved in the DI-mediated regulation of macrophage pyroptosis.
Fig. 8 Effect of ML385 on DI treatment of LPS + ATP-induced pyroptosis of BMDMs. a Cell mortality was observed by fluorescence microscopy after PI staining. b Cell numbers were quantified by counting three random fields at 10×, and mortality is expressed as mean ± SEM. c The concentration of IL-1β was detected by ELISA. (n = 3 per group, ***P < 0.005, ****P < 0.0001 compared to the DI + LPS + ATP group)
Several studies have shown that DI exerts its effects by modulating different signaling pathways. DI protects against fungal keratitis by activating the Nrf2/HO-1 signaling pathway [33]. It also prevents LPS-induced mastitis by activating MAPK and Nrf2 and inhibiting the NF-κB signaling pathway [34], and LPS-induced endometritis by suppressing TLR4/NF-κB and activating Nrf2/HO-1 signaling [35]. The immunomodulatory effects of DI on IL-17-IκBζ axis-driven inflammation were observed in an animal model of imiquimod-induced psoriasis [19]. In this study, KEGG enrichment analysis suggested a significant effect of DI on many signaling pathways, such as TLR, IL-17, and PI3K-AKT, which are involved in the regulation of oxidative stress processes. Therefore, DI alleviates macrophage pyroptosis, and the underlying mechanism is related to the oxidative stress response; whether it also involves these signaling pathways needs further investigation.
Conclusions
Taken together, DI alleviates the pyroptosis of macrophages through oxidative stress, which provides an experimental basis for the regulation of pyroptosis in sepsis and a theoretical basis for anti-inflammatory therapy and the suppression of oxidative stress in clinical sepsis. | 2023-01-15T14:40:54.971Z | 2021-11-08T00:00:00.000 | {
"year": 2021,
"sha1": "da7af1c62b3fbe2ce53004a057c0e1fff3e9999a",
"oa_license": "CCBY",
"oa_url": "https://bmcimmunol.biomedcentral.com/track/pdf/10.1186/s12865-021-00463-3",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "da7af1c62b3fbe2ce53004a057c0e1fff3e9999a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
225046704 | pes2o/s2orc | v3-fos-license | Assessment of odor hedonic perception: the Sniffin’ sticks parosmia test (SSParoT)
Qualitative olfactory dysfunction is characterized as distorted odor perception and can have a profound effect on the quality of life of affected individuals. Parosmia and phantosmia represent the two main subgroups of qualitative impairment and are currently diagnosed based on patient history only. We have developed a test method which measures qualitative olfactory function based on the odors of the Sniffin' Sticks Identification subtest. The newly developed test is called the Sniffin' Sticks Parosmia Test (SSParoT). SSParoT uses hedonic estimates of two oppositely valenced odors (pleasant and unpleasant) to assess hedonic range (HR) and hedonic direction (HD), which represent qualitative olfactory perception. HR is defined as the perceivable hedonic distance between two oppositely valenced odors, while HD serves as an indicator for overall hedonic perception of odors. This multicenter study enrolled a total of 162 normosmic subjects in four consecutive experiments. Cluster analysis was used to group odors from the 16-item Sniffin' Sticks Identification test and 24 additional odors into clusters with distinct hedonic properties. Eleven odor pairs were found to be suitable for estimation of HR and HD. Analysis showed agreement between test-retest sessions for all odor pairs. SSParoT might emerge as a valuable tool to assess qualitative olfactory function in health and disease.
The composite threshold-discrimination-identification (TDI) score then allows discrimination between normal (normosmia), reduced (hyposmia), or severely impaired (anosmia) olfactory function 31,32. The cut-off scores for these classifications are usually interpreted within predefined age groups, since olfactory function deteriorates with age [31][32][33]. Nevertheless, results (and associated cut-off scores) from young adults (the olfactory reference group) serve as a general benchmark for olfactory function, since subjects from this age group demonstrate the best olfactory test results in quantitative terms 31. Although objective methods are established in clinical routine for the assessment, diagnosis, and follow-up testing of quantitative OD, the diagnosis of qualitative OD is currently based mainly on the medical history or the use of a questionnaire only 27. This shortcoming highlights the need for greater acceptance of previously proposed methods that focus on qualitative OD in a clinical context [34][35][36][37][38][39].
Odor quality refers to the object associated with the odor (e.g., the smell of a rose), whereas the hedonic feature of an odor is defined as its valence (i.e., pleasant or unpleasant) 40,41. The perceived pleasantness of odors shows sex-specific differences and is strongly correlated with odor intensity 34,42,43. Given that olfactory threshold function is usually measured in patients who complain about reduced perception of odor intensity (quantitative OD), the main complaint of patients with parosmia or phantosmia (unpleasant hedonic perceptions) might likewise serve as an objective parameter in qualitative OD. Such a method would be of clinical significance in the counselling and follow-up of patients and in elucidating factors that may modulate the condition 23. Moreover, new insights into qualitative OD might also reveal the prognostic value of distorted odor perceptions in neurodegenerative diseases or even in mood disorders.
Therefore, built on the concept of pre-existing protocols, we developed a new test method which measures hedonic olfactory perception based on pairwise presented odors by Sniffin' Sticks, that we called the Sniffin' Sticks Parosmia Test (SSParoT). The aim of this study was to (i) define objective parameters that are exemplary for qualitative OD, (ii) evaluate the suitability of well-known odors for SSParoT testing, (iii) assess test-retest reliability of SSParoT, and (iv) present normative values of these newly defined parameters derived from normosmic subjects within the olfactory reference group.
Results
Developing SSParoT.
In order to develop a method which measures qualitative olfactory dysfunction, we designed a diagnostic test that utilizes odors of opposite hedonic valence. Based on the consideration that patients with parosmia and phantosmia usually complain about unpleasant odor perceptions, the SSParoT measures the hedonic range (HR) and hedonic direction (HD) of pairwise presented pleasant and unpleasant odors on a 9-point hedonic scale (Supplementary Fig. S1). The HR represents the perceptible range, while the HD depicts the balance or imbalance between two hedonically oppositely valenced odors. Odors are presented pairwise using felt-tip pens, starting with the pleasant odor (counterbalanced).
Eight odors from the Sniffin' Sticks Identification test are suitable for hedonic testing.
To determine whether the 16 odors from the German version of the Sniffin' Sticks Identification test are suitable for the SSParoT method, we applied them to a cohort of 50 normosmic subjects. Odors were presented according to the presentation order of the original 16-item Identification test, and participants were asked to rate hedonic estimate and intensity during one visit. Descriptive statistics are reported, followed by cluster analysis to merge hedonically similar odors into three groups: (i) pleasant, (ii) neutral, and (iii) unpleasant (Supplementary Fig. S2). In a final step, odors from the pleasant and unpleasant groups were paired with one another using two methods (method one: most pleasant odor with most unpleasant; method two: most pleasant odor with least unpleasant), and HR and HD were calculated for each pair. To identify the optimal pairing method, defined as equal HR across odor pairs, we performed Kruskal-Wallis tests. We excluded rose (phenylethyl alcohol) from pairing, since this odor has been shown to be perceptually unstable, with high inter-individual variability, in our investigation of suprathreshold testing and in threshold detection 45.
The results of hedonic and intensity ratings for each of the 16 odors are detailed in Table 1 and Fig. 1. Hierarchical cluster analysis and the dendrogram for hedonic ratings revealed a three-cluster solution to be optimal for these odors (Supplementary Figs. S2 and S3). According to the mean hedonic values, the clusters were defined as: Cluster 1 (pleasant): Banana, Pineapple, (Rose), Apple, Peppermint, Cinnamon; Cluster 2 (neutral): Coffee, Shoe leather, Anise, Liquorice, Orange, Lemon; Cluster 3 (unpleasant): Fish, Garlic, Turpentine, Clove (Supplementary Table S1). Subsequent pairing using both methods resulted in 4 pairs for each method (Supplementary Table S2). Kruskal-Wallis tests revealed a significant effect of pair on HR in both models (method 1: H = 10.32, p = 0.02, df = 3; method 2: H = 61, p < 0.01, df = 3). Since this difference was smaller for the pairing method matching the most pleasant with the least unpleasant odor, we chose this method for further analysis in order to benefit from homogeneous HR values when interpreting results. The odor pairs were subsequently defined as: Pair 1: Peppermint and Fish; pair 2: Apple and Garlic; pair 3: Pineapple and Turpentine; pair 4: Banana and Clove.
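To illustrate how such a pair-selection criterion can be checked, the sketch below runs a Kruskal-Wallis test on HR scores grouped by odor pair; the HR values are hypothetical and the snippet is not the software used in the study.

```python
from scipy import stats

# Hypothetical HR scores (possible range -8..8) for the four odor pairs.
hr_pair1 = [6, 7, 5, 8, 6, 7]
hr_pair2 = [5, 6, 6, 7, 5, 6]
hr_pair3 = [7, 6, 8, 6, 7, 7]
hr_pair4 = [6, 5, 7, 6, 6, 5]

# A large H (small p) means HR differs across pairs; the pairing method
# with the smaller H yields the more homogeneous pairs.
h, p = stats.kruskal(hr_pair1, hr_pair2, hr_pair3, hr_pair4)
print(f"H = {h:.2f}, p = {p:.3f}, df = 3")
```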
Hence, eight odors (merged into 4 pairs) from the 16-item Sniffin' Sticks Identification test were suitable for the assessment of HR and HD. Next, we assessed these odor pairs for test-retest reliability in another cohort of normosmic subjects.
Agreement between test and retest measurements in HR and HD for odors selected from the Sniffin' Sticks Identification test.
The reproducibility, in terms of test-retest reliability and agreement, plays a significant role in the development of new test methods. A new cohort of 33 subjects was therefore included in a test-retest study with at least one day between sessions (mean/SD = 12/7 days, min/max = 2/34 days). We applied the Bland-Altman statistical method, which compares two measurements (test and retest) of the same variable (i.e., HR and HD for each odor pair) to assess their degree of agreement. The interpretation of results is based on (i) the bias (mean difference between both sessions), (ii) the 95% limits of agreement (LoA; mean difference between both sessions ± 1.96 standard deviations of the difference between sessions), and (iii) visual examination of Bland-Altman plots. A bias of zero would indicate no systematic difference between sessions, while a narrower 95% LoA indicates better agreement between two measurements. Subsequently, Wilcoxon matched-pairs signed-rank tests were performed to detect potential differences in HR and HD between the two sessions.
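A minimal sketch of the Bland-Altman computation described here, assuming hypothetical HR values for one odor pair; plotting is omitted.

```python
import numpy as np

def bland_altman(test, retest):
    """Bias and 95% limits of agreement between two paired measurements."""
    diff = np.asarray(retest, float) - np.asarray(test, float)
    bias = diff.mean()                      # mean difference between sessions
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical HR values for one odor pair in the test and retest sessions.
bias, loa = bland_altman(test=[6, 7, 5, 8, 6], retest=[6, 6, 5, 8, 7])
print(f"bias = {bias:.2f}, 95% LoA = ({loa[0]:.2f}, {loa[1]:.2f})")
```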
Next, we extended the set beyond the 8 odors selected from the 16-item Sniffin' Sticks Identification test and evaluated 24 new odors using felt-tip pens as the carrier medium.
Fourteen additional odors are suitable for hedonic testing.
In order to optimize test performance in terms of higher accuracy, we evaluated 24 additional odors (Supplementary Table S4). Felt-tip pens were selected as the carrier medium, since they had proved very suitable for hedonic testing in the first and second experiments. The concentrations of the newly added odors were adjusted using the weakest and most intense odors from experiment one as benchmarks. Hedonic and intensity estimates were documented in a new cohort of 52 subjects. Odors were paired using the method from the first experiment (most pleasant with least unpleasant). Olfactory performance was screened using the 16-item Sniffin' Sticks Identification test at the end of the third experiment.
Results from hedonic and intensity estimates are presented in Table 2 and Supplementary Table S5. Subsequent pairing using the same method as in experiment one yielded seven new odor pairs (pairs 5 to 11). We then analyzed whether these newly defined odor pairs also show agreement between two measurements in an additional cohort of normosmic subjects.
Agreement between test and retest measurements for HR and HD of additionally evaluated odors.
To assess the reproducibility of HR and HD based on the seven newly identified odor pairs, we conducted a test-retest reliability study (commensurate with experiment 2) with at least one day between sessions (mean/SD = 13/16 days, min/max = 1/56 days). We included a new cohort of 27 subjects with no complaints regarding the sense of smell. Calculation of the mean differences between test-retest sessions (retest minus first test session) revealed a bias of nearly zero for all HR measures (Table S7) and only one significant difference in HR, for pair 11 (p = 0.004). However, as we did not correct for multiple testing (in order to detect subtle differences), the difference for pair 11 (mean HR = 4.4 vs 3.1) was still acceptable.
Hence, all 7 additional odor pairs showed acceptable agreement between test-retest sessions. The next step included calculations of normative values for HR and HD.
Normative values for HR and HD of evaluated odor pairs.
In accordance with established test methods in chemosensory research, we calculated normative values (e.g., mean, standard deviation, percentiles) for the interpretation of HR and HD 29,30,46,47. We only included results from normosmic subjects of the olfactory reference group aged between 18 and 35 years, since young adults demonstrate the best olfactory test results in quantitative terms 32. Similar to the above-mentioned test methods in chemosensory research, we defined the 10th percentile as the cut-off value to distinguish between "normal" and "reduced/negative" HR and HD. We first calculated normative values of HR and HD for (i) each odor pair separately, (ii) the short version of SSParoT, based on HR and HD from odor pairs 1 to 4 (representing odors from the 16-item Sniffin' Sticks Identification test), and (iii) the extended version of SSParoT, based on HR and HD from all odor pairs (1 to 11). SSParoT results (HR and HD) can therefore be interpreted for each odor pair separately, or in comparison to the cut-off scores of the short or extended SSParoT version, depending on the available felt-tip pens (i.e., interpretation based on the cut-off scores of the short version of SSParoT when only the 16-item Sniffin' Sticks Identification test is used). Furthermore, since the hedonic judgement of odors shows sex-specific differences, all normative data are stratified by sex (Tables 3 and 4).
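A small sketch of how such normative cut-offs can be derived, using hypothetical HR scores; the study's actual values are given in Tables 3 and 4.

```python
import numpy as np

# Hypothetical HR scores from normosmic subjects of the reference group.
hr_scores = np.array([8, 7, 6, 7, 8, 5, 6, 7, 4, 8, 6, 7])

cutoff = np.percentile(hr_scores, 10)  # 10th percentile as the cut-off
print(f"mean = {hr_scores.mean():.1f}, SD = {hr_scores.std(ddof=1):.1f}, "
      f"10th percentile = {cutoff:.1f}")
# An individual HR below this cut-off would be classified as reduced.
```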
Discussion
Significant progress has been made in the objective diagnosis of quantitative olfactory dysfunction, but there is a gap in tests for qualitative impairments 7,34-39,48,49. Although distorted odor perceptions have long been a well-known symptom of qualitative OD, little is known about their pathophysiology, clinical course, and potential prognostic value 9,10,22,50-52. For these reasons, we established the SSParoT, which is based on the presentation of hedonically oppositely valenced odors using felt-tip pens 29,30. SSParoT measures the hedonic range (HR) and hedonic direction (HD) of these odor pairs, which can then be interpreted individually (for each pair) against normative data based on the 10th percentile. Here we showed that 4 pairs (8 odors) from the original Sniffin' Sticks Identification test are suitable for hedonic testing. In addition, the reproducibility of SSParoT was validated, as test-retest results showed substantial agreement. Moreover, 24 additional odors were introduced, of which 14 (7 pairs) also appeared valid for hedonic testing. Finally, results from these 14 additional odors also demonstrated reproducibility.
In reference to the development process of SSParoT, the first aim of this work was to define suitable, objectifiable parameters that are exemplary for qualitative OD. Since previous works provided evidence that parosmia is usually characterized as hedonically unpleasant, evaluation of hedonic estimates seemed intuitive. Measurements of perceived pleasantness of different odors were achieved by using 9-point hedonic scales with visual representation, which have been long established in chemical sensory science 53,54 . We defined HR as the perceivable hedonic distance (dynamic range) between two oppositely valenced odors, while HD serves as an indicator for the general hedonic direction (pleasant or unpleasant) of odors in daily life.
The second objective of our study was to develop a test that is readily available, practical in daily use, and potentially reusable. Based on the 16-item Sniffin' Sticks Identification test, part of the SSParoT can be implemented immediately into clinical routine by using pre-existing tools with a different test protocol 7,29,30,55. Long shelf-life also ensures that this new method is cost-effective 29,30. To minimize the potential bias of habituation and adaptation processes (repeated presentation of the same odors), SSParoT should be performed according to the proposed protocol prior to odor identification testing 56,57.
The third objective of this study was to develop a method that controls for the association between hedonic judgement and perceived intensity, since previous studies have demonstrated a close interrelation 34,58-60. We therefore adjusted the intensity (concentration) of the 24 additional odors in experiment three based on results from the first experiment, using the weakest and most intense odors as benchmarks. The intensity ratings confirmed the preliminary experiments, being comparable to the benchmark (first experiment).
Regarding the normative data presented in this study, these were derived from healthy volunteers of the olfactory reference group who yielded olfactory test scores in the normosmic range 32. Previously published test methods in chemosensory research that measure quantitative olfactory and gustatory function provided evidence for the usefulness of normative data (and associated cut-off scores) in the clinical evaluation of patients with chemosensory dysfunctions and for research purposes 30,31,61. Regarding quantitative olfactory function, the 10th percentile (based on normative data derived from healthy subjects of the olfactory reference group) has been proposed as a "general" cut-off score to distinguish between healthy and diseased 31. The authors reasonably noted that the interpretation of test results based on this cut-off score remains to some extent an "arbitrary" decision, since it was derived from subjects of the age group with the overall best olfactory function. Since olfactory function deteriorates with age [31][32][33], individual test results must always be interpreted within each age group. Based on these considerations, we propose that the 10th percentile in HR and HD for each odor pair might also serve as a "general" cut-off score to distinguish between "normal" and "reduced/impaired" hedonic perception in terms of qualitative olfactory function. Given the main complaints of patients with qualitative OD (i.e., unpleasant hedonic perceptions), we would expect patients with odor-specific parosmia to achieve odor-pair-specific HR and HD below the 10th percentile, while we would expect those with non-specific parosmia triggered by any odor to achieve HR and HD below the 10th percentile in both the short (odor pairs 1 to 4) and extended (odor pairs 1 to 11) versions of SSParoT. For patients with phantosmia, we would expect results similar to those with non-specific parosmia. However, since unpleasant hedonic perceptions have historically been more commonly associated with parosmia (it has even been termed 'cacosmia' 8), HD (representing general hedonic experiences of odors in daily life) might be lower in patients with parosmia compared to those with phantosmia. Since HR and HD are based on the same parameters while addressing different aspects of hedonic perception, both high and low HR can result in a value of zero in HD. A previous study using a method similar to that of SSParoT, which assessed HR in patients with Parkinson's disease (PD), revealed that PD patients with concurrent smell loss exhibit significantly lower HR compared to healthy, normosmic subjects. This difference was also found in normosmic PD patients compared to healthy, normosmic subjects. The authors hypothesized that this difference might be mediated independently of quantitative olfactory dysfunction; hence reduced HR (anhedonia) might be a distinct olfactory symptom in PD patients. Moreover, the authors showed that HR was negatively correlated with the Snaith-Hamilton Pleasure Scale, which measures self-perceived anhedonia (higher scores indicating higher levels of present-state anhedonia). This finding further supports the practical framework and validity of HR, since a higher self-perceived state of anhedonia was also associated with lower measured HR 62.
Furthermore, preliminary results from three patients with qualitative OD included in the current investigation (two patients with non-specific parosmia and one patient with phantosmia; Supplementary Table S8) provided further evidence for the proof of concept of SSParoT. Compared with the normative data, all patients performed below the 10th percentile in HR based on the extended version of SSParoT.
This test method is unique as it partly uses established tests of olfactory performance to study odor hedonic perception. However, this study also has limitations. The first and main limitation remains the full validation of SSParoT, since we only included three patients with qualitative OD. Additionally, the usefulness of both parameters (i.e., HR and HD) and of the cut-off scores of the short and extended versions of SSParoT needs further investigation in larger cohorts of patients with qualitative OD. Secondly, the effect of cross-adaptation 41,63-65 (i.e., the presentation of one odor raises the olfactory threshold and decreases the perceived intensity of another odor) during repeated judgements within a test session might have biased the results of odor intensity and hedonic valence ratings. However, since we used an interstimulus interval of 30 s between odor presentations 56,57, cross-adaptation is unlikely to have biased our results to a large extent. Thirdly, the general labeled magnitude scale (gLMS) 66,67 might have been an appropriate alternative to the nine-point hedonic scale and the visual analogue scale for the assessment of perceptual responses and intensity estimates. Fourthly, although we provide first normative data, we only included subjects from the olfactory reference age group younger than 35 years. A previous study on hedonic responses to various odors in different age groups provided evidence that these responses might be mediated by odor semantic knowledge that differs over the course of life 68. Therefore, further normative data are needed for different age groups to allow individual interpretation of results. This study adds to the current literature on olfactory test methods in three important ways. First, it introduces a new method that measures hedonic estimates of odors (HR and HD, respectively) based on pre-existing tools, allowing clinicians to integrate this new test into clinical routine immediately. Second, it provides evidence that fourteen additional odors are also suitable for hedonic testing. Third, it provides the first normative data, derived from healthy, normosmic subjects of the olfactory reference group, for the interpretation of test results.
Study population. This multicenter, prospective experimental study was conducted at the Medical University of Vienna and the University of Erlangen-Nürnberg. Healthy adults were recruited through invitational notices displayed on pin boards at all study sites. Eligible subjects met the inclusion criterion of no self-reported complaints (quantitative or qualitative) regarding the senses of smell and taste. All participants underwent a routine ear, nose, and throat examination, and a medical history was obtained. A total of 179 subjects were screened, and 162 normosmic subjects (89 males, 74 females, mean ± SD age, 33.5 ± 15.6 years, range 18-82) were enrolled across the two study centers. In addition, three patients presenting with self-reported qualitative olfactory dysfunction (parosmia and phantosmia) were recruited at the University of Erlangen-Nürnberg (1 male, 2 females, mean ± SD age, 31.3 ± 12.5 years).
Development of the SSParoT. Hedonic estimates were assessed using a 9-point hedonic scale 53,54 (Supplementary Fig. S1).
This scale was first introduced by Peryam et al. 53 to measure food preferences and was quickly adopted by industry to measure the acceptability of various products related to food and cosmetics. We hypothesized that hedonic estimates of odors can be categorized as (i) pleasant, (ii) neutral, or (iii) unpleasant 69. We therefore defined hedonic range (HR) as the perceivable hedonic distance between two oppositely valenced odors, and hedonic direction (HD) as an indicator of general hedonic experiences of odors in daily life (e.g., how pleasant a pleasant odor is perceived to be and how unpleasant an unpleasant odor is perceived to be). Both scores were developed based on the practical framework that patients with qualitative OD mainly complain about unpleasant hedonic perceptions [7][8][9][10]. A predefined odor pair consisting of one pleasant (E1) and one unpleasant (E2) odor was presented pairwise, and the subjects' task was to rate the hedonic tone. Both scores were then calculated as follows for each pair separately (see the code sketch below): (i) the hedonic range of each odor pair was defined as the difference between both estimates, HR = E1 - E2; given that the hedonic scale ranges from -4 to +4, HR can take whole numbers ranging from -8 to 8. (ii) The hedonic direction of each odor pair was defined as the mean value of both estimates, HD = (E1 + E2)/2; since the hedonic scale ranges from -4 to +4, HD can take whole or half numbers ranging from -4 to +4. Felt-tip pens (Burghart Messtechnik GmbH, Wedel, Germany) were chosen as the carrier medium for all odors. These felt-tip pens are widely used and characterized by (i) reusability, (ii) long shelf-life, (iii) easy application, and (iv) the possibility of self-filling blank pens 29,30.
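A minimal sketch of this scoring, where the order of subtraction (E1 minus E2) is inferred from the stated range of -8 to 8; the example ratings are hypothetical.

```python
def hedonic_scores(e1, e2):
    """HR and HD for one odor pair rated on the 9-point scale (-4..+4).

    e1: hedonic estimate of the pleasant odor
    e2: hedonic estimate of the unpleasant odor
    """
    hr = e1 - e2          # hedonic range, whole numbers from -8 to 8
    hd = (e1 + e2) / 2    # hedonic direction, whole or half numbers from -4 to +4
    return hr, hd

print(hedonic_scores(3, -4))   # (7, -0.5): wide range, near-balanced direction
print(hedonic_scores(-1, -3))  # (2, -2.0): narrow range shifted toward unpleasant
```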
In four experiments, we assessed (i) the suitability of the 16-item Sniffin' Sticks Identification test for hedonic estimates, (ii) the reliability of HR and HD for odor pairs selected from the 16-item Sniffin' Sticks Identification test, (iii) the suitability of 24 additional odors for hedonic estimates, and (iv) the reliability of HR and HD for odor pairs selected from the extended version. Since the perceived intensity of odors is known to be associated with pleasantness 34,58-60, we simultaneously evaluated the intensity of each presented odor on a visual analogue scale ranging from 0 to 10 (left-hand end: 0 = no intensity; right-hand end: 10 = strong intensity).
Experiment 1: subjects. Fifty-two subjects with no complaints regarding the sense of smell were screened using the German version of the 16-item Sniffin' Sticks Identification test, and 50 normosmic subjects (28 males, 22 females, mean ± SD age, 28.8 ± 9.8 years, range 18-62) from both study centers (Erlangen, n = 20; Vienna, n = 30) were included during one visit.
Experiment 1: design. This experiment was carried out at the Friedrich-Alexander University Erlangen-Nürnberg and the Medical University of Vienna. The aim was to assess the hedonic estimates of commonly used odors from the original 16-item Sniffin' Sticks test and to identify hedonically oppositely valenced odors (pleasant and unpleasant). Odors were presented according to the order of the original 16-item Identification test with a break of at least 30 s between odors to prevent olfactory desensitization 44,56,57. The subjects' task was to rate hedonic estimates, followed by intensity ratings. At the end of the first experiment, subjects underwent olfactory testing using the 16-item Identification test. Normal olfactory function was defined as ≥ 11 out of 16 possible points 32.
Experiment 1: statistical analysis. Hedonic estimates were first grouped into clusters by hierarchical clustering using the squared Euclidean distance metric and the average-linkage method. Based on visual inspection of the cluster profiles and the inverse scree (elbow) test 70, we determined the optimal number of clusters to be three. Cluster membership of odors was then identified in a second step using non-hierarchical K-means analysis, and clusters were named according to their hedonic valence: Cluster 1 = pleasant odors, Cluster 2 = neutral odors, and Cluster 3 = unpleasant odors. Subsequently, HR and HD of paired odors (pleasant and unpleasant) from Clusters 1 and 3 were calculated based on two different methods: (i) pairing the most pleasant odor from Cluster 1 with the most unpleasant odor from Cluster 3, and (ii) pairing the most pleasant odor from Cluster 1 with the least unpleasant odor from Cluster 3. In a final step, the HR values of each method were tested for significant differences using Kruskal-Wallis tests in order to pick the pairing method with the smallest difference (a code sketch of the clustering step follows below).
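A code sketch of the two-step clustering, with a hypothetical odors-by-subjects rating matrix; the actual analysis was performed with standard statistics software, not with this code.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from sklearn.cluster import KMeans

# Hypothetical matrix: rows = odors, columns = subjects' hedonic ratings (-4..+4).
rng = np.random.default_rng(0)
ratings = np.vstack([
    rng.integers(2, 5, (5, 10)),     # pleasant-like odors
    rng.integers(-1, 2, (5, 10)),    # neutral-like odors
    rng.integers(-4, -1, (5, 10)),   # unpleasant-like odors
]).astype(float)

# Step 1: hierarchical clustering (average linkage, squared Euclidean distance);
# the dendrogram / elbow criterion suggests the number of clusters (here: 3).
tree = linkage(ratings, method="average", metric="sqeuclidean")

# Step 2: non-hierarchical K-means assigns final cluster membership with k = 3.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(ratings)
print(labels)  # cluster index per odor; name clusters by their mean rating
```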
Experiment 2: subjects. A new cohort of 33 normosmic subjects visited the study centers twice with at least one day separation between visits (mean ± SD duration, 11.8 ± 7.3 days, range 2-34).
Experiment 2: design. This second experiment took place at the Friedrich-Alexander University Erlangen-Nürnberg and was designed to assess the reliability of HR and HD estimates. Hedonically matched pairs of opposing odors selected from experiment 1 were presented pairwise with a break of 30 s between odors. Odor pairs were presented starting with the most pleasant odor (counterbalanced), followed by its matched pair: Pair 4: Banana and Clove; pair 3: Pineapple and Turpentine; pair 2: Apple and Garlic; pair 1: Peppermint and Fish. The subjects' task was to estimate hedonic valence and intensity as described in experiment 1. Olfactory performance was screened using a randomized procedure after the first or second visit to detect potential confounders based on the above-described cut-off values.
Experiment 2: statistical analysis. HR and HD were calculated for each pair and session. To assess agreement between measurements, we first calculated the absolute difference (bias) in HR and HD scores for each pair of odors, followed by graphical visualization using Bland-Altman plots 71. Next, we compared HR and HD of both sessions using Wilcoxon matched-pairs signed-rank tests without correcting for multiple comparisons, in order to identify small differences between the sessions.
Experiment 3: statistical analysis.
Hedonic estimates were clustered following the same methods described in experiment 1. Again, three clusters were determined to be the optimal number: Cluster 1 = pleasant, Cluster 2 = neutral, and Cluster 3 = unpleasant. Subsequently, pleasant and unpleasant odors from Clusters 1 and 3 were paired using the same method as described in experiment 1. Finally, the HR values of each newly matched pair were calculated and tested for significant differences using the Kruskal-Wallis test to validate the odor pairs.
Experiment 4: subjects. The last experiment screened another 34 subjects and included a total of 27 normosmic subjects (12 males, 15 females, mean ± SD age, 43 ± 19 years, range 20-82) for further analysis. Subjects visited the study center twice with at least one day separation between visits (mean ± SD duration, 13.5 ± 16.3 days, range 1-56).
Experiment 4: design. This experiment was conducted at the Friedrich-Alexander University Erlangen-Nürnberg. As in experiment 2, the last experiment was designed to investigate the reliability of HR and HD estimates for the additionally matched pairs of opposing odors. Odor pairs were presented starting with the most pleasant odor, followed by its matched pair (counterbalanced): Pair 5: Peach and Butter; pair 6: Coco and n-butyric acid; pair 7: Caramel and iso-butyric acid; pair 8: Raspberry and Indole; pair 9: Ice bonbon and Skatole; pair 10: Lemon and Civet; pair 11: Orange and Valeric acid. Olfactory performance was tested after either the first or the second visit based on a study protocol-inherent randomization.
Experiment 4: statistical analysis. HR and HD were calculated for each of the newly added odor pairs.
Bland-Altman plots were visualized, following calculation of the absolute agreement between visits. Group comparisons were performed using Wilcoxon matched-pairs signed-rank tests to assess differences in HR and HD between the two visits. In order to detect subtle differences, we did not correct for multiple testing.
Normative values. We only included HR and HD results from normosmic subjects of the olfactory reference group aged between 18 and 35 years: odor pairs 1 to 4 (n = 50; experiment 1 and experiment 2, first visit) and odor pairs 5 to 11 (n = 56; experiment 3 and experiment 4, first visit). Descriptive statistics of HR and HD (i.e., mean, standard deviation, min-max, 10th percentile) were calculated for (i) each odor pair separately, (ii) the short version of SSParoT, based on HR and HD of odor pairs 1 to 4, and (iii) the extended version of SSParoT, based on HR and HD of odor pairs 1 to 11.
Data availability
The institutional ethics committee (Ethikkommission der Medizinischen Universität Wien, Borschkegasse 8b/E06, 1090 Vienna) imposes legal and ethical restrictions on the present data. Requests for data will be administered by the corresponding author.
Received: 4 June 2020; Accepted: 11 September 2020 | 2020-10-24T05:06:04.155Z | 2020-10-22T00:00:00.000 | {
"year": 2020,
"sha1": "6cc6a201a10b247eecff9d378d4e355380581dfb",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-74967-0.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6cc6a201a10b247eecff9d378d4e355380581dfb",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
14398524 | pes2o/s2orc | v3-fos-license | Hepatitis B virus and its sexually transmitted infection - an update
Epidemiology: incidence and prevalence: About 5% of the world’s population has chronic hepatitis B virus (HBV) infection, and nearly 25% of carriers develop chronic hepatitis, cirrhosis, and hepatocellular carcinoma (HCC). The prevalence of chronic HBV infection in human immunodeficiency virus (HIV)-infected individuals is 5%-15%; HIV/HBV coinfected individuals have a higher level of HBV replication, with higher rates of chronicity, reactivation, occult infection, and HCC than individuals with HBV only. The prevalence of HBV genotype A is significantly higher among men who have sex with men (MSM), compared with the rest of the population. Molecular mechanisms of infection, pathology, and symptomatology: HBV replication begins with entry into the hepatocyte. Sodium taurocholate cotransporting polypeptide was identified in 2012 as the entry receptor of HBV. Although chronic hepatitis B develops slowly, HIV/HBV coinfected individuals show more rapid progression to cirrhosis and HCC. Transmission and protection: The most common sources of HBV infection are body fluids. Hepatitis B (HB) vaccination is recommended for all children and adolescents, and all unvaccinated adults at risk for HBV infection (sexually active individuals such as MSM, individuals with occupational risk, and immunosuppressed individuals). Although HB vaccination can prevent clinical infections (hepatitis), it cannot prevent 100% of subclinical infections. Treatment and curability: The goal of treatment is reducing the risk of complications (cirrhosis and HCC). Pegylated interferon alfa and nucleos(t)ide analogues (NAs) are the current treatments for chronic HBV infection. NAs have improved the outcomes of patients with cirrhosis and HCC, and decreased the incidence of acute liver failure.
INTRODUCTION
A sexually transmitted infection (STI) is defined as an infection that results from transmission of a pathogenic organism by sexual contact (i.e., any genital or anal contact with another person's genitals, anus, or mouth) and that accounts for a noticeable amount of illness in the general population or in a defined subpopulation [1,2]. Although there is no consensus on when the terms STI and sexually transmitted disease (STD) should be used, the American Sexual Health Association (ASHA) makes a distinction between the two terms [3]. The concept of "a disease," as in STD, suggests a clear medical problem, usually with obvious signs or symptoms. However, most people infected with one of the most common STIs show no signs or symptoms, or only mild ones that are easily overlooked. A sexually transmitted virus or bacterium can infect its host, which may or may not result in "a disease." In this article, we use the term STI.
Organisms such as hepatitis B virus (HBV) that cause infections via sexual transmission can also cause infections via other routes, such as percutaneous transmission by contaminated needles and vertical transmission in utero or during delivery. As a typical STI, HBV infection is present in all types of populations. Sexual contact and vertical transmission from mother to infant are responsible for the large majority of HBV infections worldwide [4].
HBV infection as an STI is well documented. It is especially common among men who have sex with men (MSM), because multiple partners are common in this population and anal sex is usually more traumatic than vaginal intercourse, resulting in an increased risk of exposure to blood [4,5]. HBV infection is also extremely common among heterosexual individuals who have multiple sex partners or contact with sex workers [6].
Routine immunization with hepatitis B (HB) vaccine is strongly recommended for the prevention of HBV infection in MSM and other individuals at risk for STIs. HB vaccination of adults has been found to be effective at conferring immunity to individuals who are exposed to HBV via sexual transmission. However, the first priority is directly preventing the spread of HBV by the most reliable and appropriate method, which is use of a condom for safe sexual contact.
Chronic HBV infection is the cause of chronic hepatitis, cirrhosis, and hepatocellular carcinoma (HCC) [7]. The goals of antiviral therapy for patients with chronic HBV are to slow the progression of chronic liver disease and decrease the development of complications, including cirrhosis and HCC. At present, pegylated interferon alfa (PEG-IFN-α), entecavir, and tenofovir disoproxil fumarate (tenofovir) are available for the treatment of HBV infection [8]. Sodium taurocholate cotransporting polypeptide (NTCP) was recently identified as the receptor for HBV entry into hepatocytes [9]. Because NTCP is essential for HBV infection, it may have potential as a new therapeutic target.
The purpose of this article is to provide up-to-date information on HBV and HBV infection as a major STI.
Hepatitis B virus (HBV)
HBV is classified in the family Hepadnaviridae. It is a very small, partially double-stranded DNA virus. Humans are known to be the only natural host. HBV reaches the liver through the systemic circulation and can only replicate in hepatocytes [10]. Since HBV is a hepatotropic virus, injury to the liver results from the immune-mediated destruction of infected hepatocytes [6].
The infectious HB virion has a diameter of 42-47 nm and is a double-shelled particle in serum. Its concentration can be as high as 10^8 virions per mL [6,10]. The infectious HB virion consists of an outer lipoprotein coat (also called envelope) containing hepatitis B surface antigen (HBsAg). HBsAg surrounds an inner nucleocapsid composed of hepatitis B core antigen (HBcAg) that encapsidates the HBV genome and DNA polymerase [11,12].
Genome structure and proteins
The HBV genome consists of a partially double-stranded, circular DNA molecule. The genome is 1700-2800 nucleotides long for the short strand and 3020-3320 nucleotides long for the full-length strand. The genome contains four highly overlapping coding regions, or open reading frames (ORFs), so that every nucleotide participates in at least one of them, as shown in Fig. 1 [13,14,15,16]. The polymerase gene (P gene) encodes the key enzyme for replication of the genome [13]. The enzyme has DNA polymerase (DNA Pol), reverse transcriptase (RT) and RNase H activities, and also acts as the terminal protein (TP) [13,17]. The core gene (C gene) has at least two in-frame start codons, and encodes HBcAg and HBeAg [13]. HBcAg is the protein that encapsidates the viral DNA. It can also be expressed on the surface of hepatocytes, and evokes the cellular immune response [18]. HBeAg is a marker of active viral replication [13]. Secreted HBeAg is significantly more efficient than intracellular HBcAg at producing T-cell tolerance [19]. The surface gene (S gene) encodes three different envelope glycoproteins, known as the pre-S1, pre-S2, and S proteins. The pre-S1 protein (large HBsAg) is the largest of the HBV surface proteins, and is produced starting at the first initiation codon of the ORF. The pre-S2 protein (middle HBsAg) is produced starting at the second initiation codon. The S protein (small HBsAg), which is commonly referred to as HBsAg or the Australia antigen, is produced starting at the third initiation codon. The X gene encodes the multifunctional X protein [13]. It controls the level of HBV replication and acts as a cofactor in the development of HCC [20].
Natural history of HBV infection
HBV infection can cause acute hepatitis, acute liver failure, or chronic hepatitis, or can cause an asymptomatic infection. Chronic HBV infection can result in cirrhosis or HCC.
The probability that a person with HBV infection will progress to chronic infection is strongly dependent on the person's age at the time of HBV infection [21]. More than 90% of HBV-infected infants and 25%-50% of children infected between the ages of 1 and 5 years will develop chronic hepatitis. More than 25% of HBV-infected infants and children older than 6 years will develop HBV-related cirrhosis and HCC [10]. The rate of progression to cirrhosis and HCC is less than 1% per year for patients in the inactive chronic hepatitis stage, while the rate of progression to cirrhosis may be 2%-10% per year for patients in the immune active stage. By contrast, less than 10% of older children and adults with acute hepatitis progress to chronic infection. The progression from cirrhosis to HCC may occur in 2%-4% of adult patients per year [22]. In addition to age when first infected, the rates of progression of HBV infection are generally affected by gender, the level of HBV replication, HBV genotypes and variants, coinfecting viruses (hepatitis C virus [HCV], hepatitis delta virus [HDV], human immunodeficiency virus [HIV]), host lifestyle (drinking, smoking), exposure to carcinogenic substances, host genetic factors, and probably comorbidities (metabolic syndrome, diabetes and obesity) [22].
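To make these per-year rates concrete, the sketch below compounds a constant annual progression rate into a cumulative risk. The constant-rate assumption is a simplification introduced here purely for illustration and is not part of the cited studies.

```python
# Illustrative only: cumulative risk over n years assuming a constant
# annual progression rate p (a simplifying assumption), so that
# cumulative risk = 1 - (1 - p)**n.
def cumulative_risk(annual_rate, years):
    return 1 - (1 - annual_rate) ** years

# Rates quoted above: <1%/year (inactive stage) versus 2%-10%/year
# (immune active stage) for progression to cirrhosis.
for p in (0.01, 0.02, 0.10):
    print(f"annual rate {p:.0%}: 10-year cumulative risk "
          f"{cumulative_risk(p, 10):.1%}")
```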
The natural history of chronic HBV infection can be separated into five stages, which are not necessarily sequential [23]. These stages are summarized in Table 1 [6].
Stage 1: "Immune tolerant" The initial stage represents the incubation period. When HBV is actively replicating, HBV DNA, HBeAg, and HBsAg are detected in the serum [24]. The serum alanine aminotransferase (ALT) is only slightly or not elevated, and the infected person is not symptomatic. The immune response is limited to production of antibody to hepatitis B core antigen (anti-HBc) (immunoglobulin M [IgM] followed by immunoglobulin G [IgG]); however, these antibodies do not neutralize the infection [25]. This first stage occurs more frequently and has a longer duration in babies infected during delivery or during the first years of life [23]. There are only few or no findings of fibrosis. In this stage, though treatment is not generally indicated, monitoring is required.
Stage 2: "Immune active" (HBeAg-positive chronic hepatitis) HBeAg can be detected in the serum. A somewhat lower level of HBV DNA is seen in some patients, who are clearing HBV, than in stage 1 [24]. Compared with the previous stage, the serum ALT level is higher, and there is moderate or severe liver necroinflammation and more rapid progression of fibrosis [23, 26,27,28]. For patients with chronic HBV infection, 10 years or more may pass before cirrhosis develops, immune clearance takes place, or HCC develops. The immune response reduces the level of HBV replication, and begins to clear HBeAg and HBsAg. The rate of development of antibody to hepatitis B e antigen (anti-HBe) and HBeAg clearance (HBeAg seroconversion) is 10%-20% per year. Chronic infection will develop in 80%-90% of infected infants [29], whereas less than 5% of infected adults will fail to resolve acute hepatitis [30]. This stage ends with HBeAg seroconversion [23]. In this stage, treatment may be indicated.
Stage 3: Inactive chronic hepatitis "immune control" (previously called inactive carrier) The stage of inactive chronic hepatitis may follow the seroconversion to anti-HBe and clearance of HBeAg. The stage is characterized by very low or undetectable HBV DNA in the serum and serum aminotransferase levels in the reference range [23]. Through immunological control of HBV infection, the majority of patients will have a favorable outcome with very low risk of cirrhosis or HCC [31,32]. HBsAg is still present in the serum, but HBsAg clearance and development of antibody to hepatitis B surface antigen (anti-HBs) may occur spontaneously in 1%-3% of cases per year [33]. In this stage, although treatment is not generally indicated, monitoring for reactivation and HCC is required.

Stage 4: "Immune escape" (HBeAg-negative chronic hepatitis) The HBeAg-negative chronic hepatitis stage may follow clearance of HBeAg and development of anti-HBe during the inactive chronic infection stage (stage 3) or directly from the immune active/clearance stage (stage 2). It is important to distinguish inactive HBV carriers from individuals negative for HBeAg who have chronic hepatitis. The former patients will have a good outcome with a very low risk of complications, while the latter have a high risk of progressive liver disease, including decompensated cirrhosis and HCC [23]. In this stage, treatment may be indicated.

Stage 5: "Reactivation" or "acute-on-chronic hepatitis" In the final stage, HBV reactivation may occur spontaneously or may be triggered by cancer chemotherapy or other immunosuppressive therapies, and may result in serious acute-on-chronic hepatitis. Occult HBV infection is defined as persistence of HBV DNA in the liver of individuals in whom HBsAg is undetectable in the blood. Individuals who have cleared HBsAg and are negative for serum HBV DNA but anti-HBc positive may develop reactivation if they are being treated with potent immunosuppressive medications [6].
HBsAg loss before the onset of cirrhosis is associated with improved outcome, with a reduced risk of cirrhosis, decompensation, and HCC [23]. If cirrhosis develops before natural or treatment-induced clearance of HBsAg, patients remain at risk of HCC [34]. In this stage, treatment is indicated.
HBV life cycle

NTCP was recently identified as a receptor for HBV entry, which enabled the establishment of a susceptible cell line that can efficiently support HBV infection. This discovery should lead to a deeper understanding of the requirements for effective HBV infection and clarification of the molecular mechanism of HBV entry.
The replication cycle of HBV begins with entry of the virus into hepatocytes, which is mediated by the binding of the pre-S1 region on the virion envelope to the hepatocellular NTCP [13]. The virion is then uncoated and transported into the nucleus. The viral relaxed circular DNA (rcDNA) or linear DNA genome, with a protein attached to the 5' end of the minus strand and a short RNA attached to the 5' end of the plus strand [35], is converted into covalently closed circular DNA (cccDNA) through covalent ligation [14].
This cccDNA is responsible for viral persistence and is highly resistant to antiviral therapy. It serves as the template for the transcription of viral mRNAs. The pregenomic mRNA serves as the template for the synthesis of the core protein (nucleocapsid subunit) and the viral reverse transcriptase. The viral genome is replicated by reverse transcription of pregenomic RNA. During this process, both the protein and the RNA are removed [35]. The reverse transcriptase binds to the 5' end of its own mRNA template, and the complex is then packaged into nucleocapsids, where viral DNA synthesis occurs. These nucleocapsids can also move into the nucleus to increase the copy numbers of cccDNA. Since cccDNA does not undergo semiconservative replication, all cccDNA copies result from viral DNA made in the cytoplasm via the reverse transcription pathway [36].
An increase in the level of viral envelope proteins inhibits the synthesis of high levels of cccDNA, which can be toxic to hepatocytes. Once partially double-stranded DNA has been produced, nucleocapsids can undergo a maturation event that enables them to obtain an outer envelope via budding into the ER. The mature nucleocapsids may be recycled back into the nucleus for amplification of cccDNA or released from the cell as complete virions (Fig. 2).

FIGURE 2: Schematic representation of the HBV lifecycle, from entry into hepatocytes to release from hepatocytes. Entry: HBV (Dane particle) obtains entry into hepatocytes by binding to the receptor NTCP [37,38,39] and possible additional hepatocyte-specific factors on the cell surface. The HBV membrane fuses with the membrane of the host hepatocyte, and the virion is endocytosed. Uncoating: The HBV membrane releases the viral DNA (partially double-stranded circular DNA) with the core particle into the cytoplasm [39]. The viral membrane is lost (uncoating). The viral nucleocapsid containing the viral genomic DNA is transported into the nucleus in the relaxed circular form. Repair and cccDNA formation: In the nucleus, the viral DNA polymerase synthesizes fully double-stranded DNA, which is then converted to cccDNA [38,39]. The formation of cccDNA remains poorly understood; it is most likely formed via the DNA repair mechanism [38]. Transcription: cccDNA is transcribed into the pregenomic and subgenomic mRNAs by host RNA polymerase [38,39]. Translation and reverse transcription: Pregenomic RNA is the template for the translation of both DNA polymerase and the core proteins, and for reverse transcription. The DNA polymerase binds to the packaging signal of the pregenomic RNA, and both are then combined into the viral capsid, which is the core particle [38,39]. The HBV genome matures in the core particle via reverse transcription of pregenomic mRNA to DNA [39]. DNA synthesis: After synthesis of the (-) strand DNA and (+) strand DNA, the nucleocapsid, containing partially double-stranded circular DNA, is generated. Assembly: HBsAg and the nucleocapsid containing partially double-stranded circular DNA are assembled together to become a new complete virion [39]. Release: The mature HBV virion (Dane particle) is released from the infected hepatocyte or is recycled back into the nucleus for amplification of cccDNA [38]. Other events: The C gene directs the synthesis of two major gene products: HBcAg (p21c), which comprises the nucleocapsid; and HBeAg (p17e), which is a secreted antigen. Noninfectious particles (empty particles), which are composed of HBsAg, a 22-kDa precore protein (p22cr), and HBeAg, are also produced as a trap for the host immune system, in order to protect the infectious Dane particles. Serologic testing can assess HBeAg, p22cr, and HBcAg as hepatitis B core-related antigen (HBcrAg). Abbreviations: HBV, hepatitis B virus; NTCP, sodium taurocholate cotransporting polypeptide; cccDNA, covalently closed circular DNA; RC-DNA, relaxed circular DNA; HBsAg, hepatitis B virus surface antigen; HBcAg, HBV core antigen; HBeAg, HBV e antigen; p22cr, precore protein; HBcrAg, hepatitis B core-related antigen.
Incidence: worldwide view and HIV/HBV coinfection
More than one third of the world's population is estimated to have been infected with HBV. About 5% of the world's population are chronic carriers of HBV, and HBV infection causes more than one million deaths every year [40]. The HBsAg carrier rate varies from 0.1% to 20% across different populations worldwide. In low-risk regions, the highest incidence of infection is seen in teenagers and young adults. Based on data from Western cohorts, HIV/HBV coinfection has a profound impact on almost every aspect of the natural history of HBV infection [6]. The consequences include higher rates of chronicity after acute HBV infection, higher levels of HBV replication and rates of reactivation, less spontaneous clearance, higher rates of occult HBV infection (i.e., detectable HBV DNA in the absence of HBsAg seropositivity), more rapid progression to cirrhosis and HCC, higher rates of liver-related mortality, and decreased treatment response compared with individuals without HIV coinfection [41,42]. Recent longitudinal cohort studies have found that coinfection with HBV can also lead to increased rates of progression to acquired immune deficiency syndrome (AIDS)-related outcomes and all-cause mortality [43,44]. An estimated 5% to 15% of the 34 million HIV-infected individuals worldwide are coinfected with HBV, as a chronic infection [45,46]. The burden of coinfection is greatest in Southeast Asia and sub-Saharan Africa [6].
Prevalence: international statistics
An estimated 240 million people are chronically infected with hepatitis B [47]. The prevalence of chronic HBV infection varies geographically, ranging from 1% to 20%. Populations with high rates include Alaskan Eskimos, Asian-Pacific islanders, Australian aborigines, and populations of the Indian subcontinent, sub-Saharan Africa, and Central Asia. In some locations, such as Vietnam, the rate is as high as 30% [48]. The prevalence of the HBV carrier state is related to differences in the mode of transmission, including iatrogenic transmission, and the age of primary infection.
In low-prevalence (<2%) regions, the lifetime risk of HBV infection is less than 20%. Sexual transmission and percutaneous transmission during early adulthood are the main routes of spread of the infection. About 12% of HBV-infected individuals live in the low-prevalence regions, which include North America, northern and western Europe, Australia, and New Zealand [48]. In these areas, most HBV infections occur in adolescents and young adults belonging to relatively well-defined high-risk groups, which include injection drug users, MSM, healthcare workers, and patients who undergo regular blood transfusions or hemodialysis [48,49].
In intermediate-prevalence (3%-5%) regions, sexual and percutaneous transmission and vertical transmission during delivery are the major routes of infection. These regions include eastern and southern Europe, Japan, the Mediterranean basin, the Middle East, Latin and South America, and Central Asia. One study reported that approximately 43% of HBV-infected individuals live in southern, central, and western Asia; eastern Europe; Russia; and Central and South America. The lifetime risk of HBV infection is 20%-60% [48]. The persistently high rates of chronic infection are mostly due to infections occurring in infants and children.
In high-prevalence (10%-20%) regions, transmission occurs predominantly in infants and children. During early childhood, HBV is transmitted vertically from the mother to the infant or via close contact. In some regions, percutaneous exposure to contaminated needles or unsafe injections is also a possible route of HBV infection. Since most infections in children are asymptomatic, there is little evidence of acute HBV-related disease, but the rates of chronic liver disease and HCC in adults are high. Approximately 45% of individuals infected with HBV live in high-prevalence regions. The lifetime risk of infection is higher than 60%, as demonstrated by the presence of anti-HBc in sera [48]. The high-prevalence regions are mostly regions with developing economies and large populations. They include China, Southeast Asia, Indonesia, sub-Saharan Africa, the Pacific Islands, parts of the Middle East, and the Amazon Basin [48].
HBV serotypes and genotypes
Based on some of the antigenic determinants of HBsAg, nine serological types, referred to as subtypes (adw2, adw4, adrq+, adrq-, ayw1, ayw2, ayw3, ayw4, and ayr), have been identified [50]. Ten genotypes of HBV (A-J) have been identified, and these correspond to specific geographic distributions [51]. Genotype A is more frequently found in North America, northwestern Europe, India, and Africa. Genotypes B and C are endemic to Asia, and genotype D predominates in eastern Europe and the Mediterranean [52]. Type E is found in western Africa; type F, in South America; and type G, in France, Germany, Central America, Mexico, and the United States. Type H is prevalent in Central America [48]; type I, in Vietnam; and type J (possible recombination with type C), in Japan [53].
HIV-seropositive MSM populations predominantly coinfected with HBV genotype A have been reported in European countries and Japan [54,55,56]. The prevalence of HBV genotype A is significantly higher in the MSM population than in the rest of the population [56]. In addition, Araujo et al. speculated in their review that HBV subgenotypes A2 and C are likely to predominate in populations at high risk of infection via sexual transmission [57]. Additionally, HBV genotype A develops into a persistent infection more often than genotype C [58,59].
Individuals infected with genotypes C and F have higher rates of HCC than individuals infected with genotypes B and D [6]. Evidence increasingly suggests that genotypes C and F affect disease severity and response to treatment [60,61,62]. Results of studies in Asia suggest that patients infected by HBV genotype C show more rapid progression [63,64,65], and subgenotypes of HBV genotype C are probably responsible for the increased rate of HCC in patients who were positive for HBeAg [66]. Studies in Europe and North America have found that higher proportions of patients with chronic hepatitis associated with genotype D infection progressed to cirrhosis and HCC than those with chronic hepatitis associated with genotype A infection [26,67,68,69].
Pathological findings
Currently, most liver biopsies are performed to confirm the existence of chronic hepatitis and to determine its level of activity. This section mainly describes chronic hepatitis, which plays an important role in HBV infection.
1) Acute hepatitis B
Because acute hepatitis B is always diagnosed by clinical symptoms and serologic markers related to HBV infection, liver biopsies are not often performed. In general, acute hepatitis shows more areas of spotty parenchymal inflammation and more severe damage than typical chronic hepatitis. The lesions mainly contain diffuse sinusoidal and portal mononuclear infiltrates (lymphocytes, plasma cells, Kupffer cells), swollen hepatocytes and/or necrotic hepatocytes (also called apoptotic or acidophilic hepatocytes, or Councilman bodies) [70,71]. Cell plates and sinusoids may be indistinct in more severe cases as a result of hepatocyte swelling, filling of sinusoids by mononuclear inflammatory cells, and regenerating hepatocytes. Significant lobular necrosis leads to acute liver failure [70].
2) Chronic hepatitis B and cirrhosis
In chronic HBV infection, there is a varying degree of predominantly lymphocytic portal inflammation with interface hepatitis and spotty lobular inflammation. The inflammation is minimal in the immune-tolerant or inactive chronic infection stages, but is prominent in the immune-active stage. Bridging necrosis is identified as inflammation "connecting" portal tracts to one another or to central veins [71]. Confluent necrosis affects multiple contiguous hepatocytes. Inflammation is typically associated with scarring, which can vary from a mild portal extension to periportal fibrous strands, bridging fibrosis, and cirrhosis. Livers that develop central to portal bridging necrosis or confluent necrosis are likely to have a higher fibrosis stage. The Scheuer classification for grading and staging of chronic hepatitis is often used, as shown in Table 2 [72]. The hepatocytes that express a high level of HBsAg may have a "ground-glass" cytoplasm, which can be highlighted by special immunohistochemical stains (Shikata's orcein and Victoria blue). Ground-glass hepatocytes may also be seen in other conditions [73].
Cirrhosis is diagnosed when the loss of normal central-portal relationships is observed. The atypical enlargement of nuclei with an increase in the nuclear-cytoplasmic ratio, known as "large cell change", is very common in cirrhosis. This cytologic abnormality should only be used to support the evidence of regeneration and architectural abnormalities, which is used for diagnosing cirrhosis [70].
Serologic markers related to HBV infection
The serologic markers of HBV infection are as follows: HBsAg and the corresponding antibody anti-HBs, HBeAg and the corresponding antibody anti-HBe, immunoglobulin M antibody to hepatitis B core antigen (IgM anti-HBc), immunoglobulin G antibody to hepatitis B core antigen (IgG anti-HBc), and serum HBV DNA. The diagnosis of acute or chronic HBV infection requires serologic testing (Table 3) [2]. The first detectable markers in acute HBV infection are HBsAg and IgM anti-HBc. Total anti-HBc is present over the entire lifetime of the infected individual. It is found in individuals with chronic HBV infection and in those who recover from HBV infection [10]. The presence of anti-HBc alone might indicate acute, resolved, or chronic infection, or a false-positive result [2]. HBsAg and HBeAg can be used as surrogate markers of HBV replication [74]. HBsAg is eliminated from the sera of individuals who recover from HBV infection, and anti-HBs is detectable during recovery [10]. Detection of HBsAg indicates early acute infection. To ensure that an HBsAg-positive test result is not false positive, samples with repeatedly reactive HBsAg results should be tested with a US Food and Drug Administration (FDA)-cleared neutralizing confirmatory test [2]. HBeAg is a marker of high levels of viral replication. Detection of HBeAg indicates that the blood and body fluids of an infected person are highly infectious. Detection of anti-HBe indicates inactive chronic hepatitis. The persistence of HBeAg for longer than 10 weeks, and/or of HBsAg and serum HBV DNA for longer than 6 months, indicates transition to chronic HBV infection [74]. Detection of anti-HBs indicates immunity against HBV. Anti-HBs can also be detected in individuals who were immunized by the HB vaccine. Most individuals who recover from HBV infection are expected to be positive for both anti-HBs and anti-HBc [10]. Individuals positive for anti-HBc only are unlikely to be infectious, except under unusual circumstances, including direct percutaneous exposure to large quantities of blood (e.g., blood transfusion and organ transplantation) from individuals positive for anti-HBc only [2].
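As a rough illustration of how the marker combinations just described (and in Table 3) are read, the following Python sketch encodes a simplified decision path. The function name and the reduced set of cases are hypothetical simplifications; real-world interpretation must also consider window periods, HBV DNA testing, and confirmatory assays.

```python
# Simplified reading of an HBV serologic panel, following the marker
# descriptions in the text. Illustrative only: actual interpretation
# (Table 3) involves more markers and edge cases.
def interpret_panel(hbsag, anti_hbs, igm_anti_hbc, total_anti_hbc):
    if hbsag and igm_anti_hbc:
        return "acute HBV infection"
    if hbsag and total_anti_hbc:
        return "chronic HBV infection"
    if anti_hbs and total_anti_hbc:
        return "resolved infection (immune)"
    if anti_hbs:
        return "immunity from vaccination"
    if total_anti_hbc:
        return "isolated anti-HBc: acute, resolved, or chronic infection, or a false positive"
    return "susceptible (no evidence of infection or immunity)"

print(interpret_panel(hbsag=True, anti_hbs=False,
                      igm_anti_hbc=True, total_anti_hbc=True))
```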
Acute hepatitis B
The incubation period (duration from exposure to HBV to onset of symptoms) of HBV-infected individuals with acute hepatitis ranges from 60 to 150 days, with an average of 90 days [75,76]. The signs and symptoms of acute hepatitis B are described in detail in the "Signs and symptoms" section. As mentioned previously, the clinical manifestations of acute HBV infection are age dependent [10]. Over 90% of infants with HBV infection are asymptomatic, while the typical manifestations of acute hepatitis are prominent in 5% to 15% of newly infected young children (aged 1-5 years) and in 33% to 50% of children older than 6 years of age [10,21]. Serologic markers related to acute hepatitis B are described in the subsection "Serologic markers related to HBV infection". As described in that subsection, the persistence of HBeAg indicates the transition to chronic HBV infection [74].
Chronic HBV infection
The natural history of HBV infection, including the transition to chronic infection, is described in the "Etiology" section. Chronic HBV infection is defined as either the presence of HBsAg in the serum for at least 6 months or the presence of HBsAg in a person who tests negative for IgM anti-HBc [10]. Unlike individuals who recover from acute HBV infection, patients with chronic HBV infection do not produce anti-HBs, and serum HBsAg positivity typically persists for a long period of time [10]. In patients with chronic HBV infection, the disappearance of HBeAg and detection of anti-HBe usually indicate a reduction in viral load [77]. Each year, approximately 0.5% of adults with chronic HBV infection will clear HBsAg and produce anti-HBs [78,79,80]. Although many patients with chronic HBV infection die of causes unrelated to HBV, chronic infection is responsible for most of the morbidity associated with HBV [10]. Follow-up studies of individuals first infected with HBV when they were infants or young children show that approximately 15% to 25% of patients with chronic infection die prematurely from cirrhosis or HCC [81,82].
Signs and symptoms

Symptoms and physical findings
The manifestations of HBV infection during the acute phase vary from subclinical hepatitis to acute hepatitis and acute hepatic failure. During the chronic phase, disease progression varies from asymptomatic chronic infection to chronic hepatitis, cirrhosis, and HCC [83]. The findings on physical examination vary from minimal to remarkable, according to disease severity. The signs, symptoms, and findings on physical examination are listed in Table 4.
Acute hepatitis B is an illness that begins with general fatigue, loss of appetite, nausea, vomiting, body aches, low-grade fever, dark urine, and jaundice. The illness lasts for several weeks and then gradually improves in most affected individuals. A few individuals may develop more severe liver disease (acute hepatic failure) and may die. In addition, acute hepatitis B infection may be entirely asymptomatic and may go unrecognized [84].
Some acute hepatitis B patients (about 1%) may develop acute liver failure, which is characterized by evidence of decompensated liver disease and is fatal in up to 50% of cases [83]. Patients with acute liver failure can present with the following signs and symptoms: hepatic encephalopathy, somnolence, disturbed sleep patterns, mental confusion, coma, ascites, variceal bleeding, and coagulopathy. Individuals with chronic HBV infection may be asymptomatic or may manifest the signs and symptoms associated with chronic hepatic inflammation. Patients with chronic active hepatitis, especially during the replicative stage, can manifest symptoms similar to acute hepatitis (fatigue, anorexia, nausea, and mild upper quadrant pain or discomfort). Physical examination of patients with chronic HBV infection can reveal the typical characteristics of chronic liver disease, including hepatomegaly, splenomegaly, muscle wasting, palmar erythema, spider angioma, and vasculitis.
In cases with progressive liver disease, the following manifestations may be present: hepatic decompensation, hepatic encephalopathy, somnolence, disturbed sleep patterns, mental confusion, coma, variceal bleeding, coagulopathy, ascites, jaundice, peripheral edema, gynecomastia, testicular atrophy, and collateral abdominal veins (caput medusae). Pleural effusion and hepatopulmonary and portopulmonary syndrome may occur in patients with cirrhosis. Patients with cirrhosis may have the following findings: ascites, jaundice, history of variceal bleeding, peripheral edema, gynecomastia, testicular atrophy, and collateral abdominal veins.
Table 5. Five major strategies for the prevention and control of STIs [100]:
1. Accurate risk assessment and education and counseling of individuals at risk on ways to avoid STIs through changes in sexual behavior and use of recommended prevention devices;
2. Pre-exposure vaccination of individuals at risk for vaccine-preventable STIs;
3. Identification of asymptomatically infected individuals and individuals with symptoms associated with STIs;
4. Effective diagnosis, treatment, counseling, and follow-up of infected individuals;
5. Evaluation, treatment, and counseling of sex partners of individuals who are infected with an STI.
Abbreviation: STIs, sexually transmitted infections.

Extrahepatic manifestations can accompany HBV infection [86,87]. Serum-sickness-like syndrome occurs in the setting of acute hepatitis B, often preceding the onset of jaundice [88]. The manifestations often subside shortly after the onset of jaundice, but can persist throughout the duration of acute hepatitis B [11]. About 30% to 50% of people with acute necrotizing vasculitis (polyarteritis nodosa) are HBV carriers [89]. HBV-associated nephropathy has been described in adults but is more common in children [90,91]. Membranous glomerulonephritis (MGN) is the most common form. Other immune-mediated disorders, such as essential mixed cryoglobulinemia and aplastic anemia, can also occur [11]. A variety of cutaneous lesions can appear during the early course of viral hepatitis, including transient maculopapular rash.
TRANSMISSION AND PROTECTION Transmission
As described previously, HBV is transmitted mainly via percutaneous or permucosal exposure to HBV-containing body fluids. The most critical source of infection is blood (serum) [92]. HBV transmission has been found to occur through various forms of human contact, including vertical transmission from mother to newborn, sexual contact, close household contact, needle sharing, and occupational (healthcare) exposure (horizontal transmission) [10]. HBV transmission can result from the accidental inoculation of small amounts of blood or other body fluids during medical procedures [6]. Nowadays, blood transfusion and organ transplantation are extremely rare routes for HBV transmission. This section will primarily focus on sexual transmission, which is a common route of HBV infection.
HBV is efficiently transmitted by sexual contact [10]. The primary risk factor is unprotected sex with an HBV-infected partner, mainly among unvaccinated MSM and heterosexual individuals with multiple sex partners or contact with sex workers [6]. MSM have long been known to have high rates of STIs [93]. They continue to show higher seroprevalence rates of HBV-related markers than the general population [94]. In infections acquired in adulthood, progression through the infection stages is very rapid, and the immune tolerant stage is sometimes absent [24,95].
Heterosexual transmission is still important, as shown by the 40% transmission rate to nonimmune partners of patients with acute HBV hepatitis or chronic HBV infection [96,97]. The seroprevalence rates of HBV-related markers are positively correlated with increasing numbers of current and lifetime heterosexual partners [98,99].
Behavioral approaches
The 2015 Centers for Disease Control and Prevention (CDC) guidelines describe five major strategies for the prevention and control of STIs (Table 5) [100]. For primary prevention, the first approach is to change the sexual behavior that can increase the risk of STIs. Information on sexual behavior that can increase the risk of STIs should be provided tactfully. In addition, adolescents and young adults should be made aware that some of the information on protection against STIs may be inaccurate [100]. Correcting misinformation on protection against STIs may also reduce the incidence of high-risk sexual behavior [101]. One of the most reliable methods for preventing an STI is refraining from sexual contact, which includes oral, vaginal, and anal sex [100]. Over the past 10 years, condom use among at-risk heterosexuals in the United States has increased, suggesting that information on the prevention of STIs is being widely disseminated and understood [102]. Additionally, possible sexual partners should be tested for STIs before sexual contact is initiated [100]. If one partner has an STI or his/her infection status is unknown, a new condom should be used for each sexual contact.
In summary, safe sex practices, including minimizing the number of sex partners and using barrier protection, can reduce the risk of HBV infection.
Hepatitis B immune globulin (HBIG) and hepatitis B (HB) vaccine
Both HBIG and HB vaccines have been approved for preventing HBV infection [103,104].
HBIG is prepared from human plasma containing a high concentration of anti-HBs and provides short-term (3 to 6 months) protection from HBV infection. It is typically used as post-exposure prophylaxis along with HB vaccination for individuals who have never been vaccinated or who have not responded to HB vaccination. The recommended dose of HBIG is 0.06 mL/kg [100].
HB vaccines contain HBsAg that is produced by a recombinant yeast strain [105]. Epidemiologic studies have not found any evidence of an underlying association between HB vaccination and sudden infant death syndrome or other causes of death during the first year of life [106,107]. Thus, HB vaccination can be considered safe.
HB vaccination is the most effective method of preventing HBV infection [108]. The introduction of universal HB vaccination for newborns has been reported to be a very reasonable and cost-effective strategy [109,110]. The World Health Organization (WHO) has now included HB vaccination in the Expanded Program on Immunization [22]. WHO recommends that all infants receive HB vaccine as soon as possible after birth, preferably within 24 hours. In 2013, 183 WHO member states immunized infants against HBV as a part of their routine vaccination schedule, and 81% of children received HB vaccines [47].
HB vaccine is available for younger children, adolescents, and healthy adults [2]. In adolescents and healthy adults (aged younger than 40 years), approximately 30% to 55% of recipients achieve protective antibody responses (i.e., anti-HBs ≥10 mIU/mL) after the first vaccination, 75% after the second, and over 90% after the third. HB vaccination can therefore be considered to induce a protective antibody response (anti-HBs ≥10 mIU/mL) in the majority of recipients. Regardless of the specific patient considerations needed when an HB vaccination schedule is selected, a complete vaccine series should be administered [100]. Recommendations on the HB vaccine dosage and schedule vary, depending on the product used and the recipient's age [2]. Details on HB vaccination are described in guidelines [6,104].
HB vaccine-induced immune memory has been established to last for more than 20 years [111,112,113]. According to the 2015 CDC guidelines, periodic monitoring of anti-HBs levels after routine HB vaccination is not needed, and booster doses of HB vaccine are not currently recommended [2]. However, the American Red Cross report suggests that HB-vaccine-induced immune memory might be limited: although HB vaccination can prevent clinical liver injury (hepatitis), it cannot prevent 100% of subclinical infections [114]. Indeed, although the HB vaccine is sufficiently effective at preventing the development of clinical disease (hepatitis), it cannot prevent 100% of HBV infections, which result in detectable anti-HBc [114]. Additionally, there is a report of acute hepatitis B infection in a patient who received five HB vaccinations [115]. An MSM patient, who received several HB vaccinations and showed an anti-HBs serological response of >10 mIU/mL (the accepted threshold for protection), was reported to have developed a chronic HBV genotype F infection [116]. These cases suggest that monitoring anti-HBs levels after routine vaccination might be necessary for certain patients. When the anti-HBs level is too low to provide protection from HBV infection (anti-HBs <10 mIU/mL), a booster vaccination should be administered. Although HB vaccines are highly immunogenic, postvaccination serologic testing might be indicated for infants whose mothers were infected at delivery, individuals with occupational exposure to blood, sexually active individuals such as MSM, or immunosuppressed individuals [10].
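The follow-up logic described in this paragraph reduces to a single threshold comparison. The sketch below encodes it; the function and parameter names are hypothetical, and whether to test at all remains a clinical judgment restricted to the high-risk groups listed above.

```python
# Post-vaccination follow-up sketch using the 10 mIU/mL anti-HBs
# threshold quoted in the text (accepted threshold for protection).
PROTECTIVE_ANTI_HBS = 10.0  # mIU/mL

def booster_indicated(anti_hbs_miu_per_ml, in_high_risk_group):
    # Testing is suggested only for high-risk groups; a titer below
    # the protective threshold then indicates a booster dose.
    return in_high_risk_group and anti_hbs_miu_per_ml < PROTECTIVE_ANTI_HBS

print(booster_indicated(4.2, in_high_risk_group=True))    # True: booster
print(booster_indicated(120.0, in_high_risk_group=True))  # False: protected
```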
Pre-exposure vaccination
In 1992, WHO recommended that all countries should introduce universal HB vaccination into their routine immunization programs [117]. HB vaccination is recommended for all unvaccinated children and adolescents, all unvaccinated adults with risk of HBV infection (especially MSM, adults with multiple sex partners, and drug users), and all adults desiring protection from HBV infection [104]. HB vaccine should be routinely offered to all unvaccinated persons who attend STI clinics or seek evaluation or treatment for STIs in other settings, especially correctional facilities, facilities providing treatment and prevention services for substance use disorder, and settings serving MSM (e.g., HIV care and prevention settings) [2].
Postexposure prophylaxis
Both passive-active postexposure prophylaxis (simultaneous administration of HBIG and HB vaccine at separate sites) and active postexposure prophylaxis (administration of HB vaccination alone) have been demonstrated to be highly effective for preventing HBV infection [103]. Unvaccinated individuals or those known not to have received a complete HB vaccine series should receive both HBIG and HB vaccine as soon as possible (preferably ≤24 hours) after exposure to blood or body fluids containing HBsAg. HB vaccine should be administered at the same time as HBIG, but at a separate injection site; and the HB vaccine series should be completed, using the age-appropriate vaccine dose and schedule [2]. Individuals with certification that they received a complete HB vaccine series and who have never undergone post-vaccination serologic testing should receive a single vaccine booster dose. These individuals should be treated according to guidelines for the management of individuals with occupational exposure to blood or body fluids that contain HBV [118].
Treatment
The primary treatment goal for patients with HBV infection is to prevent progression to severe liver disease; prevention of cirrhosis, hepatic failure, and HCC is most important. The risk factors for progression of chronic HBV include male gender, older age, family history of HCC, elevated alpha-fetoprotein (AFP) level, and coinfection with other viruses (HCV, HDV, or HIV) [119].
For the best outcome, a synergistic approach that decreases the viral load and uses immunotherapeutic interventions to boost the immune response is needed [120]. The prevention of HCC often includes antiviral treatment using pegylated interferon (PEG-IFN) or nucleos(t)ide analogues (NAs), which are described later [121].

Overall management for types of HBV infection

Patients with acute hepatitis B are treated by supportive care, with no specific treatment. Patient care is focused on maintaining comfort and adequate nutritional balance, including replacement of fluids lost from vomiting and diarrhea. Patients with chronic HBV infection should be referred for evaluation to a provider experienced in the management of chronic HBV infection [122]. A variety of treatment algorithms have been proposed, including ones from the American Association for the Study of Liver Diseases (AASLD) [28], the European Association for the Study of the Liver (EASL) [23], the Asian Pacific Association for the Study of the Liver (APASL) [123], the Canadian Association for the Study of the Liver (CASL) [124], and the National Institute for Health and Clinical Excellence (NICE) [125]. In general, for patients with chronic HBV infection who are positive for serum HBeAg, treatment is advised when the serum level of HBV DNA is at or greater than 20,000 IU/mL (10^5 copies/mL) (or >2,000 IU/mL [EASL recommendation]) and the serum ALT level is elevated (>20 U/L for females and >30 U/L for males) for 3-6 months. For patients with chronic HBV infection who are negative for HBeAg, treatment is advised when the serum level of HBV DNA is at or greater than 2,000 IU/mL (10^4 copies/mL) and the serum ALT is elevated (>20 U/L for females and >30 U/L for males) for 3-6 months [126].
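As a compact restatement of the thresholds just quoted, the sketch below encodes the general rule. It is illustrative only (the function and parameter names are ours), and actual guidelines weigh many additional factors such as fibrosis stage, age, and coinfections.

```python
# Simplified treatment-eligibility check based on the thresholds above:
# HBV DNA >= 20,000 IU/mL if HBeAg-positive (>= 2,000 IU/mL if negative)
# plus elevated ALT (>20 U/L females, >30 U/L males) for 3-6 months.
def treatment_advised(hbeag_positive, hbv_dna_iu_per_ml,
                      alt_u_per_l, female, months_alt_elevated):
    dna_threshold = 20_000 if hbeag_positive else 2_000
    alt_elevated = alt_u_per_l > (20 if female else 30)
    return (hbv_dna_iu_per_ml >= dna_threshold
            and alt_elevated
            and months_alt_elevated >= 3)

print(treatment_advised(hbeag_positive=True, hbv_dna_iu_per_ml=50_000,
                        alt_u_per_l=85, female=False,
                        months_alt_elevated=6))  # True
```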
Treatment of HIV infection with nucleos(t)ide analogs active against HBV greatly improves the outcomes of hepatic disease, including cirrhosis and HCC, in HIV-HBV coinfected patients, especially when tenofovir is part of the antiviral regimen [127]. HBV/HDV coinfection should be treated with PEG-IFN therapy [128].
The National Institutes of Health (NIH) also advises that immediate therapy is not usually indicated for the following patients: 1) patients who are in the immune-tolerant stage, with chronic hepatitis B and high serum levels of HBV DNA but normal serum ALT levels or little activity on liver biopsy; 2) patients who are in an inactive chronically infected/low replicative stage and have low serum levels of or undetectable HBV DNA and normal serum ALT levels; and 3) patients who are not immunosuppressed and have latent HBV infection, defined as detection of HBV DNA in the absence of HBsAg [126].
Pharmacologic management
The therapeutic agents cleared by FDA for the treatment of chronic hepatitis B can achieve sustained suppression of HBV replication and remission of liver disease [122]. Currently, PEG-IFN-α 2a, entecavir, and tenofovir are available for the treatment of HBV infection. These are the main treatments that have been approved worldwide. Lamivudine, telbivudine, and adefovir are now "nonpreferred" agents, and considered to be of historical interest [129].
Pegylated interferon alpha 2a (PEG-IFN-α 2a)
IFNs are naturally produced cytokines. They induce direct antiviral activity by stimulating the host's antiviral immune response and mediating conflicting effects on viral replication. PEG-IFN-α 2a has a longer half-life and enhanced efficacy relative to standard IFN-α. Pegylation lowers the rate of absorption following subcutaneous injection, reduces renal clearance, and decreases the immunogenicity of IFN [130].
The advantages of IFN therapy over NAs are the absence of viral resistance, the finite course of treatment (normally 48 weeks), and an increased chance of sustained virological response (SVR) and of HBeAg and HBsAg clearance [131]. A 48-week regimen of PEG-IFN-α 2a has been found to induce HBeAg seroconversion in 27% of patients and disappearance of serum HBV DNA in 25% of patients [130]. Long-term studies have demonstrated that IFN treatment is associated with a significant reduction in the risk of cirrhosis and HCC, even in patients who fail to clear HBeAg [24]. However, IFN has a poor side-effect profile (including persistent flu-like symptoms and psychiatric complications) compared with NAs, requires subcutaneous injection, and is not recommended for patients with decompensated cirrhosis [25]. HBV genotypes A and D are important and independent predictors of IFN responsiveness in patients with chronic hepatitis B [132]. IFN treatment is more effective for patients who are most likely to benefit, especially younger patients, who have more potential years in which to develop complications from their chronic hepatitis B infection and thus have more to gain from achieving an SVR [25,95].
Nucleos(t)ide analogues (NAs)
The NIH recommends nucleos(t)ide therapy for the treatment of HBV-infected patients with acute liver failure as well as for cirrhotic patients who are positive for serum HBV DNA; and for patients with clinical complications, cirrhosis, advanced fibrosis with serum positive for HBV DNA, or reactivation of chronic HBV during or after chemotherapy or immunosuppression [126].
Entecavir, a guanosine nucleoside analogue, is a first-line agent for the treatment of HBV infection [51]. It is a powerful inhibitor of HBV polymerase. It competes with the natural substrate, deoxyguanosine triphosphate (dGTP), to inhibit HBV polymerase (reverse transcriptase) activity. The advantages of therapy with this agent include potent antiviral activity and a low rate of resistance to the drug [51], although entecavir is used less frequently than other agents for the treatment of lamivudine-resistant HBV.
Results of a retrospective study indicated that assessment of serum HBV DNA levels 12 months after initiation of entecavir therapy may be useful for evaluating entecavir therapy for NA-naïve, HBV-infected patients [7]. The investigators found three independent predictors of viral suppression lasting 3 years after the start of entecavir therapy: a low detectable level of HBV DNA at baseline, undetectable serum HBV DNA at month 12, and HBeAg seronegativity at the start of therapy. Undetectable serum HBV DNA at month 12 also increased the probability of HBeAg seroconversion and lowered the risk of drug resistance [133].
In another study, after 240 weeks of continuous entecavir therapy for HBeAg-positive patients, 94% of patients had less than 300 copies/mL of serum HBV DNA, and 80% had normal ALT levels. An additional 23% of patients achieved HBeAg seroconversion, and HBsAg disappeared in 1.4% of the patients. Only 1 patient developed resistance to treatment with entecavir within 5 years of treatment [134].
Another study found that long-term treatment with entecavir (about 6 years of cumulative therapy [range, 267-297 weeks]) for NA-naïve patients with chronic HBV infection and advanced fibrosis or cirrhosis, resulted in durable virologic suppression, continued histologic improvement, and reversal of fibrosis or cirrhosis [135].
Tenofovir is the newest antiviral agent. It is a nucleotide-analogue (adenosine monophosphate) inhibitor of viral reverse transcriptase. It may be used as first-line therapy for treatment-naïve patients [51]. Patients who received tenofovir continuously for 240 weeks had sustained suppression of serum HBV DNA levels (less than 400 copies/mL). The rate of viral suppression was 83% in patients negative for HBeAg and 65% in patients positive for HBeAg. Of the patients positive for HBeAg who received tenofovir through 240 weeks, the rate of disappearance of HBsAg was 9%, and the HBsAg seroconversion rate was 7%. The rate of disappearance of HBeAg was 46%, and the HBeAg seroconversion rate was 40%. There was no evidence of resistance to tenofovir over the treatment period [125]. Of note, follow-up of the same cohort revealed that after 4.5 years of tenofovir, 87% had histological improvement, 51% had regression of fibrosis, and 74% of the patients with cirrhosis at baseline were no longer cirrhotic [136].
Monitoring considerations
The tests that should be used and the frequency of testing will depend on the patient's serological profile (HBeAg-positive or -negative) and HBV DNA levels [126]. Patients with chronic active hepatitis should undergo blood testing (aminotransferase levels, HBV status, viral load, and AFP levels), as well as treatment.
For individuals with inactive chronic HBV infection, the current guidelines recommend monitoring serum HBV DNA and ALT levels at least annually [23,28,137]. Patients with cirrhosis must be monitored for HCC by determination of AFP levels every 6 to 12 months and by abdominal ultrasonography surveillance [28,137]. Note, however, that determination of AFP levels was excluded from the AASLD guidelines [28].
Curability
Despite notable progress regarding many aspects of HBV infection, particularly with respect to prevention and treatment, chronic HBV infection remains strictly noncurable, because residual HBV cccDNA can always be detected in the liver, even after clearance of HBsAg and development of anti-HBs in the serum [138]. Moreover, HBV DNA sequences can integrate into the hepatocyte genome, as demonstrated in individuals seronegative for HBsAg [139]. Therefore, the term "cure" cannot be used to indicate that HBV is completely eradicated.
There is ongoing discussion on the meaning of "'cured' of chronic HBV infection". The primary treatment goal for HBV infection is to improve patient quality of life and reduce the risk of death from liver disease [140]. Large cohort studies of patients with chronic hepatitis B have found a 15% to 40% cumulative risk of developing cirrhosis. Two to five percent of patients with established cirrhosis will develop HCC [141]. Three possible types of cure are identified, which are described below [140].
"Absolute cure" means that the patient is free of HBV. That is, there are no HB virions and cccDNA anywhere in the body, including hepatocytes. The patient recovers to the degree of health and medical condition prior to HBV infection, and the probability of developing cirrhosis or HCC depends on age and gender. Although this type of cure is uncommon, it is the most desirable [140].
"Functional cure" means that HBV progression can be controlled. The patient recovers to his or her state of health equal to that of a person who has recovered spontaneously from HBV infection. Both have a similar likelihood of developing cirrhosis or HCC. Current therapy can only achieve "functional cure" through suppression of HBV replication [140].
"Apparent virologic cure" is defined as a sustained offdrug suppression of virologic markers and the normalization of liver function. This last definition includes an SVR, which is the ongoing suppression of viral load following the cessation of therapy, and adds the disappearance of all circulating viral markers (seroclearance), and the possible suppression of cccDNA. A complicating factor with HBV infection is that patients who have achieved a serologic resolution of infection (loss of HBsAg, undetectable serum HBV DNA, appearance of anti-HBs) can develop reactivation of their disease because of immunosuppression or the use of anti-inflammatory medications [142,143]. In addition, it is important to note that in occult HBV infection, despite the complete loss of HBsAg and undetectable or very low levels of HBV DNA in serum, there may still be an increased risk for progression to cirrhosis and the development of HCC [144].
An "apparent virologic cure" or "functional cure" as a desirable endpoint for therapy is supported by a recent study looking at the risk of HCC in patients with or without spontaneous seroclearance of HBV seromarkers [145]. However, none of these endpoints turned out to be a reliable indicator of favorable long-term outcome of chronic HBV infection. Thus, for the time being, HBsAg loss is viewed as the best possible predictor of a favorable longterm outcome of HBV infection and is used as an endpoint [146,147].
CONCLUSIONS
HBV infection is one of the most common STIs, with a major worldwide impact on patients' clinical health status and on public health, and it is associated with substantial liver-related morbidity and mortality. New HBV infections in industrialized countries are becoming increasingly concentrated among individuals at risk for STIs, infants, and injection drug users. HB vaccines have been an effective prevention strategy for individuals at risk through sexual exposure, especially MSM and heterosexuals with multiple sex partners. The proper education of persons at risk for STIs may also help with their acceptance of HB vaccination.
Regarding the persistence of HB vaccine-induced immunity, the effectiveness of routine HB vaccination might not last long enough to prevent 100% of HBV infections. In our opinion, postvaccination serologic testing, especially for anti-HBs, should be introduced for groups at high risk for HBV infection, after careful consideration. Potential causes of vaccine failure, such as infection with HBV variants, require further study. The need for booster doses to preserve vaccine-induced immunity should be evaluated regularly, especially for infants whose mothers were infected, individuals with occupational risk, sexually active individuals such as MSM, or individuals under immunosuppression.
CONFLICT OF INTEREST
Potential conflicts of interest: none reported. | 2017-08-27T08:20:02.537Z | 2016-09-05T00:00:00.000 | {
"year": 2016,
"sha1": "4a4649f0c06450b18d882a0b645d854b92345e93",
"oa_license": "CCBY",
"oa_url": "http://microbialcell.com/wordpress/wp-content/uploads/2016/09/2016A-Inoue-Microbial-Cell.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4a4649f0c06450b18d882a0b645d854b92345e93",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247315149 | pes2o/s2orc | v3-fos-license | Basics of Fourier Analysis for High-Energy Astronomy
The analysis of time variability, whether fast variations on time scales well below a second or slow changes over years, is becoming more and more important in high-energy astronomy. Many sophisticated tools are available for data analysis, and complex practical aspects are described in technical papers. Here, we present the basic concepts upon which all these techniques are based. This chapter is intended as a condensed primer of Fourier analysis, dealing with fundamental aspects that can be examined in detail elsewhere. It is not intended to be a presentation of detailed Fourier tools for data analysis, but the reader will find here the theoretical basis needed to understand the available analysis techniques.
Fourier 101
Two centuries ago, in 1822, the French mathematician Jean-Baptiste Joseph Fourier published his book on the theory of heat, where he claimed that all functions, including those containing discontinuities, can be expressed as a series of sinusoids. This is a very important fact, as it allows us to deal with functions that would otherwise be intractable. However, it is not formally correct, as the function must satisfy a set of conditions, formulated by Dirichlet a few years later (see below). The application of Fourier theory nowadays is so widespread in science that it can be considered one of the main scientific discoveries in history. It would be impossible to list all the fields where its application is essential, from telecommunications to image analysis. Here we will limit ourselves to what is needed for the analysis of time series. By time series we mean any one-dimensional signal as a function of time. It is not important what this signal measures: it can be the varying flux of a cosmic X-ray source, the Dollar-to-Euro exchange rate, or the number of points scored by a basketball team. In principle, it is not even important that the independent variable is time, as long as we are dealing with a one-dimensional measurement, but it is simpler to remain close to what is needed for timing analysis. Specifically, this section of the Handbook is devoted to timing analysis of high-energy astronomical data, but what we describe here is more general.
In this chapter, we will present the basic aspects of Fourier theory, which are at the base of all methods of analysis of time variability that are described in other chapters. We will not describe techniques that can be applied to data analysis, but the principles that these techniques are based upon and that should be known when they are applied. We will by no means be exhaustive, but cover the main properties of Fourier analysis. For more details connected to this topic there are excellent books [5,9,19].
Fourier series
The aim of Fourier analysis is to express any function as a sum of different sines and cosines, characterised by an angular frequency $\omega$ or a corresponding time period $P = 2\pi/\omega$. Therefore, it is easiest to visualise the decomposition of an arbitrary periodic function into its sine and cosine components. Such a decomposition is known as a Fourier series.
A periodic function is defined by its shape within a basic period $P_0$, which is then repeated ad infinitum at intervals of $P_0$. If we define $\omega_0 = 2\pi/P_0$, then $\sin\omega_0 t$ and $\cos\omega_0 t$ share the same repetitive property, and the sum $a\sin\omega_0 t + b\cos\omega_0 t$ can represent an infinite (but not exhaustive) variety of periodic functions with period $P_0$ by arbitrarily adjusting the values of $a$ and $b$. One may then note that sines and cosines with frequencies that are integral multiples of $\omega_0$ also repeat with this period, albeit repeating also within the period. When all such functions are put together in the decomposition, namely
$$f(t) = \sum_{n=0}^{\infty} \left[ a(n)\sin n\omega_0 t + b(n)\cos n\omega_0 t \right] \quad (2)$$
then it can represent practically any periodic function with period $P_0$, with suitable adjustment of the coefficients $a(n)$ and $b(n)$, which are known as the Fourier coefficients.
The conditions under which the Fourier decomposition is valid are that f (t) has only a finite number of finite discontinuities and only a finite number of extreme values within a period. These are known as Dirichlet conditions and the functions obeying them are called piece-wise regular. Since sines and cosines are continuous functions, at the location of a discontinuity in f (t), its Fourier series representation (eqn. 2) evaluates to a definite value, which is the average of the left and right limits of the original function.
The Fourier coefficients $a(n)$ and $b(n)$ can be determined by performing the following integrations:
$$a(n) = \frac{2}{P_0}\int_0^{P_0} f(t)\,\sin n\omega_0 t \; dt, \qquad b(n) = \frac{2}{P_0}\int_0^{P_0} f(t)\,\cos n\omega_0 t \; dt, \quad n = 0, 1, 2\ldots \quad (3)$$
(with the $n = 0$ cosine coefficient conventionally carrying an extra factor $1/2$). Alternatively, the Fourier decomposition may also be expressed in an equivalent exponential form:
$$f(t) = \sum_{n=-\infty}^{\infty} c(n)\, e^{in\omega_0 t}, \qquad c(n) = \frac{1}{P_0}\int_0^{P_0} f(t)\, e^{-in\omega_0 t}\; dt$$
In the above expansions, the $n = 0$ term is often called the constant or the D.C. (Direct Current) component, the $n = 1$ term the fundamental, and the terms with $n > 1$ the harmonics.
Such a decomposition is possible for any periodic function because the sine and cosine functions, or the exponentials, involved in the above expressions form a complete orthogonal basis set. Further, considering t on the entire real axis [−∞, ∞] we note that sines and cosines of arbitrary frequency ω form also a complete, orthogonal set and so do the corresponding exponentials. It should therefore be possible to express any well-behaved function f (t) in terms of these bases.
Continuous Fourier transform
Now we are ready to extend the decomposition to any function $f(t)$, expanding the integral over the entire real axis. We define the Fourier transform (FT) of a function $f(t)$ as
$$F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\; dt \quad (5)$$
where $\omega = 2\pi\nu$ is the angular frequency (in radians) and $\nu$ is the frequency (in Hz). Equation 5 is a linear transformation and no information is lost. The representations of a function in time and frequency domains are equivalent. The original $f(t)$ can be recovered by applying the inverse Fourier transform
$$f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)\, e^{i\omega t}\; d\omega$$
Since the expression $e^{i\omega t}$ is a sinusoid, this is equivalent to the decomposition of the original signal into a sum of sinusoids. If the original signal is itself a sinusoid with frequency $\omega_0$, it is easy to see that its transform is $\delta(\omega - \omega_0)$.
The transform has a number of interesting properties. It is linear, not necessarily a real function, and its amplitude is invariant to a time shift (but not its phase). Moreover, what is shown in Tab. 1 applies (throughout the chapter we will use lowercase letters for functions in the time domain and uppercase for the frequency domain; the complex conjugate of a function $f$ will be indicated as $f^*$). Unless the original function is even, an unlikely situation in the case of a time series, its Fourier transform is complex: each sinusoid at angular frequency $\omega$ is characterized by its amplitude and its phase. Since the phases are meaningful only in the case of periodic signals, what is usually considered for Fourier analysis is the Power Density Spectrum (PDS), defined as the Fourier transform multiplied by its complex conjugate, and therefore the square modulus of the Fourier transform:
$$P(\omega) = F(\omega)\,F^*(\omega) = |F(\omega)|^2$$
If the original function is real, which is of course usually the case for time series, the PDS is an even function (see Tab. 1), so the values at negative frequencies are redundant. Notice that although the FT is a linear function, the PDS obviously is not. This means that while the FT of the sum of two signals is the sum of the FTs of the signals, in the case of the PDS this is not true and there are cross-terms to be considered. More specifically, if our two signals are $f(t)$ and $g(t)$, the PDS of the sum of the two is
$$|F(\omega) + G(\omega)|^2 = |F(\omega)|^2 + |G(\omega)|^2 + 2\,\mathrm{Re}\!\left[F(\omega)\,G^*(\omega)\right] \quad (8)$$
If the two signals are uncorrelated, the cross-term is zero and linearity applies. The PDS is by definition a real function. We can see some simple examples. If $f(t) = a\cos(\omega_0 t)$, its transform is $F(\omega) = (a/2)\left[\delta(\omega - \omega_0) + \delta(\omega + \omega_0)\right]$ (a real function, as the original function is Real and Even; see Tab. 1) and its PDS is $P(\omega) \propto a^2\left[\delta(\omega - \omega_0) + \delta(\omega + \omega_0)\right]$. If the original function is a one-sided exponential, $f(t) = e^{-\lambda t}$ for $t \geq 0$, its transform is
$$F(\omega) = \frac{1}{\lambda + i\omega}$$
and its PDS is the Lorentzian
$$P(\omega) = \frac{1}{\lambda^2 + \omega^2}$$
Since we are interested here in functions of time, it is interesting to remark that the human ear works essentially in the same way: the hairs on the organ of Corti in the inner ear vibrate and record the intensity of the incoming sound, but are not sensitive to the phase. Effectively, a PDS is produced to be transmitted to the brain.
The autocorrelation (ACF) of a function $f(t)$, which we will examine in more detail later, is defined as
$$\mathrm{ACF}(\tau) = \int_{-\infty}^{\infty} f(t)\, f^*(t - \tau)\; dt$$
and it satisfies
$$\mathrm{ACF}(\tau) \Longleftrightarrow F(\omega)\, F^*(\omega) = P(\omega) \quad (12)$$
Equation 12 shows that the autocorrelation of a function is the Fourier transform of its PDS (here $^*$ means complex conjugate and $\Longleftrightarrow$ means "is the Fourier transform of"). Since the PDS of a real function is real and even, Tab. 1 tells us that the ACF is also real and even. From Eqn. 12 it is simple to derive Parseval's theorem (simply setting $\tau = 0$):
$$\int_{-\infty}^{\infty} |f(t)|^2\; dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} |F(\omega)|^2\; d\omega$$
This is important: the ACF of a function and its PDS are equivalent and contain the same amount of information.
Discrete Fourier transform
What we have shown so far is interesting mathematically, but it is rather abstract as it deals with continuous functions, possibly even in the complex domain, extending from $-\infty$ to $+\infty$. What we have in X-ray astronomy is discrete measurements extending from 0 to $T$: a time series (commonly called "light curve") consisting of $N$ measurements $x_k$ taken at equally-spaced times $t_k$ from 0 to $T$ (we will see later what happens if there are gaps, or the times are not equally spaced). In this case we can move to the discrete Fourier transform (and its inverse), defined as
$$a_j = \sum_{k=0}^{N-1} x_k\, e^{-2\pi i jk/N} \quad (14)$$
$$x_k = \frac{1}{N}\sum_{j=0}^{N-1} a_j\, e^{2\pi i jk/N}$$
As in the case of the continuous version, there is no loss of information: $N$ numbers in the signal, $N$ numbers in the FT. The FT is in general a complex quantity even for a real signal, doubling the information per frequency, but the values at positive and negative frequencies are clearly correlated.
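A direct transcription of these definitions into code is a useful sanity check. The following is a minimal Python sketch (the slow $O(N^2)$ matrix form, not an efficient implementation; see the FFT section below), verified against a library FFT:

```python
import numpy as np

def dft(x):
    """Direct O(N^2) discrete Fourier transform: a_j = sum_k x_k e^{-2 pi i j k / N}."""
    N = len(x)
    k = np.arange(N)
    phases = np.exp(-2j * np.pi * np.outer(k, k) / N)   # full N x N phase matrix
    return phases @ np.asarray(x, dtype=complex)

def idft(a):
    """Inverse transform, recovering the original N samples (note the 1/N factor)."""
    N = len(a)
    k = np.arange(N)
    phases = np.exp(2j * np.pi * np.outer(k, k) / N)
    return (phases @ np.asarray(a, dtype=complex)) / N

x = np.random.default_rng(0).normal(size=8)
assert np.allclose(idft(dft(x)), x)           # no information is lost
assert np.allclose(dft(x), np.fft.fft(x))     # agrees with a library FFT
```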
Since the data are equally spaced, the times are $kT/N$ and the frequencies are $j/T$. The time step is $\delta t = T/N$ and the frequency step is $\delta\nu = 1/T$. This is a very useful version of the transform as it can be applied to data rather than functions, but this comes with limitations. As the discrete time series has a time step $\delta t$ and a duration $T$, there are limitations to the frequencies that can be examined. The lowest frequency is $1/T$, corresponding to a sinusoid with a period equal to the signal duration. The highest frequency that can be sampled is called the Nyquist frequency:
$$\nu_{\rm Nyq} = \frac{1}{2\,\delta t} = \frac{N}{2T} \quad (16)$$
It corresponds to a sinusoid sampled twice per cycle, therefore appearing as up-and-down in the original signal. Notice that $\delta\nu = 1/T$ means that the frequency resolution of your transform is inversely proportional to the duration of the signal and does not depend on the signal sampling.
There is a zero frequency, at which the FT value is simply the sum of the signal values:
$$a_0 = \sum_{k=0}^{N-1} x_k$$
Notice that the PDS at negative frequencies is identical to that at positive frequencies, as in the case of its continuous version.
Parseval's theorem also applies to the discrete case:
$$\sum_{k=0}^{N-1} |x_k|^2 = \frac{1}{N}\sum_{j=0}^{N-1} |a_j|^2$$
from which one can see that the variance of the signal is $1/N$ times the sum of the $|a_j|^2$ over all indices besides zero (see [17]). Also the connection with the discrete autocorrelation applies: the PDS is the Fourier transform of the ACF.
Fourier theory is an extremely powerful tool for extracting information from time series. An example can be seen in Fig. 2. The top panel shows part of a simulated time series (black line) consisting of strong Gaussian noise superimposed on a weak sinusoidal modulation (red line). The modulation is completely invisible by eye, but appears clearly in the PDS (bottom panel).
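A simulation along these lines is easy to reproduce; a minimal sketch (the parameters are arbitrary choices, not those of Fig. 2):

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 256.0, 4096                              # duration (s) and number of bins
t = np.arange(N) * T / N
x = 0.5 * np.sin(2 * np.pi * 2.0 * t)           # weak 2-Hz sinusoid...
x = x + rng.normal(scale=3.0, size=N)           # ...buried in strong Gaussian noise

a = np.fft.rfft(x)                              # FT at non-negative frequencies
freq = np.fft.rfftfreq(N, d=T / N)
pds = np.abs(a) ** 2                            # unnormalized power density spectrum
print(freq[np.argmax(pds[1:]) + 1])             # peak recovered at 2.0 Hz
```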
Windowing and sampling
We have seen two definitions of the Fourier transform: the continuous FT, which applies to functions over the $(-\infty, \infty)$ interval, and the discrete FT, which deals with $N$ sampled data over the $(0, T)$ range. The question now is how to connect the first, which has numerous properties but is not realistic for data analysis, to the second. In order to do this we can apply one of the fundamental properties of the continuous FT: the Fourier transform of the product of two functions $x(t)$ and $y(t)$ is (up to a normalization constant) the convolution of the Fourier transforms of the functions, where the convolution of two functions is defined as
$$(f * g)(t) = \int_{-\infty}^{\infty} f(t')\, g(t - t')\; dt' \quad (20)$$
so that $x(t)\,y(t) \Longleftrightarrow \frac{1}{2\pi}\,(X * Y)(\omega)$. Our discrete time series $x_k$ vs. $t_k$ can be seen as the product of a continuous function $f(t)$ over $(-\infty, \infty)$ and two additional functions: $w(t)$ to limit it to the $(0, T)$ interval and $s(t)$ to sample it at times $t_k$, where $w(t)$ is a boxcar window function (more on windows below), which is 1 in the $(0, T)$ interval and zero outside, and $s(t)$ is a series of delta functions at $t_k$, spaced by $T/N$ (see above). It is simpler to show this graphically, in Fig. 3.
Windowing effects
To see what the effect of windowing and sampling is, let us consider a purely sinusoidal function $f(t) = \sin(\omega t)$, whose FT is a delta function at $\omega$. The multiplication by the window function corresponds to the convolution of the delta function with the FT of the window. It is simpler to consider a window function that is unity in the $(-T/2, T/2)$ interval, as it is a real and even function, whose FT is also real and even (the $(0, T)$ case leads to an FT with the same amplitudes, but non-zero phases). In this case, it is simple to calculate that
$$W(\omega) = \int_{-T/2}^{T/2} e^{-i\omega t}\; dt = \frac{2\sin(\omega T/2)}{\omega}$$
which is a sinc function. One can see two windows of duration $T = 1$ s and $T = 5$ s and their FTs in Fig. 4. The FT peak is broader for shorter $T$.
Therefore, the effect of a finite window, something that cannot be avoided in an astronomical observation, is the broadening of narrow peaks. The resolution of the signal FT is therefore higher the longer the observation. In addition to the broadening, there is the formation of side lobes. They are much lower than the central peak, but cannot always be ignored. In the PDS, the drop in peak power scales as $\nu^{-2}$, so if the signal contains a noise component that is steeper than that, the power spilled over to higher frequencies by the side lobes is more than the signal power at those frequencies. As a result, the final slope will be $-2$, the slope of the envelope of the side lobes. In other words, no signal steeper than $\nu^{-2}$ in power can be recovered using a Fourier transform.
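The leakage pattern is easy to visualise numerically; a minimal sketch comparing the implicit boxcar window with a tapered one (the Hann window, anticipating the window carpentry discussed below):

```python
import numpy as np

N = 1024
t = np.arange(N) / N                       # a window of unit duration
x = np.sin(2 * np.pi * 20.3 * t)           # 20.3 cycles: non-integer, so power leaks
p_boxcar = np.abs(np.fft.rfft(x)) ** 2                 # implicit boxcar window
p_hann = np.abs(np.fft.rfft(x * np.hanning(N))) ** 2   # tapered (Hann) window
# Away from the peak the boxcar power follows the nu^-2 sinc envelope, while the
# Hann taper suppresses the side lobes at the cost of a broader central peak.
print(p_boxcar[40] / p_boxcar[20], p_hann[40] / p_hann[20])   # leakage ratios
```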
Sampling effects: aliasing
As to the effect of sampling, the FT of a series of regularly spaced delta functions with spacing $T/N$ in time is itself a series of regularly spaced delta functions, with spacing $N/T$ in frequency:
$$s(t) = \sum_{k=-\infty}^{\infty} \delta(t - kT/N) \;\Longleftrightarrow\; S(\nu) \propto \sum_{m=-\infty}^{\infty} \delta\!\left(\nu - m\frac{N}{T}\right) \quad (23)$$
Therefore the effect of sampling on the FT of a sinusoidal signal with frequency $\nu_0$ (a delta function at $\nu_0$) is that of adding an infinite sequence of delta functions spaced by $N/T$, called aliases. What is important is that $N/T$ is twice the Nyquist frequency (see Eqn. 16). This ensures that if the data contain a sinusoidal signal at a frequency below $\nu_{\rm Nyq}$, the aliases will not be in the "allowed" frequency range $(0, \nu_{\rm Nyq})$. Notice that the FT amplitude is an even function for a real function, so the aliases will also be present in the negative frequency range. An example can be seen in the top panel of Fig. 5. Here we have a signal at $\nu_0 = 15$ Hz, which appears as two peaks, at $\nu_0$ and $-\nu_0$ (in black). The Nyquist frequency is $\nu_{\rm Nyq} = 20$ Hz, therefore the region where we can search with our FT is that marked in blue. Because of aliasing, the two peaks are also repeated infinitely in both directions, with a step equal to $2\nu_{\rm Nyq} = 40$ Hz, two of which can be seen in the plot (red). These aliases do not represent a problem, as they are both beyond $\nu_{\rm Nyq}$. However, problems arise when the signal is at a frequency above $\nu_{\rm Nyq}$, as in the bottom panel of Fig. 5, where $\nu_0 = 35$ Hz. We are not in a condition to detect this signal, but because of the data sampling one of the aliases (in red) appears at $\nu_a = 5$ Hz, which means we see it in our analysis.
We have all experienced aliasing effects when looking at fast rotating objects, like an air fan under fluorescent light. The light provides a sampling at 50 Hz (or 60 Hz, depending on where you live), while the fan has a periodicity. Depending on its angular speed, you can see it rotating apparently much more slowly, or even appear to stop and rotate in the opposite direction. A graphical example in the time domain can be seen in Fig. 6. Here the signal (in blue) has a period of 10 s ($\nu_0 = 0.1$ Hz), but the sampling (red points) is every 13 s ($\nu_{\rm Nyq} = 0.038$ Hz). The signal is out of range, but its alias at $\nu_a = 0.023$ Hz (period of 43.33 s, red dashed curve) is not. This appears to be a very serious problem, and it is, but in high-energy astronomy we do not sample signals, but integrate them over finite time bins. In this case, one does not multiply the signal by a sampling function $s(t)$, but convolves it with a binning function (see Eqn. 24). Therefore, the signal FT will be multiplied by that of the binning function, which is again a sinc function, given in Eqn. 25.
Equation 25,
$$B(\nu) = \frac{\sin(\pi\nu/2\nu_{\rm Nyq})}{\pi\nu/2\nu_{\rm Nyq}} \quad (25)$$
is a broad function that reaches 0 at $2\nu_{\rm Nyq}$ and has the value of $2/\pi$ at $\nu_{\rm Nyq}$. Therefore, aliasing is not an issue here. However, because of the reduction of amplitude due to the binning function, it is important to use, if possible, a fast binning, so that the Nyquist frequency is much higher than the signal one is looking for.
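Aliasing is easy to demonstrate numerically: a sinusoid above the Nyquist frequency falls onto exactly the same sample points as its alias. A minimal sketch reproducing the situation of the bottom panel of Fig. 5:

```python
import numpy as np

fs = 40.0                                  # sampling rate: nu_Nyq = 20 Hz
t = np.arange(400) / fs
x_true = np.sin(2 * np.pi * 35.0 * t)      # signal above the Nyquist frequency
x_alias = np.sin(2 * np.pi * 5.0 * t)      # its alias inside (0, nu_Nyq)
print(np.allclose(x_true, -x_alias))       # True: identical samples up to a phase flip
```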
Window carpentry
We have seen the effects of the observing window upon the output FT and PDS. Of course such a window cannot be avoided, as one cannot have measurements infinite in duration. Having a longer observation reduces the width of the main window peak, but does not change the possible spillover effects. However, in some cases it can be advantageous to multiply the data by another window, not boxcar-shaped. This results in a loss of signal, as some data are multiplied by a factor less than unity, but there are advantages, depending on the chosen window.
The above has led to the concept of window carpentry: many window functions with different characteristics have been designed and one can tailor them depending on what is needed. The main features that identify a window in its PDS are: the width of the main peak ∆ ω, the relative amplitude of the first side lobe L (expressed in decibels) and the slope of the decay of side lobes n (see Fig. 7).
The boxcar window, the one you do not need to apply explicitly, is the one with the lowest $\Delta\omega$, but with alternative windows it is possible to obtain a significant reduction of the side lobes, at the expense of a broader main peak.
Observational windows
As we have seen, even if no custom window is applied to the data, a boxcar window is determined by the start and end times of the signal. In case the signal is made of separate intervals, the production of a single PDS over the full time span means that a multiple boxcar window is applied. This results in a more complicated effect on the output PDS. An example can be seen in Fig. 9. In the top left pair of panels is a time series of $10^5$ s, with 1-s binning, containing Gaussian noise and a sinusoid with a period of 200 s, and its PDS, zoomed to the frequencies around 0.005 Hz. The oscillation appears as a narrow peak, due to the long duration of the signal. In the top right panel the signal is the same, but limited to $10^4$ s: the peak in the PDS is broadened. In the bottom left panel the signal is the same, but split into three 3333-s intervals distributed at equal distances over $10^5$ s. The PDS was made including the gaps as zero points. The effect of the more complex window is visible. In the bottom right panel the two large gaps are filled with Gaussian noise with the same average as the signal. The window effects on the PDS are reduced, as the sharp drops are removed, but the broadening of the peak, together with its sidelobes, remains.
Fast Fourier Transform
Evaluation of the Discrete Fourier Transform (DFT, Eqn. 14) of $N$ samples involves $\sim N^2$ multiplication and addition operations: for every Fourier component $a_j$, each of the $N$ samples $x_k$ needs to be multiplied by a phase factor $e^{-2\pi i jk/N}$ and then they have to be summed. The Fast Fourier Transform algorithm has been devised to accomplish this computation in much fewer steps, typically with $\sim N\log_2 N$ multiplications and additions. This provides enormous computational savings for large transforms, and has made Fourier analysis accessible to cases where it would have been otherwise prohibitive. Several different versions of the FFT algorithm exist. We will illustrate the concept using the original algorithm of Cooley and Tukey [7] for transforms of length $N$ equalling integer powers of two, as presented in [15]. One may write the DFT as
$$a_j = \sum_{k=0}^{N-1} x_k\, e^{-2\pi i jk/N}$$
This can then be divided into even and odd parts:
$$a_j = \sum_{m=0}^{N/2-1} x_{2m}\, e^{-2\pi i jm/(N/2)} + e^{-2\pi i j/N}\sum_{m=0}^{N/2-1} x_{2m+1}\, e^{-2\pi i jm/(N/2)}$$
This is a sum of two transforms, each requiring $\sim (N/2)^2$ operations, and thus the total number of operations required is reduced by a factor of two. One may now continue to divide each of these transforms further, gaining a factor of two reduction in computation at each step. Taking this all the way down to one-point transforms (identity operations), the net number of required operations to construct the full transform becomes $\sim N\log_2 N$. In the implementation of the algorithm, some additional bookkeeping is required regarding which element of the original array needs to be combined with which others at each stage of the transform. It turns out that if, in the beginning, the elements of the original array are reordered in a special way, then each step of the computation can be carried out by operations involving just the adjacent elements or the adjacent transform products. To do such a rearrangement, for any element of the original array one first expresses the array index in binary notation. One then reverses the order of the bits of the index value to generate a new index, to which the element is now moved.
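The recursion is compact in code; a minimal sketch (recursive for clarity, whereas production FFTs use the iterative, bit-reversed form described above):

```python
import numpy as np

def fft_radix2(x):
    """Recursive Cooley-Tukey FFT for len(x) an integer power of two."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x                          # one-point transform: identity
    even = fft_radix2(x[0::2])            # transform of even-indexed samples
    odd = fft_radix2(x[1::2])             # transform of odd-indexed samples
    w = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # "twiddle" phase factors
    return np.concatenate([even + w * odd, even - w * odd])

x = np.random.default_rng(2).normal(size=16)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```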
The Power Density Spectrum and its representation
As we have seen, the PDS is a powerful way of representing a signal in the frequency domain, both when one is interested in coherent signals and when the data contain incoherent noise originating from the source. In this section, we discuss the choice for PDS normalization, which is important both for statistical reasons and for extracting physical information. Moreover, the way the PDS is represented is also important in order to highlight important information.
PDS Normalization
Since the FT is a linear transformation and the PDS is its squared modulus, the PDS scales with the square of the intensity level of the signal (see Parseval's theorem above). It is possible to normalize the PDS in different ways. In high-energy astronomy, two normalizations are commonly used, each of which has a different purpose. The first normalization was introduced by [10]:
$$P_j = \frac{2\,|a_j|^2}{N_\gamma}$$
where $N_\gamma$ is the total number of photons in the signal. This normalization leads to a known statistical distribution of the signal power: if the signal is dominated by fluctuations due to Poisson statistics and if $N_\gamma$ is large, powers follow a chi-square distribution with 2 degrees of freedom, with $\langle P\rangle = 2$ and ${\rm Var}(P) = 4$. The reason is that the periodogram is the sum of the squares of the real and imaginary parts of the FT. For a stochastic process, the latter are normally distributed, so the sum of their squares is distributed as a chi square with 2 degrees of freedom. If the signal is divided into $S$ segments, the resulting PDS are averaged (see below) and rebinned by a factor $M$, the powers will be distributed as a chi square with $2SM$ degrees of freedom, scaled by $1/SM$: therefore, the average power remains $\langle P\rangle = 2$, but the variance is now ${\rm Var}(P) = 4/SM$ (see also [17]). This is very important in order to establish the significance of an excess of power over the (flat) Poissonian level.
The so-called Leahy normalization, however, does not allow a direct extraction of quantitative indicators such as the fractional rms and does not remove the dependence of power on the signal intensity. A different normalization was introduced in 1990 by [2], usually called "Belloni normalization" or "rms normalization", obtained by dividing the Leahy power by the net source intensity:
$$P_{{\rm rms},j} = P_j\,\frac{C}{(C-B)^2}$$
where $C$ is the detected intensity (source plus background) and $B$ is the background intensity. The reason for this choice is not statistical, but to have a PDS in units of squared fractional rms. With this normalization, different observations or sources can be compared in terms of fractional rms. Computing the square root of the integral of the power in a given frequency range yields directly the fractional rms in that range. In principle it would be possible to take the square root of the PDS itself to convert it to fractional rms, but this would alter the shape of the PDS.
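Both normalizations are simple rescalings of the raw periodogram; a minimal sketch for a light curve given in counts per bin (variable names are illustrative, and the background rate must be estimated separately):

```python
import numpy as np

def leahy_pds(counts):
    """Leahy-normalized PDS of a light curve in counts per bin."""
    n_gamma = counts.sum()                         # total number of photons
    a = np.fft.rfft(counts)
    return 2.0 * np.abs(a) ** 2 / n_gamma          # Poisson noise averages to 2

def rms_pds(counts, dt, bkg_rate=0.0):
    """Fractional-rms-squared normalization: rescale by C / (C - B)^2."""
    rate = counts.sum() / (len(counts) * dt)       # detected count rate C
    return leahy_pds(counts) * rate / (rate - bkg_rate) ** 2
```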
The Poissonian contribution to the PDS is therefore in principle expected to be a flat spectrum at the level of 2 and with variance 4. Since the noise due to counting statistics is, also in principle, independent of the intrinsic signal, the cross term in Eqn. 8 is null and the flat component can be subtracted from the PDS. After an estimate of the Poissonian component is obtained, it is usually subtracted from the PDS. However, detector dead time does modify the shape and the level of the Poissonian component and introduces a correlation between source and noise signals (this is discussed elsewhere in this book).
PDS representation
The PDS is usually plotted as a function of frequency, limiting the frequency axis to the allowed band between $1/T$ and $\nu_{\rm Nyq}$. Examples are shown in the left panel of Fig. 10. A different way to represent the PDS graphically is now also being used (see [3]), where the power on the y axis is multiplied by frequency, as is done with energy spectra. Examples can be seen in the right panel of Fig. 10. This representation, called $\nu P_\nu$, shows squared rms per decade and is therefore more useful to assess the rms contribution of the different components. Moreover, in the $\nu P_\nu$ representation, Lorentzian functions peak at their $\nu_{\rm max}$ frequency (see below). Finally, it is easier to visualise power-law behavior with indices between 0 and $-2$, as they become $+1$ and $-1$ respectively. As we have seen, one way to reduce uncertainties in the PDS is rebinning in frequency. One has to be careful, as a coherent feature, especially in a long observation, can be very narrow and be diluted or even lost after rebinning. Moreover, for displaying broad noise components, it is common to rebin the PDS not linearly, but logarithmically. Instead of averaging $n$ bins over the whole frequency range, each bin is made larger than the previous one by a small amount; a typical value is around 1%. This is advantageous because, unlike the case of coherent peaks, a broad component can emerge more clearly from the data after rebinning, since its power is not concentrated in a small number of bins.
PDS decomposition
As we have seen above, the PDS is not a linear transformation and is not additive. Cross terms can be neglected if there is no correlation between the contributing components. Detector-related issues aside, Poissonian noise is uncorrelated with the source signal and can be treated as an additive component. For separate source components this is not necessarily true, in particular for source noise components, but it is customary to ignore the possibility of non-zero cross-terms.
In the case of coherent oscillations, such as pulsations from neutron stars, the shape of the PDS is determined by the window used (see above) and, in the case of binary systems, by the smearing that results from orbital Doppler effects. In many cases, the signal itself consists of complex and often strong noise components, which require a functional characterization. While in the past broken power laws have been used, it has become customary to fit these PDS with a combination of both broad and peaked components modeled as Lorentzians (see Eqn. 32).
$$P(\nu) = \frac{r^2\,\Delta/\pi}{\Delta^2 + (\nu - \nu_0)^2} \quad (32)$$
where $\nu_0$ is the centroid frequency, $\Delta$ the HWHM and $r$ the integrated fractional rms. How coherent such a component is can be characterized by its quality factor $Q = \nu_0/2\Delta$ [13,4]. In the case of $\nu_0 = 0$ we have a zero-centered Lorentzian, flat at low frequencies and decreasing as $\nu^{-2}$ at high frequencies, which is used to fit band-limited noise. Examples can be seen in Fig. 10. The decomposition of noise PDS into a sum of homogeneous components has helped to unify the timing properties of accreting X-ray binaries (see Fig. 11), as Lorentzians can fit both narrow and broad components. In particular, this decomposition has led to the identification of characteristic frequencies. For a Lorentzian, the characteristic frequency $\nu_{\rm max}$ is defined as
$$\nu_{\rm max} = \sqrt{\nu_0^2 + \Delta^2} \quad (33)$$
[3,4].
As can be seen in Fig. 10, $\nu_{\rm max}$ is the frequency at which the Lorentzian contributes most in terms of power per logarithmic frequency interval, and also the frequency at which the component peaks in the $\nu P_\nu$ representation. From Fig. 10 and Eqn. 33 one can see that, for a narrow component such as that with $Q = 50$, $\nu_{\rm max} \approx \nu_0$, but for broad components it deviates substantially and is more representative of a "special" frequency in the PDS.
However, although the fits are often good, it has to be noted that we do not yet have a physical backing behind the use of Lorentzian components, other than the generic fact that a Lorentzian is the PDS of a damped oscillator.
Bartlett's method and data gaps
As we have seen, with the Leahy normalization the noise power is distributed as a chi square with two degrees of freedom: this means that the average is 2 and the standard deviation is 2. Quite a noisy spectrum! We have also seen that dividing into $S$ segments and rebinning by a factor of $M$ reduces the error. The technique of dividing the time series into equal-duration intervals and averaging the corresponding PDS is called Bartlett's method and is commonly used in high-energy astronomy. Of course, this does not allow one to detect changes in the variability properties with time for a non-stationary signal, which we will examine later. Also, the reduction in time duration $T$ increases the minimum frequency in the PDS, $\nu_{\rm min} = 1/T$. This method also allows one to skip over data gaps, which have dramatic effects on the PDS. A gap in the time series corresponds to an interval of signal at level 0, which means a modification of the boxcar window, with serious effects on the resulting PDS. In addition to skipping the gaps, selecting only continuous intervals, one can (if the gaps are short) fill them with a simulated signal: for instance, an average value extrapolated from the non-gap parts, with added Poissonian noise. However, the effects on the PDS will not be completely removed and gap-filling can add biases to the data.
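A minimal sketch of Bartlett's method for a counts-per-bin light curve, reusing the Leahy normalization above (the segment length and the dropping of a trailing partial segment are choices of this sketch):

```python
import numpy as np

def bartlett_pds(counts, seg_len):
    """Average Leahy-normalized PDS over S equal-length segments."""
    n_seg = len(counts) // seg_len                 # trailing partial segment dropped
    segs = counts[:n_seg * seg_len].reshape(n_seg, seg_len)
    pds = [2.0 * np.abs(np.fft.rfft(s)) ** 2 / s.sum() for s in segs]
    return np.mean(pds, axis=0)                    # power variance reduced by 1/S
```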
Auto and cross-correlation
The cross-correlation of two functions $f(t)$ and $g(t)$ is defined as
$$\mathrm{CCF}(\tau) = \int_{-\infty}^{\infty} f^*(t)\, g(t+\tau)\; dt \quad (34)$$
The result is a function of the "lag" $\tau$ introduced between the two functions, and is often used to estimate the similarity between two different time series as a function of lag. For example, if a common underlying process causes the time variation of intensity in two different electromagnetic bands but the signals suffer differential delays while propagating to the observer, then the cross-correlation function of the two time series will exhibit a peak at the corresponding lag, namely the relative delay between the two bands. The autocorrelation function (Eqn. 12) is a special case where a function is correlated with itself, which always shows a peak at zero lag. Convolution (Eqn. 20) is an operation akin to the cross-correlation, but the function $g(t)$ in the integrand is inverted to $g(-t)$ before adding the shift.
Unlike convolution, cross-correlation is not commutative:
$$\mathrm{CCF}_{fg}(\tau) = \mathrm{CCF}^*_{gf}(-\tau) \neq \mathrm{CCF}_{gf}(\tau)$$
Other important properties of cross-correlation include
$$\mathcal{F}\left[\mathrm{CCF}_{fg}\right] = F^*(\omega)\, G(\omega)$$
where $\mathcal{F}$ represents the Fourier transform. The definition in Eqn. 34 involves the functions $f$ and $g$ over the entire real line. In practical use, both of these would be time series of finite duration, and not necessarily of equal length. The input to the correlation integral will therefore not be the original functions $f$ and $g$ defined over $(-\infty, \infty)$ but windowed copies of them, $f w_f$ and $g w_g$, where $w_f$ and $w_g$ are boxcar window functions of amplitude unity over the durations of the respective time series and zero elsewhere. As can be seen, this causes a lack of overlap between parts of the two functions when they are sufficiently shifted with respect to each other. There are two ways this can be handled: the non-overlapped portion could either be wrapped, resulting in a cyclic correlation, or could be ignored. Cyclic correlation is appropriate for periodic functions.
In other cases, ignoring the non-overlapped portions will decrease the lengths of the functions being multiplied, and will thus alter the net normalisation of the integral. To account for this, one may divide the integral by the cross-correlation of the two window functions $w_f$ and $w_g$ at the same lag:
$$\mathrm{CCF}_{fg}(\tau) = \frac{\int f^*(t)\, w_f(t)\; g(t+\tau)\, w_g(t+\tau)\; dt}{\int w_f(t)\, w_g(t+\tau)\; dt}$$
For discretely sampled functions, this is equivalent to dividing by the total number of overlapping points at the corresponding lag. Often it is also customary to normalise the cross-correlation function such that its amplitude does not exceed unity. This is achieved by dividing the cross-correlation by the square root of the product of the two autocorrelation functions at zero lag:
$$\mathrm{NCCF}_{fg}(\tau) = \frac{\mathrm{CCF}_{fg}(\tau)}{\sqrt{\mathrm{ACF}_f(0)\,\mathrm{ACF}_g(0)}}$$
The result is referred to as the Normalised Cross-Correlation Function. The Fourier transform of the cross-correlation function defines the cross spectrum (Eqn. 38). Non-zero lags in the cross-correlation function naturally manifest as corresponding phase gradients in the cross spectrum. It is easy to see that the autocorrelation function $\mathrm{ACF}(f)$ has for its Fourier transform $|F(\omega)|^2$, namely the power spectrum.
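For discretely sampled, equal-length series the overlap correction and the normalisation can be written compactly; a minimal sketch:

```python
import numpy as np

def ncc(f, g):
    """Normalized cross-correlation of two equal-length, mean-subtracted series."""
    f = f - f.mean()
    g = g - g.mean()
    ccf = np.correlate(f, g, mode="full")          # all lags, non-cyclic
    lags = np.arange(-len(f) + 1, len(g))
    overlap = len(f) - np.abs(lags)                # points overlapping at each lag
    ccf = ccf / overlap                            # correct for the window overlap
    norm = np.sqrt(np.mean(f**2) * np.mean(g**2))  # zero-lag ACF product
    return lags, ccf / norm                        # amplitude bounded by unity
```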
At times, the observed time series could be composed of multiple narrowband components, with the lags for each of them being independent, and different. Depending on the relative amplitudes of these components, such lags may not necessarily manifest themselves in the cross correlation function as distinct peaks offset from zero lag, but instead cause an asymmetry in the cross-correlation profile. Fig. 12 shows an example of this. Such frequency-dependent phase lags are more clearly detected in the Cross Spectrum, as illustrated in the next section.
Cross-spectra, phase lag spectra and coherence
Above we have introduced the PDS as the Fourier transform of a function times its complex conjugate, and we have shown that the PDS is the Fourier transform of the autocorrelation function. Then, we have introduced the cross-correlation between two time series. As in the case of the autocorrelation, the cross-correlation does not allow one to discriminate between different frequencies and is usually used only to assess global delays between simultaneous signals. Also in this case, it is possible to explore the frequency domain, through the cross spectrum. Given two signals $f(t)$ and $g(t)$ and their respective FTs $F(\omega)$ and $G(\omega)$, we define the cross spectrum as
$$CS(\omega) = F^*(\omega)\, G(\omega) \quad (42)$$
Analogous to the PDS and the autocorrelation, the cross spectrum between two signals is the Fourier transform of their cross-correlation (and vice-versa). Since, unlike the autocorrelation, the cross-correlation is not by definition an even function, the cross spectrum is not by definition a real quantity. At each frequency, its argument represents the phase difference between intensity fluctuations in the two signals at that frequency. The difference in phase at a certain frequency $\nu$ can be translated into a time delay by dividing it by $2\pi\nu$. As an example, Fig. 13 shows the PDS of the signal for which the cross-correlation in Fig. 12 was calculated, with the peaks corresponding to the three sinusoids, and the cross spectrum between the two time series with lagged sinusoids. In the cross spectrum, the lags of the three sinusoids are clearly readable. This cross spectrum is much easier to interpret than the corresponding cross-correlation. While for a coherent signal the phase delays (or phase lags) are simple to interpret, in many cases the signal consists of noise plus broad peaks. For these signals, the interpretation is not directly obvious, as the decomposition of broad-band noise into sinusoids does not correspond to physical oscillations at the different frequencies. It is important to remark that the phase of a sinusoid at a certain frequency is a quantity that makes sense only for that frequency (or multiples of it), and it does not make sense to compare it with the phase at a different frequency. This means that averaging phases or phase lags over a range of frequencies does not result in a physical quantity. This does not, of course, apply to time lags.
As we have seen, the cross spectrum of two signals at each frequency is a complex number whose argument represents the phase delay between the signals at that frequency. However, let us suppose we have a set of independent measurements of the two signals, for instance obtained by slicing the signals into $n$ shorter samples. Then an important piece of information is whether the phase delay measured with the cross spectrum is the same across samples. This can be measured by computing the coherence between the two signals. If the two signals are $s_1(t)$ and $s_2(t)$, their PDS are $P_1(\nu)$ and $P_2(\nu)$ respectively and their cross spectrum is $CS(\nu)$, then their coherence is defined as
$$\gamma^2(\nu) = \frac{\left|\langle CS(\nu)\rangle\right|^2}{\langle P_1(\nu)\rangle\,\langle P_2(\nu)\rangle} \quad (43)$$
where the angle brackets mean averages over the $n$ samples. The averaging of the cross spectra in the numerator of Eqn. 43 means that at each frequency the complex vectors are summed. If their argument, representing the phase delay at that frequency, is always the same, the resulting sum is a straight vector; if the arguments are random, it will be a null vector. Intermediate cases will lead to a non-straight vector. The coherence represents the ratio of the actual sum vector to the straight vector. In the case of constant phase delays, it will be unity. In the case of random phase delays it will be zero. In other words, if the two signals are related by a linear transform, the coherence is unity at all frequencies.
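A direct implementation of Eqn. 43 is straightforward; a minimal sketch, assuming two equally sampled series sliced into non-overlapping segments:

```python
import numpy as np

def coherence(s1, s2, seg_len):
    """Coherence of Eqn. 43, averaging cross and power spectra over segments."""
    n = min(len(s1), len(s2)) // seg_len
    cs = p1 = p2 = 0.0
    for k in range(n):
        f1 = np.fft.rfft(s1[k * seg_len:(k + 1) * seg_len])
        f2 = np.fft.rfft(s2[k * seg_len:(k + 1) * seg_len])
        cs = cs + np.conj(f1) * f2                 # complex vectors summed
        p1 = p1 + np.abs(f1) ** 2
        p2 = p2 + np.abs(f2) ** 2
    return np.abs(cs / n) ** 2 / ((p1 / n) * (p2 / n))   # 1 for a linear relation
```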
Bispectrum and bicoherence
If $f(t)$ is a time series of measurements and $F(\omega)$ is its Fourier transform, then we have seen above that the power spectrum is the Fourier transform of the autocorrelation function of $f(t)$. The autocorrelation function may be thought of as the second-order correlation
$$C_2(\tau) = \langle f(t)\, f(t+\tau)\rangle$$
where the angular brackets denote an ensemble average. Ideally the average so defined would involve an infinite length of the data train $f(t)$, but in practice, even with shorter lengths, the average would be independent of time $t$ if stationarity is assumed. The power spectrum may then also be shown to be equal to
$$P(\omega) = \langle F(\omega)\, F(-\omega)\rangle = \langle F(\omega)\, F^*(\omega)\rangle$$
the angular brackets again denoting an ensemble average, this time in the frequency domain. The two frequencies $\omega$ and $-\omega$ sum to zero as a consequence of stationarity. The bispectrum is an extension of the above concept to triple correlations. The third-order correlation function
$$C_3(\tau_1, \tau_2) = \langle f(t)\, f(t+\tau_1)\, f(t+\tau_2)\rangle$$
has a double Fourier transform $B(\omega_1, \omega_2)$, which is defined as the bispectrum. It can be shown that
$$B(\omega_1, \omega_2) = \langle F(\omega_1)\, F(\omega_2)\, F^*(\omega_1 + \omega_2)\rangle$$
The bispectrum provides information about non-linear interaction between waves. The bispectrum is non-zero at a pair of frequencies $(\omega_1, \omega_2)$ only if the Fourier component $F(\omega_1 + \omega_2)$ is statistically dependent on the product $F(\omega_1)F(\omega_2)$, indicating a non-linear coupling between the original frequencies or a specific phase relation between them. The bispectrum may be normalised to define a quantity called the bicoherence $b(\omega_1, \omega_2)$, with a value between 0 and 1:
$$b^2(\omega_1, \omega_2) = \frac{\left|\langle F(\omega_1)\, F(\omega_2)\, F^*(\omega_1+\omega_2)\rangle\right|^2}{\langle |F(\omega_1)\, F(\omega_2)|^2\rangle\,\langle |F(\omega_1+\omega_2)|^2\rangle}$$
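An estimate of the bicoherence can be built by averaging the triple product over data segments; a minimal sketch (no windowing or bias correction, and the denominator is not protected against zero power):

```python
import numpy as np

def bicoherence(x, seg_len):
    """Segment-averaged bicoherence estimate b^2(w1, w2) on a discrete grid."""
    n = len(x) // seg_len
    F = np.fft.fft(x[:n * seg_len].reshape(n, seg_len), axis=1)
    idx = np.arange(seg_len // 2)              # keep w1 + w2 within the range
    F1 = F[:, idx, None]                       # F(w1)
    F2 = F[:, None, idx]                       # F(w2)
    F12 = F[:, idx[:, None] + idx[None, :]]    # F(w1 + w2)
    num = np.abs((F1 * F2 * np.conj(F12)).mean(axis=0)) ** 2
    den = (np.abs(F1 * F2) ** 2).mean(axis=0) * (np.abs(F12) ** 2).mean(axis=0)
    return num / den                           # ~1 for a fixed phase relation
```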
Lomb-Scargle technique for non-uniform sampling
The Fourier methods described so far are designed to deal with evenly sampled data. However, if the available data samples have non-uniform intervals, a technique that is often used in astronomy to search for periodicities is the Lomb-Scargle periodogram. A useful exposition of this method is presented in [18]. As discussed above, the sampling function may be represented as a series of delta functions located at the sample points which, in this case, are non-uniformly spaced. The available time series is then a multiplication of the underlying function with this sampling function. In the Fourier domain, the transform of this time series will be a convolution of the transform of the underlying function with that of the sampling function. The latter, unlike in the case of uniform sampling, is noise-like in character. The resulting convolution thus makes it harder to identify the features of the underlying function, hampering the search for periodicity. Nevertheless, for sufficiently strong periodic signals, one can find associated peaks in the analogue of the power spectrum:
$$P(\omega) = \frac{1}{N}\left|\sum_{n=1}^{N} f_n\, e^{-i\omega t_n}\right|^2$$
where $f_n$ are the time series values sampled at the epochs $t_n$, and $N$ is the total number of samples. The above can be written as
$$P(\omega) = \frac{1}{N}\left\{\left[\sum_n f_n\cos\omega t_n\right]^2 + \left[\sum_n f_n\sin\omega t_n\right]^2\right\}$$
This is called the Classical or Schuster periodogram. The Lomb-Scargle periodogram is a slight modification of this [16]:
$$P_{LS}(\omega) = \frac{1}{2}\left\{\frac{\left[\sum_n f_n\cos\omega(t_n-\tau)\right]^2}{\sum_n\cos^2\omega(t_n-\tau)} + \frac{\left[\sum_n f_n\sin\omega(t_n-\tau)\right]^2}{\sum_n\sin^2\omega(t_n-\tau)}\right\}$$
where
$$\tan(2\omega\tau) = \frac{\sum_n\sin 2\omega t_n}{\sum_n\cos 2\omega t_n}$$
This form is identical to that obtained by least-squares fitting a model consisting of simple sinusoids at each frequency and constructing a periodogram out of the respective $\chi^2$ values [11]. This modification of the expression of the classical periodogram simplifies some of its statistical properties, so that relatively simple expressions can be used to estimate detection thresholds and false alarm rates [16]. We have discussed above (Eqn. 16) the role of sampling in limiting the highest frequency that the data are sensitive to, and referred to it as the Nyquist limit. One property of an unevenly sampled time series is that the effective Nyquist limit may often be raised to high values, even well beyond the reciprocal of the shortest sampling interval present in the data. In this context, Eyer and Bartholdi [8] prove the following: let $p$ be the largest value such that each sampling epoch $t_i$ can be written as $t_i = t_0 + n_i p$, where $n_i$ are integers. Then the Nyquist frequency is $1/(2p)$. This reduces to the conventional Nyquist frequency in the case of uniform sampling. In the case of non-uniform sampling one needs to find the largest interval $p$ such that each sampling interval present in the data is an exact integral multiple of $p$. If the sampling intervals are truly incommensurate with each other, then $p$ would be vanishingly small and there would be no Nyquist limit. (Fig. 14: the periodicity is detectable in both periodograms as a spectral peak at 0.0078 Hz, but for the non-uniformly sampled series the noise level is enhanced; the periodogram of the uniformly sampled series is identical to the power spectrum obtained by the Fourier transform.)
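The Lomb-Scargle expression above can be coded directly; a minimal (slow, loop-based) sketch with a simulated unevenly sampled sinusoid at the 0.0078 Hz of Fig. 14:

```python
import numpy as np

def lomb_scargle(t, f, omegas):
    """Lomb-Scargle periodogram for unevenly sampled data (direct form)."""
    f = f - f.mean()
    p = np.empty(len(omegas))
    for i, w in enumerate(omegas):
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        p[i] = 0.5 * ((f @ c) ** 2 / (c @ c) + (f @ s) ** 2 / (s @ s))
    return p

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 2000.0, 300))         # non-uniform sampling epochs
f = np.sin(2 * np.pi * 0.0078 * t) + rng.normal(size=300)
omegas = 2 * np.pi * np.linspace(0.001, 0.02, 2000)
print(omegas[np.argmax(lomb_scargle(t, f, omegas))] / (2 * np.pi))  # ~0.0078 Hz
```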
Time-frequency analysis
The properties of the observed signal are often not constant in time, i.e. the signal might not be stationary. For instance, there could be a state transition causing an abrupt change in the signal, or the characteristic frequency of a component could vary in time. Of course, one can produce PDS from selected intervals of data in order to isolate different behaviours, but it is often more efficient to perform time-frequency analysis.
Short-time Fourier transform
We have seen Bartlett's method, which consists of dividing the signal into equal-length intervals and averaging the corresponding PDS. Instead of averaging, we can keep the single FFTs (what is called the short-time Fourier transform), generate PDS and produce a spectrogram (often called a "dynamical power spectrum" in high-energy astronomy): an image that contains power as a function of frequency and time. A simulated example can be seen in Fig. 15: the signal is shown on top, the Bartlett PDS on the right and the spectrogram in the center panel. The signal is made of two "chirps", one whose frequency grows with time and the other having the opposite trend: this is not visible in the time series and PDS plots, but appears evident in the spectrogram. Notice the presence of side lobes, caused by the boxcar window; they can be suppressed using another window, at the expense of broadening the two chirp lines. We have shown above that the frequency resolution of the PDS of a signal of duration $T$ is $\Delta\nu = 1/T$, and this is also the resolution in frequency of the spectrogram. The resolution in time is the window duration $\Delta T = T$. This immediately shows that there is a trade-off between time and frequency resolution. This is an expression of the Gabor limit, which is itself connected to the uncertainty principle in time series analysis (indirectly related to Heisenberg's). There is no need to go into precise definitions and more mathematical detail, but this is a strong limitation for time-frequency analysis: the resolution element in the spectrogram cannot be made arbitrarily small in one direction without increasing it in the other. It is possible to divide the time series into overlapping intervals, so that each of them shares a percentage of time with the next one. This reduces frequency resolution, but also averages out noise. Averaging the PDS of such overlapping intervals results in Welch's method, an extension of Bartlett's.
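A minimal sketch of a spectrogram built this way (a boxcar-windowed short-time Fourier transform with non-overlapping segments; the chirp parameters are arbitrary):

```python
import numpy as np

def spectrogram(x, seg_len):
    """Short-time Fourier transform: one PDS per segment, stacked in time."""
    n = len(x) // seg_len
    segs = x[:n * seg_len].reshape(n, seg_len)
    return np.abs(np.fft.rfft(segs, axis=1)) ** 2  # shape (time bins, frequencies)

fs = 64.0                                          # sampling rate (Hz)
t = np.arange(65536) / fs
x = np.sin(2 * np.pi * (5.0 + 0.01 * t) * t)       # upward chirp, 5 to ~25 Hz
s = spectrogram(x, 1024)                           # the peak drifts across rows
```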
Wavelets
In the past decades, new mathematical tools have been developed for the analysis of both images and time series, called wavelets. For time series, they constitute a way to work around the Gabor limit.
A wavelet is a "small wave," where small derives from the fact that it is mostly limited to an interval of time. A wavelet ψ has to satisfy two requirements: its integral must be zero and the integral of its square must be unity (see Eqn. 55).
It is easy to see that these requirements imply that $\psi(t)$ is essentially non-zero only over a limited range of $t$ and that it has to extend both above and below zero. Three examples of wavelets can be seen in Fig. 16. A wavelet can be shifted by $\tau$ and dilated by a scale parameter $\sigma$:
$$\psi_{\tau,\sigma}(t) = \frac{1}{\sqrt{\sigma}}\,\psi\!\left(\frac{t-\tau}{\sigma}\right) \quad (56)$$
There are two types of wavelet transforms: continuous and discrete. The continuous transform, in which the scale and shift parameters are changed smoothly and can assume all values, is what can be used for time-frequency analysis. The discrete wavelet transform, where scales and shifts are changed in discrete fashion by factors of two, is used for compression and de-noising and has no use in the analysis of time series. Here, we will only deal with the continuous transform. The wavelet transform of a function $f(t)$ is computed by correlating $f(t)$ with the complex conjugate of $\psi_{\tau,\sigma}(t)$ (wavelets can also be complex functions):
$$W(\tau,\sigma) = \int_{-\infty}^{\infty} f(t)\,\psi^*_{\tau,\sigma}(t)\; dt \quad (57)$$
The power of the wavelet transform is that it allows localization both in time and frequency. Its properties allow one to sample large time durations for low frequencies and, at the same time, short time durations for higher frequencies, through the scaling properties of the wavelet transform. An example is shown in Fig. 17. Here a very variable, 1-s binned light curve of the bright and peculiar black-hole binary GRS 1915+105 is shown. The signal consists of intervals showing a strong quasi-periodicity alternated with broad smooth dips. In panel (a) the spectrogram is shown (here with a Hann window and 400-s sliding intervals that overlap by 90% with the previous one). Here one can see power in the 50-100 mHz range and an absence of power in the dips, but not much can be seen in detail. In panel (b) the wavelet transform is shown. A complex-valued version of the Morlet wavelet was used and what is shown is the amplitude of the complex-valued transform (phases are also available). Wavelet scales have been converted to frequency for comparison (frequencies are inversely proportional to scales). Features corresponding to quasi-periodicities are much more visible. A scalogram can also be produced, where the square modulus of the amplitude of the wavelet transform is used.
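A minimal sketch of a continuous wavelet transform with a complex Morlet wavelet, implementing Eqns. 56-57 by direct correlation (the truncation of the wavelet support and the choice $\omega_0 = 6$ are conventional but arbitrary here):

```python
import numpy as np

def morlet(t, w0=6.0):
    """Complex Morlet wavelet in its usual approximate form (admissible for w0 ~ 6)."""
    return np.pi ** -0.25 * np.exp(1j * w0 * t - t ** 2 / 2)

def cwt(x, dt, scales, w0=6.0):
    """Continuous wavelet transform by correlation with shifted/dilated wavelets."""
    out = np.empty((len(scales), len(x)), dtype=complex)
    for i, s in enumerate(scales):
        m = int(min(10 * s / dt, (len(x) - 1) // 2))   # truncate the wavelet support
        tw = np.arange(-m, m + 1) * dt
        psi = morlet(tw / s, w0) / np.sqrt(s)          # psi_{tau,sigma} of Eqn. 56
        out[i] = np.convolve(x, np.conj(psi)[::-1], mode="same") * dt
    return out                                         # rows: scales, columns: shifts
```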
More information on wavelets and wavelet transforms can be found in [1], [14] and [12].
Other techniques
Finally, it is interesting to briefly show another approach to time-frequency analysis. It starts from the Wigner-Ville distribution (WV), originally developed by Wigner for calculations in physics and brought to the signal analysis field by Ville. Given the usual time series $f(t)$, its Wigner-Ville distribution (in its continuous representation) is
$$WV(t,\omega) = \int_{-\infty}^{\infty} f\!\left(t+\frac{\tau}{2}\right) f^*\!\left(t-\frac{\tau}{2}\right) e^{-i\omega\tau}\; d\tau \quad (58)$$
It does look quite different from the previous ones, as it involves a cross-correlation of the time series with its own time-reversed version. Notice that, for each $t$, values distant in time have the same weight as values nearby, indicating that this is a highly non-local transformation. An example can be seen in Fig. 18, where the WV transform of the same double chirp as in Fig. 15 is shown. Clearly visible are the two important aspects of the distribution: the positive one is that the chirps are sharper in both time and frequency compared with the spectrogram, while the negative one is that there are spurious ghost patterns in the image. A problem more serious than the ghosts is that the statistics to use for the detection of significant features are not clear, as it can be demonstrated that, with the exception of very special cases, the Wigner-Ville distribution cannot be everywhere positive.
A generalization of the WV distribution was introduced by Cohen [6], who showed that all time-frequency representations can be expressed in the form of the Cohen class:
$$C(t,\omega) = \frac{1}{4\pi^2}\iiint f\!\left(u+\frac{\tau}{2}\right) f^*\!\left(u-\frac{\tau}{2}\right)\phi(\theta,\tau)\, e^{i\theta u - i\theta t - i\omega\tau}\; du\; d\tau\; d\theta \quad (59)$$
In the Cohen class, the function $\phi(\theta,\tau)$ is called the kernel. The kernel is what determines the distribution and its properties. There are many possibilities (see [6]): if $\phi = 1$ we recover the WV distribution, while if the kernel is built from a window function $h(t)$ as $\phi(\theta,\tau) = \int h^*\!\left(u-\frac{\tau}{2}\right) h\!\left(u+\frac{\tau}{2}\right) e^{i\theta u}\; du$ we obtain the spectrogram, which is the functional form of the short-time Fourier transform. | 2022-03-09T04:43:22.160Z | 2022-03-08T00:00:00.000 | {
"year": 2022,
"sha1": "84b4b043a1acedc131e78fa1dfee360c07471935",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2203.04106",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "84b4b043a1acedc131e78fa1dfee360c07471935",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
212782188 | pes2o/s2orc | v3-fos-license | Household Food Security: Evidence From South Sumatera
This study aims to determine the phenomenon of food security in South Sumatra Province. Food security is calculated using the Shortfall/Surplus Index and the Head Count Ratio. The Binary Logistic Regression method is used to determine the factors affecting food security. This study obtains data from the National Socio-Economic Survey of March 2017 regarding the average number of calories consumed by households per day and the socio-economic characteristics of households and household heads in South Sumatra. The survey used a total sample of 9,752 households, consisting of 3,099 urban households and 6,653 rural households. The results of the study show that, using the 2,100 kcal standard limit, most districts in South Sumatra are within the safe food security limit. However, they are not within the safe limit using the 2,500 kcal standard. The factors that affect household food insecurity in South Sumatra Province are the number of household members and the education of household heads.
INTRODUCTION
The development process in countries around the world has continued to improve, based on the results of the MDGs between 1995 and 2015. Unfortunately, 800 million people in the world still go to sleep in a state of hunger (World Bank, 2015). The ADB revealed that, although the economy continues to grow and poverty has been reduced, food insecurity still affects the population of Asia (ADB, 2013).
The fulfillment of food and nutrition needs has become a development priority in Indonesia (Purwantini, 2012). This is because many Indonesians face the problem of malnutrition, with calorie intake below the minimum nutritional level still far from the target (Purwantini, 2015). In 2011, the share of people who consumed less than 1,400 kcal of energy (considered a very food-insecure condition) reached 14.65 percent, well above the government target of 6.15 percent. Continuing malnutrition will harm the future of the nation, especially when experienced by children, whose physical growth and brain development can be impaired, endangering their own future and that of the nation (Ministry of National Development Planning Agency, 2011).
The Food Security Agency and the Ministry of Agriculture have mapped out food security by district in Indonesia. Districts are rated 1 to 6, where the most vulnerable conditions are in priority 1 and the most secure conditions are in priority 6. Districts in South Sumatra are generally in priorities 5 and 6. This is not surprising, because South Sumatra is one of the national rice barns.
Unfortunately, based on Table 1.1, there are still several districts that are categorized as food-deficit areas, such as OKU, Pali and Muratara. This condition is shown by an NCPR score greater than 1, indicating that food vulnerability still looms over South Sumatra Province. The NCPR (Normative Consumption per capita Ratio) is the ratio of normative consumption per capita to the net per capita availability of rice, maize, and tubers. The normative value of cereal consumption is 300 g per capita per day, and the net availability of cereals per capita per day is calculated by dividing the total availability of cereals by the total population. If the normative consumption ratio of an area is greater than 1, it is considered a food-deficit area, while a ratio of less than 1 indicates a surplus area for cereal production, as in the sketch below.
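A minimal sketch of the NCPR computation, using the normative value quoted above; the availability figures are made-up examples, not data from Table 1.1:

```python
def ncpr(net_availability, normative=300.0):
    """Normative Consumption per capita Ratio (grams per capita per day).

    A value > 1 indicates a food-deficit area, < 1 a surplus area."""
    return normative / net_availability

print(ncpr(350.0))   # 0.857 -> surplus district
print(ncpr(250.0))   # 1.200 -> deficit district
```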
The government has implemented a National Food Security program as a solution to overcome food insecurity in Indonesia. Food insecurity can be reduced only if food security is achieved at the household level. Thus, the fulfillment of food needs at the household level is an important issue for the government. The government needs information such as the number and characteristics of households experiencing food insecurity, as well as its causes, in order to reduce food insecurity.
Food insecurity is also a central issue in the dynamics of human life, as the elimination of poverty and food insecurity is noted as one of the main goals of the Sustainable Development Goals (Dalgleish et al., 2007). Despite its title as a food and energy barn, South Sumatra faces an inability to provide food supplies adequate to achieve food security for its households and individuals. Therefore, an analysis of the number of people who experience food insecurity, as well as its causes, is needed to decrease food insecurity in South Sumatra Province and to prove the title rightful for this province.
In addition, several food security studies have been carried out in Indonesia, but they have generally used macro data and indicators. Studies using micro data are still few, even though food security is very dependent on individual conditions within a household. Therefore, this study aims to show the relationship between individual characteristics and household food security using micro (individual-level) data in South Sumatera.
Food security is simply described as adequate food availability. Initially the definition of food security applied only at the national level, to measure food self-sufficiency; the concept was then expanded to the household level. Households are considered to have food security if they have the ability to obtain the food needed by all of their members (Pinstrup-Andersen, 2009). The Rome Declaration described food security in a wider scope: it occurs when everyone at all times has physical and economic access to food that is sufficient, safe and nutritious to fulfill their dietary needs and food preferences for an active and healthy life (FAO, 1996).
On the contrary, food insecurity is the inability of individuals or households to access decent food in terms of quantity and quality (FAO IFAD UNICEF, 2017). Food insecurity can also be interpreted as the insecurity experienced by households and individuals when there is no certainty about future food availability and access, when the amount or type (quality) of food needed for healthy living is insufficient, or when socially unacceptable methods must be used to obtain food (Barrett, 2010).
Food insecurity has also been described as the condition of food insufficiency experienced by regions, communities or households at certain times, relative to the standards of physiological need and public health. Sufficient food consumption is an absolute requirement for the realization of household food security. Food insecurity can be illustrated by changes in food consumption that lead to a decrease in quantity and quality, including changes in the frequency of consumption of staple foods. Food insecurity in Indonesia is not a problem of low food production but rather a problem of food distribution patterns. Food insecurity is a reflection of the adequacy of food and individual nutrition in communities or community groups in an area, as a result of the inaccessibility of food, whether physically, socially or economically (Purwantini, 2015).
The FAO describes four main dimensions of food security: physical availability of food, economic and physical access to food, food utilization, and the stability of the other three dimensions over time. Availability of food (food supply) is the physical existence of a choice and quantity of nutritious food sufficient to meet consumer needs at competitive prices. The adequacy of the food supply is determined by factors such as the location and accessibility of retailers and outlets, the availability of food in outlets, as well as the price, quality, variety, and promotion of food (FAO, 1996).
Food access (food demand) is the ability of consumers to obtain food that is safe, affordable, competitively priced, culturally acceptable and nutritious by using physical or financial resources. Access depends on individual financial resources and total household expenditure, physical mobility, distance and availability of transportation to food stores, and food preferences (FAO, 1996).
Utilization includes food preparation, cooking and storage facilities, and incorporates food safety issues. It depends on food preferences, which are influenced by eating habits and socio-cultural factors, as well as nutritional knowledge and the impact of time availability on the individual's ability to prepare healthy food. Food security can be experienced at the national, community, household or individual level. The focus of this study is on individuals' and households' access to food rather than on the other dimensions of food security.
Research conducted by Saliem et al. (2002) shows that food insecure households are characterized by: a) a household head and wife of productive age, low education, and children who have dropped out of school; b) limited control of agricultural land and livestock; c) not all households storing staple food, and even those that do keeping only small amounts; d) an average income below the poverty line, with most income coming from the agricultural sector; and e) a very dominant share of food expenditure, the largest proportion of which goes to the grains group.
Research on food insecurity is carried out not only in Indonesia but also in many other developing countries. A study on food insecurity and child malnutrition in Nigeria shows that rural people have a higher level of food insecurity. Furthermore, the environmental factors affecting food insecurity are access to water safe for cooking and drinking, cooking fuel, toilet facilities, the presence of electricity, the location of the kitchen, and arrangements. Arrangements are defined as whether the community is in an urban, suburban or rural area (Atoloye et al., 2015).
Other socio-economic factors that positively affect food security are gender, age, education level, cooperative membership, extension agent contact, farming experience, access to credit, income, and farm size. Other findings show that household size and the child dependency ratio negatively affect food security (Funmilola, M. & P.O., 2015; Oyekale, Ayegbokiki & Adebayo, 2017). The larger the size of the family, the greater the negative impact on household food security (Olayemi, 2012).
Household food security is positively affected by the following variables: a male household head, household members with agricultural and allied jobs, age of the household head, percentage of irrigated area, number of livestock owned by the household, and owner-operator status. Female-headed households are more vulnerable to food insecurity than male-headed households (Ibok et al., 2014; Zakari, Ying & Song, 2014; Joshi & Joshi, 2017).
Much of the literature finds that individuals at risk of food insecurity are also living in poverty. Food insecurity usually occurs among job seekers, the long-term sick, disabled people, households with children, single parents, homeless people, members of traveler communities, retirees, single people and households headed by persons with low education (King et al., 2015).
Besides, economic status (economic opportunity, access to land and economic power) unsurprisingly affects the high risk of food insecurity for women (Ivers and Cullen, 2011). Female-headed households are more likely to suffer food insecurity. Female headship pushes the woman to take responsibility for earning money while still performing the domestic chores, which adds to the burden of providing adequate food (Mallick & Rafi, 2010).
Food poverty refers to the inability to obtain or consume food of sufficient quantity or adequate quality in a socially acceptable manner, or uncertainty about whether a person can obtain food (Riches, 1997). Poverty is seen as the inability of individuals to fulfill basic food and non-food consumption needs for food, clothing, housing, education, health and other basic necessities. The food consumption threshold used to define the poor is less than 2100 calories per capita per day, equivalent to 320 kg/capita/year in rural areas and 480 kg/capita/year in urban areas. Minimum basic needs are translated into a financial measure in the form of money; the value of minimum basic needs is known as the poverty line. Residents whose income is below the poverty line are classified as poor (BPS, 2015).
The food poverty rates for male- and female-headed households in Lagos State, Nigeria, based on a food poverty line of N39,759.49, show that 36 percent of the sampled male-headed households live below 3,000 calories per day, whereas 80 percent of female-headed households do. This shows that food insecurity is higher among female-headed than male-headed households (Lawson, 2014).
Many studies have combined socio-economic factors when identifying the determinants of poverty. Research on the relationship between independent variables and poverty in Kenya, using two analytical methods (augmented regression and logistic regression), shows that land, education, household size, gender of the household head, household characteristics, access to facilities and the number of assets influence the poverty status of individuals in Kenya (Ngunyi et al., 2015). Other studies show that the important determinants of poverty in Nigeria are gender, employment, years of schooling, household size, per capita expenditure on health, education and food, and the number of people working (Edoumiekumo, Karimo & Stephen, 2013).
METHOD
This study aims to determine the phenomenon of food security in South Sumatera. The measurement is carried out using two standard calorie-intake references for comparison. The first reference, following the Indonesian Central Bureau of Statistics, is a minimum calorie intake of 2100 kcal; the second reference uses a 2500 kcal limit. This study uses the average daily calorie intake of household members, the socio-economic characteristics of household heads, and the number of household members in South Sumatra. Data are obtained from the National Socio-Economic Survey of March 2017, with a total sample of 9,752 households consisting of 3,099 urban households and 6,653 rural households.
The variables used are dependent and independent variables. The dependent variable is food insecurity status, obtained from the calculation of calorie intake per capita. The independent variables are divided into individual and household characteristics. Individual characteristics include age, gender, marital status, education, and work, whereas household characteristics include the number of household members, lighting and electricity, drinking water, receipt of government subsidized rice, and regional area.
This study also uses descriptive analysis: a simple analysis of the distribution of the data, presented in tabulations and figures, to describe the dynamics of food security by regional status during the study period. The statistical measure used in this descriptive analysis is the average value of household food expenditure in South Sumatra.
The methods used in calculating food security are the Shortfall/Surplus Index and the Head Count Ratio. The Food Security Index can be written mathematically as

FSI_j = x_j / z ... (8)

where x_j is the per-capita calorie intake of household j and z is the calorie intake threshold. The determinants of food insecurity are then modeled with binary logistic regression. The interpretation of a coefficient a_j is that if the value of x_j increases by 1 unit, the odds that Y = 1 increase by a factor of e^(a_j). When X = [x1, x2] and Y = [0,1], the odds ratio θ can be formulated as θ = [P(Y=1|x1)/P(Y=0|x1)] / [P(Y=1|x2)/P(Y=0|x2)]. When 1 < θ < ∞, the probability of Y = 1 is greater for X = x1 than for X = x2. Based on the binary logistic regression formula, the food insecurity determinant model used is:

Logit(security status) = b0 + b1 gender of household head + b2 residential location + b31 education1 + b32 education2 + b33 education3 + b41 number of household members1 + b42 number of household members2 + b5 working + b6 subsidized rice recipient + b7 drinking water + b8 lighting ... (11)

To ensure the significance of the logit model, model significance tests are needed. The Likelihood Ratio Test measures the simultaneous effect of all independent variables on the dependent variable, whereas the Wald test measures the individual effect of each independent variable. The Likelihood Ratio Test is performed using the G² statistic, G² = -2 ln(L0 / Lk), where L0 is the likelihood of the model without independent variables and Lk is the likelihood of the model with all independent variables. The hypotheses in this test are: H0: β1 = β2 = β3 = ... = βk = 0 (there is no relation between the independent variables and the dependent variable); H1: at least one βj ≠ 0 (at least one independent variable is related to the dependent variable). H0 is rejected if the significance level is less than α = 0.05, in which case the independent variables x simultaneously affect the dependent variable y. When H0 is rejected, at least one βj ≠ 0; to determine whether a particular βj is zero (not significant), the coefficient β is tested partially.
The hypotheses for the Wald test are:
H0: βj = 0 for one particular j; j = 0, 1, ..., p
H1: βj ≠ 0
The Wald statistic has a chi-square distribution with one degree of freedom, written symbolically as Wj ~ χ²(1). H0 is rejected if Wj > χ²(α,1), where α is the significance level. Rejection of H0 means that the independent variable is individually significant at significance level α.
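As an illustration of this estimation-and-testing sequence, the sketch below fits a binary logit in Python's statsmodels (the study itself uses SPSS) and computes the G² likelihood ratio statistic; the per-coefficient Wald tests appear in the summary as z-statistics, whose squares are the chi-square Wald statistics Wj described above. The variable names and simulated data are placeholders, not the survey's actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 500
# Placeholder covariates standing in for the study's variables
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n),      # 1 = female household head (assumed coding)
    "members": rng.integers(1, 8, n),     # number of household members
    "education": rng.integers(0, 4, n),   # education level of the head
})
true_logits = -1.1 + 0.33 * df["members"] - 0.25 * df["education"]
df["insecure"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logits)))

X = sm.add_constant(df[["gender", "members", "education"]])
fit = sm.Logit(df["insecure"], X).fit(disp=0)

# Likelihood ratio (G^2) test: H0 is that all slope coefficients are zero
g2 = -2.0 * (fit.llnull - fit.llf)
p_value = stats.chi2.sf(g2, df=X.shape[1] - 1)
print(f"G^2 = {g2:.2f}, p = {p_value:.4f}")
print(fit.summary())  # Wald z-statistics for each coefficient
```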
RESULTS AND DISCUSSION
When talking about food security, we first have to look at existing food stocks and production, especially in South Sumatra Province. South Sumatra Province has food potential that should be enjoyed by the entire population. This can be seen from the available production data for the main foods, including rice, meat, milk, and fish, which constitute the food intake generally consumed by the population. Rice is the staple food of the people of South Sumatra. Based on Figure 1, the increase in the population of South Sumatra has been accompanied by an increase in rice production every year. Rice production increased by 15 to 18 percent per year, while the population grew by 1 to 1.5 percent per year, so the growth of rice production exceeded population growth. If rice production is divided by population, each resident can receive about five quintals each year, whereas per capita rice consumption is only around 70-90 kg per year. Given the adequacy of rice stocks every year, the entire carbohydrate intake of the population should be met. Foods other than staples are complementary foods such as meat, eggs, fish, and milk. South Sumatra's meat production has continued to increase every year except in 2017, while there was a significant decrease in beef and fish production in 2016. The biggest potential for complementary food intake is fish, whose production reaches millions of tons per year. This is supported by South Sumatera's geographical conditions: the province is bordered by the sea and has many rivers.
If the production of complementary foods is compared with the population, a smaller figure is obtained than for rice, the staple food. Each resident receives around 60 to 80 kg per year from the existing production. The existing complementary food stock is enough to meet individual calorie intake.
Given the adequacy of the available staple and complementary food supply, it should be possible for the entire population of South Sumatra to meet its food needs. However, besides stocks, other factors affect individual food security: the distribution chain and the level of consumption, which is shaped by economic capacity, also influence food security.
Access to food relates to the ability of individuals to obtain enough good food from their own production, purchases, or gifts. Food stocks may be available in sufficient quantities in an area yet remain inaccessible to individuals because of physical, economic, and social limitations (Food Security Council & Ministry of Agriculture, 2018). Therefore, to see the condition of individuals' access to food, further analysis using the food security index is needed.
The FSI boundary lines used in this research are the 2100 kcal limit and the 2500 kcal limit. An FSI value above 1 indicates a food secure condition and, vice versa, a value below 1 indicates food insecurity. When the FSI is reviewed by district/city, the average FSI values appear homogeneous. Using the 2100 kcal limit, all regencies show secure food security status, whereas with the 2500 kcal limit, the majority of districts are insecure and only 3 districts show secure status, namely PALI, Lubuk Linggau and Banyuasin. The lowest average FSI values occurred in the Musi Rawas, North Musi Rawas and Ogan Komering Ilir Timur districts, with an average FSI value of only 1.06 (at the minimum limit of 2100 kcal).
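A minimal sketch of how the FSI and the headcount ratio behave, assuming the index is per-capita calorie intake divided by the threshold; the calorie values below are invented, since the study uses the National Socio-Economic Survey microdata.

```python
import numpy as np

# Hypothetical per-capita daily calorie intakes for a handful of households
calories = np.array([1850.0, 2250.0, 2600.0, 1990.0, 2420.0])

for threshold in (2100.0, 2500.0):
    fsi = calories / threshold         # FSI > 1 -> food secure, FSI < 1 -> insecure
    headcount = np.mean(fsi < 1.0)     # share of households below the threshold
    print(f"{threshold:.0f} kcal: mean FSI = {fsi.mean():.2f}, "
          f"headcount ratio = {headcount:.0%}")
```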
The FSI value of South Sumatra is 1.12 based on the 2100 kcal limit and 0.96 using the 2500 kcal limit. The latter result indicates that households in South Sumatra on average still lack calorie intake, because the FSI value of 0.96 is below 1. Although calorie intake is still lacking, the condition is mostly not severe, because the FSI value averages higher than 0.9. The Banyuasin, PALI and Lubuk Linggau districts show FSI values higher than 1 even at the 2500 kcal limit. The pattern of food distribution gives information on the distribution and availability of food in vulnerable areas. The population's dependency on rice in South Sumatra makes it important to maintain rice availability and quality. In addition, diversity of food consumption (protein, fat, vitamins, and minerals) should be encouraged: greater dietary diversity will increase calorie intake, improve nutritional intake and reduce stunting rates in infants and toddlers. Tables 4 and 5 present the food security status at both the 2100 kcal and 2500 kcal limits. The percentage of households by food security status is calculated using the headcount ratio. Based on the calorie intake limit of 2100 kcal, there is no significant difference between the shares of the population prone to food insecurity in rural (44.27 percent) and urban areas (44.88 percent). Based on the 2500 kcal limit, the percentage of households prone to food insecurity becomes higher: 72.10 percent in urban areas and 71.57 percent in rural areas (source: National Socio-Economic Survey, 2017, processed). These percentages show that people in urban areas face food insecurity slightly more often than people living in rural areas. At the 2500 kcal limit, more than half of the population in both urban and rural areas lives in food insecurity, although around 30 percent of the population already consumes more than the 2100 kcal limit. People living in urban areas face food insecurity driven by factors such as difficult living conditions, local environmental risks, and limited access to markets (Craverio, 2016). The higher cost of living in urban areas may also affect households' financial ability to obtain an adequate amount and variety of food.
The trend of urbanization also leads some people in urban areas to live below the standard. As a result of stiff competition in the labor market, many migrants become unemployed.
Urbanization is positively correlated with urban poverty (Zhang, 2016). The results show that 59.72 percent of the unemployed experience food insecurity under the 2100 kcal standard; moreover, under the 2500 kcal standard, 81 percent of the unemployed still struggle to consume an adequate amount and variety of food. Other factors, such as urbanization, poverty, and income, need further research.
The 2100 kcal and 2500 kcal standards show the same result by gender of the household head: male-headed households are more likely to experience food insecurity than female-headed ones. This is contrary to previous studies (Lawson, 2014; Ibok et al., 2014) that found female-headed households more likely to face food insecurity. The contrary result could stem from social conditions within the household. Previous research explains that developing countries still depend on the agricultural sector as the main source of food for direct and raw consumption (Dunga & Dunga, 2017). Male household heads might prefer to consume cheaper and less nutritious food, which leads to food insecurity. Lack of education can also lead to food insecurity: the results show that the higher the education level of the household head, the smaller the number of households experiencing food insecurity. Dunga and Dunga (2017) also show that education plays an important role in vulnerability to food insecurity in Melawi. This is supported by the present findings: the higher the education in urban and rural areas of South Sumatera, the less the food insecurity. Household heads with no education face food insecurity at rates above 70 percent under the 2500 kcal standard. Education levels may determine the ability to absorb information and the income earned (Kumba, 2015); a higher educational background is likely to lower food insecurity. Under the 2100 kcal standard, 70.54 percent of heads with a bachelor's degree or above are food secure, compared with 53 percent of those with no education; under the 2500 kcal standard the gap is wider, with 42.48 percent versus 22.37 percent food secure. The lower the education level, the lower the percentage of food security, supporting King et al. (2015), who find that low education leads to food insecurity.
This indicates that education plays an important role in fulfilling the food intake of the population. Besides increasing knowledge of, and access to, information on the importance of adequate food, education is also generally positively correlated with the welfare status of households: the more prosperous a household, the greater the resources it has for meeting its food needs.
Educational background also plays an important role in income earned. The unemployed are likely to face food insecurity under the 2500 kcal standard: 81.06 percent of the unemployed face food insecurity. Unemployment of the household head or of household members significantly affects household food insecurity (Huang et al., 2014). Beyond the inability to find a job, unemployed household heads may be household leaders with no activities, and household leaders with no activities show a greater incidence of food insecurity than household leaders with daily routine activities. Heads of household with no activities are dominated by the elderly, who are no longer able to carry out any activities. This indirectly shows that elderly households have a greater tendency to experience food insecurity, which is an issue because the elderly are no longer able to work well to make a living, making them a vulnerable group.
The determinants of food insecurity in urban and rural areas in South Sumatra are explained by the estimated binary logistic regression model:

ln(p / (1 - p)) = -1.103 - 0.31 Gender - ... (15)

The result shows that all social, economic, and demographic variables significantly affect household food security status in South Sumatra. For model testing, the -2 Log likelihood statistic in the Model Summary table is greater than the chi-square value and its significance is less than 0.001, meaning that the overall model is significant at α = 0.001. Hence, the chosen model best describes the data. The Nagelkerke R-square value is 0.125, which means the model explains 12.5 percent of the variation in household food security status in South Sumatra, while the rest is influenced by other variables not included in the model.
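The Nagelkerke R-square quoted above can be recovered from the null-model and full-model log-likelihoods; a sketch with placeholder likelihood values (the paper reports only the final figure of 0.125 for its 9,752 households):

```python
import numpy as np

def nagelkerke_r2(ll_null: float, ll_full: float, n: int) -> float:
    """Nagelkerke pseudo R-square: Cox-Snell R^2 rescaled to a [0, 1] range."""
    cox_snell = 1.0 - np.exp((2.0 / n) * (ll_null - ll_full))
    max_cox_snell = 1.0 - np.exp((2.0 / n) * ll_null)
    return cox_snell / max_cox_snell

# Placeholder log-likelihoods, not the study's actual values
print(round(nagelkerke_r2(ll_null=-6500.0, ll_full=-6100.0, n=9752), 3))
```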
Based on equation (15), the coefficient for gender is negative. This shows that female-headed households tend to be less likely to suffer food insecurity: the odds for households with a female head are 0.969 times those for households with a male head. This finding contrasts with the previous studies by Lawson (2014) and Ibok et al. (2014), in which female-headed households were more likely to suffer food insecurity.
There are 5 age groups in this analysis, namely under 30 years, 31-40, 41-50, 51-60 and above 60 years. From equation (15), the coefficient value for all age groups is negative, with age 30 and under as the reference category. This indicates that households whose head is over 30 have a lower tendency toward food insecurity than households whose head is 30 or below. The coefficient for the first age group is -0.073, meaning that the odds of food insecurity for households whose head is aged 31 to 40 are 0.930 times those of households whose head is 30 years old or below.
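The "times" figures quoted throughout this section are simply exp(b) of the reported logit coefficients, for example:

```python
import math

# Coefficients as reported in the text
for name, b in [("age 31-40 (vs. 30 and under)", -0.073),
                ("each additional household member", 0.333)]:
    print(f"{name}: b = {b:+.3f} -> odds ratio exp(b) = {math.exp(b):.3f}")
# prints 0.930 and 1.395, matching the interpretations given in the text
```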
Maturity of the household head affects psychological and financial stability. The older the household head, the more experience they have in finding food sources and hence the more established they are in managing family finances. Ibok et al. (2014) find that food security is positively affected by several factors, one of which is the age of the household head.
The coefficients of the marital status variables are positive. This shows that households whose head is married, divorced, or widowed tend to be more vulnerable to food insecurity than those whose head is single. The odds of a married-head household experiencing food insecurity are 1.350 times those of a single-head household in South Sumatra. Marital status is closely related to the number of household members: heads of households who are, or have been, married need to fulfill their children's needs for food as well as other material needs, and divorced women also face financial problems and struggle to provide food while handling other chores. This is in line with Yusuf et al. (2015), who state that married households have higher food security than others, because a married household head implies a greater number of household members who can supply labor to increase income. However, this differs from the study of Haliu and Regassa (2007), which stated that divorced, widowed, and single-headed households have higher food security than married-headed households; that research shows single-headed households to be more food resistant because they are associated with small family size.
The number-of-household-members variable has a positive sign with a coefficient of 0.333. The result supports Olayemi (2012), who found that the greater the number of household members, the greater the chance of facing food insecurity. Each additional household member increases the log-odds of food insecurity by 0.333 (multiplying the odds by about 1.40). The number of household members is closely related to the amount of food that must be available in the household: the more members at home, the more food must be provided.
According to Antwi et al. (2018), there was a negative correlation between food security and household size in Ghana: the greater the number of household members, the lower the household food security. When a large number of household members is not accompanied by an increase in available food resources, individual food consumption becomes insufficient. These results are in line with the research of Abele et al. (2015) and Habyarimana (2015).
The level of education shows a negative coefficient value. A household head with any level of education has odds of experiencing food insecurity 0.764 times those of a household whose head has no education at all. The regression result shows that the higher the education level of the household head, the lower the risk of the household experiencing food insecurity. Higher education opens up opportunities for individuals to earn higher income. In addition, information about good food intake for the body, as well as better food access, is usually available to households with more educated heads.
According to equation (15), the coefficient of the working status variable is -0.129. This shows that the odds of food insecurity for households with an unemployed head are 0.879 times those of households with an employed head. A possible explanation is that unemployed household heads are supported by other household members who work and provide for the needs of the household.
Households using non-government-provided electricity as a lighting source have a higher tendency to experience household food insecurity: their odds are 1.174 times those of households using government-provided electricity. Households that do not use any electricity have odds 1.860 times those of households with government-provided electricity. The presence of electricity indicates the location of the household: households without access to electricity are usually in isolated locations or are poor households. This affects the distribution of foodstuffs and implies a lack of access to food due to economic factors.
All coefficients for the drinking water variables are positive. Based on equation (14), the coefficient value of 0.177 for clean water 1 shows that households relying on refilled drinking water have odds of experiencing food insecurity 1.194 times those of households that use bottled water. Likewise, the coefficient value of 0.354 for drinking water 5 means that the odds for households using other drinking water sources (such as rainwater) are 1.425 times those of households that use bottled water. Generally, bottled water is more expensive than other sources of drinking water, so households that consume bottled water indicate better welfare standards.
The subsidized rice variable refers to the government-subsidized rice received by households. According to equation (15), the coefficient for subsidized rice is -0.095. This means that the odds of experiencing food insecurity for households that did not receive subsidized rice are 0.909 times those of recipient households. This indirectly shows that the targeting of subsidized rice has been appropriate: the recipients are households vulnerable to insufficient calorie intake, so the subsidized rice helps the food intake of the households that need it. The subsidy is provided by the government for those living below the standard, reflecting the poverty of those in society who cannot afford food of adequate quantity and quality.
The tendency of the rural population to experience food insecurity is smaller than that of the urban population, with odds of 0.765 times. This is related to the availability of natural resources in rural areas, whereas in urban areas natural resources are scarce. In addition, economic factors (standard of living, earnings, and prices) are lower in rural than in urban areas. The most important variable for understanding food insecurity in the districts of South Sumatera, when analyzing the other factors, is poverty: poverty leads households to cut the budget for the quantity and quality of food consumed. Hadley et al. (2011) concluded that poverty plays an important role in understanding food insecurity risk in both rural and urban areas.
CONCLUSION
Food security is not only a matter of sufficient food production for the entire population, but also a problem of individual access to food itself. Individual access to high-quality and varied food is urgently needed. The condition of food security in South Sumatra is within a safe limit under the 2100 kcal calorie intake threshold; however, under the 2500 kcal threshold, food security in South Sumatra has not yet reached a safe stage. Judging from household socio-economic factors, the number of household members and the education of household heads are the dominant factors influencing household food security in South Sumatra. The number of household members is directly related to the amount of food that must be available in the household: the more household members, the more food must be provided by the household head. Meanwhile, education is indirectly related to the level of income earned: the higher the education level, the higher the income earned. Financial sufficiency has a positive influence on meeting the food needs of all household members.
The findings imply that the main determinant of food insecurity in the districts of South Sumatra is educational background, which leads to lower earned income or, worse, unemployment. The study recommends that family planning and educational program policies actually be implemented in South Sumatra, and indeed across Indonesia, to improve household food security status. In addition, poverty alleviation programs can also be a solution to improve food security, for example by providing rice for poor households (Raskin). The government could also provide education in managing land and crops, given that South Sumatra still depends mainly on primary sectors such as agriculture. Modern technology and better-educated farmers would bring food security to households in South Sumatera.
The urgency of maintaining food security in South Sumatera is closely related to economic development. Severe food insecurity would force higher government spending on food and public health and would risk a fall in gross domestic product. Good food provides good health, which affects efficiency, productivity, and income, which in turn has an impact on economic growth and serves as the basis for achieving sustainable economic growth. On the other hand, food insecurity reflects the economic conditions of households that have no, or only limited, access to adequate food, which indicates that food insecurity traces back to poverty.
This study uses micro data, while many other studies use macro data. However, the data lag (1 year) and the limited types of variables are weaknesses of this study. Future studies can use more up-to-date and more diverse data. | 2020-03-19T19:30:31.099Z | 2019-12-27T00:00:00.000 | {
"year": 2019,
"sha1": "b4c12205eacba7f5afbf32f78212ba52c9cb1bbf",
"oa_license": "CCBY",
"oa_url": "https://journal.unnes.ac.id/nju/index.php/jejak/article/download/20264/10030",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "154fde7e5ebb110f87f20554bd81444771203414",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Geography"
]
} |
55323954 | pes2o/s2orc | v3-fos-license | White wines from Narince grapes: impact of two different grape provenances on phenolic and volatile composition
Methods and results: Samples were subjected to physicochemical, total phenolics, individual phenolics and aroma compounds analyses. Gallic acid content of the Erbaa and Emirseyit wines at the end of fermentation was respectively 3.49 mg/L and 3.09 mg/L; (+)-catechin content 23.46 mg/L and 21.30 mg/L; and (-)-epicatechin content 9.46 mg/L and 8.74 mg/L. The differences in gallic acid and (-)-epicatechin contents of the wines produced from the grapes harvested from Erbaa and Emirseyit were found to be significant at the end of fermentation. A total of 31 aroma compounds were also analyzed in the wines. The aroma substances were the same in both wines (with the exception of E-3-hexanol found exclusively in Erbaa wines), but the levels were different: the wines produced from the grapes harvested from Erbaa (205605.32 μg/L) had higher total aroma compounds than the wines produced from the grapes harvested from Emirseyit (179547.85 μg/L).
Introduction
Local differences may influence the development of grapevines, the ripening of grapes, and the composition and sensory characteristics of wines. Quality wines get their characteristic features from the places where the grapes are produced. The location of the vineyard and local conditions (soil, climate, topography) influence wine quality and style. The concept of "terroir", used in checking the origin of wines, is defined by the geographical location, topography, climate and solar radiation of the region in which the grapes are produced (Li et al., 2011).
Phenolic compounds are secondary metabolites of plants, and they are the most common compounds in plants. They constitute a chemically heterogeneous group, and today there are almost 10000 compounds with an already defined structure (Taiz and Zeiger, 2008). Phenolic compounds are one of the most significant quality criteria of wines, contributing specific flavors to the wine (Proestos et al., 2005). The phenolic compounds of wines mostly come from the grape (Ali et al., 2010). They are influenced by many factors, mainly geographical origin. The phenolic compounds of wines and grapes are thus greatly influenced by "terroir" (Li et al., 2011). It was reported in previous studies that the phenolic compounds of white wines have higher absorption rates in human metabolism and may contribute positively to the prevention of ischemia-reperfusion injury of the heart. Also, the phenolic compounds of white wines have higher antioxidant activity, are better at preventing blood serum lipid oxidation, and have higher cytotoxicity against normal peripheral mononuclear blood cells (Nardini et al., 2009).
Aroma is another significant quality criterion in wines. Wines have a quite complex aromatic structure composed of several aroma compounds (San-Juan et al., 2011). Grape cultivar, environmental factors (climate and soil), fermentation conditions (yeast flora, pH and temperature), technological processes used in wine production and wine aging conditions are the basic factors influencing the formation of aroma compounds (Cabredo-Pinillos et al., 2008).
There are several local grape cultivars grown for white and red wine production in Turkey. Narince is a white grape cultivar indigenous to Tokat province, and it is also grown in different parts of Turkey (Kiliç et al., 2007). The Narince grape cultivar is grown in several villages located in the northern and southern parts of the Kazova Valley 3 km away from the town of Turhal (Tokat province), in some villages of the Turhal and Zile districts, and in several villages of the Niksar and Erbaa districts located in the Kelkit Valley region (Astan, 2006). Narince is a local grape cultivar processed into the best dry and semi-dry wines. Since it is a late-ripening cultivar, harvest generally takes place in early October. Narince wines have a green-yellow color, fruity aromas, and a compact structure. Since their acid ratios are good, they are quite suitable for aging (Buhurcu, 2004).
There are several studies worldwide on phenolic compounds and aroma substances to classify wines based on their geographical origins (terroir), but such studies are quite limited in Turkey. Therefore, there is a need for systematic studies dealing with local grape cultivars grown in different parts of Turkey. In the present study, the aroma and phenolic compounds of wines produced from Narince grapes harvested from two different localities (Erbaa and Emirseyit) of Tokat province were determined, and their effects on wine quality were investigated.
Grapes and wines
Narince grapes harvested (2013) from two different localities of Tokat province (Erbaa and Emirseyit) were used in this study. Narince is a white table grape cultivar grown in Tokat province in the Middle Black Sea region of Turkey. Vines are cultivated using a bilateral cordon system. The altitudes of the vineyards from which the grapes were harvested in Erbaa and Emirseyit were respectively 360 m and 665 m. The vines are 12 years old. Two different wines were produced from the grapes harvested from Erbaa and Emirseyit. Wine production was performed at the facilities of Diren Wines Co. Analyses of the wines were made at the laboratories of the Food Engineering Department of Gaziosmanpaşa University, Faculty of Engineering and Natural Sciences.
Winemaking
Wines were produced from the grapes harvested from Erbaa and Emirseyit. Following mechanical destemming and crushing, grapes were pressed, placed into 20000-L stainless steel fermentation tanks with temperature control and mixing apparatus, and left for fermentation. The must was supplemented with 30 ppm SO2. For ethyl alcohol fermentation, tanks were supplemented with 20 g/hL Saccharomyces cerevisiae (Oenobrands, Montpellier, France).
Alcohol fermentation was performed at 21-24 °C. Temperature and density measurements were performed daily throughout the fermentation process.
Samples to be analyzed were taken at the beginning and end of fermentation and at the end of the clarification process. Experiments were conducted in two replications.
Must and wine analyses
Total acidity, pH, reducing sugar, free and total SO2, density, alcohol content and volatile acid analyses were carried out in accordance with OIV (1990).
Total phenolic content
The Folin-Ciocalteu method as modified by Slinkard and Singleton was used to determine total phenolic content. Spectrophotometric determination of the total phenolic content was done with the Folin-Ciocalteu micro method as adapted for wine analysis (Waterhouse, 2002), using gallic acid as the standard. The calibration curve of absorbance versus concentration of the standard was used to quantify phenolic content. The calibration curve was prepared from gallic acid standards (at concentrations of 0, 50, 100, 150, 200 mg/mL in water). Results were expressed as mg gallic acid equivalents per liter of wine (mg GAE/L).
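A minimal sketch of the calibration-curve step: fit a line through the gallic acid standards and convert sample absorbances into mg GAE/L. Only the standard concentrations come from the text; the absorbance readings are invented for illustration, and the concentrations are treated here as mg/L.

```python
import numpy as np

# Gallic acid standard concentrations listed in the text (treated here as mg/L)
conc = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
absorbance = np.array([0.002, 0.110, 0.221, 0.334, 0.441])  # invented readings

# Inverse calibration: concentration expressed as a linear function of absorbance
slope, intercept = np.polyfit(absorbance, conc, 1)

sample_abs = np.array([0.350, 0.402])       # invented wine-sample absorbances
total_phenolics = slope * sample_abs + intercept
print(np.round(total_phenolics, 1))         # total phenolics in mg GAE/L
```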
Individual phenolic compounds
Gallic acid, (+)-catechin, (-)-epicatechin, vanillic acid, caffeic acid, p-coumaric acid, ferulic acid and quercetin contents were quantified by HPLC (High Performance Liquid Chromatography) (Bayram, 2011). All standards were supplied by Sigma-Aldrich. Samples were analyzed with a Shimadzu HPLC system. Detection and quantification were carried out with a CBM-20A Prominence system controller, an LC-20 AT Prominence pump, a CTO-10A SVp column oven and a SPD-M10AVP diode array detector with the wavelength set at 280 nm. Separation was performed on an Intersil C18 EPS-3 (250 x 4.6 mm, 3 μm ID) column. All chromatographic separations were carried out at 40 °C using gradient elution with mobile phases A and B. Stock solutions (1 mg/mL) of all standards were prepared with methyl alcohol. The standards were kept at -18 °C. Wine samples to be analyzed were filtered through a 0.45-μm (Millex-HV) membrane filter with a syringe. About 20-µL extract samples were directly analyzed. For quantitative analyses of phenolic acids, a UV-Vis/DAD detector and internal standards were used at 280 nm. A calibration curve was drawn for these standard compounds and samples were quantitatively assessed through this calibration graph. Gradient elution programs for phenolic compounds in HPLC are given in Table 1.
Aroma compounds
The liquid-liquid extraction technique was used in aroma analyses. Extractions were performed in three replications for each sample with dichloromethane (CH2Cl2) as solvent. In each extraction, a 100-mL wine sample was used. The wine sample was supplemented with 50 mL dichloromethane solvent and 40 μg internal standard (4-nonanol), and the resultant mixture was placed into a 500-mL Erlenmeyer flask. The mixture was stirred under nitrogen gas at 4-5 °C for 30 minutes with a magnetic stirrer. Then the sample mixture was centrifuged at 0 °C for 20 min (at 6000 rpm). Following centrifugation, the solvent phase containing the aroma substances was concentrated to 5 mL at 45 °C in a Vigreux concentrator. Then the 5-mL solvent phase was further concentrated to 0.5 mL in a microconcentrator. The concentrated extract was injected (3 μL) into a GC-MS (Gas Chromatography-Mass Spectrometry) device, and aroma substances were determined. To identify aroma substances, the Wiley 7.0 and NIST aroma substance libraries of the GC-MS, standard substances and Kovats Index values were used. Following the identification of the peaks, quantities of aroma substances were calculated through the internal standard method (Priser et al., 1997).
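Quantification by the internal standard method scales each analyte's peak area by the 4-nonanol peak area and by the amount of standard added per volume of wine; a sketch with invented peak areas, assuming a relative response factor of 1:

```python
# 40 ug of 4-nonanol internal standard added to a 100-mL (0.1-L) wine sample
IS_AMOUNT_UG = 40.0
SAMPLE_VOLUME_L = 0.1

# Invented GC-MS peak areas for illustration only
peak_areas = {
    "4-nonanol": 1.00e6,       # internal standard
    "isoamyl alcohol": 2.9e8,
    "ethyl hexanoate": 3.1e6,
}

for compound, area in peak_areas.items():
    if compound == "4-nonanol":
        continue
    # assumes each compound responds like the internal standard (factor = 1)
    conc_ug_per_l = (area / peak_areas["4-nonanol"]) * IS_AMOUNT_UG / SAMPLE_VOLUME_L
    print(f"{compound}: {conc_ug_per_l:.1f} ug/L")
```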
Statistical analyses
Statistical analyses were carried out with SPSS (Version 20.0) software, and Duncan's test was used to compare the means.
General chemical analyses for must and wines
Analysis results for the must obtained from Narince grapes harvested from Erbaa and Emirseyit are provided in Table 2, and analysis results for the wines after clarification are provided in Table 3.
Density of the wines produced from the grapes harvested from Erbaa and Emirseyit was 0.990 g/mL, and free SO2 content at the end of fermentation was respectively 23 mg/L and 22 mg/L. SO2 plays a significant role in wine production and aging, and in the prevention of wine spoilage and defects (Cabaroğlu and Canbaş, 1994).
Wines with a sugar content below 4 g/L are classified as dry wines (Turkish Food Codex, 2009). Based on this classification, all of the wines produced in this study can be classified as fully fermented dry wines.
Alcohol content of the wines produced from the grapes harvested from Erbaa and Emirseyit was respectively 12.3% and 12.4%. Alcohol is a significant component influencing characteristic taste and odor. Grape ripening level and variety may influence the alcohol content of wines (Jordão et al., 2015). According to the wine regulation of the Turkish Food Codex (2009), the actual alcohol content of wine by volume should be at least 9% and the total alcohol content should be a maximum of 15%.
Volatile acid content (expressed as acetic acid equivalent) of the wines produced from the grapes harvested from Erbaa and Emirseyit was respectively 0.323 g/L and 0.392 g/L. Volatile acids are formed during alcohol fermentation, and the majority of them is acetic acid. The amount of volatile acid depends on the must composition (acid, sugar, nitrogenous substances), yeast strain and fermentation conditions (Ough and Amerine, 1988). According to the wine regulation of the Turkish Food Codex (2009), volatile acid content (in acetic acid equivalent) should not be more than 18 meq/L for partially fermented grape must, 18 meq/L for white and pink/rose wines, and 20 meq/L for red wines. Current findings were consistent with the literature and were lower than the values specified in the wine regulation.
Total acidity (expressed as tartaric acid equivalent) of the wines produced from the grapes harvested from Erbaa and Emirseyit was respectively 4.18 g/L and 4.21 g/L; pH values of the wines were respectively 3.58 and 3.53. Acidity influences the taste and resistance of wines and brings freshness to the wines. It is also effective on the color tone, durability and taste of the wine (Navarre, 1988). According to the wine regulation of the Turkish Food Codex (2009), the total acidity of wines (expressed as tartaric acid equivalent) should be at least 3.5 g/L or 46.6 meq/L.
Total phenolic content of wines
Total phenolics of the wines produced from Narince grapes harvested from the different localities are provided in Table 4. Total phenolic content of the must obtained from the grapes harvested from Erbaa and Emirseyit was respectively 470.96 mg GAE/L and 515.88 mg GAE/L. Total phenolic content of the wines was respectively 443.39 mg GAE/L and 403.39 mg GAE/L at the end of the fermentation process, and 383.39 mg GAE/L and 412.56 mg GAE/L at the end of the clarification process (Table 4). The differences in total phenolics of the must and wines produced from the grapes harvested from the two different localities were not found to be significant. Shahidi and Naczk (1995) reported total phenolics of white wines as between 50-2000 mg/L. In another study carried out with Narince grapes of Tokat province, total phenolics of the wines was reported as 345 mg GAE/L (Şen, 2014). Bisson and Ribéreau-Gayon (1978) investigated the effects of cultivar and environmental conditions on phenolic compounds of Cabernet Franc, Merlot, Pinot Noir and Gamay grape cultivars grown in two different regions and reported that total and individual phenolics of black grapes varied with the locality. Besides these factors, the region where grapes are produced, soil characteristics and agricultural practices influence color components and phenolic compounds of the grapes (Ünsal, 2007).
Individual phenolic compounds of wines
Phenolic compounds of wines produced from Narince grapes harvested from the different localities are provided in Table 5.
(+)-Catechin was the major phenolic compound in the must obtained from Narince grapes harvested from Erbaa and Emirseyit, followed by (-)-epicatechin and gallic acid. Only the differences in (+)-catechin and p-coumaric acid contents of the must were found to be significant.
At the end of clarification, the greatest (+)-catechin, (-)-epicatechin, caffeic acid and gallic acid contents were observed in Erbaa wines and the greatest p-coumaric acid and vanillic acid contents were observed in Emirseyit wines. The differences in gallic acid, p-coumaric acid, vanillic acid and (-)-epicatechin contents between Erbaa and Emirseyit wines were found to be significant at the end of fermentation, while the differences in ferulic acid, p-coumaric acid, vanillic acid and (-)-epicatechin contents were found to be significant at the end of clarification.
Total phenolic acids of the Erbaa and Emirseyit musts were respectively 5.95 mg/L and 6.36 mg/L. At the end of the clarification process, total phenolic acids of Erbaa and Emirseyit wines were respectively 19.14 mg/L and 16.76 mg/L. The phenolic acid content of the wines was higher, and the flavonoid content lower, than in the must for both localities. Compared to the must, the gallic acid, p-coumaric acid and caffeic acid contents of the wines of both localities were higher at the end of clarification.
Phenolic acids are classified as hydroxycinnamic and hydroxybenzoic acids. Although hydroxycinnamic acids exist in fruits as esters, various natural conditions or technological processes result in the formation of hydroxycinnamic acids in free forms (Somers et al., 1987). The lower p-coumaric acid and ferulic acid contents of the must can be explained by the reduced polyphenoloxidase enzyme activity caused by SO2 addition to the must before fermentation, which prevents enzymatic degradation of complex hydroxycinnamic acids. Similar results were also reported by Budic-Leto and Lovric (2002).
Increased p-coumaric acid and ferulic acid contents at the end of fermentation as compared to the must may be related to possible hydrolysis of hydroxycinnamic acid esters like caftaric, coutaric and fertaric acid. Similar findings on this issue were also reported by Budic-Leto and Lovric (2002).
The effects of terroir on the phenolic compounds of various grape cultivars were reported in previous studies. Ünsal (2007) determined some phenolic compounds (gallic acid, (+)-catechin, (-)-epicatechin, vanillic acid and syringic acid) of wines produced through the classical maceration method from French and Turkish wine grapes (Kalecik Karası, Gamay and Cabernet sauvignon) harvested from the Mürefte and Hoşköy localities of Thrace with HPLC and compared the wines for these phenolic compounds.
Results revealed that gallic acid was the major phenolic in both localities and all three wines; the other phenolics varied between the wines. It was also observed that gallic acid, (+)-catechin and (-)-epicatechin contents were higher than vanillic acid and syringic acid contents in both locations and all wines. It was concluded in that study that each of the three cultivars was well adapted to the region, especially Cabernet sauvignon, which yielded quite strong wines rich in phenolic compounds.
In another study, Kelebek et al. (2010) investigated the effects of vineyard region (Denizli, Elazığ, Nevşehir, Ankara) on the phenolic compounds of red grapes (Öküzgözü, Kalecik Karası, Boğazkere). The Öküzgözü cultivar was found to be rich in (+)-catechin and the Kalecik Karası cultivar rich in (-)-epicatechin; the Boğazkere cultivar had low (+)-catechin and (-)-epicatechin contents, but high procyanidin (B1, B2, B3 and B4) contents. With regard to colorless phenolic compounds, Boğazkere grapes of the Elazığ region were richer than the grapes of the Denizli region, and the grapes of the Nevşehir region were richer than the grapes of the Ankara region. With regard to colored phenolic compounds, differences were observed in wines: Öküzgözü wines had high (+)-catechin contents and Kalecik Karası wines had high (-)-epicatechin contents; Boğazkere wines had low catechin and epicatechin contents, but high trans-caftaric and trans-coutaric acid contents. With regard to colorless phenolic compounds, wines of the Elazığ region were found to be richer than the wines of the Denizli region. Kumšta et al. (2012) analyzed 43 different Riesling wines from four vintages and 16 different localities in six sub-viticultural regions and reported that the phenolic composition of the grapes and wines varied with the localities and the wines; wine regions were related to trans-resveratrol concentration. Lampíř and Pavloušek (2013) investigated the effects of regions on phenolic compounds of white wines produced from grapes grown in two different regions of the Czech Republic and reported that protocatechuic acid, p-hydroxybenzoic acid, caftaric acid, cis-piceid, (+)-catechin and (-)-epicatechin were significantly influenced by terroir.
Aroma compounds of wines
Descriptions and olfactory perception thresholds of aroma substances and the aroma compound contents of Erbaa and Emirseyit wines are provided in Table 6. While 31 aroma compounds were identified in wines produced from the grapes harvested from Erbaa, 30 aroma compounds were identified in wines produced from the grapes harvested from Emirseyit. The quantity of E-3-hexanol in Erbaa wines was 99.84 μg/L; the compound was not observed in Emirseyit wines. The other aroma substances were the same, but the levels were different: the wines produced from the grapes harvested from Erbaa had higher total aroma compounds (205605.32 μg/L) than the wines produced from the grapes harvested from Emirseyit (179547.85 μg/L). In both localities, alcohols were the most abundant aroma compounds, followed respectively by acids and esters. The differences in levels between the two wines were significant for 22 aroma compounds.
The levels of some of the volatile compounds are well correlated with the aromatic composition of wines made with grapes of the same varieties. Grape type and quality affect the chemical composition of the wines. Depending on the fermentation conditions and must treatments (temperature, micronutrients, vitamins and nitrogen composition of the must), S. cerevisiae produces different concentrations of aroma compounds (Carrau et al., 2008). The microflora of the grapes and the fermentation medium contribute to the final wine aroma by several mechanisms: firstly, by utilizing grape juice constituents and biotransforming them into aroma- or flavor-impacting components; secondly, by bringing enzymes that transform neutral grape compounds into flavor-active compounds; and lastly, by the de novo synthesis of many flavor-active primary and secondary metabolites (Fengmei et al., 2016). Also, many of the aroma and flavor compounds found in the finished wine come not from the grape, but rather from compounds formed during the primary (essential) or secondary metabolism of the wine yeast during alcoholic fermentation (Styger et al., 2011).
The total quantity of 11 higher alcohol compounds was 173024.96 µg/L in Erbaa wines and 145831.14 µg/L in Emirseyit wines. Among these alcohol compounds, isoamyl alcohol, phenylethyl alcohol and isobutyl alcohol were the most abundant in both localities.
Phenylethyl alcohol content was 37225.28 µg/L in Erbaa wines and 21568.90 µg/L in Emirseyit wines; isoamyl alcohol content was 120251.57 µg/L in Erbaa wines and 109069.58 µg/L in Emirseyit wines. Higher alcohols exist in aliphatic (straight-chain) and aromatic structures. Higher alcohols are secondary products of yeast metabolism. While they give a sharp and bitter taste to wine at high concentrations, they contribute to the fruity aroma of the wine at optimum concentrations (Lambrechts and Pretorius, 2000; Swiegers et al., 2005). Ribéreau-Gayon et al.
(2000) indicated that while higher alcohols give the desired aroma to wines at a total concentration below 300 mg/L, they negatively influence taste and odor at a total concentration above 400 mg/L. The higher alcohol content in the present study was lower than the value specified by Ribéreau-Gayon et al. (2000). Of the 11 aroma compounds, only two (isoamyl alcohol, phenylethyl alcohol) were determined above the odor perception threshold (Nykänen and Suomalainen, 1989). Based on their origins, esters can be gathered under two groups. The first group is composed of the acetates of higher alcohols and includes isoamyl acetate, isobutyl acetate, methyl acetate and 2-phenyl acetate; the second group is composed of ethyl esters of fatty acids and includes ethyl hexanoate, ethyl octanoate and ethyl decanoate (Etiévant, 1991). Based on odor activity values, ethyl hexanoate adds ripe banana aroma, isomethyl acetate adds pineapple aroma, isoamyl acetate adds banana aroma and 2-phenylethyl acetate adds fruit jam aroma to the wines (Antonelli et al., 1999; Çelik, 2012). Acetic acid (vinegar aroma), propionic acid (goat aroma) and butanoic acid (rancid butter aroma) influence the aroma of the wines. Except for acetic acid, wine acids are usually present below perception threshold levels (Rapp and Mandery, 1986; Costello, 2005). Selli et al. (2006), in a study carried out with Narince grapes, investigated the effects of maceration treatment (at 15 °C for 12 hours) on aroma compounds of the wines and reported hexanoic acid contents of 3019 µg/L in 1998 and 2932 µg/L in 1999 and octanoic acid contents of 5245 µg/L in 1998 and 5260 µg/L in 1999.
Carbonyl compounds are synthesized through carbohydrate or citric acid metabolism, lipid oxidation or amino acid reduction by microorganisms throughout the fermentation (Swiegers et al., 2005). In the present study, the acetoin content of the wines was 244.67 µg/L for Erbaa and 298.96 µg/L for Emirseyit. Acetoin content should not exceed the perception threshold value (150 mg/L). In the present study, wines of both localities had acetoin contents below the perception threshold value. Selli et al. (2006) reported acetoin contents of Narince grapes of 296 µg/L in 1998 and 223 µg/L in 1999.
γ-Butyrolactone is the most significant lactone compound formed during the fermentation process. This lactone is formed by the lactonization of γ-hydroxybutyric acid, which is formed through decarboxylation and deamination of glutamic acid through the Ehrlich pathway. This compound may also come directly from the grape (Ribéreau-Gayon et al., 2000). In white wines, lactone quantities may significantly increase when the grapes are processed with their stems. It was also reported in a previous study that gamma-lactones may greatly contribute to wine aroma; yeast strain and wine aging might significantly influence the quantities of these compounds (Rocha et al., 2004). In the present study, γ-butyrolactone content was 1896.48 µg/L for Erbaa and 1642.69 µg/L for Emirseyit.
Conclusion
In the present study, aroma and phenolic compounds of the wines produced from Narince grapes harvested from two different localities (Erbaa and Emirseyit) of Tokat province were analyzed. Results revealed that different localities and process stages influenced both the chemical composition and the phenolic compounds of the wines. Considering the total phenolics of the wines at the end of the fermentation and clarification stages, it was observed that Emirseyit wines had higher total phenolic contents, although the differences between the localities were not found to be significant. While the differences in (+)-catechin and caffeic acid contents of the wines were found to be significant, the differences in (-)-epicatechin, ferulic acid and vanillic acid contents were not. The identified aroma substances were similar in both localities, but the wines produced from the grapes harvested from Erbaa had higher levels of aroma compounds than the wines produced from the grapes of Emirseyit. Therefore, it was concluded that the differences in some individual phenolics and aroma compounds of wines produced from the grapes harvested from different localities were consistent with the concept of "terroir". In conclusion, there were no distinctive differences in total phenolics between wines produced from Narince grapes harvested from the two different localities, but there were differences in individual phenolics and aroma compounds.
Table 2. Physicochemical characteristics of grape musts
Results are presented as mean ± standard error (n = 3). * Expressed as tartaric acid equivalent.
Table 3. Physicochemical characteristics of wines
Results are presented as mean ± standard error (n = 3). * Expressed as tartaric acid equivalent; ** expressed as acetic acid equivalent.
Table 4. Total phenolic content of wines
Results are presented as mg/L gallic acid equivalent. Different capital letters in the same column indicate significant differences between wine production stages; different small letters in the same row indicate significant differences between localities (p < 0.05; n = 3).
Table 5. Some individual phenolic compounds of wines
Results are presented in mg/L. Different capital letters in the same column indicate significant differences between wine production stages; different small letters in the same row indicate significant differences between localities (p < 0.05; n = 3).
p-coumaric acid, ferulic acid). HPLC chromatograms of phenolic standards and Erbaa and Emirseyit wines are presented in Figure 1.
A total of 12 ester compounds were identified in the wines of the present study. The total quantity was 13410.045 µg/L for Erbaa wines and 16832.86 µg/L for Emirseyit wines. Of the 12 aroma compounds, five (isoamyl acetate, ethyl hexanoate, ethyl octanoate, ethyl decanoate, phenylethyl acetate) were determined above their perception threshold values. The perception threshold value of isoamyl acetate in wine is 30 µg/L. Isoamyl acetate content of the wines was 3962.48 µg/L for Erbaa and 5795.98 µg/L for Emirseyit. Isoamyl acetate gives banana aroma to wines. The perception threshold value of ethyl hexanoate in wine is 5 µg/L. Ethyl hexanoate content of the wines was 1223.91 µg/L for Erbaa and 1120.65 µg/L for Emirseyit. Ethyl hexanoate gives apple and banana aroma to wines. | 2018-12-05T02:47:36.938Z | 2018-06-25T00:00:00.000 | {
"year": 2018,
"sha1": "caa3b659555ac9b34d182f8f3d551056055f2f90",
"oa_license": "CCBY",
"oa_url": "https://oeno-one.eu/article/download/2114/4710",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "caa3b659555ac9b34d182f8f3d551056055f2f90",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
169790422 | pes2o/s2orc | v3-fos-license | Improvements to motor fuel taxation in Russia as impetus for sustainable development of cities
The article considers the role of the transportation system in the sustainable development of cities. The article pays special attention to the development dynamics of road transport in Russia and of the emissions of air pollutants by motor vehicles. The role and significance of fuel taxes in the system of government revenue are investigated. The author analyzes the existing system of fuel taxes in the Russian Federation and concludes that the mechanism of taxing motor fuel is based on consumption and does not take into account the adverse impact on the environment and the energy content of fuels. The author provides arguments for changing the mechanism of calculating the rate of the excise tax on motor fuel so that it factors in its energy efficiency and carbon dioxide emissions. A conclusion is drawn that the introduction of the proposed adjustments would produce a substantial fiscal impact on government revenues and would make it possible to eliminate the existing distortions for two competing sources of energy (petrol versus diesel fuel); would encourage the use and development of public transport and help reduce the emissions of air pollutants. The objective of the study is to work out proposals for improving fuel taxation that would also contribute to the sustainable development of cities.
Introduction
The sustainable development of cities has become a widely accepted strategy over the past decade, as the larger part of the world's population now lives in cities. In Russia, for example, 74 % of people are city dwellers. Sustainable development is the sum total of multiple factors, including:
- economic well-being, which acts as the development driver for modern cities;
- urban environment and infrastructure (including the transport system), which form the living conditions and standards of the population;
- environmental conditions.
The role of the transport system in the harmonious development of cities cannot be overestimated. The transport system determines the look and viability of a modern city [1]. According to Vuchic [2], transport is "the life force" of cities that connects all other subsystems and functions, including economic, social and other ones. Consequently, the efficiency of the urban transport system determines to a large extent the effective and reliable operation of other systems in a modern megalopolis. Road transport plays the most important role in the transport systems of modern cities. From 2000 to 2016, the car fleet in Russia grew 110 %, primarily because of an increase in the number of passenger cars (Table 1). In 2016, the share of cars in the total vehicle stock was 87.5 %. As the passenger car fleet grows, the use of public transport declines considerably. From 2000 to 2016, the number of public transport passenger journeys declined by 59.6 %, while passenger journeys by tram and trolleybus - the most environmentally friendly modes of public transport - decreased 5.6-fold. As a result, privately owned passenger cars account for 90 % of road traffic today. The share of public transport and commercial vehicles is below 10 % [3]. The growth in the number of privately owned cars brings about negative external effects that are most pronounced in big cities. The quality of atmospheric air has deteriorated considerably in some urbanized areas of the country, worsening public health problems and driving up mortality [4,5]. Motor vehicles are one of the main sources of toxic pollutants and greenhouse gases (including CO2) that are harmful to health. Cities are more prone to higher concentrations of pollutants because of motor vehicles [6]. Carbon dioxide (CO2) is the main constituent of the exhaust gases from internal combustion engines. Growing atmospheric concentrations of carbon dioxide cause climate change [7]. Pallavidino, Prandi et al. [8] consider passenger cars to be a major source of CO2 emissions.
Our analysis of the air pollutant emissions in Russia in 2000-16 shows that the volume of emitted pollutants did not increase, but the structure of emission sources changed. Stationary sources have been able to decrease emissions, while motor vehicles accounted for 45.1 % of air pollutant emissions in the country in 2016, up from 41.8 % in 2000 (Table 2). Privately owned passenger cars are the biggest contributors to air pollutant emissions. Moreover, Hollantz and Tamms [9] showed that a high concentration of cars in a city not only lowers the efficiency of the entire transport system of the city, but generally decreases the quality of life and safety for the entire urban community. Mayburov and Leontyeva [1] posit that all cities in Russia are in serious need of effective taxation and administration measures for regulating the development of various modes of city transport. The main goal of such regulation is to reduce the use of privately owned vehicles in urban areas and to drive up the use of public transport. Some economic studies [10,11] have proved the effectiveness of indirect taxes as a regulatory tool for decreasing the emissions of CO2 and other pollutants. Indirect taxes create pricing stimuli that force consumers to change their "polluting" behavior and act in a more eco-friendly and energy efficient way.
The excise tax on energy products (petrol, diesel fuel, motor oils) is included in the price of motor fuel and affects the cost of the journey. Today, the fuel tax rates in Russia do not factor in fuel efficiency and CO2 emissions. Changing the mechanism of computing fuel tax rates to take these factors into account would increase the tax rates and the price of motor fuels and, consequently, provide an incentive for motorists to use public transport. As a result, one should expect a substantial increase in government revenues, lower air pollutant emissions, improvements in public health and better efficiency of the urban transport system. In other words, positive changes would occur in all key components of the sustainable development of cities.
Research methods
The author used various theoretical and empirical research methods. The theoretical methods employed included analysis, synthesis, generalization and classification. The analysis of the system of energy taxes in Russia showed that the mechanism of taxing motor fuel is based on the volume consumed but does not take into account negative environmental impacts and the energy content of the fuel.
The empirical methods, which included observation and comparison, were used to identify the main economic trends and to substantiate and select a mechanism of computing the fuel tax rate in Russia that would factor in fuel efficiency and the level of CO2 emissions.
Works by Russian and foreign scholars served as the methodological and theoretical foundation of the study. The data sources for the study included statutes and regulations, data of the Federal State Statistics Service (gks.ru) and the European Commission (ec.europa.eu), the press, online resources and the author's own research findings.
Fiscal importance of fuel taxes in the Russian Federation
Excise taxes on motor fuel fall into the category of indirect energy taxes. Table 3 indicates the role and importance of excise taxes on fuel in the system of government revenues in Russia. Over the assessed period, government revenues from environmental taxes (excluding VAT and customs duties) as a share of GDP varied from 4.5 % to 5.1 %. Over the five-year period, the figure went down by 0.4 percentage points, but it is still considerably higher than in the EU member states (2.3 % to 2.5 % of GDP). The mineral extraction tax contributes the biggest share to government revenues from environmental taxes, accounting for an average of 3.8 % of GDP. Indirect environmental taxes generate much smaller revenue. For example, fuel tax revenues as a share of GDP vary from 0.4 % to 0.6 %, which is much lower than in the EU (1.9 % of GDP on average across the European Union). It is possible to conclude that the potential of environmental excise taxes is not fully utilised in the Russian Federation.
The primary taxable energy products that account for over 95 % of fuel tax revenues in Russia are petrol and diesel fuel. In 2016, revenues from the excise tax on petrol made up 65.7 %, or two thirds of total government revenues from fuel taxes. Meanwhile, the consumption of diesel fuel by all types of motor vehicles is four times higher than the consumption of petrol (18.5 m tonnes versus 4.6 m tonnes). This indicates a competitive distortion between the two main types of motor fuel.
Revenues from the excise tax on petrol grew the fastest between 2012 and 2016 (145.2 %). Revenues from an excise tax are usually an outcome of the consumption of the taxable product and changes in the tax rates. From 2012 to 2016, the actual volume of petrol consumption did not change considerably. In 2013, for example, petrol consumption increased 3.6 %, but then decreased annually by 0.2 to 0.8 %. The tax rate for petrol increased by 48.5 % to 80 % (Table 4) depending on the type of petrol. The slower growth in revenues from petrol tax, by contrast with the increase in the tax rate, is due to a transition to production of better-quality petrol that is taxed at a lower rate. Consequently, there has been an environmental effect of the tax because the consumption of greener petrol reduces the harmful impact on the environment.
In 2015, there was a drop in tax revenues from both petrol and diesel fuel for road use. That was due to a "tax maneuver", which is essentially a cut in the export customs duties and excises imposed on oil and oil products, along with a hike in the mineral resources extraction tax and a decrease in the tax rates on petrol (by 15 to 35 % depending on its environmental class) and diesel fuel (by 45 %). The maneuver was aimed at preventing an increase in the prices of oil products in the domestic market. The cut in the tax and customs duty rates was executed in line with an agreement of the Eurasian Economic Union on the establishment of a common market for oil products and crude oil and the unification of export duties.
Excise fuel taxes in Russia today
Petrol has been on the list of excisable products in Russia since 1996. Originally, the tax was levied as a percentage of the sales price. Since 1998, the tax has been levied as a fixed rouble-per-tonne rate that varies based on octane level. In 2001, diesel fuel was placed on the list of excisable products, and the tax rates grew considerably.
The most significant reform of fuel taxation took place in 2011, when a transition was made from tax rates pinned to octane ratings to tax rates varying by emissions standard type. Additionally, motor oils, straight-run gasoline, and other fuels were included in the list of excisable energy products in 2011. Between 2015 and 2018, there have not been any considerable changes to the list of excisable products (Table 4).
In Russia, as in European countries, fuel taxes are charged per unit volume of the consumed fuel, which serves as the tax base. The tax base does not reflect the amount of pollutants that the fuel contains, yet the tax rates vary by fuel environmental class. The rates are lower for greener fuels (Euro 5 petrol) and higher for the types of petrol that are below the Euro 5 standard.
Since the existing tax rates are based on fuel consumption but do not take into account the carbon footprint of the fuel and its energy content, the approach results in distortions for competing sources of energy, for example, petrol versus diesel fuel. Diesel fuel is far more energy efficient than petrol and causes considerably more damage to the environment. Some expert assessments find that petrol for road use generates negative environmental impacts of 5.9 euro cents per km, while diesel fuel generates 7.7 euro cents worth of environmental damage per km. However, both in Russia and the EU member states, diesel fuel is taxed at a lower rate than petrol despite the former's higher energy efficiency and environmental hazard. In Russia, however, the tax rates for petrol and diesel fuel differ by a wider margin. In 2018, the tax rate charged on petrol that is below the Euro 5 emissions standard was 70 % higher than the tax rate for diesel fuel, whereas in the EU the minimum tax rate for petrol is 27.6 % higher than that for diesel fuel. The actual tax rate difference in most EU countries is even narrower. In Russia, however, diesel fuel tax rates have been growing faster (up 120 % between 2015 and 2018) than those for petrol and much faster than the rate of inflation. The disproportion is due to the fact that petrol is primarily consumed by individuals using cars, while diesel fuel is largely consumed by transportation businesses (most freight vehicles and buses are powered by diesel fuel). A lower tax rate for diesel fuel and, consequently, a smaller share of the tax in the selling price work as an incentive for road freight transport and alleviate the tax burden for the cargo industry. Non-commercial consumption of diesel fuel has, however, been growing. Higher demand for diesel fuel has been pushing its price up in comparison with petrol prices.
The cost of producing diesel fuel is much lower than the cost of refining oil into petrol. The reason lies in the production technology: apart from straight distillation, petrol production incorporates such costly processes as isomerization, reforming, catalytic cracking with hydrotreatment, and alkylation, whereas the diesel fuel production process only includes fractional distillation and hydrotreatment [7]. The selling prices of diesel fuel and petrol, and the narrow gap between them, do not match the relevant costs of production and are largely due to the growing demand for diesel fuel in the market. At present, petrol stations sell petrol and diesel fuel at the same prices despite the fact that the cost of diesel fuel production and the excise tax on diesel fuel are much lower than those of petrol. Consequently, it is oil refineries, middlemen and petrol stations who get to keep the additional revenues generated by the higher demand for diesel fuel, whereas the government loses revenue because of the unjustifiably low tax rate.
The excise tax accounts for a fairly high share of the selling price of petrol and diesel fuel in European countries -from 30.5 % in Hungary to 46.4 % in the UK for diesel fuel, and 33.6 % in Hungary to 50 % in the Netherlands for petrol. In Russia, the excise tax accounts for an average of 24 % of the selling price of petrol and 16 % of the selling price of diesel fuel. Given the considerably lower tax rates on main fuels in Russia than in Europe, there is enough room for an increase.
In general, the motor fuel taxation mechanism does not appear to be optimal and needs improving.
Ways of improving excise taxes on motor fuel in Russia
The mechanism of charging environmental taxes in Russia and the EU countries is based on fuel consumption and does not take into account negative environmental impacts and the energy content of the products being consumed. In the Russian Federation, the environmental damage caused by the combustion of fuels is reflected in tax rates that vary by fuel eco-class and fuel use. Consequently, both in Russia and the EU, motor fuels and other energy products are taxed on the basis of their environmental impacts, but those impacts are not fully reflected in the taxes. As a result, there are distortions in the taxation of competing fuels (petrol versus diesel fuel) that show up in unreasonably low tax rates on diesel fuel compared with petrol and in the non-receipt of substantial tax revenues by the government.
A design of fuel excise tax that factors in the environmental impacts of fuel consumption should help align the tax burden with the environmental pollution caused by the taxed product, and therefore make taxation fairer and more equitable. The implementation of such an approach calls for a change to the mechanism of motor fuel taxation. It is necessary to eliminate the distortions between competing fuels and create incentives for better energy efficiency and emissions reduction.
The author believes that taxes on motor fuels must take into account their energy content and environmental impacts. Some authors [4,7] propose introducing a carbon tax. Today, a tax on CO2 emissions is used in Denmark, Ireland, Finland and Sweden, but it is not harmonized at EU level. It is worth taking a look at the European Commission's proposal to revise the Energy Tax Directive and adopt energy taxes that would be split into two components (formalized in the sketch below):
- one component would be based on the emissions of CO2 from the energy product and would be charged on a per-tonne basis;
- the other component would be based on the energy content of the product, rather than its use; the minimum rate was proposed to be set on a euro-per-GJ basis.
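Formally, the proposed two-component rate can be written as follows. The notation is introduced here purely for illustration and does not appear in the Directive proposal itself.

```latex
% Illustrative two-component excise rate per tonne of fuel:
%   e   : CO2 emitted per tonne of fuel burned (t CO2 / t fuel)
%   t_c : charge per tonne of CO2
%   E   : energy content of the fuel (GJ / t fuel)
%   t_e : charge per GJ of energy content
\[
  R_{\mathrm{fuel}} = e \, t_c + E \, t_e
\]
```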
In practice, though, it might be difficult for the taxpayer to determine the tax base (to measure CO2 emissions and the amount of generated energy) in compliance with the proposed mechanism.
That being said, the system of fuel taxation in the Russian Federation should be designed with the following considerations in mind (a worked arithmetic check follows the list):
1. The tax base remains the same: the volume of consumed fuel expressed in tonnes.
2. When computing the tax rate, it is necessary to factor in CO2 emissions and energy content.
3. CO2 emissions from the combustion of fuel depend on its density. The density of petrol is around 0.75 kg/L; the density of diesel fuel is around 0.84 kg/L. Carbon dioxide emissions amount to 3.134 tonnes per tonne of fuel for petrol-powered cars and 3.174 tonnes per tonne for diesel-powered cars. The calorific value of petrol is 30.8 MJ/L, or 41.06 GJ/t. The calorific value of diesel fuel is 36.3 MJ/L, or 43.22 GJ/t. The per-tonne charge for CO2 emissions is set at 575.6 RUB; the per-GJ charge for emitted energy is set at 11,305 RUB.
4. The tax rate for petrol remains unchanged.
5. The tax rate for diesel fuel will be 13,730 RUB per tonne.
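As an arithmetic check of point 3: a per-GJ charge of 11,305 RUB cannot be literal, since it would imply a diesel rate of roughly 490,000 RUB per tonne. The stated figures are mutually consistent, however, if 11,305 RUB is read as the per-tonne energy charge for petrol, which implies a per-GJ rate of about 275 RUB. The sketch below reproduces the proposed diesel rate of 13,730 RUB per tonne under that reading; the per-GJ rate is inferred rather than stated in the article, and the variable names are illustrative.

```python
# Reconstruction of the proposed two-component excise rate.
# Assumption: the text's "11,305 RUB" is the per-tonne energy charge for
# petrol, not a per-GJ rate; the implied per-GJ rate is then ~275 RUB.
CO2_CHARGE_RUB_PER_T = 575.6          # stated charge per tonne of CO2

FUELS = {
    # fuel: (t CO2 emitted per t fuel, energy content in GJ per t)
    "petrol": (3.134, 41.06),
    "diesel": (3.174, 43.22),
}

# Per-GJ energy charge inferred from the petrol figures: 11,305 / 41.06.
ENERGY_CHARGE_RUB_PER_GJ = 11305 / FUELS["petrol"][1]   # ~275.3 RUB/GJ

for fuel, (co2_factor, energy_content) in FUELS.items():
    co2_component = co2_factor * CO2_CHARGE_RUB_PER_T
    energy_component = energy_content * ENERGY_CHARGE_RUB_PER_GJ
    rate = co2_component + energy_component
    print(f"{fuel}: {co2_component:.0f} + {energy_component:.0f} "
          f"= {rate:.0f} RUB per tonne")
# Under this reading, diesel comes out at ~13,727 RUB/t, matching the
# 13,730 RUB/t proposed in the text to within rounding.
```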
The fiscal impact of the revised tax will be significant provided that the production and consumption of diesel fuel in Russia remain unchanged and the tax rate is set at 13,730 RUB per tonne of diesel fuel (an equivalent of 221.5 dollars per tonne). In that case, annual revenue from the excise tax is projected to grow by 122.5 bln RUB (an equivalent of 1,975.5 m dollars).
If implemented, the proposal would encourage the consumption of energy from sources which produce less CO2. The energy component of the tax rate would make it possible to eliminate the existing distortion of competition between petrol and diesel fuel. Taxation of fuel and energy products tied to energy content and CO2 emissions would encourage a more effective use of energy resources and reduce carbon dioxide emissions because the proposed mechanism sends a clear price signal to the consumer about the real energy value of the product he consumes. It would also be unnecessary to introduce a separate tax on carbon dioxide emissions.
The downside of a higher excise tax on diesel fuel is that it would drive up selling prices and, consequently, the expenditures of individuals and transportation companies and transportation costs in other industries, including agriculture. The price growth would be less significant compared to the hike in the excise tax if oil refineries and petrol stations reduced their profit margin. Higher transportation costs incurred by agricultural producers could be smoothed out with subsidies for agriculture businesses in order to keep agricultural commodity prices down.
The higher cost of motor fuel will spur a transition to more efficient and environmentally friendly vehicles and alternative fuels (natural gas, propane, electricity). Motorists will be encouraged to use public transport. Haulage businesses will have to streamline logistics. All of this will reduce car ownership rates and motor traffic in big cities, improve the environmental situation and public health and make the urban transport system more effective.
Conclusion
The analysis of the system of excise taxes on motor fuel for road use in Russia shows that the taxation mechanism is based on consumption and does not take into account negative environmental impacts and energy content. Moreover, there is a distortion of competition between the two main types of motor fuel - petrol and diesel fuel - that shows up in unreasonably low tax rates charged on diesel fuel compared with petrol and in the non-receipt of substantial tax revenues by the government. The existing motor fuel tax system does not fulfill the fiscal and regulatory functions of taxation effectively enough; it is not optimal and needs improvement. An excise tax tied to the energy efficiency and carbon dioxide emissions of fuel could generate additional public revenue of 122.5 bln RUB (an equivalent of 1,975.5 m dollars) annually, provided that the production and consumption of diesel fuel in Russia remain unchanged. Taxes are a powerful instrument for influencing taxpayers' behavior with economic means and encouraging them to use "green" fuel and environmentally friendly vehicles. The implementation of the proposed novelties will drive up tax rates and the selling prices of diesel fuel, thus reducing its consumption and spurring a transition to more efficient and environmentally friendly vehicles and alternative fuels. One should, therefore, expect an increase in public revenue, a decrease in pollutant emissions, a reduction in car traffic, improvements in the environmental situation and public health in cities, and better efficiency of urban transport systems, which will eventually contribute to the sustainable development of cities.
"year": 2018,
"sha1": "db90f3eb337f708a1f4f0b82b850ef9ffd0d8031",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/177/1/012023",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "086a25f437f3db290072f4a485478296b355714f",
"s2fieldsofstudy": [
"Environmental Science",
"Economics"
],
"extfieldsofstudy": [
"Business",
"Physics"
]
} |
254034890 | pes2o/s2orc | v3-fos-license | Thrombin increases the expression of cholesterol 25-hydroxylase in rat astrocytes after spinal cord injury
Astrocytes are important cellular centers of cholesterol synthesis and metabolism that help maintain normal physiological function at the organism level. Spinal cord injury results in aberrant cholesterol metabolism by astrocytes and excessive production of oxysterols, which have profound effects on neuropathology. 25-Hydroxycholesterol (25-HC), the main product of the membrane-associated enzyme cholesterol-25-hydroxylase (CH25H), plays important roles in mediating neuroinflammation. However, whether the abnormal astrocyte cholesterol metabolism induced by spinal cord injury contributes to the production of 25-HC, as well as the resulting pathological effects, remain unclear. In the present study, spinal cord injury-induced activation of thrombin was found to increase astrocyte CH25H expression. A protease-activated receptor 1 inhibitor was able to attenuate this effect in vitro and in vivo. In cultured primary astrocytes, thrombin interacted with protease-activated receptor 1, mainly through activation of the mitogen-activated protein kinase/nuclear factor-kappa B signaling pathway. Conditioned culture medium from astrocytes in which ch25h expression had been knocked down by siRNA reduced macrophage migration. Finally, injection of the protease activated receptor 1 inhibitor SCH79797 into rat neural sheaths following spinal cord injury reduced migration of microglia/macrophages to the injured site and largely restored motor function. Our results demonstrate a novel regulatory mechanism for thrombin-regulated cholesterol metabolism in astrocytes that could be used to develop anti-inflammatory drugs to treat patients with spinal cord injury.
Introduction
An estimated 133,000 to 226,000 cases of acute spinal cord injury (SCI) occur globally every year (Lee et al., 2014). A dominant pathological feature after primary SCI is secondary tissue damage, characterized by tissue edema, nerve cell necrosis, and inflammation, eventually leading to the formation of a spinal cavity (Ju et al., 2014; Schwab et al., 2014). The progressive neuropathology that occurs is partly attributed to disruption of the blood-spinal cord barrier, which results in infiltration of immune cells that promote ongoing damage within the lesion site microenvironment (Wang et al., 2019a). In addition, some blood-derived factors contribute to deterioration of the injury site milieu and have profound effects on cellular events within the injured spinal cord (Fu et al., 2020). If effective measures are not taken immediately after SCI, secondary tissue damage can worsen the neuropathology. Astrocytes are immediately activated following SCI, undergoing morphological, molecular, and functional changes that are collectively known as the glial reaction. The reactive astrocytes lose their ability to maintain homeostasis, and therefore switch dynamically between detrimental and beneficial effects on spinal cord recovery (Okada et al., 2018; Yoshizaki et al., 2021). To date, however, the mechanism by which astrocytes become reactive in response to SCI has not been elucidated.
Under normal physiological conditions, astrocytes perform a wide variety of essential functions in the central nervous system (CNS), including promoting development of the neural circuit, providing structural and nutrient support to the neurons, and regulating metabolite recycling (Sofroniew and Vinters, 2010; Tsai et al., 2012; Rodnight and Gottfried, 2013; Jayakumar et al., 2014). Astrocytes are also important cellular centers of lipid synthesis and metabolism, thereby contributing to CNS lipid homeostasis (Ioannou et al., 2019). Disruption of the astrocyte lipid metabolism pathway exacerbates the neuropathology of traumatic SCI and neurodegenerative brain disorders (van Deijk et al., 2017; Zhu et al., 2019). Several lines of evidence indicate that loss of astrocyte cholesterol synthesis affects neuronal development and function (Ferris et al., 2017), while aberrant lipid metabolism by astrocytes can result in the production of neurotoxic derivatives (Vaya and Schipper, 2007; Maki et al., 2009). SCI-induced neuronal necrosis leads to interrupted communication with astrocytes and a reduction in lipid transfer; thus, it is reasonable to assume that the astrocytes activate lipid metabolic pathways in response to their over-accumulation. However, the regulatory mechanism of astrocyte lipid metabolism has not yet been fully elucidated.
Oxysterol is a derivative of cholesterol that participates in several aspects of lipid metabolism (Russell, 2000). The oxysterol 25-hydroxycholesterol (25-HC) is the main product of the membrane-associated enzyme cholesterol-25-hydroxylase (CH25H), which catalyzes cholesterol by adding a second hydroxyl at position 25 (Cyster et al., 2014). 25-HC has been shown to participate in signaling pathways that regulate innate and adaptive immune responses by influencing macrophages (Castrillo et al., 2003; Wong et al., 2020), mast cells (Nunomura et al., 2010), T cells (Bensinger and Tontonoz, 2008), and B cells (Bauman et al., 2009). It also mediates the pathogenesis of chronic diseases such as Alzheimer's disease (Shibata et al., 2007) and multiple sclerosis (Forwell et al., 2016). We previously found that 25-HC was involved in macrophage migration inhibitory factor-induced neuropathological progression following SCI in a rat model (Zhu et al., 2019). Whether SCI-induced disruption of astrocyte metabolism promotes 25-HC production, and what the resulting pathological effects are, deserves further study.
Thrombin, a serine protease involved in hemostasis, is generated from blood-derived prothrombin by the combined actions of factors V and X in the presence of ionized calcium (Ca2+) at the site of vascular injury (Coughlin, 2005). The serine protease also induces a variety of protease-activated receptor (PAR)-mediated responses in the CNS that mediate neuropathology, including microglial activation, astrogliosis, demyelination, and other neurotoxicities (Suo et al., 2002; Niego et al., 2011; Burda et al., 2013; Yoon et al., 2013; Radulovic et al., 2016). Thrombin-induced cell events result from the proteolytic activation of PARs, from which the extracellular NH2 terminus is cleaved by thrombin, unmasking an amino acid sequence that acts as a tethered receptor ligand to initiate intracellular signaling (Grand et al., 1996). A total of four members of the PAR family, PAR1, 2, 3, and 4, have been identified, and PAR1, 3, and 4, but not PAR2, can be activated by thrombin (Coughlin, 2000; Bae et al., 2007). PAR1 is the most abundantly expressed PAR family member in the CNS (Whetstone et al., 2017). Because injury-induced activation of thrombin is a potent lipid metabolic mediator, and astrocytes are the primary cellular source of lipids in the CNS (Citron et al., 2016), we hypothesized that thrombin induces astrocytic expression of CH25H, which then influences neuropathology. In the present study, we explored dynamic changes in thrombin and CH25H expression at lesion sites following SCI in a rat model. The effects of thrombin on CH25H expression, as well as the underlying mechanisms, were investigated in vitro using primary cultured astrocytes. Finally, rat locomotor function was evaluated following administration of the PAR1 inhibitor SCH79797 to the lesion site of the contused cord.
Animals
Because male rats have fewer postoperative complications than females, which facilitates postoperative care (Patil et al., 2013), a total of 52 specific-pathogen-free adult male Sprague-Dawley rats aged 8-12 weeks, each weighing 180-220 g, from the Center of Experimental Animals, Nantong University (license No. 220196463), were used in this study. All procedures involving animals were approved by the Animal Care and Use Committee of Nantong University and the Animal Care Ethics Committee of Jiangsu Province (approval No. S20200323-217, January 1, 2021). All experiments were designed and reported according to the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines (Percie du Sert et al., 2020). The animals were housed in separate cages under a 12-hour light/dark cycle at 22 ± 2°C and 50% relative humidity. Food and water were provided ad libitum. Animals were sacrificed by CO2 asphyxia. Briefly, animals were put in a box, and the CO2 content of the container was increased gradually at a rate of 30-70% per minute until the animals were unconscious. Death was confirmed by cessation of movement and breathing, and dilated pupils for 2 minutes, after which the CO2 supply was interrupted.
Establishment of SCI rat model and drug treatment
The rat model of SCI was established as previously reported (Chehrehasa et al., 2014). Briefly, the animals were randomly divided into two groups (SCI + vehicle and SCI + SCH79797 groups, n = 24). All rats were anesthetized with 2% sodium pentobarbital (0.1 mL/kg, Sigma, St. Louis, MO, USA) by intraperitoneal injection. A skin incision was made from the eighth to the tenth thoracic vertebral level (T8-T10), and the paravertebral muscles were dissected, and the spinous process was removed at the ninth thoracic vertebral level (T9). Next, an IH-0400 Impactor (Precision Systems and Instrumentation, Lexington, KY, USA) was used to deliver a 150-kilodyne contusion injury from a height of 3 cm. Then, the impact rod was removed, the injury site was observed for edema and/or bleeding, and the wound was irrigated.
For drug delivery, the PAR1 inhibitor SCH79797 (R&D Systems, Shanghai, China) was fully dissolved in dimethyl sulfoxide (DMSO; Sigma) at a stock concentration of 10 mM and stored at -20°C. As the vehicle for SCH79797 contains DMSO, which is not biologically inert (Brayton, 1986), the 10 mM SCH79797 stock solution was prepared by adding 1 mg of SCH79797 to 0.225 mL of 2.2 mg/mL DMSO dissolved in 0.1 M phosphate-buffered saline (PBS).
For experimental use, the DMSO SCH79797 solution was diluted with 0.01 M PBS to a concentration of 5 mM and was slowly injected intrathecally (50 μg/kg, 4.5 μL of 5 mM SCH79797) before the incision was sutured. A vehicle control was also included for comparison purposes, which contained 0.225 mL of 2.2 mg/mL DMSO dissolved in 0.1 M PBS and was intrathecally delivered to rats at a dose of 4.5 μL. After surgery, penicillin (Sigma) was injected subcutaneously at a dose of 150 mg/kg for 1 week. The rats' bladders were squeezed by applying gentle pressure on the abdomen twice daily until spontaneous urination was restored.
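As a rough cross-check of the dose arithmetic above (not part of the original protocol), the sketch below converts 4.5 μL of 5 mM SCH79797 into a per-kilogram dose. The molecular weight of roughly 464 g/mol for the SCH79797 free base is an assumption taken from vendor listings, not from this article.

```python
# Cross-check of the intrathecal SCH79797 dose: 4.5 uL of a 5 mM solution
# versus the stated 50 ug/kg target for a ~200 g rat.
# MW ~464 g/mol for the SCH79797 free base is an assumption from vendor
# listings and does not appear in this article.
MW_G_PER_MOL = 464.0
volume_l = 4.5e-6           # 4.5 uL injected
conc_mol_per_l = 5e-3       # 5 mM working solution
rat_mass_kg = 0.2           # mid-range of the 180-220 g rats used

moles = volume_l * conc_mol_per_l            # ~2.25e-8 mol delivered
dose_ug = moles * MW_G_PER_MOL * 1e6         # ~10.4 ug total
dose_ug_per_kg = dose_ug / rat_mass_kg       # ~52 ug/kg, close to 50 ug/kg
print(f"{dose_ug:.1f} ug total, {dose_ug_per_kg:.0f} ug/kg")
```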
Cell culture and treatment
Primary astrocytes were cultured according to previously described methods (Zhou et al., 2018). Briefly, a total of 88 newborn, 1- to 2-day-old Sprague-Dawley rats from the Center of Experimental Animals, Nantong University were used. All animals were anesthetized by hypothermia and sacrificed by the liquid nitrogen quick-freezing method (Diesch et al., 2009; Mellor, 2010). The spinal cords of the newborn rats were removed and placed in 0.01 M PBS containing 1% penicillin-streptomycin. The spinal cord meninges were carefully peeled away using dissection tweezers and then subjected to enzymatic digestion (0.25% trypsin, 37°C, 15 minutes). After centrifugation at 160 × g for 5 minutes, the cells were suspended in Dulbecco's modified Eagle's medium (DMEM, Sigma) supplemented with 10% fetal bovine serum (Gibco, Carlsbad, CA, USA), 1% penicillin-streptomycin (Beyotime, Shanghai, China), and 1% L-glutamine (Beyotime). The cells were then seeded into a culture flask and maintained in a 37°C incubator with 5% CO2. The medium was replaced every 3 days until the cells were 90-100% confluent. After shaking at 6.94 × g for 12-15 hours to remove non-astrocytes, the cells were used in vitro. The isolated astrocytes were more than 95% pure, as evaluated by immunofluorescence staining for the astrocytic marker glial fibrillary acid protein (GFAP, CST, Danvers, MA, USA) and Hoechst 33342 (a nuclear staining reagent that permeates the cell membrane and stains DNA; MCE, Shanghai, China), which was considered acceptable for subsequent experiments.
We used RAW264.7 cells as a macrophage model of innate immune cell. RAW264.7 cells were purchased from FuHeng Biology (Cat# FH0328, Shanghai, China). The cells' identity was confirmed by immunostaining with F4/80 and Hoechst 33342 before use, and the purity was greater than 95%. The cells were then suspended in DMEM containing 10% fetal bovine serum and maintained in a 37°C incubator with 5% CO 2 .
To determine the effects of thrombin-induced astrocyte production of CH25H/25-HC, astrocytes were washed three times in serum-free DMEM for 5 minutes each time. Then the cells were treated with 1 U/mL thrombin (Sigma) for 24, 48, or 72 hours prior to performing the assay.
For knockdown, astrocytes were transfected with 5 μL of ch25h small interfering RNA 1 (siRNA1; target sequence 5′-TCA CCA TCC TCG TCT TTC A-3′), ch25h siRNA2 (target sequence 5′-TCG CGA TGC TTC AGT GTC A-3′), par3 siRNA1 (target sequence 5′-CCA ACA TCA TAC TCA TAA T-3′), or scrambled siRNA (target sequence 5′-GGC UCU AGA AAA GCC UAU GC-3′) using Lipofectamine RNAiMAX transfection reagent (Invitrogen, Carlsbad, CA, USA) for 48 hours, and then treated with 1 U/mL thrombin for another 24 hours before being used in the Transwell system and in quantitative polymerase chain reaction (qPCR) assays. The interference efficiency of the siRNAs was calculated by comparing the relative expression of the target gene following cell transfection with the targeted siRNA and the scrambled siRNA.
and sense primers). Primer sequences are shown in Table 1. The reaction conditions were as follows: 94°C for 5 minutes, followed by 40 cycles of 94°C for 30 seconds, 60°C for 30 seconds, and 72°C for 30 seconds. Fluorescence was recorded during each annealing step. At the end of each qPCR run, data were collected automatically. Melting curve analysis confirmed the primer specificity and determined the cycle threshold (CT) fluorescence values. The data were analyzed by the 2^(-ΔΔCT) method (Zeng et al., 2019). The mRNA expression levels were normalized to gapdh.
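The 2^(-ΔΔCT) analysis cited above reduces to simple arithmetic on CT values. The sketch below is a minimal illustration of the standard Livak calculation with made-up CT values; it is not the study's data or code.

```python
# Minimal sketch of the 2^(-ddCT) relative-expression calculation
# (Livak method). CT values here are illustrative, not study data.
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    # Normalize each sample's target CT to its reference gene (gapdh here),
    # then compare treated vs control.
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Example: target amplifies 2 cycles earlier in treated cells -> ~4-fold up.
print(relative_expression(24.0, 18.0, 26.0, 18.0))  # 4.0
# For siRNA experiments, knockdown efficiency can then be estimated as
# 1 - relative_expression(targeted siRNA vs scrambled control).
```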
Transwell migration assay
To examine the effects of thrombin-induced production of the oxysterol 25-HC on macrophages, macrophage migration was assayed with a 24-well, 8-μm Transwell chamber (Costar, Cambridge, MA, USA). Briefly, the astrocytes were treated with 1 U/mL thrombin for 24 hours with or without knockdown of CH25H expression for 48 hours, and the conditioned culture medium (ACM) was then collected to examine its effect on macrophage migration. Alternatively, the astrocytes were treated with 1 U/mL thrombin with or without 100 nM liver X receptor (LXR) antagonist (GSK2033, MCE) or 100 nM LXR agonist (T0901317, MCE) for 24 hours, and the ACM was collected for use in the migration assay. A total of 2 × 10^4 RAW264.7 cells suspended in 100 μL serum-free DMEM were added to the top chamber of the Transwell system, and the lower chamber was filled with 500 μL of ACM. After incubation for 24 hours at 37°C and 5% CO2, the cells in the lower chamber were stained with 0.1% crystal violet for 30-45 minutes at 23 ± 2°C. They were then imaged and counted using a DMR inverted microscope (Leica Microsystems, Bensheim, Germany). Assays were performed in triplicate.
Cell viability assay
To test the cell toxicity of the various drugs, astrocytes were seeded in 96-well plates at a density of 6000 cells/well and cultured in an incubator at 37°C and 5% CO2. The cells were treated with different concentrations of Argatroban, SCH79797, tcY-NH2, or 25-HC (Sigma, Shanghai, China) for 24 hours. After discarding the culture medium in the 96-well plate, 100 μL of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) working solution (MTT:serum-free DMEM = 1:9) was added to each well in the dark, and the plates were incubated in a 37°C incubator for 4-6 hours. Then, 100 μL of 20% sodium dodecyl sulfate solution was added, and the plates were incubated for another 20 hours. Absorbance values were measured at 570 nm with a multifunctional microplate reader (BioTek Synergy2).
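The article does not spell out how absorbance was converted to viability; a common convention, shown below as an assumption, normalizes each treated reading to untreated controls after blank subtraction.

```python
# Common MTT normalization (an assumed convention, not stated in the text):
# viability (%) = (A_treated - A_blank) / (A_control - A_blank) * 100.
def viability_percent(a_treated, a_control, a_blank=0.0):
    return (a_treated - a_blank) / (a_control - a_blank) * 100.0

# Illustrative A570 readings:
print(viability_percent(a_treated=0.62, a_control=0.80, a_blank=0.05))  # ~76%
```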
Hematoxylin-eosin staining
To assess the size of the lesioned area of spinal cord, 1 cm segments surrounding the lesion sites at T9 were harvested from rats 21 days following SCI, post-fixed, and sectioned. Next, the sections were incubated with hematoxylin and eosin, following standard procedures. Then, the sections were observed under a fluorescence microscope (Axio Imager M2, Zeiss). Cells that reacted positively with specific antibodies, and lesion areas stained with hematoxylin-eosin, before or after drug treatment were quantified by three investigators using ImageJ v1.8.0 software (National Institutes of Health, Bethesda, MD, USA).
Behavioral tests
Basso-Beattie-Bresnahan (BBB) motor function scores were used to evaluate hindlimb motor function (Zhou et al., 2018). Briefly, hindlimb function while walking was observed and recorded at 0, 7, 14, and 21 days after surgery. In the first stage (0-7 points), hindlimb joint activity was scored. In the second stage (8-13 points), hindlimb gait and coordination were scored. In the third stage (14-21 points), fine claw movements were scored. The scores for the three stages were combined for a total of 21 possible points. A score of 21 indicated a rat with normal mobility. Successful induction of SCI resulted in a post-operative score of 0.
Statistical analysis
No statistical methods were used to predetermine sample sizes; however, our sample sizes were similar to those reported in a previous publication (Ji et al., 2021). No animals or data points were excluded from the analysis. The assessors were blinded to the groupings in all assays except for immunofluorescence staining. Three sections from each of three animals were assessed for statistical analysis, which was carried out using GraphPad Prism v8.0.2 software (GraphPad Software, San Diego, CA, USA, www.graphpad.com). Comparisons between two groups were performed using the independent sample t-test. One-way analysis of variance followed by Bonferroni's post hoc comparison test was used for multiple-group comparisons. Results are reported as mean ± standard error of the mean (SEM). Statistical significance was defined as P < 0.05.
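For reference, the analysis pipeline described above (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons) can be sketched with standard scientific Python; the group values below are made up for illustration and are not the study's data.

```python
# Generic sketch of one-way ANOVA followed by Bonferroni-corrected pairwise
# t-tests, as described in the text. Data here are made up.
from itertools import combinations
from scipy import stats

groups = {
    "control":  [1.0, 1.1, 0.9],
    "thrombin": [2.4, 2.6, 2.2],
    "thr+SCH":  [1.4, 1.5, 1.3],
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

pairs = list(combinations(groups, 2))
alpha_corrected = 0.05 / len(pairs)   # Bonferroni correction
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    verdict = "significant" if p < alpha_corrected else "ns"
    print(f"{a} vs {b}: p = {p:.4f} ({verdict} at corrected "
          f"alpha = {alpha_corrected:.4f})")
```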
Enzyme-linked immunosorbent assay
To assess the levels of thrombin protein at the lesion site at 0, 1, 4, and 7 days following SCI, total protein was extracted from 1 cm spinal segments with 0.1 M PBS containing the protease inhibitor phenylmethanesulfonyl fluoride. After centrifuging at 2750 × g for 5-10 minutes at 2-8°C, the supernatant was subjected to thrombin enzyme-linked immunosorbent assay (ELISA) using a kit (Rat TAT ELISA Kit, Elabscience) according to the manufacturer's directions. The thrombin concentrations are expressed in ng/mg. Plates were read with a multifunctional microplate reader (BioTek Synergy2, BioTek, Santa Clara, CA, USA) at a 450-nm wavelength.

Tissue immunofluorescence
To study the distribution of CH25H and PAR1 within astrocytes, 1 cm spinal segments surrounding the lesion sites at 0, 1, 4, and 7 days following SCI were harvested from three rats at each time point for each group. After post-fixing with 4% paraformaldehyde, the cord tissues were cryosectioned into 12-μm sections, followed by immunofluorescence staining with the following primary antibodies overnight at 4°C: CH25H (rabbit, 1:100, Invitrogen, Cat#

Results
Changes in CH25H expression and its correlation with thrombin activation following SCI in a rat model
To explore whether CH25H expression correlates with thrombin activation at the lesion site following SCI, we used ELISA to determine the levels of thrombin protein in spinal cord segments. The results showed that thrombin expression at lesion sites was increased at 1 and 4 days following SCI in a rat model compared with that in the control and returned to normal levels at 7 days (Figure 1A and B). Furthermore, ch25h expression was increased at 1, 4, and 7 days following SCI compared with that in the control, while cyp27a1 expression was increased only at 4 and 7 days (Figure 1C-E). Treating the lesion site with 4.5 μL of 5 mM SCH79797 (a PAR1 inhibitor) resulted in a marked decrease in ch25h expression compared with treatment with vehicle (Figure 1C-E). These findings suggest that thrombin is associated with the regulation of CH25H expression following SCI.

To ascertain whether astrocytes are involved in thrombin-mediated cholesterol metabolism, immunostaining was performed to detect the cellular distribution of CH25H. The results demonstrated that co-localization of CH25H/GFAP- (Figure 2A-H) and CH25H/S100β-positive astrocytes (Figure 2I-P) was more frequent at 1, 4, and 7 days following SCI in a rat model than at day 0 (Figure 2Q and R). Because thrombin mediates cell signaling through PAR1, PAR3, and/or PAR4 (Russell, 2003; Wang et al., 2019b), we next examined the expression of these receptors. par1 was expressed at higher levels in the cord tissue than par3 and par4 (Figure 3A and B). Immunostaining analysis detected PAR1 in GFAP-positive astrocytes before and after SCI (Figure 3C-F and K), and CH25H expression was significantly decreased in the SCH79797 group compared with that in the control group (Figure 3G-J and L). These findings suggest that CH25H expression in astrocytes is regulated by the thrombin/PAR1 axis in response to SCI.
Thrombin regulates astrocyte CH25H expression by interacting with PAR1 receptors
To investigate the role of thrombin in the regulation of CH25H expression by astrocytes, we cultured primary astrocytes of 95% purity (Figure 4A). Adding 1 U/mL thrombin to the astrocytes for 24, 48, or 72 hours induced a rapid elevation in CH25H protein levels in the cells (Figure 4B). However, the addition of 5-20 μM of the thrombin inhibitor Argatroban significantly decreased CH25H protein levels, and this effect was not due to cell toxicity (Figure 4C and D). Next, par1, par3, and par4 expression in primary astrocytes was determined by qPCR, and the results were consistent with the in vivo results (Figure 4E). Western blotting showed that CH25H expression was inhibited in a dose-dependent manner by 0.5-3 μM of the PAR1 inhibitor SCH79797 (Figure 4F and G). However, siRNA-mediated knockdown of PAR3 expression did not inhibit the thrombin-induced expression of CH25H, similar to the scrambled siRNA control group (Additional Figure 1A-C). In contrast, treatment with 100 μM (but not 10 μM) of the PAR4 inhibitor tcY-NH2 attenuated CH25H expression (Additional Figure 1D and E). Taken together, these findings indicate that thrombin induces astrocytic expression of CH25H mainly through activation of the PAR1 receptor.
Thrombin promotes astrocyte CH25H expression through activation of the mitogen-activated protein kinase/NFκB pathway
Thrombin has been shown to affect fibroblasts and macrophages through activation of mitogen-activated protein kinase (MAPK)/NFκB signaling (Chen et al., 2017). To shed light on the signaling pathways involved in thrombin-mediated CH25H expression by astrocytes, the MAPK/NFκB signaling axis was examined in astrocytes treated with thrombin. Western blot analysis showed that stimulating astrocytes with 0.5-2 U/mL thrombin resulted in a significant increase in ERK, JNK, and P38 phosphorylation and NFκB expression compared with a lack of stimulation (Figure 5A-E). Adding 0.5-3 μM of the PAR1 inhibitor SCH79797 in the presence of 1 U/mL thrombin markedly inhibited ERK, JNK, and NFκB activation, but not P38 activation, compared with a lack of SCH79797 treatment (Figure 5F-J). We further examined the relevance of MAPK and NFκB activation to astrocyte expression of CH25H using ERK (PD98059), JNK (SP600125), P38 (SB203580), and NFκB (PDTC) inhibitors. The results demonstrated that CH25H expression was significantly attenuated in astrocytes after treatment with 10 μM of the ERK or JNK inhibitor, or 10-100 μM of the NFκB inhibitor, but not 10 μM of the P38 inhibitor, compared with that in astrocytes in the DMSO group (Figure 6). These findings indicate that thrombin promotes astrocyte CH25H expression through activation of the ERK- and JNK-mediated NFκB pathway.

Figure 1 | Quantitative changes in thrombin production and cholesterol hydroxylase expression at the lesion site following spinal cord injury (SCI) in a rat model.
(A) Experimental design. Experiment I was designed to investigate the effects of the protease-activated receptor 1 (PAR1) inhibitor SCH79797 on thrombin-induced cholesterol-25-hydroxylase (CH25H) expression by astrocytes. For quantitative polymerase chain reaction, enzyme-linked immunosorbent assay, tissue immunofluorescence, and hematoxylin-eosin staining, 1 cm spinal cord segments were harvested from rats at each time point. Experiment II was designed to investigate the effects of SCH79797 treatment on rat motor function. (B) Enzyme-linked immunosorbent assay of thrombin production at 0, 1, 4, and 7 days following SCI. Day 0 was used as the control. (C-E) RNA levels of ch25h, cyp46a1, and cyp27a1 at the lesion site were determined by quantitative polymerase chain reaction at different time points with or without injection of 4.5 μL of 5 mM SCH79797. Quantities were normalized to gapdh levels on day 0. Data are expressed as mean ± SEM (n = 3). *P < 0.05 (independent sample t-test). DMSO: Dimethyl sulfoxide; gapdh: glyceraldehyde-3-phosphate dehydrogenase.

Figure 2 | Cholesterol-25-hydroxylase (CH25H) colocalization with astrocytes at the lesion site following spinal cord injury (SCI) in a rat model.
(A-P) Immunostaining showed colocalization of CH25H (Cy3-labeled goat anti-rabbit IgG, red) with glial fibrillary acid protein (GFAP)- (Alexa Fluor 488-labeled donkey anti-mouse IgG, green) and S100β- (Alexa Fluor 488-labeled donkey anti-mouse IgG, green) positive astrocytes. CH25H expression within GFAP- (A-H) and S100β-positive astrocytes (I-P) was significantly increased at 1, 4, and 7 days following SCI. (Q, R) Quantification of the ratio of CH25H+S100β+ cells to S100β+ cells. Data are expressed as mean ± SEM (n = 3). *P < 0.05 (independent sample t-test). PAR: Protease-activated receptor.
Astrocyte-derived 25-HC promotes macrophage migration
To examine the effects of thrombin-induced 25-HC production on macrophages, we added 0-100 μM 25-HC to the culture medium of RAW264.7 macrophages for 24 hours and observed its effects on cell migration. The Transwell assay results demonstrated that treatment of RAW264.7 cells with 5-100 μM 25-HC markedly increased cell migration compared with a lack of treatment (Additional Figure 2). Next, astrocytes were treated with 1 U/mL thrombin for 24 hours with or without knockdown of CH25H expression for 48 hours, and the ACM was collected to examine its effect on macrophage migration. The Transwell assay results demonstrated that fewer RAW264.7 cells in the CH25H siRNA group migrated into the lower chamber compared with those in the scrambled siRNA control group (Figure 7B, D, and E). LXRs are endogenous receptors for 25-HC that are very important in the regulation of cholesterol metabolism. Adding 100 nM of the LXR antagonist GSK2033 to ACM significantly reduced RAW264.7 migration in the Transwell assay (Figure 7A and C), suggesting that GSK2033 abrogates the effects of thrombin-mediated astrocyte 25-HC production on RAW264.7 migration. However, adding 100 nM of T0901317, a highly selective LXR agonist, rescued the inhibitory effects of CH25H knockdown (Figure 7B and E). These findings indicate that thrombin-induced astrocyte 25-HC production promotes macrophage migration.
Inhibiting thrombin expression promotes functional recovery after SCI
To assess the therapeutic effect of thrombin blockade after SCI, we used BBB motor function scores to assess motor function after SCI. Because thrombin-induced astrocyte 25-HC production is associated with the migration of innate immune cells (Chen et al., 2017; Zhu et al., 2019), we quantified the microglia/macrophages at the lesion site before and after SCI. The number of IBA-1-positive microglia at the lesion site 4 days after SCI was significantly increased in comparison with day 0 (Figure 8A-D and G), whereas administration of 4.5 μL of a 5 mM solution of the PAR1 inhibitor SCH79797 reduced the number of IBA-1-positive microglia compared with a lack of administration (Figure 8E-G).
We further examined the effects of inhibiting microglia migration on the size of the lesion. Observation of hematoxylin-eosin-stained sections revealed that the lesioned area of the cord in the SCH79797 group was significantly reduced compared with that in the vehicle group at 21 days following contusion (Figure 8H and I).
Next, BBB scores were used to evaluate rat motor function for 21 days after SCI. Compared with a lack of treatment, SCH79797 treatment resulted in higher BBB scores at 7, 14, and 21 days (Figure 8J). These findings indicate that inhibiting thrombin improves rat motor function after SCI.

(A) The Transwell migration assay of RAW264.7 cells cultured with ACM from astrocytes stimulated with 1 U/mL thrombin (Thr) for 24 hours, followed by the addition of 100 nM GSK2033, and allowed to migrate for 24 hours. Treatment with GSK2033 significantly inhibited the effects of thrombin-induced 25-HC expression on RAW264.7 cell migration. (B) The Transwell migration assay of RAW264.7 cells cultured with ACM from astrocytes treated with 1 U/mL thrombin for 24 hours with or without knockdown of cholesterol-25-hydroxylase (CH25H) expression for 48 hours, followed by the addition of 100 nM T0901317, and allowed to migrate for 24 hours. The addition of T0901317 markedly rescued the inhibitory effects of CH25H knockdown on RAW264.7 cell migration. Scale bars: 100 μm. (C) Quantification of RAW264.7 cell migration shown in (A). (D) The efficiency of ch25h knockdown was measured by quantitative polymerase chain reaction, with expression levels normalized to gapdh. (E) Quantification of RAW264.7 cell migration shown in (B). Data are expressed as mean ± SEM (n = 3). *P < 0.05 (one-way analysis of variance followed by Bonferroni's post hoc comparison test). ACM: Conditioned culture medium; Con: control; GSK2033: liver X receptor antagonist; siRNA: small interfering RNA; T0901317: liver X receptor agonist; Thr: thrombin.

(J) BBB score for hindlimbs at 0, 7, 14, and 21 days following intrathecal injection of 4.5 μL of 5 mM SCH79797 or vehicle at the lesion site. Data are expressed as mean ± SEM (n = 6). *P < 0.05 (one-way analysis of variance followed by Bonferroni's post hoc comparison test for B or independent sample t-test for C). BBB: Basso-Beattie-Bresnahan; PAR: protease-activated receptor; SCH: SCH79797; SCH79797: protease-activated receptor 1 inhibitor.
Discussion
SCI interrupts the connections between axons and peripheral organs, resulting in paraplegia or quadriplegia. New technologies are being applied to repair the damage done by SCI, such as polymer scaffolds (Luo et al., 2021). However, complete functional recovery and spinal cord regeneration remain elusive. Identifying new players in aberrant CNS cholesterol metabolism is important for understanding the neuropathology of SCI. Cholesterol metabolism is strictly controlled in the CNS and is separated from the peripheral system by the blood-brain barrier and the blood-spinal cord barrier (Göritz et al., 2002). In situ cholesterol biosynthesis and elimination of cholesterol derivatives from the CNS are maintained in a delicate balance to prevent the development of a variety of neurological disorders (Hartmann et al., 2021; Jahn et al., 2021; Pikuleva and Cartier, 2021). Several lines of evidence have shown that different cholesterol derivatives play different neuropathological roles. For example, 24S-hydroxycholesterol (24S-HC), an endogenous positive N-methyl-D-aspartate receptor modulator, participates in N-methyl-D-aspartate receptor-mediated neuronal excitotoxicity. However, 25-HC can antagonize 24S-HC potentiation by partially rescuing oxygen-glucose deprivation-mediated cell death (Sun et al., 2017). 24S-HC levels have been found to be slightly reduced in the plasma of patients with multiple sclerosis compared with those in healthy individuals, reflecting the loss of brain mass over time (Papassotiropoulos et al., 2000; Leoni and Caccia, 2011). Conversely, elevated blood-derived 27-hydroxycholesterol is found in the cerebrospinal fluid of patients with multiple sclerosis, reflecting blood-brain barrier dysfunction (Leoni and Caccia, 2013). In the present study, we showed that thrombin-mediated 25-HC production is involved in macrophage chemotaxis, consistent with its previously reported proinflammatory function (Gold et al., 2012; Pokharel et al., 2019). Our results provide further evidence for the important role of CH25H in mediating neuroinflammation.
25-HC regulates cellular events mainly through binding to two distinct receptor families: the nuclear receptor transcription factor LXR and the G protein-coupled seven-transmembrane-domain receptor Epstein-Barr virus-induced gene 2 (EBI2). 25-HC promotes monocyte migration, suppresses myelin gene expression in peripheral nerves, and dampens the anti-tumor response of dendritic cells in an LXR-dependent manner (Villablanca et al., 2010; Makoukji et al., 2011; Eibinger et al., 2013). In addition, 25-HC can act as the most active ligand for EBI2 to guide immune cell migration and regulate inflammatory responses (Cyster et al., 2014). In the present study, we found that thrombin-mediated 25-HC production induced macrophage migration in an LXR-dependent manner. Whether the oxysterol performs a similar role by binding the EBI2 receptor remains unknown.
Following CNS injury, thrombin is activated and plays a procoagulant role by cleaving soluble fibrinogen to release fibrin monomers (Di Cera, 2008). In addition, thrombin activates the PAR1, PAR3, and PAR4 receptors in nerve cells, which modulates the progression of neuropathology. Previous studies have shown that thrombin activates the astrocytic reaction by influencing morphology, promoting astrogliosis, and enhancing inflammation (Niego et al., 2011; Radulovic et al., 2016). Several investigations have also shown that injury-induced thrombin activation is associated with inhibition of cholesterol biosynthesis, which decreases neurite outgrowth and functional recovery following SCI (Citron et al., 2016; Raghavan et al., 2018; Triplet et al., 2021). Here, we demonstrated that thrombin promotes production of the cholesterol derivative 25-HC by upregulating CH25H expression in astrocytes, suggesting a general role for this serine protease in mediating cholesterol metabolism in various cell types following SCI.
Astrocytes respond to thrombin stimulation by activating multiple intracellular signaling cascades via interactions with PAR1, PAR3, and/or PAR4 (Chen et al., 2022). PAR1, a G protein-coupled receptor, regulates Ras homolog gene family member A, which in turn catalytically activates phosphoinositide-phospholipase Cε (Dusaban et al., 2013). Phosphoinositide-phospholipase Cε then mediates more sustained activation of protein kinase D and nuclear translocation of NFκB, thus contributing to the pathophysiological roles of astrocytes (Dusaban et al., 2015). Alternatively, thrombin has been shown to activate MAPK in response to CNS injury, thereby modulating astrocyte activity (Nicole et al., 2005; Chen et al., 2022). In the present study, we demonstrated that thrombin induces astrocytic CH25H expression through activation of the ERK- and JNK-mediated NFκB pathway, suggesting the importance of the thrombin-activated MAPK/NFκB pathway in the regulation of a variety of astroglial functions following CNS injury.
In conclusion, our results reveal a new regulatory mechanism for aberrant astrocyte cholesterol metabolism that could be beneficial for controlling neuropathology following SCI. A limitation of this study is that we did not determine how much 25-HC was produced by astrocytes stimulated with thrombin. | 2022-11-28T16:04:20.998Z | 2022-10-11T00:00:00.000 | {
"year": 2022,
"sha1": "09ea08cde24086be27cbcda18a0125331432d6fe",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/1673-5374.357905",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c58d83e74ec8b641d632e282665bac846bba355f",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
208832865 | pes2o/s2orc | v3-fos-license | Designing a Water-Immersed Rectangular Horn Antenna for Generating Underwater OAM Waves
In order to extend the applications of vortex waves, we propose a water-immersed rectangular horn antenna array for generating underwater vortex waves carrying orbital angular momentum (OAM). Firstly, a single dielectric-loaded rectangular horn antenna with a central frequency of 2.6 GHz was designed for generating underwater electromagnetic (EM) waves. Owing to the dielectric-loaded waveguide in this single antenna, the problems of difficult sealing and fixation of the feed probe could be solved effectively. The simulation results show that it has good impedance characteristics (S11 < −10 dB) and reasonable losses (less than 3.5 dB total for two antennas and a coaxial line) from 2.5 GHz to 2.7 GHz. Experiments on the single antenna were also carried out, which agree well with the simulations. Based on the designed single antenna, the water-immersed rectangular horn antenna array was proposed, and a phase gradient from 0 to 2π was fed to the horn antennas for generating underwater OAM waves. The simulation results demonstrate high fidelity of the generated OAM waves from the intensity and phase distributions. The purity of the generated OAM modes was also investigated and further verifies the high fidelity of the generated OAM waves. The generated high-quality OAM waves meet the requirements for underwater applications of OAM, such as underwater communication and underwater imaging.
Introduction
As is well known, vortex waves carrying orbital angular momentum (OAM) have a rotating phase factor of e^(ilθ), where θ is the azimuth angle and l is an integer called the "topological charge", which corresponds to the order of the OAM mode. Since the quantum characteristics of OAM states were first discovered by Allen [1] in 1992, OAM-carrying beams have attracted considerable attention. Then, in 2007, a uniform circular array with a successive phase shift was proposed to generate OAM beams in the radio frequency domain [2]. After that, studies of OAM were introduced into the radio frequency domain. In fact, OAM can provide rotational degrees of freedom to the electromagnetic (EM) field, which would be advantageous for manipulating particles [3][4][5],
Single Antenna
In this section, we discuss the design of a dielectric-loaded single horn antenna for generating underwater EM waves; its simulation was carried out with the CST Microwave Studio [32]. In order to calculate the loss of the antenna in deionized water, a box filled with deionized water was used in place of the air box, and the background medium was set to water. Open boundaries (PML) were applied to the six faces of the water box to simulate infinite space. Moreover, a waveguide port was used to feed the antenna at the bottom of the coaxial line.
The structure of the designed single antenna is presented in Figure 1. The waveguide of the horn antenna was filled with a ceramic medium of higher relative dielectric constant to solve the sealing problem while ensuring impedance matching. As is well known, the ceramic medium is hard and brittle, so it cannot be used in a patch antenna; a dielectric-loaded parabolic antenna, on the other hand, has a larger structure, which is not suitable for the relevant applications of underwater EM waves. Moreover, across the frequency range of 2.5-2.7 GHz, the relative dielectric constant of ZrO2 is almost equal to 36, while the loss tangent is a low constant, so ZrO2 can be regarded as a non-dispersive material suitable for impedance matching with low loss.
The medium facilitates the coaxial sealing at interface 1 between the coaxial line and the waveguide, as shown in the red circle in Figure 1b. With the ZrO2 medium filling the waveguide, only the sealing at interface 2 (blue circle in Figure 1b,c) between the horn and the waveguide is necessary, which can be realized easily compared to that at interface 1. In addition, the origin is selected at the center of interface 2. Another benefit of the zirconium oxide (ZrO2) medium is that the position and length of the probe can be fixed reliably, so the frequency offset can be reduced to a certain extent.
Furthermore, the impedance calculation of the rectangular waveguide with coaxial excitation should strictly follow the reported method [33], and a step-by-step design procedure is provided in Figure 1d. Based on the reported conclusions [28], in order to increase the bandwidth, the relative dielectric constant of the ceramic medium should be reduced; however, the aperture of the antenna would then have to be increased, which hinders the miniaturization and related applications of the antenna. Therefore, the relative dielectric constant of the ceramic medium should be reduced only within a certain range. Owing to the change of the relative dielectric constant of the packed medium, the dimensions of the rectangular waveguide and feed probe need to be optimized, and the values of the optimized parameters are listed in Table 1. With these optimized parameters, the radiation pattern of the designed antenna remains stable and optimized as well. In addition, a horn is added to the dielectric-loaded waveguide to obtain higher gain and better directivity. Therefore, although the loss of water is large, the radiation signal from the single antenna can propagate a longer distance in water, which provides enough distance for the superposition of the radiated signals to produce underwater OAM waves. A coaxial line with a characteristic impedance of 50 Ω was placed perpendicular to the rectangular waveguide for exciting the transverse electric zero-one (TE01) mode microwave signal. The length of the probe equals the radius of the waveguide to make sure the radiation field is located in the center area. The rectangular horn antenna was selected because of the following advantages compared with a circular horn antenna:
(i) Fewer merged modes: there are fewer merged modes in the rectangular waveguide than in the circular waveguide.
(ii) Simplicity of processing: the ZrO2 medium is a ceramic and is hard and brittle, so, compared to a circular shape, the rectangular shape is much easier to fabricate.
(iii) Reliability of antenna array assembly: because it is easier to identify the orientation of a rectangular horn antenna than that of a circular one, generating OAM waves with the rectangular horn antenna array is more reliable.
To measure S11 of the single antenna and S21 between the two antennas, a set of swept transmission loss measurements was implemented. As shown in Figure 2a, the tested antennas were placed at the center of a 40 cm × 24 cm × 35 cm glass tank at a depth of 15 cm, and the tank was filled with deionized water whose permittivity, loss tangent and conductivity at 2.6 GHz are 78.4, 0.125 and 1.42 S/m, respectively. Due to the large loss of water and the adequate space of the tank, noise and most reflected waves can be absorbed, so the tank can be regarded as an infinite space, also called an "anechoic environment". The two antennas were fixed by two aluminum alloy supports, each with an adjustable clamp at its end to hold the antenna. A set of measurements was taken to measure the power patterns of this antenna, as shown in Figure 2b. Due to the large transmission loss of the wave in water, it was necessary to utilize an amplifier with 20 dB gain to amplify the received signal. The energy of the radiation signal received by the other antenna decreases as the angle increases; if the angle is large enough, the received signal becomes too small to be extracted from the noise. The schematic maps corresponding to the experimental setups in Figure 2a,b are presented in Figure 2c,d, respectively. Figure 3a demonstrates the comparisons between the experimental and simulated results for S11 of the single antenna and S21 of two antennas submerged in deionized water with a distance of 0.5 cm. The experimental results of S11 are lower than the simulated ones due to the different energies absorbed by the deionized water and some machining errors. The box in the simulation used to calculate S21 was filled with deionized water, characterized by the first-order Debye model. The relative dielectric constants of water from 2.5 GHz to 2.7 GHz used in the simulation can be found in Figure 3b [34]. In addition, S21 depends on the total energy P of the signal fed into the antenna at port one and the measured energy S of the signal at the receiving antenna at port two. Furthermore, S can be calculated by Equation (1):

S = P − R − L, (1)

where R, which depends on S11, represents the total reflection energy of the antenna, and L is the absorption loss by water. Here, P is considered to be the same in the simulation and experiment.
L is larger than R due to the large absorption loss of the wave by water and the good impedance characteristics. In other words, R can be ignored when calculating S. Therefore, although the experimental and simulated S11 do not agree very well, the experimental S21 may be very close to the simulated one.
S21 is about −24.17 dB at 2.6 GHz, and the concrete transmission loss (382 dB/m at 3 GHz) of the wave in water is about 23.3 dB over the 6.1 cm path. Therefore, the combined loss from both antennas and coaxial lines is about 0.7 dB at 2.6 GHz. Even at the higher frequency of 2.7 GHz (S21 is about −26.8 dB), the total loss is only 3.5 dB, which is still acceptable. Moreover, S21 with different distances between the big mouths of the two antennas is shown in Figure 3c. The loss of water is relatively large, and water with different conductivities may show larger differences in the loss values. Although there are small differences between the simulated and measured data in Figure 3c, they have the same trend, which is another verification of the above conclusion. The value of S21 at a distance of 50 mm is higher than −50 dB, which can still be detected by a vector network analyzer (VNA), demonstrating that the antenna can radiate at least 50 mm in water.
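The loss bookkeeping above can be checked with a few lines of arithmetic. The sketch below uses only the attenuation and S21 figures quoted in the text, so it is a consistency check rather than a new measurement; small rounding differences against the quoted 0.7 dB are expected.

```python
# Back-of-the-envelope check of the quoted loss budget.
water_attenuation_db_per_m = 382.0   # at ~3 GHz, from the text
path_length_m = 0.061                # 6.1 cm total path through water

water_loss_db = water_attenuation_db_per_m * path_length_m
print(f"Water absorption loss: {water_loss_db:.1f} dB")       # ~23.3 dB

s21_db = -24.17                      # measured at 2.6 GHz
residual_loss_db = -s21_db - water_loss_db                    # antennas + coax
print(f"Antennas + coaxial lines: {residual_loss_db:.1f} dB") # ~0.7-0.9 dB
```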
In addition, the values of S21 at distances of 5 mm and 50 mm between the two antennas were about −25 dB and −45 dB in [26], respectively, which agree well with those of our proposed work, as does the value of S21 (about −35 dB) at a distance of 20 mm between the two antennas in [27].
The 0 dBm input signal from the microwave power source is first amplified by 20 dB by the power amplifier, and it then arrives at the transmitting antenna. The radiation field of the transmitting antenna is finally received by the other antenna and measured by an Agilent N1913A power meter (Keysight, Santa Rosa, CA, USA). In this process, the combined loss of all the coaxial lines is about 3 dB. By changing the angle of the rotating displacement table and adjusting the antenna orientation, the power patterns in the E and H planes at 2.6 GHz with a distance of 5 cm between the big mouths of the two antennas can be obtained, as shown in Figure 3d. The 3 dB angular widths in the E plane and H plane are about 13 degrees and 15.5 degrees, respectively, which demonstrates the good directionality of this antenna. Subtracting the 20 dB gain of the amplifier, and adding the 3 dB loss of the coaxial line, the 33 dB transmission loss (5 cm) and the 40 dB absorption loss of water (10.6 cm), the gain of the antenna is about 13.5 dB at 0 degrees. The corresponding simulated power patterns are also shown in Figure 3d; they are consistent with the experimental ones to a certain extent, and the small differences may come from the different loss of water in the simulation and experiment, the influence of the experimental devices, and measurement errors.
Antenna Array
As the approach based on the phased uniform array is very flexible and easily controlled, it is suitable for OAM-generation in water. Based on the above designed single antenna, an OAM antenna array is put forward for generating underwater vortex waves in this section.
The schematic configurations for the OAM-generating system are shown in Figure 4a. The RF signal coming from the VNA is first amplified by an amplifier. Then the amplified signal is fed through an eight-way power divider and an eight-way phase shifter to the antenna array accordingly. The signal received by the receiving antenna is finally saved on a computer through the VNA. The model of the array is shown in Figure 4b. The N antennas are located equidistantly around the perimeter of a circle and are fed with a phase difference between adjacent elements of δφ = 2πl/N, where l denotes the OAM mode. The phase shifts of the eight antennas for different OAM states are shown in Table 2. The radius of this concentric circle is set to 56 mm, while the center frequency is 2.6 GHz. Each single antenna is linearly polarized in the X-direction, so the feed coaxial lines of the eight elements are placed in the same direction to generate a linearly polarized OAM wave.
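The feeding rule δφ = 2πl/N fully determines the per-element phases; a minimal sketch that regenerates the phase table reported in Table 2 (shown below) for N = 8 and modes l = 0-3 follows.

```python
# Reproduce the element phase shifts of Table 2 from delta_phi = 2*pi*l/N
# for an N-element uniform circular array (phases in degrees, modulo 360).
N = 8
for l in range(4):                      # OAM modes l = 0..3
    phases = [(360 * l * n / N) % 360 for n in range(N)]
    print(f"l = {l}: " + ", ".join(f"{p:>5.1f}" for p in phases))
```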
The S11 of the eight antenna elements used are shown in Figure 4c, from which we can see that the water-immersed rectangular horn antenna array performs very well in the frequency range of 2.55-2.68 GHz. In addition, the S11 of any one antenna is almost the same as that of the others, which means there is good consistency among the antenna array elements. The S parameters between the first element and the others, as depicted in Figure 4d, are all below −115 dB, which demonstrates the weak coupling among the eight elements. Therefore, the far fields of the elements can be superimposed to produce a linearly polarized vortex field.

Table 2. Phase shifts (in degrees) of the eight antennas for different OAM modes.

Antenna | l = 0 | l = 1 | l = 2 | l = 3
1 | 0 | 0 | 0 | 0
2 | 0 | 45 | 90 | 135
3 | 0 | 90 | 180 | 270
4 | 0 | 135 | 270 | 45
5 | 0 | 180 | 0 | 180
6 | 0 | 225 | 90 | 315
7 | 0 | 270 | 180 | 90
8 | 0 | 315 | 270 | 225

For a detection point P(r, θ, φ) in the far field, the electric field E(r) can be found from [15,35]. Based on the electric field E(r), the array factor can be found in [16,35], as shown by

f(θ, φ) = m_r · e^(ilφ) · J_l(k_g a sin θ),

where m_r is a constant related to r, φ is the azimuth angle, k_g is the wave vector in water, l indicates the OAM mode, called the "topological charge", J_l represents the l-th order Bessel function of the first kind, a is the radius of the array and θ is the angle between k_g and the propagation direction z.
The array factor f(θ, φ) not only describes the distribution of the normalized intensity and phase patterns of the different OAM modes, but can also be used in place of the electric field E(r) when calculating the purity of the OAM waves. Moreover, it is easy and efficient to use because of its simple form. The intensities and phase distributions of the different generated OAM waves can be obtained from the far-field distribution, as depicted in Figure 5. The selected plane has a size of 40.4 mm × 40.4 mm at z = 200 mm, and the minimum value of cos θ is 0.9899, so the distance r can be approximated by the propagation distance z. Therefore, the array factor can be read from CST on a plane with a fixed value of z. From Figure 5, it can be seen that the phase reference point rotates as φ increases, and the phase change over one turn matches the order of the OAM wave very well. The intensity distributions of the four modes fit the corresponding Bessel-function intensity distributions. Both the phase and intensity distributions demonstrate the good performance of the designed rectangular horn antenna array.
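To make the array-factor picture concrete, the sketch below superimposes N ideal point sources fed with 2πl/N phase steps and compares the radius of the resulting intensity ring against the first-maximum condition of J_l(k_g a sin θ). It assumes lossless propagation, isotropic elements and ε_r ≈ 78 for water, so it only approximates the simulated antenna; the observation distance is also short of the true far field, so the agreement is qualitative.

```python
# Minimal numerical sketch of the uniform-circular-array OAM field.
import numpy as np

c = 3e8
f = 2.6e9
eps_r = 78.0                                  # assumed permittivity of water
k_g = 2 * np.pi * f * np.sqrt(eps_r) / c      # wave number in water
a = 0.056                                     # array radius (56 mm)
N, l = 8, 1                                   # eight elements, OAM mode l = 1
z = 0.2                                       # observation plane at z = 200 mm

x = y = np.linspace(-0.02, 0.02, 201)
X, Y = np.meshgrid(x, y)

# Superposition of N point sources fed with 2*pi*l/N phase steps;
# water loss is neglected, so only the geometry of the ring is checked.
field = np.zeros_like(X, dtype=complex)
for n in range(N):
    phi_n = 2 * np.pi * n / N
    xn, yn = a * np.cos(phi_n), a * np.sin(phi_n)
    d = np.sqrt((X - xn) ** 2 + (Y - yn) ** 2 + z ** 2)
    field += np.exp(-1j * k_g * d + 1j * l * phi_n) / d

# Ring radius of the intensity maximum vs. the Bessel prediction
# (J_1 has its first maximum at argument ~1.841).
i, j = np.unravel_index(np.abs(field).argmax(), field.shape)
r_ring = np.hypot(X[i, j], Y[i, j])
r_pred = z * np.tan(np.arcsin(1.841 / (k_g * a)))
print(f"simulated ring radius ~{1e3 * r_ring:.1f} mm, "
      f"Bessel prediction ~{1e3 * r_pred:.1f} mm")
```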
In order to further verify the simulated intensity and phase distributions of the generated OAM-carrying waves, we also carried out simulations with the HFSS software; the corresponding results are displayed in Figure 6. These results are consistent with those of CST, which further verifies the effectiveness of our design and the correctness of the simulation results.
Meanwhile, in order to evaluate the purity of the OAM waves in Figure 5, a mode decomposition is carried out by means of a Fourier transform of the field distribution on a circle corresponding to the magnitude maximum [36], as shown in Figure 7a-d, where m runs from −3 to 3. Assuming N points on the same circle are selected to calculate the purity of the OAM waves, the azimuth spectrum w_m can be obtained and expressed as follows:

w_m = (1/N) Σ_{n=1}^{N} E_n · e^(−imφ_n),

where E_n is the electric field at the n-th point and φ_n is the theoretical azimuth of the standard vortex field at the location of the n-th point.
The generated OAM mode of m = 1 shows high fidelity to the design, as shown in Figure 5c,d.
The mode decomposition result is demonstrated in Figure 7b, and the energy in the OAM mode of m = 1 clearly dominates, with a proportion above 90%, which is 30 times that of the secondary mode, showing the good performance of the antenna array. The remaining OAM modes (m = 0 in Figure 5a,b, m = 2 in Figure 5e,f and m = 3 in Figure 5g,h) behave similarly to the OAM mode of m = 1, as shown in Figure 7a,c,d, respectively. Figure 7e-h also show the field intensity distributions (along the radial direction) w_m of the different OAM modes at 2.6 GHz, where the points at φ = 0 were selected to calculate w_m; in theory, w_m equals the array factor f. Therefore, the simulation results were compared with the results of numerical calculations to make sure they are consistent. It can be seen that the lines match very well in the center area. However, due to the edge effect of the OAM wave, there are small differences at the edge. This phenomenon can be attributed to the increasing difference between z and r with increasing θ; the difference between the two lines also increases in theory, but it is entirely acceptable.
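A minimal sketch of the azimuth-spectrum purity calculation defined above follows; the sampled field is a synthetic l = 1 vortex with additive noise, standing in for the exported simulation data.

```python
# Mode purity via a discrete Fourier transform over the azimuth angle,
# applied to field samples on the circle of maximum intensity.
import numpy as np

N_pts, l_true = 64, 1
phi_n = 2 * np.pi * np.arange(N_pts) / N_pts
rng = np.random.default_rng(1)
E_n = np.exp(1j * l_true * phi_n) + 0.1 * (rng.normal(size=N_pts)
                                           + 1j * rng.normal(size=N_pts))

modes = range(-3, 4)
w = {m: np.abs(np.sum(E_n * np.exp(-1j * m * phi_n)) / N_pts) for m in modes}
total = sum(v ** 2 for v in w.values())
for m in modes:
    print(f"m = {m:+d}: purity = {w[m] ** 2 / total:.3f}")   # m = +1 dominates
```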
Conclusions
In this paper, a dielectric-loaded rectangular horn antenna working at 2.6 GHz in water was investigated experimentally. The experimental results show that this antenna has good impedance characteristics (S11 < −10 dB) and reasonable losses (less than 3.5 dB total for two antennas and the coaxial line) from 2.5 GHz to 2.7 GHz. A water-immersed OAM rectangular horn antenna array, working at 2.6 GHz and based on the above single antenna, was also proposed to realize OAM wave radiation in a water environment. The obtained phase and intensity distributions of the generated OAM waves with different modes indicate that the designed antenna array works very well at 2.6 GHz, which was further confirmed by analyzing the purity of the generated OAM waves. The OAM waves can radiate far enough to meet the requirements for future underwater applications of OAM, such as underwater communication and underwater imaging of biological tissues and vegetable rhizomes. In the future, we will try to generate wideband underwater OAM waves and to apply the generated OAM waves to specific underwater applications. | 2019-10-31T09:10:24.038Z | 2019-10-26T00:00:00.000 | {
"year": 2019,
"sha1": "ad8497eb2753934f70e426e3b705a507505f51e1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-9292/8/11/1224/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "f96cdd605c6fe42858acab56d075903d09ca03e4",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
236968400 | pes2o/s2orc | v3-fos-license | Dysregulation of lncRNAs in autoimmune neuropathies
Chronic inflammatory demyelinating polyradiculoneuropathy (CIDP) and Guillain-Barré syndrome (GBS) are inflammatory neuropathies with different clinical courses but similar underlying mechanisms. Long non-coding RNAs (lncRNAs) might affect the pathogenesis of these conditions. In the current project, we selected the HULC, PVT1, MEG3, SPRY4-IT1, LINC-ROR and DSCAM-AS1 lncRNAs to appraise their transcript levels in the circulation of CIDP and GBS cases versus controls. Expression of HULC was higher in CIDP patients compared with healthy persons (ratio of mean expression (RME) = 7.62, SE = 0.72, P < 0.001). While expression of this lncRNA was not different between female CIDP cases and female controls, its expression was higher in male CIDP cases compared with male controls (RME = 13.50, SE = 0.98, P < 0.001). Similarly, expression of HULC was higher in total GBS cases compared with healthy persons (RME = 4.57, SE = 0.65, P < 0.001) and in male cases compared with male controls (RME = 5.48, SE = 0.82, P < 0.001). A similar pattern of expression was detected between total cases and total controls. PVT1 was up-regulated in CIDP cases compared with controls (RME = 3.04, SE = 0.51, P < 0.001) and in both male and female CIDP cases compared with sex-matched controls. Similarly, PVT1 was up-regulated in GBS cases compared with controls (RME = 2.99, SE = 0.55, P value < 0.001) and in total patients compared with total controls (RME = 3.02, SE = 0.43, P < 0.001). Expression levels of DSCAM-AS1 and SPRY4-IT1 were higher in CIDP and GBS cases compared with healthy subjects and in both sexes compared with gender-matched healthy persons. Although LINC-ROR was up-regulated in total CIDP and total GBS cases compared with controls, in sex-based comparisons it was only up-regulated in male CIDP cases compared with male controls (RME = 3.06, P = 0.03). Finally, expression of MEG3 was up-regulated in all subgroups of patients versus controls except in male GBS cases. SPRY4-IT could differentiate CIDP cases from controls with AUC = 0.84, sensitivity = 0.63 and specificity = 0.97. The AUC values of DSCAM-AS1, MEG3, HULC, PVT1 and LINC-ROR were 0.80, 0.75, 0.74, 0.73 and 0.72, respectively. In differentiating between GBS cases and controls, SPRY4-IT and DSCAM-AS1 had AUC values of 0.8. None of the lncRNAs could appropriately differentiate between CIDP and GBS cases. Combining all lncRNAs did not significantly enhance the diagnostic power. Taken together, these lncRNAs might be involved in the development of CIDP or GBS.
In the present study, we appraised the transcript levels of these lncRNAs in the circulation of CIDP and GBS cases versus controls. The reason for selecting these lncRNAs was their roles in the modulation of immune responses. HULC has been identified as one of the important factors in the induction of pro-inflammatory responses in endothelial cells in the course of lipopolysaccharide-associated sepsis 10. Pvt1 has been shown to modulate the immunosuppressive function of granulocytic myeloid-derived suppressor cells in animal models 11. MEG3 has been reported to induce an imbalance between regulatory T cells and Th17 cells 12. SPRY4-IT1 interacts with ERRα 13, a nuclear receptor which regulates innate immunity 14. LINC-ROR has a functional interaction with TGF-β to regulate hypoxia-induced cellular cascades 15. Finally, DSCAM-AS1 has been shown to regulate several genes which are implicated in inflammatory responses 16. These lncRNAs regulate immune reactions via different routes.
Materials and methods
Recruitment of GBS/CIDP cases and normal controls. A total of 32 CIDP patients of the typical type (11 females, 21 males), 25 GBS patients (7 females, 18 males), and 58 healthy individuals (20 females and 38 males) participated in the current investigation. CIDP cases had symmetric muscle weakness which affected both proximal and distal muscles. The course of the disorder was compliant with a motor-predominant neuropathy. Patients were assessed using the guidelines stated by the American Academy of Neurology 17 and the National Institute of Neurological Disorders and Stroke 18. In addition, electrophysiological criteria were used for the diagnosis of GBS 19. Blood samples were obtained when patients entered the remission phase and were not on any treatment. All were responsive to corticosteroids or IVIg treatment. Ethics approval and consent to participate. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent forms were obtained from all study participants. The study protocol was approved by the ethical committee.

Results. Table 2 demonstrates the demographic and clinical data of patients. Expression of HULC was higher in CIDP patients compared with controls (ratio of mean expression (RME) = 7.62, SE = 0.72, P < 0.001). While expression of this lncRNA was similar between female CIDP cases and female controls, its expression was up-regulated in male CIDP cases compared with male controls (RME = 13.50, SE = 0.98, P < 0.001). Similarly, expression of HULC was higher in total GBS cases compared with controls (RME = 4.57, SE = 0.65, P < 0.001) and in male cases compared with male controls (RME = 5.48, SE = 0.82, P < 0.001). A similar pattern of expression was detected between total cases and total controls. PVT1 was up-regulated in CIDP cases compared with controls (RME = 3.04, SE = 0.51, P < 0.001) and in both male and female CIDP cases compared with sex-matched healthy persons. Similarly, PVT1 was up-regulated in GBS cases compared with controls (RME = 2.99, SE = 0.55, P value < 0.001) and in total patients compared with total controls (RME = 3.02, SE = 0.43, P < 0.001). Expression levels of DSCAM-AS1 and SPRY4-IT1 were higher in CIDP and GBS cases compared with controls and in both sexes compared with gender-matched healthy subjects. Although LINC-ROR was up-regulated in total CIDP and total GBS cases compared with controls, in sex-based comparisons it was only up-regulated in male CIDP cases compared with male controls (RME = 3.06, P = 0.03). Finally, expression of MEG3 was up-regulated in all subgroups of patients versus controls except in male GBS cases (Table 3). Figure 1 displays the expression levels of the selected lncRNAs in the study subgroups. Significant pairwise correlations were identified between lncRNA expression levels, with the most robust ones being the HULC/DSCAM-AS1 and HULC/SPRY4-IT pairs (r = 0.86 and 0.85, respectively) (Fig. 2).
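The reported ratios of mean expression (RME) are derived from qPCR data. As a hedged illustration of how such fold changes are commonly computed, the sketch below applies the 2^-ΔΔCt method to hypothetical Ct values; the reference gene and all numbers are invented for the example, and the authors' exact pipeline may differ.

```python
# Illustrative 2^-delta-delta-Ct fold-change calculation (Livak method),
# with made-up Ct values for a target lncRNA and a reference gene.
import numpy as np

ct_target_case = np.array([24.1, 23.8, 24.5])   # hypothetical Ct values
ct_ref_case = np.array([18.0, 17.9, 18.2])      # hypothetical reference gene
ct_target_ctrl = np.array([27.0, 26.7, 27.2])
ct_ref_ctrl = np.array([18.1, 18.0, 18.1])

d_ct_case = ct_target_case - ct_ref_case
d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
dd_ct = d_ct_case.mean() - d_ct_ctrl.mean()
print(f"fold change (case/control) = {2 ** -dd_ct:.2f}")
```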
Finally, the diagnostic power of the lncRNAs for distinguishing patients from healthy subjects was assessed (Fig. 4). SPRY4-IT could differentiate CIDP cases from controls with AUC = 0.84, sensitivity = 0.63 and specificity = 0.97. The AUC values of DSCAM-AS1, MEG3, HULC, PVT1 and LINC-ROR were 0.80, 0.75, 0.74, 0.73 and 0.72, respectively. In differentiating between GBS cases and controls, SPRY4-IT and DSCAM-AS1 had AUC values of 0.8. None of the lncRNAs could appropriately differentiate between CIDP and GBS cases. Combining all lncRNAs did not significantly enhance the diagnostic power (Table 4).
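As an illustration of how AUC, sensitivity and specificity values of this kind can be derived, the sketch below runs a ROC analysis on simulated expression levels (not the study's data), with the cut-off chosen by the Youden index.

```python
# ROC analysis on simulated case/control expression data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
cases = rng.lognormal(mean=1.0, sigma=0.6, size=32)     # e.g. 32 CIDP cases
controls = rng.lognormal(mean=0.3, sigma=0.6, size=58)  # 58 healthy subjects

y = np.r_[np.ones_like(cases), np.zeros_like(controls)]
x = np.r_[cases, controls]
print(f"AUC = {roc_auc_score(y, x):.2f}")

fpr, tpr, thr = roc_curve(y, x)
j = np.argmax(tpr - fpr)        # Youden index picks the cut-off
print(f"best cut-off: sensitivity = {tpr[j]:.2f}, specificity = {1 - fpr[j]:.2f}")
```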
Discussion
LncRNAs have been shown to take part in the pathogenesis of immune-related conditions, and up-regulation of lncRNAs has been reported in a number of them. For instance, expression levels of HOTAIR, LUST, anti-NOS2A, MEG9, SNHG4, TUG1, and NEAT1 have been shown to be increased in blood exosomes of patients with rheumatoid arthritis (RA) compared with exosomes retrieved from normal blood samples 21. The same study reported up-regulation of the mentioned lncRNAs, in addition to H19 antisense, HAR1B and GAS5, in peripheral blood mononuclear cells of these patients 21. ENST00000483588 is another lncRNA which has been shown to be up-regulated in fibroblast-like synoviocytes of patients with RA 22. A number of the lncRNAs selected in the current project have previously been shown to be up-regulated in immune-mediated conditions. For instance, PVT1 has been reported to be up-regulated in fibroblast-like synoviocytes of RA models in parallel with down-regulation of sirt6, a putative target of this lncRNA; PVT1 silencing or sirt6 over-expression could suppress cell proliferation and inflammation while inducing cell apoptosis 23. MEG3 has been demonstrated to regulate RA pathogenesis through targeting NLRC5 24. LINC-ROR, MEG3, SPRY4-IT1 and UCA1 have been among the lncRNAs with higher expression in patients with schizophrenia compared with normal subjects 25. CIDP and GBS are two immune-mediated conditions to which lncRNAs might contribute. We measured the expression of six immune-related lncRNAs in the circulation of these patients versus healthy controls. Expression of HULC was higher in CIDP patients compared with controls. While expression of this lncRNA was not different between female CIDP cases and female controls, its expression was higher in male CIDP cases compared with male controls. Similarly, expression of HULC was higher in total GBS cases compared with controls and in male cases compared with male controls. A similar pattern of expression was detected between total cases and total controls. HULC has been shown to regulate immune responses through the miR-128-3p/RAC1 axis 26. In line with our observations, miR-128-3p has been shown to be down-regulated in the cerebrospinal fluid of animal models of GBS 27. RAC1 regulates a number of inflammatory pathways such as STAT3 and NF-κB 28, and the NF-κB pathway has a documented effect in the pathogenesis of immune-related neuropathies 29. Therefore, the HULC/miR-128-3p/RAC1 axis might also be involved in the pathogenesis of CIDP and GBS.
PVT1 was up-regulated in CIDP cases compared with controls and in both male and female CIDP cases compared with sex-matched controls. Similarly, PVT1 was up-regulated in GBS cases compared with controls and in total patients compared with total controls. Contrary to this finding, we have previously reported down-regulation of PVT1 in the peripheral blood of patients with multiple sclerosis 30. Therefore, this lncRNA might have distinctive effects in these two inflammatory conditions. Expression levels of DSCAM-AS1 and SPRY4-IT1 were higher in CIDP and GBS cases compared with controls and in both sexes compared with sex-matched controls. Therefore, these lncRNAs show a consistent pattern of expression in CIDP and GBS patients, supporting their potential as biomarkers for these conditions.
Although LINC-ROR was up-regulated in total CIDP and total GBS cases compared with controls, in sex-based comparisons it was only up-regulated in male CIDP cases compared with male controls, indicating possible interactions between this lncRNA and sex-related parameters, since there was no gender-based difference in the phenotype of the patients in terms of severity of illness. Finally, expression of MEG3 was up-regulated in all subgroups of patients versus controls except in male GBS cases. Expression of MEG3 has been shown to be elevated in CD4+ T cells of patients with immune thrombocytopenic purpura, and expression of this lncRNA was reduced in CD4+ T cells cultured with dexamethasone 12. Functionally, MEG3 inhibits Foxp3 expression and increases RORγt expression, thus inducing an imbalance between regulatory T cells and Th17 cells 12. The imbalance between these subsets of T cells might participate in the pathogenesis of GBS or CIDP, as previous studies have shown the therapeutic effects of regulatory T cells in animal models of GBS 31.
The correlations between the expression levels of the mentioned lncRNAs were not meaningfully different between patients and controls based on the measured correlation coefficients. SPRY4-IT and DSCAM-AS1 could differentiate CIDP cases from controls with appropriate diagnostic power. Similarly, these lncRNAs had high power in differentiating between GBS cases and controls. Since the expression levels of the lncRNAs were almost similar between CIDP cases and GBS cases, none of the lncRNAs could appropriately differentiate between CIDP and GBS cases. Combining all lncRNAs did not significantly enhance the diagnostic power. Taken together, these lncRNAs might be involved in the development of CIDP or GBS. These transcripts might also be regarded as markers for these immune-related conditions. Future studies should appraise the expression of these transcripts in other immune-related conditions to evaluate their suitability as diagnostic markers for GBS/CIDP.

Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2021-08-11T06:17:32.750Z | 2021-08-09T00:00:00.000 | {
"year": 2021,
"sha1": "2905fc26009ba4533e9ec3af1555fdc9cd5f6842",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-95466-w.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f78d4b51c24b82989372d9c3daf931d6782460d3",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233909017 | pes2o/s2orc | v3-fos-license | Effect of Fe on the Microstructure and Mechanical Properties of Fe/FeAl2O4 Cermet Prepared by Hot Press Sintering
The Fe/FeAl2O4 cermet was prepared from Fe-Fe2O3-Al2O3 powder by hot press sintering at 1400 °C. The raw material powder particles were 2 µm (Fe), 0.5 µm (Fe2O3), and 0.5 µm (Al2O3) in diameter, respectively; the sintering pressure was 30 MPa, and the holding time was 120 min. The effects of different Fe mass ratios on the microstructure and mechanical properties of the Fe/FeAl2O4 cermet were studied. The results showed that a new ceramic phase, FeAl2O4, could be formed by an in situ reaction during hot press sintering. As the Fe mass ratio was increased, the microstructure and mechanical properties of the Fe/FeAl2O4 cermet first improved and then deteriorated. The best microstructure and mechanical properties were obtained for sample S2, where the mass ratio of Fe-Fe2O3-Al2O3 was 6:1:2. At this Fe mass ratio, the relative density was about 94%, and the Vickers hardness and bending strength were 1.21 GPa and 210.0 MPa, respectively. The reaction mechanism of Fe in the preparation process comprised the in situ synthesis reaction of FeAl2O4 and the diffusion of Fe into the FeAl2O4 grains. The increase of the Fe mass ratio improved the wettability of Fe and FeAl2O4, which increased the diffusion rate of Fe into the FeAl2O4 grains and thus increased the influence on the structure of FeAl2O4.
Introduction
Iron-based cermets combine the high hardness, good thermal stability, wear resistance and corrosion resistance of ceramics with the thermal conductivity and toughness of metals [1][2][3][4]. They have mainly been used in aviation, automotive, and petroleum engineering, as well as in machinery such as brakes and clutches [5][6][7][8][9]. The low raw material cost of iron-based cermet greatly reduces the preparation cost and promotes the wide application of metal-based ceramics. Fe has relatively good wettability with carbides such as TiC, VC, WC, ZrC, and Cr3C2. Therefore, there have been many studies on iron-based cermets with carbides as reinforcements [10][11][12][13]. However, the high price of carbides limits their use in a wide range of applications. In recent years, research on iron-based cermets using inexpensive and widely available Fe and Al2O3 as the main raw materials has seen some development.
Bansal [14] studied the interface bonding between Fe and Al2O3, which is due to the formation of the spinel phase FeAl2O4 and which improves the wettability of the metal and ceramic phases. Konopka [15] investigated the influence of Fe content on the microstructure and fracture toughness of iron-based cermet, and found that the fracture toughness of Fe/Al2O3 cermet depended on the Fe content, on the formation of FeAl2O4 during the sintering process, and on the formation of FeO around the iron particles. The formation of micro-crack defects between FeAl2O4 and Fe blunted external stresses on the cermet, leading to crack splitting and deflection. Other studies have found that the fracture toughness of the cermet improved as the amount of FeAl2O4 was increased [16,17]. The above studies demonstrate that metallic Fe has an important influence on the performance of iron-based cermets. Gupta [18][19][20][21] used Fe and Al2O3 (5-30 wt%) to prepare iron-based cermets by powder metallurgy, and found that the cermet prepared with 5 wt% Al2O3 and 95 wt% Fe had the lowest total surface wear. Shuai Li [22] prepared iron-based cermet from low-grade bauxite powder and reduced iron powder, and found that with increasing Fe content the compactness and bulk density of the cermet increased while the water absorption and microporosity decreased; the compressive strength and bending strength were also markedly affected.
In this study, hot press sintering was used to add a reinforcing Fe2O3 phase to the Fe/Al2O3 system, and FeAl2O4 was formed by the in situ reaction of Fe, Fe2O3, and Al2O3 to improve the wettability between Fe and the ceramic phase. The metallic Fe phase changes from solid to liquid during hot press sintering, which promotes the fluidity of the metallic phase within the ceramic phase, affects the material transfer process, and thus further affects the structure and mechanical properties of the Fe/FeAl2O4 cermet. Therefore, determining the law of Fe's influence on the microstructure and mechanical properties of the Fe/FeAl2O4 cermet is one of the key steps in preparing an Fe/FeAl2O4 cermet with excellent performance. In addition, Fe, Al2O3, and Fe2O3 are inexpensive and widely available; currently, they are the main components of metallurgical solid wastes such as zinc slag, steel slag, and red mud. This work could provide an important theoretical basis for the comprehensive utilization of metallurgical solid waste [23,24].
Materials and Methods
The raw materials in this experiment were analytically pure Fe powder (2 µm), Fe2O3 powder (500 nm), and Al2O3 powder (500 nm). The raw material ratios of the five sintered samples are shown in Table 1, and the ingredients were weighed according to these ratios. Absolute ethanol was used as the dispersion medium, and an XQM-2 vertical planetary ball mill was used for ball milling at 300 r/min for 10 h. After ball milling, the samples were dried in a DZF-6050 vacuum drying oven at 120 °C for 24 h, with the vacuum pumped to 100 Pa during drying. After drying, the mixed powder was passed through a 200-mesh sieve and placed in a ZT-40-21Y high-temperature hot press sintering furnace to prepare the Fe/FeAl2O4 cermet at 1400 °C and 30 MPa for 120 min, with the vacuum pumped to 10^-2 Pa during sintering. The relative density of the prepared samples was measured by the Archimedes principle; the bending strength was measured using the three-point bending method on a CMT4202 universal material testing machine with a crosshead speed of 0.5 mm/min and a span of 30 mm; and the Vickers hardness was measured at a loading force of 49.05 N (5 kg) applied for 10-15 s on a Tukon2500 Vickers hardness tester. Phase and composition analysis was carried out with an X-ray diffractometer (XPert PRO MPD, PANalytical, The Netherlands). The microstructure and elemental analyses were carried out with SEM and EDS (GeminiSEM 300, Zeiss, Germany), respectively.
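For reference, the three-point bending strength follows from the standard beam formula σ = 3FL/(2bh²). The sketch below assumes hypothetical specimen dimensions and failure load, since the bar cross-section is not given in this excerpt; only the 30 mm span comes from the text.

```python
# Three-point flexural strength from the standard beam formula.
def bending_strength_mpa(load_n: float, span_mm: float,
                         width_mm: float, height_mm: float) -> float:
    """sigma = 3*F*L / (2*b*h^2), in MPa when forces are in N and lengths in mm."""
    return 3 * load_n * span_mm / (2 * width_mm * height_mm ** 2)

# Example: a hypothetical 168 N failure load on an assumed 4 mm x 3 mm bar
# with the 30 mm span quoted in the text gives 210 MPa, the S2 value.
print(f"{bending_strength_mpa(168, 30, 4, 3):.1f} MPa")
```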
Preparation Principle of Fe/FeAl 2 O 4 Cermet
Thermodynamic analysis with the FactSage software showed that Fe, Fe2O3, and Al2O3 powders can spontaneously synthesize FeAl2O4 through an in situ reaction under the experimental conditions, following Equation (1):

Fe + Fe2O3 + 3Al2O3 → 3FeAl2O4. (1)

Because of the excess Fe content in the mixed powder, Fe2O3 and Al2O3 reacted completely in the in situ reaction, leaving redundant Fe. The FeAl2O4 produced by the in situ reaction combined with the redundant metallic Fe, and the Fe/FeAl2O4 cermet was formed during hot press sintering. The in situ reaction occurred at the three-phase interface of liquid Fe, Fe2O3, and Al2O3. This is an interface reaction-driven wetting process; according to the free energy change control theory of interfacial reactions proposed by Aksay [25], it can be described by Equation (2):

σ_SL = σ⁰_SL + ΔG_r · (V_r / A), (2)

where σ⁰_SL is the solid/liquid interface energy before the reaction, A is the interface area, ΔG_r is the free energy change produced by the interface reaction product per unit volume, and V_r is the volume of the reaction product formed at the interface. According to Aksay [25], the decrease of free energy in the interfacial reaction is the main driving force controlling the wetting process; the improvement in wettability is caused by the decrease of free energy. The more intense the interfacial reaction, the lower ΔG_r, and the better the wettability of the system. For Reaction (1), with increasing Fe content in the raw material, the in situ synthesis of FeAl2O4 becomes more intense, and the wettability between liquid Fe and FeAl2O4 improves. With the wetting of Fe and FeAl2O4, the increased concentration gradient of Fe increases the diffusion rate of Fe into the FeAl2O4 grains. As a result, Fe accumulates in the FeAl2O4 grains as they grow, which has a greater impact on the structure of FeAl2O4 at the macro level and is reflected in changes in the microstructure and mechanical properties.
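Assuming the balanced form of Equation (1) reconstructed above, a quick calculation confirms that the Fe2O3:Al2O3 = 1:2 mass ratio used in the powder mixtures is close to stoichiometric:

```python
# Stoichiometry check for Fe + Fe2O3 + 3 Al2O3 -> 3 FeAl2O4
# (molar masses in g/mol).
M_Fe, M_O, M_Al = 55.85, 16.00, 26.98
M_Fe2O3 = 2 * M_Fe + 3 * M_O          # ~159.7
M_Al2O3 = 2 * M_Al + 3 * M_O          # ~102.0

mass_ratio = (3 * M_Al2O3) / M_Fe2O3  # Al2O3 mass per unit Fe2O3 mass
print(f"stoichiometric Al2O3/Fe2O3 mass ratio = {mass_ratio:.2f}")  # ~1.92, i.e. ~1:2
```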
The FactSage thermodynamic software was also used to draw the phase diagram of the Fe-Fe2O3-Al2O3 ternary system under these experimental conditions; the composition design included samples S1, S2, S3, S4, and S5, as shown in Figure 1. Figure 2 shows the XRD results of the hot press sintered samples. The results show that the phase composition of each sample was the metal phase Fe and FeAl2O4. These results are consistent with the theoretical thermodynamic calculations and indicate that an in situ reaction occurred at the interface between liquid Fe, Fe2O3, and Al2O3 during hot press sintering.
On the other hand, it increased the concentration gradient of Fe at the interface between the Fe liquid and FeAl2O4, and increased its diffusion rate.

The point-scan and surface-scan results of the fracture energy spectrum of S2 are shown in Figures 5 and 6. The point scan showed that the grain at the spot was FeAl2O4. The surface scan showed that the bright area was the Fe phase and the dark area was the FeAl2O4 phase, which is completely consistent with the XRD results. Fe, Al, and O were distributed over the whole area, and, combined with the point-scan result, FeAl2O4 existed throughout the area. Within the region where FeAl2O4 was evenly distributed, there was a concentrated area of Fe, indicating that metallic Fe diffused into the FeAl2O4 grains during the in situ reaction. The material transfer between FeAl2O4 and Fe indicates that the wetting process followed the reaction-driven wetting mechanism. The increase of Fe liquid increased the reaction driving force, which improved the wettability between the two phases. This agrees with the interface reaction free-energy-change control theory proposed by Aksay [25].
Effect on Mechanical Properties
The relative density, Vickers hardness, and bending strength of Fe/FeAl2O4 cermets with different proportions were analyzed, as shown in Figure 7. The increased Fe content increased the relative density of the Fe/FeAl2O4 cermet, which remained at about 94% from sample S2 onward. This result is consistent with the SEM observations in Figure 4, which show that the porosity of the Fe/FeAl2O4 cermet decreased from S1 to S2 and remained essentially unchanged from S2 to S5. The Vickers hardness and the bending strength of the Fe/FeAl2O4 cermet both first increased and then decreased with increasing Fe content, reaching their maximum values in S2 (1.21 GPa and 210.0 MPa, respectively). The effect of Fe on the microstructure and mechanical properties of the Fe/FeAl2O4 cermet was caused by the wetting process between the Fe liquid and FeAl2O4. According to Aksay's theory [25], the increase in Fe content effectively improved the wettability of Fe and FeAl2O4, not only improving the bonding ability of Fe and FeAl2O4 but also providing a favorable channel for the diffusion of Fe into FeAl2O4, thereby promoting the compactness, Vickers hardness, and bending strength of the cermet. However, with a further increase in Fe content, the diffusion rate of Fe into FeAl2O4 increased, and Fe continued to accumulate in the growing FeAl2O4 grains, eventually leading to the collapse of the FeAl2O4 structure. Therefore, increasing Fe improved the mechanical properties of the Fe/FeAl2O4 cermet to a certain extent, but too much Fe intensified the reaction and wetting at the Fe/FeAl2O4 interface and degraded the FeAl2O4 structure.
The influence of Fe content on the structure and mechanical properties of the Fe/FeAl2O4 cermet is consistent with Aksay's interface reaction free-energy-change control theory [25]. The reaction mechanism of the Fe/FeAl2O4 cermet prepared by hot press sintering is summarized in Figure 8. As the Fe content increased, the in situ synthesis Reaction (1) intensified, so that the wettability between liquid Fe and FeAl2O4 was improved. After the wetting barrier between Fe and FeAl2O4 was overcome, the diffusion rate of Fe into the FeAl2O4 grains increased with increasing Fe, and Fe accumulated as the FeAl2O4 grains grew. The continuous accumulation of Fe in the FeAl2O4 grains increasingly disturbed the FeAl2O4 structure, eventually leading to its collapse. Therefore, increasing Fe was conducive to improving the wettability between Fe and FeAl2O4; however, too much Fe damaged the structure of FeAl2O4 and resulted in poor mechanical properties. In this study, the optimum Fe:Fe2O3:Al2O3 ratio was 6:1:2.
Conclusions
In this paper, Fe/FeAl2O4 cermets with different Fe contents were prepared by hot press sintering, and the following conclusions were obtained.
(1) With the increase of Fe, diffusion of Fe into the FeAl2O4 grains occurred; the density of the Fe/FeAl2O4 cermet increased, the FeAl2O4 grains continued to grow, and the bonding ability of Fe and FeAl2O4 increased. However, when the Fe:Fe2O3:Al2O3 ratio was 12:1:2 or 15:1:2, the bonding ability of Fe and FeAl2O4 decreased, and cracks and evenly distributed powdering appeared on the cermet.
(2) With the increase of Fe, the relative density of Fe/FeAl 2 O 4 cermet first increased and then remained stable. The Vickers hardness and bending strength first increased and then decreased. The relative density of the cermet was maintained at about 94%, with a Fe:Fe 2 O 3 :Al 2 O 3 ratio of 6:1:2. The Vickers hardness and bending strength reached a maximum of 1.21 GPa and 210.0 MPa, respectively.
(3) The reaction mechanism of the Fe/FeAl2O4 cermet prepared by hot press sintering comprised the in situ synthesis of FeAl2O4 and the diffusion of Fe into the FeAl2O4 grains. An increase of Fe improved the wettability between Fe and FeAl2O4 and increased the diffusion rate of Fe into the growing FeAl2O4 grains, where it continued to accumulate. Its effect on the FeAl2O4 structure was macroscopically expressed as mechanical properties of the Fe/FeAl2O4 cermet that improved initially and then deteriorated.
"year": 2021,
"sha1": "83fb71406cf3b93f38b6bce13aae2089aca4a7e2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4352/11/2/204/pdf?version=1614245391",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "4b0cd476b2b1412853d0991ff488d62507622fe9",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
249282694 | pes2o/s2orc | v3-fos-license | Grain-size dependence of water retention in a model aggregated soil
We experimentally examined the amount of water retention in a model soil composed of aggregated glass beads. The model soil was characterized by two size parameters: size of aggregates $D$ and size of monomer particles (composing aggregates) $d$. In the experiment, water was sprinkled on the model-soil system that has an open top surface and drainable sieve bottom. When the sprinkled water amount exceeded a threshold (retainable limit), draining flux balanced with the sprinkled flux. The weight variations of retained and drained water were measured to confirm this balanced (steady) state and quantify the retained water. We defined the weight of the retained water in this steady state as $W_0$ and examined the relationship among $W_0$, $d$ and $D$. As a result, it was revealed that $W_0$ increases as $d$ decreases simply due to the capillary effects. Regarding $D$ dependence, it turned out that $W_0$ becomes the maximum around $D\simeq 500$~$\mu$m. The value of $D$ maximizing water retention is determined by the void formation due to the aggregated structure, capillary effect, and gravity.
Introduction
Water retention in porous media has been studied in various fields. Glass beads are frequently used as a model material to simplify the structure of porous media in which water can be retained [1,2]. One of the most significant advantages of using glass beads is their spherical shape, which enables simple analysis of the grain contact network and pore structures. Therefore, spherical grains have been used in various experiments [3,4,5]. Moreover, most numerical works have used spherical grains [6,7]; thus, spherical grains are preferable for comparing experimental results with numerical ones. Even with spherical grains, the physical properties of the mixture of grains and water (wet granular matter) depend in a complex way on the wetting degree [8,9,10].
Monodisperse glass beads are certainly too simple to model natural porous materials. For example, natural soil has hierarchical structures, since tiny particles such as clay grains often form aggregates, using organic substances to stick together (Fig. 1(c)). There are two distinct pore-size scales in such a soil structure: the larger pores between aggregates are called macroscopic pores, and the smaller pores within each aggregate are called microscopic pores. In order to mimic such hierarchical structures, lightly sintered glass beads have been used to form aggregates [11,12]. Sintering keeps the aggregate structure stable [13] and allows the pore size distribution to be estimated. By using sintered glass-bead aggregates for the model soil, the simple structure of capillary bridges between grains can also be assumed.
In most of the previous studies, water retention in soil has been evaluated by establishing correlations between water content and pressure head [14]. Water supply by precipitation and gravity-driven drainage in soil were not directly modeled. However, considering the actual situation such as rainwater permeating into soil, the amount of retained water must be determined by the balance between the precipitation rate, drainage, and capillary suction. Besides, since the drainage process of the retained water depends on the initial water content, the entire process (from wetting to drying) should be taken into account in order to evaluate the actual water retention in soil.
As a simple experiment studying water evaporation from soil, Kondo et al. [15] focused on the drying phase in a model soil composed of glass beads by measuring the weight variation of a sample which was initially saturated with water. However, hierarchical structures were not considered in their experiment because they used monodisperse glass beads (Fig. 1(a)). In addition, their experimental system could not capture the drainage effect because the vessel they employed had a closed bottom. In natural soil, gravity-driven drainage affects the water retention and drying processes.
Therefore, in this research, we developed an experimental set-up which evaluates the evolution of the water retention resulting from wetting and drainage processes, using a model soil consisting of hierarchically structured aggregates (Fig. 1(b)). In the experiment, the hierarchical structure of soil was mimicked by a collection of sintered glass-bead aggregates. The effect of drainage was also considered by employing an open-bottom vessel. To control the water retention degree, two parameters, the size of aggregates D and the monomer particle size d, were systematically varied (Fig. 2(a)). Such hierarchical structure significantly affects the physical behaviors of dry granular materials [16,17], but the effect of granular hierarchy in wet granular matter has not been studied yet. Using this setup, we can evaluate the effects of the hierarchical structure of soil and of water drainage in wetting and drying processes. In general, the wetting and drying processes of porous granular media are complex [18,19,20,21]; here, we simply analyze the retainable water content in the hierarchically structured model soil. Although we have confirmed drying curves similar to those observed in evaporation from porous media [22], the obtained data are still preliminary, and the entire drying process will be discussed elsewhere in the future. Thus, we focus only on the water retention in this study.
As a first step toward understanding the complex nature of water retention in soil, we measured the water retention of the model soil subjected to precipitation. In particular, we focus on the relation between the amount of retained water and the two size parameters D and d.
Sample preparation
Glass beads were employed as monomer materials to form a model soil having hierarchical structure. The representative diameters of the glass beads used in the experiment were 5, 18, 100, 400, 2000 and 3000 µm (Potters-Ballotini Co., Ltd., EMB-10, P-001, As One Corp., BZ-01, 04, 2, and N6326450010302; true density ρ_g = 2.5-2.6 g/cm³). We prepared the hierarchically structured soil by sintering (650 °C, 50-90 min) a cluster of tiny monomer glass beads (d = 5, 18, 100, 400 µm). The chunk of sintered glass beads was crushed and sifted in a sieve to form aggregated grains with various size ranges (Fig. 2(b)). Glass beads of d = 18 µm were used as monomers for creating all of the aggregate samples (from XS to L); the corresponding packing fractions φ are 0.29, 0.28, 0.31, and 0.36, respectively (Table 1). The packing fraction φ was obtained from ρ_g and the bulk density, measured from the bulk volume and weight in a cylinder of 4.7 cm diameter. Glass beads of d = 5, 18, 100, and 400 µm were used for creating L-sized aggregates; they respectively have φ = 0.28, 0.36, 0.34, and 0.36. The dependence of the water content on d was discussed using L-sized aggregates consisting of d = 5, 18, 100 and 400 µm glass beads, and the D dependence was investigated using 18 µm glass beads forming various sizes of aggregates: XS, S, M and L. Non-aggregated (monomer) glass beads (5-3000 µm) were also used as monomer model soils. The packing fractions φ of the 5 µm and 18 µm monomer glass beads are 0.49 (after compression) and 0.54, respectively, and the other monomer glass beads (100-3000 µm) have φ = 0.60. The samples thus achieve random close packing except for the 5 µm and 18 µm glass beads. Monomer glass beads of d = 5 µm were compressed because the initial volume was too large to pack into the vessel; the packing fractions of d = 5 and 18 µm remain smaller than that of random close packing. For the aggregate samples, the bulk packing fraction φ is the product of the microscopic one φ_micro and the macroscopic one φ_macro, φ = φ_micro × φ_macro. We consider that φ_macro is close to random close packing, while φ_micro depends on d.
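As a minimal illustration of the product relation φ = φ_micro × φ_macro, the sketch below back-calculates φ_micro for the aggregate samples, assuming φ_macro ≈ 0.60 (the random-close-packing value measured here for the large monomer beads); this assumed φ_macro is an illustrative choice, not a fitted quantity.

```python
# Sketch: decomposing the bulk packing fraction phi = phi_micro * phi_macro
# for the aggregate samples. phi_macro ~ 0.60 is an assumption taken from
# the random-close-packing value measured for 100-3000 um monomer beads.

PHI_MACRO = 0.60  # assumed macroscopic packing of the aggregates

def phi_micro(phi_bulk: float, phi_macro: float = PHI_MACRO) -> float:
    """Microscopic (intra-aggregate) packing fraction under the product
    assumption phi = phi_micro * phi_macro."""
    return phi_bulk / phi_macro

# Reported bulk packing fractions for aggregates made of d = 18 um beads.
for label, phi in [("XS", 0.29), ("S", 0.28), ("M", 0.31), ("L", 0.36)]:
    print(f"{label}: phi = {phi:.2f} -> phi_micro ~ {phi_micro(phi):.2f}")
```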
Figure 2(c) shows a microscope observation of aggregates consisting of 400 µm glass beads. Roughly, the shape of the monomer beads was kept spherical in the sintered aggregates; therefore, we can neglect the deformation of the glass beads due to neck formation between connecting monomer grains. The pore distribution was characterized by mercury intrusion porosimetry [23,24]. Two types of monomers (d = 18 and 400 µm) and S-sized aggregates consisting of d = 18 µm beads were measured, as shown in Fig. 3. The vertical dashed lines in Fig. 3 indicate diameters of 18 µm and 400 µm. As expected, the representative pore size scales with the grain diameter, being about two to four times smaller than the constituent grains, as previously reported [24]. The plot for the aggregate sample shows a bimodal shape which originates from the microscopic and macroscopic pores.
Setup
Aggregated dry particles of fixed mass 100 g were poured into a vessel whose bottom consists of a sieve of 7.5 cm diameter with a 150 µm opening (Fig. 4). The typical sample thickness H_soil ranged over 1.5-3 cm depending on φ. To prevent the leakage of tiny grains, a paper filter (Whatman, Cat No 1001 090, cut into a circle of 7.5 cm diameter) was put on the bottom sieve. Then, a fixed amount of water (100 g) was sprinkled on the surface for about 4 minutes at a flow rate of 0.45 g/s (which corresponds to a 370 mm/h precipitation intensity) using a spray nozzle (dretec SD-800). The nozzle was held by hand to spray water all over the surface. The distance between the nozzle and the surface of the model soil was in the range of 3-4.5 cm depending on H_soil. Since the sprinkled water did not deform the sample surface at all, the effect of water inertia was negligibly small. The temporal variations of the weights of the drained water and of the soil including retained water were measured by electronic balances (A&D Co., Ltd., EK-300i) connected to a PC (Fig. 4(a)). All the experiments were performed at a constant temperature of 35 °C kept by a thermostatic chamber (Isuzu Seisakusho Co., Ltd., VTR-115). From the measured mass variations, we analyzed the water retention ability of the hierarchically structured model soil; in other words, we simply quantified the amount of water that the sample can retain against gravity-driven drainage.
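The quoted precipitation intensity follows directly from the flow rate and the vessel cross-section; a quick check, assuming the full 7.5 cm diameter is wetted uniformly and that 1 g of water occupies 1 cm³:

```python
import math

# Sketch: convert the sprinkling flow rate to a precipitation intensity,
# assuming the 7.5 cm diameter cross-section is wetted uniformly.
flow_g_per_s = 0.45                   # reported spray flow rate
diameter_cm = 7.5                     # bottom sieve / vessel diameter
area_cm2 = math.pi * (diameter_cm / 2) ** 2

depth_rate_cm_per_s = flow_g_per_s / area_cm2   # 1 g water ~ 1 cm^3
intensity_mm_per_h = depth_rate_cm_per_s * 10 * 3600
print(f"{intensity_mm_per_h:.0f} mm/h")  # ~367 mm/h, consistent with ~370 mm/h
```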
Results and Discussion
The weights of retained water and drained water were obtained as a function of time t (Fig. 5). In Fig. 5, we show the result of L-sized aggregates with d = 400 µm as a representative retention/drainage graph. Three experimental runs were performed for each experimental condition to check reproducibility; the errors shown in the following plots are the standard deviations of the three runs. When the water supply was started, the amount of retained water began to increase. Then, within a short timescale, the retained water became almost constant after the drainage started. In this regime, incoming and outgoing water are balanced and the system reaches a steady state. Although the drainage lasted for a few seconds after stopping the water supply, both weights finally approached their asymptotic values. This tendency was common to all the experimental results. We consider that the amount of retained water W_0 in this state is one of the key parameters to evaluate the water retention in soil. Thus, we measured W_0 after the drainage flow settled (Fig. 5). Although the ambient humidity varied in the range of 16-32%, its effect on the evaporation rate is negligible on the timescale of the experiments (∼5 min), since the total drying time was at least over 5 hours, which is 60 times longer.
The degrees of saturation S_r can be calculated since we measured W_0 and φ (Fig. 6). For the aggregates, the average D values, defined as XS: D = 162 µm, S: D = 545 µm, M: D = 1480 µm, and L: D = 3380 µm, are used as the representative D values. Regarding monomers, the samples were almost saturated when D ≤ 100 µm, while they were not fully saturated when D ≥ 400 µm. However, S_r is kept large up to D ∼ 2000 µm for aggregate soils. The large error of the monomer data at D = 400 µm probably comes from the transitional behavior between the saturated and non-saturated regimes. Fig. 7 displays the relation between W_0 and d or D. W_0 in aggregated soil was larger than that of non-aggregated (monomer) soil (Fig. 7(a)). Since the weight of glass beads forming the soil sample was fixed, the number of particles composing the model soil is independent of the structure when d is fixed. Thus, the total volume of microscopic pores should be almost identical when d is identical (compare Fig. 1(a) bottom and (b)). However, the volume of macroscopic pores is added in aggregated soil due to its hierarchical structure. Hence, the increase of W_0 in aggregated soil shown in Fig. 7(a) comes from the additional capacity of the macroscopic pores, which monomer soil does not possess. In Fig. 7(b), the relation between W_0 and D is presented. Again, W_0 in aggregated soil is larger than that of monomer soil. In this case, however, the difference between monomer and aggregated soils of the same diameter D is the existence of microscopic pores, because an aggregate can be regarded as a porous monomer particle (compare Fig. 1(a) upper and (b)). Since aggregates can retain water not only between the grains (macroscopic pores) but also inside the aggregates (microscopic pores), W_0 of the aggregated soil becomes larger than that of monomer soil.
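One way to obtain S_r from the measured quantities is sketched below, assuming the pore volume follows from the bulk volume as V_pore = V_bulk(1 − φ) with V_bulk = m/(ρ_g φ); the W_0 value used in the example is hypothetical, since the actual values are read from Fig. 7.

```python
# Sketch: degree of saturation S_r from retained water W0 and the bulk
# packing fraction phi. Assumes V_pore = V_bulk * (1 - phi) and
# V_bulk = m / (rho_g * phi); rho_g from the quoted 2.5-2.6 g/cm^3 range.

RHO_G, RHO_W = 2.55, 1.0   # g/cm^3

def saturation(w0_g: float, phi: float, m_g: float = 100.0) -> float:
    v_bulk = m_g / (RHO_G * phi)        # cm^3 occupied by the sample
    v_pore = v_bulk * (1.0 - phi)       # total pore volume
    return (w0_g / RHO_W) / v_pore

# Hypothetical example: 40 g of retained water in an L-sized aggregate
# sample of d = 18 um beads (phi = 0.36).
print(f"S_r ~ {saturation(40.0, 0.36):.2f}")
```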
Characteristics in d-dependent and D-dependent behaviors of W 0
The negative correlation between W_0 and d can be observed in both monomer and aggregated soils (Fig. 7(a)). The water between grains is retained by capillary force [25] via capillary bridges, whose curvature radius is roughly proportional to d. Therefore, the capillary-originated Laplace pressure decreases as d increases, and for large-d soils, gravity-driven drainage becomes dominant. The same tendency observed in aggregated soil can be explained by the same effect: the capillary effect is dominant at the microscopic pore scale, where the monomer size d is the relevant length. The amount of retained water is governed by the competition between capillary and drainage effects in both monomer and aggregated soils.
The most prominent feature confirmed in Fig. 7(b) is its nonmonotonic behavior. Specifically, W_0 of aggregated soil shows a maximum around D = 500 µm. All the aggregates used to obtain the data shown in Fig. 7(b) are composed of monomer glass beads of constant diameter d = 18 µm. Therefore, W_0 at D = 18 µm must be identical to W_0 of monomer soil with d = 18 µm; in other words, monomer soil and aggregated soil cannot be distinguished at d = D. When aggregates of size D (> d) are formed, W_0 increases because an increase in D creates larger macroscopic pores between aggregates. Thus, the hierarchical soil structure results in the increase in W_0. However, W_0 starts to decline in the gravity-dominant regime (D ≥ 500 µm). Although this characteristic length scale D ≃ 500 µm is smaller than the typical capillary length of water (∼2 mm) [25], we consider that this value must be determined by the balance between capillary force and gravity. Its specific value might be affected by the inhomogeneous sizes and shapes of the aggregates. For example, the aggregate soil plotted as D = 545 µm is an average over the range 250-840 µm, and its shape is distorted (not spherical, see Fig. 2(c)).
Discussion
In the case of aggregated soil, W_0 is the sum of the retained water stored in the microscopic pores, W_micro, and in the macroscopic pores, W_macro:

W_0^agg = W_micro + W_macro, (1)

where W_0^agg is the total W_0 stored in the aggregated soil. When the monomer and aggregated soils are composed of the same particle size d, the balance between drainage and capillary suction can be assumed identical in the microscopic pores and independent of the hierarchical structure. Besides, under this condition, the total volume of microscopic pores is also (roughly) identical because the mass of the model soil is fixed (100 g) (compare Fig. 1(a) bottom and (b)). Here, we can simply assume the relation

W_micro ≃ W_0^mono, (2)

where W_0^mono indicates W_0 of the monomer soil. Thus, although it is difficult to distinguish W_micro and W_macro only from the experiment we conducted, W_macro can be estimated, with the assumption of Eq. (2), as

W_macro = W_0^agg − W_0^mono. (3)

In Fig. 8, W_macro and W_0^mono are compared for various D. In this plot, the pores between monomer glass beads are also regarded as "macroscopic" pores. As observed in Fig. 8, W_macro of the aggregated soil is approximately three times larger than W_0^mono around D = 500 µm. This result indicates that water retention in macroscopic pores strongly depends on the hierarchical structure of the model soil. Due to the low φ of the aggregated soil, the total volume of aggregated soil is greater than that of monomer soil under the fixed-mass condition (compare Fig. 1(a) upper and (b)). This increased volume can be almost saturated at D = 500 µm (Fig. 6). However, S_r gradually decreases in the range of D > 500 µm.
The variation of S_r might result from the thickness of the saturated aquifer supported by the capillary menisci among aggregates. To evaluate this effect, we consider the balance between gravity and capillary effects: the former is estimated by the hydrostatic pressure and the latter is modeled by the Laplace pressure. The balance can be written as

ρ_w g H ≃ 2γ/R, (4)

where γ = 72.75 mN/m [25] is the surface tension, and R, ρ_w, g, and H are the radius of the pore constriction, the water density (1.0 g/cm³), the gravitational acceleration (9.8 m/s²), and the thickness of the aquifer, respectively. The high degree of saturation suggests that the retained water is connected throughout the sample, since the coalescence of capillary bridges results in liquid films across the porous material. Therefore, the Laplace pressure at the bottom is an important element in retaining water. The form of Eq. (4) actually corresponds to the definition of the Bond number if we take the characteristic length scales to be R and H, B_0 = ρ_w gHR/γ. This means that H (water retention) is governed by the effective Bond number, which should be of order unity to satisfy the pressure balance. From the given values, R must be less than 500-1000 µm in order to support the hydrostatic pressure with H ∼ 1.5-3 cm, which corresponds to the typical value of the sample thickness H_soil. The steep decrease in S_r in monomer soil from 100 µm to 400 µm (Fig. 6) is roughly consistent with this estimation. The obtained R value also agrees with the aggregate diameter D ≃ 500 µm at which W_macro shows a peak value. The more gradual and later decrease in S_r in aggregated soil compared to monomer soil is possibly due to the size distribution and shape anisotropy. The decreasing trend of W_0^mono and W_macro in the range of D ≥ 500 µm can be explained by the decrease in H; this suggests that smaller D better supports a thick aquifer. However, W_macro is not a simple decreasing function. The volume of macroscopic pores depends on D/d, and to secure sufficient macroscopic pores, large D is better. Thus, neither too large nor too small D is beneficial for retaining water. W_macro is close to W_0^mono at D ≃ 3000 µm (L-sized aggregates) despite varied d. This indicates that the difference between monomer and aggregated soils is hardly found in this range because the water drainage occurs in macroscopic pores regardless of whether the grains are aggregates or monomers.
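The bound on R quoted above can be reproduced from Eq. (4); the sketch below solves the balance for the largest pore-constriction radius that can support an aquifer of thickness H, using the stated values of γ, ρ_w, and g. The factor of 2 (spherical meniscus) is an assumption consistent with the quoted 500-1000 µm range.

```python
# Sketch: maximum pore-constriction radius R supporting a saturated aquifer
# of thickness H, from the balance rho_w * g * H ~ 2*gamma / R (Eq. (4)).

GAMMA = 72.75e-3   # N/m, surface tension of water (as quoted)
RHO_W = 1000.0     # kg/m^3
G = 9.8            # m/s^2

def r_max(h_m: float) -> float:
    return 2 * GAMMA / (RHO_W * G * h_m)

for h_cm in (1.5, 3.0):   # typical sample thickness H_soil
    print(f"H = {h_cm} cm -> R_max ~ {r_max(h_cm / 100) * 1e6:.0f} um")
# -> ~990 um for H = 1.5 cm and ~495 um for H = 3 cm, i.e. the 500-1000 um
#    bound quoted in the text
```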
Given the hydrophilicity of the glass beads, wettability needs to be taken into account when considering application to natural soils, since real soil has hydrophobic pockets/areas. W_0 is expected to be smaller than the value we obtained when the soil is hydrophobic. However, we consider that the tendency obtained in this study is useful as a first-order approximation of the water retention characteristics in hierarchically structured soil. The water flow is certainly affected by grain wettability (e.g. [20]); thus, the effect of grain wettability is an open problem for future work.
Direct observation of the retained water is a crucial next step to further understand how water is stored in the macroscopic and microscopic pores of the aggregated soil. The state of water between grains (completely saturated or in the capillary bridge regime) could also be revealed by direct observation such as X-ray microtomography [1]. This will also enable us to reveal the complexity of hierarchical effects quantitatively and to establish a more concise model.
Conclusion
The relation between the amount of retained water and the two size parameters d and D characterizing the aggregated soil structure was investigated in this study. In the measurements, water drainage (dripping from the bottom sieve) was observed from the middle of the water spraying until just after the water supply was stopped. To evaluate the water retention ability of aggregated soil, we measured the amount of retained water in the steady state, W_0, and analyzed its dependence on d and D. As a consequence, we found several characteristic features of water retention in aggregated soil. First, W_0 in aggregated soil was larger than that of non-aggregated (monomer) soil. Second, W_0 decreased as d increased when D was fixed. Finally, W_0 showed its maximum value at D ≃ 500 µm when d was fixed. Therefore, our results suggest that smaller d together with D ≃ 500 µm is best for increasing the water retention W_0. The aggregated soil can efficiently retain water not only in the microscopic pores within each aggregated particle but also between the aggregates around D ≃ 500 µm. This specific value D ≃ 500 µm is determined by the balance between capillary force and gravity under the effect of the complex geometry of the aggregates and pore structure. In this study, only spherical and hydrophilic glass beads were used to form the soil; the effects of grain shape and surface properties should be investigated for application to actual soil problems. In addition, the internal water distribution should be measured to fully understand the efficiency of water retention by the macroscopic and microscopic pores. These are important future problems.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2022-06-03T01:15:45.079Z | 2022-06-02T00:00:00.000 | {
"year": 2022,
"sha1": "32ea985a1d6500669be4ce4ae5d7a17d8419f9a6",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d475b3aa183ba773558ac2d00f62c91dad5db4a7",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
231691722 | pes2o/s2orc | v3-fos-license | Epidemiology of ectopic pregnancy at Laquintinie Douala hospital (Cameroon): prevalence survey, clinical profile, therapeutic and transfusion issues
Ectopic pregnancies (EPs) are defined as the implantation of the product of conception outside the uterine cavity. They can be life threatening, through a tubal rupture leading to haemoperitoneum, or even death from hemorrhagic shock. As such, EPs constitute a real diagnostic and therapeutic emergency. Ranging from simple pelvic pain to shock with hemodynamic disturbance or even death by hemorrhage after tubal rupture, EP is, until proven otherwise, the diagnosis to be ruled out first in a woman of childbearing age with pelvic pain and/or metrorrhagia. The incidence of EP is distributed across the planet, varying between 1-2% of pregnancies. In France, this incidence has doubled or even tripled in the last two decades.
INTRODUCTION
Ectopic pregnancies (EPs) are defined as the implantation of the product of conception outside the uterine cavity. They can be life threatening, through a tubal rupture leading to haemoperitoneum, or even death from hemorrhagic shock. As such, EPs constitute a real diagnostic and therapeutic emergency. 1 Ranging from simple pelvic pain to shock with hemodynamic disturbance or even death by hemorrhage after tubal rupture, EP is, until proven otherwise, the diagnosis to be ruled out first in a woman of childbearing age with pelvic pain and/or metrorrhagia. 1 The incidence of EP is distributed across the planet, varying between 1-2% of pregnancies. In France, this incidence has doubled or even tripled in the last two decades, with low mortality: one reported annual case. 2 In 1970, the incidence of ectopic pregnancy was 4.5 per 1,000 deliveries in the United States. 3 This incidence increased considerably, going from 4.5 to 19.7 per 1,000 deliveries in 1992. 3 In the developing world, the incidence of EP is higher, reaching 4% in some regions. 4 A study carried out at the teaching hospital of obstetrics and gynecology of Befelatanana (Madagascar) found an incidence of 2.48% in 2011; one conducted in Libreville (Gabon) found an incidence of 2.32% in 2002. 5,6 In Cameroon, Leke et al reported an incidence of 0.79% at the Central Hospital of Yaoundé, against 2.3% found by Dohbit et al at the Bafoussam Regional Hospital in 2010 and 3.45% found by Kenfack et al in Sangmelima in 2012. [7][8][9] The increased incidence of ectopic pregnancy may be associated with an increased prevalence of risk factors. Therefore, knowledge of these risk factors is essential in the case of EP. 10 It has three interests: to allow primary prevention of EP by eliminating risk factors, to allow secondary prevention by detecting EP in time in populations at risk and, finally, to try to avoid a recurrence.
The work of Coste and Job-Spira demonstrated that almost half of EPs were linked to a genital infection, more than half of which were linked to Chlamydia trachomatis infections. 11 A Swedish study by Egger demonstrated the effectiveness of national measures for the early treatment of Chlamydia infections. 12 These genital Chlamydia infections are decisive risk factors, with a frequency of 5-15% in France 3 and 69% in Libreville (Gabon); they constitute the most frequent risk factor in Cameroon, found in 95% of cases. 2 The fight against risk factors has been an active concern of the medical community for many years. Ectopic pregnancy is the most frequent gynecological surgical emergency in most developing countries, and the most frequent clinical form encountered is rupture accompanied by hemorrhage, thus engaging the vital prognosis. 2 Once the diagnosis of ectopic pregnancy has been made, the management is diversified, ranging from therapeutic abstention to surgical treatment (by laparoscopy or laparotomy) through medical treatment. 13 Although therapeutic abstention, medical treatment and laparoscopy are more and more practiced in our country, surgical management by laparotomy remains the most frequently used therapeutic modality. 14 As part of our contribution to knowledge on this subject, we set out to describe the epidemiological, clinical and therapeutic profile of ectopic pregnancies at the Laquintinie Hospital in Douala.
Type of study, duration and period
We conducted a retrospective study over 10 years, from January 1st, 2007 to December 31st, 2016, in the gynaeco-obstetrics department of the Laquintinie Hospital in Douala, by consulting the theatre registers and archived files of patients admitted for ectopic pregnancy (ruptured or not) during this period.
Study population
The target population consisted of patients from the Laquintinie Hospital in Douala who had an ectopic pregnancy between January 1st, 2007 and December 31st, 2016.
Inclusion and exclusion criteria
Included were all patients who had an ectopic pregnancy at Laquintinie Hospital in Douala between January 1st, 2007 and December 31st, 2016 with usable records. Those whose records were unusable were excluded.
Data sampling and analysis
Sampling was random. The variables of interest were: age, profession, marital status, parity, gestational age, gynecological and obstetric history, and clinical, therapeutic, transfusion and evolutionary aspects. The data were collected using a pre-tested form, entered with Microsoft Word 2010 software and analyzed with SPSS software. The p values were interpreted at the statistical threshold of 5% and the confidence intervals at 95%.
Ethical consideration
The research authorization of the director of the Laquintinie Hospital in Douala was obtained; patient anonymity and medical confidentiality were respected.
RESULTS
During the study period, we identified 933 files of patients diagnosed with ectopic pregnancy, 28 of which were unusable and therefore excluded. The final size of our sample was 905 cases of EP for 32,595 deliveries, giving a hospital incidence of 2.8%, or one EP per 36 deliveries (Figure 1).
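These headline figures follow from simple arithmetic on the reported counts:

```python
# Sketch: hospital incidence of EP from the reported counts.
cases, deliveries = 905, 32595
print(f"incidence = {cases / deliveries:.1%}")          # ~2.8%
print(f"1 EP per {deliveries / cases:.0f} deliveries")  # ~1 per 36
```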
Socio demographic characteristics
The mean and median ages of the population were 28.59 and 28 years, respectively, with a standard deviation of 5.32 and extremes of 15 and 45 years. The patients were mostly aged 25-35 years (60.6%), single (57.9%) and housewives (46.6%) (Table 1).
Clinical features
Multiple sexual partnership (≥2) was the dominant risk factor, found in 728 cases (80.4% of our sample), followed by first sexual intercourse before 18 years of age, 490 cases (54.1%), and sexually transmitted infections, 475 cases (52.5%). The patients were nulligravid in 22.7% of cases (205) and nulliparous in 60.8% of cases (550) (Table 1).
More than half of our sample (57.7%, 522 cases) consisted of referrals (Figure 2); pelvic pain was the main reason for consultation (45.5%, 419 cases) (Figure 3), while the triad of pelvic pain, amenorrhea and metrorrhagia was found in 46.1% of cases on clinical examination, the individual signs occurring in 96.9%, 77.3% and 63.6% of cases, respectively (Figure 4). Paracentesis was the most widely used diagnostic means, in 93% of cases (842), with a positivity rate of 86% (724 cases) (Table 2). Seven deaths (0.77%) from hemorrhagic shock were recorded during this period, including two after transfusion refusal on religious grounds. The treatment was exclusively surgical, by laparotomy, and mainly radical by salpingectomy, in 97% of cases (878) (Table 3).
Socio-demographic characteristics and risk factors
The average age of our population was 28.59±5.32 years, with extremes of 15 and 45 years. Our results are comparable to those of Randriambololona (an average of 30.72 years with age extremes of 18 and 48) 5 as well as those of Dohbit (average age of 29.69 years with extremes of 18 and 44). 8 Patients aged 25-35 years constituted the majority (60.6%). This is challenging in more ways than one, because this age group represents the peak reproductive years, and its high proportion should prompt counseling for behavior change, particularly sexual behavior.
Indeed, the dominant risk factor found in our study was the multiplicity of sexual partners (80.4%), followed by first sexual intercourse before the age of 18 years (54.1%) and a history of sexually transmitted infection (51.4%).
While the medical literature and previous work on the subject report more history of sexually transmitted infection, our results are not contradictory, since the multiplicity of sexual partners is a potential determinant of sexual contamination, aided by cervical immaturity when sexual intercourse begins before 18 years of age. If ectopic pregnancy is the tubal sanction of a delayed egg in a physiologically defective tube, these observations follow from the factors mentioned above, in our findings and those of other authors, and they often stem from a precarious social situation. On the social level, many of our patients were single (57.98%) and housewives (46.6%) or working in the informal sector (25.5%). This means that our sample was predominantly low-income and therefore of low economic level, and hence exposed to sexual practices that favor infections, in accordance with the work of Namaya et al.
Clinical profile and diagnostic examinations
In the majority of studies, EP is associated with low parity. 6,20,21 In our series, the average parity was 1.28; nulliparous and primiparous women were the most affected (60.8%). These results are similar to those of Lankouande et al in Ouagadougou in 1998 and those of Sindayirwanya et al in Burundi in 1991. The symptomatic triad (amenorrhea, pelvic pain and metrorrhagia) was found in 46.1% of cases, with individual frequencies of 96.9% for pelvic pain, 77.3% for amenorrhea and 63.6% for metrorrhagia, unlike the findings of Majhi, in whom pelvic pain was predominant (86.1%), followed by amenorrhea (76.1%) and metrorrhagia (42.2%); the effectiveness of the symptomatic triad nevertheless confirms data from the literature. 1,18 In a medical environment with low technical and financial resources, the clinical approach is often preponderant in our health structures, compensating both for these shortcomings and for the consultation delays common in our communities. This justifies, as for other authors in the same geographical area including Dohbit and Kouam, our 59% of cases diagnosed clinically and our reliance on paracentesis (93%), much as in Majhi. Our 7% of shock on admission attests to the delay in consultation and sometimes to inaccessibility to care, given the precariousness of our population strata, here mainly single women and housewives. 8,18,19 Hence the strategy to promote primary prevention by controlling risk factors and secondary prevention by early and effective treatment of genital infections. 11,12 The findings of other authors are heterogeneous and relate mainly to health organization and sample sizes. 8
Therapeutic aspects
As in other works from the same geographic area and elsewhere, the treatment was surgical and radical, by salpingectomy, in 97% of our series. 8,19 This radical option was justified by the high rate of ruptured EP in our sample (88.2%) with large hemoperitoneum, averaging 1556±916.36 ml. Ampullary localization was the most frequent, at 59.8%, comparable to that of Tumenta (61.9%) as well as that of Mohamed (60%). 20,21

Transfusion issues

The massive hemoperitoneum, averaging 1556±916.36 ml, explains the use of blood transfusion in 68.1% of cases (616 cases out of 905). However, in 21.5% of cases the transfusion indication was relevant but not honored because of philosophical and, more precisely, religious barriers, the outcome of which was fatal with the death of 02 patients. This problem calls for education of the community in order to remove any uncomfortable ambiguity for a practitioner faced with hemodynamic instability of hemorrhagic cause, because at the severe stage of anemia only hemoglobin can replace hemoglobin; filling with crystalloids is only a temporizing measure.
Evolutionary profile of EP
The patients arrived at the ruptured stage in 88.2% of cases (798 cases). These results are close to the 87.62% of Dohbit et al. 8 This high rate largely reflects the consultation delays common in our communities.
Complications of EP
Two complications were recorded: hemorrhagic shock, 11 cases (1.2%), and death, 07 cases (0.77%), two of the deaths following transfusion refusal on religious grounds; this corresponds to a survival rate of 99.22%. By contrast, Dohbit et al reported post-operative infection as a complication of EP (1.4%). 8
CONCLUSION
Ectopic pregnancy at the Laquintinie Hospital has a hospital incidence of 2.8% and a survival rate of 99.22%, despite religious barriers to blood transfusion, which caused 02 of the 07 deaths. The typical patient profile in our study is a single housewife, with the same determinants as those of cervical cancer, namely early sexual intercourse, multiple sexual partnership and their corollary of sexually transmitted infections. All of this calls for a strengthening of primary and, if necessary, secondary prevention strategies.
"year": 2020,
"sha1": "5775dfeb07ce1dbe69622f6cc91c165882078c12",
"oa_license": null,
"oa_url": "https://www.ijrcog.org/index.php/ijrcog/article/download/9296/6147",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cb09794d48916d4dd1021b6e8aefe0596d509ff2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
249226368 | pes2o/s2orc | v3-fos-license | Co-morbidities in Children with Severe Acute Malnutrition – A Hospital based Study
Objective: To find out the co-morbidities such as infections and micronutrient deficiencies in hospitalized children with severe acute malnutrition. Study design: In this hospital-based descriptive observational study, conducted at the Department of Pediatrics, SMS Medical College, 125 children with severe acute malnutrition were included. Patients underwent relevant investigations to find out associated infectious co-morbidities. Micronutrient deficiencies were assessed by clinical signs; vitamin D status was assessed by laboratory test. Results: 42% had diarrhea and 27% had acute respiratory tract infections as co-morbid conditions. Tuberculosis was diagnosed in 13% of cases. Anemia was present in 86% of cases. Signs of vitamin B and vitamin A deficiency were seen in 24% and 6% of cases. 97% of children had inadequate vitamin D levels. Conclusions: Timely identification and treatment of various co-morbidities is likely to break the undernutrition-disease cycle, decrease mortality and improve outcome. Nearly all SAM patients have inadequate vitamin D, so vitamin D supplementation should be given to all SAM patients.
Introduction
Malnutrition or malnourishment is a condition that results from eating a diet in which nutrients are either not enough or are too much, such that the diet causes health problems [1,2]. Too little nutrition is called undernutrition or undernourishment, while too much is called overnutrition. According to the World Health Organization (WHO), malnutrition essentially means "bad nourishment" and can refer to the quantity as well as the quality of food eaten [3,4]. Severe acute malnutrition affects an estimated 20 million children under 5 years of age and is associated with 1-2 million preventable child deaths each year [5]. Severe acute malnutrition (SAM) results from a nutritional deficit that is often complicated by marked anorexia and concurrent infective illness [6]. Similarly, malnutrition increases one's susceptibility to and severity of infections, and is thus a major component of illness and death from disease. Globally, comorbidities such as diarrhoea, acute respiratory tract infections and malaria, which result from a relatively defective immune status, remain the major causes of death among children with SAM [7]. Anemia, vitamin B complex deficiency, vitamin D deficiency, vitamin A deficiency and scurvy are the common micronutrient deficiencies seen in severely acute malnourished children [8].
This study was carried out to find out demographic data and co-morbidities such as infections and micronutrient deficiencies in children with severe acute malnutrition.
Methods
This study was conducted in the Department of Pediatrics, SMS Medical College. Immunization status of the study subjects was assessed as per the schedule of the National Immunization Programme (NIP) [11].
Infectious comorbidities were defined as per the following criteria: • Diarrhoea was defined as three or more loose stools per day for any time duration. Persistent diarrhoea was defined as an episode of diarrhoea, of presumed infectious etiology, which starts acutely but lasts for more than 14 days. Chronic diarrhoea was defined as insidious-onset diarrhoea of >2 weeks duration in children.
• Acute respiratory tract infection was defined as short duration of cough (< 2 weeks) or respiratory difficulty, age-specific fast breathing (above normal for age category), auscultatory and/or chest x-ray findings.
• UTI (Urinary tract infection) was diagnosed on the basis of suggestive clinical symptoms along with positive urine culture report.
• Meningitis was diagnosed on the basis of suggestive clinical features and confirmed by CSF examination and neuroimaging. The IAP algorithm was applied to diagnose tuberculosis in children in this study [12].
• Micronutrient deficiencies were assessed by clinical signs during general physical examination in these children, except vitamin D status, which was determined by laboratory test.
• Anaemia was defined on the basis of WHO reference values of hemoglobin (Hb) in children in the age group of 6 to 59 months [13]. Descriptive cross-tabulations were formed to examine associations.
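For reference, a minimal sketch of anaemia grading against the WHO haemoglobin cut-offs for children aged 6-59 months. The thresholds used below (mild 10.0-10.9 g/dL, moderate 7.0-9.9 g/dL, severe <7.0 g/dL) are the commonly cited WHO values and are an assumption here; they should be checked against reference [13] before reuse.

```python
# Sketch: WHO anaemia grading for children 6-59 months.
# Thresholds are the commonly cited WHO cut-offs (assumed; verify vs. [13]).

def anaemia_grade(hb_g_dl: float) -> str:
    if hb_g_dl >= 11.0:
        return "no anaemia"
    if hb_g_dl >= 10.0:
        return "mild"
    if hb_g_dl >= 7.0:
        return "moderate"
    return "severe"

for hb in (11.2, 10.4, 8.1, 5.9):
    print(hb, "->", anaemia_grade(hb))
```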
Results
In this study, 125 children with severe acute malnutrition were included. Diarrhoea (42%) and acute respiratory tract infections (27%) were the most common infectious co-morbidities, and tuberculosis was diagnosed in 13% of cases. Anaemia was present in 86% of cases; signs of vitamin B and vitamin A deficiency were seen in 24% and 6% of cases, respectively, and 97% of the children had inadequate vitamin D levels.
Discussion
Severe acute malnutrition is a well-recognized emergency with substantial morbidity and mortality that requires immediate and effective treatment. In our study, about two thirds of the children had a history of recurrent hospitalization, either for the same illness or for another illness. In the study by Madec et al [15], a history of recurrent hospitalization was present in only 4.7% of children. Unhygienic living conditions and poor socioeconomic status may be the cause of the increased rate of hospitalization in our study subjects. The rate of exclusive breast feeding was higher in our study, but weaning was not started at the recommended age, which is a predisposing factor for malnutrition. In the present study, diarrhoea was found to be the most common infectious co-morbidity. Acute respiratory tract infections were the second most common co-morbidity, followed closely by tuberculosis. In a Colombian study, 68.4% of malnourished children were suffering from diarrhea and 9% had sepsis at the time of admission [16]. Two African studies also showed a high incidence of diarrhea in SAM children, of 49% and 67% [17,18]. Though previous reports have described malnutrition as a more important risk factor for pneumonia than for diarrhea [19], diarrhea was the major co-morbid condition in our study as well.
Conclusion
Apart from nutritional rehabilitation, timely identification and treatment of various co-morbidities is likely to break the undernutrition-disease cycle, decrease mortality and improve outcome. Since nearly all SAM patients had inadequate vitamin D levels, vitamin D supplementation should be given to all SAM patients.
"year": 2022,
"sha1": "f33017f1b22db3a060b62fec2fd3c1d18fb77f6c",
"oa_license": null,
"oa_url": "https://doi.org/10.26502/jppch.74050109",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f33017f1b22db3a060b62fec2fd3c1d18fb77f6c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
44142931 | pes2o/s2orc | v3-fos-license | Microblog Conversation Recommendation via Joint Modeling of Topics and Discourse
Millions of conversations are generated every day on social media platforms. With limited attention, it is challenging for users to select which discussions they would like to participate in. Here we propose a new method for microblog conversation recommendation. While much prior work has focused on post-level recommendation, we exploit both the conversational context, and user content and behavior preferences. We propose a statistical model that jointly captures: (1) topics for representing user interests and conversation content, and (2) discourse modes for describing user replying behavior and conversation dynamics. Experimental results on two Twitter datasets demonstrate that our system outperforms methods that only model content without considering discourse.
Introduction
Online platforms have revolutionized the way individuals collect and share information (O'Connor et al., 2010;Lee and Ma, 2012;Bakshy et al., 2015), but the vast bulk of online content is irrelevant or unpalatable to any given individual. A user interested in political discussion, for instance, might prefer content concerning a specific candidate or issue, and only then if discussed in a positive light without controversy (Adamic and Glance, 2005;Bakshy et al., 2015).
How do individuals facing such large quantities of superfluous material select which conversations to engage in, and how might we better algorithmically recommend conversations suited to individual users? We approach this problem from a microblog conversation recommendation framework. Where prior work has focused on the content of individual posts for recommendation (Chen et al., 2012; Yan et al., 2012; Vosecky et al., 2014; He and Tan, 2015), we examine the entire history and context of a conversation, including both topical content and discourse modes such as agreement, question-asking, argument and other dialogue acts (Ritter et al., 2010). 1 And where Backstrom et al. (2013) leveraged conversation reply structure (such as previous user engagement), their model is unable to predict first entry into new conversations, while ours is able to predict both new and repeated entry into conversations based on a combination of topical and discourse features.

Figure 1: Snippets of two Twitter conversations. Conversation 1 ... [U1]: The sheer cognitive dissonance required for a "liberal" to say Clinton is as bad as Trump is just staggering. [U2]: Hillarists, Troll; they insult Liberals trying to distract from Hillary's Conseratism. [U i ]: the message is posted by user U i . "-" is the dividing line between training history and test part. U 1 did not reengage in Conversation 1 but reengaged in Conversation 2.
To illustrate the interplay between topics and discourse, Figure 1 displays two snippets of conversations on Twitter collected during the 2016 United States presidential election. User U 1 participates in both conversations. The first conversation is centered around Clinton, and U 1 , who is more typically involved with conversations about candidate Sanders, does not return. In the second conversation, however, U 1 is involved in a heated back-and-forth debate, and thus is drawn back to a conversation that they may otherwise have abandoned but for their enjoyment of adversarial discourse.
Effective conversation prediction and recommendation requires an understanding of both user interests and discourse behaviors, such as agreement, disagreement, inquiry, backchanneling, and emotional reactions. However, acquiring manual labels for both is a time-consuming process and hard to scale to new datasets. We instead propose a unified statistical learning framework for conversation recommendation, which jointly learns (1) hidden factors that reflect user interests based on conversation history, and (2) topics and discourse modes in ongoing conversations, as discovered by a novel probabilistic latent variable model. Our model is built on the success of collaborative filtering (CF) in recommendation systems, where latent dimensions of product ratings or movie reviews are extracted to better capture user preferences (Linden et al., 2003; Salakhutdinov and Mnih, 2008; Wang and Blei, 2011; McAuley and Leskovec, 2013). To the best of our knowledge, we are the first to model both topics and discourse modes as part of a CF framework and apply it to microblog conversation recommendation. 2 Experimental results on two Twitter conversation datasets show that our proposed model yields significantly better performance than state-of-the-art post-level recommendation systems. For example, by leveraging both topical content and discourse structure, our model achieves a mean average precision (MAP) of 0.76 on conversations about the U.S. presidential election, compared with 0.70 by McAuley and Leskovec (2013), which only considers topics. We further conducted detailed analysis of the latent topics and discourse modes and found that our model can discover reasonable topic and discourse representations, which play an important role in characterizing reply behaviors. Finally, we also provide a pilot study on recommendation for first-time replies, which shows that our model outperforms comparable recommendation systems.
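Since results are reported as mean average precision (MAP), a short reference implementation may make the metric concrete. This is a standard MAP computation over per-user ranked conversation lists, not code from the paper; the input format is an assumption for illustration.

```python
# Sketch: mean average precision (MAP) over per-user ranked conversations.
# Each inner list gives the relevance (1 = the user replied) of the
# recommended conversations for one user, in ranked order.

def average_precision(rels):
    hits, score = 0, 0.0
    for rank, rel in enumerate(rels, start=1):
        if rel:
            hits += 1
            score += hits / rank     # precision at this recall point
    return score / hits if hits else 0.0

def mean_average_precision(ranked_relevance):
    ap = [average_precision(rels) for rels in ranked_relevance]
    return sum(ap) / len(ap)

print(mean_average_precision([[1, 0, 1, 0], [0, 1, 1]]))  # ~0.71
```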
The rest of this paper is structured as follows. The related work is discussed in Section 2. We then present our microblog conversation recommendation model in Section 3. The experimental setup and results are described in Sections 4 and 5. Finally, we conclude in Section 6.
Related Work
Social media has attracted increasing attention in digital communication research (Agichtein et al., 2008; Kwak et al., 2010; Wu et al., 2011). The problem studied here is closely related to work on recommendation and response prediction in microblogs (Artzi et al., 2012; Hong et al., 2013), where the goal is to predict whether a user will share or reply to a given post. Existing methods focus on measuring features that reflect personalized user interests, including topics (Hong et al., 2013) and network structures (Pan et al., 2013; He and Tan, 2015). These features have been investigated under a learning-to-rank framework (Duan et al., 2010; Artzi et al., 2012), graph ranking models (Yan et al., 2012; Feng and Wang, 2013; Alawad et al., 2016), and neural network-based representation learning methods (Yu et al., 2016).
In contrast to prior work that focuses on post-level recommendation, we tackle the challenge of predicting user reply behavior at the conversation level. In addition, our model not only captures latent factors such as the topical interests of users, but also leverages automatically learned discourse structure. Much of the previous work on discourse structure and dialogue acts has relied on labeled data (Jurafsky et al., 1997; Stolcke et al., 2000), while unsupervised approaches (Woszczyna and Waibel, 1994; Crook et al., 2009; Ritter et al., 2010; Joty et al., 2011) have not been applied to the problem of conversation recommendation.
Our work is also in line with conversation modeling for social media discussions (Ritter et al., 2010; Budak and Agrawal, 2013; Louis and Cohen, 2015; Cheng et al., 2017). Topic modeling has been employed to identify conversation content on Twitter (Ritter et al., 2010); in this work, we propose a probabilistic model that captures both topics and discourse modes as latent variables. A further line of work studies the reposting and reply structure of conversations (Gómez et al., 2011; Laniado et al., 2011; Backstrom et al., 2013; Budak and Agrawal, 2013), but none of it distinguishes the rich discourse functions of replies, which are modeled and exploited in our work.
The Joint Model of Topic and Discourse for Recommendation
Our proposed microblog conversation recommendation framework is based on collaborative filtering and a novel probabilistic graphical model. Concretely, our objective function takes the form:

$$\mathcal{O}(\Theta) = L + \mu \cdot NLL(C \mid \Theta) \tag{1}$$

This function encodes two types of information. First, $L$ models user reply preference in a similar fashion to collaborative filtering (CF) (Hu et al., 2008; Pan et al., 2008). It captures the topics of interest and discourse structures users are commonly involved in (e.g., argumentation), and takes the form of a mean square error (MSE) based on user reply history. This part is detailed in Section 3.1. The second term, $NLL(C \mid \Theta)$, denotes the negative log-likelihood of a set of conversations $C$, with $\Theta$ containing all parameters. A probabilistic model is described in Section 3.2 that shows how the topical content and discourse structures of conversations are captured by these latent variables.
The hyperparameter µ controls the trade-off between the two effects. An ℓ2 regularization term is also added to the parameters to avoid model overfitting.
For the rest of this section, we first present the construction of L and NLL(C | Θ) in Sections 3.1 and 3.2. We then discuss how these two components can be mutually informed by each other in Section 3.3. Finally, the generative process and parameter learning are described in Section 3.4.
Reply Preference (L)
Our user reply preference modeling is built on the success of collaborative filtering (CF) for product ratings. However, classic CF problems, such as product recommendation, generally rely on explicit user feedback. Unlike user ratings on products, our input lacks explicit feedback from users about negative preferences and non-response. Therefore, we follow one-class collaborative filtering (Hu et al., 2008; Pan et al., 2008), which weights positive instances higher during training and is thus suited to our data. Formally, for user u and conversation c, we measure reply preference based on the MSE between a predicted preference score p_{u,c} and the reply history r_{u,c}, where r_{u,c} equals 1 if u is in the conversation history and 0 otherwise. The first term of the objective (Eq. 1) takes the following form:

$$L = \sum_{u \in U} \sum_{c \in C} f_{u,c} \, (p_{u,c} - r_{u,c})^2 \tag{2}$$

where U consists of the users {u} and C is the set of conversations {c} in a dataset. f_{u,c} is the corresponding weight for a conversation c and a target user u. Intuitively, it takes a large value if positive feedback (the user replied) is observed. Therefore, we adapt the formulation from Pan et al. (2008):

$$f_{u,c} = 1 + s \cdot r_{u,c} \tag{3}$$

where s > 1 is an integer hyperparameter to be tuned. Inspired by prior models (Koren et al., 2009; McAuley and Leskovec, 2013), we propose the following latent factor model to describe p_{u,c}:

$$p_{u,c} = a + b_u + b_c + \lambda \, \gamma^U_u \cdot \gamma^C_c + (1 - \lambda) \, \delta^U_u \cdot \delta^C_c \tag{4}$$

γ^U_u and γ^C_c are K-dimensional latent vectors that encode topic-specific information (where K is the number of latent topics) for users and conversations. Specifically, γ^U_u reflects the topical interests of u, with a higher value of γ^U_{u,k} indicating greater interest by u in topic k. γ^C_c captures the extent to which topics are discussed in conversation c.
Similarly, D-dimensional vectors δ^U_u and δ^C_c capture the role of discourse structure in shaping reply behaviors (where D is the number of discourse clusters). δ^U_u reflects the discourse behaviors u prefers (for example, U1 in Figure 1 often enjoys arguments, as in the second conversation), while δ^C_c captures the discourse modes used throughout conversation c. By taking the dot product of user and conversation factors, we can measure the corresponding similarity. The predicted score p_{u,c} thereby reflects the tendency of a user u to become involved in conversation c.
As pointed out by McAuley and Leskovec (2013), such latent vectors often encode hidden factors that are hard to interpret under a CF framework. Therefore, in Section 3.2, we present a novel probabilistic model that can extract interpretable topics and discourse modes as word distributions. We then describe how they can be aligned with the latent vectors γ^C and δ^U.
Parameter a is an offset, b_u and b_c are user and conversation biases, and λ ∈ [0, 1] is the weight trading off the topic and discourse factors in reply preference modeling (cf. Eq. 4).
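To make the preference model concrete, the following minimal NumPy sketch computes the predicted scores of Eq. 4 and the weighted one-class objective of Eqs. 2-3 for a toy user-conversation matrix; all array names, sizes, and values are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_convs, K, D = 4, 5, 3, 2   # toy sizes (illustrative)
s, lam = 200, 0.5                     # confidence weight and topic/discourse trade-off

# Latent factors and biases (Eq. 4); randomly initialized for the sketch.
gamma_U, gamma_C = rng.normal(size=(n_users, K)), rng.normal(size=(n_convs, K))
delta_U, delta_C = rng.normal(size=(n_users, D)), rng.normal(size=(n_convs, D))
a = 0.0
b_u, b_c = rng.normal(size=n_users), rng.normal(size=n_convs)

# r[u, c] = 1 if user u replied in conversation c's history, else 0.
r = rng.integers(0, 2, size=(n_users, n_convs)).astype(float)

# Eq. 4: p = a + b_u + b_c + lam * gamma_U . gamma_C + (1 - lam) * delta_U . delta_C
p = (a + b_u[:, None] + b_c[None, :]
     + lam * gamma_U @ gamma_C.T
     + (1 - lam) * delta_U @ delta_C.T)

# Eqs. 2-3: weighted MSE, with positives up-weighted by f = 1 + s * r.
f = 1.0 + s * r
L = np.sum(f * (p - r) ** 2)
print(f"one-class CF objective L = {L:.2f}")
```

In practice the latent factors would be learned by minimizing L (plus the corpus term of Eq. 1) rather than sampled at random as above.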
Corpus Likelihood NLL(C | Θ)
Here we present a novel probabilistic model that learns coherent word distributions for the latent topics and discourse modes of conversations. Formally, we assume that each conversation c ∈ C contains M_c messages, and each message m has N_{c,m} words. We distinguish three latent components underlying conversations (discourse, topic, and background), each with its own type of word distribution. At the corpus level, there are K topics represented by word distributions φ^T_k (k = 1, ..., K) and D discourse modes represented by φ^D_d (d = 1, ..., D). In addition, we add a background word distribution φ^B to capture general information (e.g., common words) that indicates neither discourse nor topic. φ^D_d, φ^T_k, and φ^B are all multinomial word distributions over a vocabulary of size V. More details are given below.
Message-level Modeling. Our model assigns two message-level multinomial variables to each message: z_{c,m} reflects its latent topic and d_{c,m} represents its discourse mode.

Topic assignments. Due to the short nature of microblog posts, we assume each message m in conversation c contains only one topic, indexed as z_{c,m}. This strategy has proven useful for alleviating data sparsity in topic inference (Quan et al., 2015). We further assume that messages in the same conversation focus on similar topics. We thus draw the topic z_{c,m} ∼ Multi(θ_c), where θ_c denotes the fractions of topics discussed in conversation c.

Discourse assignments. To capture the discourse behaviors of a user u, a distribution π_u is used to represent the discourse modes of the messages posted by u. The discourse mode d_{c,m} of message m is then generated from π_{u_{c,m}}, where u_{c,m} is the author of m in c.
Word-level Modeling. We aim to separate discourse, topic, and background information in conversations. Therefore, for each word w_{c,m,n} of message m, a ternary switcher x_{c,m,n} ∈ {DISC, TOPIC, BACK} controls which of the three types the word falls into: discourse, topic, or background.

Discourse words (DISC) are indicative of the discourse modes of messages. When x_{c,m,n} = DISC (i.e., w_{c,m,n} is assigned as a discourse word), word w_{c,m,n} is generated from the discourse word distribution φ^D_{d_{c,m}}, where d_{c,m} is the discourse assignment of message m.

Topic words (TOPIC) describe the topical focus of a conversation. When x_{c,m,n} = TOPIC, w_{c,m,n} is assigned as a topic word and generated from φ^T_{z_{c,m}}, the word distribution of the topic of m. Background words (BACK) capture general information unrelated to discourse or topic. When word w_{c,m,n} is assigned as a background word (x_{c,m,n} = BACK), it is drawn from the background distribution φ^B.
Switching among Topic, Discourse, and Background. We further assume the word type switcher x_{c,m,n} is sampled from a multinomial distribution that depends on the current discourse mode d_{c,m}. The intuition is that messages of different discourse modes may show different distributions over the three word types; for instance, a statement message may contain more content words than a rhetorical question. Specifically, x_{c,m,n} ∼ Multi(τ_{d_{c,m}}), where τ_d is a 3-dimensional stochastic vector giving the probabilities of the three kinds of words (DISC, TOPIC, BACK) when the discourse assignment is d. Stop words and punctuation are forced to be labeled as discourse or background words. By explicitly distinguishing word types with the switcher x_{c,m,n}, we can thus separate the word distributions that reflect discourse, topic, and background information.
Likelihood. Based on the message-level and word-level generation process, the probability of observing the words in the given corpus is:

$$\Pr(C \mid \Theta) = \prod_{c \in C} \prod_{m=1}^{M_c} \theta_{c,z_{c,m}} \, \pi_{u_{c,m},d_{c,m}} \prod_{n=1}^{N_{c,m}} \tau_{d_{c,m},x_{c,m,n}} \, \Pr\big(w_{c,m,n} \mid x_{c,m,n}, z_{c,m}, d_{c,m}\big) \tag{5}$$

where Pr(w | x, z, d) equals φ^D_{d,w}, φ^T_{z,w}, or φ^B_w according to the value of the switcher x. We use the negative log-likelihood to model the corpus likelihood effect in Eq. 1, i.e., NLL(C | Θ) = −log Pr(C | Θ), where the parameter set is Θ = {θ, π, φ, τ, z, d, x}.
Mutually Informed User Preference and Latent Variables

As mentioned above, the hidden factors discovered in Section 3.1 lack interpretability, which can be boosted by the learned latent topics and discourse modes in Section 3.2. However, it is nontrivial to link the topic-related parameters γ^C_c to the conversation topic distributions θ_c, since the former take real values from −∞ to +∞ while the latter is a stochastic vector. Therefore, we follow the strategy from McAuley and Leskovec (2013) and apply a softmax function over γ^C_c:

$$\theta_{c,k} = \frac{\exp(\kappa^T \gamma^C_{c,k})}{\sum_{k'=1}^{K} \exp(\kappa^T \gamma^C_{c,k'})} \tag{6}$$

We further assume that the discourse mode preference of users, δ^U_u, can also be informed by the discourse mode distribution captured by π_u; i.e., a user who enjoys arguments may be willing to participate in another. So, similarly, we define:

$$\pi_{u,d} = \frac{\exp(\kappa^D \delta^U_{u,d})}{\sum_{d'=1}^{D} \exp(\kappa^D \delta^U_{u,d'})} \tag{7}$$

where κ^T and κ^D are learnable parameters that control the "peakiness" of the transformation. For example, a larger κ^T indicates a more focused conversation, while a smaller κ^T means users discuss diverse topics. Finally, the softmax transformation is also applied to φ^T_k, φ^D_d, φ^B, and τ_d, as done in McAuley and Leskovec (2013), with additional parameters ψ^T_k, ψ^D_d, ψ^B, and χ_d (as shown in Figure 2). This ensures that the distributions φ and τ_d are stochastic vectors. In doing so, these distributions can be learned by optimizing ψ and χ_d, which may take any value, so that the cost function in Eq. 1 can be optimized without parameter constraints.
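The softmax link of Eqs. 6-7 is easy to inspect numerically. The snippet below (a sketch, with a made-up latent vector) shows how the learnable peakiness parameter κ sharpens or flattens the resulting topic distribution.

```python
import numpy as np

def peaky_softmax(gamma_c: np.ndarray, kappa: float) -> np.ndarray:
    """Eq. 6: map a real-valued latent vector to a stochastic topic vector."""
    z = kappa * gamma_c
    z -= z.max()                      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

gamma_c = np.array([2.0, 0.5, -1.0])  # illustrative latent topic vector
for kappa in (0.5, 1.0, 5.0):
    print(kappa, np.round(peaky_softmax(gamma_c, kappa), 3))
# Larger kappa concentrates mass on the dominant topic (a "focused" conversation);
# smaller kappa flattens the distribution (diverse topics).
```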
Generative Process and Model Learning
Our word generation process is displayed in Figure 2 and described as follows:
• For each conversation c: compute the topic distribution θ_c by Eq. 6; for each message m in c, draw the topic assignment z_{c,m} ∼ Multi(θ_c) and the discourse assignment d_{c,m} ∼ Multi(π_{u_{c,m}}) (Eq. 7).
• For each word n of message m: draw the word type switcher x_{c,m,n} ∼ Multi(τ_{d_{c,m}}); then draw the word w_{c,m,n} from φ^D_{d_{c,m}}, φ^T_{z_{c,m}}, or φ^B according to x_{c,m,n}.

Parameter Learning. For learning, we randomly initialize all learnable parameters and then alternate between the following two steps:

Step 1. Fix the topic and discourse assignments z and d and the word type switcher x, then optimize the remaining parameters in Eq. 1 by L-BFGS (Nocedal, 1980).

Step 2. Sample the topic and discourse assignments z and d at the message level and the word type switcher x at the word level, using distributions computed from the parameters optimized in Step 1.

Step 2 is analogous to Gibbs sampling (Griffiths, 2002) in probabilistic graphical models such as LDA (Blei et al., 2003). However, unlike previous models, the multinomial distributions in our model are not drawn from a Dirichlet prior. Instead, they are computed from the parameters learned in Step 1.
Our learning process stops when the change in parameters is small (i.e., below a pre-specified convergence threshold).
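The generative story above can be simulated directly. The sketch below samples one message under assumed tiny vocabularies and hand-picked distributions (θ, π, τ, and the φ's are all placeholders, not learned values), mirroring the topic → discourse → word-type → word cascade.

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = ["clinton", "sanders", "why", "?", "agree", "the", "a"]  # toy vocabulary
K, D, V = 2, 2, len(vocab)

theta_c = np.array([0.7, 0.3])                     # topic mix of conversation c (Eq. 6)
pi_u = np.array([0.4, 0.6])                        # discourse preference of the author (Eq. 7)
tau = np.array([[0.5, 0.2, 0.3],                   # per-discourse P(DISC, TOPIC, BACK)
                [0.2, 0.6, 0.2]])
phi_T = rng.dirichlet(np.ones(V), size=K)          # topic word distributions
phi_D = rng.dirichlet(np.ones(V), size=D)          # discourse word distributions
phi_B = rng.dirichlet(np.ones(V))                  # background word distribution

z = rng.choice(K, p=theta_c)                       # message-level topic z_{c,m}
d = rng.choice(D, p=pi_u)                          # message-level discourse mode d_{c,m}
words = []
for _ in range(6):                                 # N_{c,m} = 6 words
    x = rng.choice(3, p=tau[d])                    # word type switcher: 0=DISC, 1=TOPIC, 2=BACK
    dist = (phi_D[d], phi_T[z], phi_B)[x]
    words.append(vocab[rng.choice(V, p=dist)])
print(f"topic={z} discourse={d} message: {' '.join(words)}")
```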
Experimental Setup
Datasets. We collected two microblog conversation datasets from Twitter for our experiments: one contains discussions about the U.S. presidential election (henceforth US Election); the other gathers conversations on diverse topics based on the tweets released by the TREC 2011 microblog track (henceforth TREC). US Election was collected from January to June of 2016 using Twitter's Streaming API with a small set of political keywords. To recover conversations, the Tweet Search API was used to retrieve messages with "in-reply-to" relations, collecting tweets recursively until full conversations were recovered. Statistics of the datasets are shown in Table 1. Figure 3 displays the number of conversations individual users participated in. As can be seen, most users are involved in only a few conversations; simply leveraging personal chat history will not produce good performance for conversation recommendation.
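The recursive recovery of conversations from "in-reply-to" relations can be sketched offline as follows; tweets are represented as plain dicts, and the field names (id, in_reply_to) are placeholders rather than the actual Twitter API schema.

```python
from collections import defaultdict

def build_conversations(tweets):
    """Group tweets into root-anchored conversations by following reply links."""
    by_id = {t["id"]: t for t in tweets}
    def root_of(t):
        while t.get("in_reply_to") in by_id:       # walk up until the root tweet
            t = by_id[t["in_reply_to"]]
        return t["id"]
    convs = defaultdict(list)
    for t in tweets:
        convs[root_of(t)].append(t)
    return dict(convs)

tweets = [
    {"id": 1, "in_reply_to": None, "text": "Primary results tonight!"},
    {"id": 2, "in_reply_to": 1, "text": "Sanders takes the lead."},
    {"id": 3, "in_reply_to": 2, "text": "Source?"},
    {"id": 4, "in_reply_to": None, "text": "Unrelated thread."},
]
print({root: [t["id"] for t in msgs]
       for root, msgs in build_conversations(tweets).items()})
# {1: [1, 2, 3], 4: [4]}
```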
In our experiments, we predict whether a user will engage in a conversation given the previous messages in that conversation and the past conversations the user is involved in. For model training and testing, we divide conversations into three ordered segments, corresponding to training, development, and test sets at 75%, 12.5%, and 12.5%.

Preprocessing and Hyperparameter Tuning. For preprocessing, links, mentions (i.e., @username), and hashtags in tweets were replaced with the generic tags "URL", "MENTION", and "HASHTAG". We then utilized the Twitter NLP tool (Gimpel et al., 2011; Owoputi et al., 2013) for tokenization and non-alphabetic token removal. We removed stop words and punctuation for all comparison systems to ensure comparable performance. We maintain a vocabulary of the 5,000 most frequent words.
Our model parameters are tuned on the development set based on grid search, i.e., the parameters that give the lowest value of our objective are selected. Specifically, the numbers of discourse modes (D) and topics (K) are both tuned to 10. The trade-off parameter µ between user preference and corpus negative log-likelihood takes a value of 0.1, and λ, the parameter balancing topic and discourse, is set to 0.5. Finally, the confidence parameter s takes a value of 200 to give higher weight to positive instances, i.e., a user replied to a conversation.
Evaluation Metrics. Following prior work on social media post recommendation (Chen et al., 2012; Yan et al., 2012), we treat conversation recommendation as a ranking problem. Therefore, popular information retrieval evaluation metrics, including precision at K (P@K), mean average precision (MAP) (Manning et al., 2008), and normalized Discounted Cumulative Gain at K (nDCG@K) (Järvelin and Kekäläinen, 2002), are reported. The metrics are computed per user and then averaged over all users. Values range from 0.0 to 1.0, with higher values indicating better performance.
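For reference, the per-user ranking metrics can be computed as in the sketch below; these are standard textbook definitions, and the toy ranking is invented for illustration.

```python
import math

def average_precision(ranked, relevant):
    """MAP building block: mean of precision@k at each relevant hit."""
    hits, precisions = 0, []
    for k, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def ndcg_at_k(ranked, relevant, k=5):
    """Binary-gain nDCG@k: discounted gain over the ideal ranking's gain."""
    dcg = sum(1 / math.log2(i + 1)
              for i, item in enumerate(ranked[:k], start=1) if item in relevant)
    ideal = sum(1 / math.log2(i + 1) for i in range(1, min(len(relevant), k) + 1))
    return dcg / ideal if ideal else 0.0

ranked = ["c3", "c1", "c7", "c2", "c5"]      # conversations ranked for one user
relevant = {"c1", "c2"}                      # conversations the user actually joined
print(average_precision(ranked, relevant))   # 0.5
print(ndcg_at_k(ranked, relevant))           # ~0.65
```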
We further compare results with three established recommendation models:

• OCCF: one-class collaborative filtering (Pan et al., 2008), which only considers users' reply history without modeling content in conversations.
• RSVM: ranking SVM (Joachims, 2002), which ranks conversations for each user using content and Twitter features as in Duan et al. (2010).
• CTR: messages in one conversation are aggregated into one post and a state-of-the-art collaborative filtering-based post recommendation model is applied (Chen et al., 2012).
Finally, we also adapt the "hidden factors as topics" (HFT) model proposed by McAuley and Leskovec (2013) (henceforth ADAPTED HFT). Because the original model leverages ratings for all product reviews and does not handle implicit user feedback well, we replace its user preference objective function with ours (Eq. 2).
Experimental Results
In this section, we first discuss our main evaluation in Section 5.1. A case study and corresponding discussion are presented in Section 5.2 to provide further insights, followed by an analysis of the topics and discourse modes discovered by our model (Section 5.3). We also examine our performance on first-time replies (Section 5.4).
Conversation Recommendation Results
Experimental results are displayed in Table 2, where our model yields statistically significantly better results than the baselines and comparison systems (paired t-tests, p < 0.01). For P@K, we only report P@1, because a significant number of users participate in only 1 or 2 conversations. For nDCG@K, different K values were experimented with and yielded similar trends, so only nDCG@5 is reported. We find that baselines that rank conversations with simple features (e.g., length or popularity) perform poorly. This implies that generic algorithms that consider neither conversation content nor user preference cannot produce reasonable recommendations.
Although some of the non-baseline systems capture content in one way or another, only ADAPTED HFT and our model exploit latent topic models to better represent the content of tweets, and they outperform the other methods.
Compared to ADAPTED HFT, which only considers latent topics under a collaborative filtering framework, our model extracts both topics and discourse modes as latent variables, and shows superior performance on both datasets. Our discourse variables go beyond topical content to capture social behaviors that affect user engagement, such as arguments, question-asking, agreement, and other discourse modes.
Training with Varying Conversation History.
To test model performance under different levels of user engagement history, we further experiment with varying the length of conversations used for training. Specifically, in addition to using 75% of the conversation history, we also extract the first 25% and 50% of the history for training. The rest of each conversation is split equally between development and test. Figure 4 shows the MAP scores for the US Election and TREC datasets. The increasing MAP for all methods as the training history grows indicates that conversation history is generally essential for recommendation. Our model performs consistently better over different lengths of conversation history.
Results for Varying Degrees of Data Sparsity. From Table 1 and Figure 3, we observe that most users in our datasets are involved in only a few conversations. In order to study the effects of data sparsity on recommendation models, we examine in Figure 5 the MAP scores for users engaged in varying numbers of conversations, as measured on the TREC dataset. The results on the US Election dataset have similar distributions. As we can see, prediction results become worse for users involved in fewer conversations. This indicates that data sparsity is a challenge for all recommendation models. We also observe that our model performs consistently better than the other models over different degrees of sparsity. This implies that effectively capturing discourse structure in conversation context helps mitigate the effects of data sparsity.

Table 3: Predicted recommendation scores by different models of U1 for conversations c1 and c2 in Figure 1. U1 later replies to c2 but not c1; our model predicts a score of 0.961 for c2 (higher than 0.924 for c1).
Case Study and Discussion
Here we present a case study based on the sample conversations in Figure 1. Recall that user U1 is interested in conversations about Sanders and also prefers more argumentative discourse, and thus returns in conversation c2 but not c1. Table 3 shows the predicted scores for the two conversations from OCCF, ADAPTED HFT, and our model (as in Eq. 4). Both ADAPTED HFT and our model correctly recommend c2 over c1, with our model producing a slightly higher recommendation score for c2. Table 4 shows the latent dimension values of the learned topics and discourse modes for this user and these two conversations. Based on human inspection, topic 1 appears to contain words about Sanders, which is the main topic in conversation c2; topic 2 is about Clinton, the dominant topic in conversation c1. Our model also picks up the user's interest in topic 1 (Sanders), and thus assigns γ^U_{u1,1} a high value. For discourse modes, our model likewise generates a high score for the "argument" discourse mode (labeled via human inspection) for both the user and c2.
Further Analysis of Topic and Discourse
Ablation Study. We have shown that jointly modeling topical content and discourse modes produces superior performance for our model. Here we provide an ablation study to examine the relative contributions of those two aspects by setting the trade-off parameter λ to 1.0 (topic only) or 0.0 (discourse only). Table 5 shows that topics or discourse individually improve slightly upon the comparison ADAPTED HFT, but only jointly do they improve upon it significantly.

Topic Coherence. To examine the quality of topics found by our model, we use the C_V topic coherence score measured via the open-source toolkit Palmetto, which has been shown to produce evaluations comparable to human judgment (Röder et al., 2015). Our model achieves topic coherence scores of 0.343 and 0.376 on the TREC and US Election datasets, compared to 0.338 and 0.371 for the topics from ADAPTED HFT.

Sample Discourse Modes. While our topic word distributions are relatively unsurprising, of greater interest are the discourse mode word distributions. Table 6 shows a sample of discourse modes as labeled by humans. Although this is merely a qualitative judgment at this point, there appears to be notable overlap in discourse modes between the two datasets even though they were learned separately.
First-Time Reply Results
From a recommendation perspective, users may be interested in joining new conversations. We thus compare the recommendation systems on first-time replies. For each user, we only evaluate conversations in which they are newcomers. Table 7 shows that, unsurprisingly, all systems perform poorly on this task, though our model performs slightly better. This suggests that other features, e.g., network structure or other discussion thread features, could usefully be included in future studies targeting new conversations.
Conclusion
This paper has presented a framework for microblog conversation recommendation via joint modeling of topics and discourse modes. Experimental results show that our method outperforms competitive approaches that omit user discourse behaviors. Qualitative analysis shows that our joint model yields meaningful topic and discourse representations.
SIMULATION AND OPTIMIZATION OF MULTISTAGE COMPRESSED DMR NATURAL GAS LIQUEFACTION PROCESS
In order to improve the DMR (double mixed refrigerant) liquefaction process and reduce the operating cost of natural gas liquefaction plants, a four-stage DMR process optimization simulation model was established in Aspen HYSYS v8.4, and the aim of the optimization was achieved by using a segmented compression process. The minimum energy consumption and the highest exergy efficiency were used as the objective functions. Using the optimizer in HYSYS, the process parameters and the compositions of the mixed refrigerants in the four-stage DMR process were optimized, and the best process parameters and refrigerant compositions were obtained. From the process power consumption obtained in the optimized simulation, the specific power consumption and exergy efficiency of the process were calculated. The liquefaction power consumption per unit mass of natural gas was 272.2 kW/t and the liquefaction exergy efficiency was 46.85%. Compared with current DMR processes in China, the energy consumption was significantly reduced.
INTRODUCTION
According to the refrigeration method, natural gas liquefaction processes can be divided into the cascade liquefaction process, mixed refrigerant liquefaction processes, and expander-based liquefaction processes. Mixed refrigerant liquefaction has been widely used in large-scale LNG plants due to its low energy consumption (Fu et al., 2004; Remeljej et al., 2006; Mafi et al., 2009). At present, the mixed refrigerant processes mainly used in industry are the single mixed refrigerant process (SMR), the propane pre-cooled mixed refrigerant process (C3MR), the AP-X expansion process, and the dual-cycle mixed refrigerant process (DMR) (Shi et al., 2001; Yin et al., 2010; Zhao et al., 2010; Wang et al., 2015; Khan et al., 2015). There is no pre-cooling stage in the SMR process; since it operates with a large temperature difference in the heat exchanger, its energy consumption is very high. To improve efficiency, propane is used in the C3MR process for pre-cooling, while the AP-X process adds a nitrogen expansion cycle to the sub-cooling part of the C3MR process. However, the minimum temperature of the pre-cooling part of these two processes is limited by the boiling point of propane. In the DMR process, a mixed refrigerant is used instead of the propane of the C3MR process. By adjusting the composition of the mixed refrigerant, the selection range of natural gas pre-cooling temperatures is expanded, and the adaptability of the process to the raw natural gas and external conditions is improved. If the heat exchange temperature difference between the natural gas and the mixed refrigerant in the heat exchanger is relatively uniform during liquefaction, the exergy efficiency will be high (Nibbelke et al., 2002).
The use of the DMR process allows significant degrees of freedom in varying the compositions of the low-level (operating at low temperatures) and high-level (operating at relatively higher temperatures) refrigeration cycles, both in the makeup of the refrigerant and in the variation of compositions. This feature allows the liquefaction load to be re-matched without altering the equipment (Newton, 1988). Husnil and Lee examined the optimal control structure of the DMR process by drawing a steady-state optimality map containing information on the major state variables (Husnil et al., 2014); they found that when the working refrigerant flow ratio was constant, the DMR process could achieve optimal, stable operation. Due to its multi-phase refrigerants and complex operating conditions, the optimization design of the DMR liquefaction process includes two aspects: optimization of the cycle operating parameters and optimization of the mixture composition (Cao et al., 2005; Wang, 2009; Meng et al., 2015; Yang et al., 2018). In natural gas liquefaction plants, the liquefaction unit accounts for about 80% of the energy consumption of the entire plant, so using software to simulate and optimize the mixed refrigerant compositions and process operating parameters plays an important role in the liquefaction industry (Waldmann, 2008). However, there are many constraints in the optimization of the DMR liquefaction process: the optimization variables are numerous and the objective function is nonlinear, making it a complicated optimization problem. Hwang et al. used HYSYS to simulate and analyze the DMR process and carried out numerical optimization with a GA algorithm and SQP; after optimization, the compression power consumption was reduced by 34.5% compared with the patent of Roberts & Agrawal (2001) (Hwang et al., 2013). Khan et al. studied the development of the DMR process and conducted multiple single-objective and multi-objective optimization studies on a DMR process (Khan et al., 2015). Qyyum et al. proposed a simple and highly efficient hybrid modified coordinate descent (HMCD) algorithm to cope with the optimization of natural gas liquefaction processes (Qyyum et al., 2017). They then investigated the uncertainty levels in the overall energy consumption and the minimum internal temperature approach (MITA) inside LNG heat exchangers under variations in the operational variables of DMR processes, and conducted a global sensitivity analysis to identify the influence of random inputs on the process performance parameters (Qyyum et al., 2019). An energy- and cost-efficient dual-effect single mixed refrigerant (DSMR) process has also been proposed; it employs a single-loop refrigeration cycle to generate the cooling and subcooling effects separately. The DMR process and the proposed DSMR process were simulated (with the same design parameters) using the well-known commercial simulator Aspen HYSYS v10 (Qyyum et al., 2020).
In order to select economical and reasonable boil-off gas (BOG) treatment technologies for different types of liquefied natural gas (LNG) stations, Xiao et al. reviewed the related BOG treatment technologies (Xiao et al., 2020). Zhang calculated the power loss in a pressure-driven mass transfer process using a multi-scale method (Zhang, 2019). Sun et al. established a dynamic model of the dual mixed refrigerant (DMR) liquefaction process and tested its dynamic responses using selected variations of gas-phase and liquid-phase plugging ratios as disturbances (Sun et al., 2017). To study the performance of spiral wound heat exchangers (SWHEs) applied in the LNG-FPSO (LNG Floating Production, Storage and Offloading unit) DMR liquefaction process, an experimental device and a numerical simulation model of the DMR liquefaction process were established (Sun et al., 2019). Two DMR process configurations were optimized to maximize efficiency, and the risk of explosion was analyzed and compared in the conceptual design phase (You et al., 2019).
The DMR process is divided into three parts: natural gas liquefaction, the pre-cooling mixed refrigerant cycle, and the mixed cryogen cycle. In order to reduce power consumption, the entire compression process is generally divided into multiple stages; after each stage of compression, the gas is cooled before the next stage. The significant reduction in power consumption is due to the high-dew-point components of the pre-cooling refrigerant: after the first stage of compression and cooling by the water cooler, part of the gas phase condenses into liquid, and the liquid phase is pressurized by a liquid pump, which consumes less power than a gas compressor. After cooling by the water cooler, the flow rate of the uncondensed gas phase is reduced, and the power consumption of the second-stage compressor is reduced as well. The pre-cooling mixed refrigerant in the pre-cooling cycle adopts two-stage throttling, and the mixed cryogen in the cryogen cycle also adopts two-stage throttling; hence the whole process is called "four-stage throttling DMR liquefaction". The process is shown in Fig. 1.

Fig. 1. Process of four-stage throttling DMR liquefaction

Natural gas liquefaction: After pre-treatment, the qualified natural gas enters plate-fin heat exchangers 1 and 2 in turn and is pre-cooled to -60℃, then plate-fin heat exchanger 3 for liquefaction and plate-fin heat exchanger 4 for cryogenic treatment. At the bottom of plate-fin heat exchanger 4, the liquefied natural gas flows out at -160℃, is throttled and depressurized to 0.15 MPa through throttle valve 1, and finally enters gas-liquid separator 1. The gas phase serves as fuel gas in the plant area, and the liquid phase enters the LNG storage tank as LNG product.
Pre-cooling mixed refrigerant cycle: The composition of the pre-cooling refrigerant is C2H4, C3H8 and i-C5H12. The high-pressure pre-cooling refrigerant is separated by gas-liquid separator 2. The gas phase is cooled to -60℃ by plate-fin heat exchangers 1 and 2 and, after throttling, depressurization and cooling by throttle valve 3, returns to plate-fin heat exchanger 2 to provide cooling capacity. The liquid phase is cooled by plate-fin heat exchanger 1 and, after being throttled, depressurized and cooled by throttle valve 2, is mixed with the pre-cooling refrigerant flowing back from plate-fin heat exchanger 2 and enters plate-fin heat exchanger 1 to provide cooling capacity. After being compressed by compressor 1, the pre-cooling refrigerant enters interstage cooler 1. At this point part of the gas phase condenses into liquid, which is separated in gas-liquid separator 3. The gas phase enters compressor 2 for pressurization, and the liquid phase enters a liquid pump for pressurization. The gas and liquid phases are then mixed and cooled by cooler 2 to return the pre-cooling refrigerant to its initial state, completing the pre-cooling cycle.
Mixed cryogen cycle: The composition of the cryogen is CH4, C2H4, C3H8 and N2. The high-pressure cryogen is pre-cooled to -60℃ by plate-fin heat exchangers 1 and 2 and separated by gas-liquid separator 4. The gas phase is cooled to -160℃ by plate-fin heat exchangers 3 and 4 and, after throttling, depressurization and cooling by throttle valve 5, returns to plate-fin heat exchanger 4 to provide cooling capacity. The liquid phase is cooled by plate-fin heat exchanger 3 and, after being throttled, depressurized and cooled by throttle valve 4, is mixed with the cryogen flowing back from plate-fin heat exchanger 4 and enters plate-fin heat exchanger 3 to provide cooling capacity. It is then boosted by compressor 3 and cooled to its initial state by cooler 3, completing the cryogenic cycle.
In this paper, the chemical process simulation software HYSYS was used to simulate and optimize the DMR process. Taking the minimum energy consumption of the process as the objective function, with the high and low pressures of the pre-cooling refrigerant, the high and low pressures of the cryogen, the mole fractions of C2H4, C3H8 and i-C5H12 in the pre-cooling refrigerant, and the mole fractions of CH4, C2H4, C3H8 and N2 in the cryogen as decision variables, the process parameters and mixed refrigerant compositions were optimized by the optimizer in HYSYS.
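The HYSYS optimizer itself is proprietary, but the structure of the problem (bounded decision variables minimizing total compressor power) can be sketched with a stand-in objective in SciPy. The surrogate power function below is purely illustrative and does not reproduce the HYSYS flowsheet; mole-fraction variables and the minimum-approach (MITA) constraints are omitted for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def surrogate_power(x):
    """Illustrative stand-in for the total compressor power a flowsheet would return.
    x = [p_pre_high, p_pre_low, p_cryo_high, p_cryo_low] in MPa (assumed variables)."""
    p_ph, p_pl, p_ch, p_cl = x
    # Penalize large pressure ratios in each cycle: a crude proxy for compressor work.
    # A real run would evaluate the flowsheet and enforce a 3 degC minimum approach.
    return (p_ph / p_pl) ** 0.8 + (p_ch / p_cl) ** 0.8

x0 = np.array([2.0, 0.3, 4.0, 0.3])                        # initial guess
bounds = [(1.0, 3.0), (0.2, 0.8), (2.0, 6.0), (0.2, 0.8)]  # assumed search ranges
res = minimize(surrogate_power, x0, bounds=bounds, method="L-BFGS-B")
print(res.x, res.fun)
```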
OPTIMIZATION SIMULATION OF PROCESS
Since HYSYS cannot optimize material compositions directly, we split the pre-cooling mixed refrigerant and the mixed cryogen into single-component streams with the Component Splitter module, so that the proportion of each mixed refrigerant component could be controlled by the flow rate of its single-component stream; the streams were then recombined with the Mixer module. Therefore, in the HYSYS model diagram, the parameters of node 8 and node 8-2 are the same, and the parameters of node 19 and node 19-2 are the same. The HYSYS optimization calculation model of the DMR liquefaction process is shown in Fig. 2.
Since the raw natural gas and mixed refrigerants in the DMR process are at high pressure, the equation-of-state method rather than the fugacity coefficient method was used, as it gives smaller errors in calculating the gas-liquid equilibrium of high-pressure natural gas and other light hydrocarbon mixtures. Because the Peng-Robinson (PR) equation is very accurate for the phase equilibrium of light hydrocarbon mixtures such as natural gas, it was used to calculate the gas-liquid phase equilibrium.
Theoretically, the more components the mixed refrigerant contains, the more uniform the heat exchange temperature difference between the cold and hot fluids in the heat exchanger. However, too many components make the storage and distribution system very complicated, so selecting reasonable components for the mixed refrigerant is particularly critical (Meng et al., 2015; Fan et al., 2017; Q M et al., 2018). The refrigerant components commonly used in natural gas liquefaction systems are N2 and C1~C5. The DMR liquefaction process is divided into a pre-cooling cycle and a cryogenic cycle. The pre-cooling cycle needs to cool the natural gas from 30℃ to -60℃, so the pre-cooling mixed refrigerant components are chosen from C2~C5; the cryogenic cycle needs to cool the natural gas from -60℃ to -160℃, so the cryogenic mixed refrigerant components are chosen from C1~C3 and N2. Fig. 3 shows the bubble point curves of the selected refrigerants N2, CH4, C2H4, C3H8 and i-C5H12.

In order to make the simulation results better match actual operating conditions, some reasonable assumptions had to be set before simulating and optimizing the DMR liquefaction process; the assumptions of the DMR process simulation are listed in Table 1.

Raw natural gas is supplied by the gas transmission line, and a pretreatment unit is required before the natural gas liquefaction unit. The pretreatment unit removes solid impurities, acid gases (CO2, H2S), water, mixed hydrocarbons, benzene, mercury and other harmful substances contained in the raw natural gas, so that the natural gas meets the feed gas standard of the liquefaction unit. The ambient parameters and the raw gas composition are listed in Table 2.

The initial parameters of the HYSYS simulation for the four-stage throttling DMR liquefaction process were as follows: ① the storage pressure of the liquefied natural gas was 0.15 MPa; ② the ambient temperature was 20℃; ③ the temperature of the hot stream entering plate-fin heat exchanger 1 was 30℃; ④ the temperature of the hot stream leaving plate-fin heat exchanger 1 was -20℃; ⑤ the temperature of the hot stream leaving plate-fin heat exchanger 2 was -60℃; ⑥ the temperature of the hot stream leaving plate-fin heat exchanger 3 was -120℃; ⑦ the temperature of the hot stream leaving plate-fin heat exchanger 4 was -160℃.
Results of simulation
Through the optimization and simulation of the process, we obtained the state parameters of each node of the process (see Tab. 3), the compositions of the pre-cooling mixed refrigerant, the mixed cryogen, the LNG and the fuel gas (see Tab. 4), as well as the liquefaction rate of natural gas and the process parameters (see Tab. 5). The molar flow rate of natural gas was 1000 kmol/h, the molar flow rate of LNG was 982.14 kmol/h, and the liquefaction rate of natural gas was 98.214%.
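As a rough consistency check of the reported figures, the snippet below recomputes the liquefaction rate from the molar flows and shows how the specific power consumption and exergy efficiency relate to the total compressor power; the mean molar mass is an assumed illustrative value, not taken from the paper's tables, and 272.2 kW/t is read as kWh per tonne on an hourly flow basis.

```python
M_NG = 17.5                           # assumed mean molar mass of the feed gas, kg/kmol
n_ng, n_lng = 1000.0, 982.14          # kmol/h, from the simulation
liq_rate = n_lng / n_ng
print(f"liquefaction rate = {liq_rate:.3%}")   # 98.214%

mass_flow = n_ng * M_NG / 1000.0      # t/h of feed natural gas
W_total = 272.2 * mass_flow           # kW implied by the reported 272.2 kW/t
W_min = 0.4685 * W_total              # ideal work implied by the 46.85% exergy efficiency
print(f"implied total compressor power ~ {W_total:.0f} kW")
print(f"implied minimum liquefaction work ~ {W_min:.0f} kW")
# Specific power = W_total / mass_flow; exergy efficiency = W_min / W_total.
```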
Heat exchange temperature differences in the heat exchangers of the four-stage throttling DMR process
For heat exchange between the natural gas and the refrigerant to proceed, there must be a certain heat exchange temperature difference. However, this temperature difference causes exergy loss, and if the local temperature difference in the heat exchanger is too small, the required heat exchange area increases sharply. In engineering practice, the minimum heat exchange temperature difference in a heat exchanger is generally 3℃.
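The minimum-approach check described here reduces to scanning the pointwise difference between the hot and cold composite curves. A minimal sketch with invented curve data sampled at matched duty points:

```python
import numpy as np

# Illustrative composite-curve temperatures (degC) at matched duty points.
hot_T = np.array([30.0, 10.0, -15.0, -45.0, -60.0])
cold_T = np.array([20.0, 5.0, -18.0, -50.0, -63.5])

dT = hot_T - cold_T                    # pointwise approach temperature
i = int(np.argmin(dT))
print(f"minimum approach = {dT[i]:.2f} degC at hot-side T = {hot_T[i]:.1f} degC")
assert dT.min() >= 3.0, "violates the 3 degC engineering minimum"
```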
In order to reduce the exergy loss and make the heat exchange temperature difference in the heat exchangers relatively uniform, we optimized the mixed refrigerant compositions and process parameters in this paper, so that the average heat exchange temperature difference was reduced while the minimum temperature difference in each heat exchanger remained unchanged. Fig. 4(a), (c), (e), and (g) show the optimized cooling and heating composite curves of heat exchangers 1, 2, 3, and 4; Fig. 4(b), (d), (f), and (h) show the heat exchange temperature differences between the cold and hot streams in heat exchangers 1, 2, 3, and 4, respectively.
As can be seen from Figure 4, the heat exchange temperature difference of heat exchanger 1 increases roughly with temperature, reaching a minimum of 3.00℃ at -20℃ and a maximum of 10.19℃ at 29.49℃. The temperature difference of heat exchanger 2 first increases and then decreases with temperature, then slightly increases and decreases again, reaching a minimum of 3.56℃ at -60℃ and a maximum of 5.78℃ at -51.13℃. The temperature difference of heat exchanger 3 first decreases, then increases and then decreases with temperature, reaching a minimum of 3.04℃ at -120℃ and a maximum of 8.09℃ at -135.75℃. The temperature difference of heat exchanger 4 first increases, then decreases and then increases with temperature, reaching a minimum of 3.05℃ at -69.71℃ and a maximum of 11.21℃ at -84.01℃.
Fig. 4. The optimized cooling and heating composite curves and the heat exchange temperature differences between the cold and hot streams in the heat exchangers

It can be seen from Fig. 4 that the heat exchange temperature differences between the hot and cold streams in the four heat exchangers are small and relatively uniform, which means the exergy loss in the liquefaction process is reduced, and thereby the power consumption of the four-stage throttling DMR process is reduced as well.
Among the existing LNG plants in China, the liquefaction units of the Shaanxi Ansai and Shandong Tai'an LNG plants adopt the DMR liquefaction process of China Global Engineering Corporation. The power consumption comparison between this paper and the current domestic DMR processes is shown in Table 6.
It can be seen from Table 6 that, compared with the DMR liquefaction processes in Shaanxi Ansai and Shandong Tai'an, the specific power consumption under the optimal parameters obtained through the optimization simulation in this paper is reduced by 29.11% and 14.07%, respectively, and the liquefaction exergy efficiency is increased by 33.86% and 23.94%, respectively.
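A back-of-envelope check: from the reported reductions one can infer the reference specific power consumptions of the two comparison plants. The values printed below are implied by the percentages, not quoted from Table 6.

```python
x = 272.2                         # this work, kW/t
for plant, cut in [("Shaanxi Ansai", 0.2911), ("Shandong Tai'an", 0.1407)]:
    ref = x / (1.0 - cut)         # reduction = (ref - x) / ref  =>  ref = x / (1 - cut)
    print(f"{plant}: implied reference ~ {ref:.1f} kW/t")
# Shaanxi Ansai: ~384.0 kW/t; Shandong Tai'an: ~316.8 kW/t
```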
CONCLUSIONS
In this paper, the HYSYS software was used to optimize the process parameters of the DMR liquefaction process and the compositions of the mixed refrigerants. The conclusions are as follows: (1) Since HYSYS cannot optimize material compositions directly, the pre-cooling mixed refrigerant and the mixed cryogen were split into single-component streams with the Component Splitter module, so that the proportion of each mixed refrigerant component could be controlled by the flow rate of its single-component stream and an optimization simulation model could be established in HYSYS. Taking the minimum energy consumption of the process as the objective function, the optimal process parameters and mixed refrigerant compositions were obtained, and the specific power consumption and exergy efficiency of the natural gas liquefaction process were calculated from the power consumption of the process.
(2) For a DMR process under the same natural gas intake conditions, increasing the number of throttling stages in the mixed refrigerant cycles reduces the average heat exchange temperature difference in the plate-fin heat exchangers, so the exergy loss during heat exchange is also reduced. In this way the efficiency of the process is improved, and the total energy consumption and specific power consumption of the process are reduced.
(3) Through the optimization simulation of the four-stage throttling DMR process, the specific power consumption of the liquefaction process was 272.2 kW/t and the liquefaction exergy efficiency was 46.85%. Compared with the DMR liquefaction processes in Shaanxi Ansai and Shandong Tai'an, the specific power consumption in this paper is reduced by 29.11% and 14.07%, respectively, and the liquefaction exergy efficiency is increased by 33.86% and 23.94%, respectively.
Comparison of feeding habits and physical activity between eutrophic and overweight/obese children and adolescents: a cross-sectional study
Conflict of interest: none

Objectives: it is broadly accepted, but little explored, that obese children practice less physical activity and eat more. This study aims to compare feeding habits and physical activity between eutrophic and overweight/obese children and adolescents. Methods: 126 students aged 6 to 18 years were evaluated. Eutrophic and overweight/obese students were compared according to calorie, macronutrient and micronutrient intake and the prevalence of physical inactivity. Results: differences were observed in the amount of calories ingested per unit of BMI (eutrophic, 97.6; overweight/obese, 70.5; p=0.0061), as well as in calcium intake (eutrophic, 546.2; overweight/obese, 440.7; p=0.0366). Both groups were sedentary and showed a high prevalence of micronutrient intake deficiency, especially of calcium and vitamins A, E, and C, with no difference between eutrophic and overweight/obese subjects. Conclusion: energy and macronutrient consumption, as well as physical activity, were similar between eutrophic and overweight/obese subjects. Calcium intake was lower in the overweight/obese group and vitamin C intake was lower in the eutrophic group. These results demonstrate the importance of considering all etiologic factors that may lead to obesity, so that new strategies for prevention and control may be added to traditional interventions.
introduction
Brazil is going through a phase of nutritional transition in which an important reduction in the percentage of undernourished children is observed, along with a progressive increase, over the last decades, in the prevalence of excess weight. Data from the Research Project on Family Budgets (Pesquisa de Orçamentos Familiares, POF 2008-2009) 1 demonstrate that for children from five to nine years old, the prevalence of overweight/obesity rose from 13.8% (boys) and 10.4% (girls) to 51.4% and 43.8%, respectively, and among adolescents it rose from 20.8% (boys) and 18.1% (girls) to 27.6% and 23.4%, respectively. Obesity is a disease that originates from a combination of genetic and environmental factors. In the first group, we highlight the genes that predispose to greater appetite, reduced satiety and greater fat deposition, among others; in the second category, we basically include the obesogenic environment, encompassing sedentarism and a high offer of food. Considering the relatively short time it took for the obesity epidemic to settle, it is unlikely that significant changes in the genetic material of the population have occurred, but the processes of metabolic programming and epigenetic inheritance may influence the phenotypic expression. In parallel, environmental influences may be exerting an important role, especially concerning the patterns of physical activity and food intake.
Weight and height were measured according to international recommendations, and the body mass index (BMI) was calculated in order to assess nutritional status. 10 Only those with BMI z-score between -1 and 2 participated in the study, according to the inclusion criteria, using the World Health Organization height and weight charts. 11Food habits were assessed by means of a food diary handed to the participant to be filled at home in two pre-determined days, Wednesdays and Sundays, and subsequently delivered to one of the researchers with whom the data was checked and validated.The information was then fed into the Nutwin software 12 where it was analyzed in regard to the composition of macro and micronutrients; intake adequacy was evaluated according to the dietary reference intakes (DRI). 13To evaluate physical activity, the short version of the standardized international physical activity questionnaire (IPAQ) 14 questionnaire was used, handed out to be filled at home and brought back to one of the researchers, with whom data was checked and validated.The decision to analyze the self-assessment questionnaire in its short version was due to the fact that it is the most suggested version for young populations.This version has eight open questions and their answers enable the estimation of the time used per week in different dimensions of physical activity (walks, and moderate and intense physical effort), as well as physical inactivity In this sense, although it seems evident that overweight and obese children most likely practice less physical activity, ingest more food, and eat less healthy food than eutrophic children, this aspect has been little explored in the scientific literature, with sometimes conflicting results. 2,3The Helena study, for instance, with 2,176 adolescents from many cities in Europe, demonstrated that those physically more active did not eat better than the sedentary ones, concluding that there was no relation between the option of being more active and eating adequately. 4A representative study of the French adolescent population between 11 and 15 years of age, demonstrated a negative association between the practice of moderate or vigorous physical activities and excess weight. 3In California, researchers evaluated whether the presence of fast-food stores close to schools would lead to a higher prevalence of obesity, and found no such correlation. 5In Brazil, a study conducted in Salvador with adolescents between 10 and 14 years of age found a correlation between obesity and physical inactivity in boys only. 6An American study that evaluated diet and the use of computers for recreation demonstrated that excessive use of such equipment lead to inadequate diet, therefore, to excess weight, but the effect disappeared when the time spent using a computer was not taken into account. 7An American interventional study, case control, demonstrated that it is possible to improve children's diets by means of feeding education at home, but no reduction in the prevalence of obesity was observed among those who began to eat more fruits and vegetables, besides other nutrients considered important for good health. 
8any facilities have been conquered by modern men, such as more access to food and less physical effort, privileging intellectual activities, and those will most likely be embedded in the reality of human beings lives from now into the future.On the other hand, there is a speculation that such achievements may imply an increase in morbidity and mortality due to excess adiposity. 9Therefore, it is highly relevant to understand the true role of the new eating habits and physical activity patterns on the growing prevalence of obesity among children and adolescents.In order to contribute to this discussion, this study intended to compare eating habits and the pattern of physical activity of eutrophic and overweight/obese children and adolescents of a school in the city of Ribeirão Preto, SP, Brazil.
Methods
For this study, all children and adolescents regularly registered at the E.E.P.S.G.Dr. Francisco da Cunha Jun-(sitting).For that purpose, the duration (minutes/day) reported by the children as answers to the questions presented in the IPAQ was multiplied by frequency (days/ week).The results were analyzed according to the standardized template and the participants classified in two groups: sedentary (when classified as sedentary or insufficiently active) and active (when classified as active or very active).
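The IPAQ-short scoring step described above, weekly minutes per activity dimension obtained as duration times frequency, can be illustrated as below; the classification cutoff used here is a simplified placeholder, not the official IPAQ scoring algorithm.

```python
def weekly_minutes(duration_min_per_day: float, days_per_week: int) -> float:
    """IPAQ-short aggregation: minutes/day reported for a dimension times days/week."""
    return duration_min_per_day * days_per_week

answers = {                     # one child's hypothetical IPAQ-short answers
    "walking": (30, 3),         # 30 min/day, 3 days/week
    "moderate": (20, 2),
    "vigorous": (0, 0),
}
total = sum(weekly_minutes(d, f) for d, f in answers.values())

# Simplified dichotomy mirroring the study's two groups (illustrative 150 min/week cutoff).
group = "active" if total >= 150 else "sedentary/insufficiently active"
print(total, group)             # 130.0 -> sedentary/insufficiently active
```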
Data collection was blind; the person responsible for the anthropometry and the questionnaires was not aware of the nutritional classification of the subjects. Only after all data had been collected and tabulated were the two groups stratified as eutrophic (n=73, BMI z-score between -2 and 1) and overweight/obese (n=53, BMI z-score above 1). An initial statistical evaluation was conducted to check whether the groups were homogeneous. Eutrophic and overweight/obese subjects were then compared in two ways using the GraphPad software: 15 (1) according to the distribution of calorie, macronutrient and micronutrient intake, by the Mann-Whitney test; (2) according to the prevalence of sedentarism and of deficient micronutrient ingestion, using Fisher's exact test; in this case, the most recent Brazilian data on personal feeding habits, as described in the Research Project on Family Budgets (POF 2008-2009), 1 were also presented for comparison. The research was approved by the Ethics in Research Committee of the University of Ribeirão Preto under number 103/2011.
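The two comparison strategies map directly onto standard SciPy routines, as in this sketch with invented data (the actual analysis used the GraphPad software):

```python
import numpy as np
from scipy.stats import mannwhitneyu, fisher_exact

rng = np.random.default_rng(42)
# (1) Continuous intakes: Mann-Whitney test between the two groups.
calcium_eutrophic = rng.normal(546, 120, size=73)     # illustrative mg/day values
calcium_obese = rng.normal(441, 120, size=53)
u_stat, p_mw = mannwhitneyu(calcium_eutrophic, calcium_obese)
print(f"Mann-Whitney p = {p_mw:.4f}")

# (2) Prevalences: Fisher's exact test on a 2x2 table
#     (rows = eutrophic / overweight-obese, cols = sedentary / active).
table = [[30, 43], [22, 31]]                          # illustrative counts
odds, p_fisher = fisher_exact(table)
print(f"Fisher's exact p = {p_fisher:.4f}")
```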
results
The percentage of girls among the eutrophic subjects was 58.9%, and among the overweight/obese subjects, 64.2%, with no difference in gender distribution between the two groups (p=0.5832, Fisher's exact test). The mean age in the eutrophic group was 10.1 years and in the overweight/obese group, 9.5 years, with no difference between the two groups (p=0.1629, Mann-Whitney test).
Table 1 depicts the distribution of the medians and confidence intervals (CI) of calorie, macronutrient and micronutrient intake in the two groups. There was no statistical difference in most of the evaluations, except for the amount of energy ingested per BMI unit and calcium intake, both higher in the eutrophic subjects, as well as vitamin C intake, higher among the overweight/obese subjects.
Table 2 depicts the distribution of the prevalence of sedentarism and of deficient micronutrient ingestion in the groups; no statistical difference was found in any of the comparisons. Both groups reported little physical activity, with more than one third of all subjects evaluated classified as sedentary or insufficiently active. Regarding minerals, although no difference was found between the groups, calcium intake was below the recommended level in more than 90% of all children and adolescents evaluated. Vitamins A, E and C were the most frequently inadequate. Compared to the POF 2008-2009 1 data, the micronutrients with the most concerning results in this study, such as calcium and vitamins A, E and C, also had a high prevalence of inadequate intake.
discussion
Obesity in childhood is becoming endemic, with fast and continuous growth almost all over the world. Generally speaking, it is unquestionable that the increase in fat storage is due to a positive energy balance, a situation in which energy intake overcomes expenditure, part of the surplus energy being deposited. To explain why this phenomenon has been occurring in such a striking way over the past few years, many explanations may be brought forward but, generally speaking, all of them fit into three possibilities: 16
• Energy intake increased.
• Energy expenditure decreased.
• A combination of both.
The decrease in energy expenditure also seems to be evident if we consider the facilities of the modern world that have rendered daily activities less demanding in regard to the energy needed for them.17 Vasquez-Nava et al., in a recent study, evaluated children between 6 and 12 years of age in a Mexican urban area and found a prevalence of sedentarism of 57.2%.18 In Brazil, Cesquini et al.,19 in a sample of 3,845 high school students, using the same instrument as the present study, the IPAQ short version,14 found a prevalence of physical inactivity of 62.5%. Higher caloric intake has been considered a reality, as food has become more accessible, in terms of cost and acquisition logistics, and also more attractive.17

Nevertheless, this apparent simplicity of the phenomenon masks much more complex situations, which deviate the energy balance towards storage, even in people who apparently do not eat more nor spend less energy than their peers. This study demonstrates this situation very clearly, as the overweight and obese children and adolescents evaluated presented practically identical intake and expenditure values compared to the eutrophic subjects. Some studies have been conducted on this topic and the results are often conflicting. Vieira et al.,20 in a case-control study comparing eutrophic adolescents aged between 14 and 19 years to overweight/obese ones, did not find differences in energy or macronutrient intake, nor in physical activity. Assis et al.,21 studying 120 adolescents, observed that those overweight/obese practiced more physical activity and ingested less milk, eggs, industrialized meat, sweets and soft drinks. Enes et al.,22 evaluating 105 adolescents aged between 10 and 14 years, did not find any difference when they compared the amount of physical activity, time spent watching TV, and energy, lipid, and fiber intake between eutrophic and overweight/obese adolescents. Velde et al.23 conducted a systematic review of prospective studies correlating energy balance and obesity and concluded that there is a strong inverse association between total physical activity and obesity, but did not find an association with food intake or specific eating habits. The Australian Look study,24 which followed 734 children from 8 to 12 years of age, concluded that obese children were less active but ingested less energy, fat, carbohydrates, and simple sugar. In a literature review, Sallis et al.,25 evaluating 55 studies, concluded that it is not possible to establish a correlation between the level of physical activity and body weight. In a recent systematic review, Rauner et al.,26 evaluating studies published after the year 2000, concluded that the correlation between physical activity and obesity remains unknown and that longitudinal studies cannot confirm the hypothesis that little physical activity leads to obesity.

When the ingestion of micronutrients was assessed in this study from the standpoint of inadequate intake, no difference was found between the groups, but the results are concerning, as they reveal a much lower than expected intake of minerals and vitamins, especially calcium and vitamins A, E and C. It is interesting to point out that such results are very similar to those obtained by the POF 2008-2009,1 which in a way points to the fact that the population studied is similar to the population of Brazilian subjects of the same age group.
It seems quite evident that, beyond the amount of food ingested, it is necessary to try to understand how the body uses and stores food. Likewise, how different people behave in regard to physical activity, concerning interest, performance, caloric expenditure and capacity, among others, also has to be evaluated. A recent study demonstrated, for instance, that living with one separated parent may be, per se, a risk factor for sedentarism in obese children.18 In this sense, many factors have been described that may be responsible for the origin and persistence of obesity, leading to positive energy balance, among which we emphasize:
1. Many types of genetic patterns: it is current knowledge that there is an important genetic component to obesity. It is possible that, in the past, the capacity of ingesting excess energy to increase the chance of surviving famine periods was advantageous evolution-wise. It is also believed that regular human energy expenditure then was significantly higher than today, in such a way that most human beings had a body weight below what was considered ideal in terms of reproductive aptitude. Therefore, natural selection favored polymorphisms that would determine more intake when there was more energy available.16 Monogenic inheritance is rare, but helps in the identification of multiple genes.27 In fact, genome-wide association studies have demonstrated a variety of genetic loci associated with the most common form of obesity, and more than 300 genetic loci potentially involved with obesity have been identified in human beings as well as in animals.16 Polymorphisms in the FTO gene, for instance, are associated with human obesity, leading to an increase in food intake and a preference for high-energy food.28
2. Different thermogenic capacities: even slight inter-personal variations in thermogenesis can, in dynamic systems and in the long run, be important for the genesis and maintenance of obesity.
3. Non-exercise activity thermogenesis: this refers to the individual capacity of generating heat, and therefore spending energy, and has been considered as derived from metabolic programming.29
4. Metabolic programming for the increase in storage capacity: it is well recognized that fetuses exposed to intrauterine nutritional restriction develop energy-sparing mechanisms to guarantee their survival. When, after birth, the environment is no longer unfavorable and food offering is normal, the increased storage capacity becomes a risk for obesity.29
5. Viral contamination: animal trials have demonstrated that infections by many viral agents can lead to obesity. Human studies demonstrate that the incidence of seroconversion for a specific virus may be significantly more frequent in obese adults and children than in normal subjects, and this fact has been more carefully studied in the last few years.30
6. Changes in bacterial flora: according to recent studies, the gut microbiota may also play an important role in the prevalence of obesity, since changes in its composition have been observed in obese people, affecting body weight, insulin sensitivity and lipid metabolism.31
7. Exposure to endocrine disruptors: endocrine disruptors are environmental chemical compounds produced by human activity with the potential of mimicking or blocking hormone actions. Many of them can modulate lipid metabolism and adipogenesis, contributing to the genesis of obesity or its exacerbation.32

One aspect that deserves underscoring in this study is the fact that the total energy intake was similar in both groups, but when this intake was adjusted according to the BMI, it was higher in eutrophic subjects. The most likely explanation for this result concerns the difference in body composition between lean and obese subjects. It is likely that eutrophic subjects have a higher proportion of lean mass per BMI unit, which is more metabolically active and therefore demands more energy intake. Since this is an unusual form of assessment, it was not possible to compare the results with publications by other authors; new studies will be necessary in order to prove this hypothesis.
The individualized assessment of nutrients demonstrated that calcium intake differed between the groups, eutrophic subjects having consumed a significantly higher amount of this mineral, although in both groups there was a high prevalence of dietary inadequacy. The scientific literature has recently suggested that low calcium intake may act as a factor that contributes to increase obesity. This was suggested by the Cardia study,33 which demonstrated that adequate intake of dairy products was inversely proportional to the onset of all signs and symptoms of insulin resistance, including obesity. To explain the mechanism of action, Souza et al.34 state that dietary calcium is able to inhibit lipogenesis and stimulate the process of lipolysis. This action is enhanced when the mineral is obtained from dairy products. On the other hand, the absence of calcium in the diet promotes the increase of its concentration in adipose cells, favoring lipogenesis. Concurrently, this calcium inflow inhibits the phosphorylation of hormone-sensitive lipase, reducing fat oxidation. In obese subjects, improvement in the parameters for the evaluation of adiposity has been demonstrated after an increase in dairy product intake. Heaney et al.35 evaluated 348 young women and observed that when calcium intake was below the 25th percentile, the prevalence of obesity was 15%, but when calcium intake was within the recommendations, the prevalence dropped to 4%. Freitas et al.,36 in a review study, stated that most interventional studies suggest that calcium intake may favor the reduction of anthropometric measurements and improve body composition. They observed that the benefits are only detected when a regularly low calcium intake (≈ 700 mg/day or less) is increased to around 1,200-1,300 mg/day. Goldemberg et al.37 studied calcium intake and the risk of obesity in adolescents and found significant differences in dietary calcium density between eutrophic and overweight/obese male subjects. They also found, for boys, an inverse relation between calcium ingestion and adiposity (r = -0.488, p = 0.0173).
Vitamin C intake was higher among the overweight/obese subjects. Even though the evaluation was not conducted considering food groups, one may presume that the main source of this vitamin was citric juices, much consumed in this region. In fact, the obesogenic role of fruit juices has been increasingly recognized, and their elimination from the diets of children and adolescents has been suggested as a way to prevent obesity.38

This study has some limitations. The method of obtaining dietary information by means of questionnaires is always subject to error, since it depends on the participants' cooperation. In a study where people are compared according to their nutritional status, there is always the possibility of overweight/obese participants not registering all their food, especially omitting food items knowingly obesogenic.39 Another relevant aspect concerns the cross-sectional design of the study. Since the variables were evaluated simultaneously at a single point in time, a cause-effect relation cannot be established. It is possible that food intake and physical activity as observed in the study, which did not differ between the groups, reflect only current behavior, and that in the past the now obese children and adolescents ingested more calories and practiced less physical activity than their eutrophic peers, therefore gaining weight. Lastly, we must emphasize that the data in this study reflect the reality of this community alone and, although it does not have characteristics that differentiate it in an important way from other populations, it is not possible to extrapolate the data to all subjects in the same age group.
Conclusion
This study has demonstrated that energy and macronutrient intake, as well as the amount of physical activity, was similar when eutrophic subjects were compared to overweight/obese subjects.In regard to micronutrients, similar intake was observed in both groups, except for calcium (lower in the overweight/obese group) and vitamin C (lower in the eutrophic group).Inadequacy in the intake of many micronutrients was observed, but there was no difference in the prevalence of such inadequacy in either group studied.These results demonstrate the importance of considering all etiologic factors that may lead to positive energy balance and obesity in order to associate new strategies of prevention and control to the traditional interventions.
TABLE 1
Distribution of medians and confidence intervals (CI) for intake of calories, macro and micronutrients in the groups of eutrophic and overweight/obese children and adolescents.
TABLE 2
Distribution of the prevalence of sedentarism and deficient intake of micronutrients in the groups of eutrophic and overweight/obese children and adolescents and comparison with data of the Research Project on Family Budgets (POF 2008-2009). | 2017-08-30T16:23:46.967Z | 2015-05-01T00:00:00.000 | {
"year": 2015,
"sha1": "3d7feb9e10f61abec6c53d805b7f97c9316609a4",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/ramb/a/PkS5CKGJQzdpZRwzW5XTjFx/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3d7feb9e10f61abec6c53d805b7f97c9316609a4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
57573641 | pes2o/s2orc | v3-fos-license | A 23-year study of mortality and development of co-morbidities in patients with obesity undergoing bariatric surgery (laparoscopic gastric banding) in comparison with medical treatment of obesity
Background and aim Several studies have shown that bariatric surgery reduces long-term mortality compared to medical weight loss therapy. In a previous study we demonstrated that gastric banding (LAGB) is associated with reduced mortality in patients with and without diabetes, and with a reduced incidence of obesity co-morbidities (cardiovascular disease, diabetes, and cancer) at a 17-year follow-up. The aim of this study was to verify, at a longer time interval (23 years), mortality and the incidence of co-morbidities in patients undergoing LAGB or medical weight loss therapy. Patients and methods As reported in the previous shorter-time study, medical records of obese patients [body mass index (BMI) > 35 kg/m2] undergoing LAGB (n = 385; 52 with diabetes) or medical treatment (controls, n = 681; 127 with diabetes) during the period 1995–2001 (visit 1) were collected. Patients were matched for age, sex, BMI, and blood pressure. Identification codes of patients were entered into the Italian National Health System Lumbardy database, which contains life status and causes of death, as well as exemptions, prescriptions, and hospital admissions (proxies of diseases), from visit 1 to June 2018. Survival was compared across LAGB patients and matched controls using Kaplan–Meier plots and adjusted Cox regression analyses. Results The final observation period was 19.5 ± 1.87 years (13.4–23.5). Compared to controls, LAGB was associated with reduced mortality [HR = 0.52, 95% CI 0.33–0.80, p = 0.003], significant in patients with diabetes [HR = 0.46, 95% CI 0.22–0.94, p = 0.034] and borderline significant in patients without diabetes [HR = 0.61, 95% CI = 0.35–1.05, p = 0.076]. LAGB was associated with a lower incidence of diabetes (15 vs 75 cases, p = 0.001), of CV diseases (61 vs 226 cases, p = 0.009), of cancer (10 vs 35, p = 0.01), and of renal diseases (0 vs 35, p = 0.001), and with fewer hospital admissions (92 vs 377, p = 0.001). Conclusion The preventive effect of LAGB on mortality is maintained up to 23 years, even with decreased efficacy compared with the shorter-time study, while the preventive effect of LAGB on co-morbidities and on hospital admissions increases with time. Electronic supplementary material The online version of this article (10.1186/s12933-018-0801-1) contains supplementary material, which is available to authorized users.
Previous studies evaluating long-term mortality have included no intermediate evaluation of the clinical and metabolic effects of bariatric surgery in comparison with medical treatment of obesity, so that reduced mortality appears as an all-or-none effect, with no mechanistic explanation.
In a previous retrospective study we showed that, up to 17 years, LAGB is associated with reduced mortality in patients with and without diabetes, and with a reduced incidence of diabetes and cardiovascular diseases [11]. That was the longest follow-up study, with no patient lost to follow-up; we also hypothesized that a longer follow-up was required to establish whether the effects of LAGB were maintained, or even became more significant, with prolonged observation, or whether they vanished, for instance because of aging.
The aim of this retrospective study was to extend the follow-up observation period of the previous study up to 23 years. In addition, we had the opportunity to compare the intermediate clinical and metabolic effects of bariatric surgery and of medical treatment of obesity, thus evaluating a possible mechanistic explanation for the reduced mortality.
Patients and study protocol
The participating institutions offer surgical and medical treatment of obesity. The institutions belong to the LAGB10 study group [11], a spontaneous network of physicians and surgeons working with bariatric surgery in the Lumbardy Region (Italy); LAGB has been performed here since 1995, according to NIH guidelines [25]. The specific study protocol was approved by four Ethics Committees in 2012, after the initial protocol had been approved in 1995, in 2002, and in 2006. This being a retrospective study, informed consent was obtained from all individual participants included in the study who could be reached by interview, phone or letter. The details of the protocol have been published previously [11]. Briefly, we considered all patients with obesity (BMI > 40 kg/m2 alone, or BMI > 35 kg/m2 in the presence of co-morbidities) aged 18-65 years who sought medical advice and were referred to the outpatient obesity clinics during the period 1995-2001 (first visit), thereafter undergoing LAGB or medical weight loss treatment. After evaluation of indications and contra-indications, patients were offered LAGB; several patients declined the offer, mainly because of reluctance, lack of knowledge of the possible benefits, fear of surgery and of surgical complications, or inability or unwillingness to comply with the anticipated change of lifestyle habits or with the program of scheduled visits. Patients who declined surgery for any reason, but agreed to be followed up during medical treatment, were considered controls. All surgical and nonsurgical patients were treated with diet and received standard care (education on eating behaviors, advice on diet and exercise, plus drug treatment for diabetes and hypertension when present). At least initially, all patients were evaluated under basal conditions and at 3-month intervals, with measurement of body weight and assessment of food intake through review of diet diaries; the suggested diet was 1000 and 1200 kcal/day for women and men, respectively (22% protein, 29% lipids, and 49% carbohydrates), with the aid of a dietitian. From the medical records, birthdate and age, baseline anthropometric data (height, weight, BMI), systolic and diastolic blood pressure, heart rate, metabolic data [26], current medical treatments, and clinical evidence of coronary heart disease (CHD) and retinopathy were derived and tabulated. From the medical records it was also possible to evaluate later visits and laboratory examinations, when present. The diagnosis of diabetes (type 2 diabetes) was established as already reported [27,28], and the diagnosis of coronary heart disease (CHD) was based on medical records.
Procedures
Patients were identified through personal identification codes; the codes were entered into the Regional Lumbardy Administrative Database, and it was possible to ascertain whether patients were alive, were dead, or had moved to other regions. The National Health System (NHS) covers more than 95% of all hospital admissions, medical and surgical procedures and medical expenses of citizens [29] (Italian Survey 2012). The Regional Lumbardy Administrative Database has contained all pertinent data on all citizens since 1988, which makes life status a clear finding, independently of participation in studies and of loss to follow-up. In particular, the Lumbardy database collects several types of information, including (1) an archive of residents who receive NHS assistance, reporting demographic and administrative data; (2) a database on diagnoses at discharge from public or private hospitals of the region; (3) a database on outpatient drug prescriptions reimbursable by the NHS; and (4) a database on outpatient visits, including visits in specialist ambulatory care and diagnostic laboratories accredited by the NHS. For each patient, these databases are linked through a single identification code.
In the Italian National Health System, development of chronic diseases (diabetes mellitus, liver and cardiovascular diseases, selected thyroid, renal, and lung diseases) yields the right to exemption from medical charges (exemptions), which means life-long free prescriptions and examinations for the above diseases. Therefore, together with hospital admissions, exemptions were considered a proxy of the development of chronic diseases. For each patient, exemptions and hospital admissions after the first visit were identified and dated. Through registries of surgeons and the Regional Lumbardy Administrative Database, it was also possible to retrieve patients who had removal of LAGB and/or new bariatric surgery procedures. Through the health districts (ASL) to which patients belonged, it was possible to track causes of death and the nature of hospital admissions and exemptions. Data from health districts were cross-checked with data from the Lumbardy Database to rule out inconsistencies and possible delays in transcription. This procedure has already been employed and validated in previous studies in Lumbardy, Italy [11,30]. The limit date of June 30, 2018 was established for all patients for deaths, admissions, and exemptions. Causes of death, as well as exemptions and hospital admissions, were coded according to ICD-10 codes. Full details of the procedures are reported elsewhere [11,24,30].
Outcomes
Death rate and cause of death among patients with diabetes (surgical vs nonsurgical) and among patients without diabetes (surgical vs nonsurgical); exemptions and hospital admissions among patients with and without diabetes (surgical vs nonsurgical). Analysis of survival and of other outcomes was carried out on an intention-to-treat basis, with no consideration for LAGB removal.
Statistical analysis
Data are shown as average values (± SD) for continuous variables or as absolute numbers and frequencies for discrete variables. Continuous variables were compared with Student's t-test. Frequencies were compared with the Fisher exact test. Surgical and nonsurgical patients were matched (with and without diabetes separately), with no attempt to match patients across the whole cohort. Group matching was made for sex, BMI (± 5 kg/m2), age (± 10 years), and for systolic (± 5 mmHg) and diastolic (± 5 mmHg) blood pressure. The median age of matched patients was 42 years, and the mean ages below and above the median were 31.8 ± 6.43 and 51.8 ± 5.89 years, respectively. The proportion of dead patients was plotted through Kaplan-Meier curves, and differences in survival among subgroups were tested by the log-rank test. A multivariable analysis of risk factors for mortality was performed (Cox proportional hazards model) and used to plot Kaplan-Meier curves for surgical versus nonsurgical patients; age, median age, presence of diabetes, sex, systolic blood pressure, eGFR, and presence of CHD were entered a priori. Proportionality among the survival rates and attributable factors in the Cox model was assessed by plotting the log [− log (survival function)] versus time. Statistical analyses were performed with STATA 12.0 for Macintosh.
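The analyses were run in STATA; the following is a minimal R sketch of the same pipeline (Kaplan-Meier curves, log-rank test, and an adjusted Cox model) using the survival package on simulated data, so the variable names and values are assumptions rather than the study dataset.

library(survival)

# Simulated stand-in for the matched cohort (385 LAGB, 681 controls)
set.seed(1)
n <- 1066
patients <- data.frame(
  time = pmin(rexp(n, rate = 0.02), 23.5),   # years from visit 1, capped at 23.5
  status = rbinom(n, 1, 0.10),               # 1 = dead, 0 = censored at June 2018
  lagb = rep(c(1, 0), times = c(385, 681)),  # surgery vs medical treatment
  age = rnorm(n, 42, 10), diabetes = rbinom(n, 1, 0.17),
  sex = rbinom(n, 1, 0.5), sbp = rnorm(n, 135, 15),
  egfr = rnorm(n, 90, 15), chd = rbinom(n, 1, 0.05)
)

# Kaplan-Meier curves and log-rank test by treatment group
km <- survfit(Surv(time, status) ~ lagb, data = patients)
plot(km, xlab = "Years since first visit", ylab = "Proportion surviving")
survdiff(Surv(time, status) ~ lagb, data = patients)

# Cox proportional hazards model with the a priori covariates
cox <- coxph(Surv(time, status) ~ lagb + age + diabetes + sex + sbp + egfr + chd,
             data = patients)
summary(cox)   # exponentiated coefficients give hazard ratios with 95% CIs
cox.zph(cox)   # complements the log[-log(survival)] proportionality check

On the real data, the exponentiated coefficient for the treatment indicator would correspond to the adjusted hazard ratios reported in the Results.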
Power calculation and sample size
Because this was a retrospective study, power and sample size were calculated only to assess whether the study was meaningful. Based on previous papers dealing with long-term prevention of mortality, showing an effectiveness of about 50% in comparison with non-surgery subjects [9,10], and given a power of 80% and an alpha error of 0.05, it was calculated that 500 surgery subjects with 30 fatal events and 1000 nonsurgical subjects with 90 fatal events were required to detect significant differences in the outcomes [31,32]. Similarly, given the high efficacy of bariatric surgery in the long-term prevention of diabetes and of cancer [33-35], we estimated that the occurrence of 100 exemptions in 500 bariatric surgery subjects and 300 exemptions in 1500 subjects undergoing dietary and medical treatment would be required to detect significant differences in the outcomes between the two groups [31,32]. This manuscript was prepared following the guidelines of the STROBE statement [36] (Additional file 1).
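For a two-group log-rank comparison, the number of fatal events needed can be approximated with Schoenfeld's formula, D = (z(1-alpha/2) + z(power))^2 / (p1 p2 (ln HR)^2), where p1 and p2 are the allocation proportions. Below is a sketch under the stated assumptions (HR about 0.5, 1:2 allocation, alpha 0.05, power 80%); this is a generic approximation, not necessarily the authors' actual calculation [31,32].

# Schoenfeld approximation: events required for a log-rank test
events_needed <- function(hr, alpha = 0.05, power = 0.80, p1 = 1/3) {
  p2 <- 1 - p1                                  # 500:1000 allocation -> p1 = 1/3
  (qnorm(1 - alpha / 2) + qnorm(power))^2 / (p1 * p2 * log(hr)^2)
}

events_needed(hr = 0.5)   # about 74 events; the anticipated 30 + 90 = 120 events exceed this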
Results
The details of the patients in the study were published previously [11] and now appear in Additional file 2: Table S1. The observation period was 19.5 ± 1.87 years (13.34-23.5). The mortality rate was 2.6, 6.6, 10.1, and 13.4% in controls at 5, 10, 15, and 20 years, respectively; the mortality rate was 0.8, 2.5, 3.1, and 7.4% in LAGB patients at 5, 10, 15, and 20 years, respectively. Figure 1 shows crude mortality curves in patients receiving LAGB as compared to controls receiving medical weight loss therapy, and Fig. 2a and b show crude mortality curves in patients without and with diabetes, respectively. The reduced mortality in surgical vs nonsurgical patients was significant in the whole cohort and in patients with diabetes, and of borderline significance in patients without diabetes. During the first 5 years there were 4 deaths (1 above median age) in the surgery group and 18 deaths (17 above median age) in the nonsurgical group (NS). After exclusion of these patients, the HR was 0.32 (95% CI 0.15-0.69) (log-rank p = 0.003). Figure 3a, b shows crude mortality curves in patients receiving LAGB as compared to controls receiving medical weight loss therapy, subdivided into patients aged < 42 years and aged > 42 years, respectively. The reduced mortality in surgical vs nonsurgical patients was significant in patients aged > 42 years, but not significant in patients aged < 42 years. Table 1 shows causes of death in the whole cohort in the original study and in the follow-up study; causes of death were similar in the two observation periods, and the comparison between surgical vs nonsurgical patients had a reduced level of significance in the follow-up period, in agreement with the reduced overall effect on prevention of mortality. Table 2 compares the 17-year and the 23-year effects of LAGB as opposed to medical weight loss therapy; the effect on reduced mortality decreases with time, while the effect on prevention of co-morbidities and the effect on prevention of hospital admissions increase with time. Table 3 shows the clinical and metabolic effects of LAGB and medical weight loss therapy. The interval between baseline and follow-up data was 4.9 ± 3.63 years (mean ± SD), with no differences between surgical and nonsurgical patients. The effects were clearly different, with the noticeable exceptions of cholesterol (total, LDL-, and HDL-cholesterol). Table 4 shows univariate and multivariate analyses of risk factors for mortality in the current study as compared with the previous study.
Discussion
To our knowledge, this study represents the longest follow-up evaluation of patients undergoing LAGB, a bariatric surgery, in comparison with patients receiving medical weight loss treatment. With its up to 23 years of observation, this study adds about 6 years to our previous study, in the same cohort, studied in the same way. The main finding, in comparison with our previous study [11], is a somewhat reduced effect on prevention of long-term mortality; in contrast, the preventive effect of surgery on incident diseases increases, and the preventive effect of surgery on hospital admissions increases. Therefore, it appears that the beneficial effect of LAGB continues up to 23 years, even with some differences; the effect on mortality decreases, even if it is still significant, while the effect on general health status continues and increases. Overall, as recently confirmed [23], our data confirm that bariatric surgery is associated with lower mortality compared to medical weight loss treatment [9,10]; prevention of co-morbidities, especially diabetes mellitus, is also possible for prolonged periods [27,33,37,38]. A greater effect on mortality in patients with diabetes than in patients without diabetes has already been reported [12], leading to the interpretation that the benefit is greater in more compromised patients. There is no clear explanation for these differences, though it seems reasonable to assume that the aging process dilutes the preventive effect of LAGB on mortality. In the Swedish Obese Subjects (SOS) study [37] it was observed that the preventive effect of surgery on incident co-morbidities increases with the duration of follow-up (from 2 to 10 years); our data support these findings, even though the observation periods of the two studies are quite different. However, we observed that the effect of surgery depends on age, i.e., it is significant for patients above the median age (42 years in this cohort) but not in younger patients. This confirms what was already observed by us and by others using different bariatric techniques [5,8,11,39]; in the SOS study, patients aged < 37 years were intentionally excluded because of the low mortality of patients with obesity at a young age [4]. This study has strengths and limitations; the main strength lies in the prolonged observation period of the same cohort, evaluated with the same approach; also, due to the methods employed, no patient was lost to follow-up. In addition, we had a detailed description of the causes of death of all patients, of incident diseases, and of hospital admissions. Moreover, we had the possibility to observe clinical and metabolic variables in a fair proportion of patients after a mean period of 5 years, and we could observe a significantly different effect of surgery versus medical weight loss treatment. Obesity, and especially visceral obesity, favors the development of cardiovascular disease in type 2 diabetes [40], and both type 2 diabetes and obesity predict all-cause mortality [41,42]; the present results indicate that LAGB, able to induce weight loss and to prevent diabetes, prevents mortality through improvement of the general health status [43]. Finally, as reported above, we confirmed a significant age-related effect on prevention of mortality, in agreement with previous studies [5,8,11,39].
The main limitation lies in the retrospective nature of the study; the second limitation is that the study was not randomized, but at the time this study was conceived randomization was deemed unethical, so that prospective studies could not be performed. The fact that several patients refused surgery for multiple reasons might represent a selection bias; however, it should be emphasized that in the years 1995-2001 evidence of the benefits of bariatric surgery was still limited. Also, during the first 5 years there were 4 deaths (1 above median age) in the surgery group and 18 deaths (17 above median age) in the nonsurgical group (NS); we have no explanation for why the number of early deaths in both groups is higher than in previous papers [10], but differences between cohorts can occur. The fourth limitation is the sample size. The fifth limitation is represented by the fact that the use of LAGB is declining, so that some people argue LAGB should be abandoned; actually, LAGB is still performed in a significant proportion of patients with obesity. The last limitation is that our results cannot be generalized to all bariatric procedures, also because there are no studies of similar duration performed with other bariatric techniques. | 2019-01-05T22:57:43.299Z | 2018-12-01T00:00:00.000 | {
"year": 2018,
"sha1": "87b3a64a61471ee8d4e21cce3a31f647223b7bfc",
"oa_license": "CCBY",
"oa_url": "https://cardiab.biomedcentral.com/track/pdf/10.1186/s12933-018-0801-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "87b3a64a61471ee8d4e21cce3a31f647223b7bfc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
213233639 | pes2o/s2orc | v3-fos-license | Ecological distinctiveness of birds and mammals at the global scale
Ecologically distinct species (species with distinct trait combinations) are not directly prioritized in current conservation frameworks. The consequence of this blind spot is that the species with the most distinct ecological strategies might be lost. Here, we quantify ecological distinctiveness, based on six traits, for 10,960 bird and 5,278 mammal species, summarizing species-level ecological irreplaceability. We find that threatened birds and mammals are, on average, more ecologically distinct. Specific examples of ecologically distinct and highly threatened species are the Great Indian Bustard, Amsterdam Albatross, Asian Elephant and Sumatran Rhinoceros. These species have potentially irreplaceable ecological roles, and their loss could undermine the integrity of ecological processes and functions. Yet we also identify ecologically distinct, widespread generalists, such as the Lesser Black-backed Gull and Wild Boar. These generalist species have distinct ecological strategies that allow them to thrive across multiple environments. Thus, we suggest that high ecological distinctiveness is associated with either high extinction risk or successful hyper-generalism. We also find that ecologically distinct species are generally charismatic (using a previous measure of public perceptions of charisma). We thus highlight a conservation opportunity: capitalizing on public preferences for charismatic species could provide support for the conservation of the most ecologically distinct birds and mammals. Overall, our prioritization framework supports the conservation of species with irreplaceable ecological strategies, complementing existing frameworks that target extinction risk and evolutionary distinctiveness.
Introduction
A fundamental goal of conservation biology is to safeguard the diversity of life. Yet, global conservation funding falls short of what is required to prevent the loss of the world's biodiversity (McCarthy et al., 2012). Conservation expenditure must therefore be prioritized to effectively and efficiently minimise extinction and maintain nature's variability. Indices of priority species are an important tool for the allocation of scarce conservation resources. Traditionally, species prioritization frameworks have focussed on vulnerability (i.e., extinction risk), endemism, and 'flagship' status (Brooks et al., 2006;IUCN, 2018;Jenkins et al., 2013;Veríssimo et al., 2011). Although these aspects are important in identifying priority species, they focus on a single biodiversity dimension -taxonomic diversity. Yet biodiversity in all its dimensions (i.e., taxonomic, phylogenetic and ecological diversity) is likely required for the persistence of ecosystems (Gamfeldt et al., 2008;Hooper et al., 2005).
The application of phylogenetic diversity to set conservation priorities is gaining momentum (Brum et al., 2017;Gumbs et al., 2018;Isaac et al., 2007;Pollock et al., 2017;Thuiller et al., 2015). Phylogenetic diversity captures the isolation of lineages through deep time and has been applied to species prioritization through the Evolutionarily Distinct and Globally Endangered (EDGE) index, to highlight the role of species-level evolutionary irreplaceability (Gumbs et al., 2018;Isaac et al., 2007, 2012). However, phylogenetic diversity does not directly account for species' ecological strategies (trait combinations) and thus species' ecological irreplaceability. Researchers have advocated that maximizing phylogenetic diversity will potentially capture ecological diversity, as species traits often reflect shared evolutionary history (Mazel et al., 2018;Monnet et al., 2014;Vane-Wright et al., 1991;Winter et al., 2013). Yet, traits are not necessarily concordant with phylogeny, as phylogenetically divergent species can converge on analogous ecological strategies, due to similar adaptive responses to similar selection pressures (see, e.g. Pianka et al., 2017;Thuiller et al., 2015;Winemiller et al., 2015). For example, pangolins and armadillos, which belong to separate Orders, both have armoured bodies and consume termites and ants.
Thus, while maximizing phylogenetic diversity can sometimes help to support trait diversity, phylogenetic diversity captures trait diversity unreliably (Mazel et al., 2018;Redding and Mooers, 2015).
Here we quantify species-level ecological distinctiveness and thus recognise ecological (trait) diversity as a complementary dimension of biodiversity, as has been previously acknowledged spatially (Brum et al., 2017;Pollock et al., 2017;Thuiller et al., 2015) and in a conservation context (Bowen and Roman, 2005;Redding and Mooers, 2015). Traits reflect species' adaptations to their environment, where species live and how they interact (Violle et al., 2007). Traits also jointly determine a species ecological role and function (Wilman et al., 2014); thus trait combinations are increasingly being used to summarise species' ecological strategies (Brum et al., 2017;Cooke et al., 2019b). Moreover, a diversity of ecological strategies is required to support and maintain ecosystem processes and functions (Hector and Bagchi, 2007). However, species, and their ecological strategies, are disappearing, with strong implications for the environment (Cooke et al., 2019b). For instance, within an assemblage, the loss of species with distinct ecological strategies may have very different consequences from the loss of species with common ecological strategies (Larsen et al., 2005;Monnet et al., 2014;Mouillot et al., 2013b). Yet, the relationship between the distinctiveness of species' ecological strategies and extinction risk remains little explored (although see Redding and Mooers, 2015).
To summarize species' ecological distinctiveness, based on their traits, we employ the 'functional distinctiveness' metric (Grenié et al., 2017;Violle et al., 2017). Here we use the term ecological distinctiveness in preference to functional distinctiveness, as the focal traits may or may not directly reflect the ecosystem functions performed by species (Huang et al., 2012), but do directly relate to their ecological strategies (Cooke et al., 2019b). Our analyses therefore build upon the functional rarity literature (Grenié et al., 2017, 2018;Violle et al., 2017), but upscale ecological distinctiveness to a global assessment and thus do not directly incorporate information on taxonomic rarity, such as local abundance or regional restrictedness (Grenié et al., 2017). Instead, we ask how rare the traits of a given species are compared to all other species globally. Our goal is to identify the most distinct ecological strategies for birds and mammals; thus, recognizing species with potentially irreplaceable ecological roles, which could underpin the integrity of ecological processes and functions (Duffy, 2002;Larsen et al., 2005). Our framework therefore builds upon and complements existing taxonomic, e.g. the IUCN Red List (IUCN, 2018), and evolutionary, e.g., the EDGE index (Gumbs et al., 2018;Isaac et al., 2007;Jetz et al., 2014), conservation frameworks.
With these measures we investigate: (1) whether ecologically distinct species are at greater risk of extinction, (2) the relationship between ecological distinctiveness and evolutionary distinctiveness, and (3) which trait extremes dominate the most ecologically distinct species.
We make three predictions. First, we predict that threatened species will be more ecologically distinct. This prediction is based on multiple lines of evidence. For instance, extinction risk is evolutionarily and ecologically non-random (Cooke et al., 2019b;Purvis et al., and reproduction (Simberloff and Dayan, 1991;Winemiller et al., 2015). Diet type defines the ecological roles and major trophic interactions of species (Burin et al., 2016;Chillo and Ojeda, 2012;Duffy, 2002), and thus relates to functions such as scavenging, pollination, seed dispersal and nutrient cycling (Ripple et al., 2017;Sekercioğlu, 2006;Wenny et al., 2011); whereas diet diversity dictates species' diet breadth, how species respond to changes in resource availability, and summarizes the diversity of food web interactions for a species (Burin et al., 2016;Duffy, 2002;Newbold et al., 2013). Generation length signifies the turnover rate of breeding individuals in a population and therefore relates to the different rates at which taxa survive and reproduce (Cooke et al., 2018;IUCN Standards and Petitions Subcommittee, 2014). Thus, generation length reflects species' ability to recover after perturbations, where species with short generation lengths can repopulate or recolonize more quickly after disturbance (Newbold et al., 2013).
We extracted raw trait data (i.e., excluding estimated values) for body mass, litter/clutch size, habitat breadth and diet type from a database for 10,252 birds and 5,232 mammals, compiled by Cooke et al. (2019a) from four main sources (Jones et al., 2009;Myhrvold et al., 2015;Pacifici et al., 2013;Wilman et al., 2014). Habitat breadth was coded using the IUCN Habitats Classification Scheme and was quantified as the number of suitable habitats listed for each species (Cooke et al., 2019a). Diet type categorizes species into five groups according to their primary diet: plant/seed, fruit/nectar, invertebrates, vertebrates (including carrion), and omnivore (score of ≤ 50 in the four other diet categories) (Cooke et al., 2019a;Wilman et al., 2014). For diet diversity, we calculated a Shannon index on the proportions of 10 diet categories (Santini et al., 2019) extracted from the EltonTraits database (Wilman et al., 2014). BirdLife supplied generation length data for birds, but restrictions apply to these data, which we used under license for the current study. However, these data can be manually downloaded from the BirdLife website (http://datazone.birdlife.org/species/search). For mammals, we obtained generation length values from Pacifici et al. (2013), although we corrected three mammal generation length observations that have since been found to be anomalous (Cooke et al., 2018): Cephalophus adersi, Cephalophus leucogaster and Cephalophus spadix.
We supplemented the trait data with additional data from multiple sources (Dunning, 2008;Jones et al., 2009;Myhrvold et al., 2015;Pacifici et al., 2013;Wilman et al., 2014), so that every species had at least one trait value. We therefore updated the trait data to reflect the changes to the IUCN taxonomy since the trait data was first compiled. The updated trait data (excluding generation length for birds, due to data restrictions) are provided as Appendix A.
Trait data were transformed where it improved normality: log10 for body mass, generation length and litter/clutch size; square root for habitat breadth; and all numeric traits were standardized to zero mean and unit variance (z-transformation) (Figs. B.1 and B.2).
Transformation and standardization are recommended so that each trait has the same weight in the analyses and the units used to measure the traits have no influence (Villéger et al., 2008).
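A minimal R sketch of the transformations described above; the column names and values are hypothetical stand-ins for the trait table.

# Hypothetical raw trait table (one row per species)
traits <- data.frame(
  body_mass = c(12, 3500, 90),         # g
  generation_length = c(1.5, 12, 4),   # years
  litter_clutch_size = c(4, 1, 2),
  habitat_breadth = c(3, 1, 9)         # number of suitable IUCN habitats
)

# Transform where it improves normality, then z-standardize each trait
traits_t <- transform(traits,
  body_mass = log10(body_mass),
  generation_length = log10(generation_length),
  litter_clutch_size = log10(litter_clutch_size),
  habitat_breadth = sqrt(habitat_breadth)
)
traits_z <- as.data.frame(scale(traits_t))  # zero mean, unit variance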
Trait imputation
Complete trait data were not available for all species. To avoid excluding species, which can lead to reduced statistical power and introduce bias (Kim et al., 2018;Penone et al., 2014;Taugourdeau et al., 2014), we estimated missing data using Multivariate Imputation with Chained Equations (MICE). MICE has been shown to have greater accuracy, improved sample size and smaller error and bias than single imputation methods and the data deletion approach (Penone et al., 2014;Taugourdeau et al., 2014). We implemented MICE based on the functional (the transformed traits) and phylogenetic (the first 10 phylogenetic eigenvectors extracted from trees for birds (Prum et al., 2015) and mammals (Fritz et al., 2009)) relationships between species (Cooke et al., 2019a). We estimated missing data for birds and mammals for body mass (0.4% imputed for birds; 0% imputed for mammals), litter/clutch size (44% for birds; 37% for mammals), habitat breadth (18% for birds; 6% for mammals), diet type (26% for birds; 4% for mammals), diet diversity (26% for birds; 4% for mammals) and generation length (0.2% for birds; 0.4% for mammals). To estimate values, we used the mice() function (mice package (Van Buuren and Groothuis-Oudshoorn, 2011)).
We imputed 25 trait datasets to capture the uncertainty in the imputation process. We then performed subsequent analyses across the 25 trait datasets and calculated the associated total variance according to Rubin's rules -accounting for within imputation variance, between imputation variance and the number of imputations (Vink and van Buuren, 2014).
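A minimal sketch of this imputation loop, shown on mice's built-in nhanes example data rather than the trait-plus-eigenvector table used in the study; the per-dataset statistic and its pooling are placeholders for the distinctiveness calculations.

library(mice)

# 25 imputed datasets; in the study the predictors were the transformed
# traits plus the first ten phylogenetic eigenvectors
imp <- mice(nhanes, m = 25, method = "pmm", seed = 42, printFlag = FALSE)

# Repeat the analysis on each completed dataset, then combine across imputations
per_imp <- sapply(seq_len(imp$m), function(i) mean(complete(imp, i)$bmi))
mean(per_imp)  # pooled point estimate; Rubin's rules add within/between variance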
Ecological distinctiveness
Species priority lists can differ depending on the isolation metric chosen. There are many metrics available for both evolutionary and ecological isolation (Grenié et al., 2018;Villéger et al., 2008). For instance, trees have previously been used to evaluate the link between evolutionary distinctiveness and trait diversity (Redding and Mooers, 2015), and although trees can potentially be easier to compare (Redding and Mooers, 2015), they can be problematic (Petchey and Gaston, 2006).
For example, ecological trees tend to bias the initial distribution of ecological distances towards overestimating the dissimilarity between species pairs (Maire et al., 2015) and are sensitive to the species included in the analysis (Huang et al., 2012). Alternatively, there are two main isolation metric types available: pairwise metrics (average distance to all other species) and neighbour metrics (distance to the nearest relative) (Grenié et al., 2017;Redding et al., 2014).
Here, our primary analyses focus on ecological distinctiveness, also known as functional distinctiveness (Grenié et al., 2017) -a pairwise metric, as we aimed to quantify how uncommon the traits of a given species are compared to all other species globally (Grenié et al., 2018;Violle et al., 2017). We therefore focus on those species located in less species-dense areas of trait space, such as the edges, e.g., ecological outliers (Cooke et al., 2019b;Violle et al., 2017). Prioritizing ecologically distinct species should therefore conserve species with rare trait combinations, maintaining ecological diversity (Cooke et al., 2019b;Grenié et al., 2018).
We calculated the ecological distinctiveness of a species as the average distance in trait space from it to all other species in its Class, using distinctiveness_com() in the funrar package (Grenié et al., 2017). Ecological distances were calculated as Gower pairwise distances between species, which allows mixed trait types (e.g., continuous, categorical, ordinal data) while giving them equal weight (Villéger et al., 2008), using compute_dist_matrix() in the funrar package (Grenié et al., 2017).
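The definition can be illustrated directly without the funrar wrappers named above: build a Gower distance matrix (here with cluster::daisy rather than compute_dist_matrix()) and take each species' mean distance to all others. The four-species trait table is hypothetical.

library(cluster)

# Hypothetical mixed-type trait table (rows = species)
sp_traits <- data.frame(
  body_mass = log10(c(12, 3500, 90, 220)),
  habitat_breadth = sqrt(c(3, 1, 9, 4)),
  diet_type = factor(c("invertebrates", "vertebrates", "omnivore", "fruit/nectar")),
  row.names = paste0("sp", 1:4)
)

gower <- as.matrix(daisy(sp_traits, metric = "gower"))  # pairwise Gower distances
diag(gower) <- NA                                       # drop self-distances

# Ecological distinctiveness: mean distance from each species to all others
distinctiveness <- rowMeans(gower, na.rm = TRUE)
sort(distinctiveness, decreasing = TRUE)                # most distinct first

At the full scale of the study this would be a 10,960 x 10,960 matrix for birds, which is why a dedicated implementation such as funrar is the practical choice.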
However, the traits are not independent; thus, equal weighting can lead to overemphasis of specific ecological aspects, due to correlations between the traits. For instance, the strongest correlation (Spearman's rank correlation coefficient) across the traits is between body mass and generation length for birds (Spearman's ρ 10958 = 0.53). To evaluate the effect of non-independence between the traits, we extracted distances between species from a Principal Coordinates Analysis (PCoA), weighted by the eigenvalues. We performed the PCoA on the Gower distances, due to mixed trait types, using the dudi.pco() function in the ade4 package (Dray and Dufour, 2007). PCoA rotates the matrix of Gower distances to summarise inter-species (dis)similarity in a low-dimensional, Euclidean space (Legendre and Legendre, 1998). We then extracted the distances from the PCoA, weighted them by the eigenvalues, and used these weighted distances to recalculate ecological distinctiveness. However, our primary analyses focused on the equally weighted measure of ecological distinctiveness, as the correlations between the traits are generally low (Figs. B.3 and B.4), and the selected traits represent different ecological features (Cooke et al., 2019b). Thus, we used the PCoA as a comparative approach.
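The PCoA step can be sketched with base R's cmdscale() as a stand-in for ade4's dudi.pco(): classical scaling returns axis scores already multiplied by the square roots of the eigenvalues, so Euclidean distances between the scores are the eigenvalue-weighted distances described above. The data here are simulated.

library(cluster)

set.seed(7)
sim <- data.frame(a = rnorm(20), b = rnorm(20), c = runif(20))

# PCoA (classical MDS) on Gower distances; keep three axes
pco <- cmdscale(daisy(sim, metric = "gower"), k = 3, eig = TRUE)

# Eigenvalue-weighted inter-species distances, used to recalculate distinctiveness
pco_dist <- as.matrix(dist(pco$points))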
Furthermore, because ecological distinctiveness is computed using multiple traits, it can be difficult to disentangle the influence of individual traits on the metric. We therefore recalculated ecological distinctiveness excluding each trait one by one and then compared the values to ecological distinctiveness measured across all six traits. We did not reduce the number of traits below five because we might have missed important dimensions of the possible trait space. This analysis of ecological distinctiveness by dimension also helps to reveal the influence of, and dependence between, the traits, contrasting with the PCoA approach.
In addition, to evaluate the impact of our metric choice, we also calculated ecological uniqueness, using uniqueness_stack() in the funrar package (Grenié et al., 2017). Ecological uniqueness is the distance of a focal species to its nearest neighbour, thus species with high ecological uniqueness are more distant to their closest neighbour in trait space (Grenié et al., 2017) and could therefore have unique ecological strategies. Importantly, ecological uniqueness is more akin to the fair proportion measure of evolutionary distinctiveness used here than is our pairwise ecological distinctiveness measure (see discussion in Redding et al. (2014)).
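Ecological uniqueness follows from the same distance matrix: instead of the mean pairwise distance, take the minimum. Continuing the distinctiveness sketch above:

# Ecological uniqueness: distance to the nearest neighbour in trait space
uniqueness <- apply(gower, 1, min, na.rm = TRUE)
sort(uniqueness, decreasing = TRUE)  # most isolated ecological strategies first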
We also projected ecological distinctiveness and ecological uniqueness onto the first three principal components (selected based on screeplots -first three principal components explained 52% of the variation in traits for birds and 62% for mammals) extracted from the PCoA. The projection of ecological distinctiveness and ecological uniqueness helped us assess how these metrics capture the shape and structure of trait space for birds and mammals.
Extinction risk
We used the rl_history() function in the rredlist package (Chamberlain, 2016) to download up-to-date (as of 8th January 2019) IUCN categories for birds and mammals (IUCN, 2018). We then performed a multiple-comparison Kruskal-Wallis rank-sum test to compare ecological distinctiveness across IUCN categories, using the kruskal() function in the agricolae package (de Mendiburu, 2017). We also performed post-hoc tests using Fisher's least significant difference to differentiate between groups (de Mendiburu, 2017).
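A sketch of the test on invented scores; agricolae's kruskal() returns the rank-based group letters used to report differences between IUCN categories.

library(agricolae)

# Hypothetical distinctiveness scores across three IUCN categories
set.seed(3)
d <- data.frame(
  di = c(rnorm(60, 0.42, 0.03), rnorm(200, 0.41, 0.03), rnorm(400, 0.40, 0.03)),
  iucn = rep(c("CR", "NT", "LC"), times = c(60, 200, 400))
)

# Kruskal-Wallis rank-sum test with Fisher's LSD-style post-hoc letters
kw <- with(d, kruskal(di, iucn, group = TRUE, console = TRUE))
kw$groups  # categories sharing a letter do not differ significantly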
Evolutionary distinctiveness
Evolutionary distinctiveness measures the relative contribution of a species to the total evolutionary history of their taxonomic group (Gumbs et al., 2018). The evolutionary distinctiveness of a species is high when the species shares its path to the root with few other species or has a long unshared branch length with all the other species (Isaac et al., 2007;Redding et al., 2008, 2014). We obtained evolutionary distinctiveness scores for 10,960 bird species and 5,454 mammal species from the EDGE website (https://www.edgeofexistence.org/edge-lists/, accessed October 2018), but excluded marine mammals and species that were not classified by the IUCN (e.g., taxonomic mismatches or domesticated species, such as Equus caballus).
Geographic range
We also calculated geographic range size for birds and mammals, using spatial polygons from the IUCN (IUCN, 2018) and BirdLife (BirdLife International and Handbook of the Birds of the World, 2018). Although we expect range size to be associated with habitat breadth, they are derived independently (range size is calculated from distributional data and habitat breadth is derived from IUCN habitats listed as suitable by species' experts). We filtered the polygons to include only those coded as presence: 'Extant' (i.e., we removed polygons coded as presence: 'Probably Extant', 'Possibly Extant', 'Possibly Extinct', 'Extinct' or 'Presence Uncertain'). We re-projected the polygons to cylindrical equal area and then calculated their area in square kilometres, using the area() function in the raster package (Hijmans, 2019), and summed the area across all extant polygons per species. We could not calculate range size for 1,928 birds and 294 mammals, due to lack of spatial data, changes to taxonomy and/or no 'Extant' polygons, resulting in data for 9,032 birds and 4,984 mammals.
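A hypothetical sketch of this range-size workflow using the sf package (the study used the older sp/raster stack with raster::area()); the file name and the PRESENCE/BINOMIAL attribute fields are assumptions about the IUCN shapefile layout.

library(sf)

ranges <- st_read("species_ranges.shp")   # placeholder path to IUCN range polygons
ranges <- ranges[ranges$PRESENCE == 1, ]  # assumed coding: 1 = 'Extant'

# Re-project to a cylindrical equal-area CRS, then sum polygon areas per species
cea <- st_transform(ranges, "+proj=cea +units=m")
cea$area_km2 <- as.numeric(st_area(cea)) / 1e6

range_size <- tapply(cea$area_km2, cea$BINOMIAL, sum)  # km2 per species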
Ecological distinctiveness
Bird ecological distinctiveness (mean across the 25 imputed trait datasets) ranges from 0.28 (Chestnut-winged Cinclodes Cinclodes albidiventris) to 0.69 (Greater Rhea Rhea americana) (Fig. 1). Mammal ecological distinctiveness (mean across the 25 imputed trait datasets) ranges from 0.33 (Stephen's Woodrat Neotoma stephensi) to 0.62 (Leopard Panthera pardus) (Fig. 1). Mean bird ecological distinctiveness is 0.37 (median = 0.36) and mean mammal ecological distinctiveness is 0.41 (median = 0.41). Of the twenty most distinctive birds, five are threatened (Critically Endangered, Endangered or Vulnerable), while seven of the twenty most distinctive mammals are threatened; thus, threatened species are proportionally overrepresented among the most distinctive birds and mammals. Priority species based on ecological uniqueness differ from those based on ecological distinctiveness (Fig. B.5). In particular, the correlation between ecological uniqueness and ecological distinctiveness is lower for mammals (Spearman's ρ 5276 = 0.05) than for birds (Spearman's ρ 10958 = 0.52). Yet, six bird species (Fig. 2a and Fig. B.5a) and seven mammal species (Fig. 2b and Fig. B.5b) are present in the top twenty species for both ecological uniqueness and ecological distinctiveness.
Ecological distinctiveness and threat status
Ecological distinctiveness differs between IUCN categories for both birds (Fig. 2a) and mammals (Fig. 2b). Ecological distinctiveness for mammals is highest for CR species (0.42, a), followed by EN (0.42, ab) and VU species (0.42, ab), then NT (0.42, b), then LC (0.41, c), and then DD species (0.40, d) (Fig. 2b). Thus, in general, threatened (CR, EN, VU) bird and mammal species are more ecologically distinct than non-threatened (NT, LC) species. Moreover, threatened bird species are more ecologically unique than non-threatened bird species, although there is no difference for mammals (Fig. B.6).
Ecological distinctiveness and evolutionary distinctiveness
Ecological distinctiveness is very weakly positively correlated with log evolutionary distinctiveness for birds (Spearman's ρ 10958 = 0.024), and there is a very weak negative correlation for mammals (Spearman's ρ 5276 = -0.018) (Fig. 3). By contrast, the correlations between ecological uniqueness and evolutionary distinctiveness are stronger than the correlations between ecological distinctiveness and evolutionary distinctiveness. Specifically, ecological uniqueness is weakly positively correlated with log evolutionary distinctiveness for birds (Spearman's ρ 10958 = 0.17), and there is also a weak positive correlation for mammals (Spearman's ρ 5276 = 0.26) (Fig. B.7).
Ecological distinctiveness and range size
Although threatened species are, on average, more ecologically distinct than non-threatened species (Fig. 2), we find a weak positive correlation between range size and ecological distinctiveness.
Ecological distinctiveness by dimension
Ecological distinctiveness for the top twenty bird species is predominantly driven by large body mass, long generation length and high habitat breadth (Fig. 5a). For mammals, the primary drivers of distinctiveness for the top twenty species are large body mass, high habitat breadth and a carnivorous diet (Fig. 5b).
Sensitivity analysis
Ecological distinctiveness is highest at the edges of the PCoA trait space, furthest from the most speciose regions, for both birds (Fig. B.8a) and mammals (Fig. B.8c). By contrast, ecological uniqueness highlights individual isolated species within the PCoA trait space for birds (Fig. B.8b) and mammals (Fig. B.8d); this is particularly apparent for mammals, reflecting the clustered distribution of mammals in trait space (Fig. B.8d).
Overall, our results and conclusions are qualitatively similar for ecological uniqueness (Figs. B.5, B.6 and B.7) and for PCoA ecological distinctiveness (Figs. B.9, B.10 and B.11). Thus, our findings appear robust to the choice of metric and the correlations between the traits, although ecological uniqueness could offer alternative insights on the irreplaceability of species' ecological strategies.
Discussion
We find, as predicted, that on average, threatened birds and mammals are more ecologically distinct than non-threatened species. Continuing to conserve threatened species should therefore simultaneously reduce extinction and support global ecological diversity, thus maintaining nature's variability.
However, our findings also support the need for a balanced consideration of both non-threatened (i.e., common) and threatened (i.e., rare) species (Chapman et al., 2018; Gaston, 2011). Most of the top twenty ecologically distinct birds and mammals are non-threatened, including ubiquitous species, such as Lesser Black-backed Gull (Larus fuscus), Wild Boar (Sus scrofa), Coyote (Canis latrans) and Black Rat (Rattus rattus). We therefore demonstrate that, although threatened birds and mammals are more ecologically distinct on average, non-threatened species can have extremely distinct ecological strategies, contrary to our predictions. Thus, we find that both common and rare species make unique contributions to ecological diversity (as also reported for hydrothermal vents; Chapman et al., 2018).
We find that these ecologically distinct non-threatened species (e.g., Lesser Black-backed Gull, Wild Boar) are generally large-bodied habitat generalists, which are often widespread and successful in multiple environments; in other words, hyper-generalists. For example, we observe a positive correlation between range size and ecological distinctiveness. Yet a common ecological tenet is that generalist species are at a disadvantage when competing with specialists, a 'jack of all trades is a master of none' mechanism (Büchi and Vuilleumier, 2014; Burin et al., 2016; Marvier et al., 2004). For instance, when a specialist and a generalist species compete for the specialist's preferred resource, the specialist species should ecologically outperform the generalist (Burin et al., 2016). Instead, here we suggest that the evolution of distinct ecological strategies could allow some generalist species to separate themselves from direct competitors and reduce interspecific competition, via negative frequency-dependent selection, allowing them to successfully colonise and occupy a diversity of environments (Chapman et al., 2018; Levine and HilleRisLambers, 2009; Violle et al., 2017).
We suggest that it is ecologically difficult to be a hyper-generalist, hence it is rare to be common (Gaston, 2011). Hyper-generalists are therefore distinctive. While this is counterintuitive, we suggest that generalists potentially require specialist traits to survive in a diverse set of environmental conditions and habitats, although this requires further investigation. In addition, these species could be promoted by human-assisted dispersal and/or human impacts (human commensals; e.g., Black Rat, Lesser Black-backed Gull), as generalists can often take advantage of disturbed or heterogeneous landscapes, such as human-dominated systems (Büchi and Vuilleumier, 2014; Marvier et al., 2004; Monnet et al., 2014). Moreover, hyper-generalist species are potentially ecologically important, as they are often involved in engineering environments and interact with many other species (Gaston, 2011). If unchecked, a decline of these distinctive hyper-generalists could lead to cascading ecological effects. The evolutionary and ecological adaptations of these species therefore require further research to understand why these species are so successful in different environments and how they contribute to ecosystem processes and function across scales.
Overall, we suggest that high ecological distinctiveness is associated with either high extinction risk or successful hyper-generalism.
The most ecologically distinct species, as quantified here, often have unique roles in their environment. For example, predators, such as White-tailed Sea-eagle (Haliaeetus albicilla), Leopard (Panthera pardus), Bald Eagle (Haliaeetus leucocephalus), Grey Wolf (Canis lupus) and Puma (Puma concolor), can affect grazing and mesopredation pressure, productivity, disease dynamics and carbon sequestration (Estes et al., 2011; O'Bryan et al., 2018; Ripple et al., 2014; Ritchie et al., 2012; Ritchie and Johnson, 2009); while African (Loxodonta africana) and Asian Elephants (Elephas maximus), and Hippopotamus (Hippopotamus amphibius) can alter vegetation structure and composition, fundamentally restructuring ecosystems (Bakker et al., 2016; Terborgh et al., 2016, 2018). Thus, the ecologically distinct species highlighted here have critical roles in ecosystems across the globe. The loss of these ecologically distinct species could therefore disrupt species interactions and undermine the integrity of ecological processes and functions (Duffy, 2002; Larsen et al., 2005). Moreover, population declines of these ecologically distinct species could also have strong impacts, as species' abundances affect their contributions to ecological processes (Gaston, 2011; Winfree et al., 2015). For instance, species' ecological effects are often assumed to be proportional to their abundance or biomass (Grime, 1998), although there is evidence that rare species can also have important ecological roles (Leitão et al., 2016; Mouillot et al., 2013a), especially across time and under disturbance.
Thus, it is important to maintain the abundance, as well as the existence, of ecologically distinct species. Furthermore, there is potential to incorporate abundance data into future assessments of ecological distinctiveness, when comparable data become available (e.g., accounting for differences in detectability), and this could reveal species that are both ecologically rare and ecologically distinct, as well as species with crucial ecological roles at the local scale (Grenié et al., 2017, 2018).

We find that ecologically distinct species are generally charismatic. For example, six (Elephant, Panther, Polar Bear, Wolf, Hippo and Rhino) of the top twenty most charismatic animals, based on public perceptions of charisma (Albert et al., 2018), correspond to species in the top twenty most ecologically distinct mammals. Public preferences for charismatic bird and mammal species (Morse-Jones et al., 2012; Smith et al., 2012) are reflected in greater willingness-to-pay for conservation focusing on these species (Albert et al., 2018; Colléony et al., 2017; Martín-López et al., 2007). We therefore highlight a conservation opportunity, where the protection of ecologically distinct species can be facilitated through the public support of charismatic species. The use of charismatic species to elicit funding is controversial, as it can divert focus to species that are not the most threatened or ecologically important (Albert et al., 2018; Brodie, 2009; Colléony et al., 2017; Restani and Marzluff, 2002). However, here we show that charismatic species may be deserving of their elevated attention, due to their often-distinct ecological strategies and therefore potentially unique ecological roles. In addition, funding for charismatic species can result in additional benefits (e.g., flagship species), through conservation actions shared with other species (Bennett et al., 2015), because these species tend to be broad ranging, leading to conservation of habitats that encompass many other species. Flagship marketing remains a key fundraising tool for international agencies (e.g., IUCN and United Nations), non-governmental organisations, local governments, and the scientific community (Bennett et al., 2015). Thus, capitalizing on the appeal of charismatic and/or flagship species will help to conserve the most ecologically distinct species and maintain a diversity of ecological strategies across the globe, supporting and maintaining ecosystem processes and functions (Hector and Bagchi, 2007).
Our species priority lists differ for ecological uniqueness and ecological distinctiveness. The difference between ecological uniqueness and ecological distinctiveness is greater for mammals than for birds, potentially due to the more clustered distribution of mammals in trait space, although this requires further investigation. Ecological uniqueness identifies species that are more distant to their closest neighbour in trait space (Grenié et al., 2017), and uniqueness appears to capture different attributes to ecological distinctiveness, highlighting ecological oddities, such as the Kakapo (Strigops habroptila) and Naked Mole Rat (Heterocephalus glaber). In addition, ecological uniqueness shows a stronger (albeit still weak) correlation with evolutionary distinctiveness, compared to the correlation between ecological distinctiveness and evolutionary distinctiveness. We therefore highlight surrogacy between ecological uniqueness (and thus nearest-neighbour metrics of ecological isolation) and evolutionary history, supporting previous predictions that evolutionarily distinct species are likely to have more unique features or trait combinations (Faith, 1992; Redding and Mooers, 2015). Thus, there is also potential for the use of ecological uniqueness for conservation prioritization, and indeed ecological uniqueness could be complementary to ecological distinctiveness or might be preferred under different conservation objectives. For example, ecological uniqueness could help to identify individual species isolated in trait space, which could be informative at regional scales where ecological redundancy between species is low (Cooke et al., 2019a). Nonetheless, the species present in the top twenty for both ecological uniqueness and ecological distinctiveness were generally charismatic, reaffirming the importance of the ecological strategies of charismatic species. By contrast, ecological distinctiveness summarizes how uncommon the traits of a given species are compared to all other species globally (Grenié et al., 2018), which can depress differences between species, due to the averaging effect. The ranking of ecologically distinct species can therefore be sensitive to small differences between species. However, ecological distinctiveness also shows more consistent structuring across trait space than ecological uniqueness and highlights species at the edges of trait space: ecological outliers. The conservation of species with high ecological distinctiveness should help to maintain a broad range of ecological strategies and minimize predicted directional trait shifts, by conserving those species at the edges of trait space, potentially supporting high ecological strategy diversity and continued ecosystem functioning (Cooke et al., 2019b; Hector and Bagchi, 2007).
Conclusions
We demonstrate that evolutionary distinctiveness is a poor surrogate for ecological distinctiveness. We therefore suggest that joint consideration of a species' evolutionary and ecological distinctiveness could better summarise the irreplaceability of a species and inform conservation prioritization. However, management actions must be timely, as well as targeted (Gumbs et al., 2018). Hence, species at imminent risk of extinction are widely considered to be the first priority for immediate conservation action (Gumbs et al., 2018). We therefore propose that highly threatened species that are also ecologically and evolutionarily distinct require urgent attention, as the loss of these species could result in disproportionate ecological consequences (Cooke et al., 2019b) and a disproportionate loss of evolutionary history (Davis et al., 2018; Steel et al., 2018). Ecological distinctiveness, as quantified here, highlights the potential ecological costs of species loss, and therefore provides a complementary perspective to existing conservation prioritization frameworks, e.g., the EDGE approach (Isaac et al., 2007; Redding and Mooers, 2010). We therefore add to the growing consensus that, beyond focusing on the number of species or on those with major extinction risks, other facets of biodiversity need to be considered (Bowen and Roman, 2005; Brum et al., 2017; Isaac et al., 2007; Thuiller et al., 2015). Specifically, we suggest that prioritisation that accounts for extinction risk, ecological distinctiveness and evolutionary distinctiveness can contribute to the overall goal of conservation: maintaining living variation.
Thus, we suggest that our quantification of ecological distinctiveness could better inform species prioritization and the direction of conservation actions, highlighting species with irreplaceable ecological strategies.
Data statement
The raw trait data (i.e., excluding imputed values) for body mass, litter/clutch size, habitat breadth and diet type, compiled by Cooke et al. (2019a) from four main sources (Jones et al., 2009; Myhrvold et al., 2015; Pacifici et al., 2013; Wilman et al., 2014), are available on figshare: https://figshare.com/articles/Global_trade-offs_of_functional_redundancy_and_functional_dispersion_for_birds_and_mammals/5616424. BirdLife supplied generation length for birds, but restrictions apply to these data, which we used under license for the current study. However, these data can be manually downloaded from the BirdLife website (http://datazone.birdlife.org/species/search).
The 25 imputed trait datasets (excluding generation length for birds, due to data restrictions) are provided as Appendix A. The traits have been transformed: log10 for body mass, generation length and litter/clutch size; square root for habitat breadth. The dataset columns are: body mass (body_mass_median), litter/clutch size (litter_clutch_size), generation length (GL), habitat breadth (hab_breadth), diet type (diet_5cat) and diet diversity (shdd).
Evolutionary distinctiveness scores for mammals and birds were freely downloaded from https://www.edgeofexistence.org/edge-lists/. The calculated ecological distinctiveness scores for all bird and mammal species are provided as Appendix C, as well as the scores for PCoA ecological distinctiveness and ecological uniqueness. | 2020-02-20T09:09:04.833Z | 2020-06-01T00:00:00.000 | {
"year": 2020,
"sha1": "c2aab95258b1e0fe24906d81546705ae6d4a7251",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.gecco.2020.e00970",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "797ac79abe101f2f197e5fdf20a6f406eed0121d",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Geography"
]
} |
3641035 | pes2o/s2orc | v3-fos-license | Paediatric lung imaging: the times they are a-changin'
Until recently, functional tests were the most important tools for the diagnosis and monitoring of lung diseases in the paediatric population. Over the past decade, chest imaging has gained considerable importance for paediatric pulmonology as a diagnostic and monitoring tool to evaluate lung structure. Since January 2016, a large number of papers have been published on innovations in chest computed tomography (CT) and/or magnetic resonance imaging (MRI) technology, acquisition techniques, image analysis strategies and their application in different disease areas. Together, these papers underline the importance and potential of chest imaging and image analysis for today's paediatric pulmonology practice. The focus of this review is chest CT and MRI, as these are the modalities that will be increasingly used by most practices. Special attention is given to standardisation of image acquisition, image analysis and novel applications in chest MRI. The publications discussed underline the need for the paediatric pulmonology community to implement and integrate state-of-the-art imaging and image analysis modalities into their structure-function laboratory for the benefit of their patients.
Introduction
Until recently, functional tests were the most important tools for the diagnosis and monitoring of lung diseases in the paediatric population. Functional tests are an indirect method to detect structural lung changes and are relatively insensitive for the detection and monitoring of localised structural lung changes. Chest imaging has gained importance in paediatric pulmonology as a diagnostic and monitoring tool to evaluate lung structure, thanks to technical innovations in computed tomography (CT) and magnetic resonance imaging (MRI) technology. A PubMed search using the keywords "lung", "child" and "imaging" identifies over 300 papers published since January 2016. Around 220 of these papers include chest CT or MRI acquisition techniques, image analysis strategies and their application in different disease areas. Of these papers, 80% were related to chest CT and 20% to chest MRI. Together, these papers underline the importance and potential of chest imaging and image analysis in today's paediatric pulmonology practice. The focus of this review is on chest CT and MRI, as these are the modalities where considerable progress has been made over the past year. For the role of chest ultrasound, positron emission tomography (PET)-CT and PET-MRI, we refer to recent comprehensive reviews [1][2][3][4]. Together, these publications underline the need for the paediatric pulmonology community to re-evaluate the role of the various modalities for the benefit of their patients. The current gap between the world of imaging and functional tests needs to be bridged. We will briefly discuss the historical context of recent developments, followed by key developments in chest CT, MRI and image analysis in the past year.
Of sound and vision
Up to 1945, the stethoscope (invented by R. Laennec) was the most important diagnostic tool for investigating the lung, but then the chest radiograph was accepted by the medical community as a more sensitive diagnostic tool to evaluate lung structure. This paradigm shift, nearly 50 years after the discovery of X-rays by W. Röntgen, is nicely documented by a landmark paper published in 1945 by the Royal College of Medicine, which summarises the pros and cons of chest radiographs versus the stethoscope [5,6]. Even though major limitations of chest radiographs were recognised at that time, chest radiography has been widely used ever since as a diagnostic tool to depict lung structure for the detection of lung disease. The next major breakthrough came when Cormack and Hounsfield developed CT. The first CT scanner for clinical use was installed in Cambridge in 1971. The first papers describing the use of chest CT in the paediatric age group began to appear a few years later, in 1977 [7]. Despite its enormous potential in depicting lung structure, its adoption as a diagnostic tool for paediatric pulmonology has been slow. An important argument against the use of CT in the paediatric population related to its relatively high radiation burden compared to chest radiography. Fortunately, over recent decades, technical innovations in CT scanner technology and image reconstruction have resulted in a substantial reduction in the radiation dose [8]. Powerful post-processing techniques, such as iterative image reconstruction, allowed reduction of the radiation dose to levels which are nowadays on the order of 3-6 months of background radiation [9]. Furthermore, the possible risks related to radiation, as well as the perception of these risks, are now clearly described and put into perspective, allowing risk and benefit to be balanced more adequately, without pitfalls [10][11][12][13][14]. Furthermore, thanks to the development of very fast CT scanners, it is now possible to scan even rapidly breathing young children without the need for anaesthesia or sedation. As a result of these innovations, chest CT can now be used more safely in the paediatric population to diagnose and monitor a wide range of lung diseases. However, the more widespread use of chest CT requires standardisation of chest CT protocols and breathing manoeuvres to a similar level as has been accomplished for lung function tests. The paediatric pulmonology community has to take on this responsibility, together with the radiology community, to accomplish lung volume standardisation.
Every breath you take
Standardisation of breathing manoeuvres for lung function tests has been well implemented throughout the world. This contrasts sharply with the lack of standardisation for the acquisition of chest CT images. It has long been recognised that spirometry guidance to standardise lung volume during chest CT can be important for the proper diagnosis of bronchiectasis and parenchymal diseases on inspiratory images, as well as for the recognition of malacia and regions of low attenuation on expiratory scans [15,16]. Spirometry-guided CT improves the diagnostic yield, in particular that of the expiratory chest CT. It has even been suggested that for cystic fibrosis (CF), an expiratory scan might suffice for the diagnosis of all relevant pathological changes [17]. Despite these studies showing the potential benefit of spirometry-guided image acquisition, it has not been implemented on a wide scale to date. New studies have been published that improve our understanding of the potential benefit of spirometry-guided chest imaging. To implement such image acquisition in the clinic, close collaboration between pulmonologists and radiologists is required. Its feasibility for the clinical routine has been described by SALAMON et al. [18], who outline a practical method to obtain accurate lung volume measurements for the guidance of chest CT imaging in paediatric patients. This procedure requires training of the subject before each chest CT or MRI, an MRI-compatible spirometer, and close collaboration between a lung function technician and the radiographer. Patients are trained to execute a breath-hold with open glottis at total lung capacity level for the inspiratory scan and at residual volume for the expiratory scan. A good-to-excellent target volume level for the inspiratory or expiratory scan was achieved in ∼90% of children. Spirometry-guided chest CT scans have been evaluated in clinical practice in a tertiary care children's hospital [19]. In this retrospective case-control study in children aged ⩾8 years, CT scans obtained before and after implementation of a spirometry-guided CT protocol were compared. Spirometry-guided CT scans (n=50 cases) were matched by age, sex and diagnosis (CF versus other) to CT scans obtained with voluntary breath-holds in the 6 years before implementation of the spirometry assistance protocol (controls). CT scans were evaluated by two paediatric radiologists blinded to the study. The most important difference was in the mean±SD expiratory image density, which was −629±95 HU among cases and −688±83 HU among controls (p=0.002). The authors concluded that spirometry-assisted CT scans had a significantly greater difference in lung density between inspiratory and expiratory scans than those performed with voluntary breath-holds, thus probably improving the ability to detect air trapping. No appreciable difference in image quality was detected for the presence of motion artefacts or atelectasis.
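The density figures above are straightforward to reproduce from segmented CT data. The sketch below computes the mean lung attenuation (in HU) inside a lung mask for paired inspiratory and expiratory volumes and reports their difference, the quantity compared between spirometry-guided and voluntary breath-hold scans in the study cited. It is a minimal illustration, assuming pre-computed NumPy arrays for the HU volumes and binary lung masks; the array names and toy data are invented for the example.

```python
import numpy as np

def mean_lung_density(hu_volume: np.ndarray, lung_mask: np.ndarray) -> float:
    """Mean attenuation (HU) over voxels inside a binary lung mask."""
    return float(hu_volume[lung_mask > 0].mean())

# Toy volumes standing in for real CT data (HU values, 3D arrays).
rng = np.random.default_rng(1)
shape = (64, 128, 128)
mask = np.ones(shape, dtype=bool)
insp = rng.normal(-850, 40, size=shape)   # inspiratory lungs: more air, lower HU
expi = rng.normal(-650, 40, size=shape)   # expiratory lungs: less air, higher HU

d_insp = mean_lung_density(insp, mask)
d_expi = mean_lung_density(expi, mask)
print(f"Inspiration: {d_insp:.0f} HU, expiration: {d_expi:.0f} HU, "
      f"difference: {d_expi - d_insp:.0f} HU")
```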
The spirometry-assisted method has also been used successfully in patients suffering from Pompe disease to determine the function of the diaphragm [20]. Furthermore, spirometry-assisted MRI has even been used in a large birth cohort study. In this study, two spirometry-controlled inspiratory and two expiratory MRI scans were acquired within 5 min, with a success rate of 90% [21]. Hence, implementation of spirometry-guided chest CT and MRI is considered feasible and adds to the diagnostic quality of images. However, it requires the involvement of a lung function technician, and there are some logistic hurdles to overcome. The need for standardisation is not dissimilar from that of lung function measurements, where proper training adds to the reproducibility and interpretation of the test. This development needs to be driven by the paediatric pulmonology community, as lung function technicians play a key role in the implementation and execution of procedures.
Let's stick together
To date, the implementation of CT protocols has been largely performed by local radiology communities and has been focused on the balance between diagnostic image quality and radiation dose levels [22]. There is a great need for more standardisation of CT protocols between centres, for a number of reasons. First, for rare lung diseases such as CF, primary ciliary dyskinesia (PCD), bronchiectasis and interstitial lung diseases, large global clinical networks and registries have been developed to improve our understanding and treatment of these diseases. In these rare disease communities there is a desire to add information obtained from images to registries. This can be accomplished best when imaging protocols are sufficiently standardised to allow centralised scoring, and manual or automated image analysis. The latter requires a higher level of standardisation. Secondly, chest CT is increasingly used as an outcome measure in clinical trials for various chest diseases. Clinical trial networks have been set up for the joint execution of such clinical trials. So far, standardising image quality across centres in relation to radiation dose, reconstruction kernels, slice thickness, etc. has not been well addressed. Recognising the need for a higher level of standardisation of chest CT, the Standardised Chest Imaging Framework for Interventions and Personalised Medicine in CF (SCIFI CF) was founded to characterise chest CT image quality and radiation doses among 16 CF centres in the European Union (EU), seven in Australia and three in the USA [22]. The authors aimed to standardise CT protocols in children and adolescents in several CF centres. In doing so, an image quality factor (Q-factor) is assessed, which incorporates the influence of both dose and spatial resolution on image quality. Across the 16 EU centres, CT protocols varied greatly. However, when adjusting for differences in preferred spatial resolution and radiation dose, the performance of all CT scanners (i.e. the Q-factor) was found to fall within a small range. It was concluded that multicentre standardisation of chest CT in children and adolescents with CF is achievable for clinical care and management. The SCIFI effort has contributed to the inclusion of chest CT as a primary or secondary outcome measure in several ongoing clinical trials in CF (e.g. clinicaltrials.gov NCT02950883 and NCT01270074). Another important standardisation effort has been the completion of guidelines to harmonise radiation dose for paediatric imaging throughout Europe [23]. Both efforts on image quality and radiation dose standardisation were carried out in concordance with the protocol optimisation principle ALARA (As Low As Reasonably Achievable).
To standardise chest MRI across centres and vendors to the level that it can be used for registries and multicentre clinical trials is considered a major challenge, and it is still early days. There are a number of reasons for this. First, there are endless possibilities to vary the settings for the sequences. Secondly, there are substantial differences between vendors in the sequences that can be used routinely. Chest MRI is currently mostly used in single-centre studies. Methods to standardise image quality for MRI are in development and are being applied to allow multicentre studies (clinicaltrials.gov NCT02270476 and NCT01245933).
When numbers get serious
Today's radiology reports of chest images for routine care are still largely expert-based, mostly not standardised, and do not contain quantitative outcome data. In other clinical specialities like cardiology, quantitative post-processing methods to acquire objective outcome measures are available and well implemented. For lung diseases in adults, such as chronic obstructive pulmonary disease (COPD), image analysis systems have been developed to measure airway dimensions, parenchymal density and emphysema [24,25]. Scoring systems have also been used to characterise disease progression in a cohort of PCD patients [26]. Initially, these systems were developed for research purposes, but increasingly they are finding their way into routine clinical care.
In children, scoring systems have been extensively used to validate chest CT outcome measures in CF [27]. Using the standardised CF-CT scoring system it was shown that the sweat test is an early predictor of later structural lung disease [28]. Unfortunately, scoring systems such as the CF-CT system are not very sensitive for the detection and monitoring of early disease and cannot be automated easily. PRAGMA-CF (Perth-Rotterdam Annotated Grid Morphometric Analysis for CF) is a quantitative grid method to quantify morphological changes that occur in early CF disease [29], and is based on a morphometric approach previously used for quantification of advanced CF lung disease. Comparing lung clearance index (LCI) values to PRAGMA-CF outcomes in 42 infants, 39 preschool and 38 school-aged children, it was concluded that for infants LCI is insensitive for the detection of structural lung disease and that in preschool and school-age children LCI cannot replace chest CT to screen for bronchiectasis [30].
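The grid idea behind PRAGMA-CF-style scoring is simple to express in code. The sketch below overlays a square grid on an annotated CT slice and reports the fraction of grid cells containing any diseased pixel, a disease-extent proxy in the spirit of, but not identical to, the published method, whose annotation hierarchy and cell sizing are more elaborate. The label codes and grid size are assumptions for illustration.

```python
import numpy as np

# Per-pixel annotation of one axial slice: 0 = normal lung, 1 = bronchiectasis,
# 2 = other airway disease, 3 = atelectasis (codes invented for this example).
def grid_disease_fraction(annotation: np.ndarray, cell: int = 20) -> float:
    """Fraction of grid cells containing at least one diseased pixel."""
    rows, cols = annotation.shape
    diseased, total = 0, 0
    for r in range(0, rows, cell):
        for c in range(0, cols, cell):
            block = annotation[r:r + cell, c:c + cell]
            total += 1
            if (block > 0).any():
                diseased += 1
    return diseased / total

rng = np.random.default_rng(2)
slice_annotation = (rng.random((200, 200)) > 0.98).astype(int)  # sparse disease
print(f"Disease extent: {100 * grid_disease_fraction(slice_annotation):.1f}% of grid cells")
```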
A morphometric approach similar to that of PRAGMA-CF is now also applied to other diseases in children and adults, such as bronchopulmonary dysplasia (BPD) [31, 32] and interstitial lung diseases [33].
Airway disease is an important component of the above-mentioned diseases. For this reason KUO and co-workers [34,35] developed the airway-artery (AA) method to measure all visible airway-artery pairs in a view perpendicular to the airway axis on a 3D-reconstructed bronchial and arterial tree. The AA method allows an objective diagnosis of bronchiectasis and airway wall thickening. KUO and co-workers [34,35] compared airway and artery dimensions in a small group of preschool [35] and school-aged [34] children with CF and in controls (figure 1). Depending on the age of the patients and the inspiratory level during CT acquisition, between 50 and 500 AA pairs could be measured per CT. In school-aged children the number of visible AA pairs was doubled in patients with CF compared to controls, due to inflammation and dilation of the smaller airways augmenting their visibility. The diagnosis of bronchiectasis appeared to be dependent on the lung volume at which the CT scan was acquired [35].

FIGURE 1 A method for the objective assessment of airway-artery dimensions (the AA method) to diagnose bronchiectasis. Boxplots show the ratio between the outer edge of the airway (Aout) and the adjacent artery (A). Aout/A ratios are shown for four consecutive groups: control (n=23); first and second cystic fibrosis-computed tomography (CF-CT 1 (n=12) and CF-CT 2 (n=12), respectively); and CT scans in a cross-sectional cohort including patients aged 6-16 years (CF 6-16) (n=11). In total, 11 262 AA pairs were measured. Aout/A ratios are plotted against segmental generation (1 is the first segmental bronchus, up to the 12th airway generation peripheral from the segmental bronchus). Control subjects were age-matched to CF patients. Median ages are 2 years, 3.9 years and 11 years for CF-CT 1, CF-CT 2 and CF 6-16, respectively. Boxes show median (horizontal line), interquartile range (box) and 1.5× interquartile range (whiskers). Outliers are shown as points. Note that for controls, Aout/A ratios were constant, whereas for each of the three CF groups an increasing and significant difference in Aout/A ratio between the CF and control group was found from generation 2 to generation 5 (all p⩽0.02). Furthermore, the difference between CF and controls was bigger for the oldest CF patients. Note that from generation 9 onwards, no more airway-artery pairs were visible on the scans of control subjects, while airway-artery pairs were still visible on the CT scans of CF patients [34,35]. Reproduced and modified from [34].

These findings once more support the need for volume standardisation for cooperative children during acquisition. In addition, it was objectively shown that a comparison of the outer airway diameter with the artery is more accurate in assessing bronchiectasis than the inner airway diameter. Finally, KUO et al. [36] showed a good correlation between the AA method and the PRAGMA-CF and CF-CT scoring methods. Unfortunately, manual execution of the AA method is very time consuming. For this reason, algorithms are in development that will allow sensitive analysis of airway dimensions for the diagnosis of airway wall thickening and bronchiectasis [37]. Adding quantitative information on lung CT scans to the routine radiology report is close at hand, and will be an important step forward for the diagnosis and monitoring of lung diseases. How quantitative CT markers can be used as outcomes in clinical CF studies has been described in a review and in comments as part of a special series in the Journal of Cystic Fibrosis [38][39][40]. Combining imaging and functional outcomes in clinical studies will be important to improve our understanding of the effectiveness of novel therapies, such as the very costly cystic fibrosis transmembrane conductance regulator (CFTR) correctors and potentiators.
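A minimal version of the AA measurement is easy to sketch: given paired outer-airway and artery diameters, compute the Aout/A ratio for each pair and summarise it per airway generation, flagging ratios above a chosen cut-off as possible bronchiectasis. The data structure, the 1.1 cut-off and the function below are illustrative assumptions, not the published algorithm.

```python
from collections import defaultdict
from statistics import median

# Each measured pair: (airway generation, outer airway diameter mm, artery diameter mm)
pairs = [(2, 2.6, 2.4), (3, 2.1, 2.0), (5, 1.9, 1.5), (5, 1.4, 1.5), (7, 1.2, 0.9)]

CUTOFF = 1.1  # illustrative Aout/A threshold for suspected bronchiectasis

by_generation = defaultdict(list)
for generation, a_out, artery in pairs:
    by_generation[generation].append(a_out / artery)

for generation in sorted(by_generation):
    ratios = by_generation[generation]
    flagged = sum(r > CUTOFF for r in ratios)
    print(f"generation {generation}: median Aout/A = {median(ratios):.2f}, "
          f"{flagged}/{len(ratios)} pairs above cut-off")
```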
Radio Ga Ga
MRI is sometimes described as making pictures with a radio. Will MRI replace CT at some point as the leading technique for making images of the lung? The first MRI prototype, a radiation-free alternative to chest CT, was installed in 1977. Although MRI has revolutionised medicine in many disease areas, its use for lung diseases has lagged behind. The reasons for this include the low proton density of lung tissue, the continuous motion of the lung, and the elevated air content, which results in low signal intensity and fast signal decay. In 1983 the first study was published that included chest MRIs in children [41]. Major innovations in chest MRI in paediatrics have taken place over the past decade, especially in relation to the hardware. Fast acquisition techniques, respiratory gating and new high-resolution techniques continue to close the image-quality gap between CT and MRI [42]. The resolution of conventional 1H-MRI for morphologic imaging is relatively poor, and is inferior to that of chest CT [43]. However, image resolution has improved considerably over the past decade thanks to the development of novel ultrashort echo time (UTE) (figure 2) and zero echo time (ZTE) sequences, which allow submillimetre high-resolution images to be obtained [44,45]. These UTE/ZTE sequences have been applied in quietly breathing neonates [46].
These sequences have been compared to CT in a group of infants with lung diseases, showing that the lung signal intensity of UTE correlates highly with lung density measured by CT [46]. This would allow the introduction of quantitative MRI parameters to define lung pathology, such as trapped air and emphysema, as is routinely done for asthma and COPD patients. Furthermore, to obtain functional information on lung perfusion an intravenous contrast agent can be applied [47], although the use of contrast agents for paediatric patients is still debated after evidence of gadolinium deposition in the body [48]. Alternative MRI techniques could be applied that allow simultaneous perfusion and ventilation imaging without using contrast [49]. Another interesting development to improve resolution is the use of inhaled hyperpolarised gases such as 129Xe or 3He, and other inhaled contrast agents to enhance the spatial resolution of lung airspaces [50]. The heterogeneity of ventilation can be assessed and hypoventilated areas can be easily identified. Although these techniques add complexity to the procedure and will increase the challenge of standardisation, they can be of great value as a research tool to improve our understanding of various lung diseases and to evaluate the efficacy of novel drugs. This technique has been applied in small cross-sectional studies in CF [51][52][53][54][55] and in asthma [56]. However, for clinical management in CF it might be sufficient to identify and quantify low-intensity regions related to hypoperfusion and/or trapped air using spirometer-controlled expiratory 1H-MRI [18] or other strategies such as normalised T1 and non-contrast perfusion techniques [57]. This needs to be further investigated in comparative studies.
Hyperpolarised 3He can also be used to assess alveolar size. This has been applied in BPD [58]. In a small follow-up study of 16 BPD patients, alveolar size was larger than in healthy term-born controls.
Another important advantage of chest MRI over chest CT is that it allows us to acquire simultaneous information on lung structure and function without using ionising radiation. This opens up new ways to study lung mechanics in asthmatic subjects [59].
Another exciting novel application is the potential of MRI to visualise lung inflammation and infection. In a relatively large single-centre study, conventional MRI sequences were used to assess paediatric pulmonary infection [60]. MRI had comparable sensitivity and specificity to CT for the diagnosis and monitoring of lung infection. Interesting methods have been developed that could facilitate diagnosing allergic bronchopulmonary aspergillosis (ABPA) in CF [61]. Furthermore, MRI could be particularly important for paediatric patients who cannot be exposed to ionising radiation, or in immunocompromised children who have repeated infections [62]. For lung inflammation detection and monitoring, diffusion-weighted MRI has been used in CF patients with pulmonary exacerbation [63]. Many of the MRI innovations described above were initially developed for CF and are now also applied for research in asthma [64], immunocompromised children [62], pulmonary infections [65], tuberculosis [66], BPD [46,58], congenital diaphragmatic hernia [67,68], pulmonary sarcoidosis [69], and even in the follow-up of a large birth cohort study [21]. Chest MRI is increasingly used for clinical management in the follow-up of CF lung disease [43] or for the assessment of central airway dynamics and dimensions [70]. The major challenge for MRI to make it into the daily clinic is standardisation of MRI protocols across centres and vendors. This will require a comparison of protocols using phantoms and selection and validation of comparable vendor-specific sequences.
Report, numbers? Ch-ch-ch-ch changes
For conventional MRI, quantification of morphological changes is more challenging than in CT, as its resolution is lower than that of CT. Using scoring techniques to evaluate 1H-MRI images of 57 patients, it was shown, as in previous studies, that 1H-MRI underestimates mild CF disease and overestimates severe CF disease compared to CT [43,70]. In two small cross-sectional studies in paediatric and adult CF patients using UTE-MRI, chest MRI scores correlated well with chest CT scores [44,61]. Scoring was also used to correlate MRI outcomes to LCI in a cross-sectional study that included 97 stable children with CF aged 0.2-21 years. Overall, correlations were weak but significant. In addition, 25 children had an MRI before and after therapy, which showed significant improvements in several MRI scores except for airway wall thickness scores [47]. 1H-MRI has also been used to characterise mucus, using differences in its intensity between T1- and T2-weighted images, as a method to identify patients at risk of developing ABPA [61].
These promising but discordant results warrant further longitudinal studies comparing the sensitivity for tracking structural CF lung disease using these improved MRI sequences with that for chest CT. In addition, it will be necessary to develop more sensitive semi-automated image analysis techniques to replace the currently used coarse scoring techniques. Only with a sufficiently standardised multi-vendor, multicentre and multi-sequence MRI protocol will we be able to promote the use of chest MRI on a larger scale.
For quantification of images using inhaled hyperpolarised noble gases as a contrast agent, ventilation defects are counted, or the volume of ventilation defects is computed and expressed as a fraction of total lung volume. Using hyperpolarised 3He-MRI, ALTES et al. [55] showed (in a small pilot study) a reduction in the fraction of poorly ventilated lung tissue in CF patients while on treatment with the CFTR potentiator ivacaftor. On stopping treatment after 48 weeks, the volume of poorly ventilated lung tissue increased again to baseline values, showing a pattern similar to baseline MRI. This study suggests that ivacaftor can improve the ventilation of even structurally abnormal regions of the lung, but that these effects disappear after therapy is stopped. In using hyperpolarised noble gases to estimate alveolar size, the apparent diffusion coefficient is used as the outcome measure. In BPD patients, apparent diffusion coefficient values were significantly greater than in age-matched healthy controls, suggesting that, in the former, the alveoli are enlarged [58].
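Quantification of hyperpolarised-gas ventilation images typically reduces to a ventilation defect percentage (VDP): the volume of poorly ventilated lung expressed as a fraction of total lung volume. A common simple estimator thresholds the ventilation signal within the lung mask; the sketch below implements that idea, with the 60%-of-mean threshold and the array names being assumptions rather than a published standard.

```python
import numpy as np

def ventilation_defect_percentage(signal: np.ndarray, lung_mask: np.ndarray,
                                  threshold_frac: float = 0.6) -> float:
    """VDP: % of lung voxels whose ventilation signal falls below a fraction
    of the mean signal inside the lung."""
    lung = signal[lung_mask > 0]
    defects = lung < threshold_frac * lung.mean()
    return 100.0 * defects.sum() / lung.size

rng = np.random.default_rng(3)
mask = np.ones((32, 64, 64), dtype=bool)
signal = rng.gamma(shape=4.0, scale=1.0, size=mask.shape)  # toy ventilation map
print(f"VDP: {ventilation_defect_percentage(signal, mask):.1f}%")
```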
Other quantitative functional parameters that can be extracted from MRI are central airway dimensions for the objective diagnosis of malacia [70][71][72]. Central airway diameters are measured at end-inspiration, during a forced expiration and at end-expiration to compute the change in cross-sectional area, as sketched below. Advantages of this MRI method include that it does not require general anaesthesia, unlike bronchoscopy; that the impact of a forced expiration and cough on airway diameter can be evaluated; and that it can be well standardised.
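The malacia read-out reduces to simple arithmetic on the measured lumen areas: percentage collapse is the relative reduction in cross-sectional area between end-inspiration and forced expiration. The sketch below computes it; the 50% cut-off used to flag possible malacia is one commonly quoted figure, not necessarily the criterion of the cited studies, and the example values are invented.

```python
def percent_collapse(csa_inspiration_mm2: float, csa_expiration_mm2: float) -> float:
    """Relative reduction in airway cross-sectional area (%)."""
    return 100.0 * (csa_inspiration_mm2 - csa_expiration_mm2) / csa_inspiration_mm2

collapse = percent_collapse(csa_inspiration_mm2=180.0, csa_expiration_mm2=72.0)
print(f"Collapse during forced expiration: {collapse:.0f}%")
if collapse > 50.0:  # illustrative threshold
    print("Degree of collapse consistent with possible malacia")
```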
Another great application of chest MRI is that it can be used for quantitative analysis of the function of the diaphragm, the most important respiratory muscle [20]. Image registration and lung surface extraction are used to quantify lung kinematics during breathing. This allows us to compute the independent contributions of the diaphragm and thoracic muscles to the respiratory cycle. This quantitative method was used in a pilot study in Pompe patients and control subjects, and showed minimal motion of the diaphragm in the presence of mostly thoracic musculature movement in Pompe patients.
Using diffusion-weighted imaging, inflammation hotspots are visible either because free water movement is restricted due to increased cellularity, or because of an increase in microperfusion related to inflammation. The hotspots were counted over the course of an intravenous antibiotic treatment in CF patients treated for a pulmonary exacerbation, and compared to a control group of stable CF patients [63].
A striking finding was that at the end of treatment, hotspots were still visible in some patients. Moreover, quantitative diffusion-weighted imaging-derived parameters, such as the apparent diffusion coefficient, showed good sensitivity and specificity to detect respiratory tract exacerbations in CF patients. The ability of diffusion-weighted MRI to track inflammatory changes has great potential in assessing the efficacy of currently used exacerbation treatments and in developing more effective novel therapies.
There are many innovative MRI techniques to obtain detailed information on lung ventilation and perfusion, and a wealth of outcome measures can be extracted from these techniques. However, the greatest challenge is to select the most robust outcome measures and to validate these outcome measures.
Importantly, such outcome measures should add novel information that impacts clinical decision making. Furthermore, the feasibility of implementing these techniques across centres using multiple MRI vendors needs to be established.
Livin' in the future
We have come a long way in our diagnostic capabilities since the invention of the stethoscope by Laennec. Innovations in lung imaging and image analysis will change the face of our diagnostic tool kit as we know it today. We are likely to continue to use the stethoscope, chest radiographs and lung function in daily practice, but their role and importance for patient care will once more change substantially in the coming decade, thanks to the capabilities of today's state-of-the-art chest CT and the rapidly developing and exciting new capabilities of chest MRI. Novel chest CT, MRI and image analysis techniques will improve our understanding of the pathophysiology and treatment of lung diseases in the paediatric population. Although, initially, many of the developments were primarily focused on CF, they are now being applied to other diseases such as BPD, bronchiectasis, interstitial lung diseases, pneumonia, PCD, sarcoidosis, tuberculosis and congenital lung abnormalities. Close and structured collaboration between the pulmonology and radiology communities is needed to facilitate further standardisation efforts, to develop reference values and automated image analysis of key outcome measures, and to validate those outcome measures. | 2018-04-03T05:17:34.554Z | 2018-02-28T00:00:00.000 | {
"year": 2018,
"sha1": "62d7885c759d765131847b057372bfa62ad14e29",
"oa_license": "CCBYNC",
"oa_url": "https://err.ersjournals.com/content/errev/27/147/170097.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "48db26bf7e37097d69e24ee9bda417330722c971",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119245061 | pes2o/s2orc | v3-fos-license | Study of Yang-Mills-Chern-Simons theory in presence of the Gribov horizon
The two-point gauge correlation function in Yang-Mills-Chern-Simons theory in three-dimensional Euclidean space is analysed by taking into account the non-perturbative effects of the Gribov horizon. In this way, we are able to describe the confinement and de-confinement regimes, which naturally depend on the topological mass and on the gauge coupling constant of the theory.
Introduction
Three-dimensional Yang-Mills theory is one of the most important models in which it is possible to analyse unsolved non-perturbative problems such as color confinement. The theory is simpler than QCD, but it is still highly non-trivial. It has local degrees of freedom and the coupling constant is dimensionful. Moreover, it can be viewed as an approximation to the high temperature phase of QCD with the mass gap serving as the magnetic mass.
A very interesting term which can be added to the three-dimensional Yang-Mills theory is the Chern-Simons term [1,2]: this term provides a mass for the gauge field which is of topological origin. Therefore, while pure 3d Yang-Mills is known to be a confining theory, the addition of the topological Chern-Simons term has the effect of generating a de-confined massive excitation. Said otherwise, the theory undergoes a change of regime, passing from a confined to a de-confined regime.
The purpose of this paper is that of discussing, within a quantum field theory framework, how this change of regime is driven by the presence of the Chern-Simons term. To that aim, we shall take into account the non-perturbative effects arising from the Gribov horizon [5]. This will enable us to encode non-perturbative effects into the two-point gluon correlation function, whose analytic structure can be employed to analyse how the theory moves from one regime to another when varying the coupling constant g and the Chern-Simons mass parameter M. We remind here that the presence of the Gribov phenomenon is a general feature of the quantization procedure of non-abelian gauge theories, the existence of Gribov copies being in fact a well known property of any local covariant renormalizable gauge fixing [8] (see also [9]). The presence of gauge copies gives rise to zero modes of the Faddeev-Popov operator which invalidate the usual Faddeev-Popov construction.
A successful method to deal with the issue of the Gribov copies is that of restricting the domain of integration in the functional integral to the so-called Gribov region Ω [5,6,7], which is the set of all transverse field configurations for which the Faddeev-Popov operator M^{ab} = -∂_μ D^{ab}_μ is strictly positive, namely Ω = {A^a_μ ; ∂_μ A^a_μ = 0, M^{ab} > 0}. The region Ω has been proven to be bounded in all directions in field space [10], its boundary ∂Ω being the first Gribov horizon. Moreover, all gauge orbits pass through Ω at least once [11], a property which strongly supports the restriction to Ω. Remarkably, the whole procedure results in a local and renormalizable action known as the Gribov-Zwanziger action [12,13]. More recently, a refinement of the Gribov-Zwanziger action has been worked out in [14,15] by taking into account the effects of dimension-two condensates. The resulting two-point gluon correlation function turns out to be in excellent agreement with the most recent lattice data [16], allowing for nontrivial analytic estimates of the first glueball states [17,18]. Let us also mention that the Refined Gribov-Zwanziger framework has been employed in the study of the Casimir energy [19], producing the correct sign for the Casimir force within the MIT bag model, clarifying a long-standing problem. Also, in a series of papers [20,21,22,23], the Gribov-Zwanziger setup has been employed in order to study, in the continuum, the transition between the confining and non-confining regimes when Higgs fields are present. Also in this case, the non-perturbative gluon two-point correlation function obtained by taking into account the effects of the Gribov horizon turns out to be a useful quantity in order to obtain information about the transition from the confining to the Higgs regime. As discussed in detail in [20,21,22,23], the gluon correlation function undergoes a continuous change from a confining expression of the Gribov type, characterized by the presence of unphysical complex-conjugate poles, to a Yukawa-type propagator with a real pole, indicating that the theory is in the Higgs regime. The emerging picture is in full agreement with the renowned Fradkin-Shenker work [24].
In the present paper, we shall implement the restriction to the Gribov region Ω in 3d Yang-Mills-Chern-Simons theory by working out the non-perturbative expression of the two-point gauge correlation function. Further, we shall vary the gauge coupling constant g and the Chern-Simons mass M and discuss how the poles of this correlation function get modified, thus obtaining information on how the theory passes from the confining to the non-confining regimes.
The paper is organized as follows: in Section 2, the gluon propagator and the Gribov gap equation for 3d Yang-Mills-Chern-Simons theory are obtained. In Section 3, the behaviour of the poles of the gauge propagator as functions of the two parameters (g, M) is discussed. In Section 4 we present our conclusions.
2 Gauge propagator for Yang-Mills-Chern-Simons action in presence of the Gribov horizon

We start by considering the Yang-Mills-Chern-Simons action in 3d Euclidean flat space quantized in the Landau gauge, namely

S = \int d^3x \left[ \frac{1}{4} F^a_{\mu\nu} F^a_{\mu\nu} - iM\,\varepsilon_{\mu\rho\nu}\left(\frac{1}{2} A^a_\mu \partial_\rho A^a_\nu + \frac{g}{3!}\, f^{abc} A^a_\mu A^b_\rho A^c_\nu\right) + b^a\, \partial_\mu A^a_\mu + \bar{c}^a\, \partial_\mu D^{ab}_\mu c^b \right] .   (1)

Here, M stands for the Chern-Simons mass, b^a is the Lagrange multiplier enforcing the Landau gauge, ∂_μ A^a_μ = 0, and (c̄^a, c^a) are the Faddeev-Popov ghosts. This theory is known as the topologically massive non-Abelian gauge theory, because of the massive gluon propagator [1,2], given by

\langle A^a_\mu(q)\, A^b_\nu(-q)\rangle = \delta^{ab}\,\frac{1}{q^2+M^2}\left[\left(\delta_{\mu\nu}-\frac{q_\mu q_\nu}{q^2}\right) - M\,\varepsilon_{\mu\nu\lambda}\,\frac{q_\lambda}{q^2}\right] .   (2)

As already mentioned in the Introduction, the action (1) is plagued by the existence of Gribov copies. We shall thus proceed by restricting the domain of integration in the functional integral to the Gribov region Ω. To that aim we shall follow the procedure outlined by Gribov [5,6,7]. It amounts to imposing the so-called no-pole condition for the connected two-point ghost function G^{ab}(k; A) = ⟨k|(-∂D^{ab}(A))^{-1}|k⟩, which is nothing but the inverse of the Faddeev-Popov operator -∂_μ D^{ab}_μ(A). One requires that G^{ab}(k; A) has no poles at finite non-vanishing values of k², so that it stays always positive. In that way one ensures that the Gribov horizon is not crossed, i.e. one remains inside Ω. The only allowed pole is at k² = 0, which has the meaning of approaching the boundary of the region Ω.
Following Gribov's procedure [5,6,7], for the connected two-point ghost function G^{ab}(k; A) at first order in the gauge fields, one finds

G^{ab}(k;A) = \delta^{ab}\,\frac{1}{k^2}\left(1+\sigma(k;A)\right) + O(A^4) .   (3)

One can then write

G^{ab}(k;A) \simeq \delta^{ab}\,\frac{1}{k^2}\,\frac{1}{1-\sigma(k;A)} ,   (4)

where the form factor σ(k; A) is given by

\sigma(k;A) = \frac{N}{N^2-1}\,\frac{g^2}{k^2}\, k_\mu k_\nu \int \frac{d^3q}{(2\pi)^3}\; \frac{A^a_\mu(q)\, A^a_\nu(-q)}{(k-q)^2} .   (5)

The quantity σ(k; A) turns out to be a decreasing function of the momentum k [5,6,7]. Thus, the no-pole condition is implemented by requiring that [5,6,7]

\sigma(0;A) \le 1 .   (6)
Making use of the transversality of ⟨A^a_μ(p) A^a_ν(-p)⟩ in the Landau gauge, one easily finds that

\sigma(0;A) = \frac{N}{N^2-1}\,\frac{g^2}{3} \int \frac{d^3q}{(2\pi)^3}\; \frac{A^a_\lambda(q)\, A^a_\lambda(-q)}{q^2} .   (7)

One can easily verify that this condition is already fulfilled without restriction to the Gribov horizon whenever

M \ge \frac{N g^2}{6\pi} ,   (8)

as follows from evaluating expression (7) with the propagator (2). This means that, in the weak coupling regime, the Gribov problem does not occur.
Although in the present paper we are mainly focusing on the Gribov copies related to infinitesimal gauge transformations, namely to copies related to zero modes of the Faddeev-Popov operator, it is worth reminding here that, as pointed out in [1,2], the Chern-Simons term is not left invariant by the so-called large gauge transformations, i.e. gauge transformations with non-vanishing winding number. Nevertheless, gauge invariance of the partition function is achieved by demanding that the Chern-Simons mass M obeys a quantization rule. More precisely, from [1,2] one has that 4πM/g² = n, where n is an integer, n = ±1, ±2, ···. Therefore, combining this quantization rule with expression (8), one learns that for values of the integer n such that n > 2N/3, the size of the Chern-Simons mass M still guarantees that the no-pole condition (6) is fulfilled. To some extent, this remark might give a first indication that the Gribov copies related to large gauge transformations are not expected to be relevant here.
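The consistency between the quantization rule and the no-pole condition can be made explicit by a short worked computation; the sketch below follows the standard steps, with only the transverse part of the propagator (2) contributing, ⟨A^a_λ(q) A^a_λ(-q)⟩ = (N²-1) · 2/(q²+M²):

\langle \sigma(0;A)\rangle = \frac{N g^2}{3}\int \frac{d^3q}{(2\pi)^3}\,\frac{2}{q^2\,(q^2+M^2)} = \frac{2Ng^2}{3}\,\frac{1}{2\pi^2}\int_0^\infty \frac{dq}{q^2+M^2} = \frac{N g^2}{6\pi M} .

Inserting the quantization rule M = n g²/(4π) then gives ⟨σ(0;A)⟩ = 2N/(3n), so that the no-pole condition (6) is satisfied precisely when n ≥ 2N/3, reproducing the bound quoted above.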
As done in [5,6,7], condition (6) is encoded in the Euclidean functional measure through the introduction of a step function θ(x). Therefore, for the partition function of the theory one gets

Z = \int DA\, Db\, Dc\, D\bar{c}\;\; \theta\big(1-\sigma(0;A)\big)\; e^{-S} .   (9)

The factor σ(0; A) can be lifted into the exponential by employing the following integral representation for the step function

\theta(x) = \int_{-i\infty+\epsilon}^{+i\infty+\epsilon} \frac{d\beta}{2\pi i\,\beta}\; e^{\beta x} ,   (10)

so that

Z = \int \frac{d\beta}{2\pi i\,\beta}\, \int DA\, Db\, Dc\, D\bar{c}\;\; e^{\beta(1-\sigma(0;A))}\; e^{-S} .   (11)

As, in the following, we are concerned with the gauge propagator, we shall focus on the partition function in the quadratic approximation, namely

Z_{quad} = \int \frac{d\beta}{2\pi i\,\beta}\, e^{\beta} \int DA\; \exp\left(-\frac{1}{2}\int \frac{d^3q}{(2\pi)^3}\; A^a_\mu(q)\, Q^{ab}_{\mu\nu}(q)\, A^b_\nu(-q)\right) ,   (12)

with

Q^{ab}_{\mu\nu}(q) = \delta^{ab}\left[\left(q^2+\frac{\gamma^4}{q^2}\right)\delta_{\mu\nu} + \left(\frac{1}{\alpha}-1\right) q_\mu q_\nu - M\,\varepsilon_{\mu\nu\lambda}\, q_\lambda\right] ,   (13)

where α is a gauge parameter to be set to zero after having evaluated the gauge propagator. The parameter γ stands for the Gribov mass parameter [5,6,7], collecting the β-dependent term generated by σ(0;A),

\gamma^4 \propto \frac{\beta\, g^2 N}{(N^2-1)\,V} ,   (14)

with V the spacetime volume. In order to evaluate the gauge propagator it suffices to invert the operator Q^{ab}_{μν}. Writing

(Q^{-1})^{ab}_{\mu\nu}(q) = \delta^{ab}\left[ F\,\delta_{\mu\nu} + B\, q_\mu q_\nu + C\,\varepsilon_{\mu\nu\lambda}\, q_\lambda \right] ,   (15)

the coefficients F, B and C are determined by requiring that

Q^{ab}_{\mu\sigma}(q)\,(Q^{-1})^{bc}_{\sigma\nu}(q) = \delta^{ac}\,\delta_{\mu\nu} ,   (16)

yielding, in the Landau limit α → 0, the following expression for the gauge propagator

\langle A^a_\mu(q)\, A^b_\nu(-q)\rangle = \delta^{ab}\left[ \frac{q^2\,(q^4+\gamma^4)}{(q^4+\gamma^4)^2 + M^2 q^6}\left(\delta_{\mu\nu}-\frac{q_\mu q_\nu}{q^2}\right) - \frac{M\, q^4}{(q^4+\gamma^4)^2 + M^2 q^6}\;\varepsilon_{\mu\nu\lambda}\, q_\lambda \right] .   (17)

It is worth noting that, removing the Gribov horizon, i.e. setting γ = 0 in eq. (17), we recover the massive propagator of Yang-Mills-Chern-Simons theory, eq. (2). On the other hand, when M = 0, the Gribov propagator for Yang-Mills theory is obtained, that is

\langle A^a_\mu(q)\, A^b_\nu(-q)\rangle = \delta^{ab}\; \frac{q^2}{q^4+\gamma^4}\left(\delta_{\mu\nu}-\frac{q_\mu q_\nu}{q^2}\right) .   (18)
The gap equation for the Gribov parameter γ
The Gribov parameter γ is not free, being determined in a self-consistent way through a suitable gap equation, which we shall derive below by following Gribov's setup, amounting to evaluate the partition function (12) at the saddle point [5,6,7]. To that end we write Z_quad as

Z_{quad} = \int \frac{d\beta}{2\pi i}\; e^{f(\beta)} ,   (19)

where, after integrating out the gauge fields, the quantity f(β) is given by

f(\beta) = \beta - \ln\beta - \frac{1}{2}\,\mathrm{Tr}\ln Q^{ab}_{\mu\nu} .   (20)

In the Gribov semiclassical approximation [5,6,7], expression (19) is evaluated at the saddle point,

Z_{quad} \simeq e^{f(\beta^*)} ,   (21)

where β* corresponds to the stationary point of f(β),

\left.\frac{\partial f(\beta)}{\partial\beta}\right|_{\beta=\beta^*} = 0 ,   (22)

which, upon evaluating Tr ln Q^{ab}_{μν}, gives the gap equation for the Gribov parameter γ, of the standard Gribov form

1 = c\, N g^2 \int \frac{d^3q}{(2\pi)^3}\; \frac{1}{q^4+\gamma^4} ,   (23)

with c a numerical factor fixed by the trace. It is worth noticing that the Chern-Simons mass parameter M does not enter the gap equation (23). This is an expected result, due to the topological nature of the Chern-Simons term. One recognizes in fact that the quantity f(β*) has the physical meaning of the vacuum energy of the system. However, as the Chern-Simons term does not couple to the metric, it follows that it does not contribute to the vacuum energy, which turns out to be independent from M. Of course, the same happens with the gap equation (23) for the parameter γ. Nevertheless, the presence of the Chern-Simons term leads to a deep change of the structure of the gauge propagator.
It is straightforward to integrate equation (23), obtaining γ as a function of the coupling constant g, i.e.

\gamma^4 = \lambda\, g^8 ,   (24)

with λ a dimensionless constant depending only on the number of colours N.
Therefore, the gauge propagator takes the form

\langle A^a_\mu(q)\, A^b_\nu(-q)\rangle = \delta^{ab}\left[ \frac{q^2\,(q^4+\lambda g^8)}{(q^4+\lambda g^8)^2 + M^2 q^6}\left(\delta_{\mu\nu}-\frac{q_\mu q_\nu}{q^2}\right) - \frac{M\, q^4}{(q^4+\lambda g^8)^2 + M^2 q^6}\;\varepsilon_{\mu\nu\lambda}\, q_\lambda \right] .   (25)

We notice that, before the implementation of the restriction to the Gribov region Ω, the theory displays a massive Yukawa-type mode, as is apparent from the expression of the propagator in eq. (2). The question that naturally arises is under which conditions this physical mode survives when the influence of the Gribov horizon is taken into account. This is the topic we shall address in the next section.
3 Analytic structure of the gauge propagator and the different regimes of the theory

The propagator in expression (25) depends on the coupling constant g and on the Chern-Simons mass M, and exhibits a rather complex pole structure. The poles of the propagator are functions of the parameters (g, M). As such, the study of their behavior when varying (g, M) is of great help in understanding the different regimes in which the theory may be found, as recently discussed in the case of Yang-Mills theories in the presence of Higgs fields [20,21,22,23] as well as of gauge theories at finite temperature [25]. The region in the plane (g, M) in which the poles of the gauge propagator are complex has a natural interpretation as a confining region, since complex poles cannot be associated to a physical excitation of the spectrum. Moreover, the region in which the poles are real and the corresponding residues are positive has the meaning of a deconfined region, in which a massive gauge particle is present in the spectrum.
In order to find the poles of the propagator (25) we have to determine the roots of the following polynomial,

P(q²) = (q⁴ + G)² + M² q⁶ , (26a)

which, viewed as a quartic in x = q², reads

P(x) = x⁴ + M² x³ + 2G x² + G² , (26b)

where G = λg⁸. Although in principle the roots (m₁², m₂², m₃², m₄²) can be evaluated in closed form, they turn out to be complicated expressions of the parameters (G, M). Rather, in order to provide a better and clearer analysis, we shall display three-dimensional plots of the poles as functions of (G, M). Let us start by splitting the propagator (25) into two parts, a parity-conserving and a parity-violating one. Using a partial fraction decomposition, the parity-violating part of the gluon propagator is written as a sum of simple poles located at the roots q² = mᵢ² of P(q²), with residues R₁, R₂, R₃, R₄. Analogously, the parity-conserving part (28) of the gluon propagator admits an expansion of the same form, with its own set of residues.
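As a numerical illustration of this pole structure, the sketch below computes the roots mᵢ² of the quartic (26b) and, for the simple-pole expansion of 1/P(x), the residues Rᵢ = 1/P′(mᵢ²). It assumes the polynomial reconstructed above; the residues of the actual propagator carry additional numerator factors that are omitted here, so this only illustrates the partial-fraction mechanics.

```python
import numpy as np

def poles_and_residues(G: float, M: float):
    """Roots m_i^2 of P(x) = x^4 + M^2 x^3 + 2G x^2 + G^2 (with x = q^2)
    and residues of 1/P(x) at each simple root, R_i = 1/P'(m_i^2)."""
    coeffs = [1.0, M**2, 2.0 * G, 0.0, G**2]   # descending powers of x
    roots = np.roots(coeffs)
    dP = np.polyder(coeffs)                    # coefficients of P'(x)
    return roots, 1.0 / np.polyval(dP, roots)

roots, residues = poles_and_residues(G=1.0, M=2.0)
for m2, R in zip(roots, residues):
    print(f"m^2 = {m2: .4f}   residue = {R: .4f}")
```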
Let us discuss these results in more detail. Due to the dependence of the polynomial P(q²) in eq. (26b) on the parameters M and G, the reality of the roots depends on these parameters as well; for certain values of (G, M) the roots become complex, turning the related modes unphysical.
Computing the discriminant of the quartic polynomial in (26b) allows one to delimit the regions of the (G, M) plane in which the roots are real from those in which they are complex. Similar results were also found in [20] and [21], where the Yang-Mills-Higgs theory was studied from the perspective of the Gribov problem. Despite the very different nature of that theory, a similar regime with two real masses, one with positive and one with negative residue, was also found there. Of course, we recall that, when the weak coupling condition (8) is fulfilled, we have the standard particle spectrum, as given by the gauge propagator (2). Moreover, it has been shown in [25] that the Gribov gap equation at finite temperature in pure Yang-Mills theory produces a phase diagram which is very close to the ones obtained in [20] and [21], in which the temperature plays the role of the Higgs vev.
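The same regime structure can be mapped numerically without the closed-form discriminant. The following sketch scans a small grid of (G, M) values, again assuming the quartic (26b) reconstructed above, and reports whether its roots are all real, all complex, or mixed; according to the discussion in the text, large M at small G should land in the regime with real poles, while large G at small M should give only complex poles.

```python
import numpy as np

def pole_type(G: float, M: float, tol: float = 1e-9) -> str:
    """Classify the roots of P(x) = x^4 + M^2 x^3 + 2G x^2 + G^2 (x = q^2)."""
    roots = np.roots([1.0, M**2, 2.0 * G, 0.0, G**2])
    real = np.abs(roots.imag) < tol
    if real.all():
        return "all real"
    return "all complex" if not real.any() else "mixed"

for G in (0.1, 1.0, 10.0):
    for M in (0.1, 1.0, 10.0):
        print(f"G = {G:5.1f}   M = {M:5.1f}   ->   {pole_type(G, M)}")
```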
The present analysis might be relevant for the study of QCD at high temperatures. Indeed, as is well known, in this case the theory can be described by an effective three-dimensional gauge theory in which the Chern-Simons term appears upon integrating out the fermions; see, for instance, [27,28], a detailed review being [3]. The coupling constant of this kind of induced Chern-Simons term is proportional to the number of fermion flavours N_f. Hence, the present results imply that when the (dimensionless combination of the Gribov parameter with the) Yang-Mills coupling is very small compared with the number of flavours, the theory is not in the confining phase, while when it is very large compared with N_f, the theory is in the confining phase. These conclusions are very satisfactory from the intuitive point of view, since it is well known that adding fermion flavours to the Yang-Mills action "decreases" the confining character of the theory (see, for instance, [26]).
Conclusion
In this paper the Gribov semi-classical approach to eliminating gauge copies has been applied to Yang-Mills-Chern-Simons theory in three dimensions. Unlike what happens in pure Yang-Mills theory, whose propagator is always confining at zero temperature within the Gribov semi-classical approach, the presence of the Chern-Simons topological term gives rise to a new regime in which a physical massive mode can propagate. In particular, the present analysis shows that there is a range of parameters, i.e. small Yang-Mills coupling constant and large values of the Chern-Simons coupling M, in which the theory is not in the confined phase, since real poles corresponding to physical excitations appear. On the other hand, when the Yang-Mills coupling is large and the Chern-Simons coupling is small, all the poles of the propagator are complex and the theory is in the confined regime. Therefore, even when the non-perturbative effects of the gauge copies are taken into account in three-dimensional Yang-Mills-Chern-Simons theory, there is still a region of the parameter space corresponding to the Deser-Jackiw-Templeton massive gauge theory regime. Only when the Yang-Mills coupling is large enough compared to the Chern-Simons one does the confined phase appear. The present analysis can be quite relevant for the study of QCD at high temperatures since, in this case, the theory can be described by an effective three-dimensional theory in which the Chern-Simons term appears upon integrating out the fermions.
Another issue worth investigating in the future is the possibility of implementing the restriction to the Gribov region to all orders, which would amount to constructing a local Gribov-Zwanziger type action, as done in the case of pure 3d Yang-Mills theory; see, for instance, ref. [29]. In principle, provided the starting partition function is gauge-invariant, the terms which implement the restriction to the first Gribov region depend essentially only on the form of the gauge fixing itself. In this sense, one could implement the restriction to the first Gribov region beyond one loop in a consistent way by adding to the starting action Zwanziger's horizon term in its local form [29]. This would lead to a kind of local Gribov-Zwanziger action for Yang-Mills-Chern-Simons theory. Furthermore, it has been established that both Chern-Simons and Yang-Mills-Chern-Simons theories are ultraviolet finite [30,31]. It would be interesting to check whether these finiteness properties still hold in the presence of the horizon term. Finally, the formation of suitable lower-dimensional dynamical condensates, in a way similar to the so-called Refined Gribov-Zwanziger action [29], is also worth investigating. | 2014-03-17T17:30:14.000Z | 2013-12-11T00:00:00.000 | {
"year": 2014,
"sha1": "d269282260376d5b0f1552eaed1d8af05e1db825",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1312.3308",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d269282260376d5b0f1552eaed1d8af05e1db825",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
238867827 | pes2o/s2orc | v3-fos-license | Students’ interest in science learning and measurement practices. Questions for research in the Moroccan school context
Current reforms of education systems seek to improve the learning environment and opt for a more active role of students in the construction of learning in order to increase their academic performance. In Morocco, several reforms have been carried out over the last two decades to address the shortcomings of the education system. Still, the results of national and international assessments remain low, particularly in science. These discouraging results are linked, on the one hand, to the physical learning environment offered by the Moroccan school and, on the other hand, to less tangible aspects related to the students themselves, such as their interest in learning science. Reforms and teaching practices often neglect these less tangible aspects. Although students' interest in school science is widely studied worldwide, further research in the Moroccan context is needed to better explain students' low achievement in science. The objective of this work is to review how interest has been defined and the good practices for measuring it in the context of science teaching, and to raise several questions that deserve examination in the Moroccan school context.
Introduction
Despite the efforts made by Morocco to overcome the failures of its education system, national (PNEA, 2016) and international (TIMSS, 2015; PISA, 2018) assessment reports still reveal the weakness of Moroccan students' achievement in science. The causes of these weaknesses may involve multiple factors. Some of them are essentially personal, such as students' self-efficacy and interest in learning science, while others are contextual or socio-cultural factors related to school climate, teachers, curricula, and the effect of peers and parents.
Interest is a crucial element in students' academic success. Students engage in the learning process and learn more when something interests them. Teachers often focus more on explaining science concepts and ensuring that students assimilate the knowledge, while paying little attention to students' interest in learning science. Currently, the idea of students' declining interest in science is widely accepted, and many studies have repeatedly indicated this [1,2,3,4]. These studies suggest a need for further research on interest in science in different educational and cultural contexts.
Drawing on the existing literature, the aim of this work is to describe the conceptualizations of interest and the different current methods of measuring this concept in relation to science, and to open up research perspectives and questions related to students' interest in science learning in the Moroccan school context.
Conceptualization of interest
Interest describes a specific relationship between a person and the object of interest, which can be science in general, a school subject (e.g., biology, physics), a specific field (e.g., the study of plants), a particular context (e.g., laboratories, museums), etc.
There is no universally accepted theoretical orientation toward interest. However, many definitions of this concept exist in the literature. Some researchers use related words as components of or alternatives to interest, including attention, concentration, curiosity, emotion, and motivation [1]. Interest is a multidimensional construct whose operational definition requires three general dimensions: cognitive characteristics (knowledge), emotional characteristics (feelings of pleasure), and value-related characteristics (value and importance) [5].
In general, many researchers distinguish between two levels of interest: situational interest and individual interest. "Situational interest refers to the focused attention and affective reaction triggered at the moment by environmental stimuli" (e.g., situation, task, context) [6]. At the same time, "personal (individual) interest refers to a person's relatively enduring predisposition to re-engage with a particular content over time" [6].
Although individual interest and situational interest have distinct characteristics, they can interact and influence each other. Students' situational interest can be stimulated by creating new and meaningful environments that attract their attention. This interest may be transient, or it may recur over time and develop into a personal interest. However, when students enter a situation with some pre-existing interest, this interest can be maintained by interventions designed to broaden their knowledge of the topic and reinforce its perceived value.
Models and theories of interest
Recent decades have seen a proliferation of research and a renaissance of theories and models that conceptualize interest differently. Some contemporary conceptualizations of interest focus on the development of interest that occurs through interactions with the environment (e.g., Hidi and Renninger, Krapp). Others focus on the state of interest as an emotion (e.g., Silvia, Ainley), on perceived value (e.g., Eccles, Wigfield, and colleagues), or on task/experience characteristics (e.g., Mayer). Still others focus on an individual's current abilities and their relationship to career interests (e.g., Holland) [7].
Theories of interest provide insights into how to develop students' interest in the context of science education.
Measuring interest
When studying this concept in the context of science education, it is essential to refer to a theoretical model that conceptualizes interest [8]. Generally, studies of interest in science can be classified into three types: studies that collect qualitative data, studies that collect quantitative data, and studies that use mixed methods to collect both qualitative and quantitative data.
The quality of a study's findings and conclusions depends on the method and measures used to collect the data [9]. Many studies have used written questionnaires to identify students' interest in science. These questionnaires allow researchers to measure interest in a topic directly and facilitate statistical processing of the data. Still, their disadvantage is that they are based on statements formulated by adults, which are assumed to be meaningful to students.
Likert-type scaled questionnaires are often borrowed or modified from existing research instruments or developed entirely by the researcher. In this type of questionnaire, students respond to statements by choosing a response on a multi-point scale to indicate their level of agreement (e.g., a five-point scale: strongly disagree, disagree, not sure, agree, and strongly agree). It is best to use students' scores on several items measuring interest in slightly different ways and combine them into an average score. The use of multidimensional scales is less abundant in the research. This type of scale involves combining items representing multiple aspects of interest into an average score. It requires a theoretical basis for selecting appropriate items and statistical treatment after data collection (e.g., validity, reliability, factor analysis) to confirm its robustness and ensure its effectiveness in measuring what it is designed to measure. Knekta and colleagues (2020) developed a three-dimensional scale to measure students' interest in biology. Their measurement tool included six items measuring positive feelings toward biology, five items covering the personal value of biology, and eight items regarding re-engagement with biology-related content.
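To make the scale-construction step concrete, here is a minimal Python sketch that combines several Likert items into an average interest score and computes Cronbach's alpha, a standard internal-consistency (reliability) check. The students, items, and scores are hypothetical, invented purely for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_students, n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items on the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical responses of six students to four interest items (1-5 scale).
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 4, 5, 5],
])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
print("Average interest score per student:", scores.mean(axis=1))
```

An alpha of roughly 0.7 or higher is conventionally taken to indicate acceptable internal consistency before items are averaged into a single score.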
Another measurement method is to use the questions that students themselves ask as the source of information about their interests; such questions, submitted to specific websites, serve as a tool to survey their scientific interests [10]. This method provides a large amount of data. Still, it is more difficult to control, because it neither ensures the representativeness of the sample nor allows one to determine whether the question asked reflects the submitter's own interest or the involvement of others (e.g., peers, parents).
Qualitative methods include focus groups, classroom observations, and interviews with students. Classroom observations can reduce the researcher's involvement and allow for a wide variety of conclusions about students' interests. These qualitative methods do not allow for generalization of results, but the data they offer can be explanatory or complementary when well combined with quantitative measures.
Questions for research in the Moroccan school context
Students' interest in science at school has been a concern for many researchers. It is related to academic success and career aspirations, and it enables students to put more effort into their learning.
Much research indicates that several variables can influence students' interest in learning science. This research addresses various aspects such as gender differences, international comparisons, and interventions that contribute to promoting interest. Hasni and Potvin (2015) suggested the need for additional research on interest in science and technology, especially in different educational and cultural contexts, while Osborne et al.'s (2003) study of attitudes toward science noted a need for research to identify aspects of science education that make school science appealing to students, with a focus on classroom teaching methods. In addition, Krapp and Prenzel (2011) noted the need to focus on specific science areas or disciplines (e.g., biology, physics) and to consider comparisons across the disciplines that make up the curriculum.
In this context, it is important to study Moroccan students' interest in science by answering questions such as:
- What is the level of interest of Moroccan students in science subjects?
- Is there a gender difference in interest in science subjects?
- How does Moroccan students' interest in science change as they progress through school?
- What are the main factors that influence Moroccan students' interest in learning science?
- How can Moroccan students' interest in learning science be promoted?
The answers to these questions will help clarify Moroccan students' relationship with science and further explain their performance on national and international science assessments.
Conclusion
Interest in learning science is a component that can predict the quality of learning and the level of current and future engagement of students. Yet reforms and teaching practices often ignore this aspect, which influences students' academic outcomes. Therefore, studies of students' interest in learning science in the Moroccan school context can shed light on our perception of Moroccan students' relationship with science and generate best practices likely to promote their interest in science. | 2021-08-27T17:04:32.363Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "dd396af5c2fe45b1cf729650777983d36ef75f39",
"oa_license": "CCBY",
"oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2021/30/shsconf_qqr2021_05006.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b37d603506fbc4eea89cea1d13e1e06822a1bc34",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Sociology"
]
} |
9136777 | pes2o/s2orc | v3-fos-license | Sex change in the subdioecious shrub Eurya japonica (Pentaphylacaceae)
Abstract Sex change affects the sex ratios of plant populations and may play an essential role in the evolutionary shift of sexual systems. Sex change can be a strategy for increasing fitness over the lifetime of a plant, and plant size, environmental factors, and growth rate may affect sex change. We described frequent, repeated sex changes following various patterns in a subdioecious Eurya japonica population over five successive years. Of the individuals, 27.5% changed their sex at least once, and these changes were unidirectional or bidirectional. The sex ratio (females/males/all hermaphrodite types) did not fluctuate over the 5 years. In our study plots, although the current sex ratio among the sexes appears to be stable, the change in sex ratio may be slowly progressing toward increasing females and decreasing males. Sex was more likely to change with higher growth rates and more exposure to light throughout the year. Among individuals that changed sex, those that were less exposed to light in the leafy season and had less diameter growth tended to shift from hermaphrodite to a single sex. Therefore, sex change in E. japonica seemed to be explained by a response to the internal physiological condition of an individual mediated by intrinsic and abiotic environmental factors.
Such frequent sex changes affect the sex ratio of a population. Indeed, year-by-year fluctuations in sex ratios caused by sex change have been observed within a population (Nanami et al., 2004; Yamashita & Abe, 2002). Therefore, sex change may play an essential role in the evolutionary shift of a sexual system (Delph & Wolf, 2005; Spigler & Ashman, 2011).
Plant size (Yamashita & Abe, 2002), environmental factors (Freeman, Harper, & Charnov, 1980; Ghiselin, 1969), and growth rate (Nanami et al., 2004) all affect sex change. In some species, small plants reproduce as males, while larger plants reproduce as females (Bierzychudek, 1982; Kinoshita, 1987; Schlessman, 1991; Yamashita & Abe, 2002). Sex expression is sometimes correlated with environmental factors, such as light intensity and habitat condition. More females than males of an epiphytic orchid occur under open canopies (Zimmerman, 1991), and males have been observed to change to hermaphrodites under the best growing conditions along a moisture gradient (Sakai & Weller, 1991). Moreover, Nanami et al. (2004) found that sex change toward female occurred in unhealthy (i.e., slow-growing) Acer trees after a decrease in precipitation. This observation suggests that sex change is regulated by the internal physiological condition of the plant itself (e.g., the availability of resources or health condition), which is affected by environmental circumstances (Matsui, 1995).
Eurya japonica Thunb. (Pentaphylacaceae) is an evergreen broadleaf subdioecious shrub. Tsuji and Sota (2013) reported that a single E. japonica individual changed from male to hermaphrodite bearing flowers of different sexes, while another shifted in the reverse direction; however, no study has examined the frequency and pattern of sex change or factors that affect sex change in E. japonica. In this study, we monitored sex expression in a subdioecious E. japonica population over five successive years, during which the growth rate and light condition of each individual were measured to (i) quantify the frequency and pattern of sex change, (ii) clarify the fluctuation in the sex ratio, and (iii) investigate the factors influencing the occurrence and pattern of sex change in E. japonica.
| Data analyses
After pooling all hermaphrodites (i.e., H, HF, HM, and HFM [hereafter, H-all]), we compared the sex ratio among the six sexual types between 2010 and 2014 using G (likelihood ratio) tests. Using the temporal changes in the observed sex ratio and the observed transition probability elements, the differences in frequencies of sex change among the sexual types (F, M, and H-all) were evaluated by calculating a transition probability matrix (3 × 3). Statistical significance was then assessed by conducting 5000 bootstrap runs for the matrices using the "markovchain" package; the steady state among the sexual types was calculated, and confidence intervals for the estimates were obtained.
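As a minimal sketch of this step, the Python code below builds a 3 × 3 transition probability matrix from hypothetical counts of year-to-year transitions among F, M, and H-all, and computes its steady state (stationary distribution). The counts are invented for illustration; the study itself used R's "markovchain" package together with bootstrap confidence intervals.

```python
import numpy as np

# Hypothetical counts of year-to-year transitions (rows = previous year,
# columns = subsequent year), ordered F, M, H-all.
counts = np.array([
    [180.0, 5.0, 15.0],    # F     -> F, M, H-all
    [4.0, 150.0, 20.0],    # M     -> F, M, H-all
    [30.0, 10.0, 200.0],   # H-all -> F, M, H-all
])

# Row-normalize the counts to obtain the transition probability matrix.
P = counts / counts.sum(axis=1, keepdims=True)

# Steady state: the left eigenvector of P for eigenvalue 1, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
stationary = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
stationary /= stationary.sum()

print("Transition matrix:\n", P.round(3))
print("Steady state (F, M, H-all):", stationary.round(3))
```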
To examine the factors affecting the occurrence of sex change, we analyzed the data using generalized linear mixed models (GLMMs) with plot as a random effect (Bolker et al., 2009). The standardized fixed effects were initial individual size (DBH), light environment in both the leafless (rPPFD-winter) and leafy (rPPFD-summer) seasons, and growth rate. Growth rate per year was measured as the absolute difference between the DBH of the current and previous years. The most appropriate model was selected using Akaike's information criterion (AIC) (Anderson, Burnham, & White, 1998) with backward stepwise selection. We also used Kruskal-Wallis tests to examine sexual differences in initial individual size, light environment, and growth rate during 2010-2014 between sex-changed and constant (non-sex-changed) individuals. Factors influencing the pattern of sex change were also examined by determining the frequency of each focal pattern of sex change using GLMMs, and the best model was selected based on the minimum AIC. Due to insufficient sample size, we analyzed only the patterns among females, males, and H-all for which there were more than 10 changes in sex. The fixed and random effects in these GLMMs were the same as in the analysis of the factors affecting the occurrence of sex change. In the analyses examining factors affecting the occurrence and patterns of sex change, data for the period 2011-2014 were used because no measurements of rPPFD-winter were available for 2010. All analyses were performed using R ver. 3.1.2 (R Development Core Team, 2014).
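For readers who want to reproduce the model-selection logic, the simplified Python sketch below fits a fixed-effects logistic regression to simulated data and compares AIC between a full model and one dropping DBH. It is a deliberate simplification: the random plot effect of the GLMM is omitted, and the variable names, effect sizes, and data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
# Hypothetical standardized predictors for n individuals.
df = pd.DataFrame({
    "dbh": rng.normal(size=n),           # initial individual size
    "rppfd_winter": rng.normal(size=n),  # light environment, leafless season
    "rppfd_summer": rng.normal(size=n),  # light environment, leafy season
    "growth": rng.normal(size=n),        # annual diameter growth rate
})
# Simulate occurrence of sex change driven by light and growth, as reported.
lin_pred = -1.0 + 0.6 * df.rppfd_summer + 0.4 * df.rppfd_winter + 0.5 * df.growth
df["changed"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin_pred)))

full = smf.logit("changed ~ dbh + rppfd_winter + rppfd_summer + growth", df).fit(disp=0)
reduced = smf.logit("changed ~ rppfd_winter + rppfd_summer + growth", df).fit(disp=0)
print(f"AIC full model: {full.aic:.1f}   AIC without DBH: {reduced.aic:.1f}")
```

Backward stepwise selection simply repeats this comparison, dropping at each step the term whose removal lowers AIC the most, until no removal improves the criterion.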
| Patterns of sex change and sex ratio
Of the 309 individuals examined in the period 2010-2014, 85 (27.5%) changed their sex at least once, and 224 (72.5%) never changed sex (Table 1). Several patterns of sex change were observed: 37 individuals (12%) changed sex only once, while others changed sex twice (9.7%), three times (5.2%), or every year over the 5-year period (0.6%). Sex change was either unidirectional (no reversal to the previous sex), in three patterns, or bidirectional (including at least one reversal to a previous sex), in five patterns (Table 1).
When we reviewed the transition matrix of sex expression between previous and subsequent years for 2010-2014 (1000 observations in total), we never observed the following sex changes: F to HM, F to HFM, H to F, H to M, or HFM to F, but we encountered 25 other patterns of sex change among the six sexual types (Table 2). The most frequent patterns of sex change were from H to HF and from HF to F. (In the table, bold letters indicate that sex did not change.)
| Factors that influenced the occurrence and pattern of sex change
Although individual size (DBH) (p = .36), light environment in the leafless season (rPPFD-winter) (p = .39), and growth rate (p = .38) did not differ between sex-changed and non-changed individuals in each sexual type, sex-changed H-all individuals had significantly higher rPPFD-summer than non-changed ones (p < .02; Figure 3). A significant sexual difference in individual size was also found (p < .001), with males being the largest and females the smallest (Figure 3).
The explanatory variables included in the model best explaining the occurrence of sex change were the light environments in both seasons (rPPFD-winter and rPPFD-summer) and growth rate (Table 3). Sex was more likely to change with higher growth rates and more exposure to light throughout the year. The factors that were frequently selected in the models best explaining the pattern of sex change among females, males, and H-all were initial individual size, rPPFD-summer, and growth rate (Table 3). Among individuals that changed sex, those that were less exposed to light in the leafy season and had less diameter growth tended to shift from hermaphrodite to a single sex (i.e., female or male); the smaller individuals changed to female, and the larger individuals changed to male. By contrast, individuals with greater diameter growth were likely to change from a single sex to hermaphrodite.
The light environments of both seasons were also selected in the best models (Table 3).
| Diverse patterns of sex change but stable sex ratio
We found frequent, repetitive sex changes in subdioecious E. japonica. Moreover, the sex changes were multidirectional among the six sexual types (25 patterns). This is the first quantitative report on the diverse patterns of sex change in E. japonica. Tsuji and Sota (2013) described a sex change from hermaphrodite to male, a shift that we never observed. This discrepancy between studies might be explained by the complexity of sex expression in E. japonica. Sex change from H-all to male was observed; therefore, if H-all can be interpreted as synonymous with "hermaphrodite," our result becomes congruent with the observation by Tsuji and Sota (2013). In our study population, 27.5% of the individuals of subdioecious E. japonica changed sex at least once, which is a higher frequency of sex change than observed for Bischofia javanica (3.7%, Yamashita & Abe, 2002), similar to that in some Acer rufinerve populations (11%-20.7%, Matsui, 1995; Ushimaru & Matsui, 2001), but lower than that in another A. rufinerve population (54%, Nanami et al., 2004) and in other species, such as Pinus densiflora (37%, Kang, 2007) and Panax trifolium (57%, Schlessman, 1991).
No fluctuation in the sex ratio was detected over the 5 years, although sex changed frequently. The stable sex ratio may result partly from repetitive, bidirectional sex changes in the same individuals (e.g., A→B→A) and partly from complementary changes in sex among individuals (e.g., A→B in one plant and B→A in another). However, comparison between the observed transition matrix and the 95% confidence interval of the estimated transition probability matrix suggests that females and males might be gradually increasing and decreasing, respectively, in this E. japonica population over a longer period. This hypothesis is also supported by the calculated steady state, in which females significantly exceeded 0.333, whereas males were significantly below 0.333. However, it is inconsistent with our previous finding that male individuals have an advantage in male fertility over hermaphrodites in hand-pollinated crosses (Wang, Matsushita, Tomaru, & Nakagawa, 2016). Considering the weakened reproductive success of females versus hermaphrodites under natural conditions in this E. japonica population (Wang et al., 2015), pollinator-mediated interactions and reproductive success through male and female functions may be related to the gradual change in the sex ratio of E. japonica.
| Sex change in relation to internal condition
Higher frequencies of sex change in E. japonica were related to a greater growth rate and more abundant illumination throughout the year. However, we found no differences in growth rate or light environment between sex-changed and non-changed individuals, except for the rPPFD-summer of H-all individuals. This suggests that sex change in E. japonica results from good internal condition mediated by the light environment. Matsui (1995) and Nanami et al. (2004) also suggested that plant health is coupled with sex change.
The selection of growth rate in each model best explaining the pattern of sex change indicates that the internal physiological condition of an individual E. japonica is also likely to affect the direction of sex change. Unhealthy conditions induce sex change toward the female gender in Acer trees (Matsui, 1995; Nanami et al., 2004). In E. japonica, poor internal condition (i.e., a dark light environment in the leafy season and reduced growth rate) was linked to sex change from hermaphrodite (H-all) to single-gender status (female or male; Table 3). The difference among sexes in the immediate resource costs of reproduction appears to influence the sex change (Schlessman, 1991). This study was the first step in exploring sex change and the factors that affect the occurrence and pattern of sex change in E. japonica.
Subdioecious E. japonica was found to have labile sex expression and diverse patterns of sex change. Internal condition was suggested to correlate with the occurrence and pattern of sex change. A constant sex ratio was observed over the 5 years, whereas the estimated transition probability matrix and steady state suggest increasing female and decreasing male individuals over a longer timescale. These findings imply that further studies of E. japonica will help to elucidate the importance of ecological factors in mediating the sex ratio and sexual system evolution. | 2018-04-03T00:05:53.766Z | 2017-02-01T00:00:00.000 | {
"year": 2017,
"sha1": "78115b80f441ff97f1c0cd8ceaac201d2a9bc005",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ece3.2745",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "78115b80f441ff97f1c0cd8ceaac201d2a9bc005",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
14559459 | pes2o/s2orc | v3-fos-license | Successful resection of a solitary metastatic liver tumor from prostate cancer 15 years after radical prostatectomy: a case report
Background A solitary metastatic liver tumor of prostate cancer is extremely rare because liver metastasis occurs as a part of systemic dissemination of prostate cancer. We herein report a successfully resected case of a solitary metastatic liver tumor from prostate cancer almost 15 years after radical prostatectomy. Case presentation A 70-year-old male who had undergone radical prostatectomy for prostate cancer 15 years previously presented to our hospital for treatment of a liver tumor. Serum prostate-specific antigen was elevated at 13.77 ng/ml. Abdominal computed tomography revealed a solitary tumor with a diameter of 54 mm in segment 4 of the liver. No metastatic lesions were found in other organs. The patient was given a diagnosis of a metastatic liver tumor from prostate cancer, and he underwent medial segmentectomy. Microscopically, the resected specimen was composed of eosinophilic tumor cells with oval nuclei and prominent nucleoli, which exhibited a cribriform pattern and a fused glands pattern with positive prostate-specific antigen and prostatic acid phosphatase staining; these findings were compatible with metastatic prostate cancer. Other than portal thrombosis that required anticoagulation, the patient made a satisfactory recovery and was discharged on postoperative day 15. Conclusion To the best of our knowledge, this is the first report describing successful resection of a solitary metastatic liver tumor from prostate cancer in the medical literature. In such a rare circumstance, hepatic resection for liver metastasis of prostate cancer seems justified.
Background
A solitary metastatic liver tumor of prostate cancer is extremely rare, as liver metastasis from prostate cancer occurs through the portal system from lymph node metastasis or carcinomatosis [1]. We herein report a successfully resected case of a solitary metastatic liver tumor from prostate cancer almost 15 years after radical prostatectomy.
Case presentation
A 70-year-old male who had undergone radical prostatectomy for prostate cancer 15 years and 1 month previously presented for treatment of a liver tumor. The patient had received adjuvant hormonal therapy for prostate cancer using goserelin acetate and bicalutamide. Eleven years after resection, local recurrence occurred, and the patient underwent 70 Gy of external radiation therapy; hormonal therapy was also switched at that time from bicalutamide to flutamide while goserelin acetate was continued. The local recurrence of prostate cancer was successfully treated and had a complete response. The patient had a past medical history of multiple arterial thromboses, including popliteal, femoral, and mesenteric artery thrombosis; he was maintained on 3.5 mg of warfarin per day. Laboratory evaluation revealed that serum prostate-specific antigen (PSA) was elevated at 13.77 ng/ml. Enhanced computed tomography (CT) revealed the presence of a solitary, low-density, hypovascular tumor with a diameter of 54 mm in segment 4 of the liver (Fig. 1). Magnetic resonance imaging demonstrated a liver tumor with high intensity on T1-weighted images (Fig. 2a) and low intensity with surrounding high intensity on T2-weighted images (Fig. 2b) in the same area. No metastatic lesions were found in other organs on positron emission tomography-CT. With the diagnosis of a solitary metastatic liver tumor from prostate cancer, the patient underwent medial segmentectomy of the liver. Macroscopic findings of the resected specimen revealed a solid whitish tumor with a maximum diameter of 55 mm and central hemorrhagic and necrotic changes (Fig. 3). Microscopically, the resected liver tumor was compatible with metastatic prostate cancer (Fig. 4). The patient developed portal vein thrombosis on postoperative day 6, which was successfully treated with anticoagulation. Otherwise, the patient made a satisfactory recovery and was discharged on postoperative day 15. Serum PSA decreased to 0.54 ng/ml after hepatic resection. Nine months after hepatic resection, serum PSA increased to 6.99 ng/ml, and enhanced CT at 1 year post-hepatic resection revealed a recurrent tumor in segment 5 of the liver (Fig. 5). The patient has received docetaxel chemotherapy for the recurrent liver metastasis of prostate cancer.
Conclusions
In 2013, prostate cancer was the most frequently diagnosed cancer in males (1.4 million) worldwide, while the incidence is still low in developing countries [2]. In 2008, approximately 14% of prostate cancer worldwide was diagnosed within the Asia-Pacific region, with three out of every four cases being diagnosed in Japan (32%), China (28%), or Australia (15%) [3]. The bone (90%), lung (46%), and liver (25%) are well-known and common metastatic sites of prostate cancer [1]. In general, liver metastasis occurs as a part of systemic dissemination of prostate cancer [4]. Therefore, solitary liver metastasis of prostate cancer is extremely rare. Batson et al. referred to the importance of the vertebral venous system (Batson's plexus) as a metastatic pathway of prostate cancer. The vertebral venous system, with its rich, valveless ramifications and connections, bypasses the portal system, offering a possible explanation for how a solitary metastasis of prostate cancer could occur. To the best of our knowledge, this is the first report describing successful resection of a solitary metastatic liver tumor from prostate cancer in the medical literature. Although the utility of hepatic resection for patients with liver metastases from colorectal cancer or endocrine tumors has been established, for patients with non-colorectal, non-endocrine liver metastases it remains unclear because of the limited number of patients in each primary tumor group [5]. Adam et al. reported 5-year overall survival rates after hepatic resection of liver metastases from urologic cancers of 66% in adrenal, 55% in testicular, and 38% in renal cancer, respectively; furthermore, these authors report that such patients may benefit from hepatic resection [5]. Herein, we also discuss portal vein thrombosis (PVT) after hepatectomy, which is a relatively rare complication. Kuboki et al. reported that the incidence rate of PVT after hepatectomy was 2.1% and that right-side hepatectomy, caudate lobectomy, splenectomy, and postoperative bile leakage were independent risk factors for PVT after hepatectomy [6]. Although this patient had no such risk factors for PVT after hepatectomy, he had a past medical history of unidentified arterial thromboses, which might have caused the PVT. Hepatic resection for a solitary metastatic liver tumor from prostate cancer is one possible therapeutic option, provided that the systemic metastatic work-up is negative. In such a rare circumstance, hepatic resection for liver metastasis of prostate cancer seems justified.

Fig. 4 Microscopically, the resected specimen was composed of eosinophilic tumor cells with oval nuclei and prominent nucleoli, which exhibited a cribriform and fused glands pattern by hematoxylin-eosin staining (a) and stained positive for prostate-specific antigen (b) as well as prostatic acid phosphatase (c) (×100).
Funding
There is no funding for this work.
| 2018-01-30T09:25:11.415Z | 2017-01-25T00:00:00.000 | {
"year": 2017,
"sha1": "7902baa42075bbd274ac0f786cf5b3d3c0401840",
"oa_license": "CCBY",
"oa_url": "https://surgicalcasereports.springeropen.com/track/pdf/10.1186/s40792-017-0292-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7902baa42075bbd274ac0f786cf5b3d3c0401840",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234996596 | pes2o/s2orc | v3-fos-license | DAILY PRACTICES AND PROCESSES OF TERRITORIALIZATION OF SETTLERS IN QUERÊNCIA DO NORTE, PARANÁ, BRAZIL
ABSTRACT Purpose: To understand the processes of territorialization in the daily life and work routine of settlers in Querência do Norte, Paraná, Brazil. Originality/value: This work contributes to the discussions on territory and territorialization processes (Haesbaert, 2011; Raffestin, 1993, 2008; Saquet, 2008) in the daily life (De Certeau, 2014) of workers marginalized by society in general. Design/methodology/approach: We developed this qualitative article from data collected through eight life history interviews with settlers, pre-settlers, and residents of Querência do Norte, in the State of Paraná (PR). After transcription, the interviews underwent narrative analysis. Findings: We identify that, through the practices of work, study, and claims, the settlers territorialize the space, and there they create their own rules and norms of coexistence. In their struggles for land, it is clear that their place of belonging is the field, justifying their insistence on the struggle for the right to land to cultivate. The workers' sense of belonging is represented by the struggle that unites them and places them as part of a larger movement. Placing men at the center of the construction of territoriality is accomplished through the daily struggle and work of the settlers and pre-settlers in Querência do Norte (PR).
INITIAL CONSIDERATIONS
According to Reis (2012), the distribution of land in Brazil is an issue permeated by popular conflicts and revolts, including Canudos, Contestado, the War of Formoso, the Peasant Leagues, and the MST (Landless Rural Workers' Movement). The relevance of social movements resides in their capacity to influence the discourses, procedures, and behavior of the State (Hollender, 2016). As Fernandes (2017) points out, the actions of social movements are a way of resisting the neglect of the State.
As Meszaros (2000) emphasizes, the fight for land has been a continual characteristic of Brazil's history. The MST, one of the most important social movements in Brazil, was born from the determination to fight against new forms of aggression against the poorest section of society, mainly farmers, and it has challenged the distribution of land, as well as the logic of Brazilian capitalist development (Meszaros, 2000). (The term "peasant", used by Meszaros (2000), refers to the historical-political content of the term, which, according to Wanderley (2014), denotes the entire history of struggles of the Brazilian peasantry; in this study, however, we use the term "family farmers", which, as Wanderley (2019) points out, can be used as an equivalent of "peasant".) The group was formally organized in 1984 (Straubhaar, 2015) and has been one of the most important and long-lasting movements for agrarian reform in the history of the world, and the main one in Latin America (Carter, 2009). However, the actions of the MST and the representativeness of social movements are still marginalized within the context of administration studies.
But why should we reflect on the subject of territorialization based on the everyday lives of these people? How will this discussion contribute to organizational studies? What are the possible connections between this subject and the object of proposed studies and Administration? Among the possibilities which can be addressed to justify these connections, there are some which deserve more attention. The first argument refers to the omission -or perhaps intentional forgetting -of controversial subjects, such as the issue of land disputes and territorialization processes by administration journals. As Barros and Carrieri (2015) argue, the hegemonic perspective is predominant in organizational studies, resulting in the marginalization of objects and theories outside this dominant context.
For decades, the dominant subjects of journals in this area have addressed - and still address - subjects that interest the mainstream, based on the perspective of the powerful in society. This reality, observed within organizations, is also manifested in the research field of Administration. The Anglo-Saxon domination of the field of Administration reflects the colonization process of knowledge and learning in organizational studies (Alcadipani, Khan, Gantman, & Nkomo, 2012; Ibarra-Colado, 2006). The diversity observed in the daily operations of organizations reveals a universe that is little known and has been little explored. This diversity can be studied based on the everyday experiences of people, the strategies and survival tactics adopted by these autonomous subjects, and the ordinary management practiced by them (Barros & Carrieri, 2015).
In this sense, the challenge that this research problem and the adopted theoretical references present is to reflect on, or perhaps understand, the nuances of these subjects, who do not occupy leadership or strategic positions in large organizations. As Carrieri, Perdigão, and Aguiar (2014) argue, managerialism has dominated the process of constructing Administration, legitimizing itself as a reference or a standard within a management model based on rigid and formal models of management. The challenge of this study is to reveal these invisible subjects, based on the perspective of De Certeau (2014), externalizing their voices and experiences and allowing them to reverberate in a field historically dominated by powerful subjects, after the historical suffocation, repression, and neglect of the voices of humble, ordinary, and common people.
Another aspect that legitimizes the discussion of land and everyday Administration is based on the position that administrative studies do not refer just to profit-oriented businesses but also to other types of organizations, such as social movements, which present the reality of other forms of management. Thus, we emphasize the ordinary management proposed by Carrieri et al. (2014, p. 698), which "avoids managerialist parameters", because it focuses on the everyday experiences of these ordinary people, which we will present through this study.
Methodologically, the theoretical-empirical reflection realized in this article makes it possible to transpose theories and theoretical assumptions from other areas of knowledge to organizational studies. Since its origins, Administration has been constructed based on contributions from other areas of knowledge (Rodrigues, 2019). As Barros and Carrieri (2015) argue, the dialogical use of theories from other areas of knowledge makes it possible for new knowledge to appear in the area of Administration. This contributes by making connections with current history in order to understand facts in their subtleties and details, which other methods of data collection, such as those based on instrumental logic, may not support. This study also opens possibilities for public administration and State action by presenting the reality of the life and work of those who work in settlements and pre-settlements of the agrarian reform, whose needs do not end with the constitution of a settlement. Unseen and unheard, these people, who make up the mass of rural settlers, campers, or those in the process of settling, are invisible when they are not summarily ignored in the planning of public policy.
The relevance of examining the MST in an administration study is also justified by the changes that the movement has won for the reality of Brazilian rural workers. The state of Paraná, which is essentially agricultural and is where the MST was born, is naturally not exempt from most of its demands. Querência do Norte, a municipality situated in the extreme Northwest of the State, is an example of how these state-level battles occur. Godoy and Silva (2008) emphasize the influence of the MST on the growth of the consumer market, underlining the changes that have occurred in the municipality. The municipality of Querência do Norte has 785 families in 10 settlements (National Institute of Colonization and Agrarian Reform - Incra, 2017). Since the arrival of the MST, there has been an increase in the level of economic activity in the municipality, as well as more dynamic commerce, given that the region's large landowners did not exploit their land and lived outside the municipality (Godoy & Silva, 2008). However, the battles for land in the municipality predate the arrival of the MST, dating from the founding of the municipality in 1950 and the arrival of rural workers and squatters (Gonçalves, 2004).
It was a scenario of uncertainty and difficulty for family farmers, their business partners, tenants, and those who occupied the municipality's land, with no guarantees in terms of working conditions or of the chance to own land, while the government's actions benefitted the municipality's large property owners. In 1985, the municipality's Rural Workers' Union began to focus on agrarian problems, with the objective of establishing settlements in Querência. Thus, in 1988, the Pontal do Tigre farm was singled out as a priority area for expropriation and agrarian reform, and it was subsequently occupied by 200 families from other MST camps (Gonçalves, 2004).
The MST's fight for land is a fight for territory in the sense of a physical space in which to settle its workers. However, for the purposes of this study, it is understood that, beyond the physical sense of these spaces, territory is the result of an actor's action "to appropriate a space concretely or abstractly" (Raffestin, 1993, p. 143). More than a geographically delimited space, the territory also characterizes the subject's space of action, territorialized not only in a legal and concrete form, by defining the legitimate owner of the space, but also through the appropriation of space in an abstract form. Based on the subjects' everyday experiences, we seek to observe the process of territorialization which accompanies the movement for space (De Certeau, 2014).
Within this context, the objective of this article is to understand the processes of territorialization in the everyday life and work of the agrarian reform settlers in the municipality of Querência do Norte (PR). In this way, we can broaden the perspective of Administration on the issue of work, disassociating it from the traditional perspective, which emphasizes the practices of formal organizations, and expressing it through the voices of workers who construct, in their new environment, new movements of space. We have developed this study based on the life stories of settlers of Pontal do Tigre, Sebastião da Maia, and Margarida Alves, as well as residents of the pre-settlement of the Água do Bugre farm and city residents who work directly with family farmers in the settlements.
This article is divided into six parts: after the initial considerations, we have the theoretical presentation of Michel de Certeau's work in terms of everyday experiences. The third part introduces theoretical discussions related to territory and territorialization, while the fourth presents the methodological procedures used in this study. Then we analyze life stories and their relationship with the theoretical aspects supported by this work, and, finally, we present our final considerations.
EVERYDAY EXPERIENCE ACCORDING TO DE CERTEAU
Interest in the everyday lives of common people and in everyday practices has intensified recently in the field of Brazilian organizational studies. In this respect, questioning typical hegemonic mainstream learning and thinking about new research possibilities based on the use of history - giving visibility to the stories of common people, ordinary businessmen, and common workers, as Barros and Carrieri (2015) do - have become renewed possibilities. As Ribeiro, Ipiranga, Oliveira, and Dias (2019, p. 591) argue, this interest carries with it "[...] the social and political sense of practices which are expressed subjectively in a patchwork which is full of contradictions", constructed from everyday movements derived, in turn, from activities exercised by social actors, which unfold into events that provoke changes in the course of things. The following recent studies in the field of organizational studies are worthy of notice: Carrieri, Saraiva, and Pimentel (2008) study hippy fairs in Minas Gerais, focusing on subversive actions of survival in the face of the process of institutionalization; Oliveira and Cavedon (2013) analyze everyday life in a circus based on ethnography; Teixeira, Saraiva, and Carrieri (2015) conduct a discussion of identity and the everyday life of maids; Ribeiro et al. (2019) articulate a discussion based on the feminist perspective and the practice of artisan resistance, discussing the issues of ordinary women; and Domingues, Fantinel, and Figueiredo (2019) use ethnography to examine the organizational space of an artisan fair in the State of Espírito Santo from the points of view of Henri Lefebvre and Michel de Certeau.
Within this context, one of the main references on everyday life comes from Michel de Certeau, whose works portray, as Kuus (2018) indicates, the contingent and situational nature of social practices. Studying everyday life, according to Courpasson (2017), is characterized by an attention to the casual and by the possibility of emancipation from the dominant rhythms, restrictions, and fatalities which rest in the common gestures of everyday life. This extraordinary dimension of ordinary routines (Machado da Silva & Leite, 2008) becomes obvious in the experience of urban occupations, in battles for housing subject to conflicts and disputes over territory (Santos, 2019), in which ordinary subjects adjust their common needs to emerging exceptions. However, the ordinary perspective of everyday life is essential for carrying on in exceptional situations (Machado da Silva & Leite, 2008). Repression, violence, and other coercive forms of intimidating ordinary subjects in their fight for land in the countryside, as well as in the city, represent these exceptions, which change the natural rhythm of the common individual's everyday life, requiring them to give up these social practices.
De Certeau focuses on the individual (Best & Hindmarsch, 2018), the common man, who apparently gives in to passivity, is treated as a second-class citizen, and is generally conceived of as an individual subject in social life (De Certeau, 2014). Looking at the actions of ordinary men in everyday life unveils the strategies of the powerful, who define the rules of the game of everyday life, as well as the possibilities of action that an ordinary man has within this context (De Certeau, 2014; Courpasson, 2017). Within these possibilities of action, the astute ability of the ordinary subject to adjust to opportunities that arise in everyday life, reconciling the ordinary and the extraordinary, can be visualized in urban occupations (Santos, 2019).
De Certeau (2014) emphasizes that a relationship exists -which is always social -which sets relational determinations in which each individual is the locus where a given inconsistent plurality acts, sometimes in a contradictory manner. Thus, more than analyzing the subject itself, the interest in everyday practices is focused on modus operandi, which refers to the everyday art practiced by these subjects. In making use of some social practices, the ordinary men carry with them millennial codes culturally imprinted in them, which are obscured in these subjects, hidden by a mask of rationality typical of the West. In the sense proposed by De Certeau, exhuming refers to revealing these subliminal codes built into the actions of these people, whom the author calls consumers.
Treating these subjects as consumers reveals the silent and subversive production, which Ortmann and Sydow (2017) refer to as almost invisible, representing new ways of using the rules, routines, and resources available to these subjects. Within this scenario, the game of everyday life occurs on a field of tension and conflict, in which the subjects play employing different forces, which can be understood based on the concepts of strategy and tactics. A strategy is the calculation of force relationships of a powerful individual, an owner who uses them to define the rules of the political, economic, or scientific game. Tactics, in turn, are the expedients employed by the ordinary man since he is not an owner and operates blow-by-blow in another's space in an astute, stealthy, and incremental manner. If, on the one hand, the strategy is a victory of the place over time, considering that this individual has no place, tactics are, on the other hand, the victory of time over place, of the individual over himself (De Certeau, 2014; Munro, 2017; Frers & Meier, 2017; Nielsen & Langstrup, 2018; Gangneux & Docherty, 2018).
The ordinary man sustains himself within the gaps left by society, using them as opportunities to be taken advantage of when it is convenient (De Certeau, 2014; Ortmann & Sydow, 2017). It is the "art of getting around things" (Telles, 2010, p. 25), which astute individuals use to adjust to everyday situations. In a stealthy way, they seek to take advantage of situations to create occasions like artists and artisans, who should be in places where no one expects them, being astute (Ribeiro et al., 2019). Tactics are ways to reveal the possibility of victories of the weak over the strong, who express themselves incrementally to take advantage of apparently adverse situations (Redshaw, 2017; Duarte & Brewer, 2019). If the direct confrontation with the powerful does not always seem to be the best alternative, it is not rejected directly, given the use of their abilities in social metamorphosis and in reinventing their possibilities in terms of artifacts as well as spaces as an alternative to well-trodden everyday paths (De Certeau, 2014). This astute tactical practice has to do with what De Certeau (2014) calls metis, a term originated by the Greeks which refers to this type of astute intelligence. It is in everyday experience that the tension exists between owners who define strategies and the ordinary men who only possess astute practices. The trajectory of resistance of an artisan (Ribeiro et al., 2019) reflects this practice of using these tactics and crafts as a way to maintain his or her existence in the face of the mechanisms of oppression. Seeking shelter, as demonstrated in the study of Santos (2019), is a field of battles and disputes in the urban arena, in which the stealthy practices of ordinary people combine with the need to combine the ordinary and extraordinary in order to survive.
It is within this context of possibilities that De Certeau (2014) offers an important discussion of place and space. The author calls "place" the order of distribution of elements in relationships of coexistence, in which, analogous to the laws of physics, two bodies cannot occupy the same place. In this sense, a physical logic prevails, given that the frontiers and order of spaces are defined as the limits of each subject, indicating stability, a configuration of positions at a certain instant. Under this logic, a place appears to be static. Space, in turn, has to do with dynamism, encompassing the intersection of possibilities admitted by the variables of time, velocity, and direction. Space, from this perspective, is movement, since it is, for De Certeau (2014), a practiced place. In the following section, we will discuss how space may be territorialized.
TERRITORY AND TERRITORIALIZATION
The concepts of territory can be placed into three basic groups: political, which deals with relationships of space and power; cultural, in which territory is the product of the symbolic appropriation or valorization of a group in relation to a living space; and economic, which focuses on the spatial dimension of economic relationships, in which territory is a source of resources (Haesbaert, 2011). Folmer and Meurer (2019) emphasize that the concept of territory designates spaces constructed through the social practices (relationships) that subjects establish and their dynamics. To understand the concept of territory, it is important to analyze its relationship with the concept of space, given that the two cannot be considered equivalents or synonyms (Raffestin, 2008; Folmer & Meurer, 2019).
In terms of the difference between territory and space, Picheth and Chagas (2018, p. 790) point out that "territory rests on space, but is a production that occurs through it". Territory is constructed by the subject, while space precedes territory: territory is generated from space through the actions of the subject. This brings into focus studies of territory through the subject's actions and moves away from an understanding of territory as mere material beyond those actions (Raffestin, 2008).
The construction of a territory by a subject occurs, as Raffestin (2008) points out, through the concrete or abstract appropriation of space, and in this way, the subject territorializes his or her space. Therefore, in the view of this author, territorialization can be understood as the creation of territories through a direct or indirect, objective or subjective appropriation by the subject. The concept of territoriality refers to "the multidimensionality of territorial 'living' experienced by members of a collective, and by societies in general", as pointed out by Raffestin (1993, p. 158). It is in this sense that Koch (2017) presents Raffestin's territoriality as a practice between actors and as essentially relational. Thus, territoriality is a form of relational behavior in which the nature of relationships is more important than the physical space in which they occur (Sewell & Taskin, 2015).
Defined as a group of relationships, territoriality originates in the tridimensional system of society-space-time (S-S-T), which refers to the relationship between society, space, and time in the construction of territorialities (Raffestin, 1993). Life is made up of relationships, and, for this reason, Raffestin (1993) defines territoriality as a group of relationships involving a subject who belongs to a collective, a relationship that possesses form and content. Exteriority, or a place, is an abstract space, an institutional, political, or cultural system. Time, in Raffestin's (1993) tridimensional system, represents the variations that the elements of society and space undergo over time.
Therefore, territoriality refers to social relationships "which historically produce each territory" (Saquet, 2008, p. 79), being, in this sense, production based on space (Saraiva, Carrieri, & Soares, 2014). As Fuini (2019) points out, territoriality is linked to the idea of a group's belonging to a territory. As presented by Saquet (2008), territorialization is constituted by various temporalities and multidimensional territorialities. "Territorialization is the result and condition of social and spatial processes and signifies historical and relational movement" (Saquet, 2008, p. 83). Raffestin (1993, p. 161) stresses that territoriality is made up of "mediated relationships, which are symmetric or asymmetric with exteriority", therefore talking of territoriality is talking about production, exchange, and consumption, and not a simple link to space.
The relational nature of territory is also addressed by Haesbaert (2011, p. 82), who deals with its definition "within a group of historical-social relationships" as well as the complex relationship between social processes and material space. Based on this understanding, we can understand territory as movement, fluidity, interconnection, or, in other words, temporality. The territorial production described by Raffestin (1993, 2008) is a process of territorialization or the construction and appropriation of territory, which generates, as Saquet (2008, p. 88) points out, identities and heterogeneities, which in turn, generate territories. Territorialization involves an actor (individual or collective), work, the disposition of the actor (a combination of energy and information), and material mediators (instruments, materials, knowledge). Raffestin (1993, 2008) emphasizes the following components in the process of territorialization: the realizable intentions of the actor's objectives, the relationship between the actor and the general environment, the organic and inorganic environments, the social environment, the general environment (organic, inorganic and social environments), the territory produced by the actor in the environment, and the group of relationships developed by the actor within the territory. These elements, which Raffestin (2008) presents, characterize a small-scale model used to explain transformations that occur during the process of territorialization. In this way, territorialization is the combination of "elements learned by actors in various systems which are at their disposition" (Raffestin, 2008, p. 30).
The concepts of territory and territorialization of Raffestin (1993, 2008), Haesbaert (2011), and Saquet (2008) are based on an understanding of the action of a subject within a space. They rest on the centrality of the subject, which is the focus of these authors' approach, seeking an understanding not only of the material aspect but also of the immaterial and symbolic aspects of territory as the locale of the actor's relationships, appropriations, creations, and inventions.
To understand how the actions of the participants in this study of the territorialization of space in settlements in Querência do Norte occur, in the following section, we will present the methodological strategies used to obtain and interpret the investigation data.
METHODOLOGICAL PROCEDURES
This study seeks to understand the processes of territorialization in the everyday life and work of agrarian reform settlers in the municipality of Querência do Norte (PR). To this end, we developed a descriptive study interpreting these territorialization processes through the everyday lives and life stories of the interviewed subjects.
The data are qualitative and were collected using the oral history technique, which takes three forms: oral life history, thematic oral history, and oral tradition. For this work, we opted for oral life history, which, according to Ichikawa and Santos (2006), allows greater freedom for the interviewed subjects, who can relate their personal experiences because space is given to narrate their stories in accordance with their experiences, which fits the objective of this study.
To develop this study, we conducted eight life-history interviews between July 2016 and July 2017 with residents of settlements and pre-settlements in the municipality of Querência do Norte. The interviewees are referred to by pseudonyms in order to maintain their anonymity. They are: Marta, who joined the MST at the age of 11 after her father lost his land due to bank debts; Lourdes, who entered the movement at the age of 17; Aparecida, who has lived 11 years in a pre-settlement; Célia, who entered the movement at 15 years of age and also lives in a pre-settlement; Ana, a cooperative veterinary technician who works directly with settlement farmers; João, 53 years of age, one of the leaders of the settlement; Marcos, 29 years of age and the son of João; and Mário, one of the historical leaders of the settlements in Querência do Norte.
After transcribing the interviews, we analyzed the life history narratives of the interviewees. In doing so, we followed Barros and Lopes (2014, p. 55), who argue that "the question that should guide the researchers is how to use these histories to advance the understanding of a reality". In this way, the narratives that make up these life histories should be treated by the researcher not merely as personal histories but as a way of understanding an unknown social object, situation, or universe.
Therefore, according to Barros and Lopes (2014), the most relevant part of the analysis is the analytical frame of reference used, which can guide the researcher through related issues, such as the person, work, militant choices, and engagements, mediated by the concepts and theories that sustain the study and dialogue with the narratives produced by the interviewees. In this way, we have related the narratives to theorization about territory and everyday life; in other words, we have interpreted the data in constant dialogue with the authors who provided theoretical support for this investigation.
Being part of the MST
The processes of territorialization, which occur in a space, represent a point of departure for the subjects who construct relationships with the space and other subjects in everyday life (Raffestin, 1993, 2008). Thus, the interviewed settlers and pre-settlers come from different territorialities, but in many cases, they possess the same objectives which tie them to the space where they dwell today. These life histories show the relationship between these subjects and camp life:

During your youth, what was your dream? To have a piece of land. My father just had 18 acres, he couldn't get any more, so later the children had to work as employees, and he later joined them and went to the city, and I resisted. Of the six children, I'm the only one who is in the fight, you know? So, I feel fulfilled, you know? (João).
It was my brother who entered in the beginning. We didn't know it. So, my brother entered... and from that point on, we began to live with it, you know? I'd go there and visit him in the camp, see how his life was there. We were adolescents at the time, we didn't think much, you know? We took it in stride, but over time we learned and understood. From Tibagi [a municipality in the central region of Paraná], he came here to Querência, in the Pontal, where he stayed in the camp for the first time. Then my father came, and we decided to come here too. It was at this time that I left. I went back and got married. It's now been 11 years that I'm here. I haven't left again. I've been in this area for eight years. We've had good times when everything was calm in the camp, but we could leave, work and many people worked outside. It was relatively calm. Then we thought it better for each one to find their lot, you know? To try to get along in life. Because up until then, we worked on the land. But it wasn't the same, you know? We farmed, but just planting things for us to eat, you know? So, we were there in the camp until we came here to plant [...] I myself worked in a dairy cooperative. So, we had this daily commitment, you know? Then, we said let's stay on the lots and it will be better, won't it? And we came, and it really was better (Célia).

In the previous excerpts, we can understand the beginning of the territorialization of Querência, hearing João, with his dream of having his own plot of land, and Célia, who got to know the movement over time. What calls our attention at this point is not just the rural history and trajectories, but the exploitation and use of work to, in fact, territorialize a new space.
Célia, in particular, mentions working on the land, but despite this, the sense of belonging, of having land in their name, was essential for things to get better. The territory of one's own piece of land functions in the imaginary and symbolic context. The future promise only takes shape through everyday practices, "that daily commitment". De Certeau, Giard, and Mayol (2013) mainly identify cooking and living as everyday practices that make it possible to understand the tasks and meanings which occur in the everyday lives of these subjects. Based on our interviews, we managed to identify, among the research participants, two types of practices capable of territorialization: working and studying. The excerpts below depict a few reports of the practice of studying:

So, our fight from the beginning was like this, it was from a perspective of having land, but having an education too. The fight to educate your children, you know? From this perspective, I think that Querência here, our idea is that all the parents want their children to study, something which they may not have succeeded in doing, getting their children an education. And we've succeeded [...] So, we left this situation of a precarious school for a better one, and the requests were for a, well, a school, a health post, telephone, electricity, you can see, we had nothing, you know? (João).
I had the great opportunity to study until the fourth grade [...] with the moving of the MST from one place to another... today we were in one space and tomorrow another, so I wasn't able to complete my education, you know? Then after we arrived here in Querência I resumed my education [...] there were various difficulties, you know? Then you think, for all these years that we've been sleeping in the damp, without a roof, trying to find food, a collective of people, you'd think that there would be some regret one day saying "our father had so many opportunities to live with an uncle or someone else, to work here or there". No. Never. Just thanking our father for having taken this position, for giving his family and raising his children with an objective, a goal (Marta).
In the excerpt referring to João, his demands go beyond land in the physical sense, seeking basic elements of survival and dignity, such as a better school, a health post, and other issues. We may also perceive in Marta's comments the constant process of moving from one space to another over a long cycle. Even so, in all these traveled spaces, the essence of their territory has remained the same. The difficulty in completing her education was just a cost of fighting. She expresses satisfaction with her father's decision to maintain a rural territory of customs and symbols, even though the physical space has changed. In these interviews, there also emerges a mixture of studying and working, a type of learning among the settlement members. We can see this in the excerpt below:

Since we did not have the resources to look for education, to specialize, ourselves, we sought out in various settlements cultural practices with medicinal plants. [...] because there's someone from the riverbank who grew up in the islands and has something that he's always used [...] And there's another from a different group whose family came from Germany. So, we gather all of this together. [...] It's been like that with ointments, syrups, 12 herbs, and so many other things that we have learned here from others (Marta).
These practices related by Marta, of learning and producing for the consumption of the settlers and later selling to others, are typical of the ordinary man. De Certeau (2014) mentions in his writings the use of individual experience as a tactic, a way of improvising and utilizing the materials and resources at hand to create ruptures in everyday life. Learning and teaching these practices generate contact and the sharing of everyday life with people from other physical territories, and also intertwine the territory of the settlers. These practices promote union and a sense of belonging to the same territory; whether camped or settled, the landless movement continues: [...] According to Lourdes, work also arises from difficulties, and it is through work that territorialization occurs. When some leaders were imprisoned, during a period of greater persecution of the movement, other members had to act to make sure everyday activities continued with some degree of normalcy. It was by providing services, by feeling the difficulties of youth and a suffering life, that Lourdes transformed this territory and came to feel that she belongs to the movement.
Personal relationships, but more specifically love and passion, are another point associated with a given territory. We can observe this in the excerpts below:

She registered in that region of Santa Terezinha, close to Iguaçu Falls because her father lived there. And then she waited. I was camping, and we began to talk, and she said, "I've already registered; I'm just not camping like you are". So... we met there. Later in '90, we married. In '93, our first daughter was born (João).
My father said: "Look, I'm leaving. Your mother and I." Then we said: "No, I said we're going to stay." He [my brother] and I decided to stay. "No, we're going to stay here." And he said: "No, I just wanted to hear your opinion because you're older, you couldn't come, but your mother and I will be all right." So, we stayed and later I met my companion there, you know? We dated and ended up getting married (Lourdes).
[...] he [her husband] came from Água da Prata. He just has a sister here in this region, you know? His sister got a lot in Água da Prata. And since he was left out, they came here, those who were left out in Água da Prata came to Água do Ouro. I came with my father, camped here, and this is where we met. Love that was born in the camp. Everything began here (Aparecida).
We can see from these excerpts how the relationships between these subjects are fundamentally a practice of territorialization. The birth of a daughter, dating, and marrying, sharing a symbolic life, a physical life, and to still be in this fight is what makes each practice territorialize wherever they go. It makes the land where they signed the papers, or an improvised camp, their places of belonging, where they are transformed, act, and, mainly, fight.
Actions that territorialize
It is through ruptures in their everyday lives that the members acquire their sense of belonging. It is in their difficulties, their ruptures, that the fight becomes, today, tomorrow, and thereafter, not just a dream and their main goal, but also one of their territorializing actions.
[...] so many things happened; it was very difficult there. The lack of water, the lack of prospects. People didn't adapt to the region. So, it was all a fight for us to demand to come to Querência do Norte, you know? (João).
The action of demanding reveals perhaps one of the first practices of territorialization. In the case narrated by João, he explains how the land provided by the government had many hills and was not very productive or appropriate for agriculture. Through their demands, the members managed to get a new piece of land, once again as the fruit of a conflict between those who rule the game (Incra and the government) and ordinary men (the members of the MST).
The Fight (hereafter written with a capital F) is almost immediately associated with demands: demonstrations, miles walked, and the power of the group's union gave these everyday practices a sense of belonging. They didn't just request new land but changes in the prerogatives established by the government. We can observe this below, when João talks about the rule that only those who are married receive land:

[...] before, Incra was prejudiced, I believe, because they awarded people lots based on their points. Who had more points? The families with more children. They had a lot of points. Those who were single didn't get a lot, [...] then we waged a historic fight, imagine '86, '87, '88, and we arrived here. When I arrived in '95, ... it worked out for me because I was married, but I had various friends who remain single to this day. [...] And then we said no! Being married can't be a rule for settling or not because it's a life option. I say this because a person spends ten years fighting to have space because he wants to be a farmer. Now he can't just because he is or isn't married? This led to a controversy, and in the beginning, they didn't agree. So, we said: "No! We're not going to give in on this. [...]. We were breaking a rule that in many areas, they simply exclude single people. This is why many made [...] (João).

The excerpt narrated by João shows two important things in terms of the theoretical question that we have established. The first is the question of territorialization practices of not just making demands and being successful in this, but, moreover, confronting the resistance of Incra, beating it at its own game, changing the rules by facing a controversy and even using the verb "to conquer". In addition, we see another movement opposed to the use of strategy (De Certeau, 2014): the tactic of arranging an improvised marriage to indirectly "get around" the rules established for conquering the objective of the Fight. It is interesting that the interviewee himself, in opposing this action, perceives that one shouldn't take shortcuts in waging the Fight. This reflection may be a consequence of another reported problem. In the excerpt below, we see another MST demand, referring to the title of the lot:

The lot arrived with both names on it, but you know what happens a lot? When there's a fight, in the view of some of these farmers, the owner is the one whose name appears first on the contract [...]. It's complicated to explain, but let's suppose that a couple separates. Incra doesn't have a department that takes care of dividing lots. The lot belongs to the family. It doesn't divide lots, and it doesn't have a legal department that deals with this [...] You've separated? There is no specific legislation that covers this. And there have been many cases of disease, of alcoholism. And the wife ends up leaving with the children, and the man says: "No. The lot is mine. You want to leave? Go", and she leaves. Then…it changed. I was directly involved in a case like this. "No, send a contract in her name so that the husband understands that she's the owner as well" [...] Then contracts began to have the name of the wife. Now it's more of a question of cultural understanding. Now he understands that with her name appearing first, it's as if she's the owner (Ana).
As related by Ana, we observe a conflict among the members of the MST, who share the same territory. The separation of couples, combined with Incra's neglect, led the group to make a new demand. In addition to being another practice that protects the movement, this also creates the need for "cultural understanding", in the words of the interviewee, or, in other words, the institution of rules for the belonging and good relations of all.
Understanding what the Fight is and where it can lead
Perhaps the main point that motivates many of the territorial practices engaged in by members of the MST is the use of the term "Fight". This word, used with insistence, reflects what we believe to be the cornerstone of the group, that which motivates them to wait for months and/or years for their own piece of land. Up to a certain point, this is correct. However, as the interviews progressed, we perceived that "the Fight" is not a well-defined moral code, but rather the set of actions and practices responsible for territorialization, functioning in a collective sense and, later, being transformed into something individual.
No. It's just as I said in the beginning. This task was given to me 13 years ago, let's say ten years ago. And I treat it as a task. Today, we can say that through the logic of space, I am the President of CEPAG (Ernesto Guevara Research and Training Center), but this doesn't change anything. I'm still the same militant that I was before. The same [...]. (Marta).
The previous excerpt demonstrates not just the need for the union of all these subjects with a common objective but also the force with which it establishes roots. Marta demonstrates that, even though her tasks have changed, she is still the same militant that she was before, that is, part of the territory of which the Fight is the cornerstone. Her practices transform the territory into a conquest of the Fight. The militance of the MST guides their common desires, and each everyday practice allows these individuals to assume this militance as its members, which is also revealed in their language. In the eyes of an outsider, how is it possible to differentiate a small family farmer from a member of the MST who already has a piece of land? What distinctions separate one from the other? The answer appears in two more perceptible forms: vocabulary and history.
We will begin with vocabulary. We noticed how rare it was for one of the interviewees to call us "comrade"; they always referred to us as "you". In one of the interviews, we asked whether "comrade" was used to refer to married members of the MST, as a synonym for husband/wife. The answer was that members of the movement call each other "comrade", an everyday act that symbolizes not just belonging to the Fight (on the part of the speaker) but also identifies who the members are (those referred to).
Even though language is treated in a subtle form, and speech is considered an everyday practice by De Certeau (2014), there is also a "historical" element in evidence. For Raffestin (1993), the existence of a process of territorialization presupposes the society-space-time triad, or, as Saquet (2008) adds, a historical aspect. The members of the MST spend a long time together, sharing their practices and experiencing their own divisions. Thus, as De Certeau (2014) mentions, everyday life in and of itself is not based on routine and tedium, but rather on the conflicts or divisions caused by the subjects who share everyday life.
A "comrade" recognizes the other because both share the camp's land. They share land in the present, which is the promise of land in the future. At this point, we perceived that everyday practices are also territorialization actions. An outsider cannot identify who is a member of the MST, but a member of the MST can identify who is outside their territory, outside the Fight.
We perceive this because of the history of violence suffered by the group. As demonstrated by the reports below, the members of the MST are constantly seen as powerless subjects, low in the social hierarchy, who often take on the role of thieves or lazy people, because they receive land to farm "for free": "But most of us are ordinary because we don't appear in the media" (Marta).
[...] we often stayed up all night. When they told us on a Monday that there would be a general meeting, what did we think? There goes Tuesday.
That's what we thought. [...] Even the children felt down because the girls couldn't see whether a police car was passing by, and they couldn't say, "Look, Mom, the police are going to our tent, to destroy our tent and kick us out" (Aparecida).
We came by bus with the women and children, and the men rode on the top of a truck. All the way from Ponta Grossa here... so we arrived here and they [wanted] to scare us saying "Querência is very complicated, the police are at the entrance... there are gunmen... 1,500 of them" (João).

We can observe the everyday practices creating ruptures and permitting the territorialization of space. The interviewees recognize their position of not determining the rules of the game (De Certeau, 2014), as Marta says, and that the members are ordinary people without power or ownership. Aparecida and Mário relate how sensitive the land situation is. At any moment, these interviewees themselves can be evicted, taken from their homes, and taken to an unknown territory. Violence, whether symbolic (being denied work due to being landless) or physical (being evicted from their homes), is an everyday condition of their lives.
This history of common fighting has made them strongly territorialize the space claimed in Querência do Norte. The relationships between the social actors, space, and time have constructed this territoriality. Being a life composed of relationships, territorialization has occurred through the collective nature of these relationships in which the subjects have been involved with space (abstract and/or concrete) and time (Raffestin, 1993).
FINAL CONSIDERATIONS
The Fight and the sharing of the symbols and customs that bind the members of the MST to rural life are still subjects that are rarely addressed in studies of Administration. This silence exposes the issue of access to land in Brazil as a subject that has still not been settled and, as pointed out by Fernandes (2017), it preceded the fight for agrarian reform. Within this scenario, the MST is still seen as an anti-State and anti-democratic movement, being presented in a negative manner by intellectuals and most of the Brazilian press (Carter, 2009;Straubhaar, 2015). With this, violence in the field has come to be presented as coming from members of the MST, disqualifying their position as victims of the excesses committed by large landowners.
The objective of this study is to understand the processes of territorialization in the everyday lives and work of agrarian reform settlers in Querência do Norte (PR). Through collected life history narratives, we have observed some of the aspects present in the everyday lives of these workers who participate in these processes of territorialization, such as: the Fight that they share, their sense of belonging to country life, and the relationships between the subjects.
Based on the analyzed narratives, we understand that the Fight of those interviewed is related to everyday practices and, as a result, to their processes of territorialization. The members of the MST are ordinary people in De Certeau's (2014) sense, and act step by step, practice by practice. In this sense, they experience their Fights every day, transforming their territory and making their current life conditions different from those of the past. This discussion is rarely presented in Administration, which silences the voices of ordinary subjects and rural workers who fight every day for the right to live and work in the field while suffering a wide variety of forms of violence.
The subjects interviewed demonstrate a strong sentiment of belonging to their movement, with the word Fight being the most notable in their interviews. The Fight is what unites them. The fact that they are part of a larger movement seems to be what maintains the group's identity and makes it more cohesive. Politically, it is an active movement, and practices such as work, study, and demands promote a greater sense of belonging among the people who live the same historical reality of the Fight. The results of this study show that territorialization is accomplished through everyday practices, and that man is central to its construction. As Raffestin (1993, 2008) shows, the work and disposition of these individuals, along with their material and symbolic mediators, are important in this process of transforming space into territory. Time, the other element of territorialization, represents variations that occur in society along a longitudinal scale. In this manner, the history shared by the members of the MST interviewed for this investigation has proved to be an important factor in their cohesion, sense of belonging, and shared everyday practices and, as a result, in their process of territorialization and territoriality. It remains to be discovered which practices, places, and historical facts will make sense (or not) in the processes of territorialization for future generations of settler families. This study has already provided insights into this issue, but it is a subject that will be pertinent to a future article. | 2021-05-22T00:03:08.853Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "39001ff9c6aca90a2a31d9d899fc0f6eb867b640",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/j/ram/a/44ZfY3LjSJBQwQ5dxFYgJ9F/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "bada61c4238ce62f454685321016a308ec2d11a5",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": []
} |
236884036 | pes2o/s2orc | v3-fos-license | CCD 2: design constructs for protein expression, the easy way
CCD 2 is a software tool that aggregates sequence information for protein sequences (conservation, structure prediction, domain and disorder detection), enabling informed choices for expression-construct design, the single-click generation of PCR primers for cloning and easy data tracking.
Introduction
Proteins, especially from eukaryotes, are modular machines comprising multiple domains, often connected by flexible regions. Most structural biology projects require the generation of multiple truncation constructs in order to explore recombinant expression, solubility, crystallizability or the functional properties of a target protein or macromolecular complex. Generation of a protein construct has two phases. Firstly, the constructs must be designed based on features of the sequence of the protein of interest. The aim here is to find suitable cutting points that are most likely to preserve protein folding and solubility while retaining the desired functional properties. Secondly, once the truncation constructs are known, amplification primers must be designed to amplify the relevant DNA sequence, which will then be cloned into a suitable recombinant expression vector. Although all of the information necessary for protein-construct design is available online, aggregating it and mapping it onto the protein of interest is rather tedious. Furthermore, truncation points are decided on the protein sequence, but primer design requires working with the DNA sequence. Mapping protein residues to the DNA sequence and designing primers with suitable chemical properties and appropriate cloning adaptors is trivial, but is error-prone and time-consuming. ProteinCCD (Crystallization Construct Designer; Mooij et al., 2009), a Java, browser-based tool that we previously designed, aggregated many sequence-analysis tools in a single interface, allowing the user to generate PCR primers automatically starting from the protein sequence. However, technological changes have rendered ProteinCCD inoperable in modern browsers and obsolete.
CCD 2 (Crystallization Construct Designer 2) is the successor to ProteinCCD, using modern technology, but most importantly offering a largely expanded set of functionalities, features and tools.
Architecture
CCD 2 comprises two parts: a user-facing graphical user interface (GUI) and a server-side backend that is responsible for data gathering and manipulation. The GUI (whose functional core is also used for LAHMA; https://lahma.rhpc.nki.nl; van Beusekom et al., 2021) consists of an interactive web page written in JavaScript/jQuery and styled with the Bootstrap CSS/HTML libraries. Such an arrangement allows easy extensibility and compatibility with all modern browsers. The backend is written in Python 3.6 and uses Flask (Ronacher, 2010) to expose a RESTful API to the frontend. To improve network performance, requests to external servers are asynchronously parallelized by both the frontend (AJAX) and the backend (AIOHTTP).
Internally, CCD 2 implements a pipeline that is summarized in Fig. 1. To improve parallelization and maintainability, different tasks are fulfilled by different modules of the backend (coloured boxes in Fig. 1), each accessible through a separate REST call.
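As a concrete illustration of this architecture, the sketch below shows how a Flask endpoint can fan out requests to several external services in parallel with aiohttp. The endpoint path, the service URLs and the use of asyncio.run (Python >= 3.7) are our own assumptions for the example, not extracts from the CCD 2 codebase.

# Minimal sketch of a Flask REST endpoint that fans out external
# requests in parallel with aiohttp (illustrative, not CCD 2's code).
import asyncio
import aiohttp
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical external services queried for a protein sequence.
SERVICES = {
    "disorder": "https://example.org/disorder",
    "secondary_structure": "https://example.org/sspred",
}

async def fetch(session, name, url, sequence):
    # POST the sequence to one service and return its (name, result) pair.
    async with session.post(url, data={"seq": sequence}) as resp:
        return name, await resp.text()

async def gather_annotations(sequence):
    # Fire all requests concurrently and wait for every response.
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(session, name, url, sequence)
                 for name, url in SERVICES.items()]
        return dict(await asyncio.gather(*tasks))

@app.route("/api/annotate", methods=["POST"])
def annotate():
    sequence = request.form["sequence"]
    # Run the async fan-out from the synchronous Flask view.
    results = asyncio.run(gather_annotations(sequence))
    return jsonify(results)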
2.2. Web-server implementation, installation and code availability

CCD 2 is accessible at the URL https://ccd.rhpc.nki.nl, hosted by the Netherlands Cancer Institute Research High-Performance Computing Facility.
CCD 2 can also be run locally on a Linux-based machine (tested on Ubuntu version 20.04). The source code of CCD 2 is available at https://github.com/ProteinCCD2/ProteinCCD2 and is free for noncommercial use (licensing terms are available at the code repository site). Some external tools are covered by different licensing arrangements and should be obtained by the user in accordance with the terms of the respective licences.

Figure 2 (caption): CCD 2 can automatically match UniProt protein isoforms with their encoding DNA. An annotated screenshot of the user interface of CCD 2 is depicted, showing the data retrieved for UniProt entry Q9UJ41 (human RABGEF5). The three existing isoforms of Q9UJ41 are aligned to show their differences. For each isoform, the length and matching coding DNA sequences are shown. DNA cross-references link to the entry in the respective database. Choosing an isoform (by clicking on its radio button) will automatically paste the coding DNA into the DNA sequence window, allowing CCD 2 to use it for primer design. The user can also paste their own DNA sequence, if necessary. For ease of display, some white space in this figure has been trimmed compared with the normal CCD 2 display.
Installation of CCD 2 is straightforward and is explained in the README.MD file provided with the distribution. Python environment consistency is maintained using Anaconda virtual environments.
Results
In this section, we will describe the functions of CCD 2 following the natural order of user interaction schematized in Fig. 1. We will also provide some general tips about protein-construct design.
In our experience, a successful protein construct fulfils three related requirements: (i) it is recombinantly expressed (ideally at high levels), (ii) it is (highly) soluble and (iii) it is conformationally stable or at least constrained. Requirements (i) and (ii) matter for recombinant expression and for biochemical, biophysical and many other functional assays, whereas (iii) is a requirement for a high-resolution structure by X-ray crystallography or by single-particle cryoEM. CCD 2 supports efficient recombinant expression by providing a quick and easy way to clone the user's construct with different tags and in different hosts (see Section 3.4) and by facilitating the related bookkeeping (see Section 3.5). The solubility requirement (ii) is met when proteins are correctly folded and do not expose an excessive hydrophobic surface to the solvent. Owing to the modular nature of proteins, this is true when a truncation cuts between, but not within, protein domains and structural elements. For structural biology and requirement (iii), one would also prune unstructured regions (i.e. regions that are not part of a folded domain) to limit the conformational freedom of the construct. If one is interested in intrinsically disordered proteins, so that expressing disordered regions is the target, the unstructured regions would be cloned instead. Either way, protein-construct design boils down to identifying domains and disorder in proteins. A main goal of CCD 2 is to collate and display at a glance all of the information useful for domain identification.
Identifying the DNA sequence of the protein of interest
The first step required for construct design is to retrieve and analyse the sequence of the protein of interest (POI). However, since the final objective is to generate cloning primers, CCD 2 needs to start by knowing the DNA sequence that codes for the POI (Fig. 2). Two options are available. The user can paste their own DNA sequence into the GUI and start the workflow from there. This is the only option available in the case where the POI is coded by an ORF that is non-natural (i.e. codon-optimized) or by an ORF that is not present in the UniProt database. However, if the POI is coded by a natural sequence whose translation has been deposited in the UniProt database, CCD 2 can query UniProt using a user-provided identifier (for example Q9UJ41) or mnemonic accession code (for example RABX5_HUMAN). From the UniProt entry, CCD 2 can automatically determine which isoforms of the POI are reported and match them to appropriate DNA sequences (open reading frames; ORFs) by querying the cross-referenced nucleotide databases. UniProt protein sequences are determined by consensus and curation (https://www.uniprot.org/help/canonical_nucleotide), meaning that there is no one-to-one match between DNA primary database accessions and protein isoforms. CCD 2 simply gathers all the cross-referenced DNA sequences, translates them and matches them to isoforms at the protein level. No attempt is made to compare the raw DNA sequences for silent single-nucleotide polymorphisms, because these are rare and extremely unlikely to affect the eventually generated primers. Imperfect protein sequence matches of up to three single amino-acid substitutions are shown to the user if no perfect match can be found for an isoform, along with a detailed notice about the sequence differences. For bacterial proteins, CCD 2 can parse multicistronic genes and genomic sequences (where the entry does not exceed 1 Mb in download size).
Once isoform matching is complete, the user is prompted to choose which isoform they wish to use for downstream analysis and primer generation. Cross-references to the primary DNA databases are provided for each isoform (Fig. 2). For easier visual reference, an alignment of the different isoforms is also provided. Inspection of the differences between isoforms can suggest viable truncation positions and hint at domains that might be swapped in or out among isoforms.
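The protein-level matching logic described above can be sketched as follows; this assumes Biopython, and the function names and toy ORF are illustrative, not CCD 2 internals.

# Sketch of isoform/ORF matching at the protein level (assumes Biopython;
# names are illustrative, not CCD 2 internals).
from Bio.Seq import Seq

MAX_MISMATCHES = 3  # imperfect matches of up to three substitutions are reported

def count_substitutions(a, b):
    # Equal-length comparison; a length difference means no match.
    if len(a) != len(b):
        return None
    return sum(x != y for x, y in zip(a, b))

def match_orf_to_isoform(orf_dna, isoform_protein):
    """Translate a cross-referenced ORF and compare it with an isoform."""
    translated = str(Seq(orf_dna).translate(to_stop=True))
    mismatches = count_substitutions(translated, isoform_protein)
    if mismatches is None:
        return None                      # lengths differ: not this isoform
    if mismatches == 0:
        return "perfect match"
    if mismatches <= MAX_MISMATCHES:
        return "imperfect match (%d substitutions)" % mismatches
    return None

# Example: a toy ORF coding for "MK" plus a stop codon.
print(match_orf_to_isoform("ATGAAATAA", "MK"))   # -> perfect match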
Creating and visualizing a report on sequence conservation
Domains are functionally and structurally constrained, and are thus evolutionarily conserved. Disordered and linker sequences are under looser evolutionary pressure and mutate more frequently, unless they are of specific functional importance. In general, in a multiple sequence alignment, well ordered domains will appear as contiguous stretches of conserved residues, whereas linker and disordered regions will show higher divergence. CCD 2 attempts to find and display a multiple sequence alignment of the POI using three approaches. If a UniProt ID is provided by the user, and this ID can be mapped to a pre-calculated Ensembl alignment (Yates et al., 2020), this alignment is retrieved. If no UniProt ID is available or provided, CCD 2 performs a BLAST search against a local copy of the SwissProt database and looks for up to five hits that have an identity of >95% with the POI. CCD 2 then queries Ensembl and looks for pre-calculated alignments for any of these hits. If such a hit exists, it is considered a homolog to the POI and the corresponding Ensembl alignment is retrieved. If this approach also fails, CCD 2 displays results of the BLAST search that (i) have an E-value of <0.001 and (ii) represent sequence coverage of the POI of at least 75%. Requiring a high sequence coverage is likely to find and display true orthologues of the POI (rather than showing sequences of loosely related proteins that simply share a single domain with the POI). Then, the POI isoform chosen by the user is aligned with the homolog sequences using MUSCLE (Edgar, 2004).

The Ensembl or on-the-fly constructed multiple sequence alignment is then displayed in the GUI (Fig. 3, top) and coloured by conservation using the ClustalX scheme (Larkin et al., 2007). By default, only sequences belonging to specific pre-chosen species are displayed. These species are chosen using the following two criteria: (i) they have high-quality genome sequences and (ii) they sample all main phylogenetic classes in order to provide a wide view of the evolutionary diversity of the POI (a full list is available at https://ccd.rhpc.nki.nl/species). The user has the option of showing the entire alignment if they wish. Furthermore, if the alignment comes from Ensembl, the user has the option of selecting which types of homologs are displayed (one-to-one, one-to-many, many-to-many homologs and paralogs, as defined by Ensembl; http://www.ensembl.org/info/genome/compara/homology_method.html).
For user convenience, whenever possible, Ensembl and UniProt identifiers are renamed to indicate their organism of origin and gene name more clearly; for example ENSMUSG00000006715 is renamed to M.musculus_gmnn_ (H3BLK4_MOUSE), indicating that this is the mouse product of the gmnn gene, whose UniProt accession is H3BLK4_MOUSE. Alignments can be downloaded in FASTA format for bookkeeping and/or further analysis in external tools.
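The three-step fallback described in this section can be summarized in code. In the sketch below, the helpers standing in for Ensembl, BLAST and MUSCLE are stubs, since the real services are external; the thresholds follow the text above.

# Runnable sketch of the alignment-retrieval cascade described above.
from dataclasses import dataclass

@dataclass
class Hit:
    uniprot_id: str
    sequence: str
    identity: float
    evalue: float
    query_coverage: float

def ensembl_alignment(uniprot_id):
    return None          # stub: would query Ensembl's REST API

def blast_swissprot(sequence):
    return []            # stub: would run a local BLAST search

def run_muscle(sequences):
    return sequences     # stub: would invoke MUSCLE on the sequences

def get_alignment(sequence, uniprot_id=None):
    # 1) Pre-calculated Ensembl alignment for the given UniProt ID.
    if uniprot_id:
        aln = ensembl_alignment(uniprot_id)
        if aln:
            return aln
    # 2) Near-identical SwissProt hits (>95% identity, up to five)
    #    used as proxies to find an Ensembl alignment.
    near = [h for h in blast_swissprot(sequence) if h.identity > 95.0][:5]
    for hit in near:
        aln = ensembl_alignment(hit.uniprot_id)
        if aln:
            return aln
    # 3) Fallback: MUSCLE alignment of significant, well-covered hits.
    homologs = [h for h in blast_swissprot(sequence)
                if h.evalue < 0.001 and h.query_coverage >= 0.75]
    return run_muscle([sequence] + [h.sequence for h in homologs])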
Aggregating and visualizing sequence-information data
All of the different data are gathered from various software, either locally or via web services, then collated and displayed below the multiple sequence alignment and the POI sequence (Fig. 3, bottom). Below we discuss all the different types of information collected and displayed by CCD 2.

Figure 3 (caption): CCD 2 shows the results of many sequence analyses, facilitating the choice of construct boundaries. CCD 2 displays the query protein sequence (for example Q9UJ41 isoform 2) between a multiple sequence alignment (usually derived from Ensembl) and the results of multiple sequence analyses. The vivid colours allow an intuitive, visual interpretation of the results; the vertical alignment allows easy mapping of the analyses to the sequence. The user needs only choose where constructs should start or end by clicking on the query sequence. Green boxes indicate start points, red boxes indicate stop positions and yellow boxes indicate residues that are both a start and a stop point. Note that the truncation point at Ala58 was added for illustrative reasons and is unlikely to be a good truncation boundary, since it cuts in a long helix within a globular domain. Q9UJ41 1-48 is expressed, but does not readily crystallize (data not shown). Prediction legend: e, β-strand; h, helix; t, loop; d, disordered; G, globular; * or @, span of the predicted domain (for example ZnF_A20); >, start position of a known structure; A, acetylation; U, ubiquitination.
3.3.1. Secondary-structure prediction. Domains have a high content of secondary structure, while disordered regions do not. CCD 2 runs the sequence through four secondary-structure prediction algorithms [HNN (Guermeur, 1997), DPM (Deléage & Roux, 1987), MLRC (Guermeur et al., 1999) and Predator (Frishman & Argos, 1996)]. These secondary-structure prediction methods are reasonably reliable and quick. Their results are displayed together, so that the user can derive a consensus view. Consecutive stretches of consensus secondary structure indicate domains.
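As an illustration of how such a consensus view might be derived programmatically, the sketch below applies a simple per-residue majority vote to four predictions. Note that this voting rule is our own example; CCD 2 itself displays the four predictions side by side and leaves the consensus to the user.

# Per-residue majority vote over several secondary-structure predictions
# (illustrative only; symbols follow the legend used by CCD 2:
# 'h' helix, 'e' strand, 't' loop).
from collections import Counter

def consensus(predictions):
    """predictions: equal-length strings; returns the per-residue
    majority call, or '-' when no symbol has a strict majority."""
    result = []
    for column in zip(*predictions):
        symbol, count = Counter(column).most_common(1)[0]
        result.append(symbol if count > len(column) // 2 else "-")
    return "".join(result)

# Four toy predictions for a 9-residue stretch:
preds = ["hhhhhttee",
         "hhhhhttte",
         "hhhhtttee",
         "hhhhhttee"]
print(consensus(preds))  # -> 'hhhhhttee'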
3.3.2. Disorder prediction. Disordered regions often have low-complexity, repetitive sequences. Additionally, polar and charged amino acids are overrepresented in disordered regions (Dyson, 2016). CCD 2 gathers disorder and globular-region information using IUPred (Dosztányi et al., 2005), DisEMBL (Linding, Jensen et al., 2003) and GlobPlot (Linding, Russell et al., 2003). The SMART database (Letunic et al., 2021) is also used to display low-complexity regions. Constructs should encompass, but not cut within, predicted globular regions. Trimming terminal disordered regions is generally required for crystallization and might lead to more homogeneous protein preparations owing to reduced proteolytic degradation.
3.3.3. Domain detection. CCD 2 highlights known domains in the protein sequence by querying the SMART (Letunic et al., 2021) and Pfam (El-Gebali et al., 2019) domain-fingerprint databases. Additionally, CCD 2 performs a BLAST search (Altschul et al., 1990) against a local copy of the Protein Data Bank (PDB), reporting hits at three different levels of similarity. The prediction 'PDB_95' highlights the parts of the POI sequence that have an identity of at least 95% to a solved structure in the PDB, thus indicating that parts of the POI (or of a very close homologue) have been experimentally determined. The boundaries of the expression constructs deposited in the corresponding PDB structures are also indicated on the POI sequence. Hovering the cursor over the construct boundaries (marked with '>' or '<' for a start or stop position, respectively) will display the PDB code and chain of the matching structures.
These are experimentally validated, effectual boundaries for truncation constructs. The predictions 'PDB50_to_95' and 'PDB30_to_50' similarly highlight parts of the POI sequence that have BLAST hits against the PDB with identities between 95% and 50% and between 50% and 30%, respectively. These portions of the POI sequence are homologous to known structures, indicating the likely existence and approximate boundaries of a folded domain. All of the results of the search against the PDB can be downloaded for further analysis by clicking on the 'Save PDB hits' button. These include the PDB code, sequence coverage and percentage identity for each matching hit.

Figure 4 (caption; opening clipped): ... Fig. 3), with a Tm of 65°C and overhangs for pETNKI LIC 1.1. The overhang portion of the primer is shown in lower case and the annealing portion is shown in upper case. The primers are named prefix_Fw/Rv_position, where the prefix is chosen by the user (for example RBX5), Fw stands for forward, Rv stands for reverse and 'position' is the chosen start/stop position. Primers can be copied and pasted into a spreadsheet or saved in comma-separated value (csv) format. The DNA sequences of the constructs resulting from all possible combinations of primers can also be saved in csv format.
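Returning to the PDB search described above, binning BLAST hits into the three identity tracks can be sketched as follows; the hit dictionaries and PDB identifiers are invented placeholders.

# Sketch of binning BLAST-vs-PDB hits into the three identity tracks
# (PDB_95, PDB50_to_95, PDB30_to_50) described in the text.

def pdb_track(percent_identity):
    """Map a hit's percent identity to a display track, or None."""
    if percent_identity >= 95.0:
        return "PDB_95"
    if 50.0 <= percent_identity < 95.0:
        return "PDB50_to_95"
    if 30.0 <= percent_identity < 50.0:
        return "PDB30_to_50"
    return None  # below 30%: too remote to flag a domain

hits = [
    {"pdb": "4Q9U_A", "identity": 99.0},   # hypothetical hit identifiers
    {"pdb": "1TXU_A", "identity": 62.5},
    {"pdb": "2C7N_B", "identity": 34.0},
]
for hit in hits:
    print(hit["pdb"], "->", pdb_track(hit["identity"]))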
3.3.4. Coiled-coil detection. Coiled coils are very common structural domains that often mediate protein-protein interactions. CCD 2 searches for coiled coils by querying the SMART database and by direct prediction with NCOILS (Lupas et al., 1991). Truncation within coiled coils is possible (see, for example, Ciferri et al., 2008), although trial and error is necessary.
3.3.5. Detection of other functional elements. CCD 2 further detects the presence of putative nuclear localization signals (NLS) using cNLS Mapper (Kosugi et al., 2009). The presence of an NLS can influence expression in eukaryotic systems. However, NLSs are low-complexity, generally disordered sequences, so their removal can positively affect crystallization.
If experimentally validated post-translational modifications (PTMs) are recorded in the UniProt entry for the sequence of interest, these are displayed. UniProt covers a wide variety of possible PTMs, including glycosylation, disulfide bridges, cross-links (intra-chain and to other proteins such as ubiquitin), chemical modification of amino acids and more. These modifications are indicated with a single-letter code including, for example, 'A' for acetylation, '^' for a disulfide link, '+' for multiple known modifications etc. (a full legend can be found on the tutorial page at https://ccd.rhpc.nki.nl/ tutorial). Hovering the cursor over each letter will display more precise information about each modification. Because UniProt annotations always refer to the sequence of the canonical isoform, these annotations are disabled if the user has selected an alternative splicing variant, to avoid sequence discrepancy.
When data are available (human, rat and mouse proteins), CCD 2 also queries the Phosphosite Plus database (Hornbeck et al., 2004) for the presence of experimentally validated posttranslational modifications (PTMs) on the sequence. Annotations follow the same notation as for UniProt above. Hovering over each annotation provides further information about the underlying data.
PTMs are added by enzymes and require physical accessibility to be attached. Thus, the presence of PTMs can hint at disordered, highly accessible linker regions or at least solventexposed residues (Dyson, 2016). PTMs can also inform about the functionality of truncation constructs.
Designing protein constructs and single-click generation of DNA primers
With all the necessary information available, the user can choose where truncation constructs should start or stop by clicking start and stop points on the sequence of the POI (Fig. 3, middle). The clicked amino acid is always included in the final construct. Start points will generate forward PCR primers and stop points will generate reverse PCR primers. A position can be marked as being both a start and a stop. PCR amplificates typically need adapter sequences to be cloned into recipient vectors. These sequences are added to the primers as 'overhangs' that extend beyond the primer sequence that anneals to the template DNA. CCD 2 allows the user to choose overhangs in three ways. Firstly, the version of CCD 2 hosted on our servers is designed to work in tandem with the pETNKI series (Luna-Vargas et al., 2011) of ligation-independent cloning (LIC; Aslanidis & de Jong, 1990) vectors. These vectors are suitable for mammalian, insect-cell or Escherichia coli expression and are designed for maximum intercompatibility, so that the same PCR amplificate can be cloned into multiple targets. CCD 2 can automatically generate PCR primers with the correct overhangs for any pETNKI vector chosen (Fig. 4a). Some pETNKI vectors can be freely obtained from Addgene (https://www.addgene.org/, catalogue Nos. 108703-108710); others, which are encumbered by third-party patents, can be sourced from the Netherlands Cancer Institute protein-production facility with a material transfer agreement. When running a local copy of CCD 2, user-defined vectors can be integrated instead of the pETNKI series (not shown). Alternatively, CCD 2 contains a utility to generate primer overhangs for conventional restriction cloning (Fig. 4b).
Finally, CCD 2 can accept user-provided custom overhangs, which may contain nonstandard sequences (i.e. other than the standard DNA bases ATCG; Fig. 4c). In all cases, the user is notified of the final overhang sequence and of the presence of start/stop codons in the overhang (not shown). The user can also choose the properties of the primer by choosing a desired melting temperature (Tm; default 65°C) or primer length. Overhangs are not considered in determining the Tm. Finally, the user can also choose a name for the primers.
Using these data, CCD 2 automatically maps the user-chosen start and stop positions from the protein to the DNA sequence and generates a table with all of the primers that can be saved in spreadsheet-compatible format for bookkeeping or copied and pasted for quick ordering of the primers (Fig. 4d). The amplified DNA sequences resulting from all possible combinations of starts and stops (i.e. resulting from a start and stop primer that amplifies any portion of the protein sequence) can also be downloaded in spreadsheet-compatible format by clicking on the 'Save Construct DNA' button.
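To illustrate the protein-to-DNA mapping and primer-growing logic, the sketch below uses Biopython's nearest-neighbour melting-temperature routine. The Tm model, the grow-until-target loop and the overhang strings are assumptions made for the example, not necessarily the exact algorithm used by CCD 2.

# Sketch of turning protein-level start/stop choices into PCR primers
# (assumes Biopython; overhangs here are made-up placeholders).
from Bio.Seq import Seq
from Bio.SeqUtils import MeltingTemp as mt

def annealing_part(template, target_tm=65.0, min_len=15, max_len=40):
    """Grow a primer from the 5' end of `template` until Tm >= target."""
    primer = template[:min_len]
    for length in range(min_len, min(max_len, len(template)) + 1):
        primer = template[:length]
        if mt.Tm_NN(primer) >= target_tm:
            break
    return primer

def make_primers(orf, aa_start, aa_stop, fw_overhang="", rv_overhang=""):
    """aa_start/aa_stop are 1-based residue numbers, both included."""
    dna_start = 3 * (aa_start - 1)       # first base of the start residue
    dna_stop = 3 * aa_stop               # base after the last residue
    forward = fw_overhang.lower() + annealing_part(orf[dna_start:])
    # The reverse primer anneals to the reverse complement of the 3' end.
    template_rc = str(Seq(orf[:dna_stop]).reverse_complement())
    reverse = rv_overhang.lower() + annealing_part(template_rc)
    return forward, reverse

orf = "ATGGCTGAAGCTTGGCGTAAACTGGTTGATCGTCCGGAACAGTTCGGTTCTACCTAA"
fw, rv = make_primers(orf, 1, 18, fw_overhang="cagc", rv_overhang="ttgc")
print(fw, rv, sep="\n")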
Enabling data tracking and bookkeeping
CCD 2 displays the sequence of the protein truncations generated by all possible start and stop combinations in a separate panel, along with basic information about their predicted molecular weight (MW), isoelectric point (pI) and predicted extinction coefficient at 280 nm (Fig. 5a). These are calculated with the same algorithm as used by ProtParam in the Expasy portal (Gasteiger et al., 2005). If pETNKI vectors are chosen as cloning targets (or custom vectors are integrated in a local copy of CCD 2), CCD 2 also has the information about the sequence of each construct prior to (Fig. 5b) and after (Fig. 5c) proteolytic tag cleavage, and can further provide the sequence, molecular weight, predicted isoelectric point (pI) and expected 280 nm extinction coefficient for all generated constructs, either with the attached tag or after protease digestion. All of these data can be saved in spreadsheet-compatible format for bookkeeping and to assist in protein expression and purification.
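For reference, the same family of construct parameters can be reproduced with Biopython's ProtParam module, which implements the Expasy ProtParam calculations referenced above; the peptide below is an arbitrary example.

# Construct-parameter calculation with Biopython's ProtParam module.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

construct = "MKWVTFISLLLLFSSAYSRGV"   # arbitrary example sequence
pa = ProteinAnalysis(construct)
print("MW (Da):", round(pa.molecular_weight(), 1))
print("pI:", round(pa.isoelectric_point(), 2))
# molar_extinction_coefficient() returns a (reduced, disulfide-bonded)
# pair of values at 280 nm, in M^-1 cm^-1.
reduced, oxidized = pa.molar_extinction_coefficient()
print("E280 (reduced):", reduced)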
Finally, for pETNKI and custom vectors, CCD 2 can generate and save annotated plasmid maps of the chosen truncation constructs (GenBank format; https:// www.ncbi.nlm.nih.gov/genbank/samplerecord/). These can be opened in any standard DNA-manipulation software and are useful as a reference to check the success of cloning.
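A minimal example of writing such an annotated plasmid map in GenBank format with Biopython is sketched below; the vector sequence, record name and feature coordinates are invented placeholders.

# Minimal sketch of writing an annotated plasmid map in GenBank format.
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.SeqFeature import SeqFeature, FeatureLocation
from Bio import SeqIO

plasmid_seq = Seq("ATG" + "GCT" * 50 + "TAA" + "A" * 100)  # placeholder
record = SeqRecord(plasmid_seq, id="construct_1",
                   description="example truncation construct",
                   annotations={"molecule_type": "DNA",
                                "topology": "circular"})
# Annotate the inserted ORF as a CDS feature (coordinates are 0-based,
# end-exclusive in Biopython).
record.features.append(
    SeqFeature(FeatureLocation(0, 156, strand=1), type="CDS",
               qualifiers={"label": "POI truncation"}))
SeqIO.write(record, "construct_1.gb", "genbank")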
Conclusions
The design and cloning of constructs are frequent and time-consuming tasks in any structural biology project, and often in general biochemistry and biophysics. CCD2 streamlines these tasks: it helps in the design of constructs by consolidating multiple informative analyses of the sequence in a single place, and it enables the user to make quick decisions about where protein truncations should be placed. Then, once the boundaries have been chosen, CCD2 takes care of the nitty-gritty details of primer design and plasmid mapping, also providing a brief analysis of each recombinant construct. Overall, CCD2 allows the user to save valuable time and avoid costly mistakes in any structural biology project.
"year": 2021,
"sha1": "64aec2ec11d4d17727abb9bd631c82327646dc55",
"oa_license": "CCBY",
"oa_url": "https://journals.iucr.org/d/issues/2021/08/00/qg5001/qg5001.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8493ff88075e1c2f2123d2f0f7eb0eff6c9b15d9",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Disentangling the resistant mechanism of Fusarium wilt TR4 interactions with different cultivars and its elicitor application
Fusarium wilt of banana, especially Tropical Race 4 (TR4), is a major factor restricting banana production. Developing resistant cultivars and inducing plant defenses by elicitor application are currently two of the best options to control this disease. Isotianil is a monocarboxylic acid amide that has been used as a fungicide to control rice blast and could potentially induce systemic acquired resistance in plants. To determine the control effect of the elicitor isotianil on TR4 in cultivars differing in resistance, a greenhouse pot experiment was conducted; its results showed that isotianil could significantly alleviate the symptoms of TR4, providing enhanced disease control on the cultivars 'Baxi' and 'Yunjiao No.1' with control effects of 50.14% and 56.14%, respectively. We compared the infection processes of TR4 in two cultivars, 'Baxi' (susceptible) and 'Yunjiao No.1' (resistant). The results showed that TR4 hyphae could rapidly penetrate the cortex into the root vascular bundle for colonization, and the colonization capacity in 'Baxi' was significantly higher than in 'Yunjiao No.1'. The accumulation of large numbers of starch grains was observed in corm cells, and further analysis showed that the starch content in the resistant cultivar 'Yunjiao No.1' was significantly higher than in the susceptible cultivar 'Baxi', while isotianil application significantly increased the starch content in 'Baxi'. In addition, numerous tyloses were observed in the roots and corms, and these tyloses increased after isotianil application. Furthermore, the total starch and tyloses contents and the control effect in the corms of 'Yunjiao No.1' were higher than those in 'Baxi'. Moreover, the expression levels of key genes for plant resistance induction and starch synthesis were analyzed, and the results suggested that these genes were significantly upregulated at different time points after the application of isotianil. These results suggest that there are significant differences between cultivars in response to TR4 invasion with respect to starch accumulation, tyloses formation and the expression of genes related to plant resistance induction and starch synthesis. The results also indicate that isotianil application may contribute to disease control by inducing host plant defense against TR4 infection and could potentially be used together with resistant cultivars as an integrated approach to manage this destructive disease. Further research under field conditions should be included in the next phases of study.
Introduction
Bananas, the most traded tropical and subtropical fruit (Zou and Fan, 2022), are also the fourth staple crop after wheat, corn and rice (Nayar, 2010), providing a food source for approximately 400 million people worldwide (Dusunceli, 2017). However, the banana industry is seriously threatened by Fusarium wilt of banana (FWB), a soil-borne vascular bundle disease caused by Fusarium oxysporum f. sp. cubense (Foc) (Ploetz, 2006a; Ploetz, 2015; Dita et al., 2018). On the basis of differences in pathogenicity, Foc can be divided into four physiological races (Foc 1, Foc 2, Foc 3 and Foc 4), and Foc 4 can be further divided into subtropical race 4 (STR4) and tropical race 4 (TR4) (Ploetz, 2006b; Karangwa et al., 2018). In the 1950s, the FWB epidemic caused by Foc 1 was successfully controlled by replacing 'Gros Michel' (a disease-susceptible cultivar) with 'Cavendish' (a disease-resistant cultivar) (Ploetz, 2006b). In the 1990s, the banana industry was again in crisis with the advent of TR4 (Ploetz, 2006b). In the past decades, TR4 gradually spread to surrounding countries such as the Philippines and Malaysia (Hwang and Ko, 2004; Ploetz, 2006b), and then expanded to countries and regions such as Australia, the Middle East, India and Africa (Thangavelu and Mustaffa, 2010; Butler, 2013; Ploetz et al., 2015).
In recent years, it has been found in Jordan (García-Bastidas et al., 2014), Lebanon (Ordoñez et al., 2016), Israel (Maymon et al., 2018), Mozambique (García-Bastidas et al., 2014), Pakistan (Ordoñez et al., 2016), Puerto Rico (Garcia et al., 2018), Miyako Island in Okinawa, Japan (Nitani et al., 2018), India (Thangavelu et al., 2019), Mayotte (Aguayo et al., 2020), Colombia (Bastidas et al., 2020) and Peru (Acuña et al., 2021). As TR4 continues to spread rapidly around the world (Dita et al., 2018; Zheng et al., 2018; Pegg et al., 2019), it is essential to take action to stop its further spread and to adopt comprehensive management approaches. Although historical experience has shown that disease-resistance breeding is a particularly effective way to control FWB (Bubici et al., 2019; Zorrilla-Fontanesi et al., 2020), no completely TR4-immune cultivar has been incorporated into agricultural production, because breeding disease-resistant cultivars in traditional ways remains a major challenge owing to the triploid nature of banana plants.
In the natural environment, to prevent pathogenic infection, plants not only form a physical barrier on their surface but also mount various internal immune responses. Plants are able to induce broad defense reactions in response to pathogens in their surroundings (Choudhary et al., 2007). Activating this inherent defense with specific elicitors would therefore be an effective way to protect plants from disease (Ward et al., 1991; Pieterse et al., 1998b). Accordingly, many researchers favor plant-induced resistance as a new type of plant disease control strategy (Eschen-Lippold et al., 2010; Kurth et al., 2014; Dorneles et al., 2018; Sopeña-Torres et al., 2018), one which may also become a new sustainable plant protection approach in the future (Roberts and Taylor, 2016). Today, many bacterial, fungal and chemical inducers that elicit plant defenses to control crop disease have been commercialized (Verhagen et al., 2004; Takahashi et al., 2006). However, so far there has been no study of exogenous inducers against FWB, and whether such elicitors can induce systemic resistance to FWB in banana remains unknown.
According to the molecular mechanism of induction, induced resistance is divided into systemic acquired resistance (SAR) and induced systemic resistance (ISR) (Pieterse et al., 2009). SAR, which depends on salicylic acid (SA), and its associated systemic immune responses have been confirmed in several plants (Fu and Dong, 2013; Bektas and Eulgem, 2014); for example, SAR enhances the expression of pathogenesis-related (PR) genes (Loon et al., 2006). PR proteins, which have antibacterial activity outside the cell, can act directly on pathogens (Loon et al., 2006). NPR1 is a key regulatory gene for transducing SA signaling and activating PR gene expression in this pathway (Dong, 2004; Grant and Lamb, 2006), and both exogenous SA application and pathogen infection may lead to enhanced expression of the NPR1 gene of the SAR pathway in plants (Cao et al., 1997; Ryals et al., 1997). In contrast to SAR, ISR relies primarily on the jasmonic acid (JA) and ethylene (ET) pathways (Loon et al., 1998; Pieterse et al., 1998a; Pieterse et al., 2002; Choudhary et al., 2007; Pieterse et al., 2012; Pieterse et al., 2014). Although SAR and ISR are significantly different, studies have shown that ISR also requires NPR1 (Pieterse et al., 2014; Nie et al., 2017). ET is synthesized from the amino acid methionine by a pathway requiring SAMS (S-adenosylmethionine synthetase), ACS [1-aminocyclopropane-1-carboxylic acid (ACC) synthase] and ACO (ACC oxidase), and ACS is a key synthase gene in this pathway (Sauter et al., 2013; Wang et al., 2013; Dubois et al., 2018).
Starch is a decisive factor in plant adaptation to abiotic stress (Thalmann and Santelia, 2017), and different plant tissues often show very obvious plasticity in starch levels when facing stresses (Cuellar-Ortiz et al., 2008; Yin et al., 2009; Morais et al., 2019). Banana plants with high starch content in the corm are more resistant to FWB than those with low starch content (Dong et al., 2019). ADP-glucose pyrophosphorylase (AGPase), starch branching enzyme (SBE) and granule-bound starch synthase (GBSS) are key enzymes in the starch biosynthesis pathway; AGPase also plays an important role in crop heat tolerance (Saripalli and Gupta, 2015). The activity of GBSS within granules is the main determinant of amylose content (Seung, 2020), while SBE is a key enzyme in amylopectin synthesis (Li and Gilbert, 2016).
Isotianil is one such elicitor, acting as a salicylic acid (SA) mimic, with proven activity against rice blast (Bektas and Eulgem, 2014) and wheat blast (Portz et al., 2020). It was discovered by Bayer in 1997 (Toquin et al., 2012). Although isotianil does not have any direct antimicrobial activity against bacteria or fungi, it can induce the defense responses of various plants to pathogens. For example, isotianil treatment can induce the expression of defense-related genes such as NPR1 and PR1 in the SA signaling pathway (Yoshida and Toda, 2013; Bektas and Eulgem, 2014). To date there is only one patent report on the application of isotianil against FWB (Gilbert et al., 2019), and the specific mechanism by which isotianil induces plant resistance remains unclear. Therefore, this study aimed to explore whether isotianil could induce plant resistance and alleviate FWB infection in two different cultivars. In addition, the interaction between TR4 and banana was explored by confocal laser-scanning microscopy (CLSM), and molecular approaches were used to analyze the mode of action of isotianil on banana plants (Figure 1).
Plant materials
In this study, two Cavendish cultivars were used, 'Baxi' (Musa spp. AAA, susceptible cultivar) and 'Yunjiao No.1' (Musa spp. AAA, moderately resistant cultivar), and the banana plantlets were propagated by plant tissue culture. The tissue-culture plantlets were grown at 25°C under a 16 h/8 h (light/dark) photoperiod until new roots grew, and were then transplanted into aperture trays (32-cell specification, 110 mL capacity) filled with coconut bran and seedling substrate. Banana plantlets approximately 15 cm high with 5 leaves were transplanted into plastic pots 25 cm in diameter containing garden soil and substrate. All banana plants were grown in a solar greenhouse under routine watering and fertilization management.
Isotianil application and pathogen inoculation
TR4 labeled with green fluorescent protein (GFP) was used to explore the infection process of the pathogen in plants (Zhang et al., 2018a). After TR4 was grown on PDA medium at 28°C for 7 days, spores were collected by rinsing the plates with sterile water, and the suspension was adjusted to 1×10⁷ spores/mL as measured by hemocytometer. Isotianil, the active compound of the Routine® product, was provided by Bayer AG, Crop Science Division. Routine® is a suspension concentrate (SC) containing 0.2 g/mL isotianil and was applied when the banana plants had 6-7 leaves. The applied diluent was prepared by dissolving 0.035 mL of the original product in 100 mL of tap water per banana plant and was applied by either drenching the roots or spraying the leaves (0.07 mg/mL isotianil, 100 mL/plant). Applications were performed once every 28 days, three times in total. Seven days after the second application of Routine®, 100 mL of 1×10⁷ spores/mL TR4 suspension was drenched for root inoculation; the control treatment received tap water. Before TR4 inoculation, the roots were wounded in two places around each plant using a shovel. Banana roots and corms were sampled at 0, 1, 7, 14 and 62 days after TR4 inoculation for subsequent TR4 content detection, microscopic observation and gene expression determination.
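As a quick sanity check on the dosing arithmetic above (a sketch using only the figures quoted in this paragraph):

```python
# Sanity check of the isotianil dose described above: 0.035 mL of a
# 0.2 g/mL suspension concentrate diluted into 100 mL of tap water.
stock_conc_mg_per_ml = 0.2 * 1000      # 0.2 g/mL -> 200 mg/mL
product_volume_ml = 0.035
final_volume_ml = 100.0

isotianil_mg = stock_conc_mg_per_ml * product_volume_ml   # 7.0 mg per plant
final_conc = isotianil_mg / final_volume_ml               # 0.07 mg/mL

print(f"{isotianil_mg:.1f} mg isotianil per plant -> {final_conc:.2f} mg/mL")
# Matches the stated application rate of 0.07 mg/mL at 100 mL per plant.
```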
Experimental treatment design
Four treatments were set up for each banana cultivar: untreated control (CK), inoculation with the pathogen alone (TR4), leaves sprayed with Routine® followed by TR4 inoculation (TR4+R1), and roots drenched with Routine® followed by TR4 inoculation (TR4+R2) (Table 1). Three biological replicates were designed per treatment, and 45 plantlets were prepared per replicate.
Disease index investigation
The banana corms were dissected to investigate the disease index 62 days post inoculation with TR4. After the corms were dissected, the degree of lesioning of each corm was scored on five grades from 0 to 4: grade 0, no corm lesions; grade 1, lesions covering 1-10% of the corm; grade 2, 11-30%; grade 3, 31-50%; and grade 4, more than 50% of the corm. The disease index and control effect were calculated as described previously (Zuo et al., 2018; Chen et al., 2019; Fan et al., 2021).
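The formulas themselves are not reproduced above; the standard forms used in the cited works, which match the 0-4 grading scheme, are (an assumption based on those references):

```latex
% Assumed standard formulas (cf. Zuo et al., 2018; Fan et al., 2021), with
% grades i = 0..4 as defined above, n_i plants at grade i and N plants in total:
\[
\text{Disease index (\%)} = \frac{\sum_{i=0}^{4} i \times n_i}{4 \times N} \times 100
\]
\[
\text{Control effect (\%)} =
\frac{\mathrm{DI}_{\mathrm{TR4}} - \mathrm{DI}_{\mathrm{treatment}}}{\mathrm{DI}_{\mathrm{TR4}}} \times 100
\]
```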
Detection of TR4 content in banana roots and corms by qPCR
Roots and corms of banana plants inoculated with TR4 were collected at four time points, immediately frozen in liquid nitrogen and stored at -80°C until use. Genomic DNA was extracted by the cetyltrimethylammonium bromide (CTAB) method (Tamari et al., 2013). Fungal biomass determination is essentially genomic quantification of gene copy numbers by qPCR, based on our previously established protocol (Zhang et al., 2018b). Three plants were pooled per treatment as one biological replicate, and each treatment was analyzed in three replicates. Standard curves were accepted for data analysis when the amplification efficiency was 90% to 110% and the correlation coefficient R² exceeded 0.99.
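To illustrate the quantification and the acceptance criteria, the sketch below fits a standard curve and converts a sample Ct to copies per gram; the dilution series and Ct values are hypothetical, not data from this study.

```python
import numpy as np

# Sketch of absolute quantification from a qPCR standard curve: Ct is
# linear in log10(copy number). The dilution series below is hypothetical;
# in practice it comes from serial dilutions of a plasmid or gDNA standard.
def fit_standard_curve(log10_copies, ct_values):
    slope, intercept = np.polyfit(log10_copies, ct_values, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0          # 1.0 means 100%
    r2 = np.corrcoef(log10_copies, ct_values)[0, 1] ** 2
    return slope, intercept, efficiency, r2

def copies_per_gram(ct, slope, intercept, sample_mass_g):
    copies = 10 ** ((ct - intercept) / slope)
    return copies / sample_mass_g                    # copies per gram tissue

# Hypothetical 10-fold dilution series of the standard:
logs = np.array([3, 4, 5, 6, 7], dtype=float)
cts = np.array([30.1, 26.8, 23.4, 20.0, 16.7])
slope, intercept, eff, r2 = fit_standard_curve(logs, cts)
assert 0.9 <= eff <= 1.1 and r2 > 0.99               # acceptance criteria above
print(copies_per_gram(25.0, slope, intercept, sample_mass_g=0.1))
```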
Confocal laser scanning microscope observation
To examine the infection process and colonization of TR4, the roots and corms of the banana plants were collected after inoculation with TR4. Three biological replicates were designed per treatment, and 3 plantlets were prepared per replicate. The samples were washed in sterile water and 75% alcohol and cut into transverse and longitudinal thin slices with an ultra-thin blade. The slices were placed on microscope slides in droplets of MQ water and covered with a glass cover slip. The processed samples were observed under a confocal laser scanning microscope (Leica TCS-SP, Wetzlar, Germany). The spectral parameters for GFP fluorescence and plant autofluorescence on this microscope were excitation 488 nm/emission 500-560 nm and excitation 561 nm/emission 570-670 nm, respectively.
Figure 1. Scheme illustrating isotianil-induced multi-resistance in banana preventing TR4 infection. (A) TR4 hyphae in the soil; (B) TR4 hyphae accumulate in the rhizosphere and infect the plant; (C) TR4 hyphae enter the root xylem vessels and spread further; (D) TR4 hyphae spread into the xylem vessels connecting the root to the corm; (E) TR4 hyphae spread into the corm and multiply; (F) TR4 hyphae are blocked from entering the corm by elicitor-induced tyloses in the xylem vessels connecting the root to the corm; (G) after plants are treated with the elicitor by root drenching or leaf spraying, multiple defense systems in the banana plant are activated to prevent further spread of TR4 in the corm. SA, salicylic acid; NPR1, nonexpressor of pathogenesis-related genes 1; PR1, pathogenesis-related 1 gene; PR3, pathogenesis-related 3 gene; JA, jasmonic acid; MYC2, basic helix-loop-helix transcription factor; ERF1, ethylene response factor 1; ET, ethylene; ACC, 1-aminocyclopropane-1-carboxylic acid synthase; AGPase, ADP-glucose pyrophosphorylase; SBE, starch branching enzyme; GBSS, granule-bound starch synthase.
Determination of starch contents in corms
Fresh corms from the different treatments were collected at 1, 7, 14 and 62 days after TR4 inoculation. Three plants were pooled per treatment as one biological replicate, and each treatment was analyzed in three replicates. The corms were frozen and ground to powder in liquid nitrogen immediately after collection, and the starch content was determined using the Plant Starch Content Assay Kit (Comin Biotechnology Co., Ltd., Suzhou, China) according to the manufacturer's instructions.
Analysis of the expression of key genes related to starch synthesis and plant defense by quantitative real-time PCR
Three key banana genes in the starch synthesis pathway (AGPase, GBSS, SBE) were selected for expression analysis at 1, 7, 14 and 62 days post inoculation with TR4. Six defense-related genes (NPR1, PR1, PR3, MYC2, ERF1 and ACC) were also selected for this study. In each treatment, 9 corms were collected for expression analysis, with three technical and three biological replicates per analysis. Collected corm samples were immediately frozen in liquid nitrogen and stored at -80°C, and total RNA was then extracted using the Omega Plant RNA Extraction Kit according to the manufacturer's protocols. Total RNA with A260/A280 of 1.9-2.1 and A260/A230 of 2.0-2.4 was used for further experiments. cDNA was synthesized with the PrimeScript RT Master Mix Kit (TaKaRa), and reverse-transcription quantitative PCR (RT-qPCR) was performed using the iTaq Universal SYBR Green Supermix Kit (BIO-RAD) according to the manufacturer's protocols. Relative changes in gene expression levels were calculated by the 2^−ΔΔCt method (Zhao et al., 2013; Dong et al., 2019). The primer sequences used for RT-qPCR are listed in Supplementary Table 1 (Dong et al., 2019; Dalio et al., 2020), and Musa25SrRNA was used as the reference gene (Berg et al., 2007; Wu et al., 2013). For the real-time PCR standard curves, each cDNA was serially diluted in a 1-2-4-8-16-32-64-128 gradient and the corresponding standard curve was established. Standard curves with R² greater than 0.99 and amplification efficiencies between 90% and 110% were accepted for subsequent analysis.
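For readers unfamiliar with the calculation, a minimal sketch of the 2^−ΔΔCt (Livak) method follows; the Ct values are hypothetical, not data from this study.

```python
# Minimal sketch of the 2^(-ddCt) relative-expression calculation (Livak
# method) with hypothetical Ct values; Musa25SrRNA is the reference gene
# and CK the calibrator treatment, as in the text.
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_calibrator, ct_ref_calibrator):
    d_ct_sample = ct_target_sample - ct_ref_sample           # normalize to reference
    d_ct_calibrator = ct_target_calibrator - ct_ref_calibrator
    dd_ct = d_ct_sample - d_ct_calibrator                    # compare to calibrator
    return 2 ** (-dd_ct)                                     # fold change

# Hypothetical example: NPR1 in an isotianil-treated corm vs CK.
fold = relative_expression(ct_target_sample=24.0, ct_ref_sample=15.0,
                           ct_target_calibrator=26.5, ct_ref_calibrator=15.2)
print(f"{fold:.2f}-fold")  # ~4.9-fold up-regulation in this toy example
```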
Data analysis
Data were analyzed using SPSS 25 and graphed using Origin 2018. All values are expressed as mean ± standard deviation, and statistically significant differences were determined using Duncan's multiple range test (P < 0.05).
Isotianil application can significantly induce resistance of banana to Fusarium wilt
Banana corms were split open at 62 dpi to investigate disease severity, and the symptoms of TR4-infected banana plants were recorded. The untreated control showed no symptoms or phytotoxicity. Compared with the control plants, corms after TR4 inoculation showed obvious symptoms, with brownish-black zones of infection (Figure 2). The application of isotianil, however, alleviated the symptoms caused by TR4. Disease investigation showed that the disease indexes following isotianil application ('Baxi' 25.52% and 'Yunjiao No.1' 11.98%) were significantly lower than those with TR4 inoculation alone ('Baxi' 51.56% and 'Yunjiao No.1' 27.08%) (Figure 3A). There was no significant difference between leaf spraying (TR4+R1) and root drenching (TR4+R2) with isotianil, but the disease indexes in 'Yunjiao No.1' were significantly lower than in 'Baxi'. The control effects of isotianil against FWB in the greenhouse experiments were 50.14% in 'Baxi' and 56.14% in 'Yunjiao No.1' (Figure 3B); again, there was no significant difference between leaf spraying (TR4+R1) and root drenching (TR4+R2). These results show that 'Yunjiao No.1' is more resistant to TR4 than 'Baxi' and that isotianil can significantly induce resistance to FWB in both cultivars.
Determination of pathogen biomass in different tissues of banana plants
Pathogen biomass was measured by qPCR in different plant tissues at different time points (Figure 4). The results showed that in banana corms the pathogen biomass of 'Baxi' (susceptible cultivar) was significantly higher than that of 'Yunjiao No.1' (resistant cultivar) (Figure 4B). The pathogen biomass in corms ranged from 139.58 ± 31.48 to 10,905.68 ± 1,745.75 copies/g in 'Baxi' and from 104.65 ± 8.81 to 3,540.11 ± 2,184.47 copies/g in 'Yunjiao No.1'. In addition, the TR4 content in corms was significantly lower than in roots at all time points (Figures 4C, D): in 'Baxi', pathogen biomass ranged from 262.2 to 64,159.7 copies/g in roots and from 139.5 to 10,905.6 copies/g in corms. In roots, there was no significant difference between TR4-inoculated plants (TR4) and isotianil-treated, TR4-inoculated plants (TR4+R1, TR4+R2). In corms, however, plants inoculated with TR4 alone (TR4) had significantly more pathogen biomass than the control plants (CK) and the isotianil-treated, TR4-inoculated plants (TR4+R1, TR4+R2) (Figure 4B).
Differences in the infection process of TR4 in banana plants
To explore differences in infection between tissues, we used confocal laser-scanning microscopy to monitor the infection and colonization process of TR4 in banana tissues. The results showed that TR4 existed mainly as hyphae in banana plants, and spores were hardly found. Detailed observations are presented in Supplementary Table 2.
TR4 infection in roots
Hyphae could penetrate the vascular bundle tissues in all treatments, but some differences were observed in the infection process. At 1 day post inoculation (dpi), hyphae were observed in the root vascular bundle tissues only in TR4-inoculated 'Baxi', and not in the other treatments. From 7 dpi to 62 dpi, large numbers of hyphae were observed in the root vascular bundles and multiplied extensively in all treatments (Figure 5).
TR4 infection in corms
To understand the infection mechanism of the pathogen more precisely, TR4 hyphae in the corms were further monitored from 1 dpi to 62 dpi. A few TR4 hyphae were discovered in the cortical vascular tissues of TR4-inoculated 'Baxi' at 7 dpi and had expanded to the central cylinder by 14 dpi, whereas hardly any TR4 hyphae were observed in the cortex or central cylinder in the other treatments. At 14 dpi, a mass of hyphae was found in the cortical root vessels of corms in TR4-inoculated 'Baxi', while relatively few hyphae were found in the cortical roots of the other treatments. At 62 dpi, massive hyphae were observed in the corm vessels of 'Baxi' inoculated with TR4 alone, whereas pathogen hyphae in the other treatments were relatively rare, so we turned our attention to the central cylinder of the corms again. During this period, the central cylinder of TR4-inoculated 'Baxi' had ruptured, and many TR4 hyphae were released from the vessels to colonize the central cylinder of the corms, whereas in the other treatments no hyphae released into the central cylinder were found (Figure 5).
Effect of isotianil application on TR4 infection between different cultivars and tissues
Cultivar differences in TR4 infection between 'Yunjiao No.1' and 'Baxi'
TR4 hyphae were observed in both roots and corms of TR4-inoculated 'Baxi' one day after inoculation, but not in 'Yunjiao No.1'. From 1 dpi to 7 dpi, the number of hyphae in the roots and corms of 'Yunjiao No.1' was lower than in 'Baxi'. After 14 dpi, considerable hyphae were discovered in the roots of both 'Baxi' and 'Yunjiao No.1'; a mass of hyphae was also discovered in the corms of 'Baxi', whereas almost no hyphae were observed in the corms of 'Yunjiao No.1'. These results indicate that the difference in TR4 infection between 'Yunjiao No.1' and 'Baxi' lies mainly in the corms, and that 'Yunjiao No.1' is more resistant than 'Baxi'. Furthermore, the quantity of hyphae in the roots was higher than in the corms, showing that the corms play an important role in blocking TR4 infection (Figure 5).
Effect of isotianil application on TR4 infection
One day after TR4 inoculation, fungal hyphae were discovered only in the roots and corms of 'Baxi', and not in tissues treated with isotianil. Numerous hyphae were discovered in the roots and corms of all plants from 7 dpi to 14 dpi. At 62 dpi, a mass of hyphae was present in the roots of all treatments, whereas in the corms massive hyphae were observed only in 'Baxi' without isotianil treatment; no mycelia were observed in the corms of 'Yunjiao No.1' or of isotianil-treated 'Baxi' (Figure 5). Tracking the TR4 infection process over time and across plant parts thus shows that isotianil application can trigger resistance in banana plants and prevent TR4 hyphae from infecting the corms.
Tyloses accumulation in the vascular bundles of corms
Microscopic analysis of root and corm samples from banana plants showed that tyloses play an important role in defense against pathogen infection in the vascular bundles. At 62 days post inoculation (dpi), numerous tyloses were observed in the cortical root vascular bundle vessels of the corms. At the same time, hyphae were significantly reduced in tissues with tyloses (Figures 6I-P), whereas large numbers of TR4 hyphae were observed in tissues without tyloses (Figures 6E-M). Isotianil application induced the formation of tyloses (Figures 6J-P), and the number of tyloses in the isotianil treatments (TR4+R1, TR4+R2) was higher than in TR4-infected plants (TR4). There were also differences between banana cultivars, with tyloses in 'Yunjiao No.1' being more abundant than in 'Baxi'.
Determination of starch content and related gene expression levels in corms
While monitoring TR4 infection, we noted significantly fewer TR4 hyphae in cells filled with starch granules than in tissues with fewer starch granules. In addition, the content of starch granules at 62 dpi was higher than at 1 dpi (Supplementary Figures 1A, B). These results indicated that starch granules in corms may play an important role in preventing TR4 infection (Figure 7). To verify that starch content is related to disease resistance, the total starch content of the corms was determined (Figure 7A). The results showed that in 'Baxi' the starch content of isotianil-treated plants (TR4+R1, TR4+R2) was significantly higher than that of plants with TR4 inoculation alone (TR4) or the control (CK). In addition, the starch content in 'Yunjiao No.1' was significantly higher than in 'Baxi': the corm starch content ranged from 15.25 ± 3.18 to 67.01 ± 4.39 mg/g in 'Baxi' and from 44.70 ± 0.73 to 123.05 ± 10.89 mg/g in 'Yunjiao No.1' (Figure 7A). These results indicate that isotianil-treated plants were induced to produce more starch granules than TR4-inoculated plants (Figure 7). Furthermore, key genes related to starch synthesis in the corm (SBE, GBSS, AGPase) were selected for expression analysis, which revealed significant differences between the two cultivars: these genes were expressed at significantly higher levels in 'Yunjiao No.1' than in 'Baxi' (Figure 7B).
Discussion
In the past few decades, banana production has suffered dramatic losses from the FWB epidemic, a typical soil-borne disease that is difficult to control (Ortiz, 2013; Zhang et al., 2013; Li et al., 2015; Wang et al., 2015; Paz-Ferreiro and Fu, 2016; Zuo et al., 2018; Niwas et al., 2020). Based on the historical experience of the first FWB epidemic, enhancing plant disease resistance is generally considered one of the most effective strategies to control FWB. However, the mechanism by which banana plants resist TR4 infection remains unclear.
In this study, the results showed that isotianil can significantly reduce the incidence of FWB and alleviate disease symptoms in both cultivars (Figures 2, 3). Comparing disease indexes, the isotianil application treatments (TR4+R1, TR4+R2) were significantly lower than TR4 inoculation alone (TR4). To explain the mode of action of isotianil on banana plants, TR4 biomass in different banana tissues was measured by qPCR. The results showed that TR4 biomass in corms was lower after isotianil application, with a consistent trend in both cultivars (Figure 4B). TR4 biomass in the corm was consistent and stable across time points. However, no clear trend of TR4 biomass in roots was found, probably because roots keep growing and only parts of the roots were sampled; sampled roots could be newly grown and uninfected. At the same time, qPCR showed that from 14 dpi the pathogen content in corms of plants with TR4 inoculation alone gradually decreased, which may be related to an increase in plant resistance during this period. In addition, TR4 biomass in corms was significantly lower than in roots, consistent with the microscopic observations. Therefore, these results show that the corm can act as a physical barrier that reduces the damage of FWB by slowing TR4 infestation, consistent with our previous study (Zhang et al., 2018b). The detailed mechanism that prevents TR4 from entering the corm deserves further study. In the current study, GFP-labeled TR4 was used to make it easier to monitor the infestation process in banana plants (Zhang et al., 2018a). The results showed that TR4 hyphae penetrate the root epidermis and invade the xylem vessels, entering the vascular bundle vessels from wounds, root hairs or intercellular spaces of the cortex. These observations are largely consistent with previous results (Li et al., 2011; Guo et al., 2014; Guo et al., 2015; Li et al., 2017). We found that TR4 hyphae were more likely to infect through wounds than through other routes, similar to the findings of Dong et al. (2019). Past studies of the TR4 infection process focused mainly on the infection of roots or corms separately (Li et al., 2011; Guo et al., 2015; Li et al., 2017), but how TR4 moves from the roots into the corms had not been observed. In this study, we observed that TR4 hyphae enter the corms from the roots through the vascular ducts, and that the number of TR4 hyphae in the roots is much higher than in the corms. These results again confirm that banana corms play an important role in blocking pathogen infection. At the same time, confocal laser scanning microscopy showed that the number of TR4 hyphae in corms after isotianil application (TR4+R1, TR4+R2) was significantly lower than in the TR4 treatment (TR4). When plants are infected by pathogens, defense responses in the xylem vessels are activated, preventing further spread of the pathogens (Yadeta and Thomma, 2013; Li et al., 2022).
Figure 8. Relative expression levels of key resistance-induction genes in corms from 1 dpi to 62 dpi. The heat map illustrates the fold changes of gene expression (log10 scale) in corms at 1, 7, 14 and 62 dpi; red indicates down-regulation, green indicates up-regulation and black indicates no effect on gene expression. Three biological replicates and three technical replicates were used in the data analysis. MYC2, basic helix-loop-helix (bHLH) transcription factor; ACC, 1-aminocyclopropane-1-carboxylic acid synthase; ERF1, ethylene response factor 1; NPR1, nonexpressor of pathogenesis-related genes 1; PR3, pathogenesis-related 3 gene; PR1, pathogenesis-related 1 gene.
One of the common defense mechanisms in xylem vessels is the formation of tyloses (Beckman, 1964; Talboys, 1972; Grimault et al., 1994; Rahman et al., 1999; Fradin and Thomma, 2006; Yadeta and Thomma, 2013). The formation time and number of tyloses vary greatly between plants, and the tyloses content of disease-resistant plants is significantly higher than that of disease-susceptible plants (Grimault et al., 1994; Xu et al., 1997; Fradin and Thomma, 2006; Hu et al., 2008). In this study, a mass of tyloses and gums was observed in the cortical roots of banana plants treated with isotianil, while almost no pathogen was observed where tyloses appeared (Figure 6). This result shows that isotianil can induce tylose formation in the banana root cortex to prevent further infection of TR4 into the corms.
When plants encounter stress challenges, they can quickly initiate corresponding defense responses to enhance their resistance (Conrath et al., 2002; Acharya et al., 2011; Tanou et al., 2012). Elicitors are a class of substances that can trigger defense responses by mimicking the interaction of corresponding signaling molecules with homologous receptors in plants (Nimchuk et al., 2003). In this study, prior application of isotianil significantly enhanced the expression of PR1, PR3, NPR1 and ERF1, and expression increased further in isotianil-treated plants (TR4+R1 and TR4+R2) compared with TR4-inoculated plants. Isotianil pre-application significantly induced the expression of the key SA-pathway genes PR1, PR3 and NPR1 in banana plants, suggesting that isotianil may initiate the SA pathway to improve banana resistance to Fusarium wilt (Figure 8); this is consistent with previous results in rice (Toquin et al., 2012). Studies have shown that ERF1, a key downstream responder of the ET and JA pathways (Zhu et al., 2011; Huang et al., 2016), plays an important role in plant disease resistance (Berrocal-Lobo and Molina, 2004; Meng et al., 2013; Xing et al., 2017). In isotianil-treated plants, ISR-related genes such as ERF1 were also significantly upregulated at some time points (Figure 8), which is a new finding. Taken together, these results suggest that isotianil is a potent resistance inducer that can significantly enhance the expression of key genes of the SAR and ISR pathways in banana plants (Figure 8).
Another interesting finding was the large accumulation of starch grains in the corm cells (Supplementary Figure 1); starch granules and TR4 could not coexist in the same time and space in diseased corms (Dong et al., 2019). Many studies have reported that starch content decreases or increases under abiotic stress (Villadsen et al., 2005; Pressel et al., 2006; Goyal, 2007; Damour et al., 2008). The accumulation of starch granules is not only an important factor in plant responses to abiotic stress but is also closely related to plant disease responses (Takushi et al., 2007; Etxeberria et al., 2009). Combined with the earlier observations, the expression of starch-synthesis-related genes in the corm was measured by qPCR; the results showed that the expression of key genes in 'Yunjiao No.1' was significantly upregulated, with higher upregulation after isotianil application (Figure 7B). In addition, the expression of these genes in 'Yunjiao No.1' was much higher than in 'Baxi' at some time points. Furthermore, the starch content in corms of the different cultivars was measured, and the level in 'Yunjiao No.1' (disease-resistant cultivar) was much higher than in 'Baxi' (susceptible cultivar); in 'Baxi', the starch content after isotianil application was also much higher (Figure 7A). On the one hand, the accumulation of starch may increase cell density (Kuang et al., 2013) and directly inhibit the diffusion of TR4 in the corm; on the other hand, starch, as an important energy substance, may directly participate in the synthesis of resistance-related substances in cells (Dong et al., 2019). Therefore, we speculate that the accumulation of starch grains in corm cells may be closely related to plant disease resistance.
Figure 7. (A) Starch contents in banana corms at different time points post TR4 inoculation (1, 7, 14 and 62 dpi). The data represent three independent replicates. Significant differences were determined at P < 0.05 by Duncan's multiple range test; lowercase letters a, b, c, d mark significant differences (the same letter means no difference within a period, while different letters indicate a significant difference). Error bars represent ± standard deviation. (B) Relative expression levels of key starch-synthesis genes in corms from 1 dpi to 62 dpi. The heat map illustrates the fold change in expression (log10) of key starch-synthesis genes in the corm; red indicates down-regulation, green indicates up-regulation and black indicates no effect on gene expression. Three biological replicates and three technical replicates were used in the data analysis. AGPase, ADP-glucose pyrophosphorylase; SBE, starch branching enzyme; GBSS, granule-bound starch synthase.
There are differences in the resistance of the two cultivars. Our results showed that 'Yunjiao No.1' reduced the incidence of FWB and alleviated disease symptoms much more than 'Baxi': the disease index of 'Yunjiao No.1' was significantly lower than that of 'Baxi' (Figure 3). In addition, TR4 biomass in the corms of 'Yunjiao No.1' (2,058.42 copies/g) was much lower than in 'Baxi' (10,905.69 copies/g) at 62 dpi (Figure 4B). Moreover, the contents of tyloses and starch grains in 'Yunjiao No.1' were higher than in 'Baxi', with a consistent trend in disease resistance between 'Yunjiao No.1' and isotianil-treated 'Baxi'. Furthermore, the expression of the resistance gene ERF1 and the starch-related genes AGPase, SBE and GBSS was higher in 'Yunjiao No.1' than in 'Baxi'. Together, these results demonstrate that 'Yunjiao No.1' is more resistant to FWB than 'Baxi', and that resistant banana plants may prevent fungal infection by increasing the tyloses content in the vascular bundles and the starch grains in the corms to form a physical barrier, and by activating immune pathways such as SAR and ISR in the corms.
In the next step of our research, we will continue to explore the defense mechanisms of different resistant banana cultivars in the face of TR4 infestation, especially the relationship between tyloses, starch grain formation and plant resistance.
Conclusion
This study has shown that the plant elicitor isotianil can significantly reduce the impact of FWB and protect different banana cultivars. In addition, we found that the corms are important in defending against further TR4 infestation. The elicitor isotianil is able to induce the formation of tyloses in the cortical vascular tissues of the corms, preventing the pathogen from entering the corms from the roots. It can activate the three major systems of ISR, SAR and starch granule synthesis in the corms and inhibit the diffusion of the pathogen in the corms, thereby reducing the impact of FWB. In addition, 'Yunjiao No.1' is more resistant than 'Baxi'. In summary, where biological, chemical and agronomic measures alone do not achieve ideal control of FWB, enhancing cultivar resistance together with elicitor isotianil application is a promising control strategy for banana growers.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
SUPPLEMENTARY FIGURE 1
Observation of starch granules in corm cells at different time points (1, 7, 14 and 62 dpi) after TR4 inoculation. TR4 hyphae and starch grains are indicated by white arrows and yellow arrows, respectively, in the banana corms. Photographs were taken under the GFP channel and through transmitted light (A-P).
"year": 2023,
"sha1": "ab7e504a8b7ca9c466cc1cfb70f3f3ffb7237eaf",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2023.1145837/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f438ab17a8fde8d1b512ff4bbcbef2c843a722e5",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Additional treatment with Carnoy solution in surgical therapy of ameloblastomas: Case report
Introduction: Ameloblastoma is a benign neoplasm characterized by the proliferation of odontogenic epithelium that mainly affects the gnathic bones and, due to its invasive and expansive growth, presents high rates of recurrence after surgical treatment. Among the most conservative treatments are enucleation and marsupialization; among the radical ones, resections are more widespread. Objective: The objective is to present, through a case report, conservative surgical treatment with enucleation followed by the use of Carnoy's solution. Case report: A 24-year-old male patient arrived at the outpatient clinic of Hospital da Restauração with complaints of pain of mild and constant intensity in the region of the left mandibular angle, with an evolution of three weeks. On a panoramic X-ray, the presence of the included tooth 38 was associated with an extensive unilocular radiolucent lesion surrounding the mandibular angle and ramus. Preoperative examinations were performed for incisional biopsy. The histopathological diagnosis was unicystic ameloblastoma. In view of the histopathology obtained, we opted for enucleation of the lesion with concomitant application of Carnoy's solution directly to the lesion region. Discussion: The choice of therapeutic approach depends on the size, type of lesion, location and histopathology. After surgery, clinical and radiographic follow-up is necessary to assess possible recurrences. Carnoy's solution is a cauterizing agent with moderate tissue penetration, rapid local fixation and hemostatic action, whose surgical use in cystic lesions dates from the beginning of the 20th century. Conclusion: Conservative treatment with the enucleation technique followed by complementary therapy using Carnoy's solution proved to be quite effective.
Keywords: Ameloblastoma; Complementary therapies; Neoplasms.
Introduction
Ameloblastoma is a benign, slow-growing, locally invasive and expansive tumor that presents high rates of recurrence after surgical treatment (Aragão 2014; Effiom et al., 2018; Sheela et al., 2019). The tumor may occur at any age; however, there is a predominance between the third and fifth decades of life (Effiom et al., 2018; Palanisamy & Jenzer 2020; Sheela et al., 2019; Chai et al., 2019). In the initial phase, ameloblastomas present scarce clinical features, without symptomatology, making early diagnosis rare (Kruschewsky et al., 2010). They usually present slow growth, often associated with expansion of the cortical bone, which leads to facial deformities (Paikkatt et al., 2007; Kreppel & Zöller 2018). When signs or symptoms are present, patients usually report painless swelling associated with paresthesia or malocclusion, but most cases are discovered on routine radiographs (Kruschewsky et al., 2010).
In radiographic studies, most lesions appear radiolucent and multilocular with well-defined limits that may resemble 'soap bubbles' or a 'honeycomb'; however, due to their ability to infiltrate the bone marrow spaces, these limits sometimes do not reflect the actual extent of the lesion (Aragão 2014; Krishnapillai & Angadi 2010; Palanisamy & Jenzer 2020). In this context, ameloblastoma, according to its clinical and radiographic characteristics, is classified into three types (Neagu et al., 2019): multicystic, unicystic and peripheral (extraosseous). There is also a malignant form, with few cases described in the literature (Palanisamy & Jenzer, 2020). Histologically, multicystic ameloblastomas can be classified as follicular, plexiform, acanthomatous, granular cell, basaloid and desmoplastic. The unicystic type can be classified as intraluminal, mural or extramural (Neville 2011; González-González et al., 2020), with the mural variant tending to recur (Marimuthu et al., 2020).
The surgical therapy of ameloblastoma admits several types of approach, from the most conservative to the most radical, and the professional must choose the best treatment option (Carvalho et al., 2010; Neagu et al., 2019). Among the most conservative treatments are enucleation and marsupialization (Chai et al., 2019); among the radical ones, resections are more widespread. The choice of therapeutic approach depends on the size, type of lesion, location and histopathology (Palanisamy & Jenzer 2020). Radical interventions have a lower recurrence rate (Saraiya 2020), but most of them create aesthetic and functional damage that is difficult to reconstruct (Effiom et al., 2018; Chai et al., 2019). All extirpative surgical procedures aim at total removal of the lesion and elimination of remaining cells, so conservative treatments have a narrower spectrum of action compared with radical ones (Pogrel & Montes 2009). However, some authors such as Lee et al. (2004) believe that conservative treatment provides a better quality of life, and Marimuthu et al. (2020) report the importance of enucleation and the use of Carnoy's solution even in pediatric patients, a population in which ameloblastoma is rare (Sheela et al., 2019).
On the other hand, the conservative approach is not recommended for the multicystic variety, being more indicated for the unicystic variety in all its variants. To give greater safety to conservative treatment, additional procedures such as the use of chemical agents (Carnoy's solution) or thermal agents (cryotherapy) are recommended to treat the surgical bed and eliminate possible remaining cells. Ameloblastomas present high rates of recurrence (Neagu et al., 2019), so after treatment follow-up is necessary for a period of at least 5 years (Lee et al., 2004).
Carnoy's solution is a cauterizing agent with moderate tissue penetration, rapid local fixation and hemostatic action, whose surgical use in cystic lesions dates from the beginning of the 20th century. Its application to the bony cavity after removal of an invasive lesion provides a safety margin through chemical necrosis of up to 1.5 mm in depth (Williams & Connor 1994). Each 10 mL of solution contains 6 mL of absolute alcohol, 3 mL of chloroform and 1 mL of glacial acetic acid associated with 1 g of ferric chloride, and it can be applied for three minutes after enucleation directly in the bony cavity (Lee et al., 2004).
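As a quick illustration of the proportions involved (a sketch; only the ratios from the recipe above are used, and any preparation should of course follow the cited protocol):

```python
# Scaling the Carnoy's solution recipe quoted above (per 10 mL: 6 mL
# absolute alcohol, 3 mL chloroform, 1 mL glacial acetic acid, 1 g
# ferric chloride). Purely illustrative arithmetic.
RECIPE_PER_10_ML = {
    "absolute alcohol (mL)": 6.0,
    "chloroform (mL)": 3.0,
    "glacial acetic acid (mL)": 1.0,
    "ferric chloride (g)": 1.0,
}

def scale_recipe(target_volume_ml: float) -> dict:
    factor = target_volume_ml / 10.0
    return {ingredient: amount * factor
            for ingredient, amount in RECIPE_PER_10_ML.items()}

print(scale_recipe(25.0))
# {'absolute alcohol (mL)': 15.0, 'chloroform (mL)': 7.5, ...}
```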
Therefore, the objective of this study is to present the resolution of a clinical case of unicystic ameloblastoma in which enucleation of the lesion with concomitant application of Carnoy's solution directly to the lesion region was chosen in order to guarantee an optimal prognosis for the patient.
Methodology
This is a qualitative, descriptive clinical case study produced by the technique of direct observation.
According to Pereira et al. (2018), research of this character aims to elucidate a particular subject and study it thoroughly, with the patient's permission, through access to medical records, clinical examination, and the available laboratory and imaging exams, with the researcher being the paramount instrument of this process. All the ethical principles proposed by the Declaration of Helsinki were respected with regard to the patient's clinical information. The patient consented to the study and disclosure of his case, signing the Informed Consent Form provided by our team. The lesion was discovered, and preoperative exams were then performed for an incisional biopsy procedure. The histopathological diagnosis was unicystic ameloblastoma. In view of the histopathology obtained, enucleation of the lesion with concomitant application of Carnoy's solution directly to the lesion region was chosen (Figure 2). During convalescence, it was necessary to keep the patient in maxillomandibular fixation for 30 days to reduce the risk of pathological fracture, and the patient remained under clinical and radiographic follow-up thereafter.
Discussion
Ameloblastoma is a benign neoplasm characterized by proliferation of the odontogenic epithelium (Neville 2011; Palanisamy & Jenzer 2020). Because it is a potentially invasive type of tumor with a good number of histopathological variants, additional histopathological and radiographic exams are needed for an accurate diagnosis and, from this, the choice of the best form of treatment (Paikkatt et al., 2007; Carvalho et al., 2010; González-González et al., 2020).
The data found in the literature regarding location and age prevalence are confirmed by the clinical findings described in this case, corroborating the work of Fregnani et al. (2010), in which the majority of tumors occurred between 20 and 30 years of age and mainly affected the mandibular angle region.
According to Neville et al. (2011) and Palanisamy and Jenzer (2020), ameloblastoma generally evolves without symptoms, since pain and paresthesia are rarely reported. In the present case, the clinical findings diverge from the literature, since the patient was taken to the hospital because of constant painful symptoms of mild intensity in the region where the tumor was located. Also important was the association of the tumor with the crown of an unerupted tooth, which is also characteristic of lesions such as the dentigerous cyst and the odontogenic keratocyst. Thus, it is important to perform a histopathological examination that, besides excluding differential diagnoses, is fundamental for the surgical planning and treatment of the lesion (Neagu et al., 2019).
The histopathological and radiographic exams were extremely important and served as the basis for decision-making regarding treatment of the tumor. Because it was a unicystic ameloblastoma, a conservative treatment, enucleation, was chosen. Because of the high recurrence rates associated with conservative treatment described in the literature (Chai et al., 2019), Carnoy's solution was used to provide a safety margin, removing epithelial remnants that may promote recurrence of the lesion (Krishnapillai & Angadi 2010).
Application of Carnoy's solution directly in the surgical bed for three minutes after enucleation may reduce the chance of recurrence, as may the use of cryotherapy (Williams & Connor 1994; Costa et al., 2019). Both methods promote cauterization and bone necrosis. However, Carnoy's solution compares favorably with cryotherapy in both handling and postoperative complications (Lee et al., 2004).
Two-year follow-up did not reveal findings consistent with recurrence of the tumor lesion, showing that the conservative treatment with a safety margin used in this case is viable. However, clinical, radiographic and, when necessary, histopathological follow-up is still needed for a period of 10 years to evaluate possible recurrence (Kruschewsky et al., 2010; Aragão 2014).
Conclusion
The definitive treatment for ameloblastomas is surgical, and the histopathological diagnosis of the type of ameloblastoma is fundamental for the surgical decision; conservative treatment of unicystic lesions is effective with the enucleation technique followed by complementary therapy using Carnoy's solution.
"year": 2021,
"sha1": "68c76b5a3db2694fa536f446439c911aba6c4c2c",
"oa_license": "CCBY",
"oa_url": "https://rsdjournal.org/index.php/rsd/article/download/15235/13973",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6b15503bfceaeae5386bccf7bfb2d5235c12cb37",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Impact of COVID-19 pandemic on prevalence of Clostridioides difficile infection in a UK tertiary centre
Serious concerns have been raised about a possible increase in cases of Clostridioides difficile infection (CDI) during the COVID-19 pandemic. We conducted a retrospective observational single centre study which revealed that total combined community and hospital-based quarterly rates of CDI decreased during the pandemic compared to the pre-pandemic period.
Introduction
Coronavirus disease (COVID-19), caused by the Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2), emerged in Wuhan (China) in early December 2019 and has spread rapidly worldwide, causing a global pandemic. The elderly population is disproportionately affected by COVID-19, with initial reports showing that ~80% of the deaths due to COVID-19 occur in those over the age of 65 [1]. Due to the enhanced usage of broad-spectrum antibiotics during the current pandemic, overcrowding in hospitals, and the fact that Clostridioides difficile infection (CDI) largely affects the elderly, serious concerns have been raised about a consequent possible increase in transmission of hospital-acquired infections such as CDI, particularly in frail, elderly patients [2]. There are very few clinical surveillance studies reporting CDI with COVID-19. Sandhu et al. described 9 patients at a medical centre in Detroit, Michigan, with SARS-CoV-2 and CDI during March 11 – April 22, 2020, the majority of whom were elderly females with high ATLAS scores (https://www.mdcalc.com/atlas-score-clostridium-difficile-infection) and multiple co-morbidities [3]. The onset of diarrhoea was found to occur after COVID-19 diagnosis in 7 of these cases, with a median of 6 days from CDI diagnosis to COVID-19 diagnosis. In another high-volume US tertiary-care centre, Mount Sinai, New York, Luo et al. did not find a difference in hospital onset CDI (HO-CDI) rate during the pandemic despite a trend toward increased high-risk antibiotic exposures [4]; this is further corroborated by the similar findings of a retrospective study by Allegretti et al. across 9 hospitals in Massachusetts [5]. More recently, Sehgal et al. identified 21 patients (20 hospitalized) with median age 70.9 years who had CDI and COVID-19 within 4 weeks of each other [6].
From a European perspective, Granata et al. identified 32 COVID-19 patients who developed HO-CDI across 8 participating hospitals in Italy during the study period from February through July 2020, corresponding to a HO-CDI prevalence of 0.38%. The presence of previous hospitalization, steroid administration, and consumption of antibiotics during hospitalization were the main risk factors associated with CDI [7]. Bentivegna et al. assessed differences in hospital-acquired CDI (HA-CDI) in the medical wards of a hospital in Rome before and during the COVID-19 pandemic, finding that HA-CDI was significantly lower during the pandemic with respect to previous years. However, COVID-19 departments showed a higher HA-CDI incidence with respect to COVID-19-free wards during 2020, suggesting that SARS-CoV-2 infection may be a possible risk factor for CDI [8]. In a Spanish tertiary centre study, Ponce-Alonso et al. observed a 70% reduction in the incidence density of nosocomial CDI during the period with the maximal incidence of COVID-19 compared with the same period in the preceding year, which they attributed to the reinforcement of infection control measures [9]. In contrast, Lewandowski et al. found a significant increase in the incidence of CDI during the COVID-19 pandemic compared with the pre-pandemic period in their single centre study in Warsaw, Poland (10.9% vs 2.6%; P < 0.001) [10].
The main aim of this study was to assess the impact of the COVID-19 pandemic on total hospital and community-associated quarterly rates of CDI and in-hospital antimicrobial consumption patterns before and during the pandemic. We hypothesized that the reinforcement of infection control measures implemented to prevent COVID-19 transmission would lead to a decrease in total CDI case burden in our tertiary care centre.
Methods
We conducted a single centre retrospective analysis in Nottingham University Hospitals NHS Trust (NUHT), UK, from Jan 2019 through to June 2021. NUHT is a large acute teaching hospital in England with 1700 beds, 90 wards and approximately 16,000 staff, providing specialist medical and surgical services to 2.5 million residents of Nottingham and its surrounding communities, and tertiary services to a total of 3–4 million people from neighbouring counties. During the pandemic, NUHT continued to admit both COVID-19 and non-COVID-19 patients and was therefore not exclusively dedicated to coronavirus disease. Throughout the pandemic, infection control measures (including PPE, mask wearing, heightened cleaning, adherence to social distancing and the limiting of visitors) were implemented and adapted in line with national guidance. Prudent antibiotic prescribing practices remained in place throughout the pandemic and the Pharmacy department launched an antibiotic prescribing guideline for COVID-19 to help reinforce appropriate use of antibiotics during the pandemic. All antibiotic audits remained in place throughout the pandemic.
Using the database of the participant Trust, we identified the total CDI case burden (community and hospital combined, in all subjects ≥2 years of age) and hospitalized adult (≥18 years old) COVID-19 patients with CDI reported from January 2019 (one year before the first UK lockdown in March 2020) through to the end of June 2021. We compared total quarterly CDI cases per 10,000 occupied bed days (OBD) during the pandemic with the preceding control years 2019/2020. We also documented OBD (%), total COVID-19 admissions, and consumption of antimicrobials by quarter. A diagnosis of CDI was made in patients with new-onset diarrhoea and confirmed by means of toxin immunoassays. Some PCR-positive, toxin-negative cases were treated; however, as this was based on clinical suspicion or susceptibility of the patient rather than on definitive clinical cases, these were not included in the analysis. Basic demographic and laboratory data were collected using Excel and analysed by means of descriptive statistics. Rates of CDI per 10,000 OBD were compared between quarters by means of a binomial test of proportions. A corrected P-value of 0.008 was considered significant to account for multiple comparisons. The research was reviewed by the clinical governance team at the Nottingham University Hospitals NHS Trust and informed consent was not required since this was a service evaluation and minimal-risk retrospective study.
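To make the rate comparison concrete, the following R sketch applies a two-sample binomial test of proportions to quarterly CDI counts per occupied bed days; the counts and OBD denominators shown are hypothetical placeholders, not the study's actual figures, and the corrected significance threshold of 0.008 is taken from the text.

```r
# Hypothetical quarterly figures: CDI cases and occupied bed days (OBD)
cases <- c(q2_2020 = 52, q2_2021 = 28)
obd   <- c(q2_2020 = 140000, q2_2021 = 135000)

# Rates per 10,000 OBD
rates <- 10000 * cases / obd
print(rates)

# Two-sample binomial test of proportions between the two quarters
res <- prop.test(x = cases, n = obd)

# Corrected threshold (0.008) to account for multiple comparisons
cat("p =", res$p.value, "| significant:", res$p.value < 0.008, "\n")
```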
Results
A total of 491 cases of CDI were observed over the study period in over 1.4 million bed days. The CDI infection rates per 10,000 OBD for each yearly quarter for 2019, 2020 and 2021 are shown in Fig. 1. The CDI rate per 10,000 OBD was significantly lower in the first and second quarters of 2021 compared to that seen during the same period in 2020 (p < 0.0001). The quarterly defined daily doses (DDD) of antimicrobials per 10,000 OBD were also lower in the first quarter of 2021 compared to the preceding 2 years (Supplementary Fig. 1). However, the total CDI rates in 2020 were significantly higher for the quarterly period from July–Sept compared to the same time in 2019, p = 0.005. Data pertaining to the number of CDI cases, number of OBD and DDD of antimicrobials per 10,000 OBD in the time periods before, during and after the emergence of the pandemic are detailed in Supplementary Table 1. Details of OBD (%) and total COVID-19 admissions per quarter are presented in Fig. 2.
We identified 8 cases (median age 74.5 years, range 65–84 years, with a male:female ratio of 5:3) with SARS-CoV-2 and CDI. The mean duration from SARS-CoV-2 diagnosis to CDI diagnosis was 21 days, and in all cases, CDI was diagnosed after SARS-CoV-2 diagnosis.
Discussion
In this study, we observed a significant reduction in the total CDI infection rate per 10,000 OBD during the current pandemic compared with the pre-pandemic period. There are several potential reasons for this observation. Firstly, it is likely that a reduction in patient mobility, including a general reluctance to present to primary or secondary care, as well as a reduction in overall testing, may have led to underestimation of the true burden of CDI in the community. Despite the widespread use of antibiotics, the total CDI burden may have been suppressed due to aggressive reinforcement of infection control measures such as frequent handwashing, augmented environmental cleaning regimes, universal PPE, and social distancing, in addition to limited patient visits and movement, all of which may have indirectly limited the nosocomial spread of C. difficile. Furthermore, a forced reduction in hospital consultations and surgical procedures may have contributed to fewer opportunities to introduce C. difficile into the hospital from the community.
The higher CDI case burden seen in July–Sept 2020 may be partially explained by Annual Epidemiological Commentary data on seasonal trends from Public Health England, which showed that the greatest proportion of CDI cases was reported in the July–September quarter of the financial year from 2016/17 onwards (between 26% and 29% of cases each year) [11]. An explanation for this shift in seasonality is currently lacking. Interestingly, studies have demonstrated seasonal variability in rates of CDI [12,13]. Rodriguez-Palacios et al. [12] observed that C. difficile was more commonly isolated from retail meat in Canada in winter, suggesting a seasonal component may exist. Clements et al. [13] analysed 20 studies in their systematic review, which reported a peak in CDI cases in the spring and contrastingly lower frequencies of CDI in summer/autumn across Northern and Southern hemispheres and continents. It remains possible that environmental or food contamination with C. difficile spores may explain variation in seasonal patterns of CDI. Indeed, strains of C. difficile have been detected in various environmental sources, including farms, livestock animals, water (sewage and rivers) and agricultural produce [14–16] as well as public lawn spaces [17]. However, there have been no cases of foodborne transmission of CDI reported to date.
Our study is limited by its retrospective design and single centre analysis. We did not distinguish between community-acquired and hospital-acquired CDI. Nevertheless, our findings support the importance of maintaining a heightened level of attention regarding infection control measures during the pandemic, which may help significantly decrease overall C. difficile transmission and related health economic costs.
Declaration of competing interest
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: T.M. is a consultant advisor for Takeda. All other authors declare that they have no conflicts of interest.
"year": 2021,
"sha1": "d38ba333cfde5e8aee90bda2f81db4184e2b813b",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.anaerobe.2021.102479",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "af8aa1bb23f4e26a28d6f33284d4104c56479ec7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
T-cell Non-Hodgkin lymphoma associated with myelodysplasia: A case report in a child
Non-Hodgkin’s lymphomas (NHL) are a group of malignant diseases originating in the organs and cells of the immune system. Childhood NHL exhibits diffuse and extranodal involvement. Childhood NHL generally arises from lymphoid precursors and B-cell type is found in 80% of the cases (3). Non-Hodgkin’s lymphomas typically metastasize early and the risk of leukemic presentation and central nervous system relapse is high in NHLs (4).
Introduction
Myelodysplastic syndrome (MDS) is a clonal bone marrow disease characterized by ineffective erythropoiesis. MDS patients have cytopenia. The risk of acute leukemia, and particularly of acute myeloblastic leukemia (AML), is the most important characteristic of the disease (1).
MDS is defined as primary or de novo MDS if it develops in a child who has no other diseases and who has not received chemotherapy or radiotherapy for any other reason, while it is considered secondary MDS if there is a factor which promotes the development of myelodysplasia, especially a history of chemotherapy or radiotherapy (2). Genetic factors, ionizing radiation, chemotherapy, benzene, smoking, alcohol, hair dyes and over-consumption of foods especially rich in phenol are risk factors for the development of MDS (2).
In primary MDS cases, cytogenetic abnormalities are found in 50-70% of the patients, while this rate rises to over 85% in cases with treatment-related secondary MDS (2).
Non-Hodgkin's lymphomas (NHL) are a group of malignant diseases originating in the organs and cells of the immune system. Childhood NHL exhibits diffuse and extranodal involvement. Childhood NHL generally arises from lymphoid precursors, and the B-cell type is found in 80% of the cases (3).
Non-Hodgkin's lymphomas typically metastasize early and the risk of leukemic presentation and central nervous system relapse is high in NHLs (4).
Myelodysplasia associated with NHL has been rarely described in the literature. This is suggested to be due to a defect in the immune system, an up-regulation of some cytokines, and a common molecular origin (5)(6)(7)(8)(9)(10).
In this article, we reported a 7-year-old pediatric NHL case with a normal karyotype and myelodysplasia in the bone marrow and discussed the pathogenesis of the association of NHL and myelodysplasia.
Case
A 7-year-old female patient was admitted with a swelling on the right side of the neck which had been noted 5 days earlier. She had had no complaints such as fever, weight loss, or night sweating. The past medical history and family history of the patient revealed no significant findings. The physical examination of the patient revealed multiple lymphadenopathies in both cervical chains in the submandibular region, with the largest on the left measuring 2.5 × 1.5 cm and the largest on the right measuring 3.5 × 1.5 cm, and a 2 × 2 cm lymphadenopathy in the right inguinal region. The laboratory examinations revealed a hemoglobin value of 10.1 g/dl, a WBC count of 1.2 × 10⁹/L, and a platelet count of 159 × 10⁹/L. The peripheral blood smear revealed no blasts. The lactate dehydrogenase level was 705 U/L, and the vitamin B12 level and the other biochemical test results were considered normal. The immunoglobulin A (133.0 mg/dl), G (955.0 mg/dl), M (44.0 mg/dl) and E (49.25 IU/ml) levels were consistent with the age of the patient.
The results of the direct Coombs test and the ELISA-based Parvovirus PCR, EBV, and CMV assays were negative.
Abdominal ultrasonography revealed multiple ovoid and round lymphadenopathies with loss of the echogenic hilus in the para-aortic, parailiac and mesenteric regions, with the largest measuring 21 × 13 mm, and neck ultrasonography revealed multiple reactive lymphadenopathies with echogenic hilus and hilar blood flow in both cervical chains in the submandibular region, with the largest one on the left measuring 23 × 10 mm and the largest one on the right measuring 32 × 14 mm.
The bicytopenia of the patient continued for 4 days in the clinical follow-up, and bone marrow aspiration was performed. The bone marrow aspiration smear revealed 4% monocytes, 35% normoblasts, 30% lymphocytes, 15% myelocytes, 6% metamyelocytes, 6% neutrophils, and 4% blasts. Dysplasia was found in bone marrow cells. Diffuse hypogranular myeloid cells, dysplastic megakaryocytes and erythroblastic cells were observed (Figure 1a, 1b, 1c). The cervical lymph node biopsy result was consistent with diffuse NHL with a high-grade malignancy (Figure 2).
Discussion
MDS is known to be related to a process in tumor differentiation. A high incidence of MDS was reported in relation with solid tumors, such as lung, colon, prostate and liver cancers (11). The same relationship was described between MDS and lymphoid neoplasms, such as acute lymphoblastic leukemia, chronic lymphocytic leukemia, and NHL (12)(13)(14)(15). In 1996, a group of Spanish investigators studied the association of lymphoid malignancy in patients with primary MDS, found an association rate of only 1%, and concluded that this association could be a coincidence (14).
The association of MDS and lymphoma was described in 21 cases in the literature.However, in 9 of these 21 cases MDS and NHL were diagnosed simultaneously.
The mechanisms responsible for the development of NHL in MDS patients have not been clarified yet. MDS is generally considered a clonal disorder of pluripotent stem cell origin with a potential to differentiate into lymphoid and myeloid cells. Some authors suggest that the two diseases are caused by the same neoplastic process or a common origin (6).
Another opinion is that MDS plays a predisposing role in the development of lymphoid neoplasms (5). MDS is associated with abnormal immunological functions. Abnormal lymphocyte count and function (especially of natural killer cells) induce growth of neoplastic cells. The immune system defect underlying the development of myelodysplasia is also present in NHL (8,9).
Shimanoto et al. related the association of MDS and NHL to the up-regulation of particular cytokines, such as IL-6 and vascular endothelial growth factor (VEGF), and reported a case of anaplastic large-cell lymphoma with high IL-6 and VEGF levels and bone marrow myelodysplasia at the time of diagnosis (10).
Chromosomal anomalies are common in both MDS and lymphomas. The questions of whether there are other cytogenetic abnormalities not yet known, and whether the association of MDS with NHL is caused by these common cytogenetic anomalies, still remain to be answered.
In 1998, Mori A et al. found bone marrow dysplasia simultaneously with the diagnosis of angiocentric lymphoma in a 46-year-old male patient. They thought that the association might be caused by cytokines, such as interleukin-2, -4, and -6 (28).
Huang HH et al. also reported a case with the association of bone marrow dysplasia and lymphoma in 2009. They suggested that this association might be caused by a common chromosomal anomaly (del(20q)), based on the fact that the patient had the 20q deletion in both myeloid and lymphoid cell lines (29).
Conclusion
In conclusion, the association of MDS and lymphoma is very rare. Only 21 cases have been reported to date. However, de novo MDS and lymphoma were simultaneously identified in 8 of these cases. We think that this association may be caused by a common molecular origin, common chromosomal anomalies and cytokines. Large-scale studies including many cases are needed on this subject.
Immunohistochemical examination of the cervical lymph node was consistent with T-cell lymphoma. Analyses of 17p13.1 (p53), 20q12, 5q31 and 7q31 gene deletions and of monosomy/trisomy of chromosomes 7 and 8 were performed with bone marrow cytogenetic and FISH (fluorescence in situ hybridization) studies. The results were accepted as normal. The patient left our hospital to continue her diagnostic studies and treatment in another institution.
"year": 2018,
"sha1": "3fd393dfcb7142dbd8d86776d4cc1f2d59204479",
"oa_license": "CCBYNC",
"oa_url": "https://medscidiscovery.com/index.php/msd/article/download/245/233",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "3fd393dfcb7142dbd8d86776d4cc1f2d59204479",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Identification and Validation of a Novel Ferroptotic Prognostic Genes-Based Signature of Clear Cell Renal Cell Carcinoma
Simple Summary
Clear cell renal cell carcinoma (ccRCC) is one of the leading types of kidney malignancy and is closely related to ferroptosis, an iron-dependent regulated cell death with lipid peroxide accumulation. A signature of nine ferroptotic genes was identified as an independent prognostic factor via construction in The Cancer Genome Atlas (TCGA) database and validation in the ArrayExpress database. This signature could successfully divide patients into low- and high-risk groups to predict survival rate. Compared with the other eight genes, glutaminase 2 (GLS2) played a crucial role during erastin-induced ferroptosis in ACHN and Caki-1 cells. It was discovered for the first time that GLS2 might be a ferroptotic suppressor in ccRCC.
Abstract
Renal cell carcinoma (RCC), as one of the primary urological malignant neoplasms, shows poor survival, and the leading pathological type of RCC is clear cell RCC (ccRCC). Differing from other cell deaths (such as apoptosis, necroptosis, pyroptosis, and autophagy), ferroptosis is characterized by iron-dependence, polyunsaturated fatty acid oxidization, and lipid peroxide accumulation. We analyzed the ferroptosis database (FerrDb V2), the Gene Expression Omnibus database, The Cancer Genome Atlas database, and the ArrayExpress database. Nine genes that were differentially expressed and related to prognosis were involved in the ferroptotic prognostic model via the least absolute shrinkage and selection operator Cox regression analysis, which was established in ccRCC patients from the kidney renal clear cell carcinoma (KIRC) cohort in TCGA database and validated in ccRCC patients from the E-MTAB-1980 cohort in the ArrayExpress database. The signature could be an independent prognostic factor for ccRCC, and high-risk patients showed worse overall survival. Gene Ontology and Kyoto Encyclopedia of Genes and Genomes analyses were utilized to investigate the potential mechanisms. The nine genes in ccRCC cells with erastin or RSL3 treatment were validated to find the crucial gene. The glutaminase 2 (GLS2) gene was upregulated during ferroptosis in ccRCC cells, and cells with GLS2 shRNA displayed lower survival, a lower glutathione level, and a high lipid peroxide level, which illustrated that GLS2 might be a ferroptotic suppressor in ccRCC.
Introduction
Renal cell carcinoma (RCC) is one of the main urological malignant tumors, and there were more than 430 thousand new cases and nearly 180 thousand deaths worldwide in 2020 [1]. The ratio of men to women diagnosed with RCC was nearly 1.7:1 and 90% of cases
The following data processing was handled with R software. The gene expression of the above databases was normalized with the "edgeR" package [25]. The differentially expressed genes (DEGs) between normal kidney tissue and ccRCC tissue in the GSE53757, GSE66272, GSE71963, and KIRC cohorts were obtained using the "limma" package [26], and DEGs were selected with |log2(FC)| > 1 and a false discovery rate (FDR) < 0.05. The tumor prognostic genes (TPGs) were obtained using the "survival" package and identified by univariate Cox analysis (p < 0.05) of overall survival (OS) [27].
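As an illustration of the screening criteria above, the following R sketch assumes a raw count matrix `counts` (genes × samples), a two-level `group` factor (normal vs. tumour), and overall-survival vectors `os_time`/`os_status`; the voom step is one common bridge between edgeR normalization and limma and is an assumption here, not necessarily the authors' exact pipeline.

```r
library(edgeR)
library(limma)
library(survival)

# DEG screening: |log2(FC)| > 1 and FDR < 0.05 via limma on edgeR-normalized counts
dge    <- calcNormFactors(DGEList(counts = counts))
design <- model.matrix(~ group)                 # group: normal vs tumour
fit    <- eBayes(lmFit(voom(dge, design), design))
tab    <- topTable(fit, coef = 2, number = Inf)
degs   <- rownames(tab)[abs(tab$logFC) > 1 & tab$adj.P.Val < 0.05]

# TPG screening: univariate Cox regression on OS, keeping genes with p < 0.05
expr  <- t(cpm(dge, log = TRUE))                # samples x genes, log-CPM
pvals <- apply(expr, 2, function(g)
  summary(coxph(Surv(os_time, os_status) ~ g))$coefficients[1, "Pr(>|z|)"])
tpgs  <- names(pvals)[pvals < 0.05]
```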
Establishment and Validation of the FPM
The LASSO Cox regression analysis was used to select ferroptotic prognostic DEGs (FPEDGs) using the "glmnet" package, and the optimal lambda (λ) was identified through a tenfold cross-validation process [28,29]. The risk score was computed by summing the product of each normalized FPEDG expression and its corresponding multivariate Cox regression coefficient (β). In the KIRC cohort, the risk score of every patient was computed using the above formula, and ccRCC patients were divided into two groups (a low-risk group and a high-risk group) by the median risk score. In the E-MTAB-1980 cohort, ccRCC patients were divided into two groups (low-risk and high-risk) according to the median risk score of the KIRC cohort. The distribution patterns of patients were visualized by uniform manifold approximation and projection (UMAP) using the "umap" package and by t-distributed stochastic neighbor embedding (t-SNE) with the "Rtsne" package. The "survminer" package was employed to conduct Kaplan-Meier (K-M) survival analysis. The "timeROC" package was employed to conduct time-dependent receiver operating characteristic (ROC) analysis.
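The model-building steps can be sketched in R as follows. This is a minimal illustration assuming an expression matrix `x` (samples × candidate FPEDGs) and survival vectors `time`/`status`; for brevity it scores patients with the LASSO coefficients directly, whereas the authors used multivariate Cox coefficients for the final risk score.

```r
library(glmnet)
library(survival)
library(survminer)
library(timeROC)

# LASSO Cox with tenfold cross-validation to pick the optimal lambda
y     <- cbind(time = time, status = status)
cvfit <- cv.glmnet(x, y, family = "cox", alpha = 1, nfolds = 10)

# Genes with non-zero coefficients at the optimal lambda define the signature
beta <- coef(cvfit, s = "lambda.min")
sel  <- rownames(beta)[as.numeric(beta) != 0]

# Risk score = sum over selected genes of expression x coefficient; median split
risk  <- as.numeric(x[, sel, drop = FALSE] %*% beta[sel, ])
group <- ifelse(risk > median(risk), "high", "low")

# Kaplan-Meier comparison of the two risk groups
df <- data.frame(time, status, group)
km <- survfit(Surv(time, status) ~ group, data = df)
ggsurvplot(km, data = df, pval = TRUE)

# Time-dependent ROC at 1, 3, and 5 years (times expressed in days)
roc <- timeROC(T = time, delta = status, marker = risk,
               cause = 1, times = c(365, 3 * 365, 5 * 365))
roc$AUC
```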
Independent Prognostic Value of FPM
The risk score and clinical factors including age, gender, grade, and stage were analyzed via Pearson's chi-square test and displayed using a heatmap. The univariate/multivariate Cox regression analysis was employed to estimate the independent prognostic value of FPM and traditional clinical characteristics, and the results were summarized using hazard ratios (HRs) and 95% confidence intervals (CIs).
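A minimal R sketch of this step, assuming a data frame `clin` with columns `os_time`, `os_status`, `age`, `gender`, `grade`, `stage`, and `risk_score` (hypothetical names):

```r
library(survival)

# Univariate Cox regression for the risk score alone
uni <- coxph(Surv(os_time, os_status) ~ risk_score, data = clin)
summary(uni)

# Multivariate Cox regression adjusting for clinical characteristics
multi <- coxph(Surv(os_time, os_status) ~ age + gender + grade + stage + risk_score,
               data = clin)

# Hazard ratios with 95% confidence intervals, as reported in the study
s <- summary(multi)
round(s$conf.int[, c("exp(coef)", "lower .95", "upper .95")], 3)
```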
Establishment and Validation of the Nomogram
According to the results of the multivariate Cox regression analysis, the factors were used to establish a nomogram for predicting survival rates using the "rms" package and the "survival" package. Time-dependent ROC analysis was utilized to evaluate the predictive performance of the FPM using the "timeROC" package. Calibration curves were utilized to estimate the consistency between actual survival rates and predicted survival rates.
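The nomogram and calibration steps might look like the sketch below, again assuming the hypothetical `clin` data frame from above; `time.inc` is set to 3 years so that the bootstrap calibration at `u = 3 * 365` is valid, a requirement of the "rms" package rather than a detail stated in the text.

```r
library(rms)
library(survival)

dd <- datadist(clin); options(datadist = "dd")

# Cox model refit with rms::cph for nomogram construction
f <- cph(Surv(os_time, os_status) ~ age + stage + risk_score, data = clin,
         x = TRUE, y = TRUE, surv = TRUE, time.inc = 3 * 365)

# Nomogram mapping predictor values to 1-, 3-, and 5-year survival probabilities
surv <- Survival(f)
nom  <- nomogram(f,
                 fun = list(function(z) surv(365, z),
                            function(z) surv(3 * 365, z),
                            function(z) surv(5 * 365, z)),
                 funlabel = c("1-year OS", "3-year OS", "5-year OS"))
plot(nom)

# Bootstrap calibration: predicted vs. observed 3-year survival
cal <- calibrate(f, cmethod = "KM", method = "boot", u = 3 * 365, m = 100, B = 200)
plot(cal)
```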
Molecular Functional Analysis
The GO and KEGG analyses of the nine genes were displayed using the "clusterProfiler" package to screen the potential biological processes (BPs), cellular components (CCs), molecular functions (MFs), and pathways. These results were presented using the "ggplot2" package and the "GOplot" package [30].
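For the nine-gene list reported later in the paper, the enrichment calls might look like this sketch; mapping gene symbols to Entrez IDs first is a common requirement of enrichKEGG rather than a step stated in the text.

```r
library(clusterProfiler)
library(org.Hs.eg.db)

genes <- c("DPEP1", "NOX4", "MT1G", "GLS2", "GLRX5",
           "TIMP1", "CA9", "CDCA3", "CYBB")

# Map gene symbols to Entrez IDs
ids <- bitr(genes, fromType = "SYMBOL", toType = "ENTREZID", OrgDb = org.Hs.eg.db)

# GO enrichment across BP, CC, and MF, plus KEGG pathway enrichment
ego <- enrichGO(ids$ENTREZID, OrgDb = org.Hs.eg.db, ont = "ALL",
                pvalueCutoff = 0.05, readable = TRUE)
ekk <- enrichKEGG(ids$ENTREZID, organism = "hsa", pvalueCutoff = 0.05)

dotplot(ego, showCategory = 10)
dotplot(ekk, showCategory = 10)
```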
Cell lines and Cell Culture
ACHN and Caki-1 cells were provided by the Cell Bank/Stem Cell Bank, Chinese Academy of Sciences. ACHN cells were cultivated in DMEM high glucose medium with 1% penicillin-streptomycin and 10% fetal bovine serum, whereas Caki-1 cells were cultivated in RPMI 1640 medium with 1% penicillin-streptomycin and 10% fetal bovine serum. All cells were cultured in a humidified incubator at 37 °C with 5% CO2.
Cell Viability Assay
A total of 5 × 10³ ACHN or Caki-1 cells were seeded into each well of a 96-well plate and incubated overnight; the cells were then treated with various concentrations of erastin (12 h) or RSL3 (6 h) with or without Fer-1 [1 µmol/L (µM)], Lip-1 (1 µM), HCQ (20 µM), NSA (1 µM), or Z-VAD (20 µM) [31]. Then, 5 mg/mL MTT was added to each well and incubated for 4 h at 37 °C. The medium was then carefully removed from the wells and 150 µL DMSO was added to each well. After shaking for 10 min, the 96-well plate was placed in a Multiskan SkyHigh microplate reader (Thermo, Waltham, MA, USA) and the absorbance was detected at a 570 nm wavelength.
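Relative viability from the MTT absorbances is typically computed against the untreated control; the blank-subtraction convention in this R sketch, and the absorbance values, are illustrative assumptions rather than details given in the text.

```r
# Percent viability from MTT absorbance at 570 nm, relative to untreated control
viability_pct <- function(a_treated, a_control, a_blank = 0) {
  100 * (a_treated - a_blank) / (a_control - a_blank)
}

# Hypothetical absorbance readings
viability_pct(a_treated = 0.62, a_control = 0.95, a_blank = 0.08)  # ~62%
```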
Real Time-PCR Assay
Total RNA from cells was extracted using a cellular RNA extraction kit and reverse transcribed into cDNA using a cDNA synthesis kit [32]. Real-time PCR was performed using a Bio-Rad CFX96 Real-Time PCR system (Bio-Rad, Hercules, CA, USA). The result was calculated by the comparative Ct method. All primers were designed with Primer Premier 6 and synthesized by Sangon Biotech (Shanghai, China). The primer sequences are shown in Table S1.
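The comparative Ct calculation reduces to the standard 2^-ΔΔCt formula; the R sketch below uses hypothetical Ct values for a target gene and a reference gene in treated versus control cells.

```r
# Fold change of a target gene (treated vs. control) by the 2^-ddCt method
ddct_fold <- function(ct_target_trt, ct_ref_trt, ct_target_ctl, ct_ref_ctl) {
  dct_trt <- ct_target_trt - ct_ref_trt   # normalize to the reference gene
  dct_ctl <- ct_target_ctl - ct_ref_ctl
  2^-(dct_trt - dct_ctl)                  # fold change relative to control
}

# Hypothetical Ct values
ddct_fold(ct_target_trt = 24.1, ct_ref_trt = 18.0,
          ct_target_ctl = 26.3, ct_ref_ctl = 18.1)  # ~4.3-fold upregulation
```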
Lentiviral Infection
The GLS2 shRNA sequence was taken from references and synthesized by Shanghai Genechem. Viral vector and packaging vectors were transfected into HEK293T cells using Lipofectamine 2000. The medium was replaced after 6 h, and viral particles were harvested after 24 h. After ACHN and Caki-1 cells had been infected for 24 h, the medium was replaced, and cells were cultured for another 48 h. Then puromycin (2.0 µg/mL) was employed to select cell lines.
Western Blot
When cells were cultured to 90% confluence, they were harvested and lysed with RIPA buffer containing protease inhibitor cocktail (HY-K0010, MedChemExpress, Monmouth Junction, NJ, USA). The protein concentration was measured using a BCA protein assay kit (23227, Thermo). Then, samples were separated on a 10% SDS polyacrylamide gel, transferred onto a polyvinylidene difluoride (PVDF) membrane (WBKLS0500, Millipore, Billerica, MA, USA), blocked with 5% skimmed milk for 1 h at room temperature, and blotted with the corresponding antibodies (actin, 1:10,000; GLS2, 1:1000) in 5% skimmed milk overnight at 4 °C. The PVDF membranes were washed with TBST three times and incubated with HRP-conjugated secondary antibody for 1 h at room temperature. After being washed with TBST three times, the PVDF membranes were detected with enhanced chemiluminescence reagents (WBKLS0050, Millipore, Billerica, MA, USA) using a C300 imager (Azure Biosystems, Dublin, CA, USA).
Detection of MDA and GSH Level
The levels of MDA and GSH in cells were measured using MDA and GSH assay kits and detected using a Multiskan SkyHigh microplate reader (Thermo, Waltham, MA, USA).
Lipid Peroxidation Assay
The cells were seeded on glass-bottomed culture dishes and incubated for 24 h. The cells were incubated with DMSO, erastin, or erastin + Lip-1 solutions for 12 h and washed with PBS. Next, the cells were treated with 10 µM BODIPY 665/676 dissolved in PBS for 30 min at 37 °C and washed with PBS. Then the cells were treated with 2 µg/mL Hoechst 33342 for 15 min at room temperature, washed with PBS, and examined using a Zeiss LSM 880 + Airyscan confocal microscope (Zeiss, Oberkochen, Germany).
MBB Staining
MBB dissolved in PBS was used to treat cells for 15 min at 37 °C after the removal of the culture medium [33]. They were photographed using an Olympus IX51 inverted fluorescence microscope (Olympus, Tokyo, Japan).
Statistical Analysis
All data are displayed as mean ± standard deviation. Statistical analysis was performed using Prism 9 and SPSS 13. A value of p < 0.05 was considered significant (* p < 0.05, ** p < 0.01, *** p < 0.001).
Identification of the FPEDGs
In this study (Figure 1), we identified 1519 DEGs (Table S2) from three GEO databases (GSE53757, GSE66272 and GSE71963), 449 genes (Table S3) from the FerrDb V2 database, and 5865 DEGs (Table S4) from TCGA database, and then 41 significant DEGs (Table S5) were obtained. Next, we identified 207 tumor prognostic genes (TPGs) (Table S6) from the FerrDb V2 database and TCGA database. Finally, we harvested 20 FPEDGs that contained 11 upregulated genes and nine downregulated genes between normal tissues and ccRCC tissues in Figure 2A. The expression and HR (95% CI) of 20 FPEDGs in TCGA are displayed in Figure 2B,C.
Construction of the FPM in TCGA
The 20 FPEDGs were subjected to LASSO Cox regression analysis based on OS to screen key genes among the FPEDGs (Figure 3A,B). A nine-gene signature with DPEP1, NOX4, MT1G, GLS2, GLRX5, TIMP1, CA9, CDCA3, and CYBB was identified in the KIRC cohort based on the optimal λ, and their respective relative coefficients were calculated to establish the FPM. The risk score of each patient was computed using the following formula: Risk score = (−0.12661 × expression of DPEP1) + (−0.03457 × expression of NOX4) + the corresponding terms for the remaining seven genes. After checking the clinical data and gene expression of the patients in TCGA, we deleted 13 cases without gene expression profiles, complete clinical data, or follow-up longer than 0 days. Based on the median risk score, patients (n = 526) were divided into a low-risk group (n = 263) and a high-risk group (n = 263), and patients in the high-risk group possessed high mortality (Figure 3C). The two groups of ccRCC patients could be well separated into two sets by UMAP and t-SNE analysis (Figure 3D,E). The K-M survival curves of the two groups showed that patients in the high-risk group had a worse survival rate compared with their counterparts (Figure 3F). Furthermore, the time-dependent ROC analysis was utilized to show the prognostic value and predictive performance of the FPM, and the area under the curve (AUC) reached 0.751 at 1 year, 0.732 at 3 years, and 0.748 at 5 years (Figure 3G), which illustrated that the FPM could be a suitable prognostic predictor.
The FPM Could Be a Good Independent Prognostic Factor of ccRCC
The heatmap based on increasing risk score showed the relationship between risk group, basic clinical information, pathological features, and nine-gene expression (Figure 4A). Based on Pearson's chi-square test, the risk group had a noticeable correlation with tumor grade (p < 0.001), tumor stage (p < 0.001), and patient status (p < 0.001), while it was not related to age (p = 0.794) or gender (p = 0.066). Age, tumor grade, tumor stage, and risk score possessed a strong relationship with OS in univariate Cox regression (Figure 4B). Age, tumor stage, and risk score showed a marked relationship with OS in multivariate Cox regression (Figure 4C). These results demonstrate that the risk score could be an independent prognostic predictor. Age, tumor stage, and risk score were employed to set up a nomogram to display the survival probability rates (Figure 4D). The AUC for 1, 3, and 5 years was 0.864, 0.815, and 0.795, respectively (Figure 4E). The calibration analysis displayed a good fit between the actual survival probabilities and the predicted survival probabilities (Figure 4F).
The FPM Could Be a Convincing Independent Prognostic Predictor in the E-MTAB-1980 Cohort
We checked the clinical data and gene expression of the patients in E-MTAB-1980 and deleted the cases without complete information; 92 cases were then used to validate the prognostic value of the FPM. According to the median risk score of TCGA, the 92 patients were divided into a low-risk group (n = 33) and a high-risk group (n = 59), and all deceased patients fell into the high-risk group, which illustrated the good predictive value of the FPM (Figure 5A). The two groups of ccRCC patients could be well separated into two sets by UMAP and t-SNE analysis (Figure 5B,C). The K-M survival curves of the two risk groups showed that patients in the high-risk group displayed a worse survival rate, with no deaths in the low-risk group (Figure 5D). The AUC for 1, 3, and 5 years was 0.888, 0.875, and 0.879, respectively, which showed the convincing prognostic value of the FPM (Figure 5E). The heatmap of the 92 patients in the E-MTAB-1980 cohort was also used to present the relationship between risk group, basic clinical information, pathological features, and nine-gene expression (Figure 5F). Age, tumor grade, tumor stage, and risk score possessed a strong relationship with OS in univariate Cox regression (Figure 5G). Tumor stage and risk score showed a marked relationship with OS in multivariate Cox regression (Figure 5H). These results demonstrate that the risk score could be a convincing independent prognostic predictor.
Molecular Functional Analysis
GO and KEGG analyses were used to investigate the molecular functions and potential signaling pathways of the nine genes. The nine genes were enriched in 132 BP, 22 CC, 36 MF, and 10 KEGG terms with significant differences (p < 0.05). The top ten terms of BP, CC, and MF were chosen and are displayed in Figure 6A,B. The nine genes mainly focused on several amino acid metabolic processes, metal processes, and redox processes, such as the homocysteine metabolic process, α-amino acid metabolic process, sulfur amino acid metabolic process, ROS metabolic process, electron transport chain, electron transfer activity, oxidoreductase activity, and cellular response to metal ions. The 10 KEGG terms contained the HIF-1 signaling pathway, ferroptosis, and amino acid metabolism (Figure 6C,D); however, DPEP1, GLRX5, and CDCA3 were not enriched in the 10 KEGG terms.
GLS2 Was Upregulated during Ferroptosis
Erastin and RSL3 dose-dependently induced the death of ACHN cells, which was prevented by the ferroptosis inhibitors Fer-1 and Lip-1 (Figure 7A,B); the cellular morphology is shown in Figure 7E. In contrast, the autophagy inhibitor HCQ, the necroptosis inhibitor NSA, and the apoptosis inhibitor Z-VAD could not repress erastin- or RSL3-induced cell death. Meanwhile, similar results also occurred in Caki-1 cells (Figure 7C-E). The mRNA expression of the nine genes in ACHN and Caki-1 cells with the different treatments is exhibited in Figure 7F,G. The mRNA expression of GLS2 was significantly upregulated in both ACHN and Caki-1 cells with erastin or RSL3 treatment, which indicated that GLS2 played a crucial role during ferroptosis.
GLS2 Was Low-Expressed in ccRCC Tissues and Closely Related to Prognosis
Compared with normal kidney tissues, the mRNA expression of GLS2 was low in ccRCC tissues from TCGA (Figure 8A,C), and the proteomic expression of GLS2 was also low in ccRCC tissues from CPTAC (Figure 8B). With respect to histopathological grade, the mRNA expression of GLS2 was lower in G3 and G4 than in their counterparts (Figure 8D). In terms of clinical TNM stage from the AJCC, the mRNA expression of GLS2 was low in Stage III and IV (Figure 8E). The OS, disease-specific survival, and progression-free interval illustrated that patients with high GLS2 expression possessed higher survival rates compared with their counterparts (Figure 8F-H). These results illustrate that GLS2 is significantly low-expressed in ccRCC tissues and is closely related to the prognosis of ccRCC patients.
GLS2 Might Be a Suppressor of Ferroptosis in ccRCC
As the expression of GLS2 was obviously upregulated with erastin treatment, we knocked down the mRNA expression of GLS2 using shRNA in ACHN and Caki-1 cells (Figures 9A-D and S3). After the knockdown of GLS2, the cell viabilities of ACHN and Caki-1 markedly decreased (Figure 9E,G). The cell viabilities of the shRNA groups with erastin treatment were further decreased, and cell death could be repressed by Lip-1. The levels of MDA increased in both ACHN and Caki-1 cells after the knockdown of GLS2 and further increased in the knockdown groups with erastin treatment, while the levels of MDA could be downregulated by Lip-1 (Figure 9F,H). BODIPY 665/676 could detect the lipid ROS in cells, and the results were consistent with the MDA results both in ACHN (Figure 9I) and Caki-1 (Figure S1). As GLS2 participates in the biosynthesis of GSH, the levels of GSH were tested in ACHN and Caki-1 cells (Figure 9J,K). The levels of GSH decreased in the shRNA groups and further decreased in the shRNA groups with erastin treatment, whereas Lip-1 treatment could not reverse the intracellular GSH level. The intracellular GSH level was detected by MBB staining because MBB binds GSH to emit blue fluorescence [31]. The fluorescence intensity of the shRNA groups and erastin groups distinctly decreased and was not reversed by Lip-1 treatment in ACHN, consistent with the GSH detection (Figure 9L). The MBB staining of Caki-1 with different treatments is displayed in Figure S2. These results show that GLS2 might be a suppressor of ferroptosis via affecting the biosynthesis of GSH.
Discussion
Several methods of managing RCC have been utilized, such as surgery, ablation, targeted therapy, chemotherapy, and immunotherapy. Although partial or radical nephrectomy and ablation can be successfully applied as ccRCC therapy, 30% of ccRCC patients still develop metastases, which are associated with higher mortality [4]. The 3-year survival rate of patients with nodal invasion is 20-30% after surgery regardless of T stage [34]. Thermal ablation, cryoablation, and radiofrequency ablation could be taken into consideration for renal masses of less than 3 cm [3]. Targeted therapy, such as VEGF receptor inhibitors and tyrosine kinase inhibitors, has been used for RCC; however, many patients develop drug resistance after treatment for 6-15 months, especially those with metastatic RCC [35]. These interventions might improve the OS of ccRCC patients; however, complete remission is rare because advanced RCC is a deadly disease.
Ferroptosis has been investigated in different cancers, such as breast cancer [36,37], glioblastoma [38,39], hepatocellular carcinoma [40,41], lung cancer [42,43], and pancreatic cancer [31,44]. A great number of genes, long non-coding RNAs (lncRNAs), and compounds have been studied and explored by researchers, and different mechanisms have been reported. Different gene- or lncRNA-based signatures have been explored and possess prognostic value [18,19,45-47]. Prognostic models are fundamental to developing a personalized therapy; moreover, an early diagnosis is of paramount importance in these cases [48]. In a recent study, a signature containing eight ferroptotic lncRNAs was found to be accurate and reliable in predicting clinical outcomes, and the target genes (BNIP3, RRM2, and GOT) of three lncRNAs (LINC00460, LINC01550, and EPB41L4A-DT) were closely related to the survival outcomes of ccRCC [45]. Among the ferroptosis-related gene signatures, seven genes were selected to set up a model that showed good prognostic value, and twelve genes were chosen to predict prognosis and reveal immune relevancy; however, those signatures only used a group of 60 ferroptosis-related genes and were only displayed in TCGA with or without simple validation in the E-MTAB-1980 cohort [18,19]. These signatures lack database validation or experimental validation and remain only at the level of computer operation, and therefore need deeper investigation. Considering these deficiencies, we introduced three databases from GEO and obtained the whole gene expression, then we used these databases and the KIRC cohort from TCGA to find the common DEGs. In order to avoid the limitation of using only 60 genes, we downloaded the latest ferroptotic gene database (including drivers, suppressors, and markers), without unclassified genes whose role in ferroptosis is unclear, from FerrDb V2. Based on the five databases, a signature including nine genes was employed to predict outcomes of ccRCC patients. Except for MT1G and CA9, the other seven genes of the signature were not involved in previous signatures [16,18,19,46,49-51], which might be attributed to the use of a fragmentary gene set. Therefore, the nine-gene signature in this study could be more successful at distinguishing high-risk and low-risk patients and more accurate in predicting the prognosis of patients.
Except for GLS2, the other eight genes involved in the signature could be divided into two groups, the drivers (DPEP1, NOX4, TIMP1, and CDCA3) and the suppressors (MT1G, GLRX5, CA9, and CYBB) of ferroptosis. Dipeptidase 1 (DPEP1), a membrane-bound glycoprotein, could hydrolyze a wide range of dipeptides, and it colocalized with clathrin (an endocytic vesicle marker) to induce transferrin endocytosis [52]. Deficiency of DPEP1 could protect kidneys from cisplatin-induced ferroptosis. In kidney samples, DPEP1 expression is strongly related to SLC3A2, which combines with SLC7A11 to form a transport system for cystine. NADPH oxidase 4 (NOX4) could generate intracellular superoxide and promote ferroptosis via oxidative stress-induced lipid oxidation [53]. When NOX4 was inhibited by its inhibitor, cells displayed resistance to erastin-induced ferroptosis [8]. Pseudolaric acid B could trigger ferroptosis by activating NOX4 in glioma, and the knockdown of NOX4 made cells resistant to Pseudolaric acid B-induced cell death [54]. Metalloproteinase inhibitor 1 (TIMP1) targets and forms complexes with metalloproteinases to irreversibly inactivate the latter. Inhibition of TIMP1 could repress ferroptosis of CMEC cells by decreasing transferrin receptor 1 [55]. The cell division cycle associated protein 3 (CDCA3) mainly participates in drug resistance and cell cycle regulation in cancers and has not been studied in depth [56]. However, CDCA3 was regarded as a driver of ferroptosis only on the basis of genome-wide CRISPR screens and has not been validated [57]. In the other group, metallothionein-1G (MT1G) has a high content of cysteine residues that bind various heavy metals and, as a transcriptional target of NRF2, could ameliorate heavy metals and free radicals to maintain cellular redox homeostasis; it is upregulated in sorafenib-resistant hepatocellular carcinoma cells [41]. Suppression of MT1G expression via shRNA or an inhibitor could significantly improve the sensitivity of tumors to sorafenib. Glutaredoxin 5 (GLRX5) participates in iron-sulfur cluster biogenesis and regulates hemoglobin synthesis. GLRX5 knockdown could enhance intracellular lipid peroxidation and increase intracellular free iron, which is attributed to upregulated transferrin and downregulated ferritin in head and neck cancer cells [58]. Carbonic anhydrase 9 (CA9), one of the CAs that play a crucial role in equilibrating the reaction between CO2, HCO3−, and H+, is inducibly expressed during hypoxia in various cancers [59,60]. Inhibition of CA9 could decrease the viability and migration of malignant mesothelioma cells, while Fe2+ is increased via upregulation of the transferrin receptor and downregulation of ferritin [61]. The effect of CA9 inhibition could be repressed by deferoxamine and ferrostatin-1, which indicated that CA9 might be a suppressor of ferroptosis. Cytochrome b-245 β chain (CYBB) is the terminal component of a respiratory chain [62]. However, its role as a suppressor of ferroptosis was only deduced and has not been entirely confirmed [8].
There are two types of glutaminase isoenzymes, GLS (encoded by GLS and regulated by c-Myc) and GLS2 (encoded by GLS2 and regulated by p53), which are both significant enzymes participating in glutamine metabolism. The GLS-mediated deamination of glutamine results in ammonia release to maintain cell survival via biosynthesis with α-ketoglutarate and intermediates; meanwhile, glutamate coming from the GLS2-mediated deamination of glutamine takes part in an antioxidant mechanism (GSH) [63]. GLS is correlated with tumor growth rate and malignancy [64], and it is highly expressed in various cancers, including brain cancer, lung cancer, breast cancer, hepatocellular carcinoma, and colorectal cancer [65,66]. In a recent study, the absence of exogenous glutamine increased the glutamate level, which led GLS to convert from a dimer to a self-assembled filamentous polymer [67]. The catalytic activity of filamentous GLS increased and further depleted intracellular glutamine, which resulted in ROS-induced apoptosis that could be rescued by asparagine supplementation. GLS2 could increase the GSH level to enhance intracellular antioxidant function in HepG2, HCT116, and LN-2024 cells [68]. In our experiments, the viabilities of ACHN and Caki-1 decreased after GLS2 knockdown, and the GSH levels of ACHN and Caki-1 decreased, accompanied by an increase in MDA. According to these findings and our experiments, GLS2 might be a negative regulator of ferroptosis. However, this conclusion differs from another study in which the knockdown of GLS2 repressed serum-dependent necroptosis in mouse embryonic fibroblasts through control of glutaminolysis [69]. The authors also considered that the results might be due to the predominant expression of GLS2 in mouse embryonic fibroblasts; however, they did not further validate this. As there are various components in fetal bovine serum, those results lacked specific ferroptosis treatments and specific ferroptosis-related assays. In our study, we used erastin and RSL3 to induce ferroptosis of ACHN and Caki-1 cells and discovered the role of GLS2 in the ferroptosis of ccRCC cells.
Conclusions
In summary, this study identified a novel signature that could successfully distinguish patients with ccRCC on the basis of clinical and molecular characteristics. The novel nine-FPEDG signature could be an independent prognostic factor for ccRCC in TCGA and ArrayExpress databases. It was discovered for the first time that GLS2 might be a ferroptotic suppressor in ccRCC. The potential mechanisms of other FPEDGs remain unclear and need further investigation.
"year": 2022,
"sha1": "89d05111c5ee3a1261a70b1169516a46fb5765b6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/14/19/4690/pdf?version=1664267413",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "632231b80d05ebe4bc9a67e591b1fec6ff6bf46c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Effectiveness of bone grafting versus cannulated screw fixation in the treatment of posterolateral tibial plateau compression fractures with concomitant ACL injury: a comparative study
Background Posterolateral tibial plateau compression fractures (PTPCF) are one of the significant factors leading to knee instability and anterior cruciate ligament (ACL) reconstruction failure. The effectiveness of fixation for such cases without the use of metal implants remains inconclusive. The aim of this study is to investigate whether fixation with isolated bone grafting is stable enough for the treatment of PTPCF with concomitant ACL injuries. Methods This retrospective study analyzed patients treated for concomitant ACL injuries and PTPCF in the authors' institution. A total of 53 patients (21 males and 32 females) with an average age of 47.43 ± 14.71 years were included. Patient data were collected, including factors leading to injury, affected side, height, weight, and basic medical history. The posterior inclination angle and the lateral inclination angle of the lateral tibial plateau were measured to evaluate fixation stability. The Rasmussen functional score and HSS score were used to assess knee functional recovery. Results The bone grafting group achieved satisfactory Rasmussen scores (28.22 ± 0.85) and HSS knee joint function scores (95.57 ± 1.97). The cannulated screw fixation group had a Rasmussen knee joint function score of 28.70 ± 0.92 and an HSS knee joint function score of 96.07 ± 1.93. No statistically significant difference was found (P > 0.05). The cannulated screw fixation group had a mean posterior inclination angle reduction loss of 0.20° ± 1.11°, while the bone grafting group had a reduction loss of 0.18° ± 1.01°, with no statistically significant difference (P > 0.05). The cannulated screw fixation group had a lateral inclination angle reduction loss of 0.01° ± 0.37°, and the bone grafting group had a reduction loss of 0.03° ± 0.43°, with no statistically significant difference (P > 0.05). Conclusion The use of bone grafting for fixation of PTPCF with accompanying ACL injuries demonstrated no substantial disparities in knee joint function. In cases of simple PTPCF, filling and compacting the bone defect underneath the tibial plateau fracture fragment can yield satisfactory fixation, obviating the necessity for supplementary cannulated screw fixation.
Introduction
Posterolateral tibial plateau compression fractures (PTPCF), accounting for only 7-15% of all tibial plateau fractures, are closely associated with anterior cruciate ligament (ACL) injuries or ACL attachment avulsion fractures [1][2][3][4]. The posterolateral tibial plateau plays a crucial role in knee flexion stability, and fractures in this region are often accompanied by an increased posterior inclination angle or an increased lateral inclination angle of the lateral plateau [5,6]. An increased posterior inclination angle or lateral inclination angle of the lateral plateau can lead to increased load on the ACL during knee joint motion, is closely related to ACL injuries, and is one of the significant factors leading to knee instability and ACL reconstruction failure after ACL reconstruction surgery [7]. Therefore, to achieve satisfactory clinical outcomes and reduce the failure rate of ACL reconstruction, the treatment of PTPCF with concomitant ACL injury should not be limited to ACL reconstruction alone; equal attention should be given to the treatment of PTPCF [8,9]. While addressing PTPCF in patients with concomitant ACL injuries, some surgeons use arthroscopic-assisted reduction and cannulated screw fixation [10]. Other orthopedists perform arthroscopic-assisted reduction and fixation with bone grafting only [11]. Nevertheless, the effectiveness of fixation for these fracture types without the use of metal implants is yet to be conclusively established. Therefore, this study retrospectively analyzed patients who had concomitant ACL injuries and PTPCF to investigate whether fixation with isolated bone grafting is stable enough for the treatment of PTPCF with concomitant ACL injuries. The hypothesis of this study was that fixation with isolated bone grafting for PTPCF with concomitant ACL injuries can achieve stable fixation as well as satisfactory clinical and radiological outcomes compared to fixation with cannulated screws.
Materials and methods
This retrospective study analyzed patients treated for concomitant ACL injuries and PTPCF in the authors' institution between June 1, 2017, and May 31, 2022.
Inclusion criteria: (1) preoperative knee joint magnetic resonance imaging (MRI) showing ACL injury or ACL attachment avulsion fractures, with three-dimensional computed tomography (3DCT) confirming PTPCF; (2) articular step-off of posterolateral tibial plateau fractures ≥ 2 mm or a lateral tibial plateau posterior inclination angle greater than or equal to 17°, with patients undergoing arthroscopic surgery for tibial plateau fracture reduction; (3) age ≥ 18 years; (4) follow-up duration ≥ 12 months; and (5) complete preoperative and postoperative imaging data. This study included a total of 53 patients (21 males and 32 females) with an average age of 47.43 ± 14.71 years (range 18-72 years). The patients were diagnosed with concomitant ACL injuries and PTPCF, with ACL reconstruction and tibial plateau fracture reduction and fixation performed as a one-stage procedure. Patients were divided into two groups based on whether the PTPCF was internally fixed with cannulated screws during the procedure: the cannulated screw fixation group (30 cases, 13 males, and 17 females) and the bone grafting fixation group (23 cases, 8 males, and 15 females) (Table 1). This study was approved by the Medical Ethics Committee of Taizhou Hospital in Zhejiang Province, and all enrolled patients were informed and signed informed consent.
Surgical procedure
Patients were placed in the supine position, and after the onset of general anesthesia, anterior and posterior drawer tests, Lachman tests, varus/valgus stress tests, and knee joint flexion-extension assessments were performed to evaluate knee joint mobility and stability. Knee joints were flexed to 90 degrees, and medial and lateral approaches were established. After removing intra-articular hematomas and synovial debris to ensure a clear surgical field, intra-articular injuries were examined, including articular fracture fragments, articular surface injuries, ligament ruptures, and meniscus tears.
Aiming to locate the lowest point of the articular fragment, the ACL guide was used to insert a 2.0-mm Kirschner wire from the anterolateral portal, creating a 7.0-mm tunnel, which was then used to elevate the collapsed articular surface with a metal tamp (Fig. 1A-C) [12]. The reduction of the articular surface was confirmed using arthroscopy. In the bone grafting group, a bone graft funnel was then inserted into the fabricated 7.0-mm tunnel, and the bone defect under the tibial plateau fracture fragments was filled and compacted with allograft bone (Osteolink, China) or calcium sulfate artificial bone (Biocomposites, UK) (Fig. 1D-F). In the cannulated screw fixation group, temporary fixation was carried out with 2.0-mm Kirschner wires from the lateral plateau to the medial plateau when deemed necessary after reducing the fracture. One to two 7.3-mm cannulated screws (Canwell, China) were implanted below the fracture fragments to achieve rigid fracture fixation without bone grafting.
Meniscus reshaping or repair surgery was performed when needed. For medial collateral ligament ruptures, traditional repair methods were used. In cases of ACL attachment avulsion fractures, a 2-0 Ethibond suture was used for traction reduction and fixation. Autogenous hamstring tendons or gracilis tendons were used as grafts for single-bundle ACL reconstruction [10].
Postoperative recovery and follow-up
All patients underwent standardized postoperative recovery and guidance training. Starting from the first day after surgery, quadriceps muscle strength training began, with passive knee flexion exercises at 30 degrees of flexion. Two weeks after surgery, patients began non-weight-bearing ambulation with knee joint support. After four weeks, partial weight-bearing ambulation commenced, and at eight weeks post-surgery, full weight-bearing ambulation was allowed. All enrolled patients underwent a minimum of 12 months of follow-up, during which knee joint mobility, the Hospital for Special Surgery (HSS) knee joint function score, the Rasmussen knee joint function score, and the visual analog scale (VAS) for pain were assessed.
Clinical and radiological assessment
Patient data were collected, including factors leading to injury, affected side, body mass index (BMI), and basic medical history such as hypertension and diabetes. Radiological data, comprising preoperative and postoperative knee joint lateral images, 3DCT scans, and knee joint MRIs, were collected to evaluate the patient's condition before and after the operation. Preoperatively, lower limb vascular Doppler ultrasound examinations were routinely performed to rule out deep vein thrombosis. If no deep vein thrombosis was detected, low-molecular-weight heparin calcium injection (4100 IU) was administered subcutaneously to prevent lower limb deep vein thrombosis. If deep vein thrombosis was present, treatment with low-molecular-weight heparin calcium injection (4100 IU) every 12 h subcutaneously was initiated. Postoperatively, lower limb sensation was assessed to evaluate potential peroneal nerve damage. Follow-up occurred over a minimum of 12 months and involved assessing knee joint mobility (extension and flexion), knee joint VAS pain scores, Rasmussen function scores, and HSS knee joint scores to evaluate postoperative knee joint function.
The PACS (picture archiving and communication system) was used to measure the radiological parameters of the tibial plateau, including the relative posterior inclination angle and the lateral inclination angle of the lateral tibial plateau, before surgery, postoperatively, and at the final follow-up, in order to assess the alignment of the lateral tibial plateau (Fig. 2).
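The term "reduction loss" used in the Results is not given an explicit formula in the text; under the natural reading of the angle measurements described above, it is the change in a given inclination angle between the immediate postoperative image and the final follow-up. This assumed definition (an inference from the text, not an equation supplied by the authors) can be written as:

$$\text{reduction loss} = \theta_{\text{final follow-up}} - \theta_{\text{postoperative}}$$

For example, a lateral plateau measured at a posterior inclination of 8.0° postoperatively and 8.2° at final follow-up would have a posterior inclination angle reduction loss of 0.2°, on the order of the group means reported in the Results.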
Statistical analysis
For statistical analysis, SPSS version 26.0 (SPSS Inc., Chicago, IL, USA) was used. Continuous variables were expressed as mean ± standard deviation, and independent-sample t tests were utilized for between-group comparisons. Categorical variables were compared using Chi-square tests or Fisher's exact tests to assess differences between the two groups. A significance level of P < 0.05 was considered statistically significant.
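As an illustration of the comparisons described above, the following minimal Python sketch reproduces the same test choices using SciPy in place of the authors' SPSS workflow. The continuous values are hypothetical placeholders (the study's patient-level data are not available here); the 2×2 sex table uses the group compositions reported in the Methods.

```python
# Minimal sketch of the between-group comparisons described above,
# using SciPy rather than the authors' SPSS workflow.
import numpy as np
from scipy import stats

# Continuous variable (e.g., HSS score): hypothetical placeholder values,
# not the study's patient-level data.
hss_bone_graft = np.array([95.1, 96.3, 94.8, 97.0, 95.9])
hss_screw = np.array([96.5, 95.8, 96.9, 94.9, 97.2])

# Independent-sample t test for the between-group comparison
t_stat, p_cont = stats.ttest_ind(hss_bone_graft, hss_screw)
print(f"t = {t_stat:.3f}, P = {p_cont:.3f}")

# Categorical variable (sex): 2x2 table from the reported group compositions;
# rows = [screw group, bone grafting group], columns = [males, females]
table = np.array([[13, 17],
                  [8, 15]])

# Chi-square test; Fisher's exact test is the fallback for small expected counts
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"chi-square P = {p_chi2:.3f}, Fisher exact P = {p_fisher:.3f}")

# As in the paper, P < 0.05 would be considered statistically significant.
```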
Results
Baseline data, including patient age, gender composition, BMI, operated side, diabetes, hypertension, and follow-up time, did not differ significantly between the two groups (P > 0.05) (Table 1). Peroneal nerve injury was not observed in either group, and no cases of ACL reconstruction failure were noted. There were no significant differences between the two groups regarding knee joint periarticular soft tissue injuries (meniscus injuries, medial collateral ligament injuries), major tibial plateau inclination types, or incidence of deep venous thrombosis of the lower extremity (P > 0.05) (Table 1). At the final follow-up, the mean extension angle of the knee was 0.65° ± 1.72° (range 0°-5°) in the bone grafting group, with a mean flexion angle of 124.78° ± 7.46° (range 110°-140°). In the cannulated screw fixation group, patients had a mean extension angle of 0.50° ± 1.53° (range 0°-5°) and a mean flexion angle of 127.50° ± 8.78° (range 115°-145°). There was no statistically significant difference in knee joint mobility between the two groups (P > 0.05).
The isolated bone grafting group achieved satisfactory levels of Rasmussen score (28.22 ± 0.85) and HSS knee joint function score (95.57 ± 1.97). Similarly, the cannulated screw fixation group had a Rasmussen knee joint function score of 28.70 ± 0.92 and an HSS knee joint function score of 96.07 ± 1.93. There was no statistically significant difference in knee joint function scores between the two groups (P > 0.05) (Table 2).
To further assess the reliability of maintaining articular surface reduction after bone grafting fixation, the posterior inclination angle and lateral inclination angle of the lateral tibial plateau were compared before surgery, postoperatively, and at the final follow-up. No statistically significant difference was found in the posterior inclination angle or the lateral inclination angle of the lateral tibial plateau between the postoperative and final follow-up time points.
The cannulated screw fixation group had a mean posterior inclination angle reduction loss of 0.20° ± 1.11°, while the bone grafting group had a reduction loss of 0.18° ± 1.01°, with no statistically significant difference between the groups (P > 0.05).The cannulated screw fixation group had a lateral inclination angle reduction loss of 0.01° ± 0.37°, and the bone grafting group had a reduction loss of 0.03° ± 0.43°, with no statistically significant difference between the groups (P > 0.05) (Table 3).
Discussion
The outcome of this study shows that isolated bone grafting for the treatment of PTPCF could achieve a satisfactory range of motion of the knee joint (a mean extension of 0.65° ± 1.72° and a mean flexion of 124.78° ± 7.46°), HSS score (95.57 ± 1.97), and Rasmussen score (28.22 ± 0.85). Furthermore, fixation with isolated bone grafting for the treatment of PTPCF achieved relatively stable fixation without significant loss of reduction compared to fixation with cannulated screws, with similar posterior inclination angle and lateral inclination angle reduction losses.
ACL injuries are often accompanied by tibial plateau fractures [9,13,14]. A meta-analysis of knee MRIs in 1047 cases of ACL injuries revealed that early knee MRIs can detect bone contusion signal changes in up to 78% of patients [15]. The presence of tibial plateau bone contusion or fractures on MRI, and even impaction fractures of the tibial plateau and femoral lateral condyle, usually signifies severe knee joint trauma caused by violent force leading to knee joint subluxation [2]. Tibial plateau bone contusions or impaction fractures, often combined with femoral lateral condyle injuries, are believed to occur during the process of ACL injury, which involves knee joint flexion, external rotation, and valgus force. During the reduction process after knee joint subluxation, the impact between the lateral femoral condyle and the tibial plateau's posterolateral aspect is responsible for this injury. A typical tibial plateau impaction fracture manifests as a coronal plane defect in the posterolateral tibial plateau; hence, it has been termed the "bitten apple" fracture [16,17]. Bernholt et al. categorized tibial plateau compression fractures into three major classes based on the MRI presentation of the lateral tibial plateau: posterior cortical fractures of the tibial plateau that do not involve the articular surface, posterior cortical fractures of the tibial plateau that affect the articular surface, and posterior split fractures of the tibial plateau [18]. Building upon this classification, Menzdorf and colleagues employed the posterior horn of the lateral meniscus as a reference point to aid in the reclassification of surgical guidelines, though the substantial individual variability of meniscus morphology led to a reduction in classification reliability [17].
However, the two fracture classifications did not encompass the tibial plateau compression fractures included in this study. Tibial plateau compression fractures, often treated with arthroscopically assisted reduction and cannulated screw fixation involving the use of a metal tamp during the reduction process, exhibit various deficiencies, such as articular surface fracture fragment fragmentation, uneven articular surfaces, inadequate angular reduction, and fracture fragment rotation. To facilitate the use of a metal tamp for fracture reduction, the authors categorized compression fractures of the lateral tibial plateau into three types based on the primary direction of articular surface inclination: posterior inclination, lateral inclination, and horizontal compression types [10]. In this study, the main focus was on posterior inclination and lateral inclination types. In contrast to tibial plateau impaction fractures accompanied by ACL injury, which often display the "kissing sign" on knee MRI [19], this sign is less frequent in the MRIs of knee joints with tibial plateau compression fractures associated with ACL injuries. The mechanism behind this difference might be that compression of the lateral tibial plateau during the violent process of flexion and external rotation increases the posterior inclination angle or external rotation angle. This increased stress on the ACL eventually leads to ACL injury, even without knee joint subluxation. Tibial plateau articular subsidence and an increased posterior inclination angle are closely related to posterolateral instability of the knee joint and are often associated with a positive pivot-shift test of grade 2 or higher [20]. A posterior inclination angle of 17 degrees or more is a significant factor in post-ACL reconstruction failure [21]. Thus, the treatment of tibial plateau compression fractures has attracted considerable attention.
The treatment of tibial plateau compression fractures varies. As arthroscopic technology continues to advance, many orthopedists have adopted arthroscopic reduction and cannulated screw fixation, which has shown good results [11,12]. However, excessive cannulated screw fixation may interfere with the tibial tunnel for the ACL, affect grafts, risk peroneal nerve damage during lateral implantation of the cannulated screws, and require a second surgery for screw removal [22,23]. In this retrospective study, we compared the clinical and radiographic results of arthroscopic reduction and cannulated screw fixation and isolated bone grafting for tibial plateau compression fractures. It was found that simply filling and compacting bone under the tibial plateau fracture fragment can achieve a good reduction support effect. There were no significant differences in postoperative knee joint function or reduction loss between the two groups. The subchondral bone underlying the fracture fragment in the tibial plateau is characterized by its trabecular composition, imposing rigorous requirements on the strategic placement of cannulated screws. Therefore, it is imperative to position the screws directly beneath the fracture fragment of the articular surface to attain an efficacious support effect (Fig. 3). In this study, increasing the trabecular bone density under the fracture fragment by performing allograft bone or calcium sulfate artificial bone grafting and compaction under the articular surface fracture fragment resulted in good reduction support, as shown in Fig. 4. Notably, neither the application of one or two cannulated screws nor isolated bone grafting can achieve the level of robust fixation of plate fixation [24,25]. During knee joint movement, the axial load on the lateral tibial plateau is minimal in the extended position. However, during knee flexion, due to the roll-back effect of the lateral condyle of the femur, the lateral plateau gradually begins to bear axial stress [26,27]. Therefore, regardless of whether cannulated screw internal fixation is used, early weight-bearing knee flexion exercises should be avoided during postoperative rehabilitation.
Fig. 3 A 55-year-old male patient was admitted due to left knee pain following a fall from an electric bicycle. 3DCT scans of the left knee upon admission revealed a compressive fracture of the posterolateral tibial plateau. The coronal view demonstrated a significantly increased lateral inclination of the lateral tibial plateau (A), and the sagittal view displayed an increased posterior inclination (B). The sagittal view of the MRI revealed an acute ACL tear (C), while the coronal view of the MRI also indicated a lateral tibial plateau compression fracture (D). The patient underwent a one-stage procedure involving ACL reconstruction and arthroscopically assisted reduction and fixation of the PTPCF using two cannulated screws without bone grafting. Fourteen months post-surgery, the fracture fragments exhibited successful union with no significant reduction loss (E-H)
Fig. 4 A 41-year-old female patient was admitted due to limited knee mobility and pain caused by a car accident-related injury to the left knee. 3DCT scans of the knee upon admission revealed a compressive fracture of the posterolateral tibial plateau. The sagittal view displayed a significantly increased relative posterior inclination of the lateral plateau (A), while the coronal view showed partial compression with less lateral inclination (B). Three-dimensional reconstruction illustrated a significant increase in the lateral plateau's relative posterior slope (C). Subsequent knee joint MRI revealed an avulsion fracture at the attachment of the ACL (D), posterolateral plateau collapse with pronounced lateral plateau edema, and no signs of femoral condyle edema (E, F). Arthroscopically assisted fixation of the anterior cruciate ligament avulsion fracture was performed, and in a single-stage procedure, the lateral plateau fracture was restored and securely fixed using calcium sulfate artificial bone grafting, resulting in an anatomical reduction of the joint surface (G-J). Two years later, a follow-up 3DCT scan showed excellent fracture healing with some bone resorption within the tunnels (K, L)
In this study, some patients were managed using calcium sulfate artificial bone for grafting. Calcium sulfate artificial bone has good biocompatibility and degradability, while also inducing bone formation and having excellent mechanical properties. Although some degradation and absorption were observed during postoperative follow-up, all fractures achieved good healing, with no reduction loss (Fig. 4). Other patients underwent allograft bone grafting for the fixation of PTPCF. While the mechanical performance of allograft bone is lower than that of calcium sulfate artificial bone, its absorption rate is lower, approximately 10.78% [28]. During the follow-up, the fractures healed well with no significant loss of reduction. In comparison to cannulated screw fixation, bone grafting fixation has several advantages, including avoiding the use of additional metal implants, reducing the risk of peroneal nerve injury during the fixation process, and eliminating the need for secondary removal surgery.
Limitations
Limitations of this study include its retrospective design, a relatively small sample size, a short follow-up period, and inherent selection bias regarding the use of metal cannulated screw fixation during the surgical process. Additionally, the study did not include patients with tibial plateau compression fractures without ACL injuries who received conservative treatment for the tibial plateau, resulting in a lack of a control group. Furthermore, the study did not provide a more reasonable classification of tibial plateau compression fractures, and whether simple bone grafting of defects under the tibial plateau suffices for ideal reduction and fixation may depend on the type of tibial plateau fracture.
Conclusions
The use of isolated bone grafting for fixation of PTPCF with accompanying ACL injuries demonstrated no substantial disparities in postoperative knee joint range of motion, Rasmussen score, HSS knee joint function score, or VAS score when contrasted with cannulated screw fixation. In cases of simple PTPCF without lateral wall disruption, filling and compacting the bone defect underneath the tibial plateau fracture fragment can yield satisfactory fixation results, obviating the necessity for supplementary cannulated screw fixation.
Fig. 1 Following the clearance of synovium and blood within the joint cavity, the posterior aspect of the lateral meniscus is lifted to expose the tibial plateau fracture (A). Utilizing an anterior cruciate ligament guide, the articular surface of the fracture is localized at a lower position (B), and a Kirschner wire is used to establish a bone tunnel. A metal tamp is used to gently tap the depressed articular surface, thereby restoring the height and angular orientation of the depressed posterolateral articular surface (C). A metal tamp was used to elevate the fracture fragments through the prefabricated 7.0-mm cannula, and a bone grafting funnel was used to fill the defect under the tibial fracture fragments with allograft bone or calcium sulfate artificial bone (D-F)
Table 1
Baseline data comparison between the cannulated screw fixation group and the isolated bone grafting fixation group. BMI body mass index, ACL anterior cruciate ligament
Table 2
Clinical outcome comparison between two groups. VAS visual analogue scale, ROM range of motion, HSS Hospital for Special Surgery
Table 3
Radiological outcome comparison between two groups
"year": 2024,
"sha1": "13f994cd07376b3f1ed8c6bc14d93c1266cadb69",
"oa_license": "CCBY",
"oa_url": "https://josr-online.biomedcentral.com/counter/pdf/10.1186/s13018-023-04516-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "13f994cd07376b3f1ed8c6bc14d93c1266cadb69",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Ras Pathways on Prox1 and Lymphangiogenesis: Insights for Therapeutics
Over the past couple of decades, lymphatics research has accelerated and gained a much-needed recognition in pathophysiology. As the lymphatic system plays heavy roles in interstitial fluid drainage, immune surveillance and lipid absorption, the ablation or excessive growth of this vasculature could be associated with many complications, from lymphedema to metastasis. Despite their growing importance in cancer, few anti-lymphangiogenic therapies exist today, as they have yet to pass phase 3 clinical trials and acquire FDA approval. As such, many studies are being done to better define the signaling pathways that govern lymphangiogenesis, in hopes of developing new therapeutic approaches to inhibit or stimulate this process. This review will cover our current understanding of the Ras signaling pathways and their interactions with Prox1, the master transcriptional switch involved in specifying lymphatic endothelial cell fate and lymphangiogenesis, in hopes of providing insights to lymphangiogenesis-based therapies.
INTRODUCTION
The lymphatic system is a vascular system that shadows the well-known cardiovascular or circulatory system. The circulatory system plays roles in delivering essential nutrients, hormones, and oxygen across the body, where fluid extravasates from the arterial ends of capillary beds to transport these components to the surrounding tissue, and then gets reabsorbed into the venous ends of the capillary beds to return to the venous circulation. However, ∼10% of this fluid is unable to be reabsorbed, due to the circulatory system's overall higher capillary hydrostatic pressure and lower blood colloidal osmotic pressure (1,2). To compensate for this loss, the lymphatic system facilitates the return of remaining interstitial fluid by draining it through lymphatic vessels and lymph nodes as lymph, which eventually returns to the circulatory system via the subclavian veins. In addition to this drainage role, the lymphatic system is involved in immunosurveillance, serving as a centralized hub for activating naïve B and T lymphocytes via antigen-presenting cells that drain through the lymph (3). Furthermore, the lymphatic system facilitates fatty acid absorption from the digestive system, where lymph vessels that line the intestines, lacteals, take up chylomicrons and take them to the blood circulation for downstream processing (4).
Lymphangiogenesis is the generation and sprouting of lymphatic endothelial cells (LECs) from preexisting ones, mirroring that of angiogenesis (5,6). This process is the major mode of lymphatic growth and is essential to the development of the lymphatic system during embryogenesis. Lymphangiogenesis revolves around the transcription factor Prospero Homeobox 1, Prox1 (7). The formation of the lymphatic vasculature begins with Prox1 expression in a subset of blood endothelial cells (BECs) in the cardinal vein, where these Prox1-expressing cells ultimately bud off and migrate toward vascular endothelial growth factor C (VEGF-C) to create a lymph sac, which forms into the primary lymphatic plexus and matures as the lymphatic network (8). Lymphangiogenesis gives rise to the complete lymphatic system and is also involved in disease response, though there are studies that suggest that other cells could transdifferentiate into LECs (9,10). Outside of embryonic development, lymphangiogenesis may be induced following injury to assist in wound healing. For example, skin wound healing studies have shown increases in lymphatic vessel density and quicker recovery times as opposed to those with impaired lymphangiogenesis (11). The past several years have also presented new findings on the ability of cardiac lymphangiogenesis to reduce myocardial edema and fibrosis following cardiac injury (12). Given the lymphatic system's roles in fluid homeostasis and immunity, lymphangiogenesis is governed by multiple signaling pathways in both development and pathophysiological responses, in different ways, given the contrasting microenvironments of these two models. As such, levels of growth factors and inflammatory cytokines play significant and unique roles in controlling vascular growth. A moderate balance of lymphatic vasculature must be maintained; the lack of mature as well as the excess of immature lymph vessels can impair lymphatic function, where vessels become "leaky" and are unable to properly transport lymph. This has been highlighted through numerous knockout studies, revealing key pathways involved in this process (Table 1). The dysregulation of lymphangiogenesis through the inhibition or over-stimulation of signaling pathways often leads to lymphatic vessel malfunction.
Lymphatic vessel malfunction is associated with the pathogenesis of many diseases, including inflammatory disease (27,28), lymphedema, and tumor-associated lymphangiogenesis (29). The loss of lymphatic function could lead to impaired fluid drainage and immunosurveillance capability during disease, exacerbating the pathological conditions that exist. Inhibiting fetal lymphangiogenesis through VEGF-C sequestration has been shown to lead to lymphedema (30). In contrast, increased lymphangiogenesis enables cancer metastasis similar to that of angiogenesis, where tumor cells are able to invade and travel through the vasculature (31). Interestingly, a major transcription factor of LECs, Prox1, was found to be overexpressed in multiple cancers, promoting not only lymphangiogenesis but also cancer cell migration capacity as well as invasiveness (32)(33)(34).
This review will cover our current understanding of both Prox1 and the major signaling pathways of lymphangiogenesis, so that we can better understand how they are tied together in this process.
THE LYMPHATIC SYSTEM'S ROLE IN CANCER
Like that of angiogenesis, lymphangiogenesis could open routes for cancer metastasis, where tumor cells may separate from a primary tumor and find their way to distant organs through these vascular systems. The significance of lymphatic vessels in metastasis was recognized through the usage of photodynamic therapy; the photodynamic ablation of these vessels and intralymphatic cancer cells prevented metastasis (35). This study highlighted that targeting the lymphatics is just as important as targeting the cardiovascular system to combat metastatic cancer. The lymphatic system was found to be directly involved in metastasis through the release of VEGF-C and other related growth factors by malignant tumors (36). Through the release of these growth factors, the lymphatic vasculature begins to sprout toward the tumor, which is then followed by the metastatic process. This metastatic process comprises multiple steps: (1) invasion of the primary tumor into the surrounding tissue and basement membrane, (2) intravasation of tumor cells into surrounding vessels, (3) survival of circulating tumor cells (CTCs) in the circulation, (4) CTC arrest and extravasation at a distant organ site, and (5) metastatic formation/colonization of the site (37).
Besides metastasis, both vascular systems possess additional roles that support tumor growth. The cardiovascular system's roles in delivering oxygen and nutrients are vital to tumor growth, whereas the lymphatic system may help dampen anti-tumor immunity. Indeed, these lymphatic vessels do not just act as passive routes for metastasis, but are involved in immune modulation and cancer stem cell survival (36). This role may appear paradoxical, as the lymphatics are crucial for the initiation of tumor-specific T cell responses as seen in melanomas (38). However, there exists a negative feedback loop between LECs and cytotoxic T cells, where LECs activated by interferon gamma (IFN-γ) upregulate their expression of programmed death-ligand 1 (PD-L1), an immunosuppressive molecule that inhibits cytotoxic T cell accumulation in tumors (39). As such, the tumor microenvironment may abuse the expression of these inflammatory cytokines to promote their progression, generating an environment that may be intolerable to normal cells but not to the tumor.
In addition to impairing the adaptive immune response, it may be possible that increased lymphatic drainage would promote the clearance of waste products that are generated with the rapid growth and proliferation of cancer cells. Furthermore, it has been reported that lymph flow toward the sentinel nodes is increased, leading to mechanical stress-induced changes in stromal cells and thereby the tumor microenvironment (40). This microenvironment would likely promote the growth of more vessels. However, there is a lot of controversy over the significance of intratumoral lymphatic vessels, as imaging studies have suggested that these vessels may be collapsed and nonfunctional as a result of high interstitial pressure within the tumor (41,42). The disruption of this balance in interstitial fluid pressure is well-known to complicate drug delivery; high interstitial fluid pressure in most solid tumors impairs the extravasation of therapeutic agents in the blood to the target site (43). These phenotypes are not just apparent in cancer; many diseases of excessive lymphatic remodeling have been met with lymphatic insufficiency, where the surrounding tissues of these leaky vessels are flooded.
Table 1 | Mutant mouse models and proposed functions of key regulators of lymphatic vascular development

Vegfr3
Mutant mouse models and observations: A) Global Vegfr3 +/− : embryos appear phenotypically normal (17); lymphedema due to hypoplastic cutaneous lymphatic vessels (18). B) Global Vegfr3 −/− : embryonic death beginning at E10.5, prior to lymphatic vessel emergence/sprouting; no live-born pups (17); abnormal vasculature, enlarged vascular bed formation, and severe anemia before embryonic death (19).
Proposed function: Receptor for growth factors to initiate AKT and ERK signaling almost exclusively in LECs, though present in other ECs during early development.

Ras
Mutant mouse models and observations: knockout and overexpression models present lymphatic vascular hypoplasia and hyperplasia, respectively (21).
Proposed function: Ras is a major effector protein involved in AKT and ERK signaling (21).

Rasa1
Mutant mouse models and observations: A) Ub-CreER T2 ; Rasa1 fl/fl : following tamoxifen injection at 2 months of age, mice presented chylous ascites, lymphatic vessel dilation and extensive lymphatic vessel hyperplasia; all mice died by 8 months after tamoxifen administration (22).
Proposed function: Rasa1 codes a negative regulator of vascular growth (22); Rasa1 may modulate AKT and ERK signaling by turning off Ras.

Spred-1/Spred-2
Mutant mouse models and observations: Spred-1/Spred-2 double knockouts die between E12.5 and E15.5, presenting edema and dilated lymphatic vessels (24).
Proposed function: Spred-1 and Spred-2 are negative regulators in VEGF-C/VEGFR-3 signaling, inhibiting ERK and AKT activity (24).

Cdh5
Mutant mouse models and observations: A) Prox1CreER T2 ; Cdh5 fl/fl : following tamoxifen injection at E10, dilated lymphatic vessels in the mesentery, with valves still absent by E18.5; edema observed at E14.5 and onward; pups presented chylous ascites (25). B) Global Cdh5 −/− : embryonic death at E9.5 (26).
Proposed function: VE-Cadherin is required for the response to fluid shear stress and thereby Beta-Catenin and AKT signaling, promoting Prox1 and Foxc2 expression (25).

These models tend to lead to loss of proper vascular development, typically manifested as edema of varying severity with the potential for embryonic death.
PROX1: A POWERFUL REGULATOR IN MANY TISSUES
Prox1 is often referred to as the master switch for lymphatic endothelial cell (LEC) specification and sprouting, being a vital marker for LECs. However, Prox1 is not restricted to the lymphatic endothelium alone; it has a major role in pushing hepatoblasts toward the hepatocyte phenotype in liver, regulating neurogenesis, promoting the development of the heart, and so on (44)(45)(46)(47). With these findings, Prox1 can be recognized as a cell fate switch in these tissues, playing a large role in cell differentiation. Even so, it is important to understand that the sets of genes that are induced or repressed by this transcription factor are cell type-specific; for example, Prox1 is found to promote the shift of colorectal cancer from benign to highly dysplastic, despite the lack of overlap between Prox1-induced genes in LECs vs. these colorectal cancer cells (48).
With advances in both lymphatic and cardiovascular research, it has been reported that there are unique vascular beds that possess a heterogeneous expression of blood and lymphatic vessel markers, leading to the characterization of "hybrid vessels." These specialized hybrid vessel beds include the Schlemm's canal of the eye and the placental spiral artery, among others. Schlemm's canal, a vascular structure in the eye that drains aqueous humor from the intraocular chamber back into the circulatory system, acquires lymphatic characteristics through Prox1 upregulation during postnatal development (49). Along with this determination of endothelial identity, Prox1 levels linearly correlate with Schlemm's canal function, where reduced levels indicate poor functionality (49). Recent discoveries have identified Prox1's involvement in placental spiral artery remodeling; Prox1 begins to be expressed at E11.5 in the spiral artery endothelium of mice to promote lymphatic mimicry (50). Spiral arteries supply maternal blood to the fetal side of the placenta and thereby the fetal vasculature, with poor spiral artery remodeling being associated with pregnancy complications such as preeclampsia (51,52). This dual expression of LEC and BEC markers can be seen across hybrid vessels, with Prox1 as a driver for these other LEC markers.
The aberrant expression of Prox1 has highlighted its role in endothelial cells (ECs). Its ectopic expression in blood endothelial cells (BECs) leads to an upregulation of lymphatic-specific genes, suggesting Prox1 is sufficient to program LECs (53). Ectopic Prox1 expression in human umbilical vein endothelial cells (HUVECs) and siRNA-mediated knockdown in LECs also revealed angiopoietin-2, forkhead box protein c2 (Foxc2), and homeobox D8 (HoxD8) as Prox1's targets for transcription (54). Prox1 knockouts in mice result in a loss of lymphatic markers such as lymphatic vessel endothelial hyaluronic acid receptor (LYVE-1), vascular endothelial growth factor receptor 3 (VEGFR-3) and the solute-carrier gene (SLC) superfamily, while gaining an expression of vascular markers such as laminin and CD34 (7). Indeed, Prox1 is able to induce LEC-specific gene transcription while suppressing BEC-specific genes (55). In addition to differential gene expression, knockout mouse models of transcription factor Prox1 presented complete or partial loss of lymphatic vasculature, resulting in death or a multitude of complications (14), including adult-onset obesity (15). Altogether, these findings suggest that Prox1 is not only necessary for LEC determination, but also lymphangiogenesis.
Prox1 can regulate the transcription of many genes through direct promoter binding. The Prox1 homeodomain consists of the characteristic helix-loop-helix-turn-helix fold structure that works together with the prospero domain to form a functional DNA-binding unit (56)(57)(58). Prox1 binds to the promoter of fibroblast growth factor receptor 3 (FGFR-3) in LECs, inducing its transcription to support lymphatic vessel development (59). In contrast, Prox1 binds to the matrix metallopeptidase 14 (MMP-14) promoter to repress its transcription (60). This finding suggests tumor-suppressive roles for Prox1 in cancer invasion, though its role in cancer is context- and tumor type-dependent; Prox1 has been shown to play oncogenic roles in oral squamous cell carcinoma, for example (61,62).
In addition to modulating transcription via direct DNA-binding, Prox1 may regulate gene expression through corepressor/coactivator activity. These interacting proteins range from a number of nuclear receptors to chromatin modifiers. The N-terminal end of Prox1 possesses a nuclear localization signal and two nuclear receptor boxes (61). In the liver, Prox1 plays a major role in energy homeostasis by limiting the cellular respiration rate; Prox1 possesses LXXLL interaction motifs that allow for its interaction with liver receptor homolog 1 (LRH1) as a corepressor (63,64), and it can inhibit ERRα/PGC-1α complex activity (65). It also co-regulates hepatocyte nuclear factor 4 alpha (HNF4α) transcriptional activity of cholesterol catabolizing enzymes (66). Furthermore, Prox1 functions as a corepressor of the retinoic acid-related orphan receptors (RORs) by interacting with their activation function 2 (AF2) domain, though their interactions are independent of these LXXLL motifs (67). Retinoic acid signaling pathways are known for their anti-proliferative and pro-apoptotic effects, serving as potential chemotherapeutic approaches to cancer (68). Prox1 may be promoting proliferation in cancer through the inhibition of these retinoic acid signaling pathways.
Prox1 is also involved with epigenetic mechanisms that involve chromatin modification, such as histone methylation and acetylation. In lens development, Prox1 has been reported to interact with the coactivator cAMP response element-binding protein (CREB)-binding protein (CBP) and/or p300 to upregulate crystallin gene expression via euchromatin formation (69). In contrast, histone deacetylase 3 (HDAC3)-Prox1 complexes were found to mediate a gene expression program important for lipid synthesis and lipolysis, where loss of either protein resulted in increases of liver triglyceride content (70). In the liver, Prox1 was also found to recruit lysine-specific demethylase 1 (LSD1) and HDAC2 to the cytochrome p450 family 7 subfamily A member 1 (CYP7A1) promoter, epigenetically silencing its transcription (71). In the context of colorectal cancer, Prox1 was found to interact with the nucleosome remodeling and deacetylase (NuRD) complex to suppress Notch signaling, thereby allowing these cancer cells to maintain their stem cell properties and growth advantage (72). This would further explain the differences in gene targets between LECs and colorectal cancer, as previously mentioned. Ultimately, these studies support the notion that Prox1 possesses roles dependent on tissue/organ function and disease context.
VEGFR-3, A "MASTER RECEPTOR" FOR LYMPHANGIOGENESIS
Vascular endothelial growth factor receptor (VEGFR) signaling regulates vascular function of both the cardiovascular and lymphatic systems, where its three different types have varying roles across both systems. Whereas VEGFR-2 is highly expressed in blood endothelial cells and is thereby crucial for angiogenesis, VEGFR-3 is required for the development of lymphatic vessels and is also important in early cardiovascular development, as VEGFR-3 is present in BECs at early embryogenesis (73)(74)(75). During later embryonic development, VEGFR-3 expression is largely restricted to LECs (14,76). VEGFR-1 is primarily expressed during hematopoietic cell development and recruitment (77). While these three receptor tyrosine kinases have distinct functions in separate tissue compartments, they may all converge to promote pathological vessel formation in lymphatic diseases and tumor-associated lymphangiogenesis (78).
These receptor tyrosine kinases bind to the VEGF family of homodimeric glycoproteins, which consists of five members in mammals: VEGF-A, VEGF-B, VEGF-C, VEGF-D, and placental growth factor (PLGF). This group belongs to the cystine-knot superfamily of hormones and extracellular signaling molecules, which all possess eight conserved cysteine residues that form this knot. VEGFR-3 binds to VEGF-C (79) and VEGF-D, stimulating lymphangiogenesis (80). VEGF is produced by a number of cell types, including macrophages, keratinocytes, and tumor cells (81)(82)(83). Besides vascular development, VEGF plays roles in bone formation, hematopoiesis and wound healing (84)(85)(86). With regards to wound healing, ongoing developments on the cardiac lymphatics have highlighted their roles in the resolution of inflammation; VEGF-C treatment following myocardial ischemia and/or infarction allows for increased lymphatic drainage of excess proteins (e.g., pro-inflammatory mediators), immune cells (e.g., macrophages) and fluid, which ultimately promotes cardiac function (87)(88)(89). Lymphatic vessels were found to have increased diameter with higher VEGF-C doses (12).
Where Prox1 is often referred to as a master switch in specifying LEC fate, VEGFR-3 could be viewed as a master receptor for LEC sprouting and migration. Knockout studies on VEGFR-3 and its ligands helped to define their distinct spatiotemporal roles in vessel formation, presenting a few differences. While heterozygous Vegfr3 mice and embryos appeared phenotypically normal, complete knockouts of the Vegfr3 gene results in pericardial effusion and cardiovascular failure by E9.5 (17). These findings explain the importance of VEGFR-3 during cardiovascular development, prior to their restriction to lymphatic vessels.
On the other hand, Vegfc −/− mice were found to have endothelial cells commit to LEC fate as usual, but these cells were unable to sprout and form lymph vessels, resulting in prenatal death due to fluid accumulation (16). Half of these mutant embryos were found to die between E15.5 and E17.5 (16), which contrasts with the time of death of Prox1 null mice, which primarily takes place at E14.5 (14). These deaths are likely attributed to a combination of phenotypic alterations, though the lymphatic phenotype likely plays a significant role, due to notes of fluid accumulation across the body.
Similar to Prox1 +/− mice, surviving Vegfc +/− mice have an underdeveloped lymphatic system, presenting lymphatic hypoplasia and lymphedema (7,16). These findings complement the surrounding discoveries of VEGFR-3 function during early development. Interestingly, VEGF-D was unable to compensate for VEGF-C during embryonic lymphatic development, suggesting the necessity of VEGF-C in promoting Prox1-expressing LEC migration and thereby lymph sac formation (90). Given the importance of VEGFR-3 in the proper organization of blood vessels during early embryogenesis, this suggests that VEGF-D is more integral to cardiovascular development. This elucidation of VEGF-C's role in lymphangiogenesis has implicated it in cancer progression (91,92), serving as an effective predictive marker for lymph node metastasis in some cancers (93,94).
RAS, AN ESSENTIAL MEDIATOR IN VASCULAR DEVELOPMENT
Lymphangiogenesis is regulated by various signaling cascades mediated by VEGFs/VEGFRs (90) (Figure 1). Further investigation has shown the importance of Ras in modulating multiple signaling pathways following VEGFR activation (101).
In LECs, much of Ras signaling is dependent on VEGFR-3 activation. VEGF-C is primarily received by VEGFR-3, which leads to receptor dimerization, transphosphorylation and interaction with growth factor receptor-bound protein 2 (GRB2). This leads to the eventual recruitment of guanine nucleotide exchange factors such as the son of sevenless (SOS) protein, which activates the small GTPase Ras via GTP binding. Ras can be inactivated by GTPase-activating proteins (GAPs), which promote its GTPase activity to hydrolyze GTP to GDP, resulting in an inactive GDP-bound state. Active GTP-bound Ras leads to the activation of multiple pathways, including the AKT (Protein Kinase B) and ERK (Extracellular Signal-Regulated Kinase) pathways, the latter of which is also referred to as the MAPK (mitogen-activated protein kinase) pathway.
These pathways are heavily involved in promoting survival, proliferation and migration, which are integral to lymphangiogenic sprouting (90,102). Interestingly, the activation of each pathway is dependent on the dimerization state of VEGFR-3 upon VEGF-C stimulation; the formation of a VEGFR-3/VEGFR-2 complex activates AKT signaling, whereas ERK signaling is activated following VEGFR-3 homodimerization (103). Furthermore, there is crosstalk between the ERK pathway and the AKT pathway; phosphorylation of Raf by AKT leads to the inhibition of the Ras-Raf-MEK-ERK cascade (95), which can be seen in proliferating cells (104). In contrast, phosphoinositide-dependent kinase 1 (PDK1), which is activated by PI3K activity, can phosphorylate a downstream target of the ERK pathway, ribosomal S6 kinase 2 (RSK2), leading to its full activation (96). This suggests that the activation of multiple pathways may serve to regulate one another.
As the Ras pathways play an essential role in regulating cell cycle, growth, differentiation and survival, their dysregulation leads to severe consequences. This has been well-documented in oncogenesis since their discovery as the first human oncogenes over three decades ago, where it is strongly argued that Ras gain-of-function somatic mutations play a causative role in human tumorigenesis (97). There are three canonical Ras genes (H-Ras, N-Ras, and K-Ras), which vary in distribution and frequency across different organs/cancers (105); K-Ras mutations are present in a majority of pancreatic ductal cancers but uncommon in bladder tumors, where H-Ras mutations are more likely detected (97).
FIGURE 1 | Overview of the ERK and AKT pathways in lymphangiogenesis. Both pathways influence different aspects of lymphatic remodeling upon activation of VEGFR-3, leading to downstream phosphorylation of signaling proteins (90). Crosstalk occurs across the pathways primarily as a means of negative regulation (95)(96)(97). Current therapies to inhibit these signaling pathways involve the sequestration of VEGFR and its ligands (98)(99)(100). Created with BioRender.com.

Further studies revealed the importance of Ras signaling during development, with a collection of unique mutations leading to disorders commonly referred to as "RASopathies." RASopathies are a class of developmental disorders caused by germline mutations in important regulators of the Ras-ERK pathway (106), though PI3K-AKT signaling participates in this pathophysiology as well (107). These disorders
include Noonan syndrome, Cranio-facio-cutaneous syndrome (CFCS), and LEOPARD syndrome, which are affiliated with ∼20 different disease genes in these pathways but present similar symptoms: congenital heart defects, postnatal proportionate short stature, developmental delay, facial dysmorphism, and so on (107). Additional information regarding these RASopathies such as protein function, chromosomal location and phenotype are excellently summarized in a past review (106). Further investigation has revealed that patients with Noonan or CFCS syndrome had a consistent pattern of bilateral lower limb lymphedema and chylous reflux (108).
Mouse knockout and overexpression studies of Ras have presented lymphatic vascular hypoplasia and hyperplasia, respectively (21). Studies have shown that constitutively active Ras speeds up cell migration and thereby wound healing (109), which would be interesting to investigate in the context of cardiac function following infarction. While Ras has been shown to be invaluable in the regulation of multiple signaling pathways, each Ras isoform plays different roles throughout the body. This is because the Ras isoforms are differentially expressed, with their dysregulation leading to different but related lymphatic vascular disorders (110). For example, Noonan syndrome and CFCS involve varying K-Ras activating mutations, in which both disorders present scenarios where Ras activity cannot be properly negatively regulated (111,112). K-Ras mutations are also the most frequent of the three isoforms in a majority of cancers, such as pancreatic, colon and lung cancer (113). Despite this differential expression, the Ras isoforms share common mutation sites at G12, G13, and Q61. The majority of H-Ras missense activating mutations occur at G12, which is integral to Q61 orientation for GAP-promoted GTPase activity and thereby inactivation of Ras (114). The same can be said of K-Ras, where G12 mutations comprise 83% of all K-Ras mutations, whereas N-Ras is predominantly mutated at Q61 (115).
Dysregulation in the negative regulators of Ras has also resulted in developmental defects and abnormal lymphatic vasculature. For example, the knockout of Spred-1 and NF1 result in similar RASopathies, Legius syndrome, and Neurofibromatosis-1, respectively (116). Double knockout studies of Spred-1 and Spred-2 found that these proteins were essential for embryonic lymphangiogenesis, resulting in embryonic death from E12.5-15.5 whilst presenting edema and dilated lymphatic vessels (24). These knockouts resulted in increased ERK phosphorylation and subsequent activation. NF1 knockdown in HUVECs appeared to present the same effects, where cells proliferated at an increased rate and failed to undergo normal branching morphogenesis (117). It was later found that Spred-1 and Spred-2 interact with NF1 to downregulate Ras-GTP levels and subsequent pathway activation; Spred-1 induces plasma membrane localization of NF1, which acts as a GAP on Ras (118). However, Spred-1 has been reported to act in a Ras isoform-specific manner, where Spred-1 prevents K-Ras membrane anchorage but not H-Ras (119). This would suggest the involvement of other regulators in this pathway that have yet to be uncovered. Rasa1, which codes for p120-RasGAP, has been shown to be repressed in colorectal cancer, allowing for the increased activation of Ras (120). Loss-of-function Rasa1 mutations (e.g., A3070T, C2245T) have been found to cause capillary malformation-arteriovenous malformation (CM-AVM) (121,122), vascular abnormalities that can range from macular staining to abnormal bleeding and life-threatening complications. Systemic loss of Rasa1 resulted in lymphatic vessel disorders characterized by extensive vessel hyperplasia and leakage, as well as early lethality due to chylothorax (22). Follow-up studies found that Rasa1 regulates lymphatic vessel valve function, in which LEC apoptosis around these valves explains the leakage defects (123). Further investigation has revealed that Rasa1 disruption impairs the export of collagen IV from ECs during developmental angiogenesis, which leads to apoptotic death due to endoplasmic reticulum stress (124).
THE AKT PATHWAY AND LYMPHANGIOGENESIS
The PI3K-AKT pathway is highly conserved and is controlled through a multistep process. Phosphatidylinositol 3-kinase (PI3K) can be activated by one of three means: the direct binding of PI3K's regulatory subunit p85α by (1) Ras, (2) the scaffolding protein known as growth factor receptor-bound protein 2 (Grb2)-associated binder (GAB), or (3) the receptor tyrosine kinase itself (125). PI3K then phosphorylates phosphatidylinositol (4,5)-bisphosphate (PIP2) to phosphatidylinositol (3,4,5)-trisphosphate (PIP3). This process can be negatively regulated by a PIP3 phosphatase known as Phosphatase and Tensin Homolog (PTEN), whose deletion or mutation can lead to tumorigenesis and excessive angiogenesis (126,127). PIP3 serves as a docking phospholipid that binds to AKT, allowing PDK1 to access and phosphorylate AKT's T308 in the activation loop (128). From there, AKT can phosphorylate a number of targets such as the tuberous sclerosis 1 and 2 (TSC1-TSC2) complex, mammalian target of rapamycin complex 1 (mTORC1) and Caspase 9, thereby promoting protein synthesis and survival (129,130). In addition to this, PDK1 and mTORC1 can activate ribosomal protein S6 kinase (S6K), which also stimulates protein translation (131,132).
Aberrations in AKT signaling represent a broad spectrum of human diseases such as cancer, immunological disorders, and cardiovascular disease (133). Overactivation of this pathway via AKT hyper-phosphorylation tends to lead to excessive cell growth and division. AKT-mediated signaling was well-known in blood vascular development and was eventually shown to be required for proper lymphatic development; AKT-deficient mice (Akt1 −/− ) presented decreases in lymphatic capillary diameter, losses of the valves typically present in the collecting lymphatic vessels, and sparse smooth muscle coverage of such vessels (23). Follow-up studies found that the phosphorylation of AKT and ERK1/2 played large roles in VEGF-A/VEGFR-2-mediated lymphangiogenesis, as the inhibition of either protein's upstream kinase led to decreased LEC migration and proliferation (134). These discoveries have opened new avenues for treating lymphatic disease, as AKT's involvement in lymphangiogenesis became clearer. Our lab has gotten involved with these treatments, reporting that 9-cis retinoic acid may be a promising therapeutic agent against secondary lymphedema, as retinoic acid could promote the proliferation, migration and tube formation of LECs via FGFR signaling and thereby AKT activation (135).
PI3K is composed of four subgroups (class Ia, Ib, II, III), though growth factor receptors primarily activate class Ia, dimeric proteins consisting of a catalytic and a regulatory subunit (136). Mutations in both subunits have been shown to impair lymphatic sprouting and maturation. For example, the deletion of Pik3r1, which encodes the regulatory isoforms p85α, p55α, and p50α, led to intestinal lymphangiectasia marked by increases in lymphatic endothelial endoglin expression. Interestingly, the effects varied by organ site, with arrested lymphatics in the diaphragm but invading lymphatics in the gut (136). Many distinct activating mutations in PIK3CA, which encodes one of four catalytic isoforms of the class I PI3Ks, represent mutation hotspots in human tumors (133). In addition to this, constitutively active PIK3CA mutations were found to be expressed in LECs and vascular endothelial cells (VECs) in capillary lymphatic venous malformations, leading to continuously phosphorylated AKT and hyperproliferation of these cells (137). These gain-of-function mutations typically either bypassed PI3K's requirement to interact with Ras (H1047R) or disrupted the regulatory subunit interface (E542K/E545K), leading to the pathway's hyperactivation (138,139). Given this pathway's involvement in both lymphangiogenesis and angiogenesis, many vascular overgrowth disorders are associated with these mutations (140).
Recently, the combined treatment of VEGF-C trap and rapamycin, but neither treatment alone, were found to promote lesion regression in PIK3CA H1047R -driven lymphatic malformations, through lymphatic vasculature regression and blockage of LEC proliferation (141). This highlights the importance of combination therapy on both upstream and downstream elements of a mutant effector molecule, suggesting the combined impact of other signaling proteins in the generation of these physiological changes. Other studies found that PTEN knockouts in endothelial cells result in increased cancer susceptibility and embryonic lethality, due to aberrant differentiation, hyperproliferation and disorganized vasculature (142,143).
Interestingly, the regulation of lymphangiogenesis through PI3K is not entirely restricted to Ras-mediated activation. VEGF-C can induce PI3K-dependent AKT activation through VEGFR-3, where VEGFR-3 forms a complex with the PI3K regulatory subunit p85 (144). As seen with other receptor tyrosine kinases, insulin receptor substrates (IRS) such as GAB are typically recruited as adaptor proteins for PI3K regulation and activation (145), suggesting that they may be recruited by VEGFR-3. In alignment with these reports, IRS blockade was found to inhibit lymphangiogenesis by reducing proliferation and VEGF-A expression in LECs (146).
AKT modulates multiple cellular functions through inhibitory and activating phosphorylation events. AKT is well-known for promoting cell growth through inhibiting TSC2 and thereby activating mTORC1, which initiates translation and ribosome biogenesis (145,147). In addition to this, AKT can inactivate cyclin-dependent kinase inhibitors such as p21 and p27, allowing for cell cycle progression (148). AKT can also inhibit apoptosis by blocking the function of pro-apoptotic Bcl-2 family proteins such as Bax and Bim (145).
THE EXTRACELLULAR SIGNAL-REGULATED KINASE (ERK) PATHWAY AND LYMPHANGIOGENESIS
The ERK/MAPK pathway relies on the binding of growth factors to induce a series of phosphorylation cascades, which begins with the activation of Raf by GTP-bound Ras. From there, Raf phosphorylates MEK, which phosphorylates ERK, allowing it to phosphorylate many downstream targets in both the cytoplasm and nucleus. This system provides opportunities for feedback regulation as well as signal amplification with each subsequent phosphorylation event.
Gain-of-function mutations in the Ras/Raf signaling cascade present lymphatic defects such as lymphangiectasia, which is prominent in patients with Noonan and LEOPARD syndrome (149,150). RAF1 in particular has been recognized as a major effector whose gain-of-function mutations cause Noonan and LEOPARD syndromes, emphasizing the importance of the ERK pathway activation in developmental disorders (151,152).
Following these discoveries, many studies investigated the dysregulation of the ERK pathway through loss- and gain-of-function mutants in upstream and downstream elements of Ras. VEGFR-3 bearing deletions in the cytoplasmic domain at tyrosines Y1226/7 prevented ERK phosphorylation and lymphatic sprouting, which were rescued with autonomous ERK activation (153). Indeed, it was found that hyperactivation of the ERK pathway resulted in increased LEC proliferation or fate specification, where gain-of-function mutant RAF1 S259A embryos presented lymphangiectasia (154). Loss of negative regulation has presented similar results; the loss of Foxc1 and Foxc2 led to decreases in the GAPs encoded by Rasa4 and Rasal3, resulting in increased ERK activation and thereby LEC proliferation and enlarged lymphatic vessels (20). Recently, ERK pathway inhibition has shown some promise as a treatment for lymphatic anomaly: an advanced anomalous lymphatic disease patient possessing a recurrent gain-of-function ARAF mutation was unresponsive to mTOR inhibition but responded to MEK inhibition, marked by decreased lymphedema and improvement in pulmonary function (155). Together, these studies make it evident that the ERK pathway is vital to lymphatic remodeling.
ERK possesses numerous downstream targets, including other kinases and transcription factors. Many of the transcription factors that are directly activated by ERK, such as activator protein 1 (AP-1), c-Myc, and erythroblastosis virus oncogene homolog 1 (Ets-1), were discovered as proto-oncogenes (156). S6K has also been recognized as a target of ERK in cardiomyocytes (157). Kinases downstream of ERK, such as RSKs, have been found to activate CREB and cyclic AMP-dependent transcription factor 1 (ATF-1), transcription factors that are also implicated in cell transformation (156,158). Furthermore, ERK and RSK can inhibit TSC1-TSC2 complex activity via phosphorylation (159).
Given this range of targets, it is difficult to ascertain which of ERK's downstream targets are integral to lymphatic sprouting and differentiation. Increases in ERK signaling through ectopic expression and mutant Raf produced increases in Prox1 and other LEC-specific genes, highlighting the broad induction of lymphatic fate determination (154). This finding is consistent with the literature, as Ras-ERK signaling leads to the activation of many transcription factors through direct phosphorylation or the subsequent activity of other downstream effectors.
VEGFR-3, SIGNALING PATHWAYS AND PROX1 INTERACTION: FEEDBACK LOOPS
LECs require the stable expression of Prox1 to maintain their identity; Prox1 siRNA-treated LECs revert to a BEC phenotype (160). Due to this constant expression, Prox1 is heavily used as a lymphatic marker. Its transcriptional activity is frequently monitored to determine the effects of signaling pathways, as increased Prox1 leads to the upregulation of other lymphatic markers and ultimately lymphangiogenesis. While VEGFR-3 signal transduction has been recognized as one of the major pathways involved in lymphangiogenesis, the mechanisms by which ERK and PI3K may coordinate Prox1 activity remain unclear.
Several studies have suggested that Prox1 participates in a number of positive feedback loops by promoting the transcription of key signaling proteins and other transcription factors that target the Prox1 promoter and enhancer regions. For example, HoxD8 levels are significantly higher in LECs than BECs, as its transcription is upregulated by Prox1 and positively regulates Prox1 transcription in return (54). Chicken ovalbumin upstream promoter-transcription factor 2 (Coup-TFII), which was shown to act as a coactivator for Prox1, is also required for the initiation and maintenance of Prox1 expression in LECs, as Coup-TFII can directly bind to a conserved binding site in a regulatory region ∼9.5 kb upstream of the Prox1 ORF (161). However, it has been suggested that ERK signaling is capable of inducing Prox1 expression in the absence of Coup-TFII (154).
In LECs, Prox1 has also been shown to interact with multiple transcription factors for its self-regulation as well as for expression of VEGFR-3. The nuclear hormone receptor Coup-TFII is required for the initiation and maintenance of Prox1 expression in LECs (161) and acts as a coactivator of Prox1 to promote FGFR3 and VEGFR-3 transcription (162). The transcription of APN, a potent regulator of angiogenesis, is induced by Ras-mediated phosphorylation of Ets-2 (163), which also interacts with Prox1 to bind the VEGFR-3 promoter (164).
Given the role of VEGFR-3 in lymphangiogenesis, the sustained expression of this receptor in LECs is integral to its maintenance. Interestingly, Prox1 has been shown to maintain LEC identity through a Prox1-Vegfr3 feedback loop, where the downregulation of either one results in the downregulation of the other (165). The activation of this signaling pathway ultimately induces Prox1 transcription. Ras/ERK signaling was found to mediate p300 recruitment through Ets activation, leading to histone acetylation on the Vegfr3 gene in LECs (166). Given Prox1's interaction with Ets-2 (165), Prox1 may be involved with this p300 recruitment to enable VEGFR-3 transcription.
Research on cancer metastasis has implicated potential roles for Prox1 in the regulation of VEGF-C autocrine signaling. It was discovered that CCAAT-enhancer binding protein-delta (C/EBP-δ) upregulates VEGF-C and VEGFR-3 expression in LECs in lung cancer, where hypoxia could induce this transcription factor's expression (167). Following this discovery, cultured oral squamous cell carcinoma cell lines showed increased levels of Prox1 in the "highly-metastatic" lines, which was found to activate VEGF-C expression (34). Together, these findings suggest cooperation between Prox1 and C/EBP-δ to modulate VEGFR-3 signaling in LECs.
PROBLEMS WITH CURRENT ANTI-ANGIOGENESIS THERAPIES
Inhibition of angiogenesis has been shown to be a viable anticancer strategy, with VEGFs and VEGFRs as major targets for these treatments. Examples include the anti-VEGF-A monoclonal antibody bevacizumab, which has been approved for combination use with chemotherapy for cancer (168,169), and small-molecule inhibitors such as sorafenib and sunitinib, which target a broad range of receptor tyrosine kinases in hopes of blocking multiple pathways at once. Many existing clinical trials are currently investigating the effects of these drugs in combination with one another or with other current therapies, in hopes of improving patient outcomes.
Despite these advances, many cancers have achieved resistance to anti-angiogenesis therapies through adaptive mechanisms such as upregulation of alternative angiogenic factors, recruitment of vascular progenitor cells, and increased pericyte coverage, to name a few (170). Even bevacizumab, considered one of the most effective anti-angiogenic drugs to date, yields only modest improvements, adding around 5 months of progression-free survival (PFS) in patients with metastatic colorectal cancer (98).
With these limited results, researchers have also looked at targets further downstream in these pathways. Several MEK and mTOR inhibitors have been approved by the US Food and Drug Administration (FDA) or are undergoing clinical trials, though their impact appears to add only a few months to PFS. Trametinib (a MEK inhibitor) and dabrafenib (a BRAF inhibitor) are used in combination for metastatic melanoma presenting a BRAF V600 activating mutation, adding around 2.2 months of PFS compared to dabrafenib alone (Clinical Trial NCT01584648) (171). mTOR inhibitors such as everolimus were found to increase PFS by around 2.1 months for patients with metastatic renal cell carcinoma that had progressed on receptor-targeting drugs such as sunitinib (Clinical Trial NCT00510068) (172).
As characterized in previous reviews, there are no FDA-approved selective agents that specifically suppress lymphangiogenesis (173), despite the growing evidence of its contributions to cancer and other diseases. Anti-lymphangiogenesis therapies are following the same approach as anti-angiogenesis therapies, targeting the key receptors and growth factors of the lymphatic system. A phase I study of the anti-VEGFR-3 monoclonal antibody IMC-3C5 was completed in solid tumors and colorectal cancer; it established a tolerable dose but has so far observed only minimal anti-tumor activity (Clinical Trial NCT01288989) (99). A phase I study also evaluated the anti-VEGF-C monoclonal antibody VGX-100 in combination with bevacizumab against advanced solid tumors (Clinical Trial NCT01514123) (100). While these therapies may be promising, they may encounter the same resistance seen with current anti-angiogenesis therapies, highlighting the need for a different group of targets.
POST-TRANSLATIONAL MODIFICATIONS AND PROX1: A PROMISING FRONTIER ON LYMPHANGIOGENESIS
As transcription factors have the capability of reconfiguring cellular physiology and function, their regulation through molecular modification is essential to proper function. Post-translational modifications (PTMs) play a large role in modulating protein stability, protein-protein interaction, DNA binding, and subcellular localization, among other properties (174). Many of these PTMs occur as individual, isolated events that dictate some aspect of transcription factor function, though some PTMs are sequentially linked, enabling (or inhibiting) one another (175).
Two candidate sumoylation sites on Prox1, K353 and K556, have been predicted using SUMOplot analysis. In addition, both sites are highly conserved across species, including humans, mice, chicken, and zebrafish (177). Sumoylation of Prox1 K556 by small ubiquitin-related modifier 1 (SUMO-1) enables Prox1 binding to the VEGFR-3 promoter to upregulate transcription (177). Without this sumoylation, VEGFR-3 signaling was significantly diminished, leading to impaired sprouting. This may in part explain the feedback loop between Prox1 and VEGFR-3, though it remains unknown how VEGFR-3 signaling impacts Prox1. Further investigation found that ectopic expression of K556R mutant Prox1 could not induce a lymphatic phenotype in HUVECs (177), in contrast to previous studies with wild-type Prox1 (54). These findings highlight the importance of sumoylation for Prox1 functionality.
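Consensus-motif scanning of the kind performed by SUMOplot can be illustrated with a minimal sketch. The canonical sumoylation consensus is Ψ-K-x-D/E, where Ψ is a large hydrophobic residue; the regex and toy sequence below are illustrative only and are not the SUMOplot algorithm or the Prox1 sequence.

```python
import re

# Canonical sumoylation consensus Psi-K-x-(D/E); Psi taken here as one of
# A, I, L, M, F, P, V. A bare-bones scan, not the SUMOplot scoring scheme.
SUMO_MOTIF = re.compile(r"[AILMFPV]K.[DE]")

def candidate_sumo_sites(seq: str) -> list[int]:
    """Return 1-based positions of the modifiable lysine in each match."""
    return [m.start() + 2 for m in SUMO_MOTIF.finditer(seq)]

# Toy sequence (hypothetical, not Prox1): lysines at positions 4, 10, and 16
# sit in consensus context.
print(candidate_sumo_sites("MAVKTEPLIKSEAALKHD"))  # -> [4, 10, 16]
```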
PTMs on Prox1 also impact its corepressor activity; sumoylation at K353 inhibits the Prox1 interaction with HDAC3 in LECs, thereby downregulating Prox1's corepressor activity (176). Sumoylation may thus be vital to the previously mentioned roles of Prox1 in the liver, given its interactions with HDAC3, HDAC2, and LSD1.
Not much is known about the roles of other Prox1 PTMs, which include acetylation, methylation, ubiquitylation and phosphorylation. Comprehensive online resources such as PhosphoSitePlus have compiled data sets from journal articles to identify modifiable sites on Prox1 (Figure 2). Acetylation sites have been detected in the homeo-prospero domain using liquid chromatography mass spectrometry, suggesting this modification's potential involvement in DNA-binding activity. Several phospho-sites spanning Prox1 have also been identified using mass spectrometry-based approaches (178,179). Prox1 methylation has not been documented through these approaches. In short, the roles of these modifications and the enzymes responsible remain unknown.
CONCLUDING REMARKS
Lymphangiogenesis is a complex process that is heavily tied to VEGFR-3 signaling and Prox1 activity. The feedback loops that these proteins manage are essential not only to sprouting capability but also to the maintenance of LEC fate. RASopathies have highlighted the importance of the ERK and PI3K pathways in lymphatic development, where many of these genetic syndromes are attributed to gain-of-function mutations in their upstream elements. Consequences of hyperactivation of these pathways include both proliferation and cell death, and these seemingly paradoxical effects result in leaky vasculature.
The roles of these signaling pathways have been well characterized in lymphatic disease and cancer, suggesting the benefits of dual pathway inhibition. The same could be said about stimulation, given the lymphatic system's roles in wound healing and resolution of inflammation. It would be crucial to target the activity of both negative and positive regulators throughout these pathways, as cancer may overcome the targeting of either group through overexpression or repression of the other. It may be worthwhile to investigate whether this combinatorial approach leads to synergistic or additive effects on lymphangiogenesis.
These signal transduction events modulate the activity of transcription factors that act with Prox1, or possibly of Prox1 itself. However, there remains some disconnect between these signaling pathways and Prox1; we lack information on the causes and impact of post-translational modifications of Prox1. As seen with sumoylation, these modifications can significantly impact Prox1 DNA binding and coactivator/corepressor activity, ultimately influencing lymphangiogenesis. Given Prox1's vital role in a number of organ systems, it will be important to determine how these signaling pathways lead to such modifications and affect its lymphangiogenic ability. This knowledge will point to potential therapeutic targets that may overcome the resistance seen with VEGF and VEGFR targeting.
AUTHOR CONTRIBUTIONS
KB drafted and wrote the literature review. KB and Y-KH contributed to the review and final approval for the literature review. All authors contributed to the article and approved the submitted version.
FUNDING
This work was supported by National Institutes of Health (HL141857, HL141857, and CA250065). | 2020-11-12T14:07:15.059Z | 2020-11-12T00:00:00.000 | {
"year": 2020,
"sha1": "47ae3aa05152ab65ec70d3bc3d2ecd2be3b25bbe",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fcvm.2020.597374/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "47ae3aa05152ab65ec70d3bc3d2ecd2be3b25bbe",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250450853 | pes2o/s2orc | v3-fos-license | Photonic heat transport from weak to strong coupling
Superconducting circuits provide a favorable platform for quantum thermodynamic experiments. An important component for such experiments is a heat valve, i.e. a device which allows one to control the heat power flowing through the system. Here we theoretically study a heat valve based on a superconducting quantum interference device (SQUID) coupled to two heat baths via two resonators. The heat current in such a system can be tuned by magnetic flux. We investigate how the heat current modulation depends on the coupling strength g between the SQUID and the resonators. In the weak coupling regime the heat current modulation grows as g^2, but, surprisingly, at intermediate coupling it can be strongly suppressed. This effect is linked to the resonant nature of the heat transport at weak coupling, where the heat current dependence on the magnetic flux is a periodic set of narrow peaks. At intermediate coupling, the peaks become broader and overlap, thus reducing the heat modulation. At very strong coupling the heat modulation grows again and finally saturates at a constant value.
I. INTRODUCTION
Quantum thermodynamics attracts a lot of attention both from the fundamental physics viewpoint and due to potential applications in nanoscale devices [1][2][3]. In this context, understanding heat transport in nanoscale systems is very important [4][5][6][7][8][9][10]. Precise control and tuning of the heat power is essential for the design of quantum heat engines [11][12][13][14][15][16][17], thermal rectifiers [18][19][20][21], transistors [22,23], masers [24] and circulators [17]. Such thermal devices can also be used for heat management in quantum circuits [10,25]. In superconducting circuits one can control the heat current by tuning the transition frequencies of a qubit [9,26,27] and measure it very accurately employing, for example, normal metal-insulator-superconductor junctions as thermometers [28]. Heat transport experiments in superconducting circuits can be performed at very low temperatures, where the photonic heat flux dominates over phononic and electronic contributions [29]. In such systems, heat can be transmitted over macroscopic distances [30], which permits remote management of the heat.
Thermodynamics of systems weakly coupled to the environment has been studied extensively, and there have also been many extensions of the theory to the strong coupling regime [31][32][33][34][35]. One of the difficulties in this context is the ambiguity in the definition of heat at strong coupling [36]. Here we consider a system consisting of two small normal metal islands coupled to two coplanar waveguide resonators, which are, in turn, coupled to a superconducting quantum interference device (SQUID) tunable by magnetic flux (see Fig. 1). We express the heat in terms of the temperature changes of the metallic islands playing the role of thermal baths. Thus, in our model the heat is defined via the changes of the internal energies of the baths. This definition is inspired by the experiments mentioned above, and the heat defined in this way can be experimentally measured regardless of the coupling strength between the SQUID and the baths.
The device depicted in Fig. 1 is intended to operate as a heat valve, which allows one to tune the heat flux between the resistors by changing the critical current of the SQUID with the magnetic flux. The performance of the valve is characterized by the heat current modulation amplitude, i.e. by the difference between the maximum and the minimum values of the heat current. We investigate how the heat modulation varies with various parameters and obtain two surprising results. First, we find that the modulation of the heat depends on the coupling strength between the SQUID and the resonators in a non-monotonic way. Indeed, the modulation grows with the coupling strength in the weak coupling regime, it almost vanishes at intermediate coupling, then it grows again and eventually saturates at very strong coupling. Second, the strongest heat modulation is achieved in the weak coupling regime. In our modelling we use feasible parameters [9,26,27], and we believe that our predictions can be experimentally tested. Finally, we have derived analytical expressions for the heat flux in various limiting cases in terms of the circuit parameters.
We organize the paper as follows: in Sec. II we introduce the model and analytically analyze the weak, intermediate, and strong coupling regimes, and in Sec. III we summarize the results.
II. THE MODEL
We consider the electric circuit depicted in Fig. 1. In this circuit, two normal metal islands, having the same resistance R and kept at constant temperatures T_1, T_2, act as heat baths. The temperatures T_j (j = 1, 2) can be experimentally monitored using biased normal metal-insulator-superconductor junctions [28]. Two identical superconducting coplanar waveguide λ/4 resonators with characteristic impedance Z_0 serve as filters. The resonators are coupled to the SQUID via the capacitors C_{c,j}. The frequencies of the resonators, ω_1 and ω_2, may slightly differ to compensate for the difference between C_{c,1} and C_{c,2}, as we explain below.
In this setup, the SQUID can act as a quantum heat valve [10,26]. Indeed, it provides a control parameter, the external magnetic flux, which one can use to tune the heat current through the system. We assume that the SQUID is symmetric and that its critical current I_C periodically depends on the magnetic flux Φ as

I_C(Φ) = I_{C,0} |cos(πΦ/Φ_0)|.  (1)

Here I_{C,0} is the critical current at zero flux and Φ_0 is the magnetic flux quantum. The SQUID is characterized by the Josephson energy E_J(Φ) = ħI_C(Φ)/2e and by the charging energy E_C = e²/[2(C_{c,1} + C_{c,2} + C)], where C is the capacitance of the SQUID, see Fig. 1. Here we consider the limit E_J(0) ≫ E_C. In this case, the two non-linear Josephson junctions of the SQUID can be approximately replaced by an inductor L_J(Φ) = ħ/[2eI_C(Φ)], and the SQUID as a whole by an LC circuit with the frequency

ω_J(Φ) = 1/√[L_J(Φ)(C + C_{c,1} + C_{c,2})] = √[8E_J(Φ)E_C]/ħ.  (2)

To describe the transport of heat by photons between the resistors 1 and 2, we use a quasiclassical Langevin equation in which the power spectra of the Nyquist noises generated by the resistors are determined by the fluctuation-dissipation theorem [37]. In this way, we obtain the following expression for the heat current from the resistor 2 to the resistor 1 [38]:

J = (1/2π) ∫₀^∞ dω ħω τ(ω, Φ) [N_2(ω) − N_1(ω)].  (3)

Here N_j(ω) = 1/[exp(ħω/k_B T_j) − 1] are the Bose functions and τ(ω, Φ) is the transmission probability, which depends on frequency and magnetic flux. The transmission probability τ(ω, Φ) equals the absolute square of the transmission coefficient between the two resistors, |S_21(ω, Φ)|². Eq. (3) has the familiar form of the Landauer formula for the photon current [4]. For the circuit under consideration, see Fig. 1, the transmission probability τ(ω, Φ) is given by the circuit expression (4) of Refs. [9,38], which involves the impedances of the resonators, Z_j(ω) (Eq. (5)), and the impedance of the SQUID (Eq. (6)); the explicit formulas did not survive extraction and are not reproduced here. In Eq. (5) the angular frequencies ω_j correspond to the fundamental modes of the uncoupled resonators. In Figs. 2 and 3 we plot, respectively, the transmission probability (4) and the heat current (3) evaluated numerically. For the numerical simulations we have used parameter values typical for experiment: Z_0 = 50 Ω, R = 2 Ω, ω_1/2π = ω_2/2π = 8.84 GHz, C = 58.7 fF, I_C = 291 nA, T_2 = 300 mK, T_1 = 150 mK. In the subsequent subsections we discuss various approximate approaches, which allow us to find analytical expressions for the heat current and to understand the underlying physics.
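As a numerical illustration of the Landauer-type integral (3), the sketch below evaluates the heat current for an assumed Lorentzian transmission peak of width κ centered at the resonator frequency; the Lorentzian form and the chosen linewidth are stand-ins for the full circuit expression (4), not the paper's actual model.

```python
import numpy as np
from scipy.integrate import quad

hbar = 1.054571817e-34  # J s
kB = 1.380649e-23       # J/K

Omega_r = 2 * np.pi * 8.84e9  # resonator frequency from the text (rad/s)
kappa = 2 * np.pi * 50e6      # assumed linewidth (rad/s), illustrative only

def bose(w, T):
    """Bose function N(w) = 1/(exp(hbar*w/(kB*T)) - 1)."""
    return 1.0 / np.expm1(hbar * w / (kB * T))

def tau_lorentzian(w):
    """Assumed transmission peak, standing in for the circuit formula (4)."""
    return (kappa / 2) ** 2 / ((w - Omega_r) ** 2 + (kappa / 2) ** 2)

def heat_current(T1, T2, w_max):
    """Landauer-type photonic heat current, Eq. (3)."""
    f = lambda w: hbar * w * tau_lorentzian(w) * (bose(w, T2) - bose(w, T1))
    # 'points' flags the narrow resonance so quad does not step over it.
    J, _ = quad(f, 1e6, w_max, points=[Omega_r], limit=500)
    return J / (2.0 * np.pi)

# Bath temperatures from the text; the result is of order 0.1 fW.
print(heat_current(T1=0.150, T2=0.300, w_max=20 * Omega_r))
```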
A. Qualitative discussion
The transmission probability (4) has peaks at frequencies corresponding to the eigenmodes of the whole system "two resonators plus SQUID". The position, the height and the width of these peaks depend on magnetic flux. In Fig. 2 we plot the function τ(ω, Φ) for three different values of the coupling strength between the SQUID and the resonators. In this figure and in the rest of the paper, we assume that the maximum value of the SQUID frequency, ω_J(0), lies below the higher resonator modes, so that the SQUID frequency (2) crosses only the lowest resonator modes.
In the weak coupling limit (Fig. 2a) the modes of the resonators and of the SQUID are almost independent. They become hybridized only in the vicinity of the flux point where ω_J(Φ) crosses the frequency of the resonators ω_{1,2}. The heat current through the system J(Φ) shows a sharp peak at this point and almost vanishes away from it (Fig. 2d). That is why the modulation of the heat current in the weak coupling limit approximately equals its maximum value.
At intermediate coupling (Fig. 2b), the hybridization between the resonators and the SQUID becomes significant even far away from the crossing flux point. For this reason, the heat current peaks become broad and overlap (Fig. 2e). Therefore, the magnitude of the heat current modulation drops; in fact, it almost vanishes, see Fig. 3. Another effect, visible in Fig. 2b, is the splitting of the resonator modes into pairs. In each pair, only one of the modes is coupled to the SQUID and is sensitive to the magnetic flux; namely, it is the mode having a voltage antinode in the vicinity of the SQUID, which is the higher-frequency mode of the two.
In the strong coupling regime (Fig. 2c) the two lowest lines in the spectrum move to very low frequencies. In this limit, the heat current modulation reappears. In part, this effect is caused by the divergence of the Bose functions at low frequencies, which makes the relative contribution of these frequencies to the integral (3) more significant. In addition, the third hybrid mode, with frequency close to ω_{1,2}, depends on the flux and also contributes to the modulation shown in Fig. 2f.
In the next three subsections we discuss each of the regimes introduced above in detail.
B. Weak coupling regime
In the weak coupling regime, the Hamiltonian of the combined system "resonators plus SQUID" can be approximately reduced to that of three coupled oscillators [9,39,40],

H = ħΩ_r (a_1†a_1 + a_2†a_2) + ħω_J(Φ) b†b + ħ Σ_j g_j (a_j†b + b†a_j).  (9)

Here Ω_r are the frequencies of the two lowest modes of the resonators, shifted by the presence of the capacitors C_{c,j} (Eq. (10)); a_j are the ladder operators of the resonators; b is the ladder operator of the SQUID; and g_j (Eq. (11)) are the coupling constants between the resonators and the SQUID. In the rest of the paper we consider the symmetric case and assume that the shifted frequencies (10) are the same for both resonators. This implies that the parameters ω_j and C_{c,j} are not independent. Eqs. (10) and (11) have been derived by expanding the tangents in the resonator impedances (5) in a pole series and keeping only the pole with n = 0. Eqs. (9)-(11) are valid at small coupling g_j ≪ Ω_r. Below we provide a more accurate condition for the weak coupling approximation, which also involves the damping rate of the resonators (see Eq. (21)).
Determining the normal modes of the Hamiltonian (9), we find that one of them is independent of the magnetic flux because it is uncoupled from the SQUID. We call this mode "uncoupled"; it has a voltage node in the vicinity of the SQUID, and in the weak coupling limit its frequency always equals that of the shifted resonator mode, ω_unc = Ω_r. The frequencies of the two other (hybrid) modes are

ω_± = (ω_J + Ω_r)/2 ± √[(ω_J − Ω_r)²/4 + g_1² + g_2²].  (13)

In Fig. 2a these modes are clearly visible, while the uncoupled mode overlaps with ω_± due to the small value of the coupling constant g.
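Within the single-excitation subspace, the normal-mode frequencies of a Hamiltonian of the form (9) follow from diagonalizing a 3x3 matrix, which is a quick way to verify the flux-pinned uncoupled mode at Ω_r and the hybrid branches ω_±. A minimal sketch (the rotating-wave, single-excitation form is an assumption here):

```python
import numpy as np

def normal_modes(omega_J, Omega_r, g1, g2):
    """Normal-mode frequencies of three coupled oscillators in the
    single-excitation (rotating-wave) approximation: two resonators at
    Omega_r, a SQUID mode at omega_J, couplings g1 and g2."""
    H = np.array([[Omega_r, 0.0, g1],
                  [0.0, Omega_r, g2],
                  [g1, g2, omega_J]])
    return np.sort(np.linalg.eigvalsh(H))

# At the crossing omega_J = Omega_r the hybrid modes split by
# 2*sqrt(g1^2 + g2^2), while one mode stays pinned at Omega_r.
print(normal_modes(omega_J=1.0, Omega_r=1.0, g1=0.01, g2=0.01))
# -> approximately [0.98586, 1.0, 1.01414]
```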
In the weak coupling limit the heat current (3) strongly increases at values of the magnetic flux Φ_r which correspond to the resonance condition ω_J(Φ_r) = Ω_r, and it almost vanishes away from this point (Fig. 2d). In Fig. 2a the SQUID frequency crosses the resonator frequencies at the flux value Φ_r ≈ 0.41Φ_0. In this case the heat modulation amplitude equals the maximum value of the heat flux. To find the latter, it is sufficient to consider the range of frequencies ω ∼ ω_J ∼ Ω_r, where one can accurately approximate the transmission probability (4) by the resonance form (14). Here we have introduced the complex frequency ν_r = Ω_r − iκ/2, where κ is the damping rate of the resonator modes, Eq. (15). The heat current (3) with the approximate transmission probability (14) can be evaluated analytically. If the temperatures of the resistors are sufficiently high, one arrives at the expression (16) for the heat current. The maximum of the heat flux is achieved at the resonance condition ω_J(Φ) = Ω_r, while the minimum occurs far away from the resonance, i.e. either at zero flux, Φ = 0, or at Φ = Φ_0/2. As we have discussed, at weak coupling one always has J_min ≪ J_max. Therefore, in this regime the modulation of the heat current is close to its maximum value, Eqs. (17) and (18). In the symmetric case g_1 = g_2 = g and at very weak coupling g ≪ κ/2 one finds the result (19): in this limit the heat modulation grows with the coupling strength as g². In the opposite limit g ≫ κ/2 the modulation saturates at the value (20). In Eq. (16) we have ignored the contributions of the high frequency modes of the resonators to the heat transport. Since in our model the SQUID angular frequency ω_J(Φ) does not cross these modes, in the weak coupling regime they give only a small contribution.
In Fig. 3 we plot the maximum and the minimum values of the heat current, obtained numerically, as a function of the coupling constant g and compare them with the approximate results. We note that the expressions (17) and (18) agree well with the numerics in the weak coupling regime.
Finally, a more accurate condition under which the weak coupling expressions (16)-(20) are valid is given by Eq. (21).

C. Intermediate coupling regime

In this section we consider the intermediate coupling regime g_j ∼ Ω_r. In this case, the expressions for the coupling constants g_j (11) and for the other parameters should be corrected. To obtain the corrected expressions, we consider the classical Lagrangian of the system. For simplicity, we consider a fully symmetric setup and put ω_1 = ω_2 = ω_r, C_{c,1} = C_{c,2} = C_c and g_1 = g_2 = g.
We also define the effective capacitance of the λ/4 resonators, C_r = π/(4Z_0ω_r), and their effective inductance, L_r = 4Z_0/(πω_r). The classical Lagrangian of the lowest modes of the two resonators interacting with the SQUID is then given by Eq. (22). Here ϕ is the Josephson phase of the SQUID and ϕ_j are the phases describing the resonators; they are related to the electric potentials V_j at the ends of the resonators adjacent to the coupling capacitors as dϕ_j/dt = 2eV_j/ħ. Diagonalizing the Lagrangian (22), we obtain the corrected expressions (23)-(25) for the angular Josephson frequency ω_J, for the angular frequencies of the resonator modes Ω_r, and for the coupling constant g.
In the limit C_c ≪ C_r these expressions match the weak coupling formulas given in the previous section. In addition, if the resistance R approaches Z_0, one should use the more accurate expression (26) for the damping rate. With these corrections, Eq. (16) approximately describes the heat current in the intermediate coupling regime. The frequencies of the eigenmodes of the coupled system in the limit R → 0 are given by Eq. (27) for the mode uncoupled from the SQUID and by Eq. (28) for the two hybrid modes. Note that the interaction term in Eq. (28) slightly differs from that in Eq. (13). As expected, in the limit C_c ≪ C_r these expressions meet those of the previous section.
The main difference between the weak and the intermediate coupling regimes is the growing value of the minimum heat current. Assuming that J_min = J(0), from Eq. (16) one finds the modulation in the form of Eq. (29). This modulation amplitude drops significantly once the condition (30) is satisfied, i.e. as soon as the coupling can no longer be considered weak.
To illustrate these points, in Fig. 2b we show the transmission probability τ(ω, Φ) in the intermediate coupling regime g ∼ Ω_r. The flux-independent line at f ≈ 4.4 GHz corresponds to the uncoupled mode (27). The lines corresponding to the hybrid modes (28) are well separated at all values of magnetic flux. This is why the dependence of the heat current on Φ becomes weak and does not exhibit a resonance peak (Fig. 2e). This, in turn, suppresses the heat current modulation, as is evident from Fig. 3.
D. Strong coupling regime
In the strong coupling limit the hybrid mode ω_−(Φ) (28) and the uncoupled mode ω_unc (27) move to low frequencies, where they merge and form a broad, Φ-dependent peak in τ(ω, Φ). The mode ω_+(Φ) becomes isolated, with a pronounced dependence on the magnetic flux due to the strong coupling to the SQUID. This behavior of the transmission peaks is visible in Fig. 2c. These effects lead to the reappearance of the heat current modulation in the strong coupling limit.
The formal boundaries of the strong coupling limit, in which one can derive approximate analytic expressions, are given by condition (31). In the limit C_c ≫ max{C_r, C}, Eqs. (23)-(25) take simplified limiting forms.
Furthermore, at low frequencies ω ≪ ω_r one can approximate the impedances of the resonators (5) as Z_1(ω) = Z_2(ω) = −iωL_r + R. In this limit, at small capacitance of the SQUID, C ≪ L_r/R², and for C_c ≫ L_r/R², the transmission probability at low frequencies acquires the form of a non-Lorentzian peak, Eq. (34). This peak has a maximum at the frequency ω_max ∼ R/L_r. The contribution of the low frequency peak to the heat flux can be evaluated analytically for temperatures T_1, T_2 ≳ ħR/(k_B L_r). In this case, in Eq. (3) one can make the low frequency approximation for the Bose functions,

N_j(ω) ≈ k_B T_j/(ħω).  (35)

After that, Eq. (3) with the transmission probability (34) leads to the low-frequency-peak contribution to the heat flux, J_l(Φ), in the form of Eq. (36), where we have defined the flux-dependent dimensionless parameter A(Φ), Eq. (37). One can work out an even more accurate approximation (38) for the low frequency contribution. This result can be extended to the intermediate coupling regime, i.e. to values of C_c smaller than the condition (31) requires.
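The Rayleigh-Jeans-type approximation (35) is easy to sanity-check numerically: at frequencies well below k_B T/ħ the exact Bose function approaches k_B T/(ħω). A quick check (the test frequencies are arbitrary):

```python
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23  # SI units

def bose(w, T):
    return 1.0 / np.expm1(hbar * w / (kB * T))

T = 0.300  # K; kB*T/hbar corresponds to roughly 6 GHz here
for f_GHz in (0.01, 0.1, 1.0):
    w = 2 * np.pi * f_GHz * 1e9
    exact, approx = bose(w, T), kB * T / (hbar * w)
    print(f_GHz, exact, approx, approx / exact - 1)
# The relative deviation grows with frequency, as expected.
```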
Here we have defined the flux dependent dimensionless parameter One can work out even more accurate approximation for the low frequency contribution, This result can be extended to the intermediate coupling regime, i.e. to the values of C c smaller than the condition (31) requires. The contribution of the mode ω + (28) to the heat current can be estimated as where the frequency ω + (Φ) is given by Eq. (28). In the limit C c max{C r , C}, where g 2 = Ω 2 r /8, this frequency acquires a simple form .
The total heat current then takes the form

J(Φ) = J_l(Φ) + J_+(Φ) + J_bg(Φ),  (41)

where J_bg(Φ) is the background contribution coming from the modes with frequencies higher than ω_+. Interestingly, in the strong coupling regime the heat current has a maximum at Φ = 0.5Φ_0 and a minimum at Φ = 0, see Fig. 2f. In Fig. 3 we observe the reappearance of the heat current modulation at strong coupling. For the chosen parameters the modulation predominantly comes from the term J_+(Φ) (39), although the low frequency part J_l(Φ) also gives a significant contribution. In the limit C_c → ∞ and for k_B T_{1,2} ≳ ħω_+(Φ), the modulation approaches the limiting value given by Eq. (42), where both A and ω_+ are taken at Φ = 0.
III. CONCLUSION
In conclusion, we have studied photonic heat transport through a SQUID coupled to the two resonators and two resistors. By tuning the critical current of the SQUID with magnetic flux, one can control the heat power transmitted from the hot resistor to the cold one. This device can be used as a heat valve provided its' parameters are chosen properly. We study the performance of the heat valve depending on the coupling strength between the resonators and the SQUID. We find that the main parameter characterizing the performance of such device, namely, the amplitude of modulation of the photonic heat power, non-monotonously varies with the coupling strength. The modulation grows with the coupling strength in the weak coupling regime, then significantly drops at the intermediate coupling and, finally, it reappears again in the strong coupling limit. This unusual behavior is explained by the resonant nature of the heat transport in the system. Indeed, at weak coupling the heat flows through the device only at magnetic flux values corresponding to the resonance condition ω J (Φ) = Ω r and drops to zero away from these values. As a result, the dependence of the heat power on the magnetic flux, J(Φ), is given by a periodic set of narrow peaks. At the intermediate coupling these peaks become broader and eventually overlap, thus reducing the heat modulation. The optimal performance of the heat valve is achieved at the boundary between the weak and the intermediate coupling regimes. Our results can help to optimize the design of the low temperature heat valves based on superconducting circuit components. | 2022-07-13T01:15:42.315Z | 2022-07-12T00:00:00.000 | {
"year": 2022,
"sha1": "e2fd6c2425f1f9e8667dc21ea64ae7e3c587e167",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2207.05586",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e2fd6c2425f1f9e8667dc21ea64ae7e3c587e167",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
3844491 | pes2o/s2orc | v3-fos-license | Evolution of Competitive Ability: An Adaptation Speed vs. Accuracy Tradeoff Rooted in Gene Network Size
Ecologists have increasingly come to understand that evolutionary change on short time-scales can alter ecological dynamics (and vice-versa), and this idea is being incorporated into community ecology research programs. Previous research has suggested that the size and topology of the gene network underlying a quantitative trait should constrain or facilitate adaptation and thereby alter population dynamics. Here, I consider a scenario in which two species with different genetic architectures compete and evolve in fluctuating environments. An important trade-off emerges between adaptive accuracy and adaptive speed, driven by the size of the gene network underlying the ecologically-critical trait and the rate of environmental change. Smaller, scale-free networks confer a competitive advantage in rapidly-changing environments, but larger networks permit increased adaptive accuracy when environmental change is sufficiently slow to allow a species time to adapt. As the differences in network characteristics increase, the time-to-resolution of competition decreases. These results augment and refine previous conclusions about the ecological implications of the genetic architecture of quantitative traits, emphasizing a role of adaptive accuracy. Along with previous work, in particular that considering the role of gene network connectivity, these results provide a set of expectations for what we may observe as the field of ecological genomics develops.
Introduction
Biologists are broadly interested in the drivers of diversity, ranging in scale from nucleotide sequences to the entire biome. One goal is to span across levels of organization: we would like to understand how genes interact with one another and with environmental inputs to produce phenotypes (the genotype-phenotype map, GPM), and how phenotypes 'fit' to the environment (the phenotype-environment map, PEM). Ultimately, we would like to understand the links across all three levels of organization, the genotype-environment map (GEM). Such a goal requires incorporating dynamics from each of the sub-mappings into an over-arching set of expectations. We might ask, for example, how does variation in genetic architecture affect trait evolution, how does trait evolution affect competitive dynamics, and how might competition feed back to alter genetic architecture?
The example of competition is raised because it has a long history in investigations of the maintenance of diversity at the level of the PEM, as exemplified in Hutchinson's "Homage to Santa Rosalia" [1]. Classical ecological analyses, from Lotka-Volterra to Tilman's R* to contemporary models [2][3][4][5], typically (implicitly) assume that competing species are fixed for the attributes that regulate competitive dynamics, i.e., that ecological dynamics are much faster than evolutionary change. However, as Antonovics noted four decades ago [6], we should expect most ecological changes to be associated with evolutionary change.
Researchers have recently begun to explore and formalize the joint effects of ecological and evolutionary dynamics on species' populations and their communities [7][8][9][10][11][12]. Hairston and colleagues [7] developed several analytical models that incorporate both phenotypic change (evolution) and population change (ecology). They demonstrated that evolutionary change can play a major role in altering population dynamics (as in the case of Geospiza fortis populations and evolving bill size), or evolutionary change may play a smaller role (as in the case of Onychodiaptomus sanguineus and egg diapause). Fukami and colleagues [13] demonstrated that evolution in Pseudomonas communities systematically alters the community structure: a single colonist strain will evolve to occupy several niches, excluding future colonizing strains and changing community structure when compared to a community into which several strains are introduced simultaneously. All of this is to say that the traits that mediate competitive interactions should evolve sufficiently quickly to alter community dynamics.
The rate at which a trait can evolve (which may describe how population dynamics might be affected at different rates of change) is described by the quantitative genetic parameter heritability [14]. One of the advances of the Modern Synthesis was the realization that we do not need to know details of the genetic basis of a trait in order to predict the rate at which the trait will change [15]. All that is needed are estimates of the additive genetic and phenotypic variances of the trait. The heritability of a trait underlying competitive ability should then describe the rate of change of competitive ability. Gomulkiewicz and Holt [16] linked trait heritability to the probability and the rate at which populations recover from sudden environmental change, showing that higher heritability increases both the chance of recovery and the rate at which recovery occurs. The U-shaped population decline and recovery pattern predicted by their theory has recently been recovered empirically [17]. Now consider extending their result to two initially competitively equivalent species that begin competing for a resource that evolves over time (e.g., a food resource such as phytoplankton that evolves defenses to zooplankton grazing [18][19][20]). We expect the competitor with the higher heritability for the trait (e.g., tolerance of phytoplankton defenses) to be able to adapt faster and ultimately out-compete the species with lower heritability [21].
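The claim that heritability alone predicts the rate of trait change can be made concrete with the breeder's equation, R = h²S, where h² = V_A/V_P and S is the selection differential. A minimal sketch (all numbers illustrative):

```python
def response_to_selection(h2, S):
    """Breeder's equation: per-generation change in the mean trait,
    R = h^2 * S, with h^2 = V_A / V_P."""
    return h2 * S

# Two competitively equivalent species chase the same moving optimum;
# the higher-heritability species closes the gap faster each generation.
S = 1.0  # selection differential, trait units (illustrative)
print(response_to_selection(0.6, S))  # 0.6 trait units per generation
print(response_to_selection(0.2, S))  # 0.2 trait units per generation
```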
Although knowledge of the genetic basis of heritability is not required to make predictions about trait evolution, with the advent of modern genomic and bioinformatic techniques we are beginning to be able to determine the genetic details underlying quantitative traits [22][23][24]. By extension, if a link from genetic sequence to trait heritability exists, there should be a link from genetic sequence to communities by way of traits and their role in mediating competition (i.e., a model that incorporates the GEM). In a previous paper [25], I examined the plausibility of a link between genetic architecture and heritability of a quantitative trait. The results were consistent with analytical models of biological epistasis and its effects on variance components [26][27][28], such that network structures hide and reveal additive genetic variation so that, even without any environmental variance inputs, heritability is altered. Specifically, I found that smaller networks should tend to have higher heritability than larger networks because hidden additive variance is released and selected on more quickly. In addition, because the quantitative trait is divided among fewer genes, the average effect of a mutation is larger in small gene networks than in large networks. As a result of these two factors, populations with smaller gene networks adapt and recover faster from sudden environmental changes than do populations in which the ecologically-critical trait is underlain by larger networks. By extension, small-network populations persist longer than large-network populations when the environment fluctuates rapidly through time [29]. These results are consistent with previous network-centric research that focused on network connectivity rather than size [30,31]. Together, they suggest that the competitor with the smaller gene network underlying an ecologically-critical trait should out-evolve and out-compete a species with a larger gene network for the same trait.
Here, I test the hypothesis of maximal fitness arising from minimized network size under the scenario of interspecific competition in a single patch. Two competing species are limited by a resource with two characteristics. First, the resource occurs at a given quantity that limits the total number of individuals in a patch, and the two species are effectively neutral with respect to capitalizing on quantity (i.e., their requirement and impact vectors are identical [32]). Second, the resource has a quantitative value for quality, such as palatability, to which the competing species must adapt in order to maximize their fitness. The quantitative trait, whose value is determined by the gene network, maps to this resource quality. Specifying competition in this way stabilizes the population dynamics relative to a system in which the primary resource is depleted. The 'focal species' in the competition possesses a fixed genetic architecture for an ecologically-critical trait (n = 16 genes, scale-free network topology, recombination rate = 0.5, mutation rate = 0.001) while the 'competitor's' genetic architecture for the trait varies from 16 to 256 genes, random or scale-free topology, and different recombination and mutation rates. The results highlight a speed-versus-accuracy tradeoff for different networks. Smaller networks confer the advantage of higher adaptive speed in fast-changing environments, whereas larger networks confer greater adaptive accuracy when the environment changes sufficiently slowly. These results provide a set of hypotheses to be empirically tested as we attempt to refine the genotype-phenotype-environment map.
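For readers who want a feel for the setup, the sketch below generates a single-regulator gene network with an approximately scale-free out-degree distribution and a simple additive trait readout. This is only a plausible stand-in for the model of [25]; the attachment rule, effect sizes, and readout here are my assumptions, not the paper's code.

```python
import numpy as np

def scale_free_regulators(n_genes, rng):
    """Assign each gene one upstream regulator by preferential attachment
    (probability proportional to current out-degree + 1), giving a few
    hub regulators and many leaves, i.e. a roughly scale-free out-degree."""
    regulator = np.full(n_genes, -1)
    out_degree = np.ones(n_genes)  # +1 smoothing so new hubs can arise
    for i in range(1, n_genes):
        p = out_degree[:i] / out_degree[:i].sum()
        regulator[i] = rng.choice(i, p=p)
        out_degree[regulator[i]] += 1
    return regulator  # regulator[i] controls gene i; gene 0 is the root

def trait_value(state, effects):
    """Additive trait readout: sum of allelic effects of expressed genes.
    With fewer genes, each effect (and thus each mutation) is larger."""
    return float(state @ effects)

rng = np.random.default_rng(1)
n = 16                                 # the focal species' network size
reg = scale_free_regulators(n, rng)
effects = rng.normal(0.0, 1.0 / n, n)  # per-gene effects shrink with n
state = rng.integers(0, 2, n)          # Boolean expression states
print(reg, trait_value(state, effects))
```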
Results
A strong interaction between the rate of environmental change and the size of the gene network underlying the ecologically-critical trait was apparent when the two species competed. The first metric of this effect is the impact of the competitor on the focal species' population growth rate (dN/dt) in the first 20 generations of competition. The importance of the interaction between network size and dE/dt is readily apparent in Figure 1. The size of the competitor's gene network and the rate of recombination, conditional on interactions with the rate of environmental change (dE/dt), accounted for 79% of the variance in the focal species' dN/dt during the first 20 generations of competition (Table 1). This model possessed an Akaike's Information Criterion (AIC) score roughly 120 points lower than the next-best model considered (see Methods). When the rate of environmental change is slow (dE/dt < 4e-3), a large-network competitor drives down the focal species' rate of population growth. However, when dE/dt is fast (> 4e-3), the focal species' rate of population growth is positive and increases with the competitor's network size. Given the specifications of these simulations, all network sizes are approximately equivalent at dE/dt = 4e-3.
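For scale, an AIC gap of this size is decisive: converting AIC scores to Akaike weights shows that a model roughly 120 points worse carries essentially zero support. A small sketch (the two scores are made-up placeholders):

```python
import numpy as np

def akaike_weights(aic_scores):
    """Akaike weights: w_i proportional to exp(-deltaAIC_i / 2)."""
    aic = np.asarray(aic_scores, dtype=float)
    rel = np.exp(-(aic - aic.min()) / 2.0)
    return rel / rel.sum()

# Hypothetical best model vs. a runner-up 120 AIC points worse:
print(akaike_weights([1000.0, 1120.0]))  # -> [1.0, ~8.8e-27]
```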
The basis of the different effects on the focal species' population growth rate can be inferred from the relative amounts of phenotypic and additive genetic variances (V_P and V_A, respectively) of the two species conditional on dE/dt. The AIC-best model (ΔAIC ≈ 40) for explaining the focal species' dN/dt using variance components as predictor variables required knowing both the competitor's V_P and V_A and the interaction with dE/dt. The model explained 76% of the variance in the focal species' dN/dt (Table 2). Although the competitor's V_A is not statistically significant on its own or at any given dE/dt, the interaction of V_A and dE/dt is significant over all levels. Both variance components tend to be lower for all networks larger than the focal species' network (Figure S1). The effects of differential adaptive ability on population growth rates during the initial competition phase are not completely transitive to predicting which species, the focal or competitor, ultimately wins. Because very few competitor wins were recorded at the rates of change examined in the first simulations (i.e., during the first 20 generations of competition), I extended the dE/dt landscape an order of magnitude slower (see Methods). The resultant descriptive pattern remains: smaller networks perform better than larger networks when dE/dt is high (and conversely when dE/dt is low), but dE/dt = 4e-3 is no longer the cutoff. Instead, smaller networks continue to perform well down to dE/dt = 1e-3, and only below that dE/dt do larger-network competitors systematically win the competition (Figure 2). Although the focal species' population declines during the initial stages of competition, it appears that the larger-network competitor cannot sustain its higher level of adaptive accuracy, and the focal species' population bounces back (Figures S2-S4). That is, although more accurate, the mean phenotype of the large-network species begins to lag too far behind the optimum (i.e., it is biased) and the lower-accuracy focal species gains an advantage. Two additional results stand out in Figure 2. First, the slightly lower than 50% probability of the focal species winning when the competitor's network is the same size as the focal species' derives from differences in recombination (see Methods). Second, a 64-gene network competitor never has an advantage over the 16-gene focal species network. Given the landscape of Figure 2, it appears that an even slower dE/dt could afford a 64-gene network an advantage over the focal species' 16-gene network, but I do not test that idea here. Over the landscape of dE/dt values examined, network size, the rate of environmental change, and the interaction of the terms explain a significant part of the total model deviance in competitive outcomes (Table 3). The best model, on which Table 2 is based, possessed the lowest AIC by roughly 40 points.
The size of the competitor's gene network, the rate of environmental change, the competitor's recombination rate, and their interactions were the major predictors of the co-persistence times of the competing species (ΔAIC = 116.7), explaining ~60% of the variance (Table 4). Larger differences between species' networks and higher rates of environmental change consistently decrease persistence times (Figure 3). In addition, differences in recombination rate tended to increase population persistence times, i.e., higher recombination affords an adaptive advantage at some network sizes. Note that this result speaks only to the fact that competition has ended, and not to which species won; the adaptation speed/accuracy tradeoff is not apparent in the time-to-resolution of competition.
Discussion
The interplay among genetic architecture, phenotypes, and evolutionary and ecological dynamics is complex, yet despite the rapid acceleration of biological research, a fundamental understanding of the interplay among these factors remains elusive. Progress is being made in refining both the GPM and the PEM. Given this progress, we need sets of theoretical expectations to unite the constituent pieces. Here I have attempted a step in that direction with a set of simulations that span from the gene network underlying a quantitative trait to a simple two-species community in which interspecific competition occurs. Previous work that did not include competition suggested that specific characteristics of the genetic architecture of a trait could affect population dynamics when the environment suddenly shifts states or when it changes steadily through time [25,31,33]. One conclusion drawn from that work is that network size should be minimized, scale-free topology maintained, and intermediate network connectivity evolved in order to maximize adaptability. By including competition in the current model, I have increased the degree of realism and refined expectations of what we should observe when linking genotypes to ecological and evolutionary dynamics.
Figure 2. Probability that the focal species wins competition as a function of competitor network size and log(dE/dt). At slower rates of environmental change, the probability that the focal species will win declines with an increase in the size of the competitor's network. With the exception of a competitor with a 64-gene network, when the rate of environmental change is high, the probability of the focal species winning increases as the competitor's network size increases. 64-gene networks are never superior to the 16-gene network at the rates examined here. Note that this figure, produced using the akima package for R [54], interpolates data to produce the surface, whereas the predictor variables (network size, recombination rate, and dE/dt) are categorical in the simulations and statistical analyses. doi:10.1371/journal.pone.0014799.g002
Figure 3. Effect (± 95% CI) of competitor's genetic architecture and the rate of environmental change (dE/dt) on the duration of competition. The time required for one of the two competing species to go to dominance (i.e., drive the other species extinct) in a single patch is largely a function of the relative difference in network sizes and the rate of environmental change (dE/dt). The focal species' genetic architecture is held constant (as in Figure 1) while the competitor species' genetic architecture varies. Time-to-resolution is the number of generations between the start of competition and the generation in which one species has gone extinct. Resolution occurs quickly when dE/dt is high (we quickly find that one species is not suited to the environment), whereas resolution takes considerably longer when dE/dt is low. Likewise, as the disparity between the species' underlying networks increases, the time-to-resolution declines.
The major refinement of expectations is the trade-off between adaptive speed and adaptive accuracy, as revealed by the presence of a competitor and contrary to the expectation from single-species models. In rapidly changing environments the advantage of greater adaptive speed conferred by smaller networks is readily apparent. As the rate of environmental change slows, the probability of competitive superiority goes up with increasing network size. This is in contrast to single-species results, in which, as the rate of environmental change slows, populations of all network sizes converge on indefinite persistence time (see Figure 2 in [29]). In general, the lower V_A of larger networks is sufficient in slow-changing environments, while the lower V_P ensures that a large-network species is better adapted. In contrast, the higher V_A conferred by smaller networks is required in fast-changing environments; the small-network species does not adapt as well (higher V_P), but it does not need to because the large-network species cannot adapt quickly enough. This is analogous to the importance of developmental accuracy as described by Hansen et al. [34]. The trade-off between adaptive speed and adaptive accuracy, in the context of its implications for the evolution of competition, has however not been previously recovered to my knowledge. Repsilber and colleagues [33] allowed their networks to evolve in size and discovered higher mean population fitness for single-species populations at different landscape heterogeneities, but did not consider >1 species in the landscape. The primary reason that the trade-off has not been previously recovered is that earlier work with competitors and an explicit GPM has focused on a single number of loci underlying a limiting trait. For example, Urban and de Meester used a model in which an ecologically-critical trait was underlain by 20 binary loci in each species [35]. If we consider an optimal phenotype of 0.53 (on the scale used by Urban and de Meester), the closest possible phenotype is 0.55 (11/20). Alternatively, if one species' GPM is defined by a 100-locus model, a phenotype of 0.53 is possible and would result in higher fitness. Given the joint processes of gene duplication and deletion [36][37][38], we can anticipate that certain traits may be underlain by fewer or additional genes, which should alter the speed and resolution of adaptation. These changes should then propagate up levels of organization to affect competitive dynamics as traits evolve, as shown here.
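The granularity argument in the preceding paragraph can be checked directly: with n equal-effect binary loci, achievable phenotypes lie on the grid k/n, so the worst-case distance to any optimum shrinks as 1/(2n). A quick sketch:

```python
def best_achievable(optimum, n_loci):
    """Closest phenotype to `optimum` on the k/n grid of an n-locus,
    equal-effect binary model."""
    k = round(optimum * n_loci)
    return k / n_loci

for n in (20, 100):
    p = best_achievable(0.53, n)
    print(n, p, abs(p - 0.53))
# 20 loci -> 0.55 (error 0.02), matching the 11/20 example in the text;
# 100 loci -> 0.53 (error 0.0): larger networks buy adaptive accuracy.
```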
Convergence of genetic architecture (in characteristics such as network size) becomes an equalizing mechanism [39] permitting long-term, essentially neutral, coexistence. In these simulations, as the difference in genetic architecture between two competing species increases, the persistence time of a two-species local community declines. Neither species can gain a distinct evolutionary-ecological advantage when genetic architectures are identical, and if an advantage is gained, it takes considerable time to evolve. An important caveat to the equalizing nature of genetic architecture change (by gene duplication and loss) is that differences in demographic parameters, such as generation time, could compensate for differences arising from gene regulatory network differences. For example, terHorst and colleagues showed that generation time differences between mosquito larvae and their protozoan prey altered eco-evolutionary dynamics [40]. However, if species are comparable across their life history traits in addition to being limited by an analogous trait, then genetic architecture poses a tradeoff between speed and accuracy.
We may be able to link the network GPM concepts considered here to the models developed by Hairston and colleagues [7]. Their generalized model (their Eq. 3) incorporates rates of ecological and evolutionary change as the sum of two partial differential equations, the first describing the focal species' change relative to trait evolution and the second describing the focal species' change relative to non-evolutionary demographic factors. We should expect that large network differences between competing species increases the relative role of evolution in total ecological change. This is conditional on the relative differences in demographic parameters of the competing species, however: if those differences are greater than even a large network difference, then demographic differences would still play a larger role than evolutionary differences. With this condition in mind, we can hypothesize that we should find larger differences in the networks underlying competition-critical traits in systems where evolutionary change is dominant, but more similar gene networks where demographic changes drive the system.
The results of these simulations suggest a further hypothesis: that communities composed of species with similar genetic architectures (for limiting traits) give rise to neutral community dynamics, whereas differences in genetic architecture give rise to species sorting dynamics. The identical evolutionary potential of species is, in fact, an assumption of Hubbell's neutral theory [41]. Conversely, we can hypothesize that the prevalence of niche-driven species sorting in many ecological communities [32] could be a result of differences in adaptive potential resulting from differences in the genetic architecture of ecologically-critical traits. That is, when considering the genetic architecture of ecologically-critical traits as evolving networks, a novel axis of species sorting [42,43] seems to emerge. Classical species sorting considers traits as fixed, but these simulations show that traits can evolve and species assort in a single patch according to the network best suited to particular rates of environmental change and the competitive challenge posed by another species. The degree to which this axis of species sorting operates will depend on the relative rates of dispersal among a set of patches, and the heterogeneity of the patches, in a metacommunity.
How do these results compare to the real world? The short answer is, we don't know. This is driven in large part by the fact that the tools necessary for elucidating the GPM are recent developments, and, at this time, still relatively expensive. I have proposed that a given trait in different species may be underlain by different-sized networks and that these differences can drive evolutionary ecological patterns such as competitive dynamics. An alternate hypothesis-and perfectly reasonable in the absence of empirical data-is that any particular challenge requires approximately the same size network regardless of the species in question and its evolutionary history. For example, perhaps osmoregulation requires, say, 250 genes (or, more correctly, the products of 250 genes and their associated regulatory loci), and any differences in adaptive capacity are due solely to specific sequences and gene regulation. We might even expect such a pattern to emerge: as discussed above, given sufficient time for gene duplication and loss [36,37], trait genetic architecture should converge as an equalizing mechanism [39]. Ultimately, either result-very similar network sizes or different network sizes-from empirical data would be interesting and informative, even if the latter makes the results herein irrelevant.
In addition to our lack of data to confirm this work, we have to consider that these simulations, like all models, are simplifications of reality. The basic caveats to the research here largely follow the caveats of Malcom [25]: Boolean regulatory networks gloss over real differences of gene functions, the details of which are interesting and may have important ramifications. The networks I use here are simplified in that each gene is regulated by a single upstream factor, whereas real genes are often multiply regulated. We have ample evidence of widespread pleiotropy between networks [44][45][46], and the traits that these linked networks underlie may be under different selection regimes, which alters the efficiency of natural selection. Lastly, the competition scenario considered here is greatly simplified, and other (non-network) research has shown that multi-species and multi-trophic scenarios can alter eco-evolutionary trajectories in unpredictable ways [47]. There are numerous directions that future research could take. First and foremost, empirical support (or rejection) of the basic assumptions in this purely theoretical paper needs to be gathered; for example, do different species possess different-sized networks for the same trait? Second, because we know both phenomena are widespread, incorporating pleiotropy and plasticity in similar, network-based models would increase realism and may further refine our theoretical expectations. Including more than two species, and/or two or more trophic levels, with the GPM defined as complex networks could further refine our expectations of the links across the GEM.
There are two main conclusions from this research. First, there is an adaptation speed-accuracy tradeoff conferred by network size (and to a lesser extent, recombination). This tradeoff allows species with slow-evolving traits (i.e., large underlying networks) to outcompete species with fast-evolving traits (i.e., smaller networks) by virtue of increased adaptive accuracy. Second, the trade-off is contingent on the rate of change of the environmental variable to which the trait maps. Together, these results suggest that ecological interactions such as competition should contribute to the shaping of gene networks underlying quantitative traits. Therefore, not only should knowledge of the ecological interactions of a study species contribute substantially to our expectations of what should be observed when the GPM is investigated, but knowledge of the GPM may provide important information about why certain ecological patterns or processes are observed.
Gene Network Model
I focus on individuals of two species competing in a single patch with an environmental variable that fluctuates through time at a variety of rates. Individuals of either species possess a single quantitative trait that maps to the quality of the limiting resource (discussed in detail below). The trait is encoded by a directed Boolean network of 16, 32, 64, 128, or 256 genes, the state of each determined dynamically (see below). The topology of the network is initiated as either random (no preferential attachment) or scale-free (with preferential attachment) in its out-degree distribution [48]. Randomly-connected networks show an approximately Poisson degree distribution, whereas scale-free networks exhibit a power law degree distribution [49]. I use a lottery model algorithm to form the scale-free networks, i.e., the probability of an existing gene acquiring a connection to a new gene is proportional to the number of existing connections [49].
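The topology-initiation step can be illustrated with a short sketch. This is a minimal Python analogue, not the NetLogo implementation used in the paper; the function name, the seeding, and the +1 weight (so genes without any targets can still be drawn under the lottery model) are assumptions:

```python
import random

def init_topology(n_genes, scale_free=True, seed=None):
    """Assign one upstream regulator ('head') to every gene except gene 0.

    Under the lottery model, the probability that an existing gene becomes
    the regulator of the next gene added is proportional to its current
    out-degree (+1 so genes with no targets can still be chosen); the
    random variant picks regulators uniformly instead.
    """
    rng = random.Random(seed)
    heads = [None]            # gene 0 is the basal-most gene, with no regulator
    out_degree = [0] * n_genes
    for gene in range(1, n_genes):
        candidates = list(range(gene))  # only already-added genes can regulate
        if scale_free:
            weights = [out_degree[g] + 1 for g in candidates]
            head = rng.choices(candidates, weights=weights, k=1)[0]
        else:
            head = rng.choice(candidates)
        heads.append(head)
        out_degree[head] += 1
    return heads              # heads[i] is the upstream regulator of gene i
```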
At the start of a run, every individual's network is randomly determined (as guided by the constraints of topological specification). With these relatively small populations, it is very unlikely that any two individuals possess the same exact network at simulation initiation. The binary state [0, 1] of each gene in the network except the upstream-most is determined by comparing the state of the gene immediately upstream to the functional relationship of the gene pair (Figure 4a, encoded by the chromosome of 4c). The state of the upstream-most gene is determined randomly for each individual at simulation initiation, and is then inherited for subsequent generations. Some genes may act as repressors and others as activators, and the state of the downstream gene is determined by the match or mismatch between the state of the upstream gene and the function (Figure 4b). For example, if the upstream gene is 'on' (state = 1) and is a repressor (function = 0), then the downstream gene takes the 'off' state (state = 0). Alternatively, if the upstream gene state is 0 and it is a repressor, then the downstream gene takes the 'on' state. Each gene except the basal-most has a single input to ease computational requirements (the number of calculations increases according to 2^(2^k) with k inputs [29]), but may have one or more outputs (i.e., may be pleiotropic). All network information is stored on a single chromosome consisting of two parts (Figure 4c). First, the topology is defined by a 'tails list' of the downstream genes; the 'heads list' (the controlling, upstream genes) is inferred from the index position of each tail list element. The relationship between heads and tails genes is randomly determined at the start of a simulation run, but, as noted above, the out-degree distribution is constrained by the scale-free versus random topological assignment. Figure 4a is an example 13-gene network whose states have been calculated given the information from the chromosome in Figure 4c.
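The state-propagation rule described above amounts to a match/mismatch test between the upstream state and the regulatory function. A minimal Python sketch, assuming genes are indexed so that every regulator precedes its targets (as in the topology sketch above) and that functions are coded 1 = activator, 0 = repressor:

```python
def gene_states(heads, functions, basal_state):
    """Compute the binary state of every gene.

    A downstream gene is 'on' when the upstream state matches the function:
    an 'on' activator or an 'off' repressor both yield state 1; a mismatch
    (e.g., an 'on' repressor) yields state 0. Assumes heads[i] < i, so a
    single forward pass suffices.
    """
    states = [0] * len(heads)
    states[0] = basal_state              # inherited state of the basal gene
    for gene in range(1, len(heads)):
        upstream = states[heads[gene]]
        states[gene] = 1 if upstream == functions[gene] else 0
    return states
```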
Each individual's phenotype is determined by summing the states of all terminal genes in the network, i.e., genes with out-degree = 0, and scaling the value to the range of the environment (= 140). So, for example, the network in Figure 4a possesses eight terminal genes, four of which are 'on', thus the individual possesses a phenotype of 70 (= (140/8) × 4). I am thereby assuming that there are no biochemical limits given a particular network size; individuals with a 16-gene network can approximate a phenotype of 140, as can individuals with a 256-gene network. The consequence of this re-scaling is that smaller networks have lower resolution than larger networks, which is a reasonable assumption given that dividing any particular task among fewer actors will result in lower overall accuracy. I stored the phenotypes of each individual's parents and used mid-parent regression to estimate the trait's heritability in the population. Additive genetic variance was derived by multiplying the phenotypic variance by the heritability.
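A sketch of the phenotype and heritability calculations, under the stated scaling (environment range = 140), with h^2 taken as the slope of the offspring-on-mid-parent regression and V_A = h^2 × V_P; the helper names are assumptions:

```python
import numpy as np

ENV_RANGE = 140  # the environmental scale used in the text

def phenotype(states, out_degree):
    """Sum the states of terminal genes (out-degree 0), rescaled to [0, 140]."""
    terminal = [s for s, d in zip(states, out_degree) if d == 0]
    return ENV_RANGE / len(terminal) * sum(terminal)

def heritability(midparent, offspring):
    """Slope of the offspring-on-mid-parent regression estimates h^2."""
    slope, _intercept = np.polyfit(midparent, offspring, 1)
    return slope

# Additive genetic variance as described in the text: V_A = h^2 * V_P, e.g.
#   V_A = heritability(mp, off) * np.var(off)
```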
Each individual's phenotype is translated to a fitness relative to the environment (RF) using a Gaussian function of the form sketched below, where D is the absolute value of the difference between the environment and the individual's phenotype, and v is a value that changes the breadth of the selection function. I varied v from 1.5 (high tolerance for a phenotype-environment mismatch) to 2.5 (low tolerance for a phenotype-environment mismatch) in the simulations. In this way I assume that the environmental effect is absolute and the phenotypic variance of the population plays no role in how an individual is selected. Each individual's RF does not affect the number of offspring produced, but does affect the probability that an individual will survive to reproduce.
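The expression itself is not reproduced in this text; one Gaussian form consistent with the description (RF declining in D, with larger v narrowing the selection function) would be, as an assumed reconstruction rather than the paper's exact expression:

$$\mathrm{RF} = e^{-v\,D^{2}}$$

Any Gaussian-type decay in which larger v tightens the tolerance would behave equivalently for the purposes described here.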
Individuals are sexually-reproducing hermaphrodites who mate at random. The number of offspring from a mating is determined by drawing a random value from a Poisson distribution with λ = 1.5. Gametes undergo recombination during a diploid meiotic stage to create an offspring chromosome that is a mixture of parental alleles, which in this model are the tails list and the functional relationships. The first element of the offspring chromosome is chosen from the first element of one parent, then subsequent elements are taken from the same parent until a random uniform number less than the recombination rate (r = 0.05 or 0.5) is drawn, at which point the element is drawn from the opposite parent. This continues the length of the chromosome. Mutation, as determined by testing a uniform random number against the mutation rate (1e-3 or 1e-5) for each chromosomal element, occurs after the new chromosome is created. Although these mutation rates appear high, as noted by Frank [30], because the trait is directly related to fitness, the effective mutation rate is about one order of magnitude lower. All mutations are nonsynonymous and may affect either the controlling function of a gene (an activator mutates to a repressor) or the relationship to another gene (i.e., alter network topology).
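Recombination and mutation as described can be sketched as follows; the flat-list chromosome layout and the feed-forward-preserving redraw of regulators are simplifying assumptions, not the paper's exact encoding:

```python
import random

def recombine(chrom_a, chrom_b, r=0.05, rng=random):
    """Offspring chromosome: start copying from one randomly-chosen parent
    and switch to the other whenever a uniform draw falls below the
    recombination rate r (r = 0.05 or 0.5 in the simulations)."""
    source, other = (chrom_a, chrom_b) if rng.random() < 0.5 else (chrom_b, chrom_a)
    child = [source[0]]
    for i in range(1, len(chrom_a)):
        if rng.random() < r:              # crossover point
            source, other = other, source
        child.append(source[i])
    return child

def mutate(chrom, n_genes, mu=1e-3, rng=random):
    """Per-element mutation (mu = 1e-3 or 1e-5). In this sketch the first
    n_genes elements encode each gene's upstream regulator and the rest
    encode activator/repressor functions, so a mutation either rewires the
    topology or flips a function. Redrawing a regulator from lower indices
    keeps the network feed-forward (a simplifying assumption)."""
    chrom = list(chrom)
    for i in range(1, len(chrom)):        # element 0 is the basal gene
        if rng.random() >= mu:
            continue
        if i < n_genes:
            chrom[i] = rng.randrange(i)   # new upstream regulator
        else:
            chrom[i] = 1 - chrom[i]       # activator <-> repressor
    return chrom
```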
Death occurs after reproduction in three stages. First, all parents are killed to prevent overlapping generations. Next, the new generation is culled according to each individual's relative fitness: if the RF is less than a uniform random number, then the individual dies. Last, a carrying capacity is enforced by randomly killing individuals to bring the population below K = 500.
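A compact sketch of the three-stage death routine; reading "below K = 500" as at most K survivors is an assumption:

```python
import random

K = 500  # carrying capacity enforced in the model

def cull(parents, offspring, rel_fitness, rng=random):
    """Three-stage death. (1) All parents die, preventing overlapping
    generations. (2) Offspring survive only if their relative fitness is
    at least a uniform random draw. (3) Random thinning enforces the
    carrying capacity."""
    parents.clear()                                         # stage 1
    survivors = [ind for ind, rf in zip(offspring, rel_fitness)
                 if rf >= rng.random()]                     # stage 2
    while len(survivors) > K:                               # stage 3
        survivors.pop(rng.randrange(len(survivors)))
    return survivors
```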
Competition Simulations
As discussed in the Introduction, the two competing species are co-limited in this model. First, the resource occurs at a given quantity that limits the total number of individuals in a patch, and the two species are effectively neutral with respect to capitalizing on quantity (i.e., their requirement and impact vectors are identical [32]). Second, the resource has a quantitative value for quality, such as palatability, to which the competing species must adapt in order to maximize their fitness. The quantitative trait, whose value is determined by the gene network, maps to this resource quality. Specifying competition in this way stabilizes the population dynamics relative to a system in which the primary resource is depleted. Note, however, that this does not permit exploring the effects of over-exploitation, which could alter competitive dynamics.
An initial canalization period is important for reducing excess initial phenotypic and genotypic variance. Simulations are initiated with each species in its own patch, and competition occurs in a third patch. The environmental variable is initialized at the same value (= 70) and changes at the same rate (8e-3 to 2e-4 units per generation; details below) in all three patches. A single dispersal event occurs after the 20-generation canalization period and 200 randomly-chosen individuals of each species-which are as well-adapted to the same environment as their genetic architectures allow-are moved to the third patch. Any individuals not selected to disperse are killed.
I ran two sets of simulations. In the first, I examined the effect of the competitor on the focal species' dN/dt over the first 20 generations of competition, i.e., up through generation 40. These simulations were full-factorial for genetic architecture of the competitor (five network sizes, two network topologies, two recombination rates, and two mutation rates) and five rates of environmental change (dE/dt = 8e-3, 6e-3, 4e-3, 2e-3, or 1e-3), replicated 40 times for each combination.
After the first set of simulations had been completed and analyzed, and no effects of network topology or mutation rate were observed, I ran a new set of simulations. These were full-factorial for five network sizes, two recombination rates, and five rates of environmental change, as above. Analysis of this initial set of full runs showed that even though the dN/dt values were depressed at low dE/dt, the focal species still typically won competition. I then ran another set of simulations with slower dE/dt (= 8e-4, 6e-4, 4e-4, or 2e-4) and all competitor genetic architecture treatments. Both of these sets of runs were represented by 40 replicates of each treatment combination.
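The full-factorial designs can be enumerated directly; this sketch covers the first design, and the variable names and assembled run count are illustrative rather than taken from the paper's scripts:

```python
from itertools import product

# First full-factorial design: competitor genetic architecture crossed with
# the rate of environmental change, 40 replicate runs per combination.
network_sizes  = [16, 32, 64, 128, 256]
topologies     = ["random", "scale-free"]
recomb_rates   = [0.05, 0.5]
mutation_rates = [1e-3, 1e-5]
dE_dt_fast     = [8e-3, 6e-3, 4e-3, 2e-3, 1e-3]
REPLICATES     = 40

treatments = list(product(network_sizes, topologies, recomb_rates,
                          mutation_rates, dE_dt_fast))
print(len(treatments) * REPLICATES)   # 200 treatments x 40 replicates = 8000 runs
```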
Analysis
For all analyses, except when noted otherwise, the predictor variables are factors rather than continuous values. Thus, even though some figures suggest non-linear models may be appropriate, they are not necessary given the structure of the simulations and analysis. A summary of the models considered, and for which AIC was calculated, is provided in Table 5. Standard AIC, as opposed to AICc, was used because of the large sample sizes for the simulations. All simulations were run in NetLogo 4.1 [50]. I used R 2.10 [51] for statistical analysis, and Akaike's Information Criterion (AIC) for model selection [52].
To analyze the first set of simulations, I estimated the focal species' dN/dt during the 20 generations following the start of competition of each run using a basic linear model of population on time. The slope of each regression was stored and used as the response variable in the models described under Initial Competition in Table 5. I used two sets of predictor variables to examine the determinants of the focal species' dN/dt, the first focused on network characteristics and the second focused on quantitative genetics variance components (V_P and V_A). This latter analysis was designed to link the simulations to the classical understanding of evolutionary dynamics, but it is important to note that the variance components are emergent properties of the networks and populations, rather than being specified a priori.
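Both this slope estimation and the winner/time-to-resolution analyses described below can be sketched in Python (the paper itself used R); the DataFrame layout and column names are assumptions:

```python
import numpy as np

def dn_dt(pop_sizes):
    """Slope of a simple linear regression of population size on generation,
    over the first 20 generations of competition."""
    gens = np.arange(len(pop_sizes))
    slope, _intercept = np.polyfit(gens, pop_sizes, 1)
    return slope

# For the second set of runs, the winner analysis could be fit as a binomial
# GLM with a logit link, e.g. with statsmodels, assuming 'runs' is a pandas
# DataFrame with a 0/1 'focal_wins' column and factor-coded predictors:
#
#   import statsmodels.api as sm
#   import statsmodels.formula.api as smf
#   model = smf.glm("focal_wins ~ C(net_size) + C(recomb) + C(dE_dt)",
#                   data=runs, family=sm.families.Binomial()).fit()
#
# and log-transformed time-to-resolution with OLS:
#   smf.ols("np.log(ttr) ~ C(net_size) + C(recomb) + C(dE_dt)", data=runs).fit()
```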
I considered two response variables for the second set of simulations. First, I extracted the winner of each simulation run; if the run lasted 1,000 generations, then the species with the larger population at the last time step was called as the winner. Second, I extracted the time (i.e., generation) of the end of each simulation run; a slight skew to the time-to-resolution data required a log transformation to ensure normally-distributed residuals. I used a generalized linear model with a binomial distribution and logit link function [53] to relate the network and dE/dt predictor variables to the probability that the focal species won the competitive bout (Table 5, Competition Winner). Figure 2 was generated using the akima package for R [54] and treats the predictor variables as continuous values for interpolation purposes. However, predictors were factors in the analysis presented in Table 3. I used an OLS linear regression to relate network characteristic and dE/dt predictor variables to log-transformed time-to-resolution (Table 5, Time-to-Resolution).

Supporting Information

Figure S1 Comparison of focal species' and competitor's variance components at the start of competition. There is no discernible pattern to V_A and V_P in the focal species (left panels), but the competitor's V_A and V_P decline with increasing size of the competitor's network (right panels). Larger-network competitors cannot persist in fast-changing environments, suggesting that V_A ≥ 20 is required to keep up with the changing environment at the higher dE/dt. The lower V_P affords a competitive advantage (i.e., more individuals are closer to the optimal trait value) when networks are large and dE/dt is slow. Found at: doi:10.1371/journal.pone.0014799.s001 (0.45 MB TIF)

Figure S2 Mean V_A of the focal species (solid line) and competitor (dashed line) over the course of competition. These five panels are from runs at dE/dt = 4e-3, 2e-3, and 1e-3, where the initial impact of the competitor is to suppress the focal species, but eventually the focal species tends to recover and win competition. Note these plots are averaged over all three rates of environmental change (dE/dt). The solid, vertical bars in each plot indicate the average end-of-competition time for each network size treatment. The end of competition occurs most quickly when the difference in V_A between species is most evident, and persistence is highest throughout when V_A is similar. Importantly, although V_A quickly becomes similar (ca. 100 generations), the 16-gene competitor typically wins (see Figure 2). See Figure S4 for a partial further explanation. Found at: doi:10.1371/journal.pone.0014799.s002 (0.55 MB TIF)

Figure S3 Mean V_P of the focal species (solid line) and competitor (dashed line) over the course of competition. These five panels are from runs at dE/dt = 4e-3, 2e-3, and 1e-3, where the initial impact of the competitor is to suppress the focal species, but eventually the focal species tends to recover and win competition. The solid, vertical bars in each plot indicate the average end-of-competition time for each network size treatment. Note these plots are averaged over all rates of environmental change (dE/dt). Longer persistence time is associated with a minimized difference in V_P, but even when V_P is similar, the competitor loses (see Figure 2). See Figure S4 for a partial further explanation.
Found at: doi:10.1371/journal.pone.0014799.s003 (0.33 MB TIF)

Figure S4 Mean difference of the average phenotype minus the environmental value of the focal species (solid line) and competitor (dashed line) over the course of competition. At the dE/dt considered here, the focal species should lose competition-at least against a larger-network competitor-because the focal species' dN/dt is much lower than when competing against a 16-gene species (see Figure 1). In these plots, however, we see that the difference between the optimal trait value (i.e., the environmental value) and the population mean tends to be much larger for the competitor (at least for 64- to 256-gene competitors). That is, although the competitor is more accurate, it is more biased, and therefore eventually loses the competition. Found at: doi:10.1371/journal.pone.0014799.s004 (0.31 MB TIF)
"year": 2011,
"sha1": "f84178510d46b04029d8e5ba78539a9528fb8c90",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0014799&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f84178510d46b04029d8e5ba78539a9528fb8c90",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
From the Museum to the Street: Garry Winogrand's Public Relations and the Actuality of Protest
Focusing on Garry Winogrand’s Public Relations (1977), this article explores the problematic encounter between street photography and protest during the Vietnam War era. In doing so, it considers the extent to which Winogrand’s engagement with protest altered the formalist discourse that had surrounded his practice and the ‘genre’ of street photography more broadly since the 1950s. It is suggested that, although Winogrand never abandoned his debt to this framework, the logic of protest also intensified its internal contradictions, prompting a new attitude towards the crowd, art institution, street and mass media. By exploring this shift, this article seeks to demonstrate that, while the various leftist critiques of Winogrand’s practice remain valid, Public Relations had certain affinities with the progressive artistic and political movements of the period.
Introduction
In her 1981 essay, 'In, Around and Afterthoughts (on Documentary Photography)', Martha Rosler sought to reinvent documentary practice through a Marxist critique of its traditions, truth claims and political assumptions. However, in doing so, she also stressed the difference between this project and a second, reactionary attack upon photographic credibility; one conducted by a postwar art establishment, which sought to secure 'the primacy of authorship' and avoid the social by isolating images 'within the gallery-museum-art-market nexus' (Rosler 1992, p. 320). Her clearest targets were the photographers championed by MoMA's director of photography, John Szarkowski, in the 1967 exhibition New Documents: Diane Arbus, Lee Friedlander and Garry Winogrand. In Rosler's view, these street and antihumanist photographers had rescinded all responsibility towards their subjects, abandoning what remained of the pre-war social documentary tradition and indulging in a virtuoso celebration of photographic form. Consequently, they were not only complicit with Szarkowski's modernist-formalist curatorial strategy but epitomised his attempt to locate a new mode of documentary practice which, to paraphrase the exhibition's infamous wall text, made no claim to reform or understand the world, but simply 'enjoyed life despite its terrors' (Szarkowski 2017, p. 1). Such a framework, Rosler concluded, was particularly irresponsible at a moment in which America was already several years into the 'terrors' of the Vietnam War. Consequently, those who sought to reinvent documentary practice had clear adversaries in New York's most powerful art institution.
This demarcation of late 1960s and 1970s photography into left- and right-wing camps is well known today and remains broadly useful. However, this essay will seek to explore a complicating factor: the disproportionate quantity of 'protest images' produced by photographers whom Rosler associated with the (formalist) right. Indeed, whilst leftist photographers of this period were certainly in protest, Winogrand's formalist supporters also hinted at the potential threat which protest posed to their understanding of his practice. Could Public Relations exemplify another claim made by Rosler: that the 'force [of documentary practice] derives at least in part from the fact that the images might be more decisively unsettling than the arguments enveloping them' (Rosler 1992, p. 306)? Of course, it would not be possible to simply separate the photographs from their discourse situation to reveal what Gerry Badger once called the '"good Winogrand" [...] permeated with a social sense' (Badger 1988, para. 14 of 26). While a more extensive, or less problematic, depiction of protest could doubtless be found in the images which were excluded from the exhibition and book, Winogrand was such a prolific photographer that this approach could be used to support any number of claims. Furthermore, there is no good reason to assume that Papageorge, or the other formalist critics, fundamentally misrepresented the photographer's intentions. Winogrand was closely associated with all of them and, on the few occasions in which he was forced to comment on the project's politics, tended to echo their interpretations (or propose even more discomforting ones).[5] However, while Public Relations emerged from a formalist problematic, it did not simply leave it unchanged. Indeed, it will be my contention that, when brought into contact with protest, Winogrand's street photography experienced a type of internal mutation or dialectical reversal. To address 1960s and 1970s protest culture was to approach the formalist concept of street photography from within an alien register; one that risked overturning it from within and placing it on a new path.
Protest in Photography
"The critic Ian Jeffrey once asked rhetorically why Winogrand was so important.Part-a vital part-of the answer is that like André Kertész, Henri Cartier-Bresson and Robert Frank before him, Winogrand was a street photographer.That genre-freewheeling, casual, intuitive, essentially experimental-brings us, it seems to me, close to the existential heart of the photographic process.It demonstrates in its purest form the far from simple impulse to observe, reach out, and encapsulate a fragment of actuality [ . . . ].The broadly diaristic nature of this process, exemplified in the sixties and by Winogrand in particular, results not in reports but in confessions.And (never far away from confessions) in playing games".(Badger 1988, p. 24) The above statement is as good a place as any to start unpacking the notion of street photography represented by Winogrand and the internal threat posed to it by protest.In fact, while Gerry Badger's article gestured towards a social reading of Winogrand's practice, this comment on the photographer's importance is underpinned by a formalist understanding of street photography as the most medium-specific photographic 'genre'.Badger does not simply grant the genre this status because of its adherence to the established principles of straight photography.He also suggests that the fragmentary, diaristic form of observation demanded by working in the street reveals the medium's 'existential heart' by purging it of reportage.In this sense, it would seem that street photography-a type of 'art documentary' which privileges experimental, personal, elliptical or even comic results-radically differs from protest photography, however one chooses to define it.Indeed, throughout its many manifestations, protest photography has tended to favour the exact opposite strategies, from instrumental realism to the refusal of authorship and outright political agitation.The early twentieth-century worker photography movement, for example, commonly viewed experimentation as a concession to 'bourgeois aesthetics' and implored the photographer to act as part of a collective.However, as Badger suggests, street photography acquired its classic formalist 5 During the making of Public Relations, Winogrand tended to short-circuit political readings of his work by suggesting that he was only interested in the photographic problems posed by his subject matter.However, towards the end of his life, he often adopted a dismissive or openly hostile attitude towards the protestors.For a telling example of this see (Diamonstein and Winogrand 1982).
guise during a moment, the sixties, in which protest was never far from its privileged terrain.Indeed, Public Relations shows us what happens when protest, both literally and metaphorically, occupies the pure space of the street photograph.Before addressing how this encounter affected Winogrand's practice, it is therefore necessary to consider how the discourse of street photography sought to control it and the compromises which this required.
The main interpretive framework which surrounds Public Relations is that of Tod Papageorge, the photographer, curator and writer who designed the exhibition, sequenced the book and wrote the catalogue essay. Having selected the 76 photographs which appeared in the show from over 6500 proof prints, Papageorge bears a significant responsibility for the major document of protest which appears in the project.[6] Furthermore, as the installation shots show, he granted images of social activism pride of place, presenting them in dense groups of seven or eight which recurred throughout the hang. This active prioritisation of protest continues in the introduction to the book which, despite its near-monographic range and frequent formalist motifs, contains one of the most extensive commentaries on the project's content to emerge during the photographer's lifetime. However, it is in this commentary that Papageorge's conflicted relationship towards the protest image begins to surface. For example, in a telling and oft-quoted passage we are told that the photographs provide 'evidence [...] of the collective hysteria which locked us into "the sixties"' and 'how we behaved under pressure during a time of costumes and causes, and of how extravagantly, outrageously, and continuously we displayed what we wanted' (Papageorge 1977, pp. 14-15). In making these comments, Papageorge went further than Badger by evoking the connection between Winogrand's photographs and the explicit politics of 'the sixties'. However, he did so only to reassure the reader that they depicted a moment which had, at long last, ceased to have any bearing upon the present. While this framing was partly inevitable given the retrospective sequencing of the project, Papageorge's evocation of public extravagance, and even hysteria, suggests that the unease provoked by Winogrand's image of 'the sixties' and its immediate aftermath had deeper causes.
To appreciate the structural basis of this discomfort it is useful to explore the contrast between Winogrand's images of Central Park demonstrations and the photographs which Papageorge produced whilst walking through the park to curate the exhibition in 1977. Recently published as Passing through Eden, these shots did not so much respond to Public Relations as attempt to purify its subject matter by reviving an idealised version of classical street photography (Papageorge 2008). In stark contrast to the chaotic and crowded shots of Winogrand's work, Papageorge depicted the park as a tranquil, near-bucolic tabula rasa. No large crowds appear in this 'Edenic' space. Rather, we see individuals, couples or small groups engaging in conversations, kissing, sleeping, sunbathing, eating or reading, all glimpsed in passing by a photographer in motion. Consequently, the public behaviour of the figures seen in Winogrand's images is replaced by a sense of relative privacy punctuated by moments of ambiguous interpersonal drama. While this view of Central Park partly reflected the declining militancy of the late 1970s, it also pinpointed one of the features which made protest so intolerable to street photography: its rejection of the bourgeois division between public life and private experience. Indeed, Papageorge inadvertently demonstrated that street photography treated this division as its 'blank canvas', hunting for (or provoking) situations in which private experience briefly and ambiguously erupted into public life. Protest, on the other hand, involved the active reclamation of public space through the collective expression of ideas and emotions. Impersonal, sustained and as unambiguous as possible, it intentionally removed the basic hinge which had been crucial to the street photograph's mystery. In this sense, Winogrand's protest images risked returning to reportage, not because they were straight photographs, but because the protestors drowned out the 'confessions' and 'fictions' of the street photographer with banners and demands.
As a result, Papageorge's introduction and image selection for Public Relations attempted to nullify this politicised public behaviour by constructing a series of dubious equivalences. Most obviously, it sought to impart a sense of self-serving promotion to the act of protest by conflating it with other events arranged for the mass media. The suggestion that each of the figures depicted in the photographs-be they sports stars, politicians, art-world doyens or protestors-'outrageously [...] displayed what [they] wanted' implied that they all had the same base motivation for being in the public eye: to have their 15 minutes of fame (Papageorge 1977, p. 15). Indeed, the sequence even contains a picture of Andy Warhol at a Frank Stella opening! However, given that the book project also included shots of the so-called hard-hat protests, which were organised by pro-war construction unions to disrupt the antiwar movement, Papageorge's comments on 'collective hysteria' suggested an even more problematic equivalence. In the Public Relations hang, the most famous of these hard-hat images, New York City, was placed next to shots of police brutality, peace marches and sit-ins, implying an identity between opposing groups (Figure 1). A sense of millenarian fanaticism was also imparted to the block of photographs through the inclusion of an image showing protestors marching under a dense array of flags and crucifixes (Figure 2). In each of these ways, the exhibition sought to construct an image of an America which had temporarily lost its reason-engaging in forms of collective behaviour, or even therapy, that were equally irrational, be they left or right. While this picture pilloried the Nixon regime as much as it did the protestors, it was, at best, a liberal critique of the Vietnam War era as collective folly-at worst, a photographic visualisation of the so-called horseshoe theory.

At the moment of their production, however, the photographs were enveloped in a different, if equally problematic, discourse: that of John Szarkowski. It is often assumed that Szarkowski's exhibition programme during the 60s and 70s avoided addressing the Vietnam War. And, indeed, this is largely true of the main shows featuring work by Winogrand at this time-namely, New Documents and The Animals.[7] However, during this period, Szarkowski did combine images from Public Relations with others by photojournalists, including Benedict J. Fernandez and several Magnum photographers, to curate a lesser-known exhibition entitled Protest Photographs. Opening at MoMA on 23 May 1970, the timing of this show was anything but coincidental. Less than a month before, President Nixon had announced the Cambodia campaign, giving rise to a wave of protests which culminated in the Kent State shootings on 4 May. As Christopher Phillips notes, the show appeared to recognise these events by presenting the shots unmounted as if 'hot off the press' (Phillips 1982, p. 60). Indeed, correspondence preserved in the MoMA archive shows that, while the Winogrand works were already in the collection, Szarkowski had the Fernandez negatives hastily shipped in and printed in an apparent attempt to fill gaps in the show's coverage (Szarkowski 1970). The exhibition hang also hinted at a form of photo-essayistic narrative by building up to a single, soon to become iconic, image: John Filo's shot of Mary Vecchio kneeling over the fatally wounded student Jeffrey Miller at Kent State.[8] However, in a gesture which would later be repeated in Public Relations, the show was otherwise lacking in contextual information, beyond the locations of the photographs and the names of the photographers. Furthermore, Szarkowski insisted that the photographs be printed at 16 × 20 inches with a one-inch border, a format which had strong associations with his exhibitions of art photography. In this sense, the show granted press images an unusual parity with those of Winogrand, adopting an ambiguous position between art and photojournalism.

Consequently, while Protest Photographs may have been an attempt to answer criticism of the museum's political quietism, it was also something of a precursor to one of Szarkowski's most overtly formalist experiments: the 1973 exhibition From the Picture Press. By displaying press images without their captions, this exhibition adopted a near-Greenbergian approach to medium specificity. Yet whereas Greenberg understood photography as a content-based medium, Szarkowski sought to secure its formal autonomy by demonstrating that even the most apparently informative photographs could only convey a simple, fragmentary message (Greenberg [1946] 1986). The images, which had been partly researched and selected by the late Diane Arbus, were gathered under generic titles intended to illustrate this narrative simplicity, such as 'Disasters', 'Winners' and 'Losers'. These titles could have easily included 'Protest'. Indeed, it is possible to assume that Szarkowski viewed the images featured in Protest Photographs in a similar light to those of the 1973 exhibition: as mere 'symbols' which failed to explain the full meaning of the events in question (Szarkowski 1966, p. 8). In this sense, Protest Photographs encouraged the viewer to treat images of social activism like street photographs, focusing upon incidental details or moments of formal coherence, rather than their supposedly unrepresentable political content. While this framing may have been appropriate for Winogrand-at least in Szarkowski's view-it was hardly representative of the other photographs in the exhibition.[9] For example, as John Lucaites and Robert Hariman demonstrate, John Filo's image had already been widely disseminated in protest flyers and literature with the intention of organising further demonstrations against the war and police violence (Lucaites and Hariman 2007, pp. 152-54). Consequently, conflating this type of imagery with photographs by Winogrand served to downplay the galvanising effect which it was having upon the peace movement. In short, for Szarkowski, the images later included in Public Relations were literally protest 'in' photography; social activism safely contained within the 'four walls' of the photograph and the art institution.

[7] For an alternative reading of The Animals, as a satire of American society during the Vietnam War, see (Balaschak 2012).

[8] For an account of the iconic status of this photograph see (Lucaites and Hariman 2007).
The Museum and the Street
The institutional frameworks which surrounded Winogrand's project, both during and after its production, demonstrate why the formalist understanding of street photography was either incompatible with protest or served to reduce it to business as usual. Nevertheless, by studying the differences between Szarkowski's and Papageorge's framings of the project, it is also possible to see how certain aspects of Winogrand's work either troubled photographic formalism or pushed it in unlikely directions. The years 1970-1977 played host to a series of historical developments which significantly altered the reception of Winogrand's work. Most obviously, in 1974 Nixon resigned following the Watergate scandal, paving the way for the end of the Vietnam War in 1975. By 1977, the photographic medium also enjoyed a significantly more established position within the art institution and market; a meteoric rise subsequently referred to as the 'photo-boom'.[10] Consequently, while Papageorge's curatorial approach downplayed the political implications of Winogrand's photographs, it also unveiled certain aspects of the project which would have been intolerable in 1970-at the height of the peace movement, when photography was still partly external to the art establishment. In what follows, I will seek to demonstrate how, at these points of rupture, Winogrand pushed classical street photography into an uneasy parallelism with protest.
To appreciate the subversive potential of Winogrand's project it is important to address, not only his shots of protests, but also his images of exhibition openings. Towards the end of Winogrand's life, these photographs were read either as quaint archival records or as loving send-ups of the rich and powerful, akin to his earlier photographs of party goers and businessmen.[11] It was this kind of imagery which led the photography critic A.D. Coleman to argue that Winogrand's 'professional and economic allegiance is to the upper class [and] the museum/gallery network' (Szarkowski 1988, p. 33). Nevertheless, it is telling that at the historical moment in which Winogrand was taking the photographs Szarkowski was willing to display the protest images, but studiously avoided his shots of the artworld elite. One possible reason for this is that since the late 60s MoMA had itself become the target of protest on account of its muted response to the war and its board of trustees, which infamously included Nelson Rockefeller: the Republican governor of New York and a key supporter of the war effort. During the early 1970s, groups such as the Art Workers Coalition produced a series of works which revealed the corporate connections of this and other boards. Indeed, just a day before the opening of Szarkowski's Protest Photographs, these actions culminated in the famous Art Strike against Racism, War and Oppression, which demanded that all museums close for the day. Of course, this is not to suggest that Winogrand challenged or analysed the artworld to the same extent as these groups. Although he depicted Rockefeller and others, no figures were named in the accompanying captions. Nevertheless, his images do suggest an attempt to bring this institutional framework into the public eye. The only photographer in the room, Winogrand appears as a kind of counterspy within the New York Glitterati; someone invited to the exclusive events yet partly distant from them. As such, we are presented with a detached study of people chatting, flirting, eating and, most importantly, shaking hands. Winogrand was known for capturing these gestures, and the ambiguous stories which they implied, in the streets of Manhattan. In this context, however, this aspect of street photography was transformed into an unambiguous comment on social and economic capital. By 1981 Winogrand was quite explicit about the political-economic purpose of such events, stating that 'In the case of museums, it's always got to do with money, people who donate and things like that [...] But there's nothing evil about it' (Diamonstein and Winogrand 1982, para. 29 of 215). This remark, like most of the comments on politics which appear in Winogrand's interviews, is imbued with a laconic cynicism. Yet even if the photographs had been similarly blunt, they were clearly understood as a threat to the museum at their moment of production; more shocking than openly political images which allowed the institutional framework to remain invisible.

[9] Szarkowski's later suggestion that Winogrand went to photograph one theme-the effect of media on events-but became distracted by 'minor contingencies' suggested that he was instinctively aware of the medium's representational limitations (Szarkowski 1988, p. 32).

[10] In the year in which Public Relations was staged, Christies' and Sotheby's New York opened photography departments and the city played host to at least 10 photography auctions. See (Hacking 2018, p. 186).

[11] For example, as Pepe Karmel wrote in 1981, 'Winogrand basically has an indulgent, even admiring attitude toward the rich, powerful, and renowned: he seems to feel at heart that they are awfully lucky to be able to make such fools of themselves, without worrying about the consequences' (Karmel 1981, p. 41).
By showing art being consumed, traded, displayed and funded for private gain, these images also call into question the pictorial self-sufficiency which Szarkowski attributed to the protest images. Indeed, Winogrand's approach to the paintings at these events even suggests a disdain for the finished art object. For example, his images of a 1970 Frank Stella opening at MoMA make heavy use of flash, picking out the figures in the foreground, yet consigning the paintings to darkness. In fact, it is characteristic of these shots that the depicted figures seem to be more interested in one another than the artworks, which are reduced to an incidental background. Winogrand had a similar attitude towards his own images, even if he benefitted from their display at MoMA. Indeed, as Public Relations demonstrates, he tended to separate his practice from its printing, editing, arrangement, interpretation and exhibition; tasks which he largely delegated to others. Many of Winogrand's champions sought to defend this practice by treating it as a byproduct of his 'restless genius'. But, in reality, it caused them significant discomfort. The problem was not just that he left his work undeveloped or allowed it to get damaged, but that he made huge amounts of it, leaving editors such as Papageorge to pick out the 'masterpieces'. These aspects of his practice became more pronounced during the making of Public Relations and especially toward the end of his life, where they were treated as evidence of his artistic decline. For example, when confronted with the 9000 rolls of film which Winogrand left undeveloped or unproofed at his death in 1984, Szarkowski commented forlornly that 'To expose film is not quite to photograph' (Szarkowski 1988, p. 36).
Yet Winogrand's increasingly prolific approach to photography could also be understood as a process-based turn within his oeuvre. By the 1970s, Winogrand's tendency to 'overshoot' could no longer be treated simply as evidence of his indifference to content or enraptured immersion. Rather, it implied a submission of control and selfhood to the near-automatic activity of taking pictures. As Lucy Soutter notes, through this process, Winogrand increasingly dispensed with the most basic rudiments of composing a shot, resulting in photographs which are 'interesting to look at specifically because they teeter on the brink of banality'-a kind of aesthetic deskilling not dissimilar to that employed by certain conceptual photographers (Soutter 1999, p. 9). However, these photographs shared another feature with the conceptual use of photography, in that they were more important as records of movement than as finished art objects. Indeed, when asked why he made art, Winogrand would simply say 'It's a way of living. It's a way of passing through the time'-as if attempting to minimise the difference between the practice of photographing and everyday life itself (Szarkowski 1988, p. 32). Consequently, while Winogrand has often been treated as a single-image photographer, the shots contained in Public Relations must be seen in the opposite way: as cross-sections of a near-endless stream, limited only by Winogrand's own movement. Working in this way allowed him to deny the hierarchical divisions between different aspects of life. Indeed, there are many stories of Winogrand dashing into MoMA fresh from the street with little respect for their difference. Winogrand may not have challenged the privileges which allowed him to traverse these boundaries so fluidly. However, in crossing them, he not only brought the artworld closer to the street; he also brought the street, and its crowds of protestors, closer to art.
Winogrand's increasingly processual practice even led him to a certain understanding of protest as an artistic activity in its own right. The clearest evidence of this is his interest in the hand gestures, banners, placards and other objects used by the protestors, as seen in an image such as Peace Demonstration, Washington, D.C. (Figure 2). However, whilst he clearly appreciated the incongruous juxtapositions and theatricality of these displays, he did not treat them-as Szarkowski presupposed-as merely one means amongst others for creating finished, if unorthodox, pictures. Rather, Winogrand attempted to keep both the placards and the photographs in motion. The placards are usually at the head of a moving crowd, shot using his trademark tilted frame and wide-angle lens to increase the dynamism. The resulting shots are infused with a type of kinetic force. They attempt to transcend their status as still images by sustaining the kind of energy which he also sought in his own practice. But did Winogrand integrate his processual practice into the political activity of such groups? According to Joel Meyerowitz, all the major street photographers including Winogrand 'went to every public demonstration, every "be-in" in the park, all the gatherings down in the Forties and Times Square; wherever there were marches, we all went. [...] You lent your body to them because it was right' (Westerbeck and Meyerowitz 2001, p. 375). But is this strictly true? What was Winogrand's relationship towards these new, politicised articulations of the crowd? Did his way of photographing, as a way of moving through life, extend to participating in demonstrations and capturing them from the inside?
To answer these questions, we must briefly consider how Winogrand approached the crowd in his earlier work. Despite the popularity of this field, its social framework is rarely examined. For example, it is common to read that Winogrand addressed the 'democratic' crowd or that street photography was his 'republic', but what exactly does this mean?[12] In his essay 'Commons and Crowds', Steve Edwards argues that street photography was based upon a 'paradoxically detached immersion' (Edwards 2009, p. 453). Underpinning this was an understanding of the crowd as a 'flow of endless particularity' reducible to its constituent parts (Edwards 2009, p. 453). Consequently, the street photographer both sought to isolate anonymous individuals within the crowd and assumed their own 'unique subjectivity as a figure outside of the mass' (Edwards 2009, p. 453). I would add that this sense of mutual separation was also underpinned by a notion of photographic citizenship, albeit not the transactional one recently proposed by Ariella Azoulay, but another based on a notion of negative rights. To be more specific, street photography presupposed that the medium itself granted its subjects a certain right to privacy and freedom, on account of its fleeting nature and incapacity to go beyond the visual. This formalist assumption was closely guarded as it served to justify the aggressive approach exemplified by much of Winogrand's earlier work. For example, as Tod Papageorge claimed in his introduction to Public Relations, the argument that the photographer has a 'moral' responsibility towards their subject is, in fact, an 'old confusion' because even the most intrusive photograph is 'just a picture' and has no immediate connection with those whom it depicts (Papageorge 1977, p. 15). This understanding of photography-as a kind of automatic barrier between the photographer and the depicted subject-has frequently been criticised for occluding a series of power relations. However, protest presented it with another challenge: an attempt to reach across the supposed barrier to form consensual relations of solidarity. In this sense, while the street photographer happily inhabited the average, alienated street crowd, they tended to see collective action-however peaceful-as an attack upon the photographer's autonomy. In short, for all its talk of democracy, street photography feared the 'tyranny of the majority'.
'Sly' Photojournalism
Returning to Public Relations, it seems clear that this cautious relationship towards the politicised crowd remains at least partially present. If Winogrand 'lent his body to the crowd', as Meyerowitz claims, the evidence is rarely there in the pictures, which are largely shot from the sidelines or facing into an advancing march (Figure 2). Yet while he may not have been willing to become a participant observer, photographing demonstrations forced Winogrand to realise that his movement was not simply that of the autonomous street photographer. Rather, to follow a demonstration from the sidelines was to join the group of photojournalists who gathered to photograph the event. In a 1981 interview with Barbaralee Diamonstein, Winogrand even suggested that he was more interested in the challenges posed by adopting this position than in the project's official theme:

Winogrand: I went to events, and it would have been very easy to just illustrate that idea about the relationships between the press and the event, you know. But I felt that from my end, I should deal with the thing itself, which is the event. I pretty much functioned like the media itself.

Diamonstein: But weren't you the media then?

W: I was one of them, yeah, absolutely. But maybe I was a little slyer, sometimes. (Diamonstein and Winogrand 1982, p. 215)

Winogrand had been 'one of them' before, having initially supported his street photography through a career in photojournalism and advertising, a job which he had only abandoned in 1969 to pursue a role in teaching. However, his comment on being a 'little slyer' suggests an attempt to pursue press photography within a different, perhaps more critical or satirical, register. What exactly did he mean by this? Did it allow him to overcome the longstanding problems which have tended to arise from the encounter between protest and the mainstream media?
Before answering this second question, it is necessary to consider what Winogrand's 'sly' photojournalism looked like in practice and explore its political implications. The photographs in Public Relations come in two basic forms: first, shots of events designed, at least in part, to be captured and publicised through photography and, secondly, images which capture the participants, the journalists and their recording equipment, demonstrating how the press channelled the event into a series of photo-ops, interviews and other spectacles. In this sense, Winogrand simultaneously behaved like the media and took a sideswipe at it, criticising its effect upon public life. This assessment of mass-mediatised 'reality' had longstanding precedents on the left. For example, in 1927 the Marxist critic Siegfried Kracauer had complained that, with the rise of the American illustrated magazine, 'the world itself has taken on a "photographic face": it can be photographed because it strives to be absorbed into the spatial continuum which yields to snapshots' (Kracauer 1995, p. 59). This led him to conclude that photography was the 'go-for-broke game of history', in that, despite weakening our associative faculties, it had the revolutionary potential to reveal the provisional status of any given reality. However, for Papageorge, Winogrand's images evoked a significantly less dialectical theoretical framework: the conservative historian Daniel J. Boorstin's notion of the 'pseudo-event' (Watson 2016, p. 5). Boorstin had coined this term in his 1962 book The Image to describe the rise of planned events which had little purpose other than to garner news coverage (Boorstin 1962, pp. 11-12). 13 However, while Boorstin's argument was superficially similar to leftist critiques of media spectacle, it concluded with a patriotic call to resist the semireligious attraction of illusions and restore the rugged individualism of American life. In their measured ambiguity, Winogrand's photographs could sustain both political interpretations, seemingly typifying his tendency to 'sit on the fence'.
Nevertheless, if we return to the protest images, is it possible to detect a specific stance? To my mind, these images are by no means free of a certain distaste for the planned protest which manifests in a search for forms of 'authenticity'. For example, Winogrand frequently attempted to capture the nervous atmosphere, or sense of confusion, which accompanied an outwardly hopeful demonstration (Figure 3). His tendency to privilege marches over static forms of peaceful protest such as rallies also implied a preference for spontaneity and action. However, Winogrand was not naïve about the need for political movements to strategically mobilise press coverage and, in this sense, his critique of inflated media artifice was more relevant to the other events depicted in the book (Diamonstein and Winogrand 1982, para. 19 of 215). Indeed, excepting his images of protestors being interviewed, Winogrand saved his most acerbic deconstructions of the staged event, in which wires, lights and recording equipment are visible, for his shots of patriotic marches, official politics or actual media events. The protest images, on the other hand, both tackled the event head on and treated it as a space for photographic experimentation. Despite making formalist claims for his originality, this hybrid of street photography and photojournalism also stopped him from reproducing the negative attitude towards protest which dominated mainstream coverage of the antiwar movement. In what follows, I will seek to demonstrate this by discussing two examples: his depictions of 'violence' and the politicised crowd.
The image of the violent protestor is, of course, one of the most infamous strategies for depicting demonstrations as unstable mobs on the brink of anarchy. Widely used to this day, this trope was also common within coverage of the American antiwar movement. For example, as Melvin Small demonstrates, on 22 October 1967, after the 'Siege of the Pentagon', The Washington Post published an image of protestors confronting police on the Pentagon steps with the caption: 'compact mass of armed policemen resist a surging band of demonstrators', downplaying the police violence and peaceful rally which had occurred earlier in the day (Small 1994). In Winogrand's work, however, the only images of this kind are those of the hard-hat rallies. In New York City, for example, the crowd rises up behind a central figure whose crazed expression and stance seem to prefigure a more collective turn to violence (Figure 1). Here, Winogrand behaved just like the reporters who can be seen rushing to capture this outburst, photographing the central figure so that he almost appears to be swinging an American flag at the crowd. The work also includes a more sober record of the real violence which such groups, the police and the army committed against the antiwar movement. These include images of students fleeing a tear gas attack at Kent State and a gruesome shot of a young protestor with blood pouring down his face from a head wound. Winogrand was not always willing to attack or name the perpetrators of this violence. His images of the police, for example, are hardly a serious indictment, despite poking fun at their joyless attitude and sentimental patriotism. Yet his work gives a strong sense that it was predominantly pro-war groups or law enforcement which engaged in 'collective hysteria'.
Images of crowds shot from above are also a common feature of the press coverage of demonstrations. These images may be used to highlight the scale of a rally. However, they have also been used to downplay the diversity of the crowd, reducing it to a mechanical mass. 14 Winogrand was clearly uncomfortable with this type of imagery as well. It makes a partial appearance in Peace Demonstration, Central Park, New York but mixed with a kind of post-apocalyptic atmosphere (Figure 3). Indeed, even here, Winogrand sought to impart a sense of movement to an otherwise static scene by shooting at a moment in which the entire crowd turned to watch a mass release of balloons. As previously suggested, this discomfort towards large politicised crowds made Winogrand fairly ignorant as to the political value of sit-ins and rallies; he later admitted that the speeches and songs bored him (Diamonstein and Winogrand 1982, para. 32-34 of 215). However, it did lead him to foreground the range of political groups sometimes referred to as the new social movements. Just as figures on the New Left were attempting to rethink the traditional centrality of blue-collar workers, Winogrand was depicting not just labour unions, but gay liberation groups, the women's movement, the 'youth' and so on. This is not to say that these images are free from the mocking humour or even misogyny of his earlier street photographs. By focusing upon the variety of these groups, Winogrand may even have been trying to create a sense of chaos akin to that of the average Manhattan street. However, the resulting interplay between diversity and simultaneity provides a more effective foundation for approaching the, still prescient, problem of forming alliances between such groups than the sense of false unification often suggested in photographs of large politicised crowds. At the very least, Winogrand's project showed the crowd's complexity rather than conflating left and right.
14 For a discussion of this see (Memou 2013, p. 23).
Conclusions
In conclusion, while Winogrand's work may have been formalist through and through, this did not stop him from producing a rich document of protest culture of significant interest to the left. Indeed, if anything Winogrand pushed certain formalist motifs to such an extreme that they became de-anchored from their theoretical context and entered into an unlikely identity with protest. Winogrand was celebrated for his indifference to the subject matter of his photographs. But he pushed this in a new direction when he photographed the infrastructure of the art establishment, started shooting in an automatic stream and incorporated his photography into the process of everyday life. He was also seen as a figure who had understood and transcended photojournalism. Yet in doing so he also avoided both the media cynicism which surrounded the formalist project and the derogatory representations of protest produced by the mainstream media. As a result, he adopted a novel hybrid position between the moving spectacle of the politicised crowd and the structure of its surrounding media apparatus. To make this argument is not to celebrate Winogrand's 'genius' or project a clear leftist intention onto his work. Rather, Winogrand's achievement was to follow the logic of protest just far enough to expand the parameters of street photography in ways which allowed something new to emerge.
Unfortunately, this fragile rupture would prove short-lived. By the time Public Relations was exhibited in 1977, Winogrand had distanced himself from the political elements of the project and returned to his old methods and apolitical themes. His final finished work, Stock Photographs, adopted a significantly less ambitious approach to the public event by addressing a livestock show, rodeo and its participants. The project was something of a retread of previous works such as The Animals and Public Relations. Furthermore, those who posthumously explored his unprinted shots found them to be characterised by a diluted form of classical street photography and an increasingly bleak outlook. In this sense, the subversive potentialities of Winogrand's processual method gave way to a kind of bad infinity or stifling lassitude. This late sense of indirection has been attributed to a number of factors, not least his deteriorating health. However, it is hard not to feel that Winogrand was at his peak treading a fine line between street photography, protest, (pseudo) institutional critique and photojournalism, which was increasingly difficult to sustain. Indeed, as the political climate which underpinned Public Relations receded and the streets of western cities were increasingly given over to commerce and gentrification, street photography itself experienced something of a crisis. When asked in 2008 which figures had taken on the mantle of 1960s street photography, Tod Papageorge unwittingly epitomised its descent into a series of ever more inflated 'fictions' by naming only his former student Philip-Lorca diCorcia (Papageorge and Schuman 2009, para. 53 of 67). Nevertheless, Joel Sternfeld's 2002 work Treading on Kings: Protesting the G8 in Genoa shows us that the productive tension between protest and classical street photography can still be reignited. Returning to Winogrand's Public Relations today reminds us that this tension was not external to photographic formalism but a contradiction at the heart of its project. | 2019-05-17T13:16:38.771Z | 2019-05-03T00:00:00.000 | {
"year": 2019,
"sha1": "8dc4b65bd30a08525270195e48a0ccfe86317662",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-0752/8/2/59/pdf?version=1558937429",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "8dc4b65bd30a08525270195e48a0ccfe86317662",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
4033154 | pes2o/s2orc | v3-fos-license | Interaction of cochlin and mechanosensitive channel TREK-1 in trabecular meshwork cells influences the regulation of intraocular pressure
In the eye, intraocular pressure (IOP) is tightly regulated and its persistent increase leads to ocular hypertension and glaucoma. We have previously shown that trabecular meshwork (TM) cells might detect aqueous humor fluid shear stress via interaction of the extracellular matrix (ECM) protein cochlin with the cell-surface-bound, stretch-activated channel TREK-1. We provide evidence here that the interaction between these two proteins is involved in IOP regulation. Silencing of TREK-1 in mice prevents the previously demonstrated cochlin-overexpression-mediated increase in IOP. Biochemical and electrophysiological experiments demonstrate that high-shear-stress-induced multimeric cochlin produces a qualitatively different interaction with TREK-1 compared to monomeric cochlin. Physiological concentrations of multimeric but not monomeric cochlin reduce TREK-1 current. The results presented here indicate that the interaction of TREK-1 and cochlin plays an important role in maintaining IOP homeostasis.
Whether multimerization of cochlin leads to its interaction with TREK-1 in order to effectively regulate IOP remains unknown. The magnitude of vWFA multimerization within the physiological range is governed by the degree of shear stress 17. Different degrees of vWFA multimerization result in interactions with different protein partners and, consequently, different biological outcomes. Information on differences between the interactions of multimeric versus monomeric cochlin is completely lacking. Although regulation of cochlin has been shown to be important for IOP and elevated cochlin results in elevated IOP 18, whether cochlin and TREK-1 are needed together to generate broad IOP changes has not been investigated. Here we present evidence that cochlin and TREK-1 are both required components for IOP regulation. We present evidence that multimeric cochlin, associated with the pathologic state, alters biochemical and functional electrophysiological properties differently than monomeric cochlin.
Results
Cochlin and TREK-1 are necessary components for IOP regulation. The DBA/2J-Gpnmb+/SjJ mouse is a strain genetically matched to the DBA/2J mouse; however, DBA/2J-Gpnmb+/SjJ mice do not develop elevated IOP, nor do they demonstrate glaucomatous optic nerve damage 19. We have previously demonstrated an increase in IOP due to overexpression of cochlin in DBA/2J-Gpnmb+/SjJ mice 18. TREK-1 was silenced in the DBA/2J-Gpnmb+/SjJ mouse TM through the use of shRNA (Fig. 1). In this experiment, channel silencing prevented the cochlin-induced increase in IOP when compared to control animals that were injected with cochlin alone (n = 6; p < 0.016), demonstrating that both proteins are needed for IOP regulation. TREK-1 silencing without cochlin overexpression maintains a lower IOP similar to what is observed with cochlin overexpression and TREK-1 shRNA in combination (n = 6; p < 0.016) (Fig. 1A). The TREK-1 shRNA reduced TREK-1 expression by ~70% compared to control samples (Fig. 1B). Following the establishment of these components in IOP regulation, we asked whether interaction between them elicits spatial changes in the TM by altering the extracellular matrix of TM cells 7, 20. Interaction of both proteins produces changes in TM cell architecture towards homeostatic regulation of fluid flow. The interaction of cochlin and TREK-1 is therefore of particular interest, as the silencing experiments show that both components are needed for IOP regulation. Cochlin that was multimerized through the application of shear stress pulls down a greater amount of TREK-1 than monomeric cochlin in western blot analysis performed with purified cochlin (Fig. 2A). This is consistent with the observation that similar amounts of cochlin are associated with greater amounts of TREK-1 in human TM samples, modeled through the application of shear stress (Fig. 2B). Cochlin in glaucomatous TM is likely in the multimeric form, perhaps due to the presence of shear stress or oxidative stress. Shear stress causes an increase in cochlin, resulting in an increase in TREK-1 interaction. To investigate if the cochlin-TREK-1 interaction produced significant rearrangements in cellular organization within the TM cells, we measured fluorescein dye transport using an Ussing-type chamber across a trilayer of TM cells cultured on a PVDF membrane (Fig. 2C). The TM cells were transfected with TREK-1 + cochlin (monomeric), TREK-1 + Retinal Pigment Epithelium-Specific Protein 65 kDa (RPE65), TREK-1 alone, TREK-1 shRNA, or were non-transfected controls. RPE65 was used as a control protein in this experiment because of its molecular similarity in size to cochlin. The filter alone allowed substantial flow of the dye, while addition of the cells decreased this flow tremendously. TM cells transfected with TREK-1 + cochlin show a significant increase in fluorescein dye transport compared to the other transfected (TREK-1 only or TREK-1 + RPE65 or TREK-1 shRNA + cochlin; n = 10; *p < 0.001) or non-transfected cells (n = 10; *p < 0.001). The much higher magnitude of transport when cells are transfected with both TREK-1 + cochlin [compared to that with TREK-1 shRNA + cochlin (n = 10; *p < 0.001)] suggests that interaction of both proteins produces significant changes in cell shape due to cytoskeletal remodeling. This implies that both elements (TREK-1 and cochlin) are needed to alter cell architecture towards homeostatic regulation of fluid flow. Collagen gel assays were performed in order to further investigate TM cellular architecture (Fig. 2D). Interestingly, TREK-1 + RPE65 transfected cells and TREK-1 only transfected cells performed similarly, with a minimal expansion compared to TREK-1 + cochlin transfected cells, which showed a significant increase in gel expansion (compared with cells only; n = 10; *p < 0.001). Downregulation of TREK-1 expression with TREK-1 shRNA transfected cells caused contraction rather than expansion. The contraction was significant compared to the cell-only control (n = 10; *p < 0.001) or with TREK-1 + cochlin (n = 10; *p < 0.001).
[Figure 1 caption (recovered text): Analyses of variance (ANOVA) showed a statistically significant difference between the three groups. Scheffe's post hoc test showed that cochlin alone produced an increase in IOP that was statistically different from the maintenance of a lower IOP in cochlin + TREK-1 shRNA (n = 6; *p < 0.016) and TREK-1 shRNA only (n = 6; *p < 0.016) treated groups. (B) A representative Western blot of control and TREK-1 shRNA treated mouse (DBA/2J-Gpnmb+/SjJ) TM as indicated. The blot was sequentially probed (with a stripping step prior to second probing) with antibodies to TREK-1 and GAPDH as indicated.]
[Figure 2 caption (recovered text): Upper panel shows cochlin with or without shear stress as indicated on a western blot after separation on a non-reducing but denaturing gel; middle and lower panels are identical experiments but from reducing and denaturing gels probed with cochlin and TREK-1 antibodies as indicated. (B) Coimmunoprecipitation followed by western blot analysis details the interaction of cochlin and TREK-1 in normal as well as glaucomatous samples (the latter presumably in the presence of shear stress). (C) Fluorescein dye migration was measured as fluorescence intensity across several layers of cells on a PVDF membrane within an Ussing-type chamber (as shown in top figure) in the presence of TREK-1 + cochlin, TREK-1 + RPE65, TREK-1 only, or TREK-1 shRNA + cochlin as indicated. Student's t-test showed a significant difference in fluorescein transport between TREK-1 + cochlin and all other transfected groups (n = 10; *p < 0.001). TREK-1 + cochlin also showed a significant difference in dye transport with non-transfected cells (n = 10; *p < 0.001). (D) TM cells transfected with TREK-1 + cochlin, TREK-1 + RPE65, TREK-1 only, TREK-1 shRNA + cochlin, or untransfected (hydrogel + cells) as indicated were cultured with rat tail collagen and placed into capillary tubes. The amount of collagen expansion was measured and recorded. Expansion shown by TREK-1 + cochlin, TREK-1 + RPE65, and TREK-1 only was found significantly different from cells only (hydrogel + cells) by pairwise t-test. TREK-1 shRNA + cochlin showed a contraction, which was again statistically significant versus cells only (n = 10 per group). (E) Immunohistochemistry of human TM sections probed for cochlin (red) and TREK-1 (green). The brightfield reference image indicates the locations of the trabecular meshwork (TM), Schlemm's canal (SC), and ciliary body (CB).]
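For readers who wish to reproduce the style of analysis reported in Figure 1A, the sketch below runs a one-way ANOVA followed by a hand-coded Scheffe post hoc comparison in Python. The IOP values are hypothetical placeholders (the raw measurements are not given in the text); only the group size, n = 6 per group, matches the study.

```python
# One-way ANOVA plus Scheffe post hoc test, as in the Fig. 1A analysis.
# The IOP values are hypothetical placeholders; n = 6 matches the study.
import numpy as np
from scipy import stats

cochlin       = np.array([18.1, 19.3, 17.8, 18.9, 19.6, 18.4])  # mmHg
cochlin_shrna = np.array([12.2, 11.8, 12.9, 12.5, 11.6, 12.1])
shrna_only    = np.array([12.4, 12.0, 11.7, 12.8, 12.3, 11.9])
groups = [cochlin, cochlin_shrna, shrna_only]

F, p = stats.f_oneway(*groups)
print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")

# Scheffe criterion: |mean_i - mean_j| / SE > sqrt((k - 1) * F_crit),
# with SE built from the pooled within-group mean square.
k = len(groups)
N = sum(len(g) for g in groups)
ms_within = sum(((g - g.mean())**2).sum() for g in groups) / (N - k)
F_crit = stats.f.ppf(0.95, k - 1, N - k)
for i in range(k):
    for j in range(i + 1, k):
        diff = abs(groups[i].mean() - groups[j].mean())
        se = np.sqrt(ms_within * (1 / len(groups[i]) + 1 / len(groups[j])))
        sig = diff / se > np.sqrt((k - 1) * F_crit)
        print(f"group {i} vs group {j}: |diff| = {diff:.2f} mmHg, significant = {sig}")
```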
Immunohistochemistry of human TM sections shows the co-localization of both TREK-1 and cochlin specifically in the Schlemm's canal and TM region (Fig. 2E). The close proximity of these components is key to enabling a potential interaction with one another in the TM tissue milieu. Immunocytochemistry on human normal TM cells in the presence of exogenous cochlin demonstrates a difference in cell shape as well as actin expression in the cytoskeleton (Fig. 3).
In the untreated cells (Fig. 3A) and in the presence of monomeric cochlin (Fig. 3B), the cytoskeletal architecture remains visibly organized, whereas in the presence of multimeric cochlin, actin expression increases along with clustering of actin fibers (Fig. 3C). The clustering, if at all present in the untreated cells, is very limited in intensity compared to that visualized with the multimeric cochlin treatment. A relative quantification of cochlin and actin in treated as well as untreated cells is presented in Fig. 3D,F using fluorescence/area. Furthermore, a difference in the shape of the cells is present between all three cohorts (untreated, mono- and multi-meric cochlin treated). The untreated cells and the monomeric-treated cells (Fig. 3A,B) exhibited a uniformly flattened, circular shape when compared to multimeric-treated cells (Fig. 3C), which assume a spindle-like conformation. Quantification further validated cell shape changes due to addition of different forms of cochlin (Fig. 3G,I,J). These observations are consistent with and corroborate our previous studies 7, 18, which demonstrated similar changes in cell shape and interaction of multimerized cochlin with TREK-1 channels 7.
To compare the changes in the actin cytoskeleton elicited by exogenous cochlin, further experiments were conducted to investigate the effects of TREK-1 channel activation. Previous observations have shown that cellular lipids such as arachidonic acid (AA) 21-23 activate TREK-1 channels. We utilized AA in order to activate TREK-1 channels and observe changes in the actin cytoskeleton of TM cells (Fig. 4). TM cells were treated with 20 µM of AA and the actin cytoskeleton of the cells was analyzed using immunocytochemistry. In the untreated cells (Fig. 4A), TREK-1 expression remains at a normal state along with the actin. The AA-treated cells (Fig. 4B) exhibit a robust increase in TREK-1 expression as well as actin. The increase in expression is further supported by the quantification in Fig. 4C. These observations demonstrate the effects of TREK-1 activation on the actin cytoskeleton of TM cells. Inhibition of phospholipase A2, an enzyme that liberates AA from membrane phospholipids, has been shown to cause a decrease in the actin cytoskeleton of porcine TM cells 21, corroborating the results presented here. The observed change in cellular architecture is different from that found in the presence of exogenous multimeric cochlin (Fig. 3). Taken together with previous studies 7, 18, these results suggest that multimerized cochlin may affect the overall cellular architecture through its interaction with TREK-1. These results allude to the potential cytoskeletal changes taking place in the presence of multimeric cochlin that has resulted as a consequence of shear stress.
Multimeric cochlin modulates TREK-1 channel current. To assess if cochlin interaction with TREK-1 also modulates the channel current, human TREK-1 was transiently expressed in HEK293 cells, and channel activity was monitored using whole-cell patch clamping. Cells expressing TREK-1 were voltage-clamped at −60 mV and voltage ramps were used to record the channel current. Multimerized cochlin, monomeric cochlin, or vehicle were applied to the bath (Fig. 5). TREK-1 basal channel activity was strongly reduced by multimerized cochlin at physiological concentrations (10 nM). Equivalent effects on the current were seen when cochlin multimerization was previously induced by addition of Ca²⁺ (−33.4 ± 4.1%; n = 8; Fig. 5A,B and D) or after a shear stress protocol aided by a syringe (−30.9 ± 5.1%; n = 8; Fig. 5D). The addition of vehicle did not produce significant effects (−1.6 ± 1.3%; n = 11). As previously reported [13-16], fluid shear stress, induced by increasing the bath perfusion rate, produced an increase in TREK-1 activity (Fig. 5B). Despite the variability observed among cells in the stimulating effect of shear stress, no differences were seen between the shear stress effect on vehicle and cochlin groups (Fig. 5D). On the contrary, when monomeric cochlin was assayed, a very small but significant increase in TREK-1 activity was observed compared with the vehicle (+3.8 ± 1.4%; n = 9; Fig. 5C and D).
It has been previously reported that external protons, heat, and pressure-evoked TREK-1 gating inputs act via a common gate that has the characteristics of a C-type gate 24. For this, the extracellular region of the transmembrane segment M4 is a key element of the TREK-1 gating apparatus and, regardless of whether the signal activates or inhibits the channel, the mechanism is conserved in different K₂P channels. A mutation in the M4 segment (W275S) produces a gain-of-function TREK-1 channel with a reduced sensitivity to extracellular protons or temperature, an effect that can be mimicked by a high extracellular potassium concentration 24. To test whether the inhibitory effect of multimeric cochlin on TREK-1 was mediated via the C-type gate, recordings in high-K⁺ extracellular solution were performed. In contrast to the inhibitory effect of multimeric cochlin, no significant effect was observed when recording in high-K⁺ solution (+2.5 ± 3.2%; n = 8; Fig. 5D), suggesting that cochlin might be inhibiting the channel function through a C-type gate mechanism. In agreement with these results, a diminished sensitivity to shear stress was also observed in high-K⁺ solution (+10.4 ± 3.9%; n = 8; Fig. 5D). Our electrophysiological experiments suggest a direct interaction of cochlin with the TREK-1 channel, but whether inhibitory/excitatory effects on channel current are directly related to cell shape remodeling is still not known. In fact, TREK-1 effects on cell shape have been reported to be independent of its ion transport capability 11. To assess whether the effects of cochlin on TREK-1 current were similar to those found in HEK293 cells, we used a trabecular meshwork cell line derived from a normotensive patient 25, transiently transfected with TREK-1. TREK-1 current was recorded with a voltage ramp from −100 to +50 mV and then challenged with multimeric cochlin as previously described. Multimeric cochlin induced a statistically significant decrease in TREK-1 current (36.8 ± 11.2%; n = 5; p < 0.05 vs. baseline; Fig. 6A,B). The effect of cochlin was opposite to the well-known effects of shear stress stimulation by fluid flow or arachidonic acid (20 µM), which significantly increased TREK-1 current by 72.1 ± 21.9% and 148.7 ± 50.1%, respectively (n = 5 each; Fig. 6B).
Discussion
The mechanosensing performed by cochlin in the solution phase, in concert with mechanotransduction by TREK-1 on the cell surface, is a novel finding. We demonstrate here that their interaction leads to the previously shown cytoskeletal remodeling in the TM. Impairment of aqueous humor outflow is linked to spatial changes in the ECM and potential remodeling of the cytoskeleton 26,27. In other systems, TREK-1 is recognized as working independently to perform its mechanotransducing functions, yet in the trabecular meshwork, cochlin acts as a mechanosensing molecule assisting TREK-1 mechanotransduction. The presence of cochlin helps facilitate TREK-1's mechanotransducing properties in the low fluid flow regime of the eye, compared to the high fluid flow regime of the kidneys. Interestingly, studies performed in alveolar epithelial cells have demonstrated that the location or regime of TREK-1 expression causes differences in function. In this cell type, TREK-1 deficiency correlates with decreasing actin stress fibers and TREK-1 overexpression correlates with increasing stress fibers 28. Our results allude to a difference in effect on TM cells in the presence of multimeric cochlin, possibly due to cell type as well as location. The importance of cochlin in IOP regulation has previously been demonstrated through silencing of cochlin using shRNA, which resulted in decreased IOP 7,18. TREK-1 downregulation may significantly reduce the sensitivity of cells to detect the mechanical stimulus necessary for collagen expansion (Fig. 2D) and is consistent with the observed decreased fluorescein transport (Fig. 2C). It is noted that the fluorescein dye used in these experiments is transported via transcellular transport in some cell types 29. It is important to point out that the fluorescein experiments were performed using only monomeric cochlin, which explains the potential lack of consistency with the physiological experiments as described below. The shRNA used for these experiments penetrates all three cell layers, as seen from analysis of each cell layer separately. These findings also further support the necessary presence of both TREK-1 and cochlin in order to elicit a change in the cellular architecture, as previously shown for these two molecules separately 7, 18. The TREK-1 and cochlin interaction is not sufficiently supported by the collagen gel assays and fluorescein transport assays (Fig. 2C and D) alone, but they lay the foundation for the remainder of our studies. Taken together with our functional experiments discussed below, the conclusion that these two molecules are crucial in aqueous humor outflow regulation is supported. It is important to highlight the fact that, in the eye, due to the particular location of TM tissue and its architecture, remodeling will modulate aqueous humor outflow and may affect IOP homeostasis 18. The change in cell shape as well as the actin cytoskeleton staining in the presence of multimeric cochlin is consistent with our observations previously described in TM cells after cochlin exposure 7,18. It is evident that multimeric cochlin elicits a response from cells that causes them to become more elongated and spindle-like in conformation (Fig. 3). Also, the presence of TREK-1 in the vicinity of cochlin, together with the demonstration of their interaction in vitro and in vivo, suggests the functional influence these elements have upon each other.
Fluorescence resonance energy transfer (FRET) was attempted for cochlin and TREK-1 but did not produce sufficient data, owing to technical difficulties arising from the various domains present in TREK-1. Both molecules proved to be too large to exhibit any consistent resonance data.
Previously, we postulated a model in which multimerized cochlin binding to TREK-1 changes cell shape and motility. These experiments provide further evidence to support this model. TREK-1-mediated cytoskeletal rearrangement appears to be independent of TREK-1 channel activity 11. TREK-1 activation may allow the cell to "relax" and increase outflow in the normal state, similar to the effect produced by the high-conductance calcium-dependent K⁺ channel (BKCa) 30,31. In fact, monomeric cochlin produced a small but significant increase in TREK-1 current, thus potentially favoring cell relaxation (Fig. 5D). These data support the idea that monomeric cochlin interacts with TREK-1 in the physiological environment, causing a small increase in TREK-1 current that results in a positive effect on outflow and a reduction in IOP. In the diseased model, as supported by our data, multimerized cochlin decreases TREK-1 current, which may in turn decrease outflow as a result of the cellular structure changes induced by the interaction of both proteins. The negative effects of cochlin are elicited when it interacts with TREK-1 in its multimerized form, causing an inhibition of TREK-1 current and a rearrangement of cellular architecture that may contribute to impedance of outflow followed by an increase in IOP. It is important to note that in other regimes, such as the uterus during pregnancy, TREK-1 expression is seen to decline in order to promote a contractile state 32. These findings may share functional characteristics with TREK-1 in the TM in the presence of multimeric cochlin. Our data, for the first time, suggest the involvement of an interaction of cochlin and TREK-1 in glaucoma and render this interaction a target for therapy. Further studies will elucidate how TREK-1 or cochlin, separately or together, can be manipulated as potential therapeutic targets for flow-associated disease pathologies.
Methods
The study protocols were approved by the University of Miami IACUC. The methods were carried out in accordance with the approved guidelines.
Cochlin transgene and cochlin and TREK-1 shRNA lentivirus production. To overexpress cochlin in the TM of congenic DBA/2J-Gpnmb+/SjJ mice, a cochlin transgene bearing lentivirus was constructed in HEK293T cells (cat# 293T/17 (CRL-11268), ATCC, Manassas, VA). The cochlin expression clone (cat# EX-Q0226-Lv31, GeneCopoeia Inc., Rockville, MD) was packaged into a lentiviral vector using the Lenti-Pac FIV expression packaging kit and the protocol provided by the manufacturer. This protocol typically yielded 10⁷ infectious units/mL of the recombinant lentivirus. The cochlin gene (COCH) used to produce the cochlin expression vector was human (NM_004086).
To down-regulate TREK-1 expression in the TM of DBA/2J-Gpnmb+/SjJ mice, TREK-1 shRNA virus was made in HEK293T cells using the Trans-Lentiviral™ GIPZ Packaging System (cat# TLP4614, Open Biosystem, Huntsville, AL) and the protocol provided by the manufacturer. This typically yielded a viral stock of 10⁸ transduction units (TU)/mL. The shRNA used was a set of 5 clones. Transfection efficiency was determined to be 70%. This efficiency was validated by resolving equal amounts of protein by SDS-PAGE and detecting TREK-1 expression via western blot. The membrane was stripped and re-probed for GAPDH in order to confirm equal loading. Mice received an intracameral injection of the cochlin over-expression vector alone or of the TREK-1 down-regulation vector together with the cochlin over-expression vector; intraocular pressure was measured before injection. The mice were anaesthetized with an intraperitoneal injection (0.1 μL) of ketamine (100 mg/kg) and xylazine (9 mg/kg) prior to IOP measurement. The IOP was taken using a hand-held tonometer, TonoLab (Colonial Medical Supply, Franconia, NH), after the mouse of interest failed to respond to touch. The IOP was measured throughout the course of the study following this procedure.
Fluorescein dye transport and gel expansion assay. An Ussing-type chamber (cat# USS1L, World Precision Instruments Inc. (WPI), Sarasota, FL) was used to measure fluorescein dye flow across a polyvinylidene fluoride (PVDF) membrane (cat# 75696E, Pall Life Sciences, Pensacola, FL). TM cells were cultured on the PVDF membrane. Before plating the cells, a layer of collagen matrix (Rat Tail Collagen, cat# 354249, BD Biosciences, San Jose, CA) was formed on the membrane to facilitate cell adherence. The cells were allowed to form a confluent monolayer over a period of 16-24 h, after which another cell layer was added. This process was repeated to ultimately achieve a confluent tri-layer of cells. The cells were transfected with the DNA of interest (TREK-1 + cochlin, TREK-1 + RPE65, TREK-1 only, transfection agent only (Lipofectamine 2000, Invitrogen Inc., Carlsbad, CA), or non-transfected control). Twenty-four to thirty-six hours post-transfection, the membranes were placed between the hemi-chambers of the apparatus connected to a single-channel peristaltic pump (cat# 151922, Watson-Marlow, Wilmington, MA) with 1X PBS as the bathing medium. To measure the flow across the membrane, sodium fluorescein dye was used (1:100 dilution of 1 mg/mL). A set volume of the dye was introduced into one hemi-chamber. After 5 minutes, an equal volume was aspirated from the opposite side of the membrane through the opposing hemi-chamber opening. The fluorescein concentration was calculated using a spectrophotometer.
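As a worked example of that last step, the sketch below converts a spectrophotometer reading into a fluorescein concentration via a linear standard curve. The calibration points and the sample reading are hypothetical; the text does not specify the calibration procedure used.

```python
# Hypothetical standard-curve conversion of absorbance to fluorescein
# concentration; calibration values are illustrative, not from the study.
import numpy as np

std_conc = np.array([0.0, 2.5, 5.0, 10.0, 20.0])    # ug/mL standards
std_abs  = np.array([0.01, 0.13, 0.26, 0.52, 1.03]) # measured absorbance

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # fit A = slope*C + intercept

def concentration(absorbance):
    """Invert the standard curve: C = (A - intercept) / slope."""
    return (absorbance - intercept) / slope

sample_abs = 0.40  # reading from the receiving hemi-chamber
print(f"fluorescein ~ {concentration(sample_abs):.2f} ug/mL")
```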
The collagen gel assay was performed following a published procedure 33. A hydrogel solution was prepared in a serum-free environment with the following: 10X MEM (cat# 11430, Invitrogen), sodium bicarbonate (cat# S5761, Sigma-Aldrich), L-glutamine (cat# G6392, Sigma-Aldrich) and HEPES buffer (cat# 15630, Invitrogen). This solution was aliquoted into separate Eppendorf tubes for each transfection type. Transfection complexes were prepared in separate tubes containing a mixture of the transfection agent (Lipofectamine 2000, Invitrogen) and the desired DNA vectors at a DNA-to-transfection-agent ratio (w/v) of 0.4 μg/μL. Trabecular meshwork cells were obtained from a >90% confluent layer of a trypsin-treated 25 cm² cell culture flask (cat# 353109, Becton Dickinson, Franklin Lakes, NJ) centrifuged for 5 minutes at 800 rpm (77×g). The cell pellet was then re-suspended in serum-free DMEM 1X culture media (cat# 15-013-CV, CellGro, Corning), thoroughly mixed and equally aliquoted into each respective transfection-complex-containing vessel. The reaction was allowed to incubate for 45 minutes, after which it was terminated through addition of cell culture media (DMEM 1X + 10% FBS), followed by the addition of the previously prepared hydrogel solution and the rat tail collagen (BD Biosciences) to initiate gel polymerization. The suspension was gently mixed and aspirated into a 1 mL single-use needle syringe (Henke-Sass-Wolf). This suspension was injected into a borosilicate glass capillary tube with an inner diameter of 0.75 mm (cat# TW100-6, WPI) to approximately half of its volume. Following the hydrogel injection, a digital picture snapshot of all capillaries was taken against a blank background with a millimeter scale inside a custom-built, fixed-height transparent plexiglass chamber, and imaging was repeated every 24 h for 48 h. The hydrogel-containing capillaries were incubated inside a specially prepared moisture chamber to prevent dehydration as well as gel displacement, and were kept inside a cell culture incubator at 37 °C and 5% CO₂. Photographs obtained using a digital camera were analyzed using NIH ImageJ (v.1.43 u) software. Lengths were measured between the opposite ends of the gel and subjected to statistical analysis using Microsoft Excel 2007 (Microsoft Corp., Redmond, WA).
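The length measurements and statistics could equally be scripted rather than performed in ImageJ and Excel. The sketch below converts pixel positions to millimeters using the photographed scale and compares two groups with a pairwise t-test; the pixel scale and all length values are hypothetical placeholders.

```python
# Hypothetical gel-expansion analysis: pixel-to-mm conversion followed by a
# pairwise t-test between groups, mirroring the ImageJ/Excel workflow above.
import numpy as np
from scipy import stats

px_per_mm = 38.5  # from the millimeter scale in the snapshot (hypothetical)

def gel_length_mm(left_end_px, right_end_px):
    """Gel length from the pixel positions of its two ends."""
    return (right_end_px - left_end_px) / px_per_mm

# change in gel length after 48 h (mm), n = 10 per group, hypothetical
trek1_cochlin = np.array([0.62, 0.55, 0.70, 0.58, 0.66, 0.61, 0.59, 0.64, 0.57, 0.68])
cells_only    = np.array([0.08, 0.11, 0.05, 0.09, 0.12, 0.07, 0.10, 0.06, 0.09, 0.08])

t, p = stats.ttest_ind(trek1_cochlin, cells_only)
print(f"t = {t:.2f}, p = {p:.2e}")
```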
Reciprocal immunoprecipitation. Human TM was dissected from enucleated eyes obtained from the Bascom Palmer Eye Bank (BPEI). These eyes were taken from normal donors between 40-85 years of age. All human TM cells used were authenticated by detection of myocilin with repeated dexamethasone treatments before experimental use. TM was dissected from the enucleated eyes, finely minced, and protein extraction was performed using 50 mM Tris-HCl, pH 7.5, 125 mM NaCl, and 0.1% genapol (cat# 345794, EMD Biosciences, La Jolla, CA). To produce shear stress, cochlin was passed through a 30-gauge needle 10-20 times. 100 µg of protein extract was combined with 1-2 µg of cochlin (either native or shear-stressed) in an Eppendorf tube.
After incubating the above tubes at room temperature for 2 h, 50 µL of TREK-1 antibody coupled to magnetic beads (cat# 88802, ThermoScientific Pierce Protein A/G magnetic beads) was added and again incubated for 2 h at room temperature. Magnetic beads were removed using a magnet. The resultant precipitate was re-suspended in 200 µL of 1X PBS and magnetic beads were collected. The supernatant was discarded and was followed by the addition of 50 µL of 100 mM glycine (pH 3.0) to the magnetic beads. The supernatant was collected and neutralized with 1.5 M Tris-HCl buffer, pH 8.8 (cat# 161-0798, Bio-Rad Laboratories). Proteins were subjected to Western blot analysis and probed for cochlin (usually hCochlin#3, Aves Labs Inc.).
The tubes were incubated with magnetic beads (cat# 88802, ThermoScientific Pierce Protein A/G magnetic beads) coupled to chicken anti-cochlin antibody (hCochlin#3, Aves Labs Inc.). The magnetic beads were collected and washed in 200 µL of 1X PBS. The supernatant was discarded and 50 µL of 100 mM glycine (pH 3.0) was added to the magnetic beads. The supernatant was collected and neutralized with 1.5 M Tris-HCl buffer, pH 8.8 (cat# 161-0798, Bio-Rad Laboratories). The proteins were subjected to Western blot analysis and probed with antibodies against TREK-1 (cat# ab83932, TREK-1, Abcam). Reciprocal immunoprecipitation for cochlin (hCochlin#3, Aves Labs Inc.) was carried out following a similar protocol as described above.
Immunocytochemistry on cochlin-treated TM cells. Human normal TM cells (NTM) were cultured in 12-well plates (cat# 3513, Costar, Corning Incorporated) on circular microscope cover slides (cat# 48380-068, VWR International) with serum-free cell culture media (1X DMEM; cat# 15-013-CV, CellGro, Corning). Cells were incubated at 37 °C and 5% CO₂ for 24 h. At the 24 h mark, monomeric cochlin or multimeric cochlin was added to specified wells at 10 µg per well. Multimeric cochlin was produced by passing the monomeric cochlin through a 30-gauge syringe 15-20 times before adding it to the well. Following incubation, media was removed and cells were washed 3X with 1X PBS (cat# 21-040-CV, Mediatech Inc., Manassas, VA), then fixed with 1% paraformaldehyde (cat# 15710, Electron Microscopy Sciences, Hatfield, PA) for 15 minutes. After fixation, cells were washed 3X with 1X PBS before blocking. Cells were blocked with 1X PBS + 0.2% bovine serum albumin (BSA) (cat# 2910, Fraction V, EMD Chemicals, Gibbstown, NJ) for 30 minutes. The primary antibody against cochlin was added at a 1:200 dilution (hCochlin#3, Aves Labs Inc.). After incubating overnight at 4 °C, primary antibody was washed out with 1X PBS + 0.2% BSA three times for 10 minutes per wash. The corresponding secondary antibody was added at a 1:1000 dilution (Donkey anti-chicken FITC, cat# ab63507, Abcam) and incubated for 1 h at room temperature. Following secondary antibody incubation, slides were washed three times for 10 minutes per wash with 1X PBS + 0.2% BSA. Slides were then incubated with 100 nM rhodamine phalloidin (cat# PHDR1, Cytoskeleton) for 30 minutes to stain the actin cytoskeleton. Slides were then washed three times for 10 minutes per wash with 1X PBS. The cover slides were then mounted on glass microscope slides (cat# 48300-0205, VWR International, West Chester, PA) and stained with DAPI Vectashield (cat# H-1200, Vector Laboratories). Prepared slides were imaged using a Leica DM 6000 B confocal microscope (Leica, Inc.). Intensity of fluorescence was measured in relative arbitrary units under the same settings and conditions for each sample using ImageJ software. Cellular conformation was confirmed via individual counting performed by five different individuals blinded to cell treatment. Averages were taken from these counts and presented as an arbitrary measurement of cells/area.
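The fluorescence-per-area quantification can also be scripted. The sketch below is a minimal Python version of that measurement, assuming a single-channel TIFF export of the cochlin channel; the file name is a placeholder, and Otsu thresholding is one reasonable segmentation choice rather than the exact ImageJ workflow used here.

```python
# Minimal fluorescence-per-area measurement: Otsu threshold to segment
# stained pixels, then total intensity normalised by segmented area.
# The file name is a placeholder.
import numpy as np
from skimage import io, filters

img = io.imread("ntm_cochlin_channel.tif").astype(float)  # single channel
mask = img > filters.threshold_otsu(img)                  # stained region
fluor_per_area = img[mask].sum() / mask.sum()             # arbitrary units/pixel
print(f"relative fluorescence/area = {fluor_per_area:.1f}")
```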
Immunocytochemistry on arachidonic acid (AA)-treated TM cells. Human normal TM cells (NTM) were cultured in 12-well plates (cat# 3513, Costar, Corning Incorporated) on circular microscope cover slides (cat# 48380-068, VWR International) with serum-free cell culture media (1X DMEM; cat# 15-013-CV, CellGro, Corning). Cells were incubated at 37 °C and 5% CO₂ for 24 h. At the 24 h mark, specific wells of cells were treated with 20 µM AA diluted in serum-free DMEM media. Along with the AA, 5 µM of indomethacin, a cyclooxygenase inhibitor, was added in order to inhibit AA metabolism for 30 minutes. Following treatment, cells were washed 3X with 1X PBS (cat# 21-040-CV, Mediatech Inc., Manassas, VA), then fixed with 1% paraformaldehyde (cat# 15710, Electron Microscopy Sciences, Hatfield, PA) for 15 minutes. After fixation, cells were washed 3X with 1X PBS before blocking. Cells were blocked with 1X PBS + 0.2% bovine serum albumin (BSA) (cat# 2910, Fraction V, EMD Chemicals, Gibbstown, NJ) for 30 minutes. The primary antibody against TREK-1 was added at a 1:200 dilution (cat#: sc-398449, Santa Cruz Biotechnology, Inc.). After incubating overnight at 4 °C, primary antibody was washed out with 1X PBS + 0.2% BSA three times for 10 minutes per wash. The corresponding secondary antibody was added at a 1:1000 dilution (Goat anti-mouse Alexa Fluor 488, cat#: 948490, Invitrogen Molecular Probes) and incubated for 1 h at room temperature. Following secondary antibody incubation, slides were washed three times for 10 minutes per wash with 1X PBS + 0.2% BSA. Slides were then incubated with 100 nM rhodamine phalloidin (cat# PHDR1, Cytoskeleton) for 30 minutes to stain the actin cytoskeleton. Slides were then washed three times for 10 minutes per wash with 1X PBS. The cover slides were | 2017-04-27T08:35:35.871Z | 2017-03-28T00:00:00.000 | {
"year": 2017,
"sha1": "465587d7ea69786c882ed37b61d9151a8f97f8be",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-00430-2.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "38609b91d7013fb31907b25ac6b55b264aa1b647",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
220380901 | pes2o/s2orc | v3-fos-license | Enhancing dipolar interactions between molecules using state-dependent optical tweezer traps
We show how state-dependent optical potentials can be used to trap a pair of molecules in different internal states at a separation much smaller than the wavelength of the trapping light. This close spacing greatly enhances the dipole-dipole interaction and we show how it can be used to implement two-qubit gates between molecules that are 100 times faster than existing protocols and 1000 times faster than rotational coherence times already demonstrated. We analyze complications due to hyperfine structure, tensor light shifts, photon scattering and collisional loss, and conclude that none is a barrier to implementing the scheme.
Molecules confined in arrays of optical tweezer traps are particularly attractive and have recently been realized [16-19]. The platform is scalable to several hundred sites, enables re-arrangement of the traps [20,21] to reduce entropy or control which particles interact, and provides natural single-site addressability. Various authors have proposed protocols for two-qubit gates using rotational states of molecules [4, 22-27]. The number of possible gate operations is set by the ratio E_dd τ_c / h, where E_dd is the dipole-dipole interaction energy and τ_c is the coherence time of a trapped molecule in a superposition of rotational states. For conventional tweezer traps, E_dd is limited by the minimum trap separation. This is roughly the wavelength of the trapping light, typically ∼ 1 µm, giving E_dd/h ∼ 1 kHz. Recent work has extended τ_c to several milliseconds [28,29] but, at these interaction strengths, only a few high-fidelity gates are possible. While prospects are good for further improvements (coherence times near 1 s have been demonstrated in hyperfine states of molecules [30] and in electronic states of atoms in tweezers [31]), considerable advances are required to realize the full potential of this platform for quantum science.
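This figure of merit is easy to evaluate. The sketch below estimates the number of coherent gate operations E_dd τ_c / h for the 1 kHz interaction quoted above and, for comparison, the 160 kHz interaction obtained later in this work; the 5 ms coherence time is a representative stand-in for the 'several milliseconds' cited here.

```python
# Back-of-envelope gate count ~ E_dd * tau_c / h for the quoted parameters.
tau_c = 5e-3  # s, representative of the demonstrated rotational coherence time

for label, E_dd_over_h in [("conventional ~1 um spacing", 1e3),
                           ("state-dependent traps (this work)", 160e3)]:
    print(f"{label}: ~{E_dd_over_h * tau_c:.0f} operations")
```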
Here we show how to increase the dipole-dipole interaction between two molecules by trapping them at reduced separations using the state-dependence of the molecule-light interaction. Our scheme has similarities to state-dependent optical lattices which have been used to control atoms on sub-wavelength scales [33-41], but benefits from the advantages of the tweezer platform noted above. By using rotational states of molecules, we avoid the short excited-state lifetimes which limit schemes involving the dipole-dipole interaction between atoms [33]. Our method, shown in Fig. 1(a), uses two optical tweezers of different wavelengths, focused at the same position. The tight focussing of the light produces elliptical polarization components with opposite handedness on each side of the focus [42,43]. The interaction of this field with the vector polarizability of the molecule results in a state-dependent ac Stark shift. Two molecules in different internal states are trapped at different positions in the trap and their separation can be controlled by varying the relative intensities of the two tweezers. This state-dependent potential allows E_dd to be enhanced by two orders of magnitude. We introduce these concepts and show how to apply them in practice to implement fast two-qubit gates.
[Figure 1 caption, panel (b): Contour plot of (I/I_max) C·ẑ in the y = 0 focal plane for a single tweezer, calculated using the vector Debye integral [32] for a lens with NA = 0.55. The input beam is polarized along x and has 1/e² diameter equal to that of the lens.]
Method.-The scheme can be illustrated using a simple ²Σ molecule with no hyperfine structure. We focus on the four states with total angular momentum J = 1/2, where the pair of states with rotational angular momentum N = 0 is separated from the pair with N = 1 by the rotational energy E_rot; we label these states |0⁻⟩, |0⁺⟩, |1⁻⟩, |1⁺⟩, with the number giving N and the superscript the sign of m_J. Consider the interaction of such a molecule with a light field of intensity I and polarization ε. The interaction has scalar, vector and tensor parts whose dependence on the frequency of the light can be factored out into three constants α^(0), α^(1) and α^(2); the scalar, vector and tensor polarizabilities. The scalar interaction shifts all four of our states by W₀ = −α^(0) I/(2ε₀c). The vector and tensor parts cause state-dependent shifts. The vector shift is non-zero when the field has ellipticity, described by C = Im(ε × ε*). |C| gives the degree of ellipticity and its direction gives the handedness. For incident light propagating along y and linearly polarized along x, this handedness is along z and is opposite on either side of the focus [see Fig. 1(b)]. In this case the vector shift is W₁ = α^(1) g_J m_J (C·ẑ) I/(2ε₀c), where g_J = 1/[2J(J + 1)] and we have assumed W₁ is small compared to the spin-rotation interaction. W₁ is identical for |0⁻⟩/|1⁻⟩ and opposite to that of |0⁺⟩/|1⁺⟩. The tensor shift is zero for our J = 1/2 states; we return to it later.
The polarizabilities α^(k) depend on the details of the electronic structure [44]. Here, for simplicity, we assume that they are dominated by the interaction with the first excited electronic state. The relevant electronic structure is shown in Fig. 2(a). The spin-orbit interaction splits the excited state into two components separated by δ_fs. Their mid-point is ħω_AX above the ground state and we define Δ as the detuning of the light field from this point. As outlined in the Supplemental Material (SM) [45], for |Δ| ≪ ω_AX, the polarizabilities can be written

α^(0) = −(d_AX²/ħ) Δ/(Δ² − δ_fs²/4),   (1)
α^(1) = (d_AX²/2ħ) δ_fs/(Δ² − δ_fs²/4),   (2)

where d_AX is the dipole matrix element connecting the X²Σ and A²Π states. We use tweezer traps at two different wavelengths, λ_sc and λ_vec, shown schematically in Fig. 2(a), which we call the scalar and vector traps. Their on-axis intensities are I_sc and I_vec. The scalar trap light is red-detuned with Δ = Δ_sc, |Δ_sc| ≫ δ_fs. In this regime, |α^(0)| ≫ |α^(1)| and the interaction is dominated by the scalar component. The vector trap light is tuned between the fine structure components. Figure 2(b) shows α^(0) and α^(1) in this region. When Δ = Δ_vec = 0, α^(0) = 0 while α^(1) = −2d_AX²/(ħδ_fs), which can be large. Fig. 2(d) shows how δx depends on the intensity ratio, while the green line shows the enhancement of E_dd for two point particles positioned at the trap minima, relative to those in separated scalar traps. The enhancement is ultimately limited by undesirable collisions that occur when the spacing is too small. As we will see, for realistic parameters, an enhancement of 2-3 orders of magnitude is feasible.
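To make the trap-positioning idea concrete, the sketch below numerically locates the potential minima of the two m_J components as the vector-to-scalar intensity ratio is varied. The Gaussian scalar profile, the odd z·exp(−2z²/w²) model for the ellipticity-weighted vector profile, the 0.6 µm spot scale and the sign convention are all modelling assumptions; the paper's own calculation uses the full vector Debye integral.

```python
# State-dependent potential along z: Gaussian scalar trap plus an odd
# vector light shift of opposite sign for the two m_J states. The spatial
# profiles and spot size are modelling assumptions.
import numpy as np

w = 0.6e-6                                  # focal spot scale (m), assumed
z = np.linspace(-w, w, 4001)
g = np.exp(-2 * z**2 / w**2)                # scalar trap intensity profile
e = (z / w) * np.exp(-2 * z**2 / w**2)      # ellipticity profile, odd in z

def separation(ratio):
    """Distance between the minima of the two m_J potentials for a given
    vector-to-scalar depth ratio."""
    u_plus = -g - ratio * e                 # sign convention assumed
    u_minus = -g + ratio * e
    return abs(z[np.argmin(u_plus)] - z[np.argmin(u_minus)])

for ratio in (0.1, 0.3, 1.0, 3.0):
    print(f"I_vec/I_sc ~ {ratio}: delta_x = {1e9 * separation(ratio):.0f} nm")
```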
Eigenstates.-The dipole-dipole interaction Hamiltonian is

H_dd = [d_A·d_B − 3(d_A·x̂)(d_B·x̂)] / (4πε₀ |x_B − x_A|³),   (3)

where d_A and d_B are the dipole moments of the two molecules, x_A and x_B their positions, and x̂ is a unit vector along x. As we show in the SM, after restricting ourselves to states with one molecule trapped on either side of the focus, the eigenstates of the two-molecule Hamiltonian, including H_dd, are

|0⁻,0⁺⟩, |1⁻,1⁺⟩, |Ψ±⟩ = (|1⁻,0⁺⟩ ± |0⁻,1⁺⟩)/√2,   (4)

with energies 0, 2E_rot and E_rot ± E_dd respectively. Here E_dd = Λ₁₀/(4πε₀ δx³), where Λ₁₀ is set by the transition dipole moment between the N = 0 and N = 1 states and δx is the separation of the two molecules. Since |E_dd| can approach or even exceed the motional energy spacing in the trap, ħω_t, it is important to consider the motional degree of freedom of the two molecules. A 1D treatment is sufficient to elucidate the main points. When the upper and lower states in each pair have the same vector shift, so that the potential is the same for both, the eigenstates are (see SM) |ψ⟩|n_cm⟩|φ(x_rel)⟩.
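A quick order-of-magnitude check of E_dd is shown below. The effective transition dipole d₀/√3 with d₀ = 3.07 D (the body-frame dipole moment of CaF) and the neglect of angular factors are assumptions made for this estimate; the full calculation described later gives E_dd/h = 160 kHz at 124 nm, the same order of magnitude.

```python
# Order-of-magnitude estimate of E_dd = d_eff^2 / (4*pi*eps0*dx^3).
# d_eff ~ d0/sqrt(3) with d0 = 3.07 D for CaF is an assumption; angular
# factors and wavefunction effects are neglected.
import numpy as np

eps0, h_planck = 8.854e-12, 6.626e-34
debye = 3.336e-30                       # C m
d_eff = 3.07 * debye / np.sqrt(3)       # N = 0 <-> N = 1 transition dipole

for dx in (124e-9, 1e-6):
    E_dd = d_eff**2 / (4 * np.pi * eps0 * dx**3)
    print(f"dx = {1e9 * dx:6.0f} nm: E_dd/h ~ {E_dd / h_planck / 1e3:7.2f} kHz")
```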
Here |ψ⟩ is one of the internal eigenstates of Eq. (4) and |n_cm⟩ is a harmonic oscillator eigenstate for the center of mass coordinate x_A + x_B. The relative motional state |φ(x_rel)⟩ is an eigenstate of the state-dependent dimensionless Hamiltonian

H̃ = ½ p̃_rel² + ½ (x̃_rel − δ̃x)² + q r³/x̃_rel³.   (5)

Here x̃_rel = √(Mω_t/2ħ) x_rel is the reduced relative motional coordinate, p̃_rel the conjugate momentum, δ̃x = √(Mω_t/2ħ) δx, r = √(Mω_t/2ħ) (Λ₁₀/(4πε₀ħω_t))^(1/3) is the separation, in reduced units, at which E_dd = ħω_t, and M is the mass of the molecule. The factor q reflects the state-dependence of the dipole-dipole interaction, and is equal to {0, −1, 1, 0} for |ψ⟩ = {|00⟩, |Ψ⁻⟩, |Ψ⁺⟩, |11⟩} respectively. The relative motional states for qr³ > 0 are examined in the SM and show two important effects. First, the finite extent of the wavefunction means that ⟨1/x_rel³⟩ > 1/⟨x_rel⟩³. Second, the molecules are pushed apart by their interaction so their mean separation is larger than δx. In the motional ground state, the first effect dominates at larger δx, increasing E_dd, while the second effect dominates at small δx, reducing E_dd below the value for fixed point dipoles.
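The relative-motion problem in Eq. (5) is straightforward to solve numerically. The sketch below diagonalizes a finite-difference version of the Hamiltonian on a grid restricted to x̃ > 1, which excludes the short-range region where the 1/x̃³ term diverges (physically, molecules that approach that closely collide and are lost). The values of δ̃x and r are illustrative, not those of the CaF example.

```python
# Finite-difference diagonalization of Eq. (5) for the relative motion.
# Grid is restricted to x > 1 (reduced units) to regularise the 1/x^3 term;
# delta_x and r are illustrative values.
import numpy as np

n = 1400
x = np.linspace(1.0, 8.0, n)
dx = x[1] - x[0]

def ground_energy(delta_x, q, r):
    V = 0.5 * (x - delta_x)**2 + q * r**3 / x**3
    main = 1.0 / dx**2 + V                   # -(1/2) d2/dx2, 3-point stencil
    off = np.full(n - 1, -0.5 / dx**2)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

delta_x, r = 4.0, 1.5
E0 = ground_energy(delta_x, q=0, r=r)        # non-interacting reference
print(f"shift of |Psi->: {ground_energy(delta_x, -1, r) - E0:+.4f} hbar*omega_t")
print(f"shift of |Psi+>: {ground_energy(delta_x, +1, r) - E0:+.4f} hbar*omega_t")
```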
Complications in real molecules.-The addition of nuclear spin introduces a hyperfine interaction which can mix states of different J. For these mixed states, the vector Stark shift depends on the relative size of the hyperfine and spin-rotation interactions, which differs from one rotational state to the next. Consequently, the position of the potential minimum for |0±⟩ is shifted relative to |1±⟩. As shown in the SM, for a shift ξ in reduced units, the resulting imperfect overlap of the spatial wavefunctions reduces the dipole-dipole energy by e^(−ξ²), the square of the overlap integral. As we will see, this reduction is typically small.
A second complication is that states outside N = 0 can have a tensor Stark shift due to the light at λ_sc. The most relevant effect of this is to couple states with Δm ≤ 2 near the center of the trap, allowing tunneling between the left and right potentials. This coupling is eliminated when the incident polarizations of the scalar and vector traps are orthogonal. At other angles the tunneling is proportional to the wavefunction overlap, so it becomes negligible when the molecules are well separated.
Realistic example.-To illustrate the power and practicality of our method, we show how to implement a simple two-qubit gate using CaF molecules. CaF has been confined in optical tweezer traps [18] and has a structure similar to the model molecule, but with a fluorine nuclear spin of 1/2. We map the states of the model molecule onto corresponding hyperfine levels of CaF. The states |0±⟩ and |2±⟩ form our computational basis, while |1±⟩ are used to implement the gate. Many other choices of states are possible and may have different advantages. With our choice, the |0±⟩ ↔ |1±⟩ and |1±⟩ ↔ |2±⟩ transitions, used for all single-qubit and two-qubit operations, are insensitive to magnetic fields [29], making the scheme highly robust to field fluctuations. Figure 3(a) shows the potentials for the three pairs of states for the parameters given in the caption. The closely spaced traps can be loaded adiabatically, and without collisional loss, from two separated tweezers using simple intensity ramps. We suppose the molecules have been cooled to the motional ground state [44], resulting in the wavefunctions shown for each potential. The trap frequency is within 10% of 120 kHz for all states, and the corresponding rms wavepacket size is 26 nm. Figure 3(b) shows the two-molecule states relevant for the gate. The matrix elements of H_dd are zero between states of our computational basis |0±⟩ and |2±⟩. The states |2⁻,1⁺⟩ and |1⁻,2⁺⟩ are mixed by H_dd, giving the pair of entangled states |Ψ±₂₁⟩ = (|2⁻,1⁺⟩ ± |1⁻,2⁺⟩)/√2, split by 2E_dd. A microwave pulse resonant with the |2⁻,0⁺⟩ ↔ |Ψ⁻₂₁⟩ transition, and of sufficient duration to resolve it from |2⁻,0⁺⟩ ↔ |Ψ⁺₂₁⟩, entangles the two molecules. Note that this transition can be distinguished from |0⁻,2⁺⟩ ↔ |Ψ⁻₂₁⟩ by choice of polarization, and that the near-degenerate |0⁻,0⁺⟩ ↔ |Ψ⁻₁₀⟩ transition is forbidden by symmetry. A 2π pulse, which returns the population to |2⁻,0⁺⟩ while imprinting a π phase on it, implements the two-qubit gate. This gate is universal in combination with single-qubit operations, which can be carried out rapidly using two-photon microwave pulses [13,29,46,47]. In an array of such qubits, single-qubit addressability is obtained through a combination of microwave polarization and tweezer intensity. The polarization determines which molecule in a pair is addressed, and a small change in intensity of the selected tweezer relative to all others ensures that only the molecule in that tweezer is addressed.
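A toy model of the entangling pulse is sketched below. In the rotating frame, |2⁻,0⁺⟩ couples with strength Ω/(2√2) to each of |Ψ⁻₂₁⟩ and |Ψ⁺₂₁⟩, with the latter detuned by 2E_dd; a 2π pulse on the resonant transition returns the population with a π phase. The Rabi frequency chosen here is an illustrative value, not an optimised pulse.

```python
# Three-level rotating-frame model of the 2-pi entangling pulse. Basis:
# |2-,0+>, |Psi->, |Psi+>. Omega is an illustrative choice << 2*E_dd/hbar.
import numpy as np
from scipy.linalg import expm

E_dd = 2 * np.pi * 160e3            # rad/s, from the text
Omega = 2 * np.pi * 40e3            # bare microwave Rabi frequency (assumed)

g = Omega / (2 * np.sqrt(2))        # coupling to each dressed state
H = np.array([[0, g, g],
              [g, 0, 0],
              [g, 0, 2 * E_dd]], dtype=complex)

T = 2 * np.pi / (Omega / np.sqrt(2))    # 2-pi pulse on the resonant transition
U = expm(-1j * H * T)
amp = U[0, 0]                           # return amplitude of |2-,0+>
print(f"pulse time = {1e6 * T:.0f} us, return prob = {abs(amp)**2:.3f}, "
      f"phase = {np.angle(amp) / np.pi:+.2f} pi")
```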
The blue lines in Fig. 3(c) show the energy shift of |Ψ−₂₁⟩ as a function of the separation of the potential minima for the |2±⟩ states. The dashed and solid lines show results for fixed point dipoles and for the full 1D calculation respectively (see SM for details). For a separation of 124 nm, as shown in Fig. 3(a), the combined effect of the dipole-dipole interaction pushing the molecules apart and the imperfect overlap of the motional wavefunctions reduces E_dd by ∼40%. Also shown are the expected dominant loss mechanisms in the trap. We calculate the collisional loss rate R_col using the coefficient measured in Ref. [47] for CaF in a 780 nm tweezer trap. It decreases with increasing separation and is largest when both molecules are in |2±⟩, where their overlap is largest. The photon scattering rate R_ph is dominated by scattering from the vector trapping light. We have assumed a fixed I_sc, so R_ph increases with separation, since larger separations require larger I_vec. Over the range shown, the ratio of E_dd/h to the sum of the loss rates is large. Choosing the separation of the |2±⟩ states to be 124 nm, R_col ≈ R_ph ≈ 200 s⁻¹ while E_dd/h = 160 kHz, almost 1000 times larger. This is also 100 times larger than the maximum interaction energy achievable with separate tweezers. For a fixed vector Stark shift, R_ph scales inversely with the fine-structure interval, so will be smaller for heavier molecules. For example, it is reduced by factors of ∼4, 6 and 19 in SrF, YO and YbF respectively. R_col may be very different in other systems, or for the same system at different wavelengths [48,49]; this is an important topic for investigation.
To scale our scheme to many molecules, traps can be rearranged to implement gates between different pairs. A useful metric is the time required to move a pair from two separated potentials into a single, combined trap ready for the fast gate. Intuitively, this transport must be slow compared to τ = 2π/ω t . For our chosen parameters, τ ∼ 10 µs. Simple adiabatic protocols take a few hundred µs, while more sophisticated non-adiabatic transport [50,51] can be completed without heating in a few τ , as demonstrated for ions [52].
Summary.-We have proposed a new scheme which uses state-dependent optical tweezer traps to confine pairs of polar molecules at distances much smaller than the wavelength of the trapping light, and shown how to engineer a greatly enhanced dipole-dipole interaction between them. We have analyzed an example in detail, including the effects of hyperfine structure and tensor light shifts. We find that two-qubit gates can be implemented at least 100 times faster than with existing protocols and 1000 times faster than the rotational coherence times already demonstrated for molecules [28,29]. Thus, our work enables useful quantum information processing without further improvements to coherence times. Because the gate is so much faster, the effects of fluctuating magnetic fields or tweezer intensity matter less. We have designed a specific two-qubit gate, but our scheme provides a similar speedup for any gate that uses the dipole-dipole interaction, e.g. [25]. Shaped microwave pulses that produce remarkable robustness to various experimental imperfections [27] can also be utilized in our scheme. Our method will work for all the laser-coolable molecules, and the heavier ones have a reduced scattering rate, which may be an important advantage. The method should also work for heteronuclear bialkali molecules prepared in the ³Σ state [53]. Further analysis is needed to determine whether it can work for bialkali molecules in the ground electronic state.
As well as quantum information processing, the enhanced dipole-dipole interactions will be useful in quantum simulation. For example, a linear chain of tweezers with a pair of molecules in each can implement an SSH model [54] in a natural way. Furthermore, the ability to control the wavefunction overlap between two molecules with such precision is unique and offers a new tool for studying collisions and quantum chemistry with unprecedented precision and control.
We thank Jeremy Hutson, Jordi Mur-Petit, Paolo Molignini, Simon Cornish, Michael Hughes and Alex Guttridge for helpful discussions and feedback. This work was supported by EPSRC grant EP/P01058X/1.
I. AC STARK SHIFT
We calculate the ac Stark shift of our model ²Σ molecule following the Appendix of Ref. [44]; in this section, we refer to equation numbers from that reference. The interaction of the molecule with light of polarization ε and electric field magnitude E₀ is described by an operator built from the polarizability operators A^K and the polarization tensors P^K of the light. Here, we focus on the K = 1 term, which gives the vector Stark shift W₁. For incident light polarized along x and propagating along y, the polarization tensor has only one non-zero component, so we need only calculate the matrix elements of A¹₀. They are diagonal in the magnetic quantum number m_J. Furthermore, if we specialize to the case where the Stark shift is small compared to the spin-rotation interaction, the relevant matrix elements are proportional to m_J/(J(J + 1)), and we reach the result that the vector Stark shift is proportional to the intensity I of the light. The expressions for the polarizability components α^(K) are given by Eqs. (A24) and (A26) and require the evaluation of a sum over all excited states. In the main text, to illustrate our scheme, we focus on the case where only the A²Π excited state contributes to the polarizability. This is reasonable provided the detuning of the light field from this state is small compared to the detuning from higher-lying states. Including the effects of higher-lying electronic states changes the quantitative details, but the qualitative features important to our scheme are unchanged. If the |Ω| = 1/2 and 3/2 components of this state lie ω_{1/2} and ω_{3/2} above the ground state respectively, we can define the fine-structure splitting δ_fs = ω_{3/2} − ω_{1/2} and ∆, the detuning of the laser field from their midpoint at ω_AX = (ω_{3/2} + ω_{1/2})/2. In the limit where ∆ ≪ ω_AX, the expressions from Ref. [44] reduce to those of Eqs. (2) in the main text.
For our realistic example of CaF in the main text, we estimate the polarizability components from data on the A 2 Π ↔ X 2 Σ and B 2 Σ ↔ X 2 Σ transitions at our chosen wavelengths of λ sc = 780 nm, λ vec = 604.966 nm. The scalar, vector and tensor Stark shifts of all the relevant levels are calculated using the matrix elements given by Eq. (A25).
II. DIPOLE-DIPOLE INTERACTION
Consider the two-molecule states |F_A, m_A⟩|F_B, m_B⟩, where m_{A/B} are the magnetic quantum numbers of molecules A and B and F_{A/B} stand for all other relevant quantum numbers. The dipole-dipole interaction can couple two-molecule states whose values of m_tot = m_A + m_B differ by 0, ±1, ±2.
In the basis of Eqs. (4), the combined rotational and dipole-dipole Hamiltonian can be written as a 4 × 4 matrix in the ordered basis {|00⟩, |Ψ−⟩, |Ψ+⟩, |11⟩}, where E_rot is the rotational energy splitting and E_dd is the dipole-dipole interaction energy. The matrix is diagonal except for off-diagonal elements between |00⟩ and |11⟩. For realistic values of |x_B − x_A| in traps designed to prevent excessive overlap of the motional wavefunctions, E_rot ≫ E_dd and these elements can be safely ignored. In this case the eigenstates are well approximated by the basis of Eqs. (4).
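The structure of this 4 × 4 Hamiltonian, and the reason the |00⟩-|11⟩ coupling can be neglected, can be checked numerically. The sketch below builds the matrix with assumed values satisfying E_rot ≫ E_dd; the size of the off-diagonal element is our assumption (taken to be of order E_dd), not a value from the paper.

```python
import numpy as np

# Minimal sketch of the combined rotational / dipole-dipole Hamiltonian in the
# ordered basis {|00>, |Psi->, |Psi+>, |11>}; E_rot, E_dd are placeholder values.
E_rot, E_dd = 1.0e4, 1.0   # arbitrary units with E_rot >> E_dd

H = np.diag([0.0, E_rot - E_dd, E_rot + E_dd, 2 * E_rot])
H[0, 3] = H[3, 0] = E_dd   # off-diagonal coupling between |00> and |11>

print(np.linalg.eigvalsh(H))
# The |00>-|11> coupling shifts the eigenvalues only at order E_dd^2 / E_rot,
# so for E_rot >> E_dd the basis states of Eqs. (4) remain good eigenstates.
```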
III. EFFECT OF MOTION ON DIPOLE-DIPOLE INTERACTION
First consider the case where the four internal states in question experience the same trap frequency ω_t, the states |0−⟩/|1−⟩ have a trap minimum at position x = −δx/2 and the states |0+⟩/|1+⟩ have a trap minimum at position x = δx/2. The full motional Hamiltonian is now separable from the internal part. In the subspace of Eqs. (4), the dipole-dipole term enters with a factor q = {0, −1, 1, 0} for the states {|00⟩, |Ψ−⟩, |Ψ+⟩, |11⟩}. We can reformulate this in terms of dimensionless center-of-mass and relative position operators. For q r³ > 0, the trap frequency is increased and the two molecules are pushed apart slightly, so that their mean separation is larger than in the absence of the dipole-dipole interaction. For q r³ < 0, these effects are reversed. Figure S1 shows the energies of the first 5 eigenstates of Eq. (S6b), calculated numerically as a function of δ̃x for q = 1 and r = 3.5. At large separations, the energy shifts agree well with those expected for point dipoles fixed at the potential minima (dashed lines). For intermediate separations, the shifts from the full 1D calculation are larger, because the finite extent of the wavefunction means that ⟨1/x̃³_rel⟩ > 1/⟨x̃_rel⟩³. This effect is larger for excited motional states, where the extent of the wavefunction is larger. At small separations, the dipoles are pushed apart by their interaction and so the energy shift is reduced from the value expected for fixed point dipoles.
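A rough way to reproduce curves like those in Fig. S1 is to diagonalize a 1D relative-coordinate Hamiltonian on a finite-difference grid. The sketch below assumes the dimensionless form −d²/dx̃² + (x̃ − δ̃x)²/4 + q r³/x̃³; the prefactors are our reading of Eq. (S6b), not a verbatim transcription of it.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

q, r, dx_sep = 1.0, 3.5, 8.0          # assumed parameters (q = 1, r = 3.5 as in Fig. S1)
x = np.linspace(0.2, 30.0, 3000)      # relative-coordinate grid, x > 0
h = x[1] - x[0]

V = (x - dx_sep) ** 2 / 4.0 + q * r ** 3 / x ** 3
diag = 2.0 / h ** 2 + V               # finite-difference kinetic term -d^2/dx^2
off = -np.ones(len(x) - 1) / h ** 2

E = eigh_tridiagonal(diag, off, select='i', select_range=(0, 4))[0]
print("lowest 5 eigenvalues:", E)
# Repeating this over a range of dx_sep traces out the qualitative behaviour:
# at large separation the shifts match fixed point dipoles, while at small
# separation the dipoles are pushed apart and the shift is reduced.
```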
IV. HYPERFINE INTERACTION AND SHIFTED POTENTIALS
Here, we consider in more detail the complications introduced by the hyperfine interaction. As in the main text, we take the handedness of the light to be along z.
The hyperfine interaction couples the nuclear spin I and the total electronic angular momentum J. Their sum is F. Let us first consider states with well defined F, J and I, and a vector Stark shift which is small compared to the hyperfine interaction, so that we need only consider the diagonal matrix elements of the effective Stark shift operator. In this case, the vector Stark shift takes the form of Eq. (S8), characterized by a factor g_F that depends only on the quantum numbers F, J, I and m_F. This is a useful result for molecules where the spin-rotation interaction is large compared to the hyperfine interaction. States from neighboring rotational manifolds that have the same values of J, F and m_F will have the same vector Stark shift, and the potentials for these states will be identical. However, for many molecules of interest, the hyperfine and spin-rotation interactions are similar in size, so the hyperfine coupling mixes states with the same F and m_F but different J. The vector Stark shift of these mixed states is not given by Eq. (S8), but instead depends on the relative size of the hyperfine and spin-rotation coupling. As a result, in general, states in different rotational levels will have different vector Stark shifts, and potentials that are shifted relative to one another. For the realistic example described in the main text, the chosen states are not mixed, so g_F is given by Eq. (S8). However, because these states have different values of J, F and m_F, the potentials are shifted in this case too. Here we consider the effect of this shift on the dipole-dipole interaction.
We assume the |0±⟩ states have potential minima at ±δx₀/2 and the |1±⟩ states at ±δx₁/2. The full Hamiltonian is then no longer separable into motional and internal parts. We apply a unitary transformation built from the single-particle translation operators T_{A/B}, defined such that T_{A/B}(η)|x_{A/B}⟩ = |x_{A/B} + η⟩. Writing δx_av = (δx₀ + δx₁)/2 and δx₁₀ = (δx₀ − δx₁)/2, and defining D_A = |1−⟩_A⟨0−| and D_B = |1+⟩_B⟨0+|, we can immediately neglect, as before, the off-resonant terms in D_A D_B and D†_A D†_B because they couple internal states which are separated by the rotational energy. Transforming again to the dimensionless relative position operators, we obtain Eq. (S13), where ξ = √(Mω_t/2ℏ) δx₁₀ and δ̃x_av = √(Mω_t/2ℏ) δx_av, and T_cm and T_rel are the translation operators for the dimensionless center-of-mass and relative coordinates respectively. In Eq. (S13), the dipole-dipole interaction couples the center-of-mass and relative motions, and has an off-diagonal matrix element between the states |Ψ+⟩ and |Ψ−⟩.
We note that H̃ = (1/ℏω_t)H_m + δH, where H_m is given by Eq. (S5) and δH by Eq. (S14). Taking the zeroth-order eigenstates to be those of H_m, and using first-order perturbation theory, we find that the dipole-dipole energies are simply multiplied by the factor e^{−ξ²}. This factor is used in the calculation of the dipole-dipole interaction in the main text. It accounts for the shifted potentials irrespective of the size of the dipole-dipole interaction, because the center-of-mass eigenstates of Eq. (S6a) are unchanged by the dipole-dipole interaction.
V. TUNNELING
States that have total angular momentum F > 1/2 will be subject to a tensor ac Stark shift. This has several effects. One is to make the trap frequencies for different states differ from each other. For the parameters considered in this work, this effect is small. The most important effect is to give an off-diagonal term that couples together states with ∆m F ≤ 2. For pairs of states that are degenerate at x = 0, this introduces an avoided crossing and provides a mechanism for molecules to tunnel from one potential to the other.
Let a be the matrix element of the tensor part of the Stark shift operator between the pair of states, and assume that this is constant across the region of interest. The single-molecule tunneling rate is γ_t = 2|a| |⟨φ−|φ+⟩|, where |φ−⟩ and |φ+⟩ are the motional states corresponding to the pair of internal states. If we assume harmonic-oscillator ground states, this is 2|a|e^{−δ̃x²/2}. In the realistic example described in the main text, all states degenerate at x = 0 have a = 0, so there is no tunneling. Instead, consider the states |N = 1, F = 1+, m_F = ±1⟩ of CaF, where the 1+ refers to the F = 1 state of highest energy. For these states |a| = 730 kHz. Taking the same tweezer parameters used in the main text, δx for this pair of states is 340 nm, and γ_t is only 79 µHz. However, halving this separation increases γ_t to 4 kHz.
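The exponential sensitivity of the tunneling rate to separation can be checked in a few lines. In the sketch below, |a| is taken from the text; the two dimensionless separations δ̃x are back-solved illustrations, since the conversion from nanometres to oscillator lengths depends on trap parameters we do not fix here.

```python
import numpy as np

# gamma_t = 2|a| * exp(-dx~^2 / 2) for displaced harmonic-oscillator ground
# states, where dx~ is the separation in oscillator lengths.
a = 730e3                                  # |a| = 730 kHz for the example states
for dx_tilde in (6.9, 3.45):               # assumed dimensionless separations
    gamma_t = 2 * a * np.exp(-dx_tilde**2 / 2)
    print(f"dx~ = {dx_tilde:4.2f} -> gamma_t = {gamma_t:.3g} Hz")
# Halving the separation changes gamma_t by roughly eight orders of magnitude,
# matching the 79 uHz -> 4 kHz jump quoted above.
```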
VI. PHOTON SCATTERING RATE
The scattering rate is dominated by the vector tweezer, since it is tuned much closer to resonance than the scalar tweezer. The scattering rate from light tuned to the midpoint of the fine-structure interval is dominated by scattering from these two levels and is well approximated by R_ph = ΓΩ²/(3δ²_fs) (S16), where Ω = d_AX √(2I/ε₀c)/ℏ is the Rabi frequency and Γ is the linewidth of the ²Π state. For CaF we have Γ = 2π × 8.3 MHz, δ_fs = 2π × 2.14 THz and d_AX = 0.97 × 5.95 D, where the first factor is the Franck-Condon factor and the second is the transition dipole moment between electronic states.
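Eq. (S16) is easy to evaluate with the constants quoted above. In the sketch below, the intensity I is an assumed illustrative value, not a number from the paper; with this choice the resulting rate is comparable to the ∼200 s⁻¹ quoted in the main text.

```python
import numpy as np
from scipy.constants import hbar, c, epsilon_0

Gamma = 2 * np.pi * 8.3e6              # A-state linewidth
delta_fs = 2 * np.pi * 2.14e12         # fine-structure interval
debye = 3.33564e-30                    # 1 D in C*m
d_AX = 0.97 * 5.95 * debye             # Franck-Condon factor x transition dipole

I = 1e8                                # assumed intensity in W/m^2
Omega = d_AX * np.sqrt(2 * I / (epsilon_0 * c)) / hbar
R_ph = Gamma * Omega**2 / (3 * delta_fs**2)
print(f"Omega/2pi = {Omega/(2*np.pi):.3g} Hz, R_ph = {R_ph:.3g} per second")
```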
VII. COLLISIONAL LOSS RATE
The collisional loss rate for two molecules with wavefunctions ψ_A(x) and ψ_B(x) is R_col = β ∫ |ψ_A(x)|² |ψ_B(x)|² d³x, where β is the two-body loss rate constant, recently measured for CaF in 780 nm tweezer traps [47]. For two molecules in the motional ground states of two displaced but otherwise identical potential wells, this overlap integral evaluates to a Gaussian function of the separation whose width is set by ω_r, the trap frequency in the radial direction, and ω_a, the trap frequency in the axial direction. We find that, for CaF in a trap that has ω_r = 2π × 200 kHz and ω_a = 2π × 35 kHz, keeping the collisional loss rate below 1 Hz requires a separation of 4.7 oscillator lengths, or 140 nm. The rate is very sensitive to separation in this region: decreasing the separation to 3.6 harmonic oscillator lengths (or 106 nm) increases the loss rate to 100 Hz. | 2020-07-08T01:01:32.047Z | 2020-07-07T00:00:00.000 | {
"year": 2020,
"sha1": "ac5d858576572323d78d40c9570c4d854dd3099c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2007.03296",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "cb3bd60377bbb0281affecf69541177dcf519f4b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics",
"Materials Science"
]
} |
188662839 | pes2o/s2orc | v3-fos-license | Analysis of cost between branded medicines and generic medicines in a tertiary care hospital
Background: There is much debate regarding the importance of promoting the use of cheaper generic alternatives over brand-name drugs. While generic drugs have been noted to be comparable to brand-name drugs in their ability to treat conditions, significant debate surrounds their bioavailability, i.e., the concentration of the drug that reaches its site of action. Many experts continue to believe that generic and brand-name drugs are bioequivalent and equally viable options for effective drug treatment, as assumed in this review. Methods: Prices of commonly used branded and generic medicines of the same concentration, dosage form and combination were compared with the help of the Indian Drug Review, brochures of pharmaceuticals and pharmacies, and the Jan Aushadhi price list 2017. The mean of all available prices was calculated for each branded and generic medicine, and the percentage difference between the mean costs of generic and branded medicines was computed. Results: The mean cost of 47 of the selected 50 branded medicines was higher than that of their generic versions. The mean cost of 3 generic medicines was higher than that of the branded ones. The percentage difference in the mean costs of branded and generic medicines varied from <10% to >70%. Conclusions: This study has shown a very noteworthy difference in prices between branded and generic drugs. Efforts should be made to promote generic medication, and the misconception that generic drugs have low efficacy should be dispelled.
Generic medicine is a replica of the original branded product, marketed after the patent period or the expiry of other exclusive rights, and is hence supposed to be of low cost. 4,5 Both branded and generic drugs are manufactured conforming to international standards. Generics can be sold under different brand names and may contain different fillers, binders and lubricants, which give them a different color, shape, taste, smell, etc. Hence, a generic can be marketed under a non-proprietary name or as a branded generic, which enables the manufacturer to market the product in a way similar to the proprietary product. 6 The non-proprietary name is that of the active ingredient in the medicine, determined by an expert committee and understood internationally. 7 Prescribing generic drugs means advising drugs manufactured by other companies after expiry of the patent on the parent drug of the innovator company. Very often, it is misconceived as prescribing by a drug's generic or non-proprietary name. 3 The unethical promotional practices used by pharmaceutical companies to obtain more prescriptions from doctors make drugs unaffordable for the common man, as this adds to the cost of the medicine. 8 Generic prescribing has been emphasized primarily to reduce the cost of drugs. 9 With this background, a study was planned with the aim of comparing the cost of various commonly used branded and generic medicines and establishing the prudence of emphasizing generic versus branded prescription.
METHODS
An observational study was carried out. Prices of different branded and generic drugs were compared. 50 commonly used drugs available as both branded and generic forms in the same concentration, dosage form, and the combination, belonging to different classes, in HIMS Hassan, Karnataka, India were selected.
Medicines belonging to different classes, such as nonsteroidal anti-inflammatory drugs, antacids, antihypertensives, anti-diabetics, antibiotics, drugs used for bronchial asthma and cough syrups, etc., were included for comparison. The brands with the highest and lowest available prices were taken into consideration. The comparison between prices was done using the Indian Drug Review, brochures of pharmaceuticals and pharmacies, and the Jan Aushadhi price list 2017. The mean of all available prices of each branded and generic medicine was calculated, and the percentage difference in the mean costs of generic and branded medicines was computed.
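The calculation described above is simple enough to express in a few lines. The following Python sketch (with hypothetical prices, not the study's data) computes the mean branded and generic cost of each medicine and the percentage difference between them.

```python
from statistics import mean

# medicine -> (branded prices in Rs, generic prices in Rs); values are made up.
prices = {
    "amoxicillin 500 mg": ([58.0, 92.0], [12.0, 15.0]),
    "metformin 500 mg":   ([22.0, 35.0], [8.0, 9.5]),
}

for drug, (branded, generic) in prices.items():
    mb, mg = mean(branded), mean(generic)
    pct_diff = (mb - mg) / mb * 100       # percentage difference in mean costs
    print(f"{drug}: branded mean {mb:.1f} Rs, generic mean {mg:.1f} Rs, "
          f"difference {pct_diff:.0f}%")
```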
RESULTS
A total of 50 medicines were selected for analysis. Of the 50 medicines, 19 (38%) branded-generic pairs had a 10%-40% difference in cost.
The mean cost of the generic drugs was 13 Rs, whereas the mean cost of the highest-priced branded drugs was 125 Rs and that of the lowest-priced branded drugs was 31 Rs.
The percentage difference in the mean costs of branded and generic medicines varied from <10% to >70%. The drugs commonly used in HIMS Hassan, Karnataka, India, were the ones selected for this study. Out of the 50 selected drugs, 7 (14%) were NSAIDs, 11 (22%) were antibiotics, 7 (14%) were drugs acting on the GI tract, such as anti-diarrheals, ulcer-protective/anti-ulcer drugs, anti-spasmodics and so on, 4 (8%) were cardioprotective agents, 6 (12%) were anti-hypertensives, 3 (6%) belonged to the anticonvulsant group, and 2 (4%) each were antiemetics, antifungals, oral hypoglycemic agents, anti-malarials, anti-asthmatics and topical agents. Table 1 depicts the number of generic and branded drugs belonging to each price range.
About 34 of the 50 generic drugs fell in the price range of 1-10 Rs, and 3 cost more than 50 Rs. Six of the highest-priced branded drugs were in the 1-10 Rs group and 22 cost more than 50 Rs. Nineteen of the lowest-priced branded drugs were in the 1-10 Rs range and 10 cost more than 50 Rs, whereas the mean cost of the highest-priced branded drugs was around 124 Rs and that of the lowest-priced branded drugs around 31 Rs. The mean cost of the generic drugs was lower than even the mean cost of the lowest-priced branded drugs. This implies that physicians should prescribe generic drugs, particularly for patients of low socioeconomic status.
DISCUSSION
The results of this study show that the mean cost of 94% of the branded medicines was higher than the mean cost of the generic medicines, which is consistent with previous findings. 10 Generic medicines are cheaper than branded medicines because there is no need for investment in research and development, as in the case of new drugs. 6% of the branded medicines were cheaper than the generic ones. This could be because fierce competition among producers forces manufacturers to keep prices low. 11 Generic prescribing has been emphasized mainly to reduce the cost of drugs. Until the amendments to the Indian Patent Act in March 2005, India followed a process patent system, but after becoming a part of the global treaty, India moved to a product patent system. 12 Owing to process patents alone, in 2002 India was the world's largest producer of generic medicines. 12 Earlier, branded medicines were manufactured by multinational pharmaceuticals and large Indian pharmaceutical companies and were therefore usually expensive. 13 The practice of bribing doctors to obtain more prescriptions is well known, and it adds to the maximum retail prices (MRPs) of medicines.
Recently, in order to check the doctor-pharmaceutical connection and unethical marketing practices, an authoritative decision dated January 21, 2013 by the Medical Council of India directed doctors, hospitals and medical colleges to prescribe by generic names as far as possible, as generics are thought to be more affordable. 9 Concern grew about the fate of the pharmaceutical industry after the amendment. 14 However, there shall be no price hike due to the new patent regime; the fear that prices of medicines will spiral is unfounded. 15 Now the act allows only two types of generic drugs in the Indian market: off-patent drugs and generic versions of drugs patented before 1995. Hence, at present nearly 97% of drugs manufactured in this country are off patent, so the Indian pharmaceutical industry will not be affected by the product patent regime. 16 These cover all the life-saving drugs as well as medicines of daily use for common ailments. 14 The Indian Union Minister for Commerce and Industry, Sri Kamal Nath, assured that "the prices of medicines will not shoot up due to patents, and 97% of drugs in the market and 100% of all essential drugs are not covered by patents." 11,17 The percentage difference in the mean costs of branded and generic medicines varied from <10% to >70%. Actually, almost all the drugs produced in India are generic medicines (generic equivalents) sold under different brand names. Their prices may vary and are not controlled by the doctors; the MRP is decided and permitted by the government. 18 Now, instead of chasing doctors, pharmaceutical companies provide generic medicines to pharmacies at very cheap rates with government-approved MRPs, and these pharmacies are free to sell the drugs at their own price.
CONCLUSION
The facts revealed by this study indicate that the obligation for cost reduction, from the viewpoint of drug selection, lies with the doctors: they should prescribe the cheapest available drug and include the generic name of the drug in parentheses, in case that particular brand is not available. The government must make it compulsory for all pharmacies/medical shops to stock generic versions of all essential drugs.
A website/app should be designed and released by the Drug Controller of India containing a list of branded drugs, so that every doctor can easily find the cheapest approved drugs. | 2019-06-13T13:23:55.269Z | 2019-04-23T00:00:00.000 | {
"year": 2019,
"sha1": "4affa4c35f28c07958f1d6a1f22ee3b1c98ab57a",
"oa_license": null,
"oa_url": "https://www.ijbcp.com/index.php/ijbcp/article/download/3279/2398",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ba5a258674d016807fdaf44f43d061d191da1a3b",
"s2fieldsofstudy": [
"Medicine",
"Economics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
39436341 | pes2o/s2orc | v3-fos-license | Recurrent locked knee caused by an impaction fracture following inferior patellar dislocation: a case report
Introduction Locked knee caused by inferior patellar dislocation is considered rare in elderly patients. It was originally thought that, in the osteoarthritic knee, osteophytes on the pole of the patella become entrapped in the inter-condylar notch, which is managed by performing closed reduction and immobilization in a knee splint for three to four weeks. We present an unusual case of a locked knee with an impaction fracture. To the best of our knowledge, there have been no previous reports of such impaction fractures managed with arthroscopy. Case presentation We present an unusual case of an 88-year-old Caucasian woman with moderate arthritis who had a locked knee caused by an impaction fracture of the patella into the lateral femoral condyle. In this case report, we describe the need for arthroscopic surgery to prevent relocking of the knee in these patients. Conclusions This case report emphasizes the need for careful assessment of locked knees in elderly patients. Impaction fractures should be considered in all rare cases of patellar dislocation, and we advocate arthroscopic assessment of the articular cartilage in these patients. This is an important consideration, as the population demographics change and such impaction fractures may become more common in patients with degeneration in the knees.
Introduction
Patients with locked knees present to orthopedic and emergency departments relatively often, and the many causes of this entity are well documented in the literature [1,2]. These include meniscal lesions, loose bodies, ligament injuries, hematomas, tumors, and patellar dislocations [3][4][5].
Locked knee presenting with inferior patellar and intra-articular dislocations is considered less common in elderly patients and is thought to be the result of osteophytes on the pole of the patella that become entrapped in the inter-condylar notch. Earlier reports have recommended simple manipulation in elderly patients with degenerative knee disease, followed by three to four weeks of support in a knee splint [6][7][8][9]. More recently, Syed and Ramesh [10] reported this mechanism of knee locking in an elderly patient who required an open operative procedure to prevent relocking. Their article also described damage to the femoral condyle. In 2010, Theodorides et al. [11] recommended that open operative procedures should be performed in all such patients.
We present an unusual case of an elderly woman with moderate arthritis who had a locked knee presenting as a patellar dislocation caused by an impaction fracture of the patella into the lateral femoral condyle. In this case report, we confirm the need for surgery to prevent relocking but demonstrate that such injuries can be treated by performing arthroscopy rather than an open surgical procedure. This point is particularly relevant because as population demographics change and such injuries become more common in patients with degenerative knees, short, minimally invasive procedures and reduced recovery times are important to preserving patients' mobility.
Case presentation
An 88-year-old Caucasian woman was referred to our orthopedic unit following a simple trip on the stairs leading to a locked knee at an 80° angle. In the fall, the quadriceps muscles were forcefully contracted on her bent knee. Prior to the incident, she was independently mobile with the use of a stick. Her medical history included osteoarthritis affecting both knees and mild, generalized, right-sided weakness following a subdural hemorrhage secondary to a road traffic accident (RTA). The physical examination of her right knee revealed a closed injury with minimal swelling. We noted a tender, inferiorly displaced patella. Her range of movement of the knee was 80° to 115° with a definite block to extension.
The initial plain radiographs of her flexed knee revealed inferior displacement of the patella ( Figure 1).
Under sedation, the patient's patellar dislocation was reduced by hyperflexing the knee, placing downward pressure on the inferior pole of the patella, and then slowly extending her leg. Her knee was then placed in a camp splint and she was allowed to mobilize. The following day the patient was able to raise and straighten her leg with a relatively pain-free range of movement of the knee and was able to mobilize independently on the ward. While in the hospital, she was unable to tolerate the splint and abandoned it. Her knee then locked again following flexion past 90°. A computed tomography scan of her right knee was performed, which showed a superior patellar osteophyte embedded in the lateral femoral condyle (Figure 2).
The patient's knee was examined while she was under general anesthesia. When her knee was flexed past 90°, the patella locked into the lateral femoral condyle. The knee could be unlocked by slightly increasing flexion and applying pressure inferomedially while slowly extending the knee. Arthroscopy was performed using standard lateral and medial portals. This confirmed the presence of a deep ridge where the patellar osteophyte had become embedded into the arthritic lateral condyle, causing locking of the knee when flexed past 90°. The rest of her knee had grade III-IV osteoarthritic changes.
With a burr, the superior pole of the patella (Figures 3 and 4) was trimmed and the ridge on the lateral femoral condyle was smoothed (Figures 5 and 6). The results of this procedure were checked using a fluoroscopic image intensifier (Figure 7). When reexamined, the patella tracked smoothly over the lateral condyle without locking.
Post-operatively, the patient showed marked improvement in her symptoms. She was able to raise and straighten her leg and extend her right knee. Mobilization was possible without recurrence of her right knee locking, and her pre-injury mobility was regained within one week. In her last review at the clinic 12 weeks later, she was found to have retained her pre-injury mobility and was delighted with the outcome of her surgery.
Discussion
There are a number of reports in the literature about arthritic and locked knees [1,6,9,10,12]. Simple manipulation and immobilization have been recommended for the management of elderly patients with patellar dislocations, as the mechanism of locking is understood to be osteophytes on the pole of the patella becoming entrapped in the inter-condylar notch [10]. In our patient, a patellar osteophyte was impacted into the lateral femoral condyle, causing locking of the knee.
Femoral condyle articular damage has been reported in younger patients with hemophilia who presented with locked knees and were treated with simple closed manipulation, but they had long, incomplete recovery times of up to one year [13]. There have also been reports of articular damage in elderly patients managed by performing open operative procedures, suggesting that locked knees in elderly patients are not benign [10,11].
The current recommendations in the literature for irreducible and recurrent dislocations are open reduction and exploration, but such procedures may lead to longer recovery times than that described in the present report [9,11]. Herein we describe management with arthroscopic procedures which allowed a short inpatient stay and a good, immediate return to pre-injury mobility within one week after surgery with minimal soft tissue disruption. To the best of our knowledge, this is the first report of patient with a locked knee with lateral condyle impaction fracture that was recognized as such and was managed successfully by performing arthroscopic surgery. The changing demographics of the population suggest the likelihood of an increase in such presentations.
Conclusion
Locked knees require careful assessment, especially in the elderly. Impaction fractures should be considered in all rare cases of patellar dislocation, and we advocate arthroscopic assessment of the articular cartilage in such cases.
Consent
Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal. | 2016-05-04T20:20:58.661Z | 2011-08-03T00:00:00.000 | {
"year": 2011,
"sha1": "695475622eaee06af8297dec4d6df356f4b63808",
"oa_license": "CCBY",
"oa_url": "https://jmedicalcasereports.biomedcentral.com/track/pdf/10.1186/1752-1947-5-347",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "046bfb741ee0c281604e79e3625ad244c94a8186",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
652583 | pes2o/s2orc | v3-fos-license | A risk score development for diabetic retinopathy screening in Isfahan-Iran
BACKGROUND: The purpose of this study was to develop a simple risk score as a screening tool for retinopathy in type II diabetic patients. METHODS: A cross-sectional study was carried out recruiting 3734 patients with type II diabetes at an outpatient clinic of the Isfahan Endocrinology and Metabolism Research Center (IEMRC), Iran. Logistic regression was used to model and predict diabetic retinopathy. The cut-off value for the risk score was determined using the Receiver Operating Characteristic (ROC) curve procedure. RESULTS: According to the final models, being male, having a lower body mass index (BMI), being older, a longer duration of diabetes and a higher HbA1c were correlated with an increased risk of diabetic retinopathy. The area under the ROC curve was 0.704 (95% CI: 0.685-0.723). A value ≥ 52.5 had the optimum sensitivity (60%) and specificity (69%) for determining diabetic retinopathy. CONCLUSIONS: The results indicated that the risk factors for retinopathy were sex, BMI, age, duration of diabetes and HbA1c level. In conclusion, applying the developed retinopathy risk score is a practical way to identify patients who are at high risk of developing diabetic retinopathy, for early treatment.
Type II diabetes is increasing in the world population. It is an important cause of death and complications, which can impose a burden on patients, their relatives and the health care system. The most common complication of type II diabetes is diabetic retinopathy (DR), which is a leading cause of visual impairment among working-age people. The prevalence of blindness among diabetics is estimated to be around 25 times that of the non-diabetic population. [1][2][3][4] Duration of diabetes, hyperglycemia, nutritional and genetic factors, high blood pressure, usage of insulin, pregnancy and hyperlipidemia are the risk factors for diabetic retinopathy. [5][6][7][8][9][10][11][12][13][14] Important improvements have been achieved in diagnosis, medical care and the risk factors that affect the prevalence of diabetes and retinopathy during recent decades. Considering the increasing incidence of diabetes, patients who suffer from ophthalmologic complications should be properly managed to prevent permanent eye damage. [15][16][17][18] Identifying individuals at risk of diabetic retinopathy is very important for the health system. Recently, risk scores for diabetic retinopathy based on simple anthropometric and
demographic variables have been established to identify high-risk individuals. It is also evident that a common risk score cannot be applied to all ethnic groups. The final risk score form contains the significant diabetic retinopathy risk factors, each weighted according to its contribution in the main model. The total risk score is the sum of all the weighted risk factors and varies from 0 to 100. In this study, we have developed a simple and practical scoring system to screen for diabetic retinopathy. Using such a risk score would be of great help in developing countries, where a huge proportion of DR remains undiagnosed.
Method
This was a cross-sectional study on patients with type II diabetes mellitus using routinely collected data at outpatient clinics of the Isfahan Endocrinology and Metabolism Research Center (IEMRC), Iran. A total of 12,644 type II diabetes patients were registered in the IEMRC. Diabetes was defined according to the report of the expert committee on the diagnosis and classification of diabetes mellitus. Only type II diabetes was included in this study. We excluded the patients with missing data for diabetic retinopathy. Data of 3734 patients were included. These patients were initially screened by an endocrinologist and then referred to an ophthalmologist to undergo ophthalmologic examination.
The data source for these analyses was collected from all patients who attended for the first time and completed forms at the clinic. Data were collected through physical examinations including a retinal examination, blood pressure, fasting plasma glucose (FPG), glycosylated haemoglobin (HbA1c), urinary albumin, triglyceride, cholesterol and serum creatinine. Demographic information, family history and history of smoking were also obtained. All ophthalmologic examination records were used in the study and were entered into a database. The lowest score for each category was defined as zero. The total score for each subject was calculated by summing all the weighted risk factors, giving a total that varies from 0 to 100.
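As an illustration of how such a 0-100 score can be assembled from regression output, the following Python sketch rescales a set of made-up logistic regression coefficients into points; the categories and coefficient values are placeholders, not the study's fitted model.

```python
# Each category's weight is its coefficient; the lowest category of each factor
# scores zero, and weights are rescaled so the maximum possible total is 100.
coeffs = {  # hypothetical coefficients, for illustration only
    "sex":      {"female": 0.00, "male": 0.35},
    "duration": {"<1y": 0.00, "1-2y": 0.25, "2-5y": 0.60, "5-10y": 0.95, ">10y": 1.40},
    "hba1c":    {"<=9%": 0.00, "9-11%": 0.45, ">11%": 0.80},
}

max_total = sum(max(cats.values()) for cats in coeffs.values())
points = {f: {c: 100 * v / max_total for c, v in cats.items()}
          for f, cats in coeffs.items()}

patient = {"sex": "male", "duration": "5-10y", "hba1c": ">11%"}
score = sum(points[f][patient[f]] for f in patient)
print(f"risk score = {score:.1f} / 100")
```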
Because their interpretation is complicated, interaction terms between the various variables were not considered. ROC curves were constructed to identify the optimum cut-off value (>60%) for determining DR in diabetic patients. Sensitivity and specificity for predicting DR were calculated for different cut-offs of the score.
The cut-off value for the risk score was determined using the Receiver Operating Characteristic (ROC) curve procedure.
Statistical Analysis
The following risk factors were analyzed: sex, age, duration of diabetes, body mass index (BMI), the presence or absence of high blood pressure (BP), defined in an adult as systolic BP ≥ 130 mmHg or diastolic BP ≥ 80 mmHg, and the levels of glycated hemoglobin (HbA1c), fasting plasma glucose (FPG), cholesterol and triglyceride.
Age was categorized into 10-year intervals starting at age 30 years, the earliest possible entry into the study. Diabetes duration was categorized as follows: the first year as newly diagnosed diabetes, and thereafter by 2-, 5- and 10-year intervals. Overweight and obesity were defined as a body mass index (BMI) of more than 25 kg/m². HbA1c level was categorized as 9% or less, 9.01%-11%, and more than 11%.
Variables were included in the multiple logistic regressions using stepwise backward elimination, with DR as the dependent variable. The independent variables were categorized. P values less than 0.05 were considered statistically significant.
A risk score was developed from the above factors. The variables of interest were treated in two ways. First, their distribution within each area was separately calculated, and the results were presented as the odds of DR in each group; 95% confidence intervals (CIs) for these relative odds were estimated from the logistic regression analysis. Second, the results were presented as logistic regression coefficients together with their significance levels. The coefficients of each significant variable in the model were used to assign a score value.
Optimal cut-point for the risk score (the point with the highest sensitivity and lowest false-positive rate) was depicted by the ROC analysis. Statistical analyses were done using the software package STATA version 9.0.
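The ROC-based cut-point selection can be sketched as follows in Python (the study itself used STATA 9.0); the data here are synthetic, and the Youden index is one common way to formalize "highest sensitivity and lowest false-positive rate".

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                    # stand-in risk factors
y = (X @ [0.8, 0.5, 0.3, 0.1] + rng.normal(size=1000) > 0).astype(int)

prob = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
fpr, tpr, thr = roc_curve(y, prob)
best = np.argmax(tpr - fpr)                       # Youden's J = sens + spec - 1
print(f"AUC = {roc_auc_score(y, prob):.3f}, optimal cut-off = {thr[best]:.3f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```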
Results
Of the total patients included in this study, 64% were female. Fifty-four percent (54%) of the patients were diagnosed with retinopathy. Baseline characteristics of patients with and without DR are presented in Table 1.
More than 70% of patients with diabetes of more than 10 years' duration had diabetic retinopathy. Approximately 61% of patients had a BMI below 25 kg/m². Table 2 demonstrates the results of the logistic regression models with DR as the dependent variable. The area under the ROC curve was 0.704 (95% CI: 0.685-0.723), as shown in Figure 1. Table 3 shows the sensitivity and specificity of different cut-off points for diabetic retinopathy in our patients. A risk score value ≥ 52.5 had the optimum sensitivity (60%) and specificity (69%) for determining DR.
Discussion
In this study the risk score of retinopathy for type II diabetic patients in Isfahan, Iran was investigated. It was shown that higher HbA1c, longer diabetes duration, being older and male increase the risk for development of retinopathy whereas body mass index > 25 kg/m 2 decreases it (Table 1).
Studies on diabetic patients conducted in other countries have similar results. The prevalence of retinopathy varies widely depending on the diabetes duration. Accordingly, the prevalence of diabetic retinopathy is reported to be 50%, 31.3%, 50% and around 50% in Mexico, Sri Lanka, UK and Spain, respectively. Some other studies also reported prevalence rate to be around 60.5%. 10,11 A low prevalence rate (26%) was reported in a study in Pakistan. 12 A study comparing diabetics with and without DR in Kuwait found that high HbA1c, obesity (BMI > 30kg/m 2 ) and longer duration of diabetes mellitus were the risk factors for DR. 13 Furthermore, another study showed that HbA1c, BMI and length of illness are a contributing factor in the degree of retinopathy. The interesting finding of our study is that BMI as a risk factor has an inverse correlation with developing DR. Similarly, some previously done studies, including the Wisconsin Epidemiological Study of Diabetic Retinopathy, demonstrated an inverse relationship between BMI and severity of diabetic retinopathy. To explore if poor glycemic control which may reflect in lower BMI can explain this reverse correlation, we examined the relation between BMI and other risk factors such as HbA1c, FPG and even diabetes duration, blood pressure, cholesterol and triglyceride. We found just a significant negative association between BMI and duration of diabetes. Therefore, our hypothesis was not supported by this analysis. However, further studies are needed to investigate this controversial issue.
Figure 1. ROC curve.
This study indicates that the prevalence of diabetic retinopathy is much higher in patients with a relatively longer duration of diabetes. A few reports have identified age at diagnosis, total cholesterol, triglycerides and high HbA1c as risk factors. We found a significant relation between DR and age, BMI, sex, duration of diabetes and HbA1c, but we did not find any association between DR and blood pressure, triglyceride, cholesterol or FPG.
We did not find any relationship between blood pressure, cholesterol, triglyceride and DR, although several studies have supported a positive relationship between blood pressure and DR. In conclusion, our study demonstrated that diabetic retinopathy was associated with HbA1c, age, diabetes duration, BMI and sex, but was not associated with FPG, blood pressure, cholesterol or triglyceride. Applying the developed retinopathy risk score is a practical way to identify diabetics who are at high risk of developing DR, for early treatment. | 2018-04-03T03:41:23.985Z | 2009-04-27T00:00:00.000 | {
"year": 2009,
"sha1": "8c68221ff374abf5ff56c4ed037a6ee58138f419",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2b7b31b1234b126d6ce893b70a1150ea94071f03",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268320425 | pes2o/s2orc | v3-fos-license | Sparse-view X-ray CT based on a box-constrained nonlinear weighted anisotropic TV regularization
Sparse-view computed tomography (CT) is an important way to reduce the negative effect of radiation exposure in medical imaging by skipping some X-ray projections. However, due to violating the Nyquist/Shannon sampling criterion, there are severe streaking artifacts in the reconstructed CT images that could mislead diagnosis. Noting the ill-posed nature of the corresponding inverse problem in sparse-view CT, minimizing an energy functional composed of an image fidelity term together with properly chosen regularization terms is widely used to reconstruct a medically meaningful attenuation image. In this paper, we propose a regularization, called the box-constrained nonlinear weighted anisotropic total variation (box-constrained NWATV), and minimize the regularization term accompanying the least square fitting using an alternating direction method of multipliers (ADMM) type method. The proposed method is validated on the Shepp-Logan phantom model, alongside actual walnut X-ray projections provided by the Finnish Inverse Problems Society and human lung images. The experimental results show that the reconstruction speed of the proposed method is significantly accelerated compared to the existing L₁/L₂ regularization method. Precisely, the central processing unit (CPU) time is reduced by more than 8 times.
Introduction
X-ray computed tomography (CT) aims to visualize the internal structure of the human body by reconstructing tissues' attenuation coefficients µ to X-rays in clinical applications.Depending on diverse X-ray sources, there are parallel beam, fan beam and cone beam CT [1][2][3][4].In this paper, for ease of explanation, we focus on image reconstructions in parallel beam CT even though the proposed method can be used in other fan beam and cone beam CT.In parallel beam CT, parallel X-ray beams in different directions are transmitted through the patient who lies between the X-ray sources and the detectors (see Figure 1).The corresponding attenuated X-ray intensities are measured through the detectors.The inverse problem of parallel beam CT is to reconstruct the attenuation coefficient from the received attenuated X-ray intensities.
Given perfectly measured data, reconstruction methods include filtered back-projection, the algebraic reconstruction technique (ART) [1] and so on. Owing to its advantages of being fast, accurate and excellent with bones and lungs for nondestructive testing, X-ray CT is widely used in medical imaging to aid doctors in diagnosing diseases [1].
Nevertheless, the exposure of patients to radiation increases the risk of many diseases such as leukemia, cancer, etc. [5]. Low dose CT is an effective way to reduce such a risk. There are generally three approaches to low dose CT: the first is to reduce the tube voltage/current, the second is limited angle CT reconstruction, and the third is sparse-view CT reconstruction. For the first, to obtain a meaningful CT image we need an efficient denoising algorithm [6]. For the second, there exist visible singularities, invisible singularities and artifacts [1]. For the third, since it violates the Nyquist/Shannon sampling criterion, strong streaking artifacts occur [7]. In this paper, we focus on removing the streaking artifacts in sparse-view CT reconstruction. To this end, we need to develop efficient ways to deal with the ill-posedness caused by the projection downsampling [1].
To deal with the ill-posedness, both data-driven and model-driven methods exist. The data-driven methods include the use of convolutional neural networks (CNN) [8], U-Net [9] and GoogLeNet [10]. However, as pointed out in [11], there should be more evidence of such methods being usable in clinical applications. For sparse-view CT, it is also very difficult to provide enough labeled data, since one cannot produce fully sampled projection data without a full dose. Hence, model-driven methods are still quite necessary.
For the model-driven method, note that the algebraic method [12] has the flexibility of incorporating the a-priori information of µ.It is widely used in sparse-view CT.To be precise, the reconstruction of µ is recast into a minimization problem of an energy functional constructed by a (weighted) least square fitting and a regularization.
Since medical images have a piecewise constant structure, their gradient can be considered sparse. [13] proposed the total variation (TV) regularization method for sparse-view CT. However, it is well known that TV introduces new blocky/staircasing artifacts [14]. [15] proposed the anisotropic TV regularization method, although it can produce distortions along the axes. To handle such problems, many variants of TV regularization have been proposed in the last two decades. [16] proposed an edge-preserving TV regularizer which used e^{−|∇µ|²/σ²} as the edge detector, where σ is a prescribed parameter representing the amount of smoothing. Later, [17] proposed a discretized version similar to [16]. Similar methods can be found in [18,19].
Nevertheless, the ability to remove the streaking artifacts while preserving edges can be improved further, because the amount of regularization near the edges and away from them does not differ much, since e^{−|∇µ|²/σ²} ∈ [0, 1]. [20] proposed the total generalized variation (TGV) regularizer, which makes use of the second-order derivative of the unknown µ. While this method can avoid the blocky artifacts, it assumes a piecewise linear structure of the image, whereas a medical image is commonly piecewise constant. [21] proposed a directional TV regularizer in which a directional derivative is considered rather than just ∇µ; however, different weights could be employed in different directions to improve the performance of this method. [22] proposed the L_p (0 < p < 1) regularization method; however, the reconstructed images depend heavily on the parameter p. L₁/L₂ is a recently proposed regularization technique [23]. This method is based on updating the regularization parameter in each iteration. However, the updated parameter is not region-dependent; that is, in each iteration, the minimization problem is isotropic. [24] proposed a nonlinear weighted anisotropic TV (NWATV) regularization method and used it in electrical impedance tomography, a low-resolution imaging modality. In this paper, a box-constrained NWATV method is used in sparse-view CT, which produces significantly improved reconstructions compared with directly using the NWATV and box-constrained L₁/L₂ methods. Precisely, across the internal edges where ∇µ → ∞, we set the regularization to be small to preserve the edges, while near the smooth regions we apply a normal amount of regularization to make the ill-posed problem better posed. We found that the box constraint (constraining the reconstructed values to lie in a proper interval) significantly improves the convergence behavior of the iteration process. We validate the proposed algorithm using the Shepp-Logan phantom, the walnut X-ray data provided by the Finnish Inverse Problems Society (http://fips.fi/dataset.php), and the clinical lung image provided by The Cancer Imaging Archive (TCIA: https://www.cancerimagingarchive.net/collection/lungctdiagnosis/). The rest of the paper is organized as follows: In Section 2, we provide a brief introduction to parallel beam CT. In Section 3, we introduce the proposed box-constrained nonlinear weighted anisotropic TV regularization and provide an iterative reconstruction algorithm. In Section 4, we validate the performance of the proposed regularization method using the Shepp-Logan phantom, the actual walnut CT experiment data and the clinical lung image. In Section 5, we discuss the rules for the choice of regularization parameters. In Section 6, we conclude the paper and provide some future research topics.
Preliminaries of parallel beam CT
In parallel beam CT, we restrict our explanations to the two-dimensional space, because the parallel beams always lie in a plane and each time the X-rays can only pass through a slice of the object. Let Ω ⊂ R² represent a bounded region of the imaging object. Denote by µ the attenuation coefficient of Ω, which is generally a piecewise constant function in medical imaging. In parallel beam CT, an incident X-ray beam along the line L_{θ,s} := {x ∈ R² : Θ · x = s} passes through the object, which lies between the X-ray sources and detectors (see Figure 1). Here s ∈ R denotes the signed distance of L_{θ,s} to the origin O(0, 0), and Θ = (cos θ, sin θ) with θ ∈ [0, π) denoting the angle between the x-axis and a line ℓ perpendicular to L_{θ,s}. We assume that the incident X-ray intensity is I₀. For fixed θ and s, the attenuated X-ray intensity I(θ, s) can be measured through the detector. The relation between the measured I(θ, s) and the unknown µ is described by the Lambert-Beer law [1,2]: ln(I₀/I(θ, s)) = R_θ[µ](s), where R_θ[f](s) is the Radon transform [25] of f, defined as R_θ[f](s) = ∫_{L_{θ,s}} f(x) dℓ_x, with dℓ_x denoting the length element.
In medical imaging, we assume that a parallel beam contains J X-rays, and hence J detectors are employed to detect the corresponding attenuated X-rays. We assume that the J X-rays are equidistantly distributed: precisely, the signed distances of the X-rays to the origin lie in [s_min, s_max], and the signed distance of the j-th X-ray is s_j = s_min + (j − 1)(s_max − s_min)/(J − 1). For ease of explanation, we denote y_m = ln(I₀/I(θ_k, s_j)) for m = J(k − 1) + j. Then we have y_m = R_{θ_k}[µ](s_j), m = 1, 2, . . . , M. (2.1) We divide Ω into N × N square pixels P_{qt}. Then, from Eq. (2.1), the reconstruction of µ can be recast as solving the linear system Au = y. (2.2) Here A = (a_{mp}) is an M × n matrix for n = N², y = (y_m) is an M × 1 vector and u = (u_p) is an n × 1 vector. To be precise, a_{mp} is the length of the m-th projection line that lies in the pixel P_{qt}; i.e., for p = N(t − 1) + q, u_p = µ(q, t), and y_m is defined in Eq. (2.1).
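To make the discretization concrete, the following Python sketch generates the data vector y from a phantom using scikit-image's Radon transform as a stand-in for the projection matrix A (the paper's MATLAB code uses AIR Tools II for this role); the grid size and angles mirror the Shepp-Logan experiment described later.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, resize

N = 256
mu = resize(shepp_logan_phantom(), (N, N))       # attenuation image, n = N^2
angles = np.linspace(0.0, 150.0, 31)             # K = 31 sparse view angles

sino = radon(mu, theta=angles, circle=False)     # column k ~ R_{theta_k}[mu](s_j)
J = sino.shape[0]                                # ~363 detector bins on this grid
y = sino.T.ravel()                               # stack as y_m with m = J*(k-1)+j
print(J, y.shape)                                # M = J*K measurements, M << n
```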
Nonlinear weighted anisotropic TV regularization with box constraint
Note that for sparse-view CT we generally have M ≪ n; hence, to solve Equation (2.2), we reformulate it as the following least squares problem: min_u (1/2)‖Au − y‖²_{ℓ²}, (3.1) where ‖·‖_{ℓ²} denotes the standard Euclidean norm in R^M. Since A^T A is ill-conditioned, where (·)^T represents the transpose, we approximate Eq. (3.1) by the following well-conditioned problem: min_u (1/2)‖Au − y‖²_{ℓ²} + λ Reg(u). (3.2) The first term on the right-hand side of Eq. (3.2) is the data fidelity term, the second term Reg(u) is the regularization term, and λ is the regularization parameter, which balances the fidelity and regularization terms. Precisely, instead of seeking the solution in the space ℓ², where there may be infinitely many solutions, we seek the solution in a subspace characterized by Reg(u).
Since the edge of the internal structure is a key feature in medical imaging, the choice of the term Reg(u) should obey the following rules:
• Near the local edges of µ (or u), where |∇µ| ≈ ∞, do as little regularization as possible to preserve the edges.
• Away from the edges, in the smooth regions, apply a normal amount of regularization to make the ill-posed problem better posed.
• The range of the reconstructed u coincides with the range of its true value.
Combining the above three considerations, we define Reg(u) as Reg(u) = ‖W(u) D_x u‖_{ℓ¹} + ‖W(u) D_y u‖_{ℓ¹} + γ Π_{[c₁,c₂]}(u), with W(u) = diag(1/(|∇u| + β)), (3.3) where γ is the indicator of using the box constraint, that is, γ = 1 if the box constraint is used and γ = 0 otherwise; D_x, D_y ∈ R^{n×n} are respectively the first-order difference operators along x and y; |∇u| = ((D_x u)² + (D_y u)²)^{1/2} is evaluated elementwise, with β > 0 a small positive number to avoid zero in the denominator; and Π_{[c₁,c₂]}(u) equals 0 if every entry of u lies in [c₁, c₂] and equals +∞ otherwise. Note that Π_{[c₁,c₂]} is capable of enforcing u into the range of the actual attenuation coefficient.
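The role of the nonlinear weights can be seen in a few lines of Python. The sketch below computes w = 1/(|∇u| + β) from a toy image with a single edge; the forward-difference stencils are our assumptions, following the spirit of [24].

```python
import numpy as np

def nwatv_weights(u, beta=1e-4):
    dx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences D_x u
    dy = np.diff(u, axis=0, append=u[-1:, :])   # forward differences D_y u
    grad_mag = np.sqrt(dx**2 + dy**2)
    return 1.0 / (grad_mag + beta)              # small weight across edges

u = np.zeros((8, 8)); u[:, 4:] = 1.0            # a toy image with one jump
w = nwatv_weights(u)
print(w[0, :6])                                 # the weight drops at the jump column
```

Because the weight is recomputed from the current iterate, little smoothing is applied across edges while flat regions receive a normal amount of regularization, which is exactly the behaviour demanded by the three rules above.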
The augmented Lagrangian functional of Eq. (3.2) together with Eq. (3.3) can be expressed as L(u, d, v; b, e) = (1/2)‖Au − y‖²_{ℓ²} + λ(‖W d_x‖_{ℓ¹} + ‖W d_y‖_{ℓ¹}) + γ Π_{[c₁,c₂]}(v) + (ρ/2)(‖D_x u − d_x + b_x‖²_{ℓ²} + ‖D_y u − d_y + b_y‖²_{ℓ²}) + (α/2)‖u − v + e‖²_{ℓ²}, where d = (d_x, d_y) and v are the auxiliary variables (splitting the gradient and the box constraint, respectively), b = (b_x, b_y) and e are the scaled Lagrangian multipliers, and ρ, α are the scalar penalty parameters.
We end this section by summarizing the above process as the reconstruction algorithm in the form of pseudocode shown in Algorithm 1.
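For readers who prefer code to pseudocode, the following Python sketch shows one pass of the ADMM iteration behind Algorithm 1. It is a simplified reading of the splitting described above, not the authors' released implementation: the shrinkage form, the weight update and the use of default GMRES tolerances are our assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def shrink(z, t):
    """Soft-thresholding, the proximal map of the weighted l1 term."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def admm_step(u, d, b, v, e, A, Dx, Dy, y, lam, rho, alpha, c1, c2, beta=1e-4):
    # u-subproblem: (A^T A + rho (Dx^T Dx + Dy^T Dy) + alpha I) u = rhs,
    # solved with GMRES as in the paper.
    def matvec(x):
        return (A.T @ (A @ x)
                + rho * (Dx.T @ (Dx @ x) + Dy.T @ (Dy @ x))
                + alpha * x)
    rhs = (A.T @ y
           + rho * (Dx.T @ (d[0] - b[0]) + Dy.T @ (d[1] - b[1]))
           + alpha * (v - e))
    op = LinearOperator((u.size, u.size), matvec=matvec)
    u, _ = gmres(op, rhs, x0=u)

    # Nonlinear weights recomputed from the current iterate (the NWATV step).
    gx, gy = Dx @ u, Dy @ u
    w = 1.0 / (np.sqrt(gx**2 + gy**2) + beta)

    # d-subproblem: weighted anisotropic shrinkage.
    d = (shrink(gx + b[0], lam * w / rho), shrink(gy + b[1], lam * w / rho))
    # v-subproblem: projection onto the box [c1, c2].
    v = np.clip(u + e, c1, c2)

    # Scaled multiplier updates.
    b = (b[0] + gx - d[0], b[1] + gy - d[1])
    e = e + u - v
    return u, d, b, v, e
```

Iterating this step until RE(k) stabilizes, with A, Dx and Dy stored as sparse matrices, mirrors the reconstruction loop summarized in Algorithm 1.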
Experiments
In this section, to validate the advantages of the proposed regularization, we carry out experiments on the Shepp-Logan phantom, the actual walnut X-ray projection data provided by the Finnish Inverse Problems Society, and a clinical human lung image. The walnut CT data is also provided on ZENODO (https://zenodo.org/record/1254206). The experimental models are shown in Figure 2.
Experiment setup
To show the advantages of the proposed regularization, we compare the reconstructions of the most recently proposed gradient-based L₁/L₂ [23] with box constraint and the nonlinear weighted anisotropic TV regularization [24] with box constraint. (Algorithm 1, the box-constrained NWATV method, requires the projection matrix A, the observed data y, and a bound [c₁, c₂] for the original image.)
end if 12: end for weighted anisotropic TV regularization [24] with box constraint.To compare the performance of the reconstructions we compute the relative errors (including the L 2 relative errors RE(k) and the H 1 relative errors RE(k)) and mean square errors MSE(k) for the k-th step as follows Here, u (k) represents the result of the reconstruction at k-th step.In Shepp-Logan phantom, Besides we also compare the peak signal-to-noise ratio PSNR(k) and the structural similarity index SSIM(k) [28] for the k-th step defined as follows In the numerical experiment, we set the size of the reconstructed images to be 256 × 256 pixels and set the number of detectors to be J = 362.In the reconstruction of the actual walnut experiment, the size of the reconstructed images is set to be 164 × 164, and the number of detectors is also 164.In the lung image, we use the first image of patient R_172 in the dataset.
The original size is 512×512; we sample evenly to get a ground truth image of 128×128. The number of detectors is set to be 181. Using the Radon transform, we obtain the projection data corresponding to the lung image. To solve Equation (3.6), we use the Generalized Minimal Residual Algorithm (GMRES) [29] to accelerate the computation.
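The following is a minimal Python sketch of the quality metrics and the GMRES-accelerated inner solve; it uses scikit-image and SciPy as stand-ins (the paper uses MATLAB), assumes u_k and u0 are 2D arrays, and all function names are ours.

```python
import numpy as np
from scipy.sparse.linalg import gmres
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality(u_k, u0):
    # RE(k), MSE(k), PSNR(k), SSIM(k) of the k-th iterate vs. the reference u0
    re = np.linalg.norm(u_k - u0) / np.linalg.norm(u0)   # L2 relative error
    mse = np.mean((u_k - u0) ** 2)                       # mean square error
    rng = float(u0.max() - u0.min())
    return (re, mse,
            peak_signal_noise_ratio(u0, u_k, data_range=rng),
            structural_similarity(u0, u_k, data_range=rng))

def solve_subproblem(lhs, rhs):
    # GMRES-accelerated solve of the inner linear system (Eq (3.6) in the text)
    u, info = gmres(lhs, rhs)
    if info != 0:
        raise RuntimeError("GMRES did not converge")
    return u
```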
The reconstructions are carried out using Matlab 2018a (The MathWorks, Inc., Natick, MA, USA) on a workstation with a 1.60 GHz Intel(R) Core(TM) i5-8250U CPU, 8.00 GB memory and the Windows 10 operating system. We also make use of the MATLAB package AIR Tools II to simulate the parallel beam for the CT scanning [30].
Numerical experiment results
Under box constraints, we compare the results of the L1/L2 method and the nonlinear weighted TV regularization. In all the experiments, we set the maximum numbers of external and internal iterations in the box-constrained L1/L2 to be 300 and 5, respectively. For fair comparisons, the ranges of the other parameters are set according to [23]. We first consider the effect of the box constraint for the NWATV regularization. We consider parallel beam CT reconstruction with 31 angles uniformly taken from 0° to 150°, and the noise level is 0.5%. The box constraint is [0, 1]. Since J = 362 detectors and 31 angles are used, the sample size is 362×31. We do our best to choose the parameters such that RE attains its minimum value. Precisely, we choose ρ = 20, λ = 0.002, α = 5 in the box-constrained NWATV, and ρ = 20, λ = 0.004 in NWATV.
The reconstruction images are shown in Figure 3, and the corresponding relative errors (in the L2 and H1 norms), mean square errors (MSE), peak signal-to-noise ratios (PSNR), SSIM values and CPU times are shown in Table 1. The results show that, with a short reconstruction time, the box-constrained NWATV regularization can perform better than that without the box constraint. In Figure 4, we illustrate the evolutions of RE(k) and SSIM(k) with the iteration step k. The figure clearly shows that the box constraint improves the convergence behavior of the NWATV method.
Next, we show that the proposed regularization can reconstruct a satisfactory image using different sampling angles and is robust against Gaussian random noise. We evenly take 90, 60 and 30 angles for comparison from 0° to 179°. For each case, we add Gaussian random noise with the levels 0.5%, 1%, 1.5% and 2%. The box constraint is set to be [0, 1]. We list the specific parameters selected in Table 2. The corresponding reconstruction results are shown in Figures 5-7, and the values of the corresponding numerical results in Table 3. For the real walnut data and the human lung image, the reconstructions are obtained by choosing a set of parameters to make the reconstructed image have the best visual effect. The results are shown in Figure 9, and the corresponding numerical results are shown in Table 5.
For further comparison, we also illustrate profiles of the reconstructed walnut and human lung images along the dashed lines shown in Figure 10. From the profiles it is clear that the proposed model performs better in preserving edges and details.
In conclusion, through the above experiments, the box-constrained NWATV method can produce similar and visually better results than the recently developed box-constrained L1/L2 method, while the CPU time is significantly decreased. Recently, learning-based approaches have also been applied to CT image reconstruction [33]. However, recent research has reported the instabilities of such methods [11,34]. Moreover, to gather high quality training data, the conventional regularization-based reconstruction methods are still quite necessary. This is because, to get the labeled data without using an elegant regularization, data obtained from full projections would have to be employed, which exposes the patient to a high risk of radiation.
Another possible model-based approach is the sinogram inpainting method [35]. However, it can cause other artifacts. Hence, a proper regularization in the image reconstruction process is a mild way to produce high performance CT images for diagnosis and to gather the training data for AI methodology. References [36,37] provide other ways of combining model-based and data-driven methods for CT image reconstruction.
Parameter selection rules
In regularization-based reconstructions, the choice of the regularization parameters is generally difficult and often criticized. However, from the point of view of dimensionality reduction, we can produce a high performance CT image of much higher dimension given only a small number of parameters. For the choices of the parameters, on the one hand, there are mathematical methods such as the discrepancy principle and statistical methods [38] to deal with proper choices of the parameters. On the other hand, there are some empirical rules for the parameters, which are listed as follows (an illustrative parameter-sweep sketch is given after the list).
• As in [24], the performance of the reconstruction, i.e., the relative error, depends only on the ratio λ/ρ rather than on the individual values of λ and ρ.
• The values of ρ and α should be determined by the order of A^T A and the number of pixels, in such a way that A^T A + ρD^T D + αI does not change its order.
• The optimal λ/ρ value ranges from 10^−5 to 10^−4, since using such values we can minimize the relative error.
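As referenced above, a simple way to operationalize these rules is a grid search over λ/ρ with ρ and α held fixed; the sketch below is hedged in that `reconstruct` is a placeholder for any solver with this signature, not the paper's code.

```python
import numpy as np

def sweep_lambda_over_rho(reconstruct, u0, ratios, rho=20.0, alpha=5.0):
    # Grid search over lambda/rho, the ratio the rules above identify as decisive
    best_re, best_ratio = np.inf, None
    for r in ratios:
        u = reconstruct(lam=r * rho, rho=rho, alpha=alpha)
        re = np.linalg.norm(u - u0) / np.linalg.norm(u0)
        if re < best_re:
            best_re, best_ratio = re, r
    return best_ratio, best_re

# e.g. sweep_lambda_over_rho(my_solver, ground_truth, np.logspace(-5, -4, 6))
```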
In Figure 11 we depict the evolution of RE with λ/ρ and α for different ρ. From the figures, we can see that for each ρ, RE is invariant with changes of α when λ/ρ is fixed.
Comparisons on the computation cost with the box-constrained L1/L2 method
In this section, we explain the computational load of the proposed box-constrained NWATV method and compare it with that of the box-constrained L1/L2 method [23]. To guarantee the convergence of the box-constrained L1/L2 method, both inner and outer loops have to be used, while in the proposed NWATV method a single loop produces a good convergence performance. Note that in each loop the most expensive computation is the calculation in (3.6).
As in [23], the maximum number of inner loops is set to be 5 in this paper, which means that for the same number of outer loops, the computational load is at least 5 times bigger than that of the proposed method. The range of λ is {0.002, 0.004, 0.006, 0.008, 0.01, 0.02, 0.04, 0.06, 0.08, 0.1}. Through the experiments on the Shepp-Logan phantom, walnut and human lung models, we validate that the proposed regularization can reconstruct a more accurate CT image than the most recently developed L1/L2 regularization method. To be precise, the reconstruction time is reduced by more than 8 times while maintaining similar relative errors and structural similarity indices. The proposed method shows advantages especially when the number of sampling angles is less than 60 and the noise level is more than 1%. Numerical simulations also show a good convergence performance of the proposed iterative scheme. Moreover, since the pixel values of digital images are mostly limited to a certain range, it is reasonable to add box constraints in image processing [1,23]. Note that errors/noises in the iteration-based numerical scheme may accumulate with the iterations; the box constraint plays a role in suppressing this accumulation to some extent. Hence, the box constraint can enforce the iteration to converge to the critical point of the functional L, as shown in Figure 4. We note that in the box-constrained L1/L2 method, the authors also observe a similar role of the box constraint [23].
Future works should definitely include the mathematical theory of the convergence of the proposed iterative scheme. Moreover, in this paper, through manual tuning, we list some rules for the selection of the parameters λ, ρ, α. In the future, we could develop an automatic way of selecting the optimal parameters by minimizing a discrepancy function in an admissible set using combinatorial optimization methods. If the noise level δ is known, the discrepancy function could be defined as F(λ, ρ, α) = |∥Au_{λ,ρ,α} − y∥ − τδ| for a given τ > 1 to avoid under-regularization.
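The discrepancy function just defined is a one-liner to evaluate; the sketch below is illustrative only (u is assumed to be the reconstruction obtained for a given parameter triple).

```python
import numpy as np

def discrepancy(A, y, u, delta, tau=1.1):
    # F(lam, rho, alpha) = | ||A u_{lam,rho,alpha} - y|| - tau * delta |
    return abs(np.linalg.norm(A @ u - y) - tau * delta)
```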
If the noise level is unknown, a heuristic discrepancy function such as F(λ, ρ, α) = ∥Au_{λ,ρ,α} − y∥/λ could be used [39]. To minimize the above functions, we need to first select a good initial guess λ_0, ρ_0, α_0, and the optimal parameters could then be obtained by minimizing the discrepancy functional using an alternate direction iteration scheme. Furthermore, the proposed method can be further used in the areas of metal artifact reduction (MAR) in CT reconstruction [40], beam-hardening artifact reduction [41], limited angle artifact reduction [1] and so on.
Figure 1. Schematic diagram of a parallel X-ray beam CT system.
u_0 represents the ground truth image; in the walnut experiment, it represents the CT image reconstructed using full projections and the method of filtered back-projection (FBP), since we do not know the ground truth image in the actual experiment; in the lung image, it represents the actual image from TCIA. Figure 2(a)-(c) illustrate u_0 of the Shepp-Logan phantom, walnut and lung experiments.
Figure 2. Experimental models we used. (a) is the Shepp-Logan phantom model in the numerical experiment, (b) shows the walnut model in the actual CT experiment, which was obtained from the Finnish Inverse Problems Society (FINNISH), and (c) is a clinical lung image provided by the Cancer Image Archive (TCIA).
Figure 3. Effect of the box constraint in the NWATV reconstruction. The grayscale window is [0, 1]. Left: reconstruction result under the box constraint. Right: reconstruction result without the box constraint.
Figure 4. The effect of the box constraint on the convergence of the iteration scheme of NWATV. The left panel shows the RE and the right panel the SSIM.
Figure 5. Reconstruction results using the box-constrained NWATV and the box-constrained L1/L2 methods with the sampling size 362×90. The top row shows the results of the box-constrained L1/L2, while the bottom row shows the results of the box-constrained NWATV. From left to right are, respectively, the reconstructed results with noise levels of 0.5%, 1%, 1.5% and 2%.
Figure 9. Reconstruction results of the human lung image. The first row depicts the reconstructed images using the box-constrained L1/L2 (left) and box-constrained NWATV (right) methods, respectively, for the sampling size of 181×60. The second row illustrates the similar results for the sampling size of 181×30.
Figure 10. The profiles of the reconstructed walnut and human lung images along the dashed lines in the left figures. In the right figures, the red lines illustrate the values of u_0 along the dashed line, the blue lines show the values of the reconstructions using the box-constrained L1/L2 method, while the green lines depict the reconstructions using the box-constrained NWATV method along the dashed lines. The sampling size of the human lung imaging is set to be 181×30.
to minimize the L2 relative error (RE) and obtain the best performance. The number of iterations in NWATV (without box constraint) and the box-
Table 1. Numerical results of the Shepp-Logan model using the NWATV reconstruction with and without the box constraint.
For the smallest number of sampling angles (362×30), as the noise level increases, the box-constrained NWATV has more advantages than the box-constrained L1/L2 in noise removal and detail recovery.
Table 3. Performance of the box-constrained L1/L2 and the box-constrained NWATV regularization for the Shepp-Logan phantom for different sampling sizes and noise levels.
Table 4. Comparisons of the performances of the two box-constrained methods using the walnut data.
Table 5. Comparisons of the performances of the two box-constrained methods using the human lung image. | 2024-03-11T17:57:13.877Z | 2024-03-04T00:00:00.000 | {
"year": 2024,
"sha1": "dea881e2d307823f77c50b53716aff79a8a88b1e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3934/mbe.2024223",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "0a6598cf33c80eb89896de3338b51aa2a6d179d6",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267585273 | pes2o/s2orc | v3-fos-license | Dysfunction in atox-1 and ceruloplasmin alters labile Cu levels and consequently Cu homeostasis in C. elegans
Copper (Cu) is an essential trace element; however, an excess is toxic due to its redox properties. Cu homeostasis therefore needs to be tightly regulated via cellular transporters, storage proteins and exporters. An imbalance in Cu homeostasis has been associated with neurodegenerative disorders such as Wilson's disease, but also Alzheimer's or Parkinson's disease. In our current study, we explored the utility of using Caenorhabditis elegans (C. elegans) as a model of Cu dyshomeostasis. The application of excess Cu dosing and the use of mutants lacking the intracellular Cu chaperone atox-1 and the major Cu storage protein ceruloplasmin facilitated the assessment of Cu status via functional markers, including total Cu levels, labile Cu levels, Cu distribution and the gene expression of homeostasis-related genes. Our data revealed a decrease in total Cu uptake but an increase in labile Cu levels due to genetic dysfunction, as well as altered gene expression levels of Cu homeostasis-associated genes. In addition, the data uncovered the role ceruloplasmin and atox-1 play in the worm's Cu homeostasis. This study provides insights into suitable functional Cu markers and Cu homeostasis in C. elegans, with a focus on labile Cu levels, a promising marker of Cu dysregulation during disease progression.
Introduction
The essential trace element and micronutrient copper (Cu) functions as a catalytic cofactor for a variety of enzymes in biological processes, including mitochondrial respiration and the synthesis of biocompounds (Chen et al., 2022). Cu is widely used in industry and agriculture, both major contributors to soil and water pollution (Li et al., 2021; Vázquez-Blanco et al., 2020). Excess Cu (beyond the physiological need) can be harmful to organisms due to its redox properties and its ability to promote the formation of reactive oxygen species (ROS) (An et al., 2022). In humans, altered Cu levels lead to oxidative stress and consequently can result in the onset of neurodegenerative disorders (Que et al., 2008) as well as cancer (Ge et al., 2022). Therefore, the tightly controlled homeostasis of Cu levels is of importance to cellular and organismal wellbeing (Bisaglia and Bubacco, 2020). Mammals and other organisms are therefore endowed with a complex network of proteins involved in the regulation of Cu homeostasis. These proteins work in concert to coordinate the import, export and intracellular utilization of Cu, thus maintaining cellular levels within a specific range and thereby preventing the consequences of Cu overload (Chen et al., 2022). The reduced form of Cu (Cu+) enters the cell mainly via the high-affinity copper uptake protein (CTR-1), dependent on intracellular Cu levels (Clifford et al., 2016), while oxidized Cu2+ is taken up via the divalent metal transporter 1 (DMT-1) (Shawki et al., 2015) (Figure 1). Antioxidant protein 1 (ATOX-1), a Cu metallochaperone which obtains Cu from CTR-1, engages in the intracellular transport of Cu to target organelles such as the nucleus or golgi (Kamiya et al., 2018). As Cu serves as a cofactor for a variety of mitochondrial enzymes, the cytochrome c oxidase copper chaperone (COX-17) regulates mitochondrial Cu import (Punter et al., 2000). The major Cu carrier in the blood is the multifunctional protein ceruloplasmin, which stores up to 90% of total Cu (Hellman and Gitlin, 2002) and displays ferroxidase activity (Linder, 2016). Furthermore, ceruloplasmin serves as an extracellular scavenger for reactive species and therefore limits oxidative damage (Linder, 2016). Likewise, metallothioneins bind metal ions like cadmium, zinc and Cu for detoxification and protection against oxidative stress (Höckner et al., 2011). Cu excretion is mediated by ATP7B, which either delivers Cu to ceruloplasmin (Weiss et al., 2008) for subsequent elimination via the bile (Prohaska, 2008) or translocates from the golgi to the plasma membrane to efflux Cu via vesicular sequestration (Monty et al., 2005; Cater et al., 2006). To date, serum or plasma Cu status is derived solely by determining total Cu or ceruloplasmin levels (Olivares et al., 2008; Hackler et al., 2020). Cellular copper is partitioned between tightly bound pools in cuproenzymes, which bind copper with Kd values of 10^−15 M and tighter, and labile pools, defined as copper loosely bound to low-molecular-weight ligands with Kd values that are orders of magnitude weaker, typically in the 10^−10 to 10^−14 M range, which can regulate diverse transition metal signaling processes (Banci et al., 2010; Carter et al., 2014; Cotruvo et al., 2015; Hare et al., 2015; Ackerman et al., 2017; Ackerman and Chang, 2018). The labile Cu fraction provides an estimation of Cu activity and may thus serve as a better functional marker than total Cu, as Cu participates in transition metal signaling pathways beyond traditional roles in metabolism (Chang, 2015; Pham and Chang, 2023). Indeed, labile Cu was recently identified as a marker of Cu status in human serum (Schwarz et al., 2023; Tuchtenhagen et al., 2023). This study aimed to further our knowledge regarding Cu homeostasis and dyshomeostasis, with a particular focus on labile Cu levels. This will shed light on the regulatory mechanisms involved when an organism is challenged with an oversupply of total Cu and/or labile Cu, respectively. For this purpose, we use the model organism Caenorhabditis elegans (C. elegans), an in vivo invertebrate model organism suitable to study metal homeostasis and toxicity (Aschner et al., 2017). An additional advantage of using C. elegans is the wide range of available deletion (Δ) mutants. Although the metallomic underpinning of Cu homeostasis in C. elegans shares many homologies with mammals, studies using the nematode in research on Cu homeostasis are scarce (Wakabayashi et al., 1998; Chun et al., 2017; Yuan et al., 2018). Here, we studied Cu dyshomeostasis by excess Cu feeding as well as by using models displaying genetic Cu disbalance, such as the mutant ceruloplasminΔ, which lacks the major Cu storage protein, and an atox-1Δ mutant. Taken together, we define the role of ceruloplasmin and atox-1 in Cu homeostasis and identify suitable functional markers in the model C. elegans.
2 Materials and methods

2.1 C. elegans handling and Cu treatment

C. elegans strain Bristol N2 (wildtype) and the deletion mutant mtl-2(gk125) were obtained from the Caenorhabditis Genetics Center (CGC, Minneapolis, USA), which is funded by the National Institutes of Health Office of Research Infrastructure Programs. Deletion mutants atox-1(tm1220), mtl-1(tm1770) and the ceruloplasmin mutant (tm14205) were obtained from the Mitani laboratory at Tokyo Women's Medical University. Worm strains mtl-1;mtl-2(zs1), Pmtl-1::GFP and Pmtl-2::mcherry (integrated into the genome by Mos1-mediated single-copy insertion (MosSCI)) were generated by the Stephen Stürzenbaum laboratory. Note, the Pmtl-2::mcherry strain contained an additional nuclear localization signal (NLS). All strains were cultivated on agar plates coated with Escherichia coli (E. coli) at 20 °C as previously described (Brenner, 1974). Worms were synchronized as described in (Porta-de-la-Riva et al., 2012) and placed on NGM plates until the L4 larval stage. L4 stage worms were treated with copper-enriched inactivated E. coli (OP50) on NGM plates for 24 h. The bacteria were inactivated for 4 h at 70 °C (Baesler et al., 2021). Stock solutions of CuSO4 (≥99.99%, Sigma Aldrich) were prepared fresh in bidistilled water.

Figure 1. Schematic overview of the assumed intracellular Cu import, distribution, storage and excretion in C. elegans. Cu is taken up primarily as Cu+ via CTR-1 or alternately as Cu2+ via DMT-1. ATOX-1 mediates Cu distribution to the golgi, nucleus or mitochondria via COX-17. GSH and metallothionein are thought to be involved in the chelation of excess Cu, while the majority is stored in ceruloplasmin. The efflux of excess Cu is mediated by ATP7B via vesicular sequestration.
Lethality studies after Cu exposure
For lethality testing, worms were counted manually as alive or dead after 24 h of Cu exposure. Worms were defined as dead if they demonstrated no movement after prodding with a platinum wire.
Total Cu quantification via ICP-OES
Total Cu content in worms was quantified using inductively coupled plasma-optical emission spectrometry (ICP-OES) (Avio 220 Max, Perkin Elmer). Following 24 h Cu exposure, 1000 worms per condition were washed 4x using 85 mM NaCl + 0.01% Tween 20, pelleted by centrifugation, frozen in liquid nitrogen and stored at −80 °C. Pellets were homogenized using an ultrasonic probe (UP100H, Hielscher) and subsequently dried at 95 °C. Yttrium (Y) (ROTI® STAR, Carl Roth) was added as internal standard, and the samples were digested at 95 °C using 500 µL of a 1:1 mixture (v:v) of HNO3 (Suprapur®, Merck KGaA) and H2O2 (for ultratrace analysis, Sigma Aldrich) and re-dissolved in 2% HNO3. The following parameters were used for the measurements: plasma power: 1500 W; cooling gas: 8 L/min; auxiliary gas: 0.2 L/min; nebulizer (MicroMist™) gas: 0.7 L/min; wavelengths: Cu at 327.939 nm and Y at 371.029 nm. The analysis was performed using external calibration (multi-element mix (spectec-645) + Y) and verified by measuring the certified reference materials BCR®-274 (Single cell protein, Institute for Reference Materials and Measurement of the European Commission) and SRM®-1643f (Trace Elements in Natural Water, National Institute of Standards and Technology). The Cu content was normalized to the protein amount determined using a BCA assay (Walker, 1994) with bovine serum albumin (Sigma Aldrich) for external calibration.
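The final normalization step amounts to simple arithmetic; the worked example below is hypothetical (the input values and the internal-standard recovery factor are illustrative, not measured values from this study).

```python
def cu_per_protein(cu_ng_per_ml, final_volume_ml, protein_ug, y_recovery=1.0):
    # Convert an ICP-OES reading (ng/mL) into ng Cu per ug protein; dividing by
    # the yttrium internal-standard recovery corrects for losses during digestion.
    cu_ng = cu_ng_per_ml * final_volume_ml / y_recovery
    return cu_ng / protein_ug

# e.g. cu_per_protein(42.0, 0.5, 50.0, y_recovery=0.98) -> ~0.43 ng Cu / ug protein
```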
Quantification of labile Cu by fluorescent dye CF4
Labile Cu levels were assessed using the fluorescent dye Copper Fluor-4 (CF4), which has an apparent Kd value of 2.9 × 10^−13 M for a 1:1 copper:probe stoichiometry that is well matched to monitor labile Cu pools by reversible Cu binding without depleting the total Cu stores (Xiao et al., 2018). Stock solutions were prepared in DMSO (5 mM). Following Cu treatment, worms were exposed to 10 µM CF4 for 3 h in the dark in incubation buffer (25 mM HEPES, 120 mM NaCl, 5.4 mM KCl, 5 mM glucose, 1.3 mM CaCl2, 1 mM MgCl2, 1 mM NaH2PO4, pH = 7.35, 0.01% Tween 20). Thereafter, fluorescence intensity was assessed by either fluorescence microscopy or plate reader measurement. Worms were transferred to 4% agarose pads on microscope slides and anesthetized with 5 mM levamisole (Sigma Aldrich). Fluorescence images as well as intensities were obtained with a DM6 B fluorescence microscope and the Leica LAS X software (Leica Microsystems GmbH) using a triple band excitation filter, constant settings and constant light exposure times. For plate reader measurements, 400 worms per well were transferred in triplicate into a 96 well plate, while another aliquot was stored for protein measurement. Bottom reads were performed using an Infinite® M Plex microplate reader (Tecan) with wavelengths of 415 nm for excitation and 660 nm for emission.
Cu imaging by ToF-SIMS
Worms were incubated with 2 mM CuSO4, followed by 3x washing steps with 85 mM NaCl and 3x washing steps with Rotipuran Ultra (Carl Roth). Subsequently, about 20 worms per strain were transferred to indium tin oxide (ITO) coated glass slides. In order to locate the 3-dimensional Cu distribution in non-fluorescent-labeled worms, ToF-SIMS 3D depth profiling analysis was performed using an IONTOF "ToF.SIMS 5". Sputtering was performed using an O2+, 2 keV ion beam with a maximum current of 650 nA rastered across 700 × 700 μm². Analysis was performed using a Bi+, 30 keV, 0.5 pA ion beam in spectrometry mode, rastered across 500 × 500 μm² within the center of the sputter crater. Secondary ions of positive polarity were mass analyzed.
Gene expression via quantitative real-time PCR analysis
Total RNA was isolated using the Trizol method published by Bornhorst et al. (2014), of which 1 µg was transcribed using the High Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Thermo Fisher Scientific) following the manufacturer's protocol. Quantitative real-time PCR was carried out on the AriaMx Real-Time PCR System in duplicate wells for each gene using TaqMan Gene Expression Assay probes (Applied Biosystems, Thermo Fisher Scientific) according to the manufacturer's instructions. The AFDN homolog afd-1 was used as housekeeping gene for normalization by the comparative 2^−ΔΔCt method (Livak and Schmittgen, 2001). The probes used were: afd-1
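The comparative 2^−ΔΔCt calculation reduces to a few lines; the sketch below uses hypothetical Ct values purely for illustration.

```python
def fold_change(ct_target, ct_hk, ct_target_ctrl, ct_hk_ctrl):
    # Comparative 2^-ddCt method (Livak and Schmittgen, 2001):
    # dCt normalizes the target gene to the housekeeping gene afd-1,
    # ddCt then compares the treated sample against the untreated control.
    ddct = (ct_target - ct_hk) - (ct_target_ctrl - ct_hk_ctrl)
    return 2.0 ** (-ddct)

# e.g. fold_change(22.1, 18.0, 23.4, 18.1) -> ~2.3-fold upregulation
```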
Metallothionein expression
To assess metallothionein expression, the Pmtl-1::GFP and Pmtl-2::mcherry transgenes were used. After Cu treatment and 4x washing steps with 85 mM NaCl + 0.01% Tween 20, excess liquid was aspirated to yield 1600 worms in 400 µL. 3 × 100 µL were transferred as triplicates into a 96 well plate; the remaining 100 µL were used for protein quantification. Bottom read measurements were conducted at 488 nm (excitation) and 509 nm (emission) for GFP-tagged worms and at 561 nm (excitation) and 610 nm (emission) for mcherry-tagged worms using an Infinite® M Plex microplate reader (Tecan, Switzerland). Additionally, worms were transferred to 4% agarose pads on microscope slides, followed by anesthesia using 5 mM levamisole (Sigma Aldrich). Images were taken using a Leica DM6 B fluorescence microscope (Leica Microsystems GmbH) with constant settings and light exposure time.
Statistical analysis
Statistical analyses were carried out with GraphPad Prism 6 (GraphPad Software, La Jolla, CA, USA). Statistical tests and significance levels are listed in the figure captions.
Lethality after Cu exposure
Lethality testing following 24 h Cu exposure revealed no toxic effect up to 2 mM in wildtype worms, while atox-1Δ and ceruloplasminΔ deletion mutants presented a significant reduction in survival of about 10% after 2 mM Cu treatment (Figure 2). During lethality testing we noticed that worms exposed to 2 mM Cu started to display a shortened and thinner phenotype, which indicated the onset of a developmental delay. Concentrations above 2 mM were not considered, since worms were previously shown to avoid higher amounts of Cu, as described in Guo et al. (2015) and Munro et al. (2020).
Total Cu vs. labile Cu levels
Following 24 h of treatment with CuSO4-enriched E. coli up to 2 mM, total Cu levels of wildtype, atox-1Δ and ceruloplasminΔ deletion mutants were quantified by ICP-OES (Figure 3A). Basal Cu levels were indistinguishable across all three worm strains, with 0.42 ± 0.05 ng Cu per µg protein in wildtype worms. In addition, a concentration-dependent increase in Cu levels was observed for all strains. However, mutants with impaired Cu homeostasis displayed significantly lower total Cu levels than wildtype worms, in particular the ceruloplasmin-deficient worms. Labile Cu levels were determined using the fluorescent dye CF4 (Figure 3B). Labile Cu levels tended to be elevated following Cu treatment of wildtype and atox-1Δ worms; furthermore, a higher basal level of labile Cu was observed in untreated ceruloplasmin-deficient worms. In general, labile Cu levels appeared to be higher in worms characterized by a disturbed Cu homeostasis (Figures 3C-E).
Cu imaging and depth profiling by ToF-SIMS
The location of Cu in worms exposed to 2 mM Cu for 24 h was investigated by Time-of-Flight Secondary Ion Mass Spectrometry (ToF-SIMS). 3D depth profiles were created to determine the Cu distribution in relation to the worms' depth.

In the "dual-beam mode" of ToF-SIMS depth profiling, each sample surface was continuously sputtered by one ion beam (O2+), while a second ion beam (Bi+) was used to image the respective intensity of Cu in the resulting crater surface (Figure 4A). Subsequently, the lateral distribution over the total sputtered depth (excluding the first sputter seconds, in order to exclude surface contaminants) as well as the regional depth profile at the worm positions were calculated from the ToF-SIMS raw data stream. Images of the isotopes 63Cu+ and 65Cu+ (Figure 4A) were comparable in distribution. Figure 4A shows that the highest Cu intensity is located in the middle part of the worm corpus for all three strains. The highest Cu intensity was detected in wildtype worms, whereas the ceruloplasmin-deficient worms demonstrated the lowest Cu intensity (Figure 4B).
Gene expression of Cu homeostasis-related genes upon Cu exposure
The relative mRNA levels of Cu transport- and storage-related genes were determined via RT-qPCR in wildtype and mutant worms treated with Cu for 24 h (Figure 5). Target genes were the Cu importer ctr-1 (ortholog of the human high-affinity copper uptake protein 1 encoded by SLC31A1), the cytochrome c oxidase copper chaperone cox-17, the intracellular transporter atox-1, atp7a/b (ortholog of human ATP7A and ATP7B) and the storage-related genes ceruloplasmin, mtl-1 and mtl-2 (orthologs of the human metallothioneins MT1A and MT2A). In wildtype worms, Cu treatment resulted in an upregulation of ctr-1, while atox-1Δ worms already displayed elevated basal levels. Expression of the mitochondrial Cu importer cox-17 was elevated in atox-1Δ deletion mutants at the basal level as well as following Cu exposure. Atox-1 mRNA levels did not increase due to Cu exposure but were altered in ceruloplasminΔ worms. Mammalian genomes encode two Cu exporter isoforms (ATP7A and ATP7B), whilst C. elegans carries only a single atp7a/b gene, albeit with high sequence similarity to the human homologs (Chun et al., 2017). Cu treatment led to an increase in atp7a/b mRNA levels in wildtype worms, and these levels were already significantly elevated in both untreated deletion mutants. Gene expression of ceruloplasmin was amplified by Cu treatment; in addition, atox-1Δ worms displayed significantly higher levels in untreated controls compared to wildtype worms. mRNA levels of mtl-1 were significantly reduced by about 90% in wildtype worms upon treatment with 2 mM Cu. The basal level of mtl-1 was lower in atox-1Δ and ceruloplasminΔ deletion mutants (compared to wildtype), and exposure to 2 mM Cu lowered mtl-1 gene expression further. The expression of mtl-2 increased at the low-level exposure (0.5 mM Cu) but was reduced at the higher exposure concentration (2 mM); this trend was observed in wildtype and the two deletion mutants, but the expression levels were notably higher in the atox-1Δ mutant.
Metallothionein expression and alterations of Cu uptake in mtl-KO mutants
Since Cu oversupply resulted in a decrease of mtl-1 and mtl-2 expression, the involvement of metallothionein in Cu homeostasis was further investigated. Therefore, single knockout mutants of mtl-1 (mtl-1(tm1770)) and mtl-2 (mtl-2(gk125)), as well as the double knockout mutant mtl-1;mtl-2(zs1), were incubated with Cu as described, and total Cu levels were determined by ICP-OES. The results revealed a concentration-dependent Cu uptake for all tested strains; however, mtl-1KO (mtl-1(tm1770)) worms displayed significantly less Cu uptake after 2 mM CuSO4 treatment (Figure 6A), and also lower levels of other trace elements (Supplementary). Although mRNA is required for protein synthesis, it does not follow that mRNA levels and protein induction levels are universally proportional to each other (Buccitelli and Selbach, 2020). Consequently, we investigated the induction of mtl-1 and mtl-2 using the fluorescence-tagged transgenes Pmtl-1::GFP and Pmtl-2::mcherry, generated by the Mos1-mediated single-copy insertion (MosSCI) technique, the latter modified to contain a nuclear localization signal (NLS). Fluorescence plate reader measurements revealed a marginal increase in mtl-1 expression, but mtl-2 levels remained, at large, unaffected by Cu exposure (Figure 6B), which was also visualized by fluorescence microscopy (Figure 6C).
Discussion
Cu is an essential trace element, serving as an enzyme cofactor due to its redox properties (Chen et al., 2022). In excess, however, Cu can promote adverse health effects, which are mainly caused by the excessive formation of reactive oxygen species at the cellular level (Song et al., 2014). Excess Cu, beyond the homeostatic range, has been linked to the onset of numerous neurodegenerative diseases, foremost Wilson's disease (WD) (Shribman et al., 2021; Squitti et al., 2023). It is therefore important to have mechanisms in place that allow an efficient regulation of Cu homeostasis; it is crucial to better understand how Cu homeostasis is balanced and to characterize these regulatory mechanisms. Two key players are ceruloplasmin and atox-1, and the consequences of their loss of function should be investigated. In addition, suitable markers and new tools to assess Cu status are needed, and the nematode C. elegans is a powerful model to address these shortcomings.
Others have demonstrated that high doses of Cu can result in cellular toxicity in different modes of application (Chun et al., 2017; Yuan et al., 2018; Zhang et al., 2021). Our study focused on metal homeostasis and investigated physiological endpoints rather than toxicology. Accordingly, Cu was applied via E. coli on agar plates up to 2 mM for a 24 h duration, which did not majorly impact lethality rates. Having said that, mutants with disturbed Cu homeostasis presented a reduced survival rate of about 10% and are consequently Cu-hypersensitive. In addition, concentrations above 2 mM were avoided, as worms move away from the exposed E. coli and starve (Guo et al., 2015; Munro et al., 2020). Cu2+, as used in our study, is reduced to Cu+ by a yet unknown reductase in C. elegans and subsequently taken up by the importer CTR-1. The transcription of ctr-1 increased in wildtype worms exposed to 0.5 mM Cu, which is in contrast to observations made by Clifford et al. (Clifford et al., 2016); however, ctr-1 expression was not modulated in worms challenged with the higher dosage of Cu (2 mM). Total Cu uptake increased in a concentration-dependent manner (Chun et al., 2017; Yuan et al., 2018), yet significantly less so in the atox-1 and ceruloplasmin deletion mutants, suggesting that these mutants are characterized by an altered storage capacity. Factors that may further contribute to a disturbed homeostasis include a reduced influx, an increased efflux or a lack of sufficient storage capacity, or a combination thereof. Li et al. report normal Cu levels in the cerebral cortex and hippocampus of ceruloplasmin-KO mice (Li et al., 2022). The brain is, after the liver, the organ with the highest Cu occurrence (An et al., 2022). Consequently, we investigated whether Cu accumulates in specific areas of the worm. ToF-SIMS analysis revealed a universal distribution of Cu across the worm body, but it should be noted that neurons are present not only in the head region but over the entire body of the worm (Gendrel et al., 2016). Even though ToF-SIMS analysis goes further than microscopy, as an additional depth profile analysis is included, the resolution is not sufficient to localize Cu within a cell (subcellularly). Therefore, future studies should focus on neuronal cells by using techniques such as NanoSIMS (Nano Secondary Ion Mass Spectrometry) (Witt et al., 2020). With respect to the total Cu amount, the ToF-SIMS results matched our ICP-OES data, where the highest Cu concentrations were measured in wildtype worms and the lowest in ceruloplasmin-deficient worms following a 24 h treatment with 2 mM Cu (Figure 7). Studies in ATOX-1KO mice and cell culture revealed a disturbed Cu homeostasis (Hamza et al., 2001; Hamza et al., 2003). Furthermore, Zhang et al. described the phenotype of a C. elegans atox-1KO model in the form of reduced brood size and distal tip cell migration defects (Zhang et al., 2020). However, data on the Cu status were lacking, which are critical for the evaluation of Cu toxicity. In humans, the highest mRNA levels of ATOX-1 in the brain were detected in the cerebral cortex and hippocampus, with elevated ATOX-1 activity due to increased Cu levels (Lutsenko et al., 2010). Moreover, ATOX-1 is thought to possess antioxidative properties (Lutsenko et al., 2010), as increased endogenous ATOX-1 levels protect against oxidative stress and promote neuronal survival (Kelner et al., 2000). In our study, cox-17 expression was elevated in the atox-1Δ deletion mutant, which might indicate an increased Cu transport into the mitochondria.

Figure 7. Schematic overview of the changes in the bioavailability and expression of genes responsible for Cu homeostasis in C. elegans. Displayed are changes in wildtype worms (left) vs. mutant worms (atox-1Δ or ceruloplasminΔ) (right). Up- and downregulation of mRNA levels by excessive Cu feeding are indicated by green arrows, while differences in basal levels due to genetics compared to wildtype worms are indicated by smaller or larger font size.
Whilst atox-1 participates in intracellular Cu distribution, ceruloplasmin is the major Cu storage protein, responsible for the binding of 90% of total Cu (Hellman and Gitlin, 2002). Genetic loss of ceruloplasmin can lead to the autosomal recessive disorder "aceruloplasminemia", which is characterized by progressive neurodegeneration (Kono, 2012). Elevated Cu or labile Cu levels are not the only concern in aceruloplasminemia observed in aging worms (Muchenditsi et al., 2021). Due to ceruloplasmin's ferroxidase activity, it is essential for iron (Fe) oxidation during cellular export, resulting in cellular Fe accumulation in aceruloplasminemia (Miyajima, 2015). Despite its importance as a Cu storage protein in mammals, to date no research has focused on the role of ceruloplasmin in Cu homeostasis in C. elegans. Our data revealed that Cu levels were altered by Cu supplementation in ceruloplasmin-deficient worms, but Fe levels seem to be unaffected in this mutant compared to wildtype worms (Supplementary). In addition to neurodegeneration, obesity and steatosis have been reported in ceruloplasmin-KO mice (Raia et al., 2023), highlighting that ceruloplasmin is essential for Cu and Fe homeostasis. Our data revealed that Cu induced ceruloplasmin mRNA expression in wildtype and atox-1 deletion mutants (Figure 7), possibly due to the excretion of excess Cu bound to ceruloplasmin. In addition, excess Cu is excreted by atp7b, which in humans, among other roles, participates in providing Cu to ceruloplasmin (Prohaska, 2008).
In mammals, the two major functions of ATP7B are the supply of Cu to ceruloplasmin in the golgi and the excretion of excess Cu into the bile (Weiss et al., 2008). ATP7B translocates to the plasma membrane, which enables the efflux of excess Cu in the form of vesicular sequestration (Monty et al., 2005; Cater et al., 2006). This aligns with our data, where atp7a/b mRNA levels were elevated in wildtype worms following Cu treatment. Similar observations were made by Li et al. after Cu treatment (Li et al., 2021). The notion that ATP7B is essential for a properly functioning Cu homeostasis is supported by experiments in ATP7B-KO models (Muchenditsi et al., 2021). Our data reveal an increase of atp7a/b mRNA levels due to Cu treatment in wildtype worms, but further display that untreated atox-1 and ceruloplasmin deletion mutants already exhibit elevated atp7a/b levels (Figure 7). Interestingly, Cu treatment does not increase atp7a/b further in these mutants. The fact that atox-1Δ and ceruloplasminΔ mutants demonstrate greater atp7b expression but lower total Cu levels compared to wildtype worms is unexpected. This suggests that one should not focus exclusively on total Cu levels, but also on labile Cu levels, which differ among the worms used in this study and are notably increased in the atox-1 and ceruloplasmin deletion mutants.
Traditionally, Cu status is assessed by measuring serum or plasma total Cu and ceruloplasmin levels (Olivares et al., 2008; Hackler et al., 2020), whereas for WD diagnosis a liver biopsy is required (Mohr and Weiss, 2019). Besides ceruloplasmin protein levels, its enzyme activity and mRNA level also affect the maintenance of Cu homeostasis (Ranganathan et al., 2011). Furthermore, labile Cu has recently emerged as a marker of Cu status, as it is assumed to be readily bioavailable and to reflect Cu activity more accurately than total Cu (Dodani et al., 2014; Kardos et al., 2018). Our data reveal that atox-1Δ and ceruloplasminΔ mutants displayed reduced total Cu levels compared to wildtype worms following Cu treatment, as well as severe alterations of Cu homeostasis, e.g., increased mRNA levels of atp7a/b. This could be linked to elevated levels of labile Cu. Nevertheless, relying on labile Cu levels alone is currently not considered sufficient, due to the complexity of the analysis and the lack of available methodologies (Cotruvo et al., 2015; Xiao et al., 2018; Quarles et al., 2020; Pezacki et al., 2022). Having said that, in combination with total Cu and ceruloplasmin measurements, the analysis of labile Cu promises to be a valuable and powerful tool to assess Cu status, and thus the risk or diagnosis of Cu dyshomeostasis-related diseases (Shribman et al., 2021). Squitti et al. observed a subpopulation of patients diagnosed with Alzheimer's disease (AD) displaying higher than normal non-ceruloplasmin-bound Cu in serum, similar to WD patients, stating that labile Cu identifies a Cu subtype of AD (CuAD) (Squitti et al., 2021; Squitti et al., 2023). They further hypothesize that Cu dyshomeostasis results in a shift of protein-bound metal pools to labile metal pools, which is associated with the loss of energy production but also altered antioxidant function (Squitti et al., 2021). Labile Cu pools are increased even by physiological Cu amounts in different brain cells (Lee et al., 2020), which can exert neurotoxicity (Ugarte et al., 2013). Elevated Cu levels promote the formation of reactive oxygen species, lipid peroxidation, apoptosis and a decreased mitochondrial membrane potential, leading to oxidative stress (Song et al., 2014; Li et al., 2021). Borchard et al. report that labile Cu is cytotoxic, with mitochondria as a vulnerable target, which in turn can be avoided by Cu chelation (Borchard et al., 2022). Several studies reveal that Cu chelation reduces or even prevents Cu-mediated toxicity (Lichtmannegger et al., 2016; Yuan et al., 2018; Yuan et al., 2022), while studies in WD models demonstrate that chelation lowers systemic Cu levels into the homeostatic range as a possible therapeutic approach (Brady et al., 2014; Müller et al., 2018; Einer et al., 2023). Chelator therapy also suggests that Cu toxicity is mainly mediated by labile Cu rather than by protein-bound Cu such as that in ceruloplasmin. Local concentrations of Cu as well as the cellular distribution of Cu transporters, storage and excretion proteins are important to maintain a steady state (An et al., 2022) and the tight regulation of Cu homeostasis, as dyshomeostasis is associated with the pathogenesis of neurodegenerative diseases such as WD, but also AD (Mezzaroba et al., 2019; Bisaglia and Bubacco, 2020; Chen et al., 2022).
Metallothionein binds excess metal ions to maintain homeostasis; thus, the downregulation of mtl-1 mRNA levels after Cu treatment (Figure 7) seems surprising, but supports Zhang et al., who also report a reduction of metallothionein mRNA levels in Cu-exposed C. elegans (Zhang et al., 2021). In contrast, Tapia et al. were not able to identify alterations in metallothionein mRNA levels in rat fibroblast cells (Tapia et al., 2004), which suggests the presence of tissue- or species-specific differences in metallothionein transcription. C. elegans metallothioneins seem to be strongly induced by other trace elements, such as zinc (Polak et al., 2014; Baesler et al., 2021) as well as cadmium (Hughes et al., 2009; Zeitoun-Ghandour et al., 2010; Essig et al., 2023). Notably, mRNA levels are not necessarily proportional to protein levels (Buccitelli and Selbach, 2020); this also applies to metallothionein (Michaelis et al., 2022). Our data suggest slight changes in mtl-1, but not mtl-2, expression. Since mtl-2 is present in larger quantities in C. elegans (Zeitoun-Ghandour et al., 2010; Höckner et al., 2011), alterations in mtl-1 are marginal with respect to total metallothionein levels. Nevertheless, mtl-1 still seems to be important for metal uptake, as mtl-1 knockouts take up less Cu and other trace elements in our as well as in previous studies (Tapia et al., 2004; Baesler et al., 2021). Additionally, metallothionein protects against Cu-induced neurotoxicity (Petro et al., 2016).
In summary, we were able to uncover that ceruloplasmin and atox-1 play a key role in Cu homeostasis in C. elegans. ICP-OES and ToF-SIMS analyses revealed that total Cu levels were reduced in the ceruloplasmin and atox-1 deletion mutants compared to wildtype worms; in contrast, these mutants displayed increased levels of labile Cu. Furthermore, ToF-SIMS analysis is a powerful tool applied in this study, enabling for the first time a 3D Cu localization in worms. Accordingly, genetic Cu dyshomeostasis and Cu oversupply can result in a shifted ratio of total Cu vs. labile Cu pools. The dyshomeostasis is further reflected by an altered gene expression of crucial participants in Cu homeostasis, such as atp7a/b, atox-1, ceruloplasmin and metallothionein. As demonstrated here, labile Cu is a potential marker of Cu status in the organism C. elegans. Taken together, the C. elegans genome encodes a suite of evolutionarily conserved genes involved in Cu homeostasis and thus serves as an exquisite model to study Cu dyshomeostasis linked to neurodegenerative diseases. However, some aspects remain unanswered and require further investigation, such as the mechanistic regulation of atox-1 and ceruloplasmin in C. elegans. Our study presents early observations of a defective Cu homeostasis in C. elegans, but also reveals a lack of knowledge of the underlying mechanisms due to their complexity, which should be addressed in future studies. Altogether, Cu status should be monitored by multiple functional markers, including total Cu, labile Cu as well as the gene expression of Cu homeostasis-related genes, in order to provide the specificity and sensitivity to detect potential alterations in Cu homeostasis.
FIGURE 2. Lethality in wildtype worms, atox-1Δ and ceruloplasminΔ deletion mutants following an exposure to Cu for 24 h at the age-synchronized L4 larval stage. Data presented are mean values of n ≥ 4 experiments ± SEM. Statistical analysis using 2-way ANOVA with Tukey's multiple comparison. Significance level with α = 0.05: §: p ≤ 0.05 compared to wildtype in the same condition.
FIGURE 4. ToF-SIMS analysis of wildtype, atox-1Δ and ceruloplasminΔ worms following 2 mM Cu treatment for 24 h. (A) Distribution of 63Cu+ and 65Cu+ over the total sputtered depth (excluding the first 250 sputter seconds). (B) Depth distribution of 63Cu+ for all three worm strains (O2+ sputtering), covering approximately half of the worm's depth. | 2024-02-11T16:36:31.982Z | 2024-02-08T00:00:00.000 | {
"year": 2024,
"sha1": "e6cce67a4cb00506a2bf7946ba381ff3338a7619",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmolb.2024.1354627/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a03809d671b3f5ddcb4e47135b5b38c511672df2",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231963620 | pes2o/s2orc | v3-fos-license | The socioeconomic distribution of alcohol-related violence in England and Wales
Inequalities in alcohol-related health harms have been repeatedly identified. However, the socioeconomic distribution of alcohol-related violence (violence committed by a person under the influence of alcohol)–and of subtypes such as alcohol-related domestic violence–remains under-examined. To examine this, data are drawn from nationally representative victimisation survey, the Crime Survey for England and Wales, from years 2013/14 to 2017/18. Socioeconomic status specific incidence and prevalence rates for alcohol-related violence (including subtypes domestic, stranger, and acquaintance violence) were created. Binomial logistic regressions were performed to test whether the likelihood of experiencing these incidents was affected by socioeconomic status when controlling for a range of pre-established risk factors associated with violence victimisation. Findings generally show lower socioeconomic groups experience higher prevalence rates of alcohol-related violence overall, and higher incidence and prevalence rates for alcohol-related domestic and acquaintance violence. Binomial logistic regression results show that the likelihood of experiencing these types of violence is affected by a person’s socioeconomic status–even when other risk factors known to be associated with violence are held constant. Along with action to address environmental and economic drivers of socioeconomic inequality, provision of publicly funded domestic violence services should be improved, and alcohol pricing and availability interventions should be investigated for their potential to disproportionately benefit lower socioeconomic groups.
Introduction
"Inequalities are a matter of life and death, of health and sickness, of well-being and misery" [1 p. 16] The association between alcohol consumption and violence perpetration is long recognised [2][3][4], with some meta-analyses and longitudinal studies suggesting this may be a causal relationship [2,5]. National statistics from the Crime Survey for England and Wales (CSEW) in 2017/18 support these findings, with perpetrators in almost two of every five (39%) violent crimes reported by victims as being under the influence of alcohol [6]. However, despite identification of socioeconomic inequalities in alcohol-related health harms [7][8][9], the socioeconomic distribution of alcohol-related violence (defined as 'violence committed by a person under the influence of alcohol' throughout unless otherwise noted) remains under-examined.
Is low socioeconomic status a risk factor for alcohol-related violence victimisation?
There are three main reasons we might suspect lower socioeconomic status (SES) to be a risk factor for alcohol-related violence. Firstly, whilst alcohol's various cognitive effects may make violence perpetration more likely [3], sociological work has repeatedly shown that individuals' responses to intoxicants-including alcohol-can be affected by their surroundings and social context [10][11][12]. Settings for drinking occasions will likely differ between socioeconomic groups (for example, off-license premises are more densely clustered in lower SES areas [13]) and it is possible we will see different levels of violence experienced by these groups because of this. Secondly, people in lower socioeconomic groups are more likely to be victims of violence overall [14,15]. This was demonstrated recently in analysis of data from the British Crime Survey (now CSEW) between 2002/03 and 2007/08 showing lower household income to increase risk of violent victimisation [16]. The uneven distribution of violence overall suggests we might see similar inequality in alcohol-related violence victimisation. Finally, a few studies have found a relationship between socioeconomic status and alcohol-related violence, though they disagree whether advantaged or disadvantaged people are at greater risk. Home Office research using the nationally representative British Crime Survey (now CSEW) examined incidents of alcohol-related assaults in the years 1996, 1998 and 2000 and found that rates of alcohol-related stranger assault ("in which the victim did not know any of the offenders" [17 p. 4]) and alcohol-related acquaintance assault ("in which the victim knew one or more of the offenders at least by sight (excluding partners, ex-partners, household members and other relatives)" [17 p. 4]) were higher for unemployed adults; those on low household incomes were also found to experience the highest rate of alcohol-related acquaintance assault [17]. Similarly, Scottish hospital data from the early 2000s has shown that alcohol-related facial injuries "disproportionately [affect] young men from socioeconomically deprived areas" [18 p. 644]. Further, in an analysis of alcohol's harms to others in Wales, a significant association between the regional deprivation of a respondent and experience of violent harm as a result of another's drinking was identified [19]. However, analysis of Australian police data showed higher SES neighbourhoods experienced higher rates of alcohol-related crime, including violence, sexual assault, criminal damage, and anti-social behaviour [20]. Further, the British Crime Survey analysis already discussed found those on higher household incomes experienced the highest rate of alcohol-related stranger assault; private renters were also found to experience higher rates of alcohol-related stranger and acquaintance assaults when compared to social renters [17]. The Australian study seems less reliable than the others because it uses police-recorded crime statistics (which the criminology literature regards as less reliable than victimisation survey and hospital admissions data [21]) and its subnational sample of only rural communities may not be generalisable. Mixed findings to date mean we cannot yet confidently conclude that lower socioeconomic status is a risk factor for alcohol-related violence.
Further, while some work already discussed has presented findings regarding alcohol-related stranger and acquaintance violence [17], work has yet to disaggregate alcohol-related domestic violence from other subtypes. Population level studies have repeatedly linked alcohol consumption levels to the rates of many subtypes of violence [22], including domestic violence [23], and strong associations have been identified between alcohol consumption by perpetrators and specific types of violence including domestic [23,24] and stranger violence [17,25,26]. Indeed, an evidence summary of meta-analysis and case-control studies of domestic violence and alcohol use concluded alcohol to be a "contributing cause of violence. . . [contributing] to violence in some people under some circumstances" [23 pp. 423-424]. Failure to disaggregate all of these violence subtypes is not only a substantial limitation, as under-counting of domestic violence incidents has been shown to have distorted official crime trends in recent years [27], but there are reasons why we might suspect the patterns of victimisation across SES groups to vary between violence subtypes. While some have attempted to create all-encompassing theories of violence (e.g. [28]), subtypes of violence are generally recognised to occur in distinct contexts and to have some unique drivers (of which alcohol is just one potential contributory factor). For example, stranger violence is more likely than domestic violence to occur in night-time economy settings in which large volumes of intoxicated individuals cluster together [17,29]. As already touched upon, evidence suggests a varied propensity of those from different socioeconomic groups to drink (differently) in different contexts (e.g. given that off-trade premises are more densely clustered in lower SES neighbourhoods [13] or that 'pre-drinking' patterns in home settings are affected by a person's SES and motives [30]). Thus, a person's SES could have diverse relationships with these different forms of violence. This is supported by research examining a nationally representative survey on offending behaviour in England and Wales which found "favoring drinking heavily in pub settings" to be associated with both alcoholrelated violence perpetration and lower SES [31 p. 1727].
Finally, a wide range of other risk factors for violence and alcohol-related violence have been identified in previous research, such as sex [14], age [14,29], attendance of night-time economy spaces [32], and disability [33]. Some such risk factors for violent victimisation may themselves be associated with socioeconomic status. For example, people who live in urban areas are more likely to be victims of violence than those in rural areas [34]-urban areas also have a higher percentage of households living on low incomes [35]. If we hope to design policy action to address socioeconomic inequalities in alcohol-related violence, we need to understand if SES impacts upon the probability of alcohol-related violence once such factors have been accounted for.
Design
This study combines five waves of data drawn from the Crime Survey for England and Wales for years 2013/14 to 2017/18 [36][37][38][39][40], employing a cross-sectional between-subjects design to:
a. create and compare prevalence rates (the percentage of people who experienced a given crime in a year) and incidence rates (the number of incidents of a given crime in a year per 1000 people) of alcohol-related violence overall, as well as alcohol-related domestic, stranger and acquaintance violence, for different socioeconomic groups; and
b. perform binomial logistic regression analyses to assess whether any relationship identified persists once a range of other risk factors associated with violence is accounted for.
This survey is nationally representative of the household population in England and Wales, and is administered face-to-face annually to more than 35,000 adults identified from a random sample of addresses (a further detailed description of the sampling strategy employed can be found at [41]). Respondents are asked about their victimisation within the last 12 months, as well as for information on their employment, income, and housing. Response rates to this survey have remained between 70% and 75% since the 2008/09 release [42].
Procedure
Each year, information from respondents to the Crime Survey for England and Wales is held across two datasets: the Victim Form and the Non-Victim Form datasets. Each row of the Non-Victim Form dataset contains information on an individual respondent, such as measures of their socioeconomic status. Each row of the Victim Form dataset contains detail on an individual instance of crime or a series of instances of the same crime, including whether the incident was violence and, if so, what type of violence it was (domestic, stranger, or acquaintance); Table 1 shows which variables relating to this work are contained in each dataset. In order to analyse details of a crime and its victim together, these datasets are merged. This is achieved by appending respondent characteristics (from the Non-Victim Form dataset) to the incident or crime series data (in the Victim Form dataset) via a unique 'Case identifier' contained in each. Using this method, all records were matched accurately without duplication.
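A minimal sketch of this merge in pandas, purely for illustration: the file and column names ('victim_form.csv', 'non_victim_form.csv', 'caseid') are assumptions, not the actual CSEW variable names, and the real releases are distributed as SPSS files.

```python
import pandas as pd

# One row per incident or crime series.
victim = pd.read_csv("victim_form.csv")
# One row per respondent, carrying the SES measures.
non_victim = pd.read_csv("non_victim_form.csv")

# Append respondent characteristics to each incident record via the
# shared case identifier; validate="m:1" asserts that each incident
# matches exactly one respondent, guarding against duplication.
merged = victim.merge(non_victim, on="caseid", how="left", validate="m:1")
```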
Further, given the relatively rare nature of violent events, a large sample was required in order to assure sufficient cases of violence for analysis. To this end, data were pooled from five years in order to increase the reliability and accuracy of any results. The final sample thus totalled 174,178 respondents, including 1398 incidents (unweighted) of alcohol-related violence. The weighting variables used ensure the sample is nationally representative, by (amongst other things) "[compensating] for unequal address selection probabilities" as well as "[adjusting] for differential non-response" [41 p. 97], and allow estimates of how many victims and incidents of each kind there were across the whole population. Further details of the full weighting procedure are available in the CSEW User Guide [41]. Previously, such weighting included a cap of five on the incidents of one kind that could be reported by a respondent, as a method to remove outliers. This led to undercounting of violence-particularly domestic violence-and a method developed by Walby, Towers, and Francis [27] was needed to remove this capping and more accurately count these incidents. However, this undercounting has since been addressed through changes to the weighting variables, using "the 98th percentile of victim incident counts for each crime type (calculated over several years)" to cap repeat victimisation reports, avoiding the undercounting problem potentially encountered by previous work of this kind [41 p. 14].
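To make the capping rule concrete, a toy pandas sketch is given below. It is a simplification: the ONS computes the 98th-percentile cap per crime type over several pooled years, whereas this example caps a single series of per-respondent counts.

```python
import pandas as pd

def cap_repeat_victimisation(counts: pd.Series) -> pd.Series:
    """Cap per-respondent incident counts at the 98th percentile."""
    cap = counts.quantile(0.98)
    return counts.clip(upper=cap)

# A respondent reporting 40 incidents of one crime type is pulled
# down to the 98th-percentile value; all other counts are unchanged.
incidents = pd.Series([1, 1, 2, 3, 1, 40])
print(cap_repeat_victimisation(incidents))
```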
Measures
Violence. Respondents of the CSEW are asked about their experiences of a range of crimes-including violence-in the 12 months prior to the interview. Incidents of violence are described to interviewers through a series of questionnaire items. Wounding ("the incident results in severe or less serious injury, for example, cuts, severe bruising, chipped teeth, bruising or scratches requiring medical attention or any more serious injuries" [41 p. 44]), assault with minor injury ("an incident where the victim was punched, kicked, pushed or jostled and the incident resulted in minor injury to the victim, for example, scratches or bruises" [41 p. 44]) and violence without injury ("an incident (or attempt) where the victim was punched, kicked, pushed or jostled but resulted in no injury" [41 p. 44]) are coded by trained crime survey coders [41] as domestic violence (incidents "that involve partners, ex-partners, other relatives or household members" [41 p. 44]), stranger violence (incidents "in which the victim did not have any information about the offender(s), or did not know and had never seen the offender(s) before" [41 p. 44]), or acquaintance violence (incidents "in which the victim knew one or more of the offenders, at least by sight; it does not include domestic violence" [41 p. 44]). Respondents are also asked whether they believed their perpetrators were under the influence of alcohol (the full questionnaire is published with the survey data annually [36][37][38][39][40]). The variable 'Whether offender was under the influence of drink' indicates if an incident was alcohol-related (Table 2), and the variable 'CSEW Type of violence' indicates whether an incident or series of crimes was violent, and whether it was classed as domestic, stranger or acquaintance violence (Table 3). While it is recognised that domestic violence can comprise other elements beyond physical harm (e.g. verbal and psychological harms), the measure used in this work is limited to physical violence.
Socioeconomic status. Three household and individual level variables are used here with which to explore SES: total household income; housing tenure; and occupation of respondent.
Note to Table 2: Base (n = 15315, unweighted) = sub-sample of the victim form sample; the item was presented to participants for the first three incidents or series of incidents they described only, and excludes incidents where the victim was unable to comment on the perpetrator, or where the perpetrator was 10 years of age or younger (n = 35722, unweighted, 70.0% of victim form sample). In analysis, those responding 'Don't know' were also marked as missing.
These individual and household measures have previously been successfully deployed in analysis of violence and SES [43,44]. The limitations of using such indicators in isolation have been demonstrated [45], and so the analysis is repeated here with this selection of SES measures in order to triangulate the findings.
Violence risk factors. A range of risk factors associated with violence (as demonstrated in the introduction) are included as control variables (in binary form) in the second part of this analysis. These are outlined in Table 1, and information on the derivation of, and frequency tables for, these variables is included in the S1 to S6 Tables in S1 File:
• Respondent sex
• Respondent age (converted to a binary variable to maintain statistical power: those over 30 years and those 30 or under)
• Whether respondent lives in a rural or urban area
• Whether respondent has a disability
• Frequency respondent visits clubs (in the last month or not) and pubs (weekly and upwards, or less)
Analysis
a) Creating incidence rates. Total incident figures for each type of alcohol-related violence were derived from two variables: 'CSEW Type of violence' and 'whether offender was under the influence of drink'. These were calculated for each socioeconomic group within each SES variable, using a weighted dataset comprising all victim form datasets, with SES information on respondents appended from the non-victim form datasets, for the period analysed. From this, the population figures for the various socioeconomic groups as presented in Tables 4-6 were used to create incidence rates. In each wave of the survey, respondents reported their victimisation (if any) for the previous 12 months. Therefore, the incidence rate throughout this work is the average annual incidence rate for the period 2013/14 to 2017/18.
b) Creating prevalence rates. By cross-referencing the 'CSEW Type of violence' and 'whether offender was under the influence of drink' variables, all incidents contained in all victim form datasets were marked according to whether they were alcohol-related violence, and whether they were alcohol-related domestic, stranger, or acquaintance violence. These datasets were merged with all the non-victim form datasets for the period covered. Each respondent was then marked as a non-victim or victim of alcohol-related violence overall, as well as of each subtype of alcohol-related violence, and total weighted victim counts were created. From this, the population figures for the various socioeconomic groups as presented in Tables 4-6 were used to create prevalence rates. In each wave of the survey, respondents reported their victimisation (if any) for the previous 12 months. Therefore, the prevalence rate referred to throughout this work is the average annual prevalence rate for the period 2013/14 to 2017/18. Two-tailed chi-squared tests were also performed to examine the association between socioeconomic status and alcohol-related violence victimisation.
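For orientation, the rate calculations in steps (a) and (b) can be sketched in pandas as below. The column names ('weight', 'is_victim', 'n_incidents') are assumptions for illustration, and the averaging over the five pooled survey years follows ONS weighting conventions that are not reproduced here.

```python
import pandas as pd

def weighted_rates(df: pd.DataFrame, ses_col: str) -> pd.DataFrame:
    """Weighted prevalence (%) and incidence (per 1000) by SES group."""
    d = df.assign(
        w_victims=df["weight"] * df["is_victim"],      # is_victim: 0/1
        w_incidents=df["weight"] * df["n_incidents"],  # capped counts
    )
    g = d.groupby(ses_col)[["weight", "w_victims", "w_incidents"]].sum()
    return pd.DataFrame({
        "prevalence_pct": 100 * g["w_victims"] / g["weight"],
        "incidence_per_1000": 1000 * g["w_incidents"] / g["weight"],
    })
```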
c) Regression analyses. Binomial logistic regression analyses were performed on weighted data from the combined non-victim form dataset created in step (b). Twelve binary logistic regression analyses were performed sequentially in total, each using one of the three measures of socioeconomic status as an independent variable against one of the four binary violence outcome variables (alcohol-related violence, alcohol-related domestic violence, alcohol-related stranger violence and alcohol-related acquaintance violence) as the dependent variable. In all 12 models the previously outlined risk factors for violent victimisation (age, sex, night-time economy attendance, whether the respondent has a disability, living in an urban or rural setting) were controlled for. Given the possibility of incorrectly rejecting a true null hypothesis (Type I error) with the number of simultaneously tested hypotheses here, the Bonferroni correction was applied, adjusting our significance threshold from an original value of p<0.05 to p<0.004.
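The study's models were fitted in SPSS; purely for illustration, one such weighted logistic regression could be sketched in Python/statsmodels as below. Treating survey weights as frequency weights is a simplification of proper survey-weighted estimation, and all column names and data are toy stand-ins, not CSEW variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "low_income": rng.integers(0, 2, n),   # SES indicator of interest
    "male": rng.integers(0, 2, n),
    "under_30": rng.integers(0, 2, n),
    "urban": rng.integers(0, 2, n),
    "disability": rng.integers(0, 2, n),
    "weekly_pub": rng.integers(0, 2, n),
    "monthly_club": rng.integers(0, 2, n),
})
y = rng.integers(0, 2, n)            # victim of one violence subtype (toy)
w = rng.uniform(0.5, 2.0, n)         # survey weights (toy)

res = sm.GLM(y, sm.add_constant(X),
             family=sm.families.Binomial(), freq_weights=w).fit()

alpha = 0.05 / 12                    # Bonferroni over 12 models, ~0.004
print(np.exp(res.params))            # odds ratios
print(res.pvalues < alpha)           # significance at corrected threshold
```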
All analysis was performed using SPSS v24. As this work comprises secondary analysis of government published data, ethical approval was not sought for this analysis; data are received pre-anonymised, with consent of participants having already been obtained. Further, the data owner (Office for National Statistics) has pre-approved the reporting of these findings to avoid any possibility of disclosure.
Alcohol-related violence overall
Across the whole sample, the prevalence rate for alcohol-related violence overall was 0.87%, while the incidence rate was 19.1 incidents per 1000 of the population. Lower socioeconomic groups experienced higher prevalence rates of alcohol-related violence overall (total household income: χ2 = 35922.96, p<0.001; housing tenure: χ2 = 523448.76, p<0.001; occupation: χ2 = 47003.53, p<0.001). For two of the three socioeconomic status indicators used, the lowest groups experienced the highest prevalence rates for alcohol-related violence overall; 1.07% for those in households earning £19,999 and under and 1.01% for the group 'Never worked and long term unemployed' (see Table 7).
1a. Alcohol-related domestic and acquaintance violence. Similar patterns are seen when disaggregating patterns in alcohol-related domestic and alcohol-related acquaintance violence. For all socioeconomic measures, prevalence rates for alcohol-related domestic violence were highest for the lowest socioeconomic group (total household income: χ2 = 56130.26, p<0.001; housing tenure: χ2 = 131153.69, p<0.001; occupation: χ2 = 30698.37, p<0.001). For housing tenure, the prevalence rate for the lowest socioeconomic group (social renters, 0.26%) was more than five times that of the highest (owners, 0.05%). Incidence rates were highest for the lowest socioeconomic group in two measures, with the most dramatic disparity seen between the incidence rates of alcohol-related domestic violence when measuring socioeconomic status through housing tenure; the lowest group (social renters) had an incidence rate more than 14 times as high as the highest group (owners). Similarly, the lowest socioeconomic groups experienced the highest prevalence (total household income: χ2 = 48341.69, p<0.001; housing tenure: χ2 = 227680.95, p<0.001; occupation: χ2 = 55329.04, p<0.001) and incidence rates for alcohol-related acquaintance violence, except social renters, whose incidence rate was not as high as that of private renters (those in households earning £19,999 and under, 0.40% prevalence and 9.31 incidents per 1000 people; social renters, 0.52% prevalence and 10.72 incidents per 1000 people; those unemployed, 0.40% prevalence and 16.02 incidents per 1000 people).
1b. Alcohol-related stranger violence. There are no clear trends in the prevalence and incidence rates for alcohol-related stranger violence across socioeconomic measures (see Table 7). Prevalence and incidence rates were highest for those earning £40,000 and above (0.53%, and 14.00 incidents per 1000 people), private renters (0.92%, and 15.41 incidents per 1000 people), and those with occupations classed as intermediate (0.48% (joint with routine and manual workers) and 11.29 incidents per 1000 people).
Influence of other risk factors
Binomial logistic regression results show that those in lower socioeconomic groups are more likely than others to experience alcohol-related violence overall when other known risk factors for violence are held constant. In some cases, including additional known risk factors brought this relationship into sharper relief: whereas social renters were found to experience lower prevalence and incidence rates of alcohol-related violence than private renters, when additional violence risk factors were included in the analysis, social renters were more than twice as likely as owners to experience this violence. Being male, aged 30 or under, living in an urban area, visiting a pub weekly, attending a nightclub in the last month, or having a disability were all found to also increase a person's risk of experiencing alcohol-related violence, in all the analyses performed (see Tables 8-10). As the weighting variables provided in the dataset serve to create population level estimates, results of the same regressions performed on unweighted data are presented as a sensitivity analysis in the S7-S9 Tables in S1 File, to verify that statistical significance was not an artefact of this weighting.
Binomial logistic regression results also show that those in lower socioeconomic groups are more likely than others to experience the subtypes of alcohol-related domestic and acquaintance violence, when other known risk factors for violence are held constant. For all socioeconomic variables, the lowest and central socioeconomic groups were more likely than the highest group to experience these forms of violence. Further, in some cases, the contribution of socioeconomic status to a person's risk here is sizeable. Considering alcohol-related domestic violence, a social renter is more than three and a half times as likely to experience this than a home owner [OR = 3.678, 95% CI = (3.641-3.715), reference category: housing tenure, owners], while those in households earning £19,999 or less are almost two and a half times as likely to [OR = 2.403, 95% CI = (2.376-2.430), reference category: total household income, £40,000 and above]. It should also be noted that having a disability raised a person's risk of experiencing alcohol-related domestic violence by more than three times in all regression models presented (see Tables 8-10). In two of the three regression models presented (Tables 8 and 10), having a disability raised a person's risk of experiencing alcohol-related domestic violence to a greater degree than the already sizeable effect of belonging to the lowest SES group. The regression results for alcohol-related stranger violence diverge from these other violence subtypes (see Tables 8-10). In some cases the lower socioeconomic groups were in fact protected from this violence by their status; lower income groups were less likely to experience this violence than the highest group, those earning £40,000 and over (those in households earning £19,999 or less [OR = 0.911, 95% CI = (0.907-0.916), reference category: total household income, £40,000 and above] and those earning between £20,000 and £39,999 [OR = 0.981, 95% CI = (0.976-0.986), reference category: total household income, £40,000 and above]). In the case of occupation, the lowest group were statistically more likely to experience this violence than the highest group-but by less than 10% [OR = 1.069, 95% CI = (1.057-1.081), reference category: managerial or professional occupation]. Along with these small or protective effect sizes, it should be noted that in the unweighted regressions included in the S7-S9 Tables in S1 File, income and occupation were not found to be significant predictors of alcohol-related stranger violence. All other risk factors-being aged 30 and under, male, living in an urban area, having a disability-were found to significantly increase this risk; particularly night-time economy attendance. A person's likelihood of experiencing alcohol-related stranger violence was raised by weekly pub and monthly nightclub attendance, sometimes by as much as three times (e.g. see Table 8, those attending nightclubs monthly [OR = 3.281, 95% CI = (3.265-3.297), reference category: no nightclub attendance in last month], as well as those who visited the pub weekly or more [OR = 1.609, 95% CI = (1.601-1.616), reference category: visited pubs less than once a week]). Those same night-time economy visits had a smaller effect on all other kinds of alcohol-related violence; however, regular nightclub attendance also
more than doubled a person's likelihood of experiencing alcohol-related acquaintance violence (e.g. see Table 9, [OR = 2.468, 95% CI = (2.452-2.484), reference category: no nightclub attendance in last month]).
Discussion
Our results suggest that being of a lower socioeconomic status is a risk factor for experiencing alcohol-related violent victimisation, and particularly for alcohol-related domestic and acquaintance violence. The finding of lower socioeconomic status as a risk factor for alcohol-related violence reflects patterns identified for violent victimisation in general [14,15], and the inequalities in alcohol-related health harms experienced between socioeconomic groups [7][8][9]. This finding corroborates previous work examining hospital admissions for alcohol-related facial injuries in Scotland, which found those in the most deprived regions were more than six times as likely to be injured in this way when compared to the most advantaged regions [18]. The finding that this disparity in overall alcohol-related violence comprises wide disparities in alcohol-related domestic and acquaintance violence is novel and holds important implications for policy decisions moving forward. These findings represent a notable development in the understanding of the distribution of alcohol-related violence. It is important to consider this in light of some limitations of this study, not least those associated with the use of victimisation surveys. The Crime Survey for England and Wales remains an internationally recognised source for crime statistics [46] and, unlike police recorded crime statistics, holds the designation of official statistics in England and Wales from the National Statistics Authority [47]. While victimisation survey data are generally accepted in criminology to improve substantially on police-recorded crime data as a measure of crime levels, survey data are not without their own limitations, e.g. recall error [21,42] or the difficulty respondents may have in identifying perpetrator intoxication. However, measures have been introduced to ensure some such limitations are minimised within the Crime Survey for England and Wales. Importantly, detailed reports of incidents from respondents are coded as violent or otherwise by trained coders, minimising categorisation errors respondents might make. Much of the criticism surrounding the Crime Survey for England and Wales has instead focused on its sampling, with suggestions that it has under-sampled lower socioeconomic groups [48]; a limitation that will, if anything, underplay the results we have found, by underestimating the strength of the association between lower SES groups and their increased probability of experiencing alcohol-related violence. Further detail of the survey methodology can be found in the CSEW user guide [41]. Further, whilst the importance of interactions between individual and neighbourhood level socioeconomic status indicators in the investigation of alcohol consumption patterns has been demonstrated in other studies [49][50][51], exploring such interactions was beyond the scope of this paper. We encourage such analysis to be taken forward to illuminate the contributions of neighbourhood level socioeconomic status.
Notwithstanding these limitations, this study represents the first of its kind to disaggregate subtypes of alcohol-related violence, including alcohol-related domestic violence, in order to understand their distribution across SES groups. We consider these findings to build on links between poverty and domestic violence victimisation that have been identified in other literature [24]-particularly the uneven impact of the global economic crisis in 2008 on levels of violence experienced by different groups. Walby, Towers, and Francis noted that the crisis has "reduced income levels and increased inequalities and thereby reduced the propensity of victims to escape violence" [27 p. 1228]. Returning to our findings of highly disproportionate rates of alcohol-related domestic violence for lower SES groups, it is possible that these effects are amplified as alcohol sales outlets are more heavily clustered in lower SES neighbourhoods [13] and alcohol availability is linked to levels of violence. For example, research from Scotland has shown rates of violence to be "consistently and significantly higher in areas with more alcohol outlets", for both on-and off-sales [52 p. 8]. While not the focus of this work, it is also important to touch upon the findings relating to disability and alcohol-related domestic violence. In each model, having a disability increased a person's risk of alcohol-related domestic violence by more than three times-stronger than the already sizeable effect of belonging to the lowest socioeconomic group for two of the three SES measures used (household income and occupation). Previous research has identified disability as a risk factor for domestic violence [53], and this finding relating to alcohol-related domestic violence expands this understanding. These findings are important and warrant further investigation-particularly as the limitations of analysis using aggregate population data, as was the case here, in examining violence against disabled people have been noted [54]. One avenue of future research this study might inform is the confluence of disability and lower SES, and how this affects alcohol-related domestic violence victimisation; as it has been noted that perpetrators of domestic violence may intentionally restrict disabled victims' financial resources [55 as cited in 54].
The finding of no consistent relationship between alcohol-related stranger violence and SES when other violence risk factors were held constant refines findings from previous work; analysis of British Crime Survey data from the years 1996, 1998 and 2000 found those unemployed to have an incidence rate of alcohol-related stranger violence 2.6 times higher than those in employment [17]. Our findings suggest this incidence rate was confounded by other factors, possibly night-time economy attendance. Our results suggest exposure to night-time economy settings increases the likelihood of experiencing alcohol-related violence; specifically, alcohol-related stranger violence. Regular nightclub attendance (once a month at least) as much as trebled a person's likelihood of experiencing alcohol-related stranger violence, and weekly pub trips raised a person's risk more for alcohol-related stranger violence than for any other violence subtype. This corroborates previous research which has identified an association between experiencing violence and night-time economy attendance [56]. Indeed, the same British Crime Survey analysis discussed previously found that the incidence of alcohol-related stranger violence amongst those attending nightclubs between one and three times in the last month was more than twice as high as in the general population, and amongst those visiting a pub nine times or more in the last month (roughly twice a week), more than three and a half times as high [17]. Considering that the socioeconomic disparities found in overall alcohol-related violence comprise wide disparities in alcohol-related domestic and acquaintance violence, but that night-time economy attendance is a greater risk factor for alcohol-related stranger violence, this suggests policy interventions focused on night-time economy settings, such as business best practice schemes (e.g. PubWatch [57] or Best Bar None [58] in the UK), will have little impact on the inequalities presented in this paper.
This discussion highlights important considerations for policy and future research concerned with violence prevention and alcohol harm, as well as socioeconomic equality more generally. First, in light of findings in this paper, we urge an immediate improvement in the provision of, and access to, publicly funded domestic violence services within lower SES neighbourhoods. There has been a chronic under-provision of both domestic violence and alcohol treatment services in England and Wales for many years [59,60], and this should be addressed nationwide for many reasons. We suggest the findings of this paper prompt a focused increase in the provision of, and access to (e.g. through consideration of transport or childcare needs), domestic violence services for lower SES neighbourhoods specifically, because this research indicates how this alcohol-related harm is distributed. There are likely many factors contributing to this inequality (for example, see the extensive literature examining the alcohol health harm paradox [7]), and so we should address the pattern of harm directly as a matter of urgency. As recommended by the National Institute for Health and Care Excellence [61], it would also be beneficial for these services to continue to build their awareness and understanding of alcohol-related domestic violence victimisation and to improve links to other service providers through multi-agency working.
We further urge policymakers to consider SES as an important contextual factor in shaping the relationship between alcohol and violence overall. For example, policymakers should be cognisant that off-sales outlets-with increases in off-sale availability linked to levels of violence [52] and intimate partner violence [62]-have been demonstrated to be most densely clustered in the most deprived neighbourhoods [13]. Licensing applications in England and Wales currently must be considered on their individual merits in all but exceptional circumstances [63], meaning it is difficult for licensing authorities to address broader public health and crime prevention concerns such as this. Licensing practices in England and Wales should be revisited to address this, as has been advocated by many in terms of public health considerations more generally [64,65]. Similarly, despite the lowest SES groups drinking less on average, minimum unit pricing has been modelled to show promise in improving health outcomes for the lowest socioeconomic groups to the greatest degree [66] and to have the potential to be implemented without raising concerns of regressivity [67]. As research has repeatedly linked the price of alcohol and levels of violence [68,69], it should be investigated whether minimum unit pricing can further disproportionately benefit lower socioeconomic groups by reducing their alcohol-related violence victimisation levels.
This study has illuminated socioeconomic disparities in victimisation through alcohol-related domestic and acquaintance violence. These form part of a broader disparity in alcohol-related violence victimisation overall. While some suggestions for future research directions have been put forward, this finding is itself a notable contribution to our understanding of the unequal burden that alcohol harms place on the lowest SES groups. Along with action to address environmental and economic drivers of socioeconomic inequality, policymakers should address the provision of publicly funded domestic violence services in lower SES areas as a matter of urgency, coupled with action on the price and availability of alcohol, both of which have shown promise in beginning to ameliorate this imbalance.
"year": 2021,
"sha1": "9fccf461c3b9bc2fb608eddb95e88e728a168a2b",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0243206&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "075fde298c6e90d0735925fef596f11ac90c98c8",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Evolved hexose transporter enhances xylose uptake and glucose/xylose co-utilization in Saccharomyces cerevisiae
Enhancing xylose utilization has been a major focus in Saccharomyces cerevisiae strain-engineering efforts. The incentive for these studies arises from the need to use all sugars in the typical carbon mixtures that comprise standard renewable plant-biomass-based carbon sources. While major advances have been made in developing utilization pathways, the efficient import of five carbon sugars into the cell remains an important bottleneck in this endeavor. Here we use an engineered S. cerevisiae BY4742 strain, containing an established heterologous xylose utilization pathway, and imposed a laboratory evolution regime with xylose as the sole carbon source. We obtained several evolved strains with improved growth phenotypes and evaluated the best candidate using genome resequencing. We observed remarkably few single nucleotide polymorphisms in the evolved strain, among which we confirmed a single amino acid change in the hexose transporter HXT7 coding sequence to be responsible for the evolved phenotype. The mutant HXT7(F79S) shows improved xylose uptake rates (Vmax = 186.4 ± 20.1 nmol•min−1•mg−1) that allows the S. cerevisiae strain to show significant growth with xylose as the sole carbon source, as well as partial co-utilization of glucose and xylose in a mixed sugar cultivation.
In order to cost-effectively produce biofuels from renewable plant biomass, all sugars, including all pentose and hexose sugars present in the raw lignocellulosic starting material, must be converted efficiently into the final products 1 . The yeast, S. cerevisiae, is an excellent host-microbe for a range of industrial applications, from chemical and commodity production, to biofuel synthesis [2][3][4] . However, S. cerevisiae does not readily uptake and use pentose sugars. This includes xylose, the most abundant pentose, and the second most abundant sugar next to glucose, found in biomass 5 . While native xylose-utilizing organisms exist, they largely lack well-developed genetic tools for host engineering or exhibit low product and inhibitor tolerances. Therefore, it is important to develop S. cerevisiae host platforms with more efficient xylose utilization.
Generating a yeast strain that utilizes xylose, especially in a glucose/xylose mix, has been an object of extensive research for several decades 6 . Great success has been achieved in boosting the native yeast utilization capability. Two approaches are now used routinely to provide for xylose utilization: overexpression of a heterologous xylose isomerase (XI) [7][8][9][10][11] , and overexpression of the native or heterologous xylose reductase (XR) and xylitol dehydrogenase (XDH) 12,13 . Both pathways result in the transformation of xylose to xylulose, and benefit from additional overexpression of xylulokinase (XKS) to shunt the carbon into the pentose-phosphate pathway (PPP) 14,15 . Further, overexpression of genes encoding enzymes in the pentose-phosphate pathway, such as the transaldolase (TAL1) and the transketolase (TKL1), leads to additional improvements in xylose assimilation rates 7,[16][17][18] . Recently, it has also been shown that xylose utilization can be achieved via replacement of the native S. cerevisiae xylose utilization and PPP genes with those from the xylose-utilizing yeast Scheffersomyces stipitis 19 .
The improvements in intracellular xylose consumption have led to a bottleneck in xylose uptake 20 . To date there has been no discovery of a sugar transporter that, in S. cerevisiae, allows for xylose uptake comparable to glucose uptake. S. cerevisiae has numerous monosaccharide transporters (HXT1-17 and GAL2), but all of them have greater specificity for hexose sugars. While a few of these (HXT1, 2, 4, 5, 7 and GAL2) can import xylose, they display rates of uptake so low that they cannot support growth on xylose 6,[21][22][23][24][25] . Further, xylose uptake by these native transporters is repressed in the presence of glucose, limiting the use of these transporters with mixed sugar sources 26,27 .
Several strategies have been employed to tackle the issues with xylose transport. Much work has been devoted to bioprospecting and characterizing heterologous xylose-transporters in S. cerevisiae, resulting in the identification of several membrane proteins that can transport xylose 22,[28][29][30][31][32][33] . These studies have shown that increasing xylose transport does increase utilization and final product formation, proving that xylose import is the limiting factor in utilization. However, these transporters have had limited efficacy either due to reduced growth rates, problems with substrate affinities, non-optimal transport rates, or substrate inhibition.
Recently, a few studies have attempted to improve transport by engineering native transporters, with encouraging results. Using a combination of bioinformatics and mutagenesis, Young and colleagues identified a xylose transport sequence motif and were able to produce a mutant HXT7 strain that grew on xylose, but not glucose 34 . Although this strain still showed glucose inhibition, another group was able to bypass this problem by using growth to screen for mutants with glucose insensitivity 35 . This latter approach resulted in the discovery of Gal2 and Hxt7 variants that bypass glucose inhibition. Unfortunately, the modifications that eliminated glucose repression also resulted in diminished uptake rates (Vmax). Additionally, although the transporter is overexpressed, the resulting growth on xylose was modest in both these studies and would benefit from further optimization.
In the present study, we used an evolutionary engineering approach to address the problem of xylose import. Starting with a S. cerevisiae strain that had been engineered to enhance intracellular xylose consumption, we report the discovery of a mutation in HXT7 that shows improved xylose uptake rates and allows S. cerevisiae to show significant growth with xylose as the sole carbon source. This mutation, F79S, is predicted to lie within the first transmembrane region of the transporter and enables partial co-utilization in a glucose/xylose mix.
Results
Evolution of a xylose utilizing strain. Since xylose import into the cell is a limiting factor in S. cerevisiae growth and utilization of xylose, we hypothesized that we could select for increased xylose uptake by subjecting a S. cerevisiae strain engineered with an improved cytosolic xylose metabolic pathway to evolution in xylose medium (i.e. xylose as the sole carbon source). JBEI_ScMO001, a BY4742 strain deleted for the XR, gre3, and overexpressing the Piromyces sp. XI, pspXI, and XKS1, was sub-cultured in synthetic defined (SD), 2% xylose medium. After several rounds of sub-culturing, the culture was plated onto solid xylose medium and the fastest growing colonies were selected (Fig. 1a). The clones were assayed for growth and xylose consumption and the best performing strains were further evolved in SD, 2% xylose medium. This process was repeated until strains were obtained where growth could be seen in one day. The doubling time of the fastest-growing strains in xylose was reduced to approximately nine hours, down from an initial doubling time of over 150 hours for the unevolved strain (Fig. 1b). Colonies that showed improved xylose utilization were confirmed to be S. cerevisiae via 16S sequencing. Other eukaryotic contaminants, such as Aureobasidium pullulans, were also detected, but not selected for sequencing.
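For reference, a doubling time converts to a specific growth rate via the standard exponential-growth relation; using the reported values, the evolved strains grow roughly seventeen-fold faster than the unevolved parent:

```latex
\mu = \frac{\ln 2}{t_d}, \qquad
\mu_{\text{evolved}} \approx \frac{0.693}{9\ \text{h}} \approx 0.077\ \text{h}^{-1}, \qquad
\mu_{\text{parent}} \approx \frac{0.693}{150\ \text{h}} \approx 0.0046\ \text{h}^{-1}
```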
The fastest-growing, xylose-utilizing S. cerevisiae strain, 7a2c (JBEI_ScMO002), was selected and analyzed for mutations by whole-genome sequencing. Sequencing revealed single nucleotide polymorphisms (SNPs) at three loci, including a mutation in the hexose transporter, HXT7. Additional mutations were found in YDL176W, a gene predicted to be involved in fructose-1,6-bisphosphatase degradation, as well as in an intergenic region on the left telomere of chromosome eight (Fig. 1c). Because the mutation in chromosome eight was in a heterochromatic region, it was not pursued further.
HXT7(F79S) confers growth in xylose medium.
Since Hxt7 is a known hexose transporter that can also transport pentose sugars with low affinity, the HXT7(F79S) mutation was our most likely candidate for conferring growth in xylose. Like other Hxt proteins, SPOCTOPUS software 36 predicted Hxt7 to be a 12-pass transmembrane protein, with the F79S mutation located in the first predicted membrane helix (Fig. 2a). Since there is no solved structure for any of the Hxt transport proteins, Phyre software 37 was used to predict the structure of Hxt7 based upon its closest homolog with a solved structure, the bacterial XylE (Fig. 2b). The model predicted that residue F79 resides in the middle of helix one, facing internally towards the central pore. The recently solved structure of XylE has the added benefit that it was crystallized in complex with xylose and glucose, conveying fundamental information about substrate binding 38 . Intriguingly, Hxt7 F79 lies in close proximity to the bound xylose in the pore of the XylE structure, and therefore suggests that the residue is poised to affect xylose binding and transport (Fig. 2b).
To test if the HXT7(F79S) mutation was indeed responsible for the improved growth in xylose, we individually cloned each mutated gene, HXT7(F79S) or YDL176W(D504H), into single-copy plasmids, under their native promoters and terminators, and transformed the resulting plasmids into the gre3∆ strain overexpressing pspXI, XKS1, and TAL1. The plasmids were also transformed into a strain that contained additional deletions in the genes of interest (hxt7; ydl176w). The transformants were examined for growth in SD, 2% xylose medium. Both the gre3∆ and gre3∆ hxt7∆ strains expressing HXT7(F79S) grew in xylose medium, reaching a maximum optical density (OD600) of 2.0-2.4 after 40 hours. The two strains transformed with empty vector plasmids showed no growth after 60 hours (Fig. 3). To eliminate the possibility that an extra copy of HXT7 permits growth in xylose medium, wild-type HXT7 was also expressed in the gre3∆ and gre3∆ hxt7∆ strains and tested for growth. However, these strains did not grow in the xylose medium (Fig. 3), confirming that the xylose growth is specific to the HXT7(F79S) mutation. Of note, the evolved strain showed a longer lag time in this assay than in the assay performed in Fig. 1b. This is due to the glucose pre-culturing conditions needed for the growth of the controls. When the evolved strain is pre-cultured in a glucose medium prior to culturing in a xylose medium, we observe an approximate 12-hour increase in lag time (Fig. S1; evolved gx) relative to the same strain pre-grown in xylose medium (Fig. S1; evolved xx). This further suggests that the HXT7 mutation is causal, as HXT7 expression is known to be repressed by glucose 39 .
YDL176W(D504H) did not contribute significantly to the growth of the evolved strain in xylose. Strains expressing YDL176W(D504H) alone showed no growth in SD, 2% xylose medium, while strains expressing YDL176W(D504H) along with a wild-type genomic copy showed only marginal growth, to an OD600 of 0.6, after 60 hours (Fig. S2).
HXT7(F79S) allows for increased xylose consumption and partial co-utilization in a mixed-sugar source.
To verify that the growth seen in the HXT7(F79S) strains was indeed due to increased xylose uptake, the amounts of xylose consumed from YP, 2% xylose medium were examined after 48 hours. High-performance liquid chromatography (HPLC) analysis established that strains expressing wild-type HXT7 only consumed 0.2 ± 0.2 g/L xylose, while strains expressing the mutant HXT7(F79S) consumed 9.0 ± 0.3 g/L (Fig. 4a), corroborating that the growth seen in HXT7(F79S)-expressing strains is due to increased xylose uptake.
To examine the co-utilization of glucose and xylose, strains were grown in 0.5% glucose, 0.5% xylose medium, inoculated at a starting OD600 of 0.1, and monitored periodically for sugar consumption and OD600. Strains harboring the wild-type and mutant Hxt7 transporters showed similar, rapid consumption of glucose. However, the wild-type HXT7 strain did not consume any xylose during the 48-hour experiment (Fig. 4b), while the HXT7(F79S) mutant steadily consumed 3 g/L of xylose during the time-course and attained a higher final OD600 (Fig. 4c). The strain harboring the HXT7(F79S) mutation displayed a substantial improvement in glucose and xylose consumption, demonstrating that the Hxt7(F79S) transporter enables partial co-utilization of glucose and xylose in a mixed-sugar source.
Recently, Farwick et al. described a mutation in the Hxt7 transporter, N370S, that reduced glucose repression 35 . We combined the N370S mutation with F79S to test if this would yield a transporter that possesses both reduced glucose repression and improved xylose import kinetics. Sugar consumption and OD600 were monitored in strains expressing HXT7, HXT7(F79S), HXT7(N370S), or HXT7(F79S,N370S) in 0.5% glucose, 0.5% xylose medium (Fig. S3). All strains rapidly consumed the glucose. As before, the wild-type HXT7 strain did not consume xylose during the 48 hours (Fig. S3, panel a), while the HXT7(F79S) mutant consumed about 3 g/L of xylose (Fig. S3, panel b). Surprisingly, the HXT7(N370S) mutant did not show xylose uptake or glucose insensitivity in our minimally engineered strain background (Fig. S3, panel c), and when combined with the F79S mutation, the strain did not show glucose insensitivity, and consumed less than 1 g/L of xylose (Fig. S3, panel d).
Kinetic measurement of Hxt7 and Hxt7(F79S) xylose transport. In order to understand how HXT7(F79S) affects transport, the kinetic properties of the mutant and wild-type transporters were assayed with radioactive sugar uptake assays (Fig. 5). Strains deleted for all hexose transporters that can transport xylose (hxt1∆, hxt2∆, hxt4∆, hxt5∆, hxt7∆, gal2∆) were transformed with low-copy plasmids expressing either HXT7 or HXT7(F79S). The 6∆ strain was used in lieu of EBY.VW4000, the well-established, completely hexose-transport-deficient strain, because of the recent report that EBY.VW4000 possesses extensive chromosomal abnormalities 40 . The wild-type Hxt7 transporter was confirmed to be a low-affinity xylose transporter with a Km of 161.4 ± 22 mM and a Vmax of 101.6 ± 6.5 nmol•min−1•mg−1 for xylose, similar to previously published values 22,35 . The Hxt7(F79S) mutant transporter displayed a similar xylose substrate affinity (Km of 228.8 ± 45.9 mM), but showed about a two-fold increase in xylose transport velocity (Vmax = 186.4 ± 20.1 nmol•min−1•mg−1) over its wild-type counterpart.
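These parameters feed into the Michaelis-Menten rate law. As a worked illustration at 50 mM xylose (an arbitrary point within the 10-400 mM range assayed), the mutant is predicted to move roughly 40% more xylose, with the full two-fold advantage emerging only as the transporter approaches saturation:

```latex
v = \frac{V_{\max}\,[S]}{K_m + [S]}, \qquad
v_{\text{F79S}}(50\,\text{mM}) = \frac{186.4 \times 50}{228.8 + 50} \approx 33.4, \qquad
v_{\text{WT}}(50\,\text{mM}) = \frac{101.6 \times 50}{161.4 + 50} \approx 24.0
```

(rates in nmol•min−1•mg−1).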
Discussion
The need to engineer a S. cerevisiae strain that can consume both pentose and hexose sugars, ideally together, is well recognized as important for engineering yeast to produce fuels and commodity chemicals. The main impediment to the realization of this goal is the lack of necessary xylose transporters in S. cerevisiae. Specifically, two aspects of xylose transport need improvement before the goal of co-utilization can be reached: (1) transport rates, (2) glucose inhibition. The latter problem has been recently addressed using a selection approach to generate glucose insensitive Gal2 and Hxt7 variants 35 . Here we generate an endogenous xylose transporter that has high rates of transport while maintaining high growth rates on xylose. This transporter, Hxt7(F79S), also allows for partial co-utilization of glucose and xylose, thereby decreasing the cultivation time needed to consume all sugars from a mixed-sugar source.
In our efforts we combined several commonly used cytosolic xylose utilization genes and genetic modifications to create our engineered strain, which served as the basal strain for lab evolution (Fig. 1a). A lab evolution regime, using serial dilution and plating on solid medium, with 2% xylose as the sole carbon source, led to the appearance of colonies that could sustain significant growth on xylose (Fig. 1b). The phenotype was tracked to a single mutation in the Hxt7 protein. The HXT7(F79S) mutation allows for an improvement in xylose transport rates (Vmax), as well as provides for growth on xylose and partial glucose/xylose co-utilization.
The mutant residue F79 lies within a previously reported G-G/F-XXXG motif located at amino acids 75 to 80 (Fig. S4), although Young et al. 34 incorrectly report the locus of the Hxt7 GGFVFG motif as amino acids 36 to 41. Our discovery further highlights the importance of this region, not just for the rewiring of glucose transporters towards xylose as shown by Young et al., but also for increasing xylose transport while maintaining glucose transport capabilities. Additionally, a recent report also identifies amino acid F79 of the heterologous transporter Mgt05196p from Meyerozyma guilliermondii as one of several amino acids that show slight improvement of growth on glucose and xylose when mutated to alanine 41 .
Combining the previously reported N370S mutation with the F79S mutation did not result in a xylose transporter with both reduced glucose repression and increased xylose transport (Fig. S3). This is likely because the previously identified N370S mutation did not confer glucose insensitivity in our strain, which differs significantly from the EBY.VW4000 background used in Farwick et al.; that background contains known extensive chromosomal abnormalities 40 that may be required for the reported phenotype. The Hxt7(N370S) phenotype may also require the protein to be overexpressed 35 , unlike the physiological levels used in all of our experiments with a single, low-copy plasmid carrying the native promoter. The reduced xylose uptake/utilization of the double mutant HXT7(F79S,N370S) relative to the single mutant HXT7(F79S) could also be due to epistasis or protein misfolding.
Lab evolution of S. cerevisiae is another commonly used strategy to obtain variants that have improved xylose utilization phenotypes. Several such studies are reported in the literature and each has resulted in the identification of key metabolic and regulatory genes [42][43][44][45][46] . Our study is the first lab evolution to find a mutation in a plasma membrane sugar transporter (HXT7), highlighting the importance of selecting appropriate starting strains and selective pressures to obtain desired phenotypes. While evolutionary selection is a powerful approach, it cannot sample all possible mutations in the amount of time available in the lab. Directed evolution approaches have produced heterologous and hybrid transporters with improved kinetics, such as the Candida intermedia Gxs1 pump, the S. stipitis Xut3 transporter, and the chimeric S. cerevisiae Hxt36 protein 47,48 . Recent directed evolution of HXT7 has provided promising results 34,35 , suggesting that more saturating mutagenesis may be a good next step for further HXT7 engineering. Native S. cerevisiae sugar transporters all have much greater specificity and uptake rates for hexose sugars. Several of the native hexose transporters can leak in xylose, and the one with the best xylose specificity, Hxt7, displays low affinity (Km of 161 mM). Hxt7 also exhibits a meager uptake rate of 101 nmol•min−1•mg−1, does not alone support growth on xylose, and is inhibited by the presence of other sugars 22 . Some heterologous xylose transporters have been identified, and have helped improve xylose utilization 31 . However, their performance has been hampered by poor growth rates, low substrate affinities, poor transport rates, or substrate inhibition. Recent success in engineering native transporters has resulted in the identification of a xylose transport sequence motif 34 and the generation of glucose-insensitive strains 35 . These approaches also resulted in diminished uptake rates (Vmax) and modest growth on xylose, which limits their value for future mixed-sugar co-utilization. The HXT7(F79S) mutation alone enhanced the xylose transport rate (Vmax), which enables growth on xylose in a minimally engineered background strain. The mutation decreases doubling times from over 150 hours to nine hours (Fig. 1b), and doubles xylose transport rates to 186.4 nmol•min−1•mg−1 (Fig. 5), without affecting xylose affinity.
Using the structure of the bacterial homolog of the yeast Hxt proteins, XylE 38 , we were able to model the structure of Hxt7 (Fig. 2b) and to address possible mechanisms of action for Hxt7(F79S). The model predicts that the mutated residue, F79, faces inward towards the central sugar-binding pore, similarly to the residues previously identified as critical to glucose binding 49 . The mutated Phe residue of Hxt7 aligns with a Phe residue that participates in xylose binding in XylE, providing support for the importance of this residue in Hxt7 sugar transport. The amino acid substitution from a Phe to a Ser increases the polarity of the Hxt7 sugar-transporting pore. This perhaps provides for increased xylose transport rates by allowing for additional hydrogen bonding between the Ser and xylose; by allowing additional water molecules to enter, thereby contributing to substrate binding through water-mediated hydrogen bonding; or by allowing for a conformational change that favors xylose transport. Because we do not observe an increase in xylose affinity (Km) with Hxt7(F79S), the latter two mechanisms are more likely. Further structural information for the yeast Hxt proteins will enhance our understanding of xylose transport, and help to solidify the exact mechanism by which the HXT7(F79S) mutation affects xylose transport.
Both of the coding mutations found in the evolved strain were reasonable candidates for impacting sugar utilization. The causal mutation in HXT7 was not surprising since the native transporter had been previously shown to provide for the highest intracellular accumulation of xylose in S. cerevisiae 26 . The only other mutation in our xylose-evolved strain, YDL176W(D504H), had an almost indiscernible impact on this phenotype by itself (Fig. S2). Although YDL176W is largely uncharacterized, it is predicted to be involved in fructose-1,6-bisphosphatase (Fbp1) degradation and a member of the glucose-induced degradation (GID) complex [50][51][52] , making it a likely target for affecting sugar utilization. When S. cerevisiae are starved of glucose for prolonged periods of time, gluconeogenic enzymes such as Fbp1 are induced 53 . Therefore, one possible explanation for this mutation is that it resulted not from the adaptation to xylose, but instead from long-term glucose starvation. Alternatively, components of the GID complex have been implicated in degradation of Hxt7 54 . Perhaps Ydl176W(D504H) could be altering the degradation of Hxt7, explaining the slight growth improvement seen at 60 hours. However, from our studies, HXT7(F79S) provides the phenotype seen in the evolved strain.
Our discovery adds to the list of yeast xylose utilization advances made to date and has very broad applicability. All industries and research ventures that use S. cerevisiae microbial hosts as their platform to convert sugar to a desired product may find this mutant transporter useful. Moreover, the xylose utilization phenotype reported here is due to a single nucleotide substitution in a single copy of HXT7, making this discovery easily transferable to established industrial strains. The HXT7(F79S) mutation allows yeast to better use xylose, thus allowing it to use the main sugars (glucose and xylose) present in the mixes that arise from saccharification of plant biomass. This ability would be desirable specifically to industries and ventures that are manufacturing bulk compounds and chemicals and that wish to use inexpensive and sustainable biomass as the feedstock.
With this discovery, both aspects needed for xylose transport have been engineered independently. The future of xylose utilization will need to focus on combining these properties into one transporter. Incorporating motif modifications to reduce glucose repression and to improve import, in addition to other genetic modifications that also improve xylose utilization may result in the true elimination of diauxic growth that is typical in mixed carbon sources.
Strains and media.
A complete list of strains and plasmids used in this study can be found in Tables S1 and S2 (see Additional File 1), and are available through the JBEI registry (http://public-registry.jbei.org 55 ). Yeast cells were grown in standard rich (YP, yeast extract-peptone) or synthetic defined media (SD, yeast nitrogen base with CSM amino acids (Sunrise Science Products) for plasmid selection) with 2% sugar, unless otherwise stated. For yeast kanamycin resistance selection, 250 μg/ml of geneticin (G418) was used in rich medium. Bacteria were grown in LB with 50 μg/ml carbenicillin.
S. cerevisiae strains were transformed with plasmids using the conventional lithium acetate method 56 . DNA cloning was performed using standard techniques; T4 DNA polymerase-mediated (Fermentas) ligations or Gibson assembly in Escherichia coli, or homologous recombination in S. cerevisiae. Plasmids were recovered from S. cerevisiae by lysing the cells mechanically with glass beads, followed by plasmid mini-prep (Qiagen). Chromosomal gene deletions were generated by integration of PCR products flanked by loxP sites 57 .
Strain evolution.
A BY4742 gre3∆ strain expressing Piromyces sp. XI (Pi-xylA) and XKS1 from two high-copy plasmids was evolved in SD -URA -HIS, with 2% xylose. The 4 mL culture was maintained at 30 °C, shaking at 200 revolutions/min. Mutants with increased specific growth rates were selected through dilution of the culture when turbidity was seen. At periodic intervals, the culture(s) were plated onto solid SD -URA -HIS, 2% xylose medium, and several of the fastest-growing colonies were selected for independent evolution in liquid culture. This process was repeated, selecting for the fastest growing isolates at each round, until culture saturation was achieved within one to two days of dilution. In total, the evolution process took approximately three months until satisfactory growth was achieved. At the end of the process, about one dozen clones were re-streaked and tested individually for xylose growth. One of the best performing clones, 7a2c (JBEI_ScMO002), was selected and prepared for genome sequencing.
Genome sequencing. Five μg of total gDNA was extracted from the parental and evolved strains, and sent to the Department of Energy Joint Genome Institute (DOE JGI, Walnut Creek) for whole genome resequencing. Resequencing data associated with this study can be found via NCBI SRA accession numbers SRX298977 and SRX298863. Burrows-Wheeler Aligner (BWA) was used to align reads, and Bcftools to assign SNPs and indels. Sequencing files were analyzed using Integrated Genome Viewer software 58 .
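A minimal sketch of a BWA/bcftools variant-calling pipeline of the kind described above, driven from Python. File names are hypothetical, the tools are assumed to be on PATH, and modern bcftools mpileup/call syntax is used, which may differ from the versions available when the study was performed.

```python
import subprocess

ref = "sacCer_ref.fa"                              # reference genome (hypothetical)
r1, r2 = "evolved_R1.fq.gz", "evolved_R2.fq.gz"    # paired-end reads (hypothetical)

cmds = [
    # Index the reference, align reads, and coordinate-sort the alignment.
    f"bwa index {ref}",
    f"bwa mem {ref} {r1} {r2} | samtools sort -o evolved.sorted.bam -",
    "samtools index evolved.sorted.bam",
    # Pile up reads over the reference and call SNPs/indels
    # (-mv keeps variant sites only).
    f"bcftools mpileup -f {ref} evolved.sorted.bam"
    " | bcftools call -mv -Ov -o evolved.vcf",
]
for cmd in cmds:
    subprocess.run(cmd, shell=True, check=True)
```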
Xylose growth experiments. Strains were grown overnight in SD -LEU -URA, 1.4% glucose, 0.6% xylose medium. Cells were pelleted and resuspended to a final OD600 of 0.1 in 1 mL of SD -LEU -URA, 2% xylose medium in a 24-well plate. The plate was then placed into a BioTek Synergy 4 reader, preheated to 30 °C, and growth was monitored by taking the OD600 every fifteen minutes for 60 hours.
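As a side note, the specific growth rates used to select evolved isolates can be estimated from such OD600 series with a simple log-linear fit over the exponential phase. The following is a minimal sketch with synthetic data; the 15-minute sampling interval matches the protocol, but the readings and growth rate are illustrative, not the study's data.

```python
import numpy as np

def specific_growth_rate(t_hours, od600):
    """Estimate the specific growth rate (1/h) from the exponential phase
    of an OD600 time series via a log-linear least-squares fit:
    ln(OD) = ln(OD0) + mu * t during exponential growth."""
    log_od = np.log(np.asarray(od600, dtype=float))
    mu, _intercept = np.polyfit(np.asarray(t_hours, dtype=float), log_od, 1)
    return mu

# Example: readings every 15 min (0.25 h) over a synthetic exponential phase.
t = np.arange(0, 10, 0.25)
od = 0.1 * np.exp(0.08 * t)          # synthetic culture with mu = 0.08 1/h
print(f"mu = {specific_growth_rate(t, od):.3f} 1/h")
```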
For experiments associated with Fig. 4, strains were grown overnight in SD -LEU -URA, 1.4% glucose, 0.6% xylose medium. Cells were pelleted and resuspended to a final OD600 of 0.1 in 5 mL of YP, 2% xylose medium (Fig. 4a), or YP, 0.5% glucose, 0.5% xylose (Fig. 4b,c), in standard culture tubes at 30 °C. Samples were taken periodically and monitored for sugar concentrations and OD600 for 48 hours.
Analysis of glucose and xylose concentrations. Sugar concentrations were quantified on an Agilent Technologies 1200 series HPLC equipped with an Aminex H column. Samples were filtered through 0.45 μm VWR filters to remove cells, and 5 μl of each sample was injected onto the column, preheated to 50 °C. The column was eluted with 4 mM H2SO4 at a flow rate of 600 μl/min for 25 minutes. Sugars were monitored by refractive index detection, and concentrations were calculated by comparing peak areas to known standards.
Radioactive sugar uptake. Uptake of 14C-xylose was used to determine the Michaelis-Menten parameters for Hxt7(F79S). 1-14C-xylose was purchased from American Radiolabeled Chemicals. Twelve-mL overnight cultures grown in SD -URA, 1.4% glucose, 0.6% xylose medium were diluted to an OD600 of 0.1 in 50 mL of medium and allowed to grow to mid-log phase (OD600 0.5 to 0.8). Twenty OD600 units of cells were centrifuged at 3000 × g for 5 minutes and washed once with 10 mL of 0.1 M potassium phosphate buffer, pH 6.8. Cells were then resuspended in 300 μl of 0.1 M potassium phosphate buffer, pH 6.8, and warmed to 30 °C. Twenty-five μl of cells were then mixed with an equal volume of radiolabeled sugar solution, producing final mixed sugar concentrations between 10 mM and 400 mM. Ten seconds after mixing, the samples were filtered through 0.2 μm Whatman Nuclepore filters (GE Healthcare) and washed with 10 mL ice-cold 0.1 M potassium phosphate, 500 mM xylose buffer. Filters were subsequently placed in 4 mL Ecoscint XR scintillation fluid (National Diagnostics) and counted in a LS 6500 scintillation counter (Beckman-Coulter). KaleidaGraph software (Synergy Software) was used to plot the data and to derive Michaelis-Menten kinetic parameters for each transporter. All assays were performed in biological triplicate. One outlier with accelerated uptake was discarded from the 300 mM HXT7(F79S) data set.
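The kinetic parameters were fitted in KaleidaGraph; for readers without that software, an equivalent nonlinear least-squares fit can be sketched as follows. The uptake values below are hypothetical placeholders, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Initial uptake rate v as a function of substrate concentration s."""
    return vmax * s / (km + s)

# Hypothetical 14C-xylose uptake data: concentrations in mM, rates in
# arbitrary units (placeholders for the counted scintillation data).
s = np.array([10, 25, 50, 100, 200, 400], dtype=float)
v = np.array([1.1, 2.3, 3.8, 5.5, 7.0, 8.1])

(vmax, km), _cov = curve_fit(michaelis_menten, s, v, p0=[v.max(), 100.0])
print(f"Vmax = {vmax:.2f}, Km = {km:.1f} mM")
```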
"year": 2016,
"sha1": "6f02db996bbae0dbebbb03ce63a4d445817ca95b",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep19512.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "680a301484eec267ee66019acd8b4af98f1d2746",
"s2fieldsofstudy": [
"Biology",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Factorial structure of the Manchester Short Assessment of Quality of Life in patients with schizophrenia-spectrum disorders
Purpose: Subjective quality of life is a central patient-reported outcome in schizophrenia-spectrum disorders. The Manchester Short Assessment of Quality of Life (MANSA) is an established and widely used instrument for its assessment. The present study is a secondary analysis of large schizophrenia studies and aims to establish the factorial structure of the MANSA with a rigorous two-step methodology. Methods: A sample of 3120 patients was randomly split into two datasets; the first includes two thirds of the patients and serves as the calibration sample (N = 2071) and the second includes one third of them and serves as the validation sample (N = 1049). We performed an exploratory factor analysis with the calibration sample followed by a confirmatory factor analysis with the validation sample. Results: Our results for both samples revealed a model with adequate fit comprising two factors. The first factor encompasses eight items measuring satisfaction with a variety of life and health-related aspects of quality of life, whereas the second consists of four items assessing satisfaction with living environment, comprising living alone or with others, accommodation, family, and safety. These two factors correlate differently with socio-demographic characteristics such as age and living conditions. Conclusions: Future trials and service evaluation projects using the MANSA to measure quality of life should take into account that satisfaction with living environment may be distinct from satisfaction with other life and health-related aspects of quality of life.
Introduction
Subjective quality of life (SQoL) is regarded as an important outcome in clinical practice and research [1][2][3][4] with patients with psychosis. One of the most widely used instruments to assess SQoL [1] is the Manchester Short Assessment of Quality of Life (MANSA [5]). The MANSA is based on Lehman's [6] conceptualisation of quality of life and explores satisfaction with a number of life domains. It was created primarily for use in patients with schizophrenia-spectrum disorders and has been used in more than 700 studies. It is a brief, easily administered instrument that was developed as a shortened version of the Lancashire Quality of Life Profile (LQLP [4]) in order to reduce the length of the assessments and respondents' fatigue. Thus, it can be easily included in research designs that involve extensive evaluations and also used in routine clinical practice.
An additional strength of this instrument is that the latent concept of quality of life measured is not specific to health-related issues. As such, it can be used to compare patients with psychosis to patients suffering from other types of mental illness, or to the general population. For these reasons, the MANSA offers an advantage over more extensive and health-oriented quality of life instruments such as the WHO-QoL-bref [7] or the SF-36 [8]. Despite its extensive use, the factorial structure of the MANSA has not been established using rigorous statistical methods. A question remains as to whether the different MANSA items assess a unidimensional general appraisal of quality of life and life satisfaction [9], or whether the MANSA assesses distinct latent constructs [10].
Previous attempts were made to answer this question, but they were based on incomplete versions of the instrument. Priebe et al. [11] examined psychometric properties of the DIALOG, a therapeutic intervention that includes eight of the twelve MANSA SQoL items. Based on a sample of 271 patients, their aim was to test the feasibility of using the data extracted from the intervention as a valid SQoL patient report, and thus exploration of the complete MANSA structure was beyond their scope. Similarly, Eklund and Bäckström [12] measured the properties of only nine of the twelve MANSA items as part of an examination of SQoL determinants, using a sample of 161 patients. Both studies identified a two-factor structure of the MANSA. However, these studies cannot provide definitive information about the factorial structure of the entire MANSA SQoL instrument, as they did not include all of the items, were based on small, local samples, and did not perform a confirmatory factor analysis (CFA).
When validating an instrument, most analyses have the significant disadvantage of exploring and testing the model in only one sample. A proper methodology requires calibrating and validating the model in two different samples/ sets. To address the above issues, a rigorous and systematic examination of the MANSA is required, including both an exploratory factor analysis (EFA) to develop a factorial model and a CFA to validate the findings in a different sample. Such systematic examination will allow us to provide recommendations for an accurate use of the instrument in clinical practice and research.
Procedure
For the purpose of the present analysis, we merged the data of nine different studies that assessed SQoL using the MANSA in patients with schizophrenia-spectrum disorders (ICD-10: F20-F29 [13]). Patients were above 18 years old, had the capacity to provide informed consent, and had sufficient command of the language of the country where they were assessed. Those suffering from any type of organic brain disorders or cognitive impairment were excluded. Overall, the merged database included N = 3120 patients. Details of the included studies can be seen in Table 1. When data on SQoL were available for more than one time point, we opted to include only baseline scores, to obtain as much data as possible from each study.
Measures
The subjective component of the MANSA encompasses 12 items that measure satisfaction with life as a whole, job or being unemployed, financial situation, number and quality of friendships, sex life, leisure activities, accommodation, personal safety, people living with or living alone, family relationships, physical health, and mental health. Satisfaction is measured on a 7-point Likert scale, from 1: could not be worse to 7: could not be better. The instrument is clinician-administered or self-rated and takes up to 15 min to complete. It has been found to have satisfactory psychometric properties in terms of concurrent validity, overall reliability [6], and internal consistency [14].
Statistical analyses
The analyses were carried out using SPSS v.24. Before proceeding, we inspected the data for normality, outliers, and missing values. Visual inspection of the histograms revealed a normal distribution of the data, and boxplot inspection showed no outliers. Missing data amounted to less than 4% across the 12 MANSA items, with the exception of item 5 (satisfaction with sex life), where the percentage was 12%. First, we randomly split the sample (N = 3120) into two separate datasets, the first including two thirds of the initial sample (N = 2071) and serving as the calibration sample and the second including one third (N = 1049) and serving as the validation sample. Potential differences between the two samples were tested using t-tests or Chi-square analyses. To validate the MANSA structure, we performed an EFA using maximum likelihood estimation with the calibration sample and a CFA with maximum likelihood with the validation sample, so as to corroborate the solution offered by the first analysis.
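To make the two-step procedure concrete, here is a minimal sketch of such a 2/3-1/3 random split in Python. The dataframe layout and seed are assumptions for illustration, not the authors' code (their analyses used SPSS and AMOS).

```python
import pandas as pd

def calibration_validation_split(df: pd.DataFrame, calibration_fraction=2/3):
    """Randomly split a patient-level dataframe into a calibration set
    (used for the EFA) and a validation set (used for the CFA)."""
    shuffled = df.sample(frac=1, random_state=42).reset_index(drop=True)
    n_cal = int(round(len(shuffled) * calibration_fraction))
    return shuffled.iloc[:n_cal], shuffled.iloc[n_cal:]

# With N = 3120 rows this yields 2080 and 1040 patients, close to the
# paper's 2071/1049 (their split was evidently not an exact 2:1 ratio).
```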
For the EFA, we applied oblique rotation with Kaiser normalisation instead of varimax, so as to allow for possible correlations between the factors [15]. Pairwise deletion was used to handle missing data [16]. To determine the number of significant Eigenvalues to be extracted from the data, we ran a parallel analysis with Monte Carlo simulation [17]. Finally, we used the JASP software to calculate the omega coefficient (ω) for each of the obtained factors to check their reliability.
We also performed sensitivity analyses by repeating the EFA, first excluding the first item (satisfaction with life as a whole), to examine whether the factorial structure is influenced by this item due to its generic nature, and second by omitting item 5 (satisfaction with sex life), due to the amount of missing data for this item.
For the CFA, we used AMOS (Analysis of Moment Structures) with the validation sample. Although the Chi-square test with a p value < .05 is commonly used to judge a model's goodness of fit, this is not recommended for large datasets, as it is influenced by the sample size [18]. To account for this drawback, and following Hu and Bentler's [19] recommendations for avoiding Type I and II errors, we used a combination of indexes to estimate the goodness of fit of our model. Specifically, we used the Root Mean Square Error of Approximation (RMSEA), which assesses the errors in fitting the data to the covariance matrix, with values below .05 representing an excellent fit and narrow confidence intervals from .00 to .08 indicating a good model fit [20]. We also considered the Comparative Fit Index (CFI), which compares the hypothesised model to an unfit (null) model, delivering a measure of complete variation of the data and showing an adequate fit when values are > .95 [19] and an acceptable fit when > .90 [21].
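For reference, the RMSEA and CFI reported below can be computed from the model and null-model chi-square statistics with the standard textbook formulas; the sketch is not the AMOS implementation itself, merely a hedged illustration of the quantities involved.

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation:
    sqrt(max(chi2 - df, 0) / (df * (n - 1))); values below .05
    are conventionally read as excellent fit."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_model, df_model, chi2_null, df_null):
    """Comparative Fit Index relative to the independence (null) model;
    values above .90 are conventionally read as acceptable fit."""
    d_model = max(chi2_model - df_model, 0.0)
    d_null = max(chi2_null - df_null, 0.0)
    return 1.0 - d_model / max(d_null, d_model, 1e-12)
```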
Finally, to check whether the same general model specification holds across groups, we examined measurement invariance for gender (male, female), service setting (inpatients, outpatients), and living situation (alone, other). We did this by pooling the general fit across groups and checking configural, metric (factor loadings are equal across groups), scalar (the observed scores are related to the latent scores regardless of the group), and strict invariance (the residuals are equal, showing the same amount of error across groups) across categories for each of these variables. Then, we performed additional analyses to assess whether the proposed factors showed distinct associations with those variables, as an indication of their distinctive nature. Concretely, we used three independent-samples t-tests to examine their relationship with gender, service setting, and living situation. We also carried out a Pearson correlation analysis to test their relationship with age.
Sample characteristics
The socio-demographic and clinical characteristics of the total sample can be seen in Table 2, and descriptive statistics for the MANSA items in both samples in Table 3. In both samples, patients had the highest scores on items evaluating satisfaction with accommodation, people living with, safety, and family (items 7, 8, 9, and 10) and the lowest on items assessing finance and sex life (items 3 and 5).
Results of the exploratory factor analysis (calibration sample)
The correlation matrix showed that all the items correlated with each other. The parallel analysis indicated the existence of two factors with significant Eigenvalues. The first factor (satisfaction with life and health-related aspects) had an Eigenvalue of 4.05 and included items assessing satisfaction with life as a whole, job or being unemployed, financial situation, number and quality of friendships, sex life, leisure activities, and physical and mental health (items 1, 2, 3, 4, 5, 6, 11, and 12); the second factor (satisfaction with quality of living environment) had an Eigenvalue of 1.12 and included items assessing satisfaction with accommodation, personal safety, people living with or living alone, and family relationships (items 7, 8, 9, and 10). The correlation between the two factors was r = .62. The loading coefficients per item (in bold) together with the reliability of the two factors can be seen in Table 4.
When repeating the analyses after excluding item 1 (satisfaction with life as a whole), the structure remained the same, although the reliability of Factor 1 (satisfaction with life and health-related aspects) was lower (coefficient omega = .730). Likewise, excluding item 5 (satisfaction with sex life) did not change the structure, but the reliability of Factor 1 was again lower (coefficient omega = .767). We therefore adopted the two-factor solution including all the MANSA items.
Results of the confirmatory factor analysis (validation sample)
The CFA confirmed the solution provided by the EFA. The RMSEA fit index was higher than .05 but had a narrow confidence interval, revealing an acceptable goodness of fit for the model [RMSEA = .067; 95% CI (.060, .075)], as did the CFI (= .90). The standardised item loadings for each factor are shown in Fig. 1. All loadings were significant and exceeded .40, ranging from .45 to .68. Finally, the inter-factor correlation was r = .77, below the threshold of .80, confirming the existence of distinct multidimensional factors comprised under the latent SQoL construct.
The analysis excluding item 1 (satisfaction with life as a whole) showed a similar but slightly poorer fit [RMSEA = .068; 95% CI (.060, .077)], whereas the CFI was marginally below the acceptable threshold (CFI = .89). The results when excluding item 5 (satisfaction with sex life) were similar [RMSEA = .072; 95% CI (.064, .081); CFI = .89]. In addition, the comparison of the two-factor solution with a single-factor model encompassing all the items revealed no significant differences [χ2 diff (1) = 1.596; p = .10], indicating that the single-factor model fits the data equally well. For service setting, the conditions for invariance testing were not met; thus, further invariance testing was stopped and potential differences in Factors 1 and 2 for patient type could not be explored further.
Relationships of factor 1 (life and health-related aspects) and 2 (quality of living environment) with gender, service setting, age, and living situation
The t-test analyses showed that men and women did not differ statistically on Factor 1 (satisfaction with life and health-related aspects) [t(3115) = .053; p = .958] or Factor 2 (satisfaction with quality of living environment) [t(2463) = − 1.276; p = .202]. Age showed a weak positive correlation with Factor 1 (r = .036; p < .05) and no correlation with Factor 2. Finally, Factor 1 did not differ between people living alone and those with another living situation such as family, friends, or sheltered housing [t(2662) = − 1.038; p = .299], but Factor 2 was clearly different between the two living situations [t(2661) = − 5.100; p < .001], with people living alone reporting lower scores.
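As an illustration of the group comparisons above, here is a hedged sketch using SciPy's independent-samples t-test and Pearson correlation; the factor scores and ages below are simulated placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated Factor 2 scores for two living situations (placeholders only).
factor2_alone = rng.normal(4.4, 1.0, size=800)
factor2_other = rng.normal(4.7, 1.0, size=1800)

t_stat, p_value = stats.ttest_ind(factor2_alone, factor2_other)

# Pearson correlation of a factor score with age (again, simulated data).
age = rng.normal(40, 12, size=500)
factor1 = rng.normal(4.0, 1.0, size=500)
r_age, p_age = stats.pearsonr(age, factor1)

print(f"t = {t_stat:.3f}, p = {p_value:.4f}; r_age = {r_age:.3f}")
```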
Main findings
Our results provide evidence that SQoL as measured by the MANSA comprises two distinct but correlated factors. The first factor incorporates several indicators of life and health, such as satisfaction with life as a whole, physical and mental health, leisure activities and friends, job/unemployment and financial situation, and sex life. The second factor encompasses satisfaction with family, accommodation, safety, and their living situation (whether they are living with someone or alone). This factor expresses a latent variable related to satisfaction with quality of living environment, which may be regarded as a separate aspect of SQoL. The model has an adequate fit and includes all the MANSA items in its two-factor structure. Sensitivity analyses performed by excluding certain items did not considerably change the structure, further supporting the robustness of the model. Additionally, though the model does not show better fit when compared to a single-factor model, it provides a more in-depth examination of quality of life components, and thus, its use can be considered more advantageous in research and clinical practice. The confirmation of measurement invariance followed by the distinct associations of the two factors with socio-demographic variables added evidence for their discrete nature and distinctive rating value for exploring associations of SQoL with other variables.
Strengths and weaknesses
This is the first systematic examination of the MANSA factorial structure. The analyses were based on a large sample of patients with schizophrenia-spectrum disorders, from different countries. This allowed us not only to have appropriate statistical power, but also to include variation in our sample related to different contexts and cultures. Additionally, we used a methodologically sound procedure to cross-check the accuracy of the proposed solution by randomly splitting the sample and by implementing two separate analytical procedures. Sensitivity analyses were designed and carried out to further check the robustness of the findings.
However, the present study also has some limitations. First, the reliability indexes for both factors were acceptable but not high, though consistent with the reliability index of the SQoL component of the MANSA (α = .74) reported in its initial validation [5]. Similarly, the goodness-of-fit indexes for the CFA revealed an acceptable but not excellent fit of the model, and excluding items as a means of addressing this did not change the fit dramatically. Second, whereas all other items had under 4% missing values, the item assessing satisfaction with sex life had a higher percentage of missing data (12%), perhaps reflecting patients' or clinicians' reluctance to speak about this issue [22]. Satisfaction with sex life also had the lowest loading on the first SQoL factor. Although the sensitivity analyses suggested that the item can be retained without significant reliability changes, sex life perhaps demands further exploration as a separate domain of quality of life, particularly considering that its scores are the lowest among the MANSA items, in line with previous findings [23,24]. Third, the study samples included people at different stages of their illness. It is known that SQoL ratings can vary between patients experiencing their first psychotic episode and those with a longer duration of illness [25]. However, we could not test whether this influenced our results because reliable information on illness duration was not available across all of the included studies. Similarly, the fact that the MANSA can be administered by clinicians or self-reported may have influenced patients' SQoL ratings, but data on the administration form were not available across studies. Nevertheless, evidence from such comparisons of other instruments used in routine clinical practice revealed no significant influence of the administration form on the reported outcomes [26].
Comparison with previous literature
Previous analyses of the MANSA items also supported a two-factor solution [11,12]; however, the composition of the factors was different from the one proposed by the present results, perhaps due to the lack of inclusion of all items and the small sample sizes in the former studies. Eklund and Bäckström [12] did find that satisfaction with family, personal safety, and accommodation clustered together, whilst Priebe et al. [11] found that satisfaction with personal safety was included in a different factor, along with satisfaction with mental and physical health. Yet, the joint evidence from these three studies strongly supports a two-factor model for the MANSA. Our study, in consideration of its methodological strengths (higher statistical power, inclusion of all items, and analysis of data from international samples), might be regarded as better suited than previous ones to identify the nature of the specific items included within each factor.
Implications
A two-factor structure of the MANSA may provide a hypothesis for explaining the frequently replicated finding that patients with schizophrenia report high levels of SQoL despite their often disadvantaged living conditions, a phenomenon known as the "disability paradox" [27]. Indeed, previous studies have consistently reported weak associations between objective indicators and SQoL [28,29]. The present analysis specifies those findings further. Specifically, the latent domain related to quality of living environment appears to be correlated with objective living conditions, whilst the other factor (satisfaction with life and health) does not. It may be that satisfaction with life and health is more dependent on a general appraisal tendency, which is not influenced by objective life conditions, whilst satisfaction with living environment is more directly affected by real-life conditions. Further studies should confirm whether the different patterns of correlations between the two factors and objective life conditions can be replicated, using the proposed two-factor structure of the MANSA. Also, future research should explore the feasibility of a bifactor solution, testing whether SQoL as measured by the MANSA could in fact represent an underlying construct with two domain-specific factors.
A two-factor model of quality of life may help to evaluate interventions of different types. For example, interventions that are principally aimed to improve satisfaction with health or personal life domains might be better assessed using a subscale reflecting the items included in our factor related to satisfaction with personal life and health. Social interventions targeting housing and neighbourhoods may, instead, benefit from a more specific and sensitive subscale to measure their effects, which may be represented by our factor related to satisfaction with living environment. Awareness of these two latent constructs within the MANSA can therefore inform how this instrument is used in evaluation protocols of routinely provided mental health care, or in research studies of novel interventions.
"year": 2019,
"sha1": "fe5d7d97433caaba3f0af24ec0599d880c7d361a",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11136-019-02356-w.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "ddb0de5972d721a3554a674b6354f71a6f058b16",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
High Performance Recycling of Polymers by Means of Their Fluorescence Lifetimes
Technical polymers can be identified by means of their remarkably strong auto fluorescence. The time constants of this fluorescence proved to be characteristic of the individual polymers and can be determined economically by integrating procedures. The resulting unequivocal identification is presented as a basis for sorting polymers for recycling. Furthermore, polymeric materials were doped with fluorescent dyes, allowing a fine-classification of special batches.
Introduction
The recycling of organic polymers is attracting increasing interest in both research and technology. Efficient processes are needed because of increasing environmental pollution by polymers ("plastic planet"). Moreover, recycling may open an economic source of organic materials. The majority of technical polymers are thermoplasts, and re-melting and moulding is attractive for their easy re-use. However, the immiscibility and incompatibility of organic polymers are the main obstacles, because a lack of uniformity as low as 5% lowers the value of polymers appreciably, and even higher uniformity is required for high-performance materials. Pure polymers for recycling may be collected in polymer-processing factories; however, the majority of collected material forms mixtures, where efficient sorting is required before processing. Machine-based recognition of polymers is a prerequisite for such processes, and methods using density or electrostatic properties have been described [1]-[3]. Optical methods are more attractive because of their simple, stable and efficient technology, and fluorescence is advantageous [4]-[8] because of its unproblematic light path and detection. The doping of polymers with fluorescent markers [9] and their re-identification by the spectral resolution of their fluorescence in combination with a binary coding was described in preceding papers [10] [11]. This demonstrated the efficiency of applying fluorescence. However, there are two topics for fundamental improvement: 1) only doped material can be recycled, so recycling has to be targeted already in the production of the final products; undefined wastes cannot be recycled in this way; 2) the spectral resolution for every flake costs appreciable effort in detection and signal processing. Optical processes for the sorting of undoped material would bring appreciable progress and would even allow working up deposited material.
Auto Fluorescence of Polymers
The identification of polymers was concentrated on the technical high-performance products Luran®, Delrin® and Ultramid®. We found an appreciably strong auto fluorescence of these technical materials with standard optical excitation at 365 nm, where mercury lamps may be applied as a light source; see Figure 1. Slight variations of the wavelength of excitation do not alter the spectra.
The investigated polymers exhibit individual shapes of their auto fluorescence spectra; see Figure 1. We preferred fluorescence excitation at 365 nm, where intense light sources are available; a slight variation of the wavelength of excitation does not influence the fluorescence. The spectra may be used for the identification and sorting of polymers by means of pattern-search methods. Thus, even undoped material can be sorted; however, this still requires an appreciable computational effort. As an alternative, we investigated the lifetimes of the auto fluorescence and found remarkable differences between various polymers; see Table 1, lines 1 to 3. Fluorescence decay proceeds essentially first order in time with the time constant τ. Minor, less important bi-exponential components (τbi) could be detected; however, a mono-exponential interpretation is by far sufficient for identification. The decay curves can easily be split into two branches, each representing a single component of the fluorescence lifetime. The decay times τ of Delrin®, Ultramid® and Luran® differ by factors of about two, allowing an unambiguous identification of the polymers.
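A mono-exponential fit of a decay trace of the kind described above can be sketched as follows; the trace is synthetic and the fit routine is a generic stand-in for whatever fitting software the authors used.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, i0, tau):
    """First-order fluorescence decay I(t) = I0 * exp(-t / tau)."""
    return i0 * np.exp(-t / tau)

# Synthetic decay trace (time in ns); real traces come from the detector.
rng = np.random.default_rng(0)
t = np.linspace(0, 40, 400)
trace = mono_exp(t, 1000.0, 5.0) + rng.normal(0, 5, t.size)

(i0, tau), _cov = curve_fit(mono_exp, t, trace, p0=[trace.max(), 3.0])
print(f"tau = {tau:.2f} ns")
```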
The decay curves of the auto fluorescence of the polymers are reported in Figures 2(a)-(c) and clearly indicate their pronounced differences. These can be seen even more easily in the fitted functions in Figure 2(d). A simple logarithmic representation of the right branch of the decay curves is by far sufficient for determining the differences in lifetimes; right-hand scales in Figure 2.
Fluorescent Labels
An additional labelling of the polymers by means of fluorescent dyes was considered for identifying not only the basic polymeric material but also special technological batches. We applied the perylene ester 1 (PTIE), the perylene carboxylic bisimide 2 (S-13) and the terrylene carboxylic bisimide 3 (S-13TBI) because of their light-fastness and high fluorescence quantum yields. The fluorescence of these dyes proceeds in different spectral regions, forming three channels for detection, as can be seen from their fluorescence spectra in Figure 3. The spectra in various polymeric materials differ only slightly from the spectra in solution because the solvatochromism of the dyes is weak. As a consequence, the three channels of fluorescence can be taken to be invariant with respect to the tested material. The labelling of polymers can proceed with a binary coding where the first or the second dye, or both, and so on, are applied, resulting in 2^n − 1 possibilities for labelling, with n as the number of applied fluorescent dyes. Thus, seven individual batches may be labelled for each polymeric material with the application of dyes 1 to 3. The fluorescence spectra may be applied for the identification of the labelling with the individual dyes; the formation of the second derivative of the spectra improves the security of detection [10] [11].
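The binary coding can be made explicit with a few lines of code; the sketch below simply enumerates the 2^n − 1 non-empty dye combinations for the three dyes named above.

```python
def batch_codes(dye_names):
    """Enumerate the 2**n - 1 possible dye combinations for n dyes;
    each non-empty subset corresponds to one batch label."""
    n = len(dye_names)
    codes = {}
    for mask in range(1, 2 ** n):
        combo = tuple(d for i, d in enumerate(dye_names) if mask & (1 << i))
        codes[mask] = combo
    return codes

# Seven labels for the three dyes used in this work.
for code, dyes in batch_codes(["PTIE", "S-13", "S-13TBI"]).items():
    print(f"{code:03b} -> {' + '.join(dyes)}")
```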
Furthermore, we found that the time constants of fluorescence decay vary both with the applied dye and with the applied polymer; see Table 1. Such combinations can be taken as an additional pattern for recognizing the entire batch of a polymer, further improving identification. Moreover, the determination of decay times needs no calibration of fluorescence intensities (such a calibration may be applied with the auto fluorescence of polymers as internal standards), because the exponential decay remains the same independent of the starting intensity and of some dead time before acquisition; this may be advantageous even for flakes that are very inhomogeneous in size and shape. We tested the reproducibility of the determined time constants of fluorescence decay and found standard deviations only in the second decimal; see Table 2 for examples. As a consequence, the reproducibility is by far good enough for unequivocal discrimination between the individual samples; on the other hand, even an absolute determination of the time constant is not necessary as long as the complete setup produces sufficiently reproducible values.
Time-Resolved Detection
The first-order exponential decay curves need not be completely registered and fitted, because there are well-established mathematical procedures [12]-[14] for determining the time constant from measurements at two points of the decay curve or, even more appropriately, from two integrated regions, preferably before and after the half-life (t1/2). This is shown schematically with two Gaussian-shaped samplings in Figure 4, where the integrating measurements improve the signal-to-noise ratio. Integration times of one to two ns appear appropriate for a decay time of about 5 ns, typical of the majority of fluorescent structures. The fluorescence is induced by periodically pulsed excitation light; a repetition period of about 70 ns allows sufficiently complete fluorescence decay even in the unfavourable case of a 10 ns lifetime. As a consequence, an unproblematic repetition frequency of about 15 MHz results. The two regions of integration may be selected by means of two phase-sensitive detectors (PSD) and a phase shift between the two analyzing signals for sampling. These need not be applied for each pulse, but may be distributed, for example, between two consecutive pulses.
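One standard variant of this two-window approach is rapid lifetime determination with two adjacent gates of equal width, for which the gate-area ratio yields τ = Δt / ln(A1/A2). Below is a minimal numerical sketch; the decay is synthetic and the gate positions are illustrative, not the authors' settings.

```python
import numpy as np

def rld_tau(t, intensity, t1, gate_width):
    """Rapid lifetime determination with two adjacent gates of equal width:
    for I(t) = I0*exp(-t/tau) the gate-area ratio A1/A2 equals
    exp(gate_width/tau), hence tau = gate_width / ln(A1/A2)."""
    dt = t[1] - t[0]                         # uniform sampling assumed
    g1 = (t >= t1) & (t < t1 + gate_width)
    g2 = (t >= t1 + gate_width) & (t < t1 + 2 * gate_width)
    a1 = intensity[g1].sum() * dt            # rectangular-sum gate areas
    a2 = intensity[g2].sum() * dt
    return gate_width / np.log(a1 / a2)

t = np.linspace(0.0, 40.0, 4000)             # ns
decay = np.exp(-t / 5.0)                     # synthetic trace, tau = 5 ns
print(f"recovered tau = {rld_tau(t, decay, t1=2.0, gate_width=2.0):.2f} ns")
```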
A further improvement of the signal-to-noise ratio may be obtained by accumulating the detection signals. One can roughly calculate the upper limit of detection for sorting with industrial recycling flakes of at most 10 mm in size. A minimum spacing of 20 mm between individual flakes seems realistic, with a transport speed of at most 500 m/s, under which about 200 excitation pulses at a 15 MHz repetition rate are obtainable for a single flake; this should be more than sufficient for a good signal-to-noise ratio for unequivocal sorting. An average mass of about 25 mg was found for standard industrial recycling flakes, resulting in a sorting capacity of 1.5 tons of material per hour. This has to be taken as an upper technological limit for permanent sorting of polymers by the described method. The bottleneck for such capacities seems to be the mechanics rather than the methodology of detection. Both electronics and mechanics become much simpler for lower demands.
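The throughput estimate can be checked with a few lines of arithmetic; the input numbers are taken from the text.

```python
flake_mass_g = 0.025         # average flake mass (25 mg)
flake_len_m = 0.010          # 10 mm flake
gap_m = 0.020                # 20 mm spacing between flakes
belt_speed_m_s = 500.0       # upper transport limit assumed in the text
rep_rate_hz = 15e6           # 15 MHz pulse repetition

flakes_per_s = belt_speed_m_s / (flake_len_m + gap_m)          # ~16,667/s
tons_per_h = flakes_per_s * flake_mass_g * 3600 / 1e6          # ~1.5 t/h
pulses_per_flake = rep_rate_hz * flake_len_m / belt_speed_m_s  # ~300

# ~300 pulses fit into one flake transit, comfortably above the text's
# conservative estimate of 200 pulses per flake.
print(f"{flakes_per_s:.0f} flakes/s, {tons_per_h:.2f} t/h, "
      f"{pulses_per_flake:.0f} pulses per flake")
```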
Conclusion
The unequivocal identification of technical polymers by means of the time constants of their auto fluorescence decay is a promising method for sorting them for recycling. Time constants can be determined economically by phase-shifted integration of the fluorescence response to pulsed optical excitation. The auto fluorescence of polymers can be applied for the identification of the basic material, while doping with fluorescent dyes allows a further fine-classification of special batches. A binary coding of the doping with n fluorescent dyes results in 2^n − 1 possibilities for labelling batches.
Figure 2. Fluorescence decay of polymers at 573 nm in linear (left) and logarithmic (right) scales, with the characteristic of the 365 nm excitation light pulse as dotted lines and the mono-exponentially fitted decay functions as solid lines. (a) Fluorescence decay of Luran®; (b) fluorescence decay of Delrin®; (c) fluorescence decay of Ultramid®; (d) comparison of the fitted functions for Delrin® (solid line), Ultramid® (dotted line) and Luran® (dashed line).
Figure 4. Schematic first-order decay (solid line) with a time constant of τ = 14.4 ns, corresponding to a half-life t1/2 of 10 ns. Gaussian-shaped samplings at t = 5 ns (dashed curve) and at t = 15 ns (dotted curve).
Table 1. Fluorescence lifetimes of genuine polymers, of the fluorescence labels in chloroform solution, and of doped polymers. a) Fluorescence lifetime; b) additional bi-exponential component; c) wavelength of excitation in nm; d) wavelength of detection in nm.
Table 2. Test of reproducibility of the time constant of fluorescence decay, including the applied method; measurements with individually prepared and re-oriented samples of labelled granulates. a) Time constant of fluorescence decay; b) mean value, standard deviation s; c) wavelengths of excitation in nm; d) wavelengths of detection in nm.
"year": 2014,
"sha1": "8dddefdb5d67c9067b1095ea76464a73894c66c7",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=48915",
"oa_status": "GOLD",
"pdf_src": "Grobid",
"pdf_hash": "411755c4a9cdf76f00ef1c53755627afa8a1f8d1",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
MECHANICAL PROPERTIES OF CASTOR BEANS SUBJECTED TO DIFFERENT DRYING TEMPERATURES AIMING TO DISRUPT THE BEAN COAT
In the castor oil extraction process, the bean coat is abrasive to the equipment and releases substances that modify the oil color, reducing its quality. A potential solution would be to run the extraction by compressing only the endosperm. Due to lack of information, the objective of this study was to evaluate the influence of forced-air drying at 40, 60, 80 and 100 °C, and of farmyard drying, on the mechanical properties of the beans, aiming to break the bean coat. Castor beans were subjected to compression tests, in two perpendicular directions, at a strain rate of 0.6 mm s-1. Average values of force, deformation energy and strain, all at rupture, and stiffness were used to evaluate the effects of dehydration. The heat treatments did not alter the mechanical properties of castor beans; the strain and stiffness values discriminated the differences between the directions and had the lowest coefficients of variation. It was concluded that forced-air drying, more costly than farmyard drying, does not benefit decortication. However, regardless of the heat treatment used, mechanical stress lengthwise is the most suitable to promote decortication. KEYWORDS: conditioning, threshing, decortication, compression.
INTRODUCTION
The castor plant (Ricinus communis L.) is an oil crop of economic and social importance in Brazil, from whose grains an oil with excellent properties is extracted. Generally, the industry uses hot pressing of the whole grains; due to the high abrasiveness of the bean coat, or shell, the life of the extraction equipment is reduced and, additionally, an undesirable migration of pigment from the bean coat into the oil occurs. For these reasons, extracting the oil by compressing only the endosperm, without the bean coat or part of it, is advantageous, since it allows obtaining oils with lighter coloring, eliminates most of the abrasion problems, increases the extraction efficiency, promotes greater oil recovery and reduces energy requirements (RITTNER, 1996). However, in Brazil there are no decortication machines for castor beans. It is known that the proper design of a huller, which strongly interacts with the product, requires knowledge of the product's physical and mechanical properties (MOHSENIN, 1986). Under this perspective, several investigations were conducted aiming at the decortication of agricultural products considering their mechanical properties (PLIESTIC et al., 2006; ARAÚJO & FERRAZ, 2006; SIRISOMBOON et al., 2007; ARAÚJO & FERRAZ, 2008). Most often, good performance in decortication requires prior conditioning of the product to suitably modify some of its properties. GONELI et al. (2008) and GONELI et al. (2011) point out that dehydration significantly alters physical properties of castor fruits, such as density, porosity and volume, which are instrumental in the design of equipment and in the industrialization of grains. RIBEIRO et al. (2007) found that the maximum force and strain modulus of soybeans decrease with increasing water content. Evidence of anisotropic behavior was reported by RESENDE et al. (2007) when compressing beans in three mutually perpendicular directions. Despite the increased attention that the castor bean crop has received from researchers and the Brazilian government, the work published to date is not aimed at post-harvest handling and processing of the grains and their interactions with oil quality. The work of OLAOYE (2000) and, more recently, GONELI (2008) represent efforts to investigate properties of castor beans, but they are not targeted at decortication. The rupture force and corresponding strain, the deformation energy and the stiffness are important mechanical parameters and can support a decortication strategy. In this work, these parameters were used to evaluate the response of castor beans to various drying conditions.
MATERIALS AND METHODS
The castor fruits used were produced at the Central North Pole of the Paulista Agency of Agribusiness Technology (APTA), located in the city of Pindorama, SP. Bunches of castor fruits 'AL Guarany 2002', early ripened, harvested and detached from the clusters manually, were transported to the UNICAMP Agricultural Engineering College where, on the same day, the green fruits were separated from the dark-colored ones. Only dark-colored fruits were used. The initial water content of the fruit was determined, after 24 hours of storage at 4 °C ± 0.4 °C and 75% relative humidity, by the gravimetric method in a forced-air oven (model 320-SE, FANEM®) at 105 ± 1 °C until constant weight, with five repetitions (BRAZIL, 2009). The fruits used in the experiment were harvested in May and July 2010 from the same planting area.
CASTOR FRUITS DRYING. Drying runs were performed in the Drying Laboratory (UNICAMP Agricultural Engineering College) using a convective dryer with airflow perpendicular and/or parallel to the drying bed, which consists of a drying chamber with twenty drawers arranged in two columns, ventilation systems, a heating system using electric resistances, and an air flow and temperature control system. In each repetition, castor fruits were placed on a tray in a number sufficient to fill it and form a uniform layer, with an average weight of 636.39 g ± 57.41 g. Drying, with 4 repetitions of each treatment, was conducted with air temperatures of 40, 60, 80 and 100 °C, with a variation of ± 4 °C and an average air speed of 0.9 m s-1. Relative humidity and room temperature values were recorded by a thermo-hygrograph (model TH 508, CIBRAPAM®). The water loss during drying was monitored by weighing the tray at 15-minute intervals, using an analytical scale with a resolution of 0.01 g (model SL-3000, SCIENTECH®). The fruit in the tray was turned over every 30 minutes. Drying was stopped when the water content of the grains was estimated to lie between 4 and 8% (db), as indicated by the final weight of the tray with the product. This estimate was based on the initial moisture content of the fruit, considering only the water loss and a constant dry mass. After drying, the castor fruits were threshed manually. Additionally, farmyard drying on a concrete floor, as conventionally employed by producers, was performed and used as the control treatment. The final water content of the grains after each drying treatment was determined by the gravimetric method (BRAZIL, 2009). After drying and threshing, grains were subjected to compression tests.
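The dry-basis (db) water content used throughout this paper is the gravimetric ratio of water mass to dry mass; a one-function sketch follows, with illustrative masses only.

```python
def moisture_db(mass_wet_g: float, mass_dry_g: float) -> float:
    """Dry-basis water content (%): water mass per unit dry mass,
    as used for the 4-8% (db) drying end point."""
    return 100.0 * (mass_wet_g - mass_dry_g) / mass_dry_g

# e.g. a 636.4 g tray load drying toward the 4-8% (db) target
print(f"{moisture_db(636.4, 590.0):.1f} % (db)")   # illustrative masses
```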
COMPRESSION TESTS. For the mechanical tests, a universal testing machine (model TA 500 Texture Analyser, LLOYD Instruments©) of the Laboratory of Mechanical Properties of Biological Materials, UNICAMP Agricultural Engineering College, was used. Grains were tested at a strain rate of 0.6 mm s-1 in two perpendicular directions, width and length (Figure 1), between rigid, flat and parallel plates, with 15 repetitions for each direction and four repetitions for each drying treatment. The initial grain size in the direction of load application was measured with a digital caliper (model 727-2001, STARRETT®) with a resolution of 0.01 mm. Preliminary tests indicated that maximum deformations of 2.50 mm lengthwise and 2.00 mm widthwise were sufficient to load the grain beyond rupture of the bean coat; for this reason, they were adopted for all tests. For each test, the force-deformation curve up to breakage of the bean coat, identified by a sudden reduction in compressive force, was obtained. From the curve, the maximum force, the deformation energy to maximum force, the strain and the stiffness were determined (MOHSENIN, 1986; OLAOYE, 2000). The energy required for the rupture of the bean coat was obtained by calculating the area under the force-deformation curve up to the moment of rupture, using the NEXYGEN 3.0 software by Lloyd Instruments©. The maximum force and the corresponding strain were read from the force-deformation curves. Stiffness was calculated as the ratio between the maximum force and the corresponding strain (GONELI, 2008).
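The parameters extracted from each force-deformation curve can be computed as below; this is a minimal sketch in which the curve is synthetic and the code stands in for the NEXYGEN package used by the authors.

```python
import numpy as np

def rupture_parameters(deformation_mm, force_n):
    """From a force-deformation curve up to bean-coat rupture, return the
    maximum force (N), the deformation energy up to that point (J, with mm
    converted to m), and the stiffness (N/mm) as defined in the text."""
    i = int(np.argmax(force_n))
    f_max, d_max = force_n[i], deformation_mm[i]
    d_m = deformation_mm[: i + 1] / 1000.0            # mm -> m
    f = force_n[: i + 1]
    # Trapezoid rule for the area under the curve up to rupture.
    energy_j = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(d_m)))
    return f_max, energy_j, f_max / d_max

# Synthetic near-linear loading to ~60 N at 0.6 mm (illustrative only).
d = np.linspace(0.0, 0.6, 100)
f = 100.0 * d
print(rupture_parameters(d, f))    # ~ (60.0 N, 0.018 J, 100.0 N/mm)
```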
To analyze the values obtained in the compression tests under the different drying treatments, a randomized block design was used. Analysis of variance was performed, followed by mean comparisons between treatments and directions using the Duncan test (p < 0.05), with the aid of the statistical package SAS 9.0 from the SAS Institute Inc.©.
RESULTS AND DISCUSSION
DRYING CASTOR FRUITS. The average initial moisture of the fruits was 25.93% (db) for the May batch and 21.79% (db) for the July batch, with coefficients of variation of 22.63% and 26.22%, respectively. The beans, in turn, had average initial moisture contents of 12.35% (db) for the May batch and 11.39% (db) for the July batch, with coefficients of variation of 17.30% and 7.55%, respectively. Drying times varied more in the treatments at 40 and 60 °C: at 40 °C they ranged from 165 to 225 minutes and at 60 °C from 60 to 105 minutes. For the 80 and 100 °C treatments, the drying times showed no variation, being 60 minutes at 80 °C and 45 minutes at 100 °C. Farmyard drying of the fruits took 31 hours and 55 minutes.
Table 1 shows the average water content of the beans after drying. The farmyard treatment had the lowest variability of water content, since its slower drying process allows homogenization. Forced-air drying showed high variability of the final water content, with coefficients of variation between 16.46 and 23.89%, even with the samples being turned over during drying. This variability of the water content may have affected the mechanical properties of the beans. However, it is difficult to control water losses during drying for small samples with highly variable initial moisture content, especially at high drying temperatures. The final water content of the grains ranged between 4.66 and 7.79% (db), with higher temperatures resulting in lower water contents, due to the high drying rates and the difficulty in controlling the water content. The average final water content across all drying treatments was 6.60% (db), with a coefficient of variation of 27.36%.
COMPRESSION TESTS. Force-deformation curves. The general appearance of the characteristic curves obtained from the compression tests after drying is illustrated in Figure 2. Biological yielding was not observed, and in both directions rupture was similar to that of fragile materials such as sunflower seeds (GUPTA & DAS, 2000). Maximum rupture force. Table 2 shows the average values of the maximum rupture force of the bean coat for each heat treatment, obtained from compression tests in the two perpendicular directions. The highest average value of the maximum force was 70.02 N, along the width of the grain, for farmyard drying. The force values required to rupture the bean coat are relatively low compared to the 700 N needed to break the hazelnut (PLIESTIC et al., 2006), but are equivalent to those obtained by ARAÚJO & FERRAZ (2008) for cashew nuts and by SIRISOMBOON et al. (2007) for Jatropha curcas beans.
The variation of the average maximum force values showed no clear trend associated with the different drying temperatures. However, the farmyard and 100 °C treatments were significantly different lengthwise, which did not occur widthwise. It was also observed, by comparing the variations between length and width within the same treatment, that the average maximum force did not discriminate any possible anisotropic behavior for the 40, 60 and 80 °C treatments. Comparing the overall averages for all treatments, the value of the maximum force was higher along the width. GUPTA & DAS (2000) also observed higher values of maximum force in the length, or vertical, direction for sunflower beans.
Deformation energy. Table 3 shows the average values of deformation energy until rupture of the bean coat, length- and widthwise, for the drying treatments. Averages followed by the same capital letter in the column do not differ statistically from each other, and averages followed by the same letter in the line do not differ statistically from each other (Duncan, p < 0.05).
Similarly to the average maximum force values, the deformation energy showed no trend associated with increasing drying air temperature. The coefficients of variation increased, indicating variability in the force-deformation relationship up to the maximum force. Because of this high variability, deformation energy may not be a good parameter for decortication. On the other hand, it effectively discriminated the differences in behavior between length and width, because for all treatments the deformation energy was significantly lower lengthwise. GONELI (2008) obtained values of deformation energy until rupture of the castor bean coat in the thickness direction close to those found in this work, between 0.0256 and 0.0477 J, for water contents between 8 and 66%.
Rupture strain. Table 4 shows the average values of rupture strain of the bean coat, length- and widthwise, for the drying treatments. Averages followed by the same capital letter in the column do not differ statistically from each other, and averages followed by the same letter in the line do not differ statistically from each other (Duncan, p < 0.05).
The average strain values obtained show the difference between width- and lengthwise force application and discriminated the anisotropic behavior for all treatments. However, like the maximum force and the deformation energy, they showed no differences between drying treatments. The average strain to rupture the bean coat was 58.67% lower lengthwise, which shows that rupturing the bean coat in this direction may cause less damage to the endosperm. The coefficients of variation are within an acceptable range for biological materials, except for the rather high value of 37.29% for the 60 °C treatment, for reasons beyond our understanding; a similar observation was made for sunflower beans by GUPTA & DAS (1999). Lengthwise, only the 100 °C treatment differed statistically from the others, presenting the lowest value. Despite being statistically significant, this difference of 0.45% in strain between treatments is negligible for machine design, due to numerous factors. Widthwise, the 40 and 80 °C treatments were statistically different from each other, presenting the highest and lowest values. This difference of 1.4% in strain between these drying treatments can be significant when grading material for decortication, since, depending on the average sizes, it may cause endosperm rupture.
In the castor bean radiograph (Figure 3), showing average dimensions, it is observed that lengthwise, around the caruncle, there is a gap between the endosperm and the bean coat. This gap was estimated at 0.21 mm, equivalent to 1.43% of the length. Widthwise, a wider gap is observed, estimated at 0.38 mm, corresponding to 4.2% of the original width. These values correspond to only 34% and 41% of the average strain values found length- and widthwise, respectively, showing that the endosperm is compressed before rupture occurs. Therefore, the resistance of the endosperm contributes to the maximum rupture force of the bean coat. The average strain values, regardless of the heat treatment, were 4.29% lengthwise and 10.38% widthwise. The strain needed to reach rupture, in either direction, can be reduced by applying a high strain rate, due to the viscoelastic behavior of the grain (ARAÚJO & FERRAZ, 2006). However, this does not guarantee full disruption of the bean coat and its separation from the endosperm. FIGURE 3. Castor bean X-ray with the estimated average dimensions (mm), illustrating the gap (shadowed area) between the endosperm and bean coat (adapted from CARVALHO et al., 2010).
This X-ray of the castor bean (CARVALHO et al., 2010) was used to estimate the size of the endosperm and the bean coat, scaling the image using the average values for grain width and length and considering a castor grain with no physiological problems.
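The comparison of gap and strain percentages can be verified directly; the input values below are those quoted in the text (small deviations from the quoted 34% and 41% are rounding).

```python
gap_pct = {"length": 1.43, "width": 4.2}       # coat-endosperm gap, % of size
strain_pct = {"length": 4.29, "width": 10.38}  # average rupture strain, %

for axis in gap_pct:
    ratio = gap_pct[axis] / strain_pct[axis]
    print(f"{axis}: gap is {100 * ratio:.0f}% of the rupture strain")
# -> length: ~33%, width: ~40%; the endosperm is therefore compressed
#    well before the bean coat ruptures.
```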
Similar values of strain and maximum rupture force were found by OLAOYE (2000) for the castor beans 'Asbowu', 'Evahura' and 'Ojji', all of Nigerian origin. GONELI (2008) found strain values between 12.31 and 19.68% in the height, or resting, direction, higher than the values found for the length and width positions in this experiment, showing anisotropic behavior in the three orthogonal directions for the rupture of the castor bean coat. The values found by that author are relatively high, indicating that loading in the height (resting) direction would not be suitable for proper decortication aimed at obtaining the whole endosperm.
Stiffness. Table 5 shows the stiffness values, length- and widthwise, as affected by the drying treatments. Stiffness is an important property in designing hulling machines, since it allows estimating the force and strain variations to be imposed on the product. As expected, the average stiffness values behaved similarly to those of strain, showing the anisotropic behavior between the length and width directions but not discriminating the drying treatments. These values also corroborate the strain values, since the highest stiffness accompanies the smallest strain values. Stiffness showed the lowest coefficients of variation when compared to the other parameters.
Lengthwise, the farmyard treatment value was statistically different from the 40 and 60 °C treatments. Widthwise, the 60 °C and farmyard drying treatments were statistically different, presenting the highest and lowest stiffness values, respectively; the other drying treatments were statistically equal. The coefficients of variation of the stiffness values widthwise exhibit the lowest variability among the parameters determined in the compression tests, namely maximum rupture force, deformation energy at rupture and strain. The stiffness values were calculated from the maximum force and the corresponding strain, and the correlation of these parameters resulted in a decrease in variability (Table 2). This indicates that stiffness, together with the strain values, has the potential to be used in the design of sizing and hulling mechanisms.
Average stiffness values, regardless of the heat treatment, are larger lengthwise. GONELI (2008) found lower and intermediate castor bean stiffness values thicknesswise, ranging from 48.18 to 77.82 N mm−1 for water contents of 8 to 66% (db), thereby confirming the anisotropic behavior of stiffness in the three orthogonal directions, as previously mentioned. SIRISOMBOON et al. (2007) found similar stiffness values for Jatropha beans.
CONCLUSIONS
It was concluded that forced air conditioning, more costly than farmyard conditioning, does not bring benefits to decortication. However, regardless of the heat treatment, mechanical stressing of the grain in the longitudinal direction is the most suitable to promote decortication. It was also found that the endosperm of the castor bean contributes to the strength of the bean coat, since the strain value required to disrupt the coat is higher than the estimated gap existing between them. As the compression tests were performed at a strain rate of 0.6 mm s−1, a low value compared to the strain rates developed in commercial shelling mechanisms, the application of high strain rates may yield even lower values of rupture strain, due to the viscoelastic properties of the grain.
FIGURE 1. Castor bean positioning illustration for compression in two perpendicular directions.
FIGURE 2. Characteristic castor bean compression force-strain curves in two perpendicular directions after drying treatment.
TABLE 1. Average water content of the beans after drying.
TABLE 2. Rupture force average values (N) obtained in compression tests length- and widthwise, with respective coefficients of variation (CV). Averages followed by the same capital letter in the column do not differ statistically from each other (Duncan, p < 0.05); averages followed by the same letter in the line do not differ statistically from each other (Duncan, p < 0.05).
TABLE 3. Average strain energy to rupture (J) values from compression tests, length- and widthwise, with respective coefficients of variation (CV).
TABLE 4. Average strain values (%) at rupture force from compression tests, width- and lengthwise, with corresponding coefficients of variation.
TABLE 5. Average stiffness values (N mm−1) obtained in the compression tests length- and widthwise, with respective coefficients of variation (CV). Averages followed by the same capital letter in the column do not differ statistically from each other (Duncan, p < 0.05); averages followed by the same letter in the line do not differ statistically from each other (Duncan, p < 0.05).
"year": 2014,
"sha1": "453f92015068a930eded602116945ef06a0f02a6",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/eagri/v34n1/v34n1a11.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "453f92015068a930eded602116945ef06a0f02a6",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Chemistry"
]
} |
260650273 | pes2o/s2orc | v3-fos-license | Anti-Inflammatory Potential of Pteropodine in Rodents
Pteropodine (PT) is a component of some plants with potentially useful pharmacological activities for humans. This compound has biomedical properties related to the modulation of the immune system, the nervous system, and inflammatory processes. This study addresses the anti-inflammatory and antioxidant capacity of pteropodine in a murine model of arthritis and in induced mouse ear edema. To evaluate the anti-inflammatory activity, we used the reversed passive Arthus reaction (RPAR), comprising the rat paw edema and rat pleurisy tests, together with a mouse ear edema model. The antioxidant effect of PT was evaluated by determining myeloperoxidase enzyme activity. PT showed an anti-inflammatory effect in the different specific and non-specific tests. We found 51, 66, and 70% inhibitory effects of 10, 20, and 40 mg/kg of PT, respectively, in the rat paw edema test. In the pleurisy assay, 40 mg/kg of PT lowered the neutrophil count (by up to 36%) when compared to the negative control group, whereas 20 mg/kg of PT increased the lymphocyte content by up to 28% and decreased the pleural exudate volume by 52% when compared to the negative control group. We also found an 81.4% inhibition of ear edema with 0.04 mg/ear of PT, and a significant inhibition of the myeloperoxidase enzyme by the three doses of PT tested. We conclude that PT exerted a potent anti-inflammatory effect in acute inflammation models in rodents.
Introduction
Some diseases, such as rheumatoid arthritis (RA), are characterized by chronic inflammation that affects mainly the joints, where it can produce a progressive degree of deformity and functional disability related to the continuing destruction of cartilage, as well as damage to tendons, ligaments, and bones [1]. In addition, the disease can affect other organs, such as the eyes, lungs, pleura, heart, skin, and blood vessels [2]. In general, the average prevalence is 1%, although some specific studies have shown higher levels, probably related to race, diagnostic criteria, and methodological differences [3]. The reported incidence is variable; for example, a high annual incidence rate of 90 cases/100,000 inhabitants was reported in Germany [4,5], whereas values of 42 and 45 cases/100,000 were found in Finland and Japan [6,7], respectively. In Mexico, about one million people have some degree of illness [8].
Although the precise etiology of RA is not entirely known, the leading role played by autoimmunity in its development is well documented, as well as the contributions of genetic predisposition, alterations in the union of joints or cartilage, joint injury, and certain bacteria, fungi, or viruses that infect the joints, among others [9]. Painful inflammation is normally a restorative response that sometimes progresses towards a chronic situation that usually gives rise to degenerative arthritis. In the inflammatory process of RA, there are cellular and humoral components involved in the etiology of the disease; however, the increase in proinflammatory cytokines such as IL-8, TNF-alpha, IL-6, IL-1beta, IL-15, IL-18, and IL-17A, which can be detected in both synovial fluid and serum of patients, plays the main role [9].
Some of the drugs used to reduce disease progression and improve the quality of life are chloroquine, hydroxychloroquine, sulfasalazine, D-penicillamine, azathioprine, cyclophosphamide, methotrexate, cyclosporine, leflunomide, and some steroidal and nonsteroidal drugs [10][11][12]. However, the therapeutic effect is usually symptomatic and nonpermanent, and can cause collateral damage. The group of immunosuppressants can cause numerous adverse reactions; however, their mechanisms are not yet well known. Some collateral damages of steroids are osteoporosis, predisposition to infections, gastrointestinal toxicity, increased risk of skin infections, such as bacterial (e.g., cellulitis) and fungal (e.g., tinea, candidiasis), and skin thinning, resulting in easy bruising (purpura), skin tearing after minor injury, and slow healing; these effects are most prominent on sun-exposed areas, particularly the back of the hands and the forearms [13,14]. Other steroid-induced alterations are stretch marks (striae), particularly under the arms and in the groin, acne, clusters of small spots on the face, chest, and upper back, excessive hair (hypertrichosis), hair loss (alopecia), and subcutaneous lipoatrophy (loss of fat under the skin surface) caused by an injected steroid that does not penetrate deep enough into the muscle [15][16][17]. This makes the search for new compounds that efficiently reduce both the physiopathological mechanisms that lead to the disease and the toxic side effects worthwhile. To achieve these purposes, extracts or compounds derived from plants have been investigated using the reversed passive Arthus reaction (RPAR) in rabbits to detect more effective anti-inflammatory agents for the treatment of RA [18,19]. The animal models used in the investigation of inflammatory processes are as diverse as the materials and repair strategies. In general, and as in other areas of science, animal studies begin with models such as mice or rats, later moving on to rabbits, pigs, and sheep. Due to the type of tissue, most studies focus on medium- and large-sized animals, since the work area is larger, and this facilitates the surgery to be performed. In addition, the results obtained in these cases are more easily extrapolated to possible clinical use.
Uncaria tomentosa (UT) is a Rubiaceae plant native to Peru, commonly called "cat's claw", used in traditional medicine to treat some diseases like cancer, arthritis, candidiasis, menstrual and intestinal disorders, and HIV infections [20]. Other investigations have confirmed its biomedical properties with immunostimulant, cytostatic, anti-inflammatory, antimutagenic, and anticancer effects [21,22]. Several components of the plant have been chemically identified as oxindole alkaloids, proanthocyanidins, polyphenols, triterpenes, and sterols, among others [23]. Six oxindole alkaloids have been isolated from the plant, including pteropodine (PT), also called uncarine, which is a heterohimbine-type oxindole that has been reported to show an apoptotic effect in leukemic lymphoblasts and participates in the improvement of memory impairment induced by dysfunction of cholinergic systems in the brains of mice [24,25]. These data suggest that PT could act synergistically with other UT components in one or more of the effects reported for the plant. Quality control of medicinal plants is a multi-step process that covers all stages of production, from the plant as raw material to its packaging as a finished product, whether it is a medicine or herbal remedy. From the point of view of its application, quality is aimed primarily at authenticating the plant species. Knowing the metabolic content of the plants is also important and fundamental because it allows for establishing, which will be the marker that will be used for the pharmacopeial trials. It is strictly necessary that the medicinal plants that are intended for preparing herbal products comply with the specific analytical determinations related to their quality. In Mexico, these determinations are compiled in the Herbal Pharmacopoeia of the United Mexican States (FHEUM), which is the official document used for this purpose. Any medicinal plant that must be used for therapeutic purposes to be marketed, or that affects aspects of the health of the general population, will have to demonstrate its quality in accordance with the official procedures framed in the FHEUM (identity, composition, and purity parameters), as well as in all current regulations related to the regulation of herbal products [20][21][22][23].
Our laboratory evaluated the genotoxic and antigenotoxic potential of PT, finding that this compound is not genotoxic in mice; moreover, PT significantly decreases the frequency of sister-chromatid exchanges and micronucleated polychromatic erythrocytes in mice [26]. We also determined that PT protects mouse cells from DNA damage induced by doxorubicin (DX), an antineoplastic agent that damages DNA; this protective action of PT is due to its efficient free-radical-trapping ability shown in the DPPH assay [27].
Based on the above information, in this report, we expanded the studies on the capacity of PT as an anti-inflammatory agent by applying tests in mice and rats, attempting to determine its anti-inflammatory potential in a murine model that reproduces the physiopathological mechanisms of rheumatoid arthritis and in a mouse ear edema model.
Description of the Murine Model
The Arthus reaction allows us to experimentally reproduce the local physiopathology of RA. The resulting lesion is characterized by edema, erythema, and accumulation of polymorphonuclear cells. When the antigen is injected, a complex is formed with the antibody, activation of the complement system occurs, and anaphylatoxins are generated rapidly, which causes degranulation of the mast cells.
The intravascular local complex can also cause platelet aggregation and release of vasoactive amines that lead to an increase in swelling, erythema, and the formation of chemotactic factors, which causes the influx of polymorphonuclear leukocytes (PMNs) [28].
Induction of Rat Paw Edema
Six groups of Wistar rats were used; tests were performed by administering PT at doses of 10, 20, and 40 mg/kg of body weight. The two positive controls were IBU and PRED at doses of 200 and 10 mg/kg, respectively. IBU is a non-steroidal drug with antipyretic and analgesic properties that inhibits the synthesis of prostaglandins at central and peripheral levels, as well as the cyclooxygenase 1 and 2 enzyme isoforms (COX-1 and COX-2). PRED is one of the corticosteroids most used in the medical clinic and prevents or inhibits inflammation and immune responses when administered at therapeutic doses. A volume of 0.1 mL of a rabbit anti-ovalbumin antiserum solution diluted 1:3 in 0.9% NaCl was injected into the plantar surface of the right hind paw of the rat. The contralateral paw, injected with 0.1 mL of a 0.9% NaCl solution, was used as the negative control. Egg albumin was then immediately injected intravenously at a dose of 25 mg/kg of body weight. In each animal, the volume of the paw edema induced by the antigen-antibody reaction was determined. The volumes of the swollen and control paws were measured 3 h after injection with a digital plethysmometer (LE-7500, Labequim S. A. de C. V.). Paw edema was determined as the difference between the volume of the treated paw and that of the control [29].
Pleurisy Assay
For the pleurisy assay, we used 25 Wistar rats organized in five groups of 5 rats each. The animals were injected in the pleural cavity with 0.2 mL of anti-ovalbumin antibody diluted 1:10 in a 0.9% NaCl solution. Twenty minutes after this administration, each group was injected intravenously with 25 mg/kg of bovine albumin; the positive control group was administered PRED (10 mg/kg), the last three groups were treated orally with 10, 20, and 40 mg/kg of PT, respectively, and the negative control group received 0.9% NaCl solution. The animals were euthanized by CO2 inhalation 6 h after pleural inoculation; the volume of exudate in the pleural cavity was quantified and then centrifuged at 1500 rpm for 5 min, and the sediment was placed on a slide, fixed with methanol, and stained with Giemsa for 10 min. A differential count of neutrophils and lymphocytes was made [30].
Mouse Ear Edema Model
For this assay, we used 25 NIH mice organized in five groups of 5 mice each, and 2.5 µg of TPA dissolved in 20 µL of acetone was applied, according to the method of Young et al. [31], to both the internal and the external surfaces of the right ear of each mouse. After 1 h, PT was applied to the ear at doses of 0.010, 0.020, and 0.040 mg/ear, each dissolved in 20 µL of acetone. Indomethacin (IND, 0.5 mg/ear) was used as the positive control. The mice were euthanized by cervical dislocation after 4 h; then a 7 mm diameter slice was taken, and the central portions of the ears were weighed. The edema value was calculated as the weight difference between the treated (right) ears and the non-treated (left) ears. The inhibition of edema (expressed as a percentage) was also calculated versus the control group [31]. The results are presented as the average of the values obtained for each batch of animals ± standard error.
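For clarity, the edema and inhibition calculations described above can be summarized in a short sketch; the ear weights below are hypothetical and serve only to illustrate the arithmetic:

```python
# Sketch of the ear edema and percent-inhibition calculations (hypothetical weights, mg).
def edema_mg(right_ear_mg: float, left_ear_mg: float) -> float:
    return right_ear_mg - left_ear_mg            # treated (right) minus untreated (left)

def inhibition_pct(edema_treated: float, edema_control: float) -> float:
    return (edema_control - edema_treated) / edema_control * 100

control = edema_mg(18.0, 8.0)    # TPA only
pt_high = edema_mg(9.9, 8.0)     # TPA + PT 0.040 mg/ear
print(f"inhibition ≈ {inhibition_pct(pt_high, control):.0f}%")  # ≈ 81%
```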
Myeloperoxidase Inhibition
For the determination of the activity of the myeloperoxidase enzyme (MPO), the inflamed ears were homogenized, according to Suzuki's technique [32]. The absorbance was measured at 665 nm in a Perkin Elmer Lambda 3 spectrophotometer (Perkin Elmer Inc., Waltham, MA, USA). The enzymatic inhibition (expressed in percentages) corresponds to the absorbance differences observed with respect to the control group. The results are presented as the average of the values obtained for each batch of animals ± standard error.
Statistics
The statistical analysis of the data obtained from the different anti-inflammatory assays was performed with an ANOVA followed by Tukey's multiple comparisons test, using GraphPad Prism 9.1.0 (GraphPad, San Diego, CA, USA).
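The analysis was run in GraphPad Prism; for readers without Prism, an equivalent one-way ANOVA followed by Tukey's test can be reproduced, for example, in Python (made-up data; scipy and statsmodels assumed available):

```python
# Python analog of the statistical analysis (one-way ANOVA + Tukey's test).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {"NaCl": [10.1, 9.8, 10.4, 9.9, 10.2],   # hypothetical edema values
          "PT-40": [3.0, 3.3, 2.8, 3.1, 2.9],
          "IBU": [4.1, 3.9, 4.3, 4.0, 4.2]}

f, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f:.1f}, p = {p:.3g}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # pairwise group comparisons
```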
Results
The three PT doses produced inhibitions of 51, 66, and 70%, respectively, of the rats' paw edemas, giving a statistically significant difference and a 55% increase when PT-40 was compared to the negative control group (administered with NaCl 0.9%, 0.5 mL), and up to an 18% increase in inhibition compared to the positive control (IBU) (Figure 1).
In the assessment of the percentage of neutrophils, PT showed values of 60, 51, and 43% for the three doses, respectively, revealing a significant difference of up to 36% for PT-40 when compared to the negative control group, and values very close to those of the positive control group (PRED) (Figure 2). The effect of PT on the content of lymphocytes in the pleural cavity exudate, induced by the antigen-antibody reaction, yielded values of 36, 42, and 39% for the three doses, respectively, with an increase of up to 28% for PT-20 when compared to the negative control group (NaCl 0.9%) and a 21% increase with respect to the positive control (PRED) (Figure 3).
The effect exerted by PT on the reaction volume of the antigen-antibody interaction induced in the pleural exudate showed values of 3.1, 2.7, and 3.3 mL for the respective doses, with a statistically significant decrease of up to 52% for PT-20 when compared to the negative control group (NaCl 0.9%), and values very similar to those of the PRED (positive control) group (Figure 4). Evaluating the activity exerted by PT on the TPA-induced edema in the mouse ear, inhibitions of 72, 75, and 81% were observed for the respective doses, and a 9% increase was observed when the highest PT dose was compared to indomethacin (positive control) (Table 1). Table 2 shows the results of the myeloperoxidase assay. A significant inhibitory effect is observed with the tested doses of PT; the highest dose of PT gives a slightly higher inhibition percentage than that observed with indomethacin.
Discussion
This study evaluated the anti-inflammatory and antioxidant capacity of pteropodine in rodents; this compound is a component of U. tomentosa that can modulate the immune system and inflammatory processes and is used in traditional medicine to treat arthritis, among other diseases. Our results showed that PT (99% pure) exerts a potent anti-inflammatory effect in an acute inflammation model in rodents.
At the level of the collective consciousness of society, there is a rooted belief that everything "natural" is good, regardless of the amount consumed, since, if it comes from nature, it is considered that it will not cause any harm. In addition, the population usually does not associate medicinal and phytotherapeutic plants with the concepts of drugs and medicine, understanding drugs to be substances that cause an effect in the organism depending on the dose, route of administration, and interindividual variability. Regarding medicinal plants, it is often unknown how and where they were collected, what their composition and uniformity are, and what "dose" is administered. Both medicinal plants and phytomedicines have been used (and abused) for their pharmacological properties and pleiotropic effects (broad, poorly selective), many times without identifying the possible adverse effects and drug interactions they produce [33].
In terms of quality, there are several difficulties: heterogeneous compositions, with multiple phytochemical components, many of which do not present well-characterized biological activities, while more than one may contribute to the effect. To this, we must add the batch-to-batch variation in purity and composition, given the natural variability of the plant of origin and the preparation methods, as well as the limited knowledge of the stability of these preparations. Therefore, it is necessary to guarantee a quantifiable and uniform content of active substances in phytomedicines through harmonized processes. This is a basal condition to later be able to ensure the desired pharmacological effect, understand the pharmacokinetic behavior, establish doses and therapeutic regimens, and reduce adverse effects [34].
Phytopharmaceuticals are medicines whose active substance contains the extract of a certain plant, unlike a chemical drug that comes from a chemically synthesized molecule. In the UT plant, there are several phytopharmaceutical fractions, among which are the pentacyclic oxindole alkaloids and the tetracyclic oxindole alkaloids. For example, the pentacyclic oxindole alkaloids of Uncaria tomentosa, such as pteropodine, induce the release of the factor regulating lymphocyte proliferation in human endothelial cells, a property not attributable to tetracyclic oxindole alkaloids; quite the contrary, since the latter seem to reduce the activity of pentacyclic alkaloids in a dose-dependent manner in these cells. The tetracyclic oxindole alkaloids act on the central nervous system, while the pentacyclic ones act on the immune system, and the two groups of compounds are found in two different chemotypes of the plant. Since the mechanisms of action of tetracyclic and pentacyclic oxindole alkaloids can be antagonistic to each other, it is of great importance to determine the chemotype through the analysis and adequate standardization of the plant in order to establish a specific effect of the active principle isolated from this fraction, which would give us the guideline to investigate its mechanism of action as a pure compound. On the other hand, using the pentacyclic oxindole extract of the plant implies that the purity is lower, since it includes pteropodine plus other compounds such as mitrafylline, isomitrafylline, isopteropodine, and uncarines. This mixture of compounds would give the therapeutic effect, but it would be difficult to know which of the components exerts the beneficial effect, hence the importance of studying pure pteropodine [35].
Recent research shows that the presence of tetracyclic oxindole alkaloids (TOA) inhibits the immunomodulatory effect of pentacyclic oxindole alkaloids. In recent years, a chemotype of U. tomentosa (Willd) DC has been found that does not present tetracyclic oxindole alkaloids (TOAF chemotype, "TOA-free chemotype"). The first clinical evidence indicates that this new TOAF chemotype could have great therapeutic potential as an immunomodulatory plant, which should be confirmed by the scientific community in the coming years [20,22].
The anti-inflammatory activity of cat's claw (U. tomentosa) has been attributed, at least in part, to the inhibitory activity on cyclooxygenase-1 and -2 [36]. This anti-inflammatory action has been related to the capacity of the cat's claw to neutralize the harmful effect of oxidizing organic substances, as well as its capacity to inhibit the expression of certain inducible genes during the inflammatory process [37].
The cortex of U. tomentosa (Willd) DC also has immunostimulant properties. The pentacyclic oxindole alkaloids increase the phagocytosis of macrophages and granulocytes and stimulate the proliferation of lymphocytes. In addition, cat's claw causes macrophages to produce interleukins-1 and -6, which initiate the cascade of defensive activities of the immune system [38].
Several extracts of U. tomentosa root cortex have been tested for anti-inflammatory activity in carrageenan-induced rat paw edema, and the quinovic acid-3-β-O-(β-D-quinovopyranosyl)-(27→1)-β-D-glucopyranosyl ester was isolated as one of the active compounds, which reduces the inflammatory response by 33% at 20 mg/kg. There is evidence that the combination of compounds is responsible for the strong anti-inflammatory effect of the extracts [39].
The addition of 100 µg/mL of an undefined extract of the stem cortex significantly attenuates the peroxynitrite-induced apoptosis in HT29 (epithelial) cells and RAW 264.7 cells (macrophages) (p < 0.05) and inhibits the expression of the lipopolysaccharide-induced nitric oxide synthase gene (iNOS), nitrite formation, cell death, and the activation of the nuclear transcription factor NF-κB in RAW 264.7 cells. Oral administration of 5 mg/mL of the extract attenuates indomethacin-induced enteritis in rodents, reducing myeloperoxidase activity, morphometric damage, and liver metallothionein expression [40].
The anti-inflammatory activity of two types of extracts from the stem cortex, a hydroalcoholic extract containing 5.6% alkaloids (mainly of the pentacyclic type, extract A) and an aqueous freeze-dried extract containing 0.26% alkaloids (extract B), was assessed in the carrageenan-induced rat edema test. Extract A was significantly more active than extract B, suggesting that the effect could be due to the presence of pentacyclic oxindole alkaloids. Both extracts showed scarce inhibitory activity on cyclooxygenase-1 and -2. Only a slight inhibitory activity on the DNA-binding of NF-κB was observed [33].
The anti-inflammatory activity of PT was evaluated with a method widely used for assessing anti-inflammatory substances, TPA-induced ear edema. The inflammatory process triggered by the topical application of TPA is due to the activation of protein kinase C (PKC) in the skin, which starts the inflammatory response. All anti-inflammatory agents show activity in this model, but mainly the dual COX/LOX inhibitors [40]; furthermore, dual inhibitors of cyclooxygenase (COX) and lipoxygenase (LOX) seem to be more effective than other agents in reducing the edematous response [41]. Along these lines, PT could behave like the COX inhibitors in this model.
PT produced an increasing anti-inflammatory activity according to the administered dose. It also showed important anti-inflammatory properties by significantly inhibiting the TPA-induced acute edema in a dose-dependent manner, with an inhibition comparable to that presented by indomethacin (500 µg/ear).
The present study is one of the few studies on PT and among the first to assess its anti-inflammatory activity. These results, along with those obtained by Okada et al. [10] with U. tomentosa extracts, constitute some of the first reports showing in vivo anti-inflammatory activity for extracts of this phytopharmacological genus.
Conclusions
The results of this study support the use of PT in traditional medicine for the treatment of inflammatory diseases such as rheumatism. In addition, they position PT as one of the main compounds in plant extracts responsible for anti-inflammatory and antioxidant activities; PT is easily obtained with excellent yields, making it very promising for the effective development of herbal extracts with pharmacological effects. The results are promising and encourage further studies on this active compound, assessing it in other models of inflammation, both acute and chronic, and determining the possible mechanisms involved in its pharmacological effects through its evaluation against specific mediators of inflammation, such as prostaglandins, nitric oxide, myeloperoxidase, and tumor necrosis factor, in addition to examining its ability to act as a scavenger of free radicals.
"year": 2023,
"sha1": "6c049710cb90946d0d831eb7df5235776d31e2f1",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "2636c9072c964490f1dd0efe8d61e32219e638f3",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259188300 | pes2o/s2orc | v3-fos-license | Anionic bio-flocculants from sugarcane for purification of sucrose: An application of circular bioeconomy
In sugar production, polyacrylamide-based anionic flocculants are added for juice treatment, the main objective being to remove impurities that affect the quality of the sugar. However, if they remain in the final product, those polymers can present carcinogenic and neurotoxic actions besides contaminating the soils where the waste is discharged. To overcome this problem, the present study proposes, for the first time, natural flocculants based on cellulose obtained from sugarcane bagasse (residue from sugarcane processing) as substitutes for the flocculants based on polyacrylamide, normally used in sugar cane juice purification. Additionally, cellulose-based flocculants obtained from Acacia wood, developed in a previous study, have also been tested for sugar juice treatment. Acacia wood and sugarcane bagasse were first treated with a choline chloride/levulinic acid solution in a molar ratio of 1:2, at 160 °C, for 4 h. Subsequently, the cellulose-rich samples were modified by a two-stage process (oxidation with sodium periodate followed by reaction with sodium metabisulfite), and polyelectrolytes with different characteristics were produced. The final products obtained were characterized, and their performance in the treatment of sugarcane juice, at different concentrations (10, 50, 100, 250, and 500 mg kg−1), was evaluated and compared to the synthetic commercial flocculant (Flonex, based on polyacrylamide) usually used by the sugarcane industry in Brazil. The substitution of petrol-based flocculants by natural-based ones, obtained from sugarcane residues, is presented for the first time in this study, with very relevant performance of the new flocculants. Overall, it was possible to produce anionic flocculants, modifying the cellulose obtained from different raw materials, which showed good results in the purification of sucrose, when compared with the commercial polyacrylamide normally used. It is also important to stress that, for the first time, a residue from sugarcane industry could be used with success in the purification of the sugar juice itself, which constitutes a major novelty.
Introduction
Sugarcane is grown worldwide and is mainly used by the sugar industry, currently accounting for about 80% of the sugar produced and contributing significantly to the world economy.
During sugar production, a critical step is the juice treatment, being the main objective to remove impurities, such as polyphenols including flavonoids and phenolic acids (gallic and chlorogenic acids, ferulic acid, etc.) [1,2], that negatively affect the sucrose quality for the production of the final product (crystal sugar). Synthetic anionic flocculants are currently added as clarification aids during this step, consisting of polyacrylamides. If they remain in the final product, these compounds, or their hydrolysis products, can exhibit carcinogenic and neurotoxic actions [3]. Thus, it is essential to consider the concentrations used since high concentrations can result in the retention of polyacrylamide molecules in the final crystal sugar [3]. On the other hand, when carried by the flocs to the effluents, they may contaminate sugarcane crops, due to the use of the effluents as biofertilizers. Thus, replacing synthetic polyelectrolytes with sustainable and non-toxic options is important for the sugarcane industry, and natural polyelectrolytes may be promising alternatives in the treatment of sugarcane juice.
Moreover, during the processing of sugarcane, large amounts of waste, such as sugarcane bagasse (SCB), molasses, and sludge or filter cake, are produced, and the valorisation of these resources is relevant and currently under research [4].
In general, to produce 100 kg of sugar, 1 ton of cane is crushed, generating 300 kg of bagasse (with 50% moisture), 40 kg of molasses, and 30 kg of decanter sludge. The molasses are used for ethanol production and the decanter sludge for hydrocarbon or chemical production [4]. On the other hand, the SCB is currently used as a fuel for the boilers in the sugar factory or as a raw material for the manufacture of lignocellulosic products. Various types of building blocks and certain chemicals [5][6][7] have been obtained from SCB treatment. SCB has a complex structure, and it is primarily composed of cellulose (40-50%), hemicelluloses (25-35%), lignin (15-35%), ash, and waxes [8]. The composition of SCB makes it an ideal raw material for bio-flocculants for the purification of sucrose, replacing polyacrylamides.
Cellulose is itself a fascinating polymeric product and possesses several attributes, but also some inherent issues. Among the drawbacks, the poor solubility in common solvents should be highlighted [9]. To overcome such limitations, the controlled physical and/or chemical modification of the cellulose structure is often a strategy which is adopted [10,11].
Polyelectrolytes for application in flocculation should be water-soluble, and so, in order to use cellulose as a flocculation agent, it is crucial to obtain a final water-soluble cellulose derivative with the charged groups spread effectively among the polymeric backbone. However, lignocellulosic biomass has a high degree of heterogeneity, which leads to different interactions among the different biomacromolecules [12], and therefore, fractionation and isolation of the different components are necessary, important steps before any modification [13]. The effectiveness of the fractionation step has proven to play a key role in the anionisation of the cellulose backbone [14,15].
Regarding the cellulose modification, to obtain an ionic polymer with better solubility, the use of a two-stage strategy has proven more favourable [16]. The first stage is the selective oxidation with periodate, which partially destroys the crystalline cellulose structure. Periodic acid and its salts, periodates, are known as regioselective oxidation agents capable of converting vicinal diols, such as carbohydrates, to dialdehyde structures [14]. In this case, the diol cleavage of cellulose by periodate, under acidic conditions, occurs at the C2-C3 bond, resulting in the formation of two aldehyde groups, at the OH-C2 and OH-C3 positions, and leading to dialdehyde cellulose (DAC). In this reaction, a high modification degree can be achieved, since two aldehyde groups are introduced per anhydroglucose (AGU) unit, which allows obtaining highly modified end products [13]. On the other hand, it is observed that, in most cases, the degree of polymerization (DP) decreases during this oxidation procedure [17]. Several reaction parameters may influence the properties of the obtained DAC, such as the concentration of periodate (higher concentrations of periodate improve the formation of aldehyde groups and allow obtaining cellulose with higher aldehyde contents), the temperature, and the reaction time [13]. Anionic water-soluble cellulose-based polymers can be obtained in the second reaction stage, after the first oxidation step, through the sulfonation of DAC [14,15]. The anionic flocculants were obtained by modifying the DAC with sodium metabisulfite, which introduces sulfonate groups, i.e., negatively charged groups, into the cellulose backbone. This type of modification allows the introduction of more than one anionic group per AGU. Products obtained in such a way are thus characterized by a high degree of substitution, a high ionic character and, consequently, high charge density and water solubility at room temperature.
In the present study, the objective was to obtain anionic cellulose-based polyelectrolytes, using as raw material sugarcane bagasse. The first stage was to isolate cellulose from the bagasse using an eco-friendly procedure based on the use of a mixture of choline chloride and levulinic acid. This procedure was compared with the traditional methodology based on the treatment of the bagasse in a basic medium using a NaOH solution [18]. Afterwards, the cellulose was modified (anionised) using the two-stage procedure previously applied by the research group for other raw materials [14,15]. Finally, the efficiency of the new bio-based polyelectrolytes developed was validated for the treatment of sugarcane juice in the sugar production process, as an alternative to the polyacrylamide polymers normally used nowadays, which constitutes a complete novelty and a step forward towards a circular bio economy. It can be stated that the new polymers developed will be fully biodegradable. Still, it is foreseen to perform, in the future, a complete toxicological assessment of these new natural-based polyelectrolytes.
Materials
The sugarcane bagasse (Saccharum officinarum) was supplied by the Hugot Laboratory of Sugar Technology at the University of São Paulo and had, on average, 38.78% of cellulose, 23.81% of hemicellulose, 24.02% of lignin, determined according to the procedure which will be described later. The average particle size was 0.84 mm after milling (knife mill, supplied by Thomas Scientific, USA).
The Acacia wood (Acacia dealbata) used in this work was collected and harvested in Midões (Tábua), Portugal, and the main components are given in Table 1. The raw material consisted of waste branches, which were finely milled in the same laboratory mill (knife mill, Thomas Scientific, USA) and classified in a mechanical sieve shaker (Thomas Scientific, USA). The sawdust sample with a particle size between 0.25 and 0.84 mm was selected for the subsequent pre-treatment.
Biomass fractionation and cellulose purification
The acacia wood and the sugarcane bagasse fractionation was performed using a deep eutectic solvent system (choline chloride (ChCl):levulinic acid (Lev) in a 1:2 molar ratio). First, 0.75 g of sugarcane bagasse or acacia chips (dry basis, previously dried in an oven at 105 °C) were introduced into the fractionation vessel (a cylindrical metallic reactor capable of supporting high pressures), and 10 mL of eutectic solvent (ChCl:Lev) were added. Next, the reactor was placed in the oven at 160 °C for 4 h.
After fractionation was completed, the cylindrical reactor was carefully opened, and the cellulose-rich fraction was meticulously washed with distilled water, in order to drag the lignin-rich material into the liquid fraction. Afterwards, both the cellulose-rich fraction and the lignin-rich fraction were vacuum filtered. The cellulose-rich material, initially washed with water to remove the insoluble lignin, was then washed during vacuum filtration with a 10 wt% NaOH aqueous solution to remove any residual lignin remaining on the fibres. Finally, the cellulose-rich fraction was further washed extensively with distilled water to remove any remaining chemicals and neutralize the fibres. The final washing with water was conducted till the used cleaning water exhibited neutral pH, which usually corresponded to 5 to 6 washing cycles. On average, 100 g of cellulose fibres required 200 mL of water for the washing process. It must be stressed that the washing water can be later purified and reused. After filtration, the two materials, one rich in lignin and the other in cellulose, were dried in an oven at 50-60 °C for 24 h, for further characterization.
Another method for sugarcane bagasse fractionation evaluated in the present study was the experimental procedure described by Aguiar and Menezes (2000) [17]: 100 g of washed and milled sugarcane bagasse (dry basis) was treated with 2000 mL of 4% sodium hydroxide solution in an autoclave at 121 °C for 30 min. The material recovered by filtration was then washed with running water until pH neutrality and dried at 65 °C until constant mass.
Determination of lignin, cellulose, and hemicellulose content
The lignin content was estimated using the standard LAP-004 protocol from the National Renewable Energy Laboratory (NREL/TP-510-42618) [19]. In brief, ca. 300 mg of the reaction extract was weighed and hydrolysed in 3 mL of 72% sulfuric acid solution (12 mol L−1) for 1 h at 30 °C, with stirring. Then, the hydrolysed sample was diluted till a 4% sulfuric acid solution was obtained, which was then placed in the autoclave to react for 1 h at 121 °C. Subsequently, the samples were vacuum filtered using filter crucibles (40 mm diameter and pore size 0.6 μm), to determine the acid-insoluble lignin by gravimetry. The acid-soluble lignin was determined using aliquots of the filtrate, which were adequately diluted, by measuring the absorbance of the solutions at 205 nm in a UV-VIS spectrometer (JASCO V650 spectrophotometer, JASCO, Germany).
The carbohydrate content of the extracted cellulose-rich fractions was assessed by HPLC (High Pressure Liquid Chromatography, Knauer, Germany, model K301). Samples were first autoclaved, and a 20 mL sample of the hydrolysate resulting from the procedure described above to determine the lignin content was transferred to a 50 mL Erlenmeyer flask and slowly neutralized with calcium carbonate until reaching a final pH of 5-6. The pH of the solution was monitored with a pH meter (inoLab, WTW of Weilheim, Germany). Then, the solution was filtered with a 0.22 μm nylon filter directly into an Eppendorf tube, and this sample was used for further HPLC analysis. The pump and detector used were a Smartline 1000 and a Smartline RI S2300 (refractive index detector), respectively. An Agilent Hi-Plex Ca, 300 × 7.7 mm column from Agilent Technologies was used. Ultra-pure water without any added buffer or other compounds was used as the mobile phase. The pH was set to 5.9. The stationary phase was a Hi-Plex Ca strong cation exchange resin, consisting of a sulfonated crosslinked styrene-divinylbenzene copolymer in the calcium form, with a particle size of 8 μm. For microporous resins, the crosslink content controls the pore size and hence the molecular weight range of the materials which can be analysed, determining the size exclusion properties of the resin. The lower the crosslink content, the higher the molecular weight that can be analysed. The 8% crosslinked resin has a low exclusion limit and is suitable for oligosaccharides with a DP (degree of polymerization) lower than 5. The flow rate was set to 0.6 mL min−1 and the run time was 60 min. The chemical composition and purity of the cellulose and extracted hemicellulose samples were assessed based on the monosaccharide composition obtained by HPLC (i.e., cellulose was determined based on the glucose content and hemicellulose based on the xylose content). The calibration standard was prepared in our lab and included the following compounds: glucose, cellobiose, xylose, arabinose, formic and acetic acids, and levulinic acid.
Synthesis and characterization of anionic cellulose-based polyelectrolytes
The anionisation of the cellulosic material was achieved using a dual-step procedure, in which the cellulose is firstly oxidized to DAC and then anionised with sodium metabisulfite. Fig. 1 summarizes, schematically, the two reactions that occur (oxidation followed by anionisation) and the structure of the final product (anionic polyelectrolyte) obtained.
The procedure used was based on the work developed by Grenda et al. (2020) [14]. The first step focuses on the oxidation of the cellulose material, which avoids the use of alkali treatments. Briefly, a highly oxidized cellulose is produced by reacting the extracted cellulose with NaIO4. To do so, a cellulose dispersion is prepared in distilled water (4 g of cellulose, dry basis, in 100 mL of water), stirring overnight using a magnetic stirrer. Then, the suspension was placed in a round flask and diluted with 200 mL of distilled water. Precise quantities of NaIO4 (7.2 g) and LiCl (8.2 g, per 4 g of cellulose, dry basis) were added to the aqueous dispersion to initiate the reaction. According to Sirviö et al. (2011) [20], LiCl can act as a catalyst and improve the oxidation efficiency. The improvement obtained is attributed to the ability of the lithium ions to disrupt the hydrogen bonds between the cellulose chains, which facilitates the interaction between the chemical reagents and the cellulose chains, as described by Sirviö et al. (2011) [21]. After the reaction period, the product was filtered and washed several times with distilled water to remove iodine compounds from the obtained DAC. Washing was performed several times (3-6 times, depending on the degree of substitution of the DACs), till the conductivity of the washing water and that of the collected water were the same, in order to guarantee the full removal of the iodine compounds. The aldehyde content of the oxidized cellulose was determined based on the oxime reaction between the aldehyde groups and a hydroxylamine salt (NH2OH·HCl). The non-dried periodate-oxidized cellulose (approximately 0.1 g) was placed in a 250 mL beaker, to which 1.39 g of NH2OH·HCl and 100 mL of a sodium acetate buffer solution (0.1 M, pH = 4.5) were added. The mixture was then put under stirring (300 rpm) and left to react for 48 h. Then, the product was vacuum filtered, washed with distilled water, and dried at 60 °C in an oven. Based on the reaction described, it is known that 1 mol of aldehyde reacts with 1 mol of NH2OH·HCl, resulting in 1 mol of the oxime derivative. Thus, the degree of substitution by aldehyde groups in DAC and the aldehyde content can be calculated directly from the measurement of the nitrogen content in the oxime sample, determined using the elemental analysis technique, and using equation (1) and equation (2), respectively, which relate the nitrogen content (N%) with the degree of substitution by aldehyde groups in the cellulose chain (DSDAC).
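The bodies of equations (1) and (2) were lost in the text extraction. A plausible reconstruction, assuming one nitrogen atom per oxime unit and the molar masses defined below (oxime unit 190 g mol−1, AGU 162 g mol−1, nitrogen 14 g mol−1), is given here; note that the published form of equation (2) also involves MWDAC (see the definitions below) and may differ in detail:

```latex
% Hedged reconstruction, not verbatim from the paper.
% A sample with an oxime fraction DS_DAC has an average unit mass of
% 162(1-DS) + 190 DS, carrying 14 DS of nitrogen, so N% = 1400 DS/(162 + 28 DS).
\begin{equation}
  DS_{DAC} = \frac{162 \times N\%}{14 \times 100 - (190 - 162) \times N\%}
\end{equation}
% Each nitrogen atom marks one aldehyde group, so per gram of oxime derivative:
\begin{equation}
  \text{Aldehyde content } (\mathrm{mmol\,g^{-1}}) = \frac{N\% \times 10}{14}
\end{equation}
```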
where 14 (g mol−1) represents the molar mass of the nitrogen atom, 190 (g mol−1) is the molar mass of the oxime unit, 162 (g mol−1) is the molar mass of the AGU unit, N% corresponds to the percentage of nitrogen present in the oxime sample, and MWDAC stands for the molecular weight of 10 mol of DAC units (g mol−1). The anionisation followed the procedure developed in the work by Grenda et al. (2020) [14]. In the present study, the undried DAC was weighed in a 200 mL beaker and distilled water was added in order to obtain a ratio of 1.5 g of dried DAC to 60 mL of distilled water. Subsequently, sodium metabisulfite was added to the beaker in the proportion of 14 mmol of Na2S2O5 per g of dried DAC. Finally, the reaction mixture was placed under stirring (500 rpm) for 24 h at room temperature.
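A minimal batch calculator for the anionisation recipe above, assuming only the stated ratios (1.5 g of dry DAC per 60 mL of water; 14 mmol of Na2S2O5 per g of dry DAC):

```python
# Batch calculator for the anionisation recipe (stated ratios only).
MW_NA2S2O5 = 190.11  # g/mol, sodium metabisulfite

def recipe(dac_dry_g: float) -> tuple[float, float]:
    water_ml = dac_dry_g * 60 / 1.5                   # 60 mL water per 1.5 g DAC
    metabisulfite_g = dac_dry_g * 14e-3 * MW_NA2S2O5  # 14 mmol Na2S2O5 per g DAC
    return water_ml, metabisulfite_g

water, mbs = recipe(1.5)
print(f"{water:.0f} mL water, {mbs:.2f} g Na2S2O5")  # 60 mL, 3.99 g
```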
After the reaction period, the solution was mixed with isopropanol to precipitate the soluble material and then centrifuged, the precipitated material being washed afterwards four times with a water/isopropanol mixture (1:9, v/v), until the conductivity of the liquid after cleaning was close to that of the aqueous washing solution used in the process. After cleaning, the anionic precipitate (ADAC) was placed in the oven at 60 °C to dry. The final ADAC was characterized by elemental analysis (EA 1108 CHNS, from Fisons, UK) in order to determine the degree of substitution by anionic groups (DSADAC) (equations (3)-(5)) and the anionicity index (equations (3) and (6)), through the measurement of the sulphur content.
Fig. 1. Two-step reaction procedure used to produce anionic lignocellulose-based polyelectrolytes (ADAC).
Amount of anionic units (Aa) = (S% / 100) × 368.1 / (2 × 32.06) (3)
where Aa represents the mass (g) of anionic units in 1 g of sample, 162 (g mol−1) corresponds to the molar mass of the AGU unit, and 368.1 (g mol−1) corresponds to the molar mass of the anionic unit. It was assumed that only anhydroglucose and anionic units are present in the final product (no cellulosic dialdehyde exists in the final product).
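Equations (4)-(6) did not survive the text extraction either; the sketch below reconstructs a plausible sulphur balance consistent with the definitions above (anionic unit 368.1 g mol−1, which matches a dialdehyde unit bearing two NaHSO3 adducts, hence two sulphur atoms per unit; S = 32.06 g mol−1). The exact published forms may differ:

```python
# Hedged reconstruction of the sulphur-balance calculations behind Eqs. (3)-(6).
MW_ANIONIC, MW_AGU, MW_S = 368.1, 162.0, 32.06

def anionic_mass_fraction(s_pct: float) -> float:
    # Eq. (3): grams of anionic units per gram of sample, two S atoms per unit
    return (s_pct / 100) * MW_ANIONIC / (2 * MW_S)

def ds_adac(s_pct: float) -> float:
    # Assumed form of Eqs. (4)-(5): mole fraction of anionic units among
    # anionic plus unmodified anhydroglucose units
    aa = anionic_mass_fraction(s_pct)
    n_anionic = aa / MW_ANIONIC
    n_agu = (1 - aa) / MW_AGU
    return n_anionic / (n_anionic + n_agu)

print(f"S% = 6 -> Aa = {anionic_mass_fraction(6):.2f} g/g, DS = {ds_adac(6):.2f}")
```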
The zeta potential values of each polyelectrolyte were also measured by electrophoretic light scattering (ELS) in a Zetasizer NanoZS from Malvern Instruments, UK. For that, a 0.5 wt% solution of the polyelectrolyte was prepared in deionized water.
Evaluation of performance in the treatment of sugarcane juice
2.4.1. Cane plants and juice extraction
Healthy canes were disintegrated in a forage chopper and then pressed at 250 kgf cm−2 for 1 min to obtain the raw sugarcane juice (SCJ). SCJ was filtered through a 0.074 mm sieve to remove inorganic impurities and fibres, and after that it was stored at −18 °C until the purification tests were performed. SCJ was characterized and presented 24.8 ± 0.01 °Brix (total soluble solids; wt%), turbidity of 1287 ± 0.01 NTU (Nephelometric Turbidity Units), pH 5.20 ± 0.01, conductivity of 1.96 ± 0.01 mS cm−1, 1.32 ± 0.1 g of reducing sugars per 100 g of juice, and pol%juice of 16.7 ± 0.4% (apparent sucrose content per 100 g of juice). The procedures for the analysis of the cane juice samples followed the official methods from ICUMSA (2011) [22] and CONSECANA (2015) [23].
Treatment of sugarcane juice by cellulose-based polyelectrolytes
The extracted sugarcane juice was subjected to a treatment similar to that applied in cane mills for the purification of sugarcane juice using the commercial polyelectrolyte (Flonex at 6 mg kg−1). In this study, the commercial polyelectrolyte was replaced by different doses of cellulose-based polyelectrolytes obtained from Acacia wood or from sugarcane bagasse, as shown in Fig. 2. First, the sugarcane juice was subjected to clarification by sulfitation using sulfurous acid (analytical grade) to adjust the pH to 3.8. Then, the juice pH was neutralized with a calcium hydroxide suspension (milk of lime) to pH 7.2. For protein denaturation and promotion of phospholipid coagulation, the juice was preheated to 65 °C for 20 min, simulating the operating conditions of a sugar industry. The temperature was then raised to 103 °C for 2 min to remove microbubbles dispersed in the cane juice. Then, the different polyelectrolytes, at different dosages, were added to the pre-treated juice to analyse the sedimentation and purification efficiency, and the treated juice was collected.
Total soluble solids (°Brix) of the sugarcane juice
Total soluble solids were measured for each independent juice obtained from sugarcane juice with or without treatment. Brix values were measured using a refractometer (RFM712, Bellingham + Stanley Ltd., Tunbridge Wells, UK) and expressed as °Brix (wt%) according to CONSECANA (2015) [23].
Conductivity of the sugarcane juice
Conductivity was measured for each independent juice obtained from samples before and after the treatment with the addition of the polyelectrolytes. The values were measured using a conductivity meter HI8820N (Hanna Instruments Co., Portugal) and expressed as mS cm−1 according to ASTM Method D1125-95 (1999).
Apparent sucrose content (pol%juice) of sugarcane juice
The analysis of pol%juice was done through the saccharimetric reading of the clarified samples, using an aluminium-based clarifying mixture, in a polarimeter model ADS420 (Bellingham + Stanley Ltd., Tunbridge Wells, United Kingdom), according to CONSECANA (2015) [23]. The conversion of the saccharimetric reading using the aluminium-based clarifying mixture (LAl) to the equivalent reading in lead acetate (LPb) was done using equation (7). The clarifying step is necessary to remove the interference of solids on the optical measurement inside the polarimeter. The pol%juice (S) was expressed from the equivalent reading in lead acetate (LPb) and the initial total soluble solids (°Brix0) as wt% (or g of apparent sucrose per 100 g of cane juice; equation (8)).
The purification index of sucrose in the treated sugarcane juice is given by equation (9):
Purification index (%) = (Sfinal − Sinitial) / Sinitial × 100 (9)
where Sfinal and Sinitial represent the final (after the treatments with polyelectrolytes) and initial (SCJ without treatment) pol%juice, respectively.
Turbidity measurements of sugarcane juice
The SCJ's turbidity was measured to determine the degree of cloudiness, expressed in Nephelometric Turbidity Units (NTU). The turbidity of the samples was measured using a bench-top turbidimeter (Tecnopon TB1000, Brazil), properly calibrated against a curve from 0 to 1000 NTU [24]. The turbidity removal efficiency of the treated sugarcane juice is given by equation (10):
Removal turbidity efficiency (%) = (initial turbidity − residual turbidity) / initial turbidity × 100 (10)
Statistical differences between groups were assessed by one-way analysis of variance and Tukey's multiple range tests, using the Excel software package. Differences were considered significant at p ≤ 0.05.
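The two quality indices defined in equations (9) and (10) can be computed directly from the measured values; a short sketch with hypothetical pol and turbidity readings:

```python
# Sketch of the juice-quality indices of equations (9) and (10).
def purification_index(s_final: float, s_initial: float) -> float:
    return (s_final - s_initial) / s_initial * 100           # eq. (9)

def turbidity_removal(initial_ntu: float, residual_ntu: float) -> float:
    return (initial_ntu - residual_ntu) / initial_ntu * 100  # eq. (10)

print(f"{purification_index(17.5, 16.7):.1f}%")  # ≈ 4.8% (hypothetical pol values)
print(f"{turbidity_removal(1287, 150):.1f}%")    # ≈ 88.3% (hypothetical turbidity)
```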
Synthesis and characterization of anionic cellulose-based polyelectrolytes
Three anionic polyelectrolytes, containing different amounts of anionic modification, were successfully synthesized. They differ in the biomass origin and the fractionation treatment. Sugarcane bagasse was treated, in one of the tests, with the solvent mixture (ChCl:Lev in a 1:2 molar ratio, for 4 h at 160 °C) and, in another test, with a 4% NaOH solution for 30 min at 121 °C [17], leading, after the modification, to ADAC Bagasse-3 and ADAC Bagasse-NaOH, respectively. Acacia wood was treated with ChCl:Lev in a 1:2 molar ratio for 4 h at 160 °C, leading, after modification, to ADAC Acacia-1. The characterization of the synthesized anionic polyelectrolytes is presented in Table 2.
Looking at Table 2, it is possible to conclude that it was always possible to introduce anionic groups into the cellulose backbone obtained from the extraction procedures applied to the sugarcane bagasse.
In addition, the results presented in Table 2 show that the extraction procedure used to produce ADAC Bagasse-3, using the eutectic solvent, led to a purer cellulose-rich fraction, with a lower percentage of lignin and hemicellulose than the NaOH treatment of the bagasse. Even so, the final product does not show a higher degree of substitution than the products obtained from the other extracted materials. This is because both the origin of the raw material and the presence of hemicellulose in the raw material can play an important role in the final product characteristics. In fact, the presence of hemicellulose in the fractionated material seems to lead to a higher DS in the final ADAC. The material used to produce ADAC Bagasse-NaOH presents the highest hemicellulose content, which seems to facilitate the chemical modification due to its branched structure and smaller DP, which allow a better interaction of the modification chemicals with the OH groups, owing to the higher aqueous solubility compared to pure cellulose, as described by Bajpai (2018) [25]. Thus, both a higher aldehyde modification and a higher final degree of substitution were obtained, even though the average molecular weight of the final materials is probably smaller, compared to the other two samples.
Evaluation of the performance of cellulose-based polyelectrolytes in sugarcane juice treatment
The data from the sugarcane juice (SCJ) purification trials with cellulose-based polyelectrolytes were compared with data for untreated juice (raw juice) and for juice treated with a synthetic polyacrylamide-based polymer.
As a whole, the different polyelectrolytes, at the respective doses used, did not significantly affect the SCJ pH value, soluble solids content, or conductivity. Some statistical differences were nevertheless detected by Tukey's test (at p < 0.05); the standard deviations of the treatment means were small enough for even minor differences between treatments to be distinguished (indicated by the different letters).
The most crucial parameter for sugar production is the apparent sucrose content, which indicates whether sucrose degradation occurred during the process. In the present study, we can conclude that, regardless of the polyelectrolyte and the doses used, the juice was efficiently purified, resulting in increased sucrose purity. This is confirmed by the reduction in soluble solids content, of 1-3 percentage points. The treatments with cellulose-based polyelectrolytes (regardless of the source of cellulose) removed some of the soluble solids initially present in the sugarcane juice, such as proteins, peptides, and phospholipids, as confirmed by the reduced soluble solids levels after the treatment.
In the production of crystal sugar, especially white sugar, an essential variable in the industrial process is the content of reducing sugars (for glucose and fructose in sugarcane, see Xiao et al. (2017) [26] and Misra et al. (2022) [27]). Since commercial crystal sugar contains at least 99.3% sucrose, the total glucose and fructose contents (i.e., reducing sugars) should be as low as possible. According to Ogando et al. (2022) [28], the reducing sugars content in sugarcane juice is around 1.5 g L⁻¹. An increase in the reducing sugar content may be a consequence of two factors: first, the physiological state of the sugarcane, either immature or in an advanced state of maturity or ageing; second, the action of hydrolytic enzymes from the sugarcane itself, or the presence of microorganisms that degrade sucrose as a carbon source for their development or for the production of protective metabolites, such as organic acids and exopolysaccharides (levan, dextran, etc.) [26,27]. The reducing sugar content of the sugarcane juice used in the trials was 1.32 ± 0.10 wt%, which is within the expected range of 0.50 to 1.50 wt%. Furthermore, the sugarcane juice conventionally treated with the polyacrylamide-based synthetic polyelectrolyte also showed reducing sugar values within the expected range, i.e., 1.40 ± 0.10 wt%. The same is observed with the cellulose-based polyelectrolytes, independently of the cellulose source, with lower reducing sugar values obtained when using the ADAC polyelectrolytes. Additionally, in general, higher ADAC dosages led to lower reducing sugar values, demonstrating that the bio-polyelectrolytes did not alter the physiological environment of the sugarcane juice and thus helped to avoid the sucrose degradation that increases the reducing sugar content.
The synthetic polyelectrolyte used in this work was a commercial polyacrylamide-based polymer characterized as an extended-chain anionic flocculant, whose mechanism of action has been characterized as bridging [29]. The reducing sugar content in the cane juice treated with the polyacrylamide-based polymer showed a slight increase (5.7%), with a nominal value of 1.40 ± 0.10 wt%, compared to the raw SCJ. Polyacrylamide-based polymers contain a large number of carboxylic groups (-COOH) that, depending on the pH, can dissociate and release protons (H⁺) into the medium, reducing its pH and leading to acid hydrolysis of sucrose with release of glucose and fructose in the juice. Dawber et al. demonstrated the mechanism and kinetics of sucrose acid hydrolysis in 1966 [30].
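To make the kinetic argument concrete, the sketch below evaluates a simple first-order inversion model; the rate constant and initial concentration are hypothetical placeholders for illustration, not values taken from Dawber et al. [30].

```python
# Illustrative first-order model of sucrose acid hydrolysis (inversion),
# C(t) = C0 * exp(-k * t). Both k and C0 below are hypothetical
# placeholders, not values from Dawber et al. [30].
import math

def sucrose_remaining(c0_wt_pct: float, k_per_h: float, t_h: float) -> float:
    """Sucrose left (wt%) after t_h hours of first-order acid hydrolysis."""
    return c0_wt_pct * math.exp(-k_per_h * t_h)

c0 = 18.0   # hypothetical initial apparent sucrose, wt%
k = 0.002   # hypothetical rate constant at low pH, h^-1
remaining = sucrose_remaining(c0, k, 2.0)
print(remaining, c0 - remaining)  # ~17.93 wt% left, ~0.07 wt% inverted
```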
On the other hand, another route of sucrose hydrolysis, associated with temperature increase, was reported by Richards and Shafizadeh in 1978 [31]. Nolasco and Massauger [32] demonstrated the thermal degradation of sucrose at temperatures of 110, 120, 130, and 140 °C. In conventional sugarcane juice treatment, the juice is heated with superheated steam at 103-107 °C. Simulated test protocols, with operating conditions similar to those employed in sugar mills, were applied in this study during the treatments with synthetic or cellulose-based polyelectrolytes. For the heat treatment, the juices were heated in an oil bath in two steps: preheating at 65 °C for 20 min, followed by removal of air microbubbles (flashing) at 103 °C for 2 min. Thus, it is very likely that thermal degradation of sucrose to glucose and fructose occurs initially; subsequently, the monosaccharides undergo thermal degradation to low molecular weight compounds, or copolymerization to high molecular weight compounds, as reported by Aguiar et al. (2015) [33] and Rockland (1960) [34]. Conversely, the cellulose-based anionic polyelectrolytes were shown to have little or no influence on sucrose hydrolysis in the assays performed. For the three cellulose-based polyelectrolytes, at the different doses used, the levels of reducing sugars were lower than those found for the raw juice or for the juice treated with the synthetic polyelectrolyte, with reductions on the order of 79.6% for ADAC Acacia-1, 78.0% for ADAC Bagasse-NaOH, and 60.6% for ADAC Bagasse-3, considering the best results obtained when using the three ADACs at 100 mg of ADAC per kg of sugarcane juice.
It is important to note that, even at a low dosage of 10 mg kg⁻¹ (similar to the dosage of the synthetic polyelectrolyte), the bio-based polyelectrolytes showed improved efficiency in the treatment of sugarcane juice compared to the synthetic option, regardless of the variable analysed (see Table 4). For the ADACs, concentrations lower than 10 mg kg⁻¹ (not shown here) led to low total solids removal, in spite of reasonable results for the other purification parameters.
Considering the primary variable of this study, the reduction in the turbidity of the sugarcane juice (Table 4), the efficiency of the bio-based polyelectrolytes varied from 99.1% to 99.8% for ADAC Acacia-1, from 99.1% to 99.7% for ADAC Bagasse-NaOH, and from 98.9% to 99.8% for ADAC Bagasse-3, demonstrating very high turbidity-reduction efficiencies for the three bio-polyelectrolytes. Fig. 3 presents an example of a microscope image (optical microscope Olympus BH2, Olympus Optical Co., Japan, coupled with a digital camera Olympus ColorView III) of a juice treated with ADAC Bagasse-3, confirming that flocculation occurred during the treatment; large flocs are visible in the image. In addition, the turbidity-reduction efficiency changed only slightly with the cellulose-based polyelectrolyte dosage, and was always higher than that of the synthetic polyelectrolyte. For example, for ADAC Bagasse-3, the best turbidity reduction (99.8%) was obtained with 250 mg kg⁻¹ (Table 4), while an efficiency of 98.9% was observed for the lowest dosage. It can therefore be concluded that even ADAC concentrations as low as 10 mg kg⁻¹ give good efficiency in the SCJ treatment.
Rationalizing the mechanism of turbidity reduction, it is interesting to note that all three bio-based polyelectrolytes presented similar efficiencies despite having different DS and anionicity indexes, revealing that the charge of the polyelectrolyte is not the only factor in the turbidity reduction mechanism. Electrostatic interactions are not the only interactions involved in flocculation; other interactions, such as hydrophobic interactions, can also play a key role in the flocculation process. An important factor influencing the polyelectrolyte characteristics and performance is that all the initial raw materials used in the modification (cellulose fibres) contained a certain amount of lignin, which can positively impact the interaction between the bio-based polyelectrolytes and the particles suspended in the sugarcane juice, such as proteins, peptides, and phospholipids, which are clearly amphiphilic molecules. The presence of lignin can induce hydrophobic interactions with this type of molecule and positively influence the performance of the bio-based flocculants [35,36].
The results obtained for the reduction of sugarcane juice turbidity using the cellulose-based polyelectrolytes show a good correlation with the increase in apparent sucrose contents (Tables 3 and 4). From Tables 3 and 4 it is possible to note that the reduction of sugarcane juice turbidity confirms the removal of insoluble and soluble impurities by the joint action of polyelectrolyte addition and heating.
Conclusions
In this work, anionic polyelectrolytes were successfully synthesized from Acacia wood and sugarcane bagasse, in a two-step process, and applied in sugarcane juice purification. The tests were performed at different polyelectrolyte concentrations (10, 50, 100, 250 and 500 mg kg⁻¹) and compared with the synthetic polyacrylamide-based polymer normally used in these processes, at its most commonly used concentration (6 mg kg⁻¹). All flocculants could purify the juice regardless of the dose. However, in general, above a certain level, increasing the flocculant concentration decreased the performance of the juice treatment; the cellulose-based polyelectrolyte concentration that gave the best results, regardless of the variable analysed, was 100 mg kg⁻¹, even if the differences in efficiency for the lower concentrations were small. Overall, the results obtained with the bio-based polyelectrolytes were better than those obtained with the synthetic polyelectrolyte. It was also possible to conclude that the flocculation mechanism is not based only on electrostatic interactions; other interactions, such as hydrophobic interactions, can also play an important role. It is also important to note that the bio-based polyelectrolytes are eco-friendly and, given their high biodegradability, are expected to have no negative impact on nature or on the purified sugars. Additionally, no or very low toxicity is expected to be associated with these biopolymers, especially considering the washing procedures conducted during the modification process, even if further studies, including a full toxicological assessment of the new polymers, will have to be conducted to assess this parameter more comprehensively; such an assessment is foreseen for the near future.
Lastly, it should be stressed that the results presented demonstrate the viability of using sugarcane bagasse, a residue from sugar production, to produce a polyelectrolyte that can be used in sugarcane juice treatment, even if a full economic evaluation is still required, one that must also take into account the possibility of recycling/reusing the excess chemicals from the different steps, including solvents. This strategy is fully aligned with the principles of a circular bioeconomy.
(Table notes: different letters in the same column indicate significant statistical differences at p < 0.05 by Tukey's test; Minimum Significant Difference (MSD) at 5% by Tukey's test; coefficient of variation (CV) expressed as a percentage; NTU, nephelometric turbidity units; apparent sucrose was determined by polarimetric analysis and expressed as g of sucrose per 100 g of cane juice.) | 2023-06-19T05:04:54.897Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "d57d5407bc259cce5b864ca72955fb06b5025a0d",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "d57d5407bc259cce5b864ca72955fb06b5025a0d",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
242509918 | pes2o/s2orc | v3-fos-license | Human–dog relationships during the COVID-19 pandemic: booming dog adoption during social isolation
The recent COVID-19 pandemic led to uncertainty and severe health and economic concerns. Previous studies indicated that owning a companion animal, such as a dog or a cat, has benefits for good mental health. Interactions with animals may help with depression and anxiety, particularly under stress-prone conditions. Human–animal interactions may even improve peer-to-peer social relationships, as well as enhance feelings of respect, trust, and empathy between people. Interestingly, it has also been shown that stress and poor well-being of dog owners negatively affect the well-being of their companion animals. However, a dramatic increase in dog abandonment could potentially occur due to COVID-19 related health, economic and social stresses, as well as due to the inconclusive reports of companion animals being potential COVID-19 carriers. Such a scenario may lead to high costs and considerable public health risks. Accordingly, we hypothesized that the COVID-19 pandemic, and the related social isolation, might lead to dramatic changes in human–dog bidirectional relationships. Using unique prospective and retrospective datasets, our objectives were to investigate how people perceived and acted during the COVID-19 pandemic social isolation, in regards to dog adoption and abandonment; and to examine the bidirectional relationship between the well-being of dog owners and that of their dogs. Overall, according to our analysis, as the social isolation became more stringent during the pandemic, the interest in dog adoption and the adoption rate increased significantly, while abandonment did not change. Moreover, there was a clear association between an individual’s impaired quality of life and their perceptions of a parallel deterioration in the quality of life of their dogs and reports of new behavioral problems. As humans and dogs are both social animals, these findings suggest potential benefits of the human–dog relationships during the COVID-19 pandemic, in accordance with the One Welfare approach that implies that there is a bidirectional connection between the welfare and health of humans and non-human animals. As our climate continues to change, more disasters including pandemics will likely occur, highlighting the importance of research into crisis-driven changes in human–animal relationships.
Introduction
The virus SARS-CoV-2 emerged in December 2019, in Wuhan, China. This unknown respiratory disease developed into the pandemic, termed COVID-19, as declared by the World Health Organization in March 2020 (Bojdani et al., 2020). One of the main approaches worldwide for combating the disease is social isolation and distancing, at least until a protective vaccine is available (Koo et al., 2020; Lewnard and Lo, 2020; Bavel et al., 2020). Social isolation may prevent the spread of the disease, but it may also lead to other concerns. One of the greatest concerns regarding the influence of social isolation is its psychological effect on humans. Extended social isolation may lead to a significant decrease in quality of life and well-being, and to high levels of stress, in both the infected and non-infected populations (Xiao et al., 2020; Bavel et al., 2020). Social isolation is an additional stressor in an already highly stressful world environment, adding to people's extensive fear of the novel COVID-19 pandemic threat (Bavel et al., 2020; LeDoux, 2012; Mobbs et al., 2015). In addition, social distancing included full lockdowns in many countries, including Israel, with dramatic economic effects (Anser et al., 2020; Sangar et al., 2019). Adverse local and global economic impacts, in addition to drastic personal income reduction, may be detrimental to people's psychological health and general well-being (Xiao et al., 2020).
Interestingly, the mental health benefits of owning a companion animal, such as a dog or a cat, have been shown by several scientific studies (Serpell, 1991;Beetz et al., 2012;Powell et al., 2019). The majority of studies indicate that interactions with animals may help with depression, anxiety, and stress, in particular under stress-prone conditions (Beetz et al., 2012). On the one hand, companion animals provide companionship, improve mood, and may ease loneliness; human-animal interactions may even improve peer-to-peer social relationships, as well as enhance feelings of respect, trust, and empathy between people (Powell et al., 2018;Beetz et al., 2012;Powell et al., 2019). On the other hand, it has also been shown that stress and poor well-being of owners negatively affect the stress and well-being of their companion animals (Buttner et al., 2015;Sumegi et al., 2014;Ryan et al., 2019). For example, there has been some indication that the stress of the owner could influence their dog's cognitive ability (Sumegi et al., 2014). Moreover, changes in the attention of owners to their dogs may affect the behavior of the dogs (Kaminski et al., 2009;Payne et al., 2016). Therefore, we hypothesized that the COVID-19 pandemic might lead to dramatic changes in human-dog bidirectional relationships. On the one hand, owning a dog may assist the owner in coping with the stressful world situation, and therefore, more people may decide to adopt a dog during this pandemic. On the other hand, behavioral problems in dogs were reported to be one of the main reasons for the abandonment of dogs to shelters (Patronek et al., 1996;Salman et al., 2000); if changes in the lives of owners occurred during the COVID-19 pandemic, and indeed, if behavioral problems in their dogs developed as was shown under other circumstances (Sumegi et al., 2014), then this might increase the risk of dog relinquishment.
Another potential risk factor for dog abandonment and relinquishment during the COVID-19 pandemic was the suspected epidemiological role of dogs in the spread of SARS-CoV-2. There was a growing concern worldwide that companion animals, specifically dogs and cats, could transmit the disease to humans (Goumenou et al., 2020; Parry, 2020; Leroy et al., 2020). Although the anecdotal reports were inconclusive, they could lead to an increase in the number of dogs relinquished by their owners. Thus, overall, the inconclusive reports of companion animals being potential carriers of the COVID-19 virus, the economic crisis, and the general stress and panic during this pandemic could potentially cause a dramatic increase in dog abandonment numbers. Since such a scenario might incur high costs and present considerable risk to public health, it should be explored. Relinquishment and abandonment of companion animals is a global problem: it is estimated that millions of pets are abandoned each year (Fatjo et al., 2015), even without a pandemic in the background. It results in increasing numbers of free-roaming animals, overcrowded animal shelters, and impaired animal welfare, and it carries high costs for taxpayers (Fatjo et al., 2015). Moreover, it is a severe public health issue due to the potential transmission of zoonotic diseases (such as rabies) and attacks on people (Carter, 1990; Burgos-Caceres, 2011). All of these threats also carry remarkable economic consequences, affecting national and local governments, humane organizations, and individuals (Carter, 1990).
In 2012, an online, searchable database of animals needing homes in Israel (http://Yad4.co.il) was established by the first author. The first and only project of its kind in Israel, Yad4 serves as a national database for dog adoption, as it includes the vast majority of abandoned dogs needing homes throughout the country. As such, the database provides both an understanding of the current landscape of dog abandonment and adoption at any given moment, and a unique look into the longitudinal relationships of dogs and people, as the same dogs may be tracked across time, multiple homes, and shelter stays. The non-profit Yad4 initiative aims to rescue abandoned animals in Israel by increasing adoption rates, reducing the extent of dog euthanasia, and shortening the length of stay at the shelters until adoption. The website offers a user-friendly search engine for potential adopters to find available dogs from organizations and municipal shelters across the country of Israel. The information is uploaded and updated by animal welfare organizations and municipal veterinarians, typically as soon as they have the dog in their possession. As of 2020, 72 animal welfare organizations and municipal shelters are registered and active on the website, each independently managing its own pool of adoptable pets with its own online account. During the COVID-19 pandemic, the website operated as usual, although initially there was concern about massive abandonment and a decrease in adoptions.
In order to control the pandemic, gradual social restrictions were initiated in Israel during March 2020, and in April a total lockdown was implemented for a full month by the Israeli government, as marked on the timeline in Fig. 1. During this period, walking the dog and veterinary care were exceptions to the lockdown restrictions, as were dog adoptions from animal welfare organizations and municipal shelters. Therefore, although people were not allowed to be more than 100 m from their homes, dog adoption and dog walking were permitted throughout these periods.
The objectives of this study were to investigate: (1) how the COVID-19 pandemic affected adoption and abandonment of dogs at shelters, and the public's general interest in adopting a dog; (2) the association between the quality of life of owners and their dogs during the pandemic; and (3) the effect of the pandemic on the development of new behavioral problems and on the relinquishment rate of dogs by their owners.
Results
This study focused on a new aspect of the COVID-19 pandemic by investigating the human-dog relationship during this crisis. Dog adoptions, abandonment, and the association between the well-being of the owners and their perceptions of the quality of life of their dogs were examined. Overall, in contrast to some of the initial concerns, all dog adoption measures significantly improved as the social restrictions became stricter. Furthermore, there was a clear association between an individual's quality of life and their perceptions of their dog's quality of life and behavior, as well as the probability of their relinquishing their pet.
Changes in dog adoption and abandonment. The database of the Yad4 website was analyzed in order to investigate dog abandonment and adoptions under the growing pressure of the COVID-19 pandemic. Most abandoned dogs offered for adoption in Israel are published on the Yad4 website, which includes most animal welfare organizations and municipal shelters for dogs. Therefore, the dogs uploaded daily to the website represent the abandoned dog population, which consists mainly of dogs relinquished by their owners. Overall, according to our analysis, as the social restrictions became stricter during the COVID-19 pandemic in Israel, the number of potential adopters (people looking to adopt a dog) and the dog adoption rate increased significantly (Fig. 2), while dog abandonment did not change. Multiple linear regression analyses of Yad4 records from January 2016 to May 2020 revealed that the main periods during the development of the pandemic in Israel were significantly associated with dog adoption measures, while the abandonment rate did not change (Fig. 2; Supplementary Table S1). The number of dogs uploaded to the website, representing most of the abandoned dogs in Israel, did not change significantly over the years, including during the COVID-19 pandemic (Fig. 2a-c). On the contrary, adoption measures were significantly affected by the different periods (Fig. 2d-i), particularly after the first COVID-19 patient in Israel was diagnosed, and to an even further extent during the social lockdown. Between the diagnosis of the first patient in Israel and the full lockdown of the country, the average number of adoption requests submitted online was 31.1 ± 1.9 (mean ± SEM) requests per day; during the total lockdown, the average number of dog adoption requests was 111.3 ± 4.1 requests per day; and during the gradual opening in May, 73 ± 4.6 adoption requests were submitted per day. However, before the COVID-19 outbreak in China, the average daily number of dog adoption requests was only 25.7 ± 4.1 requests per day. Linear regression analysis revealed that, after controlling for the effects of the month, the year, and governmental initiatives for the encouragement of responsible dog ownership between 2018 and 2019, the increases in the number of adoption requests during the outbreak in Israel and the full lockdown were significantly higher than in the period before the COVID-19 outbreak in China (P < 0.05; Fig. 2; Supplementary Table S1). Accordingly, the average number of adopted dogs increased significantly already following the outbreak in China, as well as during the outbreak in Israel and the full lockdown, as compared to before the pandemic (P < 0.05; Fig. 2; Supplementary Table S1). Immediately after the outbreak in China, the average daily number of adopted dogs was 17.3 ± 2.2 dogs per day; during the outbreak in Israel it was 22.8 ± 2.1 adopted dogs per day; during the total lockdown it was 26.1 ± 2.2 adopted dogs per day; and after the gradual opening it was 14.7 ± 1.1 adopted dogs per day, which is similar to the period before the COVID-19 outbreak in China (14.1 ± 0.3 adopted dogs per day).
Furthermore, as compared to the years prior to the COVID-19 pandemic, the length of stay (LOS) of the dogs at the shelter, calculated as the interval from the time a dog was uploaded to the Yad4 website until it was marked by the organizations as adopted, was significantly shorter following the media reports of the COVID-19 outbreak in China and thereafter, with the shortest LOS (10.1 ± 0.5 days) during the full lockdown. Potential confounders, such as the month, the year, and governmental initiatives, were controlled in the linear regression models (Supplementary Fig. S1; Supplementary Table S1; P < 0.05).
Another option available to the public on the Yad4 website was to fill in a request to serve as a foster family, as an alternative to adoption. Usually, the demand for foster families among the organizations is very high, but the number of available foster families is low; therefore, typically, there are no available foster families, since the organizations use them all. During the pandemic period, the number of foster families was higher than the demand. Accordingly, from the reports about the outbreak in China until the end of the lockdown in Israel, as well as during the gradual opening, the number of available foster families increased significantly. For example, as described in Fig. 3a, b, by the end of April 2019 there were no available foster families on the Yad4 website, since they were all occupied and used by the organizations; in contrast, at the time of the outbreak in China, 226 foster families were available but had not received a dog to foster, and by the end of April 2020, there were 844 available foster families.
Local and global online searches for adoptable dogs. The daily number of visitors on the Yad4 website, from the diagnosis of the first COVID-19 patient in Israel until the end of the full lockdown, was significantly higher than during the whole period before the pandemic (Fig. 3). The effects of year and month were controlled in the models (Supplementary Table S1). The linear regression model revealed a significant increase in daily online visits, by 657.9 ± 80.8 (coefficient ± SE) visits when the outbreak emerged in Israel during March, and by 2311 ± 82.1 daily visitors during the total lockdown period (Fig. 3a-c; P < 0.05). For example, the absolute number of online visits in April 2020 was 221,959, as compared to 72,703 in April 2019 and 91,920 in October 2019, which is typically the busiest season of the website. Interestingly, according to global non-scientific media reports, the demand for adoptable dogs was also high in other countries. Pictures of empty cages from many countries were published but, to the best of the authors' knowledge, no scientific data documenting this phenomenon has been published so far. Thus, the global trend was investigated by analyzing Google Trends data for searches all around the world, as well as specifically in the USA. In order to do so, the timeline was divided into four periods: (1) before the outbreak in China; (2) from the first media reports about the outbreak in China on December 27th until March 13th, when the World Health Organization (WHO) announced Europe to be the epicenter of the pandemic; (3) the main lockdown worldwide, from the announcement of the WHO until the gradual opening in May; and (4) during May. The effects of year and month were controlled in the models (Supplementary Table S2). Interestingly, the world trends, according to the Google Trends data, were found to be similar to those reported herein for Israel (Fig. 3). The trends of worldwide searches online for "adopt a dog" were significantly higher during the periods of the outbreak in China and of the lockdowns declared in many countries, as compared to the year 2019 (Fig. 3d-f).
(Fig. 1 caption: Timeline of the COVID-19 pandemic in Israel. The different colors, which get darker, represent the various periods analyzed in this study (x-axis): before the COVID-19 outbreak in China (years 2016-2019; light gray); from the initial outbreak in China until the first diagnosed patient in Israel (dark gray); during the outbreak in Israel, from the diagnosis of the first COVID-19 patient until the lockdown declared by the Israeli government (light brown); during the full lockdown for a month (brown); and the gradual opening in May (gray, on the right side of the figure). The daily number of newly diagnosed COVID-19 patients in Israel is represented as red dots.)
(Fig. 2 caption: Each row represents data of a different variable: upper row (panels a-c), number of abandoned dogs (marked in red); middle row (panels d-f), number of adoption requests made by potential owners (marked in blue); lower row (panels g-i), number of adopted dogs (marked in green). Daily data are presented in the first and second columns; each dot represents the daily number of each parameter, to demonstrate trends over time. In the left column (panels a, d, g), data are presented from 2016 until May 2020. In the middle column (panels b, e, h), data are presented as a zoom-in, from November 2019 to May 2020; periods related to the COVID-19 pandemic are separated by colors, as detailed in Fig. 1. In the right column (panels c, f, i), the results of multivariate linear regression models are presented. In these models, the predictors were the different time periods (from the outbreak in China to the outbreak in Israel, the developments in Israel until the full lockdown, the full lockdown, and the gradual opening), each compared to the period prior to the COVID-19 pandemic (from 2016 until the outbreak in China, represented by the horizontal dotted line), controlled for year, month, and governmental initiatives for dog adoption in 2019. The data are presented as coefficients (large dots) and their 95% confidence intervals (bars); P < 0.05.)
Given the high demand for dogs to adopt during the pandemic, the second part of our study included questionnaires targeting people who had recently adopted a dog, as well as current general dog owners, to explore the motivation behind this increase in demand for adoptable dogs.
The motivation for dog adoptions during the COVID-19 pandemic lockdown, and the rate of return of dogs to shelters after the gradual opening of the lockdown. An online questionnaire was carried out in order to explore the reasons for dog adoption, particularly during the COVID-19 related lockdown, as well as the rate of return of the adopted dogs to the shelters, both during the lockdown and after its opening. This questionnaire was active for five days, starting on May 20th, 2020 (20 days after the gradual opening of the lockdown), and targeted people who had adopted a dog from a shelter during the COVID-19 pandemic, as described in the "Methods" section, resulting in n = 508 people in total; 312 of the respondents stated that they had adopted a dog during the pandemic (January-May). Of these 312 new dog owners, 38.5% stated they had considered adopting a dog for a long time, and being at home during the COVID-19 lockdown seemed like a good opportunity; 37.8% stated that they had planned to adopt a dog regardless of the situation; 8.0% stated they felt lonely and/or stressed and believed that owning a dog might help; 9.3% had heard about dog abandonment in the media and felt it was the right thing to do; and a few people adopted for other reasons, as detailed in Fig. 4. Only 8 of the participants who had adopted a dog during the pandemic (2.6%) had already returned or relinquished the dog, or were considering relinquishment.
The association between the impaired quality of life of owners and their perception of the quality of life of their dogs. In order to study the association between the quality of life of owners and that of their companion dogs under the COVID-19 pandemic situation, a digital questionnaire for dog owners was active during the full lockdown and social isolation (April). Participants replied to questions regarding their own well-being, as well as the well-being of their companion dog. Questions such as the effect of the pandemic on their stress level and personal finances, their concern about their own health, and their perceptions regarding their dog's well-being and behavior under the COVID-19 related lockdown were included. The questionnaire also included questions regarding the characteristics of the owners and their dogs, as well as the care they provided to their dog during the pandemic. These variables were controlled in the statistical models (details in Supplementary Table S3). The outcome variables were set on a scale of 1-5 (for example, 1-low stress; 5-extremely stressed). Scores 4 and 5 were relabeled as "severe stress" for the analyses, and they were compared to scores 1-3 ("none to moderate"). The questionnaire was answered by n = 3138 individuals. Overall, 25% of the participants were very concerned about their health (Fig. 5a), 25.6% stated they were extremely stressed (Fig. 5b), and 22.9% reported that their personal finances were severely affected (Fig. 5c).
(Fig. 3 caption: Online users' visits to the Israeli Yad4 website, and worldwide Google searches, for adoptable dogs before and during the COVID-19 pandemic. a The daily numbers of visitors on Yad4.co.il, the Israeli adoption search engine, from January 2016 to May 2020. b Zoom-in on the same data as in panel a during the COVID-19 pandemic, from November 2019 to May 2020. c Results of the linear regression model for Yad4 online visits, in each period during the COVID-19 pandemic, as compared to before the pandemic. In these models, the predictors were the different periods (from the outbreak in China to the outbreak in Israel, the developments in Israel until the full lockdown, the full lockdown, and the gradual opening), each compared to the period prior to the COVID-19 pandemic (from 2016 until the outbreak in China, represented by the horizontal dotted line), controlled for year, month, and governmental initiatives for dog adoption in 2019. d The weekly trends of Google searches for "adopt a dog", presented from November 2019. e Zoom-in on the same data as in panel d, during the COVID-19 pandemic; both worldwide searches (orange) and USA searches (blue) are presented. f Results of the linear regression model for global searches for adoptable dogs. In this model, the predictors were the different periods: from the outbreak in China to the declaration by the World Health Organization of Europe as the epicenter of the pandemic, the period during which most of the world was under restricted social isolation, and the gradual opening in May 2020; each period was compared to the period from January 2019 to the outbreak in China (represented by the horizontal dotted line), controlled for year and month. In panels c and f, data are presented as coefficients (large dots) and their 95% confidence intervals (bars); P < 0.05.)
For further analysis, an impaired quality of life index was calculated as the mean of these scores (general stress; concern for their own health; and the damage to their personal financial situation; Fig. 5d). In addition, in the questionnaire, owners were asked to rank, on a scale of 1-5, their assessment of the quality of life of their dogs during the COVID-19 lockdown, as well as their recognition of new behavioral problems, and whether they had considered relinquishing their dog.
(Fig. 5 caption fragment: ..., and the intention of the owner to abandon the dog (triangular). Data are presented as odds ratios and their 95% confidence intervals (bars); P < 0.05 when the 95% confidence interval does not cross the horizontal dotted line.)
As hypothesized, multivariate logistic regressions revealed that an increase in the impaired quality of life index of the owner was associated with a lower quality of life of the dog, as assessed by the owner (odds ratio: 0.887 for every one-unit increase in the owner's index; Fig. 5e; P < 0.05). In addition, for a one-unit increase in the impaired quality of life index of the owner, the odds ratio for recognition of new behavioral problems in the dog (as defined and recognized by the owner) was 1.397 times higher (Fig. 5e; P < 0.05). Moreover, for a one-unit increase in the impaired quality of life index of the owner, the odds ratio for relinquishment was 1.762 times higher (Fig. 5e; P < 0.05). Overall, the number of people who recognized behavioral problems in their dogs was low (11.6% of the dog owners), as was the number of people who considered relinquishing their dog (1%). Still, according to these data, a severely impaired quality of life of owners under the COVID-19 pandemic and lockdown was a significant risk factor associated with the quality of life of the dog, with the recognition of the dog's behavioral problems, and with dog relinquishment, as reported by dog owners. Characteristics of the dogs and owners, as well as ownership habits, were controlled in the statistical models, as fully detailed in the "Methods" section and Supplementary Table S3. Further questions included in the model, regarding the type of behavioral changes in the dogs, are presented in Supplementary Table S4.
Methods
The study was conducted in accordance with the ethical guidelines of The Hebrew University of Jerusalem. As detailed below, data analyzed included four main datasets: (1) retrospective data from the pet adoption website Yad4 (http://Yad4.co.il), an online search engine for adoptable pets in Israel, from January 2016 to May 2020; (2) retrospective data regarding worldwide Google searches for adoptable dogs, downloaded from Google Trends, from November 2016 to May 2020; (3) data gathered from a prospective online digital questionnaire targeting dog owners in Israel, which was active from March 27th, 2020 to April 30th, 2020, during the COVID-19 related full lockdown in Israel; and (4) data gathered from an online digital questionnaire targeting people in Israel who adopted a dog from a shelter during the COVID-19 pandemic; the questionnaire was active from May 20th to May 25th, 2020, a period of time following the gradual opening of the lockdown.
Collection of data regarding abandoned adoptable dogs, adoptions and adopters from the Yad4 website. Information was gathered regarding abandoned adoptable dogs, adoptions, and adopters' data, as recorded by Yad4, an open-source online website. On this Israeli website, animal welfare organizations and municipal veterinarians upload individual information for each abandoned dog, typically as soon as it enters the shelter, and this information is available to potential new dog owners, who can fill out an online adoption request form through the website to be considered and approved by the shelter. The dataset included: records of the dogs uploaded to the website, the date of marking a dog as adopted (if indeed adopted), the number of adoption requests sent through the website, and requests to serve as foster families. Data regarding the online use of the Yad4 website were extracted from Google Analytics. The database of the Yad4 website was analyzed from January 2016 to May 2020, and included 33,883 adoptable dogs, 2,618,190 online visits to the website, 53,923 online adoption requests, and 2042 fostering applications. As demonstrated in Fig. 1, data were compared across five different periods: from January 2016 until the outbreak in China; from the initial outbreak in China until the first diagnosed COVID-19 patient in Israel; during the outbreak in Israel, from the diagnosis of the first patient until the full lockdown declared by the Israeli government; during the full lockdown for a month; and during the gradual opening in May 2020.
Collection of data regarding worldwide Google searches for adoptable dogs. Data regarding the use of the website and worldwide Google searches for adoptable dogs were extracted with Google Analytics (https://analytics.google.com). Retrospective data (from January 2016 until May 2020) regarding trends of online searches for "adopt a dog", both in the USA and worldwide, were downloaded from Google Trends (https://trends.google.com) and used for the analysis. As detailed below, data were compared across four different periods: (1) before the COVID-19 outbreak in China; (2) from the media reports about the outbreak in China on December 27th, 2019 until March 13th, 2020, when the World Health Organization (WHO) announced Europe as the epicenter of the pandemic; (3) from the WHO announcement until the gradual opening in May 2020 (the main lockdown in many countries worldwide); and (4) during May 2020 (the gradual opening in many countries).
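As an illustration, the sketch below shows one way such a search-interest series could be pulled programmatically; it assumes the unofficial pytrends client, whereas the study states only that the data were downloaded from trends.google.com.

```python
# Hedged sketch: retrieving the "adopt a dog" search-interest series with
# the unofficial pytrends client. This is an assumption for illustration;
# the study describes downloading the data from trends.google.com directly.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=120)  # tz in minutes, 120 ~ UTC+2 (Israel)

# Worldwide weekly interest (0-100 index), January 2016 to May 2020
pytrends.build_payload(["adopt a dog"], timeframe="2016-01-01 2020-05-31", geo="")
worldwide = pytrends.interest_over_time()

# Same query restricted to the USA
pytrends.build_payload(["adopt a dog"], timeframe="2016-01-01 2020-05-31", geo="US")
usa = pytrends.interest_over_time()

print(worldwide["adopt a dog"].tail(), usa["adopt a dog"].tail())
```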
Online digital questionnaire for dog owners during the COVID-19 related lockdown in Israel. An online digital questionnaire targeting dog owners in Israel was active during the COVID-19 related full lockdown in Israel. The questionnaire was designed by the researchers using Google Forms, and was distributed by a company who specializes in this purpose (Lead Marketing Ltd.; https://leadmarketingltd.com), in order to specifically and effectively reach dog owners in Israel. The online distribution of the questionnaire was based on targeting a predefined group of respondents, with a high level of accuracy, characterized by their interests and online behavior (e.g., users that shop online for dog food or who perform searches for information on dog care). Various digital platforms were used (i.e., Google Display Network, Facebook, Instagram, and others), and banner ads led users to the questionnaire, asking them to voluntarily and anonymously participate in the survey with their consent to be part of this research study.
The questionnaire for dog owners was in Hebrew, and participants were asked to reply to questions regarding their own well-being, as well as the well-being of their companion dog. It included questions regarding the characteristics of the dog population: the source of the dog (e.g., adopted from a shelter, backyard breeding, official breeders), the age of the dog and its reproductive status (sterilized or intact), the number of years they had owned the dog, and where the dog is kept (e.g., inside an apartment, in a private house, in a garden, free roaming). The characteristics of the owners included age, the geographical area in Israel, type of residential area (e.g., city, countryside), and gender. In addition, owners were asked questions regarding their well-being under the lockdown due to the COVID-19 pandemic (on a scale of 1-5): "how stressed are you overall from the COVID-19 pandemic?" (1-not stressed, 5-severely stressed); "to what extent are you worried about your health risk from the COVID-19 epidemic?" (1-not worried, 5-extremely worried); "to what extent was the current crisis harmful to your personal financial income?" (1-not at all, 5-severely harmful); "to what extent was your daily routine altered during this time?" (1-no change, 5-extreme change). In addition, owners were asked how many times a day they walked the dog during the lockdown, the average length of the walk, whether the attention they gave the dog changed (increased, was not changed, or decreased), their assessment of the overall quality of life of their dog under the lockdown (1-markedly impaired, 5-markedly improved), whether new behaviors were expressed by their dog, and whether or not they were considering relinquishing their dog. For the purpose of analyses regarding the link between human well-being and their answers regarding their dog, an impaired quality of life index was generated by calculating the average score of the owners based on the responses regarding their overall stress, health concerns, and their personal financial harm due to the COVID-19 epidemic and lockdown, as detailed above. The questionnaire was conducted from March 27th to April 30th, 2020, during the COVID-19 related full lockdown in Israel, and was successfully answered by 3138 individuals. Records were not included if they were incomplete, were completed by people who stated they did not own a dog at the time of the questionnaire or by minors (under 18 years old), or if the age of the dog or the number of years that they had raised it was implausible (e.g., 51 or 139). Thus, 2906 records were included in the analyses. Characteristics of the participants and their dogs are detailed in Supplementary Fig. S2.
Online digital questionnaire for people in Israel who adopted a dog from a shelter during the COVID-19 pandemic. An online digital questionnaire, targeting mainly people in Israel who adopted a dog from a shelter during the COVID-19 pandemic, was active from May 20th to May 25th, 2020, after the gradual opening of the lockdown. The questionnaire was designed by the researchers in Hebrew using Google Forms and was distributed by Lead Marketing Ltd., in a similar manner to the first questionnaire, in order to effectively target individuals of the predefined group, such as individuals who visited dog adoption websites (mainly Yad4), dog shelters, and their Facebook fan pages.
Participants were asked to reply to questions regarding the date of dog adoption, the main reason for the specific timing of the adoption (Fig. 5), as well as on the short-term success of the rehoming (e.g., planning to keep the dog, gave it to another family, returned it to the shelter, or considering not keeping it). The questionnaire was answered by 508 participants, and 312 of them stated they adopted the dog during the pandemic (January-May, 2020) Statistical analyses. Statistical analyses were performed using commercial statistical software (IBM SPSS Statistics, version 24.0; STATA, version 15.0). Linear regression analysis was utilized to evaluate the effects of the spread of COVID-19 and lockdown stages on adoption and abandonment outcomes for pet dogs, using data from the Yad4 adoption website. The general structure of the estimated regressions was as detailed in the Eq. (1).
where Y_t was the outcome of interest in month t, and the δ's were dummy variable effects of the stages of outbreak and lockdown, all compared to the baseline period before the outbreak of COVID-19 in China. δ_ChinaOutbreak is a dummy variable for the months between the outbreak of COVID-19 in China and the first confirmed case in Israel; δ_LocalOutbreak is a dummy variable for the period between the first local case and the start of the full lockdown; δ_LocalLockdown, for the period between the start of the lockdown and the start of the gradual opening in Israel; and δ_GradualOpening, for the gradual opening period in May. γ_t are the calendar-month fixed effects controlling for seasonality in adoption activities, and trend_t controls for a linear annual time trend. regulation_t controls for a change in governmental initiatives regarding the encouragement of responsible ownership and adoptions between 2018 and 2019, to ensure this does not drive our results. ε_t is a standard error term. Several outcome variables were considered: the number of adoption requests received through the website and the number of dogs marked as adopted, as measures of the level of interest in conducting an adoption process and of the final outcome of successful adoptions; the number of dogs uploaded to the website, as a measure of recent abandonment cases; and the number of users on the website, as a measure of general interest in adoption.
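A minimal sketch of how Eq. (1) could be estimated follows; it assumes a pandas DataFrame with hypothetical column names, and is illustrative only, as the study reports using SPSS and STATA rather than Python.

```python
# Minimal sketch of the Eq. (1) period-dummy regression, for illustration
# only; all column names here (adoptions, period, regulation, date) are
# hypothetical placeholders for the Yad4 daily records.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("yad4_daily.csv", parse_dates=["date"])  # hypothetical file
df["month"] = df["date"].dt.month        # calendar-month fixed effects
df["trend"] = df["date"].dt.year - 2016  # linear annual time trend

# 'period' labels each observation as baseline, china_outbreak,
# local_outbreak, local_lockdown, or gradual_opening; 'regulation' flags
# the 2018-2019 governmental responsible-ownership initiatives.
model = smf.ols(
    "adoptions ~ C(period, Treatment('baseline')) + C(month) + trend + regulation",
    data=df,
).fit()
print(model.summary())  # period coefficients correspond to the dots in Fig. 2c, f, i
```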
The major part of the analysis uses Israeli data, as detailed above. However, two outcome variables from Google Trends were added to compare the results with worldwide trends: the number of web searches, in the US and worldwide, for the phrase "adopt a dog" during the same period. The regression analysis for these outcomes was similar to the main model, but the stages of shutdown were defined as: the outbreak in China, the declaration by the World Health Organization of Europe as the epicenter of the pandemic, and the gradual opening in May 2020. Because lockdown policies were not centralized in the US and worldwide, we did not include a specific separate shutdown time period.
The digital questionnaire of the dog owners was analyzed using logistic regression. The binary outcome variables considered were: the quality of life of the dog, as assessed by the owner; the development of new behavioral problems, if recognized and defined by the owners; and whether the owner was considering abandoning the dog. The general model for estimation was as detailed in Eq. (2):

logit(Pr(Y_i = 1)) = β_0 + β_1·LifeChange_i + β_2·CrisisIndex_i + Z_i·γ + W_i·δ + D_i·θ + v_i + ε_i (2)
where Y_i is the binary outcome of interest for respondent i. LifeChange_i is a dummy variable depicting whether the respondent declared that their life changed following the COVID-19 outbreak. CrisisIndex_i is the average of three responses addressing three aspects of the negative effects of the outbreak: economy, health concerns, and stress (as reported by the respondents). Z_i are owner characteristics: gender, age, and whether there are young children in the household. W_i are dog characteristics: age, whether the dog was adopted from a shelter, and the number of years with the owner. D_i are characteristics of the care given to the dog: number of walks a day, the average duration of the walks, and a general measure of attention to the dog. v_i are geographical-area fixed effects and ε_i is a standard error term. The logistic regressions, being based on the responses of the owners, which can themselves be biased, should be interpreted as descriptive analyses rather than given a causal interpretation. Descriptive statistics are given as mean ± SE, 95% confidence interval, or as frequency (n) with percentage (%), as applicable. P < 0.05 was considered statistically significant. All reported P values were based on a two-tailed hypothesis.
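A minimal sketch of how Eq. (2) could be fitted is shown below, again with hypothetical column names standing in for the survey data; the study itself used SPSS/STATA.

```python
# Minimal sketch of the Eq. (2) logistic regression; all column names are
# hypothetical placeholders for the owner survey, and the study reports
# using SPSS/STATA rather than Python.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

survey = pd.read_csv("owner_survey.csv")  # hypothetical file

# CrisisIndex: mean of the three 1-5 items (stress, health worry,
# financial harm), i.e., the impaired quality of life index.
survey["crisis_index"] = survey[["stress", "health_worry", "financial_harm"]].mean(axis=1)

fit = smf.logit(
    "new_behavior_problem ~ life_change + crisis_index"
    " + gender + age + young_children"                       # Z_i: owner traits
    " + dog_age + adopted_from_shelter + years_with_owner"   # W_i: dog traits
    " + walks_per_day + walk_duration + attention"           # D_i: care given
    " + C(region)",                                          # v_i: area fixed effects
    data=survey,
).fit()
print(np.exp(fit.params))  # odds ratios, e.g. ~1.397 per unit of crisis_index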
Discussion
Humans and dogs are both social animals, and their bond can be traced back at least 15,000 years to the Bonn-Oberkassel dog that was found buried with two humans (Janssens et al., 2016). According to the 2019-2020 National Pet Owners Survey conducted by the American Pet Products Association (APPA), approximately 63.4 million households in the USA owned at least one dog, making dogs the most widely owned type of companion animal across the USA at this time. The advantages of raising a dog have been widely investigated. The human-dog bond has potential physical, psychological, and mental benefits, and can improve the general well-being and happiness of owners (Lass-Hennemann et al., 2020; Tzivian et al., 2015; Barker and Barker, 1988; Wells, 2007). Despite all the known advantages, and the evidence that separation between a dog and its owner negatively impacts not only the dog but also the wellness of the owner (Lowe et al., 2015), millions of companion dogs are abandoned every year (Marder and Duxbury, 2008). Dog abandonment carries high costs and a significant risk for public health (Fatjo et al., 2015; Kumar, 2002; Carter, 1990). Prior to this study, it was unknown whether the COVID-19 pandemic was a risk factor for dog abandonment, as well as a risk for impaired well-being of the dogs as a reflection of the potentially impaired well-being of the owners. Therefore, the motivation to conduct this study was to explore the human-dog relationship during this pandemic, to benefit the welfare and well-being of both humans and animals, in accordance with the One Welfare approach. The One Welfare approach extends the One Health theme, suggesting that there is a strong connection between the welfare and health of humans and animals, including both physical and mental health, and that improving animal welfare often improves human welfare (and vice versa) (Pinillos et al., 2016; Mor et al., 2018; Panning et al., 2016; Lem, 2019; Jordan and Lem, 2014; Card et al., 2018). According to this approach, veterinarians, animal owners, animal welfare organizations, human psychiatrists, environmental scientists, and others should collaborate and share expertise in order to care for the welfare of both animals and their owners. Accordingly, the rationale behind this study was the hypothesis that human perceptions and actions regarding dog ownership and adoption might be influenced by the COVID-19 pandemic and the related social isolation, as well as by the stress and well-being of both species.
Our data indicate that, at least so far, the concern of increased dog abandonment is not justified; on the contrary, the opposite has occurred. As social restrictions increased during the COVID-19 pandemic, the rates of dog adoptions improved significantly (Fig. 2); the demand for adoptable dogs and the requests to serve as foster families increased significantly, and accordingly, the length of stay of dogs at the shelter was significantly shorter. Previous reports have associated disasters, such as earthquakes or other situations requiring immediate evacuation, with massive unintentional dog abandonment (Nagasawa et al., 2012). However, people may refuse to separate from their pet when required to by disasters or extreme situations, as pet owners may regard their pets as being as close as, or even closer than, family (Chadwin, 2017; Barker and Barker, 1988). This may be the reason why, so far, the vast majority of people were reluctant to relinquish their dog during the COVID-19 pandemic. Still, further investigation is required, as the potential risk for dog relinquishment in the coming months cannot be completely excluded, due to the various social and economic impacts that this pandemic may yet bring. Furthermore, as our climate continues to change, more disasters, including additional pandemics, will likely occur, highlighting the need for more research into crisis-driven human behavior changes, including changes in the human-animal relationship.
While it may be clear why people kept their companion animals, the motivation to acquire a new dog through adoption, particularly during the COVID-19 related lockdown, is less intuitive. As expected, many people stated they decided to adopt a dog because they had been planning to adopt prior to the COVID-19 outbreak, and because being at home made them more available for the new challenge. In addition, acknowledgment of the fact that a dog can reduce feelings of stress and loneliness, as well as misleading media publications about increased dog abandonment, played an important role in their decision. Surprisingly, neither pressure from children and the desire to keep children occupied, nor an excuse to leave the house during the lockdown, were reported to play an important role in the decision to adopt a dog under the circumstances. The scientific literature describes characteristics of individuals associated with a higher likelihood of adopting a dog, such as ethnicity and housing (Holland, 2019; Weiss et al., 2012); however, the specific timing of adoption has not been investigated. Nevertheless, a previous study found that owners who had just obtained a dog expected that the new dog ownership would increase their walking activity, happiness, and companionship, and would decrease stress and loneliness (Powell et al., 2018). This may explain the increase in the adoption rate during the COVID-19 pandemic, when social isolation was legally enforced.
In addition to determining the adoption and relinquishment rates associated with COVID-19, we investigated the effect of the stressful pandemic on dog welfare in the pet home environment. The questionnaire for dog owners therefore examined the relationship between their impaired quality of life during the pandemic and the quality of life of their companion dog. An obvious limitation of this study is that the quality of life of the dog and the development of new behavioral problems were assessed subjectively, based on the owner's perception, rather than objectively; the results are nevertheless valuable, as the owners' perceptions of their dog's behavior are likely a more important predictor of relinquishment than objective measures. Previous studies have found that new owners of dogs often do not report the same behavioral problems as the relinquishing owners of the same dog, suggesting that perception plays an important role (Duffy et al., 2014; Stephen and Ledger, 2007).
It was found that impaired quality of life of the owners was associated with a decrease in the quality of life of their dog, as well as with increased development of new behavioral problems, as judged by the owners. Although it has been reported that owners have a poor ability to recognize behavioral problems (Powell et al., 2018; Tami and Gallagher, 2009), the perception of the owners can influence the future of the owner-pet relationship, as well as the probability that they will decide to relinquish the dog (Payne et al., 2015). Thus, an owner's perception that their dog has behavioral problems may influence their ownership, and thereby also the welfare of the dog. As mentioned, the quality of life of the dog and its behavior were neither diagnosed objectively nor assessed by professional observers. Therefore, the characteristics of the dogs and of the owners as risk factors for the dog's low quality of life and new behavioral problems cannot be conclusively determined from these models. Still, those variables were controlled for in the statistical models, and it was found that dog owners' perception of their own impaired quality of life was significantly associated with their assessment of a lower quality of their dog's life and its emerging behavioral problems. These results are consistent with previous studies, which found that the stress level and well-being of humans affect the stress, well-being, cognitive ability and behavior of their dogs (Buttner et al., 2015; Sumegi et al., 2014; Kaminski et al., 2009). An alternative hypothesis is that owners in a crisis situation may have had a pessimistic outlook on their life and surroundings, and that the reported decrease in the dog's quality of life reflected the owner's overall negative outlook rather than any true change in the dog. This information is important for both humans and dogs, since it can inform initiatives to improve the welfare and well-being of both dog owners and their companion dogs, as suggested by the One Welfare approach. For example, the Israeli Veterinary Services (a branch of the Ministry of Agriculture) invests approximately $1.2M annually in encouraging responsible dog ownership and adoption, and has already approved new initiatives based on this study, such as digital online adoption days with education geared towards responsible dog ownership.
In this study, although the overall number of dog owners who reported that they were going to relinquish their dog due to the COVID-19 situation was low, this intention was significantly associated with a poorer quality of life index of the owners. A similarly low percentage of owners who had adopted their dog during the pandemic reported that they had already relinquished the dog or were considering doing so. Since a lack of time is one of the main risk factors for dog relinquishment reported in the literature (Salman et al., 2000), there was a concern that people who adopted during the COVID-19 lockdown would relinquish their dog after returning to routine life. The second survey, intended to detect the relinquishment rate of dogs adopted during the pandemic, was performed at the end of May 2020, after the lockdown was lifted; in most cases, therefore, people had been back to routine life for more than a month following adoption. According to previous research, the highest proportion of dog relinquishments happens within one month after adoption; in fact, owners report becoming aware of behavioral problems in their dogs within 24 h post-adoption (Shore, 2005). It is important to mention that people who decided to relinquish their dog might have avoided our survey. Still, the relatively low number of dogs that were uploaded to the Yad4 website in May 2020 may indicate that, so far, there has not been massive relinquishment after the lifting of the lockdown. Hence, we tentatively suggest that the majority of adoptions were successful. One hypothesis is that owners had more time to spend with their dog at the beginning, which may have helped to ease rehoming; nonetheless, this hypothesis requires further longitudinal investigation. Furthermore, an important issue that was not covered in this study is the difference between individuals who already owned a dog and those who did not, with regard to their coping with the extreme social and economic challenges of the COVID-19 pandemic. Studies show that both children and adults cope better with stress when owning a dog (Chadwin, 2017; Powell et al., 2019). Therefore, we hypothesize that owning a dog might even prevent the development of post-traumatic stress disorder (PTSD) caused by the pandemic, or at least ease coping with it once it has occurred. It has been reported that after the SARS outbreak in 2003, which may be comparable to the COVID-19 pandemic in many respects, patients suffered from PTSD (Wu et al., 2005). It is known that dogs have a positive effect on the treatment of PTSD, and that dog owners might be more resilient (Chadwin, 2017; Powell et al., 2019; Beetz et al., 2019). Therefore, this is an important future direction for human-pet relationship research.
In summary, the COVID-19 pandemic that emerged in December 2019 in Wuhan, China, led many countries to impose social isolation, and caused widespread uncertainty and severe health and economic concerns. Our study indicates that the stricter the social isolation became during the COVID-19 pandemic, the greater the interest in dog adoption. The adoption rate increased significantly, while dog abandonment did not change. Furthermore, there was a clear association between individuals' quality of life and their perceptions of their dog's quality of life and behavior, as well as the probability of relinquishing the dog. As humans and dogs are both social animals, these findings suggest potential benefits of the human-dog relationship during the COVID-19 pandemic, in accordance with the One Welfare approach, which posits a bidirectional connection between the welfare and health of humans and nonhuman animals.
Data availability
All data generated or analyzed during this study is included in this article (and its Supplementary Information files).
"year": 2020,
"sha1": "891b487162cd3773cd620f51b3ad8563cab177d8",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41599-020-00649-x.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "d457d7de07e0abd6e58f6c472d84bf7c20a120d9",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": []
} |
Concircular tensors in Spaces of Constant Curvature: With Applications to Orthogonal Separation of The Hamilton-Jacobi Equation
We study concircular tensors in spaces of constant curvature and then apply the results obtained to the problem of the orthogonal separation of the Hamilton-Jacobi equation on these spaces. Any coordinates which separate the geodesic Hamilton-Jacobi equation are called separable. Specifically for spaces of constant curvature, we obtain canonical forms of concircular tensors modulo the action of the isometry group, we obtain the separable coordinates induced by irreducible concircular tensors, and we obtain warped products adapted to reducible concircular tensors. Using these results, we show how to enumerate the isometrically inequivalent orthogonal separable coordinates, construct the transformation from separable to Cartesian coordinates, and execute the Benenti-Eisenhart-Kalnins-Miller (BEKM) separation algorithm for separating natural Hamilton-Jacobi equations.
Introduction
It is shown in [RM14b] that any point-wise diagonalizable concircular tensor, hereafter called an orthogonal concircular tensor (OCT), can be used to recursively construct separable coordinates for the (geodesic) Hamilton-Jacobi equation. Such coordinates are called Kalnins-Eisenhart-Miller (KEM) coordinates. In [RM14a] it is shown that all orthogonal separable coordinates for the Hamilton-Jacobi equation in spaces of constant curvature occur this way. The work done in [RM14a] serves as an independent verification of the Kalnins-Miller classification of separable coordinates for Riemannian spaces of constant curvature [Kal86]. Hence the classification of OCTs in spaces of constant curvature is crucial for classifying orthogonal separable coordinates in these spaces.
Specifically, OCTs have the following uses:
1. An algebraic classification of these tensors modulo the action of the isometry group can be used to obtain a notion of inequivalence for KEM coordinate systems.
2. Crampin [Cra03] shows that one can obtain transformations to separable coordinates for OCTs with functionally independent eigenfunctions. It is evident from the results in [RM14b; RM14a] that a knowledge of the warped product decompositions of the space is sufficient to obtain transformations to separable coordinates for any KEM coordinate system. We will expand on this idea later.
3. When concircular tensors have simple eigenfunctions, it is shown in [Ben05] (see also [Ben92a; Ben93; Ben04]) that a basis for the Killing-Stäckel space can be obtained. Using the theory presented in [RM14b] one can generalize this result to arbitrary KEM coordinate systems.
4. With a classification of concircular tensors, the BEKM separation algorithm (presented in [RM14b]) can be executed to solve the separation of variables problem for natural Hamiltonians.
Thus an unsolved problem is to obtain a complete classification of these tensors in spaces of constant curvature. A partial classification of these tensors in Euclidean space can be found in [Lun03] (cf. [Ben05]). A complete classification of these tensors for Euclidean space and the Euclidean sphere is implicit in [WW03].
Building on existing knowledge in [Lun03; Cra03] together with new insights from [RM14b], in this article we obtain a complete (local) classification of orthogonal concircular tensors in all spaces of constant curvature with Euclidean and Lorentzian signature (the classification for other signatures can be obtained fairly easily if one wishes). More details on our classification and the way in which it is done are given in Section 2.4, after we have introduced some preliminaries. Some of our results are also summarized in Section 2.4.
Preliminaries and Summary
Different parts of this problem have been solved for special cases by different researchers over the past few decades. A classification of separable coordinate systems in Riemannian spaces of constant curvature was originally done by Kalnins and Miller in [KM86; KM82]; see also [Kal86], which is a book containing their results. The insight provided by their classification was crucial for the development of the theory which we present here. They have extended this work to spaces of constant curvature with arbitrary signature in [KMR84] to obtain a partial classification. In [Kal75] orthogonal separable coordinates in two dimensional Minkowski space are determined and partial results in three dimensional Minkowski space are given. A more detailed classification in two dimensions is given in [MS02a], and in three dimensions in [KM76]. This classification in three dimensions is further refined in [Hin98] and [HM08]. A classification of orthogonal separable coordinates for four dimensional Minkowski space has been given in [KM78] and references therein. Classifications of isometrically inequivalent Killing tensors in two dimensional flat spaces are given in [MS02b], [MST04] and [CDM06], that in three dimensional Minkowski space in [HMS09], and that on the Euclidean three sphere in [CMS11]. Finally, building on results in [Kal86], a version of the BEKM separation algorithm is given in [WW03] for Euclidean space and the Euclidean sphere.
Our approach to this problem has several advantages over previous approaches. First we are able to give a unified theory applicable to spaces of constant curvature with both Euclidean and Lorentzian signatures. This approach allows one to solve the different but related problems listed above. We are able to give a precise notion of inequivalence for orthogonal separable coordinate systems in Minkowski space and thereby give a clear, rigorous and complete classification in this space.
Notations and Conventions
All differentiable structures are assumed to be smooth (class C^∞). Let M be a pseudo-Riemannian manifold of dimension n equipped with covariant metric g. Unless specified otherwise, it is assumed that n ≥ 2. The contravariant metric is usually denoted by G, and ⟨·, ·⟩ plays the role of the covariant and contravariant metric depending on the arguments. We denote by S^p(M) the set of symmetric contravariant tensor fields of valence p on M. Furthermore F(M) = S^0(M) is the set of functions from M to R and X(M) = S^1(M) denotes the set of vector fields over M. If f ∈ F(M) then ∇f ∈ X(M) denotes the gradient of f, i.e. the vector field metrically equivalent to df. Also if x ∈ X(M) then we denote x² := ⟨x, x⟩.
Throughout this article we will be working in pseudo-Euclidean space, which is defined as follows. An n-dimensional vector space V equipped with a metric g of signature ν is denoted by E^n_ν and called pseudo-Euclidean space. We obtain Euclidean space E^n in the special case where ν = 0. Also Minkowski space M^n is obtained by taking ν = 1.
A linear operator T on V is called self-adjoint if

⟨Tx, y⟩ = ⟨x, Ty⟩

for all x, y ∈ V. The above condition is equivalent to requiring T to be metrically equivalent to a symmetric contravariant tensor. By an orthogonal tensor, we mean a symmetric contravariant tensor whose uniquely determined endomorphism is diagonalizable with real eigenvalues. One can check that the eigenspaces of such an endomorphism are necessarily pair-wise orthogonal non-degenerate subspaces. Finally, given a subspace W ≤ V, the restriction of T to W is denoted T|_W.
All the above notions generalize point-wise to a pseudo-Riemannian manifold, although only locally. For example, given a self-adjoint (1,1)-tensor T on M, we say it is an orthogonal tensor if it is point-wise diagonalizable on some (non-empty) open subset of M, and we tacitly work on this subset. Similarly we say T is not an orthogonal tensor on M if T is not point-wise diagonalizable on an open dense subset of M. Similar definitions apply to other notions such as constancy of functions on M.
Self-adjoint operators in pseudo-Euclidean space
In this section we review the metric-Jordan canonical form of a self-adjoint operator on a pseudo-Euclidean space. The details of the theory behind this canonical form are given in [Raj14a]; these are solutions to exercises 18-19 in [O'N83, pp. 260-261].
A Jordan block of dimension k with eigenvalue λ ∈ C is the k × k matrix denoted by J_k(λ) and defined as J_k(λ) := λI_k + N_k, where N_k is the k × k matrix with ones on the superdiagonal and zeros elsewhere. The skew-diagonal matrix of dimension k is denoted by S_k and defined as the k × k matrix with ones on the anti-diagonal, i.e. (S_k)_{ij} = δ_{i,k+1−j}. An ordered sequence of vectors β = {v_1, . . . , v_k} where the matrix representation of g with respect to (w.r.t.) β has the form g|_β = εS_k is called a skew-normal sequence of length k and sign ε = ±1. The subspace spanned by a skew-normal sequence is necessarily non-degenerate and of dimension k (see [Raj14a, lemma 2.1]).
In order to express the metric-Jordan canonical form of a self-adjoint operator on a pseudo-Euclidean space [Raj14a], we use the signed integer εk ∈ Z where k ∈ N and ε = ±1. The notation J_{εk}(λ) is then shorthand for the pair T = J_k(λ), g = εS_k. Furthermore, given matrices A_1 and A_2, we denote by A_1 ⊕ A_2 the block diagonal matrix with blocks A_1 and A_2. The (real) metric-Jordan canonical form of a self-adjoint operator is discussed in detail in [Raj14a]. In this article (for convenience) we will be working with the complex version (it can be deduced from [Raj14a, theorem 3.7]), which is given as follows:
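The following sympy sketch makes these ingredients concrete. The matrices J_k(λ) and S_k follow the definitions above; the final assertion, which is our own check rather than part of the original text, verifies that J_k(λ) is self-adjoint with respect to g = εS_k:

    import sympy as sp

    def jordan_block(k, lam):
        # J_k(lam) = lam*I_k + N_k, with ones on the superdiagonal
        return lam * sp.eye(k) + sp.Matrix(k, k, lambda i, j: 1 if j == i + 1 else 0)

    def skew_diag(k):
        # S_k: ones on the anti-diagonal
        return sp.Matrix(k, k, lambda i, j: 1 if i + j == k - 1 else 0)

    k, lam, eps = 4, sp.Symbol('lambda'), -1
    T, g = jordan_block(k, lam), eps * skew_diag(k)

    # T is self-adjoint w.r.t. g iff g*T is symmetric, i.e. <Tx, y> = <x, Ty>
    assert (g * T).is_symmetric()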
Theorem 2.1 (Complex metric-Jordan canonical form [O'N83])
A real operator T on a pseudo-Euclidean space E^n_ν is self-adjoint iff there exists a (possibly complex) basis β such that

T|_β = J_{ε_1 k_1}(λ_1) ⊕ · · · ⊕ J_{ε_l k_l}(λ_l).

Furthermore there exists a canonical basis such that the unordered list {J_{ε_1 k_1}(λ_1), . . . , J_{ε_l k_l}(λ_l)} is uniquely determined by T and is an invariant of T under the action of the orthogonal group O(E^n_ν).

Remark 2.2 Since T is real, each Jordan block J_{εk}(λ) with λ ∈ C\R comes with a complex conjugate pair J_{εk}(λ̄). For complex eigenvalues, we can additionally assume that ε = 1.
✷
A key fact used to derive the above canonical form and one to keep in mind is that for any self-adjoint operator T , any non-degenerate T -invariant subspace has a T -invariant orthogonal complement.
Concircular tensors
L ∈ S^p(M) is called a concircular tensor, also called a C-tensor (CT), of valence p if there exists C ∈ S^{p−1}(M) (called the conformal factor) such that

∇_x L = C ⊙ x^♭

for all x ∈ X(M), where ⊙ denotes the symmetric product. Concircular tensors of arbitrary valence were originally defined in [Cra08], where they were called special conformal Killing tensors. This is because concircular tensors are conformal Killing tensors [Cra08]. When p = 1, L is called a concircular vector (CV). When p = 2, we will simply call L a concircular tensor since we will mainly be working with these objects. Furthermore one should note that the CTs form a real vector space and the symmetric product of CTs is again a CT. Sometimes we denote the space of concircular tensors of valence p by C^p(M) and the subspace of covariantly constant tensors by C^p_0(M). An OCT (also called an OC-tensor) is a concircular tensor which is also an orthogonal tensor. OC-tensors with simple eigenfunctions were studied extensively by Benenti, see [Ben92a; Ben04; Ben05]; thus in recognition of his contributions we refer to this special class of OC-tensors as Benenti tensors (also called L-tensors by Benenti).
OC-tensors have some useful properties. First, given a tensor L, let N_L be the Nijenhuis tensor (torsion) of L [GVY08]. We say that L is torsionless if its Nijenhuis tensor vanishes. Then if L is a concircular tensor, the following equations hold:

∇_{(k} L_{ij)} = g_{(ij} α_{k)}, where α is proportional to d(tr L),    and    N_L = 0.

Conversely, by Theorem 19.3 in [Ben05], an orthogonal tensor satisfying the above equations is a C-tensor. The first of the above equations tells us that a C-tensor is a conformal Killing tensor of trace-type. The second equation can be interpreted if we assume L is an OC-tensor.
Suppose now that L is an OC-tensor with eigenspaces (E_i)_{i=1}^k and corresponding eigenfunctions λ_1, ..., λ_k. Since an OC-tensor has Nijenhuis torsion zero, by Theorem 13.29 (Haantjes theorem) in [GVY08], the eigenspaces (E_i)_{i=1}^k are orthogonally integrable and each eigenfunction λ_i depends only on E_i. Furthermore the trace-type condition implies that the eigenfunction corresponding to a multidimensional eigenspace of L is a constant [RM14b].
Suppose D is a multidimensional eigenspace of a non-trivial OCT L. Denote by D^⊥ the distribution orthogonal to D. Then one can show the following (see [RM14b, Theorem 6.1] for example):

• There is a local product manifold B × F of Riemannian manifolds (B, g_B) and (F, g_F) such that {p} × F is an integral manifold of D for any p ∈ B and B × {q} is an integral manifold of D^⊥ for any q ∈ F.
• B × F equipped with the metric π_B^* g_B + ρ² π_F^* g_F for a specific function ρ : B → R⁺ is locally isometric to (M, g), where π_B (resp. π_F) is the canonical projection onto B (resp. F).
Such a product manifold is called a warped product and is denoted B ×_ρ F. We also say in this case that the warped product B ×_ρ F is adapted to the splitting (D^⊥, D). The manifold F is a spherical submanifold and B is a geodesic submanifold of M (see [Raj14c] and references therein). An important observation is that L restricted to B is an OCT; we will use this later to construct OCTs from Benenti tensors.
In general if L has multiple multidimensional eigenspaces, we will have to consider more general warped products.
More precisely, suppose M = M_0 × M_1 × · · · × M_k is a product manifold equipped with the metric g = π_0^* g_0 + Σ_{i=1}^k ρ_i² π_i^* g_i, where the ρ_i : M_0 → R⁺ are functions with ρ_0 ≡ 1 and π_i : M → M_i are the canonical projection maps. Additionally we assume either dim M_0 > 0 or k > 1. Then (M, g) is called a warped product and the metric g is called a warped product metric. If dim M_0 = 0 then (M, g) is called a pseudo-Riemannian product. The warped product is denoted by M_0 ×_{ρ_1} M_1 × · · · ×_{ρ_k} M_k. M_0 is called the geodesic factor of the warped product and the M_i for i > 0 are called spherical factors. See [Raj14c] and references therein for more on warped products.
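As a standard illustration (not specific to this article): Euclidean space minus the origin is a warped product with geodesic factor R⁺ and one spherical factor. Writing E^n \ {0} = R⁺ ×_r S^{n−1}, the flat metric takes the warped product form

ds² = dr² + r² dΩ²_{n−1},

where dΩ²_{n−1} is the round metric on the unit sphere S^{n−1}; here ρ_1(r) = r, the rays through the origin are the leaves of the geodesic factor, and the concentric spheres are the spherical leaves.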
The following class of OCTs is fundamental to the classification:
Definition 2.3 (Irreducible concircular tensors)
An OC-tensor with functionally independent eigenfunctions is referred to as an irreducible concircular tensor (ICT) or more succinctly an IC-tensor. To be precise, an IC-tensor has real eigenfunctions u_1, ..., u_k (counted without multiplicity) satisfying

du_1 ∧ · · · ∧ du_k ≠ 0.

Furthermore an OC-tensor which is not irreducible is called reducible. ✷

Remark 2.4 IC-tensors were the class of C-tensors mainly studied in [Cra03].
✷
Since we observed earlier that the eigenfunction associated with a multidimensional eigenspace of an OCT is constant, it follows that an ICT must have simple eigenfunctions; hence ICTs are Benenti tensors. The special property that ICTs have is that their eigenfunctions can be used as (local) coordinates for the separable web they induce [Cra03]. We will refer to these coordinates as the canonical coordinates induced by these tensors.
Away from singular points, locally, we can assume a reducible OC-tensor has eigenfunctions u 1 , ..., u k which are functionally independent and the rest of which are constants. Indeed, for the remainder of this article, this is what we will mean by a reducible OC-tensor. More generally we say a CT is reducible if it admits a nondegenerate eigenspace with constant eigenfunction. We will outline in Section 2.4 how we will break down the classification in terms of irreducible and reducible OCTs.
Properties of OCTs
We will now list some properties of OCTs that will be used later. The following proposition gives a necessary and sufficient (n.s.s) condition to determine when two OCTs (one of which is not covariantly constant) share the same eigenspaces.
Proposition 2.5
Suppose M is a connected manifold and L is an OCT on M which is not covariantly constant (around any neighborhood). Then L̃ is a CT sharing the same eigenspaces as L iff there exist a ∈ R \ {0} and b ∈ R such that L̃ = aL + bG. The proof of this, which is a straightforward calculation, will appear elsewhere.
The above proposition no longer holds if we relax the assumption that L is not covariantly constant. One can easily see why by considering any non-trivial covariantly constant symmetric tensor in Euclidean space. We now define an important notion for classifying KEM webs.
Definition 2.6 (Geometric Equivalence of CTs) We say two CTs L and L̃ are geometrically equivalent if there exist a ∈ R \ {0}, b ∈ R and T ∈ I(M) such that L̃ = aT_*L + bG. ✷

An immediate corollary of the above proposition is the following:

Corollary 2.7 (Geometric Equivalence of OCTs) Suppose M is a connected manifold. Suppose L and L̃ are OCTs with respective eigenspaces E = (E_1, . . . , E_k) and Ẽ = (Ẽ_1, . . . , Ẽ_k). Suppose further that E is not a Riemannian product net [RM14b]; equivalently, one of the CTs is not covariantly constant. Then E and Ẽ are related by T ∈ I(M), i.e. Ẽ_i = T_*E_{σ(i)} for each i (where σ is a permutation of {1, . . . , k}), iff L and L̃ are geometrically equivalent.
✷
The above corollary implies that the classification of isometrically inequivalent KEM webs can be reduced to the classification of geometrically inequivalent OCTs. For the proof of the following theorem, see [TCS05;Cra07].
Theorem 2.8 (The Vector Space of Concircular tensors [TCS05]) If n > 1, then the C-tensors of valence r ≤ 2 form a finite dimensional real vector space with maximal dimension equal to the dimension of the space of constant symmetric r-tensors in R^{n+1}. Furthermore the maximal dimension is achieved if and only if the space has constant curvature.
✷
The above theorem implies the following:

Corollary 2.9 (Concircular tensors in spaces of constant curvature) Suppose M^n is a space of constant curvature with n > 1 and let r ≤ 2. Let β = {v_1, . . . , v_{n+1}} be a basis for the space of concircular vectors; then a given C-tensor of valence r can be written uniquely as a linear combination of r-fold symmetric products of the vectors in β.
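As a sanity check on the dimension count (our arithmetic, not part of the original statement): for r = 1 the space of constant vectors in R^{n+1} has dimension n + 1, matching the general CV v = mr + w in E^n_ν, which has n + 1 free parameters. For r = 2 the space of constant symmetric 2-tensors in R^{n+1} has dimension (n+1)(n+2)/2, and the general CT of Proposition 3.2 below has n(n+1)/2 parameters from A, n from w and 1 from m; indeed n(n+1)/2 + n + 1 = (n+1)(n+2)/2.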
Summary of Results
We first give an overview of the classification. The classification breaks down into three parts: obtaining canonical forms for C-tensors modulo the action of the isometry group (Sections 3 and 4), classifying the webs described by IC-tensors (Section 5) and obtaining warped product decompositions adapted to reducible OCTs (Section 6).
The webs formed by IC-tensors are the basic building blocks of all separable webs. Section 5 is devoted to obtaining information about these webs from the corresponding IC-tensors. In that section we obtain the transformation from the canonical coordinates (u i ) induced by these tensors to Cartesian coordinates (x i ) and we obtain the metric in canonical coordinates. This is done by first calculating the characteristic polynomial of all CTs in spaces of constant curvature in a Cartesian coordinate system. In examples, we will also show how to obtain the coordinate domains for coordinate systems induced by IC-tensors.
To obtain all orthogonal separable coordinates in spaces of constant curvature, we also have to consider reducible OCTs. Let L be a non-trivial reducible OCT and suppose ψ : N_0 ×_{ρ_1} N_1 × · · · ×_{ρ_k} N_k → M is a local warped product decomposition of M adapted to the eigenspaces of L such that L_0 := L|_{N_0} is an ICT. Let (x_0) = (u_1, . . . , u_{n_0}) be the canonical coordinates induced by L_0 on some open subset of N_0. For i > 0 suppose (x_i) = (x_i^1, . . . , x_i^{n_i}) are separable coordinates for N_i; then it was shown in [RM14b, proposition 6.8] that the coordinates ψ(x_0, x_1, . . . , x_k) are separable coordinates for M. To construct the separable coordinates (x_i) on N_i where i > 0, one would apply this procedure again on N_i equipped with the induced metric. It follows from [RM14a, Section 1.2] and references therein that all orthogonal separable coordinates for spaces of constant curvature arise this way. Hence a remaining problem is to develop a method to construct warped product decompositions which decompose a given reducible OCT as above; this is done in Section 6. Together with the results of Section 5, this gives a recursive procedure to construct the orthogonal separable coordinates of these spaces.
In Section 7 we will show how to apply the theory developed in this article to solve motivating problems. First, in Section 7.1 we will show how to enumerate the isometrically inequivalent separable coordinates in a given space of constant curvature. Then in Section 7.2 we will show how to construct separable coordinate systems by way of examples. Finally, in Section 7.3 we will show how to explicitly execute the BEKM separation algorithm in general. We also give the details of executing the BEKM separation algorithm for the Calogero-Moser system.
The classification generally breaks down into one for pseudo-Euclidean space E n ν then one for its spherical submanifolds E n ν (κ) (which usually reduces to a similar problem in E n ν ). We give more details in the following subsections.
pseudo-Euclidean space
First we define the dilatational vector field, r, to be the vector field given in Cartesian coordinates (x^i) by r = x^i ∂_{x^i}. The general concircular contravariant tensor in E^n_ν is given as follows (see Proposition 3.2):

L = A + m r ⊗ r^♭ + w ⊗ r^♭ + r ⊗ w^♭,

where A ∈ C²_0(E^n_ν), w ∈ C¹_0(E^n_ν) and m ∈ C⁰_0(E^n_ν). For k ≥ 0, define constants ω_k as follows:

ω_0 = m,    ω_k = ⟨w, A^{k−1}w⟩ for k ≥ 1.    (2.12)

The above constants aren't necessarily invariant under isometries, but invariants can be defined from them.

Definition 2.10 Suppose L is a CT in E^n_ν as defined above. Then we define the index of L to be the first integer k ≥ 0 for which ω_k ≠ 0; L is said to be non-degenerate if such an integer exists. Furthermore if L is non-degenerate, it has an associated sign (characteristic) ε := sgn(ω_k).

The following theorem, which is proven in Section 3, summarizes our results on the canonical forms of concircular tensors; it classifies C-tensors into five disjoint classes.
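The index and sign are easy to compute mechanically. The following numpy sketch does so under the reconstruction of ω_k given above (ω_0 = m, ω_k = ⟨w, A^{k−1}w⟩); the helper name and the tolerance are our own choices:

    import numpy as np

    def index_and_sign(A, w, m, g, tol=1e-12):
        # omega_0 = m; omega_k = <w, A^{k-1} w>, with <u, v> = u^T g v
        if abs(m) > tol:
            return 0, int(np.sign(m))
        v = w.astype(float)
        for k in range(1, A.shape[0] + 2):
            omega = float(w @ g @ v)          # <w, A^{k-1} w>
            if abs(omega) > tol:
                return k, int(np.sign(omega))
            v = A @ v                         # advance to A^k w
        return None, None                     # degenerate CT

    # Example in E^3_1: a null axial vector w gives index k >= 2
    g = np.diag([1.0, 1.0, -1.0])
    A = np.diag([1.0, 2.0, 3.0])              # g-self-adjoint since both are diagonal
    w = np.array([0.0, 1.0, 1.0])             # <w, w> = 1 - 1 = 0, so omega_1 = 0
    print(index_and_sign(A, w, m=0.0, g=g))   # -> (2, -1)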
Theorem 2.11 (Canonical forms for CTs in E^n_ν) Let L̃ = Ã + m r ⊗ r^♭ + w ⊗ r^♭ + r ⊗ w^♭ be a CT in E^n_ν. Let k be the index and ε the sign of L̃ if L̃ is non-degenerate. These quantities are geometric invariants of L̃. Furthermore, after a possible change of origin and after changing to a geometrically equivalent CT L = aL̃ for some a ∈ R \ {0}, L̃ admits precisely one of the following canonical forms.
Central: If k = 0:

L = A + r ⊗ r^♭.

Non-null axial: If k = 1, i.e. m = 0 and ⟨w, w⟩ ≠ 0: there exists a vector e_1 ∈ span{w} with Ae_1 = 0 and ⟨e_1, e_1⟩ = ε such that L has the form

L = A + e_1 ⊗ r^♭ + r ⊗ e_1^♭.

Null axial: If k ≥ 2, hence m = 0 and ⟨w, w⟩ = 0: there exists a skew-normal sequence β = {e_1, ..., e_k} with ⟨e_1, e_k⟩ = ε, where e_1 ∈ span{w} and span β is A-invariant, such that L has the form

L = A + e_1 ⊗ r^♭ + r ⊗ e_1^♭.

The degenerate null axial concircular tensors will be of no concern to us. In Euclidean space they don't occur, and it will be proven later (see Section 3.3.2) that in Minkowski space they are never orthogonal concircular tensors.
Remark 2.13
The precise classification for Euclidean and Minkowski space can be directly inferred from the above theorem by imposing the signature of the metric. The classification for Euclidean space is clear. In Minkowski space, k ≤ 3, and when k = 3 the sign of the axial CT must be positive (see [Raj14a, lemma 2.1]). ✷

Remark 2.14 When k = 0 and k = 1 respectively, the translation vector v for the isometry T : r → r + v which sends L̃ to canonical form can be written down explicitly; for the general case, see Eq. (3.33).
✷
One can easily deduce that in Euclidean or Minkowski space, any covariantly nonconstant OCT is non-degenerate. Hence non-degenerate CTs are the main interest of this article. Some notation will be useful. The matrix A will be called the parameter matrix and the vector w the axial vector of the CT. When k ≥ 1 in the above theorem, we will refer to the CT as an axial concircular tensor.
Suppose L is a non-degenerate CT in the canonical form given by Theorem 2.11. We denote by D the A-invariant subspace spanned by w, Aw, . . . . This subspace is either zero (if w = 0) or metrically non-degenerate. We will let A_c := A|_{D^⊥}, A_d := A|_D, and denote the central CT in D^⊥ with parameter matrix A_c by L_c. Furthermore we define the functions B(z) := det(zI − A) and B_c(z) := det(zI − A_c), where the second determinant is evaluated in D^⊥.
The canonical forms for non-degenerate CTs can be enumerated by choosing a nondegenerate CT from Theorem 2.11 then choosing a metric-Jordan canonical form for the pair (A| D ⊥ , g| D ⊥ ). The proofs of these canonical forms, which are given in Section 3, can be omitted on first reading. Once these canonical forms are obtained, in Sections 5.1 and 5.2 we will calculate the characteristic polynomial for non-degenerate CTs in E n ν . Using this, for ICTs we can calculate the transformation from their canonical coordinates to Cartesian coordinates and the metric in canonical coordinates. Then in Section 6.1 we will show how to obtain the warped product decompositions induced by reducible OCTs.
Spherical submanifolds of pseudo-Euclidean space
In this section we assume n ≥ 3. Denote by R the orthogonal projection onto the spherical distribution r^⊥, i.e.

R := I − (r ⊗ r^♭)/r².
Then the general CT in E^n_ν(κ) is obtained by restricting A ∈ C²_0(E^n_ν) to E^n_ν(κ). It is given as follows in E^n_ν in contravariant form (see Proposition 4.2):

L = R A R^*.

The matrix A is called the parameter matrix of the CT. We denote by L_c the central CT in E^n_ν with parameter matrix A. Note that L = R L_c R^*. We will see later that several questions concerning L can be related to similar ones concerning L_c.
The canonical forms for these CTs can be enumerated by choosing a metric-Jordan canonical form for the pair (A, g). The proofs of these canonical forms, which are given in Section 4, can be omitted on first reading. Once these canonical forms are obtained, in Section 5.3 we will calculate the characteristic polynomial for CTs in E n ν (κ) by making use of the solution to the similar problem in E n ν . Using this, for ICTs we can calculate the transformation from their canonical coordinates to Cartesian coordinates and the metric in canonical coordinates. Then in Section 6.2 we will show how to obtain the warped product decompositions induced by reducible OCTs by making use of the solution to the similar problem in E n ν .
3 Canonical forms for Concircular tensors in pseudo-Euclidean space
Standard Model of pseudo-Euclidean space
In this section we calculate the CVs and CTs for E^n_ν in its standard vector space model. These results are well known [Cra07; Ben05], but we include them here for completeness.
First we define the dilatational vector field, r, to be the vector field satisfying r_p = p ∈ T_p E^n_ν for any p ∈ E^n_ν. In Cartesian coordinates (x^i), we have r = x^i ∂_{x^i}. In the following proposition we calculate the general CV in E^n_ν as done originally in [Cra07].
Proposition 3.1 (Concircular vectors in E^n_ν) v is a CV in E^n_ν iff there exist a constant φ ∈ R and a constant vector c such that v = φr + c, where r is the dilatational vector field.

Proof The CV equation ∂_j v^i = φ δ^i_j can be easily solved by observing that ∂_k ∂_j v^i = (∂_k φ)δ^i_j is symmetric in j and k, so that (∂_k φ)δ^i_j = (∂_j φ)δ^i_k. Thus taking i = j ≠ k, we find that ∂φ/∂x^k = 0. Thus φ ∈ R, and we find that v must have the form given above.

Then using Corollary 2.9 we can deduce the general CT in E^n_ν:

Proposition 3.2 (Concircular tensors in E^n_ν) L is a concircular 2-tensor in E^n_ν where n > 1 iff there exist A ∈ C²_0(E^n_ν), w ∈ C¹_0(E^n_ν) and m ∈ C⁰_0(E^n_ν) such that

L = A + m r ⊗ r^♭ + w ⊗ r^♭ + r ⊗ w^♭,

where r is the dilatational vector field. The tensors A, w and m are uniquely determined by L.
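As a quick symbolic check of Proposition 3.2 (our sketch; the example metric and parameter values are arbitrary choices), the following sympy code verifies that L = A + m r ⊗ r^♭ + w ⊗ r^♭ + r ⊗ w^♭ satisfies a concircular equation of the form ∇_z L = C ⊗ z^♭ + z ⊗ C^♭ with conformal factor C = mr + w; in flat Cartesian coordinates the covariant derivative reduces to entrywise partial differentiation:

    import sympy as sp

    n = 3
    xs = list(sp.symbols('x1:4'))
    g = sp.diag(1, 1, -1)                  # metric of E^3_1 (an arbitrary choice)
    r = sp.Matrix(xs)                      # dilatational vector field r = x^i d_i
    flat = lambda v: (g * v).T             # index lowering v -> v^flat (a row vector)

    A = sp.Matrix([[2, 1, 0],
                   [1, 0, 0],
                   [0, 0, 5]])             # g-self-adjoint: g*A is symmetric
    w = sp.Matrix([1, 2, 3])
    m = sp.Integer(4)

    L = A + m * r * flat(r) + w * flat(r) + r * flat(w)

    z = sp.Matrix(sp.symbols('z1:4'))      # an arbitrary constant direction
    DL = sum((L.diff(xs[k]) * z[k] for k in range(n)), sp.zeros(n, n))

    C = m * r + w                          # the conformal factor
    assert sp.expand(DL - (C * flat(z) + z * flat(C))) == sp.zeros(n, n)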
Parabolic Model of pseudo-Euclidean space
In order to obtain canonical forms for CTs it will be useful to work with a different model of E^n_ν. We will refer to it as the parabolic model of E^n_ν, to be introduced shortly. The main reason for working with this model is that it is a spherical submanifold of E^{n+2}_{ν+1}.

Proposition 3.3 (Isometry group of P^n_ν) The isometry group of P^n_ν can be described explicitly. Furthermore, suppose we fix an isometry with E^n_ν via Eq. (3.6) by fixing a subspace V ⊂ a^⊥ such that V ≃ E^n_ν; then for p ∈ V and p̃ ∈ V^⊥ we have a corresponding Lie group isomorphism.

Proof See [Raj14c] or [Nol96, lemma 6], which covers the case when E^n_ν is Euclidean.
Remark 3.4
If ψ : E^n_ν → P^n_ν is the standard embedding from Eq. (3.6), then ψ is equivariant. We also have the following:

Lemma 3.5 For p̃ ∈ V and X ∈ T_p̃ V, and for Y ∈ T_{ψ(p̃)} P^n_ν, the inverse of the above map can be written down explicitly. The first statement is clear: first observe that P_b ψ_* X = X.

Furthermore we denote by P_1 the orthogonal projector onto T P^n_ν, defined for r ∈ E^{n+2}_{ν+1}. We will now calculate the CT in E^{n+2}_{ν+1} which restricts to the most general CT in P^n_ν. Due to Corollary 2.9 we only need to examine how CVs restrict. By Proposition 3.1 and Theorem 2.8, the general CV in E^{n+2}_{ν+1} can be written in terms of constants c_i ∈ R, a basis a_1, . . . , a_n for V, and the dilatational vector field r in E^{n+2}_{ν+1}; below, x denotes the dilatational vector field in V. Then using Corollary 2.9 we have proven the following:

Proposition 3.6 Suppose P^n_ν is identified with E^n_ν by the embedding in Eq. (3.6). Denote V = span{a, b}^⊥, and let Ã ∈ C²_0(V), w ∈ C¹_0(V), and m ∈ C⁰_0(V). Then the restriction of A to V via the embedding in Eq. (3.6), denoted L, is the CT on V with parameter tensors Ã, w and m as in Proposition 3.2. Note that b is an eigenvector of A_b with eigenvalue 0.
The above equation shows that A and A_b induce the same CT on P^n_ν. From the calculations preceding Eq. (3.15) we see that {a_1 − ⟨a_1, r⟩a, . . . , a_n − ⟨a_n, r⟩a, b − ⟨b, r⟩a − r} is a basis for the space of CVs on P^n_ν. Thus it follows from Corollary 2.9 and the preceding calculations that A, B ∈ C²_0(E^{n+2}_{ν+1}) induce the same CT on P^n_ν iff for some b ∈ P^n_ν we have A_b = B_b.
Existence of Canonical forms
In this section A ∈ C²_0(E^{n+2}_{ν+1}). We are interested in finding canonical forms for the CT on P^n_ν induced by this tensor. As was shown in the previous section, the induced CT depends only on A_b for some b ∈ P^n_ν. Hence our goal will be to find b̃ ∈ P^n_ν such that A_b̃ is in a canonical form. Since the isometry with E^n_ν (see Eq. (3.6)) is fixed by a vector b ∈ P^n_ν, we will then choose T ∈ I(P^n_ν) such that T b̃ = b. This will transform A_b̃ to (T_*A)_b, which can be restricted to E^n_ν using Proposition 3.6 to obtain a canonical form for the original CT in E^n_ν. To obtain the canonical choice of b ∈ P^n_ν, first note that A_b is completely determined by the fact that A_b b = 0. Secondly, note that since isometries of P^n_ν fix a, it follows that for each l ≥ 0, ⟨a, A^l a⟩ is an invariant of A. Although these are in general not invariants of the CT induced by A, they will play a significant role in the classification. Thirdly, since a cannot be transformed by isometries, we will attempt to choose b ∈ P^n_ν such that a is a basis vector in a metric-Jordan canonical basis for A_b. Since ⟨a, b⟩ = 1, one can deduce (using the metric-Jordan canonical form [Raj14a]) that in the simplest cases a, b lie in the same eigenspace of A_b, or a generates a Jordan cycle ending in a constant multiple of b. These observations motivate our search for b.
For the following calculations, b ∈ P^n_ν is arbitrary and we let Ã := A_b. The following lemma will get us started; it shows that the constants ⟨a, A^l a⟩ are invariants of the CT on P^n_ν induced by A.

Proof We prove Eq. (3.20) by induction. It clearly holds for l = 0, 1. Now assume it holds for l − 1; then the first equation follows by induction. Suppose 0 ≤ l < k; then it follows by induction that ⟨a, Ã^l a⟩ = 0, and thus ⟨a, Ã^k a⟩ = ⟨a, A^k a⟩.

Now define ω_i from these invariants. We will also need the following lemma to calculate ω_i in E^n_ν.

Lemma 3.8 Suppose A has the form given by Eq. (3.15); then ω_i is given by Eq. (2.12).
✷
Using the above lemma we can also apply the definitions of index, sign and degeneracy of CTs in E n ν from Definition 2.10 to CTs in P n ν .
Non-degenerate cases
Now we consider the case where there exists a least k ∈ N such that ⟨a, A^k a⟩ ≠ 0. This will be the most important case for our interests. Motivated by special cases and the metric-Jordan canonical form of Ã discussed earlier, we will try to find b such that a, Ãa, . . . , Ã^k a forms a skew-normal sequence with ⟨a, A^k a⟩ b = Ã^k a. The following lemma describes b provided it exists: Suppose there is k ∈ N such that ⟨a, A^l a⟩ = 0 for 0 ≤ l < k and ⟨a, A^k a⟩ ≠ 0. Assume there exists a b such that ⟨a, A^k a⟩ b = Ã^k a and ⟨Ã^j a, Ã^k a⟩ = 0 for all 1 ≤ j ≤ k. Then b must satisfy a system of equations, one for each l ∈ {0, . . . , k}. By imposing the condition ⟨Ã^l a, Ã^k a⟩ = 0, we obtain

⟨Ã^l a, A^k a⟩ − ⟨b, A^l a⟩⟨a, A^k a⟩ = 0. (3.26)

Now expanding Ã^l a using Eq. (3.20), equating the resulting equation with Eq. (3.26) and solving for ⟨b, A^l a⟩ proves the result.

Now we will use the above lemma and Eq. (3.20) to construct a vector b such that Ã is in canonical form. First define a sequence b_1, . . . , b_k of scalars recursively, and then define vectors s_0, s_1, . . . , s_k accordingly. The following lemma shows that this choice does work:

Proposition 3.10 The vectors s_0, s_1, . . . , s_k form a skew-normal sequence with ⟨s_0, s_k⟩ = ⟨a, A^k a⟩. If Ã^l a are defined as in Eq. (3.20) with the above vector b, then Ã^l a = s_l.
✷
Proof The fact that s_0, s_1, . . . , s_k form a skew-normal sequence follows verbatim from Lemma 3.7 and the preceding arguments by replacing s_l → Ã^l a and b_l → ⟨b, A^l a⟩.
Suppose that s_0, s_1, . . . , s_k form a skew-normal sequence where ⟨s_0, s_k⟩ = ⟨a, A^k a⟩. By definition of s_l, it follows that each A^l a can be expanded in this basis. Then it follows by the definitions of s_l and Ã^l a in Eq. (3.20) that Ã^l a = s_l. Now suppose A is in the canonical form stated above. Let V = span{a, b}^⊥ where b was chosen as above.
We now state more precisely what we mean by "the" canonical form:

Definition 3.11 (Iso-canonical form) Suppose L is a CT in P^n_ν with parameter matrix A as above and index k′ := k − 1 ≥ 0, i.e. L is non-degenerate. The iso-canonical form for L is the metric-Jordan canonical form for (A|_{H^⊥}, g|_{H^⊥}) together with the index k′ and the constant ⟨a, A^{k′+1} a⟩ ∈ R \ {0}. ✷
We will prove later on that this canonical form is uniquely determined by L, but for now we will examine it further. Let Ã := A|_{H^⊥}. If ω_0 ≠ 0 then it follows that w = 0, and it follows by Proposition 3.6 that the induced CT on V is

Ã + ω_0 r ⊙ r.

Thus after dividing by ω_0 we get the central CT from Theorem 2.11. If ω_0 = 0, one can check that w, Ãw, . . . , Ã^{k−2}w ∈ V form a skew-normal sequence with ⟨w, Ã^{k−2}w⟩ = ω_{k−1}. It follows by Proposition 3.6 that the induced CT on V is a constant multiple of a (null) axial CT with the same index and sign from Theorem 2.11 (after an appropriate choice of basis).
Transformation to Canonical form:
We now denote by b̃ the vector b obtained above which puts A into a canonical form. The vector b ∈ P^n_ν is fixed by an isometry with E^n_ν (see Eq. (3.6)); furthermore we let V = span{a, b}^⊥. We can assume A has the form given by Eq. (3.15). The last problem is to choose T ∈ I(P^n_ν) such that T b̃ = b. We can obtain a unique transformation if we require T to induce a translation in V. Indeed, by Eq. (3.9) the most general transformation of this type is parameterized by an arbitrary v ∈ V, and there is a unique transformation of this form with the required property. We now proceed to calculate v.
Since ⟨b, A^l a⟩ = 0 for any l > 0, the last equation follows from the fact that c_k = 1. We have calculated the first four coefficients (which are sufficient for Euclidean and Minkowski space), and in particular the cases k = 1 and k = 2. Finally, we note that by equivariance of the map ψ (see the remark after Proposition 3.3), one only needs to apply the isometry T : V → V given by r → r + v to send the induced CT in V into canonical form. Hence in practice one does not need to work in P^n_ν.
Degenerate cases
We now consider the case where ⟨a, A^l a⟩ = 0 for every l ∈ N. First note that the dimension of the subspace spanned by a, Aa, . . . must be at most n − 1 by non-degeneracy of the scalar product. So there exists a least l ≤ n − 1 such that {a, Aa, . . . , A^l a} ⊆ a^⊥ is a linearly independent set but A^{l+1}a ∈ span{a, Aa, . . . , A^l a}. Thus it follows that A^m a ∈ span{a, Aa, . . . , A^l a} for all m > l. Also note by Lemma 3.7 that these properties are invariant under the transformation A → A_b.
Case 1 (l = 0): In this case a is an eigenvector of A. After transforming A to A_b (if necessary), we can assume that Aa = 0. Also Ab = 0; then since ⟨a, b⟩ = 1 it follows that span{a, b} is a non-degenerate A-invariant subspace. Hence after identifying E^n_ν ≃ span{a, b}^⊥, it follows by Proposition 3.6 that A restricts to a Cartesian CT on E^n_ν.
Case 2 (l ≥ 1): Fix b ∈ P^n_ν, let V = span{a, b}^⊥ and assume Ab = 0. Note that for any j ∈ N, ⟨a, A^j a⟩ = 0. In particular, when l = 1 we see that w is a lightlike eigenvector of Ã. Then by Proposition 3.6, A induces a CT L in E^n_ν for which w is a lightlike eigenvector with non-constant eigenfunction. Thus L is never an OC-tensor, because lightlike eigenvectors of OC-tensors must have constant eigenfunctions.
If l > 1, we see that Aa, A²a ∈ V are linearly independent orthogonal lightlike vectors. Thus this case cannot occur in the Euclidean or Minkowski case, so we ignore it.
Uniqueness of Canonical Forms
In this section we will show that the canonical forms obtained in the previous section are uniquely determined by a given CT in P n ν . As a consequence of this we will show that the different canonical forms divide the CTs into isometrically inequivalent classes. We will be working with the case when the CT is non-degenerate as the other cases are either straightforward or uninteresting.
Suppose L and M are CTs in P^n_ν with parameter matrices A and B respectively. We observed at the end of Section 3.2 that L = M iff for one (hence all) b ∈ P^n_ν we have A_b = B_b. Thus it follows that L = T_*M for some T ∈ I(P^n_ν) iff for one (hence all) b ∈ P^n_ν the corresponding transformed parameter matrices agree.
forms an adapted cycle of generalized eigenvectors for A 0 with eigenvalue 0. In this case a, A k 0 a ∈ R \ {0}. Let b 1 be the vector admitted by A 1 and let A 3 := (A 1 ) b 1 = (A 0 ) b 1 . Now by Proposition 3.10 and Lemma 3.7, b 1 satisfies: Since A 3 is in canonical form, it follows for each l ∈ {1, · · · , k}, b 1 , A l 0 a satisfies Eq. (3.25). Then since A 0 is in canonical form, we have b 1 , A l 0 a = 0 for l ∈ {1, · · · , k}. Thus Eq. (3.42) shows that In the following theorem we will show that the iso-canonical form defined in Definition 3.11 for non-degenerate CTs is uniquely determined by the CT.
Theorem 3.13 (Isometric Equivalence of CTs in E n ν ) Suppose L and M are CTs in P n ν such that M has an index k ≥ 0. Then L = T * M for some T ∈ I(P n ν ) iff L and M have the same iso-canonical form.
✷
Proof Assume that L = T_*M for some T ∈ I(P^n_ν). By Lemma 3.7 it follows that the index of L is also k. Let b_2 be the vector which puts B in canonical form given by Proposition 3.10. Then Tb_2 sends T_*B to canonical form. By Lemma 3.12, Tb_2 is the vector obtained from Proposition 3.10 which puts A in canonical form. Let b̃ := Tb_2; then B_{b_2} is isometric to A_b̃. It then follows from the uniqueness of the metric-Jordan canonical form [Raj14a] that A_b̃ and B_{b_2} have the same iso-canonical form.
Conversely suppose L and M have the same iso-canonical form. Then A (resp. B) admits a vector b_1 ∈ P^n_ν (resp. b_2 ∈ P^n_ν) such that A_{b_1} and B_{b_2} have the same iso-canonical form. Then one can easily construct T ∈ I(P^n_ν) which transforms a metric-Jordan canonical basis of one to the other; note that here we use the fact that ⟨a, B^k a⟩ = ⟨a, A^k a⟩. Thus L = T_*M, which proves the converse.
Geo-Canonical forms

We now give a geo-canonical form for non-degenerate CTs in P^n_ν. Suppose L is such a CT with index k and parameter matrix A in iso-canonical form. Then for c ∈ R, cL has parameter matrix cA and

⟨a, (cA)^{k+1} a⟩ = c^{k+1} ⟨a, A^{k+1} a⟩.

Hence after an appropriate transformation L → cL, we can assume ⟨a, A^{k+1} a⟩ = ±1. Note that when k is odd, c is only determined up to sign; hence there are two possible geo-canonical forms in this case. Now, if L is an axial CT, we can fix d ∈ R by requiring that (A + dI)^k a ∈ span{a, b}. This condition is satisfied in the iso-canonical form. If L is central, we choose d such that the real part of the smallest eigenvalue (see Definition A.1) of A|_{H^⊥} is zero.
4 Canonical forms for Concircular tensors in Spherical submanifolds of pseudo-Euclidean space

4.1 Obtaining concircular tensors in umbilical submanifolds by restriction

Let M̃ be a pseudo-Riemannian submanifold of M with Levi-Civita connections ∇̃ and ∇ respectively. We say M̃ is an umbilical submanifold if there exists a normal vector field H (i.e. H is orthogonal to T M̃), called the mean curvature normal of M̃, such that

∇_x y = ∇̃_x y + ⟨x, y⟩H

for all x, y ∈ X(M̃).
Proposition 4.1 (Restriction of CTs to umbilical submanifolds [Cra03])
Suppose M̃ is an umbilical submanifold of M with mean curvature normal H, and L is a concircular r-tensor on M with conformal factor C in covariant form. Then the pullback of L to M̃ is a concircular r-tensor, with conformal factor equal to the pullback of the corresponding factor on M.

Since spherical submanifolds are umbilical submanifolds and E^n_ν(κ) is a spherical submanifold (see for example [Raj14c]), the above proposition allows us to obtain CTs on E^n_ν(κ). We will do this in the following section.
Concircular tensors in Spherical submanifolds of pseudo-Euclidean space
In this section we study the CTs in E^n_ν(κ) via the canonical embedding in E^n_ν. Let r denote the dilatational vector field; we work on the subset of E^n_ν for which r² ≠ 0. Let E := r^⊥ and let L be a CT on M. To obtain the CT on E^n_ν(1/r²) (which is an integral manifold of E), we first let

R := I − (r ⊗ r^♭)/r²,

where I is the identity endomorphism; then L_E := L|_E is given by L_E = R L R^*. Now we will calculate the general CT on E^n_ν(κ).

Proposition 4.2 (Concircular tensors in E^n_ν(κ)) L is a concircular tensor in E^n_ν(1/r²) where n > 2 iff there exists A ∈ C²_0(E^n_ν) such that L has the following form embedded in E^n_ν:

L = R A R^*.
A is uniquely determined by L. Furthermore L is covariantly constant iff it is a constant multiple of the metric on E^n_ν(1/r²).

Proof Choose an orthonormal basis a_1, . . . , a_n for E^n_ν. Then the tangential projections a_i − (⟨a_i, r⟩/r²) r are CVs on E^n_ν(1/r²). Furthermore one can check that these vectors are linearly independent. Thus by Corollary 2.9 every CT can be written uniquely as a linear combination of symmetric products of the above CVs, and it follows that we can choose a unique A ∈ C²_0(E^n_ν) inducing the given CT. In E^n_ν, A_E := R A R^* is given as follows:

A_E = A − (Ar ⊗ r + r ⊗ Ar)/r² + (⟨Ar, r⟩/r⁴) r ⊗ r.

Conversely by Corollary 2.9 it follows that for any A ∈ C²_0(E^n_ν), A_E corresponds to a CT on E^n_ν(1/r²). The last statement follows from Proposition 4.1.
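The projection formula is easy to verify symbolically. The following sympy sketch (our own check; Euclidean signature is chosen for simplicity) confirms that R A R^T agrees with the expansion above and annihilates r, so that A_E is tangent to the spheres r² = const:

    import sympy as sp

    n = 3
    x = sp.Matrix(sp.symbols('x1:4'))
    g = sp.eye(n)                            # Euclidean metric, for simplicity
    r2 = (x.T * g * x)[0]
    R = sp.eye(n) - (x * x.T * g) / r2       # projection onto the distribution r-perp

    A = sp.Matrix([[1, 2, 0],
                   [2, 3, 0],
                   [0, 0, 7]])               # symmetric, hence g-self-adjoint here

    AE = R * A * R.T
    expansion = (A - (A * x * x.T + x * x.T * A) / r2
                 + (x.T * A * x)[0] / r2**2 * (x * x.T))

    assert sp.simplify(AE - expansion) == sp.zeros(n, n)
    assert sp.simplify(AE * g * x) == sp.zeros(n, 1)   # A_E annihilates r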
Remark 4.3
The general CT in E^n_ν(κ) has been obtained in [TCS05, Section 3] with respect to certain canonical coordinates for these spaces. They use a different method for obtaining these tensors based on the theory developed in their article. ✷

For the remainder of this article we will always work with CTs in E^n_ν(κ) via the tensor L defined in E^n_ν in the above proposition.

Definition 4.4 Suppose L is a CT in E^n_ν(κ) with parameter matrix A ∈ S²(E^n_ν) as above. The iso-canonical form for L is the metric-Jordan canonical form for (A, g).
✷
Except for hyperbolic space H^{n−1}_0 and the space anti-isomorphic to it, S^{n−1}_{n−1}, uniqueness of the iso-canonical form follows from the uniqueness of the metric-Jordan canonical form. In the exceptional cases, minor modifications of the proof of the uniqueness of the metric-Jordan canonical form will show that it holds true with I(H^{n−1}_0) in place of O(E^n_1); a similar argument goes for S^{n−1}_{n−1}. Hence we have proven the following:

Theorem 4.5 (Isometric Equivalence of CTs in E^n_ν(κ)) Suppose L and M are CTs in E^n_ν(κ). Then L = T_*M for some T ∈ I(E^n_ν(κ)) iff L and M have the same iso-canonical form.

Geo-Canonical forms

By definition, the restriction of G to E^n_ν(κ) is the metric on E^n_ν(κ). Hence we see that if a ∈ R \ {0}, b ∈ R and A ∈ C²_0(E^n_ν), then A and aA + bG induce geometrically equivalent CTs on E^n_ν(κ) (see Proposition 2.5). We now show how to obtain the geo-canonical forms. Suppose λ_1, . . . , λ_k ∈ C are the distinct eigenvalues of A, and let |·| denote the modulus of a complex number; from the λ_i one defines a quantity which is invariant under geometric equivalence. By making the transformation λ_i → λ_i/|a|, we can assume |a| = 1. Furthermore we choose b ∈ R such that the real part of the smallest eigenvalue (see Definition A.1) of A is zero. Since it is not possible to specify the sign of a, we conclude that there are (in general) two geo-canonical forms for CTs in E^n_ν(κ). Although in practice one can often use more information from the metric-Jordan canonical form of A to obtain a single geo-canonical form, as the following example shows:

Example 4.6 (Separable coordinates in hyperbolic space) Consider H^{n−1} = E^n_1(−1) with the standard metric. For λ_1 < · · · < λ_n ∈ R define two diagonal operators A_1 and A_2 with these eigenvalues, the timelike eigenvalue of the first being the smallest and that of the second the largest. These two operators are isometrically inequivalent since they have different metric-Jordan canonical forms. However −A_2 = A_1, hence the CTs on H^{n−1} induced by these operators are geometrically equivalent. So, in H^{n−1} we can work with inequivalent CTs (under change of sign) by working with those whose parameter matrix has a timelike eigenvalue which is less than or equal to ⌊n/2⌋ spacelike eigenvalues. Thus the set of eigenvalues λ_1 < · · · < λ_n ∈ R induces ⌈n/2⌉ inequivalent separable coordinates in H^{n−1}, in contrast with the n inequivalent separable coordinates in E^n_1 induced by central CTs.
Properties of Concircular tensors in Spaces of Constant Curvature
In this section we will assume that each CT in E^n_ν or E^n_ν(κ) is in a canonical form listed in Section 2.4. Furthermore we will assume that the Cartesian coordinates are chosen such that the parameter matrix A_c is in the complex metric-Jordan canonical form stated in Theorem 2.1 (see [Raj14a] for details). We now describe how to transform to real Cartesian coordinates such that A_c obtains the real metric-Jordan canonical form (see [Raj14a]). Suppose λ ∈ C \ R and (A, g) is given in the complex canonical form. Define real coordinates (s_1, t_1, . . . , s_k, t_k) implicitly via the complex coordinates; these coordinates were chosen so that the pair (A, g) is in the real metric-Jordan canonical form in the real coordinates (s_1, t_1, . . . , s_k, t_k) after applying the appropriate tensor transformation law.
In Cartesian coordinates (x i ), we will use the convention that x i := g ij x j ; this is the only case where the Einstein summation convention is used in this section.
We now list some generic facts about tensors and C-tensors that will be used. We first present some facts about (1,1)-tensors. In the following proposition, we use the notation C^p to denote the differentiability class of a geometric object, where p ∈ N ∪ {∞, ω}, and C^ω denotes the analytic class.

Proposition 5.1 Suppose T is a (1,1)-tensor of class C^p and fix q ∈ M. Let λ_0 be a simple eigenvalue of T_q. Then there exists a neighborhood of q in which T has a simple eigenfunction λ with a corresponding eigenvector field, both of class C^p, and λ(q) = λ_0.
If T_q has simple eigenvalues, then there exists a neighborhood of q in which T has simple eigenfunctions of class C^p, and T admits a basis of eigenvector fields of class C^p. ✷

Proof The proof is an application of the implicit function theorem (see, for example, [Die08, Theorems 10.2.1-10.2.4]). Details can be found in [Kaz98]; see also [Lax07].
The above proposition shows that Benenti tensors necessarily locally admit a smooth basis of eigenvectors with corresponding smooth eigenfunctions. The following proposition gives necessary and sufficient conditions to determine when a given Benenti tensor is an IC-tensor.

Proof This is a direct consequence of the torsionless property of these tensors, since in this case there are coordinates (q^i) such that L is diagonal and each eigenfunction u_i depends only on q^i. Hence if du_i ≠ 0 for each i, the eigenfunctions are functionally independent. If the u_i are analytic functions of q^i, then by assumption it follows that L is an IC-tensor in a dense open subset of U.
Proposition 5.3
Suppose L is an OCT and p(z) = det(zI − L) is its characteristic polynomial. Suppose u i is a simple eigenfunction of L with du i ≠ 0; then the corresponding eigenform is given by the formula below, where dp is the exterior derivative of p with respect to the ambient coordinates and p ′ is the partial derivative of p with respect to z. Furthermore, if L is an IC-tensor, then the metric in the coordinates induced by the eigenfunctions of L is as follows: Proof Since u i is an eigenfunction, we can write p(z) = (z − u i )f (z) for a smooth function f (z). By taking the exterior derivative, we get: Then by L'Hôpital's rule, we find that: which can be solved for du i since u i is a simple eigenfunction. The fact that Ldu i = u i du i follows from the fact that L is torsionless.
To calculate the metric, first note that g ij = 0 when i ≠ j, since L is self-adjoint and has simple eigenfunctions. For the remaining components:
Remark 5.4 The assumption that L is a concircular tensor can be replaced with any symmetric contravariant tensor whose associated endomorphism is torsionless.
✷
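As a concrete illustration of Proposition 5.3, the following sketch (our own check, not part of the original text; sympy and the Euclidean E 2 normalization L = A + r ⊗ r ♭ are assumptions) verifies symbolically that the differential of an eigenfunction of a central CT is an eigenform:

```python
# Sketch: verify L du = u du for an eigenfunction u of the central CT
# L = A + r (x) r^flat in E^2 with A = diag(l1, l2).  Assumed
# normalization; an illustration, not the paper's computation.
import sympy as sp

x1, x2, z = sp.symbols('x1 x2 z', positive=True)
l1, l2 = sp.symbols('l1 l2', real=True)

L = sp.Matrix([[l1 + x1**2, x1*x2],
               [x1*x2, l2 + x2**2]])
p = (z*sp.eye(2) - L).det()          # characteristic polynomial p(z)
u = sp.solve(p, z)[0]                # one eigenfunction u(x1, x2)

du = sp.Matrix([sp.diff(u, x1), sp.diff(u, x2)])
print(sp.simplify(L*du - u*du))      # expected: the zero vector
```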
The following lemma on determinants will be used several times.
Lemma 5.5 Suppose T = A + v ⊗ x for a linear operator A, a vector v and a covector x. Then det T is given as follows: Proof The formula clearly holds for n = 1, so inductively suppose the formula holds for k = n − 1; then:

In the following sections, we will obtain the following information. First we will calculate the characteristic polynomial for CTs in spaces of constant curvature. Using this, for ICTs we will calculate the transformation from the canonical coordinates they induce to Cartesian coordinates, and we will calculate the metric in canonical coordinates.
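Before moving on, here is a quick numerical sanity check (ours, with numpy) of the rank-one determinant expansion underlying the determinant lemma above; the identity det(A + v ⊗ w) = det A + w · adj(A) v is the standard matrix-determinant-lemma specialization of the column-by-column expansion:

```python
# Numerical check: det(A + v w^T) = det(A) + w^T adj(A) v.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
v = rng.standard_normal(5)
w = rng.standard_normal(5)

lhs = np.linalg.det(A + np.outer(v, w))
adjA = np.linalg.det(A) * np.linalg.inv(A)   # adjugate of A
rhs = np.linalg.det(A) + w @ adjA @ v
print(np.isclose(lhs, rhs))                   # True
```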
Central Concircular tensors
The following general lemma will be used to calculate the characteristic polynomial of central CTs.
Proof The first statement follows from Lemma 5.5 by taking A → A, r → v and r ♭ → x. Now for the second part, let k = dim U ; then in a basis adapted to the decomposition V = U ⊕ U ⊥ , we have: The main fact we use is that for any square matrix, T , of the form: Now consider the simplest case where A = diag(λ 1 , ..., λ n ). Then Eq. (5.5) can be used to get the characteristic polynomial of L, which is:
Now suppose L is an ICT with eigenfunctions (u 1 , . . . , u n ); then from the above equation we have: One can check that by assumption we must have λ i ≠ λ j if i ≠ j; this will be proven later. Thus we deduce the transformation from the coordinates (u 1 , . . . , u n ) to Cartesian coordinates to be: The derivation of the transformation to Cartesian coordinates follows that of [Cra03, section 5]. We will use this method for all other types of CTs as well. Now, it will be useful to write the characteristic polynomial in standard form:

Proposition 5.7 Suppose L is a central CT with parameter matrix A = diag(λ 1 , ..., λ n ) and arbitrary orthogonal metric. Write the characteristic polynomial of A as: Proof We will prove this formula by expanding Eq. (5.9). For the following calculations, if a(z) is a polynomial in z, then [z l ]a(z) denotes the coefficient of z l in this polynomial. First observe that: We also have:
We will prove inductively that: Then by the inductive hypothesis, we have: which together with Eq. (5.9) proves the proposition.
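The following symbolic check (our own, with sympy; n = 3 and Euclidean signature are assumed, so all ε i = 1) confirms the product form of the characteristic polynomial used in Proposition 5.7:

```python
# Check: det(zI - (A + r r^T))
#        = prod_j (z - l_j) * (1 - sum_i x_i^2 / (z - l_i))
# for A = diag(l1, l2, l3) in E^3 (Euclidean signature assumed).
import sympy as sp

z = sp.symbols('z')
l = sp.symbols('l1:4', real=True)
x = sp.symbols('x1:4', real=True)

A = sp.diag(*l)
r = sp.Matrix(x)
L = A + r*r.T

p = sp.expand((z*sp.eye(3) - L).det())
q = (z - l[0])*(z - l[1])*(z - l[2]) \
    * (1 - sum(xi**2/(z - li) for xi, li in zip(x, l)))
print(sp.simplify(p - sp.cancel(q)))   # expected: 0
```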
In the following theorem we collect a useful limiting procedure for dealing with Jordan blocks. It has been proven by Kalnins, Miller, and Reid in [KMR84] for general dimensions. We have independently verified it only for dimensions less than three. The details of this verification are only partially included in the following proof, which can be omitted without loss of continuity.
Proof First consider the following definitions: Note that ε k l is of order k if k, l > 0. Finally let λ i := λ 1 + ε 1 i−1 . Then the conclusion follows by direct calculation as ε 1 i → 0 for each i = 2, . . . , n. Now suppose L is a central CT with parameter matrix A = J T k (0). We will use the above theorem to obtain this CT as a limit of central CTs with parameter matrix A = diag(0, λ 2 , . . . , λ k ). The characteristic polynomial of these CTs is given by Eq. (5.13). In order to obtain the characteristic polynomial for a CT with A = J T k (0) we will use the fact that the characteristic polynomial of J T k (0) is z k . Then starting with A = diag(0, λ 2 , . . . , λ k ), by Eq. (5.13) we have: Thus we have proven part of the following: Proposition 5.9 Suppose L is a central CT with parameter matrix A = J T k (0) and metric g = εS k . Then the characteristic polynomial of L is: Proof We first prove the case where A is a real Jordan block. To prove that L has no constant eigenfunctions, we differentiate an equation preceding this proposition to obtain: from which we see that ⟨e k , ∇p⟩ = −2εz k−1 x 1 . Thus L cannot have a constant eigenfunction. The equation for ⟨dT, dT⟩ is proven as follows. When A = diag(0, λ 2 , . . . , λ k ) one can easily prove the formula using Eq. (5.9). Then the formula for A = J T k (0) follows by applying the limiting technique of Theorem 5.8 used above. Finally, for the case of a complex Jordan block, i.e. A = J T k (λ) where λ ∈ C, note that these proofs hold after replacing A → A − λI and z → z + λ. Now one can use the second part of Lemma 5.6 to obtain the characteristic polynomial of any central CT in E n ν . Indeed, suppose L is a central CT with parameter matrix of the general block form; we can apply Lemma 5.6 with U equal to the subspace corresponding to J T k (0). When L is an ICT, we can obtain a transformation from canonical coordinates to Cartesian coordinates. Our formula is motivated by one in [KMR84] and is given as follows: The following lemma will be used to obtain the metric in canonical coordinates adapted to an ICT defined in a space of constant curvature.
Lemma 5.10 Suppose L is a central CT with parameter matrix A. Let: Proof We prove this by induction. The base cases are given by Proposition 5.9. Suppose U is a non-degenerate invariant subspace of A such that L u has the form given by Proposition 5.9 and U ⊥ satisfies the induction hypothesis. By Eq. (5.6) we can write: Then dp = B u ⊥ dp u + B u dp u ⊥ . Thus from the above equation, we have:
Examples
We end this section with some separable coordinate systems induced by central ICTs which can be analyzed fairly easily. These examples are a natural generalization of those presented in [Cra03, section 5].
Example 5.11 (Elliptic coordinates) Using the above formula, one can show that L has no constant eigenfunctions (see the proof of Proposition 5.9). Then by Proposition 5.2, this CT is an ICT near any point where the eigenfunctions of L are simple. We will now show that L is an ICT in a dense subset of E n ν . First note that: Assume each x i ≠ 0; then from Equation 5.22, we find that sgn p(λ i ) = ε i (−1) n+1−i . Also, since the leading term of p(z) is z n , we find that sgn p(z) → 1 as z → ∞ and sgn p(z) → (−1) n as z → −∞. Since by assumption we have that ε n = 1, we can use the intermediate value theorem to deduce the following about the roots of p(z). If ν = 0 (i.e. in Euclidean space), there are n distinct roots u 1 , ..., u n satisfying: λ 1 < u 1 < λ 2 < u 2 < · · · < λ n < u n . If ν > 0 then there are n distinct roots u 1 , ..., u n satisfying: Hence L is an IC-tensor on an open dense subset of E n ν ; because of this property one could consider the induced separable coordinates to be a generalization of elliptic coordinates. Using the value of p(λ i ) computed above, together with Eq. (5.24) and Proposition 5.15, one can check that in the separable coordinates (u 1 , . . . , u n ), for 1 ≤ i ≤ ν, sgn g ii = (−1) n−i+1 (−1) n−i = −1. Hence ∂ 1 , . . . , ∂ ν are timelike vector fields and the remaining ones are spacelike. ✷

We now show that if we relax the condition that λ 1 < · · · < λ n in the above example, then the coordinate system may no longer be defined on a dense subset of E n ν . One should note, however, that in E n that condition was not restrictive. The simplest case occurs in E 2 1 .

Example 5.12 Consider a central CT L in E 2 1 with parameter matrix A = diag(λ 1 , λ 2 ) where λ 1 > λ 2 and orthogonal metric g = diag(−1, 1). Denote Cartesian coordinates by (t, x). In this case the characteristic polynomial of L, p(z), given by Eq. (5.13) reduces to: One can calculate the discriminant of this polynomial to be: If we define new Cartesian coordinates (y 1 , y 2 ) by: and we let e := √(λ 1 − λ 2 ), then L is a Benenti tensor on the following connected regions: Hence the regions are separated by the lightlike lines y i = e. Thus, as claimed, the associated separable coordinate systems are not defined on a dense subset.
One can also find the coordinate domains as follows. Suppose L is an ICT with eigenfunctions u 1 < u 2 . Then by requiring the metric in these coordinates, given by Proposition 5.15, to be Lorentzian, one finds the following constraints: The above inequalities show that in the subset where L is a Benenti tensor, if the eigenfunctions transition from one coordinate domain to another then one of the eigenfunctions must take the value λ 1 or λ 2 . Hence the transition manifolds are solutions of p(λ i ) = 0, i.e. by Eq. (5.9), points where (x i ) 2 = 0. In this case, the eigenfunctions of L can be readily calculated:
Using the values of the eigenfunctions on these subsets and their possible ranges given in Eq. (5.28), one can deduce the following: Together with Eq. (5.11), this completes the analysis of these coordinate systems. ✷ Even in three dimensions, the above analysis becomes much more difficult. This is because in three dimensions one can show that the discriminant is a degree-eight polynomial in the coordinates with many terms. However, we note two simplifications that can be made in the general case. First, by passing to a geometrically equivalent CT, we can assume one of the eigenvalues of A is zero. Second, since the characteristic polynomial of L, given by Eq. (5.9), only depends on the quantities (x i ) 2 and not on the x i explicitly, one can restrict the analysis to the quadrant where each x i > 0 without loss of generality. This symmetry is a consequence of the non-uniqueness of the chosen basis, in particular of the fact that if v is an eigenvector of A then so is −v.
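A quick numerical illustration of the interlacing property from Example 5.11 (our own sketch with numpy, Euclidean case; the sample values of λ i and x i are arbitrary):

```python
# Interlacing of elliptic-coordinate eigenfunctions in E^4:
# l1 < u1 < l2 < u2 < l3 < u3 < l4 < u4 at a point with every x_i != 0.
import numpy as np

lam = np.array([0.0, 1.0, 2.5, 4.0])
x = np.array([0.7, -1.1, 0.4, 2.0])

L = np.diag(lam) + np.outer(x, x)        # central CT with A = diag(lam)
u = np.sort(np.linalg.eigvalsh(L))       # eigenfunctions at this point

print(u)
print(bool(np.all(lam < u) and np.all(u[:-1] < lam[1:])))   # True
```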
Axial Concircular tensors
Proposition 5.13 Let L be an axial CT with parameter matrix A = J k (0) T and metric g = εS k . Then Furthermore the following are true: • L has no constant eigenfunctions.
• If k ≤ 3, then ⟨dp, dp⟩ = 4ε (d/dz) p(z). ♦ Proof We first outline how one proves the above formula for p(z). It is sufficient to calculate det L when L has the parameter matrix A = J k (λ) T . Let Ã = [ã 1 , ..., ã n ] := A + εr ⊗ e k . Then applying Lemma 5.5 to L = Ã + e 1 ⊗ r ♭ gives: det L = det Ã + Σ i ã 1 ∧ · · · ∧ x i e 1 ∧ · · · ∧ ã n (with x i e 1 in the ith position). After expanding r and e 1 in the basis {a 1 , . . . , a k } and simplifying, the result then follows by a straightforward but tedious calculation.
Suppose the above formula for p(z) holds. We now show that L has no constant eigenfunctions. The constant term of dp is: If λ ∈ R satisfies p(λ) ≡ 0, then the above form must be identically zero, which is a contradiction; hence L has no constant eigenfunctions.
The formula involving ⟨dp, dp⟩ can be checked manually for the cases k ≤ 3.
The following proposition will reduce the calculation of the characteristic polynomial for general axial concircular tensors to cases already considered.
Proposition 5.14 (Determinant of Axial Concircular tensors)
Suppose L is an axial CT in canonical form given as follows: Then p(z) = det(zI − L) is given as follows: Proof First note that it is sufficient to calculate det L. Write r = r d + r c adapted to the decomposition E n ν = D ⊕ D ⊥ where D is the A-invariant subspace generated by e 1 . Then: where L d is L restricted to D and A c is A restricted to D ⊥ . Let L̃ = L d + A c + e 1 ⊗ (r c ) ♭ ; then applying Lemma 5.5 to L = L̃ + εr c ⊗ e k gives: det L = det L̃ + ε L̃ 1 ∧ · · · ∧ r c ∧ · · · ∧ L̃ n (5.34) where r c appears in the kth position. Note that in block diagonal form: Then after applying Lemma 5.5 once more, we get:
One can use Proposition 5.14 to obtain the characteristic polynomial of any axial CT in E n ν . This is done as in the example in the discussion following Proposition 5.9. As an example, we will calculate the Cartesian coordinates for a non-null axial CT (i.e. k = 1). Indeed, suppose L is a non-null axial ICT with eigenfunctions (u 1 , . . . , u n ). Let A c = diag(λ 2 , . . . , λ n ); then from Eq. (5.32) and Eq. (5.29), we see the form of p(z). Writing p(z) = ∏ i (z − u i ), we can deduce the transformation from the coordinates (u 1 , . . . , u n ) to Cartesian coordinates as follows. By evaluating p(λ i ), we get: By taking the coefficient of z n−1 of p(z), we get: x 1 = (ε/2)(u 1 + · · · + u n − λ 2 − · · · − λ n ) (5.38) In conclusion, we note that this procedure can be generalized for k ≥ 2. Observe that Eq. (5.32) holds for a central CT if we define p d (z) ≡ 1 in this case. We will use Eq. (5.32) and Lemma 5.10 to obtain the metric in canonical coordinates for some ICTs in E n ν . We have the following:
Proposition 5.15 (ICT metrics in E n ν ) Suppose L is an ICT in Euclidean or Minkowski space in canonical form with eigenfunctions (u 1 , . . . , u n ). Then the metric in adapted coordinates is orthogonal and is given as follows, where ε is the sign associated with L and λ 1 , . . . , λ n−k are the roots of B(z):
Remark 5.16
The above formula likely holds in general (see [KMR84]), but we have not verified it for null axial CTs when k > 3. Thus we have the following, using Proposition 5.3:
Remark 5.17
The above technique for calculating the metric is based on Moser's calculation of the metric for sphere-elliptic coordinates in [Mos11, pp. 179-180].
Corollary 5.18
Suppose L is a non-degenerate CT in Euclidean or Minkowski space in canonical form. Then the points at which a real eigenvalue of A c is an eigenvalue of L are singular, i.e. L cannot be an ICT in any neighborhood of these points.
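To make the k = 1 recovery above (Eq. (5.38)) concrete, the following sketch (ours; the E 2 normalization L = [[2x 1 , x 2 ], [x 2 , λ 2 ]] with ε = 1 is an assumption) checks the trace and determinant identities behind the coordinate transformation:

```python
# In E^2 with L = [[2 x1, x2], [x2, l2]] (non-null axial CT, assumed
# normalization), the eigenfunction sum gives Eq. (5.38) with eps = 1:
# x1 = (1/2)(u1 + u2 - l2).
import sympy as sp

z, x1, x2, l2 = sp.symbols('z x1 x2 l2', real=True)

L = sp.Matrix([[2*x1, x2], [x2, l2]])
p = sp.expand((z*sp.eye(2) - L).det())
u1, u2 = sp.solve(p, z)

print(sp.simplify(sp.Rational(1, 2)*(u1 + u2 - l2) - x1))   # 0
print(sp.simplify(u1*u2 - (2*x1*l2 - x2**2)))               # 0
```

With λ 2 = 0 this reduces to x 1 = (u 1 + u 2 )/2 and x 2 2 = −u 1 u 2 , the familiar parabolic coordinates.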
Concircular tensors in Spherical Submanifolds of pseudo-Euclidean space
In this section we treat the case of CTs defined on E n ν (κ). We will be able to reduce most calculations to similar ones involving central CTs. The following proposition will allow us to do this.
Using Eq. (5.41), for ICTs the transformation from canonical coordinates to Cartesian coordinates can be calculated using the standard method. Indeed, suppose L is an ICT in E n ν (1/r 2 ) with parameter matrix: Then by a calculation almost identical to the one used to derive Eqs. (5.17a) and (5.17b), one obtains the following, now using Eq. (5.41): The transformation from canonical coordinates (u 1 , . . . , u n−1 ) to Cartesian coordinates is obtained by noting that p(z) = ∏ i (z − u i ).

Example 5.20 (Circular coordinates) Let M = E 2 ν (κ) where κ = ±1. Consider the CT in M with parameter matrix: Then by Eqs. (5.45a) and (5.45b), Cartesian coordinates (x, y) are given by: We now show how to obtain the standard parameterizations of these coordinates. First note that by metric-Jordan canonical form theory, there are three isometrically inequivalent cases (these cases additionally depend on ν): Case 1 κ 1 = κ and ε = κ, thus g = diag(κ, κ). If we take u = cos 2 (t), then we obtain: x 2 = cos 2 (t), y 2 = sin 2 (t).
✷
Proof We reduce this calculation to the corresponding one for L c using Eq. (5.41). We assume that L is an ICT with eigenfunctions (u 1 , . . . , u n−1 ) in some neighborhood in E n ν (1/r 2 ). Now if we let d̃ denote the exterior derivative on the sphere, note that d̃p = R * dp. Now we make the following observation: ⟨dp, r ♭⟩ = ∇ r p = 0. This can be proven, for example, by using Eq. (5.5) and the fact that r is a CV. Note that the above equation also implies that ⟨dp c , r ♭⟩ = −2r 2 p.
Hence we see that ⟨d̃p, d̃p⟩ = ⟨dp, dp⟩. Thus at a root z = u i , we have ⟨d̃p, d̃p⟩ = r −4 ⟨dp c , dp c ⟩. Then at z = u i we have ⟨d̃p, d̃p⟩/B 2 = r −4 ⟨dp c , dp c ⟩/B 2 , which by Lemma 5.10 equals 4r −4 (d/dz) p c . Thus Proposition 5.21 follows from the above equation and Proposition 5.3.
Classification of reducible concircular tensors
In this section, we will show how to find a warped product which "decomposes" a given reducible OCT defined in a space of constant curvature. First we will prove a generic result which will allow us to construct reducible OCTs. Then in the next two sections, we will apply this result to pseudo-Euclidean space, and then to spherical submanifolds of pseudo-Euclidean space.
The following proposition will give us a useful characterization of reducible OCTs in terms of their irreducible part. Its proof, which is based on theorem 6.1 in [RM14b], can be omitted without loss of continuity.

Proposition 6.1 (Characterization of Reducible OCTs) Suppose L ∈ S 2 (M ) is an orthogonal tensor. Then L is a reducible OCT iff there exists a warped product decomposition M = M 0 × ρ 1 M 1 × · · · × ρ k M k with adapted contravariant metric G = Σ k i=0 G i such that L has the following contravariant form: where each λ i ∈ R and L̂ ∈ Ŝ 2 (M 0 ) is the canonical lift (see [RM14b]) of an ICT L̃ ∈ S 2 (M 0 ) satisfying the following equation on M 0 for each i > 0: L̃(d log ρ i ) = d(λ i log ρ i + (1/2) tr(L̃)) (6.2)

Proof Suppose L is an OCT. Let D 1 , . . . , D l be the eigenspaces of L associated with constant eigenfunctions and let M = M 0 × ρ 1 M 1 × · · · × ρ k M k be a warped product decomposition adapted to (D 1 , . . . , D l ), which exists by theorem 6.1 in [RM14b]. We define L̃ to be the restriction of L to M 0 ; it follows by theorem 6.1 that L̃ is an ICT in M 0 . It also follows by theorem 6.1 that we can assume the following, where a ranges over all eigenfunctions of L̃: If dim M 0 = 0, i.e. L induces a pseudo-Riemannian product, the conclusion follows. Otherwise, since λ i is constant and because L̃ is torsionless, we see that on M 0 : Conversely, it is easily checked that if L̃ is an ICT and ρ i satisfies the above equation, then cρ i must satisfy Eq. (6.3) for some c ∈ R + . Hence it follows that L defined in the statement is torsionless, and then by theorem 6.1 in [RM14b] that L is a reducible OCT.
In the following sections we will use the above proposition to classify reducible OCTs in spaces of constant curvature. But first we will need the following definition.
Definition 6.2 Suppose L is a CT in M and let N = N 0 × ρ 1 N 1 × · · · × ρ k N k be a local warped product decomposition of M passing through p̄ ∈ N ⊆ M . We say L is decomposable in this warped product if for each p ∈ N and i > 0, T p N i is an invariant subspace of L. ✷
In pseudo-Euclidean space
We first need to review the standard warped product decompositions of E n ν . All other warped product decompositions of E n ν can be built up from the standard ones. Our exposition is based on the article by Nölker [Nol96]. More details are given in [Raj14c], where the standard warped products of spaces of constant curvature are given, generalizing results originally given in [Nol96].
Consider the following decomposition E n ν = V 0 ⊕ V 1 of E n ν into nontrivial (hence non-degenerate) subspaces. Choose a ∈ V 0 \ {0} and p̄ ∈ V 0 such that ⟨a, p̄⟩ = 1. Denote κ := a 2 and ε := sgn κ. We have two types of warped products. Non-null warped decomposition: if κ ≠ 0, let W 0 := V 0 ∩ a ⊥ and W 1 := W ⊥ 0 . Let c = p̄ − a/κ and: Null warped decomposition: if κ = 0, then a is lightlike, so fix another lightlike vector b ∈ V 0 such that ⟨a, b⟩ = 1, and let W 0 := V 0 ∩ span{a, b} ⊥ and W 1 := V 1 . Let: In each case, we say that N 1 is the sphere determined by (p̄, V 1 , a). For i = 0, 1, let P i : E n ν → W i be the orthogonal projection, and define ψ as follows: Then the following holds: ψ is an isometry onto the following set: Furthermore, the following equation holds: ψ(p 0 , p 1 ) 2 = p 2 0 (6.10) Proof See [Raj14c].
In fact, for (p 0 , p 1 ) ∈ N 0 × N 1 , ψ has one of the following forms. First, if ψ is non-null: and if ψ is null: ψ(p 0 , p 1 ) = · · · + (· · · − ⟨a, p 0⟩ (P 1 p 1 ) 2 )a + ⟨a, p 0⟩ b + ⟨a, p 0⟩ P 1 p 1 (6.12) The above forms are obtained from the equation for ψ in the above theorem by expanding p 0 in an appropriate basis. The warped product decomposition ψ is completely determined by the fact that ψ(p̄, p̄) = p̄, and that N 1 is a spherical submanifold of E n ν with p̄ ∈ N 1 , T p̄ N 1 = V 1 and mean curvature normal −a at p̄ [Nol96; Raj14c]. The point p̄ was restricted so that the warped product is in canonical form (see [Raj14c]); we will make this assumption throughout this article. We call ψ the warped product decomposition (of E n ν ) determined by (p̄; V 0 ⊕ V 1 ; a); often we omit the point p̄ as it does not enter calculations, and in this case the warped product is assumed to be in canonical form.
We note that the warped products with multiple spherical factors can be obtained using the standard ones described above. Indeed, suppose φ 1 : N ′ 0 × ρ 1 N 1 → E n ν is the warped product decomposition determined by (p̄; V 0 ⊕ V 1 ; a 1 ) as above. Since V 0 is pseudo-Euclidean, consider a warped product decomposition, φ 2 : Ñ 0 × ρ 2 N 2 → V 0 , determined by (p̄; Ṽ 0 ⊕ Ṽ 1 ; a 2 ) with V 0 ∩ W ⊥ 0 ⊂ W̃ 0 (hence a 1 ∈ W̃ 0 ). Note that W̃ 0 is the subspace W 0 from the above construction for φ 2 . Let N 0 := N ′ 0 ∩ Ñ 0 ; then one can check that the map ψ defined by: is a warped product decomposition of E n ν . We illustrate this construction with an example.
✷
This procedure can be repeated as many times as necessary to obtain more general warped products. In general, for some p̄ ∈ E n ν , suppose we have a decomposition T p̄ E n ν = ⊕ k i=0 V i into non-trivial (hence non-degenerate) subspaces with k ≥ 1, and linearly independent pair-wise orthogonal vectors a 1 , . . . , a k ∈ V 0 \ {0}. Furthermore we will assume the warped product is in canonical form, so p̄ ∈ V 0 and ⟨a i , p̄⟩ = 1 for each i. This data determines a warped product decomposition ψ, having the following form [Raj14c]: where ρ i (p 0 ) = ⟨a i , p 0⟩ and N i is the sphere determined by (p̄, V i , a i ). This general formula is originally from [Nol96, theorem 7]. We call ψ the warped product decomposition determined by (p̄; V 0 ⊕ · · · ⊕ V k ; a 1 , ..., a k ). One can more generally let some of the a i be zero; this results in Cartesian products, as done in [Nol96]. Since we assume the a i are non-zero, we say additionally that ψ is a proper warped product decomposition. Finally, note that the properties of the more general warped product decompositions of E n ν can be deduced from Theorem 6.3. Now suppose N = N 0 × ρ 1 N 1 × · · · × ρ k N k is a warped product and L̃ is a CT in N 0 . We say L̃ can be extended to a CT in N if L̃ satisfies Eq. (6.2) for each i with some λ i ∈ R. Assuming L̃ is an OCT, Proposition 6.1 then allows one to define a CT on N which restricts to L̃ on N 0 . The following lemma will be our main tool for classifying reducible concircular tensors.
Lemma 6.5 Fix a proper warped product decomposition (V 0 ⊕ V 1 ; a) of E n ν and let L i j = A i j + mx i x j + w i x j + x i w j be a concircular tensor in N 0 . Then L can be extended to a concircular tensor in E n ν decomposable in this warped product iff a is an eigenvector of A orthogonal to w.
Proof First one computes tr(L); hence ∇ i tr(L) = 2(mx i + w i ). Now let ρ = a i x i = ⟨a, x⟩ > 0; then one can similarly show that: Then: By definition, L can be extended to a CT decomposable in this warped product iff L i j ∇ j log ρ − (1/2) ∇ i tr(L) ∈ span{∇ i log ρ}. The above equation implies that this happens iff a is an eigenvector of A and a ∈ w ⊥ .
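A symbolic check of Lemma 6.5 (our own sketch in E 3 , Euclidean signature; the sample A, w below satisfy the hypotheses by construction):

```python
# Check: with a an eigenvector of A orthogonal to w, the covector
# L(d log rho) - (1/2) d tr(L) is proportional to d log rho = a/rho.
import sympy as sp

x = sp.Matrix(sp.symbols('x1:4', real=True))
m, al, be, w2, w3 = sp.symbols('m alpha beta w2 w3', real=True)

a = sp.Matrix([1, 0, 0])                  # eigenvector of A below
A = sp.diag(al, be, be)
w = sp.Matrix([0, w2, w3])                # orthogonal to a

L = A + m*(x*x.T) + w*x.T + x*w.T         # L^i_j in Cartesian coordinates
rho = (a.T*x)[0]

v = L*(a/rho) - (m*x + w)                 # uses grad tr(L) = 2(m x + w)
print(sp.simplify(v.cross(a)))            # zero vector: v is parallel to a
```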
We now use the above lemma to construct reducible CTs in E n ν .

Proposition 6.6 (Constructing Reducible CTs in E n ν ) Fix a proper warped product decomposition (V 0 ⊕ V 1 ; a) of E n ν and let L̃ = Ã + m r̃ ⊙ r̃ + 2 r̃ ⊙ w̃ be a concircular tensor in N 0 (in contravariant form) which can be extended to a concircular tensor L in E n ν via the above lemma. Since N 0 ⊂ V 0 ⊂ E n ν , we can consider L̃ to be a tensor in E n ν . Then L is given as follows: where, as a linear operator, A = Ã + λI V 1 , where λ is the eigenvalue of Ã associated with a and I V 1 is the identity on V 1 . ♦ Proof Throughout the proof, G is the contravariant metric for E n ν , and this metric adapted to the warped product is given as follows: The non-null case: in this case κ 1 := a 2 = ±1. Let m := dim V 0 and choose an orthonormal basis for V 0 , {a 1 , ..., a m }, with a m = a.
First note that for p = (p 0 , p 1 ) ∈ N 0 × N 1 and v = (v 0 , v 1 ) ∈ T p (N 0 × N 1 ), Eq. (6.11) implies that: Hence we observe the following: and ψ * a i = a i for i = 1, ..., m − 1 (6.20) Now let L̃ = Ã + m r̃ ⊙ r̃ + 2 w̃ ⊙ r̃ be a concircular tensor in N 0 satisfying Ãa = λa for some λ and ⟨a, w̃⟩ = 0. Then from Lemma 6.5 we know that ψ * (L̃ + (λ/ρ 2 ) G 1 ) is a concircular tensor in E n ν . We now calculate ψ * (L̃ + (λ/ρ 2 ) G 1 ) explicitly. First note that Ã decomposes as follows, where A 0 a = 0 and so ψ * A 0 = A 0 by Eq. (6.20): Let G be the contravariant metric for E n ν and G 0 be the restriction of G to W 0 ; then: Let G V 1 be the restriction of G to V 1 ; then: where the second last equality follows from Eq. (6.20) and the fact that ψ is an isometry. Eq. (6.19) implies that ψ * r̃ = r; also Eq. (6.20) together with the fact that ⟨a, w̃⟩ = 0 implies that ψ * w̃ = w̃. Thus we conclude that: where, as a linear operator, A = Ã + λI V 1 , where I V 1 is the identity on V 1 .
The null case: In this case a is a lightlike vector. Let m := dim V 0 and choose a basis {a 1 , ..., a m−2 , a, b} for V 0 where {a 1 , ..., a m−2 } is an orthonormal basis for W 0 and a, b are as in the null warped product decomposition.
First note that for p = (p 0 , p 1 ) ∈ N 0 × N 1 and v = (v 0 , v 1 ) ∈ T p (N 0 × N 1 ), Eq. (6.12) implies that: Hence we observe the following: Now let L̃ = Ã + m r̃ ⊙ r̃ + 2 w̃ ⊙ r̃ be a concircular tensor on N 0 satisfying Ãa = λa for some λ and ⟨a, w̃⟩ = 0. Then from Lemma 6.5 we know that ψ * (L̃ + (λ/ρ 2 ) G 1 ) is a concircular tensor in E n ν . We now calculate ψ * (L̃ + (λ/ρ 2 ) G 1 ) explicitly. Since Ãa = λa, Ã can be decomposed in contravariant form as follows: where A 0 a = 0, and so ψ * A 0 = A 0 by Eq. (6.25). Let G be the contravariant metric for E n ν and G 0 be the restriction of G to W 0 ; then we see that: Let G V 1 be the restriction of G to V 1 ; then
where the third equality follows from Eq. (6.25) and the fact that ψ is an isometry. Eq. (6.24) implies that ψ * r̃ = r; also Eq. (6.25) together with the fact that ⟨a, w̃⟩ = 0 implies that ψ * w̃ = w̃. Thus we conclude that: where, as a linear operator, A = Ã + λI V 1 , where I V 1 is the identity on V 1 .
Remark 6.7
Note that even though the extended CT, L, can be naturally extended to all of E n ν , it is the extension of L̃ only on the subset Im(ψ) of E n ν given by Theorem 6.3, which is in general not a dense subset of E n ν .
✷
The following corollary will be useful in the sequel.
Corollary 6.8 Fix a proper warped product decomposition ψ determined by the data (V 0 ⊕ V 1 ; a) with κ 1 := a 2 = ±1. Let r̃ = P 1 r be the dilatational vector in W 1 and G 1 be the metric in W 1 . Write the metric adapted to the warped product as G = G ′ + (1/ρ 2 ) G̃ ; then: Proof Let G be the contravariant metric for E n ν and G 0 (resp. G 1 ) be the restriction of G to W 0 (resp. W 1 ); then recall that: Hence the above equation together with Eq. (6.20) implies that: Let p̃ 1 = p 1 − c ∈ W 1 (κ 1 ); then r̃ = P 1 r = ⟨a, p 0⟩ p̃ 1 . Then by Eq. (6.19): Thus, since r̃ 2 = ρ 2 κ 1 , we have:

We now present some examples which show how to use the above proposition (Proposition 6.6) to construct warped products which decompose a given reducible CT.
Example 6.9 Let W := e ⊥ and let P be the orthogonal projection onto W . Choose p̄ ∈ E n ν such that (P p̄) 2 ≠ 0; WLOG we assume (P p̄) 2 = ±1. We construct a warped product passing through p̄ which decomposes L.
Let κ 1 := sgn(P p̄) 2 and take a := κ 1 P p̄ ∈ W . Let V 1 = W ∩ a ⊥ and V 0 = V ⊥ 1 = Re ⊕ Ra. Note that a was chosen so that the initial data (p̄; V 0 ⊕ V 1 ; a) is in canonical form, and also note that κ 1 = a 2 . Let ψ : N 0 × ρ N 1 → E n ν be the warped product in Theorem 6.3 determined by this initial data. Now let Ã := εe ⊙ e + 0 a ⊙ a ∈ C 2 0 (N 0 ); then by construction we have that: Let L̃ be the central CT in N 0 with parameter matrix Ã, and suppose the contravariant metric in the warped product decomposes as G = G ′ + (1/ρ 2 ) G 1 . The above proposition shows that: for all points in the image of ψ, which includes p̄. Hence this warped product decomposition decomposes L. Note that this warped product was constructed so that Ã has simple eigenvalues, and so L̃ is no longer reducible. In the following we replace N 1 with N 1 − c 1 so that N 1 is a central hyperquadric. Then by Eq. (6.11), we have for (p 0 , p) = (κ 1 xa + ye, p) ∈ N 0 × N 1 : ψ(p 0 , p) = xp + ye ✷

The above example will be applied to construct separable coordinates in Section 7.2; see Example 7.4. We now give a non-Euclidean variation of the above example.

Example 6.10 Let W = a ⊥ . Choose p̄ ∉ W ; WLOG we assume ⟨p̄, a⟩ = ±1. We now construct a warped product passing through p̄ which decomposes L.
If ⟨p̄, a⟩ = −1, then set a := −a, so we can assume ⟨p̄, a⟩ = 1. Define b as follows: b := p̄ − (p̄ 2 /2) a (6.33) Note that b is a lightlike vector satisfying ⟨a, b⟩ = 1. Define V 1 = a ⊥ ∩ b ⊥ and V 0 = span{a, b}. Note that b was chosen so that the initial data (p̄; V 0 ⊕ V 1 ; a) is in canonical form. Let ψ : N 0 × ρ N 1 → E n ν be the warped product in Theorem 6.3 determined by this initial data.
Note that {b, a} forms a cycle of generalized eigenvectors for A and A| V 1 = 0 I V 1 . Hence by the above proposition, (ψ −1 ) * L is decomposable in this warped product. Also, by Theorem 6.3, p̄ ∈ Im(ψ). Furthermore, the restriction of (ψ −1 ) * L to N 0 , denoted L̃, is a central CT with 2D parameter matrix a ⊙ a.
In the following we replace N 1 with P 1 (N 1 − p̄) so that N 1 = V 1 is a vector space. Then by Eq. (6.12), we have for (p 0 , p) = (xb + ya, p):
General Construction
We will show how to use Proposition 6.6 to construct a warped product which decomposes an interesting class of non-degenerate reducible CTs. This construction generalizes the above examples. First we need a preliminary definition. Suppose A is a linear operator on a vector space. We say that a vector v is a proper generalized eigenvector of A if (A − λI) k v = 0 and (A − λI) k−1 v ≠ 0 for some λ ∈ C and k > 1. Let L = A + mr ⊙ r + 2r ⊙ w be a non-degenerate CT in E n ν in the canonical form given by Theorem 2.11. We let the subspace D and the matrix A c be as in the remarks following the theorem. We assume that each real generalized eigenspace of A c admits at most one proper generalized eigenvector. We lose no generality with this assumption when working in Euclidean or Minkowski space [Raj14a]. Now let W 1 , . . . , W k be the multidimensional (real) eigenspaces of A c with corresponding eigenvalues λ 1 , . . . , λ k . The following construction is based on the metric-Jordan canonical form of A c ; see Theorem 2.1 or [Raj14a, theorem 3.7].
Case 1 W i is a non-degenerate subspace. Choose a unit vector a i ∈ W i and define: Case 2 W i is a degenerate subspace. Consider the metric-Jordan canonical form for A c . By assumption there must be a single cycle v 1 , . . . , v r of generalized eigenvectors with v r ∈ W i being a lightlike eigenvector. Let a i := v r and define V i analogously. This data determines a warped product decomposition ψ : N 0 × ρ 1 N 1 × · · · × ρ k N k → E n ν in canonical form. By repeatedly applying Proposition 6.6 we see that L is decomposable in the warped product decomposition induced by ψ, with the following properties: • ((ψ −1 ) * L)| N 0 = Ã + m r̃ ⊙ r̃ + 2 r̃ ⊙ w, where r̃ is the dilatational vector field in N 0 . • Ã| D ⊥ only has eigenspaces of dimension one, i.e. each Jordan block of Ã| D ⊥ has a distinct eigenvalue.
• For each i > 0, T N i is an eigenspace of (ψ −1 ) * L with constant eigenfunction λ i .

On Completeness. We will end this section by showing that the above construction is complete, meaning that the restriction of (ψ −1 ) * L to the geodesic factor N 0 no longer has constant eigenfunctions.
We also note here that with an appropriate choice of a 1 , . . . , a k we can choose warped product decompositions to cover all of E n ν except for a union of closed submanifolds with dimension strictly less than n. Examples 6.9 and 6.10 give more details on how to do this, see also Theorem 6.3. In other words, for the non-degenerate CTs considered above, there exists a warped product decomposition ψ : N 0 × ρ 1 N 1 · · ·× ρ k N k → E n ν such that Im(ψ) is a dense subset of E n ν . Although the cost of this is that the factors N i may no longer be connected.
The following lemma shows that the classification of reducible CTs given above is complete for central CTs.
Lemma 6.11 (Reducible central CTs) Let L be a central CT with parameter matrix A. Suppose that each real generalized eigenspace of A has at most one proper generalized eigenvector. Then A has a real eigenspace Ẽ λ with dimension m > 1 iff L has a non-degenerate eigenspace E λ (defined on a dense subset of E n ν ) with constant eigenfunction λ and dimension m − 1.
✷ Proof It was proven above that under the hypothesis, if A has a real eigenspace with dimension m > 1 then L has a non-degenerate eigenspace E λ with dimension m − 1.
We will now prove the converse.
To prove the converse, we simply have to prove that if all real eigenspaces of A are at most one dimensional, then L has no non-degenerate eigenspaces with constant eigenfunctions defined on open subsets of E n ν . It is sufficient to show that L has no constant eigenfunctions defined on open subsets of E n ν . We prove this by induction. The base cases are given by Proposition 5.9. Suppose U is a non-degenerate invariant subspace of A such that L u has the form given by Proposition 5.9 and U ⊥ satisfies the induction hypothesis. By Eq. (5.6) we can write: Then dp = B u ⊥ dp u + B u dp u ⊥ . By the induction hypothesis, L u ⊥ has no constant eigenfunctions. Suppose λ is a constant eigenfunction of L; then by Proposition 5.9 and the above equation, it follows that: If B u has no real roots, we reach a contradiction. Otherwise, by construction, A must have a real eigenspace with dimension m > 1, a contradiction. Hence we conclude that L has no constant eigenfunctions, which proves the claim by induction.
Since a multidimensional eigenspace of an OCT has a constant eigenfunction, the above proposition allows us to classify these eigenspaces when the CTs considered induce an OCT on some subset of E n ν . For completeness, we will show that the hypothesis of the above proposition is the most general for classifying OCTs.
Proposition 6.12 Let L be a central CT with parameter matrix A. Suppose A has a real generalized eigenspace with multiple proper generalized eigenvectors; then L is not an OCT. ♦ Proof WLOG we can assume that this generalized eigenspace of A is associated with the eigenvalue zero. First we have L = A + r ⊙ r and L 2 = A 2 + Ar ⊙ r + r 2 r ⊙ r. By hypothesis, dim N (L) ≥ 1. We also have that dim N (A 2 ) ≥ 4. The above equation shows that the range of L 2 is spanned by {r, Ar} and the range of A 2 (on a dense subset of E n ν ); hence we see that dim N (L 2 ) ≥ 1 + dim N (L). This implies that L is not point-wise diagonalizable on some dense subset of E n ν (see for example [FIS03]).
In fact one can show that if A = J 2 (0) ⊕ J 2 (0), then the associated central CT has a 2-cycle of generalized eigenvectors associated with eigenvalue zero.
The following lemma is the analogue of Lemma 6.11 for axial CTs. Its proof is also analogous, and reduces to Lemma 6.11 with the help of Eq. (5.32) and Proposition 5.13. Lemma 6.13 (Reducible axial CTs) Let L be an axial CT with parameter matrix A. Suppose that each real generalized eigenspace of A c has at most one proper generalized eigenvector. Then A c has a real eigenspace Ẽ λ with dimension m > 1 iff L has a non-degenerate eigenspace E λ (defined on a dense subset of E n ν ) with constant eigenfunction λ and dimension m − 1. In conclusion we have the following theorem, which summarizes our classification: Theorem 6.14 (Classification of Reducible CTs in E n ν ) Let L be a non-degenerate CT in E n ν such that each real generalized eigenspace of A c has at most one proper generalized eigenvector. Then L is reducible iff A c has a multidimensional real eigenspace. If L is reducible, then there exists an explicitly constructible warped product decomposition ψ : N 0 × ρ 1 N 1 × · · · × ρ k N k → E n ν such that the following hold: • L is decomposable in the warped product N 0 × ρ 1 N 1 × · · · × ρ k N k .
• The restriction of (ψ −1 ) * L to N 0 has no constant eigenfunctions.
• Im(ψ) is an open dense subset of E n ν . ♦
In Spherical submanifolds of pseudo-Euclidean space
In this section we show how the problem of classifying reducible CTs in E n ν (κ) can be reduced to the same problem in E n ν ; we will assume n > 2 to avoid trivial cases. First we will need to obtain the warped product decompositions of E n ν (κ). The following proposition shows that any proper warped product decomposition of E n ν in canonical form restricts to a warped product decomposition of E n ν (κ). Its proof is a straightforward consequence of Eq. (6.10); see [Raj14c] for more details.
Remark 6.16
Sometimes N 0 (κ) may not be connected; for more details on this see [Raj14c]. ✷ Now we show how to restrict a reducible CT in E n ν to one in E n ν (κ).
Proposition 6.17 (Restricting Reducible CTs to E n ν (κ)) Let ψ : N 0 × ρ 1 N 1 × · · · × ρ k N k → E n ν be a proper warped product decomposition in canonical form and let p̄ ∈ Im(ψ) as in the above theorem. Suppose L c is a reducible central CT in E n ν satisfying
where G i is the restriction of G to T N i , λ i ∈ R and L̃ c is a CT in N 0 . Let φ := ψ| N ′ be the induced warped product decomposition of E n ν (κ) as in the above theorem. Then if we let L (resp. L̃) be the restriction of L c (resp. L̃ c ) to E n ν (κ) (resp. N 0 (κ)), then: Proof Let r̃ (resp. r) be the dilatational vector field in N 0 (resp. E n ν ). We will use the fact that ψ * r̃ = r; this can be deduced from the proof of Proposition 6.6 or Eq. (6.14). We let R * = I − (r ⊗ r ♭ )/r 2 be the orthogonal projection onto T E n ν (κ), with a similar definition for R̃ * with respect to T N 0 (κ). In the following, given L ∈ S 2 (E n ν ), we denote by R * L the restricted tensor given by (R * L) ij = R i l L lk R j k . Using the fact that ψ is an isometry and ψ * r̃ = r, one can show that R * • ψ * = ψ * • R̃ * . Also note that R̃ * G i = G i . Thus: By evaluating the above equation in N 0 (κ) × ρ 1 N 1 × · · · × ρ k N k , one obtains the desired result. Now we show how to apply the above results to obtain a warped product decomposition in which a given CT in E n ν (κ) is decomposable. Let L be a non-trivial CT in E n ν (κ); then there is a unique central CT, L c , such that L = R * L c . As described in the previous section, provided L c is reducible, we can choose a warped product decomposition of E n ν , ψ, such that L c = ψ * (L̃ c + Σ k i=1 λ i G i ) satisfies the hypothesis of the above proposition. Thus the above proposition gives a warped product decomposition φ which decomposes L, obtained by an appropriate restriction of ψ. We now give some examples of this procedure, obtaining the standard spherical coordinates.
Example 6.18 (Spherical Coordinates I) Let M = E n ν (κ) where κ = ±1 and n ≥ 3. Consider the CT L in E n ν (κ) induced by A = εe ⊙ e with ε := e 2 = ±1. Let P be the orthogonal projector onto e ⊥ and choose p̄ ∈ E n ν (κ) such that (P p̄) 2 = ±1. By Example 6.9 there is a warped product decomposition ψ : N 0 × ρ N 1 → E n ν passing through p̄ which decomposes L c := A + r ⊙ r. For (p 0 , p) = (xκ 1 a + ye, p) ∈ N 0 × N 1 , we have: To obtain a warped product decomposition of E n ν (κ), by Theorem 6.15 we need to restrict ψ to N 0 (κ) × N 1 . Let φ be the induced warped product decomposition of E n ν (κ); then it follows by Proposition 6.17 that L is decomposable in this warped product. We now give the standard forms of this warped product by parameterizing (x, y) as in Example 5.20 while enforcing x = ⟨a, p 0⟩ > 0 and N 0 (κ) to be connected. We have three different cases: Case 1 κ 1 = κ and ε = κ: φ : Case 2 κ 1 = κ and ε = −κ: φ : Case 3 κ 1 = −κ and ε = κ: φ : Note that even though there is only one inequivalent coordinate system on E 2 ν (κ), the last two warped products are inequivalent. This is due to the fact that a 2 = κ 1 is different in these cases and N 0 = {p ∈ V 0 | ⟨a, p⟩ > 0}.
✷
The following example considers spherical coordinates that only occur in non-Euclidean spheres.
Restricting ψ to N 0 (κ) × N 1 forces: Let φ be the warped product decomposition of E n ν (κ) induced by ψ as in Theorem 6.15. Again, it follows by Proposition 6.17 that L is decomposable in this warped product. We now give φ with the standard parameterization of N 0 (κ), enforcing x = ⟨a, p 0⟩ > 0 and N 0 (κ) to be connected. These conditions are all satisfied if we take x = (1/√2) exp(t). Then we have the following: Also note that if ν = −κ = 1, then φ is an isometry onto a connected component of E n 1 (−1) ≃ H n−1 .
✷ In conclusion we have the following theorem, which summarizes our classification:

Theorem 6.20 (Classification of Reducible CTs in E n ν (κ)) Let L be a non-trivial CT in E n ν (κ) with n > 2 such that each real generalized eigenspace of A has at most one proper generalized eigenvector. Then L is reducible iff A has a multidimensional real eigenspace. If L is reducible, then there exists an explicitly constructible warped product decomposition ψ : N 0 × ρ 1 N 1 × · · · × ρ k N k → E n ν (κ) such that the following hold:

1. L is decomposable in the warped product N 0 × ρ 1 N 1 × · · · × ρ k N k .

2. The restriction of (ψ −1 ) * L to N 0 has no constant eigenfunctions.

3. Im(ψ) is an open dense subset of E n ν (κ). ♦

Proof We give the proof of Item 2. First suppose λ is a constant eigenfunction of L; then one can naturally lift λ to a constant function on E n ν . Let p(z) be the characteristic polynomial of L having the form given by Eq. (5.41). Then since L r p = 0 (see the proof of Proposition 5.21), we must have p(λ) = 0 on some open subset of E n ν . Then the proof of Lemma 6.11 holds verbatim by Eq. (5.41), which proves the result.
Item 3 follows from the construction of ψ (see Proposition 6.17) and Theorem 6.14.
Applications and Examples
In this section we show how to apply the theory developed in this article to solve some of the motivating problems stated in the introduction. First, in Section 7.1 we show how to enumerate the isometrically inequivalent separable coordinates in a given space of constant curvature. Then in Section 7.2 we show how to construct separable coordinate systems by way of examples. Finally, in Section 7.3 we show how to explicitly execute the BEKM separation algorithm in general. We also give the details of executing the BEKM separation algorithm for the Calogero-Moser system.
Enumerating inequivalent separable coordinates
In this section we show how one can use the theory developed in this article to enumerate the isometrically inequivalent separable coordinate systems on a given space of constant curvature. For dimensions greater than two, this problem is recursive, as described in [RM14b, section 6.2]. This recursive nature was originally discovered by Kalnins et al. and is discussed more concretely in [Kal86]. So one will also have to enumerate the separable coordinate systems on spherical submanifolds of the underlying space and then construct the separable coordinate systems using warped products (see the beginning of Section 2.4 and also [RM14b, section 6.2]). The main step is to enumerate the geometrically inequivalent CTs, so we will focus on this. To do this, one has to enumerate the canonical forms summarized in Section 2.4 together with the metric-Jordan canonical forms for A c and take into account geometric equivalence. We illustrate this idea with some examples.
Example 7.1 (Central CTs) Let L be a central CT with parameter matrix A. In this case, we essentially have to enumerate the different metric-Jordan canonical forms for A. Fix λ 1 < · · · < λ n ∈ R.
In Euclidean space there is only one central CT we can build from these parameters; it is given by the parameter matrix A = diag(λ 1 , . . . , λ n ) and it induces the well known elliptic coordinate system (see Example 5.11).
In Minkowski space there are n (geometrically inequivalent) central CTs we can build from these parameters; they are given as follows:
They differ by the eigenvalue of A which is timelike. Similarly, there are n − 1 central CTs built only using λ 2 < · · · < λ n , with parameter matrix of the form: Now consider the case where A has a two-dimensional eigenspace, the rest being simple. Using λ 2 < · · · < λ n , in Euclidean space there are n − 1 central CTs, depending on which λ i corresponds to the two-dimensional eigenspace. Each of these cases in Euclidean space induces n − 1 different cases in Minkowski space, depending on which λ i becomes timelike; hence there are a total of (n − 1) 2 cases in Minkowski space.
Finally, we note that in Minkowski space A can have two complex conjugate eigenvalues; in that case, since the corresponding real Jordan block is distinguishable from the other real eigenvalues of A, a similar analysis applies. In general one would have to order the complex eigenvalues (see Definition A.1). ✷ Enumerating inequivalent axial CTs can largely be reduced to the same problem for central CTs. For example, in Euclidean space there is only one type of axial CT if all the eigenvalues of A c are distinct. We conclude with CTs in spherical submanifolds of pseudo-Euclidean space, as they are somewhat different.
Example 7.2 (CTs in E n ν (κ)) Let L be the CT in E n ν (κ) with parameter matrix A. Fix λ 1 < · · · < λ n ∈ R. In this case there are sometimes fewer geometrically inequivalent CTs than isometrically inequivalent ones.
In the Euclidean sphere there is only one CT we can build from these parameters; it is given by the parameter matrix A = diag(λ 1 , . . . , λ n ) and it induces the sphere-elliptic coordinate system. Now suppose the ambient space is Minkowski space. Then we only need to consider ⌈ n 2 ⌉ cases, given by (see Example 4.6): Note that only the first ⌈ n 2 ⌉ eigenvalues of A are made timelike. Most of the other cases can be deduced from the first example if one desires. We illustrate one difference, however, with an example. For the Euclidean sphere E 3 (1), fix λ 1 < λ 2 ∈ R and consider the CTs induced by the following parameter matrices: Note that −A 2 has the same form as A 1 ; specifically, the smallest eigenvalue of −A 2 is repeated. Hence, in considering parameter matrices with two-dimensional eigenspaces, we only need to enumerate those with the form given by A 1 , where the smaller eigenvalue is repeated.
✷
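The counting in Examples 7.1 and 7.2 can be checked mechanically; the sketch below (ours; the encoding of a class by the position of the timelike eigenvalue is an assumption consistent with Example 4.6) verifies n classes in Minkowski space versus ⌈n/2⌉ in H n−1 :

```python
# Count classes of parameter matrices diag(l1, ..., ln) with one
# timelike eigenvalue: n classes in E^n_1; in H^{n-1} the sign change
# identifies position i with n + 1 - i, leaving ceil(n/2) classes.
import math

for n in range(2, 8):
    minkowski = len({i for i in range(1, n + 1)})
    hyperbolic = len({min(i, n + 1 - i) for i in range(1, n + 1)})
    assert minkowski == n and hyperbolic == math.ceil(n / 2)
print("counts agree with n and ceil(n/2)")
```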
We have described how to enumerate the geometrically inequivalent CTs in spaces of constant curvature. One should note though, that in non-Euclidean spaces a given CT could induce different coordinate systems on disjoint connected subsets of the space (see Example 5.12). Hence in these cases, more work has to be done to enumerate the isometrically inequivalent separable coordinate systems.
Constructing separable coordinates
In a two-dimensional Riemannian manifold, all non-trivial CTs are Benenti tensors. Hence in this case, one can enumerate all isometrically inequivalent separable coordinates simply by enumerating the geometrically inequivalent CTs. The latter problem can be solved in pseudo-Euclidean space using Theorem 2.11. In Table 1 we have done this for E 2 and included the standard transformations from separable to Cartesian coordinates.
The vectors d, e form an orthonormal basis for E 2 and a > 0.
We end with a few more examples to further illustrate the theory. The first example shows how to obtain coordinates which diagonalize a Benenti tensor which is not an ICT.
Example 7.3 (Spherical coordinates in S 2 ) Fix d ∈ S 2 and let L be the CT induced in S 2 by restricting d ⊙ d. As we observed earlier, L is necessarily a Benenti tensor. In Example 6.18 it was shown that a warped product which decomposes L is given by: ψ(φ, p) = cos φ d + sin φ p
where p ∈ d ⊥ (1), i.e. p ∈ S 2 ∩ d ⊥ , and φ ∈ (0, π). Since d ⊥ (1) is the unit circle, we obtain coordinates on it by taking p = cos θ e + sin θ f where e, f is an orthonormal basis for d ⊥ . Then the above equation becomes: ψ(φ, p) = cos φ d + sin φ(cos θ e + sin θ f ) Furthermore, since ψ is a warped product decomposition with warping function sin φ, it follows from Example 6.18 that the metric is:

Example 7.4 (Oblate/Prolate spheroidal coordinates in E 3 ) Fix a unit vector d ∈ E n , c ≠ 0, and consider the following CT in E n : It follows from Example 6.9 that a warped product ψ which decomposes L is given as follows: Let e ∈ d ⊥ be a unit vector; then for (p 0 , p) = (xd + ye, p) ∈ N 0 × N 1 : ψ(p 0 , p) = xd + yp Observe that N 0 ≃ E 2 and L induces a Benenti tensor, L̃, on N 0 which has the form given by Eq. (7.5). If we let a := |c|, then using Table 1 we can take coordinates on N 0 which diagonalize L̃, yielding the following maps: ψ(p 0 , p) = a cos φ cosh η d + a sin φ sinh η p if c > 0, and ψ(p 0 , p) = a sin φ sinh η d + a cos φ cosh η p if c < 0. Also, N 1 is the unit sphere in d ⊥ , hence N 1 ≃ S n−2 . We can obtain separable coordinates for E n by taking any separable coordinates for S n−2 on N 1 [RM14b]. For example, if c > 0 and n = 3, we obtain prolate spheroidal coordinates: where e, f is any orthonormal basis for d ⊥ . Also note that using Proposition 5.15 and the fact that ψ is a warped product decomposition with warping function a sin φ sinh η, one can obtain the following expression for the metric: g = a 2 (sinh 2 η + sin 2 φ)((dφ) 2 + (dη) 2 ) + a 2 sin 2 φ sinh 2 η (dθ) 2 Finally, note that oblate spheroidal coordinates can be obtained by taking c < 0. ✷
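The prolate spheroidal metric stated above can be verified directly from the parameterization; the following sympy sketch (ours, c > 0 case, with d, e, f taken as the standard basis of E 3 ) computes g as the pullback of the Euclidean metric:

```python
# Pull back the Euclidean metric of E^3 through the prolate spheroidal
# parameterization; expect diag(a^2(sinh^2 eta + sin^2 phi),
#                               a^2(sinh^2 eta + sin^2 phi),
#                               a^2 sin^2 phi sinh^2 eta).
import sympy as sp

a, phi, eta, theta = sp.symbols('a phi eta theta', positive=True)

X = sp.Matrix([a*sp.sin(phi)*sp.sinh(eta)*sp.cos(theta),
               a*sp.sin(phi)*sp.sinh(eta)*sp.sin(theta),
               a*sp.cos(phi)*sp.cosh(eta)])

J = X.jacobian([phi, eta, theta])
g = sp.simplify(J.T * J)          # metric components in (phi, eta, theta)
print(g)
```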
Example 7.5 (Product coordinates in E 4 ) Consider the decomposition E n = V ⊕ W into non-trivial subspaces. Let G̃ denote the induced contravariant metric in V and consider the following CT in E n : L = G̃ Observe that the warped product ψ : V × 1 W → E n given by (q, p) → q + p is adapted to the eigenspaces of L. We can construct separable coordinates by parameterizing q (resp. p) with separable coordinates on V (resp. W ). For example, if dim V = dim W = 2, by taking polar (resp. elliptic) coordinates on V (resp. W ) from Table 1, we have the following separable coordinates on E 4 : ψ(q, p) = ρ cos θ b + ρ sin θ c + a cos φ cosh η d + a sin φ sinh η e where b, c (resp. d, e) is an orthonormal basis for V (resp. W ).
✷
Extending the above analysis one can prove that there are eleven classes of isometrically inequivalent separable coordinate systems in E 3 .
The BEKM separation algorithm
In this section we show how to execute the BEKM separation algorithm (see [RM14b, section 6.3] for details) in spaces of constant curvature using the classification of CTs given in this article.
In order to execute this algorithm in E n ν we will need the Killing Bertrand-Darboux (KBD) equation in E n ν and in E n ν (κ). Fix a function V ∈ F(E n ν ) and suppose n > 1. Then if L is the general CT in E n ν given by Eq. (2.11) and K e := tr(L)G − L is its KBDT, then the KBD equation in E n ν is: d(K e dV ) = 0 We will often refer to the above equation as just the KBD equation.
It will be convenient to evaluate the KBD equation in E n ν (κ) via its embedding in E n ν . Then if L̃ is the general CT in E n ν (κ) given in E n ν by Eq. (2.18), let L := r 2 L̃ and K s := tr(L)R − L; then the KBD equation in E n ν (κ) (embedded in E n ν ) is: We will often refer to the above equation as the spherical KBD equation. We will show how this equation is derived in Section 7.3.2.
We should also mention here that we carry out the BEKM separation algorithm slightly differently than described in [RM14b, section 6.3]. We construct warped products which decompose reducible OCTs such that the induced CT on the geodesic factor is an ICT as opposed to a Benenti tensor. This allows one to simultaneously construct separable coordinates while carrying out the algorithm, as illustrated by the following example.
Example: Calogero-Moser system
We first present an example which separates in several different coordinate systems and hence provides a good example for the BEKM separation algorithm. Our example is the Calogero-Moser system, which will be defined shortly. Another advantage of this example is that its separability properties have been studied by several different authors [HMS05; WW05; WW03; BCR00; Cal69], hence it allows one to compare and contrast different methods. Finally we mention that we obtained this example from [WW03] where an algorithm equivalent to the BEKM separation algorithm was used to study this example.
The n-dimensional Calogero-Moser system is given by the following natural Hamiltonian [Cal08]: We will take ω = 0, g = 1 for convenience. In this case this Hamiltonian models n point particles moving on a line, acted on by forces depending on their relative distances. We can write the potential V as follows: V = Σ i ⟨a i , r⟩ −2 where a i = e k − e l for some k, l ∈ {1, . . . , n} with e i := ∂ i . Furthermore, we let d denote the unit vector in the direction of e 1 + · · · + e n . We can obtain solutions to the KBD equation by using the following result.
Proposition 7.6 Suppose L = A + mr ⊙ r + 2w ⊙ r is a CT in E n ν and let L̃ be the restriction of L to E n ν (κ). Let a be a covariantly constant vector and let V := ⟨r, a⟩ −2 . If a is an eigenvector of A orthogonal to w, then V satisfies the KBD equation with L in E n ν . If a is an eigenvector of A, then the restriction of V to E n ν (κ) satisfies the KBD equation with L̃ in E n ν (κ). ♦ Proof We first consider the case in E n ν . Under these hypotheses it follows by Lemma 6.5 that if ρ := |⟨r, a⟩|, then we have:
for some λ ∈ R. From the above equation one can check that L satisfies the KBD equation with V . A similar proof holds for the case in E n ν (κ), but now the above equation with L̃ follows either by restriction of the one in the ambient space or by Proposition 6.17 together with Eq. (6.2) from Proposition 6.1.
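As a check of Proposition 7.6 (our own sketch in E 3 , with the KBD equation taken in the form d(K e dV ) = 0 as above; the sample A, w satisfy the hypotheses by construction):

```python
# Verify that V = <r, a>^(-2) satisfies d(K_e dV) = 0 when a is an
# eigenvector of A orthogonal to w, for L = A + m r (.) r + 2 w (.) r.
import sympy as sp

x = sp.Matrix(sp.symbols('x1:4', real=True))
m, al, be, w2, w3 = sp.symbols('m alpha beta w2 w3', real=True)

a = sp.Matrix([1, 0, 0])
A = sp.diag(al, be, be)                     # a is an eigenvector of A
w = sp.Matrix([0, w2, w3])                  # w orthogonal to a

L = A + m*(x*x.T) + w*x.T + x*w.T
K = L.trace()*sp.eye(3) - L                 # the KBDT K_e
V = (a.T*x)[0]**(-2)

F = K*sp.Matrix([sp.diff(V, xi) for xi in x])    # the 1-form K_e dV
curl = [sp.simplify(sp.diff(F[i], x[j]) - sp.diff(F[j], x[i]))
        for i in range(3) for j in range(i + 1, 3)]
print(curl)                                 # [0, 0, 0]
```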
Remark 7.7
This result comes from the connection between extending KTs into warped products and the separation of the Hamilton-Jacobi equation for natural Hamiltonians [RM14b]. One can show that the commuting integrals can be explicitly calculated; this is a consequence of the fact that L is torsionless. ✷ Remark 7.8 One can naturally construct separable potentials from the above proposition. For example, if a 1 , . . . , a n is an orthonormal basis for E n ν , then the above proposition implies that the following potential is separable in generalized elliptic coordinates (see Example 5.11): V = Σ i k i ⟨r, a i ⟩ −2 for some k i ∈ R. In fact this potential is clearly multi-separable. Furthermore, we can also obtain a multi-separable potential on E n ν (κ) by restriction.
✷ Now returning to the Calogero-Moser system, we construct the most general solution to the KBD equation that one can obtain using the above proposition:

Proposition 7.9 If V is the potential of the Calogero-Moser system given by Eq. (CM), then the following CT is a solution of the KBD equation: L = c d ⊙ d + m r ⊙ r + 2w d ⊙ r where c, w, m ∈ R. Furthermore, the restriction of the above CT to S n−1 is a solution of the spherical KBD equation. ♦ Proof Consider the vectors b i := e 1 − e i for i ≠ 1. We construct the most general CT for which each vector b i is an eigenvector of A and orthogonal to w. Observe that none of them are mutually orthogonal and that they span an (n − 1)-dimensional subspace. Now suppose A is a self-adjoint operator such that each b i is an eigenvector of A. Then it follows that A must have d ⊥ as an eigenspace, hence A = kI + cd ⊙ d for some k, c ∈ R. Thus, up to equivalence, the above form of L satisfies our requirements, and it follows by Proposition 7.6 that L satisfies the KBD equation with V .
The second statement on the spherical KBD equation follows by a similar argument using Proposition 7.6.
Remark 7.10 It follows by a straightforward calculation that the CT stated in the above proposition is the most general solution of the KBD equation. Similarly, when n = 3 one can check that the solution to the spherical KBD equation given in the above proposition is the most general. ✷

Canonical forms. We obtain the canonical forms according to Theorem 2.11 for the CTs given by Eq. (7.18). First, the constants ω i from Eq. (2.12) are given as follows: Note that in Euclidean space, one only needs to calculate ω 0 and ω 1 to carry out the classification. We now consider the cases given by Theorem 2.11: Case 1 Elliptic: ω 0 ≠ 0. By applying the translation given by Eq. (2.16) and changing to a geometrically equivalent CT, one obtains: for some c ∈ R.
Case 2 Parabolic: ω 0 = 0, ω 1 ≠ 0. By applying the translation given by Eq. (2.17) and changing to a geometrically equivalent CT, one obtains: Case 3 Cartesian: ω 0 = 0, ω 1 = 0, c = 0. In this case, after changing to a geometrically equivalent CT, we have: Hence the three geometrically inequivalent solutions of the KBD equation for the Calogero-Moser potential are given by Eqs. (7.20) to (7.22). Note that we can obtain these CTs from Eq. (7.18) with an appropriate choice of parameters, hence there is no need to apply any isometries.
Determining Separability

We analyze these solutions further to find separable coordinates. We will obtain a complete analysis for the case n ≤ 3 for purposes of illustration. For the following analysis, we fix unit vectors $a \in d^\perp$ and $e \in d^\perp \cap a^\perp$.
We define $N_1$ to be the unit sphere in $d^\perp$. Note that if $d^\perp = \mathbb{R}a$, then we take $N_1 = \{a\}$. When $\dim N_1 = 1$, we take coordinates on it as follows:

Case 1 Elliptic with $c \neq 0$ When n > 2, this CT is reducible, and a warped product decomposition ψ which decomposes this CT is given by Example 6.9. First define $N_0$ as follows: For $(p_0, p) = (xa + yd, p) \in N_0 \times N_1$, ψ is given as follows (see Example 6.9):

$$\psi(p_0, p) = xp + yd$$

Note that this equation also holds when n = 2, but in this case ψ is not a warped product decomposition. To separate $V$, we have to apply the BEKM separation algorithm with $V$ restricted to $N_1$, on $N_1$; it will be more convenient, though, to use the spherical KBD equation in $d^\perp$ (see the next section for more details).
When n ≤ 3, no additional steps are needed since in this case $\dim N_1 \leq 1$. Indeed, by Example 5.11, $L$ restricted to $N_0$ is an ICT (on a dense subset), hence $L$ has simple eigenfunctions (locally), and so one obtains separable coordinates for $V$ by taking elliptic coordinates on $N_0$ [RM14b]. When c < 0 we obtain oblate spheroidal coordinates, and when c > 0 we obtain prolate spheroidal coordinates; see Example 7.4 for more details.
Case 2 Parabolic When n > 2, proceeding as in Example 6.9 (see also Eq. (6.34)), one observes that the same warped product ψ as in the above case decomposes this CT. When n ≤ 3, by arguments similar to the above case, one finds that $L$ locally has simple eigenfunctions, and one obtains separable coordinates for $V$ by taking parabolic coordinates on $N_0$ [RM14b]. The resulting coordinate system is often called rotationally symmetric parabolic coordinates.
Case 3 Spherical: Elliptic with c = 0 In this case, one can check that the following warped product ψ decomposes $L$. For $(p_0, p) = (\rho a, p) \in \mathbb{R}^+ a \times S^{n-1}$, ψ is given as follows:

$$\psi(p_0, p) = \rho p$$

Now observe that even when n = 3, $L$ does not have simple eigenfunctions, in contrast to the previous two cases. To fill the multidimensional eigenspace of $L$ corresponding to $r^\perp$, we have to solve the spherical KBD equation (see the next section for more details). When n = 3, we can fill this degeneracy by using the solution to the spherical KBD equation given by Proposition 7.9. Indeed, that proposition shows that the CT on $S^{n-1}$ induced by $d \odot d$ is a solution of the spherical KBD equation. Hence by Example 7.3, this induced CT is diagonalized in spherical coordinates, and we see that $V$ separates in the following coordinates [RM14b].
Case 4 Cartesian
In this case we obtain a product which decomposes $L$ as follows. First let $N_0 = \mathbb{R}d$ and $N_1 = d^\perp$; then for $(p_0, p) = (xd, p) \in N_0 \times N_1$, we have:

$$\psi(p_0, p) = xd + p$$

As in the above case, even when n = 3, $L$ does not have simple eigenfunctions. Hence we have to apply the BEKM separation algorithm with $V$ restricted to $N_1$, on $N_1$. When n = 3 one finds that the general solution to the KBD equation is $\bar{r} \odot \bar{r}$, where $\bar{r}$ is the dilatational vector field in $N_1$. Thus if we take polar coordinates in $N_1$, we obtain separable coordinates for $V$. For $(p_0, p) = (xd, y\sigma(\theta)) \in N_0 \times N_1$ with y > 0, we have:

$$\psi(p_0, y\sigma(\theta)) = xd + y(\cos(\theta)a + \sin(\theta)e)$$

We conclude with some remarks. First, the analysis given above is complete when n ≤ 3, although when n > 3 the warped product decompositions obtained may still allow for partial separation of the Hamilton-Jacobi equation. When n = 4 it was shown in [WW05] that no additional solutions to the (spherical) KBD equation could be obtained; hence our analysis above is also complete when n = 4.

[…] induces one on any leaf of the foliation induced by $r^\perp$. The following proposition shows how to solve the problem described earlier in this more general context.
Proposition 7.11
Suppose $L$ is a CT on $M$ and $r$ is a non-null CV. Let $E := r^\perp$, and $L_E := L|_E$. Then $\tilde{L} := r^2 L_E$ restricts to a CT on any integral manifold of $E$, and it satisfies $\mathcal{L}_r \tilde{L} = 0$ on $M$, where $\tilde{L}$ is in contravariant form. ♦

Proof The proof of this fact is a straightforward calculation. We first note that since $r$ is a CV with conformal factor $\varphi$, we have that

$$\mathcal{L}_r L^{ij} = -2\varphi L^{ij} \qquad \text{and} \qquad \nabla_r r^2 = 2\varphi r^2$$

Finally

$$(\mathcal{L}_r(r^2 L^{ij}))u_i v_j = r^2(\mathcal{L}_r L^{ij})u_i v_j + (\nabla_r r^2)L^{ij}u_i v_j = -2r^2\varphi L^{ij}u_i v_j + 2r^2\varphi L^{ij}u_i v_j = 0$$

Thus since $r^\flat$ is closed, we conclude that $\mathcal{L}_r \tilde{L} = 0$. Also, as we noted earlier, Proposition 4.1 implies that $\tilde{L}$ induces a CT on any integral manifold of $E$.
Remark 7.12
The above ansatz for $\tilde{L}$ was deduced by studying results obtained by Benenti in [Ben08], although one can also obtain $\tilde{L}$ by solving a certain differential equation.
✷
Returning to $\mathbb{E}^n_\nu$, let $r$ be the dilatational vector field and $\tilde{L} = r^2 L_E$ as in the above proposition. Note that $L_E$ is given in general by Eq. (2.18). Let $G$ be the metric of $\mathbb{E}^n_\nu$; then $R = G_E$ is the induced metric on $\mathbb{E}^n_\nu(\frac{1}{r^2})$, and the above proposition shows that $\mathcal{L}_r(r^2 R) = 0$. Hence $r^2 R$ is the r-lift of the metric of $\mathbb{E}^n_\nu(\kappa)$ (up to sign). Hence if $\operatorname{tr}(L)$ is obtained by using the metric of $\mathbb{E}^n_\nu$, the lifted KBDT is given as follows:

$$K_s = \left(\operatorname{tr}(L)\tfrac{1}{r^2}\right)(r^2 R) - L = \operatorname{tr}(L)R - L$$

which is the KBDT in $\mathbb{E}^n_\nu(\kappa)$ embedded in $\mathbb{E}^n_\nu$. Also note that it follows from Proposition 4.3 in [RM14b] that $K_s$ is a KT in $\mathbb{E}^n_\nu$. Also, using Eq. (2.18), one can calculate $K_s$ explicitly:

$$K_s = \operatorname{tr}(A)r^2 R - \langle r, Ar \rangle G - r^2 A + 2Ar \odot r$$

Note that since the term $\operatorname{tr}(A)r^2 R$ is a multiple of the metric of $\mathbb{E}^n_\nu(\kappa)$, that term can be removed. We summarize our results in the following statement:

Proposition 7.13 (Spherical KBD equation) Suppose $V \in \mathcal{F}(\mathbb{E}^n_\nu)$ is a potential in $\mathbb{E}^n_\nu$ which satisfies the KBD equation with $r \odot r$. Let $L$ be a CT in $\mathbb{E}^n_\nu(\kappa)$ with parameter matrix $A$. Then $V$ satisfies the KBD equation induced by $L$ in $\mathbb{E}^n_\nu(\kappa)$ iff it satisfies the spherical KBD equation (Eq. (7.13)) with $L$ in $\mathbb{E}^n_\nu$. ♦
In pseudo-Euclidean space
We show how to execute the BEKM separation algorithm in pseudo-Euclidean space. Fix a non-trivial solution $L$ of the KBD equation in $\mathbb{E}^n_\nu$. First apply the classification given by Theorem 2.11 to $L$. We assume that $L$ is in one of the canonical forms listed in that theorem. If $L$ is a Cartesian CT, then the analysis is straightforward; see Section 7.3.1 for example. So we now assume $L$ is non-degenerate and each generalized eigenspace of $A_c$ has at most one proper generalized eigenvector.
First, if $A_c$ has no multidimensional (real) eigenspaces, then it is not reducible by Theorem 6.14. Hence one obtains separable coordinates for the natural Hamiltonian on the subset where $L$ is an ICT. Now suppose $A_c$ has multidimensional (real) eigenspaces $W_1, \ldots, W_k$. It is shown in Eq. (6.34) that one can obtain data $(\bar{p};\ \bigoplus_{i=0}^{k} V_i;\ a_1, \ldots, a_k)$ which determines a warped product decomposition $\psi : N_0 \times_{\rho_1} N_1 \times \cdots \times_{\rho_k} N_k \to \mathbb{E}^n_\nu$ in canonical form. Note that ψ decomposes the KBDT, $K$, associated with $L$. We now work with $K$.
We consider a somewhat more general situation in order to incorporate the spherical case later. Suppose $K$ is an orthogonal KT in $\mathbb{E}^n_\nu$ which is decomposed by the warped product ψ just constructed. Furthermore, assume that each $N_i$ corresponds to a distinct eigenspace of $K$. We now show how to apply the BEKM separation algorithm on the spheres $N_i$ by working only in a pseudo-Euclidean space.
Case 1 $N_i$ is a non-null sphere, i.e. $a_i^2 \neq 0$ Let $W_{i\perp} := W_i^\perp$ and $c_i := \bar{p} - \frac{a_i}{\kappa_i}$. Define $\phi : W_{i\perp} \times W_i \to \mathbb{E}^n_\nu$ to be the standard product decomposition. Embed $W_i$ in $\mathbb{E}^n_\nu$ as follows: Note that $N_i = \tau_i(W_i(\kappa_i))$. Let $r_i$ be the dilatational vector field in $W_i$. By Corollary 6.8 and Proposition 5.2 in [Raj14c], it follows that $\tau_i^* V$ satisfies the KBD equation with $r_i \odot r_i$. Hence by Proposition 7.13 it is necessary and sufficient to solve the spherical KBD equation on $W_i$ with $\tau_i^* V$.

Case 2 $N_i$ is a null sphere, i.e. $a_i^2 = 0$ Embed $N_i$ in $\mathbb{E}^n_\nu$ as follows (see Eq. (6.14)): identify $(\ldots, \bar{p}, p_i, \bar{p}, \ldots, \bar{p})$ with $p_i$. In this case $N_i$ is isometric to $V_i$, which is a pseudo-Euclidean space. Hence the BEKM separation algorithm can be applied on $V_i$.
In the following section we show how to apply the BEKM separation algorithm on $\mathbb{E}^n_\nu(\kappa)$.
In spherical submanifolds of pseudo-Euclidean space
We show how to execute the BEKM separation algorithm in $\mathbb{E}^n_\nu(\kappa)$. First we convert it to a problem in $\mathbb{E}^n_\nu$. Let $\tilde{V}$ be a potential in $\mathbb{E}^n_\nu(\kappa)$. Note that $\tilde{V}$ can be naturally lifted to a potential in $\mathbb{E}^n_\nu$ satisfying $\mathcal{L}_r \tilde{V} = 0$ using an appropriate coordinate system. Then one can check that the potential $V := \frac{\tilde{V}}{\kappa r^2}$ in $\mathbb{E}^n_\nu$ satisfies the KBD equation with $r \odot r$ in $\mathbb{E}^n_\nu$ and equals $\tilde{V}$ at points of $\mathbb{E}^n_\nu(\kappa)$. So we lose no generality in working with a potential $V \in \mathcal{F}(\mathbb{E}^n_\nu)$ which satisfies the KBD equation with $r \odot r$.
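To make the last claim concrete (assuming, as in the preceding discussion, that $r^2$ denotes $\langle r, r \rangle$ and that $\mathbb{E}^n_\nu(\kappa)$ is realized as the central hyperquadric on which $\langle r, r \rangle = 1/\kappa$), the restriction is immediate:

$$\kappa r^2 = 1 \ \text{ on } \mathbb{E}^n_\nu(\kappa) \quad \Longrightarrow \quad V = \frac{\tilde{V}}{\kappa r^2} = \tilde{V} \ \text{ on } \mathbb{E}^n_\nu(\kappa).$$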
Note that by Proposition 7.13, we only need to consider solutions of the spherical KBD equation in $\mathbb{E}^n_\nu$. So let $L$ be a non-trivial solution of the spherical KBD equation (Eq. (7.13)). As in the pseudo-Euclidean case, we assume each generalized eigenspace of $A$ has at most one proper generalized eigenvector. In order to execute the BEKM separation algorithm in $\mathbb{E}^n_\nu$, we will need the following lemma:

Lemma 7.14 Let $L_c$ be the central CT associated with $L$ and $K_s = \operatorname{tr}(L)R - L$ be the KBDT associated with $L$. Suppose $L_c$ is reducible and let $\psi : N_0 \times_{\rho_1} N_1 \times \cdots \times_{\rho_k} N_k \to \mathbb{E}^n_\nu$ be a warped product which decomposes $L_c$. Then ψ decomposes $K_s$.
Proof This follows from the proof of Proposition 6.17. In that proof we obtained the following equation: Then we have: Hence the result follows.

Now by Proposition 6.17 it follows that $L$ is reducible iff $L_c$ is reducible. Hence if $L_c$ is not reducible, one obtains separable coordinates for the natural Hamiltonian on the subset (of $\mathbb{E}^n_\nu(\kappa)$) where $L$ is an ICT. If $L_c$ is reducible, then by the above lemma, one can follow the arguments given in the previous section using the warped product decomposition induced by $L_c$ which decomposes the KT $K_s$.
We now make some crucial remarks. Let $\psi : N_0 \times_{\rho_1} N_1 \times \cdots \times_{\rho_k} N_k \to \mathbb{E}^n_\nu$ be a warped product decomposition which decomposes $L_c$ and let $\phi : N_0(\kappa) \times_{\rho_1} N_1 \times \cdots \times_{\rho_k} N_k \to \mathbb{E}^n_\nu(\kappa)$ be an induced warped product decomposition of $\mathbb{E}^n_\nu(\kappa)$ as in Theorem 6.15. First note that the separable coordinates are constructed using the warped product φ. Also, because the spherical factors $N_i$ (where i > 0) are simultaneously spherical factors of ψ and φ (see Theorem 6.15), we can work in the ambient space.
Conclusion
In this article we have given a classification of concircular tensors in spaces of constant curvature which permits us to apply them to the separation of variables problem as suggested in [RM14b]. We have obtained canonical forms for these tensors modulo the action of the isometry group in Sections 3 and 4, studied the webs described by irreducible concircular tensors in Section 5 and obtained warped product decompositions adapted to reducible orthogonal concircular tensors in Section 6. In Section 7 we have shown how to apply these results to solve some of the motivating problems listed in the introduction.
In our solution, there is one important problem that remains unresolved. In Minkowski space, $M^n$, with n ≥ 3, it is still computationally difficult to find the subset on which a given concircular tensor (CT) is a Benenti tensor. This implies that we still do not have a complete understanding of the separable coordinate systems for these spaces. However, when the space has Euclidean signature or n = 2, this is not a problem, as is illustrated by Examples 5.11 and 5.12 respectively.
For future research, it would be interesting to see if concircular tensors can be applied to other types of separation such as non-orthogonal separation [Ben92b; Ben97; KM79], complex separation [DR07], and conformal separation [BCR05]. Note that the first two types of separation are of no interest in Euclidean space but they are in Minkowski space. In [BM13], a procedure is given to obtain the local canonical (normal) forms for CTs in pseudo-Riemannian manifolds. Hence the results developed therein may be of interest for the study of the first two types of separation.
A Lexicographic ordering of complex numbers
Complex numbers can be given a natural lexicographic ordering (as in dictionaries) by using their Cartesian product structure:

Definition A.1 Suppose λ = a + ib and ω = c + id are complex numbers. We write λ < ω if: b < d, or (b = d and a < c). ✷

In the following we use "xor" to mean exclusive or, while "or" has its standard meaning. Suppose λ, ω, ν ∈ C and $a \in \mathbb{R}^+$; one can check that this ordering has the following properties:

trichotomy: λ = ω xor λ < ω xor ω < λ
transitivity: if λ < ω and ω < ν then λ < ν
translation invariance: if λ < ω then λ + ν < ω + ν
dilatation invariance: if λ < ω then aλ < aω
skew symmetry: if λ < ω then −ω < −λ

Furthermore, we note that if λ, ω ∈ R then this ordering reduces to the natural ordering of real numbers.

| 2015-09-28T20:19:03.000Z | 2014-04-10T00:00:00.000 | {
"year": 2014,
"sha1": "bea0b9749abff968e8b9d2cde30bc9e36257c1a1",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "bea0b9749abff968e8b9d2cde30bc9e36257c1a1",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
3592732 | pes2o/s2orc | v3-fos-license | Replication of Mini-Sentinel Study Assessing Mirabegron and Cardiovascular Risk in Non-Mini-Sentinel Databases
Background In 2014, the US Food and Drug Administration (FDA) initiated prospective routine surveillance using the Mini-Sentinel (M-S) program to assess potential signals of acute myocardial infarction (AMI) and stroke with use of mirabegron, indicated for the treatment of overactive bladder (OAB), compared with oxybutynin.

Purpose To replicate the FDA M-S analysis of mirabegron using datasets that did not contribute to the M-S program.

Methods IMS PharMetrics Plus and Truven MarketScan claims data from 2012–2015 were converted to the M-S Common Data Model. New and non-new users of mirabegron and oxybutynin were analyzed per the publicly available M-S protocol and propensity score-matched 1:1 using the M-S PROMPT 2 module. Incidence rates (IR) were calculated per 1000 person-years (PY). Adjusted hazard ratios (aHRs) for mirabegron versus oxybutynin were calculated using Cox regression models.

Results In PharMetrics, 12,429 new mirabegron users and 61,548 new oxybutynin users were identified. The aHR was 0.67 (95% confidence interval [CI] 0.33–1.37) for AMI (mirabegron IR 4.4/1000 PY), and 0.62 (95% CI 0.34–1.13) for stroke (mirabegron IR 6.3/1000 PY). In MarketScan, 17,182 new mirabegron users and 63,962 new oxybutynin users were identified. The aHR was 0.57 (95% CI 0.17–1.95) for AMI, and 0.69 (95% CI 0.30–1.62) for stroke; IRs were similar to those from PharMetrics. Neither dataset suggested an increased risk of AMI or stroke associated with mirabegron in non-new users.

Conclusions Using the publicly available M-S protocol and analysis programs with alternative (non-M-S) data sources, no statistically significant increased risk of AMI or stroke was found among new or non-new users of mirabegron compared with oxybutynin. These findings were consistent with the FDA M-S mirabegron study.
Introduction
Overactive bladder (OAB) was defined in 2002 by the International Continence Society as a condition characterized by urgency with or without incontinence, generally in the presence of frequency and nocturia, and suggestive of lower urinary tract dysfunction [1][2][3]. OAB is a common disorder and occurs in a wide range of patients, from the young to the very elderly. OAB increases with age in both sexes, and it is often underdiagnosed and undertreated. The main symptom of OAB is urgency, and, therefore, persons with this symptom are considered to have OAB. The symptoms of OAB, particularly urinary urgency and urinary incontinence, can have a considerable impact on quality of life [4].
Mirabegron is a beta-3 adrenergic agonist indicated for the treatment of OAB with symptoms of urge urinary incontinence, urgency, and urinary frequency. During clinical development, mirabegron at a dose of 50 mg once daily was associated with mean increases in pulse rate of approximately one beat per minute compared with placebo, and a mean increase in blood pressure (BP) of 0.5-1 mmHg (systolic and diastolic) compared with placebo in patients with OAB [5].
In population-based epidemiologic studies, increased levels of heart rate and BP have been positively associated with the risk of stroke and coronary heart disease (CHD) [6]. Randomized trials have shown that pharmacologically reducing diastolic blood pressure by 5-6 mmHg for a few years in hypertensive patients was associated with relative reductions in stroke and CHD risk of 42 and 14%, respectively [6]. A 5 mmHg reduction in systolic BP resulted in a 14% overall reduction in mortality due to stroke and a 9% reduction in mortality due to CHD in hypertensive (≥140/90 mmHg) patients [7].
In June 2014, following the approval of mirabegron for the treatment of OAB, the US Food and Drug Administration (FDA) initiated a prospective routine observational surveillance assessment as part of the Mini-Sentinel (M-S) program to identify potential signals of acute myocardial infarction (AMI) and stroke with use of mirabegron. The objective of this research was to replicate the FDA's study, using databases not contributing to the M-S program.
Methods
The present study was a retrospective cohort study of US administrative claims data from the IMS PharMetrics Plus and Truven MarketScan databases from July 2012 to the latest date available in each database (June 2015 in MarketScan and September 2015 in PharMetrics). Mirabegron was approved by the FDA on 28 June 2012.
These databases were then converted to the FDA's M-S Common Data Model (CDM), using publicly-available specifications [8]. Conversion to the CDM permits the use of the M-S Prospective Routine Observational Monitoring Program Tool: Cohort Matching (PROMPT 2 module) [9], an assessment tool using a propensity score (PS)-matched cohort design. The use of this module was specified in the M-S protocol for mirabegron and generates analyses that can be easily compared to estimates from the FDA's M-S mirabegron safety study.
One minor CDM deviation was made to accommodate the IMS PharMetrics dataset. Per the FDA's M-S CDM specifications [8], a Provider ID is required to count distinct patient visits. As the IMS PharMetrics data does not include a Provider ID in the standard data available for licensing, visits to specific providers could not be directly identified. Therefore, multiple visits to the same provider in the same day were defined as visits in the same day and setting, with the same diagnosis codes across all fields; these were counted as one encounter for consistency with the CDM specifications. Visits on different days, visits on the same day in different settings, or visits on the same day with different diagnoses were considered to be separate encounters.
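As an illustration only, the encounter-collapsing rule described above can be sketched as follows; the column names (patient_id, service_date, setting, dx_codes) are hypothetical stand-ins, not actual CDM field names:

```python
import pandas as pd

def count_encounters(claims: pd.DataFrame) -> int:
    """Collapse claim lines into encounters: lines for the same patient on
    the same day, in the same setting, and with the same set of diagnosis
    codes across all fields count as one encounter; a different day,
    setting, or diagnosis set yields a separate encounter."""
    keys = claims.assign(
        # Order-insensitive signature of the diagnosis codes on each line
        dx_sig=claims["dx_codes"].apply(lambda codes: tuple(sorted(codes)))
    )[["patient_id", "service_date", "setting", "dx_sig"]]
    return len(keys.drop_duplicates())
```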
Cohort Selection
The index date was the first date of exposure to mirabegron or oxybutynin, and the baseline period was defined as the 183-day period prior to and excluding the index date. The primary analysis included new mirabegron and oxybutynin users, and new users were defined as those without any OAB prescription during the baseline period. The secondary analysis included non-new users of the same drugs, where non-new use was defined as an initiation of the cohort-defining drug, with at least one exposure to another OAB drug other than the cohort-defining drug during the baseline period. Therefore, this non-new user analysis includes patients who initiated the cohort-defining drug with recent prior OAB drug use, but is not limited to patients who actively switched from one OAB drug to the cohort-defining drug at the index date.
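A minimal sketch of this user classification, assuming a hypothetical per-patient table of OAB dispensings (the columns drug and dispense_date are our names, not protocol fields):

```python
import pandas as pd

BASELINE_DAYS = 183  # baseline window preceding (and excluding) the index date

def classify_user(oab_rx: pd.DataFrame, index_date: pd.Timestamp,
                  cohort_drug: str) -> str:
    """Return 'new' if no OAB dispensing falls in the baseline window,
    'non-new' if the baseline contains an OAB drug other than the
    cohort-defining one, and 'excluded' otherwise (baseline use of the
    cohort drug itself is neither new use nor non-new use)."""
    start = index_date - pd.Timedelta(days=BASELINE_DAYS)
    baseline = oab_rx[(oab_rx["dispense_date"] >= start) &
                      (oab_rx["dispense_date"] < index_date)]
    if baseline.empty:
        return "new"
    if (baseline["drug"] != cohort_drug).any():
        return "non-new"
    return "excluded"
```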
Patients aged <20 years and those newly initiating mirabegron or oxybutynin on the same day as another OAB drug were excluded from the study. Persons with an AMI or stroke in the 30 days prior to the index date were excluded from the analysis of that respective outcome.
Follow-up and Censoring
Follow-up time began on the index date with the cohort entry-defining mirabegron or oxybutynin dispensing and continued based on the number of days supplied of prescriptions for these agents. Follow-up ended (i.e., person-time was censored) upon the earliest occurrence of: the outcome of interest, a gap of ≥7 days between two consecutive prescriptions for the cohort-defining agent, discontinuation of the cohort-defining agent, a prescription for an OAB drug other than the cohort-defining agent, end of the study period, or health plan disenrollment. The earliest censoring event occurring for either person in a matched pair served as the censoring date for both persons in the pair.
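The censoring rule reduces to taking the earliest qualifying date; a sketch with our own function and argument names (the actual M-S analysis programs are SAS-based):

```python
from datetime import date
from typing import Optional

def censoring_date(outcome: Optional[date], supply_gap: Optional[date],
                   discontinuation: Optional[date],
                   other_oab_rx: Optional[date],
                   disenrollment: Optional[date], study_end: date) -> date:
    """Follow-up ends at the earliest of: outcome, start of a >=7-day gap
    in supply, discontinuation, dispensing of another OAB drug, health
    plan disenrollment, or end of the study period."""
    events = [d for d in (outcome, supply_gap, discontinuation,
                          other_oab_rx, disenrollment) if d is not None]
    return min(events + [study_end])

def pair_censoring_date(a: date, b: date) -> date:
    """Both members of a matched pair are censored at the earlier of
    their individual censoring dates."""
    return min(a, b)
```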
AMI, the primary outcome of interest, was defined by an International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) inpatient diagnosis of 410.X0 or 410.X1 in the principal position on an inpatient record. Stroke, the secondary outcome of interest, was identified by the presence of an ICD-9-CM code of 430, 431, 433.X1, 434.X1, or 436 in the principal position on an inpatient record.
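For illustration, these two code sets can be written as simple patterns over principal-position inpatient ICD-9-CM codes (a sketch built directly from the definitions above, not the actual M-S code lists):

```python
import re

# AMI: 410.X0 or 410.X1 in the principal position on an inpatient record
AMI_PATTERN = re.compile(r"^410\.\d[01]$")
# Stroke: 430, 431, 433.X1, 434.X1, or 436 in the principal position
STROKE_PATTERN = re.compile(r"^(430|431|433\.\d1|434\.\d1|436)$")

def is_ami(principal_dx: str) -> bool:
    return bool(AMI_PATTERN.match(principal_dx))

def is_stroke(principal_dx: str) -> bool:
    return bool(STROKE_PATTERN.match(principal_dx))
```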
The PROMPT 2 module controls for confounding by generating PS from the pre-defined lists of covariates specified and defined in the M-S protocol; all covariate definitions used in the present study match those specified in the M-S protocol [10]. These lists include baseline variables related to demographic characteristics (e.g., age, sex), healthcare resource utilization (e.g., number of visits), and clinical characteristics (e.g., co-morbidities and medication use). All covariates listed were identified using information available in claims databases, such as diagnosis and procedure codes.
Statistical Analysis
Statistical analyses were conducted with SAS® 9.4, using the published PROMPT 2 module as described previously.
The PROMPT 2 module was used to match mirabegron-exposed persons to oxybutynin-exposed persons by PS at a 1:1 ratio. This module implements the matching process outlined in the M-S PROMPT: Cohort Matching Technical Users' Guide [9]. Nearest neighbor matching on PS was conducted using a caliper distance of 0.025 units on the PS scale. Baseline characteristics of the unmatched and matched treatment cohorts and time on drug were summarized using descriptive statistics, including means, standard deviations (SDs), medians, and ranges for continuous variables, and frequencies for categorical variables. Per the FDA's M-S protocol, the PROMPT 2 module does not generate data for the matched cohorts if the PS matching model does not converge (i.e., does not complete the PS identification and/or matching process). This may be due to covariates that occur infrequently, leading to very small cell sizes.
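The nearest-neighbor step can be sketched as below; this is a simplified greedy illustration of 1:1 caliper matching, not the PROMPT 2 module's actual implementation:

```python
import numpy as np

def greedy_caliper_match(ps_treated: np.ndarray, ps_control: np.ndarray,
                         caliper: float = 0.025) -> list[tuple[int, int]]:
    """1:1 nearest-neighbor propensity-score matching: each treated person
    is paired with the closest still-unmatched control whose PS lies
    within the caliper; persons left unmatched are dropped."""
    available = np.ones(len(ps_control), dtype=bool)
    pairs = []
    for i in np.argsort(ps_treated):        # deterministic matching order
        dist = np.abs(ps_control - ps_treated[i])
        dist[~available] = np.inf            # controls already used
        j = int(np.argmin(dist))
        if dist[j] <= caliper:
            pairs.append((int(i), j))
            available[j] = False
    return pairs
```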
The PROMPT 2 module then conducts a Cox regression model and generates hazard ratios (HR) for mirabegron use compared to oxybutynin use with 95% confidence intervals (CIs), as well as other risk-associated information in the unmatched treatment cohorts. Adjusted hazard ratios (aHRs) and 95% CIs were calculated from Cox regression models stratified by PS decile without trimming, and in the matched treatment cohorts, if the PS matching model converged. Incidence rates (IRs) were calculated per 1000 person-years (PY) using the number of outcomes and person time in the matched treatment groups when the model converged, or in the unmatched groups if the matching model did not converge. All PS deciles contained patients from each treatment cohort. Details on the analytic approach are specified in the PROMPT User's Guide [11]. Results are reported for each dataset, in both the primary (new users) and secondary (non-new users) analyses, and for both outcomes, for a total of eight sets of analyses.
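For readers who want to reproduce the headline quantities, a hedged sketch using the Python lifelines package (the M-S module itself is SAS-based, and the column names here are illustrative):

```python
import pandas as pd
from lifelines import CoxPHFitter

def ir_per_1000_py(events: int, person_days: float) -> float:
    """Incidence rate per 1000 person-years."""
    return 1000.0 * events / (person_days / 365.25)

def ps_decile_stratified_hr(df: pd.DataFrame) -> pd.DataFrame:
    """Adjusted HR for exposure from a Cox model stratified by propensity
    score decile, mirroring the 'aHR after PS decile stratification'."""
    df = df.assign(ps_decile=pd.qcut(df["ps"], 10, labels=False))
    cph = CoxPHFitter()
    cph.fit(df[["followup_days", "event", "exposed", "ps_decile"]],
            duration_col="followup_days", event_col="event",
            strata=["ps_decile"])
    return cph.summary  # exp(coef) on 'exposed' is the adjusted HR
```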
Results

IMS PharMetrics Plus

New Users
For the AMI analysis, the mean age of 12,429 new mirabegron users was 55.6 ± 12.3 years, compared to 52.9 ± 13.3 years among 61,548 new oxybutynin users (Table 1). Among mirabegron users, 73.8% were women, compared to 64.0% of oxybutynin users. Mirabegron users had a lower mean number of healthcare visits when compared to oxybutynin users, including inpatient stays (0.6 vs. 1.3), emergency department visits (0.2 vs. 0.6), and ambulatory visits (2.5 vs. 3.2). Mirabegron users had a higher mean number of prescriptions when compared to oxybutynin users (19.3 vs. 16.1). Matched characteristics are not presented as the PS matching model did not converge.
Non-new Users
A total of 9025 non-new mirabegron users and 7899 non-new oxybutynin users were identified for the AMI analysis. Mean ages (mirabegron 57.9 ± 12.6, oxybutynin 57.2 ± 12.4 years) were higher than for new users (see above), a higher proportion of non-new users (77.9% of mirabegron users and 78.2% of oxybutynin users) were women compared to new users (see above), and non-new users had a higher mean number of prescriptions (mirabegron 25.8 ± 19.5, oxybutynin 24.4 ± 20.5). After PS-matching, 5172 patients remained in each cohort, and patient characteristics were similar across cohorts.
Characteristics of the new and non-new users identified for the stroke analysis are presented in Table 2. Among 12,379 new mirabegron users and 61,411 new oxybutynin users, characteristics were similar to new users identified for the AMI analysis, and the PS-matching model did not converge. Among non-new users, 8959 mirabegron users and 7872 oxybutynin users were identified with similar characteristics to non-new users in the AMI analysis; 5236 patients remained in each treatment cohort after PS-matching.
Truven MarketScan

New Users
The analysis of AMI from Truven MarketScan included 17,182 new mirabegron users and 63,962 new oxybutynin users (Table 3). A total of 17,138 new mirabegron users and 63,835 new oxybutynin users met the criteria for inclusion in the stroke outcome analysis (Table 4), with similar characteristics to the AMI analysis of new users in the same dataset; 15,973 patients remained in each cohort after matching. In the non-new user analysis of stroke, 15,173 mirabegron patients and 11,314 oxybutynin patients were selected, and their characteristics were again similar to the non-new users in the AMI analysis. After matching, 8103 patients in each cohort remained with similar characteristics.
Outcomes
Outcome analyses from new and non-new users in both datasets are presented in Table 5.
Among new users in the AMI outcome analysis in IMS PharMetrics, the mean length of drug exposure was 79 days on mirabegron and 44 days on oxybutynin (data not shown). The IR of AMI was 4.4/1000 PY for mirabegron and 6.5/1000 PY for oxybutynin. Prior to matching, the HR for AMI was 0.68 (95% CI 0.36-1.28), similar to the aHR after PS decile stratification (0.67; 95% CI 0.33-1.37). The PS-matched model failed to converge.
Among non-new users in the PharMetrics AMI analysis, the IR for mirabegron users was 5.8/1000 PY compared to 2.7/1000 PY among oxybutynin users. Point estimates varied, but no statistically significant association between mirabegron use and AMI was observed for unmatched users (HR 0.95; 95% CI 0.38-2.33), after stratification by PS decile (aHR 1.08; 95% CI 0.39-3.00), or after matching (aHR 2.00; 95% CI 0.37-10.92).

During follow-up, the IR of stroke was 6.3/1000 PY among new mirabegron users and 9.5/1000 PY among new oxybutynin users in PharMetrics, with a HR in the unmatched groups of 0.66 (95% CI 0.39-1.13). The aHR after stratification by PS decile was 0.62 (95% CI 0.34-1.13); the model in the PS-matched groups did not converge.
Discussion
The present study assessed two large US administrative claims databases from 2012-2015 and did not identify a statistically significant increased risk of AMI or stroke among new or non-new mirabegron users compared to oxybutynin users. These findings were consistent prior to and after matching (when the model converged) on PS created from demographic and clinical characteristics, as well as healthcare resource utilization data. To our knowledge, this is the first published attempt at replicating an M-S safety study using the publicly-available CDM specifications, study protocol, and PROMPT 2 module, using data sources other than those participating in M-S.
The FDA M-S reports on mirabegron published in September 2016 similarly found no increased risk of AMI or stroke among mirabegron users compared to oxybutynin users [12]. For instance, in the new user analysis of primary diagnoses of AMI, 4465 mirabegron users and 4464 oxybutynin users were matched [12]. The aHR for matched treatment groups was 1.00 (95% CI 0.14-7.10), while the wide confidence intervals reflected relatively few outcomes observed during the study period (five cases of AMI among mirabegron users vs. three among oxybutynin users) [12]. In the matched analysis of primary diagnoses of stroke, an aHR of 0.80 (95% CI 0.21-2.98) was reported [12]. Published studies to date have reported no association of mirabegron use and increased AMI and/or stroke risk [13,14].
Strengths of the FDA's M-S program include the use of a CDM, standardized cohort selection, and analysis modules that are publicly available and used by the M-S data partners. As one of the goals of this analysis was to replicate the methods of the FDA's M-S study, only a minor necessary deviation from the M-S CDM was made to identify unique patient visits. This overall consistency makes the results comparable to the findings reported by the FDA's M-S study of mirabegron, even though different analysis datasets were used. The methods described here may be applied by other researchers who wish to replicate a Mini-Sentinel study in other databases.

Some known limitations inherent to administrative claims databases must also be noted. Claims diagnoses represent justifications for billing and may not always accurately reflect patients' medical conditions. Variables that might be found in electronic health record data, such as alcohol consumption and body mass index, are not available in administrative claims databases; however, all variables specified in the M-S protocol could be coded in the datasets used. Confounders of outcomes related to the decision to treat with mirabegron versus oxybutynin may exist despite the use of PS matching.
A pharmacy claim indicates the availability of a medication to a patient, not actual use of that medication. Therefore, details of medication dispensing only approximate actual treatment patterns. Health care received outside of the health care plan, such as use of over-the-counter medications, do not appear in the claims data. Claims databases do not capture the reasons for failure to refill medications.
The databases used in this study are large commercial administrative claims databases, and they are considered to be generalizable to the US population with access to commercial health insurance. It is likely that there is some patient overlap between the PharMetrics and MarketScan databases, and it is possible that some overlap could be present with the M-S data. As patients are anonymized, however, the amount of any overlap cannot be determined and this information is not disclosed by the data vendors. Lastly, although use of the PROMPT 2 module was specified in the Mini-Sentinel protocol for mirabegron, this module has since been replaced by the Cohort Identification and Descriptive Analysis (CIDA) + Propensity Score (PS) tool; analyses conducted using the updated CIDA + PS tool may differ from those presented here.
Conclusions
This study of two large US administrative claims databases did not detect a statistically significant increased risk of either AMI or stroke among new or non-new users of mirabegron compared with oxybutynin users. These findings are consistent with both the FDA's Mini-Sentinel safety study of mirabegron and other published literature. The replication methods described here may be considered for other therapies, outcomes, and databases of interest to researchers. | 2018-04-03T00:15:45.505Z | 2017-11-13T00:00:00.000 | {
"year": 2017,
"sha1": "2045fcdb77903a30ac95387150c5bc88fa2f0c27",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40801-017-0124-7.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2045fcdb77903a30ac95387150c5bc88fa2f0c27",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
260211988 | pes2o/s2orc | v3-fos-license | Heterogeneity of Glycolytic Phenotype Determined by 18F-FDG PET/CT Using Coefficient of Variation in Patients with Advanced Non-Small Cell Lung Cancer
We investigated the role of Coefficient of Variation (CoV), a first-order texture parameter derived from 18F-FDG PET/CT, in the prognosis of Non-Small Cell Lung Cancer (NSCLC) patients. Eighty-four patients with advanced NSCLC who underwent 18F-FDG PET/CT before therapy were retrospectively studied. SUVmax, SUVmean, CoV, total Metabolic Tumor Volume (MTVTOT) and whole-body Total Lesion Glycolysis (TLGWB) were determined by an automated contouring program (SUV threshold at 2.5). We analyzed 194 lesions: primary tumors (n = 84), regional (n = 48) and non-regional (n = 17) lymph nodes and metastases in liver (n = 9), bone (n = 23) and other sites (n = 13); average CoVs were 0.36 ± 0.13, 0.36 ± 0.14, 0.42 ± 0.18, 0.30 ± 0.14, 0.37 ± 0.17, 0.34 ± 0.13, respectively. No significant differences were found between the CoV values among the different lesion categories. Survival analysis included age, gender, histology, stage, MTVTOT, TLGWB and imaging parameters derived from primary tumors. At univariate analysis, CoV (p = 0.0184), MTVTOT (p = 0.0050), TLGWB (p = 0.0108) and stage (p = 0.0041) predicted Overall Survival (OS). At multivariate analysis, age, CoV, MTVTOT and stage were retained in the model (p = 0.0001). Patients with CoV > 0.38 had significantly better OS than those with CoV ≤ 0.38 (p = 0.0143). Patients with MTVTOT ≤ 89.5 mL had higher OS than those with MTVTOT > 89.5 mL (p = 0.0063). Combining CoV and MTVTOT, patients with CoV ≤ 0.38 and MTVTOT > 89.5 mL had the worst prognosis. CoV, by reflecting the heterogeneity of glycolytic phenotype, can predict clinical outcomes in NSCLC patients.
Introduction
Lung cancer is the leading cause of cancer-related death worldwide [1]. Due to the late onset of clinical symptoms, most patients are already in advanced stages having distant metastases and poor overall survival at diagnosis. Based on their molecular and immunophenotypic profiles, these patients are candidates for chemotherapy, targeted therapy or immunotherapy. However, after an initial good response to therapy, the majority of these patients will become resistant to treatment and develop disease progression or die. Therefore, it would be helpful to identify from the beginning those with a higher risk of disease progression and death allowing the adoption of more aggressive therapeutic regimens. The tumor stage at initial diagnosis is the most reliable prognostic factor in Non-Small Cell Lung Cancer (NSCLC) patients and is used to establish subsequent therapeutic strategies. Nevertheless, patients within the same stage can show a wide spectrum of treatment responses and clinical outcomes highlighting the need for additional prognostic factors for a better stratification of these patients.
Texture analysis is an emerging tool for assessing intratumoral heterogeneity in medical imaging, allowing clinically relevant subvisual information to be extracted from images obtained with different modalities, such as Computed Tomography (CT), 2-[18F]fluoro-2-deoxy-D-glucose positron emission tomography/computed tomography (18F-FDG PET/CT), and Magnetic Resonance Imaging (MRI) [2][3][4][5]. Intratumoral heterogeneity of biological, molecular and pathological traits has been considered the main cause of treatment failure, therapeutic resistance and poor overall survival in cancer patients with metastatic disease [6][7][8]. Therefore, assessing tumor heterogeneity could be extremely useful to characterize tumor aggressiveness and to select risk-adapted therapy in individual patients. Similarly, among clinical diagnostic images, heterogeneity of 18F-FDG uptake within tumors has been attributed to several factors, including cellularity, proliferation, angiogenesis, necrosis and hypoxia [9], and high 18F-FDG uptake has often been associated with more aggressive tumors, poorer response to treatment and worse prognosis [10].
Previous studies performing texture analysis of 18 F-FDG PET/CT images in lung cancer patients showed that several parameters including dissimilarity, asphericity, coarseness and entropy were able to predict both Progression-Free Survival (PFS) and Overall Survival (OS) of patients [11][12][13][14][15][16]. Although we are aware that texture analysis is a powerful tool to evaluate tumor heterogeneity, we aimed at obtaining an easy and clinically suitable imaging parameter for the characterization of tumor heterogeneity. To this end, we selected Coefficient of Variation (CoV, Standard Deviation divided by SUVmean) as a simple and easy to calculate first-order texture parameter that may reflect the heterogeneity of glycolytic phenotype.
The aim of our study was to test the ability of CoV derived from 18F-FDG PET/CT images to evaluate and quantify the heterogeneity of the glycolytic phenotype in primary and metastatic lesions of NSCLC patients at advanced stages. Furthermore, we evaluated the prognostic power of this simple parameter determined on primary tumors and its ability to predict OS and PFS, along with other PET-based volumetric parameters such as total Metabolic Tumor Volume (MTV TOT) and whole-body Total Lesion Glycolysis (TLG WB) measured on all tumor lesions in each patient.
Patients
Our study included 84 consecutive patients (59 men, 25 women; mean age 66 ± 12 years; range 38-87 years) with histologically proven non-small cell lung cancer in advanced disease (stages III and IV) who had undergone whole-body 18 F-FDG PET/CT scan before any therapy at our Institution (Table 1). This retrospective study has been approved by the institutional ethics committee (Protocol N. 352/18) and all subjects signed an informed consent form.
We studied 41 patients with adenocarcinoma, 20 with squamous cell carcinoma, 3 with large cell carcinoma and 20 with NSCLC Not Otherwise Specified (NOS). Twenty-seven patients were in stage III (7 IIIA, 11 IIIB and 9 IIIC) while 57 patients were in stage IV (20 IVA and 37 IVB). Patients were treated according to their stage and other factors such as histology, molecular pathology, age, performance status and comorbidities [17]. In particular, 69 patients underwent chemotherapy, 4 of which in association with radiotherapy and 15 with immunotherapy. The remaining 15 patients did not receive any specific cancer therapy due to advanced age or severe comorbidities.
Patients were then monitored, and the mean follow-up period was 11 months (range 1-58 months). PFS was measured from the date of the baseline 18F-FDG PET/CT to the first observation of progressive disease, relapse or death. OS was calculated from the date of the baseline 18F-FDG PET/CT to the date of death.

18F-FDG PET/CT scans were acquired after fasting for 8 h and 60 min after intravenous administration of 370 MBq (10 mCi) of 18F-FDG. The blood glucose level, measured just before tracer administration, was <120 mg/dL in all patients. Hybrid imaging was performed with an Ingenuity TF scanner (Philips Healthcare, Best, The Netherlands). A multidetector CT scan was acquired using the following parameters: 120 kV, 80 mAs, 0.8 s rotation time, and pitch of 1.5; a fully diagnostic contrast-enhanced CT was acquired if not previously performed. PET scans were performed in 3-dimensional mode using 3 min per bed position and six to eight bed positions per patient, depending on patient height. Iterative image reconstruction was performed with an ordered subsets-expectation maximization algorithm. Attenuation-corrected emission data were obtained using filtered back projection of CT reconstructed images (Gaussian filter with 8 mm full-width half maximum) to match the PET resolution. Transaxial, sagittal, and coronal images as well as coregistered images were preliminarily examined using Ingenuity TF software (IntelliSpace Portal V5.0).
18F-FDG PET/CT Image Analysis
PET/CT data were transferred in DICOM format to a workstation and processed by the LIFEx program [18]. All areas of focal 18F-FDG uptake visible on at least 2 contiguous PET slices and not corresponding to physiological tracer uptake were considered positive. In the case of multiple regional or non-regional lymph nodes, or of liver, bone or other-site metastases, the lesion with the highest SUVmax was analyzed for each category. A Volume of Interest (VOI) of each lesion was delineated on PET images by drawing a tridimensional region around the target lesion using an automated contouring program setting an absolute threshold for SUV at 2.5, in agreement with previous studies [19,20]. Areas of necrosis were not included in the region of interest and were carefully excluded from the analysis. In addition, the accuracy of lesion delimitation was confirmed on the corresponding CT images. By computed analysis of each VOI, the following parameters were obtained: SUVmean, CoV, SUVmax, MTV and Total Lesion Glycolysis (TLG). CoV was determined as Standard Deviation (SD) divided by SUVmean, whereas MTV TOT and TLG WB were calculated as the sum of the corresponding values over all primary tumors, lymph nodes and distant metastatic lesions of each patient [21]. Multiple coalescent lymph nodes were considered as a single lesion. Brain metastases were not included in the analysis because the physiologically high FDG avidity of the brain can affect the correct delineation of the regions of interest. Moreover, non-measurable disseminated metastases were also excluded.
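A minimal sketch of the per-lesion computation, assuming the SUV voxel values of a delineated lesion are available as an array (this mirrors the definitions above, not the LIFEx internals):

```python
import numpy as np

SUV_THRESHOLD = 2.5  # absolute SUV threshold used for VOI delineation

def voi_metrics(lesion_suv: np.ndarray, voxel_volume_ml: float) -> dict:
    """First-order VOI metrics: SUVmax, SUVmean, CoV (SD/SUVmean),
    MTV (thresholded voxel count times voxel volume) and TLG
    (SUVmean times MTV)."""
    voi = lesion_suv[lesion_suv >= SUV_THRESHOLD]
    suv_mean = float(voi.mean())
    mtv_ml = voi.size * voxel_volume_ml
    return {
        "SUVmax": float(voi.max()),
        "SUVmean": suv_mean,
        "CoV": float(voi.std(ddof=1)) / suv_mean,
        "MTV_ml": mtv_ml,
        "TLG_g": suv_mean * mtv_ml,
    }
```

Patient-level MTV TOT and TLG WB then follow by summing MTV_ml and TLG_g over all of a patient's lesions.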
Statistical Analysis
Statistical analysis was performed using the software MedCalc for Windows, version 10.3.2.0 (MedCalc Software, Mariakerke, Belgium). A probability value of <0.05 was considered statistically significant. Student's t-test was used to compare the means of unpaired data. Pearson's correlation coefficient was used to evaluate the linear relationship between continuous variables. Univariate and multivariate analyses of clinical and imaging variables were performed using Cox proportional hazards regression. Variables that predicted PFS and OS by univariate analysis were included in the model for multivariate analysis along with age, the latter independently from its statistical significance. Survival analysis was performed using the Kaplan-Meier method and log-rank tests. Survivors were censored at the time of the last clinical control.
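As an illustration of the survival workflow (median dichotomization, Kaplan-Meier curves and a log-rank test), a sketch with the Python lifelines package; the study itself used MedCalc, and the column names here are hypothetical:

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def km_by_median(df: pd.DataFrame, marker: str,
                 time_col: str = "os_months", event_col: str = "death"):
    """Split patients at the median of `marker`, plot the two
    Kaplan-Meier curves and return the cutoff and log-rank p-value."""
    cutoff = df[marker].median()
    high = df[df[marker] > cutoff]
    low = df[df[marker] <= cutoff]
    for grp, label in ((high, "high"), (low, "low")):
        KaplanMeierFitter().fit(
            grp[time_col], grp[event_col], label=f"{marker} {label}"
        ).plot_survival_function()
    res = logrank_test(high[time_col], low[time_col],
                       event_observed_A=high[event_col],
                       event_observed_B=low[event_col])
    return cutoff, res.p_value
```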
Results

In addition, volumetric parameters such as MTV and TLG were calculated on all lesions of each patient, for a total of 419 lesions including 84 primary tumor lesions, 163 lymph nodes and 172 distant metastases. Mean MTV and TLG values in the 84 primary tumors were 66.79 ± 10.74 mL and 382.77 ± 56.83 g, respectively. Moreover, MTV TOT and TLG WB, which reflect whole-body tumor burden, were calculated by summing over all measurable lesions detected in each patient. Mean MTV TOT and TLG WB values were 140.85 ± 16.97 mL and 756.24 ± 88.60 g, respectively.
After a mean follow-up period of 11 months, 53 patients had progressive disease and died, 16 had progression and were alive whereas 15 patients had stable disease. Survival analysis was then performed including age, gender, histology, stage, imaging parameters derived from primary tumors (diameter, SUVmax, SUVmean, CoV, MTV and TLG) and whole-body volumetric parameters (MTV TOT and TLG WB ). SUVmax, SUVmean and CoV of primary tumors were dichotomized using the median value as threshold (11.63, 5.05 and 0.38, respectively). Table 3 reports the results of univariate analysis for both OS and PFS. OS was predicted by CoV (p = 0.0184), MTV TOT (p = 0.0050), TLG WB (p = 0.0108) and stage (p = 0.0041).
These variables along with age were tested in multivariate analysis, and age, CoV, MTV TOT and stage were retained in the model (χ2 = 24.4730, p = 0.0001). Subsequently, Kaplan-Meier analysis and log-rank testing were performed using the median values of CoV (0.38) and MTV TOT (89.5 mL) as cutoffs, showing that patients with CoV > 0.38 had significantly better OS as compared to those with CoV ≤ 0.38 (χ2 = 6.0005, p = 0.0143) (Figure 2a). Moreover, OS was significantly better in patients with MTV TOT ≤ 89.5 mL than in those with MTV TOT > 89.5 mL (χ2 = 7.4546, p = 0.0063) (Figure 2b).
Finally, CoV and MTV TOT were tested in the four possible combinations by using the respective median values as cutoffs for Kaplan-Meier analysis. A statistically significant difference among the four survival curves was found (χ2 = 14.1719, p = 0.0027). In fact, patients with CoV ≤ 0.38 and MTV TOT > 89.5 mL had the worst prognosis, while the best OS was observed in patients with CoV > 0.38 and MTV TOT ≤ 89.5 mL.

At univariate analysis, PFS was significantly predicted by MTV TOT (p = 0.0046), TLG WB (p = 0.0056) and stage (p = 0.0039); these variables along with age were tested in multivariate analysis, and only MTV TOT and stage were retained in the model (χ2 = 14.6020, p = 0.0007). By Kaplan-Meier analysis and log-rank test, patients with MTV TOT ≤ 89.5 mL showed a significantly prolonged PFS as compared to those with MTV TOT > 89.5 mL (χ2 = 9.2252, p = 0.0024).
Discussion
The present study shows that the first-order parameter CoV and the whole-body volumetric parameter MTV TOT derived from 18 F-FDG PET/CT may both predict the clinical outcome of patients with advanced NSCLC. In particular, patients with CoV of primary tumors lower than the threshold had worse OS suggesting that a high expression of the glycolytic phenotype in a large proportion of tumor cells, producing a small SD and a high SUVmean, can be associated with aggressive disease, poor response to treatment and consequent poor prognosis. On the contrary, patients with CoV higher than the threshold may have tumors with a low proportion of cells with a glycolytic phenotype that would lead to less aggressive disease, better response to therapy and improved survival. Moreover, also patients with MTV TOT higher than the threshold had worse outcomes and increased risk of progression due to their high tumor burden. Despite tumor heterogeneity of NSCLC occurring at both genetic and molecular levels, the glycolytic phenotype is retained by primary tumors, lymph node metastases and distant metastases with no statistically significant variations of CoV. Therefore, the glycolytic phenotype at different tumor sites has similar characteristics showing a comparable degree of heterogeneity. A further consideration is that the large panel of driver mutations found in NSCLC can modulate in a similar manner the glycolytic phenotype.
However, the limitations of our study, including the retrospective design, the relatively limited number of patients and the heterogeneous histology, mean that the results may require validation in a larger prospective study. In fact, the use of stringent criteria for the prospective enrollment of a large number of patients may reduce the heterogeneity caused by different histology of lung lesions, avoiding potential variability in the study population. In addition, although the interobserver variability in our study was limited by the fact that the regions of interest were drawn using an automated contouring program, different segmentation methods and thresholds may be compared to further reduce the variation in the extraction of texture features.
Tissue biopsy or random sampling cannot encompass the full extent of phenotypic or genetic variation within a tumor, and it cannot be used as a representative parameter of intratumoral heterogeneity across the entire tumor volume. Therefore, it would be helpful to use non-invasive methods to assess tumor heterogeneity for survival prediction and for the selection of patients who may need more intensive therapeutic regimens. Texture analysis is emerging as a powerful tool, with an increasing number of published studies, for the quantitative assessment of tumor heterogeneity by analyzing the distribution and relationship of pixel or voxel grey levels in the image [22,23]. In particular, the heterogeneity of FDG uptake in primary lung tumors has been evaluated by taking into account a number of texture parameters, sometimes combined in statistical models [24,25]. Lovinfosse et al. [26] studied 63 NSCLC patients in stage I who underwent 18F-FDG PET/CT and were then treated by stereotactic body radiation therapy. Dissimilarity, a second-order feature of texture analysis that describes the local variation of the grey level of voxel pairs in an image, was found to be a strong and independent predictor of OS, since the higher the dissimilarity the better the OS. Moreover, survival analysis by the Kaplan-Meier method showed that patients with dissimilarity lower than or equal to the cutoff level had a higher risk of recurrence as compared to patients having dissimilarity higher than the threshold. Therefore, despite the more sophisticated calculation of dissimilarity, the behavior of this parameter is in agreement with the findings obtained with CoV. Similarly, coarseness, a higher-order texture feature that indicates the grey level difference between a central voxel and its neighborhood, was evaluated in lung cancer patients who were candidates for chemoradiotherapy and underwent 18F-FDG PET/CT before treatment [13]. In this study, a high coarseness, i.e., a relatively uniform grey level in a ROI drawn around a primary lung tumor [23], was associated with an increased risk of progression and death. These findings were again in agreement with the behavior of CoV, since a low CoV value is indicative of a higher homogeneity of the glycolytic phenotype. Furthermore, significantly greater pre-treatment CoV values were found in patients with locally advanced NSCLC who responded to treatment compared with non-responders [27]. In another study, a higher CoV value of primary NSCLC in newly diagnosed patients with clinically suspected N2 disease predicted the presence of lymph node metastases at histopathological examination [28]. The latter results are apparently in contrast with our findings, since a high CoV in our study is associated with longer survival. This apparent discrepancy can be explained by the fact that CoV is directly correlated with SUVmax and SUVmean, both of which are indices of tumor aggressiveness. Considering other types of cancer, high CoV values were correlated with longer PFS in patients with locally advanced rectal cancer [29].
In addition to its prognostic value, CoV has also been used to discriminate metastatic from normal regional lymph nodes in NSCLC patients. In fact, significantly higher CoV values were found in involved lymph nodes as compared to normal lymph nodes, and these observations may again be ascribed to its correlation with SUVmax and SUVmean [30].
Texture analysis, by generating a large set of data-driven information, often lacks biological correlates, and radiomic features can be predictive of a good or poor prognosis without a real understanding of their biological meaning. In addition, the biological interpretation of a set of radiomic features may vary depending on the tracer used. In the case of 18F-FDG, the radiomic features reflect the local and regional heterogeneity of the glycolytic phenotype. When analyzing the uptake of a radioligand, such as a 68Ga-labeled somatostatin analog, these features reflect the heterogeneity of receptor expression [31], and the higher its heterogeneity, the worse the response to receptor-targeted therapy. Similarly, if texture analysis is focused on the expression of a differentiation marker in a tumor, a higher local and regional variation of its expression can be associated with more aggressive disease and a worse prognosis [32].
Several attempts have been made to find relationships between radiomic signatures and clinical findings [33], genomic profiles [34][35][36], or pathological correlates [37,38] and, although these studies provided many biological clues for the interpretation of radiomic features, evidence of their association with specific molecular processes and pathways remains elusive. At present, high expectations rest on the analysis of single-cell genomics, proteomics and transcriptomics of tumor samples that have been subjected to radiomic analysis. The biological validation of radiomic features in these studies can lead to widespread use of these methods, based on a better comprehension of their meaning [39].
Conclusions
Our study shows that the coefficient of variation is an independent prognostic factor for predicting survival in NSCLC patients. This simple first-order parameter can be easily interpreted, thus providing information on the variability of the glycolytic phenotype in primary and metastatic lesions. The biological meaning of CoV is different from, but equally important as, that of MTV TOT, which represents the whole-body tumor burden. Therefore, the combination of both parameters may improve the risk stratification of NSCLC patients, allowing them to receive more personalized therapeutic approaches.
"year": 2023,
"sha1": "b543834bbce401d28355680f4e50b4448fda3a46",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "e6d46e1fc395f796d0a2db1c87ff6996b3a6524d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233277992 | pes2o/s2orc | v3-fos-license | Variation in grain-size characteristics of simulated shrubs as a novel sand-barrier in a wind tunnel experiment
Sand transport is the main manifestation of wind-blown sand damage in arid and semiarid regions globally, and it is a huge challenge to stabilize mobile sandy lands and change them into stable, productive ecosystems. The establishment of simulated shrubs, a novel type of sand-barrier, is one of the most effective measures to address these difficulties. To clarify the role of simulated shrubs in ecological restoration, it will be greatly helpful to incorporate the shelter device proposed in the present work into landscape models of aeolian soil transport and to optimize the parameters associated with the sand-barrier characteristics for aeolian soil stabilization at the field scale. A series of wind tunnel experiments was conducted to analyze the variations in soil grain-size around simulated shrubs with different spatial configurations, row spaces and net wind speeds. Further, the soil grain-size parameters were calculated by the classic method proposed by Folk and Ward to clarify the change in soil particles resulting from the blocking effects. The average grain-size content for simulated shrubs with different spatial configurations, row spaces and net wind speeds was dominated by medium sand and fine sand, whose total percentage exceeded 90%. Moreover, the sand deposition around simulated shrubs with different spatial configurations increased with increasing wind speed. The average sand deposition of spindle-shaped simulated shrubs at 17.5 × 17.5 cm spacing and of broom-shaped simulated shrubs at 17.5 × 26.25 cm spacing was the least under the different net wind speeds. The effects of row spaces on the average grain-size parameters increased with increasing net wind speed. By calculating the correct characteristics of the specific shelter device proposed in the present work, all of these findings suggest that the application of simulated shrubs will be an important component in further extending ecological engineering projects in arid and semiarid regions.
Introduction
Research on reducing wind-blown sand has gained increasing worldwide attention as a response to land desertification and climate change. 1,2 Many sand-control engineering techniques (e.g., straw-checkerboard, stone-checkerboard, hole plated-type, Salix psammophila, sand fence, HDPE (high-density polyethylene), and clay sand-barriers) have been developed to protect oil field installations, roads, and living facilities from burial by blowing sand. Long-term field application has shown that traditional sand-barriers commonly suffer from short service life, weak durability, poor effectiveness, serious pollution, and other limitations. The simulated shrubs used in this study, by contrast, offer a long service life, recyclable material, and strong performance in managing near-surface sand flow. Moreover, the simulated shrubs have a three-dimensional, flexible, ready-to-use design that breaks through the bottleneck of complicated installation and poor field performance associated with traditional sand-barriers, and they provide an attractive visual effect while playing a key role in sand fixation. [3][4][5][6][7]

Field measurements, wind tunnel experiments, and numerical simulations have all contributed to a better quantitative understanding of the turbulent wind field over sand-barriers erected to stabilize aeolian soil, and soil grain-size is considered one of the major parameters describing the development of aeolian soil. [8][9][10] Grain-size strongly affects aeolian particle transport, making grain-size analysis an important tool of aeolian research. [11][12][13] Grain-size parameters and sorting variations of sediments can provide important information about aeolian depositional processes and the associated environment. At present, 9 the horizontal and vertical characteristics of soil grain-size are the two main research aspects of aeolian soil. 14,15 Finer crest, coarser crest, and no difference are three models used to describe mean grain-size patterns for desert sediments. Lancaster 16 and Dong et al. 17 found that the mean grain-size and sorting became finer as transport distance increased. However, Qian et al. 18 found that the mean grain-size did not change with distance in the Badain Jaran Desert, and Chen 19 found no relationship between soil grain-size and transport distance. Moreover, Zhang et al. 20 found that the grain-size distribution in barchan dune fields can span a broad range of particle sizes, from fines to gravel. Findings on soil grain-size have therefore not yet converged, and the topic deserves further study.

The soil grain-size distribution and protective effect of sand-barriers depend on their geometric design, covering height, length, width, porosity, opening size, hole distribution, row spaces, etc. 21,22 Among these factors, geometric design (spatial configuration) and row space are the main structural characteristics influencing the protection efficiency of sand-barriers, and they are the factors on which the present work focuses. [23][24][25][26] Extensive research with encouraging results has addressed many kinds of sand-barriers, including soil grain-size variation, sediment transport, and protective effectiveness. Yet for simulated shrubs, as a novel sand-barrier, little is known about the sand deposition around shrubs with different spatial configurations and row spaces.
Therefore, based on a series of wind tunnel experiments, the variations in soil grain-size around simulated shrubs with different spatial configurations, row spaces, and net wind speeds were studied in detail to optimize the sand-barrier parameters that yield the most effective protection for aeolian soil stabilization. By establishing the appropriate characteristics of the specific shelter device proposed in the present work, these findings suggest that the application of simulated shrubs will be an important component in further extending ecological engineering projects in arid and semiarid regions.
Set-up of wind tunnel experiment
The wind tunnel experiment was conducted in the Key Laboratory of Desert and Desertification of the Chinese Academy of Sciences, Lanzhou, China. The wind tunnel consists of six sections: (1) air inflow, (2) impeller, (3) flow stabilization, (4) flow contraction, (5) test section, and (6) outflow diffusion. The wind tunnel is 37.78 m long in total, and the test section measures 16.23 m (length) × 1 m (width) × 0.6 m (height). It is a non-circulating blow-type wind tunnel with wind speeds adjustable from 1 to 40 m/s (turbulence intensity <0.4%). The thickness of the boundary layer in the test section exceeds 120 mm. The arrangement of simulated shrubs and instruments in the test section of the wind tunnel is shown in Figure 1. With the continuously adjustable inlet wind speed, the soil grain-size and sand deposition of simulated shrubs with different spatial configurations could be studied under different row spaces and net wind speeds.
Structure of simulated shrubs
Before the wind tunnel experiment, the height, canopy dimension, and canopy porosity of annual Nitraria tangutorum in the field were measured. As shown in Table 1, the canopy dimension was calculated as the long diameter × width diameter × leaf thickness. 27 The optical porosity of the canopy was obtained using ERDAS IMAGINE 9.2 and then analyzed by an unsupervised classification of canopy photographs. Accordingly, the simulated shrubs used in this study were built at a scale of 1:4 (simulated shrubs:field shrubs), with the same geometric morphology and canopy porosity as the field shrubs (Nitraria tangutorum).
The simulated shrubs consist of a new material polymerized from anti-aging polymer compounds. Compared with the materials of traditional sand-barriers, the service life of the simulated shrubs exceeds 15 years. To keep the simulated shrubs upright under different net wind speeds, they are built on iron wires wrapped in plastic and were fixed in the wooden board of the wind tunnel test section. The overall height of a simulated shrub is 22 cm, of which 17.5 cm is above the wooden board and 4.5 cm below it. Each simulated shrub has 8-10 main branches, and each main branch carries 10-15 flat, obovate leaves 3 cm long, 1.5 cm wide, and 0.1-0.2 mm thick. Based on the features of the branches and leaves of Nitraria tangutorum in the field, three spatial configurations were made: spindle-shaped, broom-shaped, and hemisphere-shaped. The simulated shrubs with different spatial configurations (left) and a schematic diagram of simulated shrubs fixed in the test section of the wind tunnel (right) are shown in Figure 2. A test section without simulated shrubs was used as a control (CK) to clarify the sand-blocking effects of simulated shrubs with different spatial configurations, row spaces, and net wind speeds. 29,30 To confirm dynamic similarity in the wind tunnel experiment, the Reynolds number (Re) was calculated using the method proposed by Wu and Yang. 29 The calculated Re is 6.9 × 10⁴ to 12 × 10⁴, which means that the self-similarity requirement of a fully turbulent flow environment was achieved.
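As a quick plausibility check on the quoted range, the Reynolds number can be recomputed directly. The sketch below is illustrative only: the characteristic length (taken here as the ~0.12 m boundary-layer thickness) and the kinematic viscosity of air are assumptions, since the exact formulation follows Wu and Yang. 29

```python
# Rough sanity check of the Reynolds numbers quoted above.
# Assumptions (not stated explicitly in the text): Re = U * L / nu, with the
# boundary-layer thickness (~0.12 m) as characteristic length L and
# nu ~ 1.46e-5 m^2/s for air.

NU_AIR = 1.46e-5   # kinematic viscosity of air, m^2/s (approximate)
L_CHAR = 0.12      # characteristic length, m (assumed: boundary-layer thickness)

for u in (8.0, 12.0, 16.0):        # net wind speeds used in the experiment, m/s
    re = u * L_CHAR / NU_AIR
    print(f"U = {u:4.1f} m/s  ->  Re = {re:.2e}")
# Values on the order of 1e5 are broadly consistent with the reported
# 6.9e4 - 1.2e5 range, i.e., a fully turbulent, self-similar flow regime.
```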
Analysis of soil grain-size
Air drying, sieving of impurities, and desalting of the sediment samples were conducted in the Key Open Laboratory of the National Forestry Administration for the Protection and Cultivation of Biological Resources in Sandy Land of Inner Mongolia Agricultural University. First, the soil samples were weighed with an electronic scale accurate to 0.01 g. Next, the samples were passed through a shaker with sieves ranging from −2Φ to 6.64Φ (10-4000 μm) at intervals of 1/3Φ, 9 and the results were expressed as weight percentages. Then, 31,32 each 5 g soil sample was fully heated after adding 10 ml hydrogen peroxide and 10 ml hydrochloric acid, and distilled water was used to completely remove carbonate from the samples. After 24 h, the pH value was tested repeatedly until it lay between 6.5 and 7.0. Finally, 33 the soil grain-size distributions were measured using a Malvern MasterSizer 3000 (Malvern Instruments Ltd., Malvern, UK) combined with a Hydro LV large-capacity dispersion unit. 34 The measurement accuracy is 0.6%. Each soil sample was measured three times and the arithmetic mean was used.
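The grain-size parameters reported below follow the graphic (percentile-based) method of Folk and Ward. A minimal sketch of that calculation is given here; the function and the example distribution are illustrative, not the actual sieve data from this study.

```python
# Minimal sketch of the Folk and Ward (1957) graphic method.
# Input: sieve midpoints in phi units and weight percentages per fraction.
import numpy as np

def folk_ward(phi, weight_pct):
    """Graphic mean, sorting, skewness, kurtosis from a phi-scale distribution."""
    order = np.argsort(phi)
    phi, w = np.asarray(phi)[order], np.asarray(weight_pct)[order]
    cum = np.cumsum(w) / np.sum(w) * 100.0           # cumulative weight %
    # phi value at a given cumulative percentile (linear interpolation)
    p = lambda pct: np.interp(pct, cum, phi)
    p5, p16, p25, p50, p75, p84, p95 = (p(x) for x in (5, 16, 25, 50, 75, 84, 95))
    mean = (p16 + p50 + p84) / 3.0
    sorting = (p84 - p16) / 4.0 + (p95 - p5) / 6.6
    skew = ((p16 + p84 - 2 * p50) / (2 * (p84 - p16))
            + (p5 + p95 - 2 * p50) / (2 * (p95 - p5)))
    kurt = (p95 - p5) / (2.44 * (p75 - p25))
    return mean, sorting, skew, kurt

# Illustrative sample dominated by medium (1-2 phi) and fine (2-3.32 phi) sand:
phi_mid = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
weights = [3.0, 20.0, 35.0, 25.0, 10.0, 5.0, 2.0]
print(folk_ward(phi_mid, weights))
```

Because the method needs only seven percentiles of the cumulative curve, it is robust to the exact sieve spacing, which is why the 1/3Φ interval used above is more than sufficient.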
Results and analysis
The average percentage content of soil grain-size

Figure 3 shows the average soil grain-size content for simulated shrubs of different spatial configurations (hemisphere-shaped, spindle-shaped, and broom-shaped), row spaces (17.5 × 17.5 cm, 17.5 × 26.25 cm, and 17.5 × 35 cm), and net wind speeds (8, 12, and 16 m/s). The average soil grain-size content was dominated by medium sand (250-500 μm), followed by fine sand (100-250 μm); the other size fractions were present in small and nearly identical amounts. Medium sand and fine sand together exceeded 90%, while the other grain-size fractions accounted for less than 10%. Moreover, clay was the smallest fraction, varying between 0.01% and 0.05%. All in all, the average soil grain-size content varied little among the simulated shrubs with different spatial configurations, row spaces, and net wind speeds, and there was likewise little variation in mean grain-size among them.
Distribution of grain-size parameters
The sorting coefficient ranged from 0.4Φ to 1.1Φ, indicating sorting from well sorted to moderately sorted. The sorting coefficients of the spindle-shaped simulated shrubs departed somewhat from the others, but overall there was little difference among the simulated shrubs with different spatial configurations, row spaces, and net wind speeds.
The skewness values ranged from 0.08 to 0.43, representing a skew trend from symmetrical to extremely positive. The skewness values above 4 cm for the spindle-shaped simulated shrub at 17.5 × 17.5 cm under 16 m/s were the highest, and the grain-size distribution was positively skewed. There was also an obvious change for the spindle-shaped simulated shrubs at 17.5 × 35 cm below 8 cm. The skewness values for the other simulated shrubs were almost the same and followed a nearly straight line.
The kurtosis coefficient ranged from 0.9 to 1.7, representing distributions from medium to very narrow and sharp. The kurtosis coefficients for the spindle-shaped simulated shrubs at 17.5 × 35 cm below 8 cm and at 17.5 × 17.5 cm under 16 m/s below 4 cm differed from those of the other simulated shrubs. Otherwise, there was little difference among the simulated shrubs of different spatial configurations, row spaces, and net wind speeds.
Comparison with widely used straw-checkerboard sand-barriers
Vegetation restoration is one of the important sand-control approaches and has proved highly effective in long-term application. 38,39 However, 40 vegetation restoration is almost infeasible for sand control in arid and semiarid regions because water is extremely limited. Against this background, our results show that the simulated shrubs proposed in this study have an obvious sand-fixation effect without any water, which can resolve this difficulty. To optimize the sand-barrier parameters that yield the most effective protection for aeolian soil stabilization, the variations in soil grain-size around simulated shrubs with different spatial configurations, row spaces, and net wind speeds were studied in detail. Our results show that the average grain-size content was dominated by medium sand and fine sand, which together exceeded 90% (Figure 3); the other grain-size fractions were almost identical and accounted for less than 10% (Figure 3). Moreover, the sand deposition of simulated shrubs of all spatial configurations increased with increasing wind speed (Figure 4).
The straw-checkerboard sand-barrier has been widely used to protect railways and post-mining landscapes in arid and semiarid regions of China since the 1950s, and its protective efficiency and soil grain-size effects have been studied extensively. The results of Guo et al. 41 reveal that fine soil particles increased greatly in the straw-checkerboard area. Similarly, Li et al. 34 report that after sand stabilization by straw-checkerboards and revegetation, the proportions of clay and silt changed remarkably over time: silt and clay content increased from 1.00% ± 0.07% and 0.88% ± 0.44% at the beginning to 24.60% ± 1.57% and 11.00% ± 1.11% at the end stage, respectively. These results are consistent with our finding that simulated shrubs with the same geometric configuration as field plants, as a novel sand-barrier, also contribute substantially to the accumulation of fine-grained material. Most importantly, 34 soil crusts can form on the soil surface as a result of the continuous deposition of fine particles, and thick soil crusts can significantly increase surface stability. However, 2 in terms of installation labor and material consumption, straw-checkerboard sand-barriers have proved expensive, and their effect is poor because they are rapidly buried by sand. In contrast, simulated shrubs can become an important part of extended ecological engineering projects in arid and semiarid regions, and their contribution deserves increasing attention.
Sediment grain-size parameters
The transport of sand particles by wind is directly determined by grain-size and wind intensity. 42 The grain-size parameters shown in Figure 7 indicate obvious differences in mean grain-size, sorting, skewness, and kurtosis at different heights of sand deposition. In Williams's 43 studies, sorting values decreased with increasing height, while the vertical variation of skewness for symmetrically distributed sands was very slight. However, Li et al. 44 found that sorting values increased with height, and Van 45 reported no obvious vertical variation in the standard deviation, skewness, or kurtosis of collected sands. The latter is consistent with our finding that the skewness, kurtosis, mean grain-size, and sorting values of the simulated shrubs were almost the same and followed a nearly straight line, and that mean grain-size varied little among the simulated shrubs with different spatial configurations, row spaces, and net wind speeds. More specifically, 46,47 the elasticity of surface properties affects particle rebound after striking the bed. In this study, the branches of the simulated shrubs are made of iron wires wrapped in plastic, which is probably one reason why the soil grain-size distribution around the simulated shrubs differed from previously reported results. Because this study is based on a wind tunnel experiment, field measurements of simulated shrubs should be pursued in future work.
Sand deposition resulting from airflow field variation

Ma et al. 27 showed that the blocking effect of the windbreak canopy deflects part of the approaching flow upward over the windbreaks, while the remaining airflow is forced through them and can be distinguished as "bleeding flow" at canopy height and "through-flow" beneath the canopy. This explains our result in Figure 4, in which the sand deposition of the different spatial configurations first increased and then decreased with sand-collector height; moreover, sand deposition above 10 cm increased markedly for all spatial configurations. Ma et al. 27 further reported that the speed of the bleeding flow decreases markedly on the near windward side: owing to the discontinuous blocking of the plant rows, the bleeding-flow speed decreases in a fluctuating way within the canopy and reaches a minimum on the leeward side. Moreover, 7 incoming velocity and turbulence intensity can influence the shelter and/or trapping effect of a fence, but that influence is generally secondary to fence geometry. Based on the features of the branches and leaves of Nitraria tangutorum in the field, three spatial configurations (spindle-shaped, broom-shaped, and hemisphere-shaped) were made and used in this study. These three configurations were therefore highly effective at trapping sand compared with sand-barriers of a single configuration, and deposition occurred around virtually all simulated shrubs.
Factors affecting sand-retaining and protective efficiency of sand-barriers
Dong et al. 48 described the evolution patterns of the vortex zone of a vertically-holed sand fence and showed that fence porosity is the most important factor: with all else equal, there is a porosity that provides an optimal sheltering effect over the shelter distance. The optimal porosity proposed by Dong et al. 48 is around 0.2 or 0.3, corresponding to a critical porosity above which bleed flow dominates and below which reversed flow becomes significant. Qu et al. 2 showed that the key to sand protection with checkerboard sand-barriers is the development of a stable concave surface within the grid cells; they reported that the ratio of the cell side length to the depth of wind erosion was about 1:7 in 1.0 × 1.0 m sand-barriers, versus about 1:9 in 1.5 × 1.5 m and 2.0 × 2.0 m barriers. The protective effects of similarly sized checkerboard sand-barriers also differ under different topographic conditions. Different kinds of sand-barriers are therefore governed by different factors.
As a novel sand-barrier, simulated shrubs lack systematic, detailed studies of the factors affecting their sand-retaining and protective efficiency. Our findings show that spatial configuration and row space are important factors affecting the sand-retaining efficiency of simulated shrubs. As research on protection efficiency deepens, exploring the optimal characteristics of sand-barriers has become a direction for our current and future work. Further studies will examine in depth the effects of simulated shrubs with different spatial configurations on wind-sand flow fields, to further optimize the spatial configuration of simulated shrubs and improve their sand-retaining effects.
Conclusions
Based on the wind tunnel experiment, we analyzed the variations in soil grain-size around simulated shrubs with different spatial configurations, row spaces, and net wind speeds. Our analysis revealed that the average grain-size content was dominated by medium sand (250-500 μm), followed by fine sand (100-250 μm); together they exceeded 90%, while the other fractions were almost identical and accounted for less than 10%. Moreover, we found that sand deposition around simulated shrubs of all spatial configurations increased with increasing wind speed, with no obvious difference among row spaces. Averaged over the tested net wind speeds, sand deposition was lowest for spindle-shaped simulated shrubs at 17.5 × 17.5 cm and broom-shaped simulated shrubs at 17.5 × 26.25 cm, although there was little variation among spatial configurations at 17.5 × 35 cm under 12 and 16 m/s. Furthermore, the soil grain-size parameters showed a mean grain-size of 0.6Φ to 2.1Φ (220-460 μm), dominated by fine and medium sand; a sorting coefficient from 0.4Φ to 1.1Φ (well sorted to moderately sorted); skewness values from 0.08 to 0.43 (symmetrical to extremely positive); and kurtosis coefficients from 0.9 to 1.7 (medium to very narrow and sharp distributions). In summary, the four grain-size parameters changed little among the simulated shrubs with different spatial configurations, row spaces, and net wind speeds, while the effect of row space on the average grain-size parameters increased with increasing wind speed.
As one of the key measures for controlling desertification in arid and semiarid regions, these quantitative findings suggest that the application of simulated shrubs will be an important component of efforts to further extend ecological engineering projects in arid and semiarid desert regions. [49][50][51] It would be interesting to incorporate the shelter device proposed in the present work into landscape models of aeolian soil transport and dune migration, with the aim of optimizing the windbreak parameters for aeolian soil stabilization at the field scale.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Xia Pan is a Ph.D. candidate at Inner Mongolia Agricultural University, China, and was a visiting Ph.D. student at Temple University, USA. Her main research is the application of GIS and statistical analysis to the environment.
"year": 2021,
"sha1": "568542444f8288c3d0c2f29d640a26f1b45d72a5",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/00368504211009368",
"oa_status": "GOLD",
"pdf_src": "Sage",
"pdf_hash": "60f198fed48868cc2606c43b349ab723944d27d2",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Baseline brain and behavioral factors distinguish adolescent substance initiators and non-initiators at follow-up
Background: Earlier substance use (SU) initiation is associated with greater risk for the development of SU disorders (SUDs), while delays in SU initiation are associated with a diminished risk for SUDs. Thus, identifying brain and behavioral factors that are markers of enhanced risk for earlier SU has major public health import. Heightened reward-sensitivity and risk-taking are two factors that confer risk for earlier SU.

Materials and methods: We characterized neural and behavioral factors associated with reward-sensitivity and risk-taking in substance-naïve adolescents (N = 70; 11.1-14.0 years), examining whether these factors differed as a function of subsequent SU initiation at 18- and 36-months follow-up. Adolescents completed a reward-related decision-making task while undergoing functional MRI. Measures of reward sensitivity (Behavioral Inhibition System-Behavioral Approach System; BIS-BAS), impulsive decision-making (delay discounting task), and SUD risk [Drug Use Screening Inventory, Revised (DUSI-R)] were collected. These metrics were compared for youth who did [Substance Initiators (SI); n = 27] and did not [Substance Non-initiators (SN); n = 43] initiate SU at follow-up.

Results: While SI and SN youth showed similar task-based risk-taking behavior, SI youth showed more variable patterns of activation in left insular cortex during high-risk selections, and left anterior cingulate cortex in response to rewarded outcomes. Groups displayed similar discounting behavior. SI participants scored higher on the DUSI-R and the BAS sub-scale.

Conclusion: Activation patterns in the insula and anterior cingulate cortex may serve as a biomarker for earlier SU initiation. Importantly, these brain regions are implicated in the development and experience of SUDs, suggesting differences in these regions prior to substance exposure.
Introduction
Adolescence is commonly characterized as a period of increased risk-taking coupled with heightened reward sensitivity (1,2). Risk-taking during this evolutionarily conserved developmental period may have positive outcomes (3), with exploratory behaviors allowing for adaptive risk-taking (4)(5)(6)(7), facilitating the achievement of key developmental milestones in preparation for the transition to adulthood (8). However, brain changes that condition adaptive risk-taking also render adolescents vulnerable to risk-taking that can lead to negative outcomes, including substance use (SU) (9).
Early SU initiation is associated with a constellation of other negative risk-taking behaviors and related adverse outcomes (10), including delinquency or criminal activity (11), risky sexual behavior (12,13), physical assault (14), accidental injury (15), and death (16). Given the potential for deleterious outcomes, SU among adolescents has been identified as a global health concern (17). Critically, while earlier SU initiation is associated with greater risk for development of lifetime SU disorders (SUDs) (18)(19)(20)(21)(22)(23)(24)(25)(26)(27), any delay in SU initiation decreases risk for development of SUDs (19,28). For instance, each year of delayed alcohol initiation is associated with a 5-9% decrease in risk for alcohol use disorder (20). Identifying factors that help us to understand, and ultimately predict, early initiation may be beneficial in targeting prevention efforts to delay SU onset.
Developmental neuroscience models offer potential explanations for increased risk-taking during adolescence that leads to SU initiation. The dual systems (29, 30), triadic (31, 32), and imbalance models (33,34), generally postulate that subcortical brain regions (e.g., ventral striatum, amygdalae) develop earlier than neocortical regions [e.g., prefrontal cortex (PFC)], and that these asynchronous maturational trajectories condition increased risk-taking in adolescence. Specifically, earlier development of brain structures associated with reward processing, relative to development of neocortical regions associated with cognitive control, generates an imbalance, whereby earlier-maturing reward-processing systems exert greater influence over behavior in adolescence. Development of the PFC and its functional networks, which continues throughout adolescence and into early adulthood (35,36), is associated with improvements in top-down control of behavior (37)(38)(39). This protracted course of development may render the PFC vulnerable to the impacts of abused substances during adolescence (40), with early SU potentially altering neurodevelopmental trajectories (41)(42)(43)(44) and, ultimately, adversely affecting adult neurobiology and behavior (9,45).
Deemphasizing the role of a subcortical-cortical "imbalance" in increased adolescent risk-taking is the Lifespan Wisdom Model (46). This model incorporates fuzzy-trace theory's conceptualization of risk-related decision making (47) and emphasizes the adaptive nature of increased risk-taking during adolescence. The Lifespan Wisdom Model posits that youth with pre-existing compromised cognitive control form a subset of adolescents who are vulnerable to those risk-taking behaviors with negative sequelae (e.g., addiction).
Given the proximal and distal negative outcomes associated with early SU, it is critical to understand behavioral and neural risk markers that precede initiation. Adolescents who initiate SU early often demonstrate pre-existing heightened impulsivity (48,49), sensation seeking (50), and reward sensitivity (51)(52)(53). Such traits are associated with poor emotion regulation (54), behavioral dyscontrol (53), and a relative imperviousness to punishment (55), along with increased susceptibility to negative peer influences (56).
It is key to understand not only these types of behavioral markers associated with increased risk for SU initiation, but also to understand the underlying neurobiology associated with this risk (57). Identifying neurobiological profiles of SU initiation will help identify neuroendophenotypes associated with risk for and protection from SUDs (58,59). And indeed, brain metrics may predict risk for psychopathology with greater specificity and sensitivity than behavioral measures alone (60).
Existing functional neuroimaging research examining neural predictors of SU initiation and/or SU escalation in adolescents reports differences in brain activity in regions implicated in reward processing, including striatum (49,(61)(62)(63), amygdala (49), and medial orbitofrontal cortex (64). These studies have not all examined SU broadly, however, with some characterizing initiation/escalation of alcohol only, without examining other substances (49,63). Additionally, few studies have prospectively investigated brain markers of SU initiation in early adolescent samples comprised only of youth who have not initiated SU. Characterizing neurobiology prior to substance initiation is particularly critical, given that substance exposure may alter neurobiology, and that such alterations challenge our ability to disentangle whether brain differences (for example, between those who have and have not initiated SU, or between those with and without SUDs) are an antecedent or a consequence of SU. Thus, the central aim of the current study was to comprehensively characterize demographic, behavioral, cognitive, and neural factors that may be associated with risk for substance initiation, including both alcohol and drugs, in a drug- and alcohol-naïve sample. We examined 70 SU-naïve early adolescents (aged 11.1-14.0 years) prospectively over 36 months. We compared those who did and did not report initiation of alcohol and/or drugs at follow-up on "baseline" demographic, behavioral, cognitive, and neural measures. We characterized behavioral and neural profiles associated with reward-sensitivity/risk-taking in relation to SU initiation. Adolescent participants completed measures probing reward sensitivity and risk aversion [Behavioral Inhibition System/Behavioral Activation System (BIS/BAS) Scales], and tasks to assess risk-taking and impulsivity in the context of rewards (Wheel of Fortune and delay discounting tasks, respectively). We predicted that at baseline, those who would go on to initiate SU at 18- or 36-months follow-up would demonstrate greater hedonic and behavioral responsivity to rewards, overvalue immediate rewards, and make riskier choices in a reward-related decision-making task compared to adolescents who remained SU-naïve throughout the study. Further, we predicted that, prior to SU onset, reward-based decision making would be associated with differences between subsequent SU initiators and non-initiators in brain regions implicated in decision-making under uncertainty [i.e., medial prefrontal cortex (65)(66)(67)(68)], and in the modulation of reward processing/sensitivity [i.e., ventral striatum and amygdalae (65,69)].
Study design
Participants were recruited as part of the Adolescent Development Study (ADS), a prospective longitudinal investigation of the neurodevelopmental precursors to and consequences of early SU initiation and escalation. Detailed information on ADS study methods and aims is presented elsewhere (70). Briefly, a total of 135 typically developing, SU-naïve early adolescents were recruited from the Metropolitan Washington D.C. region and followed longitudinally. Demographic, cognitive, behavioral, and imaging assessments were conducted at an initial ("baseline") visit and during two follow-up visits, at 18.4 (SD = 3.6) months (Wave 2) and 36.7 (SD = 4.4) months (Wave 3) after baseline. Imaging and behavioral data reported here were collected during the initial SU-naïve baseline assessment. Exclusionary criteria for the study included adolescent self-report of alcohol (>1 full drink of alcohol at any time) or, with the exception of nicotine, any SU prior to the initial visit; in utero exposure to alcohol or illicit drugs (parent-reported); a diagnosed neurodevelopmental disorder (e.g., autism spectrum disorder); left-handedness; a sibling of a current participant; history of head injury resulting in loss of consciousness >5 min; or MRI contraindication. The Georgetown University IRB approved all procedures, and written consent and assent were obtained from the parent and adolescent, respectively.
Participants
Of the 135 participants enrolled in the study, 70 adolescents aged 11.1-14.0 years [M = 12.7 years, SD = 0.66; female = 40 (57%)] were included in the analyses reported here. One enrolled participant was excluded due to a neurodevelopmental disorder. Participants were excluded from analyses due to missing or incomplete imaging data (n = 15) and/or excessive head motion during imaging (n = 24). Additionally, since a primary aim was to examine neural activation during risk-taking, participants who did not make any "high-reward/high-risk" selections during the Wheel of Fortune task (WOF; described below) were excluded from analyses (n = 4). Groups were defined based on SU status at follow-up, as detailed below. Participants for whom SU status could not be determined due to attrition or survey discrepancies (n = 21) were also excluded from analyses reported here. (Supplementary Table 1 provides a detailed summary of exclusions/inclusions.)
Family/Caregiver measures

Socioeconomic status index
A socioeconomic status (SES) index was calculated by averaging two standard scores (mean household income bracket before taxes and mean cumulative years of parental education) and re-standardizing the result to obtain a z-score distribution with a 0-centered mean and a standard deviation of 1 for the sample analyzed (N = 70) [method adapted from Manuck et al. (71)].
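A minimal sketch of this two-indicator index, assuming simple z-scoring of each indicator before averaging; the input values are hypothetical.

```python
# Minimal sketch of the SES index described above: average two z-scored
# indicators, then re-standardize the average. Data are hypothetical.
import numpy as np

def zscore(x):
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

income_bracket = np.array([4, 6, 5, 7, 3])     # hypothetical bracket codes
parent_edu_yrs = np.array([14, 18, 16, 20, 12])

ses = zscore((zscore(income_bracket) + zscore(parent_edu_yrs)) / 2.0)
print(ses.round(2))   # mean ~0, SD ~1 by construction
```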
Family history of substance use
History of alcohol and SU problems in biological relatives of participants was determined using a modified Family Tree Questionnaire (FTQ) (72), which was completed by the accompanying parent. The FTQ was modified to include drugs of abuse, reported separately from alcohol. Respondents were asked to report the alcohol/drug use history of first- and second-degree biological relatives of the enrolled adolescent as follows: 1 = never drank/never used, 2 = social drinker/occasional user, 3 = possible problem drinker/possible problem user, 4 = definite problem drinker/definite problem user. Respondents could also indicate that they did not know or did not remember. Each parent completed an FTQ reporting on the FH of his/her own biological relatives and provided information for the non-visiting parent's family, where possible. In the analyses reported here, positive FH (FH+) was defined as possible or definite problematic alcohol or drug use by either the mother or father of the adolescent; otherwise, the adolescent was considered FH negative (FH−).
Adolescent measures

SU initiation status
At baseline and at the Wave 2 and Wave 3 follow-up visits, adolescents completed two self-report surveys to determine SU status: the Tobacco Alcohol and Drug (TAD) survey and the Drug Use Screening Inventory Revised (DUSI-R) (73,74). The study-specific TAD included the alcohol and drug portion of the Semi-Structured Interview for the Genetics of Alcoholism (75) and asked about the use of substances, including tobacco, alcohol, and illicit drugs (i.e., marijuana, cocaine, methamphetamine, ecstasy, opiates, salvia, synthetic marijuana, inhalants, and illegally used prescription drugs), along with an open-ended "any other substances" question.
Adolescents also completed the DUSI-R, a survey with demonstrated psychometric validity (76)(77)(78) and reliability (79) for assessing SU and factors associated with risk for SUD later in adolescence. The DUSI-R includes 20 questions concerning use of specific substances (e.g., alcohol, marijuana, prescription painkillers, smoking tobacco, chewing tobacco) or substance classes (e.g., over the counter medications, tranquilizer pills, stimulants).
For the purposes of the analyses reported here, affirmative SU responses on both the TAD and the DUSI-R were used in determining SU status. Participants who reported SU on both the TAD and DUSI-R at either Wave 2 or Wave 3 follow-up were categorized as SU initiators (SI). Those who reported no SU on both the TAD and DUSI-R at both follow-up assessments were categorized as SU non-initiators (SN). As detailed above, participants for whom SU status could not be determined were excluded from analyses reported here.
As noted above, nicotine use reported at baseline was not considered exclusionary for the current study. Of importance, however, only two participants reported nicotine use, one in each of the SU groups (both reporting last use >30 days prior to the baseline visit).
DUSI-R absolute problem density (APD) score
In addition to questions concerning SU, the DUSI-R probes experiences and behaviors known to precede and co-occur with SU. The survey includes eight domains comprised of 159 yes-no items that are relevant for early adolescents: SU, behavior, health, social competence, psychiatric symptoms, school performance, family and peer relationships, and recreation (80). An absolute problem density (APD) score, which reflects overall risk for SU, is calculated by dividing the total number of "yes" questions by the total number of DUSI-R items. Here, group comparisons were conducted for the DUSI-R APD score only.
Delay discounting (DD) task
Adolescents completed the delay discounting (DD) task (81) outside of the scanner. The task was implemented in E-Prime 2.0. Participants were instructed to choose between receipt of a variable immediate reward (≤$10, in increments of $0.50) and receipt of a fixed $10 after a specified temporal delay (e.g., "Would you rather have $2 now, or $10 in 30 days?"). Discounting was assessed at six delays: 1, 2, 10, 30, 180, and 365 days. Participants were instructed to make their selections with care, as they would receive a reward (≤$10) based on a random selection of one of their choices (82).
Values for which the participant demonstrated equal preference for immediate versus delayed receipt (i.e., the "indifference point") were normalized to the fixed delayed reward value ($10) (83) and plotted against each delay. To adjust for unequal weighting of indifference points at longer delays (a limitation of conventional methods of calculating area under the discounting curve, AUC), while preserving the notion of subjective experience of time via delay scaling (an appeal of conventional AUC metrics), the data were log10-transformed [AUClogd (84)]. Values ranged from 0 to 1, with smaller AUClogd values representing steeper discounting and thus preference for the immediate (smaller, sooner) reward.
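A minimal sketch of the AUClogd computation as described, using trapezoidal integration over normalized log10 delays; the indifference points below are hypothetical.

```python
# Minimal sketch of the log10-scaled discounting AUC (AUClogd).
import numpy as np

delays = np.array([1, 2, 10, 30, 180, 365], dtype=float)   # days
indiff = np.array([9.5, 9.0, 8.0, 6.5, 4.0, 3.0])          # hypothetical $ of $10

x = np.log10(delays)
x = x / x.max()                 # normalize log-delays to [0, 1]
y = indiff / 10.0               # normalize to the fixed delayed reward

auc_logd = np.trapz(y, x)       # trapezoidal area under the discounting curve
print(f"AUClogd = {auc_logd:.3f}")   # smaller values = steeper discounting
```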
Behavioral Inhibitory System/Behavioral Activation System (BIS/BAS) Scale
Adolescent participants completed the BIS/BAS (85), a 20-item self-report measure answered on a 4-point Likert scale. The 7-question BIS scale probes behavioral and emotional responsivity to punishment. The BAS, in turn, comprises three sub-scales: Reward Responsiveness, Drive, and Fun Seeking. A higher BIS score reflects aversion to and avoidance of potential punishment, while higher BAS sub-scale scores reflect positive emotionality (Reward Responsiveness) and behavioral approach (Drive and Fun Seeking) in the context of potential rewards.
IQ and pubertal development measures
Full-scale IQ (FSIQ) was estimated using the Kaufman Brief Intelligence Test (KBIT), Second Edition (86). Adolescents completed the Pubertal Development Scale (PDS) (87, 88) as a proxy assessment of physical development via Tanner stage (87).
Wheel of Fortune (WOF) task
The WOF task was completed during functional neuroimaging. This well-validated paradigm has been used to probe the neural bases of reward responsivity and risky decision-making under conditions of probabilistic reward versus penalty in both adults (66,67,89,90) and adolescents (66,90,91). A modified version of this task was used in this study to probe reinforcing outcomes (i.e., winning or losing) (Figure 1; see Supplementary section 1.1 for further description of the WOF task). Participants were guided through an in-scanner practice (during the structural MRI scan) to ensure their understanding of how to perform the task. Prior to each run, participants were encouraged to maximize their hypothetical gains and/or exceed their previous total winnings. The task was implemented in E-Prime 2.0, and stimuli were presented via back-projection onto a screen viewed in a mirror mounted on the head coil. A slow event-related design was used, with temporal jitter provided by a variable inter-trial fixation of 2,500-10,000 ms based on a Poisson distribution.
Contrasts of interest for the selection and feedback phases were High-reward/risk > Low-reward/risk and Win > Lose, respectively. Behavioral data analyses considered the percentage of high-reward/risk selections and the average response times (RT) for high-reward/risk and low-reward/risk selections as well as the average RT across all selections.
Functional MRI data pre-processing
Image pre-processing and statistical analyses were carried out using SPM8 (http://www.fil.ion.ucl.ac.uk/spm). Pre-processing included correction for interleaved slice timing, realignment of all images to the mean fMRI image to correct for head motion artifacts between images, and co-registration of the realigned images to the anatomical MPRAGE. The MPRAGE was segmented and transformed into Montreal Neurological Institute (MNI) standard stereotactic space using non-linear warping. Lastly, these transformation parameters were applied to normalize the functional images into MNI space, and the data were spatially smoothed using a Gaussian kernel of 6 mm³ FWHM. A scrubbing algorithm utilizing frame-wise displacement was implemented to assess participant movement during the fMRI scans (92). Participants included in analyses demonstrated less than 1 mm displacement in fewer than 20% of their total volumes across all three runs of the task.
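A minimal sketch of the frame-wise displacement criterion, following the general form of the Power et al. metric cited above (92); the motion-parameter file name, its column order, and the 50 mm rotation radius are assumptions.

```python
# Minimal sketch of frame-wise displacement (FD) scrubbing.
import numpy as np

def framewise_displacement(motion, radius=50.0):
    """motion: (T, 6) array of 3 translations (mm) + 3 rotations (rad)."""
    m = motion.copy()
    m[:, 3:] *= radius                      # rotations -> arc length in mm
    return np.abs(np.diff(m, axis=0)).sum(axis=1)

motion = np.loadtxt("rp_run1.txt")          # SPM realignment parameters (assumed path)
fd = framewise_displacement(motion)
bad_fraction = np.mean(fd > 1.0)            # volumes exceeding the 1 mm threshold
print(f"{bad_fraction:.1%} of volumes exceed 1 mm FD")
# Participants were retained only if fewer than 20% of volumes across all
# three runs exceeded the threshold.
```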
Statistical analyses

Imaging data
First-level statistical analyses of imaging data included regressors encoding for trials during which the subject chose either the 10 or 30% probability (High-reward/risk) or the 70 or 90% probability (Low-reward/risk). Regressors of interest also included feedback trials on which subjects won (Win) or lost (Lose). Six translations and rotations modeling participant motion calculated during realignment were included as nuisance regressors.
Contrasts of interest examined whole-brain activation for high-reward/risk compared to low-reward/risk trials (High-reward/risk > Low-reward/risk), and winning versus losing outcomes (Win > Lose). Regressors were convolved with the canonical hemodynamic response function. A temporal high-pass filter of 128 s was applied to the data to eliminate low-frequency noise (e.g., MRI signal drift). First-level contrasts of interest were used in second-level analyses for comparisons between the SI and SN groups. The initial cluster-defining threshold was p < 0.001, with a cluster extent of 10 voxels (voxel size = 2.0 mm isotropic). Corrections for multiple comparisons were made using a cluster-level FWE threshold of p < 0.05. Macro-anatomical labels reported are based on peak coordinates and were assigned by the Harvard-Oxford Cortical/Subcortical Structural atlases (93)(94)(95)(96), supplemented with labels from the Atlas of the Human Brain, 4th edition (97).
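For illustration, the sketch below assembles an analogous first-level design matrix with nilearn rather than SPM8; the TR, onsets, durations, and condition names are hypothetical, and only the structure (canonical HRF, 128 s high-pass filter, six motion nuisance regressors, contrast weights) mirrors the description above.

```python
# Minimal sketch of a first-level design matrix analogous to the one
# described above. All timing values and condition names are hypothetical.
import numpy as np
import pandas as pd
from nilearn.glm.first_level import make_first_level_design_matrix

TR = 2.0                                   # assumed repetition time, s
frame_times = np.arange(200) * TR          # 200 volumes (illustrative)

events = pd.DataFrame({
    "onset":      [10.0, 35.5, 62.0, 90.0],          # hypothetical onsets, s
    "duration":   [3.0, 3.0, 1.5, 1.5],
    "trial_type": ["high_risk", "low_risk", "win", "lose"],
})
motion = np.random.randn(200, 6)           # stand-in for realignment parameters

design = make_first_level_design_matrix(
    frame_times, events,
    hrf_model="spm",                       # canonical HRF, as in the paper
    drift_model="cosine", high_pass=1.0 / 128,   # 128 s high-pass filter
    add_regs=motion,                       # six motion nuisance regressors
)
# Contrast High-reward/risk > Low-reward/risk as a weight vector:
contrast = (design.columns == "high_risk").astype(float) \
         - (design.columns == "low_risk").astype(float)
```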
Standard transformations of the above dependent variables did not correct their distributions; thus, between-group comparisons were performed using non-parametric Mann-Whitney U tests. Alpha was set at p = 0.05, and Bonferroni correction for multiple comparisons was applied where noted. With the exception of the group comparisons for DUSI-R APD, all statistical tests were two-tailed. We used a one-tailed test in comparing groups on this measure given a priori evidence of directionality [i.e., DUSI-R APD severity, reflected by higher scores, positively predicts SU (74)].
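A minimal sketch of the non-parametric comparison with Bonferroni correction; the data arrays and number of comparisons are hypothetical.

```python
# Minimal sketch of a Mann-Whitney U group comparison with Bonferroni
# correction. Scores below are hypothetical.
from scipy.stats import mannwhitneyu

si_scores = [0.31, 0.27, 0.40, 0.22, 0.35]   # hypothetical SI values
sn_scores = [0.18, 0.25, 0.20, 0.29, 0.15]   # hypothetical SN values

n_comparisons = 4                             # e.g., four BIS/BAS scales
u_stat, p = mannwhitneyu(si_scores, sn_scores, alternative="two-sided")
p_bonf = min(p * n_comparisons, 1.0)          # Bonferroni correction
print(f"U = {u_stat}, raw p = {p:.3f}, corrected p = {p_bonf:.3f}")
```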
Demographics
Substance initiator (SI) and substance non-initiator (SN) groups were similar in age, sex, PDS, SES, and race/ethnicity (Table 1). The SI group had a higher proportion of FH+ individuals. It is important to note that FH+ youth who initiate use early are at particularly heightened risk for problematic SU (103); thus, it is possible that FH+ youth in the SI group may be at dramatically increased risk for SUDs relative to FH+ youth in the SN group. Accordingly, in examining neural activation in SI and SN youth during reward-related decision making, we conducted post-hoc analyses that controlled for FH status (in addition to IQ). These results are reported in Supplementary Material Section 2.3, Supplementary Tables 7, 8, and Supplementary Figures 3, 4.
DUSI-R APD, BIS/BAS, and DD
A one-tailed independent samples t-test showed that adolescents in the SI group had significantly higher scores on the DUSI-R APD compared to the SN group [t(67) = −1.89, p = 0.03] (Table 2), suggestive of increased problematic behavior in domains predictive of a future SUD. Compared to the SN group, SI adolescents had significantly higher scores on the BAS Drive [t(68) = −2.6, p = 0.012] and Fun Seeking (U = 362.5, p = 0.008) scales, but did not differ for BAS Reward Responsiveness or discounting behavior (Table 2), indicating similarities in aspects of reward processing.
WOF task behavior
The groups made similar proportions of high-reward/risk selections (Z = 0.537, p = 0.70).
Functional MRI results
Compared to SN youth, SI adolescents demonstrated less activation in the left insula when selecting high-reward/risk versus low-reward/risk options. Additionally, when presented with winning versus losing feedback, SI adolescents showed less activation in the left anterior cingulate cortex (ACC). Between-group results are presented in Table 4 and Figure 2.
Exploratory analyses: Post-hoc tests of parameter estimates
Visual inspection of the parameter estimates in Figure 2 suggests that the significant between-group results for the contrasts of interest may be driven by differences in how the groups processed each trial type during selection and feedback. To probe these potential differences, exploratory tests were conducted; all tests were two-tailed. Follow-up independent-samples t-tests showed that the groups differed in activation for low-reward/risk selections and for losing feedback, as detailed in the Discussion.
Discussion
This study aimed to characterize "baseline" behavioral and neural profiles of reward-sensitivity/risk-taking that distinguished SU-naïve early adolescents who did versus did not report SU initiation at 18- and 36-months follow-up. We sought to address an important gap in the literature: characterizing these behavioral and neural profiles in early adolescents prior to substance exposure. The elucidation of such profiles prior to substance initiation is critical, as the examination of behavioral/neural factors after SU initiation limits the ability to disentangle factors that may be antecedents of SU from those that may be a consequence of SU.
Our hypotheses concerning behavioral and neural profiles of SI and SN youth were partially confirmed. SI and SN adolescents significantly differed on self-report measures of reward-sensitivity/risk-taking, including the BAS Drive and Fun Seeking scales and the DUSI-R absolute problem density score.
Behavioral profiles of reward-sensitivity/risk-taking
We predicted that SI youth would show greater sensitivity to potential rewards and lower aversion to potential punishments reflected by higher BAS and lower BIS scores. These hypotheses were partially confirmed. Groups did not differ for the BAS Reward Responsivity scale or the BIS. However, SI youth showed higher scores on the BAS Drive and Fun-Seeking scales. Elevated scores on these two BAS scales have been associated with low levels of harm avoidance (85) and problematic drug or alcohol use (104,105), including in adolescents (106)(107)(108)(109)(110). Further, these two scales were positively correlated with adolescent risky choices during a win-only (but not a lose-only) version of a WOF task (111). Thus, elevated scores on these BAS scales prior to SU may reflect a propensity toward affective and behavioral responsivity to rewarding stimuli in SI youth, potentially biasing these individuals toward greater risk-taking and decreased harm avoidance (112).
In contrast to the literature on the BAS, findings concerning associations between BIS and SU have been less consistent. BIS was negatively correlated with SU among adolescents (escalation of cannabis use) (113) and college students (amount and frequency of alcohol consumption) (114). However, BIS scores of adolescents aged 15-18 years at baseline did not predict substance misuse 2 years later (52), findings consistent with the current study.
Compared to their SN peers, SI adolescents also showed elevated DUSI-R absolute problem density (APD) scores. The APD score is a multi-dimensional construct, quantifying adolescent difficulties across health, psychosocial, and psychiatric domains associated with SUD (73). As such, elevated APD scores in SI adolescents may reflect increased relative risk for SU initiation. Importantly, however, 69% of SI youth would not be considered high risk according to the previously established cut-off score of 24 (74).

We predicted that at baseline, those who would go on to initiate SU at 18- or 36-months follow-up would overvalue immediate gratification on a delay discounting task. This hypothesis was not confirmed. SI and SN youth did not differ in performance on the DD task, suggesting a similar preference for immediate rewards under the current task parameters. Unlike the BIS/BAS and DUSI-R, which query real-world preference and situationally-based behavior, the laboratory DD task (like the WOF) may lack the sensitivity to detect group differences prior to SU initiation (115).
We further predicted that those who went on to initiate substances would make riskier choices in a reward-related decision-making task compared to adolescents who remained SU-naïve throughout the study. However, we found that task-based risk-taking was similar between groups, and across all participants the selection of high-reward/risk options was accompanied by longer deliberation than low-reward/risk options, an effect consistent with previous studies (67,89,111,116).
Neural profiles of reward-sensitivity/risk-taking
We hypothesized that SI and SN youth would show baseline differences in brain activity during the WOF task in regions associated with decision-making under uncertainty and in reward processing/sensitivity, and more specifically, in the medial prefrontal cortex and the ventral striatum and amygdalae, respectively. SI and SN youth did not show differences in ventral striatum or amygdalae, as predicted; however, the SI and SN groups differed in the neural activity underlying risky selections and the processing of rewarded outcomes despite similar task-based behavior.

Figure 2. Between-group results for which substance non-initiator (SN) participants demonstrate increased activation relative to substance initiator (SI) adolescents. Interaction charts depict mean parameter estimates (error bars represent standard errors) for (A) High-reward/risk > Low-reward/risk, left insula; and (B) Win > Lose, left anterior cingulate cortex (ACC). FSIQ as covariate of no interest. Initial cluster-defining threshold = p < 0.001, k = 10 voxels. Results survive FWE cluster-correction at p < 0.05.
SN adolescents demonstrated significantly greater left insular cortex response to High-Reward/Risk versus Low-Reward/Risk trials (Figure 2A). Post-hoc exploratory analyses indicated that during risk-taking SI and SN groups differed in patterns of activation depending on whether they chose a highor low-risk option, suggesting that the marked difference in responsivity to Low-Reward/Risk trials drives the significant between-group result for the High-reward/risk > Lowreward/risk contrast. Within-group exploratory analyses revealed that only SN adolescents showed significantly different activation for High-Reward/Risk compared to Low-Reward/Risk trials, while brain response to the two trial types was similar within the SI group.
During decision-making, the insular cortex plays a role in refocusing attention based on salience, evaluating risk, inhibiting action, and processing outcomes (112,(117)(118)(119). Attenuated insula activity is associated with increased real-world risk-taking among adolescents (120,121), and aberrant insula engagement in processing salient stimuli is observed in individuals with addiction (122)(123)(124). Reduced activation of the anterior insula has been found to play an important role in adolescent risky decision making in comparison to adults and is linked to more emotionally driven decisions (125). Taken together, our results may be indicative of relative immaturity in the SI group in a region that plays an important role in evaluating degree of risk (126), potentially reflecting a neuroendophenotypic vulnerability to the early initiation of substances of abuse in these adolescents.

SN youth showed greater left ACC response to winning outcomes. Parameter plots suggested that while processing outcomes related to gain or loss, SN and SI adolescents demonstrated differing patterns of responses in this region (Figure 2B). Post-hoc exploratory analyses revealed that the groups significantly differed for activation during losing, but not winning, feedback. Further, only SI youth showed significantly different activation for Lose compared to Win trials. Increased ACC activity has previously been associated with processing gains (relative to no gains) in a gambling task in adolescents (116). Individuals with established SUDs show impairments in decision-making (127), altered ACC structure (128), and differences in brain activity during risk-taking (129). Specifically, individuals with SUDs display not only greater substance-related cue-induced ACC activity during active use (130), but also blunted ACC activity during decision-making while abstinent (131), an effect which predicts craving, length of time to relapse, and relapse severity (132). Importantly, some of these differences, including alterations in ACC neuroanatomy (133), may be evident prior to the development of AUD/SUD and may reflect increased vulnerability to SUDs (e.g., youth with a positive family history of alcoholism demonstrate hyper-activation during risk-taking compared to youth with a negative family history) (134).
Both the insula and ACC are implicated in reward-related decision-making (66,89,120,126,135,136), and as hubs in the salience network, the anterior insula and ACC (137,138) integrate automatic, bottom-up detection of relevant internal and external stimuli with cognitive, top-down processing (139). The salience network is implicated not only in altered cue-reactivity among individuals with SUD (122,123), but may also play a contributory etiological role in early SU and the transition to SUD (140). While others have established that adults with SUD demonstrate aberrant patterns of insular and cingulate activity during risky decision-making (141) and that reduced insular activity during risk-related processing is predictive of relapse (142), our results suggest that variability in insular and ACC activity is present in individuals at risk for SUD prior to substance exposure.
The exploratory results intriguingly suggest that SI youth may be more neurally sensitive to the distinction between wins and losses, though this remains to be empirically tested in planned comparisons correcting for multiple comparisons. It is also possible that steep hypoactivation of the ACC in SI adolescents in the context of rewarding outcomes indicates an increased threshold for rewarding stimuli (consistent with the elevated BAS Fun Seeking scores in SI youth). These group differences may reflect differences in outcome monitoring and processing (143) and awareness of outcomes (144), which serve in part to guide behavior (145,146). A notable consistency between the present findings and previous studies is that youth with differential risk for SU demonstrate similar task performance but differences in patterns of brain activation across a variety of tasks (147,148). Thus, early disruptions in PFC function, including the ACC, may contribute to a constellation of impairments, including aberrant response inhibition and salience assignment (149), and ultimately to real-world risky decision making.
Strengths and limitations
An important strength of the current study is the stringent inclusion criteria implemented to ensure the SU-naïve status of youth at initial assessment, and the requirement of convergent responses on two SU measures (DUSI-R and TAD) to classify youth at follow-up. In contrast, previous studies examining "SU-naïve" youth include those who report "little to no" alcohol use (150), or who do not report "significant" (65, 69, 151) or "heavy" alcohol or drug use (152). Others rely on urine drug screening at scanning time (91), which, for many drugs, captures only recent use (153) and is not reflective of patterns of use over time.
Another strength of the current study is the narrow age range at baseline (11-13-year-olds). While previous studies enrolled participants with a more distributed age range (66, 91, 151), we restricted eligibility at enrollment to a much smaller range in an effort to capture information regarding early initiation and to minimize potential age-related confounds in neurodevelopment. Finally, the current sample of adolescents was well-characterized using a battery capturing a variety of factors presumed to confer risk for or resilience to early SU, including preference for immediate gratification (DD), affective and behavioral responsivity to rewards and punishment (BIS/BAS), and multidimensional risk for SU problems (DUSI-R). Additionally, follow-up neuroimaging analyses controlled for an important factor associated with SUD risk (family history status), and results were largely similar to those reported in the main analyses.
On the other hand, by analyzing the selection phase of the WOF in a version of the task that consistently coupled high reward with low probability and low reward with high probability, we were unable to dissociate between patterns of activation associated with reward versus risk. Although this limitation is not unique to the current study (154), it is unclear here whether between-group differences in insular cortex were driven by reward sensitivity or risky decision-making. Future inclusion of choices with equal probability of high/low reward (i.e., 50/50 wheels), as in the "classic" WOF task, will permit testing the relative contributions of reward magnitude independent of perceived risk (67). It is important to note, however, that estimation of reward value and tendency toward risk may not be entirely separable outside of the laboratory either; decisions with greater reward potential, whether adaptive (e.g., approaching a classmate to initiate a conversation) or maladaptive (e.g., underage alcohol consumption), are inherently accompanied by risk (e.g., social consequences such as peer rejection, or adverse physiological impacts of alcohol consumption and parental or school punishment for drinking).
Although the current study implemented stringent criteria to ensure the SU-naïve status of youth at baseline, nicotine use was not exclusionary. Given associations of nicotine exposure in adolescence with alterations in brain development (155) and of nicotine use with the initiation and use of other substances (156, 157), the inclusion of participants who reported past nicotine use during the initial study visit reflects a limitation of the current study. Mitigating the impact of this limitation on the study's findings, however, is that only two of the 70 participants reported nicotine use at baseline; one of these participants went on to initiate other substances, while the other did not.
The current study identified youth who initiated at different ages (initiation at approximately 18- vs. 36-month follow-up), which may also limit the interpretation of our outcomes. Although earlier and later initiators were similar in demographic, physical, and cognitive characteristics, as well as task-based behavior and BIS/BAS scores, those who reported earlier initiation (approximately 18-month follow-up) scored higher on the DUSI-APD, indicative of greater risk in domains that precede or co-occur with problematic SU. Due to concerns regarding statistical power, we were unable to compare SI subgroups on brain activation during the WOF task. Future studies should recruit greater numbers of participants who are likely to be assigned to one of these two SU subgroups to prospectively examine group differences in neural activation among individuals who initiate at different stages of adolescence.
Relatedly, the current study is unable to determine pathways to SU escalation and SUD. SU initiation itself, while necessary, is not sufficient to promote continued or escalated use or the eventual entrenchment of pathways that might be specific to SUD risk. The elucidation of factors that give rise to such pathways, including early brain biomarkers, may provide a much richer understanding of how brain functioning in SU-naïve adolescents portends subsequent life course outcomes.
Implications and directions for future research
Overall, our findings are consistent with the premise that differences in regional PFC activity may occur prior to SU initiation and thus may confer vulnerability to SUDs (149, 158). A novel finding indicates that variability in activity in the ACC and insula, key regions known to support reward- and risk-related decision-making, may distinguish SU-naïve early adolescents who initiate SU earlier from those who remain abstinent. The findings reported here furthermore lend support to models suggesting that divergent neurodevelopmental trajectories may precede SU, and point to the potential promise of developing interventions that target these key brain regions, and the behavioral functions they support, before SU initiation in order to disrupt maladaptive and/or promote more adaptive trajectories (159).
Data availability statement
The data analyzed for this study are not readily available due to the inclusion of sensitive material about adolescent participants, including linkages between background and substance use. Requests to access de-identified datasets should be directed to dfishbein@unc.edu.
Ethics statement
This study was approved by the Georgetown University Institutional Review Board. Written informed consent and assent were obtained from the parent and adolescent, respectively.
Author contributions
DHF and ASV contributed to the conceptualization and design of the study. GM and AP performed statistical analysis. GM wrote the first draft of the full manuscript. AP assisted in the preparation of the initial draft of the methods and organized the dataset. GM, VLD, EJR, DHF, and ASV contributed to manuscript revisions. All authors read and approved the submitted version.
"year": 2022,
"sha1": "12df167fb4984508a11bb0f4896cd7811ae6c77d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "12df167fb4984508a11bb0f4896cd7811ae6c77d",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16197385 | pes2o/s2orc | v3-fos-license | Temperley-Lieb pfaffinants and Schur $Q$-positivity conjectures
We study pfaffian analogues of immanants, which we call pfaffinants. Our main object is the TL-pfaffinants which are analogues of Rhoades and Skandera's TL-immanants. We show that TL-pfaffinants are positive when applied to planar networks and explain how to decompose products of complementary pfaffians in terms of TL-pfaffinants. We conjecture in addition that TL-pfaffinants have positivity properties related to Schur Q-functions.
Introduction
An immanant of an $n \times n$ matrix $X = (x_{ij})$ is an expression of the form
$$\sum_{w \in S_n} f(w)\, x_{1,w(1)} \cdots x_{n,w(n)} \tag{1}$$
where $f : S_n \longrightarrow \mathbb{R}$ is a function. The well-known examples of immanants are determinants and permanents. Desarmenien, Kung and Rota [DKR] gave a standard basis of the space $I(X)$ of immanants, labeled by standard bitableaux, while recently Pylyavskyy [Pyl] introduced a basis labeled by non-crossing bitableaux.
Immanants with certain positivity properties, most notably the irreducible immanants, had been studied earlier in [GJ, Gre, Hai, Ste92, SS]. In a series of papers [Ska, RS05a, RS05b] Rhoades and Skandera studied the dual canonical basis of $I(A)$, also called Kazhdan-Lusztig immanants, labeled by permutations. These immanants possess remarkable positivity properties: (a) they are non-negative when applied to totally non-negative matrices [RS05a, RS05b], and (b) they are Schur-positive when applied to Jacobi-Trudi matrices [RS05b]. This second property was used in [LPP] to resolve several Schur-positivity conjectures. The subset of the dual canonical basis corresponding to 321-avoiding permutations can be given a purely combinatorial interpretation; these were called Temperley-Lieb immanants, or TL-immanants, in [RS05a]. Rhoades and Skandera also gave a simple positive combinatorial rule for writing a product of two complementary minors of $A$ in terms of TL-immanants.
The pfaffian $\mathrm{pf}(A)$ of a skew-symmetric $2n \times 2n$ matrix $A$ (see Section 2.1) replaces the symmetric group $S_{2n}$ in the determinant with the set of matchings of $2n$ points. Replacing the symmetric group in (1) with matchings one also obtains a pfaffian analogue of immanants, which we call pfaffinants. The main object of this paper are the TL-pfaffinants denoted $\mathrm{Pfaf}_D(A)$, which are analogues of the TL-immanants.
Stembridge [Ste90] interpreted the pfaffian $\mathrm{pf}(A(N))$ in terms of non-intersecting path families in a planar network $N$, where $A(N)$ is a skew-symmetric matrix obtained from $N$. Separately, it is also known ([JP, Mac]) that the Schur $Q$-function $Q_\lambda$ is equal to the pfaffian $\mathrm{pf}(A_\lambda)$ for a particular skew-symmetric matrix $A_\lambda$, which we call a $Q$-Jacobi-Trudi matrix. Our search for the TL-pfaffinants revolves around the following three properties: (1) a product of complementary pfaffians should decompose positively and simply in terms of the TL-pfaffinants; (2) a TL-pfaffinant should be positive when evaluated on the skew-symmetric matrix $A(N)$ associated to a planar network; (3) a TL-pfaffinant should be Schur $Q$-positive when evaluated on a $Q$-Jacobi-Trudi matrix. The pfaffinants $\mathrm{Pfaf}_D(A)$ that we define satisfy properties (1) and (2), and we conjecture that they satisfy property (3). The positivity properties (2) and (3) are subtly different from the situation with TL-immanants. Our definition of the pfaffinants $\mathrm{Pfaf}_D(A)$ requires the intermediate definition of a diagram pfaffinant $\mathrm{Pfaf}'_D(A)$. It appears rather mysteriously that it is the diagram pfaffinants that describe network and (conjecturally) Schur $Q$-positivity. We should point out that the correct pfaffian analogue of the entire dual canonical basis is still missing. A basis of this entire space of pfaffinants (without the positivity properties we desire) is given by DeConcini and Procesi [DP] from the point of view of invariant theory.
One of the Schur Q-positivity conjectures (Conjecture 50) that we state is a Schur Q-function version of a sequence of positivity results we call cell transfer: the monomial positivity version is established in [LP05], the fundamental quasisymmetric function version in [LP06] and the Schur positivity version in [LPP].
We now briefly describe the organization of the paper. In Section 2, we define diagram pfaffinants and Temperley-Lieb pfaffinants, and show that the latter form a basis for the space of products of pairs of complementary pfaffians. In Section 3, we explain Stembridge's work on pfaffians and planar networks and show that TL-pfaffinants are non-negative when applied to planar networks. We characterize the linear combinations of products of pairs of complementary pfaffians that are network-nonnegative. In Section 4 we explore the relationship between TL-immanants and TL-pfaffinants when applied to certain matrices. In Section 5 we state a number of conjectures concerning Schur $Q$-positivity properties of pfaffinants, and in addition we prove a number of intermediate results.
Pfaffians and Pfaffinants
2.1. Preliminaries. A skew-symmetric matrix $A = (a_{ij})_{i,j=1}^n$ is a matrix satisfying $A^t = -A$, or alternatively $a_{ij} = -a_{ji}$. These matrices are in bijection with arrays $(a_{ij})_{1 \le i < j \le n}$ obtained by taking the part of $A$ above the diagonal. We denote the corresponding array also by $A$ and will not usually distinguish the skew-symmetric matrix from the upper-triangular array. Now suppose $A$ is a skew-symmetric $2n \times 2n$ matrix. Define the pfaffian $\mathrm{pf}(A)$ of $A$ by
$$\mathrm{pf}(A) = \sum_{\pi \in F_{2n}} \epsilon(\pi)\, a_\pi,$$
where the sum is taken over the set $F_{2n}$ of matchings $\pi$ on $2n$ vertices, and $\epsilon(\pi)$ is the sign or crossing number of a matching. It can be determined by the following rule: place the $2n$ vertices on a straight line and draw all the edges in $\pi$ as arcs above this line. Let $\mathrm{cn}(\pi)$ denote the number of crossings between the arcs. Then $\epsilon(\pi) = (-1)^{\mathrm{cn}(\pi)}$. For convenience we write $a_\pi := \prod_{(i,j)\in\pi} a_{ij}$ for any $\pi \in F_{2n}$. We will generally think of the matching $\pi$ as a set of unordered pairs of elements of $[2n]$. For example, if $n = 2$ we have $\mathrm{pf}(A) = a_{12}a_{34} - a_{13}a_{24} + a_{14}a_{23}$. Let $I \subset [2n]$ be a $2m$-element subset and let $A_I$ be the corresponding submatrix, obtained by taking only the rows and columns with indices in $I$. We denote by $\mathrm{pf}_I(A)$ the pfaffian of this submatrix. More generally, for disjoint subsets $I_1, I_2, \ldots$ of even cardinality we write $\mathrm{pf}_{I_1,I_2,\ldots}(A) = \mathrm{pf}_{I_1}(A)\,\mathrm{pf}_{I_2}(A) \cdots$.
Two special cases of $\mathrm{pf}_{I_1,I_2,\ldots}(A)$ are particularly important to us. One is the complementary pfaffians $\mathrm{pf}_{I,\bar I}(A)$, which are the products of pfaffians of two complementary subarrays. The second one is the monomials $\mathrm{pf}_\pi(A) = a_\pi = \prod_{(i,j)\in\pi} a_{ij}$. Thus one may also write the definition of the pfaffian as $\mathrm{pf}(A) = \sum_{\pi \in F_{2n}} \epsilon(\pi)\,\mathrm{pf}_\pi(A)$.
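To make the definition above concrete, the following small brute-force sketch (ours, not from the paper; all function names are our own) computes $\mathrm{pf}(A)$ of a skew-symmetric matrix, given as a list of rows, by summing the signed monomials $\epsilon(\pi) a_\pi$ over all perfect matchings.

```python
def matchings(verts):
    """Yield all perfect matchings of the listed vertices as tuples of pairs (i, j), i < j."""
    if not verts:
        yield ()
        return
    i = verts[0]
    for j in verts[1:]:
        rest = [v for v in verts if v not in (i, j)]
        for m in matchings(rest):
            yield ((i, j),) + m

def crossing_number(m):
    """cn(pi): arcs (i, j) and (k, l) cross exactly when i < k < j < l."""
    return sum(1 for (i, j) in m for (k, l) in m if i < k < j < l)

def pfaffian(a):
    """pf(A) = sum over matchings pi of (-1)**cn(pi) * a_pi (0-indexed entries)."""
    total = 0
    for m in matchings(list(range(len(a)))):
        term = (-1) ** crossing_number(m)
        for (i, j) in m:
            term *= a[i][j]
        total += term
    return total
```

For a $4 \times 4$ skew-symmetric matrix this reproduces $a_{12}a_{34} - a_{13}a_{24} + a_{14}a_{23}$ in the paper's 1-indexed notation.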
Next, for an arbitrary function $f : F_{2n} \longrightarrow \mathbb{R}$ we define the pfaffinant
$$\mathrm{Pfaf}_f(A) = \sum_{\pi \in F_{2n}} f(\pi)\, a_\pi.$$
We denote by $P_n \subset \mathbb{R}[A]$ the subspace spanned by the products $\mathrm{pf}_{I,\bar I}(A)$ of pairs of complementary pfaffians. We call a partitioning $(I,\bar I)$ of $[2n]$ standard if $I = \{i_1 < i_2 < \cdots < i_a\}$ and $\bar I = \{j_1 < j_2 < \cdots < j_b\}$ where $a \ge b$ and $i_k < j_k$ for each $k \in [1,b]$. Alternatively, $(I,\bar I)$ is standard if $I$ and $\bar I$ form the first and second rows of a standard Young tableau. We say $\mathrm{pf}_{I,\bar I}$ is standard if $(I,\bar I)$ is.
Theorem 1 ([DP]). A basis of $P_n$ is given by the set $\{\mathrm{pf}_{I,\bar I}(A) \mid (I,\bar I) \text{ is standard}\}$ of standard complementary pfaffians. The dimension of $P_n$ over $\mathbb{R}$ is equal to the number of standard Young tableaux of size $2n$ with at most 2 rows, each row of even size.
Proof. In [DP], a product of several complementary pfaffians is associated to any (possibly non-standard) tableau $T$ with even parts. It is shown ([DP, Theorem 6.5]) that the set of such products indexed by standard tableaux forms a basis for the space of all pfaffinants. The straightening algorithm showing that any tableau can be expressed in terms of standard ones ([DP, Lemma 6.1-6.3]) involves quadratic relations among products of pfaffians. Since the number of parts in the tableaux involved does not increase in such straightenings, the statement of the theorem follows.
We will give another proof of Theorem 1 later.
Remark 2. The following is the natural generalisation. Let $P_{k,n} \subset \mathbb{R}[A]$ denote the subspace spanned by products of $k$ complementary pfaffians of a $2n \times 2n$ skew-symmetric matrix. Then the dimension of $P_{k,n}$ is equal to the number of standard tableaux of size $2n$ with at most $k$ rows such that each row has even length.
2.3. Symmetric Temperley-Lieb diagrams. Consider a rectangle with $2n$ points $1, 2, \ldots, 2n$ on the left side and $2n$ points $1', 2', \ldots, 2n'$ on the right side (the numbering goes from top to bottom). A Temperley-Lieb diagram $D$ is a non-crossing matching on the resulting $4n$ vertices. An edge of $D$ is called vertical if it is of the form $(i,j)$ or $(i',j')$ and is called horizontal if it is of the form $(i,j')$. A TL-diagram $D$ is symmetric if it has symmetry about the vertical axis. Thus all the horizontal edges in $D$ are of the form $(i,i')$ and the vertical edges come in pairs $\{(i,j),(i',j')\}$. The order $|D|$ of a symmetric TL-diagram is the number of edges in $D$ with both ends on the left side of the rectangle or, alternatively, half the number of vertical edges. We call a TL-diagram $D$ even (or odd) depending on the parity of the order of $D$. We denote by $T_n$ the set of symmetric TL-diagrams on $4n$ vertices, and by $T_n^e$ the subset of even symmetric TL-diagrams.
Proposition 3. For any integer $n \ge 1$ we have $|T_n| = \binom{2n}{n}$ and $|T_n^e| = |T_n^o| = \frac{1}{2}\binom{2n}{n}$.

Proof. We show that $T_n$ is in bijection with $n$-subsets of a $2n$-element set. One possible such correspondence is obtained as follows: for $D \in T_n$ color all $i \in [2n]$ such that $(i < j) \in D$ black. Among the remaining points color black the largest ones so that we get $n$ black points in total. The inverse map from a coloring of $2n$ points black and white, $n$ of each color, can be described as follows. Start reading the points in reverse order, from $2n$ to $1$. For each black point $i$ one encounters we find the smallest $j > i$ colored white which has not yet been used and include the edge $(i,j)$ in $D$. If no such $j$ exists, we include the edge $(i,i')$ in $D$. After doing this for all the black points, we include an edge $(j,j')$ for each unmatched white point $j$. Now let $T_n^o = T_n \setminus T_n^e$ denote the set of odd symmetric TL-diagrams. We define an involution $\omega$ on $T_n$ which sends $T_n^e$ to $T_n^o$. Let $D \in T_n$. If $(1,1') \in D$, there exists some smallest $i \in [2n]$ with $i \ne 1$ so that $(i,i') \in D$. We define $\omega(D)$ by removing the edges $(1,1')$ and $(i,i')$ from $D$ and including the edges $(1,i)$ and $(1',i')$. Otherwise $(1,k) \in D$ for some (even) $k \in [2n]$. We define $\omega(D)$ by removing the edges $(1,k)$ and $(1',k')$ and including the edges $(1,1')$ and $(k,k')$. The involution $\omega$ shows that $|T_n^e| = |T_n^o| = \frac{1}{2}|T_n|$.
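The inverse map in the proof above is algorithmic; the following sketch (ours; the encoding of colors is an assumption made for illustration) implements it directly, returning the vertical edges $(i,j)$ (each standing for the mirror pair $(i,j),(i',j')$) and the points carrying horizontal edges $(i,i')$.

```python
def coloring_to_diagram(colors):
    """Inverse of the coloring bijection in the proof of Proposition 3.
    colors: list of 'B'/'W' of length 2n with n entries of each color."""
    n2 = len(colors)
    used = set()
    vertical, horizontal = [], []
    for i in range(n2, 0, -1):                 # read the points 2n, 2n-1, ..., 1
        if colors[i - 1] != 'B':
            continue
        j = next((k for k in range(i + 1, n2 + 1)
                  if colors[k - 1] == 'W' and k not in used), None)
        if j is None:
            horizontal.append(i)               # no unused white point j > i: edge (i, i')
        else:
            used.add(j)
            vertical.append((i, j))            # mirror pair of vertical edges (i, j), (i', j')
    horizontal += [k for k in range(1, n2 + 1)
                   if colors[k - 1] == 'W' and k not in used]
    return sorted(vertical), sorted(horizontal)
```

For example, `coloring_to_diagram(['B', 'B', 'W', 'W'])` returns `([(1, 4), (2, 3)], [])`, the diagram appearing in Example 5 below.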
2.4. Diagram pfaffinants. For each $D \in T_n$ we now define a function $f_D : F_{2n} \longrightarrow \mathbb{Z}$ which in turn gives us the diagram pfaffinant $\mathrm{Pfaf}'_D(A) := \mathrm{Pfaf}_{f_D}(A)$. Recall that we have $4n$ vertices on the sides of the rectangle: $1, \ldots, 2n$ on the left side and $1', \ldots, 2n'$ on the right. Given a matching $\pi \in F_{2n}$, let $\nu(\pi)$ be the matching on $[2n] \cup [2n]'$ such that $(i,j')$ and $(i',j)$ are in $\nu(\pi)$ if and only if $(i,j) \in \pi$. Pick a planar embedding of $\nu(\pi)$ such that all edges lie inside the rectangle, and every pair of edges intersects at most once. We assume the embedding is chosen (a) to have mirror symmetry, (b) so that no pair of edges has a point of tangency, and (c) so that no 3 edges cross at a single point. Call an embedding satisfying these conditions nice. Such an embedding is far from unique; however, we will show that the construction does not depend on the choice of embedding. We assume for now one such presentation has been chosen for each $\pi$, which we will (abusing notation) denote by $\nu(\pi)$ as well.
The set of intersections among the edges of ν(π) can be divided into two kinds: the unpaired crossings, which are the crossings between pairs of edges of the form (i, j ′ ) and (i ′ , j); and the paired crossings, which are the pairs of crossings between (p ′ , q) and (r ′ , s) and between (p, q ′ ) and (r, s ′ ), where inequalities q < s and r < p either both fail or both hold.
Given $\pi \in F_{2n}$ we define a set $X(\pi)$ of uncrossings of $\nu(\pi)$. Each embedded graph $x \in X(\pi)$ is obtained from $\nu(\pi)$ by uncrossing every intersection, where each intersection can be uncrossed in two ways: as a vertical uncrossing ")(", rejoining the strands into a left and a right piece, or as a horizontal uncrossing, rejoining them into a top and a bottom piece. In addition, we require that paired crossings are uncrossed in the same way. With this additional restriction, the uncrossed diagram $x$ is mirror-symmetric. Thus $x$ is topologically equivalent to an element $D(x) \in T_n$ together with a number of closed loops.
We define the weight $\mathrm{wt}(x)$ of an uncrossed embedded graph $x \in X(\pi)$ as
$$\mathrm{wt}(x) = 2^{l(x)} (-1)^{uv(x) + ph(x)}.$$
Here $l(x)$ is the number of closed loops in $x$, where pairs of mirror-symmetric loops are counted only once; $uv(x)$ is the number of unpaired vertical uncrossings in $x$; and $ph(x)$ is the number of paired horizontal uncrossings in $x$. We then set
$$f_D(\pi) = \sum_{\substack{x \in X(\pi)\\ D(x) = D}} \mathrm{wt}(x).$$
Theorem 4. The function $f_D$ obtained in this way does not depend on the particular embedding we have picked for each $\nu(\pi)$.
Theorem 4 is in fact not logically required for the rest of the paper. Its proof is delayed to Section 6.
Example 5. For $n = 2$ and $\pi = \{(1,4),(2,3)\}$, there are essentially two different embeddings $A$ and $B$ of $\nu(\pi)$, shown in Figure 1. The embeddings are reflections of each other about a horizontal axis. These embeddings have two pairs of mirror-symmetric crossings and two unpaired crossings, so the set $X(\pi)$ has cardinality 16 in each case. The following table shows the calculation of $f_D(\pi)$.
Example 6. For $n = 2$ the diagram pfaffinants are given in the following table. The diagrams are described by the sets of their vertical edges. The reader can verify that the coefficients of $a_{14}a_{23}$ agree with the calculations in Example 5.

Theorem 7. Let $I \subset [2n]$ be a subset of even cardinality. Then
$$\mathrm{pf}_{I,\bar I}(A) = \sum_{D \in D(I)} \mathrm{Pfaf}'_D(A),$$
where the sum is over all $I$-compatible diagrams of $T_n$.
The following proof imitates a proof in [LPP].
Proof. Let $\pi \in F_{2n}$. Then the monomial $a_\pi$ occurs in $\mathrm{pf}_{I,\bar I}$ if no edge of $\pi$ connects an element of $I$ with an element of $\bar I$. In other words, $\pi$ must be the union of the two matchings $\pi_I$ and $\pi_{\bar I}$ obtained by restricting the vertex set. The coefficient of $a_\pi$ in $\mathrm{pf}_{I,\bar I}$ is then equal to $(-1)^{\mathrm{cn}(\pi_I)+\mathrm{cn}(\pi_{\bar I})}$. Now suppose $x \in X(\pi)$ is an uncrossing of $\nu(\pi)$ such that $D(x) \in D(I)$. We direct all the strands and loops in $x$ so that the initial vertex of each strand belongs to $I \cup (\bar I)'$ (and thus the end vertex belongs to $\bar I \cup I'$). We allow the closed loops to be directed in either direction. Thus the coefficient of $a_\pi$ in $\sum_D \mathrm{Pfaf}'_D(A)$ is equal to the sum of $(-1)^{uv(y)+ph(y)}$ over all orientations $y$ of the uncrossings $\{x \in X(\pi) \mid D(x) \in D(I)\}$.
Now we define a sign-reversing partial involution $\iota$ on this set of oriented graphs. A misaligned uncrossing is an uncrossing whose two resulting strands carry inconsistent orientations. We say that we switch a misaligned uncrossing if we replace a misaligned vertical uncrossing with the misaligned horizontal uncrossing carrying the same orientations at its four endpoints, or vice versa. If $y$ contains any misaligned uncrossings then we let $\iota$ switch the leftmost such uncrossing. If this uncrossing is a paired uncrossing, we also switch its mirror image. If all the uncrossings are aligned, then $\iota$ is not defined. Since $\iota$ is a sign-reversing involution on the set of oriented graphs where it is defined, we need only consider the contribution of $(-1)^{uv(y)+ph(y)}$ for oriented graphs $y$ where $\iota$ is undefined. An example of the application of $\iota$ for $n = 3$ and $I = \{1,3\}$ is given in Figure 2. We switch the leftmost misaligned uncrossing, which in this case happens to be paired. Now suppose that $y_\pi$ is an oriented diagram with only aligned uncrossings (see for example Figure 3). Then converting the uncrossings back into crossings, keeping the orientation the same, we obtain an orientation $\mu(\pi)$ of $\nu(\pi)$ such that all edges start in $I$ and end in $I'$, or start in $\bar I$ and end in $(\bar I)'$. Thus $\pi$ is the union of two matchings $\pi_I$ and $\pi_{\bar I}$. It is also clear that one can recover $y_\pi$ from $\mu(\pi)$ and that $\mu(\pi)$ is completely determined by $\nu(\pi)$. Thus $y_\pi$, if it exists, is unique.

Figure 3. Obtaining the orientation $\mu(\pi)$ of $\nu(\pi)$ from the uncrossing $y_\pi$.
Finally, we calculate the sign of $y_\pi$. Each unpaired crossing of $\nu(\pi)$ corresponds to the intersection of $(i,j')$ with $(i',j)$ for an edge $(i,j)$ in $\pi_I$ or $\pi_{\bar I}$. These crossings are always uncrossed horizontally to obtain $y_\pi$, and so contribute no sign to $y_\pi$. Each paired crossing $(c,c')$ in $\nu(\pi)$ arises from a crossing $\xi$ of $\pi$. To obtain $y_\pi$, the pair $(c,c')$ is uncrossed horizontally if $\xi$ is a crossing in $\pi_I$ or $\pi_{\bar I}$, and $(c,c')$ is uncrossed vertically otherwise. Thus $(-1)^{uv(y_\pi)+ph(y_\pi)} = (-1)^{\mathrm{cn}(\pi_I)+\mathrm{cn}(\pi_{\bar I})}$, and we have checked that the monomial $a_\pi$ appears on both sides with the same coefficient.
2.5. Temperley-Lieb pfaffinants. Let $D \in T_n$. For $i, j \in \{1, \ldots, 2n\}$ satisfying $i < j$ we call the edge $(i,j)$ of $D$ odd if $i$ is odd, and even otherwise. For $D \in T_n$ let $S(D)$ be the set of all diagrams in $T_n$ that can be obtained from $D$ by erasing several odd edges (and their mirror images) and matching the resulting unmatched vertices by horizontal edges of the form $(i,i')$. In particular, $D \in S(D)$. For $D \in T_n$ we define the Temperley-Lieb pfaffinant, or TL-pfaffinant,
$$\mathrm{Pfaf}_D(A) := \sum_{D' \in S(D)} \mathrm{Pfaf}'_{D'}(A).$$

Lemma 8. Let $D_1, D_2 \in T_n$. If $D_1 \in S(D_2)$ then $S(D_1) \subset S(D_2)$. Moreover, the odd edges of $D$ that can be erased can be erased independently of one another, so that $S(D)$ is in bijection with the subsets of the set of erasable odd edges of $D$.
Proof. The first statement is clear since after obtaining $D_1$ out of $D_2$ by removing several odd edges, we can keep removing the remaining odd edges, and the result belongs to $S(D_2)$ by definition. For the second part, note that if $(i,j)$ is an odd edge, that is if $i$ is odd, then all the edges inside $[i,j]$ cannot be removed, either because they are even or because they are contained within the segment bounded by the ends of an even edge. Thus all odd edges that can be removed can be removed independently of one another, which implies the statement of the lemma.
Lemma 9. Suppose $D \in T_n$ and $I \subset [2n]$ is a subset of even cardinality. If $D \in D(I)$ then $S(D) \subset D(I)$. Moreover, every $D' \in D(I)$ belongs to $S(D_{\max})$ for a unique maximal diagram $D_{\max} \in D(I) \cap T_n^e$.

Proof. The first statement follows immediately from the definitions of the set $S(D)$ and of $I$-compatibility. Now let $D' \in D(I)$. We say that a vertex $i \in [2n]$ is free if $(i,i') \in D'$, and color the vertices of $[2n]$ black or white according to whether they lie in $I$ or $\bar I$ (the $I$-coloring). It is clear that there are the same number of black and white vertices in the $I$-coloring amongst the non-free vertices. Also, one checks that the free vertices alternate in parity, beginning with an odd vertex and ending with an even vertex. If there are two free vertices $i < j$ such that between $i$ and $j$ there are no free vertices, $i$ is odd, $j$ is even and they have different colors, then we call the pair $(i,j) \in [2n] \times [2n]$ addable. Removing $(i,i')$ and $(j,j')$ from $D'$ and adding $(i,j)$ and $(i',j')$ gives some $D \in T_n \cap D(I)$ such that $D' \in S(D)$. The unique maximal such $D = D_{\max}$ is obtained by performing the above operation for every pair of addable vertices. Since $I$ is required to have even cardinality and all the free vertices of $D_{\max}$ have the same color, $D_{\max}$ must be even.
We say that $D \in T_n^e$ is $I$-maximal if it has the form $D_{\max}$ as in Lemma 9. We denote the set of $I$-maximal diagrams by $D_{\max}(I)$. By Lemma 8, if $D_1, D_2 \in D_{\max}(I)$ are distinct then $D_1 \notin S(D_2)$ and $D_2 \notin S(D_1)$.
Example 11. For $n = 2$ the TL-pfaffinants are given in the following table, calculated using Example 6. The even diagrams are described by the sets of their vertical edges.

Theorem 13. Suppose $I \subset [2n]$ is a subset with even cardinality. Then
$$\mathrm{pf}_{I,\bar I}(A) = \sum_{D \in D_{\max}(I)} \mathrm{Pfaf}_D(A).$$

Proof. By Theorem 7, it suffices to show that the set of $I$-compatible diagrams $D(I) \subset T_n$ is the disjoint union of the sets $S(D)$ for $D \in D_{\max}(I)$. This follows from Lemmas 8 and 9.
Suppose $D \in T_n$ is a (possibly odd) symmetric TL-diagram on $4n$ vertices. We define a subset $I(D) \subset [2n]$ by
$$I(D) = \{i \in [2n] \mid (i,i') \in D\} \cup \{i \in [2n] \mid (i,j) \in D \text{ for some } j > i\}.$$
Note that $|I(D)| = 2n - |D|$, so that $I(D)$ has even cardinality whenever $D \in T_n^e$. Recall from before Theorem 1 the definition of a standard partition of $[2n]$.

Lemma 14. The map $D \mapsto (I(D), \overline{I(D)})$ is a bijection between $T_n$ and the set of standard partitionings of $[2n]$.

Proof. We describe how to recover $D$ from $\overline{I(D)}$. Let $\overline{I(D)} = \{j_1 < j_2 < \cdots < j_k\}$. Then it must be the case that $(j_1 - 1, j_1) \in D$. More generally, suppose we know all the edges of $D$ connected to $\{j_1, j_2, \ldots, j_{l-1}\}$ for some $l \le k$. Then $(i, j_l)$ is an edge of $D$, where $i \in I(D)$ is the maximum number in $I(D)$ which is less than $j_l$ and which is not connected to $\{j_1, j_2, \ldots, j_{l-1}\}$. Furthermore, it is clear that this algorithmic definition of the inverse map $(I,\bar I) \to D$ terminates successfully if and only if $(I,\bar I)$ is a standard partitioning.
Corollary 15. The dimension of $P_n$ is equal to $|T_n^e| = \frac{1}{2}\binom{2n}{n}$.

Proof. This is an immediate corollary of Theorem 1, Proposition 3 and Lemma 14.
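For instance, for $n = 2$ the bijection of Lemma 14 reads as follows (a worked instance added for illustration, directly checkable from the definitions):
$$\begin{array}{lll}
D = \emptyset \ \text{(all edges horizontal)} & I(D) = \{1,2,3,4\} & (I,\bar I) = (\{1,2,3,4\},\,\emptyset),\\
D = \{(1,2),(3,4)\} & I(D) = \{1,3\} & (I,\bar I) = (\{1,3\},\,\{2,4\}),\\
D = \{(1,4),(2,3)\} & I(D) = \{1,2\} & (I,\bar I) = (\{1,2\},\,\{3,4\}).
\end{array}$$
Consistently with Theorem 1, the standard Young tableaux of size $4$ with at most two rows of even size have shapes $(4)$ and $(2,2)$, giving $1 + 2 = 3$ tableaux, so $\dim P_2 = 3 = |T_2^e|$.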
Let $I, J \subset [2n]$ be two subsets of the same cardinality. We say $I = \{i_1 < \cdots < i_k\}$ is lexicographically smaller than $J = \{j_1 < \cdots < j_k\}$ and write $I \prec_{lex} J$ if for some $1 \le l \le k$ we have $i_1 = j_1, i_2 = j_2, \ldots, i_{l-1} = j_{l-1}, i_l < j_l$. We now define a total order $\prec$ on subsets of $[2n]$. Suppose $I, J \subset [2n]$. We define $I \prec J$ if $|I| > |J|$, or $|I| = |J|$ and $I \prec_{lex} J$. We use the map $D \to I(D)$ to give an induced total order on $T_n$: $D_1 \prec D_2$ if and only if $I(D_1) \prec I(D_2)$.

Theorem 20. The TL-pfaffinants $\{\mathrm{Pfaf}_D(A) \mid D \in T_n^e\}$ form a basis for $P_n$.

Proof. This follows from Theorem 1 and Proposition 18.
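The total order $\prec$ is straightforward to implement; the following short sketch (ours) makes the tie-breaking explicit.

```python
def precedes(I, J):
    """I comes before J if it is strictly larger in size;
    equal sizes are compared lexicographically as sorted tuples."""
    if len(I) != len(J):
        return len(I) > len(J)
    return sorted(I) < sorted(J)
```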
We will obtain another proof of Theorem 20 in Section 3.3.
Problem 21. Do the diagram pfaffinants $\{\mathrm{Pfaf}'_D(A) \mid D \in T_n\}$ always lie in $P_n$? If so, how are they expressed in the basis of TL-pfaffinants and in the basis of standard complementary pfaffians?
By Examples 6 and 11 the answer to the first question is affirmative for $n = 2$. Note also that by Proposition 3 the number of diagram pfaffinants is twice the dimension of $P_n$, so if the diagram pfaffinants $\{\mathrm{Pfaf}'_D(A)\}$ do lie in $P_n$ there must be non-trivial relations among them.
Pfaffians and non-intersecting paths in networks
3.1. Stembridge's network interpretation of Pfaffians. John Stembridge in [Ste90] introduced an interpretation of pfaffians in terms of networks. Let G = (V, E) be a finite acyclic directed graph. We say that two directed paths in G intersect if they have a common vertex. If W and U are ordered sets of vertices of G, we say that W is G-compatible with U if whenever u < u ′ in W and v > v ′ in U , every path from u to v intersects every path from u ′ to v ′ .
Let us suppose that a weight function $w : E \longrightarrow R$, where $R$ is some ring, has been fixed. For a $G$-path $p$, let $w(p) = \prod_{e \in p} w(e)$, where the product is taken over all edges in $p$. For $u \in V$ and $W \subset V$ let $P(u,W)$ denote the set of $G$-paths from $u$ to any $v \in W$, and let $Q(u,W)$ be the associated weight function $Q(u,W) = \sum_{p \in P(u,W)} w(p)$. Similarly, for an $r$-tuple $u = (u_1, \ldots, u_r)$ let $P(u,W)$ denote the set of $r$-tuples of paths $(p_1, \ldots, p_r)$ such that $p_i \in P(u_i, W)$. The weight $w(p_1, \ldots, p_r)$ of an $r$-tuple of paths is the product of the weights of each of the paths. Let $P^0(u,W) \subset P(u,W)$ denote the subset of non-intersecting tuples of paths. We define $Q(u,W) = Q^0(u,W)$ to be the sum of the weights of the elements of $P^0(u,W)$.
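As an illustration of these weight sums (a sketch of ours, not from the paper), the single-source generating function $Q(u,W)$ can be computed on a small acyclic graph by a direct recursion over outgoing edges.

```python
def path_weight_sum(edges, u, W):
    """Q(u, W): sum of w(p) over all directed paths p from u whose endpoint lies in W.
    edges: dict mapping (a, b) -> weight, on an acyclic directed graph.
    No memoization: exponential in general, but fine as a small illustration."""
    out = {}
    for (a, b), wgt in edges.items():
        out.setdefault(a, []).append((b, wgt))
    def q(v):
        total = 1 if v in W else 0          # a path may end here if v belongs to W
        for (b, wgt) in out.get(v, []):
            total += wgt * q(b)
        return total
    return q(u)
```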
Theorem 22 ([Ste90, Theorem 3.1]). Let $u = (u_1, \ldots, u_r)$ be an $r$-tuple of vertices in an acyclic digraph $G$, and assume that $r$ is even. If $W \subset V$ is an ordered subset of vertices such that $u$ is $G$-compatible with $W$, then
$$\mathrm{pf}\left[\,Q((u_i,u_j),W)\,\right]_{1 \le i < j \le r} = Q(u,W).$$

For convenience, if $G$ is an acyclic directed graph and ordered vertex sets $u = (u_1, \ldots, u_{2n}) \subset V$ and $W \subset V$ have been chosen, we call the triple $N = (G, u, W)$ a network. For a network $N$, we define $P(N) = P(u,W)$ and $P^0(N) = P^0(u,W)$. We also let $Q(N)$ denote the weight sum $Q(u,W)$, and let $A(N)$ denote the $2n \times 2n$ skew-symmetric matrix with entries $A(N)_{ij} = Q((u_i,u_j),W)$ for $i < j$. For $I \subset [2n]$, we let $u_I = \{u_i\}_{i \in I}$ denote the corresponding set of vertices. We then set $P_I(N) \subset P(N)$ to be the subset of paths $p = (p_1, \ldots, p_{2n})$ such that $p_i$ and $p_j$ do not intersect if both $i, j \in I$ or both $i, j \in \bar I$. We call the paths $p \in P_I(N)$ compatible with $I$. Thus $P^0(N) = P_\emptyset(N) = P_{[2n]}(N)$. We finally define $Q_I(N)$ to be the sum of the weights of the paths in $P_I(N)$.

The following statement is immediate from Theorem 22 and the definitions we have made.

Corollary 23. For any subset $I \subset [2n]$ of even cardinality we have $\mathrm{pf}_{I,\bar I}(A(N)) = Q_I(N)$.
The following statement is immediate from Theorem 22 and the definitions we have made. 3.2. Planar network definition of Pfaffinants. Let N = (G, u, W ) be a fixed network. We assume that G is planar and that a Jordan curve C passes through the sets u and W of vertices so that G is contained completely in the interior of C. We also assume that u and W are contained in disjoint segments of C so that the ordering of u and W is consistent with the arrangement of these vertices on C. With this assumption, the G-compatibility of u and W is immediate. For short we will call a network N satisfying these assumptions a planar network.
Suppose that $p = (p_1, p_2, \ldots, p_{2n}) \in P(N)$ is a family of paths such that no three paths in $p$ intersect at the same vertex. Removing all the edges of $N$ that do not lie on any of the paths $p_i \in p$, and in addition marking all the edges of $N$ used twice by $p$, we obtain a marked network $\tilde N = \tilde N(p)$. Note that by our assumption an edge of $N$ can be used at most twice by the path family $p$. We say that $p$ covers $\tilde N$ and denote the set of coverings of $\tilde N$ by $P(\tilde N)$. If $\tilde N$ is the marked network obtained from some $p \in P(N)$ we call $\tilde N$ a marked subnetwork of $N$ and write $\tilde N \ll N$. The weight $w(\tilde N)$ of a marked subnetwork is the weight $w(p)$ for any path family covering $\tilde N$. Suppose $p_i$ and $p_j$ intersect at some vertex $v$. Then there are two (possibly not distinct) edges $e_i \in p_i$, $e_j \in p_j$ entering $v$ and two edges $f_i \in p_i$ and $f_j \in p_j$ leaving $v$. The vertical uncrossing of $v$ is obtained by detaching $v$ into two new vertices $v_e$ and $v_f$ so that $v_e$ is incident with $e_i$ and $e_j$ while $v_f$ is incident with $f_i$ and $f_j$, as illustrated in Figure 5. Alternatively, if the vertices $u$ are arranged on the left, the vertices $W$ arranged on the right, and all edges are directed strictly from left to right, then the vertical uncrossings always look like ")(". Define an undirected graph $\Theta(\tilde N)$ by vertically uncrossing every intersection point of $\tilde N$, removing all the marked edges and ignoring all the orientations. Note that $\Theta(\tilde N)$ does not depend on $p$, only on $\tilde N$. The graph $\Theta(\tilde N)$ is a disjoint union of a number of cycles, together with a number of paths. We define the multiplicity of the marked network $\tilde N$ by $\mathrm{mult}(\tilde N) = 2^r$ where $r$ is equal to the number of connected components of $\Theta(\tilde N)$ which do not contain any of the vertices in $u$.
The components of $\Theta(\tilde N)$ containing one or more of the vertices of $u$ are a collection of paths which give rise to a matching $\mathrm{type}(\tilde N)$ of $[2n] \cup [2n]'$: if $u_i, u_j$ belong to the same component of $\Theta(\tilde N)$ then $(i,j), (i',j') \in \mathrm{type}(\tilde N)$. If $u_i$ does not belong to any component with some other $u_j$, then $(i,i') \in \mathrm{type}(\tilde N)$.
Lemma 24. Let $p \in P(N)$ be a family of paths such that no three paths in $p$ intersect at the same vertex, and let $\tilde N = \tilde N(p)$. Then $\mathrm{type}(\tilde N) \in T_n$.
Proof. We need to check that if $(i,j) \in \mathrm{type}(\tilde N)$ and $i < k < j$ then $(k,l) \in \mathrm{type}(\tilde N)$ for some $i < l < j$. The components of $\Theta(\tilde N)$ are simple curves in the interior of the Jordan curve $C$ connecting two points on the boundary of $C$. The assumption that $u$ is arranged in order along the boundary of $C$ immediately implies the required criterion.
The definition of $\Theta(\tilde N)$ does not rely on the assumption that the graph is drawn inside a Jordan curve, but Lemma 24 does.

Lemma 25. Let $\tilde N \ll N$ be a marked subnetwork. The number of path families $p \in P(\tilde N)$ which are compatible with $I$ is equal to $\mathrm{mult}(\tilde N)$ if $\mathrm{type}(\tilde N) \in D(I)$, and is equal to $0$ otherwise.

Proof. For each $p \in P(\tilde N)$ we orient $\Theta(\tilde N)$ in the following manner. If an edge $e \in \Theta(\tilde N)$ belongs to $p_i$ where $i \in I$ we orient $e$ with the same orientation as in $N$, that is, from $u$ to $W$. If an edge $e \in \Theta(\tilde N)$ belongs to $p_j$ where $j \in \bar I$ we orient $e$ with the opposite orientation to the one in $N$. Since we removed all the marked edges when we produced $\Theta(\tilde N)$, no edge $e \in \Theta(\tilde N)$ receives both orientations. The resulting directed graph $\Theta(\tilde N)_p$ is a disjoint union of directed paths and directed cycles. This follows from the fact that every intersection of $\tilde N$ involves a pair of paths $(p_i, p_j)$ where $i \in I$ and $j \in \bar I$. One now checks that $p \to \Theta(\tilde N)_p$ is a bijection between path families $p \in P(\tilde N)$ and such directed graphs.
In addition, $p \in P_I(N)$ if and only if the directed path in $\Theta(\tilde N)_p$ that $u_i$ lies on is directed away from $u_i$ if $i \in I$ and directed towards $u_i$ if $i \in \bar I$. This requirement can be satisfied only if $\mathrm{type}(\tilde N) \in D(I)$. The number of orientations of $\Theta(\tilde N)$ satisfying this additional condition is by definition equal to $\mathrm{mult}(\tilde N)$.
For $D \in T_n$ define the following function of a planar network $N$:
$$\widetilde{\mathrm{Pfaf}}_D(N) = \sum_{\substack{\tilde N \ll N\\ \mathrm{type}(\tilde N) = D}} \mathrm{mult}(\tilde N)\, w(\tilde N).$$

Theorem 27. For any planar network $N$ and any $D \in T_n$ we have $\mathrm{Pfaf}'_D(A(N)) = \widetilde{\mathrm{Pfaf}}_D(N)$.

3.3. Independence of Temperley-Lieb pfaffinants. We will show directly using Theorem 27 that the elements $\{\mathrm{Pfaf}_D(A) \mid D \in T_n^e\}$ are linearly independent. This will give us alternative proofs of Theorems 1 and 20.
Let $D \in T_n$. We will now define a planar network $N(D)$ with the property that $\mathrm{Pfaf}'_{D'}(N(D))$ is non-zero if and only if $D = D'$. The network $N(D)$ is embedded into the plane $\mathbb{R}^2$ in a particular way. First, place the vertices $u_1, \ldots, u_{2n}$ on the line $x = 0$ so that $u_i$ has coordinates $(0, 2n-i)$. For an edge $(i < j) \in D$ we call the vertex $i$ outgoing and the vertex $j$ ingoing. The vertices $i$ such that $(i,i') \in D$ are neither outgoing nor ingoing. Now place the "sink" vertices $W$ as follows: for each $i \in [2n]$ such that $(i,i') \in D$ or $(i < j) \in D$ we place $w_i \in W$ at coordinates $(1, 2n-i)$. To obtain the rest of $N(D)$, we first join $u_i$ with $w_i$ with a straight line whenever $w_i$ exists, that is, when $i$ is not ingoing. Finally we join $u_{j_k}$ with $w_{i_k}$ where $j_1 < j_2 < \cdots$ are the ingoing vertices and $i_1 < i_2 < \cdots$ are the outgoing vertices. The intersection of any of these lines is also defined to be a vertex of $N(D)$, which belongs neither to $u$ nor to $W$. All edges are directed so that the $x$-coordinate increases along the edges.
Note that no three of the drawn lines intersect at one point, since by construction the set of these lines is a union of two pairwise non-intersecting families of lines. An example of this construction of $N(D)$ is shown in Figure 6.

Lemma 29. We have $\mathrm{type}(\tilde N(D)) = D$. Let $p \in P(N)$ be a family of paths such that no three paths intersect at the same vertex. Then $\tilde N(p) = \tilde N(D)$.
Proof. By the previous comments, it is enough to prove the lemma for each of the networks $N(D_{[i,j]})$ corresponding to outside edges $(i,j) \in D$. We proceed by induction on $|j-i|$, the base case being trivial. All vertices in $[i,j]$ are outgoing or ingoing, and there are twice as many source vertices $u$ as sink vertices $W$ in $N(D_{[i,j]})$. Call the edges of $N(D_{[i,j]})$ incident to the sink vertices the outer skeleton $\mathrm{Sk}(N(D_{[i,j]}))$. Now remove the outer skeleton from $N(D_{[i,j]})$. We obtain a network $N(D_{[i,j]})'$ isomorphic to $N(D_{[i+1,j-1]})$, which is the union of the networks $N(D_{[i_p,j_p]})$, where $\{(i_p,j_p)\}$ is the set of outside edges formed when we remove edge $(i,j)$ from $D_{[i,j]}$. Under this identification, the sink vertices of $N(D_{[i,j]})'$ are the intersection points of the pairs of segments $\{(u_{j_k}, w_{i_k}), (u_{i_{k+1}}, w_{i_{k+1}})\}$. By the inductive assumption, we have $\mathrm{type}(\tilde N(D_{[i+1,j-1]})) = D_{[i+1,j-1]}$, and since $\mathrm{Sk}(N(D_{[i,j]}))$ (after redirecting the edges) is a path from $u_i$ to $u_j$, it follows immediately that $\mathrm{type}(\tilde N(D_{[i,j]})) = D_{[i,j]}$.
By the inductive assumption applied to each $N(D_{[i_p,j_p]})$, there is only one marked network of $N(D_{[i,j]})'$ arising from a family of paths $p \in P(N)$ without triple intersections. Each of the sink vertices of $N(D_{[i,j]})'$ has incoming degree 2, and thus $p$ must cover (counted with multiplicity) two of the outgoing edges from each such vertex. However, $p$ must contain the two paths consisting of the single edge $(u_i, w_i)$ and the single edge $(u_{j_s}, w_{i_s})$, where $j_s = j$. A simple counting argument shows that each sink vertex $w_{i_r}$ is incident with exactly two paths. Combining these facts, one concludes that each edge of $\mathrm{Sk}(N(D_{[i,j]}))$ is covered by $p$ exactly once.
An illustration of the proof is shown in Figure 7.

Theorem 30. The TL-pfaffinants $\{\mathrm{Pfaf}_D(A) \mid D \in T_n^e\}$ are linearly independent.

Theorem 30 gives alternative proofs of Theorems 1 and 20 without relying on results of [DP].
3.4. Network positivity. Call a skew-symmetric matrix $A$ network-positive if it is equal to $A(N)$ for some planar network $N$ with positive weights on edges (we assume the coefficient ring $R = \mathbb{R}$).
The notion of network positivity is a substitute for the notion of total nonnegativity of matrices. Recall that an arbitrary matrix M is totally non-negative if all its minors are non-negative. It is known (see for example [Br,Theorem 3.1]) that every totally non-negative matrix arises from a planar network.
It is not clear how to make a similar definition for skew-symmetric matrices. The following example is taken from [Kim]. Take a skew-symmetric matrix $A$ with $a_{12} \neq 0$, $a_{23} \neq 0$ and $a_{13} = 0$, in which every skew-symmetric submatrix of $A$ of even size has a non-negative pfaffian. However, as we will now show, $A$ is not equal to $A(N)$ for any planar network $N$. Thus the naive generalization does not seem to be appropriate.
Lemma 31. A is not equal to A(N ) for any positive planar network N .
Proof. Indeed, assume $u$ and $W$ are placed on the boundary of a Jordan curve. Since $a_{23} \neq 0$ there should be a pair of non-intersecting paths $p_2$ and $p_3$ from $u_2$ and $u_3$ to $W$ (see Figure 8). Since $a_{12} \neq 0$ there should be at least one path $p_1$ from $u_1$ to $W$. Since $a_{13} = 0$, the path $p_1$ must intersect $p_3$, and therefore $p_2$. However, in that case if we traverse $p_1$ up to the point of intersection with $p_2$ and continue along $p_2$, we obtain a path from $u_1$ to $W$ not intersecting $p_3$, contradicting our assumptions.
Proposition 32. For a network-positive $A$ and any $D \in T_n^e$ we have $\mathrm{Pfaf}_D(A) \ge 0$.

Proof. We know from Theorem 27 that $\mathrm{Pfaf}_D(A)$ has an interpretation as the weight-multiplicity generating function of certain marked subnetworks of $N$. The statement follows immediately.
For any $K \in P_n$ one can formally write $K$ as a linear combination of the symbols $\mathrm{Pfaf}'_D$. Namely, by Theorem 20 one can express $K = \sum_D c_D \mathrm{Pfaf}_D$ in terms of TL-pfaffinants. Now we use the expansions $\mathrm{Pfaf}_D = \sum_{D' \in S(D)} \mathrm{Pfaf}'_{D'}$.

Theorem 33 (cf. Corollary 3.6, [RS05a]). Let $K \in (P_n)_{\mathbb{R}}$. The following are equivalent: (1) for any network-positive $A$ one has $K(A) \ge 0$; (2) the coefficients $c'_D$ in $K = \sum_{D \in T_n} c'_D \mathrm{Pfaf}'_D$ are non-negative.

We call an element $f \in P_n$ network positive if it satisfies one of the conditions (and thus both) of Theorem 33. Let $C_n \subset P_n$ denote the cone consisting of network positive elements. Theorem 33 shows that $C_n$ is rational and polyhedral, and a simple argument using the networks $N(D)$ shows that $C_n$ is pointed (contains no lines). However, the cone $C_n$ possesses some interesting polyhedral geometry and the edge generators of $C_n$ are rather tricky to describe. Finding generators of the semigroup $C_n \cap \mathbb{Z}[\mathrm{pf}_{I,\bar I} \mid I \subset [2n]]$ of integral points is even trickier. Note that by Theorem 13 and Proposition 18, the $\mathbb{Z}$-span of $\{\mathrm{Pfaf}_D(A) \mid D \in T_n^e\}$ is equal to the $\mathbb{Z}$-span of the standard complementary pfaffians. The description of the edge generators of $C_n$ can be simplified to a combinatorial problem concerning boolean lattices.
Let us call an even symmetric diagram $D \in T_n^e$ maximal if it is $I$-maximal for the subset $I = I_{alt} = \{1, 3, 5, \ldots, 2n-1\}$. Since $D(I_{alt}) = T_n^e$, a diagram $D \in T_n^e$ is maximal if no odd edges can be added to it. By Lemma 9, $D \in T_n^e$ is maximal if and only if for every $D'$ such that $D \in S(D')$ we have $D = D'$. The following lemma says that to find the edge generators of $C_n$ we may restrict our attention to elements $f \in P_n$ which are linear combinations of TL-pfaffinants labeled by a set $S(D_m) \cap T_n^e$ for maximal $D_m$.
Lemma 34. Every edge generator of $C_n$ is a linear combination of TL-pfaffinants $\mathrm{Pfaf}_D$ with $D \in S(D_m) \cap T_n^e$ for a single maximal diagram $D_m$.

Proof. Suppose $f$ is expressed in terms of diagram pfaffinants as in Theorem 33. Suppose $D_m$ is maximal and $D \in S(D_m)$. By Lemma 9, the summation in (2) can be taken over $D \in (T_n^e \cap S(D_m))$ satisfying $D' \in S(D)$ instead. Also using Lemma 8, this shows that $f_{D_m}$ lies in $C_n$. Now let $D_m$ be maximal. By the proof of Lemma 8 the diagrams $D' \in S(D_m)$ form a boolean lattice $B_s = 2^{[s]}$ under the order $D_1 < D_2 \Leftrightarrow D_1 \in S(D_2)$. When $s$ is even, the even diagrams $S(D_m) \cap T_n^e$ correspond to the even levels $B_s^e$ in $B_s$. When $s$ is odd, the even diagrams $S(D_m) \cap T_n^e$ correspond to the odd levels $B_s^o$ in $B_s$. The edge generators of $C_n$ can then be calculated by solving the following problem.
Proposition 37. Suppose $|I| = n$. The difference $\mathrm{pf}_{\min(I,\bar I),\overline{\min(I,\bar I)}} - \mathrm{pf}_{I,\bar I}$ is network positive.

Proof. Write $I = \{i_1 < \cdots < i_n\}$ and $\bar I = \{j_1 < \cdots < j_n\}$, so that $\min(I,\bar I) = \{\min(i_1,j_1), \ldots, \min(i_n,j_n)\}$. We shall show that $D(I) \subset D(\min(I,\bar I))$. The result will then follow from Theorems 7 and 33. So let $D \in D(I)$ and suppose that $(i < j) \in D$. Then either $i \in I$ and $j \in \bar I$, or $i \in \bar I$ and $j \in I$. We need to show that exactly one of $(i,j)$ lies in $\min(I,\bar I)$. The key fact (3) is that, in this situation, $i_a < j_a$ if and only if $i_b < j_b$. Suppose that $i = i_a \in I$ and $j = j_b \in \bar I$. If $i_a < j_a$ then $i \in \min(I,\bar I)$, and furthermore $i_b < j_b$ by (3), so that $j \notin \min(I,\bar I)$. Otherwise, if $i_a > j_a$ we deduce by (3) that $i_b > j_b$; so we conclude again that exactly one of $(i,j)$ lies in $\min(I,\bar I)$. The case that $i \in \bar I$ and $j \in I$ is similar.
Relation between pfaffinants and immanants
4.1. Rhoades and Skandera's Temperley-Lieb immanants. The Temperley-Lieb immanants were discovered by Rhoades and Skandera [RS05a], who gave a number of remarkable positivity properties of these immanants. The exposition we now give is similar to the presentation in [LPP], to which we refer for unexplained notation.
Let $\mathrm{TL}_n$ be the set of Temperley-Lieb diagrams on $2n$ points $\{1, 2, \ldots, 2n\}$, with $\{1, 2, \ldots, n\}$ arranged top to bottom on the left side of a rectangle and $\{n+1, \ldots, 2n\}$ arranged bottom to top on the right side. Let $w$ be a permutation in $S_n$. By abuse of notation we also denote by $w$ a chosen wiring diagram, thought of as a planar network connecting the $n$ source points on the left to $n$ sink points on the right. Now uncross the crossings of $w$ in all possible ways, each crossing becoming either a vertical or a horizontal uncrossing. Let $X(w)$ be the set of such uncrossings, and for $x \in X(w)$ let $D(x)$ be the element of $\mathrm{TL}_n$ topologically equivalent to $x$ (with any loops removed). Let $h(x)$ be the number of horizontal uncrossings in $x$ and let $l(x)$ be the number of loops formed. Define the weight $\mathrm{wt}(x)$ of $x$ by $\mathrm{wt}(x) = 2^{l(x)}(-1)^{h(x)}$. For $d \in \mathrm{TL}_n$ define $f_d : S_n \to \mathbb{Z}$ by
$$f_d(w) = \sum_{\substack{x \in X(w)\\ D(x) = d}} \mathrm{wt}(x).$$
Let $B = (b_{ij})$ be an $n \times n$ matrix. Then for $d \in \mathrm{TL}_n$ the TL-immanant $\mathrm{Imm}_d(B)$ is defined as
$$\mathrm{Imm}_d(B) = \sum_{w \in S_n} f_d(w)\, b_{1,w(1)} \cdots b_{n,w(n)}.$$
4.2. Expressing TL-immanants as TL-pfaffinants. Let $A = (a_{ij})_{1 \le i < j \le 2n}$ be an upper-triangular array such that $a_{ij} = 0$ if $1 \le i < j \le n$ or $n+1 \le i < j \le 2n$. Let $B = (b_{ij})$ be the $n \times n$ matrix given by $b_{ij} = a_{i,j+n}$. Our aim is to relate the TL-pfaffinants $\mathrm{Pfaf}_D(A)$ with the TL-immanants $\mathrm{Imm}_d(B)$.
Call a subset $I \subset [2n]$ balanced if $|I \cap [n]| = |I \cap \{n+1, \ldots, 2n\}|$.

Lemma 39. If $I$ is not balanced then $\mathrm{pf}_{I,\bar I}(A) = 0$. If $I$ is balanced then $\mathrm{pf}_{I,\bar I}(A)$ is, up to sign, equal to a product of two complementary minors of $B$.

Proof. The first statement is clear, since if $I$ is not balanced any matching contains an edge corresponding to a zero entry of $A$. The second statement follows from the observation that $\mathrm{pf}(A) = (-1)^{\binom{n}{2}} \det(B)$.
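The sign in the last observation can be checked numerically. The following script (ours, not from the paper; it uses a standard Laplace-type expansion of the pfaffian along the first row) compares the pfaffian of the block matrix with $\det(B)$ for a random integer matrix $B$.

```python
import numpy as np

def pfaffian(A):
    """pf(A) via expansion along the first row, for a skew-symmetric numpy array A."""
    m = len(A)
    if m == 0:
        return 1.0
    total = 0.0
    for j in range(1, m):
        keep = [k for k in range(m) if k not in (0, j)]
        # (-1)**(j+1) is the sign of the term a_{1,j+1} in 1-indexed notation
        total += (-1) ** (j + 1) * A[0, j] * pfaffian(A[np.ix_(keep, keep)])
    return total

rng = np.random.default_rng(0)
n = 3
B = rng.integers(-3, 4, size=(n, n)).astype(float)
# Zero blocks on [n] and [n+1, 2n]; the upper-right block of A is B.
A = np.block([[np.zeros((n, n)), B], [-B.T, np.zeros((n, n))]])
assert np.isclose(pfaffian(A), (-1) ** (n * (n - 1) // 2) * np.linalg.det(B))
```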
Thus non-zero products of complementary pfaffians of A are up to sign equal to products of complementary minors of B. Hence one should be able to express the TL-pfaffinants of A in terms of the TL-immanants of B.
Let $d \in \mathrm{TL}_n$. Define a matching $\nu(d)$ of $[2n] \cup [2n]'$ as follows: interpret the left side of $d$ (originally labeled $\{1, 2, \ldots, n\}$) as the vertices from $1$ to $n$ and the right side of $d$ (originally labeled $\{2n, 2n-1, \ldots, n+1\}$) as the vertices from $(n+1)'$ to $2n'$. Now force $\nu(d)$ to be mirror-symmetric by adding the edge $(i,j)$ (resp. $(i',j')$, $(i,j')$) whenever the edge $(i',j')$ (resp. $(i,j)$, $(i',j)$) is present in $d$. Let $X(d)$ be the set of all ways to uncross all crossings in $\nu(d)$, where as in Section 2.4 we always uncross mirror-symmetric crossings in the same manner. As usual, we pick the embedding of $\nu(d)$ so that no pair of edges intersects more than once or has a point of tangency, and no three edges intersect at a single point.
We define the weight $\mathrm{wt}(x)$ of an element $x \in X(d)$ as
$$\mathrm{wt}(x) = 2^{l(x)}(-1)^{uv(x)+ph(x)},$$

Figure 10. The matching $\nu(d)$ produced from a TL-diagram $d$.
where $l(x)$, $uv(x)$, $ph(x)$ are as defined in Section 2.4. Similarly we define $D(x) \in T_n$ to be the symmetric TL-diagram obtained from the uncrossing $x$. We define $g_D : \mathrm{TL}_n \to \mathbb{Z}$ by
$$g_D(d) = \sum_{\substack{x \in X(d)\\ D(x) = D}} \mathrm{wt}(x).$$
Denote by $z(d)$ the number of edges in $d$ with both ends in $[n]$. Finally, let $\tilde g_D(d) = (-1)^{z(d)\cdot n} g_D(d)$. Now we proceed as in the proof of Theorem 7. Suppose $x \in X(d)$ is an uncrossing of $\nu(d)$ such that $D(x) \in D(I)$. We direct all the strands and loops in $x$ so that the initial vertex of each strand belongs to $I \cup (\bar I)'$ (and thus the end vertex belongs to $\bar I \cup I'$). We allow the closed loops to be directed in either direction. Now define an almost sign-reversing involution on this set of oriented diagrams exactly as in Theorem 7.
Finally we must calculate $(-1)^{uv(x)+ph(x)}$ for $x(d)$. The unpaired crossings between $(i,j')$ and $(j,i')$ are always uncrossed horizontally, so contribute nothing to the sign. The paired crossings which are uncrossed horizontally correspond to pairs of edges $(i_1 < j_1) \in d$ and $(i_2 < j_2) \in d$, both of which are horizontal and such that both $i_1, i_2 \in I_1$ or both $i_1, i_2 \in I_2$. Thus for $d \in D(S)$ one can compute the coefficient of $\mathrm{Imm}_d(B)$; the resulting identity can be proven by induction on $z(d)$, noting that $(-1)^{\binom{k}{2}} = 1$ if $k \equiv 0, 1 \bmod 4$ and $(-1)^{\binom{k}{2}} = -1$ if $k \equiv 2, 3 \bmod 4$. Now summing over all $d \in \mathrm{TL}_n$ and using Theorem 38 and Lemma 39, we see that $\mathrm{pf}_{I,\bar I}(A) = \sum_{D \in D_{\max}(I)} \widetilde{\mathrm{Pfaf}}_D(A)$.
4.3. Quadratic relations between TL-pfaffinants and TL-immanants. Let $A$ be a skew-symmetric $2n \times 2n$ matrix. The following formula is well known (see for example [Ste90]):
$$\mathrm{pf}(A)^2 = \det(A).$$
For $n = 2$ one can express the TL-immanants in terms of TL-pfaffinants; for instance one obtains expressions of the form $2\,\mathrm{Pfaf}_L^2 + 2\,\mathrm{Pfaf}_L \mathrm{Pfaf}_N + \mathrm{Pfaf}_N^2$. However, for $n > 2$ the TL-immanants cannot be expressed in a similar manner through TL-pfaffinants. For example, with $n = 3$ the immanant corresponding to the diagram with edge set $\{(2,3), (4,5), (6,7), (8,9), (10,11), (1,12)\}$ does not lie in the span of the products of the TL-pfaffinants. It remains unclear if any relation between TL-immanants and TL-pfaffinants of a skew-symmetric matrix can be established in general.
Schur Q-positivity
In this section we discuss some conjectural applications of TL-pfaffinants to positivity properties of Schur Q-functions. Many of our results and conjectures can be stated alternatively in terms of Schur P -functions, but we will not do so explicitly.
5.1. Shifted tableaux. For further details concerning the material of this section we refer the reader to [Mac].
Let $\lambda = (\lambda_1 > \lambda_2 > \cdots > \lambda_l > 0)$ be a strict partition. We will not distinguish between $\lambda$ and its shifted diagram $S(\lambda)$, obtained by shifting the $i$-th row of the usual (Young) diagram $(i-1)$ squares to the right, for each $i$. More generally, if $\lambda$ and $\mu$ are two strict partitions so that $S(\mu) \subset S(\lambda)$ then the skew shifted diagram is denoted $\lambda/\mu$. Our notation for diagrams follows the English notation, so that Young diagrams are top-left justified.
A shifted tableau $T$ with shape $\mathrm{sh}(T) = \lambda/\mu$ is a filling of the shifted diagram $\lambda/\mu$ with the numbers $1', 1, 2', 2, \ldots$ so that (1) the rows and columns are weakly increasing under the order $1' < 1 < 2' < 2 < \cdots$; (2) there is at most one occurrence of $i'$ in a row; (3) there is at most one occurrence of $i$ in a column.
The weight $\mathrm{wt}(T)$ of a shifted tableau is the composition $\alpha = (\alpha_1, \alpha_2, \ldots)$ where $\alpha_i$ is equal to the combined number of the letters $i$ and $i'$ used in $T$. The Schur $Q$-function $Q_{\lambda/\mu}(x)$ is defined as
$$Q_{\lambda/\mu}(x) = \sum_T x^{\mathrm{wt}(T)},$$
where the sum is over all shifted tableaux $T$ of shape $\lambda/\mu$. Though it is not immediate from the definition, the function $Q_{\lambda/\mu}(x)$ is a symmetric function in the variables $x_1, x_2, \ldots$.
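For small straight shapes the definition can be checked by brute force. The sketch below (ours; the function names and letter encoding are assumptions made for illustration) enumerates fillings satisfying rules (1)-(3) in the variables $x_1, \ldots, x_m$ and tallies the weight of each tableau.

```python
from itertools import product

def shifted_cells(lam):
    """Cells (row, col) of the shifted diagram of a strict partition lam."""
    return [(r, c) for r, part in enumerate(lam) for c in range(r, r + part)]

def schur_q_weights(lam, m):
    """Encode 1' < 1 < 2' < 2 < ... < m' < m as 1 < 2 < 3 < 4 < ...;
    letter e has value (e + 1) // 2 and is primed iff e is odd.
    Returns {weight tuple: number of shifted tableaux with that weight}."""
    cells = shifted_cells(lam)
    counts = {}
    for filling in product(range(1, 2 * m + 1), repeat=len(cells)):
        T = dict(zip(cells, filling))
        ok = True
        for (r, c), e in T.items():
            right, below = T.get((r, c + 1)), T.get((r + 1, c))
            if right is not None and right < e:
                ok = False            # rows weakly increase
            if below is not None and below < e:
                ok = False            # columns weakly increase
            if right == e and e % 2 == 1:
                ok = False            # at most one i' in each row
            if below == e and e % 2 == 0:
                ok = False            # at most one i in each column
        if ok:
            wt = [0] * m
            for e in filling:
                wt[(e + 1) // 2 - 1] += 1
            counts[tuple(wt)] = counts.get(tuple(wt), 0) + 1
    return counts

# schur_q_weights((1,), 2) == {(1, 0): 2, (0, 1): 2}, i.e. Q_1 = 2x1 + 2x2 in two variables.
```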
5.2. Schur $Q$-functions and pfaffians. Schur $Q$-functions can be expressed as pfaffians, as follows. First extend the notation of Schur $Q$-functions by defining $Q_{-r} = 0$ for $r > 0$ and $Q_{(r,s)} = -Q_{(s,r)}$. Define the $l \times l$ skew-symmetric matrix $A_\lambda = [Q_{(\lambda_i,\lambda_j)}]_{1 \le i,j \le l}$, where $\lambda_i$ is the $i$-th part of $\lambda$ and $l = l(\lambda)$ is the number of parts of $\lambda$. By possibly adding an extra zero part to $\lambda$, we may assume that $l$ is even. The following theorem can be found in [Mac].

Theorem ([Mac]). We have $Q_\lambda = \mathrm{pf}(A_\lambda)$.
A skew version of this formula was proved by Józefiak and Pragacz [JP]. Let $\lambda/\mu$ be a skew shifted shape where $\lambda = (\lambda_1 > \cdots > \lambda_l > 0)$ and $\mu = (\mu_1 > \mu_2 > \cdots > \mu_r \ge 0)$. We assume that $l + r$ is even. Let $H = (h_{ij})$ be the $l \times r$ matrix with $h_{ij} = Q_{\lambda_i - \mu_{r+1-j}}$. Define the skew-symmetric matrix
$$A_{\lambda/\mu} = \begin{pmatrix} A_\lambda & H \\ -H^t & 0 \end{pmatrix},$$
so that $Q_{\lambda/\mu} = \mathrm{pf}(A_{\lambda/\mu})$. We call the matrix $A_{\lambda/\mu}$ a $Q$-Jacobi-Trudi matrix. If we allow $\lambda$ and $\mu$ in the definition to possibly be non-strict partitions then we call $A_{\lambda/\mu}$ a generalized $Q$-Jacobi-Trudi matrix.
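As a small worked instance of the non-skew formula (ours, not in the original), take $\lambda = (3,2,1)$; padding with a zero part makes $l = 4$ even, the last column consists of the entries $Q_{(\lambda_i,0)} = Q_{\lambda_i}$, and the $4 \times 4$ pfaffian $a_{12}a_{34} - a_{13}a_{24} + a_{14}a_{23}$ gives
$$Q_{(3,2,1)} = \operatorname{pf}\begin{pmatrix}
0 & Q_{(3,2)} & Q_{(3,1)} & Q_{3}\\
-Q_{(3,2)} & 0 & Q_{(2,1)} & Q_{2}\\
-Q_{(3,1)} & -Q_{(2,1)} & 0 & Q_{1}\\
-Q_{3} & -Q_{2} & -Q_{1} & 0
\end{pmatrix}
= Q_{(3,2)} Q_{1} - Q_{(3,1)} Q_{2} + Q_{3} Q_{(2,1)}.$$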
5.3. Schur $Q$-positivity and pfaffinants. As we saw in Section 3.4, network positivity of an element $f \in P_n$ depends on the decomposition of $f$ into diagram pfaffinants $\mathrm{Pfaf}'_D$. Somewhat more surprisingly, we conjecture that this decomposition is also related to Schur $Q$-positivity.
Conjecture 46. Suppose $f \in P_n$ can be expressed as $f = \sum_D c_D \mathrm{Pfaf}'_D$ with non-negative coefficients $c_D$. Then for any generalized $Q$-Jacobi-Trudi matrix $A_{\lambda/\mu}$, the evaluation $f(A_{\lambda/\mu})$ is a non-negative linear combination of Schur $Q$-functions.
Theorem 49. Let $\lambda/\mu$ and $\nu/\rho$ be skew shifted shapes. Then the difference
$$Q_{(\lambda/\mu)\vee(\nu/\rho)}\, Q_{(\lambda/\mu)\wedge(\nu/\rho)} - Q_{\lambda/\mu}\, Q_{\nu/\rho}$$
is a non-negative sum of Stembridge's peak functions $K_\alpha$.
We will not give the definition of the peak functions $K_\alpha$ here and refer the reader to [Ste97] for full details. The $K_\alpha$ form a basis for a subalgebra $\Pi$ of the algebra of quasi-symmetric functions, and the $K_\alpha$ take the place of the fundamental quasi-symmetric functions in $\Pi$. The Schur $Q$-functions $Q_{\lambda/\mu}$ lie in this subalgebra $\Pi$ and are known to be positive in the basis $\{K_\alpha\}$. We now make the following stronger conjecture.
Conjecture 50. Let $\lambda/\mu$ and $\nu/\rho$ be skew shifted shapes. Then the difference
$$Q_{(\lambda/\mu)\vee(\nu/\rho)}\, Q_{(\lambda/\mu)\wedge(\nu/\rho)} - Q_{\lambda/\mu}\, Q_{\nu/\rho}$$
is a non-negative combination of Schur $Q$-functions.
Conjecture 50 is a Schur Q-function version of what we call the cell transfer theorem. The monomial positivity version was proved in [LP05], the fundamental quasi-symmetric function version in [LP06] and the Schur positivity version in [LPP]. As explained in the introduction of [LP06], these positivity phenomena arise from a collection of data: (a) a class of posets, (b) a ring containing the generating functions of "tableaux", (c) a basis of this ring, and (d) a set of skew functions. In our case, (a) the posets are shifted Young diagrams, (b) the ring is the subalgebra of the ring of symmetric functions generated by the odd power sums, (c) the basis is the set of Schur Q-functions for non-skew shifted shapes, and (d) the skew functions are the Schur Q-functions labeled by skew shifted shapes.
Proof. Let $\pi$ be the (possibly no longer strict) partition obtained from taking the union of the parts of $\lambda$ and $\nu$. While $\pi$ is not necessarily a strict partition, we can still formally define the matrix $A_\pi$ as above. Clearly, $\mathrm{pf}_{I,\bar I}(A_\pi) = Q_\lambda Q_\nu$ for the appropriate choice of $I$. Now recall the definition of $\min(I,\bar I)$ from Section 3.6. We have
$$\mathrm{pf}_{\min(I,\bar I),\overline{\min(I,\bar I)}}(A_\pi) - \mathrm{pf}_{I,\bar I}(A_\pi) = Q_{(\lambda/\mu)\vee(\nu/\rho)} Q_{(\lambda/\mu)\wedge(\nu/\rho)} - Q_{\lambda/\mu} Q_{\nu/\rho}.$$
By the proof of Proposition 37, the difference $\mathrm{pf}_{\min(I,\bar I),\overline{\min(I,\bar I)}} - \mathrm{pf}_{I,\bar I}$ is a non-negative linear combination of the TL-pfaffinants $\mathrm{Pfaf}_D$. Conjecture 46 implies that $\mathrm{Pfaf}_D(A_\pi)$ is Schur $Q$-positive, from which the result follows.

5.5. Further Schur $Q$-positivity conjectures. The usual Schur function analogue of Conjecture 50 was established in [LPP].
Theorem 52 was used to resolve a number of conjectures of Fomin, Fulton, Li, Poon [FFLP], of Lascoux, Leclerc, Thibon [LLT] and of Okounkov [Oko]. We now state the shifted analogue of the Fomin-Fulton-Li-Poon conjecture.
Proof. First note that if $\lambda/\mu$ and $\nu/\rho$ are skew shifted diagrams obtained from each other via a translation then $Q_{\lambda/\mu} = Q_{\nu/\rho}$. For a shifted shape $\lambda$, let $\lambda^\downarrow$ denote the skew shifted shape obtained by translating $\lambda$ down one row (and hence also one step to the right). We will assume that $\lambda^\downarrow$ is presented as $\nu/\rho$ where $\nu_1 = \rho_1$ is very large (much larger than any other parts involved in the proof). If $\nu/\rho$ is a shifted shape such that $\nu_1 = \rho_1$ we let $(\nu/\rho)^\uparrow$ denote the shifted shape obtained by translating it one row up (and hence also one step to the left).
If we apply Conjecture 50 to the Schur $Q$-functions indexed by the pairs of partitions $(\rho, \nu)$, we see that for each iteration of the above map $Q_{\rho^{**}} Q_{\nu^{**}} - Q_\rho Q_\nu$ is Schur $Q$-positive. This proves the theorem.
Our proof here is very similar to an analogous proof in [LPP], where left and right shifts are used instead of our up and down translations. It would be interesting to generalize other Schur positivity results and conjectures to the shifted case.
We note the following result, which follows from Theorem 49 and the proof of Proposition 54.
Proof of Theorem 4
Let $A$ and $B$ denote two nice embeddings of $\nu(\pi)$, and denote by $f_D(A)$ and $f_D(B)$ the weight generating functions of uncrossings defined by $A$ and $B$ respectively (see Section 2.4). By replacing $A$ or $B$ with a small deformation which is combinatorially equivalent we may assume, even if we draw all the edges of $A$ and $B$, that (a) no two edges have a point of tangency and (b) no three strings cross at a single point. However, an edge of $A$ and an edge of $B$ may intersect more than once.
We now argue that $A$ and $B$ are connected by a sequence of three types of Reidemeister-like moves, denoted $R_\alpha$, $R_\beta$ and $R_\gamma$, as shown in Figure 12. Let $(i,j)$ be an edge in $\nu(\pi)$. To change $A$ to $B$, we move the embedding of $(i,j)$ in $A$ continuously until it agrees with the embedding of $(i,j)$ in $B$; and we repeat for each edge of $\nu(\pi)$. Note that we will always move the mirror-symmetric edge simultaneously so that the diagram is always mirror-symmetric. There are three types of "singularities" which may occur during this process, changing the combinatorial type of the embedding. These singularities violate the conditions (a) and (b) above.

$R_\alpha$: If the singularity occurs on the vertical axis of symmetry then one obtains a quadruple intersection between two pairs of mirror-symmetric edges, violating both conditions (a) and (b). The Reidemeister move $R_\alpha$ allows one to pass from one side of the singularity to the other.

$R_\beta$: If the singularity is a paired singularity, it may involve three edges crossing at the same point, giving the move $R_\beta$.

$R_\gamma$: If the singularity is a paired singularity, it may involve a point of tangency, giving the move $R_\gamma$.
Note that $R_\alpha$ allows us to permute the crossing points on the vertical axis of symmetry, while $R_\beta$ and $R_\gamma$ allow us to do all the other required changes. During this process the rule that no two edges cross more than once can be violated (by moves $R_\alpha$ or $R_\gamma$).
To complete the proof we show that $f_D(A) = f_D(A')$ if $A$ and $A'$ are related by a Reidemeister-like move.
$R_\alpha$: We may use the move $R_\gamma$ (preserving $f_D$) to replace the initial and final pictures with the two intermediate ones shown in Figure 13. Now using the calculation of Example 5 we may obtain the one intermediate picture from the other, while again preserving $f_D$.

$R_\beta$: There are three pairs of (mirror-symmetric) crossings, giving a total of 8 uncrossings for the initial and final pictures. Denote the three edges coming from the left by $a, b, c$ from top to bottom and the three edges exiting to the right by $a', b', c'$. An uncrossing of this local picture will give a matching of $a, a', b, b', c, c'$ together with a weight. One obtains a table of the weights of the 8 uncrossings showing that the weight generating functions agree for each matching. The "initial" embedding here is the top picture in Figure 12.

$R_\gamma$: For the initial (top) embedding, the picture has 4 uncrossings. Three of these 4 uncrossings give a matching (the vertical one) which does not occur for the final (bottom) embedding, but their weights (respectively $2, -1, -1$) cancel out. For the other (horizontal) matching we obtain the same contribution of $1$ for both the initial and final embeddings. This completes the proof of Theorem 4.
"year": 2006,
"sha1": "82495808d859357d0b54eac08f1a651af7524ec2",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "https://doi.org/10.1016/j.aim.2008.03.027",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "82495808d859357d0b54eac08f1a651af7524ec2",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
19287590 | pes2o/s2orc | v3-fos-license | Quantized Vortices in Superfluids and Superconductors
We give a general review of recent developments in the theory of vortices in superfluids and superconductors, discussing why the dynamics of vortices is important, and why some key results are still controversial. We discuss work that we have done on the dynamics of quantized vortices in a superfluid. Despite the fact that this problem has been recognized as important for forty years, there is still a lot of controversy about the forces on and masses of quantized vortices. We think that one can get unambiguous answers by considering a broken symmetry state that consists of one vortex in an infinite ideal system. We argue for a Magnus force that is proportional to the superfluid density, and we find that the effective mass density of a vortex in a neutral superfluid is divergent at low frequencies. We have generalized some of the results for a neutral superfluid to a charged system.
I. INTRODUCTION
From the very beginning it has been realised that quantized vortices play an important part in the behavior of superfluids 1 . Both in neutral superfluids and in superconductors it is the vortices that provide a mechanism for the decay of superfluid currents in a ring. The circulation, for a neutral superfluid, or the trapped flux, for a superconducting ring, is quantized, and the current can only decay by a change of the quantum number by an integer, which can occur by the passage of a vortex (or quantized flux line) across the ring, from one edge to the other. In superconductors in a high magnetic field, the motion of flux lines is the main mechanism for electrical resistance. At high temperatures the movement of vortices is a thermally activated process, but at low enough temperatures the dominant mechanism must be by quantum tunneling. It is therefore important to understand the dynamics of vortices, in order to be able to evaluate the dissipative processes that occur in neutral superfluids and in superconductors.
Despite the obvious importance of the problem, the theory has been in a most unsatisfactory state. There are many conflicting results in the literature. We did not realise the extent of this disagreement, and it was initially a surprise to us that a result we obtained would be dismissed by one knowledgeable critic as too obvious to be worth discussing, and by another equally eminent critic as well known to be wrong; this has happened to us several times. There are real problems here, connected with questions of suitable boundary conditions, and there is often a question of whether two different calculations are finding the same result by two different ways, or if they are finding two different contributions which must be added.
To our surprise, we found, about two years ago, that we could get an exact result for one of the two parameters that determine the transverse force on a moving vortex, using only general properties of superfluid order 2 . The second parameter was determined by a straightforward thermodynamic argument a little over a year later 3 . The first of these results still seems to be controversial 4 , and some elements of the argument deserve closer scrutiny than they have received so far, but it is our belief that it should be possible to construct a firmly founded theory on the basis that we have tried to establish.
II. ELECTRONS IN MAGNETIC FIELDS AND VORTICES
There are strong analogies between the behavior of electrons in strong magnetic fields and of vortices in superfluids. These analogies enable us to make use of some of the insights that have been obtained in the study of the quantum Hall effect to understand problems connected with vortex dynamics.
In both cases there is a transverse force proportional to velocity. The Lorentz force for electrons is proportional to the vector product of the electron velocity and the magnetic field, F_L = −e v × B. This can be represented by a term eB x ẏ in the Lagrangian. The Magnus force acting on a vortex is proportional to the vector product of the velocity of the vortex relative to the fluid and a vector directed along the vortex core. Each of these forces leads to a path-dependent but speed-independent term in the action. In a quantum theory the phase is equal to the action divided by ℏ; this corresponds to a Berry phase, a phase which depends on the path, but not on the rate at which the path is traversed 5,6.
In both cases there is considerable arbitrariness in the value of this phase. For the electron this is due to the arbitrariness of the vector potential which is used to represent the magnetic field, while for the vortex there is a similar arbitrariness in the way the transverse force is represented in a Lagrangian. In either case the change in action or phase when the electron or vortex traverses a closed path is well determined. For the electron the phase change on a closed path is equal to 2π times the number of flux quanta enclosed by the path. For the vortex the phase change is equal to 2π times the average number of atoms enclosed by the surface swept out by the closed path of the vortex.
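In symbols, the two counting statements can be put side by side (a compact restatement of the text above, writing Φ for the enclosed flux and ⟨N⟩ for the average number of atoms enclosed by the surface swept out by the vortex path; nothing here goes beyond what is stated in words):
\[
\Delta\gamma_{\mathrm{electron}} \;=\; \frac{e}{\hbar}\oint \mathbf{A}\cdot d\boldsymbol{\ell} \;=\; 2\pi\,\frac{\Phi}{h/e},
\qquad
\Delta\gamma_{\mathrm{vortex}} \;=\; 2\pi\,\langle N\rangle .
\]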
The dominant correction to quantization of the Hall conductance comes from tunneling or activated transport between states on the two edges of a quantum Hall bar. The dominant mechanism for decay of supercurrents is tunneling or activated transport of vortices across the system. In many real systems there is a tangle of pre-existing vortices frozen in when the system is cooled below the critical temperature, and these can serve as sources for the vortices that cross the system. Under ideal conditions, and some modern experiments on helium approach such ideal conditions 7,8 , there are no vortices in equilibrium, and a vortex loop must be created from nothing in the interior, or a line must be created at the boundary (with its image this constitutes a loop), and cross the system to be annihilated at the opposite boundary.
Electrons in a magnetic field have a fast cyclotron motion around the guiding center. Canonical variables can be chosen as two conjugate pairs: the velocity components v_x, v_y, rescaled by m²/eB, which give the fast motion, and the guiding center coordinates X, Y, rescaled by eB, which give the slow motion. Vortices also have such a fast cyclotron-like motion, in which the vortex core circles around the center of the flow it induces. In addition, since the vortex is a string rather than a point, it has the low frequency Thomson modes, circularly polarized modes of oscillation analogous to the modes of oscillation of a stretched string. For the vortex the two coordinates X, Y of the position of the vortex in the plane perpendicular to the core are also conjugate variables, as is manifest from the classical theory of vortex motion which can be found in Lamb's Hydrodynamics, and which Lamb credits to an 1880s book on Mechanics by Kirchhoff.
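The conjugacy can be made explicit (the electron relation is the standard guiding-center algebra; the vortex relation is a sketch read off from the Magnus-force term in the Lagrangian described in the previous section, with ρ_s the superfluid mass density, κ_0 = h/M the circulation quantum and L_z the length of the line, signs depending on orientation conventions):
\[
\bigl|[X,\,Y]_{\mathrm{electron}}\bigr| = \frac{\hbar}{eB},
\qquad
\bigl|[X,\,Y]_{\mathrm{vortex}}\bigr| = \frac{\hbar}{\rho_s \kappa_0 L_z},
\]
so that in each case one quantum state occupies the area over which the Berry phase grows by 2π: one flux quantum for the electron, one atom swept over for the vortex.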
The most important difference is that we think we understand the Schrödinger equation for electrons, whereas a vortex in a superfluid is a complicated many-body entity. The relation of the collective variables describing the vortex motion to the single-particle variables describing the superfluid is not obvious.
III. GENERAL FEATURES OF VORTICES IN SUPERFLUIDS
A vortex is a composite object in a many-body system. Its motion may be described by collective variables, but its structure depends on all the single-particle variables of the superfluid, and the relation between these single-particle variables and the collective variables is, as usual, obscure. Feynman 9 proposed describing a vortex by taking the ground state wave function, symmetric in all the single particle variables for a boson superfluid, and multiplying it by a factor of the form ∏_j f(r_j) e^{iθ_j}, where θ_j is the azimuthal angle made by the particle j with the vortex core, r_j is its distance from the core, and f is some real function which is close to unity everywhere except where r_j is of the order of the radius of the vortex core, and which goes to zero at r_j = 0 in order to prevent large kinetic energy contributions from the rapid variation of phase close to the vortex core. A similar description of the core is obtained from the Ginzburg-Pitaevskii equations for the order parameter near the critical temperature 10, or from the Gross-Pitaevskii nonlinear Schrödinger equation for the condensate of the dilute Bose gas at zero temperature 11,12. In these theories the vortex is described by mean-field-like equations, so that the position of the singularity at the vortex core has a sharp value, although we know that the two components of its position in the plane are conjugate variables. Somehow we should be able to construct a quantized version of the theory which takes account of this consequence of the Magnus force.
In a strongly type II (Shubnikov) superconductor the situation is somewhat similar, except that the current circulating round the vortex core generates a magnetic field parallel to the core, which in turn generates a vector potential that reduces the current, so that a total of one quantum of flux h/2e is associated with the vortex, or flux line, and no current is associated with the change of the phase angle at large distances. In a type I (Pippard) superconductor the character of the singularity is mainly trapped flux, but the singly quantized flux line is not stable in a uniform magnetic field, and it is thermodynamically favorable for the flux lines to aggregate and form a region of normal metal. In either case Landau-Ginzburg theory can be used to describe the vortex.
In classical incompressible fluid mechanics the hydrodynamic mass of a vortex is of the order of the mass of fluid displaced, but it depends in detail on the core structure. Since vortices in low temperature superfluid helium are measured to have a rather small vortex core radius, smaller than the average interatomic spacing, this mass density is relatively small, and is taken to be zero in some calculations. In recent work Duan and Leggett showed that the inertial mass of a vortex in a superconductor is finite 13, but Duan argued that the mass density of a vortex in a neutral superfluid is infinite 14. He originally described this as a result of the quantum nature of the fluid, and we found this very hard to accept. Actually it is true of all compressible fluids, but the divergence is logarithmic in the frequency of the motion, with quite a small coefficient, as Demircan, Ao and Niu have pointed out 15. Under realistic circumstances, such as in the free cyclotron motion of the vortex, or in vortex tunneling, the logarithm may be quite small, and this term relatively unimportant.
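The scale of the divergence can be seen from a rough estimate (an order-of-magnitude sketch in the spirit of refs. 14 and 15, not a calculation from this paper; a is the core radius, c the sound speed, and factors of order unity are dropped): the flow energy per unit length of a vortex is E_v ~ (ρ_s κ_0²/4π) ln(R/a), and compressibility cuts the outer radius off at the distance R ~ c/ω that sound travels in a period of the motion, so
\[
M_v(\omega) \;\sim\; \frac{E_v}{c^2} \;\sim\; \frac{\rho_s \kappa_0^2}{4\pi c^2}\,\ln\frac{c}{\omega a}
\]
per unit length, divergent as ω → 0 but only logarithmically and with a small prefactor, consistent with the remark above that the term may be relatively unimportant in practice.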
In liquid helium at relatively high temperatures, close to the critical temperature, the largest force on a moving vortex, or on a vortex that is held still while the fluid streams past it, is likely to be a drag force due to the scattering by the vortex of the excitations that make up the normal fluid. At lower temperatures the transverse (Magnus) force should dominate, but the understanding of the Magnus force is complicated by the existence of the two components of the fluid, which may affect the vortex very differently. As we have discussed already, the Magnus force has important implications for the Berry phase, and for the quantum uncertainty of the position of the vortex.
For a superconductor the situation is far more complicated, since not only is there the magnetic field due to the motion of the electrons to be considered, but the effects of disorder in the positive background are vital. Disorder makes the conductivity of the normal metal finite, and produces a drag force on vortices even at rather low temperatures, but also, if the disorder is on a large enough scale, pins the vortices and reduces the flux flow resistivity.
In our work we have concentrated on understanding an isolated vortex in an ideal, uniform, infinite superfluid. Our aim has been to understand the parameters that come into the dynamics of a vortex when its velocity relative to the background fluid is small: the effective mass, the transverse component of the force, and the longitudinal (dissipative) component of the force. This is clearly not a program for a complete understanding of vortex dynamics, since, even if it were completely successful, we might still be concerned with strongly nonlinear regions in realistic situations, such as those found when quantum tunneling appears to be observed. Particularly for the transverse force, we think we have clean and precise results that are, inevitably, in conflict with widely accepted theories.
In our work the vortex is controlled by some pinning potential that can be manipulated from outside. The pinning potential can be rather weak, or a macroscopic wire, so long as it has cylindrical symmetry. For quantities such as the effective mass and the longitudinal force the nature of the pinning potential has an effect on the answer, and we may need to consider some suitable limiting process to make the strength of the potential tend to zero, but for the transverse force we find that the answer is independent of the form or strength of the pinning potential.
IV. THE MAGNUS FORCE IN NEUTRAL SUPERFLUIDS
There is no agreement about what the forces acting on a vortex in a neutral superfluid are. The simplest quantity to calculate should be the component of force perpendicular to the motion of the vortex relative to the substrate, the analog of the Magnus force for classical fluids, yet two recently quoted forms look quite different. In Donnelly's book on Quantized Vortices in Helium II 16 the force per unit length is quoted as the sum of a Magnus term proportional to ρ_s K × (v_V − v_s), an Iordanskii term proportional to ρ_n K × (v_V − v_n), and a scattering term with coefficient σ. Here K is a vector along the vortex line whose magnitude is the quantum of circulation h/M; v_V, v_s and v_n are the velocities of the vortex, the superfluid component and the normal fluid component; ρ_s and ρ_n are the superfluid and normal fluid densities; σ is a coefficient whose value is not exactly determined. Volovik, however, in a number of recent papers 17,18, quotes a form containing Magnus and Iordanskii terms together with a term with coefficient C_F, which occurs only for fermion superfluids, such as the B phase of superfluid 3He, and is due to spectral flow of the low energy states in the vortex core. The first term in each of these expressions is referred to as the Magnus force, the term proportional to ρ_n as the Iordanskii force, so the use of these two terms is quite different for the two authors. Donnelly's term proportional to σ comes from phonon or roton scattering by the vortex, and it is only if this is equal to zero that the two expressions are in agreement for the case of bosons. Whatever the form of this force, Galilean invariance tells us that there are only two parameters to be determined. If we know the coefficients of v_V and v_s, the coefficient of v_n must be equal to minus the sum of the other two coefficients. We argue, in the rest of this section, that the only transverse force has the form
F = ρ_s K × (v_V − v_s),
by determining separately the coefficients of v_s and v_V. Wexler 3 has given a thermodynamic argument to show that the coefficient of K × v_s is indeed −ρ_s. This result seems to be uncontroversial, and is in agreement with both Donnelly and Volovik. The argument is essentially a thermodynamic one, which considers a reversible change of the circulation in a ring by moving a vortex slowly across the system under equilibrium conditions. Consider a macroscopic ring, such as the one shown in fig. 1, with average radius R, width (difference between outer radius and inner radius) L_y and height L_z. For simplicity we assume L_y << R, but this is not essential, and the result is independent of the shape of the ring. Initially there are n quanta of circulation trapped in the ring, giving superfluid velocity v_s = nκ_0/2πR, and the normal fluid velocity is zero, since the boundaries of the ring are stationary. A pinning potential is used to insert adiabatically one vortex, which is created at the outer boundary, moved slowly across the system under constant temperature conditions, and annihilated at the inner boundary. The effect of this extra vortex is to increase the circulation from n units to n + 1, increasing the superfluid velocity by δv_s = κ_0/2πR. This increases the free energy by
δF = ρ_s v_s δv_s × 2πR L_y L_z = ρ_s κ_0 v_s L_y L_z,
since superfluid density is defined in terms of the free energy change when the superfluid velocity is changed.
This must be compared with the work done in moving the vortex of length L_z a distance L_y isothermally across the ring, which is W = F L_y L_z, where F is the magnitude of the transverse force per unit length. Comparison of these two shows that the magnitude of the transverse force per unit length, under conditions in which v_n and v_V are both negligible, is F = ρ_s κ_0 v_s. More careful analysis gives the sign and direction of this force as −ρ_s K × v_s. This argument determines the coefficient of v_s in the transverse force.
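For orientation, some rough helium II numbers (illustrative values supplied here, not taken from the paper): with κ_0 = h/m_4 ≈ 1.0 × 10⁻⁷ m²/s, ρ_s ≈ 1.4 × 10² kg/m³ and v_s = 1 cm/s, the force per unit length is
\[
\rho_s \kappa_0 v_s \;\approx\; (1.4\times 10^{2})\,(1.0\times 10^{-7})\,(10^{-2}) \;\approx\; 1.4\times 10^{-7}\ \mathrm{N/m}.
\]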
To determine the coefficient of v_V, Thouless, Ao and Niu 2 consider an infinite system with superfluid and normal fluid asymptotically at rest (v_n = 0 = v_s) in the presence of a single vortex which is constrained to move by moving the pinning potential. For simplicity we describe the two-dimensional problem of a vortex in a superfluid film, but the three-dimensional generalization is straightforward. Also we restrict this discussion to the ground state of the vortex, but the generalization to a thermal equilibrium state is straightforward. The reaction force on the pinning potential is calculated to lowest order in the vortex velocity v_V. This can be studied as a time-dependent perturbation problem, but it can be transformed into a steady state problem, with the perturbation due to motion of the vortex written as iℏ v_V · ∇_0, where ∇_0 denotes the gradient with respect to the vortex position (x_0, y_0). The force in the y direction on a vortex moving with speed v_V in the x direction can then be written as
F_y = iℏ v_V ⟨ψ_0| (∂V/∂y_0) P (E_0 − H)⁻¹ P (∂/∂x_0) |ψ_0⟩ + c.c.,
where P projects off the ground state of the vortex. Since ∂V/∂x_0 is the commutator of H with the partial derivative ∂/∂x_0, the energy denominator cancels with the H in the numerator, and so the expression is equal to the Berry phase form
F_y = iℏ v_V (⟨∂ψ_0/∂x_0 | ∂ψ_0/∂y_0⟩ − ⟨∂ψ_0/∂y_0 | ∂ψ_0/∂x_0⟩).
Since the Hamiltonian consists of kinetic energy, a translation invariant interaction between the particles of the system, and the interaction with the pinning center, which depends on the difference between the pinning center coordinates and the particle coordinates, the derivatives ∂/∂x_0, ∂/∂y_0 can be replaced by −Σ_j ∂/∂x_j, −Σ_j ∂/∂y_j, which are proportional to the components of the total particle momentum. This gives the force as a commutator of the components P_x, P_y of the total momentum,
F_y = (i v_V/ℏ) ⟨ψ_0| [P_x, P_y] |ψ_0⟩.
At first sight one might think that the two different components of momentum commute, but this depends on boundary conditions, since the momentum operators are differential operators. Actually this expression is the integral of a curl, and can be evaluated by Stokes' theorem to get
F_y = v_V ∮ ⟨g⟩ · dl,
where the integral is taken over a loop at a large distance from the vortex core and g is the momentum density (mass current density). This gives the force in terms of the circulation of momentum density at large distances from the vortex. Our result that the transverse force is equal to v_V times the line integral of the mass current is independent of the nature or size of the pinning potential. The general form of this is
F = (ρ_s K + ρ_n K_n) × v_V,
where K_n represents the normal fluid circulation.
In equilibrium the circulation of the normal fluid around a stationary vortex is zero, since circulation of the normal fluid gives rise to viscous dissipation of energy, which in turn leads to growth of the area of the normal fluid vortex core. If there is any nonequilibrium normal fluid circulation, it is not obvious that it should be quantized, or that the motion of the normal fluid vortex should be correlated with the motion of the superfluid vortex. If only the superfluid participates in the circulation round the vortex core, which seems to us to be the most reasonable assumption, this gives
F = ρ_s K × v_V.
In combination with the Wexler result for the coefficient of v_s, the total transverse force on a vortex is
F = ρ_s K × (v_V − v_s).
Only the superfluid Magnus force exists unless the normal component participates in the circulation of the superfluid to some extent. This disagrees with Donnelly's eq. (2) unless the phonon-scattering term proportional to σ cancels with the Iordanskii term proportional to ρ_n, and disagrees with Volovik's eq. (3) unless his coefficient C_F is equal to ρ_n even for bosons. The most striking feature of this is that the force is independent of the normal fluid velocity.
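As a consistency check, this is just the Galilean-invariance bookkeeping stated at the start of the section: writing the transverse force as F = K × (A v_V + B v_s + C v_n) with A + B + C = 0, the two calculations give A = ρ_s and B = −ρ_s, so
\[
C \;=\; -(A + B) \;=\; -(\rho_s - \rho_s) \;=\; 0,
\]
which is precisely the statement that the force is independent of the normal fluid velocity.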
It agrees with the obvious generalization of the classical Magnus force argument to two-fluid dynamics. This argument considers the force-momentum balance in a large cylinder surrounding a vortex which is held stationary while the fluid flows past it. The Bernoulli pressure on the cylinder and the momentum flux across the boundary of the cylinder balance the force on the vortex. In a two-fluid generalization of this there are separate contributions from the product of superfluid circulation with superfluid velocity, and from the product of normal fluid circulation with normal fluid velocity.
Since we have only had to consider global properties involving momentum conservation and conditions at a long distance from the vortex core, and have not needed to make any detailed consideration of conditions at the core of the vortex, we believe that our arguments are valid for a fermion superfluid described by a single complex order parameter. The B phase of 3 He does not quite meet this condition, but the order parameter is an essentially isotropic combination of a P -wave orbital state and a triplet spin, so this should behave in much the same way. Volovik 17,18 has argued that spectral flow of the unpaired states in the vortex core of a fermion superfluid leads to a contribution to the transverse force that cancels most of the Magnus force, but Stone 19 has examined this argument more closely, and does not find that this mechanism is operative unless there is a background to take momentum from these excitations. We do not think that there is such a canceling contribution in a homogeneous fermion superfluid.
V. FORCES DUE TO PHONON SCATTERING
The result obtained in the previous section, that the coefficient of the vortex velocity in the transverse force is equal and opposite to the coefficient of the superfluid velocity leads to the surprising conclusion that normal fluid flow does not affect the force on the vortex, unless there is also normal fluid circulation round the vortex. This is surprising, because Pitaevskii 20 and Iordanskii 21 argued that the asymmetrical scattering of rotons or phonons by vortices should lead to a transverse force when the vortex moves relative to the normal fluid component.
In the low temperature limit an explicit calculation of the phonon-vortex scattering can be made, and the literature quotes a transverse force proportional to ρ_n K × (v_V − v_n). There are two problems with this result: 1. The derivation assumes that the phonons interact only with the vortex, but in our argument we assume that the phonons, which make up the normal fluid, must be in equilibrium with one another.
2. In papers from Cleary (1968) 22 to Sonin (1997) 23 the expression for the transverse force, which is proportional to a sum over differences of successive phonon phase shifts δ_m, has been rewritten as a boundary term of that sum. This would be fine, except that δ_m does not tend to zero. If one substitutes the formula for δ_m which is correct to lowest order in temperature T into the original formula, a result is obtained which is (at least) cubic in κ_0 and of sixth power in T, or 3/2 power in ρ_n. The second expression is obtained from the first by canceling two divergent series, and this gives the quoted expression, which is linear in κ_0 and linear in ρ_n; we cannot see any justification for a term of this magnitude.
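The difficulty in point 2 is the familiar one with telescoping sums (a generic illustration, not the actual phase-shift expression): for terms a_m which do not tend to zero,
\[
\sum_{m=-M}^{M}\left(a_m - a_{m+1}\right) \;=\; a_{-M} - a_{M+1}
\]
depends only on the boundary values, while each of the two series Σ a_m and Σ a_{m+1} diverges separately, so assigning the difference a finite value by cancellation amounts to fixing those boundary values by hand.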
VI. SUPERCONDUCTIVITY
The situation for the transverse force on a vortex in a superconductor is even more confused than the situation for a neutral superfluid. In the 1960s, Bardeen and Stephen 24 argued for a very small Magnus force, but an analysis by Nozières and Vinen 25 of an idealized model of a superconductor gave the full value of the Magnus force suggested by classical hydrodynamics.
Wexler's argument 3 for the coefficient of v_s can be applied to the case of a superconductor. When the substrate velocity and the vortex velocity are zero in the presence of a superfluid electron velocity, this argument gives the expected result that there is a Lorentz force on the vortex equal to the integral of eρ_e v_s × B, where ρ_e is the conduction electron density.
To find the coefficient of v_V, Geller, Wexler and Thouless 26 have adapted the arguments of sec. 4 to the very idealized model of a charged system with a uniform positive background, which is essentially the situation considered by Nozières and Vinen 25, although they, unlike us, also had to assume that the superconductor was extreme type II. This is not completely straightforward, even though we have taken the uniform positive background so that we can continue to use momentum conservation, because any choice of the gauge field which is used to describe magnetic effects breaks the explicit translation invariance, and makes the implicit translation invariance obscure. Rather than introduce a gauge field, we can write the electromagnetic interactions in terms of a Coulomb interaction between electrons and between electrons and the positive background, together with an instantaneous current-current interaction between the electrons. Darwin showed that this is correct up to second order in electron velocity, apart from a relativistic variation of the mass with velocity which is unimportant for this problem. We also need a Galilean invariant attractive interaction between the charges to produce a paired superconducting state. This gives a Hamiltonian with explicit translation invariance, so the arguments of sec. 4 can be taken over.
The result for the coefficient of v_V is formally unchanged, and is the line integral of the canonical momentum density on a loop which surrounds the flux line at a distance which is large compared with the penetration length. This is actually a surprising result, as at these distances there is no magnetic field or current density produced by the vortex line, and the integral is related to the Aharonov-Bohm effect rather than to any classical quantity.
Since the integral is equal to the trapped magnetic flux, the transverse force per unit length can be written as
F = ρ_e e Φ ẑ × (v_V − v_s),
where Φ is the trapped flux and ẑ a unit vector along the flux line, so that the transverse force depends only on the motion of the vortex relative to the electrons. It can be rewritten, in a form that makes its physical origin more transparent, as
F = ρ_e e Φ ẑ × (v_p − v_s) + ρ_e e Φ ẑ × (v_V − v_p).
The first term is the Lorentz force given by the interaction of the electric current, which is a Galilean invariant, with the magnetic field. The second is a Magnus force that acts on the positive substrate moving with velocity v_p relative to the vortex. The moving vortex generates a dipolar electric charge distribution, which in turn produces a dipolar elastic stress on the positive substrate, and this leads to a net force on the positive substrate. A similar analysis was carried out by Nozières and Vinen 25, and their results were essentially the same. There have been recent measurements made by Zhu, Brandstrom and Sundqvist 27 that support a fairly large value of the Magnus force.
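A quick check (ours, using only quantities already defined in the text) connects this coefficient to the neutral-superfluid result: with one flux quantum Φ_0 = h/2e trapped per vortex,
\[
\rho_e e\,\Phi_0 \;=\; \rho_e e\,\frac{h}{2e} \;=\; \frac{\rho_e h}{2} \;=\; (\rho_e m_e)\,\frac{h}{2m_e},
\]
which is the electron mass density ρ_e m_e times the circulation quantum h/2m_e appropriate to pairs, the direct analog of ρ_s κ_0 in the neutral case.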
VII. CONCLUSIONS
We have succeeded in determining the transverse force on a vortex in a neutral superfluid under assumptions that are both general and reasonably realistic. Like most other exact results in quantum many-body theory, they are related to general conservation laws, and apply, in a slightly different form, to classical systems as well.
Our generalization to superconductors is far from realistic, since it relies on the uniformity of the positive substrate. With a uniform positive substrate an electron gas has infinite conductivity, even in the absence of a pairing interaction, so our results can only form a first step towards a plausible theory of the Magnus force in superconductors. We may be able to extend the results from a uniform substrate to an ideal periodic substrate, but even that is quite inadequate for the description of a real metal. We have to be able to take the next step of considering disorder, but there is no chance that we will be able to get exact results in that case.
It would also be interesting to generalize these results to finite systems, nonzero frequency of the vortex motion, and a finite density of vortex lines.
Another line that we are pursuing is the connection between the Magnus force and the quantization of the vortex line. We know that there is an intimate connection between the strength of the Lorentz force on an electron and the density of degenerate levels of an electron in a magnetic field, and there are good reasons to think that there is a similar connection between the strength of the Magnus force on a vortex and the density of degenerate levels for a vortex. | 2015-03-21T17:44:09.000Z | 1997-09-10T00:00:00.000 | {
"year": 1997,
"sha1": "fb979394cf64b79655be0867f27b7963b43ca57c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/9709127",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "fb979394cf64b79655be0867f27b7963b43ca57c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
265410028 | pes2o/s2orc | v3-fos-license | Research progress in cardiotoxicity of organophosphate esters
Organophosphate esters (OPEs) have been extensively utilized worldwide as a substitute for brominated flame retardants. With an increased awareness of the need for environmental protection, the potential health risks and ecological hazards of OPEs have attracted widespread attention. As the dynamic organ of the circulatory system, the heart plays a significant role in maintaining normal life activities. Currently, there is a lack of systematic appraisal of the cardiotoxicity of OPEs. This article summarizes the effects of OPEs on the morphological structure and physiological functions of the heart. It is found that these chemicals can lead to pericardial edema, abnormal looping, and thinning of atrioventricular walls in the heart, accompanied by alterations in heart rate, with toxic effects varying by the OPE type. These effects are primarily associated with the activation of the endoplasmic reticulum stress response and the perturbation of cytoplasmic and intranuclear signal transduction pathways in cardiomyocytes. This paper provides a theoretical basis for further understanding of the toxic effects of OPEs and contributes to environmental protection and OPEs' ecological risk assessment.
Introduction
Organophosphate esters (OPEs) are a class of phosphate derivatives containing organic groups or compounds containing carbon-phosphorus bonds, with a common skeleton structure of phosphate esters. According to the different functional groups of the side chains, they can be roughly divided into three types: chlorinated OPEs, alkyl OPEs, and aryl OPEs (Tian et al., 2023). Chlorinated OPEs are highly hydrophilic and volatile, and can migrate to environmental media through various physical and chemical processes, with strong hydrolysis and biodegradation resistance (Gao et al., 2015). The common types of chlorinated OPEs include tris (2-chloroethyl) phosphate (TCEP), tris (3-chloropropyl) phosphate (TCPP), tris (1,3-dichloro-2-propyl) phosphate (TDCPP), etc. Alkyl OPEs have a wide range of polarity and significant differences in physical and chemical properties, with common types such as tris (2-butoxyethyl) phosphate (TBOEP) and tributyl phosphate (TnBP). Aryl OPEs have strong hydrophobicity and are prone to bioconcentration, with common types being triphenyl phosphate (TPhP), cresyl diphenyl phosphate (CDP) and so on (Gu et al., 2023).
At present, OPEs have been used widely as flame retardants and plasticizers worldwide, often added to industrial products such as electronic devices, building materials, textiles, and plastics, playing a role in resisting the spread of flames as well as increasing the plasticity and flexibility of polymers (Zhang et al., 2018; Liao et al., 2022). However, as physical additives, OPEs are prone to be released into the environment during production, transportation, utilization, and recycling processes through diffusion, abrasion, and other means (Bacaloni et al., 2007; Sundkvist et al., 2010). Currently, they have been widely detected in various environmental media such as the atmosphere, soil, sediment, water bodies, and even in organisms, human blood and urine.
In recent years, the issue of environmental pollution caused by OPEs has gradually gained the attention of researchers. Numerous studies have confirmed that OPEs can induce various toxicities in organisms, including developmental, reproductive, neurological, metabolic, and endocrine disruptions. For instance, after exposure to TDCPP at environmental concentrations of 300 ng/L and 3,000 ng/L, the growth ability, survival rate, and reproductive ability of Daphnia magna were significantly inhibited after 32-90 d, depending on the concentration (Li et al., 2017). Tricresyl phosphate (TCP) at 20 μg/L disrupted the balance between excitation and inhibition in the neural circuit of zebrafish (Danio rerio), specifically inducing hyperactivity and seizures, which in turn led to severe neurotoxicity (Knoll-Gellida et al., 2021). Impaired liver metabolism and imbalanced gill ion transport were observed in juvenile medaka fish (Oryzias latipes) after exposure to TCEP at a concentration of 1 μg/L for 30 d, and cell apoptosis was induced through the p53-Bax pathway and caspase-dependent pathways at 10 μg/L (Zhao et al., 2021). TPhP at 40 μg/L affected hormone synthesis and thus caused endocrine disruption by damaging the nervous system's normal regulation of thyroid hormone secretion in zebrafish (Kim et al., 2015).
The heart, as the dynamic core of blood circulation in an organism, plays a crucial role in maintaining regular life functions. The investigation of cardiotoxicity holds great significance in drug development and the evaluation of environmental pollution levels. It has become an important focal point in the realms of life sciences, ecotoxicology, and environmental science. It has been confirmed that biological exposure to environmental pollutants can induce cardiotoxicity, which manifests as myocardial injury, disruption of cardiac electrophysiological characteristics, decline in heart function, and the inability to supply sufficient blood to the body, ultimately resulting in myocardial disease (Cross et al., 2015; Zakaria et al., 2018). This paper provides a concise summary of the cardiotoxicity of OPEs, including alkyl, aryl and chlorinated OPEs. Furthermore, it delves into the underlying toxic mechanisms associated with these chemicals, aiming to offer valuable reference materials for toxicological research and ecological risk assessment of OPEs.
Cardiotoxicity of organophosphate esters
Compared to other organ toxicities, cardiotoxicity is often manifested as myocardial lesions and a decline in cardiac function, characterized by a long latency period, slow progression, and irreversible consequences once established. This section summarizes the cardiotoxicity induced by OPEs, considering morphological structure, physiological function, as well as key molecules and biomarkers.
Effects of OPEs on cardiac morphological structure
In current studies, the main focus is on the cardiac developmental toxicity of OPEs in zebrafish. This organism has been used widely in exploring cardiotoxicity due to the high similarity of its cardiac developmental process, and the high homology of its genome, with those of mammals.
When exposed to nine OPEs at different concentration gradients, zebrafish embryos generally showed pericardial edema and abnormal cardiac looping, manifested by a prolonged venous sinus-arteriolar bulb (SV-BA) distance and a dioxin-like tubular heart. The above phenomena were positively correlated with the exposure concentration (Du et al., 2015; McGee et al., 2013). Further studies revealed that embryos exposed to TPhP and CDP were more prone to significant pericardial edema. The 96 h-EC50 values for these two groups were lower than half of the 96 h-LC50 value, whereas the other OPEs groups exhibited values higher than the 96 h-LC50. This suggests that aryl OPEs exhibit stronger toxicity to cardiac development. Wiegand et al. (2022) exposed zebrafish embryos to TPhP and observed cardiac looping abnormalities and pericardial edema, occurring within the sensitive time window of 24-30 hpf. Previous studies have confirmed that ionocytes express the ion transporters needed to maintain ion balance between the aquatic environment and the embryo environment before zebrafish gill development (Guh et al., 2015). It is hypothesized that the formation of pericardial edema may be associated with an increased abundance of ionocytes rich in Na+/K+-ATPase, known as NaRCs. Further investigations have revealed that the pericardial edema induced by TPhP in zebrafish embryos is dependent on the ionic strength of the exposure medium. Therefore, it is of vital importance to further standardize the exposure culture medium and the embryo rearing protocols in zebrafish-based chemical toxicity screening assays (Wiegand et al., 2023).
Furthermore, observation of tissue slices revealed that exposure to 0.50 or 1.0 mg/L of TPhP, or 0.10, 0.50, or 1.0 mg/L of CDP, led to a reduction in the number of myocardial cells and thinning of the atrioventricular wall in zebrafish. This suggests that both TPhP and CDP can affect cardiac development during zebrafish embryogenesis, and that CDP exhibits stronger cardiotoxicity than TPhP at the same concentrations (Du et al., 2015). A study by Xiong et al. (2022) also found that continuous intragastric administration of 10 mg TCEP/kg b.w./d for 30 d caused inflammatory infiltration and nuclear swelling or atrophy in mouse (Mus musculus) cardiomyocytes. Additionally, it induced myocardial fibrosis, leading to disorganized arrangement of muscle fibers and the appearance of myocardial congestion phenotypes. Further observation of cardiac ultrastructure revealed that exposure to TCEP can induce the formation of mitophagosomes and an increase in the quantity of autophagic vacuoles in myocardial cells in mice. These findings suggest a relationship between TCEP-induced myocardial fibrosis and autophagy. Kanda et al. (2021) investigated the effects of TCEP on chicken embryos and found that the heart weight-to-body weight ratio significantly increased in the group exposed to 500 nmol TCEP/g egg. Additionally, after 3 d of treatment, both the total length of blood vessels and the number of branches showed a significant decrease, suggesting that TCEP induced myocardial hypertrophy and inhibited angiogenesis in chicken embryos.
Effects of OPEs on cardiac physiological function
The heart, being the most essential organ in vertebrate bodies, plays a vital role in propelling blood circulation throughout all body parts. Hence, maintaining proper cardiac physiology is imperative for sustaining the normal physiological activities of organisms. Studies have revealed that TPhP, TnBP, and TBOEP can elicit a concentration-dependent decrease in heart rate in Japanese medaka and zebrafish embryos (Sun et al., 2016; Liu et al., 2017). Alzualde et al. (2018) utilized zebrafish embryos to investigate the cardiotoxicity of TPhP, isopropylated phenyl phosphate (IPP), 2-ethylhexyl diphenyl phosphate (EHDP), tert-butylated phenyl diphenyl phosphate (BPDP), trimethyl phenyl phosphate (TMPP), isodecyl diphenyl phosphate (IDDP), and TDCPP. The results revealed that all seven OPEs induced bradycardia in zebrafish embryos. In addition, four non-halogenated OPEs (BPDP, IPP, TMPP, TPhP) showed significant cardiotoxicity at concentrations of 10-100 μM, characterized by bradycardia initially, followed by atrial standstill at higher concentrations. Du et al. (2015) investigated the sensitive period of heart development in zebrafish. Zebrafish embryos ranging from 0 hpf to 60 hpf were exposed to CDP or TPhP for 12 h. At 72 hpf, the heart rate and SV-BA distance were measured. The results revealed that zebrafish larvae exposed to 0.5 mg/L TPhP or 0.1 mg/L CDP (the former is about 1/3 of the 96 h-LC50, while the latter is close to 1/10) exhibited an irreversible decrease in heart rate and an elongated SV-BA distance between 0-48 hpf, resembling the dioxin-like tubular heart phenotype. This suggests that these two OPEs can induce bradycardia and inhibit cardiac looping, leading to abnormalities in cardiac circulation (SV-BA distance reflects changes in the position of the atrium and ventricle, serving as an indicator for assessing cardiac circulatory function). Importantly, bradycardia induced by toxicant exposure after 48 hpf could be restored to normal levels after removal of the toxicant for 12 h. The results suggest that zebrafish embryos exhibit a sensitive window for heart development between 0-48 hpf, during which they are more susceptible to pharmacological stimulation. Additionally, in experiments involving exposure of chicken embryos to TCEP, it was found that the heart rate decreased in a concentration-dependent manner and significantly decreased after 4 d of toxicant treatment (Kanda et al., 2021).
Researchers have also conducted in vitro experiments to investigate the effects of OPEs on the pulsation of myocardial cells and the differentiation of stem cells into myocardial cells. Sirenko et al. (2017) employed an organotypic human induced pluripotent stem cell-derived model for an in vitro cardiotoxicity study of seven OPEs, including EHDP, phenol, isopropylated, phosphate (3:1) (PIP 3:1), TCEP, TPhP, IDDP, BPDP, and TCP. They evaluated the beating behavior of cardiomyocytes by assessing parameters such as peak frequency, rise time, and decay time of intracellular Ca2+ flux at two time points, specifically 30 min and 24 h post-exposure. It was observed that, besides TCEP, the six other OPEs elicited a nonmonotonic concentration response in myocardial cells, characterized by an increase in peak frequency at low exposure concentrations followed by suppression at higher concentrations. Furthermore, prolonged exposure for 24 h resulted in similar changes with IDDP, BPDP, and PIP 3:1, whereas EHDP and TPhP primarily demonstrated inhibitory effects at high concentrations. In a separate in vitro experiment conducted by Qi et al. (2019), it was demonstrated that TPhP significantly reduced the beating frequency of the embryoid bodies formed during the cardiomyogenic differentiation of mouse embryonic stem cells (mESCs), indicating that TPhP can reduce the differentiation of mESCs into cardiomyocytes.
By employing the Hoechst 33342 and mitochondria-specific dye JC-10 co-staining technique, researchers discovered that OPEs could lead to a reduction in the quantity of granules per cell, average granule area, and/or granule intensity within myocardial cells, while also disrupting mitochondrial membrane potential (Sirenko et al., 2017). Transmission electron microscopy examination of mitochondrial structure further revealed that exposure to TCEP induced abnormal increases in small granular mitochondria as well as mitochondrial swelling accompanied by degeneration or disappearance of cristae within mouse myocardial cells (Xiong et al., 2022). These findings collectively suggest that OPE exposure can influence mitochondrial metabolism within myocardial cells.
Effects of OPEs on key molecules and biomarkers in the heart
The morphogenesis and functional maintenance of the heart require the involvement of genes, proteins, and enzymes in a finely regulated way. Researchers have utilized zebrafish and mice as experimental models to investigate the effects of OPEs on key genes such as the bone morphogenetic protein 4 gene (bmp4) and biomarkers including creatine kinase (CK), among others.
The bone morphogenetic protein BMP4 plays a crucial role in the development of the cardiac outflow tract (OFT), and its functional loss can lead to embryonic lethality and defects in myocardial differentiation in mice (Zheng et al., 2021). In addition, reports have indicated that BMP4 can regulate heart looping and asymmetric development in zebrafish (Chen et al., 1997). The nkx2.5 gene belongs to the NK homeobox family and plays a critical role in cardiac cell proliferation and differentiation, ventricular chamber formation, and the development and maintenance of the specialized conduction system; it encodes a key transcription factor in cardiac development (de Sena-Tomás et al., 2022). The gata4 gene encodes GATA-binding protein 4, which can activate the αT-catenin promoter in cardiomyocytes, promoting the expression of αT-catenin; gata4 plays a crucial role in the assembly of the cytoskeleton and myofiber formation in cardiac cells, and is important in local muscle regeneration and cell proliferation during the process of regeneration (Vanpoucke et al., 2004). The tbx5 gene plays a crucial role in the formation of the atrioventricular septum, generation of the conduction system, and rhythm control during development. In addition, tbx5 also participates in the process of cardiac regeneration, and its loss can lead to failed heart looping and heart failure in zebrafish (Steimle and Moskowitz, 2017). Du et al. (2015) exposed zebrafish to 0.5 mg/L TPhP and CDP and examined the expression of the cardiac developmental regulatory genes gata4, bmp4, nkx2.5, and tbx5.
During the initial stages of development, the expression of bmp4, nkx2.5, and tbx5 was downregulated. Further morphological and functional indicators revealed that zebrafish exhibited a decreased heart rate and abnormal development of the specialized cardiac conduction system during the 0-24 hpf stage. As development progressed, the expression of gata4, nkx2.5, and tbx5 gradually increased, reaching levels close to, or exceeding those of, the control group by 72 hpf. However, the expression of bmp4 remained lower than that of the control group throughout, indicating that TPhP and CDP affect cardiac looping and asymmetric development processes. The Hox gene family plays a crucial role in regulating the fate of cardiac cells within the second heart field (SHF), contributing to embryonic heart field positioning and establishing the anterior-posterior polarity during heart morphogenesis (Lo and Frasch, 2003; Waxman et al., 2008). Previous research has demonstrated that exposing zebrafish embryos at 6 hpf to 0.2 μM monosubstituted isopropylated triaryl phosphate (mITP) until 48 hpf leads to downregulation of transcription within the Hox gene family, resulting in cardiac malformations (Haggard et al., 2017).
CK catalyzes myocardial cell metabolism and regulates cardiac electrophysiological activity; its abnormal elevation is often considered one of the diagnostic markers for myocardial infarction (Amani et al., 2013). Xiong et al. (2022) found increased protein levels of collagen I, collagen III, and α-SMA in mice after exposure to TCEP, accompanied by elevated levels of CK and creatine kinase isoenzyme (CK-MB), indicating the presence of myocardial fibrosis and impaired cardiac function. Telethonin (TCAP), encoded by the titin-cap gene, is a critical molecule for myocardial sarcomere assembly and regulation. Variations in TCAP can lead to myocardial hypertrophy and an increased risk of dilated cardiomyopathy (Hayashi et al., 2004; Webber et al., 2012). Mitchell et al. (2018) observed that exposure of zebrafish at 72 hpf to TPhP concentrations of 5, 10, and 20 μM resulted in arrested cardiac development and a significant increase in TCAP expression within the heart.
In summary, OPEs can induce cardiac structural changes characterized by pericardial edema, physiological functional changes manifested as abnormal heart rate, and alterations in the expression levels of key molecules and biomarkers such as bmp4 and CK. The manifestations of cardiotoxicity, changes in key molecules or biomarkers, exposure concentrations, and environmentally relevant concentrations for the major OPEs are shown in Table 1.
Wnt signaling pathway
The Wnt signaling pathway is involved in regulating several processes in animal growth and development. It consists of two branches: the β-catenin-mediated canonical pathway, activated by Wnt ligands binding to Frizzled receptors on the plasma membrane, and the non-canonical pathway, which involves protein kinase C and Ca2+ and does not depend on β-catenin-mediated signaling. Excessive activation or inhibition of the Wnt signaling pathway can have significant effects on the development of vertebrate organisms.
Wnt signaling is essential for the development of the vertebrate heart. Researchers artificially activated Wnt/β-catenin signaling at different developmental stages in zebrafish embryos and found that at 0-5 hpf it upregulated the expression of nkx2.5, a key transcription factor for heart development, and promoted heart development, whereas at 6-9 hpf it downregulated nkx2.5 expression and ultimately inhibited cardiac formation. These findings suggested that Wnt signaling promotes differentiation during early-stage cardiac development but suppresses it later on (Ueno et al., 2007). Xiong et al. (2021) observed that acute exposure of zebrafish embryos to TBOEP resulted in the inhibition of both the Wnt canonical and non-canonical signaling pathways. In addition, at exposure concentrations above 1,000 μg/L, β-catenin, wnt11 and pkc were downregulated and the negative regulators axin1 and axin2 were upregulated, ultimately leading to downregulation of the downstream target genes sox9b and nkx2.5 and resulting in abnormal cardiac development. In addition, the disruption and dysregulation of Wnt signaling can induce apoptosis and oxidative stress (He et al., 2005; Lu et al., 2011; Zhang et al., 2013). Under exposure to TBOEP at 2,000 μg/L, the number of apoptotic cells and the content of reactive oxygen species in the zebrafish heart region increased, and the activities of superoxide dismutase and catalase decreased (Xiong et al., 2021). The researchers further applied the Wnt signaling activator 6-bromoindirubin-3′-oxime (6BIO) to verify the toxic effect of TBOEP on the heart. The results showed that combined exposure to TBOEP and 6BIO significantly inhibited the upregulation of axin1 and axin2, as well as the downregulation of β-catenin, wnt11 and pkc, indicating that 6BIO can alleviate the toxic effect of TBOEP. Small molecule modulators are an important research direction for developing targeted drugs, and the above research provides ideas for exploring intervention measures against the toxic effects of OPEs. Up to now, no report has been found on the cardiotoxic effects of other alkyl OPEs, or of the two other types of OPEs, through the Wnt signaling pathway. Further research is needed to determine whether they can also interfere with Wnt signaling regulation.
Calcium overload/endoplasmic reticulum stress/autophagy pathway
The Ca2+ ion plays a crucial role in maintaining the normal physiological function of the heart, with an imbalance in calcium homeostasis leading to cardiac diseases such as myocardial infarction, arrhythmias, myocardial hypertrophy, and heart failure (Morciano et al., 2022). Endoplasmic reticulum (ER) stress is an adaptive response of cells to the accumulation of proteins in the ER. Excessive stress can lead to calcium overload and oxidative stress, both of which can cause mitochondrial dysfunction, resulting in a reduction in the activity of mitochondrial complex I in the cardiac muscle (Mohsin et al., 2020). Sarco/endoplasmic reticulum Ca2+-ATPase (SERCA) is essential for the removal of excess Ca2+, ATP synthesis, and maintenance of normal cardiac function, and is also a target for the toxic effects caused by acute exposure to brominated flame retardants (Al-Mousa and Michelangeli, 2014; Shareef et al., 2014). In mouse cardiomyocytes, SERCA2a is the main subtype, and its dysfunction leads to ischemic heart disease and dilated cardiomyopathy. Deletion of the SERCA2a gene results in sustained Ca2+ influx, causing abnormal cell death and promoting the occurrence of cardiovascular diseases (Chemaly et al., 2018). After 30 d of oral gavage treatment with TCEP at a dose of 10 mg/kg b.w./d, mice exhibited decreased SERCA expression, accompanied by a significant increase in Ca2+ concentration and reduced ER transmembrane protein expression. Furthermore, the phosphatidylinositol 3-kinase/protein kinase B/mechanistic target of rapamycin (PI3K/AKT/mTOR) signaling pathway was inhibited. This inhibition resulted in an increased number of autophagic vacuoles and mitochondrial autophagosomes in myocardial cells, indicating that TCEP may induce calcium overload, ER stress, and excessive autophagy in cardiac myocytes by suppressing SERCA expression, ultimately contributing to myocardial fibrosis and cardiac toxicity (Xiong et al., 2022).
In this study, researchers also used a small molecule regulator to verify the cardiotoxicity of TCEP. The SERCA activator CDN1163 has been confirmed to be able to treat diabetes and liver metabolic dysfunction in mice by improving Ca2+ homeostasis (Kang et al., 2016). When TCEP and CDN1163 were applied together, a significant improvement in the myocardial fibrosis induced by TCEP was observed. It is speculated that CDN1163 suppressed calcium overload by restoring the function of SERCA, reorganized mitochondrial structure, and promoted ATP production, thus alleviating myocardial fibrosis.
Nuclear receptor pathway
Nuclear receptors, which are ligand-dependent transcription factors belonging to a family of proteins, exert critical regulatory roles in various physiological processes such as development, metabolism, reproduction, inflammation, and circadian rhythms. By binding to homologous ligands or specific DNA sequences, they activate or inhibit the transcription of target genes. Dysregulation of nuclear receptor function can contribute to cardiovascular diseases, malignant tumours, metabolic disorders, and inflammatory diseases (Weikum et al., 2018). At present, nuclear receptors are primarily classified into three groups: steroid receptors (Class I receptors), non-steroid receptors (Class II receptors), and orphan receptors (referring to nuclear receptors whose endogenous ligands have not been identified) (Kurakula et al., 2014). The retinoic acid receptors (RARs), belonging to the Class II receptors, are essential in the process of heart development. They are instrumental in cardiac morphogenesis, myocardial growth, and coronary artery formation. Mutations in RARs can lead to myocardial thinning and embryonic lethality (Merki et al., 2005). The absence of retinoic acid (RA) during the 4-hpf blastocyst stage can lead to an excessive number of zebrafish myocardial cells, causing an expansion of the SHF after the formation of the heart tube, ultimately leading to the disintegration of the heart tube (Ryckebusch et al., 2008). Upon activation by ligands, the RARs undergo heterodimerization with other Class II nuclear receptors, the retinoid X receptors (RXRs). They bind to the RA response elements, thereby driving the transcription of genes associated with heart development (Minucci et al., 1997). The peroxisome proliferator-activated receptor gamma (PPARγ), also a member of the Class II receptors, can be activated by the same ligands as RARs. It competes with RARs for RXRs, preventing the heterodimerization of RAR/RXR and thereby inhibiting the function of RAR (DiRenzo et al., 1997).
It has been reported that RARs can participate in mediating the cardiac looping defects induced in zebrafish by TPhP, an effective agonist of PPARγ (Belcher et al., 2014). To validate the hypothesis that TPhP interferes with the PPARγ- and RAR-mediated signaling pathways, leading to cardiac developmental toxicity in zebrafish, Mitchell et al. (2018) subjected zebrafish embryos to acute TPhP exposure; cardiac development was disrupted, with impacts on the signaling pathways of five RXR-related nuclear receptors. The expression of TCAP, fatty acid-binding protein 1b, and desmin a in the myocardium was upregulated, indicating that TPhP induces cardiotoxicity through the nuclear receptor pathway. However, TPhP may exert its effects by indirectly influencing the binding of upstream RXRs, rather than binding directly to RXRs. At present, it remains uncertain whether TPhP can function as a ligand for RXRs, and its mode of action is still not well defined. Further investigation could be carried out by knocking out RXRs to gain deeper insights into this matter. Moreover, 15 ligands were selected that can alleviate pericardial edema (Mitchell et al., 2018). Among them, fenretinide (an RAR agonist) and ciglitazone (a PPARγ agonist) reduced, in a concentration-dependent manner, the cardiotoxicity caused by TPhP, indicating that these two agonists could be used as drugs to prevent or treat this toxic effect.
Other studies have shown that the cardiotoxicity induced by mITP also involves nuclear receptor signaling pathways. mITP and TPhP are the main components of the composite organophosphorus flame retardant Firemaster 550 (FM 550). Both belong to the aryl OPEs and have similar structures, the difference being the presence of isopropyl groups in mITP. mITP can downregulate cyp26a1, dhrs3a, and dhrs3b, and may act by inhibiting RAR (Haggard et al., 2017). The Cyp26a1 enzyme is responsible for the metabolic degradation of RA, whereas Dhrs3a and Dhrs3b catalyze the reduction of retinaldehyde to retinol (vitamin A) (Feng et al., 2010). The Hox gene family is involved in regulating the fate of heart cells in the SHF (Waxman et al., 2008), and its expression is regulated by RA signaling (Ahn et al., 2014). The results showed that the expression of the Hox gene family (hoxb5b, hoxb6b, hoxa5a, hoxc1a, and hoxb8b) was significantly reduced, indicating that mITP inhibition of RARs led to downregulation of Hox gene expression and the occurrence of cardiac abnormalities. We speculate that fenretinide and ciglitazone, the small-molecule modulators effective against TPhP, may also be effective in treating mITP-induced cardiac toxicity, although this requires further confirmation.
Aromatic hydrocarbon receptor pathway
The aromatic hydrocarbon receptor (AHR), a member of the basic helix-loop-helix transcription factor family, functions as a sensor that allows organisms to respond to external environmental stimuli. Upon ligand activation, the AHR translocates from the cytoplasm into the nucleus, where it forms a dimer with the AHR nuclear translocator and subsequently regulates gene transcription, activating downstream signaling pathways associated with cellular toxicity. The target genes include cytochrome P450 superfamily members, NAD(P)H quinone oxidoreductase, and aldehyde dehydrogenase (Zhang, 2011). The AHR is involved in the development of the cardiovascular system and in the initiation and progression of its associated diseases: AHR-knockout mice exhibit myocardial hypertrophy, vascular remodeling, and systemic hypertension (Lund et al., 2003; Lund et al., 2008), whereas excessive activation of AHR can mediate inflammatory responses, leading to atherosclerosis in mice (Wu et al., 2011). It has been found that 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) can inhibit epicardial formation during cardiac development by activating the AHR pathway, resulting in developmental malformations, volume reduction, decreased cardiomyocyte numbers, blood regurgitation, and conduction block in the heart (Antkiewicz et al., 2005; Hofsteen et al., 2013). TCDD-like compounds and polycyclic aromatic hydrocarbons (PAHs) are the two main types of environmental pollutants known to activate AHR signaling; owing to this responsiveness, the AHR is also referred to as the "dioxin receptor".
mITP is also a ligand of the AHR. Its effects on cardiac looping and pericardial area in zebrafish embryos can be mitigated by co-exposure with the AHR antagonist CH223191, although no significant effect on heart rate was found compared with mITP treatment alone (McGee et al., 2013). Further studies using a functional zebrafish AHR2 knockout line along with AHR1A- and AHR1B-specific morpholinos revealed that mITP interacted with both AHR2 and AHR1B and induced the expression of cytochrome P450 1A, but knocking out all three AHR subtypes did not block mITP-induced cardiotoxicity (Gerlach et al., 2014). In addition, similarly to TPhP, mITP can also cause pericardial oedema, abnormal cardiac looping, and reduced heart rate in zebrafish embryos through the RAR signaling pathway (Haggard et al., 2017). These results suggest that the cardiotoxicity of mITP in zebrafish can be mediated through both RAR and AHR signaling pathways.
The pertinent cellular and molecular mechanisms underlying cardiotoxicity induced by OPEs have been summarized above, and a schematic diagram (Figure 1) illustrates the intracellular signaling pathways.
Conclusion and outlook
As the latest generation of flame retardants, OPEs have been extensively utilized in industrial and agricultural production, presenting a potential environmental and biological health hazard. Numerous studies have substantiated that OPEs exhibit high environmental persistence, are readily absorbed and accumulated by living organisms, and can induce diverse toxic effects, including neurotoxicity, metabolic toxicity, reproductive toxicity, cardiotoxicity, and immunotoxicity. Currently, investigations of the cardiotoxicity of OPEs have primarily been conducted in zebrafish, along with other fish species, birds, mammals, and in vitro cells. Generally, exposure to OPEs can elicit cardiac morphological and structural alterations, such as pericardial edema and abnormal cardiac looping (e.g., prolonged SV-BA distance or dioxin-like tubular heart), decreased numbers of myocardial cells, thinning of atrioventricular walls, and myocardial fibrosis and hypertrophy, as well as physiological functional changes such as decreased heart rate (bradycardia), weakened cardiomyocyte pulsation, and mitochondrial metabolic disorder. Additionally, exposure to OPEs can alter the expression levels of key molecules and biomarkers, including bmp4, nkx2.5, gata4, tbx5, the Hox gene family, CK, CK-MB, TCAP, collagen type I, collagen type III, and α-SMA. These effects may be mediated through pathways such as the Wnt signaling pathway, the Ca2+ overload/ER stress/autophagy axis, and the nuclear receptor or AHR pathways.
Notably, there might be variations in the mechanisms and manifestations of toxicity between different types of OPEs. For instance, TBOEP (an alkyl OPE) has been found to induce developmental toxicity in the heart of zebrafish larvae through the Wnt signaling pathway. TCEP (a chlorinated OPE) can trigger ER stress in cardiomyocytes, leading to calcium overload in the ER and disruption of mitochondrial structure, ultimately initiating autophagy. TPhP (an aryl OPE) has been shown to adversely affect cardiac differentiation and development through the nuclear receptor pathway; however, it remains uncertain whether it acts on this pathway directly or indirectly. Aryl OPEs have demonstrated a higher potency in inducing cardiotoxicity than the other two types. Furthermore, even within the same type of OPEs, the pathways involved may differ: both TPhP and mITP can induce cardiotoxicity in zebrafish through the RAR signaling pathway, but mITP can also act through the AHR pathway.
Current studies primarily use heart rate as the indicator of the effects of OPEs on cardiac physiological function. However, future research should also incorporate commonly used measures such as cardiac output and ejection fraction, as well as electrophysiological endpoints, to evaluate cardiac function comprehensively. For instance, the administration of various targeted therapies for metastatic NSCLC may result in QT interval prolongation, supraventricular tachycardia, ventricular arrhythmias, and potentially even heart failure (Waliany et al., 2021). The compound enzastaurin can inhibit potassium channels in myocardial cells of the guinea pig (Cavia porcellus), increasing action potential duration and prolonging the QT interval, ultimately inducing a negative chronotropic effect (Zhang et al., 2023). Such electrophysiological assessment is currently little used in investigating the cardiotoxic effects of environmental pollutants (e.g., OPEs) on organisms, but its future implementation could enhance the comprehensiveness of environmental toxicological assessments.
Although most studies report a variety of toxic effects of OPEs on organisms, there are also studies indicating that TDCPP, at low exposure concentrations, can attenuate hydrogen peroxide-induced Ca2+ overload in H9C2 cells by reducing Ca2+ influx, reduce excessive autophagy, and mitigate myocardial oxidative stress injury by activating the PI3K/Akt/GSK3β signaling pathway (Zhang et al., 2019). This offers novel insights for the rational application of OPEs.
TABLE 1. The cardiotoxic manifestations, exposure concentrations, and key molecules or biomarkers of different OPEs. '-': no data. The italic values in the second column are the Latin names of the experimental subjects, and those in the fourth column are related genes.
"year": 2023,
"sha1": "0e9259b94a704702489fc716ec72c809a8105060",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2023.1264515/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c01b88831f5630c7d0acabc6a342bf7e4f835481",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Long noncoding RNA Hotair facilitates retinal endothelial cell dysfunction in diabetic retinopathy
Background: Retinal endothelial cell (REC) dysfunction induced by diabetes mellitus (DM) is an important pathological step of diabetic retinopathy (DR). Long noncoding RNAs (lncRNAs) have emerged as novel modulators in DR. This study aimed to investigate the role and mechanism of the lncRNA Hotair in regulating DM-induced REC dysfunction. Methods: Retinal vascular preparations and immunohistochemical staining assays were conducted to assess the role of Hotair in retinal vessel impairment in vivo. EdU, transwell, cell permeability, ChIP, luciferase activity, RIP, RNA pull-down, and Co-IP assays were employed to investigate the underlying mechanism of Hotair-mediated REC dysfunction in vitro. Results: Hotair expression was significantly increased in diabetic retinas and in high glucose (HG)-stimulated REC. Hotair knockdown inhibited the proliferation, invasion, migration, and permeability of HG-stimulated REC in vitro and reduced the retinal acellular capillaries and vascular leakage in vivo. Mechanistically, Hotair bound to LSD1 to inhibit VE-cadherin transcription by reducing the H3K4me3 level on its promoter and to facilitate transcription factor HIF1α-mediated transcriptional activation of VEGFA. Furthermore, LSD1 mediated the effects of Hotair on REC function under HG conditions. Conclusion: Hotair exerts its role in DR by binding to LSD1, decreasing VE-cadherin transcription, and increasing VEGFA transcription, leading to REC dysfunction. These findings reveal that Hotair is a potential therapeutic target in DR. Keywords: lncRNA Hotair, diabetic retinopathy, LSD1, VEGFA, VE-cadherin

Clinical Perspectives: (1) Background as to why the study was undertaken: lncRNAs have emerged as novel modulators in DR, while the role and mechanism of the lncRNA Hotair in DM-induced REC dysfunction remain unknown. (2) A brief summary of the results: Hotair exerts its role in DR by binding to LSD1, decreasing VE-cadherin transcription, and increasing VEGFA transcription, ultimately leading to REC dysfunction. (3) The potential significance of the results to human health and disease: Our study highlights the important link between Hotair, LSD1, VE-cadherin, and VEGFA, and provides a deeper understanding of DR pathogenesis. This interplay may provide a potentially targeted method for DR treatment.
Introduction
Diabetic retinopathy (DR) is one of the most common microvascular complications of diabetes mellitus (DM), characterized by pericyte loss, increased acellular capillaries, and blood-retinal barrier (BRB) hyperpermeability (1). Although its mortality rate is lower than that of the macrovascular complications, DR has become the leading cause of vision loss among working-age people (20-65 years old) (2). The pathogenesis of DR is complicated, and the dysfunction of retinal endothelial cells (REC) has been considered a major pathological step of DR. Under pathological conditions of DM or high glucose (HG), the permeability of REC is increased, which leads to leakage of the BRB, thereby resulting in retinal bleeding, exudation, and detachment (3). Besides, the abnormal proliferation and migration of REC induced by DM or HG can cause capillary occlusion and pathological angiogenesis, finally promoting the progression of DR (4). Thus, an in-depth study of the molecular mechanisms that regulate REC dysfunction is important for understanding the pathogenesis of DR and for its treatment.
Long noncoding RNAs (lncRNAs), a class of transcripts longer than 200 nt, are involved in the transcription, translation, and epigenetic regulation of target genes (5). It has been proven that the aberrant expression of lncRNAs is closely associated with the development of many diseases, especially proliferative diseases (6,7). At present, although the roles of certain lncRNAs, such as lncRNA myocardial infarction associated transcript (MIAT), lncRNA imprinted maternally expressed transcript (H19), lncRNA FOXF1 adjacent non-coding developmental regulatory RNA (FENDRR), and lncRNA CDKN2B antisense RNA 1 (ANRIL) (8)(9)(10)(11), have been elucidated, the roles of most lncRNAs in DR have not been determined, including that of HOX transcript antisense intergenic RNA (Hotair). Hotair is a lncRNA of ~2000 nt that is highly conserved among species (12), and its function in promoting angiogenesis has been indicated by many studies (13,14). As reported, Hotair is abnormally expressed in many diabetes-related diseases, including diabetic kidney disease and diabetic cardiomyopathy, and plays an important role in their development and progression (12,15,16). It has also been reported that the expression of Hotair is up-regulated in the serum of DR patients and can be considered a marker for DR diagnosis and prognosis (17). Inspired by these lines of evidence, we speculated that Hotair might exert its role in DR by regulating the function of REC cells.
In this study, we aimed to investigate whether Hotair exerts its role in DR by regulating REC function.

Immunohistochemical staining

The immunohistochemical staining of retinal paraffin-embedded sections with an HRP-labeled IgG antibody was used to detect retinal permeability as previously described (9). Mouse retinal tissues were used to prepare paraffin-embedded cross-sections. The paraffin-embedded cross-sections were immersed in a sodium citrate buffer for antigen retrieval. After blocking with H2O2, sections were incubated with HRP-labeled IgG antibody. Later, DAB chromogenic solution was dropped onto the sections. The sections were observed under a microscope (Leica, Japan) after staining with hematoxylin.
Retinal vascular preparations
The retinal vascular preparations were prepared as previously described (18). In brief, the eyes of mice were enucleated and then fixed in paraformaldehyde for 24 h. The whole retinas were then isolated from the eyes under a microscope and digested with 5% pepsin and 2.5% trypsin to isolate the retinal vasculature. The samples were stained with periodic acid-Schiff (PAS) and hematoxylin to evaluate the changes in capillaries and pericytes. The acellular capillaries and pericytes in ten random fields per retina were counted and averaged.
Cell culture, treatment, and transfection
Mouse retinal endothelial cells (mREC) were isolated as previously described (19) and cultured in basal medium containing 10% fetal bovine serum (FBS). mREC were treated with high glucose (HG, 25 mmol/l) or normal glucose (NG, 5.5 mmol/l). The overexpression vectors (Hotair and LSD1) and silencing vectors (sh-Hotair, sh-LSD1, and sh-HIF1α) were constructed as plasmids and synthesized by the GenePharma company. These vectors were transfected into cells using Lipofectamine 2000 (Invitrogen, USA).
EdU assay
The EdU assay was used to detect the proliferation of mREC cells. All experimental procedures followed the manufacturer's instructions for the BeyoClick™ EdU Cell Proliferation Kit with Alexa Fluor 488 (Beyotime, China).
Wound healing assay
The mREC cells were cultured in FBS-free basal medium for 24 h and then seeded into the collagen-coated 6-well plate. Then a linear wound was generated within the monolayers by scraping the cells using the sterile pipette tip. Cells were observed under an inverted microscope at 0 h and 24 h after incubation.
Permeability detection assay
The permeability of mREC cells was detected as previously described (20). Briefly, mREC cells were suspended in basal medium and then seeded into the upper transwell chambers (1 × 10^4 cells/chamber). Later, FITC-conjugated bovine serum albumin was added into each upper chamber at a final concentration of 10 μg/ml, and 500 μl of medium was added into each lower chamber. After incubation for 30 min, the fluorescence intensity of each lower chamber was detected by a microplate reader (BioTek, USA) with an excitation wavelength of 490 nm and an emission wavelength of 525 nm. The concentration of permeabilized FITC-albumin in the lower chamber was assessed according to the fluorescence intensity.
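Converting fluorescence intensity to FITC-albumin concentration typically relies on a linear standard curve. The following is a minimal Python sketch of that calculation; the standard concentrations and intensity readings are hypothetical placeholders, not values reported in this study.

```python
import numpy as np

# Hypothetical standard curve: known FITC-albumin concentrations (ug/ml)
# and their measured fluorescence intensities (arbitrary units).
standards_conc = np.array([0.0, 1.0, 2.5, 5.0, 10.0])
standards_fluo = np.array([12.0, 180.0, 455.0, 910.0, 1830.0])

# Fit a line: intensity = slope * concentration + intercept.
slope, intercept = np.polyfit(standards_conc, standards_fluo, 1)

def fluo_to_conc(intensity: float) -> float:
    """Invert the standard curve to estimate concentration (ug/ml)."""
    return (intensity - slope * 0 - intercept) / slope

# Example: intensity measured in a lower chamber after 30 min.
print(round(fluo_to_conc(640.0), 2))  # estimated permeabilized FITC-albumin
```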
Fluorescence in situ hybridization (FISH)
A Cy3-labeled fluorescent probe targeting Hotair was used to perform in situ hybridization.
Immunofluorescence assay
Retinas for whole-mount staining were isolated from mice, and paraffin-embedded sections of mouse retinas were prepared. mREC cells were seeded onto coverslips. The retinal whole-mounts, retinal cross-sections, or mREC cells were then washed with PBS, fixed with 4% formaldehyde, and permeabilized with 0.2% Triton X-100. After blocking with normal serum, samples were incubated with the primary antibodies anti-VE-cadherin, anti-ZO-1, and anti-VEGF at 4°C overnight, followed by incubation with FITC-conjugated secondary antibody for 1.5 h. The samples were then stained with DAPI for 5 min at room temperature, and the immunofluorescence density and area were observed under a fluorescence microscope (Nikon, Japan).
RNA pull-down assay
The RNA pull-down assay was conducted using the Pierce™ Magnetic RNA-Protein Pull-Down Kit (Thermo, USA). mREC cells were lysed using a standard lysis buffer, and Hotair was labeled with biotin. The biotin-labeled Hotair was incubated with Streptavidin Magnetic Beads and cell lysate.
Then protein levels of LSD1 and HIF1α in Hotair pulled-down products were detected by western blot.
RNA immunoprecipitation (RIP) assay
The RIP assay was performed using Magna RIP™ RNA-Binding Protein Immunoprecipitation Kit (Millipore, USA). mREC cells were lysed by RIP buffer. Then cell lysates were incubated with magnetic beads and anti-LSD1 antibody at 4℃ overnight. Later, RNA was extracted from the complex and used for qPCR analysis.
Chromatin immunoprecipitation (ChIP) assay
The ChIP assay was performed using the Magna ChIP™ G Tissue Kit (Magna, USA). In brief, mREC cells were fixed with formaldehyde for 10 min at 37°C, followed by incubation with glycine to stop cross-linking. Cells were then lysed with ChIP lysis buffer and sonicated to generate DNA fragments of 200-1000 bp. A 20-µl aliquot of cell lysate was stored at −80°C as input, and the remaining cell lysate was incubated with anti-H3K4me3, anti-H3K27me3, anti-H3K9me3, anti-LSD1, anti-HIF1α, or IgG (negative control) at 4°C for 24 h. Protein A/G magnetic beads were then added to the lysates to capture the immunoprecipitated protein-DNA complexes. The DNA was extracted from the complexes, and the VEGFA and VE-cadherin promoter regions in the DNA samples were detected by qPCR. The sequences of the primers used for ChIP-qPCR are shown in Table S2.
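ChIP-qPCR signals are commonly normalized to the saved input by the percent-input method. The sketch below illustrates that arithmetic under the assumption that the input represents a known fraction of the chromatin used per IP; the Ct values and the 1% input fraction are hypothetical, and the paper does not state which normalization it used.

```python
import math

def percent_input(ct_input: float, ct_ip: float, input_fraction: float = 0.01) -> float:
    """Percent-input for ChIP-qPCR.

    ct_input is first adjusted to 100% of chromatin by subtracting
    log2(1 / input_fraction); enrichment is then 100 * 2**(adjusted - ct_ip).
    """
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

# Hypothetical example: anti-H3K4me3 IP on the VE-cadherin promoter vs IgG.
print(round(percent_input(ct_input=24.0, ct_ip=28.5), 3))  # specific IP
print(round(percent_input(ct_input=24.0, ct_ip=33.0), 3))  # IgG background
```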
Co-immunoprecipitation (Co-IP) assay
The Co-IP assay was used to assess the interaction between LSD1 and HIF1α. mREC cells were lysed with RIPA lysis buffer. The cell lysates were incubated with anti-LSD1 antibody and Protein A agarose beads. The protein level of HIF1α in the immunoprecipitated complex was then detected by western blot.
Luciferase reporter assay
The luciferase reporter assay was used to assay the activities of the VEGFA and VE-cadherin promoters. Briefly, the promoter sequences of VEGFA and VE-cadherin were subcloned upstream of the luciferase reporter gene in a pGL3 plasmid. Each recombinant vector was co-transfected with sh-Hotair or sh-HIF1α into mREC cells. The luciferase activity was then detected with the Dual-Luciferase Reporter Assay System (Promega, USA).
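With a dual-luciferase system, promoter activity is usually reported as the firefly/Renilla ratio (controlling for transfection efficiency), further normalized to a control condition. The text names only the kit, so the Renilla control and all readings below are assumptions; this is a minimal sketch of the arithmetic.

```python
def relative_luciferase(firefly: float, renilla: float, control_ratio: float) -> float:
    """Firefly/Renilla ratio normalized to a control condition."""
    return (firefly / renilla) / control_ratio

# Hypothetical readings for a VE-cadherin promoter reporter.
control = 5200.0 / 980.0                                         # sh-NC under HG
print(round(relative_luciferase(11800.0, 1010.0, control), 2))   # sh-Hotair under HG
```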
Real-time quantitative PCR (qRT-PCR)
Total RNA was isolated from mouse retinas and mREC cells and reverse-transcribed to cDNA using the BeyoRT™ First Strand cDNA Synthesis Kit (Beyotime, China). The cDNA was then used for qRT-PCR with BeyoFast™ SYBR Green qPCR Mix (Beyotime, China). The relative expression of genes was calculated using the 2^−ΔΔCt method. The sequences of the primers used for qRT-PCR are shown in Table S2.
Statistical analysis
All data were analyzed using GraphPad Prism 7.0 software and expressed as mean ± standard deviation (SD). One-way ANOVA with Tukey's post hoc test and Student's t-test were applied to determine the significance between groups. A p-value <0.05 indicated statistical significance.
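For reference, a minimal Python sketch of the stated analysis (one-way ANOVA with Tukey's post hoc test, plus a two-group Student's t-test) using SciPy and statsmodels; the three hypothetical groups stand in for, e.g., NG, HG + sh-NC, and HG + sh-Hotair measurements.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical replicate measurements for three groups.
ng       = np.array([1.00, 0.95, 1.05, 0.98])
hg_shnc  = np.array([1.80, 1.92, 1.75, 1.88])
hg_shhot = np.array([1.20, 1.15, 1.28, 1.22])

# One-way ANOVA across the three groups.
f_stat, p_anova = stats.f_oneway(ng, hg_shnc, hg_shhot)
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4g}")

# Tukey's post hoc test for all pairwise comparisons.
values = np.concatenate([ng, hg_shnc, hg_shhot])
labels = ["NG"] * 4 + ["HG+sh-NC"] * 4 + ["HG+sh-Hotair"] * 4
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# Student's t-test for a simple two-group comparison.
t_stat, p_t = stats.ttest_ind(hg_shnc, hg_shhot)
print(f"t-test: t={t_stat:.2f}, p={p_t:.4g}")
```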
Hotair expression pattern in the retina of DM mice and HG-stimulated REC
To investigate the role of Hotair in DR, we first established an STZ-induced mouse DM model (Fig. 1A). At 12 and 20 weeks after STZ injection, the retinal vascular preparations showed decreased pericytes and increased acellular capillaries in the retinas of DM mice (Fig. 1B). Since IgG extravasation is considered a marker of increased vessel permeability (21), we then stained the retinas of mice with IgG. The results revealed an increase in the vascular leakage of DM mice, and the vascular leakage increased with the duration of diabetes (Fig. 1C). As shown in Fig. 1D, Hotair is highly conserved among mouse, rat, and human. We then detected Hotair expression, which was significantly increased in the retinas of DM mice and in HG-stimulated REC cells (Fig. 1E-F). Collectively, these results suggested that the expression of Hotair is increased in DM or HG-stimulated REC.
Hotair knockdown improves the function of REC under HG condition
We then explored the role of Hotair in regulating the function of REC in vitro. First, we assessed the effects of Hotair knockdown on the proliferation of REC by EdU fluorescence staining. As shown in Fig. 2, Hotair knockdown inhibited the proliferation, invasion, migration, and permeability of HG-stimulated REC, indicating that Hotair knockdown improves REC function under HG conditions.
Hotair knockdown regulates the expressions of VE-cadherin and VEGF in REC under HG condition
It has been reported that VEGF plays an important role in the pathological angiogenesis of REC in DR (22). Besides, VE-cadherin also plays an important role in DR by regulating the permeability of REC (9). To further clarify the mechanism of Hotair in regulating REC function under HG conditions, we then examined the effects of Hotair knockdown on the expression of VEGF and VE-cadherin. As shown in Fig. 3A-B, Hotair knockdown increased the mRNA and protein levels of VE-cadherin and decreased the VEGFA mRNA level and VEGF protein level (Fig. 3C-D). These results suggested that Hotair may exert its role in REC dysfunction by regulating the expression of VE-cadherin and VEGF.
Hotair knockdown relieves retinal vessel impairment in vivo
We then addressed whether Hotair can affect retinal vessel impairment in vivo. As shown in Fig. 4A, AAV vectors carrying sh-Hotair or scramble shRNA (negative control) were injected into DM mice. The retinal vascular preparations showed that Hotair knockdown decreased the acellular capillaries and increased the pericytes in the retinas of DM mice (Fig. 4B). The immunohistochemistry analysis of IgG revealed that Hotair silencing reduced the vascular leakage of DM mice (Fig. 4C). The qRT-PCR assay showed that AAV-sh-Hotair effectively suppressed the expression of Hotair in the retinas of DM mice (Fig. 4D). Furthermore, the immunofluorescent staining of retinal whole-mounts revealed that the silencing of Hotair increased VE-cadherin expression in the retinas of mice (Fig. 4E). The results in Fig. 4F showed that Hotair knockdown resulted in a decrease of VEGF expression in the retinas of mice. Together, these data indicated that Hotair knockdown can relieve retinal vessel impairment in DM mice.
Hotair/LSD1 inhibits the transcriptional expression of VE-cadherin via reducing the H3K4me3 level on its promoter
Based on the RNA-FISH assay, we found that Hotair was mainly located in the nuclei of mREC (Fig. 5A). The qRT-PCR analysis showed increased Hotair expression and a decreased mRNA level of VE-cadherin in HG-stimulated mREC cells, and the correlation analysis further indicated that the expression of Hotair was negatively correlated with the VE-cadherin mRNA level (Fig. 5B). Besides, the luciferase reporter gene assay showed that the knockdown of Hotair elevated the promoter activity of VE-cadherin in mREC cells under HG conditions (Fig. 5C). These findings indicated that Hotair may regulate VE-cadherin expression by modulating its transcription.
Previous studies have shown that Hotair can regulate the transcriptional activity of target genes by binding to the histone methylase PRC2 and the histone demethylase LSD1 (23). Of the two enzymes, PRC2 is mainly responsible for the methylation of H3K27, and LSD1 is mainly responsible for the demethylation of H3K4 or H3K9 (23,24). Hence, we examined the effects of Hotair on the levels of H3K4me3, H3K9me3, and H3K27me3 on the VE-cadherin promoter. The ChIP assay showed that although HG decreased the H3K4me3 level and increased the H3K27me3 level on the promoter of VE-cadherin, the knockdown of Hotair only altered the H3K4me3 level on the VE-cadherin promoter under HG conditions (Fig. 5D). Because of this, we hypothesized that Hotair might regulate the H3K4me3 level on the VE-cadherin promoter through binding to LSD1, thereby affecting the transcription of VE-cadherin. To confirm this speculation, we first assessed the effect of Hotair on the expression of LSD1 under DM or HG conditions. As shown in Fig. 5E-F, the mRNA expression of LSD1 was significantly increased in the retinas of DM mice, and the knockdown of Hotair had no significant effect on the LSD1 mRNA level. Meanwhile, the in vitro experiments also showed that the silencing of Hotair did not affect the mRNA and protein levels of LSD1 in mREC cells under HG conditions (Fig. 5G), suggesting that Hotair has no effect on LSD1 expression. We then verified whether Hotair can bind to LSD1 in mREC cells. The RIP assay showed that Hotair was enriched in the complex immunoprecipitated by the LSD1 antibody, and the RNA pull-down assay revealed that LSD1 was enriched in the product pulled down by the Hotair probe (Fig. 5H). Next, we determined the impact of Hotair on the binding capacity of LSD1 to the VE-cadherin promoter in mREC cells. The ChIP assay confirmed that the overexpression of Hotair enhanced the binding capacity of LSD1 to the VE-cadherin promoter under NG conditions, and the silencing of Hotair repressed the binding capacity of LSD1 to the VE-cadherin promoter under HG conditions (Fig. 5I). At last, we verified whether LSD1 mediates the regulatory effect of Hotair on the H3K4me3 of the VE-cadherin promoter. The result revealed that the forced expression of Hotair reduced the H3K4me3 level on the VE-cadherin promoter, while the knockdown of LSD1 reversed this effect (Fig. 5J). Therefore, these results demonstrated that Hotair binds to LSD1 to inhibit H3K4me3 on the VE-cadherin promoter, thereby suppressing the transcription of VE-cadherin.
Hotair/LSD1 promotes HIF1α-mediated transcription activation of VEGFA
The qRT-PCR assay also showed that the expression of Hotair and the mRNA level of VEGFA were both increased in HG-stimulated mREC, and the correlation analysis showed a significant positive correlation between Hotair expression and the VEGFA mRNA level (Fig. 6A). We then explored whether Hotair can affect histone methylation on the VEGFA promoter in REC under HG conditions. As shown in Fig. 6B, HG reduced the H3K9me3 and H3K27me3 levels on the VEGFA promoter, while the knockdown of Hotair had no significant effect on them, implying that Hotair may not regulate VEGFA through histone methylation. To further investigate the mechanism of Hotair in regulating VEGFA transcription, we determined the effects of Hotair on the activity of the VEGFA promoter. The results showed that the silencing of Hotair reduced the activity of the VEGFA promoter at the -1983 bp, -1340 bp, and -643 bp regions, while it had no significant effect on the activity of the VEGFA promoter at the -325 bp region (Fig. 6C). In view of this, we hypothesized that the target through which Hotair regulates VEGFA promoter activity may be located between -643 bp and -325 bp. We then predicted the possible binding sites of transcription factors in this region of the VEGFA promoter using the JASPAR software and found possible HIF1α binding sites (Fig. 6D). Since many studies have reported that HIF1α can act as a transcription factor to promote the expression of VEGFA (25,26), we then investigated whether HIF1α mediates the role of Hotair in regulating VEGFA transcription under HG conditions. As shown in Fig. 6E-F, the mRNA and protein expression of HIF1α was elevated in the retinas of DM mice and in HG-stimulated mREC cells. The silencing of HIF1α reduced VEGFA promoter activity at the -643 bp region under HG conditions, while it had no significant effect on the -325 bp region (Fig. 6G), indicating that HIF1α mainly regulates VEGFA promoter activity between the -643 bp and -325 bp regions. Besides, our results showed that the knockdown of HIF1α decreased the mRNA level of VEGFA in mREC cells under HG conditions (Fig. 6H). The ChIP assay revealed that the silencing of Hotair or LSD1 suppressed the binding of HIF1α to the VEGFA promoter (Fig. 6I). The knockdown of Hotair or LSD1 reduced the protein level of HIF1α and the mRNA and protein levels of VEGF, but had no significant effect on the HIF1α mRNA level (Fig. 6J), implying that Hotair and LSD1 regulate HIF1α expression at the post-transcriptional level. The RNA pull-down and Co-IP assays showed that HIF1α could bind to Hotair and LSD1 (Fig. 6K), leading us to speculate that HIF1α, Hotair, and LSD1 might form a complex. Our further results revealed that the overexpression of Hotair increased the mRNA level of VEGFA and the protein level of HIF1α in mREC cells under HG conditions, while the knockdown of LSD1 abolished this effect (Fig. 6L), suggesting that the role of Hotair in regulating the expression of VEGFA and HIF1α depends on LSD1. Furthermore, the overexpression of LSD1 elevated the mRNA level of VEGFA and the protein level of HIF1α in mREC cells under HG conditions, while this effect was partially reversed by Hotair knockdown (Fig. 6M). Hence, these data demonstrated that Hotair/LSD1 regulates HIF1α at the protein level, thereby affecting HIF1α-mediated transcriptional activation of VEGFA.
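Candidate HIF1α sites such as those returned by JASPAR can be crudely approximated by scanning for the hypoxia response element (HRE) core 5'-(A/G)CGTG-3'; JASPAR itself uses position weight matrices rather than a fixed consensus. The short Python sketch below illustrates such a scan; the promoter fragment is an invented placeholder, not the actual VEGFA −643/−325 sequence.

```python
import re

# Invented placeholder standing in for the VEGFA promoter between -643 and -325;
# this is NOT the real sequence.
promoter_fragment = "TTGACGTGCCCTAGGGCGTGAATTCCACGTACGTGGG"

# Core hypoxia response element bound by HIF1alpha: 5'-(A/G)CGTG-3'.
for m in re.finditer(r"[AG]CGTG", promoter_fragment):
    print(m.start(), m.group())  # 0-based offset within the fragment
```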
LSD1 mediates the effects of Hotair on the function of HG-stimulated REC
We then verified whether LSD1 mediates the effects of Hotair on the function of HG-stimulated REC. As shown in Fig. 7A-B, the overexpression of Hotair promoted the invasion and increased the permeability of HG-stimulated mREC, while these effects were reversed by LSD1 knockdown. Besides, Hotair overexpression increased the VEGF protein level and decreased the VE-cadherin protein level in HG-stimulated mREC cells, while LSD1 silencing abrogated these effects (Fig. 7C). Therefore, these results showed that LSD1 mediates the effects of Hotair on the function of HG-stimulated REC.
Discussion
Hotair has been proven to play important roles in many diseases, such as cancers, Parkinson's disease, osteonecrosis of the femoral head, and diabetic cardiomyopathy (12,(27)(28)(29). A previous study found that the expression of Hotair was significantly up-regulated in the serum of DR patients (17), whereas its role in DR has not been clarified. Similarly, in this study we found that the expression of Hotair was evidently up-regulated in the retinas of DM mice and in HG-stimulated REC cells; we therefore focused on the role of Hotair in DR. The dysfunction of REC has been considered an important pathological step since it can induce abnormal angiogenesis and BRB hyperpermeability in retinas (3,4). Interestingly, our in vitro results revealed that the knockdown of Hotair protects the function of REC under HG conditions by inhibiting the proliferation, invasion, migration, and permeability of REC. The in vivo results further showed that the silencing of Hotair relieved the retinal microvascular disorder of DM mice, as indicated by the reduced acellular capillaries and permeability of the retinas. Thus, our current results demonstrated that Hotair plays a role in DR by promoting REC dysfunction.
VEGF and VE-cadherin are described as major regulators in DR through their regulation of REC function (3). Under DM or HG conditions, the expression of VEGF is up-regulated; VEGF then binds to its receptors to facilitate the proliferation and migration of REC cells, leading to pathological angiogenesis and aggravated progression of DR (30,31). VE-cadherin is responsible for the integrity of the adherens junctions between adjacent REC cells, and its down-regulation induced by DM or HG can destroy this integrity, thereby increasing the permeability of REC and causing BRB injury (32,33). In this study, up-regulated expression of VEGF and down-regulated expression of VE-cadherin were found in the diabetic retinas of mice and in HG-stimulated REC cells. Besides, our in vitro and in vivo experiments showed that knockdown of Hotair inhibited VEGF and promoted VE-cadherin under DM and HG conditions, implying that VEGF and VE-cadherin mediate the role of Hotair in regulating REC function in DR.
Histone methylation is an important epigenetic regulatory mechanism of gene expression. The methylation of H3K4 is associated with gene activation, while the methylation of H3K9 and H3K27 is associated with gene repression (34). LSD1, an important lysine-specific histone demethylase, can trigger the demethylation of H3K4 and H3K9 (35). Previous studies indicated that LSD1 is involved in DR by regulating histone demethylation on the promoters of target genes (36,37). Zhong et al. reported that the expression of LSD1 is increased in the retinas of DM rats and patients, and that its knockdown inhibited the expression of MMP-9 by increasing the H3K9me2 level on the MMP-9 promoter in REC cells, thereby relieving mitochondrial injury of REC cells in DR (37). Another study by the same group showed that the silencing of LSD1 can also promote the expression of SOD2 to ameliorate mitochondrial dysfunction by increasing the H3K4me2 and H3K4me1 levels on the SOD2 promoter (36). In the current study, we also found that the expression of LSD1 was up-regulated in the retinas of DM mice and in HG-stimulated REC cells.
Our ChIP assay showed that HG decreased H3K4me3, but not H3K9me3, on the VE-cadherin promoter. Besides, HG enhanced the binding capacity of LSD1 to the VE-cadherin promoter in REC cells. These results revealed that LSD1 might repress the transcription of VE-cadherin by reducing the H3K4me3 level on its promoter in REC cells under HG conditions.
HIF1α is a hypoxia-modulated transcription factor and can act as a transcriptional activator to induce the expression of VEGF in DR (38,39). In the present study, we found that HIF1α expression was increased in the retinas of DM mice and HG-stimulated REC cells. The silence of HIF1α significantly reduced the promoter activity and mRNA level of VEGFA in REC cells under HG condition. These data confirmed that HIF1α can participate in DR by promoting VEGFA transcriptional activation.
In this study, we found that the expression of Hotair was positively correlated with the VEGFA mRNA level, while it was negatively correlated with VE-cadherin in REC cells under HG conditions. The knockdown of Hotair reduced VEGFA promoter activity and increased VE-cadherin transcription in these cells. Interestingly, our results showed that the silencing of Hotair and LSD1 had no effect on the mRNA level of HIF1α in HG-stimulated REC cells, while causing a decrease in its protein level. This evidence suggested that Hotair and LSD1 regulate the expression of HIF1α at the protein level.
Since a previous study pointed out that LSD1 can mediate the demethylation of the HIF1α protein at K391, thereby protecting the HIF1α protein against ubiquitin-mediated degradation in tumor angiogenesis (42), we speculated that Hotair and LSD1 might regulate the protein level of HIF1α in DR by this mechanism. In the future, we will further investigate whether Hotair/LSD1 regulates HIF1α by modulating its protein methylation and ubiquitination in DR.
In conclusion, our findings in the current work demonstrated that Hotair serves as a scaffold of LSD1 to decrease VE-cadherin transcription and increase VEGFA transcription, which leads to REC dysfunction, thereby resulting in microvascular dysfunction and aggravating the progression of DR (Fig. 7D). This study provides a deeper understanding of DR pathogenesis and a potentially targeted method for DR treatment.
Funding
This study was supported by the funding of the National Science Foundation for Youth (grant No.
"year": 2020,
"sha1": "bc1e3a96b5073bacbc1ca33b998b4d1f4c2d9d08",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1042/cs20200694",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5b38e96e9b5d9d0d7147cdc444dbbfb7a6633349",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Gemin2 Plays an Important Role in Stabilizing the Survival of Motor Neuron Complex
The survival of motor neuron (SMN) protein, responsible for the neurodegenerative disease spinal muscular atrophy (SMA), oligomerizes and forms a stable complex with seven other major components, the Gemin proteins. Besides the SMN protein, Gemin2 is a core protein that is essential for the formation of the SMN complex, although the mechanism by which it drives formation is unclear. We have found a novel interaction, a Gemin2 self-association, using the mammalian two-hybrid system and in vitro pull-down assays. Using in vitro dissociation assays, we also found that the self-interaction of the amino-terminal SMN protein, which was confirmed in this study, became stable in the presence of Gemin2. In addition, Gemin2 knockdown using small interfering RNA treatment revealed a drastic decrease in SMN oligomer formation and in the assembly activity of spliceosomal small nuclear ribonucleoprotein (snRNP). Taken together, these results indicate that Gemin2 plays an important role in snRNP assembly through the stabilization of the SMN oligomer/complex via this novel self-interaction. Applying the results/techniques to amino-terminal SMN missense mutants that were recently identified in SMA patients, we successfully showed that amino-terminal self-association, Gemin2 binding, the stabilization effect of Gemin2, and snRNP assembly activity were all lowered in the mutant SMN(D44V), suggesting that instability of the amino-terminal SMN self-association may cause SMA in patients carrying this allele.
Spinal muscular atrophy (SMA) is a common autosomal recessive disease that is clinically classified into three types, I-III, based on the severity of motor neuron degeneration and the age of onset (1)(2)(3). Two nearly identical copies of the survival of motor neuron gene (SMN1 and SMN2) are located on human chromosome 5q13, whereas other eukaryotic species have only one copy of the SMN gene. Homozygous deletions of, or mutations in, the SMN1 gene are responsible for SMA (4). SMN is expressed ubiquitously and is a core component of a self-assembling multiprotein complex, the SMN complex, consisting of SMN and the Gemin proteins, which plays an essential role in the assembly of the spliceosomal small nuclear ribonucleoproteins (snRNPs) and in pre-mRNA splicing (5,6).
Most of the missense mutations in SMA patients are located in exon 6 of SMN1, whereas nonsense and frameshift mutations are spread widely throughout the entire gene (7). Exon 6 of SMN1 encodes a self-association domain, and deletions or mutations of this domain result in a decrease in SMN oligomer formation, which correlates with the severity of SMA (8). SMN oligomerization is also a prerequisite for high-affinity binding of the SMN complex to spliceosomal snRNPs (9). The self-association domains of SMN were identified by surface plasmon resonance analysis; both the carboxyl-terminal exon 6-encoded region and the amino-terminal exon 2b-encoded region contributed to self-association (10). This suggests a mechanism in which self-association involves either linear oligomers or closed rings of SMN proteins (10), although the amino-terminal self-association has not yet been evaluated in other studies. Recently, two novel amino-terminal missense mutations, located in the exon 2a-encoded region of SMN1, were identified in SMA patients, although the biochemical features of these mutations are currently unclear (11).
Despite its lack of apparent known domains, Gemin2 plays important roles in the SMN complex; the SMN-Gemin2 complex is associated with the spliceosomal snRNPs U1 and U5 (12), and Gemin2 is essential for the formation of the SMN complex (21), although the precise mechanism is still unknown. Additionally, gene targeting of Gemin2 in mice revealed a correlation between defects in the biogenesis of U snRNPs and motor neuron cell death (22,23).
As with Gemin2, the roles and mechanisms of the other Gemin proteins in the SMN complex are not yet fully characterized, although recent analyses reveal that, in addition to the SMN protein, Gemin3, -5, -6, and -7 all associate directly with the Sm proteins (24), and that Gemin2, -3, -4, -6, and -8 and SMN are important for the U1 snRNP assembly activity of the SMN complex (20,25,26). Battle et al. (27) demonstrated that Gemin5 functions as an snRNA-binding protein of the SMN complex and is required for the U4 snRNP assembly activity of the SMN complex. Recent crystal structural analysis showed that a Gemin6-Gemin7 heterodimer has an Sm protein-like structure, suggesting that it plays a role in the assembly of the snRNP proteins (28). Further, a Gemin3-Gemin4 complex was identified in the RNA-induced silencing complex (RISC), suggesting that Gemin3 and -4 have additional roles to play in the RNA interference gene regulation system (29).
Here, we systematically explore the functions of Gemin2 to gain insight into the molecular basis of the SMN complex and show that Gemin2 significantly stabilized amino-terminal SMN self-association. This correlates with the stability of the SMN oligomer formation and snRNP assembly activity and is likely to occur through the recently identified Gemin2 self-association. We also found that the amino-terminal self-association, Gemin2 binding, stabilizing effect of Gemin2, and snRNP assembly activity were lowered in the SMA-derived amino-terminal missense mutant SMN(D44V), supporting the importance of Gemin2 in stabilizing the SMN complex.
EXPERIMENTAL PROCEDURES
cDNA Clones-The full-length cDNAs encoding the SMN complex component proteins were obtained from the RIKEN mouse cDNA bank (FANTOM) (30). GenBank™ accession numbers are AK167832 for SMN, AK007515 for Gemin2, and AK141078 for Gemin4. The full-length cDNA for human SMN (BC015308) was purchased from the Mammalian Gene Collection. The full-length cDNA for human Gemin2 was kindly provided by Dr. Hitoshi Kurumizaka (Graduate School of Science and Engineering, Waseda University, Tokyo, Japan).
Preparation of SMN Mutants-To generate missense mutations (D30N, D44V, and Y272C), human SMN cDNA was used as a template for site-directed mutagenesis by overlap extension using PCR (31). PCR products were digested with BamHI and XhoI and ligated into the pET21a vector (Novagen, Madison, WI). SMN constructs, including synonymous point mutations in the target region of the siRNA against wild type SMN mRNA, were also generated by site-directed mutagenesis by overlap extension using PCR. PCR products were digested with EcoRI and XhoI and ligated into the pCMV-HA vector (Clontech). The resulting clones were sequenced to confirm each mutation. The primers used for the preparation of SMN mutants are listed in supplemental Table S1.
Expression and Purification of the Recombinant Gemin2-The Gemin2-encoding sequence was subcloned into the pGEX-6P-1 vector (Amersham Biosciences) to express the GST-fused Gemin2 protein (GST-Gemin2). The sequence was also subcloned into the homemade pGEX-6P-1-(His)6 vector to express the carboxyl-terminal (His)6-tagged GST-Gemin2 (GST-Gemin2-His). The fusion proteins were expressed in Escherichia coli BL21-CodonPlus (DE3)-RIL (Stratagene, La Jolla, CA). The cells, harvested from 200 ml of culture, were sonicated and centrifuged at 15,000 × g for 20 min at 4°C. The lysate was applied to a GSTrap FF column (Amersham Biosciences) containing glutathione-Sepharose resin, which was then washed with 1× phosphate-buffered saline (−) buffer. The bound GST-Gemin2 or GST-Gemin2-His proteins were then cleaved with PreScission™ protease (Amersham Biosciences) at 4°C for 4 h, and the GST tag-free Gemin2 or Gemin2-His was eluted with HBS buffer (10 mM HEPES, pH 7.4, 0.15 M NaCl). The tag-free Gemin2 protein was subjected to gel-filtration chromatography on a HiLoad 16/60 Superdex 200-pg column (Amersham Biosciences) with HBS buffer. The bound GST-Gemin2 was also eluted using 10 mM glutathione, 50 mM Tris-HCl, pH 8.0, to obtain the GST-tagged Gemin2.
Mammalian Two-hybrid Assay-Mammalian two-hybrid assays, including sample construction and transfection, were carried out as previously described (32), with minor modifications. The forward primers specific to each ORF were designed to have a consensus tag sequence, 5′-GAAGGAGCCGCCACCATG-3′, followed by an ORF sense-strand sequence with an annealing temperature of 60°C. Similarly, the reverse primers were designed to have another tag sequence, 5′-CAATTTCACACAGGAAACTCA-3′, followed by an ORF antisense-strand sequence. All the gene-specific primers and other common primers used in this work (RSALSE, FSV40LPAS02, FPCMV6, FPCMV5, RSV40LPAS01, T7-RBS-KOZAK, and LGT10L) are listed in supplemental Table S2. Briefly, fragments for the human cytomegalovirus (CMV) promoter and the Gal4 DNA-binding domain (BIND) or the VP16 transcriptional activation domain (ACT) were PCR-amplified from the pBIND or pACT vectors (Promega, Madison, WI) using the primer pair FPCMV6 and RSALSE. The fragment for the SV40 late polyadenylation signal (SV40LPAS) was PCR-amplified from the pBIND vector using the primer pair FSV40LPAS02 and RSV40LPAS01. Each cDNA ORF was amplified using the corresponding ORF-specific forward and reverse primers. Overlapping PCR was carried out to obtain the assay constructs, in which each ORF fragment was connected with the BIND or ACT fragment at the 5′-end of the first PCR product and the SV40LPAS fragment at the 3′-end (bait and prey, respectively). One microliter each of the ORF fragments was mixed with 0.75 µl of BIND or ACT fragments and SV40LPAS fragments and then amplified in 100-µl reactions using the primer pair FPCMV5 and LGT10L. All the PCR conditions were based on those in our previous report (32).
All the combinations of Gemin2 and the other components were transfected into CHO-K1 cells using Lipofectamine™ 2000 (Invitrogen) together with the luciferase reporter plasmid pG5luc, and the reporter activity was measured after 22 h of incubation. Each combination was assayed in triplicate, and the assay was carried out three times.
Rapid in Vitro Pull-down Assay-The PCR products encoding protein-coding sequences were used to construct samples for in vitro transcription/translation. The products were connected by overlapping PCR using the primer pair T7-RBS-KOZAK and LGT10L, giving final constructs with a T7 RNA polymerase promoter at the 5′-terminus. The in vitro pull-down assay was carried out as previously described (33). Briefly, independent in vitro synthesis of biotinylated and [35S]-labeled proteins was carried out from the corresponding PCR constructs using Transcend™ biotinylated lysine-tRNA (Promega), Redivue L-[35S]methionine (Amersham Biosciences), and the TNT T7 Quick-Coupled Transcription/Translation System (Promega). After [35S]-labeled protein synthesis was confirmed by SDS-PAGE and autoradiography, 10 µl each of biotinylated protein and [35S]-labeled protein were mixed, and the mixture was incubated on ice for 1 h. Dynabeads Streptavidin (Dynal Biotech, LLC, Milwaukee, WI) suspension (0.2 mg of beads in 80 µl of blocking buffer, 2% (w/v) skim milk in TBST (50 mM Tris-HCl, pH 8.0, 137 mM NaCl, 2.68 mM KCl, 0.1% (w/v) Tween 20)) was mixed with the reaction, and the mixture was incubated on a rotary shaker for 30 min at 4°C. The beads were isolated with a magnet and washed five times with 150 µl of ice-cold TBST. The radiolabeled proteins co-precipitated with the biotinylated proteins were separated by SDS-PAGE and visualized by autoradiography.
GST Pull-down Assay-Purified GST (5 µg) or GST-Gemin2 (5 µg) was immobilized on glutathione-Sepharose 4B (Amersham Biosciences). Purified Gemin2-His (10 µg) was incubated with the immobilized GST or GST-Gemin2 in a lysis buffer (10 mM Tris-HCl, pH 7.8, 1% (w/v) Nonidet P-40, 150 mM NaCl, 1 mM EDTA) at 4°C for 1 h. After washing the resin five times with the lysis buffer, the bound proteins were subjected to SDS-PAGE, followed by Coomassie staining or western blotting using anti-Gemin2 or anti-His antibodies.
In Vitro Dissociation Assay-For the in vitro dissociation assay, free [35S]methionine that was not incorporated into the synthesized proteins during the radioisotope-labeling reaction was removed using CENTRI-SEP spin columns (Princeton Separations, Inc., Adelphia, NJ). Fifty microliters each of biotin-labeled and [35S]-labeled proteins was mixed with either 50 µl of in vitro synthesized unlabeled protein or 50 µl of the reticulocyte lysate, and the mixture was incubated on ice for 1 h. Dynabeads Streptavidin suspension (1 mg of beads in 150 µl of the blocking buffer) was mixed with the reaction and incubated for 30 min at 4°C. The beads were captured with a magnet and washed once with 500 µl of ice-cold TBST within 2 min of capture, followed by re-suspension in 300 µl of TBST. Every 15 min, 50 µl of the suspension was removed; the beads were captured, and the radioactivity remaining on the beads was measured with a liquid scintillation counter.
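The bead-retained radioactivity sampled every 15 min can be fit to a single-exponential dissociation model to compare apparent off-rates, e.g., with and without Gemin2. The following is a minimal sketch using synthetic counts (not data from this study); the single-exponential model and the plateau term are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def dissociation(t, r0, k_off, plateau):
    """Single-exponential decay of bead-bound counts toward a plateau."""
    return plateau + (r0 - plateau) * np.exp(-k_off * t)

t_min = np.array([0, 15, 30, 45, 60, 75], dtype=float)               # sampling times (min)
cpm   = np.array([5000, 3900, 3150, 2650, 2350, 2150], dtype=float)  # synthetic counts

popt, _ = curve_fit(dissociation, t_min, cpm, p0=(5000.0, 0.05, 2000.0))
r0, k_off, plateau = popt
print(f"k_off ~ {k_off:.3f} /min, half-life ~ {np.log(2) / k_off:.1f} min")
```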
Sedimentation Analysis-HeLa cells (two 10-cm culture dishes) were used for sedimentation analysis of the SMN oligomer. The siRNAs for Gemin2 and the negative control (GCAGCUCAAUGUCCAGAU and CCCGGACCACAACGCUCUG, respectively (25)) were purchased from Invitrogen. The siRNAs were transfected into the cells using Oligofectamine™ according to the manufacturer's protocol (Invitrogen). Forty-four hours after the siRNA transfection, the silencing of Gemin2 in the siRNA-transfected cells was evaluated by qRT-PCR and western blot analysis. The cells were harvested, re-suspended in 500 µl of the lysis buffer, and then centrifuged at 15,000 × g for 20 min at 4°C. The sucrose density gradient was prepared using a Gradient Master (Biocomp Instruments, Inc., Fredericton, New Brunswick, Canada). The supernatant was separated on 6 to 38% (w/v) sucrose gradients at 17,000 rpm in an SW28 rotor (Beckman Coulter, Inc., Fullerton, CA) at 4°C for 17.5 h, followed by 2-ml fractionation using a Piston gradient fractionator (Biocomp Instruments). The proteins in the fractions were precipitated with 3% (w/v) trichloroacetic acid and centrifuged at 15,000 × g for 10 min at 4°C. The precipitates were dissolved in 50 µl of 0.1 M NaOH, and 20% of each sample was separated by SDS-PAGE and analyzed by western blotting using anti-SMN polyclonal IgG. Svedberg values of the SMN complex were estimated using the marker globular proteins ovalbumin (3.5S, 44 kDa), aldolase (7.3S, 158 kDa), and thyroglobulin (19.4S, 670 kDa). Additional ribosomal 50S (1.8 MDa) and 70S (2.7 MDa) subunits were also used in the size estimation.
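S values can be estimated from the gradient positions of the markers; assuming sedimentation distance is roughly linear in S across the gradient, the interpolation can be sketched as follows. The marker peak-fraction numbers here are hypothetical placeholders.

```python
import numpy as np

# Hypothetical peak fractions of the markers in the 6-38% sucrose gradient.
marker_fraction = np.array([3.0, 6.0, 12.0])   # ovalbumin, aldolase, thyroglobulin
marker_svedberg = np.array([3.5, 7.3, 19.4])   # corresponding S values

def estimate_s(fraction: float) -> float:
    """Linearly interpolate an S value from a peak fraction number."""
    return float(np.interp(fraction, marker_fraction, marker_svedberg))

print(round(estimate_s(10.0), 1))  # e.g., peak fraction of the SMN complex
```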
Immunoprecipitation-HeLa cells (in a 10-cm culture dish) were transfected with the siRNA for Gemin2 or with the negative control siRNA using Oligofectamine™ according to the manufacturer's protocol. Forty-four hours after the siRNA transfection, cells were harvested and lysed in lysis buffer containing 1 mM phenylmethylsulfonyl fluoride and 10 mg/ml leupeptin. After centrifugation at 15,000 × g for 15 min at 4°C, the supernatants were subjected to immunoprecipitation with 5 µg of anti-SMN antibody. The co-precipitated Gemin2, Gemin3, Gemin7, and SmB/B′ were detected by western blotting.
qRT-PCR-Total RNA was extracted from HeLa cells using an RNeasy mini kit (Qiagen). The extracted total RNA was reverse-transcribed using an oligo(dT) primer and the ThermoScript™ RT-PCR system (Invitrogen). The prepared cDNA was used as the template for qRT-PCR analysis using SYBR Green I nucleic acid gel stain (Invitrogen). The two different primer sets against each target gene are shown in supplemental Table S3. PCR amplification was carried out using the ABI Prism 7900HT instrument (Applied Biosystems, Foster City, CA), and each reaction was run in triplicate. Expression was assessed by evaluating the threshold cycle value (CT). First, the difference in threshold cycle (ΔCT) of each target gene was calculated between the negative control siRNA-transfected cells and the Gemin2 siRNA-transfected cells. Second, the gene expression level of the negative control cells was set to 1.0, and the relative gene expression levels of the Gemin2 siRNA-transfected cells were calculated as 2^−ΔCT. The procedure is described in detail in supplemental data 1.
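The 2^−ΔCT calculation described above reduces to a few lines of code; this sketch uses hypothetical Ct values for Gemin2 after Gemin2 siRNA versus the negative control, following the computation exactly as stated.

```python
def relative_expression(ct_knockdown: float, ct_control: float) -> float:
    """Relative expression by the 2**(-dCT) method described above:
    dCT = CT(siRNA-treated) - CT(negative control); the control is set to 1.0."""
    d_ct = ct_knockdown - ct_control
    return 2.0 ** (-d_ct)

# Hypothetical Ct values for Gemin2 after Gemin2 siRNA vs negative control.
print(round(relative_expression(ct_knockdown=26.8, ct_control=24.0), 3))  # ~0.14
```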
In Vitro snRNP Assembly Assay-The in vitro snRNP assembly assay was carried out as described previously (26,34), with slight modification. Briefly, U1 snRNA was transcribed in vitro using Riboprobe Systems (Promega) in the presence of Ribo m7G Cap Analog (Promega) and [α-32P]UTP (Amersham Biosciences) for 1 h at 37°C, followed by removal of the DNA template by digestion with RQ1 RNase-free DNase (Promega). The free [α-32P]UTP that was not incorporated into the synthesized U1 snRNA was removed using MicroSpin S-200 HR columns (Amersham Biosciences). Cytoplasmic extracts were prepared from HeLa cells using NE-PER nuclear and cytoplasmic extraction reagents (Pierce), and the extracts were incubated with U1 snRNA for 20 min at 30°C in RSB-100 buffer (10 mM Tris-HCl, pH 7.5, 100 mM NaCl, 2.5 mM MgCl2). Subsequently, the U1 RNA assembled with Sm proteins was immunoprecipitated with the anti-Sm monoclonal antibody Y12 (Lab Vision Corp.) or with anti-Gal4 antibody (Santa Cruz Biotechnology), which was used as a negative control. The antibody and Dynabeads Protein G (Dynal Biotech) in 80 µl of RSB-100 buffer containing 0.1% Nonidet P-40 and 0.2 unit/µl RNasin RNase inhibitor (Promega) were added to the reaction mixtures, and the mixture was incubated on a rotary shaker for 1 h at 4°C. The beads were isolated with a magnet and washed five times with 200 µl of ice-cold RSB-100 buffer containing 0.1% Nonidet P-40. Half of the precipitated U1 RNA was measured using a liquid scintillation counter, and the other half was separated on a 7 M urea-6% polyacrylamide gel and detected by autoradiography. After subtraction of the count for immunoprecipitation with the anti-Gal4 antibody (negative control), the relative assembly activity was compared with the assembly activity of the wild-type cells. The average count for the negative control was 82 cpm (<1% of the count in the wild type).
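Relative U1 snRNP assembly activity was obtained by subtracting the anti-Gal4 (negative-control) counts and normalizing to the wild type. A minimal sketch of that arithmetic follows; only the 82-cpm background average comes from the text, and the sample counts are hypothetical.

```python
def relative_assembly(sample_cpm: float, wildtype_cpm: float,
                      background_cpm: float = 82.0) -> float:
    """Background-subtracted assembly activity relative to wild-type cells."""
    return (sample_cpm - background_cpm) / (wildtype_cpm - background_cpm)

# Hypothetical scintillation counts for Gemin2-knockdown vs wild-type extracts.
print(round(relative_assembly(sample_cpm=2600.0, wildtype_cpm=9200.0), 2))  # ~0.28
```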
RESULTS
Identification of a Novel Protein-Protein Interaction, the Gemin2 Self-association-Gemin2 is a core component of the SMN complex, contributing to the activity of the SMN complex through an uncharacterized molecular mechanism (12, 22, 35). To gain insight into Gemin2 and its mechanisms, we tried to purify recombinant Gemin2 using a size-fractionation column, and we found that the main fraction of Gemin2 was ~65 kDa, whereas minor fractions of 30 and 120 kDa were apparent (Fig. 1A). Assuming that Gemin2 is a globular protein, the observed molecular mass for the main fraction was twice the theoretical molecular mass of 30 kDa, which would suggest that Gemin2 forms a homo-dimer (Fig. 1A).
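The molecular-mass estimate from gel filtration rests on a semi-log calibration of standard proteins against elution volume (see the Fig. 1A inset). The Python sketch below shows this interpolation under assumed standards and elution volumes, which are not the actual values from the experiment.

```python
# Sketch of the semi-log calibration implied by the inset graph: log10 of
# the standard molecular masses is fit linearly against elution volume, and
# the Gemin2 peak volume is interpolated. All numbers here are assumed.
import numpy as np

standards_kda = np.array([440.0, 158.0, 66.0, 25.0])  # assumed standards
elution_ml = np.array([10.2, 12.1, 13.8, 15.6])       # assumed volumes

slope, intercept = np.polyfit(elution_ml, np.log10(standards_kda), 1)

peak_ml = 13.85  # assumed elution volume of the main Gemin2 peak
mass_kda = 10 ** (slope * peak_ml + intercept)
print(f"Estimated mass at {peak_ml} ml: {mass_kda:.0f} kDa")  # ~65 kDa expected
```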
To confirm the result in vivo, we explored the self-association of Gemin2 using a mammalian two-hybrid system (32). We made constructs that expressed Gemin2, SMN, and Gemin4 as fusion proteins with the Gal4 DNA binding domain or the VP16 trans-activation domain. Together with the reporter plasmid, the Gal4-Gemin2 construct was transfected into CHO-K1 cells in combination with VP16-fusion protein constructs, and the interaction between the expressed fusion proteins was detected by measuring the luciferase reporter activity the next day. We found that Gemin2-Gemin2 showed high reporter activity, comparable to that of SMN-Gemin2, a previously known interaction (Fig. 1B). We also applied a rapid in vitro pull-down method that we recently developed, which uses in vitro biotinylated proteins instead of tagged proteins as the pull-down drivers (33). We synthesized Gemin2 protein in vitro using the rabbit reticulocyte lysate system and thereby demonstrated Gemin2 self-association (Fig. 1C).
The observed binding, derived from the mammalian two-hybrid system and the in vitro pull-down method with proteins produced in the reticulocyte lysate, could be indirect via additional proteins. Because of this, we further confirmed binding using purified recombinant Gemin2; we expressed and purified recombinant Gemin2 with a GST tag (GST-Gemin2) or His tag (Gemin2-His) and subjected the recombinant protein to a rational in vitro pull-down assay using glutathione-Sepharose resin (Fig. 1D). The purified Gemin2-His was successfully pulled down by GST-Gemin2, which indicates that Gemin2 is able to directly self-associate.
Domain Mapping of Gemin2 Self-association-To identify the binding domain for the Gemin2 self-association as compared with the Gemin2 binding domain for the SMN protein, we systematically constructed Gemin2 deletion mutants, and these were subjected to a binding assay using the mammalian two-hybrid system (Fig. 2A). We found that these binding domains are very similar and that they reside almost throughout the Gemin2 protein. The binding availability seems to be sensitive to deletion at the carboxyl terminus, because both self-association and SMN binding drastically decreased in the mutant Gemin2 1-252, a deletion mutant lacking 17 amino acid residues at the carboxyl terminus of Gemin2. Interestingly, the binding availability is slightly different in the amino-terminal deletions. The reporter activity for Gemin2 self-association decreased in Gemin2 90-269 and was almost undetectable above the background signal in Gemin2 99-269, whereas the reporter activity for the Gemin2-SMN association did not change drastically in these mutants but decreased in Gemin2 108-269. The difference in binding properties of the Gemin2 mutants was also confirmed using an in vitro pull-down method (Fig. 2B). Thus we obtained a Gemin2 mutant that exhibited reduced Gemin2 self-association activity without drastically affecting the Gemin2-SMN association.
Self-association of SMN Protein at the Amino and Carboxyl Termini-It is well established that the SMN protein self-associates via its carboxyl-terminal, exon 6-encoded region, in which many point mutations are reported in SMA patients (11, 36). In addition, one report using in vitro surface plasmon resonance analysis identified another self-association region of the SMN protein, residing in the amino-terminal, exon 2b-encoded region, located close to the region where the SMN protein associates with Gemin2 (10). Despite its importance for oligomerization of the SMN protein, self-association via the amino terminus has not been evaluated in other work. Thus, we carried out domain mapping for SMN self-association using both in vivo and in vitro assays.
We divided the SMN protein into three regions: amino-terminal, middle, and carboxyl-terminal, encoded by exons 1-2b (SMN exon1-2b), exons 3-5 (SMN exon3-5), and exons 6 and 7 (SMN exon6-7), respectively, and explored their interaction with the full-length SMN protein using the mammalian two-hybrid system (Fig. 3A). We found clear reporter signals in both the amino- and carboxyl-terminal regions but not in the middle region, consistent with the two self-association sites previously reported (10). The reporter signals became stronger when we used the regions encoded by exons 1-5 (SMN exon1-5) and exons 3-7 (SMN exon3-7), suggesting that these self-association sites may be more stable when present in a longer form. Next we explored the in vitro bindings; in addition to confirming the in vivo bindings, we could show self-associations of SMN exon1-5 and SMN exon3-7 (Fig. 3B).

FIGURE 1. Identification of a novel protein-protein interaction; Gemin2 self-interaction. A, gel-filtration analysis of Gemin2. The chromatogram demonstrates the elution of purified recombinant Gemin2 (*, the major elution peak of the protein). The arrows indicate the elution point of each molecular standard protein, and the numbers correspond to the proteins shown in the inset graph. The inset graph shows the semi-log plots of the molecular masses of the standard proteins and Gemin2 against the elution volumes of these proteins in gel-filtration chromatography. B, the result of the mammalian two-hybrid assay. This experiment was independently conducted three times, and the error bars represent the standard deviation. Gal4-Gemin2 and VP16-SMN, -Gemin2, and -Gemin4 were expressed in CHO-K1 cells. The combination of Gal4-Gemin2 and VP16-Gemin4 was examined as a negative control. Protein-protein interaction was determined by measurement of luciferase reporter activity. C, the result of the in vitro pull-down assay. In vitro translated 35S-labeled Gemin2 was incubated with in vitro translated biotinylated SMN, Gemin2, and Luc (luciferase as a negative control), and the formed complexes were captured with streptavidin beads. Proteins that remained bound to the beads were analyzed by SDS-PAGE and visualized by autoradiography. 10% of the 35S-labeled protein used in the assay was loaded as an input. D, Gemin2 self-association using the purified recombinant Gemin2. Purified GST, GST-Gemin2, and Gemin2-His were resolved by SDS-PAGE and visualized by Coomassie staining (left panel). Right panels show the results of the GST pull-down assay. Purified GST or GST-Gemin2 was immobilized on glutathione-Sepharose and incubated with purified Gemin2-His. After resolution by SDS-PAGE, the bound proteins were visualized by Coomassie staining or Western blot using an anti-Gemin2 or an anti-His antibody. 30% of the Gemin2-His used in each reaction was loaded as an input.

FIGURE 2. Interaction domain mapping for Gemin2 self-association and Gemin2-SMN association. A, schematic representation of the full-length and deletion mutants of Gemin2 (left panel). The interaction was examined using a mammalian two-hybrid assay. Gal4-fused proteins for full-length Gemin2 or Gemin2 deletion mutants were expressed in CHO-K1 cells with VP16-SMN (middle panel) or VP16-Gemin2 or VP16-Gemin2 mutants (right panel). Protein-protein interactions were determined with the same procedures as in Fig. 1B. The experiment was independently conducted three times, and the error bars represent the standard deviation. B, confirmation of the properties of the Gemin2 self-association and the Gemin2-SMN association in the mutant Gemin2 90-269. The interaction assay was performed by the same method as in Fig. 1C.
Gemin2 Stabilizes the Amino-terminal Self-association of the SMN Protein-Because the amino-terminal region of SMN is responsible for both its self-association and the association with Gemin2, it is conceivable that these interactions together may assist the formation of the SMN complex. To evaluate this hypothesis, we examined the stability of the amino-terminal self-association of SMN in the presence or absence of synthesized Gemin2, using an in vitro dissociation assay system based on our in vitro pull-down assay (Fig. 4). After association of the in vitro synthesized biotin-labeled protein and 35S-labeled protein, the complex was captured by streptavidin beads and suspended in assay buffer. The 35S-labeled protein that remained attached to the beads was measured at different time points to assay stability. We found that the amino-terminal SMN self-association was unstable; only 35% of the original association remained by the end of the incubation (Fig. 4B).
From the domain mapping experiment we obtained Gemin2 90-269, a mutant with reduced Gemin2 self-association activity that does not drastically affect the Gemin2-SMN association, and we applied this mutant to the dissociation assay. First we examined the stability of Gemin2 self-association and Gemin2-SMN association in the mutant. As expected, the mutant showed a less stable Gemin2 self-association in comparison with that of the full-length Gemin2 protein, whereas the stability of the Gemin2-SMN association was similar for the mutant and the full-length Gemin2 protein (supplemental Fig. S1). Next we explored the effect of the Gemin2 mutation on the stabilization of amino-terminal SMN self-association. The stabilizing effect was weak even in the presence of Gemin2 90-269 (crosses in Fig. 4B), indicating that Gemin2 helps to stabilize amino-terminal SMN self-association through both the Gemin2-SMN association and the Gemin2 self-association. The result is represented schematically in Fig. 4C.
Gemin2 Knockdown Lowers SMN Oligomerization and in Vitro snRNP Assembly Rates-Our results indicate that Gemin2 plays an important role in the stabilization of the SMN complex through SMN interaction and a novel self-interaction. The SMN complex forms oligomers in mammalian cells, which are considered to be important for the function of snRNP assembly. We therefore explored the requirement of Gemin2 for SMN oligomerization and snRNP assembly by applying Gemin2 siRNA in HeLa cells. First we confirmed the effect of Gemin2 siRNA on transcripts of the SMN complex components by qRT-PCR analysis (Fig. 5A). As expected, Gemin2 siRNA treatment decreased the Gemin2 transcript to 10% of that in untreated cells, whereas transcripts for other components were unaffected. A resulting decrease in Gemin2 protein expression was confirmed by Western blotting (Fig. 5B). We then subjected the cytoplasmic soluble fraction of the siRNA-treated and untreated cells to sedimentation analysis using sucrose density gradient ultracentrifugation and detected the SMN protein by Western blotting. Gemin2 siRNA (siGemin2) treatment drastically changed the SMN distribution (middle lane in Fig. 5C); most of the cytoplasmic SMN protein resided in fractions between 3.5S and 19.4S (estimated molecular mass of 44-670 kDa with a peak of 158 kDa), as opposed to untreated (WT) and negative control siRNA (siControl)-treated cells (top and bottom in Fig. 5C), where cytoplasmic SMN protein mainly resided in fractions larger than 19.4S (estimated molecular mass larger than 670 kDa with a peak of 1.8 MDa), indicating that most of the cytoplasmic SMN protein is composed of the oligomerized complex, consistent with a previous report (21).

FIGURE 3 (legend fragment). A, full-length SMN (SMN exon1-7) or SMN deletion mutants were expressed in CHO-K1 cells with VP16-SMN exon1-7. Protein-protein interactions were determined using the same procedures as in Fig. 1B. B, confirmation of SMN self-association by in vitro pull-down assay. The interaction assay was performed by the same method as in Fig. 1C.
The precise stoichiometry of the components of the SMN complex is still unknown. However, because the amount of SMN and Gemin2 is far greater than that of the other components, it is highly likely that the core of the native SMN complex has a simple protein composition comprising only two proteins, SMN and Gemin2 (21). Thus, it is unlikely that the drastic change that was observed is due only to the loss of the molecular weight of Gemin2 and/or dissociation of the other components from the oligomerized complex. This strongly suggests that the decrease in Gemin2 destabilized the oligomerized form of the SMN complex. Nonetheless, it is interesting to explore the effects of Gemin2 knockdown on the other components of the SMN complex. We explored the components that are co-immunoprecipitated with the SMN protein in both Gemin2 siRNA-treated and untreated cells. Useful antibodies were commercially available for only a few components, but we found that the levels of co-immunoprecipitated Gemin3, Gemin7, and SmB/B′ decreased following Gemin2 siRNA treatment (Fig. 5D). These results indicate that Gemin2 plays important roles not only in the stabilization of SMN oligomerization but also in the stabilization of other components of the SMN complex.
Next we explored the effect of Gemin2 knockdown on the SMN function of snRNP assembly, because the SMN complex plays a role in the formation of the Sm protein-U1 RNA complex. Cytoplasmic extracts from the 48-h siRNA-treated and untreated cells were incubated with in vitro synthesized 32P-labeled U1 RNA and immunoprecipitated using an anti-Sm antibody. The in vitro snRNP assembly activity was determined by measuring co-immunoprecipitated 32P-labeled U1 RNA as Sm protein-U1 RNA complexes. The siRNA treatment did not affect the amount of Sm protein in the extracts (supplemental Fig. S2A). Gemin2 siRNA treatment (siGemin2) significantly decreased snRNP assembly, to 40% of the assembly in the untreated (WT) cells (supplemental Fig. S2, B and C). In the negative control siRNA (siControl)-treated cells, snRNP assembly activity was comparable with the level in untreated cells, indicating that Gemin2 is required for efficient snRNP assembly, which is consistent with previous reports (25, 26).
A SMA-derived Mutant SMN(D44V) Reveals a Decrease in Amino-terminal Self-association, Gemin2 Binding, and the Stabilization Effect of Gemin2-Recently, Sun et al. (11) reported two novel missense SMN mutants, SMN(D30N) and SMN(D44V), identified from SMA patients, where D30N and D44V denote a substitution of the aspartic acid at positions 30 and 44 by asparagine and valine, respectively. Although the authors failed to identify a significant biochemical feature of these mutations, the mutations reside within the Gemin2 binding site and are close to the amino-terminal SMN self-association site. We therefore investigated the effect of these mutations on SMN self-association and the SMN-Gemin2 interaction by using human SMN cDNAs into which point mutations corresponding to these missense mutations were introduced. These were then analyzed by mammalian two-hybrid assay and in vitro pull-down assay (Fig. 6). We used SMN exon1-5 for the assay, because the removal of the carboxyl-terminal self-association site enables a clearer detection of any effect of mutation on amino-terminal SMN self-association. We found that the self-association of SMN(D44V) exon1-5, but not of SMN(D30N) exon1-5, was lowered, as detected by luciferase reporter activity (Fig. 6A). This decrease in interaction was also observed for the SMN-Gemin2 interaction; we again detected lower reporter activity for SMN(D44V) exon1-5 but not for SMN(D30N) exon1-5 (Fig. 6B). These results were confirmed by in vitro pull-down assay (Fig. 6, C and D). Similar results were observed even when using full-length SMN mutant proteins in these assays, although the decrease was less pronounced (supplemental Fig. S3).
The results described above suggest that the amino-terminal self-association of SMN(D44V) is unstable even in the presence of Gemin2. To examine this hypothesis, the exon 1 to exon 5 region of wild-type and mutant SMN proteins was subjected to an in vitro dissociation assay in the absence and presence of Gemin2 (Fig. 7). In the absence of Gemin2, amino-terminal SMN self-association decreased to 20-30% of the initial level for both wild-type and mutant SMNs after 60-min incubation (open symbols in Fig. 7). As expected, Gemin2 could not effectively stabilize amino-terminal self-association in SMN(D44V) (closed squares in Fig. 7), but it significantly (p < 0.01) stabilized self-association in wild-type SMN and SMN(D30N) (closed circles and triangles).
SMN(D44V) Is Defective in snRNP Assembly-It is difficult to explore the functional effect of patient-derived mutants by overexpression in cultured cells because of the presence of endogenous wild-type SMN protein. The activities of the mutant SMNs were instead explored in the absence of wild-type SMN by applying the method described by Shpargel and Matera (26). We generated HA-tagged SMN constructs containing synonymous point substitutions in the target region of the siRNA against wild-type SMN mRNA. Each mutation corresponding to SMN(D44V), SMN(D30N), and SMN(Y272C) was incorporated into the construct, followed by co-transfection into the cells with SMN siRNA. As shown in Fig. 8A, the level of endogenous wild-type SMN protein decreased in the siRNA-treated cells to 20% of that in the untreated and negative control siRNA-treated cells, and overexpression of the constructs was confirmed. We also confirmed that the siRNA treatment did not affect the amount of Sm protein in the cytoplasmic extracts. The cytoplasmic lysates of the transfected cells were then used in an snRNP assembly assay. The assembly activity decreased to 20% in cells where siRNA mediated the knockdown of endogenous wild-type SMN protein (Fig. 8A, siSMN) compared with untreated cells, and this effect was partially rescued by incorporating wild-type siRNA-insensitive SMN constructs (Fig. 8, B and C). The rescue of snRNP assembly activity by SMN(D44V) was significantly lower (p < 0.01) than that of HA-SMN and was comparable to that of SMN(Y272C), the other SMN mutant at the carboxyl-terminal self-association site. Conversely, SMN(D30N) could rescue assembly activity to a similar extent as the wild-type construct.
DISCUSSION
Herein, we report that Gemin2 plays an essential role in the stabilization of the amino-terminal SMN self-association of the SMN complex. Further, we found that SMN oligomerization in vivo and snRNP assembly activity in vitro were stabilized in the presence of Gemin2. It is likely that this Gemin2-dependent stabilization is mediated by a novel self-interaction, in which an SMN dimer and a Gemin2 dimer form a stable quaternary complex by associating with each other. In fact, we revealed that the mutant Gemin2 90-269, which showed a less stable Gemin2 self-association than the full-length Gemin2 protein (supplemental Fig. S1), mediates a weaker stabilization effect on amino-terminal SMN self-association (Fig. 4B). Such multiple interactions are considered to be advantageous for the stability of the complex. Ideally, the contribution of the identified interaction would be evaluated using Gemin2 deletion mutants lacking the self-association property. However, this is not easy, because the interaction domains of Gemin2 for SMN binding and for Gemin2 self-association are hard to segregate, which suggests that these associations occur in closely located regions of Gemin2 (Fig. 2). We also revealed that Gemin2 plays an important role in the stabilization of other components of the SMN complex, because Gemin2 siRNA treatment partially blocked co-immunoprecipitation of Gemin3, Gemin7, and SmB/B′ with the anti-SMN antibody (Fig. 5D). A possible explanation is that SMN oligomerization is a prerequisite for the stabilization of other components in the SMN complex. However, it is also possible that such components are stabilized in the SMN complex through unknown associations with Gemin2. So far, the other components of the SMN complex, Gemin3, -5, and -7, are connected with SMN by very simple interaction networks. It is conceivable that many of them are associated with each other in a far more complex manner to ensure the stability of the SMN complex. In this respect, it is intriguing that Gemin6, Gemin7, and Unrip form a stable cytoplasmic complex whose association with SMN requires Gemin8 (37).
Our results offer a more satisfactory model to explain oligomer formation of SMN proteins than previous models (35). Young et al. (10) reported a self-interaction at the exon 2b-encoded region of SMN and proposed an oligomer model in which two self-association sites in the SMN regions encoded by exons 2b and 6 were capable of forming larger oligomers. However, it was uncertain whether such oligomers are stable, because surface plasmon resonance data suggested that the self-interaction at the exon 2b-encoded region must be relatively weak when compared with the exon 6-encoded region. In addition to the SMN self-association sites that were previously identified, and confirmed in our study (Fig. 3), we found that Gemin2 self-associates (Fig. 1) and that Gemin2 has a stabilizing effect on amino-terminal SMN self-association (Fig. 4). Therefore, larger SMN protein oligomers could be formed by two independent stable self-associations of the SMN protein.

FIGURE 7 (legend fragment). This experiment was carried out at three times the volume used in the in vitro dissociation assay shown in Fig. 4 because of the reduced self-association activity of SMN(D44V) exon1-5. The assay was independently conducted three times, and the error bars represent the standard deviation.
Although this working hypothesis is consistent with the results obtained in this study, the effects of the other components of the SMN complex, Gemin3 to -8, are currently unclear. Further studies exploring the effects of the other Gemin proteins will be necessary to construct a more detailed model.
Our findings may also provide insight into why missense mutations of the SMN protein in SMA patients are rarely observed in the amino-terminal half and frequently in the exon 6-encoded region. Because SMN self-association in the carboxyl-terminal, exon 6-encoded region is very stable, and there is no evidence that other components associate through this region, mutations in this region may directly affect self-association and oligomer formation, resulting in SMA. Indeed, Paushkin et al. (21) showed that the sedimentation of the oligomerized SMN complex shifted to smaller species upon overexpression of SMN proteins with missense mutations in exon 6. On the other hand, Gemin2 self-association may well work in concert with SMN self-association in the exon 2a-encoded region, which would then result in stable amino-terminal SMN self-association and stable SMN oligomer formation. Because amino-terminal SMN self-association is not stable by itself, many of the missense mutations in this region may not critically affect self-association and oligomerization. So far, no SMA patients with mutations in the Gemin2 gene have been identified. It may be that Gemin2 is so critical for the formation of the SMN complex that mutations are lethal at an early stage. In this respect, it is interesting that Gemin2 is the most conserved SMN complex component (human-mouse, 94%), whereas the mean conservation rate for the other SMN components is 80%. We successfully showed that amino-terminal self-association and Gemin2 binding were decreased in SMN(D44V), but not in SMN(D30N), using both an in vivo mammalian two-hybrid assay and an in vitro pull-down assay. Conversely, Sun et al. failed to identify significant biochemical features of SMN(D44V) with GST pull-down assays, because in their experiments the mutation did not affect SMN self-association or the SMN-Gemin2 interaction (11). A possible explanation for the different results obtained by the present study and the work reported by Sun et al. is that Sun et al. used full-length SMN protein in the binding assay; the amino-terminal mutations are unlikely to affect the stability of SMN self-association mediated by the carboxyl-terminal, exon 6-encoded region. Consequently, the decrease in self-association in SMN(D44V) was less clearly detected when using the full-length protein than when using only the exon 1 to 5 region, as in our study (Fig. 6 and supplemental Fig. S3). It is also possible that the use of GST fusion proteins as pull-down drivers in the binding assay may affect the binding ability at the amino-terminal region of the target protein. In this regard, our in vitro pull-down assays are less likely to be affected by the properties of the tag, because we use in vitro biotinylated proteins instead of tagged proteins as the pull-down drivers, avoiding modification of the specified region in the driver proteins.

FIGURE 8. In vitro snRNP assembly assay for the SMN mutants. A, Western blotting of SMN, HA-SMN, and Sm proteins. The cytoplasmic extracts, prepared from untreated HeLa cells (WT) or HeLa cells transfected with negative control siRNA (siControl), SMN siRNA (siSMN), or SMN siRNA together with various siRNA-insensitive SMN constructs, were subjected to Western blotting using anti-SMN, anti-HA, and anti-Sm antibodies. B and C, the cytoplasmic extracts were incubated with in vitro synthesized 32P-labeled U1 RNA and immunoprecipitated using an anti-Sm antibody or a negative control antibody (anti-Gal4 antibody). Half of the precipitated 32P-labeled U1 RNA was separated using a 7 M urea-6% polyacrylamide gel and visualized by autoradiography (B), and the radioactivity of the other half was measured using a liquid scintillation counter (C). This experiment was independently conducted four times, and the error bars represent the standard deviation.
Recently, Shpargel and Matera (26) reported that the severity of SMA is roughly correlated with snRNP assembly activity for SMA-derived missense SMN mutations: five of six SMA type I (severe) alleles, including SMN(Y272C), showed decreased activity in the snRNP assembly assay, whereas for the two SMA type III (mild) alleles, including SMN(D30N), snRNP assembly functioned at a level comparable with the wild-type constructs. Our results for SMN(D30N) and SMN(Y272C) were consistent with this report (26). However, we were also able to show that snRNP assembly activity was lowered in the SMA type III allele SMN(D44V) as well as in the type I allele SMN(Y272C). This is the first report of decreased snRNP assembly activity associated with an SMA type III allele and an amino-terminal missense mutation. Together with the case of the SMA type I allele SMN(A111G), whose assembly activity functioned at a level comparable with the wild-type constructs (26), our results strongly indicate that the severity of SMA is not simply determined by the snRNP assembly activity of each allele in isolation. Other SMN properties, such as nuclear import, localization, cap hypermethylation, and binding activity with other proteins (38-42), should also be considered for a more complete understanding of the relationship between missense SMN mutations and disease severity. Likewise, it is necessary to consider whether the copy number of SMN2 correlates with SMA severity (43). In this regard, the mutant alleles SMN(D44V) and SMN(A111G) are expected to be good probes for the detection of other factors that are also responsible for SMA severity.
"year": 2007,
"sha1": "1747ef0aa869e36a674b211a04513efe7753af24",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/282/15/11122.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "0b9e16e078f8971e58c7e05d510b8a449cb6edef",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Reliability and Validity of the Geriatric Depression Scale in Italian Subjects with Parkinson's Disease
Introduction. The Geriatric Depression Scale (GDS) is commonly used to assess depressive symptoms, but its psychometric properties have never been examined in Italian people with Parkinson's disease (PD). The aim of this study was to study the reliability and validity of the Italian version of the GDS in a sample of PD patients. Methods. The GDS was administered to 74 patients with PD in order to study its internal consistency, test-retest reliability, construct validity, and discriminant validity. Results. The internal consistency of the GDS was excellent (α = 0.903), as was the test-retest reliability (ICC = 0.941 [95% CI: 0.886–0.970]). The GDS showed strong correlations with instruments related to depression (ρ = 0.880) and to PD (ρ = 0.712) and weak correlations with generic measurement instruments (−0.320 < ρ < −0.217). An area under the curve of 0.892 (95% CI 0.809–0.975) indicated a moderate capability to discriminate depressed from nondepressed patients, with a cutoff value between 15 and 16 points predicting depression (sensitivity = 87%; specificity = 82%). Conclusion. The GDS is a reliable and valid tool in a sample of Italian PD subjects; this scale can be used in clinical and research contexts.
Introduction
Parkinson disease (PD) is characterized by motor and nonmotor symptoms. Bradykinesia, tremor at rest, and rigidity are the cardinal motor manifestations of PD [1]. Nonmotor symptoms include gastrointestinal dysfunctions, sleep disorders, cognitive disorders, and neuropsychiatric disturbances. Depression has been found to be more frequent in PD patients than in age-matched healthy controls or in patients with other chronic medical conditions [2,3]. For example, major depression may be found in up to 20% of PD patients [4]. To measure the level of depression, it is crucial that clinicians and researchers have access to reliable and valid instruments. A recent systematic review of depression tools in PD patients recommended, for the screening and measurement of the degree of perceived depression in patients with PD, the use of the Hamilton Depression Inventory as a rating scale that takes into consideration the judgment of the clinician or the caregiver, and the Geriatric Depression Scale (GDS), which considers the patient's point of view [5]. The GDS [6], composed of 30 items, was developed to evaluate the level of depressive symptoms over the past week. It has been transculturally adapted into several languages [7][8][9], and it has proven to be reliable and valid in subjects with dementia [10][11][12][13], stroke [14][15][16][17], rheumatoid arthritis [18], and psychiatric disorders [19,20]. In PD, several studies showed that the GDS has good psychometric properties: a high internal consistency (Cronbach's alpha = 0.92) [21], an excellent test-retest reliability (intraclass correlation coefficient = 0.89 [95% CI 0.83-0.93]), and a minimal detectable change of 5.4 points [22]. Regarding validity, the GDS showed good correlations with the Beck Depression Inventory (rs = 0.62, p < 0.05) and with mood-related items of the Unified Parkinson's Disease Rating Scale (rs = 0.38, p < 0.05) [23], and moderate correlations with the 17-item Hamilton Depression Rating Scale (r = 0.54, p < 0.001) [24]. Recently, the GDS was used in an Italian sample of geriatric patients, and that study confirmed the good psychometric properties of the GDS [25]. As the measurement properties of an instrument are affected by the disease investigated and by contextual factors, for a reliable and valid use of the instrument in Italian subjects the GDS should also be validated in the target population to which the questionnaire will be administered. No study has assessed the psychometric properties of the GDS in Italian patients with PD. Therefore, the aim of this study was to assess the reliability and the validity of the GDS in a sample of Italian PD patients, using Classical Test Theory.
Subjects.
Seventy-four patients (older than 18 years) with clinically diagnosed PD were consecutively recruited as a convenience sample in the Rehabilitation Unit of San Giovanni Battista Hospital, Polyclinic Italia, and in the Department of Neurosciences, Sapienza University of Rome. Patients with cognitive impairment (Mini-Mental State Examination score <23 points) and problems with reading and understanding the Italian language were excluded. All subjects gave their informed consent [26,27] to participate in the study, and the research was conducted according to the principles of the Declaration of Helsinki.
Geriatric Depression Scale.
This scale assesses depressive symptoms [6]. The version used in this study was composed of 30 items that investigated different aspects of depression over the last week. Each item is rated with a dichotomous score (yes = 1; no = 0), and some items (item numbers 1, 5, 7, 9, 15, 19, 21, 27, 29, and 30) are reverse-scored (yes = 0; no = 1). The total score is obtained by adding the item scores, and it ranges from 0 (no depression) to 30 (maximum depression) points. The Italian version used in this study has been shown to be reliable and valid [25].
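As an illustration of the scoring rule just described, the following Python sketch computes a GDS total from dichotomous answers; it is a hypothetical helper, not software used in the study.

```python
# Illustrative scorer for the 30-item GDS as described above; the list of
# reverse-scored items follows the text. Responses map item number (1-30)
# to True for "yes" and False for "no".
REVERSE_ITEMS = {1, 5, 7, 9, 15, 19, 21, 27, 29, 30}

def gds_score(responses: dict[int, bool]) -> int:
    total = 0
    for item in range(1, 31):
        yes = responses[item]
        # reverse-scored items count "no" as 1; all others count "yes" as 1
        total += (not yes) if item in REVERSE_ITEMS else yes
    return total  # 0 (no depression) to 30 (maximum depression)

example = {i: (i % 3 == 0) for i in range(1, 31)}  # arbitrary answers
print(gds_score(example))
```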
Hospital Anxiety and Depression Scale.
This scale measures the level of depression and anxiety [28]. It is composed of 14 items divided into two subscales: 7 items investigate depressive symptoms, and the other 7 measure anxious symptoms. Subjects respond to each item on a four-level ordinal score (0 = no symptoms; 3 = maximum symptoms); therefore, the total score may vary between 0 and 21 points for each subscale. The Italian version of the scale was used in this study [29].
Parkinson Disease Questionnaire.
This questionnaire assesses the impact of parkinsonian symptoms on the life of these patients over the past month [30]. It contains 39 items that examine 8 domains through separately scored subscales: mobility (10 items), activities of daily living (6 items), emotional well-being (6 items), stigma (4 items), social support (3 items), cognition (4 items), communication (4 items), and bodily discomfort (3 items). A 5-level score is attributed to each item (0 = never; 1 = occasionally/rarely; 2 = sometimes; 3 = often; 4 = always). A total score ranging from 0 (indicating best health status) to 100 (indicating worst health status) was calculated by summing the item scores, both for the 8 subscores and for the total score. The Italian version used in this study was recently evaluated [31] and revealed good psychometric properties.
Short Form 36-Health Survey Questionnaire (SF-36).
This is a 36-item questionnaire measuring the patient's health status over the past four weeks [32]. The total score ranges from 0 to 100, with higher scores indicating a better condition. The Italian version is considered to be a valid and reliable tool [33].
Barthel Index.
This well-known test measures disability in the ADLs [34]. It is composed of 10 items including feeding, bathing, grooming, dressing, bowel and bladder control, toilet use, transfers (bed to chair and back), mobility, and stair climbing. Three ordinal-level scores are attributed to each item (0, 5, or 10; up to 15 points for the items regarding transfers and mobility) to assess whether the patient can perform the various activities independently, with assistance, or is totally dependent on others. The total score is generated by summing each item score, and it varies from 0 (total dependence) to 100 (total independence). The Italian version was administered in this study [35,36].
Procedures.
Four clinicians (three occupational therapists and one physical therapist) screened all patients for recruitment. Once the patients were enrolled, these clinicians collected demographic and clinical variables and administered the outcome measures to all patients. In order to study the test-retest reliability, the GDS was readministered after seven days. To assess the discriminant validity, a physician diagnosed depression in this sample. According to DSM-5, patients were diagnosed with depression if they had at least five depressive symptoms, including "depressed mood" and "loss of interest or pleasure," for at least two weeks [37].
Statistical Analysis.
Descriptive statistics were used to analyze the sample characteristics; in particular, mean ± standard deviation (SD), median with 25th and 75th percentiles, and frequency with percentage were calculated for interval, ordinal, and categorical data, respectively. The reliability of the GDS was assessed in terms of internal consistency and test-retest reliability. Internal consistency was determined by calculating Cronbach's alpha [38]: the closer the value to 1, the higher the internal consistency. Alpha was considered excellent if >0.9, good if >0.8, and acceptable if >0.7 [39]. Test-retest reliability was calculated with the intraclass correlation coefficient (ICC) and a 95% confidence interval (CI). ICC values greater than 0.75 are a minimum requirement for using the instrument in group measurements [40]; ICC values greater than 0.90 are considered essential for using the instrument in individual measurements [41]. The construct validity of the GDS was studied by calculating the Pearson correlation coefficient (ρ) between the GDS and the other administered instruments. The following ranges were considered in interpreting the results: ρ > 0.70 = strong correlation, 0.50 < ρ < 0.70 = moderate correlation, and ρ < 0.50 = weak correlation [42].
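To make the internal-consistency formula concrete, the following Python sketch computes Cronbach's alpha from a patients × items matrix; the response matrix is randomly generated (so alpha will be near zero), and the study itself used SPSS rather than this code.

```python
# Minimal sketch of Cronbach's alpha on a hypothetical response matrix
# (rows = 74 patients, columns = the 30 dichotomous GDS items):
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
import numpy as np

rng = np.random.default_rng(0)
items = rng.integers(0, 2, size=(74, 30)).astype(float)  # fake 0/1 answers

k = items.shape[1]
item_var = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_var / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")  # near 0 for independent fake items
```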
In order to study the discriminant validity, the receiver operating characteristic (ROC) curve was created, and the area under the curve (AUC) was calculated. The closer the AUC value is to 1.0, the greater the instrument's ability to distinguish depressed from nondepressed patients. An AUC higher than 0.75 confers a moderate discriminative validity on the tool, while an excellent one is demonstrated by a value ≥0.90.
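The following Python sketch illustrates how such a ROC analysis can be run and how a cutoff maximizing sensitivity + specificity (the Youden index) can be picked; the GDS scores and depression labels below are invented, and the study's actual analysis was performed in SPSS.

```python
# Sketch of the discriminant-validity analysis: ROC curve, AUC, and the
# cutoff maximizing sensitivity + specificity. All data here are fake.
import numpy as np
from sklearn.metrics import roc_curve, auc

gds = np.array([4, 8, 10, 12, 14, 15, 16, 17, 19, 22, 25, 28])  # fake scores
depressed = np.array([0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1])      # fake labels

fpr, tpr, thresholds = roc_curve(depressed, gds)
print(f"AUC = {auc(fpr, tpr):.3f}")

youden = tpr - fpr                 # Youden index at each threshold
best = np.argmax(youden)
print(f"Best cutoff: score >= {thresholds[best]} "
      f"(sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f})")
```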
For all statistical analyses, the α value was set at 0.05, and SPSS statistical software program, version 18.0 for Windows (SPSS Inc., Chicago, IL, USA), was used.
Sample Characteristics.
Seventy-four patients (44 males; 30 females) with PD were included in this study. The demographic and clinical characteristics of the patients studied are reported in Table 1.
Internal Consistency.
The internal consistency for the total GDS score was excellent (α = 0.903).
Test-Retest Reliability.
Test-retest reliability was assessed in a subsample of 35 patients. Excellent reliability was observed for the GDS total score (ICC = 0.941 [95% CI: 0.886-0.970]).

Validity. Pearson's correlation coefficient values are reported in Table 2. For the comparisons between the GDS and the other instruments related to depression (HADS) and to PD (PDQ-39), the Pearson coefficient ranged between 0.712 and 0.880, indicating a strong correlation. On the other hand, for the comparisons between the GDS and the generic measurement instruments (Barthel Index and SF-36), the correlation coefficient varied from −0.320 to −0.217, showing a weak correlation.

Regarding the discriminant validity, the AUC showed a value of 0.892 (95% CI 0.809-0.975), indicating a moderate capability to discriminate depressed from nondepressed patients. The cutoff with the best sensitivity and specificity for predicting depression lies between 15 and 16 points (sensitivity = 87%; specificity = 82%) (Figure 1).
Discussion
The use of a reliable and valid instrument is essential in clinical practice and when measuring specific outcomes [43]. Several questionnaires are available to measure depression in patients with PD [5]. The psychometric properties of the GDS have been extensively studied in different pathologies and in different settings. To our knowledge, however, no study has assessed the psychometric properties of the GDS in Italian patients with PD. Studying the measurement properties in the context in which the instrument will be administered is crucial because these properties can be influenced by various contextual, social, and environmental factors [44]. The results of our study show that the GDS is a reliable and valid instrument in Italian patients with PD. The internal consistency, assessed by calculating Cronbach's alpha (equal to 0.903), was excellent. The results obtained in the PD patients we studied are similar to those obtained in patients with different clinical conditions. For example, Cronbach's alpha was found to be 0.876 in a study of 294 geriatric patients [45] and 0.90 in 888 depressed and nondepressed elderly subjects [46].
We demonstrated an excellent test-retest reliability of the questionnaire (ICC = 0.941). The results obtained in our sample of PD patients are similar to those found in a cohort of 75 Chinese subjects with PD (ICC = 0.89 [95% CI 0.83-0.93]) [22]. The construct validity was investigated through the correlations between the GDS and other validated questionnaires. In particular, a strong construct validity was demonstrated by the correlations with the HADS (both anxiety and depression subscales) and the PDQ-39. On the other hand, a weak correlation was found when the GDS was compared with the Barthel Index and the SF-36. The strong correlations between the GDS and the HADS can be explained by the fact that these two scales intend to measure the same variable, that is, depression; these results are in line with previous studies that obtained similar correlations with questionnaires related to depression: the Beck Depression Inventory (rs = 0.62, p < 0.05) [23] and the 17-item Hamilton Depression Rating Scale (r = 0.54, p < 0.001) [24]. Conversely, the low correlations found with the SF-36 and the Barthel Index may be explained by the fact that both are generic instruments. Finally, the discriminant validity was studied through the ROC curve in order to identify the cutoff value with the best sensitivity and specificity for distinguishing depressed from nondepressed patients. The cutoff value of 15-16 points showed a sensitivity of 87% and a specificity of 82%. Comparing our results with those obtained in other studies is not easy considering the different patient populations and the different settings; for example, the study by McDonald et al. showed a cutoff value of 9-10 points [24] and the study by Ertan et al. [7] a cutoff value of 13-14. This study has limitations that need to be taken into account. The design of the study did not allow the assessment of some fundamental psychometric properties such as content validity and responsiveness.
In conclusion, this study shows that the GDS can be used in clinical practice as a valid measurement instrument to quantify depression in patients with PD.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Consent
Informed consent was obtained from all individual participants included in the study.
Disclosure
All authors have no commercial associations or disclosures that may pose or create a conflict of interest with the information presented within this manuscript.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"year": 2018,
"sha1": "76607097dfdba0b8aa5f0732b61e27cff3e07767",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/pd/2018/7347859.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2a27421a648f8a05b37031dd94c30afb9d17924a",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
iTRAQ-Based Quantitative Proteomics Analysis of Sprague-Dawley Rat Liver Reveals Perfluorooctanoic Acid-Induced Urea Metabolism Dysfunction
Perfluorooctanoic acid (PFOA) is a typical C8 representative compound of the perfluoroalkyl and polyfluoroalkyl substances (PFASs), widely used in industrial and domestic products. It is a persistent organic pollutant found in the environment as well as in the tissues of humans and wildlife. Despite emerging scientific and public interest, the precise mechanisms of PFOA toxicity remain unclear. In this study, male rats were exposed to 1.25, 5, and 20 mg PFOA/kg body weight/day for 14 days. Urine samples were also collected and monitored by raising the rats in metabolic cages. In vivo results demonstrate that PFOA exposure induces significant hepatocellular hypertrophy and reduced urea metabolism. iTRAQ-based quantitative proteomics analysis of Sprague-Dawley (SD) rat livers identified 3,327 non-redundant proteins, of which 112 proteins were significantly upregulated and 80 proteins were downregulated. Gene Ontology analysis revealed that these proteins are primarily involved in cellular, metabolic, and single-organism processes. Among them, eight proteins (ACOX1, ACOX2, ACOX3, ACSL1, EHHADH, GOT2, MTOR and ACAA1) were related to the oxidation of fatty acids, and two proteins (ASS1 and CPS1) were found to be associated with urea cycle disorder. The downregulation of the urea synthesis proteins ASS1 and CPS1 after exposure to PFOA was then confirmed through qPCR and western blot analysis. Together, these data demonstrate that PFOA exposure directly influences urea metabolism and identify CPS1 as a potential regulatory target.
Introduction
Perfluoroalkyl and polyfluoroalkyl substances (PFASs) are a class of synthetic chemicals that are increasingly recognized as a new type of persistent organic pollutants (POPs). These compounds contain high-energy C-F covalent bonds, in which all hydrogen atoms may be replaced by fluorine atoms. The archetypal PFAS, perfluorooctanoic acid (PFOA), is a perfluorocarboxylic acid with 8 C atoms. The fluorine-containing special structure of PFOA is responsible for its hydrophobicity, oleophobicity, and extremely low surface tension. As such, PFOA is widely used in various commercial and industrial settings, including the manufacture of textiles, packaging materials, surfactants, pharmaceuticals, and fire-extinguishing foam. In recent years, an increasing number of studies have revealed the toxic effects of PFOA accumulation in organisms (Andersen et al. 2008; Kennedy et al. 2004; Lau et al. 2007; Lau et al. 2004). In general, PFOA accumulation interferes with cellular lipid metabolism, leading to carcinogenicity, liver toxicity, developmental toxicity, immunotoxicity, endocrine interference, and neurotoxicity. Previous reports from our lab have demonstrated that PFDoA exposure can restrict amino acid metabolism in rats and thereby influence the synthesis of urea (Liu et al. 2016). In the following study, iTRAQ-based quantitative proteomics was utilized to screen for global proteomic profile alterations in rat livers after exposure to PFOA. We hereby demonstrate the differential expression of ASL1, ASS1 and CPS1 and identify their role as PFOA-sensitive genes related to the urea cycle. These findings clarify the potential mechanisms responsible for PFOA toxicity in vivo and provide reference targets for future intervention and treatment of PFOA accumulation in humans.

Materials And Methods

Male rats were purchased from a company (Nanjing, China) at 6-8 weeks of age. The rats were maintained in an SPF-grade facility on a 12-h light/12-h dark cycle and were allowed ad libitum access to a standard diet and pure water. The ambient temperature in the animal room was 23 ± 1℃ and the relative humidity was 60 ± 5%. After one week of adaptation, the rats were randomly separated into four groups of 10. The treatment rats were given doses of 1.25, 5, and 20 mg PFOA/kg body weight/day by oral gavage for 14 consecutive days.
The control animals were also treated with Milli-Q water, accordingly. At the end of the experiment, 7 rats from each group were weighed and anesthetized with sodium pentobarbital (45 mg/kg). Afterwards, blood was drawn from the inferior caval vein, and liver tissues were rapidly collected, weighed, rinsed with PBS, divided into small aliquots, flash frozen in liquid nitrogen, and stored at -80℃ until further analysis. The remaining three rats from each group were used for clinicopathologic analysis. All procedures were performed in accordance with the Ethics Committee of Bengbu Medical College, Anhui Province.

Urine collection and analyses

10 rats were randomly divided into 2 groups: a high-dosage group and a normal control group. Rats were raised in metabolic cages to collect 24-h urine samples for 14 consecutive days. Urea concentration was determined using commercially available kits (Nanjing Jianchen Bioengineering Institute, China).

Histopathological Examination

Three livers from each group were fixed in freshly prepared paraformaldehyde (3.7% in DPBS) and processed sequentially in ethanol, xylene, and paraffin. Tissues were then embedded in paraffin, sectioned (5 µm), and stained with hematoxylin and eosin (HE stains).

Serum Biochemistry Analysis

Serum biochemical parameters, including creatinine (CR), were measured using a cobas® 8000 modular analyser series (F. Hoffmann-La Roche Ltd).

Protein Preparation, iTRAQ Labeling

Three individual liquid-nitrogen-frozen livers from normal control rats and three PFOA-treated livers from the 20 mg PFOA/kg/d group were randomly selected for iTRAQ-based mass spectrometry analysis. Proteins were extracted by dissolving each liver sample in 300 µL of ice-cold 0.1 M Na2CO3 and 10 mM sodium orthovanadate (pH 11) supplemented with protease inhibitor (Roche Complete EDTA Free) and phosphatase inhibitor (Roche), sonicated for 3 × 10 seconds, and stored on ice. The bicinchoninic acid assay (BCA assay) was used to quantify the proteins, and 200 µg of protein was mixed with urea/thiourea denaturation buffer to a final concentration of 6 M urea, 2 M thiourea. All protein samples were trypsinized (mass spec grade, Promega). The tryptic peptides in the three biological samples from the control and PFOA-treated groups were labeled with iTRAQ reagents (isobaric tags 115, 116, and 117 for the control; 118, 119 and 121 for the treated group) (iTRAQ Reagent-8 Plex Multiplex Kit, AB Sciex). The iTRAQ labeling was performed according to the manufacturer's protocol.

LC-MS/MS analysis

Mass spectrometric (MS) analysis was performed using an Orbitrap Fusion™ Lumos™ Tribrid™ mass spectrometer (Thermo Scientific, USA) coupled with an EASY-nLC HPLC system (Thermo Scientific, USA). The iTRAQ-labeled peptides were loaded onto a C18 reversed-phase column (3 µm C18 resin, 75 µm × 15 cm) and separated on an analytical column (5 µm C18 resin, 150 µm × 2 cm; GmbH, Ammerbuch, Germany) using mobile phase Buffer A (0.5% formic acid/H2O) and Buffer B (0.5% FA/ACN) at a flow rate of 300 nL/min over a 150-min gradient. Spectra were acquired in Data Dependent Acquisition (DDA) mode.

Database search for peptide and protein identification

The raw mass data were analyzed using Thermo Proteome Discoverer version 1.4 (ver. 1.4.0.288; Thermo Fisher Scientific), with a false discovery rate (FDR) < 1% and an expected cutoff or ion score < 5% (with 95% confidence), searching against the Uniprot Rat Complete Proteome database.
The following options were used to identify the proteins: peptide mass tolerance = ±10 ppm, MS/MS tolerance = 0.6 Da, enzyme = trypsin, missed cleavages = 2, fixed modifications: iTRAQ 8plex (K) and iTRAQ 8plex (N-term), variable modification: oxidation (M), database pattern = decoy.
GO annotation and KEGG pathway analysis
To analyze the proteins differentially expressed in the PFOA-treated group compared with the normal control group, Gene Ontology (GO) annotation of the identified proteins was performed by searching the GO Web site (http://www.geneontology.org) to catalog the molecular functions, cellular components, and biological processes. Protein interactions and biological pathways (KEGG) were determined using the ResNet database (version 6.5, Ingenuity Systems, Inc.) to better understand these differentially expressed proteins in relation to the published literature.

RNA Isolation and Quantitative real-time PCR

Rat livers were used for RNA extraction and subsequent qPCR assays. Total RNA of the liver samples was isolated using Trizol reagent (Ambion, Thermo Fisher Scientific, USA), and the isolation was performed according to the manufacturer's instructions. Quantitative real-time PCRs (qPCR) were performed on a QuantStudio 3 Real-Time PCR System (Thermo Scientific, USA) using a SYBR Green Real Master Mix Rox (Tiangen, China). The housekeeping gene GAPDH was used as an internal control. The primer pairs are listed in supplementary table S1. The relative quantification of target genes was calculated based on the 2^−ΔΔCT method.

Western blotting authentication

Protein extracts from the control and PFOA-exposure group liver tissues were used for western blot analysis. Briefly, total proteins from the liver of each rat were extracted with RIPA buffer (Thermo Scientific, USA) containing 1 mM PMSF (Sigma-Aldrich, USA) and 1% phosphatase inhibitor (Sigma-Aldrich, USA). The protein concentration was determined using a BCA kit (cwbiotech, China). Approximately 40 µg of total protein was loaded on 10% sodium dodecyl sulfate (SDS)-polyacrylamide gels and then transferred to polyvinylidene fluoride (PVDF) membranes (Amersham Biosciences, Piscataway, NJ, USA). The blotted membranes were blocked in blocking buffer (TBST) for 1 h and then incubated with primary antibodies dissolved in blocking buffer on a shaker overnight at 4°C. (The primary antibodies are listed in supplementary table S2.) After washing with TBST 3 times, the membranes were incubated with fluorescent-conjugated anti-rabbit IgG as the secondary antibody for 1 h at room temperature. The immunoreactive bands were photographed and analyzed with a Gel Doc XR+ Gel Documentation System (BIO-RAD, USA).

Statistical analyses

Data were analyzed using SPSS for Windows 17.0 Software (SPSS, Inc., Chicago, IL) and presented as means with standard errors (mean ± SE). Differences between the control and treatment groups were determined using one-way analysis of variance (ANOVA). A P value of < 0.05 was considered statistically significant. OriginPro 2018 software was used to develop graphs (OriginLab Corporation, USA).
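As a hedged illustration of the group comparison described above, the snippet below runs a one-way ANOVA across the control and three dose groups on invented serum values; the study's analysis was performed in SPSS, not with this code.

```python
# Sketch of the one-way ANOVA described above, on hypothetical serum
# measurements for the four groups (n = 7 per group, as in the study).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(5.0, 0.5, 7)  # hypothetical serum urea, mmol/L
low = rng.normal(5.2, 0.5, 7)      # 1.25 mg PFOA/kg/d
mid = rng.normal(5.6, 0.5, 7)      # 5 mg PFOA/kg/d
high = rng.normal(6.5, 0.5, 7)     # 20 mg PFOA/kg/d

f_stat, p_value = stats.f_oneway(control, low, mid, high)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 taken as significant
```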
Results
PFOA causes liver damage and influences urea synthesis

After 14 days, we find that PFOA exposure may cause body weight loss and significant liver swelling (Fig. 1A-B). Both absolute and relative liver weights were significantly increased by PFOA exposure. HE-stained liver slices from PFOA-exposed rats also show significant liver swelling (Fig. 1C). In order to quantify the cell size, we counted the number of cells per unit area. The results show that the number of nuclei per area is significantly reduced following PFOA exposure (Fig. 1D); furthermore, this phenomenon has a clear dose-effect relationship with PFOA concentration.
We also monitored the effects of PFOA on the metabolism of the rats. Fourteen days of PFOA exposure had no significant influence on daily food intake (Fig. 2A). However, PFOA exposure had a significant effect on urea metabolism: rats exposed to PFOA had a significantly lower urea concentration in urine compared with normal control rats (Fig. 2B).

PFOA has effects on sera biochemical parameters

To investigate the effect of PFOA on urea metabolism, we assayed 19 biochemical indexes in rat serum using a cobas® 8000 modular analyser series automatic biochemical analyzer. The results show that 8 indexes had significant changes compared with the normal control group (Table 1). While the urea content of urine was significantly decreased in PFOA-treated rats compared to normal rats, the level of urea in the serum of the 20 mg/kg/d treatment group was significantly increased.
Differentially Expressed Protein Identification and Relative Quantification by iTRAQ Analysis
Three individual samples each from the control and the 20 mg PFOA/kg/d group were included in the iTRAQ experiment. The MS/MS analysis identified a total of 25,506 unique spectra matched to specific peptides. Proteome Discoverer version 2.1 identified a total of 8,369 unique peptides from 2,868 proteins.

Bioinformatics Analysis of the Differentially Expressed Proteins Induced by PFOA

Heatmapping, volcano plot analysis, and Venn diagram packaging were used to explore the proteins differentially expressed in the PFOA-treated group compared with the normal control (Fig. 3A, 3B, 3C). Among the 3,327 non-redundant proteins, 112 proteins were significantly upregulated and 80 proteins were downregulated. Significantly changed proteins are shown in the volcano plot, where the cutoff for log2(fold change) was set at 1 and the cutoff P value was 0.05 (Fig. 3B). Among the differentially expressed proteins, the upregulated proteins are listed in Table 2 and supplementary table S4, while the downregulated proteins are listed in Table 3.
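The volcano-plot selection rule stated above (|log2(fold change)| ≥ 1 and P < 0.05) can be expressed compactly; the following Python sketch applies it to a made-up three-protein table standing in for the iTRAQ output.

```python
# Sketch of the differential-expression filter described above. The table
# is a hypothetical stand-in for the quantified iTRAQ protein list.
import pandas as pd

df = pd.DataFrame({
    "protein": ["CPS1", "ACOX1", "ALB"],
    "log2_fc": [-1.6, 2.1, 0.2],  # hypothetical treated/control log2 ratios
    "p_value": [0.003, 0.001, 0.40],
})

# keep proteins with |log2 FC| >= 1 and p < 0.05, then label direction
dep = df[(df["log2_fc"].abs() >= 1) & (df["p_value"] < 0.05)].copy()
dep["direction"] = dep["log2_fc"].apply(lambda x: "up" if x > 0 else "down")
print(dep)
```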
To further characterize these differentially expressed proteins, we performed GO function annotation analysis via the Gene Ontology (GO) knowledgebase (http://geneontology.org/). The results show that the upregulated and downregulated proteins are mainly involved in three biological processes (cellular processes, metabolic processes, and single-organism processes) and that these processes are localized primarily within the cellular component. When classifying the differential proteins by molecular function, we find that they are primarily associated with binding and catalytic activity (Fig. 4A, B).
KEGG pathway analysis (http://www.kegg.jp/kegg/pathway.html) was also used to determine the involvement of differentially expressed proteins in metabolic and cell signaling pathways. The upregulated proteins were primarily involved in peroxisome, PPAR signaling pathway, fatty acid degradation and fatty acid metabolism, while the downregulated proteins were involved in chemical carcinogenesis, biosynthesis of amino acids and drug metabolism (Fig. 5).

Pathway analysis of differentially expressed proteins identified in the rat livers

Utilizing Ingenuity Pathway Analysis software, eight proteins (ACOX1, ACOX2, ACOX3, ACSL1, EHHADH, GOT2, MTOR and ACAA1) were found to be related to the oxidation of fatty acids (Fig. 6A). Two proteins (ASS1 and CPS1) were found to be associated with urea cycle disorder (Fig. 6B).
Effects of PFOA on urea synthesis related genes
To investigate the toxic effects of PFOA on urea synthesis related genes, we surveyed the transcription levels of three genes (ASL, ASS1 and CPS1) encoding key enzymes of the urea cycle using qRT-PCR. Compared with the control group, the transcriptional level of ASL remained unchanged (Fig. 7A); however, the mRNA transcriptional levels of ASS1 and CPS1 were significantly downregulated in the PFOA-exposed groups in a dose-dependent manner (Fig. 7B,C). These results are consistent with the proteomic results.

ASS1 and CPS1 are significantly reduced by PFOA exposure

Expression levels of ASS1 and CPS1 were further verified via western blot. Results show that the expression levels of ASS1 and CPS1 are significantly downregulated in the PFOA-exposed groups in a dose-dependent manner (Fig. 8A, B). These results are also consistent with the proteomic results.
Discussion
In the present study, the physiological effects of PFOA exposure and its role in liver toxicity were investigated in rat models. We hereby demonstrate that PFOA exposure produces significant body weight loss and liver swelling after 14 days of exposure. These results are consistent with previous studies of PFOA exposure experiments in rodents (Lau et al. 2007; Starkov and Wallace 2002). Furthermore, we find that PFOA exposure has a significant effect on urea metabolism. Rats exposed to PFOA have reduced urea concentration in urine compared with normal control rats. On the other hand, PFOA-exposed rats presented with high urea concentration in the sera. A high urea content in serum rather than in urine may suggest that PFOA exposure either decreases the ability of the liver to metabolize urea, or that urea leaks into the blood stream due to hepatocyte damage.
Investigation of sera biochemistry reveals that levels of ALT, ALP and UREA increased significantly after PFOA exposure. The increased levels of ALT and ALP imply that PFOA exposure contributes to liver damage and metabolic dysfunction in rats. Levels of TG and TC were significantly decreased in the serum of the treatment group, suggesting a reduction in metabolic processes. Additionally, the levels of HDL-C and LDL-C were also decreased. A search of the KEGG (Kyoto Encyclopedia of Genes and Genomes) and MetaboAnalyst metabolic pathway databases found that these indicators are involved in bile acid metabolic pathways and steroid and steroid hormone synthesis pathways.
iTRAQ-based quantitative proteomics was utilized to define the proteomic changes in rat livers after 14 days of PFOA exposure. In total, 2,868 proteins were identified by MS, among which 112 proteins were significantly upregulated and 80 proteins were downregulated. Two enzymes identified through the quantitative proteomics analysis, ASS1 and CPS1, were found to be closely related to urea metabolism. The differential expression of ASS1 and CPS1 was then confirmed in western blotting experiments.
Confirming the downregulation of the enzymes involved in urea synthesis as a result of PFOA exposure provides potential targets for future intervention and treatment of PFOA toxicity.
Additionally, the quantitative proteomics experiments also identified 8 differentially expressed proteins (ACOX1, ACOX2, ACOX3, ACSL1, EHHADH, GOT2, MTOR and ACAA1), all related to the oxidation of fatty acids. ACOX1, ACOX2 and ACOX3 are enzymes related to the fatty acid beta-oxidation pathway, which catalyzes the desaturation of acyl-CoAs to 2-trans-enoyl-CoAs. ACSL1 is an enzyme responsible for the conversion of free long-chain fatty acids into fatty acyl-CoA esters, and thereby plays a key role in lipid biosynthesis and fatty acid degradation. EHHADH is a bifunctional enzyme that is one of the four enzymes of the peroxisomal beta-oxidation pathway. ACAA1 is a protein involved in the beta-oxidation system of the peroxisomes. The upregulation of these proteins in the livers of PFOA-exposed rats implies an increase in liver fatty acid oxidation. This result is consistent with our finding of low levels of TG present in rat liver and sera. Previous studies have also suggested that PFOA exposure may cause accelerated fatty acid oxidation (Chen et al. 2020; Kudo et al. 2006; Yu et al. 2016). In our study, the urea-cycle enzymes ASS1 and CPS1 were also downregulated after PFOA exposure, implying that urea synthesis is decreased in the liver. Therefore, PFOA exposure accelerated the β-oxidation of fatty acids in the liver of rats, and at the same time inhibited the synthesis of urea in the liver.
Conclusion
In summary, after 14 days of PFOA exposure, rat livers displayed significant liver swelling and aberrant levels of TG, TC, HDL-C, LDL-C and urea. iTRAQ-based quantitative proteomics revealed that the deregulated proteins ACOX1, ACOX2, ACOX3, ACSL1, EHHADH, GOT2, MTOR and ACAA1 are all related to the oxidation of fatty acids, while ASS1 and CPS1 are associated with urea cycle disorder. Overall, this study provides insight into specific mechanisms of hepatotoxicity as a result of PFOA exposure.

Data availability

All data generated or analyzed during this study are included in this published article; Supplementary tables S1, S2, S3 and S4 are available from Springer Link.

Compliance with ethical standards

Conflicts of interest The authors declare no conflict of interest.
Ethical approval All authors declared that they had no known competing financial interests or personal relationships that could have appeared to affect the work reported in this article. All authors followed the ethical responsibilities of this journal.

Consent to participate and publish All authors participated and approved the final manuscript to be published.

*p < 0.05, **p < 0.01

Table 2. List of upregulated proteins (higher than 2-fold) identified by iTRAQ in rat livers after 20 mg/kg/day PFOA exposure for 14 days.

Figure 1. PFOA exposure can cause body weight loss and significant liver swelling. A. Body weight gain during PFOA exposure for 14 days in each group. B. Organ index of liver (relative liver weight) was significantly increased by PFOA exposure. C. HE-stained liver slices from PFOA-exposed rats compared with normal control (100x and 400x original magnification). D. Nuclei number per unit area. Data points represent individual replicates (C: control; L: 1.25 mg/kg/d; M: 5 mg/kg/d; H: 20 mg/kg/d). Mean ± SEM; n = 10; *p < 0.05; **p < 0.01 (control group vs. PFOA treated groups).

Figure 7. Quantitative RT-PCR analysis of rat liver mRNA transcription levels of control and PFOA-treated groups at various concentrations. Mean ± SEM; n = 6; *p < 0.05; **p < 0.01 (control group vs. PFOA treated groups). Asl: Argininosuccinate Lyase; Ass1: Argininosuccinate Synthase 1; Cps1: Carbamoyl-Phosphate Synthase 1.
Figure 8
Western blot analysis of rat liver protein expression levels of control and PFOA-treated groups at various concentrations. A. Protein levels of CPS1 and ASS1 in rat livers after PFOA treatment. Protein intensities were normalized to the corresponding internal reference protein GAPDH level. B. Results from densitometry analysis of the western blots in A. Mean ± SEM; n = 6; *p < 0.05; **p < 0.01 (control group vs. PFOA treated groups). | 2021-06-22T17:55:34.904Z | 2021-04-27T00:00:00.000 | {
"year": 2021,
"sha1": "be2e1c17dcd6963d994325b479044da9e68a18ee",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-364795/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "a998a5d0953896238c38632bc245fce07dbdcf9c",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
13861553 | pes2o/s2orc | v3-fos-license | New broad 8Be nuclear resonances
Energies, total and partial widths, and reduced width amplitudes of 8Be resonances up to an excitation energy of 26 MeV are extracted from a coupled channel analysis of experimental data. The presence of an extremely broad J^pi = 2^+ ``intruder'' resonance is confirmed, while a new 1^+ and very broad 4^+ resonance are discovered. A previously known 22 MeV 2^+ resonance is likely resolved into two resonances. The experimental J^pi T = 3^(+)? resonance at 22 MeV is determined to be 3^-0, and the experimental 1^-? (at 19 MeV) and 4^-? resonances to be isospin 0.
Introduction
What are the properties of the resonances of 8 Be? This question is most comprehensively answered by a global analysis of all experimental data based on the best reaction theory available, for example R-matrix theory. Resonance structure tends to be based on single experiments, most recently compiled by TUNL [1]. In contrast, the results of a coupled channel analysis are constrained by all of the data simultaneously. Table 1 contains a complete list of the data in the analysis. Substantial data are entered for the 4 He(α, α 0 ) and 7 Li(p, p 0 ) reactions, and the least data are entered for the 4 He(α, p 0 ), 4 He(α, d 0 ) and 6 Li(d, d 0 ) reactions [8]. The maximum excitation energy above the 8 Be ground state is 25 − 26 MeV for all reactions except 4 He(α, α 0 ) and 7 Be(n, p 0 ). In the 4 He(α, α 0 ) reaction, data above the maximum α laboratory energy for which data are entered (38.4 MeV), and below the limit of this analysis, are only available as phase shifts [9], and have not been incorporated. For the 7 Be(n, p 0 ) reaction no data above the near-threshold data entered are found below the maximum excitation energy of this analysis. Further details of the data and cross-section fits are available [8,10].
The excitation energies of the thresholds of the various analyzed channels, with respect to the unstable 8 Be ground state, are −0.09 (α 4 He), 17.26 (p 7 Li), 18.90 (n 7 Be) and 22.28 MeV (d 6 Li) [1]. The two-body channels p 7 Li * , n 7 Be * and d 6 Li * , involving resonances less than 100 keV wide, are neglected. These could reasonably be included in an R-matrix analysis.
All the channels included are strongly constrained by unitarity (via the R-matrix formalism) and, as explained in the next section, isospin symmetry (charge independence). The channel radii are fixed as follows based on earlier R-matrix analyses: α 4 He (4.0 fm), p 7 Li and n 7 Be (3.0 fm) and d 6 Li (6.5 fm). The fit is insensitive to variation in the d 6 Li radius [8]. The orbital angular momenta included between the two scattered nuclei are: α 4 He (S-, D-, G-, I-and L-waves), p 7 Li and n 7 Be (S-, P-, D-and F-waves) and d 6 Li (S-, P-and D-waves).
The inclusion of the highest wave for each channel did not seem to change the qualitative features of the fit, indicating that a sufficient number of waves has been used.
Procedure
The Kapur-Peierls expression for the S-matrix at real energies E for channels c′ and c is (Eq. 28 of Ref. [11])

$$S_{c'c}(E) = \Omega_{c'}\,\Omega_c \left[ \delta_{c'c} + i \sum_{\mu} \frac{\rho_{\mu c'}(E)\,\rho_{\mu c}(E)}{E_{\mu}(E) - E} \right], \qquad \Omega_c = \left[ \frac{I_c(a_c,k_c)}{O_c(a_c,k_c)} \right]^{1/2}, \quad (1)$$

with ρ_{μc}(E) = (2 k_c a_c)^{1/2} G_{μc}(E)/O_c(a_c, k_c). Here the incoming and outgoing wave functions I and O are functions of E through the wave number k. In principle the S-matrix is independent of the channel radii a. The complex functions E_μ(E) and G_{μc}(E) are determined by the R-matrix fit (see below, and also Ref. [11]).
Eq. 1 can be extended to complex E, and the S-matrix remains independent of a. The poles of the S-matrix then occur at complex energies E_0 satisfying E_μ(E_0) = E_0; E_x ≡ Re[E_0] is the resonance excitation energy and Γ ≡ −2 Im[E_0] is the resonance total width. The partial width Γ_c ≡ |ρ_{μc}|^2 = 2 |k_{0c}| a_c |G_{μc}(E_0)/O_c(a_c, k_{0c})|^2 is evaluated at the pole in terms of the reduced width amplitude g_c ≡ |G_{μc}(E_0)|, and is related to the residue at the pole (see Eq. 1).
The quantities E x , Γ and Γ c are independent of a. Contrary to physical intuition, the sum of Γ c for kinematically open channels is not equal to Γ. It should be cautioned that E x , Γ and Γ c all depend on how the extension to complex E is done, and are accordingly quantities that cannot be measured experimentally. However, for narrow resonances where E µ (E) is almost real, E x , Γ and Γ c respectively collapse to the usual notions of excitation energy, width and partial width, which can be measured experimentally.
The method of calculation of the S-matrix poles and residues in terms of the R-matrix parameters is briefly summarized from the more complete discussion [2]. To obtain the S-matrix pole positions from the real R-matrix eigenenergies E_λ and the real reduced width amplitudes γ_{λc} for the real boundary conditions B_c (fixed in this analysis), as defined in Ref. [72], a complex energy E_0 is found such that at least one eigenvalue of the complex "energy-level" matrix (p. 294 of Ref. [72]),

$$A_{\lambda\lambda'}(E) = E_{\lambda}\,\delta_{\lambda\lambda'} - \sum_{c} \gamma_{\lambda c}\,\gamma_{\lambda' c}\left[L_c(E) - B_c\right], \quad (2)$$

evaluated at E = E_0, is the same as E_0. Here the outgoing-wave logarithmic derivatives L are defined in terms of the outgoing wave functions O in the usual way (Eq. 4.4, p. 271 of Ref. [72]), and are functions of E through the wave number k. The residue at the pole, iρ_{μc'}ρ_{μc}, has already been written in terms of the function G_{μc}(E_0) in Eq. 1. This function can be calculated from the R-matrix parameters by using Eq. 4 of Ref. [2]. Although this function and the energy-level matrix (Eq. 2) are defined for real energies, extension to complex E is done by simply using the functional form of these expressions when working with complex energies. In this way both the S-matrix pole E_0 and the function G_{μc}(E_0), needed to calculate the excitation energy, (partial) width and reduced width amplitude, are defined in terms of the R-matrix parameters.
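A toy numerical sketch of this pole search follows. It assumes a hypothetical two-level, one-channel system with a crude s-wave outgoing-wave logarithmic derivative, and only illustrates the fixed-point condition (an eigenvalue of the level matrix equal to E_0); it is not the production EDA procedure.

```python
import numpy as np
from scipy.optimize import fsolve

# Toy R-matrix inputs (all hypothetical): two levels, one channel.
E_lam = np.array([1.0, 3.0])          # real eigenenergies E_lambda
gamma = np.array([[0.6], [0.4]])      # reduced width amplitudes gamma_{lambda c}
B_c, a_c = -1.0, 4.0                  # boundary condition and channel radius

def L_c(E):
    # Crude outgoing-wave logarithmic derivative for a neutral s-wave,
    # O ~ exp(i k a): L = i k a, continued to complex E (k = sqrt(E), toy units).
    return 1j * np.sqrt(complex(E)) * a_c

def level_matrix(E):
    # A(E)_{ll'} = E_l delta_{ll'} - sum_c gamma_{lc} gamma_{l'c} [L_c(E) - B_c]
    return np.diag(E_lam).astype(complex) - (L_c(E) - B_c) * (gamma @ gamma.T)

def pole_condition(x):
    # Pole of the S-matrix when some eigenvalue of A(E_0) equals E_0 itself.
    E = complex(x[0], x[1])
    ev = np.linalg.eigvals(level_matrix(E))
    d = ev[np.argmin(np.abs(ev - E))] - E
    return [d.real, d.imag]

E0 = fsolve(pole_condition, x0=[1.0, -0.1])   # guess in the lower half plane
print("pole E0 = %.4f%+.4fi  ->  Ex = %.4f, Gamma = %.4f" %
      (E0[0], E0[1], E0[0], -2.0 * E0[1]))
```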
The EDA code [7] used to perform the R-matrix analysis implements the standard Wigner R-matrix theory [72] without approximations, except for restricting the number of R-matrix levels for a given J π T to a finite number of levels in the energy region of interest. The analysis employs isospin symmetry in the limited sense that isospin constraints on the γ λc are implemented as follows. The α 4 He and d 6 Li channels couple to an isospin 0 level, but not to an isospin 1 level. Hence the γ's for an isospin 1 level coupling to these channels are set to zero. Also, a level's γ's for the p 7 Li and n 7 Be channels are related by isospin Clebsch-Gordan coefficients, which are different for isospin 0 and 1 levels.
Let us consider the dissociation of the compound nucleus A into nucleus A ′ and ejectile a.
Define the channel cluster form factor F, proportional to the overlap between the internal wave function of nucleus A and the internal wave functions of the nuclei A′ and a, as [73]

$$F(r_{aA'}) \propto \int \psi_{A'}^{*}(\xi_{A'})\,\psi_{a}^{*}(\xi_{a})\,\psi_{A}(\xi_{A})\; d\xi_{A'}\, d\xi_{a}. \quad (3)$$

Here r_{aA′} is the relative coordinate between the C.M. of a and A′. The symbols ξ_A, ξ_{A′} and ξ_a denote internal coordinates of the nuclei A, A′ and a, respectively; and the ψ are the corresponding internal wave functions. A full definition of F can be found elsewhere (Eq. 7 of Ref. [74]). The integral of |F|^2 over r_{aA′} is the widely predicted "spectroscopic factor".
The R-matrix reduced width amplitude γ_{λc} for the breakup of a level λ of the nucleus A into A′ and a in channel c is defined as [72,73]

$$\gamma_{\lambda c} = \sqrt{\frac{\hbar^{2} a_c}{2 M_c}}\; F(a_c), \quad (4)$$

where M_c is the reduced mass for relative motion between A′ and a. Comparison between theory calculations and the predictions here is possible by comparing F(a_c) calculated from theory and γ_{λc} using Eq. 4. However, this is only possible when the same boundary conditions B_c are imposed at a_c, as is standardly done in R-matrix theory. As theory calculations do not usually do this, it is more useful to compare them to G_{μc}(E) in Eq. 1, which is the equivalent of γ_{λc} for wave functions with outgoing wave (Kapur-Peierls) boundary conditions (Eq. 30 of Ref. [11]). Hence the R.H.S. of Eq. 4, calculated from theory (usually) for bound states, should be compared to the g_c which will be tabulated in the next section for scattering states.
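As an illustration of this comparison, the snippet below converts a theory form factor evaluated at the channel radius into a reduced width amplitude via Eq. 4; the numerical form-factor value and the particle masses used are assumptions made for the example.

```python
import numpy as np

HBARC = 197.3269631  # MeV fm
AMU = 931.494        # MeV per atomic mass unit

def reduced_width_amplitude(F_ac, a_c, m1_amu, m2_amu):
    """gamma = sqrt(hbar^2 a_c / (2 M_c)) * F(a_c), i.e. Eq. 4.

    F_ac : channel cluster form factor at the channel radius [fm^-3/2]
    a_c  : channel radius [fm]
    m1_amu, m2_amu : masses of the two fragments [amu]
    Returns gamma in MeV^(1/2).
    """
    M_c = m1_amu * m2_amu / (m1_amu + m2_amu) * AMU  # reduced mass [MeV]
    return np.sqrt(HBARC**2 * a_c / (2.0 * M_c)) * F_ac

# Illustrative: the alpha-4He channel at a_c = 4.0 fm with a hypothetical
# theoretical form factor F(a_c) = 0.05 fm^-3/2.
print(reduced_width_amplitude(0.05, 4.0, 4.0026, 4.0026))
```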
Resonance structure
The E x , Γ and isospin impurity of the resonances are displayed in Table 2. All J π are allowed, so that the J π is independently established by the R-matrix analysis. Isospin 0 and 1 are allowed for all resonances, because these are the only isospins that can couple to the channels in this analysis if isospin symmetry is assumed. The resonances found in Table 2 should be compared to the "experimental" resonances believed to exist on the basis of a summary of resonances found in experimental data and other analyses [1]. A comparison with experiment indicates substantial agreement. Disagreements partially stem from the difference between defining the energy and width from poles of the S-matrix, as is done in the R-matrix analysis, and defining them from Breit-Wigner formulae, as is often the case in experimental analyses. For example, agreement between the energy and width of the wellknown narrowest resonances (J π T (E x ) = 0 + 0(0), 1 + 0(18), 1 + 1(18), 3 + 0(19) and 3 + 1(19)) is much better than those of the well-known broadest resonances (2 + 0(3) and 4 + 0(11)).
However, the parameters of the 4 + 0(11) resonance found from 4 He(α, α) alone (E x = 11.5(3) MeV, Γ = 4000(400) keV) [1] are in perfect agreement with this analysis. Since the R-matrix analysis contains more data than any known analysis, the experimental masses and widths may well be in doubt, although this is less likely for narrow experimental resonances. (Table 2 caption fragment: the tabulated values are compared with GFMC calculations [76]; quantities in square brackets are not accurately determined by this analysis.) For resonances such as (1, 2) − 1(24) that are absent from this analysis, the reason is that these resonances were observed in reactions other than those analyzed here [1]. Of the reactions studied here, the 4 + 0(20) resonance is only non-negligibly observed in 4 He(α, α 0 ) [1], and data from the experimental reference [9] are not included here.
The narrow ground state 0 + 0 resonance parameters in Table 2 are not an improvement on experiment, since no low-energy 4 He(α, α 0 ) data are included at the same excitation energy as the resonance energy. The experimental J π T = 1 − ? at 19 MeV [1], and the 4 − ? [1], are found to have isospin 0, having allowed for both isospins. For the 2 + 0 structure near 22 − 23 MeV, the current fit clearly prefers two resonances. The lower mass resonance is well established [1].
The existence of the higher mass resonance only became apparent once 6 Li(d, X) data above ≈ 1 MeV d laboratory energy were included, and hence does not contradict an analysis [79] that observed a broad bump in the p 1 yield at the same energy. However, this analysis cannot be regarded as strong evidence for two 2 + 0 resonances. It is unclear whether two 2 + 0 resonances at 22 − 23 MeV are confirmed by NCSM theory calculations [75]. This calculation does find an extra 2 + 0 state at 14 − 21 MeV, which is known as an "intruder" state because it does not appear in the naïve shell model. Whether this intruder should be identified with the 2 + 0(23) or with the extremely broad 2 + 0(16), discussed below, is unclear.
A further reason for caution is that the energy extracted for the peak in 6 Li(d, α 0 ) is dependent on the channel radius used in the R-matrix fit [6,81]. For example, an analysis of β-delayed 2α spectra from 8 Li and 8 B together with ℓ = 2 α 4 He phase shifts finds that 2 + intruder states below excitation energy 26 MeV need not be introduced [81]. Although the S-matrix (and its poles and residues) are formally independent of the chosen channel radii for infinitely many R-matrix levels, actual analyses employ a finite number of levels, which can lead to different energies for different channel radii. In addition, the energy of 2 + 0 varies by several MeV as new data are included, consistent with the expectation that the energy should not be particularly well constrained for a very broad resonance. A NCSM theory calculation finds the 2 + 0 and 4 + 0 intruders at 14 − 21 and 20 − 26 MeV respectively [75].
However, a recent GFMC calculation finds no need to introduce extra 2 + or 4 + states below 22 and 19 MeV respectively [76]. The disagreement between NCSM and GFMC may be due to the large widths of the intruder states (Table 2), which imply substantial variation in the energies extracted from these calculations, which treat all the states as bound. Whether very broad states should be seen in calculations that treat states as bound is debatable. One extra 1 + state, with isospin 1, is predicted by NCSM [74]. The same is true for VMC if the T = 1 8 Li states are taken as a guide to the T = 1 8 Be states [78]. This coincides with the finding here that only one new 1 + state is needed, and that this state has isospin 1.
The 2 − resonance is conceptually complicated because it lies exactly at the n 7 Be threshold, and hence requires sophisticated analysis. Several such analyses have been performed [1], typically yielding a resonance with E x = 18.9 MeV and Γ ≈ 100 keV, although there is disagreement on the width. Most strikingly, an analysis of 7 Li(p, n 0 ) and 7 Be(n, p 0 ) data finds Γ = 1634 keV [82], based on a prescription whereby the sum of the Γ c equals Γ.
As previously mentioned, this is not the case in our analysis. In contrast, another multilevel R-matrix analysis [52] defines the resonance energy and width as the properties of the pole of the S-matrix, yielding a total width much lower than the sum of the partial widths. This corresponds closely to our conventions, yielding Γ = 122 keV, T = 0 and isospin impurity ≈ 24% [52]. This isospin impurity is at odds with the ≤ 10% obtained from 7 Li(p, γ) 8 Be * (18.9) [1]. The current analysis assigns T = 1 for the 2 − resonance, with E x = 18.73 MeV, a much larger width Γ = 640 keV, and isospin impurity 31%.

Table 3: The partial widths Γ c and reduced width amplitudes g c found in the R-matrix analysis. First, the list of possible channels is indicated for each J π (for example, for 2 + : α 1d, p 5p, p 5f, p 3p, p 3f, n 5p, n 5f, n 3p, n 3f, d 5s, d 5d, d 3d, d …). Each channel is denoted in the format (reaction) (2s + 1) ℓ, where "reaction" is α (α 4 He), p (p 7 Li), n (n 7 Be) or d (d 6 Li), and s and ℓ are the spin and orbital angular momentum of the nuclei in the channel.
Second, for each resonance, Γ c and g c are indicated in the order of the channels enumerated for the corresponding J π . These entries always start with the first channel, but do not necessarily end with the last channel. For Γ c this is because the corresponding channels are not kinematically allowed. For g c the quantities could not be determined because the resonance is too distant from the relevant threshold. Quantities in square brackets are not accurately determined by this analysis. It is understood that Γ c and g c are only given for the channels considered in this analysis; and that certain two-body channels, all three-body channels, and higher ℓ, are neglected. The g c are channel radius dependent, and hence not experimentally measurable.
This pole has the opposite pattern of coupling to the channels: it couples more strongly to p 7 Li and more weakly to n 7 Be.
The 1 − 1(22) resonance has previously only been observed in the 7 Li(p, γ 0 ) reaction [1]. This analysis finds a need to introduce this resonance with a strong coupling to p 7 Li and n 7 Be in the spin 2, D-wave. The parameters of 1 − 1 (22) are not strongly fixed by this analysis and are hence not displayed. | 2014-10-01T00:00:00.000Z | 2005-06-20T00:00:00.000 | {
"year": 2005,
"sha1": "95694677e16a9c2e843dd467ad7ea3695f7830b4",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/nucl-th/0506063",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8dba023151587a7051dd97d942f031bc62e0d37b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
252173526 | pes2o/s2orc | v3-fos-license | The Limits of Acute Anemia
For many years, physicians’ approach to the transfusion of allogeneic red blood cells (RBC) was not individualized. It was accepted that a hemoglobin concentration (Hb) of less than 10 g/dL was a general transfusion threshold and the majority of patients were transfused immediately. In recent years, there has been increasing evidence that even significantly lower hemoglobin concentrations can be survived in the short term without sequelae. This somehow contradicts the observation that moderate or mild anemia is associated with relevant long-term morbidity and mortality. To resolve this apparent contradiction, it must be recognized that we have to avoid acute anemia or treat it by alternative methods. The aim of this article is to describe the physiological limits of acute anemia, match these considerations with clinical realities, and then present “patient blood management” (PBM) as the therapeutic concept that can prevent both anemia and unnecessary transfusion of RBC concentrates in a clinical context, especially in Intensive Care Units (ICU). This treatment concept may prove to be the key to high-quality patient care in the ICU setting in the future.
Introduction
The loss or deficient production of erythrocytes leads to anemia, defined as a reduction in erythrocytes and Hb concentration in the blood [1]. Due to many possible causes, anemia is widespread and is therefore a global burden of disease [2]. However, the phenomenon of "acute anemia" has become more common in the last century. In contrast to chronic anemia, acute anemia arises in most cases when acute blood losses are substituted by acellular solutions. In acute hemorrhage, primarily erythrocytes and plasma are lost simultaneously, which keeps the concentration of erythrocytes in the remaining circulating blood constant. The subsequent transfer of interstitial fluid into the intravascular space leads to a reduction in the erythrocyte concentration and, consequently, to a drop in the hemoglobin concentration and the hematocrit values [3]. Blood losses are also often replaced with acellular solutions, such as crystalloids, or to a much lesser extent with colloids, which leads to an immediate reduction in the erythrocyte concentration and thus to dilutional anemia [4]. The German physiologist Kronecker was the first to demonstrate in animal experiments that extensive, acute blood loss can be survived even at very low hemoglobin concentrations if the lost volume is replaced with water or salt solutions [5]. He thus proved that acute hypovolemia is significantly more dangerous for the animals' integrity than dilutional anemia caused by volume administration, as long as a certain number of circulating erythrocytes remain [6]. It is now known from many animal experiments that the lowest physiological short-term limit of acute anemia is about 3 g/dL, depending on the physiological compensation reserve and the clinical situation [7][8][9][10][11]. However, individual clinical case reports suggest that patients with significantly lower Hb values survived without long-term sequelae, with the lowest documented value being 0.7 g/dL [12].
The fact that milder forms of anemia also have a considerable influence on physical performance was discovered in 1946 by the gynecologist Adams [13]. He proved that postpartum women were physically more resilient if their Hb was above 10 g/dL (hematocrit > 30%). This led to the so-called "10/30 rule", which resulted in a Hb of 10 g/dL being held as the typical clinical threshold for the transfusion of RBC for many years. This approach was established as a standard in everyday clinical practice [14].
In 1999, for the first time, it was proven in a randomized controlled trial that even in intensive care patients, a Hb of 7 g/dL has a similar outcome compared to patients with a Hb above 9 g/dL [15]. This study indicates that a Hb of 10 g/dL should probably not be the limit for acute anemia. In addition, case reports from Jehovah's Witnesses provide increasing evidence that patients with significantly lower Hb values may survive without any sequelae. This opens the question of why a Hb of 10 g/dL does not represent the tolerable short-term limit for acute anemia. In contrast, it might have a long-term influence on morbidity and mortality [16]. However, an increasing amount of the literature questions the scientific approach of comparing liberal versus restrictive transfusion regimes [17][18][19]. Therefore, this narrative review aims to enlighten the limits of acute anemia from a physiological, clinical, and "patient blood management" (PBM) point of view, highlighting the effects of therapeutic alternatives. We did not perform a systematic review with a systematic literature search but combined the most pivotal literature from animal experiments and clinical trials.
The Physiological Point of View
One of the most important tasks of blood circulation is transporting oxygen to the cells of individual tissues [20]. Since oxygen dissolves very poorly in blood plasma, adequate oxygen supply depends on the presence of erythrocytes, which uniquely bind oxygen. The sigmoidal oxygen dissociation curve of hemoglobin ensures oxygen uptake in the lung and oxygen release in the tissues [20]. Therefore, the release and uptake of oxygen by erythrocytes is a passive process that primarily depends on the microenvironment and is significantly altered in septic patients [21]. Typical venous oxygen saturation in tissues is 65%, which means that only about one-third of the transported oxygen is released to the tissues. At least 2/3 recirculates back to the right heart via the venous circulatory system [20].
The absolute amount of transported oxygen also depends linearly on the cardiac output. Control mechanisms of cardiac output are complex; they include heart rate (which is, on the one hand, controlled by the autonomic nervous system and, on the other hand, controlled by humoral regulation) and stroke volume ( Figure 1). The stroke volume is determined by the wall shear stress of ventricles (preload), myocardial contractility, and peripheral vascular resistance (afterload) [22]. The vascular resistance, in particular, depends on the blood's viscosity. The higher the Hb value, the more peripheral vasoconstriction occurs via shear forces on the endothelium. In acute anemia, these shear forces decrease, and the peripheral resistance drops-reflectively, the cardiac output and thus the oxygen supply to the tissues increase [23]. In awake patients, acute dilutional anemia also increases cardiac output by increasing heart rate [24], an adaption mechanism that typically does not occur in anesthetized individuals. Since neural or humoral pathways do not directly regulate the oxygen delivery in the organism, the relative increase in cardiac output exceeds the relative decrease in the Hb value. Thus, the oxygen supply to the tissues paradoxically increases. With the further decline in Hb, oxygen delivery reaches its baseline level [25]. Only at very low Hb values (3 g/dL) can the organism's oxygen requirements no longer be met, and tissue hypoxia occurs. . Cardiac output is defined as the product of stroke volume and heart rate. Stroke volume depends on preload, afterload, contractility, and ventricular volume, whereby neuronal and humoral factors modulate heart rate. Arterial oxygen content is defined as the sum of hemoglobinbound oxygen (hemoglobin concentration times arterial saturation times Hüfner's number) and physically dissolved oxygen (arterial oxygen partial pressure times a constant).
All these mechanisms are necessary to meet the oxygen requirements of the organs and tissues. In the literature, a value of 3.6 mL/min/kg for an adult is often given for the resting oxygen demand. However, this value can vary for individual organs, from 2 mL/min/kg for the skin to 100 mL/min/kg for the myocardium. Therefore, different organs react quite differently to an acute restriction of the oxygen supply. Furthermore, it cannot be predicted whether an individual patient can increase cardiac output to such an extent for a given Hb that the limits for oxygen demand can be met.
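As a rough numerical sketch of these relationships, the snippet below applies the standard oxygen-content formula from Figure 1 (Hüfner's number taken as 1.34 mL O2/g, dissolved-oxygen constant as 0.003 mL/dL/mmHg) to purely illustrative patient values:

```python
def oxygen_delivery(hb_g_dl, sao2, pao2_mmhg, cardiac_output_l_min):
    """Systemic oxygen delivery DO2 in mL O2/min.

    CaO2 = Hb * SaO2 * 1.34 + PaO2 * 0.003   (mL O2 per dL blood)
    DO2  = CaO2 * cardiac output (converted to dL/min)
    """
    cao2 = hb_g_dl * sao2 * 1.34 + pao2_mmhg * 0.003
    return cao2 * cardiac_output_l_min * 10.0  # L/min -> dL/min

demand = 3.6 * 70  # resting demand, mL/min, for a 70 kg adult (value from text)
# Progressive anemia with a compensatory rise in cardiac output:
for hb, co in [(14, 5.0), (7, 8.0), (3, 12.0)]:
    do2 = oxygen_delivery(hb, 0.98, 95, co)
    print(f"Hb {hb:2d} g/dL, CO {co:4.1f} L/min: DO2 = {do2:5.0f} mL/min "
          f"({do2 / demand:.1f}x resting demand)")
```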
Theoretical deduction of the limit of acute anemia is impossible. The complexity and individuality of the potential compensatory mechanisms prevent adequate prediction of the outermost limit of acute anemia. Furthermore, it is difficult to define what determines the limit of anemia in a specific setting. Theoretically, several different outcome parameters such as short-term survival, long-term survival, lactate concentration, pH, tissue oxygenation, etc., can be used to define the limit of anemia. For example, if a change in long-term survival is used to define the limit of anemia, this might be completely different from a situation where an increase in lactate is used to define the limit of anemia. How this influences the definition of the limit of anemia will be discussed later in the article.
What Is Known from Animal Experiments?
From a theoretical point of view, the most solid outcome parameter of acute anemia is short-term survival. Several animal experiments have been performed that investigated the outermost limit of anemia in terms of oxygen transport and tissue oxygenation and their influence on short-term survival. However, the first systematic investigations focused on oxygen transport and tissue oxygenation. van Bommel and coworkers were one of the first to determine the hemoglobin concentration in anesthetized pigs, where the compensatory mechanisms of acute anemia were unable to stabilize oxygen consumption (VO 2 ) [26]. The main finding of this study was that the systemic VO 2 , the cerebral µpO 2 , and the intestinal mucosal µpO 2 became impaired at the same stage during hemodilution. In contrast, the intestinal serosal µpO 2 became impaired at an earlier stage. In this model, the decline in VO 2 was interpreted as supply dependency; therefore, a hematocrit of 7.6% was defined as the outermost limit of acute anemia. Torres Filho and coworkers performed similar experiments in rats [27]. They could show that, until a hemoglobin concentration of 6 g/dL was reached, oxygen transport and tissue oxygenation were barely influenced, but at a hemoglobin concentration of 3 g/dL, rats' compensatory mechanisms were incapable of avoiding tissue hypoxia due to an overcritical reduction in tissue oxygenation. Similar values could also be found in large animal models, where a hemoglobin concentration close to 3 g/dL has been demonstrated to be the limit of the compensatory mechanisms of acute anemia, resulting in severe short-term mortality [7,28,29].
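To make the notion of supply dependency concrete, a schematic biphasic sketch is given below; it is a toy model, and the demand, maximal extraction ratio, and cardiac output values are illustrative assumptions, not the animals' measured physiology:

```python
import numpy as np

def vo2(hb, co_l_min, vo2_demand=250.0, o2er_max=0.6, sao2=0.98):
    """Schematic biphasic oxygen consumption (mL/min): VO2 is demand-limited
    until the extraction ratio maxes out, then becomes supply-limited."""
    do2 = hb * sao2 * 1.34 * co_l_min * 10.0   # oxygen delivery, mL/min
    return min(vo2_demand, o2er_max * do2)

# Sweep Hb downward at a fixed (already elevated) cardiac output to locate
# the 'critical' Hb below which VO2 becomes supply dependent.
for hb in np.arange(10, 0, -1):
    print(f"Hb {hb:2d} g/dL: VO2 = {vo2(hb, co_l_min=8.0):6.1f} mL/min")
```

With these assumed numbers the transition occurs near Hb 4 g/dL, qualitatively reproducing the few-g/dL critical values reported in the animal studies above.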
However, this physiological approach is subject to some relevant limitations. For example, it assumes a critical limit in the oxygen supply to the tissues in all organs simultaneously, although this is not the case. Several animal studies have demonstrated that the anemia tolerance of individual organs can differ [9,10,27]. Initially, it was erroneously assumed that the myocardium, as the carrier of the compensatory mechanisms of acute anemia, should determine the limit of anemia tolerance of the whole body [24]. In contrast to this assumption, it has been demonstrated that the kidneys, for example, become hypoxic when the compensatory mechanisms of acute anemia still maintain the integrity of the cardiovascular system [9,11]. In addition, the physiological principles described above just apply if the lost blood volume is replaced to maintain normovolemia. Only in this case is there no release of endogenous catecholamines during anemia [7]. In the case of hypovolemia, peripheral vasodilation, one of the main compensatory mechanisms of acute anemia, is counteracted. This prevents the combination of acute anemia and acute hypovolemia from being compensated as well as acute anemia alone [30]. Lastly, none of these animal studies investigated the long-term outcome of such severe anemia. Therefore, this outermost limit of anemia cannot be considered safe for a longer time.
What Is Known about Extreme Anemia in Humans?
Outcome: Short-Term Survival

The considerations mentioned above for animal models are supported by several anecdotal case reports in humans [12,31,32] and correlate with studies in patients for whom blood is not an option (mostly Jehovah's Witnesses), who refuse transfusion of allogeneic blood for personal or religious reasons. Transfusion in this cohort not only would disregard patient autonomy but can also result in medicolegal problems [33]. Several major case series correlate the lowest perioperative Hb concentration with survival [16,[34][35][36]. They showed that patients could survive with a Hb value of ≈3 g/dL for a short period. Still, these studies also clearly demonstrated that if extreme anemia is not corrected, mortality below a Hb value of 6 g/dL notably increases. This is in contrast to the theoretical physiological considerations mentioned above. Regarding oxygen transport and tissue oxygenation, the limit for acute anemia should be at much lower hemoglobin concentrations than 6 g/dL. This phenomenon can therefore be seen as a reference that a low threshold for transfusion cannot solely be defined based on purely physiological considerations. It is difficult to assess whether the temporal component is the only reason for this so that short-term extreme anemia can be better tolerated than moderate anemia of longer duration.
It is sometimes recommended to use physiological transfusion triggers rather than fixed Hb values for the transfusion of RBC concentrates. These may theoretically indicate a critical limitation in oxygen transport and tissue oxygenation. Although very interesting from a physiological point of view, extensive clinical use cannot be recommended at present [37].
Summarizing these results, it can be concluded that acute anemia below a Hb of 6 g/dL is not safe as a transfusion threshold in everyday clinical practice since the risk of increased mortality is unacceptable for ICU patients. However, which hemoglobin concentration above 6 g/dL can be considered safe without transfusion is still open for discussion. Furthermore, even transfusions deemed appropriate due to a low hemoglobin concentration could be avoided if adequate measures were taken in advance [38]. The benefit/risk ratio of transfusion at a specific threshold might depend on the individual clinical situation and the individual capability of a patient to compensate for the reduction in the hemoglobin concentration. For this reason, numerous studies have been conducted in different patient populations to investigate the mortality and morbidity of acute anemia with or without transfusion.
The Clinical Point of View-The Effect of Acute Anemia and Transfusion on Morbidity and Mortality
In daily clinical practice, it is impossible to study the morbidity of acute anemia without the influence of transfusion. A potential clinical study that includes a control group with extensive anemia over a prolonged time might be considered unethical. Therefore, most clinical studies compare groups of patients subjected to a liberal or a restrictive transfusion regime [39]. These clinical studies investigate both regimes in patients with different pathologies and cannot be easily compared to each other. Typically, the effects of prolonged acute anemia with eventual transfusion at a lower threshold (in these studies, often called a restrictive transfusion regime) are compared with the results of transfusion of allogeneic blood after a short period of acute anemia (liberal transfusion regime) [40]. Although this reflects the typical clinical approach, such studies cannot describe the safe lower limit of acute anemia for oxygen transport but can only provide valuable guidance on the most useful clinical practice in terms of therapeutic modalities, a point of view that will be discussed later in the article.
A typical study considering this restriction was performed by von Heymann and coworkers in cardiac surgery patients [41]. They demonstrated that perioperative anemia is an independent risk factor for mortality and that this risk is increased by transfusion of RBC concentrates. Furthermore, the risk of dying is correlated with the degree of anemia: the more anemic the patients were, the higher their probability of death. These results were later confirmed by Jabagi in a similar study [42]. He also showed that perioperative anemia is an independent risk factor for perioperative death and that this risk is additionally increased by the transfusion of RBC concentrates.
In addition to these findings, preoperative anemia is independently correlated to perioperative mortality. This has been shown many times in large patient cohorts. Most impressive in this context is a study by Musallam from 2011 in which he demonstrated in 227,425 patients that even mild to moderate anemia increases the perioperative risk of death by more than 20% [43]. Mild to moderate anemia was defined as a Hb value of 10-13 g/dL, which is far from limiting oxygen transport and tissue oxygenation. Thus, the reason for the increase in morbidity and mortality cannot be sought in inadequate tissue oxygenation but is likely to have other causes that cannot be deduced from the original study.
These results were repeated several times for surgical patients, showing a statistical correlation between mild anemia and a negative perioperative outcome [44][45][46]. Unfortunately, all these studies have to be seen as descriptive, and the association between anemia and perioperative mortality cannot be proven causally due to this fact. Theoretically, one or more confounders could exist that represent the actual reason for increased perioperative mortality. In this case, perioperative anemia could only be linked statistically to these confounders.
Hebert conducted the first and perhaps the most significant study comparing a liberal with a restrictive transfusion regime in intensive care patients [15]. He demonstrated that a transfusion trigger of 7 g/dL was not associated with higher mortality compared to a transfusion trigger of 9 g/dL. At that time, this new approach opened the field for this point of view. Consequently, similar studies were performed in various patient cohorts, and in the vast majority of cases, a restrictive transfusion regimen was non-inferior to a liberal one [47]. However, as demonstrated by Carson et al. in this Cochrane review, there are "insufficient data to inform the safety of transfusion policies in certain clinical subgroups", which include acute coronary syndrome, myocardial infarction, traumatic brain injuries, cancer including hematological malignancies, and stroke, which represents a large percentage of the population of patients in hospitals [48][49][50].
Although all of these studies demonstrated that a "restrictive" transfusion regime is as safe as a "liberal" one, no study can show whether other thresholds might be safer than those investigated. In this context, "safe" always has to be seen as connected to the specific study and the two groups compared.
Furthermore, many patients treated in an ICU have comorbidities that are not present in other settings and could potentially affect anemia tolerance [51]. For example, sepsis is a common condition in ICU. Due to the altered capability of the organism to utilize oxygen in this situation, whether the limit of anemia for oxygen transport changes in septic patients has been discussed. Holst and colleagues showed that a restrictive transfusion regime is not inferior compared to a liberal transfusion regime in septic shock patients [52]. Even if this does not demonstrate the physiological limits of acute anemia, it has been proven that the recently advised clinical approach (transfusion threshold 7 g/dL) is safe even in sepsis.
In most studies, it was possible to significantly reduce the perioperative transfusion requirements by a restrictive transfusion regime, which was associated with significant cost savings [53,54]. In a clinical situation with similar liberal and restrictive transfusion outcomes, the more cost-effective one is advised.
Two important studies differ from this general finding that restrictive or liberal transfusion does not affect the outcome. First, Bergamin et al. demonstrated that a liberal transfusion regime was superior to a restrictive one in patients undergoing abdominal surgery due to an underlying malignant disease [55]. In contrast, in the study by Villanueva et al., mortality was decreased by a restrictive transfusion regimen in patients with gastrointestinal bleeding [56]. It is impossible to say whether these studies have extraordinary results due to the respective diseases or if should they be evaluated as random statistical errors (since both studies have methodological limitations). Overall, it can be stated that a clear superiority of one transfusion regime over another in terms of morbidity and mortality cannot be demonstrated. If one also considers the effort required to provide RBCs and the "primum nil nocere" principle, then a liberal transfusion regime can hardly be justified.
This general recommendation for a restrictive over a liberal transfusion regime has, of course, to be seen in the light of many comorbidities potentially influencing anemia tolerance in a specific patient. As long as this evidence has not been generated, existing meta-analyses can be interpreted in such a way that, so far, no comorbidity could be identified, which advises higher transfusion thresholds. An overview of this topic is provided by the latest Cochrane analysis by Carson and colleagues [47].
We know from the data presented that perioperative anemia is associated with increased perioperative morbidity and mortality. At the same time, it has been convincingly shown that unnecessary transfusion of RBCs does not improve clinical outcomes. For this reason, all available therapeutic measures to avoid anemia and transfusion of allogeneic blood must be utilized to improve outcomes. This leads to the third approach, the perspective of "Patient Blood Management" (PBM).
The PBM Point of View
Neither the acceptance of acute anemia nor the transfusion of RBC concentrates represents an effective therapeutic modality. Therefore, it seems logical that in daily clinical practice, everything must be undertaken to avoid both. This is the intellectual basis for a treatment concept that has gained significant traction in recent years, the so-called "Patient Blood Management" (PBM) [57].
PBM aims to improve the quality of care through a clinical focus on increased erythropoiesis, reduced blood loss, and reflected transfusion (Figure 2) [58]. PBM is not a single measure, but a bundle of actions adapted depending on the individual patient [59]. These measures do not correspond to a "one size fits all" approach but must be individually put together for the particular clinical situation. It is essential to understand that in clinical practice, it is best to avoid the physiological or clinical limits of acute anemia. Preventing blood loss (the so-called 2nd pillar of PBM) or administering iron and erythropoietin to increase erythropoiesis (so-called 1st pillar of PBM) makes the transfusion of allogeneic blood unnecessary [60,61]. This approach is based on the hypothesis that moderate or even mild anemia is associated with worsening perioperative morbidity and mortality.
The 3rd pillar of PBM deals with the question of the Hb threshold at which RBCs must be transfused in certain physiological situations (hypotension, hypoxia, sepsis, etc.) to prevent damage to the organism. Only this 3rd pillar of PBM defines the possibilities to exploit the physiological anemia tolerance in terms of a lower Hb or oxygen transport value. However, this does not mean that anemia should be tolerated until a Hb threshold for transfusion is reached, but that every form of anemia should be avoided as far as possible (Figure 3). If therapy is needed, it should not be carried out by transfusion of erythrocyte concentrates but should rely on alternative treatment modalities suited to the individual case [62]. These treatment modalities include the administration of iron, erythropoietin, folic acid, and vitamin B 12, which prove effective even if applied within a short time frame [63]. Which therapeutic option plays the most crucial role at a specific time is determined by several guidelines that refer to different clinical situations [64][65][66][67][68]. Despite the general belief that PBM is also very useful in ICU patients, and the fact that this has been demonstrated recently by Lasocki and coworkers [69], whether the application of iron in the ICU can increase the risk of nosocomial infection has been regularly discussed [70,71]; however, up to now, there has been no conclusive study that could prove this.
The efficacy of the PBM bundle was most impressively demonstrated in a large prospective observational study in Western Australia [53]. The interdisciplinary team reduced the odds ratio for mortality by 28% by implementing PBM and significantly reduced the transfusion of RBCs, fresh frozen plasma (FFP), and platelets, making PBM the standard for managing anemia, bleeding, and transfusion in Western Australia. A similar result was shown by Meybohm et al., where the implementation of PBM reduced the volume of blood products transfused [72]. However, no evidence regarding mortality could be demonstrated in this publication.
Every effort must be made in clinical practice to avoid anemia or treat it with adequate therapeutic modalities. Although the clinical approach commonly used in PBM has been successfully proven in large observational studies, there is some criticism about proving the effectiveness of individual interventions. However, whether this type of criticism is justified for a bundle of measures is still open for discussion [73]. In particular, the application of iron and erythropoietin is sometimes considered unsafe for some clinical situations such as acute infection for iron [74] and malignancy for erythropoietin [75]. So far, no randomized clinical trials exist that prove that the combination of a restrictive transfusion regime and postoperative application of iron and erythropoietin is superior to the transfusion of pRBCs.
From what has been said so far, it can be concluded that two different limits to acute anemia exist. First, there is the naturally existing short-term anemia tolerance of the organism, which is defined by physiological conditions. The Hb values associated with this kind of anemia tolerance are low. Second, there is the tolerance of physicians to acute anemia, which is significantly higher but also associated with relevant morbidity and mortality. Both limits seem arbitrary for this reason because neither one of these limits is really "safe" from a clinical point of view.
Conclusions
The concept of anemia tolerance is widely discussed in the recently published literature. However, this term subsumes very different clinical circumstances. Mainly, it stands for the tolerance of the organism to withstand acute, normovolemic anemia, but it also includes the phenomenon that both mild and moderate anemia are independent long-term risk factors for morbidity and mortality, and therefore should be avoided or treated. Correcting mild or moderate anemia with the transfusion of RBC concentrates rarely improves outcomes in a general population and should, therefore, only be used in specific indications. As a promising alternative, PBM includes a powerful toolbox to prevent and treat anemia. | 2022-09-10T15:20:28.096Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "b6225e6d4e04d9455e821124bae80c0fdea4c7d7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/11/18/5279/pdf?version=1662553173",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "975e750eab8b6df83d98178f4fe9724de528ef35",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
233393710 | pes2o/s2orc | v3-fos-license | Accelerating Universe in Higher Dimensional Space Time : an alternative approach
We have discussed here a higher dimensional cosmological model and explained the recent acceleration with a Chaplygin type of gas. Dimensional reduction of extra space is possible in this case. Our solutions are general in nature because all the well known results of 4D Chaplygin driven cosmology are recovered when $d = 0$. We have drawn the best fit graph using the data obtained by the differential age method (CC) and it is seen that the graph favours only one extra dimension. That means the Chaplygin gas is apparently dominated by a 5D world. It is relevant to point out that the final equation in this case is highly nonlinear in nature. Naturally it is not possible to obtain an explicit solution of the 4D scale factor with time. To circumvent this difficulty, we consider a first order approximation of the key equation, which has made it possible to get a time explicit solution of the 4D scale factor in exact form as well as the expression for the extra dimensions. It may be pointed out that for a large four dimensional scale factor this solution mimics the $\Lambda$CDM model. The flip time is also analysed both analytically and graphically in some detail. It clearly shows that an early \emph{flip} occurs for higher dimensions. It is also seen that the rate of dimensional reduction is faster for higher dimensions. So we may conclude that the effect of compactification of extra dimensions helps the acceleration.
Introduction :
There has been a resurgence of interest in models where the present universe seems to be undergoing an accelerated expansion. Gravitational force being always attractive in nature, this finding is contrary to our intuition. However, detailed investigations of redshifts of distant type Ia supernovae as well as cosmic microwave background anisotropy measurements did suggest this accelerated type of expansion. Several explanations present themselves: introduction of higher derivative theories [1], a variable cosmological constant in Einstein's field equation [2], flavor oscillations of axions [3], inhomogeneity in the space time structure [4,5], a quintessential type of scalar field [6], the presence of higher dimensions [7], and most importantly a Chaplygin type of gas as a matter field [8].
In this context the authors of the present article have, of late, been grappling with the idea of explaining the late acceleration as a higher dimensional (HD) phenomenon [9,10]. In the framework of higher dimensional cosmology we have been able to show, though in a rather naive way, that the acceleration can be explained as a consequence of the presence of the extra spatial dimensions, and this effect has been coined 'dimension driven' acceleration. In fact, here the effective Friedmann equations contain additional terms resulting from the presence of extra dimensions which may be interpreted as a 'fluid' causing the late acceleration. So in this work we attempt to incorporate the phenomenon of acceleration within the framework of higher dimensional spacetime itself without invoking a mysterious scalar field with large negative pressure by hand. Moreover, the extra fluid responsible for the acceleration is geometric in origin, having a strong physical foundation, and is more in line with the spirit of general relativity as proposed by Einstein [11] and later developed by Wesson and his collaborators [12]. In an earlier work Milton [13] showed that quantum fluctuations in 4D spacetime do not generate the dark energy; rather, a possible source of the dark energy is the fluctuations in the quantum fields, including quantum gravity, inhabiting extra compactified dimensions. This has led a number of workers to concentrate on ideas relating to higher dimensional space in attempts to unify gravity with other forces of nature, interpretations of different brane models, the space-time-matter (STM) proposal [12] and also dimension driven quintessential models [14]. The present investigation is primarily motivated by two considerations. While we have plenty of multidimensional cosmological models in the literature [15] and also some sporadic works on brane models [16,17] with a Chaplygin type of fluid, scant attention has been paid so far to explaining the cosmic acceleration either by extra dimensions themselves or by a Chaplygin type of matter field [18,19].
The present work essentially contains two parts. We have taken a (d+4) dimensional homogeneous spacetime with two scale factors and a perfect fluid as a source field. Here we have taken a Chaplygin type of matter field in a higher dimensional spacetime. The solution of our key equation (13) cannot be obtained in closed form because the integration yields only an elliptic form, leaving us with a hypergeometric series. In any case certain inferences can always be drawn in the extreme cases, and our analysis shows that an initially decelerating model transits to an accelerating one as in 4D. An interesting result in this section is the fact that the effective equation of state (EOS) at the late stage of evolution contains some additional terms coming from extra dimensions. This finding has a marked similarity with the EOS obtained by Guo et al. [20] for a variable Chaplygin gas model. Depending upon the presence of extra dimensions the cosmology then evolves as ΛCDM or Phantom type. This is definitely at variance with the usual 4D models, which essentially end up in a deSitter phase with time. Though not exactly similar, this points to the 'k-essence' type of models, which lead to cosmic acceleration today for a wide range of initial conditions without fine-tuning and without invoking an anthropic argument.
We adopt here the χ² minimization technique to obtain the constraints imposed by cosmological observations. We use Type Ia supernova data and the predictions of the CMB and BAO in constraining the cosmological models. Defining a total χ² function, we analyse the cosmological models using the (H(z) − z) OHD data (Table 1). The constraints on Ω_m and m (to account for dimensional reduction, m > 0) are determined by drawing contour plots at different confidence levels. We have drawn the best fit graph using the data obtained by the differential age method (CC), and it is seen that the graph favours only one extra dimension. That means the Chaplygin gas apparently mimics a 5D world.
It should be mentioned here that dimensional reduction of the extra space is possible in this case. But we cannot explain the impact of the compactification of extra dimensions on the present acceleration, or on the evolution of the scale factor of the universe, because the key eq. (13) does not admit an explicit solution, so we have to study the extremal cases only. This type of incompleteness may be remedied via an alternative approach [21] in which the higher order terms of the binomial expansion of the RHS of eq. (13) are neglected. The justification is that the 4D scale factor should be large enough in the zero pressure era, so it may not be inappropriate to keep only the first order terms of the binomial expansion of the RHS of eq. (13), as shown in eq. (29). In the process we obtain an exact solution, through which we study an explicit time dependent solution.
Higher Dimensional Field Equations:
The Einstein field equation in (d+4) dimensions is given by

$G_{AB} \equiv R_{AB} - \frac{1}{2} g_{AB} R = \kappa T_{AB},$

where A, B = 0, 1, 2, 3, ..., (3+d), κ is the higher dimensional gravitational coupling, and $R_{AB}$ and R are the Ricci tensor and Ricci scalar respectively. We consider the line element of the (d+4)-dimensional spacetime

$ds^2 = dt^2 - a^2(t)\, d\Omega_3^2 - b^2(t)\, \gamma_{\alpha\beta}(y)\, dy^{\alpha} dy^{\beta},$

where the $y^{\alpha}$ ($\alpha, \beta = 4, \ldots, 3+d$) are the extra dimensional spatial coordinates, the 3D and extra dimensional scale factors a(t) and b(t) depend on time only, and the compact manifold is described by the metric $\gamma_{\alpha\beta}$. We consider the manifold $M^1 \times S^3 \times S^d$; the symmetry group of the spatial section is $O(4) \times O(d+1)$. The stress tensor, whose form is dictated by Einstein's equations, must have the same invariance, leading to the energy momentum tensor [22]

$T^0_{\ 0} = \rho, \quad T^i_{\ j} = -p\,\delta^i_{\ j}, \quad T^{\alpha}_{\ \beta} = -p_d\,\delta^{\alpha}_{\ \beta},$

where the rest of the (off-diagonal) components vanish. Here p is the isotropic 3-pressure and $p_d$ that in the extra dimensions. Considering $b = a^{-m}$, where m is any positive number, dimensional reduction is ensured a priori. For the matter field we here assume an equation of state given by a Chaplygin type of gas in the 3D space only [23], which is

$p = -\frac{B}{\rho},$

with B a positive constant. The field equations then follow as in [9]; for a positive energy density, k must be greater than zero.
The conservation equation is given by

$\dot{\rho} + 3\frac{\dot{a}}{a}(\rho + p) + d\,\frac{\dot{b}}{b}(\rho + p_d) = 0.$
Now using eq. (7) and solving eq. (10), we get the density as a function of the scale factor, where c is the integration constant. From physical considerations we determine the restriction on m: m < 1 for d = 1, with a corresponding bound for d ≠ 1 (see eq. (12)). It may be mentioned here that a detailed analysis was given by two of us in Ref. [9] for a modified Chaplygin gas cosmology. With the constraint (12), the last term on the right hand side is a higher dimensional contribution, which is absent in 4D (d = 0). Thus the density of the universe at the present epoch in the higher dimensional framework is less than in a 4-dimensional universe. We note the following: (i) when m = −1 we get a universe with b(t) = a(t), which permits expansion in all dimensions. In this case the desirable feature of dimensional reduction is not obtained.
(ii) when m = 0, a universe with flat extra space in (d + 3) spatial dimensions is obtained. In this case we also get a scenario similar to that obtained in a 4D universe, as reported in Ref. [24]. In fact the similarity is a direct consequence of a known theorem of Campbell, namely that any analytic N-dimensional Riemannian manifold can be locally embedded in a higher dimensional Ricci-flat manifold [25].
(iii) when d = 0, we recover the 4D metric with all the known features of 4D cosmology.
The form of the scale factor can in principle be obtained from the above equation. However, the solution of eq. (13) cannot be obtained in closed form, because the integration yields only an elliptic integral expressible as a hypergeometric series. Nevertheless, eq. (13) gives significant information in the extremal cases, as briefly discussed below.
Cosmological Dynamics:
We now discuss the cosmological behaviour of the Chaplygin gas equation of state in the higher dimensional spacetime, expressing the relevant equations with the help of the deceleration parameter. In what follows we shall see that, from the observational data, the best fit graph favours a 5-dimensional interpretation of the cosmological dynamics.
Deceleration Parameter:
At the early stage of the cosmological evolution, when the scale factor a(t) of the universe is relatively small, the second term on the right hand side of eq. (13) dominates, as has been discussed in the literature [26]. Using the expression for the deceleration parameter,

$q = -\frac{\ddot{a}\,a}{\dot{a}^2} = -\left(1 + \frac{\dot{H}}{H^2}\right),$

where H = ȧ/a is the Hubble parameter, with the help of the equation of state (EoS) given by (5), and once again using eq. (11), we obtain the deceleration parameter in terms of the scale factor. At the flip time, i.e., when q = 0, the scale factor becomes $a_{flip}$, which signifies the sign change of the deceleration parameter. Again, in terms of the redshift parameter 1 + z = a_0/a, where a_0 is the scale factor of the present universe, we can rewrite eq. (16), and the redshift parameter at the flip epoch ($z_f$) follows. As the universe expands, the energy density ρ decreases with time, such that the last term in eq. (15) grows, indicating a sign flip when the density attains a critical value. It is evident that for M > 2(2 − dm) one gets a universe with normal matter. This is a consistent result for a realistic $z_f$.
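Since the full higher dimensional expressions in eqs. (15)-(20) are not reproduced here, the flip mechanism can be illustrated in the standard 4D limit (d = 0) with an ordinary Chaplygin gas, for which ρ(a) = √(B + c/a⁶) and q = (1 − 3B/ρ²)/2. The sketch below uses arbitrary illustrative constants B and c, not fitted values from this work.

```python
# A minimal sketch of the sign flip of the deceleration parameter for a
# standard 4D Chaplygin gas (the d = 0 limit of the model in the text).
# Assumptions: p = -B/rho, rho(a) = sqrt(B + c/a^6); B, c are illustrative.
import numpy as np

B, c = 1.0, 4.0                         # arbitrary positive constants

def rho(a):
    return np.sqrt(B + c / a**6)        # Chaplygin density law in 4D

def q(a):
    # q = (1 + 3 p/rho)/2 = (1 - 3 B/rho^2)/2 for a flat FRW model
    return 0.5 * (1.0 - 3.0 * B / rho(a)**2)

a = np.linspace(0.2, 3.0, 500)
a_flip = (c / (2.0 * B))**(1.0 / 6.0)   # analytic zero of q: c/a^6 = 2B
print(f"flip at a = {a_flip:.4f}; q(small a) -> 1/2, q(large a) -> -1")
```

The two limits printed by the sketch reproduce the dust value q = 1/2 at early times and the de Sitter value q = −1 at late times, matching the extremal cases discussed below.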
In the next section we discuss the extremal cases to understand the evolution of the universe. Similar cases in a 4-dimensional universe with a modified Chaplygin gas are discussed in [27,28].
CASE A: In the early phase, when the scale factor a(t) is very small, eq. (16) reduces to a form representing a dust dominated universe. It is found that q = 1/2 for d = 0, i.e., in 4-dimensional spacetime, which is in consonance with the well known 4D results.
CASE B:
In the later epoch of evolution, i.e., for a large size of the universe, we get from eq. (16) an expression
which is similar to the one found in a ΛCDM model. It further gives the effective EoS, using eq. (14). It is interesting to note that the effective EoS is not time dependent. In what follows we shall find that at the later stage of evolution of the universe, as a(t) → ∞, W → −1, so in the asymptotic region it can be expressed as p = −ρ, even if one begins with a modified Chaplygin gas. This corresponds to an empty universe with a cosmological constant, and from eq. (22) it is evident that the deceleration parameter q reduces to −1. Again, in the presence of extra dimensions (d ≠ 0), eq. (23) points to a phantom type of evolution (W < −1), but in the analogous 4D case (d = 0) it mimics a ΛCDM model. This striking difference results from the appearance of extra terms, coming from the additional dimensions, in the EoS.
Observational Constraints on the Model Parameters:
In this section the observational data [29] will be used to analyze the cosmological model, estimating the constraints imposed on the model parameters. We use Type Ia supernova data and the predictions of the CMB and BAO in constraining the cosmological models. Since it is difficult to integrate the expression to determine the exact temporal behaviour of the scale factor, we use an alternative way of expressing the expansion rate of the universe, as a function of redshift, i.e., H(z) [30], in the data analysis. In our case the observed Hubble data (OHD) set, the most direct and model independent observable of the dynamics of the universe, will be used in the model. Naturally, the H(z) dataset reveals the fine structure of the expansion history of the universe. One cannot get the Hubble H(z) data directly from a tailored telescope. Instead, one may get it from two different methods. The first is to calculate the differential ages of galaxies [31,32], usually called the cosmic chronometer (CC) method; the other is deduction from the radial BAO peaks in the galaxy power spectrum [33,34], or from the BAO peak in the Ly-α forest of QSOs [35], based on the clustering of galaxies or quasars. We analyze the cosmological model using the compilation of OHD data points collected by Magana et al. [36] and Geng et al. [37], comprising the H(z) data reported in various surveys so far. The 31 CC H(z) data points are listed in Table 1.
The Hubble parameter, depending on the differential ages as a function of redshift z, can be written in the form

$H(z) = -\frac{1}{1+z}\frac{dz}{dt};$   (24)

therefore, from eq. (24), H(z) can be found directly once dz/dt is known [30]. Considering the Hubble parameter H and the three-space scale factor a, eq. (6) may be expressed as ρ = (k/2)H². Using the present value of the scale factor normalised to unity, i.e., a = a_0 = 1, we get a relation of the Hubble parameter with the redshift parameter z. If ρ_0 is the density at the present epoch, then the well known density parameter can be written as $\Omega_m = c/(BK^{M} + c)$ [38]. Now using eq. (11), we can express the three-space matter density, where $H_0 = (2\rho_0/k)^{1/2}$ is the present value of the Hubble parameter. Eq. (26) gives the evolution of the Hubble parameter H(z) as a function of the redshift parameter z. The graphical presentation of (26) is shown in Fig. 2(b), where it has been compared with the best fit curve. We draw a best fit curve of the Hubble parameter against redshift in the 1σ confidence region from the data given in Table 1. The apparently small uncertainty of a measurement naturally increases its weight in the χ² statistics. We define the χ² here as

$\chi^2(\theta) = \sum_i \frac{\left[H_{obs}(z_i) - H_{th}(z_i;\theta)\right]^2}{\sigma_H^2(z_i)},$

where H_obs is the observed Hubble parameter at z_i and H_th is the corresponding theoretical Hubble parameter given by eq. (26). Also, σ_H(z_i) denotes the uncertainty of the i-th data point in the sample, and θ denotes the model parameters. In this work we have used the latest observational H(z) dataset, consisting of 31 data points in the redshift range 0.07 ≤ z ≤ 1.965, larger than the redshift range covered by the type Ia supernovae. It should be noted that the confidence levels 1σ (68.3%), 2σ (95.4%) and 3σ (99.7%) correspond to Δχ² = 2.3, 6.17 and 11.8 respectively, where Δχ² = χ²(θ) − χ²(θ*) and χ²_m = χ²(θ*) is the minimum value of χ². An important quantity used in the data fitting process is the reduced chi-square, χ²_dof = χ²_m/dof, where the subscript dof stands for the degrees of freedom, defined as the difference between the number of observational data points and the number of free parameters; a fit with χ²_dof of order unity is considered acceptable. In a higher dimensional model with d extra dimensions, it is noted that, comparing with the data obtained by the differential age method (CC), the model with Chaplygin gas favours a 5D universe for a given value of m. It should be mentioned that, from eq. (4), in the framework of dimensional reduction the 4-dimensional scale factor increases. But we cannot explain the impact of the compactification of extra dimensions on the present acceleration of the universe.
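A minimal sketch of the χ² minimisation described above is given below. The model function stands in generically for eq. (26), and the few data points shown are placeholders rather than the 31-point CC compilation of Table 1.

```python
# A minimal sketch of the chi^2 minimisation over OHD H(z) data described
# above. The model H(z; theta) and the data arrays are placeholders; the
# actual analysis uses the 31 CC points of Table 1 and eq. (26).
import numpy as np
from scipy.optimize import minimize

# placeholder data: (z_i, H_obs_i, sigma_i) -- not the real compilation
z   = np.array([0.07, 0.40, 0.90, 1.50])
Hob = np.array([69.0, 82.0, 117.0, 160.0])
sig = np.array([19.6, 9.0, 23.0, 34.0])

def H_model(z, H0, Om):
    # illustrative flat LambdaCDM-like form standing in for eq. (26)
    return H0 * np.sqrt(Om * (1 + z)**3 + (1 - Om))

def chi2(theta):
    H0, Om = theta
    return np.sum(((Hob - H_model(z, H0, Om)) / sig)**2)

res = minimize(chi2, x0=[70.0, 0.3], method="Nelder-Mead")
H0_best, Om_best = res.x
print(f"best fit: H0 = {H0_best:.1f}, Om = {Om_best:.3f}, chi2_min = {res.fun:.2f}")
# 1-, 2- and 3-sigma contours follow from Delta chi^2 = 2.3, 6.17, 11.8
```

Scanning χ² over a grid of the model parameters and drawing the Δχ² = 2.3, 6.17, 11.8 contours then yields the confidence regions quoted in the text.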
As pointed out earlier, the key eq. (13) does not admit an explicit solution as a function of time in a known simple form. In this case the variation of cosmological variables such as the scale factor, the flip time, the dependence on extra dimensions, etc., cannot be obtained explicitly. To avoid this difficulty in determining the flip time and other physical features of the cosmology, we adopt an alternative approach in the next section.
An alternative approach:
In the late evolution the universe is big enough that the second term on the right hand side (RHS) of eq. (13) is almost negligible compared to the first term. We know that the Chaplygin gas equation of state describes the evolution only from the dust dominated era to the present accelerating universe. As the 4D scale factor is large, it may not be inappropriate to consider only the first order approximation of the binomial expansion of the RHS of eq. (13), for which an exact solution can be obtained. Neglecting the higher order terms, we determine from eq. (13) the late stage of evolution of the universe, obtaining an equation for ȧ. It is seen that the rate of growth of the 4D scale factor depends on the number of dimensions and is higher as the number of extra dimensions increases. Again, the reduction rate of the extra dimensional scale factor is faster for more dimensions. So it is physically reasonable that the presence of extra dimensions enhances the acceleration. In this context it should be remembered that the observational results favour only one extra dimension in our model. Now using eqs. (6), (7) and (30), we can write the expressions for p and ρ, and the effective equation of state follows, where $w_4(t)$ is a function of time. From eqs. (14) and (33) we get the deceleration parameter

$q = \frac{1 - n\cosh^2 \omega t}{n\cosh^2 \omega t}.$   (34)

Eq. (34) shows that the exponent n determines the evolution of q. A numerical analysis using eq. (34) shows that (i) if n > 1 one gets only acceleration; no flip occurs in this case, but eq. (20) leads to a physically unrealistic matter field for n > 1. (ii) If 0 < n < 2/3, we get early deceleration and late acceleration, so the desirable feature of a flip occurs, in agreement with the observational analysis. Figure 7 shows the variation of q with t for different values of d where the flip occurs. It is seen that the flip occurs earlier for more dimensions.
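Taking eq. (34) in the form reconstructed above, q(t) = (1 − n cosh² ωt)/(n cosh² ωt), the flip time follows in closed form from q = 0. The following short numerical check uses illustrative values of n and ω, not fitted parameters from this work.

```python
# A short numerical check of the flip behaviour of eq. (34) as reconstructed
# above: q(t) = (1 - n cosh^2(w t)) / (n cosh^2(w t)). Values of n and w are
# illustrative, not fitted parameters from the paper.
import numpy as np

def q_of_t(t, n, w):
    ch2 = np.cosh(w * t)**2
    return (1.0 - n * ch2) / (n * ch2)

def t_flip(n, w):
    # q = 0  =>  cosh(w t) = n**-0.5, which requires 0 < n < 1
    return np.arccosh(n**-0.5) / w

w = 1.0
for n in (0.3, 0.5, 0.9):
    print(f"n = {n}: t_flip = {t_flip(n, w):.3f}, q(0) = {q_of_t(0, n, w):+.3f}")
# for n > 1, q(0) < 0 already: acceleration from the start, no flip
```

The check confirms that q(0) = (1 − n)/n is positive (early deceleration) only for n < 1, so the flip exists only in that range, consistent with the discussion above.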
Using eq. (35) we have drawn Figure 8, where the variation of $t_f$ with d is shown. It is seen that the flip time is lower for higher dimensions, i.e., acceleration sets in earlier for higher dimensions.
Summary:
In this paper we present a higher dimensional cosmological model to explain the recent acceleration with a Chaplygin type of gas. The salient features of this model are briefly as follows. Most importantly, depending on the initial conditions, the effective equation of state during late evolution interpolates between ΛCDM and phantom type expansion. In this respect our work recovers the effective equation of state (for a large scale factor) of the analogous work of Guo et al. [20], where a very generalised Chaplygin type of gas is taken. One may mention that our solutions are quite general in nature, because all the well known results of 4D Chaplygin driven cosmology are recovered when d = 0.
While working on any higher dimensional model one always looks for a situation where dimensional reduction takes place and the cosmology eventually becomes a 4D one. It is interesting to point out that our present model satisfies this important criterion for positive m. It should be noted that, with the help of the observational data and following the χ² minimization programme, we find that the ranges of Ω_m and m are (0.1257, 0.3553) and (−0.5862, 0.5660), respectively, in the 1σ confidence region. One may take the value m = 0.54 and the corresponding Ω_m = 0.18, which lie in the 1σ confidence region. The best fit graph drawn from the observational data favours only one extra dimension. That means the Chaplygin gas apparently mimics a 5D world.
To end the section a final remark may be in order. The key eq. (13) being highly nonlinear, one cannot get a solution in closed form, forcing us to look for solutions in the asymptotic regions only. So we cannot explain the evolution of the 4D scale factor, or the reduction of the extra dimensions, etc., in a general way. To compensate for this incompleteness an alternative approach is suggested, in which only the first order terms of the binomial expansion are considered. By this approach we get an explicit time dependent solution for the 4D scale factor a(t) as well as an expression for the extra dimensional scale factor b(t). It is also seen that the rate of dimensional reduction is higher for higher dimensions, so we may conclude that the compactification of the extra dimensions helps the acceleration. We also investigate the dimensional dependence of the deceleration parameter q and the flip time $t_f$; it clearly shows that an early flip occurs for higher dimensions. | 2021-04-27T01:16:20.397Z | 2021-04-25T00:00:00.000 | {
"year": 2021,
"sha1": "5e5cba7779ee76e262aa678194ae290990621ef8",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2104.12169",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5e5cba7779ee76e262aa678194ae290990621ef8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
12864742 | pes2o/s2orc | v3-fos-license | Superscattering of pseudospin-1 wave in photonic lattice
We uncover a superscattering behavior of pseudospin-1 waves scattered from weak scatterers in the subwavelength regime, where the scatterer size is much smaller than the wavelength. The phenomenon manifests itself as unusually strong scattering, characterized by extraordinarily large values of the cross section even for arbitrarily weak scatterer strength. We establish analytically and numerically that the physical origin of superscattering is revival resonances, for which the conventional Born theory breaks down. The phenomenon can be experimentally tested using synthetic photonic systems.
I. INTRODUCTION
In wave scattering, a conventional and well accepted notion is that weak scatterers lead to weak scattering. This can be understood by resorting to the Born approximation. Consider a simple 2D setting where particles are scattered from a circular potential of height V_0 and radius R. In the low energy (long wavelength) regime kR < 1 (with k being the wavevector), the Born approximation holds for a weak potential: (m/ℏ²)|V_0|R² ≪ 1. Likewise, in the high energy (short wavelength) regime characterized by kR > 1, the Born approximation still holds in the weak scattering regime: (m/ℏ²)|V_0|R² ≪ (kR)². In general, whether scattering is weak or strong can be quantified by the scattering cross section. For scalar waves governed by the Schrödinger equation, in the Born regime the scattering cross section can be expressed as a polynomial function of the effective potential strength and size [1]. For spinor waves described by the Dirac equation (e.g., graphene systems), the 2D transport cross section is given by [2] Σ_tr/R ≈ (π²/4)(V_0R)²(kR) (with v_F = 1). In light scattering from spherical dielectric, "optically soft" scatterers with relative refractive index n near unity, i.e., kR|n − 1| ≪ 1, the Born approximation manifests itself as an exact analog of the Rayleigh-Gans approximation [3], which predicts that the scattering cross section behaves as Σ/(πR²) ∼ |n − 1|²(kR)⁴ in the small scatterer size limit kR ≪ 1. In wave scattering, the conventional understanding is then that a weak scatterer leads to a small cross section and, consequently, to weak scattering, and this holds regardless of the nature of the scattered particle/wave, i.e., vector, scalar or spinor.
In this paper, we report a counterintuitive phenomenon that defies the conventional wisdom that a weak scatterer always results in weak scattering. The phenomenon occurs in the scattering of higher spinor waves, such as pseudospin-1 particles, which can arise in experimental synthetic photonic systems whose energy band structure consists of a pair of Dirac cones and a flat band through the conical intersection point [4-11]. Theoretically, pseudospin-1 waves are effectively described by the generalized Dirac-Weyl equation [7,12,13], $H\psi = v_F\,\mathbf{S}\cdot\hat{\mathbf{k}}\,\psi = E\psi$, with $\hat{\mathbf{k}} = -i\nabla$ and S = (S_x, S_y) being the vector of 3 × 3 matrices for spin-1 particles. Investigating the general scattering of pseudospin-1 waves, we find the surprising and counterintuitive phenomenon that extraordinarily strong scattering, or superscattering, can emerge from arbitrarily weak scatterers at sufficiently low energies (i.e., in the deep subwavelength regime). Accompanying this phenomenon is a novel type of resonance that can persist at low energies for weak scatterers. We provide an analytic understanding of the resonance and derive formulas for the resulting cross section, in excellent agreement with results from direct numerical simulations. We also propose experimental verification schemes using photonic systems.
II. RESULTS
We consider the scattering of 2D pseudospin-1 particles from a circularly symmetric scalar potential barrier of height V_0, defined by V(r) = V_0Θ(R − r), where R is the scatterer radius and Θ denotes the Heaviside function. The band structure of pseudospin-1 particles can be illustrated using a 2D photonic lattice for a transverse electromagnetic wave with the electric field along the z-axis. As demonstrated in previous works [4,13], Dirac cones induced by accidental degeneracy can emerge at the center of the Brillouin zone for proper material parameters, about which a three-component structured light wave emerges that is governed by the generalized Dirac-Weyl equation.
We consider the setting of a photonic crystal to illustrate the pseudospin-1 band structure. Figure 1(a) shows the band structure of a lattice with a triangular configuration constructed from cylindrical alumina rods in air, where the rod radius is r_0 = 0.203a (a being the lattice constant) and the rod dielectric constant is 8.8 [4]. We obtain an accidental-degeneracy induced Dirac point at the center of the 1st Brillouin zone at the finite frequency of 0.6357 · 2π · c/r_0.
Following a general lattice scaling scheme for the photonic gate potential [13], we obtain a sketch of the cross section of the lattice in the plane, as shown in Fig. 1(b), where the thick black bar denotes an applied exciter. For our scattering problem, the band structures outside and inside the scatterer are shown in Fig. 1(c). The scattering problem can be treated analytically using the Dirac-Weyl equation (see Appendix A for a detailed derivation of the various scattering formulas). To demonstrate the phenomenon of superscattering, we use the transport cross section Σ_tr to characterize the scattering dynamics. (It should be noted that the total cross section Σ is another usual quantity for characterizing superscattering, with results consistent with those from the transport cross section; see Appendix B for details.) The transport cross section is defined in terms of the scattering coefficients A_l, which can be obtained through the standard method of partial wave decomposition [1]. For convenience, we define ρ ≡ V_0R and x ≡ kR. At low energies, i.e., x ≪ 1, scattering is dominated by the lowest angular momentum channels l = 0, ±1. To reveal the relativistic quantum nature of the scattering process, we focus on the under-barrier scattering regime, i.e., x < ρ, so that manifestations of phenomena such as Klein tunneling are pronounced. We define two subregimes of low energy scattering: x < 1 < ρ and x < ρ < 1, where the former corresponds to a scatterer with a large scattering potential. The weak scatterer subregime, i.e., x < ρ < 1, is the one in which the counterintuitive phenomenon of superscattering arises. Specifically, for x < ρ < 1, we obtain the leading coefficients, with P_0 = πx, with ln γ_E ≈ 0.577··· being Euler's constant, and with P_1, Q_1 given by the corresponding relations (see Appendix A). We first show that, in our scattering system, all the conventional resonances disappear in the weak scatterer regime (ρ < 1). To make the argument, we examine the case of a scatterer with a large scattering potential, ρ > 1, where the transport cross section as a function of x and ρ is given by eq. (4) (see Appendix A for a detailed derivation), with m, n = 1, 2, 3, ··· and ρ_{0,m}, ρ_{1,n} denoting the m-th and n-th zeros of the Bessel functions J_0 and J_1, respectively. The resonances occur about ρ ≈ ρ_{0,m}, ρ_{1,n} for x ≪ 1, and are thus well separated, with a minimum position at ρ ≈ 2.4. This indicates that the locations of such resonances satisfy ρ > 2, which is not possible in the small scattering potential regime ρ < 1. In conventional scattering systems, where the Born approximation applies, no additional resonances emerge in the small scattering potential regime ρ < 1.
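The positions of the conventional resonances quoted above can be checked directly, since they sit near the zeros of J₀ and J₁; a minimal sketch using SciPy:

```python
# A quick check of the conventional resonance positions quoted above:
# they sit near the zeros of the Bessel functions J0 and J1, whose
# smallest value is rho_{0,1} ~ 2.405, hence no resonance for rho < 1.
from scipy.special import jn_zeros

rho_0 = jn_zeros(0, 3)   # first three zeros of J0
rho_1 = jn_zeros(1, 3)   # first three zeros of J1
print("J0 zeros:", rho_0)  # ~ [ 2.405  5.520  8.654]
print("J1 zeros:", rho_1)  # ~ [ 3.832  7.016 10.174]
print("minimum resonance position:", min(rho_0.min(), rho_1.min()))
```

The smallest zero, ρ ≈ 2.405, confirms that the conventional resonances cannot intrude into the weak scatterer regime ρ < 1.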
For sufficiently weak scatterer strength (ρ ≪ 1), the prefactor in (3) is off-resonance. The remaining factor characterizes the emergence of a new type of (unconventional) revival resonance at Q_1 + 4 = 0, which is unexpected, as the scatterer is sufficiently weak that, according to the conventional Born theory, no scattering resonances should be possible. The resonant condition can be obtained explicitly from this constraint: we obtain ρ = 2x for ρ ≪ 1. The surprising feature of the revival resonance is that it persists no matter how weak the scatterer. As a result, superscattering can occur for arbitrarily weak scatterer strength. One example is shown in Fig. 2(a), where good agreement between the theoretical prediction and the numerical simulation is obtained. For comparison, results for the corresponding pseudospin-1/2 wave scattering system, governed by the conventional massless Dirac equation, are shown in Fig. 2(b), where scattering essentially diminishes for near zero scatterer strength, indicating the complete absence of superscattering. To characterize superscattering in a more quantitative manner, we obtain from Eq. (3) the associated resonance width, Γ ∼ πρ³/8, together with a closed approximate form of the cross section at resonance (Eqs. (5) and (6)). A striking and counterintuitive consequence of (6) is that the weaker the scatterer (ρ ↓), the larger the resulting maximum cross section ((Σ_tr/R)_max ↑). This can be explained by noting that, due to the revival resonant scattering, an arbitrarily large cross section can be achieved for a sufficiently weak scatterer with radius R much smaller than the incident wavelength 2π/k (i.e., in the deep-subwavelength regime kR ≪ 1). In contrast, for a system hosting a pseudospin-1/2 wave under the same condition x < ρ ≪ 1, where the Born approximation applies [2], the maximum transport cross section (Eq. (7)) is of the Born form quoted in the Introduction, Σ_tr/R ≈ (π²/4)ρ²x. Compared with pseudospin-1/2 particles, the scattering behavior revealed by Eq. (6) for pseudospin-1 particles is extraordinary and represents a fundamentally new phenomenon which, to our knowledge, has not been reported for any wave (especially matter wave) system. The analytic predictions [Eqs. (6) and (7)] have been validated numerically, as shown in Fig. 3. Further insight into superscattering can be obtained by examining the underlying wavefunction patterns, as shown in Fig. 4. In particular, Figs. 4(a,c) and 4(b,d) show the distributions of the real part of one component of the spinor wavefunction (Ψ_2) for pseudospin-1/2 and pseudospin-1 particles, respectively, with parameters V_0R = 0.5 and kR = 0.2485. The patterns in Figs. 4(b,d) correspond to the revival resonance indicated by the pink arrow in Fig. 3(b). We see that, even for such a weak scatterer, the incident pseudospin-1 wave of a much larger wavelength λ = 2π/k ∼ 25R is effectively blocked via trapping around the scatterer boundary, resulting in strong scattering. In contrast, for the conventional pseudospin-1/2 wave system, the weak scatterer results in only weak scattering, as shown in Figs. 4(a,c), which is anticipated from the Born theory.
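The contrast between the two regimes at the revival resonance x = ρ/2 can be sketched numerically. The pseudospin-1/2 curve uses the Born form Σ_tr/R ≈ (π²/4)ρ²x quoted in the Introduction; for pseudospin-1 only the stated inverse proportionality (Σ_tr/R)_max ∝ 1/ρ is shown, with an arbitrary normalization, since the prefactor of Eq. (6) is not reproduced here.

```python
# Sketch of the opposite scalings at the revival resonance x = rho/2:
# pseudospin-1/2 (Born): Sigma_tr/R ~ (pi^2/4) rho^2 x  -> vanishes as rho -> 0
# pseudospin-1 (Eq. (6)): (Sigma_tr/R)_max ~ C / rho    -> grows as rho -> 0
# C = 1 is an arbitrary normalisation; the true prefactor is in Eq. (6).
import numpy as np

rho = np.logspace(-3, -0.5, 6)
x = rho / 2.0                               # revival resonance condition
born_half = (np.pi**2 / 4.0) * rho**2 * x   # pseudospin-1/2 Born estimate
spin1_max = 1.0 / rho                       # inverse-proportional trend only

for r, b, s in zip(rho, born_half, spin1_max):
    print(f"rho = {r:.4f}:  spin-1/2 ~ {b:.2e}   spin-1 (arb. units) ~ {s:.2e}")
```

The printed trend makes the point of the text explicit: as ρ decreases, the Born cross section collapses as ρ³ while the pseudospin-1 resonant cross section grows.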
III. EXPERIMENTAL TEST WITH PHOTONIC SYSTEMS
It is possible to test superscattering in experimental optical systems. The recent realization of photonic Lieb lattices consisting of evanescently coupled optical waveguides, implemented by the femtosecond laser-writing technique [7-10], makes them suitable for studying the physics of pseudospin-1 Dirac cones. For example, in the tight-binding framework, for a homogeneous array of identical waveguides with the same propagation constant β_0, the Hamiltonian in momentum space can be written down directly; in the low-energy regime (measured from β_0), this Hamiltonian reduces to the generalized Dirac-Weyl Hamiltonian for spin-1 particles, with β_0 analogous to a constant electronic gate (voltage) potential. As such, the superscattering phenomenon uncovered in our work can in principle be tested experimentally in photonic Lieb lattice systems through a particular design of the refractive index profile across the lattice to realize the scattering configuration; a tight-binding sketch of the Lieb-lattice Bloch Hamiltonian is given below.
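The following is a minimal sketch of a Lieb-lattice Bloch Hamiltonian (three sites per unit cell, nearest-neighbour hopping t); it is the generic textbook form, not the specific waveguide parameters of the cited experiments, and the on-site offset β₀ is set to zero.

```python
# A minimal tight-binding sketch of the Lieb-lattice Bloch Hamiltonian
# mentioned above (three sites per cell, nearest-neighbour hopping t).
# Generic illustration; beta_0 (an overall on-site offset) is set to 0.
import numpy as np

t = 1.0

def H_lieb(kx, ky):
    fx = 1.0 + np.exp(-1j * kx)     # A <-> B bond phases along x
    fy = 1.0 + np.exp(-1j * ky)     # A <-> C bond phases along y
    return -t * np.array([[0, fx, fy],
                          [np.conj(fx), 0, 0],
                          [np.conj(fy), 0, 0]])

for kx, ky in [(0.0, 0.0), (np.pi / 2, np.pi / 2), (np.pi, np.pi)]:
    E = np.linalg.eigvalsh(H_lieb(kx, ky))
    print(f"k = ({kx:.2f},{ky:.2f}): E = {np.round(E, 3)}")
# One band stays pinned at E = 0 (the flat band); at k = (pi, pi) all
# three bands meet, forming the Dirac cone + flat-band crossing.
```

Diagonalizing over the Brillouin zone reproduces the characteristic spectrum: a dispersionless band at zero energy crossing a pair of conical bands at a single point, which is the pseudospin-1 structure assumed in the scattering analysis.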
Loading ultracold atoms into an optical Lieb lattice fabricated by interfering counter-propagating laser beams [11] provides another versatile platform to test our findings, where appropriate holographic masks can be used to implement the desired scattering potential barrier [14,15]. Synthetic photonic crystal based 2D pseudospin-1 wave systems are also promising for feasible experimental validation. For example, it has been demonstrated experimentally [4-6] and theoretically [13,16] that a pseudospin-1 wave system can be realized in 2D dielectric photonic crystals via the principle of accidental degeneracy. Implementation of the scalar type of potential can be achieved by manipulating the length scale of the photonic crystals. From a recent work on "on-chip zero-index metamaterial" design [6] based on such a system, we note that the phenomenon of superscattering uncovered in this paper may be relevant to novel on-chip superscatterer fabrication, which is not possible for conventional wave systems.
IV. CONCLUSION AND DISCUSSION
In conclusion, we have uncovered a superscattering phenomenon in a class of 2D wave systems that host massless pseudospin-1 particles described by the Dirac-Weyl equation, where extraordinarily strong scattering (characterized by an unusually large cross section) occurs for an arbitrarily weak scatterer in the low energy regime. Physically, superscattering can be attributed to the emergence of persistent revival resonances for scatterers of weak strength, to which the cross section is inversely proportional. These unusual features defy the prediction of the Born theory, which is applicable only to conventional electronic or optical scattering systems. Superscattering of pseudospin-1 waves thus represents a fundamentally new scattering scenario, and it is possible to conduct experimental tests using synthetic photonic systems.
An important issue is whether the superscattering uncovered in this paper is due to the presence of a flat band, which implies an infinite density of states. Our answer is negative, for the following reasons. Note that, measured from the three-band intersection point, the energy of the (dispersionless) flat band states is zero outside and V_0 inside the scatterer, but for the two dispersive Dirac bands the energy is finite outside the scatterer and not equal to V_0 inside. For the elastic scattering considered in our work, the incident energy outside the scatterer is finite and less than V_0 as well. As a result, only the states belonging to the conical dispersion bands are available both inside and outside the scatterer, and these are therefore responsible for the superscattering phenomenon. Indeed, as demonstrated, superscattering is due to revival resonant scattering of states belonging to the conical dispersion bands, which persists in the regime of arbitrarily weak scatterer strength. From another angle, if superscattering were due to the flat band, the phenomenon would arise in the conventional resonant scattering regime V_0R > 1, where it has never been observed.
While the flat band itself is not directly relevant to the superscattering behavior, its presence makes the structure of the relevant states belonging to the conical bands different from those, e.g., in a two-band Dirac cone system, giving rise to boundary conditions that permit discontinuities in the corresponding intensity distribution and tangential current at the interface. Interestingly, surface plasmon modes [cf. Fig. 4(d)] are excited at the interface when revival resonant scattering occurs; these are strongly localized and can be excited for arbitrarily weak scatterer strength, leading to superscattering in the deep sub-wavelength regime. These modes are created by the particular spinor structure of the photon states, which can be implemented by engineering light propagation in periodically modulated/arranged conventional dielectric materials (e.g., alumina) rather than within the material itself. Our finding of the superscattering phenomenon is thus striking and represents a new scattering capability that goes beyond the Rayleigh-Gans limit or, equivalently, the one defined by the Born approximation.
With respect to potential applications of the findings of this paper, it is worth emphasizing that the phenomenon of superscattering represents a novel way of controlling light behavior beyond what is associated with the conventional scattering scenario because, in our system [e.g., Fig. 1(b)], light is structured into three-component spinor states and behaves as a relativistic spin-1 wave in the underlying photonic lattice. There have been extensive recent experimental works demonstrating that such lattice systems can actually be realized. Our theoretical prediction is based on a general setting that effectively characterizes the low-energy physics underlying the photonic lattices.
where V(r) = V_0Θ(R − r), with V_0 being the potential height. Generally, far away from the scattering center (i.e., r ≫ R), for an incoming flux along the x direction, the spinor wavefunction with band index s takes an asymptotic form combining the incident plane wave and an outgoing circular wave, where the vector |k, s⟩ is the spinor plane wave amplitude, with the wavevectors k_0 = (k, 0) and k_θ = k(cos θ, sin θ) defining the directions of incidence and scattering, respectively.
In our case, for the conical dispersion bands s = ±, we obtain the corresponding spinor plane wave amplitudes. With the definition of the current operator $\hat{\mathbf{J}} = (1/\hbar)\nabla_{\mathbf{k}} H(\mathbf{k}) = v_F(S_x, S_y)$, we obtain the scattered current, while the incident current is $J_{in} = \langle k_0, s|\hat{\mathbf{J}}\cdot \mathbf{k}_0/k|k_0, s\rangle = v_F$. The differential cross section is thus defined in terms of the scattering amplitude f(θ). The other relevant cross sections can then be calculated by definition, i.e., the total cross section (TCS) and the transport cross section (TrCS). In order to work out the exact expression for f(θ), we expand the wavefunctions inside and outside the scatterer as superpositions of partial waves: one expansion for r > R (outside the scatterer) and one for r < R (inside the scatterer), where $\psi^{>}_{l,s}$ and $\psi^{<}_{l,s}$ are the partial waves defined in terms of the cylindrical wave eigenfunctions of the reduced Hamiltonian H, which in polar coordinates r = (r, θ) is written in terms of a compact operator together with V(r) = V_0Θ(R − r), the circularly symmetric scalar-type scattering potential. It is evident that $[H, \hat{J}_z] = 0$ with the definition $\hat{J}_z = -i\hbar\partial_\theta + \hbar S_z$. As such, H acting on the spinor eigenfunctions of $\hat{J}_z$ yields radial equations, where the wavefunctions $\varphi_l$ simultaneously satisfy $\hat{J}_z\varphi_l = l\hbar\varphi_l$, with l an integer. After some standard derivation we obtain, for the conical bands (i.e., s = ±), the radial functions, where $q = |E - V|/\hbar v_F$ and s = Sign(E − V). Outside the scatterer the radial functions involve the Hankel functions, while inside the scatterer (r < R) the partial waves are built from the transmitted cylindrical waves; here $A_l$ and $B_l$ denote the elastic partial wave reflection and transmission coefficients in the l-th angular channel, respectively. In order to obtain explicit expressions for the partial wave coefficients, the relevant boundary conditions (BCs) are needed. Boundary conditions. Recalling the commutation relation $[\hat{J}_z, H] = 0$, we generally define a spinor wavefunction in polar coordinates that is a simultaneous eigenfunction of $\hat{J}_z$. Substituting Eq. (A13) into the wave equation (A14) and eliminating the angular components finally yields a one-dimensional, first-order ordinary differential radial equation. Directly integrating this radial equation over a small interval r ∈ [R − η, R + η] around the interface at r = R, and then taking the limit η → 0, we obtain the continuity conditions, provided that the potential V(r) and the radial function components $R_{1,2,3}(r)$ are all finite. Reformulating these continuity conditions in terms of the corresponding wavefunction yields the BCs that we seek (A17). Far-field solutions: r ≫ R. Using the asymptotic form of the Hankel function, $H^{(1)}_l(kr) \sim \sqrt{2/\pi kr}\, e^{i(kr - l\pi/2 - \pi/4)}$, and evaluating the outside wavefunction given in Eq. (A8a) at r ≫ R, we arrive at

$|\Psi_s(\mathbf{r})\rangle = e^{ikx}|k_0, s\rangle + \left(-i\sqrt{2/\pi k}\,\sum_l A_l e^{il\theta}\right)\frac{e^{ikr}}{\sqrt{-ir}}\,|k_\theta, s\rangle.$
(A18) It is evident from Eq. (A18) and Eq. (A2) that the scattering amplitude f(θ) can be read off by comparison. Imposing the relevant BCs given in Eq. (A17) on the total wavefunctions on both sides of the interface r = R, we obtain the linear relations determining the partial wave coefficients, with a coefficient matrix X. As such, the resulting probability density $\rho = \langle\Psi_s(\mathbf{r})|\Psi_s(\mathbf{r})\rangle$ and local current density $\mathbf{j} = \langle\Psi_s(\mathbf{r})|\hat{\mathbf{J}}|\Psi_s(\mathbf{r})\rangle$ can be calculated accordingly. In addition, the relevant scattering amplitude f(θ) can be obtained exactly according to Eq. (A19), and hence the related cross sections given in Eqs. (A6) and (A7).
Derivation of Eq. (4). By definition, the transport cross section can be obtained with A_l being the reflection coefficients given in Eq. (A21). For x ≪ 1, scattering is dominated by the lowest angular momentum channels l = 0, ±1. As a result, the transport cross section can be approximated by keeping these channels, provided that the scattering potential is large, ρ > 1. Substituting Eqs. (A26a) and (A26b) into Eq. (A25), we obtain Eq. (4). | 2016-12-22T22:14:39.000Z | 2016-12-22T00:00:00.000 | {
"year": 2016,
"sha1": "efd89fa48cb506c3169d013d763440e71ec7cb43",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevA.95.012119",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "efd89fa48cb506c3169d013d763440e71ec7cb43",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
246665626 | pes2o/s2orc | v3-fos-license | Visual Sensor Image Analysis and Massage Techniques to Prevent and Treat Common Injuries of Sports Dance Practitioners
In recent years, the performance of sports dance in China has become better and better. Naturally, the technical requirements of this dance form are getting higher and higher, and the amount and intensity of training have also increased, which has led to increasing injuries in sports dance. This article analyzes and studies the common injuries of sports dance practitioners and their prevention, based on visual sensor images. It aims to provide a reference basis regarding athletes' injuries, so that dance practitioners and coaches can better master the injury-related rules and prevention measures in sports dance training and teaching and thereby reduce the injury rate. This article puts forward visual sensor image technology and applies it to the prevention of and research on common injuries in sports dance. At the same time, it analyzes the causes of sports dance practitioners' injuries and seeks economical and affordable massage techniques for prevention and treatment, providing protection for dance practitioners. The experimental results in this article show that, in the Tuina group, 15 subjects were cured, 41 subjects were markedly effective, 13 subjects improved, and 6 subjects did not heal; the total effective rate was 92%.
1. Introduction 1.1. Background. Sports dance is what people usually call the national standard dance. It combines dance, art, and competitive sports. As an art form, it reflects both unique artistic value and excellent body function. Whether regarded as a dance art or as a sport, almost no other activity integrates entertainment, fitness, performance, and competition as completely as sports dance does. With its distinctive postures, passionate music, and gorgeous costumes, it is loved by many people. In competitive training, sports injuries reduce the athletic ability of sports dance practitioners, affect or hinder training plans, and greatly reduce training efficiency. If appropriate and effective treatment measures are not taken, injuries become a "time bomb" in the sports careers of dancers. In sports dance, the negative impact of sports injuries has frustrated many performers and consumed their original enthusiasm for training; many talented sports dancers have had to change careers due to injuries and illnesses, which is extremely detrimental to the continuous progress of Chinese sports dance and the sustainable development of the sports dance industry. Therefore, it is necessary to find the causes of injury and preventive safeguards in time.
Related Work.
The vision sensor is a specialized vision system with image acquisition, processing, and data transmission capabilities. It has become an indispensable perception method for intelligent industrial robots. Solomon et al. focus on the study of injury treatment and prevention for young dancers. Their work first discusses the epidemiology of young dancers' injuries and then describes the screening procedure with sample screening protocols. It then covers physical therapy and resistance training, as well as common diseases and injuries of the spine, hips, and knee and ankle complexes. The conclusion is that preventing injuries in young dancers is important for addressing the physical and psychological challenges faced by young performers [1]. Their research mainly concerns injury treatment and prevention for young dancers, but the application of visual sensors is insufficient. Geyt et al. mainly evaluate the individual subjective experience (ISE) of recipients of cervical spine manipulation and analyze the influence of kinematics, cavitation, and practitioner qualification on that experience. The method used was to manipulate 20 asymptomatic volunteers bilaterally at C3 and C5. A 3-dimensional electric goniometer was used to record kinematics, and ISE data were collected through a questionnaire exploring the subjects' experience of the manipulation in terms of touch, relaxation, task perception, and therapist handling. The conclusion is that a better understanding of the individual subjective experience of cervical spine manipulation can increase confidence, improve the doctor-patient relationship, and provide practitioners with further treatment perspectives [2]. Venkatesan and Parthiban proposed that medical image segmentation is a key step in medical image analysis. The methods used are fuzzy C-means (FCM) optimized by particle swarm optimization (PSO) and quantum PSO (QPSO), together with kernelized variants (KFCM-PSO and KFCM-QPSO), to extract the ROI from medical images. The experimental results show that the proposed hybrid FCM and KFCM with PSO and QPSO have good performance and good convergence speed [3]. Moeys et al. introduced the vision sensor named SDAVIS192, fabricated in a 180-nanometer Towerjazz CIS process. By combining signal-to-noise ratio (SNR) measurements, the characteristics of the DVS event detection threshold are determined, and the results of black-and-white and RGBW color filter arrays are compared [4]. Luigi mainly studies the prevalence and risk of injuries at different competition and sports levels in southeastern France. The method used was to collect data on adolescents (n = 1849; 14-19 years old) in French schools in 2015 and 2017. The result is that, in almost all sports activities, high-level athletes have a higher incidence of injuries than low-level athletes [5]. Zhang and Yin pointed out that, with high-intensity exercise and training, athletes are increasingly likely to be injured; the common injury sites of sports dancers are the waist, knees, shoulders, and ankles. In response to this situation, the authors put forward some suggestions, hoping to effectively alleviate the common sports injuries of aerobics athletes [6]. Although the views put forward by these scholars are accurate, there are still deficiencies in the research process.
Innovation. The innovation of this article is as follows:
(1) The first is the innovation of the topic selection angle. This article takes a new perspective in its topic selection: at present, there is little research that integrates visual sensors, images, massage techniques, sports dance practitioners, common injuries, prevention, and treatment, so the topic is of exploratory significance. (2) The second is the innovation of research methods. This article puts forward visual sensor image technology and massage-related techniques, which have theoretical value and exploratory significance. (3) The third is the innovation of project practice. The results obtained in this article provide a reference basis regarding athletes' injuries, making it convenient for dance practitioners and coaches to better grasp the rules and prevention of injuries in sports dance training and teaching.
Related Technologies
2.1. The Architecture of the Vision Sensor System. The vision sensor system has the following functions. The first is image acquisition and processing, which refers to acquiring raw image data through the image sensor and, on this basis, using various image algorithms, such as image registration [7], contour extraction [8], and contour fitting, to realize vision application functions such as guided positioning, size measurement [9], and barcode recognition [10]. Digital image acquisition and processing mainly involve three types of depth and color mapping relationships: true color, false color, and color matched images. The second is data interaction, which refers to the integration of several distributed application information systems: through remote communication methods such as RS232, TCP/IP, and CAN, command control, parameter configuration, job file issuance, tool processing results, raw image data, and operating status information of the vision sensor can be exchanged with external devices such as robots, HMIs, PLCs, and industrial computers. The third is graphical programming [11,12], which refers to using graphical programming instead of text programming in the integrated development environment for visual application development; each visual application tool is an independent module, and tools can be combined at will into engineering tasks, which are issued to the vision sensor as job files. The last is multitask scheduling [13], which means that the vision sensor must simultaneously perform multiple tasks such as device connection, instruction reception, image acquisition, image processing, and interrupt response; these tasks have different priorities and are scheduled based on priority to meet real-time requirements (a minimal scheduling sketch follows the architecture description below). The overall architecture of the vision sensor system is shown in Figure 1.
From top to bottom, the entire system can be divided into a system layer [14], a control layer [15], and a device layer [16]. Among them, the communication interconnection between the system layer and the control layer is based on various communication interfaces, while the control layer and the device layer are directly connected to the hardware through the terminal board [17]. The system layer exposes the multilevel state of the various elements of the system within the system structure; the control layer organizes the different control levels according to the increasing complexity of the main control system; and the device layer comprises the hardware (image sensor, terminal boards, and communication interfaces) that the control layer drives directly. The hardware architecture of the vision system based on this vision sensor is shown in Figure 2.
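To make the priority-based multitask scheduling described above concrete, here is a minimal sketch using Python's standard library; the task names and priority values are illustrative assumptions, since the firmware's actual task set is not specified.

```python
# A minimal sketch of priority-based multitask scheduling as described
# above, using Python's standard queue module. Task names and priorities
# are illustrative assumptions, not the sensor firmware's actual tasks.
import queue

tasks = queue.PriorityQueue()       # lower number = higher priority

# (priority, task name): interrupt response first, housekeeping last
tasks.put((0, "respond to interrupt"))
tasks.put((1, "receive command / parameter configuration"))
tasks.put((2, "acquire image frame"))
tasks.put((3, "process image (registration, contour extraction)"))
tasks.put((4, "report status over RS232/TCP-IP/CAN"))

while not tasks.empty():
    prio, name = tasks.get()
    print(f"[prio {prio}] running: {name}")
```

In a real sensor this loop would run continuously, with new tasks enqueued by interrupts and communication events, so that time-critical work always preempts background reporting.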
Image Fusion Based on Wavelet Transform.
Wavelet transform is a relatively new transform analysis method. It inherits and develops the localization idea of the short-time Fourier transform and, at the same time, overcomes the shortcoming that the window size does not change with frequency, providing a "time-frequency" window that changes with frequency. It is an ideal tool for signal time-frequency analysis and processing. Because the wavelet transform has excellent time-frequency localization characteristics [18], methods for multiresolution [19] fusion processing of images are becoming increasingly complete with the help of this mathematical tool [20].
(1) Continuous wavelet transform. Let α(x) be a square integrable function, that is, α(x) ∈ W²(E); its Fourier transform is $\hat{\alpha}(g)$. If $\hat{\alpha}(g)$ satisfies the admissibility condition

$\int_{E} \frac{|\hat{\alpha}(g)|^{2}}{|g|}\, dg < \infty,$

then α(x) is called a "basic wavelet", and after α(x) is dilated and translated, a wavelet sequence can be obtained.
For the continuous case, the wavelet sequence is

$\alpha_{a,b}(x) = |a|^{-1/2}\, \alpha\!\left(\frac{x-b}{a}\right), \quad a \neq 0,$

where a is the dilation (scale) factor, which stretches or compresses the basic wavelet, and b is the translation factor, which shifts it along the axis [21]. In the continuous case, the continuous wavelet transform of any function u(b) ∈ Q²(T) is defined as

$W_u(a, b) = |a|^{-1/2} \int_{T} u(x)\, \overline{\alpha\!\left(\frac{x-b}{a}\right)}\, dx,$

and the inverse transformation reconstructs u(x) from $W_u(a, b)$. (2) Discrete wavelet transform of the image. The discrete wavelet transform of an image generally uses the two-dimensional Mallat algorithm [22], which can be expressed as a cascade of low-pass and high-pass filtering along rows and columns followed by downsampling, decomposing the image into one approximation subband and three detail subbands per level (a code sketch follows the reconstruction step below).
The reconstruction of the image can be expressed through the corresponding inverse transform. In the formulas, k, f are the low-pass analysis and synthesis filters and h, t are the high-pass analysis and synthesis filters; the synthesis filters mirror the analysis filters and rebuild the image from the decomposed subbands.
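As a concrete illustration of the decomposition, fusion, and reconstruction described above, the following is a minimal sketch using the PyWavelets package; the Haar wavelet, the random placeholder images, and the max-absolute-detail fusion rule are illustrative assumptions, not necessarily the exact scheme of this paper.

```python
# A minimal sketch of wavelet-based image fusion with the 2D Mallat
# transform, using the PyWavelets package. The Haar wavelet and the
# max-absolute-coefficient fusion rule are common illustrative choices.
import numpy as np
import pywt

rng = np.random.default_rng(0)
img_a = rng.random((64, 64))        # placeholder source images
img_b = rng.random((64, 64))

# one-level 2D DWT: approximation cA and detail subbands (cH, cV, cD)
cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, "haar")
cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, "haar")

fuse = lambda u, v: np.where(np.abs(u) >= np.abs(v), u, v)  # keep larger detail
cA = 0.5 * (cA_a + cA_b)            # average the low-frequency content
fused = pywt.idwt2((cA, (fuse(cH_a, cH_b), fuse(cV_a, cV_b),
                         fuse(cD_a, cD_b))), "haar")
print(fused.shape)                  # (64, 64)
```

Averaging the approximation subband preserves overall brightness, while keeping the larger-magnitude detail coefficients retains the sharper edges of either source, which is the usual rationale for this fusion rule.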
Image Fusion Algorithm
(1) Evaluation method based on the statistical characteristics of a single image [23]. Suppose the fused image is Q(m, n), the image size is n × n, and the total number of gray levels is P. The average image value reflects the average brightness of the entire image, i.e., the average gray value of the image, which can be defined as

$\bar{Q} = \frac{1}{n \times n} \sum_{m} \sum_{n} Q(m, n),$

where Q(m, n) is the gray value of the pixel at position (m, n).
The information entropy of an image is an important indicator for measuring the richness of the information contained in the image. Information entropy is a rather abstract concept in mathematics; here it may be understood through the probability of occurrence of each piece of specific information, which represents the value of that information. If $p_i$ denotes the proportion of pixels with gray level i, the entropy [24] is

$H = -\sum_{i=0}^{P-1} p_i \log_2 p_i.$

The information entropy indicates the amount of information contained in the image, and the larger the information entropy, the richer the information of the source images extracted into the fusion image [25].
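The statistics above translate directly into code. The following is a minimal sketch with NumPy, using a random placeholder image; it implements the standard definitions quoted above rather than any implementation from this paper.

```python
# A minimal sketch of the single-image statistics discussed above: the
# average gray value and the information entropy, following the standard
# definitions quoted in the text. The test image is a random placeholder.
import numpy as np

def mean_brightness(img):
    # average gray value over the n x n image
    return img.mean()

def entropy(img, levels=256):
    # p_i = proportion of pixels at gray level i; H = -sum p_i log2 p_i
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                      # skip empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(1)
Q = rng.integers(0, 256, (64, 64))    # fused-image placeholder
print(f"mean = {mean_brightness(Q):.2f}, entropy = {entropy(Q):.3f} bits")
```

Higher entropy values indicate that more of the source information has been carried into the fused image, matching the interpretation given above.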
The standard deviation reflects the dispersion of the image gray levels relative to the average gray level, defined as

$\sigma = \sqrt{\frac{1}{n \times n} \sum_{m} \sum_{n} \left[Q(m, n) - \bar{Q}\right]^{2}}.$

The average gradient reflects the sharpness of the image and is computed from the mean magnitude of the local gray-level gradients. The spatial frequency is used to measure the overall activity of the image [26] and combines the row frequency RF and column frequency CF as $SF = \sqrt{RF^2 + CF^2}$. (2) Evaluation method based on error sensitivity. Suppose the fused image and the standard reference image are W and Y, respectively, with pixel values W(m, n) and Y(m, n). The root mean square error is defined as [27,28]

$RMSE = \sqrt{\frac{1}{n \times n} \sum_{m} \sum_{n} \left[W(m, n) - Y(m, n)\right]^{2}}.$

The degree of spectral distortion reflects how far the spectrum of the fused image departs from that of the reference. The relative average spectral error (RASE) indicates how well the fused image retains the spectral information of the source multispectral image. The correlation coefficient (CC) represents the similarity between the fused image and the source image, computed as the normalized covariance of the two images. The data form of image fusion is an image that combines the brightness, color, temperature, distance, and other characteristics of the scene. It can be seen from Table 1 that, in a survey of 150 sports dancers, the waist and the knee were injured the most, each with an injury rate of 30%; next came the shoulder, with an injury rate of 17%; then the neck, with 13%; followed by the toes, with 9%; the ankles, with 7%; the back, hip, and calf, with 6% each; the head and upper limbs, with 5% each; and finally the hip, with 3%. The injury rate of each part of the sports dance athlete's body is shown in Figure 3.
Regardless of the event, the waist is the core force-bearing area for athletes and the most frequently injured area. In addition, there are many descending and squatting movements in dance, and the knee is an important weight-bearing joint of the lower limbs. In the standard dances, with the body's weight bearing down through the many descents and squats, the partial load on the knee joint is even greater, and some athletes cannot control the knee joint well during training or before competition. Insufficient knee preparation activity can greatly increase the knee injury rate. Statistics on the course of sports injuries in various parts of the body are shown in Table 2.
It can be seen from Table 2 that, in the five body parts of the back, waist, hip, knee, and calf, chronic injuries are clearly more common than acute injuries. Sports dance requires very standardized and delicate movements in which these parts are indispensable, so their local load in dance is relatively large; they fatigue easily, and chronic injuries easily result. The statistics of sports injuries in various parts of the body are shown in Table 3.
It can be seen from Table 3 that most athletes did not completely stop training after being injured.
Gender Differences in Prevalence and Incidence.
The popularity of Latin dance is higher than that of modern dance, which is closely related to the styles of the dances. The style of Latin dance is warm, unrestrained, bold, and rough. Dancers sometimes change direction and angle radially within a fixed range, and sometimes dance together like vibrating piano strings; the rhythm is cheerful and energetic, and the hips and the upper and lower limbs move quickly, dazzling the audience. The modern dance style is solemn, elegant, lovely, and luxurious, and the dance steps are more standardized and rigorous; dance partners maintain physical contact to complete various movements, placing higher demands on the knees and ankles. Therefore, the higher exercise intensity, faster movement frequency, and distinctive style characteristics make Latin dance more likely to cause injuries than modern dance, as shown in Table 4 and Figure 4. The prevalence and incidence differ between genders. Latin dance has the highest prevalence and incidence, followed by the ten-dance event. The prevalence of Latin dance is higher than that of modern dance because the ladies in Latin dance wear high heels while performing all kinds of high-intensity and difficult movements, and the hip swings and rotations set to fast-paced music demand endurance, speed, and flexibility. In addition, because females are on average physiologically weaker than males, it is not difficult to explain why the incidence among female Latin dancers is higher than in other events, as shown in Figure 5.
Men and women express the characteristics of the different dance styles through different body postures and technical movements, and sports injuries often occur in the process. During the past year, there was no significant difference between men and women in the distribution of injury types; muscle strains and ligament strains were the majority, as shown in Table 5.
Prevention and Treatment of Injuries by Manipulation.
Tuina (Chinese therapeutic massage) is very effective for the injuries caused by sports dance, and it is a very economical treatment. Taking the knee joint as an example, the Tuina steps for treating the knee are as follows. In the first step, the patient adopts a straight posture; the doctor stands on the affected side and applies the manipulation from top to bottom to the thigh muscles and the inner and back sides of the leg, relieving spasm and tension until the symptoms are fully relieved; repeat three times, for six minutes in total.
In the second step, the patient takes the supine position and the doctor stands on the affected side. The doctor relieves the spasm and tension around the thighs and knees to achieve complete relaxation, and uses traditional massage techniques, such as grasping and one-finger (Zen finger) kneading, on the knee joint and on the acupoints Xuehai (Blood Sea), Liangqiu, the inner and outer knee eyes, Heding, Yanglingquan, and the Ashi points, as well as the distal acupoints Futu and Zusanli; the stimulation is penetrating and sustained. Repeat three times, for six minutes in total.
In the third step, the patient remains in the supine position. The doctor uses the thumbs and palms on the affected knee at the same time, pushing the kneecap inward and fixing it, applies a compression method with vertical force around the kneecap, and then presses and rubs with the heel of the palm, applying even force to the lower end of the kneecap until the patient feels local soreness and distension (the sign of obtaining qi). This is repeated three times for approximately six minutes.
In the fourth step, the doctor mobilizes the affected knee joint and instructs the patient to actively flex, extend, rotate, and abduct it. Finally, diathermic rubbing is applied around the knee joint for six minutes. Treatment is given once a day; seven sessions constitute one course, with one day of rest between two courses, for a total of four weeks of treatment. The anatomical sites selected as body-surface temperature observation points on the lateral and the anterior and posterior aspects of the knee joint are shown in Figure 6.
When selecting the observation points for body-surface temperature changes, infrared temperature images taken before and after treatment in the meridian massage group and the traditional massage group were observed comprehensively, so that the specific temperature data of the two groups could be recorded and compared intuitively. Clearly identifiable anatomical positions in the monitored region around the knee joint were recorded and compared, and each specific anatomical position was used as the unique name of its temperature monitoring point. The observation period was determined on the principle that it should systematically reflect the change process of the measured body surface temperature.
Experimental Results and Analysis
4.1. WOMAC Score. After WOMAC scoring and statistical analysis of the two groups of patients, the results were as follows. Before treatment, the total scores for pain, stiffness, and impairment of daily activities did not differ significantly between the two groups (P > 0.1). After treatment, the between-group comparison of these total scores was likewise not statistically significant (P > 0.1). Within each group, however, the total scores for pain, stiffness, and impaired activity differed significantly before versus after treatment (P < 0.1), indicating that both treatment methods achieved significant effects; the scores are shown in Figure 7. The body-surface temperature at the observation points after treatment was better in the Jingjin (meridian) massage group than in the traditional group (P < 0.1), a difference between the two groups. Comparing the observation-point temperatures before and after treatment, the difference between the Jingjin group and the traditional group was statistically significant: in both groups the temperature at the observation points changed markedly from before to after treatment, while the post-treatment comparison between the groups showed no obvious effect. The knees of patients with knee osteoarthritis were examined with infrared thermal imaging; a thermal image is shown in Figure 8.
In the meridian massage group, 18 cases were cured, 46 were markedly effective, 8 improved, and 3 did not heal, giving a total effective rate of 96%; in the traditional massage group, 15 cases were cured, 41 were markedly effective, 13 improved, and 6 did not heal, giving a total effective rate of 92%. The chi-square test showed a statistically significant difference (P < 0.1), and the curative effect of the meridian massage group was significantly better than that of the traditional massage group.
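As a quick cross-check, the reported efficacy comparison can be reproduced from the quoted case counts with a standard chi-square test of independence. The short Python sketch below (scipy assumed; not part of the original study) shows one way to set it up.

```python
import numpy as np
from scipy.stats import chi2_contingency

# rows: meridian (Jingjin) massage group, traditional massage group
# columns: cured, markedly effective, improved, not healed
table = np.array([[18, 46, 8, 3],
                  [15, 41, 13, 6]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, P = {p:.3f}")

# total effective rate = (cured + markedly effective + improved) / total
for name, row in zip(("meridian", "traditional"), table):
    print(f"{name}: effective rate = {row[:3].sum() / row.sum():.0%}")
```

Whether the difference reaches significance depends on how the outcome categories are grouped (four grades versus effective/not effective), so the exact P value should be read from the original data.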
Analysis of the Main Causes of Sports Injuries
(1) Poor physical fitness
According to the questionnaire survey of coaches and athletes, sports injuries caused by poor physical condition are the primary factor among all injury factors. Both Chinese dance coaches and athletes therefore regard poor physical condition as the most important cause of sports injuries. Among its components, strength accounts for more than half of the injury rate, followed by flexibility, endurance, and coordination. Judging from this ratio, Chinese dancers do not appear to pay enough attention to strength and flexibility training.
(2) Incorrect technical essentials and deviation of the foot's focus
If the toes are opened too wide, the hips will shift out to the side and cannot properly support forward movement. Opinions differ on the toe angle a Latin dancer should use when standing and walking; when dancing, most people place the focus on the inside of the big toe. The human foot is composed of multiple bones and soft tissues and supports the weight of the upright body. Data on the toe-opening angle and the main focus points of the foot are shown in Figure 9.
(3) Poor cooperation between partners
The male partner must master the choice of dance steps, changes of movement direction, and the expression of dance styles, and be able to apply them actively, quickly, decisively, and accurately. The female partner must have an agile following ability, accurately receiving the dance signals given by the male partner so as to change her steps and postures quickly and precisely, going with the flow and cooperating actively and easily to complete each step and the dance. In the process, the female partner should not overpower her partner, change the steps at will, or subjectively guess the movement of the steps; otherwise, collisions and a broken lead connection can easily occur, causing unnecessary sports injuries.
(4) Unreasonable warm-up activities
Warming up uses low- to moderate-intensity exercise to raise the excitability of the central nervous system to an appropriate level, strengthen the activity of the various organs, overcome functional inertia, raise the temperature and functional flexibility of the muscles and tissues, reinforce conditioned-reflex connections, help the body mobilize rapidly, and prepare it for formal training. At the same time, it is one of the important measures for preventing sports injuries.
(5) Failure to receive timely treatment and to adjust training positively after injury
Dancers may not know how to report their physical condition or, under the strict requirements of coaches, may not dare to report it truthfully and simply persevere. High-load, intense practice makes it difficult for coaches and athletes to detect the warning signals sent by the body. Therefore, when players are injured in training and the symptoms are not yet obvious, they often do not receive medical treatment in time.
(6) External factors causing damage
According to further investigation, the main causes of sports dancers' injuries are as follows: the characteristics of Latin dance events, excessive local weight-bearing, neglect of recovery after exercise, excessive exercise load, insufficient preparatory activities, improper training methods, failure to stretch after training, unreasonable use of technique, poor physical condition, and unsuitable dance shoes or clothing, as shown in Figure 10.
Conclusions
The research in this article finds that waist and knee injuries are the most numerous, with an injury rate of 30%, followed by shoulder injuries at 17%, neck injuries at 13%, toe injuries at 9%, ankle injuries at 7%, back, hip, and calf injuries at 6% each, and head, face, and upper-limb injuries at 5% each; finally, the hip joint has an injury rate of 3%. The causes of injury include the absence of a warm-up session before exercise or a perfunctory warm-up; poor technical level; poor physical fitness; unscientific training methods; excessive exercise volume and intensity; an unreasonable and overly long training schedule; athletes choosing difficult dance moves beyond their own level; lack of coordination between male and female partners; an unreasonable choice of musical rhythm; physical decline during competition; inability to adjust one's own state reasonably before competition; and collisions with other competitors. All of these cause the sports injuries of sports dancers in Xi'an. In response, the massage technique proposed here has certain advantages over Western oral medication in cure and improvement rates, and it avoids the physical and economic burden placed on patients by surgical treatment and intra-articular injection. In the future we need to continue summing up experience, ingeniously combine modern technology and equipment, and explore increasingly effective comprehensive therapies based on research into the treatment mechanisms of traditional Chinese medicine such as massage and acupuncture, so as to improve the cure rate and relieve patients' suffering. We believe that Tuina will have a very important impact on many medical conditions, and that combining Tuina with other technologies will give rise to new treatment techniques.
Data Availability
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2022-02-09T16:35:31.583Z | 2022-02-07T00:00:00.000 | {
"year": 2022,
"sha1": "55512d67ca3c1ca36ef57b5d8acc078eb246156a",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/cmmm/2022/5665972.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7bf0a38da7e20a4e01005ae986911300e6522934",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
213028146 | pes2o/s2orc | v3-fos-license | Investigation of soft ESD failure on capacitive transimpedance amplifier for hybrid integrated infrared sensor
In this letter, an experiment is designed to validate soft ESD failure. The capacitive transimpedance amplifier (CTIA) circuit used in hybrid integrated infrared sensors is chosen as the prototype because it is free of ESD protection devices. The experiments show that as the ESD pulse voltage rises, the induced leakage current in the CTIA circuit increases as well. Once the pulse voltage exceeds a certain threshold, the CTIA circuit fails to work at all. EMMI measurements demonstrate the existence and location of the leakage path.
Introduction
Because of their catastrophic consequences, ESD-induced hard failures such as thermal breakdowns of microelectronic devices and metal interconnects have been well understood and have drawn much attention from researchers worldwide [1,2,3,4,5,6,7,8]. Nevertheless, ESD-induced soft failures, such as performance degradation and increased leakage current, are still not fully understood or investigated [9,10,11,12]. This problem raises particular concern in some application fields, especially highly sensitive hybrid integrated infrared sensor systems. In an infrared sensor, infrared radiation from the target is converted into an opto-current by the detector and then into an amplified voltage by the readout circuit. Traditionally, the infrared detector and the readout circuit are fabricated with different materials and processes, after which indium bumps are used to interconnect the two chips [13,14,15,16]. Generally speaking, in a high-sensitivity infrared sensor the current generated by the detector is as weak as tens of pA, so any potential leakage current will compromise the performance. In addition, to improve the resolution of the infrared image, millions of detector pixels are often placed in a very compact array; with state-of-the-art fabrication processes, the pixel pitch has been scaled down to tens of µm. Because of the weak-current sensing requirement and the small pixel area, no ESD protection devices are included in the readout circuit. Under these circumstances, the unavoidable ESD events that occur during dicing, assembly, and packaging can harm the unprotected readout circuit. Therefore, to illustrate the potential soft failure problem in infrared readout circuits, the functionality of a capacitive transimpedance amplifier (CTIA), which is often adopted as the front-end readout circuit [17,18,19,20], is investigated before and after ESD events.
Basic operation of CTIA circuit
The CTIA has long been chosen as the front-end readout circuit to convert the opto-current into a voltage because of its superior noise and area performance. Figure 1 depicts the simplified circuit. The two-phase operation of the CTIA works as follows: in the reset phase, switch S1 is closed and the amplifier in the CTIA is connected in a unity-gain configuration to set an initial output voltage; in the integration phase, S1 is opened and the sink/source current from the detector is integrated across the feedback capacitor Ci. The output voltage changes linearly with the elapsed integration time, and the final voltage is sampled by the later stage at the end of the integration phase [21,22]. To accommodate the large dynamic range of the opto-current, optional feedback capacitors with different values are often provided. The area occupied by the CTIA pixel must match that of the detector pixel. In principle, any current source at the input other than the opto-current, such as leakage current originating from an ESD device, will contaminate the output voltage. For these reasons, no ESD device protects the CTIA input port.
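The integration phase described above reduces to V(t) = V_reset + I*t/Ci. The minimal Python sketch below illustrates this ideal behaviour; the capacitor value, reset voltage, and integration window are illustrative assumptions, not values taken from the prototype.

```python
import numpy as np

C_I = 50e-15       # feedback capacitor Ci, 50 fF (assumed)
V_RESET = 1.0      # output voltage set during the reset phase, V (assumed)

def ctia_output(i_in, t):
    """Ideal output during the integration phase (S1 open): the detector
    current is integrated across Ci, so V(t) = V_reset + I * t / Ci."""
    return V_RESET + i_in * t / C_I

t = np.linspace(0.0, 100e-6, 5)            # 100 us integration window
for i_in in (10e-12, 50e-12):              # injected currents: 10 pA, 50 pA
    v = ctia_output(i_in, t)
    print(f"I = {i_in * 1e12:.0f} pA -> V =", np.round(v, 3))
```

With these assumed values, a 50 pA detector current develops 0.1 V over a 100 µs integration window, which illustrates why even a few picoamps of ESD-induced leakage is visible in the output slope.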
Experiment
An ESD-device-free CTIA circuit was designed and fabricated in a 0.18 µm CMOS process; a microphotograph of the chip and the test PCB are presented in Figure 2(a). To measure the CTIA circuit, as shown in Figure 2(b), a dc voltage generated by a Tektronix AFG3252 is converted into a current by an on-board resistor and injected into the chip, and the output voltage of the CTIA circuit is captured with a Tektronix DPO3054 oscilloscope. The pulse control signal is also generated with the AFG3252. The captured waveforms in Figure 3 show the current integration function of the chip: the current flows off the chip, so the output voltage ramps up, and the larger the injected current, the faster the voltage ramps. To observe the soft failure that occurs in the ESD-device-free CTIA circuit, a validation experiment is proposed. In the ESD setup mode, the HBM model is chosen to imitate the ESD events that the CTIA circuit might encounter [23,24]. Instead of using a standard model such as IEC 61000-4-2 directly, a minor modification is made: the peak pulse voltage is limited to around 200 V, because in the lightly doped drain (LDD) CMOS technology the n-drain region of the reset switch has a low ESD threshold [10]. After each specific ESD event is applied, the current integration function is validated in the measurement mode; it is evaluated repeatedly after each ESD pulse of a different voltage. Figure 4 shows the normalized slope of the CTIA output voltage after different ESD events on two prototyped chips. As the left panel shows, as the voltage approaches 198 V, each larger ESD pulse increases the slope of the output voltage; but when the voltage exceeds 222 V, the slope drops abruptly to zero and the chip fails to work completely. The right panel exhibits similar behaviour, although the threshold voltages of the different regions are not exactly the same. From this experiment it is clear that before the chip is completely destroyed by the ESD event, there exists a non-destructive operating region in which the current integration function is maintained with degraded performance. We therefore define this region as the soft failure region.
Analysis
Analysis of the CTIA circuit reveals three possible leakage current paths: through the gate terminal of the input transistor in the amplifier, through the top-plate terminal of the integrating capacitor, and through the source terminal of the switch transistor. In theory, the first two leakage paths result from oxide rupture, while the last is mainly due to snapback of the parasitic bipolar transistor formed by the source, drain, and substrate [25,26]. The increased output voltage slope after an ESD event implies an increased integration current; the induced leakage current therefore flows toward ground, like the input current source. After a prototype chip loses its integration function, the output voltage always clamps to the reset voltage, as if a unity-gain amplifier were formed. From these two observations we conclude that the switch transistor is the critical device susceptible to ESD soft failure, with a simplified model shown in Figure 5. Under relatively small ESD events, localized leakage paths are formed by avalanche breakdown in the source/substrate PN junction of the switch transistor.
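Since the slope of the output ramp is I_total/Ci, a change in the normalized slope translates directly into an estimate of the extra current toward ground. A back-of-the-envelope sketch follows, with an assumed injected test current (the paper does not quote its value):

```python
I_INJECTED = 100e-12   # assumed on-board test current, A

def leakage_from_slope(normalized_slope):
    """The ramp slope is I_total / Ci, so the slope normalized to the
    pre-ESD case equals I_total / I_injected; the excess is the leakage
    toward ground: I_leak = I_injected * (normalized_slope - 1)."""
    return I_INJECTED * (normalized_slope - 1.0)

for s in (1.0, 1.2, 1.5):   # illustrative normalized slopes after ESD
    print(f"slope x{s:.1f} -> estimated leakage = "
          f"{leakage_from_slope(s) * 1e12:.0f} pA")
```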
As the ESD pulse voltage increases further, strong localized heating forms a short circuit between the source and drain, and the current integration function can no longer be recovered. To identify the exact location of the leakage current, we characterized the CTIA circuit by means of emission microscopy (EMMI) measurements, which have been used extensively [27,28,29,30]. The left picture in Figure 6 presents the enlarged EMMI image taken after the non-destructive ESD event was applied. A new hot spot, marked with the red circle, emerges compared with the image (not shown here) taken before any ESD event. Cross-checking this location against the layout picture on the right side confirms that a leakage path indeed exists beneath the switch transistor.
Conclusion
In this letter, a validation experiment of the soft ESD fail- Fig. 6. EMMI image taken after the nondestructive ESD event ure is designed and presented. The CTIA circuit used in the hybrid integrated infrared sensor is chosen as the experimental target, because it's ESD-protection-device-free, and susceptive to leakage current problem. The experiments shows that under the ESD event with the increased pulse voltage, the induced leakage current increase as well. When the pulse voltage exceeds one certain threshold point, the CTIA circuit fails to work anymore. The subsequent EMMI measurement helps to demonstrate the existence and location of the leakage path. | 2020-02-06T09:02:32.655Z | 2020-01-31T00:00:00.000 | {
"year": 2020,
"sha1": "373795258ea9ed504459860cbd5a65e5be1faba4",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/elex/17/6/17_17.20190692/_pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a5e12bb16db5a7e6503e9645657932e937fba590",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science",
"Computer Science"
]
} |
117407974 | pes2o/s2orc | v3-fos-license | KARMEN Limits on nu_e ->nu_tau Oscillations in 2-neutrino and 3-neutrino Mixing Schemes
The 56 tonne high resolution liquid scintillation calorimeter KARMEN at the beam stop neutrino source ISIS has been used to search for neutrino oscillations in the disappearance channel nu_e -> nu_x. The nu_e emitted in mu+ decay at rest are detected with spectroscopic quality via the exclusive charged current reaction 12-C(nu_e,e-)12-N_g.s., almost free of background. Analysis of the spectral shape of e- from the nu_e induced reaction as well as a measurement of the absolute nu_e flux allow us to investigate oscillations of the type nu_e->nu_tau and nu_e->nu_mu. The flux independent ratio R(CC/NC) of charged current events 12-C(nu_e,e-)12-N_g.s. to neutral current events 12-C(nu,nu')12-C* provides additional information in the oscillation channel nu_e->nu_x. All three analysis methods show no evidence for oscillations. For the nu_e->nu_tau channel, 90% CL limits of sin^2(2Theta) < 0.338 for dm^2 > 100 eV^2, and of dm^2 < 0.77 eV^2 for maximal mixing, are derived in a simple 2-flavor oscillation formalism. A complete 3-flavor analysis of the experimental data from five years of measurement with respect to nu_e<->nu_tau and nu_e<->nu_mu mixing is presented.
I. INTRODUCTION
One of the most interesting issues in present particle physics is to clarify the neutrino mass problem. In the minimal standard model of electroweak theory, neutrinos are considered to be massless, but there is no compelling theoretical reason behind this assumption. On the other hand, in most extensions of the standard model massive neutrinos are allowed. A very sensitive way of probing small neutrino masses and the mixing between different neutrino flavors is provided by neutrino oscillations. Although there are some experimental results pointing to neutrino oscillations, it is still difficult to deduce a consistent and reliable set of neutrino masses and mixings [1-3].
Up to now, experimental results have been persistently interpreted in terms of neutrino oscillations using only a 2-ν mixing scheme with a single mixing angle Θ. However, to perform a complete and consistent analysis of experimental data on neutrino oscillation searches, a 3 flavor formalism of neutrinos should be adopted. In this framework, all experiments searching for different flavor oscillations (e.g. ν_µ → ν_e, ν̄_µ → ν̄_e, ν_µ → ν_τ in appearance mode, or ν_e → ν_x, ν̄_e → ν̄_x, ν_µ → ν_x in disappearance mode) are combined to extract global information on the oscillation parameters, the mixing angles as well as the mass differences ∆m²_ij = |m²_i − m²_j|, i, j = 1, ..., 3. Searches for oscillations in the disappearance mode, although less sensitive to small mixing angles, are of great importance to restrict the allowed parameter space.
In many of the 3 flavor descriptions of neutrino oscillations a so-called 'one-mass-scale dominance' δm² ≡ ∆m²_12 ≪ ∆m²_13, ∆m²_13 ≈ ∆m²_23 ≡ ∆m² has been adopted [3-9]. Possible mixing to sterile neutrinos as suggested by [10-12] is ignored, whereas CP conservation is assumed, as we shall do in the following. The flavor eigenstates ν_α are then described by superpositions of the mass eigenstates ν_i with the mixing matrix elements U_αi,

ν_α = Σ_i U_αi ν_i ,

following the typical notation [13] of the CKM-like mixing matrix described by two mixing angles Φ, Ψ. The oscillation probability P_αβ for a transition ν_α → ν_β in the KARMEN experiment with its 'short baseline' experimental configuration (1/∆m² ≈ L/E ≪ 1/δm²) is then

P_αβ = A · sin²(1.27 ∆m² L/E)   (2)

with ∆m² in eV²/c⁴, the neutrino path length L in meters and the neutrino energy E in MeV. In a simple 2 flavor description A = sin²(2Θ), with Θ being the mixing angle between the two ν states, while in a 3 flavor analysis A = 4U²_α3 U²_β3 (appearance mode) or A = 4U²_α3 (1 − U²_α3) (disappearance mode), with U²_e3 = sin²Φ, U²_µ3 = cos²Φ sin²Ψ, U²_τ3 = cos²Φ cos²Ψ [3]. For an experimental oscillation search it is important to note that the second factor in (2), describing the spatial evolution of the oscillations, is unchanged in the 2 flavor analysis. In the deduction of limits on A and ∆m² we shall start with A = sin²(2Θ), keeping in mind that the oscillation amplitude A has different definitions depending on the actual theoretical model as well as the experimental situation (appearance ↔ disappearance search). In sections IV and V we shall then analyze our results in the 3 flavor scheme.
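As a quick numerical illustration of (2) for a KARMEN-like geometry, the sketch below (Python used here purely for illustration; the 17.7 m mean distance and the 52.8 MeV endpoint are taken from the text, while the sample energies are arbitrary) evaluates the spatial evolution term:

```python
import numpy as np

def p_osc(A, dm2, L_m, E_mev):
    """P = A * sin^2(1.27 * dm2 * L / E); dm2 in eV^2/c^4, L in m,
    E in MeV (the numerical factor 1.27 absorbs the units)."""
    return A * np.sin(1.27 * dm2 * L_m / E_mev) ** 2

E = np.array([20.0, 35.0, 52.8])          # sample nu_e energies, MeV
for dm2 in (0.77, 5.0, 100.0):            # example mass differences
    print(f"dm2 = {dm2:6.2f}:", np.round(p_osc(1.0, dm2, 17.7, E), 3))
```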
The KArlsruhe Rutherford Medium Energy Neutrino experiment KARMEN has set stringent upper limits in the appearance channels ν̄_µ → ν̄_e (sin²(2Θ) < 8.5·10⁻³ for ∆m² ≥ 100 eV²/c⁴) and ν_µ → ν_e (sin²(2Θ) < 4.0·10⁻² for ∆m² ≥ 100 eV²/c⁴) [14]. KARMEN also searches for ν_e → ν_τ and ν_e → ν_µ through the disappearance of ν_e. Here, the neutrinos ν_e emerge from µ⁺ decay at rest (DAR) and are detected at a mean distance of 17.7 m from the source. This search is a complementary addition to the appearance channels. Compared to other disappearance experiments, this search has very small systematic uncertainties as well as an excellent signal to background ratio for the detection of ν_e-induced events. Limits on ν̄_e disappearance were obtained by reactor experiments, e.g. at the nuclear power plant of Bugey [15], with ν̄_e from β-decays and the detector at 15, 40 or 95 m distance. Other limits on ν_e → ν_x were deduced from two ⁵¹Cr source experiments in the GALLEX detector [16,17], emitting ν_e with discrete energies E_ν = 0.746 MeV (81%), 0.751 MeV (9%), 0.426 MeV (9%) and 0.431 MeV (1%). Experiments with neutrinos in the energy range of several GeV allow the search for appearance ν_e, ν_µ → ν_τ. In contrast to such experiments, like FNAL E531 [18] with a 410 meter long decay path, in KARMEN the parameters L and E of ν-induced events can be measured with high accuracy on an event-by-event basis.
The outline of the paper is as follows. We first present the experimental setup and the ν_e detection and identification (section II). Then we investigate the absolute number of identified CC reactions ¹²C(ν_e, e⁻)¹²N_g.s. (section III A), comparing the measured cross section with theoretical predictions. In a second evaluation method (section III B) we normalize the number of CC sequences to the number of neutral current (NC) reactions ¹²C(ν, ν′)¹²C* (15.1 MeV). These NC reactions are not sensitive to the ν flavor. By comparing the number of CC events with the NC events measured simultaneously, systematic uncertainties in the calculation of the absolute ν flux mostly cancel. A lower ratio of events R_exp = CC/NC than expected would also point to ν_e → ν_x. A complementary analysis is given in section III C, where we search for distortions of the energy and spatial distributions of electrons from ¹²C(ν_e, e⁻)¹²N_g.s., such as would be caused by ν_e → ν_x. In section IV the results of these analyses are combined and compared with other experimental limits for ν̄_e → ν̄_x, ν_e → ν_τ and ν_µ → ν_e (ν̄_µ → ν̄_e) in a 2 flavor as well as a 3 flavor mixing scheme.
A. Experimental Setup
The KARMEN experiment is performed at the neutron spallation facility ISIS of the Rutherford Appleton Laboratory. The rapid cycling synchrotron provides 800 MeV protons with an average current of 200 µA. These are stopped in a beam-stop Ta-D₂O target producing neutrons and pions. 99% of all charged pions are stopped within the target [19], leading to π⁺ decay at rest, whereas negative pions are absorbed by the heavy target nuclei. Neutrinos therefore emerge from the consecutive decay sequence π⁺ → µ⁺ + ν_µ and µ⁺ → e⁺ + ν_e + ν̄_µ in equal intensity per flavor. The intrinsic contamination ν̄_e/ν_e from µ⁻ decay in the beam-stop target is only 4.7 × 10⁻⁴ [19]. The neutrino energy spectra are well defined due to the decay at rest kinematics: the ν_µ from π⁺ decay are monoenergetic (E_ν = 29.8 MeV), and the continuous energy distributions of ν_e and ν̄_µ up to 52.8 MeV can be calculated using V-A theory (Fig. 1a), where the characteristic Michel shape of the ν_e from µ⁺ DAR is given by the expression [20]

N(E_ν) ∝ E_ν² · (E₀ − E_ν) ,  E₀ = 52.8 MeV .

The neutrino production time follows the two parabolic proton pulses of 100 ns base width with a separation of the maxima of 324 ns, produced with a repetition frequency of 50 Hz. The different lifetimes of pions (τ = 26 ns) and muons (τ = 2.2 µs) together with this unique time structure allow a clear separation in time of the ν_µ burst from the following ν_e and ν̄_µ (Fig. 1b), which show the characteristic exponential distribution of the muon lifetime τ = 2.2 µs. Furthermore, the accelerator's extremely small duty cycle effectively suppresses any beam-uncorrelated background by four to five orders of magnitude, depending on the time interval selected for analysis. The neutrinos are detected in a high resolution 56 t liquid scintillation calorimeter segmented into 512 central modules with a cross section of 0.18 × 0.18 m² and a length of 3.53 m each [21]. A massive blockhouse of 7000 t of steel, in combination with a system of two layers of active veto counters, provides shielding against beam-correlated spallation neutron background and suppression of the hadronic component of cosmic radiation, as well as highly efficient identification of cosmic muons and their interactions. The central scintillation calorimeter and the inner veto counters are segmented by double acrylic walls with an air gap, providing efficient light transport via total internal reflection of the scintillation light at the module walls. The event position within an individual module is determined with a position resolution of ±6 cm for typical energies of 30 MeV [22] by the time difference of the PM signals at each end of the module. Due to the optimized optical properties of the organic liquid scintillator [23] and an active volume of 96% of the calorimeter, an energy resolution of σ_E = 11.5%/√(E[MeV]) is achieved. The KARMEN electronics is synchronized to the ISIS proton pulses to an accuracy of better than ±2 ns, so that the time structure of the neutrinos can be exploited in full detail.
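For illustration, the ν_e spectral shape quoted above can be sampled numerically; the normalization constant 12/E₀⁴ used below follows from requiring the distribution to integrate to one, and the rejection-sampling scheme is an illustrative sketch rather than the collaboration's simulation:

```python
import numpy as np

E0 = 52.8  # MeV, endpoint of the nu_e spectrum from mu+ decay at rest

def michel_nue(E):
    """Normalized nu_e shape f(E) = (12/E0^4) * E^2 * (E0 - E); the
    constant 12/E0^4 makes the distribution integrate to one."""
    return 12.0 / E0**4 * E**2 * (E0 - E)

rng = np.random.default_rng(0)
f_max = michel_nue(2.0 / 3.0 * E0)        # the shape peaks at E = 2/3 * E0
E = rng.uniform(0.0, E0, 200_000)
sample = E[rng.uniform(0.0, f_max, E.size) < michel_nue(E)]
print(f"mean nu_e energy ~ {sample.mean():.1f} MeV")   # ~0.6 * E0 = 31.7 MeV
```

The sampled mean of about 0.6·E₀ ≈ 32 MeV matches the analytic first moment of the x²(1 − x) shape.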
B. Detection and Identification of ν_e
Electron neutrinos ν_e from µ⁺ decay at rest (DAR) are detected via the exclusive charged current (CC) reaction ¹²C(ν_e, e⁻)¹²N_g.s., which leads to a delayed coincidence signature. The ground state of ¹²N decays at its production point with a lifetime of τ = 15.9 ms: ¹²N_g.s. → ¹²C + e⁺ + ν_e. The detection signature consists of a prompt electron within a few µs after beam-on-target (0.6 ≤ t_pr ≤ 9.6 µs) followed by a spatially correlated positron (e⁻/e⁺ sequence with 0.5 ≤ t_diff ≤ 36 ms). To reduce cosmic background, no event is accepted within a software deadtime of 20 µs after any signal in the detector system. Remaining sequences induced by cosmic background are studied with high statistical accuracy by demanding the prompt event to emerge in a 200 µs long time interval before beam-on-target, when no neutrinos are produced at ISIS. This CC reaction has been previously studied in detail [24], with respect to weak nuclear form factors [25] and to the CC helicity structure of muon decay [26].
The data set used for this ν_e → ν_x analysis is taken from 1990-1995, corresponding to 9122 C of protons accumulated on the ISIS beam stop target, or 2.51 × 10²¹ µ⁺ decays at rest in the target. Figure 2 shows the signatures of the 513 events surviving all cuts. These events contain 499.7 ± 22.7 ν-induced e⁻/e⁺ sequences and 13.3 ± 0.8 cosmic-induced background sequences. The energy spectra of the prompt e⁻ and the delayed e⁺ follow the Monte Carlo (MC) expectations from µ⁺ DAR and the ¹²N β-decay. The time distribution of the prompt event reflects the muon lifetime; the time difference t_diff shows the ¹²N lifetime folded with the detection efficiency.
The stringent signatures of the ¹²C(ν_e, e⁻)¹²N_g.s. reaction lead to an excellent signal-to-background ratio of 37:1, which allows identification of ν_e almost free of background. Accordingly, the search for ν_e disappearance has very small systematic errors, and the sensitivity in this oscillation channel is not limited by background events.
III. OSCILLATION SEARCHES
Flavor oscillations of the type ν_e → ν_µ or ν_e → ν_τ would result in a smaller number of detected ¹²C(ν_e, e⁻)¹²N_g.s. sequences than expected, because the analogous charged current reactions ¹²C(ν_µ, µ⁻)¹²N_g.s. and ¹²C(ν_τ, τ⁻)¹²N_g.s. cannot be induced due to the small neutrino energies (E_ν ≤ 52.8 MeV). In addition to the comparison of the observed ν_e rate with expectation, the effects of ν_e disappearance can be investigated by a detailed study of the remaining ν_e-induced events. First, the characteristic energy dependence of flavor oscillations (2) would lead to a distortion of the electron energy distribution. Second, the 1/r² dependence of the ν_e flux from µ⁺ DAR in the beam stop target would be distorted. The specific features of neutrino production at ISIS and detection by KARMEN require the two oscillation channels ν_e → ν_µ and ν_e → ν_τ to be discussed separately.
We first consider transitions ν_e → ν_τ. As described above, ν_τ cannot be detected via CC reactions, so oscillations ν_e → ν_τ directly lead to a reduction of CC sequences and consequently of the measured flux averaged cross section σ_CC_exp. The second channel, ν_e ↔ ν_µ mixing, is more complex due to the simultaneous presence of ν̄_µ in the neutrino beam from µ⁺ decay. Assuming CPT invariance, there would be a second source of CC reactions in the KARMEN detector, namely ν̄_e from ν̄_µ → ν̄_e leading to ¹²C(ν̄_e, e⁺)¹²B, since the ν̄_µ from µ⁺ DAR at ISIS are produced with the same intensity and at the same time as the ν_e. These would mostly compensate the reduction of the measured cross section through ν_e → ν_µ, for the following reasons. Because ¹²B belongs to the isobar triplet ¹²B-¹²C-¹²N, the Q-value for ¹²C(ν̄_e, e⁺)¹²B (Q = −13.4 MeV) as well as the ¹²B lifetime (τ = 29.14 ms) and the endpoint energy of ¹²B_g.s. → ¹²C + e⁻ + ν̄_e (E₀ = 13.4 MeV) are very similar to the corresponding quantities for ¹²C(ν_e, e⁻)¹²N_g.s. with the subsequent ¹²N β-decay (see Fig. 3). Moreover, the cross sections are similar, with σ = 9.0 · 10⁻⁴² cm² [27] for the reaction induced by ν̄_e from ν̄_µ → ν̄_e at large ∆m² (see Fig. 4). So, as one CC reaction disappears, the other takes its place. Figure 5 demonstrates this argument in detail, showing the relative detection efficiencies ε(ν) and ε(ν̄) as a function of the oscillation parameter ∆m². Taking into account the different energy dependent cross sections of both CC reactions, the detection efficiency ε(ν, ∆m²) within the KARMEN detector for ν_e → ν_x disappearance into ν_τ or ν_µ is only slightly higher than the efficiency ε(ν̄, ∆m²) for ν̄_µ → ν̄_e appearance, which also incorporates the different detection efficiency of the sequential e⁺/e⁻ due to the different lifetimes and Q-values of ¹²N/¹²B. The different ν energy spectra and energy dependences of the cross sections are responsible for a small shift in the positions of the minima and maxima of the efficiencies for ν_e → ν_x and ν̄_µ → ν̄_e. Therefore, at ∆m² ≈ 5 eV²/c⁴ a small net increase of CC reactions is expected assuming mixing between ν_e and ν_µ.
Following these considerations, the performed search for ν_e → ν_x must actually be interpreted in terms of ν_e ↔ ν_µ and ν_e ↔ ν_τ mixing, where the ν_e → ν_τ search is much more sensitive (in terms of the absolute number of CC events; see section III A) than the ν_e → ν_µ search, due to the almost complete cancellation by the mirror channel ν̄_µ → ν̄_e. An analysis of the spectral shape of the CC sequences, however, is sensitive to both mixings, ν_e ↔ ν_τ and ν_e ↔ ν_µ, as will be shown in section III C. This shape analysis is performed in a complete 3 flavor scheme, where the amplitudes for the mixings ν_e ↔ ν_τ and ν_e ↔ ν_µ are completely free and independent parameters.
A. Absolute Cross Section
The comparison of the number of ν_e-induced CC reactions with the expectation from theoretical calculations is done on the basis of the measured flux averaged cross section σ_CC_exp for ν_e from µ⁺ DAR. Taking into account the overall detection efficiency of ε_CC = 0.328, the 499.7 ± 22.7 ν_e-induced events (see Fig. 2) correspond to a flux averaged cross section σ_CC_exp whose systematic error is almost entirely due to the uncertainty in the calculation of the absolute ν flux from ISIS [19]. The measured cross section is in excellent agreement with theoretical calculations based on different models of the ¹²C nucleus (see Table I), i.e. there is no indication of ν_e disappearance. For quantitative calculations we use the mean theoretical value of σ_CC_th from [27-31] with a realistic estimate of the systematic error [35]. The ν_e disappearance oscillation probability P(ν_e → ν_x) can then be written in terms of cross sections as

P(ν_e → ν_x) = 1 − σ_CC_exp / σ_CC_th ,   (6)

where the statistical and systematic errors of the experimental value are added quadratically. To obtain a density distribution for values of P(ν_e → ν_x), we sampled σ_CC_exp and σ_CC_th from Gaussian distributions with the given mean values and widths (Fig. 6). The resulting distribution of P(ν_e → ν_x) is slightly non-Gaussian due to the ratio introduced in (6). Only positive values of P(ν_e → ν_x) correspond to the physically allowed region of oscillations. To extract a 90% CL upper limit we therefore renormalize the distribution in P(ν_e → ν_x), following the most conservative Bayesian approach [13]. The limit deduced is then

P(ν_e → ν_x) < 0.169 (90% CL) .   (7)

The interpretation of this limit in terms of the oscillation parameters ∆m² and mixing angle is given in section IV.
B. Ratio of CC to NC Events
In the case of oscillations between ν_e and other ν flavors, the NC cross section remains unchanged due to the flavor universality of NC interactions. The number of CC events ¹²C(ν_e, e⁻)¹²N_g.s. would decrease due to ν_e → ν_τ. By looking at the ratio of observed events (i.e. the measured cross sections), the systematic uncertainties in σ_exp are significantly reduced, so that statistical fluctuations dominate the error on R_exp:

R_exp = σ_CC(ν_e) / σ_NC(ν_e + ν̄_µ) = 0.86 ± 0.08 .

This ν flux independent ratio R_exp is in good agreement with theoretical predictions (see Table I). Again, there is no indication of ν_e disappearance.
As theoretical prediction for R_th we combine the R-values from [27-31] in a straightforward way and take the mean value and its variance as an estimate of the systematic error: R_th = 0.91 ± 0.035. The ν_e → ν_x oscillation probability can then be written as

P(ν_e → ν_x) = 1 − R_exp/R_th = 1 − (0.86 ± 0.08)/(0.91 ± 0.035) .   (11)

In close analogy to the evaluation in section III A, an upper limit can be extracted after renormalization to the physical region. This limit is slightly higher than the limit (7) of section III A, due to the relatively high value of the experimental cross section σ_NC_exp compared to the theoretical predictions.
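The Bayesian renormalization described above can be mimicked with a small Monte Carlo using the quoted R values; this sketch (Python, not the collaboration's code) propagates the Gaussian uncertainties and truncates to the physical region:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000
R_exp = rng.normal(0.86, 0.08, N)     # measured CC/NC ratio (from the text)
R_th = rng.normal(0.91, 0.035, N)     # theoretical prediction (from the text)

P = 1.0 - R_exp / R_th                # oscillation probability, eq. (11)

# Bayesian renormalization: keep the physical region P >= 0 and take the
# 90% quantile of the truncated distribution as the 90% CL upper limit
P_phys = P[P >= 0.0]
print(f"90% CL upper limit on P(nu_e -> nu_x): {np.quantile(P_phys, 0.90):.3f}")
```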
C. Spectral Shapes
The third means of searching for oscillations ν_e → ν_x is a maximum likelihood (ML) analysis of the measured energy and spatial distributions of the prompt electrons from ¹²C(ν_e, e⁻)¹²N_g.s. reactions. This method is not sensitive to the absolute number of detected sequences, but to distortions of the expected spectra in energy and position within the detector volume. In order to minimize contributions from cosmic-induced background, we applied more stringent cuts in time (corresponding to 3 τ_µ+) and spatial distribution (restricted fiducial volume of 86% of the central detector) for the prompt event, resulting in 458 accepted sequences (see Fig. 7) with 5.8 background events determined in the appropriate prebeam evaluation. With a neutrino signal-to-background ratio of about 80:1, the cosmic background can be included as a fixed component in the likelihood analysis and will not be discussed further. The aim of the ML analysis is to calculate a combined likelihood for all 458 events, on the basis that the two-dimensional density distribution f(L, E_e⁻) is sensitive to varying amounts of oscillation events from ν_e → ν_τ, ν_e → ν_µ and ν̄_µ → ν̄_e. E_e⁻ is related to the neutrino energy by E_e⁻ = E_νe − 17.3 MeV, and L_e⁻ = L_νe since the electron is detected at the interaction point. The absolute energy scale is known to a precision of ±0.25 MeV from the analysis of Michel electrons from the decay of stopped cosmic muons. The density function f for no oscillations is

f(L, E) = n(L) · Φ_νe(E_ν) · σ(E_ν) / F ,

with the normalization factor F integrated over the ν_e energy and the detector volume,

F = ∫_V ∫_E n(L) Φ_νe(E_ν) σ(E_ν) dE_ν dV .

The function n(L) represents the spatial distribution of events within the rectangular detection volume resulting from an isotropic neutrino flux from ISIS (Φ_ν ∼ L⁻²), Φ_νe denotes the ν_e energy distribution from µ⁺ DAR (see Fig. 1a), and σ(E_νe) the energy dependent cross section for ¹²C(ν_e, e⁻)¹²N_g.s. (see Fig. 4).
For an oscillation ν_e → ν_x with given ∆m², the density function f^ν_∆m², normalized by the integral F^ν_∆m² over L and E, can be written as

f^ν_∆m²(L, E) = n(L) · Φ_νe(E_ν) · σ(E_ν) · sin²(1.27 ∆m² L/E_ν) / F^ν_∆m² .   (15)

The appearance of ν̄_e through ν̄_µ → ν̄_e is described by a density f^ν̄_∆m² in close analogy to (15). The density functions used in the ML analysis are MC-simulated electron/positron (see Fig. 8) and positron/electron (see Fig. 3) distributions, including all effects due to detector resolutions and the cuts applied. The combined likelihood L_∆m²(r₁, r₂) for the N = 458 events is calculated with respect to the fraction r₁ of initially expected electrons now absent due to ν_e → ν_τ, and the fraction r₂ of the combined transition ν_e → ν_µ and ν̄_µ → ν̄_e, taking into account the different detection efficiencies ε(ν, ∆m²) and ε(ν̄, ∆m²) (see Fig. 5). In this specific construction of the likelihood function, the event number is fixed at N = 458; only the shape differences of the components are analyzed. Although not explicitly quoted in (17), we also include in L_∆m²(r₁, r₂) the knowledge that the delayed energy E_del and the time difference t_diff differ for ¹²C(ν̄_e, e⁺)¹²B (see Fig. 3). Maximizing L_∆m²(r₁, r₂) gives the most likely oscillation fractions r₁max(∆m²), r₂max(∆m²). This optimization is done separately for each value of ∆m² in the range 0.1 to 100 eV²/c⁴. Over this range of ∆m² the fully 2-dimensional best fit values are compatible with r₁ = r₂ = 0 within the 1σ error band (see Fig. 9). As there is no indication for oscillations, upper limits r₁(90% CL), r₂(90% CL) are determined for each ∆m². In the case of ν_e ↔ ν_τ, the ML analysis is not able to discriminate oscillation events for 2 ≲ ∆m² ≲ 3 eV²/c⁴ and ∆m² > 50 eV²/c⁴, because the energy and spatial distributions for these oscillation cases are almost identical to the expected electron spectra without oscillations. However, for ν_e ↔ ν_µ mixing the shape analysis is also sensitive at large ∆m², since we expect events with E_pr > 36 MeV from ν̄_µ → ν̄_e. The 90% CL limits for each ∆m² are calculated by integrating the corresponding 2-dimensional likelihood function for r₁ (r₂) over the whole r₂ (r₁) range and then renormalizing to the physical region r₁ (r₂) > 0, applying a Bayesian approach near the physical boundary r₁ (r₂) = 0, respectively. The limits obtained are equivalent to the limits P(ν_e → ν_τ)(90% CL), P(ν_e → ν_µ)(90% CL) in oscillation probability. The translation of these limits into 90% CL limits for the neutrino oscillation amplitude and mixing angles for different values of ∆m² is shown in section IV.
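A toy version of the shape-only likelihood fit illustrates how a disappearance fraction r is constrained by spectral distortion alone; the flux shape, the neglect of the cross-section energy dependence and detector response, and the single fixed ∆m² are simplifying assumptions of this sketch, not features of the actual analysis:

```python
import numpy as np
from scipy.optimize import minimize_scalar

DM2, L_M, E0 = 10.0, 17.7, 52.8       # assumed dm2 in eV^2/c^4; L in m; MeV

def surv_weight(E, r):
    """Per-event survival factor 1 - r * sin^2(1.27 * dm2 * L / E)."""
    return 1.0 - r * np.sin(1.27 * DM2 * L_M / E) ** 2

def neg_log_L(r, E_events, E_grid, f_grid):
    w = surv_weight(E_events, r)
    norm = np.trapz(f_grid * surv_weight(E_grid, r), E_grid)
    if np.any(w <= 0.0) or norm <= 0.0:
        return np.inf
    # log f(E_i) does not depend on r, so only the survival weights and
    # the shape normalization carry information about r
    return -np.sum(np.log(w)) + w.size * np.log(norm)

E_grid = np.linspace(5.0, E0, 500)
f_grid = E_grid**2 * (E0 - E_grid)    # Michel-shaped toy nu_e flux
rng = np.random.default_rng(3)
E_events = rng.choice(E_grid, size=458, p=f_grid / f_grid.sum())

res = minimize_scalar(neg_log_L, bounds=(0.0, 0.9), method="bounded",
                      args=(E_events, E_grid, f_grid))
print("best-fit disappearance fraction r =", round(res.x, 3))
```

Because the toy events are drawn from the undistorted flux, the fit returns r near zero, mirroring the null result of the real analysis.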
IV. EXCLUSION LIMITS
In this section, we translate the results of the different methods obtained in section III, given as limits on the oscillation probability (cross section evaluations) or on the ν mixings (shape analysis), into limits on the oscillation amplitude. This oscillation amplitude A is then expressed in terms of the mixing angles, for example A_eτ = sin²(2Θ) (2 flavors) or A_eτ = 4 sin²Φ cos²Φ cos²Ψ (3 flavors, see section I).
A. Exclusion limits from measurement of the CC cross section
We first discuss the limit obtained from the evaluation of the absolute number of detected CC sequences (section III A), P(ν_e → ν_x) < 0.169 (90% CL), in a 2 flavor as well as in a complete 3 flavor scheme.
For a fixed neutrino energy and a fixed source-target distance L, the oscillation probability P is related to the mixing amplitude A via the spatial evolution term in (2). For an experimental configuration with continuous ν energies and a large volume detector with its resolution functions, this spatial evolution term is determined by MC simulations, resulting in the oscillation detection efficiencies ε(ν, ∆m²) and ε(ν̄, ∆m²) shown in Fig. 5. The upper limit of the oscillation amplitude for a given ∆m² is then

A(∆m², 90% CL) = P(90% CL) / ε(ν, ∆m²) .

For ν_e → ν_µ, the analogous MC simulations of positrons from ν̄_µ → ν̄_e based on f^ν̄_∆m² were performed, resulting in ε(ν̄, ∆m²); P(ν_e → ν_τ)(90% CL) and P(ν_e → ν_µ)(90% CL) are the evaluated limits of the oscillation probability from the different analyses of section III.
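The flux-averaged spatial evolution term ε(∆m²) can be approximated with a simple Monte Carlo average; the (E − Q)² cross-section scaling and the Gaussian spread of path lengths below are rough stand-ins for the full detector simulation:

```python
import numpy as np

E0, Q, L_MEAN = 52.8, 17.3, 17.7      # MeV, MeV, m (from the text)
rng = np.random.default_rng(4)

# sample nu_e energies above the CC threshold from the Michel shape
E = rng.uniform(Q + 0.1, E0, 500_000)
keep = rng.uniform(0.0, 1.0, E.size) < (E / E0) ** 2 * (1 - E / E0) / 0.15
E = E[keep]
w = (E - Q) ** 2                      # toy (E - Q)^2 cross-section scaling
L = rng.normal(L_MEAN, 1.5, E.size)   # assumed spread of path lengths, m

def eps(dm2):
    """Flux- and cross-section-averaged sin^2(1.27 * dm2 * L / E)."""
    return np.average(np.sin(1.27 * dm2 * L / E) ** 2, weights=w)

for dm2 in (0.77, 5.0, 100.0):
    print(f"dm2 = {dm2:6.2f} eV^2/c^4 -> eps ~ {eps(dm2):.3f}")
```

For large ∆m² the oscillations average out and ε approaches 0.5, which is why disappearance limits flatten at high ∆m².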
In a more detailed 3 flavor evaluation, the KARMEN oscillation limit P(ν_e → ν_x) corresponds to

P(ν_e → ν_x) = A_eτ · ε(ν, ∆m²) + A_eµ · [ε(ν, ∆m²) − ε(ν̄, ∆m²)] .

In Fig. 11 the correlation of the A_eτ and A_eµ limits is shown for some examples of ∆m². The parameter A_eµ is constrained only in regions around ∆m² ≈ 2-3 eV²/c⁴, ∆m² ≈ 7-9 eV²/c⁴ and ∆m² ≈ 13 eV²/c⁴ (see Fig. 10a), whereas for ∆m² ≈ 5 eV²/c⁴ the upper limit of A_eτ even increases with A_eµ, due to the fact that ε(ν, ∆m²) − ε(ν̄, ∆m²) is negative.
From A_eτ = 4 sin²Φ cos²Φ cos²Ψ we can evaluate the 90% CL limits for the mixing angles Φ and Ψ, taking the largest upper bound of A_eτ with respect to A_eµ. With the double logarithmic presentation of tan²Φ, tan²Ψ we follow again [3]. Figure 12a,c shows the 90% CL limits on A_eτ and A_eµ from the cross section analysis as dashed curves (labeled with σ). In Fig. 12b, these limits from A_eτ are plotted as dashed curves for two examples of ∆m² (2 eV²/c⁴ and 100 eV²/c⁴), excluding areas in (Φ, Ψ) to the left. The corresponding limit on (Φ, Ψ) from A_eµ for ∆m² = 2 eV²/c⁴ is shown in Fig. 12d (dashed curve labeled σ).
The upper limit for ν_e → ν_τ, P(ν_e → ν_τ) < 0.169 (90% CL), is reliable due to the spectroscopic measurement of the ν_e flux from µ⁺ DAR at ISIS and the nearly background-free ¹²C(ν_e, e⁻)¹²N_g.s. detection reaction. This limit is in contradiction to a model of 3 flavor neutrino mixing and masses calculated by Conforto [39,40]. Although being in conflict at the three standard deviation level with the negative result in ν_e → ν_τ of FNAL E531 [18], this compilation of different experimental results predicts a transition probability P(ν_e → ν_τ) = 0.23 ± 0.06 (90% CL) in the appropriate 'short-baseline' regime. Figure 12a shows the calculated point (∆m², A_eτ) = (377 ± 29 eV²/c⁴, 0.46 ± 0.12) lying completely in the area excluded at 90% CL by the KARMEN analysis of the absolute number of CC events.
B. Exclusion limits from the spectral shape analysis
In the 2-dimensional complete shape analysis (section III C), the oscillation probabilities P(ν_e → ν_µ) and P(ν_e → ν_τ) themselves depend on ∆m². The limit on the mixing amplitude is then

A(∆m², 90% CL) = P(∆m², 90% CL) / ε(∆m²) .

The resulting limits on A_eτ and A_eµ are shown in Figure 12a and Figure 12c, respectively, as solid exclusion curves (labeled ML), together with the exclusion curves evaluated above (dashed lines). For ν_e → ν_τ the analysis of the spectral shape in L and E is sensitive, and competitive with the evaluation of the absolute event number, only in the region of about 0.5 E/L < ∆m² < 5 E/L, corresponding to ∆m² values of about 3 to 30 eV²/c⁴ for this experiment, whereas for ν_e ↔ ν_µ the shape analysis is far more sensitive. This is due to the expected events from ν̄_µ → ν̄_e with positron energies E_e⁺ > 36 MeV (see also Fig. 3), where electrons from ¹²C(ν_e, e⁻)¹²N_g.s. cannot arise. Fig. 12b and Fig. 12d show the regions in (Φ, Ψ) space, for various ∆m² values and for ∆m² = 2 eV²/c⁴ respectively, excluded by the measurement of ν_e ↔ ν_τ and ν_e ↔ ν_µ mixing by KARMEN. These limits are compared with the exclusion limits from the Bugey ν̄_e disappearance experiment, where the particular value of ∆m² chosen for comparison, 2 eV²/c⁴, has no special significance. Note that there are no limits on A_eτ from accelerator experiments like FNAL E531 below ∆m² = 10 eV²/c⁴.
C. Global limits from the 3 flavor analysis
Up to now, there have been no bounds from ν_e → ν_τ on the mixing angles Φ, Ψ in the region ∆m² ≤ 10 eV²/c⁴ (see for example the FNAL E531 limit in Fig. 10b). With A_eτ = 4U²_e3 U²_τ3 = 4 sin²Φ cos²Φ cos²Ψ (see eq. 2) we can set new limits on the mixing angles Φ and Ψ from the ν_e → ν_τ oscillation search (see Figs. 12b, 13). For a given ∆m², for example ∆m² = 2 eV²/c⁴, the limits on the oscillation amplitudes A_eτ and A_eµ from the ν_e → ν_x search can be combined with other limits from KARMEN as well as from other experiments. Figure 13 demonstrates the complementarity of the different experimental results with respect to the mixing angles.
V. CONCLUSION AND OUTLOOK
The KARMEN experiment has found no positive evidence for ν oscillations in the investigated channels ν_e → ν_τ and ν_e → ν_µ through disappearance of ν_e. Charged current reactions induced by ν_e were analyzed by three different methods, investigating the absolute number of events, the ratio of CC to NC events, and the spectral shape in energy E and target distance L. The sensitivity of the ν_e → ν_x search is limited by the accuracy of the theoretical cross section for ¹²C(ν_e, e⁻)¹²N_g.s. and of the ν flux calculation in the first evaluation, and by the relatively small statistics of 458 events in the case of the shape analysis. The experimental parameter L/E can be measured with high accuracy on an event basis and differs significantly from other ν̄_e disappearance searches at reactors or ν_e → ν_τ appearance searches at accelerators. In the maximum likelihood analysis, the 3 flavor approach assures a fully independent treatment of ν_e ↔ ν_τ and ν_e ↔ ν_µ mixing as free parameters. The sensitivity of KARMEN is competitive with existing limits from oscillation searches at reactors and improves the limits from accelerator experiments for ν_e → ν_τ at ∆m² < 10 eV²/c⁴.
In 1996, the KARMEN detector system was improved by an additional veto counter system consisting of 300 m² of plastic scintillator modules placed within the massive iron shielding of the central detector. This new veto system will significantly reduce the cosmic-induced background for the oscillation search, especially in the appearance channel ν̄_µ → ν̄_e [43]. In the ongoing measuring period we expect again about 400 ν_e-induced CC sequences from ¹²C(ν_e, e⁻)¹²N_g.s.. Therefore, the statistical significance of this ν_e → ν_x investigation will improve slightly, ending up with about 900 ν_e-induced CC reactions in the KARMEN detector after a measuring period of 2-3 years.
We gratefully acknowledge the financial support of the German Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie (BMBF), the Particle Physics and Astronomy Research Council (PPARC) and the Central Laboratory of the Research Councils (CLRC) in the UK.
Table I footnotes: (7) EPT: Elementary Particle Treatment. (8) CRPA: Continuum Random Phase Approximation. (9) OBD: One Body Density. (10) This value for the CC exclusive cross section is calculated by [30] using the OBD of [31], but is also obtained with the original computer code NUEE from [32].
FIG. 3. Simulated detector response in energy, target distance and time difference, including readout efficiency, of ¹²C(ν_e, e⁻)¹²N_g.s. (solid histograms) and ¹²C(ν̄_e, e⁺)¹²B (dashed histograms) for ∆m² = 100 eV²/c⁴, normalized to the same arbitrary number of events.
FIG. 13. 90% CL limits on the mixing angles Φ and Ψ, fixed at ∆m² = 2 eV²/c⁴, deduced from this experiment from the oscillation channels ν̄_µ → ν̄_e appearance and ν_e → ν_µ, ν_e → ν_τ disappearance, in comparison with other experiments: FNAL E531 ν_µ → ν_τ appearance [18], BNL E776 ν_µ → ν_e appearance [42] and a 90% likelihood favoured region from LSND [41] (shaded region).
FIG. 1. Neutrino energy spectra (a) and production time (b) at ISIS. The proton double pulses are repeated with a frequency of 50 Hz.
FIG. 2. Signatures of the ν_e detection reaction ¹²C(ν_e, e⁻)¹²N_g.s.: time and energy distributions of the prompt electron (a, c) and the delayed positron (b, d). Data points are background subtracted; MC expectations are shown as histograms, total background as shaded histograms.
FIG. 11. 3 flavor 90% CL exclusion limits on the mixing amplitudes A_eµ and A_eτ, based on the extracted oscillation limit P(ν_e → ν_x) < 0.169 (90% CL) from the cross section analysis (see eq. 7). Areas to the right of the lines are excluded.
FIG. 12. a, c: KARMEN exclusion limits on the mixing amplitudes A_eτ and A_eµ (solid line: shape ML; dashed line (σ): absolute number of events). b, d: corresponding limits on the mixing angles Ψ and Φ from the limits on (b) A_eτ for several values of ∆m², and (d) A_eµ for ∆m² = 2 eV²/c⁴. For comparison, the exclusion limits from the ν̄_e disappearance search at Bugey [15] are shown, also for ∆m² = 2 eV²/c⁴. In b (d), areas to the left (right) of the curves, respectively, and between the horizontal lines are excluded. In (d), curves 1 and 2 are the KARMEN limits from the ν̄_µ → ν̄_e and ν_µ → ν_e appearance searches.
TABLE I. Comparison of theoretical cross sections σ_th in units of 10⁻⁴² cm², averaged over the neutrino energy from µ⁺ DAR, and the values σ_exp measured with KARMEN (including statistical and systematic errors); for the NC cross section, ν = ν_e + ν̄_µ in the case of no oscillations; R indicates the ratio σ_CC(ν_e)/σ_NC(ν_e + ν̄_µ). | 2019-04-14T02:28:18.644Z | 1998-01-08T00:00:00.000 | {
"year": 1998,
"sha1": "68822889ba2a2aa2cbdd3f2a68f3deaf5d2cb40f",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/hep-ex/9801007",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "68822889ba2a2aa2cbdd3f2a68f3deaf5d2cb40f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
222093291 | pes2o/s2orc | v3-fos-license | Strategies to mitigate impacts of the COVID-19 pandemic on patients treated with deep brain stimulation
Dear Editor,

In the wake of the COVID-19 crisis, 160,000 patients worldwide who have undergone deep brain stimulation (DBS) are now experiencing critical treatment disruptions. These include patients treated for Parkinson's disease, dystonia, epilepsy and essential tremor as well as for psychiatric disorders, like treatment-resistant obsessive-compulsive disorder and Tourette syndrome. With many hospitals overburdened [1] and the potential for community-based infection still high (and increasing), shifting to various forms of telemedicine DBS care has become part of a necessary "natural experiment" to mitigate risk for infection and continue care throughout the pandemic. However, the impacts of COVID-19 on patient outcomes and well-being remain unknown.
Some potential risks of and practical considerations for implementing remote DBS care were outlined by Gross et al. [2], who discussed whether and when to implant new patients with DBS, how to avoid and what to do in case of internal pulse generator (IPG) depletion, and how to address hardware infection or malfunction. Gross et al. provide effective recommendations for addressing potential neurosurgical risks during the pandemic; however, a gap remains in understanding how best to address potential ethical issues that can impact patient well-being in the context of remote DBS care. Our group of DBS clinicians and researchers, currently treating patients using conventional DBS and engaged in research on ethical issues arising in next-generation DBS care, respectively, highlights some of these ethical considerations here.
In the absence of remote programming technology for DBS systems, the shift to telemedicine for patients who wish to continue DBS treatment inevitably entails a greater level of involvement and participation from patients in managing their own care. Whereas before the pandemic, physicians and other healthcare professionals were able to conduct routine observations of motor function, assess changes in cognition, mood, behavior or quality of life, modify or titrate stimulation parameters and assess battery life in person, now most of these critical aspects of care are occurring remotely, resulting in greater patient control and autonomy over their treatment (particularly stimulation). Physicians may widen stimulation parameters within a safe margin to allow patients to "tweak" their stimulation and experiment with minimum thresholds on their own.
This key shift toward greater patient control over stimulation is part of a larger strategy to balance battery conservation with symptom management. Many patients without rechargeable batteries face the possibility of battery depletion during the course of the pandemic; thus, conserving battery life is of high priority. The expiration of the DBS device battery can require hospitalization for emergency battery replacement to avoid negative impacts (e.g. a "rebound effect") of abrupt cessation of stimulation [3], which may uncommonly rise to the level of a medical emergency. Impacts of depletion can be especially problematic for patients receiving DBS for neuropsychiatric disorders, including treatment-resistant depression, obsessive-compulsive disorder and Tourette syndrome, who often run stimulation at higher currents and are more likely to experience battery depletion if they do not have rechargeable batteries. Patients may conserve battery life by titrating settings to the minimum stimulation levels needed to manage their symptoms, and can even turn the device off completely in some circumstances (e.g., while sleeping).
A major ethical obligation when employing strategies that involve enhanced patient control over treatment parameters is to consider what level of control patients are comfortable assuming over their own stimulation. While many patients may welcome additional control over their settings as a source of comfort and even empowerment, others may not feel comfortable altering settings and may experience this responsibility as anxiety-provoking. Evidence from the literature on shared decision making suggests that patients' control preferences over treatment decisions vary widely and are not easily predicted [4]. We recommend that physicians actively explore, using available quantitative [5] or narrative tools [6,7], whether their DBS patients are comfortable with taking on these new responsibilities before incorporating treatment strategies that entail enhanced patient control over stimulation. Remote care approaches should respect and align with patients' control preferences for treatment.
A second ethical concern related to patient control is the potential for unforeseen negative impacts resulting from untested or poorly understood approaches. As with any untested intervention, outcomes are likely to be indeterminate. However, the uncertainty of integrating greater patient autonomy over treatment may be exacerbated by the already elevated baseline levels of uncertainty over DBS outcomes, given the high degree of variation in medical and psychosocial characteristics with the potential to influence DBS outcomes, even under highly controlled conditions. Understanding how patients will respond to changes in stimulation takes time, with most changes in movement, mood or cognition likely to happen naturalistically, that is, outside of the cross-sectional telemedicine visit. Indeed, even in a non-pandemic context, all in-person observations are cross-sectional representations of a continuum of symptom experiences. However, greater dangers to patient well-being may exist when physicians' insights are exclusively mediated by patient report and potentially further obscured by technological and logistical barriers [8]. In this context, patients must not only assume greater responsibility for observing and reliably reporting these changes, but physicians must also consider additional strategies and devices (e.g. employing flexible wearable devices that measure and remotely report on gait impairment, falls, muscle tone, tremors, sleep disturbances, or web-based calculators and smartphone applications that help to estimate device battery life) in order to facilitate remote patient monitoring and to augment physicians' understanding of how best to manage patient-specific approaches to remote care.
A third ethical concern is whether remote care scenarios increase the potential for negative mental health symptoms to manifest or worsen among DBS patients, many of whom have primary or comorbid mental health needs. Limited access to in-person care for the treatment of psychiatric comorbidities (e.g., depression) before and after DBS surgery, as well as post-surgery psychosocial health needs (e.g., adjustment to the device; changes in identity and body image), may require special attention. Further, many DBS patients may be experiencing new distress due to the interaction of COVID-19-related fears with preexisting psychiatric symptoms, potentially compounded by the social isolation imposed by pandemic conditions. These factors combine to put DBS patients with existing (and especially treatment-refractory) mental health treatment needs at significant risk for worsening of mental health symptoms and even suicide in the absence of effective and accessible care. Risks to mental health during the pandemic are critical and should receive equal consideration in relation to physiological and surgical risk concerns. As many researchers have argued [9,10], physicians (of all types) should be ready to address or offer referrals for patients with mental health needs that emerge or are exacerbated during this pandemic.
In sum, an ethically responsible approach to remote DBS care should entail explicit discussions between physicians and patients about patients' control preferences for treatment, and about potential safety concerns in the context of patient-led experiments conducted "in the wild" (in the absence of consistent physician oversight). Further, physicians should identify and closely monitor patients who have the potential to experience emerging or worsening mental health symptoms during the pandemic.
Disclaimer
The views expressed are those of the authors alone, and do not necessarily reflect the views of the NIH or the institutions with which the authors are affiliated.
Declaration of competing interest
The authors declare no competing interests. | 2020-10-02T13:06:23.847Z | 2020-10-02T00:00:00.000 | {
"year": 2020,
"sha1": "f81d864ae00433a79ffb4af62c979497a3707b57",
"oa_license": "CCBYNCND",
"oa_url": "http://www.brainstimjrnl.com/article/S1935861X2030262X/pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "303673412a3cc2599fd910bb8a25b92dfc2f5459",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267362257 | pes2o/s2orc | v3-fos-license | Biochar production under different pyrolysis temperatures with different types of agricultural wastes
The main aim of this study is to determine the physical and chemical properties of biochar synthesized from different materials (rice straw, sawdust, sugar cane, and tree leaves) at different pyrolysis temperatures (400, 600, and 800 °C). The physical properties determined were moisture content, water holding capacity, bulk density, and porosity; the chemical properties were pH, electrical conductivity (EC), organic matter, organic carbon, total nitrogen, potassium, phosphorus, calcium, magnesium, sodium, and sulfur. The results show that the biochar yield decreased with increasing pyrolysis temperature, and the values of the analyzed properties varied depending on the type of biochar and the pyrolysis temperature. The moisture content ranged from 1.11 to 4.18%, and the water holding capacity ranged from 12.9 to 27.6 g water g−1 dry sample. The highest value of bulk density (211.9 kg m−3) was obtained from sawdust at a pyrolysis temperature of 800 °C. The porosity values ranged from 45.9 to 63.7%. The highest values of pH and EC (10.4 and 3.46 dS m−1) were obtained from tree leaves at a pyrolysis temperature of 800 °C. Total organic matter ranged from 66.0 to 98.1%, total organic carbon ranged from 38.3 to 56.9%, and total nitrogen ranged from 0.4 to 1.9%. The highest values of phosphorus and calcium content (134.6 and 649.0 mg kg−1) were obtained from sugar cane at a pyrolysis temperature of 800 °C. The magnesium, sodium and sulfur content had ranges of 10.9–51.7, 1124–1703 and 3568–12,060 mg kg−1, respectively.
Biochar additions contribute to the physical nature of the soil system by affecting the penetration depth, structure, texture, porosity, and consistency through changes in surface area, particle size distribution, pore size distribution, density, and packing 10. The influence of biochar on the physical features of soil can then have a direct impact on plant growth, because the penetration depth and the availability of air and water in the root zone are largely determined by the physical make-up of the soil horizons. Biochar's presence in soil directly affects the soil's response to water, as well as its aggregation, workability during soil preparation, swelling-shrinking dynamics, permeability, capacity to retain cations, and response to ambient temperature changes. In addition, many chemical and biological aspects of soil fertility can be indirectly inferred from these physical properties, such as the physical availability of sites for chemical reactions and the provision of protective habitats for soil microbes 11,12.
Biochar has a higher nutrient retention capacity and a higher resistance to degradation by microorganisms owing to its chemical and colloidal structure 13. Despite the significant losses of nutrients during pyrolysis, biochar has positive effects on soil, because it neutralizes toxins and improves physical properties such as water retention, provides protection against heavy metal pollution, and reduces soil compaction 11. The physical and chemical features of biochar vary with the properties of the raw materials used and the conditions of pyrolysis 14.
Soil physical and biological properties can be improved by using biochar, and it can be used as a preventive measure against heavy metal stress in soil 15,16. Biochar produced from cotton stalks has the capacity to hold more cadmium (Cd) in soil contaminated with Cd; it reduced the biological effect of Cd in soil by changing the morphological structure of Cd 17. Several studies have covered the production of biochar from agricultural wastes, which was used to counteract heavy metals and organic pollutants in soil and water 18.
Biochar could also be used to improve anaerobic digestion, as mentioned by Valentin et al. 19, who stated that properties such as specific surface area (SSA), cation exchange capacity, the presence of functional groups, and electrical conductivity were found favorable for increased methane production, reduction of the lag phase, and adsorption of inhibitors.
Published work on biochar application to soil has predominantly focused on agronomic benefits, while the physical and chemical properties of the produced biochar and their effects on soil structure and texture have received little attention. Therefore, there is a lack of information about aspects of biochar that are important for plant growth and soil improvement. Thus, the main aim of this work is to determine these properties of biochar. In particular, the physical properties (bulk density, moisture content, water holding capacity, and porosity) and chemical properties (pH, electrical conductivity (EC), organic matter, organic carbon, total nitrogen, and potassium) of different types of biochar were analyzed at different pyrolysis temperatures.
Materials and methods
The experiments were carried out at the Agricultural and Bio-Systems Engineering Department, Faculty of Agriculture, Moshtohor, Benha University, during April and May of the 2023 season, under international, national and Benha University regulations consistent with the relevant guidelines and legislation. Biochar was produced from selected agricultural wastes, mainly rice straw, sawdust, sugar cane plant residues, and tree leaves. The physical and chemical properties of these four raw materials that are relevant to biochar production are listed in Tables 1 and 2, respectively.
All study materials (rice straw, sawdust, sugar cane plant residues, and tree leaves) were sun-dried and cut into small pieces (less than 4-5 cm), which were then placed in a ceramic vessel (500 cm3) and heated in a commercial electric furnace (SOMO-01 Isuzu, Japan). The material was charred for 6 h at different pyrolysis temperatures (400, 600, and 800 °C). The experiments were repeated three times and averages were taken.
Biochar physical properties
Moisture content (MC) of biochar was determined by drying the product at 105 °C for 24 h or to a constant weight according to ASAE 20 .
The biochar yield was calculated using Eq. (1):

biochar yield (g kg−1) = weight of pyrolyzed material (g) / mass of raw material input (kg).   (1)

The water holding capacity (WHC) of biochar was determined by measuring the weight of a wet sample (Wi). This was done by placing samples in a beaker with distilled water for 1-2 days. Excess water was drained through Whatman #2 filter paper, and the saturated sample was weighed again (Ws). The water holding capacity was then calculated following Ahn et al. 21 (Eq. (2)), where Wi is the initial weight of the biochar (g), Ws is the saturated weight of the biochar (g), and MC is the initial moisture content of the sample (decimal).
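As an illustration of these calculations, the R sketch below computes the yield of Eq. (1), a moisture content, and a water holding capacity. Because the exact WHC formula of Ahn et al. is not reproduced in the text, the WHC expression (water held per gram of dry sample) is an assumption, and all weights are hypothetical.

# Hypothetical weights; Eq. (1) yield, then moisture content and WHC.
w_raw  <- 0.5                      # mass of raw material input (kg)
w_char <- 180                      # weight of pyrolyzed material (g)
yield  <- w_char / w_raw           # biochar yield, g kg^-1 (Eq. 1)

w_wet <- 10.0; w_dry <- 9.7        # weights before/after drying at 105 C (g)
mc <- (w_wet - w_dry) / w_wet      # moisture content (decimal)

# Assumed WHC form (g water per g of dry sample), not the verbatim
# equation of Ahn et al.: water held at saturation over dry mass.
w_i <- 5.0; w_s <- 18.5            # initial and saturated weights (g)
whc <- (w_s - w_i * (1 - mc)) / (w_i * (1 - mc))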
The bulk density (BD) of biochar was determined by placing the biochar in a scaled flask of known volume (1 L) and measuring the mass of biochar in the flask. The sample weight was recorded and the bulk volume was measured. The bulk density was calculated using Eq. (3):

bulk density (kg m−3) = mass of biochar (kg) / bulk volume (m3).   (3)

Biochar porosity (εa) was calculated using the equation given in refs. 22-24, where εa is the biochar porosity (%), ρw is the water density (kg m−3), ρwb is the biochar bulk density (kg m−3), ρash is the ash density (kg m−3), ρom is the organic matter density (kg m−3), DM is the dry matter (decimal), and OM is the organic matter (decimal).
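The bulk density calculation of Eq. (3) is straightforward to mirror numerically. Because the porosity equation of refs. 22-24 is not reproduced in the text, the porosity line in this R sketch uses a generic bulk/solid-density relation with an assumed solid density, purely as a stand-in.

# Bulk density per Eq. (3): mass over bulk volume (hypothetical numbers).
m_biochar <- 0.190                 # kg of biochar in the 1 L scaled flask
v_bulk    <- 0.001                 # bulk volume, m^3
bd <- m_biochar / v_bulk           # kg m^-3

# Stand-in porosity relation with an assumed solid (particle) density;
# the study's own equation (refs. 22-24) also involves ash and OM terms.
rho_solid <- 450                   # assumed solid density, kg m^-3
porosity  <- (1 - bd / rho_solid) * 100   # %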
Biochar chemical properties
Electrical conductivity (EC) and pH were measured in a 1:5 (v/v) material/water extract using a glass electrode. Organic carbon (OC) was determined by the dry combustion method at 540 °C for 4 h, as specified by 25, which involves heating a biochar sample in an oxygen-free environment until it loses all its volatile matter, then weighing the residue and calculating the organic carbon content by subtracting the inorganic carbon content. Organic matter was measured by combustion at 550 °C for 8 h according to 26, and total nitrogen (TN) was measured by Kjeldahl digestion (model VAPODEST; range 0.1 mg to 200 g N; Germany) 27. Potassium (K) content was determined by atomic absorption (model EMI9783B; range 190-930 nm; USA), and phosphorus (P) content was determined using the colorimetric method 28. The quantities of calcium (Ca), magnesium (Mg), and sodium (Na) were determined by a flame photometer (model Jenway PFP7; range 0-160 mmol L−1; USA). Sulfur content was determined using the barium chloride method following 29.
Statistical analysis
The data were analyzed with SPSS version 21; one-way ANOVA and Duncan's Multiple Range Test (DMRT) were performed at a significance level of p < 0.05 (95% confidence limit) to identify significant differences between treatment means for the different parameters.
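For readers who want to reproduce this kind of analysis, the sketch below shows how a one-way ANOVA followed by Duncan's Multiple Range Test can be run in R with the agricolae package; the data frame, its column names, and its values are hypothetical stand-ins, not the study's data.

# One-way ANOVA + Duncan's Multiple Range Test (requires agricolae).
library(agricolae)
d <- data.frame(temperature = factor(rep(c(400, 600, 800), each = 3)),
                yield = c(378, 380, 376, 291, 288, 293, 216, 218, 215))
fit <- aov(yield ~ temperature, data = d)
summary(fit)                                     # ANOVA at p < 0.05
duncan.test(fit, "temperature", console = TRUE)  # DMRT letter groupings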
Biochar yield
Table 3 and Fig. 1 show the biochar yield for the different biochar types (rice straw, sawdust, sugar cane, and tree leaves) at different pyrolysis temperatures (400, 600, and 800 °C). The results indicate that the biochar yield decreased with increasing pyrolysis temperature. Increasing the temperature from 400 to 800 °C significantly decreased the yield from 378.2 to 216.7 g kg−1 (to 57.29% of the 400 °C yield), from 331.4 to 204.1 g kg−1 (to 61.59%), from 450.1 to 322.5 g kg−1 (to 71.65%), and from 277.9 to 165.0 g kg−1 (to 59.37%) for rice straw, sawdust, sugar cane plant residues, and tree leaves, respectively. The biochar yield decreased with increasing pyrolysis temperature as a result of the increased burning rate, the variation in the lignin and cellulose content of the biomass, and the conversion of organic matter to ash, which reduced the carbon content of the biochar. These results agree with those obtained by Jindo et al. 5, who found that the yield of biochar from apple tree branch, oak tree, rice husk, and rice straw decreased from 283 to 155 g kg−1, 358 to 191 g kg−1, 486 to 320 g kg−1, and 393 to 183 g kg−1, respectively, when the pyrolysis temperature increased from 400 to 800 °C. The results also indicate that the highest biochar yield (450.1 g kg−1) was obtained from sugar cane at a pyrolysis temperature of 400 °C, because sugar cane contains more total solids than the other materials used in this study, while the lowest biochar yield (165.0 g kg−1) was obtained from tree leaves at a pyrolysis temperature of 800 °C. These results agree with those obtained by Sarfraz et al. 30, who found that the highest biochar yield was obtained at a pyrolysis temperature of 400 °C and the lowest at a pyrolysis temperature of 700 °C.
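As a numerical check on the percentages above (which give the fraction of the 400 °C yield retained at 800 °C), the corresponding relative decrease for rice straw, for example, is

$\frac{378.2 - 216.7}{378.2} \times 100 \approx 42.7\%,$

while $216.7/378.2 \times 100 \approx 57.3\%$ is the retained share; the other materials follow the same computation.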
Physical properties
Table 4 shows the physical properties (moisture content, water holding capacity, bulk density, and porosity) of the different types of biochar (rice straw, sawdust, sugar cane, and tree leaves) at different pyrolysis temperatures (400, 600, and 800 °C). The results indicate that the moisture content (MC) decreased with increasing pyrolysis temperature. Increasing the pyrolysis temperature from 400 to 800 °C significantly decreased the MC from 2.64 to 1.11%, 2.59 to 1.34%, 3.17 to 1.66%, and 4.18 to 2.19% for rice straw, sawdust, sugar cane, and tree leaves, respectively. The results also show that the highest moisture content (4.18%) was obtained from tree leaves at a pyrolysis temperature of 400 °C, while the lowest moisture content (1.11%) was obtained from rice straw at a pyrolysis temperature of 800 °C.
The water holding capacity (WHC) significantly increased with increasing pyrolysis temperature. Increasing the pyrolysis temperature from 400 to 800 °C increased the WHC from 12.9 to 22.5, 20.3 to 24.1, 24.9 to 27.6, and 20.8 to 24.8 g water g−1 dry sample for rice straw, sawdust, sugar cane, and tree leaves, respectively. The highest WHC (27.6 g water g−1 dry sample) was obtained from sugar cane at a pyrolysis temperature of 800 °C, while the lowest WHC (12.9 g water g−1 dry sample) was obtained from rice straw at a pyrolysis temperature of 400 °C. These results agree with those obtained by Alkhasha et al. 31, who found that the WHC of biochar from date palm wastes increased with increasing pyrolysis temperature.

The bulk density (BD) also significantly increased with increasing pyrolysis temperature. Increasing the pyrolysis temperature from 400 to 800 °C increased the bulk density from 161.5 to 187.1, 195.0 to 211.9, 175.7 to 194.1, and 188.0 to 199.4 kg m−3 for rice straw, sawdust, sugar cane, and tree leaves, respectively. The highest bulk density (211.9 kg m−3) was obtained from sawdust at a pyrolysis temperature of 800 °C, while the lowest bulk density (161.5 kg m−3) was obtained from rice straw at a pyrolysis temperature of 400 °C.
The porosity decreased significantly with increasing pyrolysis temperature. Increasing the pyrolysis temperature from 400 to 800 °C decreased the porosity from 63.7 to 56.1%, 51.0 to 45.9%, 61.8 to 54.3%, and 55.5 to 49.2% for rice straw, sawdust, sugar cane, and tree leaves, respectively. The highest porosity (63.7%) was obtained from rice straw at a pyrolysis temperature of 400 °C, while the lowest porosity (45.9%) was obtained from sawdust at a pyrolysis temperature of 800 °C.
The biochar porosity depended on the bulk density and moisture content of the biochar, decreasing as the bulk density increased. The results indicate that the porosity of biochar decreased from 63.7 to 56.1%, 51.0 to 45.9%, 61.8 to 54.3%, and 55.5 to 49.2% for rice straw, sawdust, sugar cane, and tree leaves, respectively, as the bulk density increased from 161.5 to 187.1, 195.0 to 211.9, 175.7 to 194.1, and 188.0 to 199.4 kg m−3 and the moisture content decreased from 2.64 to 1.11%, 2.59 to 1.34%, 3.17 to 1.66%, and 4.18 to 2.19%. These results agree with those obtained by Brewer et al. 32, who found that the biochar density increased from 250 to 600 kg m−3 when the pyrolysis temperature increased from 350 to 450 °C for wood biochars.
Biochar chemical properties
Table 5 shows the analyzed chemical characteristics (pH, EC, organic matter, organic carbon, total nitrogen, potassium, phosphorus, calcium, magnesium, sodium, and sulfur content) of the different types of biochar (rice straw, sawdust, sugar cane, and tree leaves) at different pyrolysis temperatures (400, 600, and 800 °C).
The results indicate that the pH significantly increased with increasing pyrolysis temperature. Increasing the pyrolysis temperature from 400 to 800 °C increased the pH from 8.2 to 9.4, 7.3 to 7.6, 6.6 to 8.9, and 8.7 to 10.4 for rice straw, sawdust, sugar cane, and tree leaves, respectively. These results agree with those obtained by Alghashm et al. 4, who found that the pH of biochar increased from 9.19 to 12.52 as the pyrolysis temperature rose from 400 to 900 °C. The results also indicate that the highest pH (10.4) was obtained from tree leaves at a pyrolysis temperature of 800 °C, while the lowest pH (6.6) was obtained from sugar cane at a pyrolysis temperature of 400 °C. However, biochar pH values may vary depending on the feedstock and production process. The observed increase in the pH of the four biochar types at higher temperatures is probably a consequence of the relative concentration of non-pyrolyzed inorganic elements that were present in the original feedstocks 33. The authors of that study found that the pH increased from 7.9 to 8.6, 5.9 to 7.2, 8.7 to 10.3, and 5.8 to 8.0 when the pyrolysis temperature increased from 400 to 500 °C for peanut hull, pecan shell, poultry litter, and switchgrass, respectively.
The results also indicate that the EC significantly increased with increasing pyrolysis temperature. Increasing the pyrolysis temperature from 400 to 800 °C increased the EC from 1.48 to 2.90, 0.94 to 1.52, 0.82 to 1.27, and 2.11 to 3.46 dS m−1 for rice straw, sawdust, sugar cane, and tree leaves, respectively. The highest EC (3.46 dS m−1) was obtained from tree leaves at a pyrolysis temperature of 800 °C, while the lowest EC (0.82 dS m−1) was obtained from sugar cane at a pyrolysis temperature of 400 °C. These results agree with those obtained by Shenbagavalli and Mahimairaja 34, who found that the EC of biochar (paddy straw, maize stover, groundnut shell, coconut shell, coir waste, and prosopis wood) ranged from 0.39 to 4.18 dS m−1.
The organic matter (OM) content increased significantly with increasing pyrolysis temperature. Increasing the pyrolysis temperature from 400 to 800 °C increased the organic matter from 66.0 to 90.5%, 74.8 to 96.7%, 87.8 to 98.1%, and 71.0 to 95.5% for rice straw, sawdust, sugar cane, and tree leaves, respectively. These results agree with those obtained by Wu et al. 35, who found that the organic matter of biochar prepared from Typha orientalis increased from 31.63 to 40.54% when the pyrolysis temperature increased from 300 to 500 °C. The highest organic matter content (98.1%) was obtained from sugar cane at a pyrolysis temperature of 800 °C, while the lowest organic matter content (66.0%) was obtained from rice straw at a pyrolysis temperature of 400 °C. These results agree with those obtained by 30,36, who found the highest organic matter content in biochar synthesized from sugar cane. Organic carbon (OC) content also significantly increased with increasing pyrolysis temperature. Increasing the pyrolysis temperature from 400 to 800 °C increased the organic carbon content from 38.3 to 52.5%, 43.4 to 56.1%, 50.9 to 56.9%, and 41.2 to 55.4% for rice straw, sawdust, sugar cane, and tree leaves, respectively. These results agree with those obtained by Yargicoglu et al. 37, who found that the organic carbon content ranged from 23.5 to 78.1%. The highest organic carbon content (56.9%) was obtained from sugar cane at a pyrolysis temperature of 800 °C, while the lowest organic carbon content (38.3%) was obtained from rice straw at a pyrolysis temperature of 400 °C.
The total nitrogen (TN) decreased significantly with increasing pyrolysis temperature. Increasing the pyrolysis temperature from 400 to 800 °C decreased the total nitrogen from 0.9 to 0.4%, 1.4 to 0.5%, 1.9 to 1.3%, and 1.5 to 0.7% for rice straw, sawdust, sugar cane, and tree leaves, respectively. These results agree with those obtained by Jindo et al. 5, who found that the total nitrogen decreased from 0.76 to 0.34%, 0.69 to 0.32%, 0.69 to 0.22%, and 1.22 to 0.25% for apple tree, oak tree, rice husk, and rice straw, respectively, when the pyrolysis temperature increased from 400 to 800 °C. The highest TN content (1.9%) was obtained from sugar cane at a pyrolysis temperature of 400 °C, while the lowest TN content (0.4%) was obtained from rice straw at a pyrolysis temperature of 800 °C.
The potassium (K) content significantly increased with increasing pyrolysis temperature. Increasing the pyrolysis temperature from 400 to 800 °C increased the potassium content from 0.6 to 1.6%, 1.3 to 2.7%, 2.2 to 3.5%, and 1.8 to 2.9% for rice straw, sawdust, sugar cane, and tree leaves, respectively. The highest potassium content (3.5%) was obtained from sugar cane at a pyrolysis temperature of 800 °C, while the lowest potassium content (0.6%) was obtained from rice straw at a pyrolysis temperature of 400 °C.
The results indicate that the phosphorus (P) content significantly increased with increasing pyrolysis temperature. Increasing the pyrolysis temperature from 400 to 800 °C increased the phosphorus content from 47.6 to 62.3, 65.5 to 121.3, 77.3 to 134.6, and 59.6 to 70.9 mg kg−1 for rice straw, sawdust, sugar cane, and tree leaves, respectively. The results also indicate that the highest phosphorus content (134.6 mg kg−1) was obtained from sugar cane at a pyrolysis temperature of 800 °C, while the lowest phosphorus content (47.6 mg kg−1) was obtained from rice straw at a pyrolysis temperature of 400 °C.
The results also indicate that the calcium (Ca) content significantly increased with increasing pyrolysis temperature. Increasing the pyrolysis temperature from 400 to 800 °C increased the calcium content from 241.3 to 264.2, 491.6 to 546.1, 513.1 to 649.0, and 353.7 to 444.9 mg kg−1 for rice straw, sawdust, sugar cane, and tree leaves, respectively. The highest calcium content (649.0 mg kg−1) was obtained from sugar cane at a pyrolysis temperature of 800 °C, while the lowest calcium content (241.3 mg kg−1) was obtained from rice straw at a pyrolysis temperature of 400 °C.
The magnesium (Mg) content significantly increased with increasing pyrolysis temperature. Increasing the pyrolysis temperature from 400 to 800 °C increased the magnesium content from 10.9 to 13.2, 21.4 to 27.8, 47.2 to 51.7, and 30.7 to 39.0 mg kg−1 for rice straw, sawdust, sugar cane, and tree leaves, respectively. The highest magnesium content (51.7 mg kg−1) was obtained from sugar cane at a pyrolysis temperature of 800 °C, while the lowest magnesium content (10.9 mg kg−1) was obtained from rice straw at a pyrolysis temperature of 400 °C.
The sodium (Na) content also significantly increased with increasing pyrolysis temperature. Increasing the pyrolysis temperature from 400 to 800 °C increased the sodium content from 1124 to 1329, 1034 to 1109, 1604 to 1703, and 1204 to 1509 mg kg−1 for rice straw, sawdust, sugar cane, and tree leaves, respectively. The highest sodium content (1703 mg kg−1) was obtained from sugar cane at a pyrolysis temperature of 800 °C, while the lowest sodium content (1124 mg kg−1) was obtained from rice straw at a pyrolysis temperature of 400 °C. The sulfur (SO4) content increased with increasing pyrolysis temperature. Increasing the pyrolysis temperature from 400 to 800 °C increased the sulfur content from 3568 to 4360, 9752 to 10,138, 11,235 to 12,060, and 10,334 to 11,241 mg kg−1 for rice straw, sawdust, sugar cane, and tree leaves, respectively. The highest sulfur content (12,060 mg kg−1) was obtained from sugar cane at a pyrolysis temperature of 800 °C, while the lowest sulfur content (3568 mg kg−1) was obtained from rice straw at a pyrolysis temperature of 400 °C.
Conclusions
The yield and the physical and chemical properties of biochar synthesized from different agricultural wastes (rice straw, sawdust, sugar cane plant residues, and tree leaves) under different pyrolysis temperatures (400, 600, and 800 °C) were determined. The results revealed that the yield was affected by both the temperature and the composition of the raw materials; sugar cane gave the highest yield (450.1 g kg−1) compared to the other materials. The moisture content of the biochar ranged from 1.11 to 4.18%, and the WHC ranged from 12.9 to 27.6 g water g−1 dry sample. The bulk density ranged from 161.5 to 211.9 kg m−3. The porosity ranged from 45.9 to 63.7%. The pH ranged from 6.6 to 10.4, and the EC ranged from 0.82 to 3.46 dS m−1. The total organic matter content ranged from 66.0 to 98.1%, the total organic carbon content ranged from 38.3 to 56.9%, and the TN content ranged from 0.4 to 1.9%. The total K content ranged from 0.6 to 3.5%. The P and Ca contents ranged from 47.6 to 134.6 and 241.3 to 649.0 mg kg−1, respectively, for the different biochar types. The magnesium, sodium, and sulfur contents ranged from 10.9 to 51.7, 1124 to 1703, and 3568 to 12,060 mg kg−1, respectively. More studies are recommended on the properties of biochar produced from mixed biomass, and further studies are recommended on the feasibility of using biochar for soil improvement compared to other commercial fertilizers.
Table 1. Physical properties of raw materials used in the production of biochar. Means on the same row with different superscripts are significantly different (p < 0.05).
Table 2. Chemical properties of raw materials used in the production of biochar. Means on the same row with different superscripts are significantly different (p < 0.05).
Table 3. Biochar yield for different materials from 6 h of pyrolysis at different temperatures. Means with different superscripts are significantly different (p < 0.05).
Table 4. Physical properties of different biochar types. Means on the same column with different superscripts are significantly different (p < 0.05). *WHC is water holding capacity. | 2024-02-02T06:16:17.916Z | 2024-02-01T00:00:00.000 | {
"year": 2024,
"sha1": "644c6eb97b3e8fc234d588568ef1678657014ced",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "6d3f0deaa078a20e562f39df5a74e298f63bea96",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
146810772 | pes2o/s2orc | v3-fos-license | DstarM: an R package for analyzing two-choice reaction time data with the D∗M method
The decision process in choice reaction time data is traditionally described in detail with diffusion models. However, the total reaction time is assumed to consist of the sum of a decision time (as modeled by the diffusion process) and the time devoted to nondecision processes (e.g., perceptual and motor processes). It has become standard practice to assume that the nondecision time is uniformly distributed. However, a misspecification of the nondecision time distribution introduces bias in the parameter estimates for the decision model. Recently, a new method has been proposed (called the D∗M method) that allows the estimation of the decision model parameters, while leaving the nondecision time distribution unspecified. In a second step, a nonparametric estimate of the nondecision time distribution may be retrieved. In this paper, we present an R package that estimates parameters of several diffusion models via the D∗M method. Moreover, it is shown in a series of extensive simulation studies that the parameters of the decision model and the nondecision distributions are correctly retrieved.
Introduction
Decision making is actively studied in both psychology and neuroscience. Many studies attempt to gain insight into decision processes via a combination of two-choice reaction time experiments and mathematical modeling. A number of mathematical models, collectively known as sequential sampling models, have been developed over the past decades (see Ratcliff et al. 2016). A common assumption of all these models is that noisy evidence is accumulated (or integrated) over time to arrive at a decision. The most widely used and successful model from this class is the Ratcliff diffusion model (DDM; Ratcliff 1978). Following the presentation of a stimulus, a participant accumulates information for either one of two possible responses. Once the level of accumulated information exceeds a certain boundary, the participant makes the corresponding response. The diffusion process is illustrated in Fig. 1. In the DDM, the evidence criteria are relative: if the accumulated evidence for one option goes up, the evidence decreases by the same amount for the other option. For easy stimuli, the evidence accumulates quickly to the corresponding boundary (leading to quick and accurate responses), while the opposite happens for difficult stimuli.
More formally, the DDM has a starting point ξ* and two boundaries a and 0 (corresponding to the decision criteria). The speed at which the level of information evolves is called the drift rate and is denoted μ (if μ > 0, the process tends to drift toward the upper boundary, and vice versa). If the starting point ξ* lies in the middle between the boundaries a and 0, a participant's decision is seen as a priori unbiased for the two response options. Usually, the starting point is expressed as a proportion of the boundary (ξ = ξ*/a) to facilitate interpretation. The information accumulation process is noisy, and the size of the noise is regulated by a standard deviation s (it describes how the drift rate varies within a trial). However, s is fixed to 1 for identification purposes.

If the same stimulus is given to the same participant repeatedly, some of the parameters will vary across these occasions; this is trial-to-trial variability. Thus, the starting point ξ is not a constant but can vary over trials and is therefore modeled as a uniform distribution centered at z with width sz. Similarly, the drift rate μ can vary over trials and is modeled as a normal distribution with mean drift rate ν and variance sv². For more information on the Ratcliff model, see Ratcliff and Tuerlinckx (2002).

Fig. 1. Graphical representation of a diffusion model. At the beginning of a trial, a participant's level of information starts at ξ*. Over time, the level of information accumulates until it reaches either boundary a or 0. The rate of information accumulation is the drift rate μ. After a decision boundary is reached, the participant makes a response. The gray lines represent information accumulation processes for five trials with different drift rates.
If the same stimulus is given to the same participant repeatedly, some of the parameters will vary across these occasions; this is trial-to-trial variability. Thus, the starting point ξ is not a constant but can vary over trials and is therefore modeled as a uniform distribution centered at z with width sz. Similarly, the drift rate μ can vary over trials and is modeled as a normal distribution with mean drift rate Fig. 1 Graphical representation of a diffusion model. At the beginning of a trial, a participant's level of information starts at ξ * . Over time, the level of information accumulates until it reaches either boundary a or 0. The rate of information accumulation is the drift rate μ. After a decision boundary is reached, the participant makes a response. The gray lines represent information accumulation processes for five trials with different drift rates ν and variance sv 2 . For more information on the Ratcliff model, see Ratcliff and Tuerlinckx (2002).
The DDM describes in detail what happens during the decision process. However, the total reaction time is not uniquely the result of the decision process. There are also nondecision processes playing a role and they entail everything that does not contribute to the decision making but does take up time, from the encoding of visual information to the neural representation of stimuli to eliciting a motor response (Wagenmakers, 2009). In early applications of diffusion models, the nondecision processes are modeled by a constant. Extensions of these models also estimate the variance of nondecision processes. As a consequence, these models impose a distribution on the nondecision processes. Commonly, the nondecision distribution is assumed to be uniform. However, if this assumption is violated, then bias is introduced in the parameter estimates of the decision model (Ratcliff, 2013). To circumvent specifying a distribution for the nondecision processes, a new method has been proposed, called D * M (Verdonck & Tuerlinckx, 2016). Until now, no publicly available software application capable of doing a D * M analysis existed. Therefore we developed the R package DstarM and provide a thorough quality check of its performance. This paper is structured as follows. First, we provide a brief summary of the D * M method and the Ratcliff diffusion model. Next, we provide a tutorial on how to run a D * M analysis in R using DstarM by analyzing data from an empirical study (Wagenmakers et al., 2008). Then, we validate the performance of our implementation in DstarM via simulation studies and compare the results to those of traditional analyses. Finally, we discuss some theoretical limitations of the D * M method and some practical issues with traditional analyses.
The D * M method
Assume we have data from a reaction time (RT) task with two conditions (e.g., a speed-accuracy manipulation) and two responses (e.g., correct and error). Observed RTs are (positive) random variables that can be seen as the sum of two random variables: the time spent on the decision process and the residual or nondecision time. At the level of densities, this assumption implies that the total RT density is a convolution of the nondecision time density and the decision time density:

$f(t) = (m * r)(t)$,   (1)

where f(t) denotes the total RT probability density function (pdf), m(t) denotes the decision time pdf, r(t) denotes the nondecision time pdf, and ∗ denotes the convolution operator. All pdfs are a function of time t, but we omit this in further equations for simplicity. Generally, in choice RT experiments, there are two densities for a given condition c (c = 1, . . . , C): f_1c for the correct and f_0c for the error response. Both are degenerate densities, which means that they do not integrate to one but to the probability of a correct and an error response for condition c, respectively. A further simplification of notation is achieved by denoting a unique condition-response pair (c, x) by a single index p (p = 1, . . . , P). Many methods exist to estimate the parameters of f_p (with p = 1, . . . , P). Most commonly, a discrepancy measure between data and model is defined and this is directly minimized as a function of the model parameters. Such a procedure requires the specification of the nondecision distribution. The most popular software packages for estimating the diffusion model assume a uniform nondecision distribution, which has two parameters: the center and the range. However, as mentioned above, the results of such an approach may depend strongly on this specific assumption.
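To make Eq. 1 concrete, the short R sketch below convolves two densities on a discrete time grid; both densities are hypothetical stand-ins chosen only to illustrate the operation.

# A minimal sketch of Eq. 1 on a discrete time grid.
tt <- seq(0, 5, by = 0.01)
dt <- 0.01
m  <- dgamma(tt, shape = 4, rate = 8)      # stand-in decision density
r  <- dunif(tt, min = 0.2, max = 0.4)      # stand-in nondecision density
f  <- convolve(m, rev(r), type = "open")[seq_along(tt)] * dt
# f approximates (m * r)(t) on tt; the dt factor turns the discrete
# convolution sum into an approximation of the convolution integral.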
In contrast, the D∗M method (Verdonck & Tuerlinckx, 2016) circumvents the problem of specifying a nondecision distribution via a simple identity based on the commutative property of convolutions. Consider two distinct condition-response pairs p and p′ for which the same but unknown nondecision time distribution can be assumed. Then it holds that:

$f_p * m_{p'} = (m_p * r) * m_{p'} = (m_{p'} * r) * m_p = f_{p'} * m_p$.   (2)

Equation 2 is the foundation of D∗M: an expression is obtained that only depends on the total RT densities of pairs p and p′ and the decision densities for these pairs, but not on the nondecision time distribution. When we replace f_p and f_p′ by their observed counterparts ($\hat{f}_p$ and $\hat{f}_{p'}$, respectively), the identity will no longer hold exactly. However, the parameters of the decision pdf can then be estimated by minimizing the discrepancy between the left- and right-hand side (simultaneously for multiple such pairs):

$D\big(\hat{f}_p * m_{p'}(\theta),\; \hat{f}_{p'} * m_p(\theta)\big)$,   (3)

where $\hat{f}_p$ is the observed RT distribution for condition-response pair p.
As the discrepancy D, we use a Chi-square distance between the left- and right-hand sides of Eq. 2. This difference can be summed over every unique combination of condition-response pairs, which results in the objective function T to be minimized as a function of the parameter vector θ:

$T(\theta) = \sum_{p < p'} D\big(\hat{f}_p * m_{p'}(\theta),\; \hat{f}_{p'} * m_p(\theta)\big)$.   (4)

In words, T(θ) is a function of the model parameters describing the decision distribution that calculates the sum of the differences between the left-hand side and right-hand side of Eq. 2 for all unique combinations of condition-response pairs. One restriction must be imposed on the estimation procedure: the variance of the model distribution must be smaller than or equal to the variance of the data distribution. If we assume that the decision model and the nondecision model are independent, then the sum of their variances is equal to the variance of the data distribution. Equivalently, the variance of the nondecision distribution is equal to the variance of the total distribution minus the variance of the decision distribution. Since the variance of the nondecision distribution cannot be negative, this implies the restriction: the variance of the total distribution must be larger than or equal to the variance of the decision distribution. By enforcing that the variance of the nondecision distribution is non-negative, the procedure ensures that a nondecision distribution exists.
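A rough numerical rendering of this idea for a single pair of condition-response pairs is sketched below in R; the symmetrized Chi-square form used here is an assumption for illustration (DstarM defines its own discrepancy internally).

# Sketch of the D*M idea for one pair (p, p'): convolve the observed
# density of each pair with the model decision density of the other,
# then compare the two sides of Eq. 2.
pair_discrepancy <- function(f_p, f_q, m_p, m_q, dt = 0.01) {
  lhs <- convolve(f_p, rev(m_q), type = "open") * dt
  rhs <- convolve(f_q, rev(m_p), type = "open") * dt
  ok  <- (lhs + rhs) > 0
  # assumed symmetrized chi-square distance between the two sides
  sum((lhs[ok] - rhs[ok])^2 / (lhs[ok] + rhs[ok])) * dt
}
# T(theta) is the sum of pair_discrepancy over all unique pairs that
# share a nondecision distribution.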
In the previous equations, it is assumed that all condition-response pairs have the same nondecision distribution. However, one may hypothesize that one set of condition-response pairs shares one nondecision distribution and a second set another. To estimate parameters of conditions with different nondecision distributions, we calculate the objective function separately for every set with the same nondecision distribution and sum the outcomes. In the most unrestricted estimation scenario, this implies that only reaction times for responses A and responses B within the same condition have an identical nondecision distribution, an assumption made by most other estimation software.
After obtaining the decision distributions, the nondecision distribution can be estimated by minimizing the following function:

$D\big(\hat{\bar{f}},\; \bar{m}(\hat{\theta}) * r\big)$,   (5)

where the average data distribution $\bar{f}$ and the average decision distribution $\bar{m}(\theta)$ are the sums of the data distributions and decision distributions divided by the number of condition-response pairs that go into them, respectively. Again, the sum only refers to condition-response pairs that were assumed to have the same nondecision distribution when estimating the parameters of the decision model. This procedure is akin to a deconvolution. For traditional DDM analyses, we obtain parameter estimates by directly minimizing the Chi-square difference between the observed data distribution and the model distribution, for each condition-response pair.
This approach is equivalent to the Chi-square approach used in Ratcliff and Tuerlinckx (2002).
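To convey what the deconvolution step does, the R sketch below searches for a grid-based density r minimizing the distance between the average data density and the average decision density convolved with r. The softmax parameterization and squared-error criterion are assumptions for illustration, not the package's internals, and a coarse grid keeps optim manageable.

# Sketch of the second step: with theta fixed, estimate r on the grid.
estimate_nd <- function(f_bar, m_bar, dt) {
  n   <- length(f_bar)
  obj <- function(w) {
    r    <- exp(w) / (sum(exp(w)) * dt)       # proper density on the grid
    pred <- convolve(m_bar, rev(r), type = "open")[1:n] * dt
    sum((f_bar - pred)^2)                     # assumed squared-error fit
  }
  fit <- optim(rep(0, n), obj, method = "BFGS", control = list(maxit = 500))
  exp(fit$par) / (sum(exp(fit$par)) * dt)     # the estimated r(t)
}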
Numerical procedures in DstarM
The D∗M procedure has been implemented in the R package DstarM. Before explaining how to use DstarM (see the next section), we discuss some technical details first. When dealing with data distributions (i.e., $\hat{f}$), we use a kernel-based approach (with a uniform kernel of bandwidth equal to 1) to derive them from the raw reaction times. The same kernel is subsequently used to smooth the average estimated decision distribution (to avoid bias). It is possible to change the default bandwidth of 1. Model distributions of the DDM are obtained via a numerical procedure (Voss & Voss, 2008) as implemented in the R package rtdists (Singmann et al., 2016).
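As an illustration of these two ingredients, the sketch below evaluates a DDM decision density on the recommended grid via rtdists::ddiffusion (with hypothetical parameter values and t0 = 0, so that only the decision time is modeled) and applies a uniform moving-average kernel as a stand-in for the smoothing step.

# DDM decision density on the grid, then uniform-kernel smoothing.
library(rtdists)
tt <- seq(0, 5, by = 0.01)
m_up <- ddiffusion(tt, response = "upper", a = 1.2, v = 2, t0 = 0, sv = 0.5)
smooth_unif <- function(x, bw = 1) {          # bw in grid steps
  k <- rep(1, 2 * bw + 1) / (2 * bw + 1)
  as.numeric(stats::filter(x, k, sides = 2))  # NA at the two grid edges
}
m_smooth <- smooth_unif(m_up)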
In DstarM, all minimizations are done using Differential Evolution, implemented in the R package DEoptim (Ardia et al., 2015; Mullen et al., 2011). To ensure full user customization, all arguments of Differential Evolution can be changed in DstarM, and users can run the estimation in parallel. It is strongly advised to run the Differential Evolution procedure several times (we settled on five) and then choose the analysis with the lowest objective function value (ideally, several runs give equal results). This is done to avoid potential convergence issues. For a more detailed explanation of the D∗M method, see Verdonck and Tuerlinckx (2016).
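The restart strategy can be sketched as follows; the objective function and bounds are placeholders for T(θ) and the actual parameter space.

# Run Differential Evolution five times and keep the best run.
library(DEoptim)
obj   <- function(theta) sum((theta - 1)^2)   # placeholder for T(theta)
lower <- rep(-5, 3); upper <- rep(5, 3)       # hypothetical bounds
runs  <- lapply(1:5, function(i)
  DEoptim(obj, lower, upper,
          control = DEoptim.control(itermax = 200, trace = FALSE)))
best  <- runs[[which.min(sapply(runs, function(x) x$optim$bestval))]]
best$optim$bestmem                            # parameters of the best run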
A tutorial on DstarM with an empirical example
We provide a tutorial on DstarM by analyzing data from a lexical decision making task (Experiment 1 of Wagenmakers et al. 2008). These data are available in the rtdists package under the name speed_acc. Our main goal is to demonstrate how D∗M analyses can be carried out in R; we will not carry out a detailed comparison of our results with those obtained by Wagenmakers et al. (2008). Verdonck and Tuerlinckx (2016) present three case studies with an in-depth comparison between traditional DDM and D∗M analyses (although not for the data we analyze here). We only analyze data from the first participant in the dataset to avoid needless computational complexity that does not contribute to this tutorial. Furthermore, we carry out both a D∗M analysis and a traditional analysis to contrast these methods.
In this experiment, participants (N = 17) had to decide if a stimulus was a word or a nonword. Responses were manipulated by instructing the participants to respond either as fast as possible or as accurately as possible. A second manipulation was induced by presenting four different populations of stimuli: high-frequency words (HF), lowfrequency words (LF), very low frequency words (VLF), and nonwords (NW). As observed in prior research, the first manipulation (speed/accuracy instructions) is believed to only influence the boundary parameter of the DDM. The second manipulation (word frequency) is intended to make trials harder, which is believed to influence the drift parameter of the DDM (Ratcliff et al., 2004).
Analysis of the data from a single participant
After having installed and loaded the package (with the usual install.packages() and library() functions), the next step when using DstarM is to import the data. The data passed on to the functions of DstarM should have a structure like that in Table 1, where the first six observations of the empirical data set are shown. The data set should be a data frame (called dat in the remainder of this section) with three columns: rt containing reaction times, condition determining condition membership, and response determining the response decision. Note that the coding of a decision as upper or lower is arbitrary; inverting it will only change the sign of the estimated drift rate and flip the relative bias. In our analysis, we let upper represent 'word' choices and lower represent nonword choices. Table 1 can be reproduced with the following code.
# get complete dataset
data('speed_acc', package = 'rtdists')
# get the first six observations of these columns

Ideally, a visual and numerical exploration of the data should be carried out before moving to more complicated analyses, but we skip that step for reasons of brevity (a more detailed description of the data can be found in Wagenmakers et al. (2008), and code for preprocessing the raw data can be found in the rtdists package; see ?speed_acc). In order to prepare for the DstarM analyses, the analyst should provide a time grid and decide on the parameter restrictions that specify the model. First, we look into the time grid, which is usually an evenly spaced grid. We recommend a time grid from 0 to 5 in steps of 0.01 (i.e., a hundredth of a second or a centisecond). Using this grid as a standard could blur subtleties present in one-thousandths of a second, but it is unlikely that many studies hypothesize about effects that small, let alone have the power to detect them. The code for defining the time grid is as follows:

# define a time grid
tt <- seq(0, 5, .01)

Second, we need to specify the model by applying an appropriate set of restrictions over the parameters of different conditions (or indicating that no restrictions are needed). This is done by specifying the restriction matrix using integer values from 0 up to the number of uniquely estimated parameters. All parameters with the same integer value will be restricted to be equal. An example of such a restriction matrix is shown in Table 2. Once the time grid and the parameter restrictions are specified, the DDM parameters can be estimated via the D∗M method using the following code:

# estimate decision model
resD <- estDstarM(dat = dat, tt = tt, restr = restr)
# estimate nondecision distribution
resND <- estND(res = resD)
# estimate total distribution
resObs <- estObserved(resDecision = resD, resND = resND, data = dat)

The function estDstarM can also run a traditional DDM analysis, where the nondecision distribution is modeled as a uniform distribution, by adding the argument DstarM = FALSE. Both resD and resND are S3 class objects with custom print and plot methods. From resD, a vector containing the best parameter estimates of the decision model can be obtained by indexing with $Bestvals.
The estimated nondecision distribution(s) can be obtained by running resND$r.hat. Both resD and resND can be indexed with $GlobalOptimizer to look up details about the Differential Evolution estimation procedure. In a traditional analysis, the model contains two more parameters: the mean and width of the uniform nondecision distribution.

Table 2. Columns represent different conditions of the experiment and rows represent parameters. The first four columns represent the speed condition, the last four columns represent the accuracy condition. The four columns within both speed and accuracy represent word frequency conditions. Identical values in the cells indicate that these parameters will be restricted across conditions.

Parameter estimates of both models are shown in Table 3. The differences in parameter estimates between the speed and accuracy manipulations are comparable between the two analyses. All ordinal relations between conditions with respect to a and v are the same for the traditional model, the D∗M model, and the original analyses in Wagenmakers et al. (2008). A difference between the traditional model and the D∗M model is present in the variance parameters. It appears that the D∗M estimates attribute more variance to intertrial variability (sv), whereas the traditional model attributes this to variance in the nondecision distribution. Next, we can compare the performance of both models. This can be done visually, as in Fig. 2, or by comparing the χ² goodness-of-fit value. For a traditional analysis, the χ² goodness of fit is the same as the objective function. For a D∗M analysis, this has to be recalculated, which is done automatically in the function estObserved. In each case, the fit is first calculated separately for each condition-response pair. Subsequently, each individual fit is multiplied by the proportion of observations in that condition-response pair and then summed, to obtain a weighted fit measure (see the small sketch below).

Table 3. The effect of the speed-accuracy manipulation is similar in the parameter estimates of both models. Note that T_er and σ_Ter represent the mean and variance of the nondecision distribution. Parameters of the uniform nondecision distribution obtained by the traditional model are U(a = 0.2791, b = 0.5077).

The resulting fit of the D∗M model was 7.638, compared to 8.030 for the traditional model. In Fig. 2 and Table 4, it can be seen that a D∗M analysis performs somewhat better than a traditional DDM analysis. This is likely caused by the additional freedom in the shape of the nondecision distribution. The overall misfit in both analyses may be caused by the many parameter restrictions and the small sample size for many condition-response pairs. The complete script for carrying out the analyses can be found at https://osf.io/ypcqn/.
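The weighting scheme described above (each pair's χ² fit multiplied by the proportion of observations in that pair, then summed) amounts to the following small computation in R; all numbers are hypothetical.

# Weighted overall fit from per-pair chi-square fits.
fit_pair <- c(1.2, 0.8, 3.1, 2.5)     # chi-square fit per pair
n_pair   <- c(400, 50, 380, 70)       # observations per pair
sum(fit_pair * n_pair / sum(n_pair))  # overall weighted fit measure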
Fig. 2. χ² is defined as χ²_D∗M − χ²_Traditional. Since a lower Chi-square difference indicates better fit, negative values for the χ² indicate that the D∗M model fits better. Note that χ² values are weighted by sample size and therefore cannot be compared across condition-response pairs.

To summarize, a D∗M analysis can be carried out as follows. Before the analysis, one could get an overview of the data using rtDescriptives. This returns the observed proportions of each condition-response pair and plots density estimates for each condition-response pair.
Next, to execute the analysis, the following functions are called in order: first estDstarM, to estimate the decision model; then estND, to estimate the nondecision model; and finally estObserved, to combine the decision and nondecision models and obtain the model-implied distribution. In principle, if a researcher is interested only in the decision model, there is no need to estimate the nondecision model. However, this means that model fit cannot be examined (e.g., Table 4 and Fig. 2 cannot be obtained).
After running the analyses, a number of convenience functions allow a user to inspect the results. For instance, the function plotObserved can be used to quickly mimic Fig. 2. This function produces either QQ-plots or histograms of the data overlaid with the model-implied density. To obtain Chi-square goodness-of-fit measures (e.g., to create Table 4), the function chisqFit can be used. This returns a list containing the goodness of fit for each condition-response pair (weighted by the number of observations) and the sum of the fit.
Setup of the simulation study
To validate our software tool, an extensive recovery simulation study was carried out. In the recovery study, we simulated 600 data sets, each consisting of two experimental conditions. Between these conditions, the decision model parameter values and the nondecision distributions could vary.
Our data simulation procedure worked as follows. First, we created 100 sets of parameter values by drawing each parameter from an appropriate uniform distribution on the parameter space (see Table 5 for the lower and upper bounds). This resulted in 100 different parameter sets that varied widely. Next, we randomly selected a manipulation (including no manipulation) in the parameters a, v, and/or z (e.g., if only parameter a was selected to be affected by the manipulation, then a new a was drawn while the other parameters were kept constant). Then we selected for each condition one of three nondecision distributions (uniform, skewed, multimodal) at random (see Fig. 3 for details). This resulted in 100 unique parameter configurations. For each of these configurations, we simulated six data sets with 100, 250, 2500, 10,000, 250,000, and an infinite number of observations per condition (see the explanatory note below). All simulated datasets and the code to generate them can be found at https://osf.io/ypcqn/. The data were analyzed with the traditional DDM (assuming a uniform nondecision distribution) and with D∗M, both using the DstarM package. As mentioned above, we analyzed every data set five times with DstarM (both for DDM and D∗M) and selected the analyses with the lowest objective function value for the results. This was done to avoid potential convergence issues in the Differential Evolution algorithm. For the D∗M analyses, the nondecision distributions were estimated in a next step using the best model parameters from the previous step.
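For concreteness, one simulated dataset along these lines could be generated as sketched below in R; the uniform bounds are illustrative rather than the exact bounds of Table 5, and nondecision times drawn from the selected r(t) would still have to be added to the decision times.

# Draw DDM parameters, manipulate v between conditions, and simulate
# decision times with rtdists::rdiffusion (t0 = 0: decision time only).
library(rtdists)
set.seed(1)
a <- runif(1, 0.5, 2); v <- runif(1, -4, 4); z <- runif(1, 0.35, 0.65)
sim <- function(a, v, z, n) rdiffusion(n, a = a, v = v, t0 = 0, z = z * a)
cond1 <- sim(a, v, z, 250)
cond2 <- sim(a, runif(1, -4, 4), z, 250)     # manipulated drift rate
dat <- rbind(cbind(cond1, condition = 1), cbind(cond2, condition = 2))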
Results
This section consists of two parts. The first part reports the parameter estimates for the decision distributions. The second part shows the retrieval of the nondecision distributions.
Footnote 2: An infinite number of observations means that we supply the algorithm with the true densities used to simulate the data, instead of the actual data (from which the algorithm would otherwise estimate a density). Because our estimation procedure makes use of a distance between (empirical or theoretical) densities, it is possible to evaluate the performance of the procedure when the true densities are supplied.
Parameter estimates: Correlations with the true values and biases
Results of the simulation study are shown separately for the traditional (i.e., standard DDM with uniform nondecision distribution) and D*M analyses in Tables 6 and 7, respectively. The left subtables contain correlations between the true and estimated parameters; the right subtables contain mean absolute relative differences between estimated and true parameters (i.e., $100 \cdot \frac{1}{N}\sum_{i=1}^{N}\left|\frac{\hat{\theta}_i - \theta_i}{\theta_i}\right|$). Evaluating Table 6, it can be seen that, in general, for sufficiently large sample sizes the parameters estimated using the standard DDM analysis correlate well with their true counterparts. There are, however, important qualifications to make. First, the drift rate and boundary separation have in general higher correlations, even for small samples. Second, and to be expected, correlations are largest for the condition of a uniform nondecision distribution.
Third, the multimodal nondecision distribution lowers the correlations, specifically for the trial-to-trial variability parameters.
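The mean absolute relative difference reported in the right subtables is one line of R, with est and true as hypothetical vectors of estimated and true parameter values.

    mard <- 100 * mean(abs((est - true) / true))  # mean absolute relative difference, in percent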
Relative biases in Table 6 are in general quite considerable for the DDM analyses, and they do not disappear completely with a large number of observations. The drift rates show more bias than boundary separation or starting point. The trial-to-trial variabilities show the most bias. It may appear surprising that results from traditional analyses on datasets with uniform nondecision distributions are not unbiased as the sample size grows to infinity. This bias appears because each table describes all datasets for which at least one of the nondecision distributions used has the mentioned shape. Since parameters are restricted across conditions, this biases the results. Appendix A contains tables with parameter estimates split up per unique combination of nondecision distributions; there it can be seen how the bias in the parameter estimates of the traditional model depends on the specific combination of nondecision distributions.

[Caption of Tables 6 and 7: The left table shows correlations between estimated and true parameter values; the right table shows the mean of the absolute relative differences between estimated and true parameter values. The column headings contain the sample sizes divided by 1,000 (so, the first column refers to a sample size of 0.1 × 1,000 = 100).]

From Table 7, it can be seen that the D*M analyses properly retrieve parameters regardless of the nondecision distribution, and this shows in relatively large correlations even for small samples (with the exception of the trial-to-trial variability parameters). The relative biases for the D*M analyses are much smaller than for the traditional DDM analyses, and they quickly become small with increasing sample size (again with the exception of the variabilities). It is striking that D*M analyses also perform better than traditional analyses at lower sample sizes, even when the nondecision distribution is indeed uniform. We refer to Appendix B for additional scatterplots of estimated and true parameter values.

Retrieval of the nondecision distributions

Figure 4 shows the (average) retrieved nondecision distributions for two sample sizes, 100 and 250,000. From this plot, it can be concluded that the general shape of the nondecision distribution can be retrieved on average, even for 100 observations. Recovery is much better for 250,000 observations. Of course, it should also be remarked that a proper retrieval of the nondecision distribution depends foremost on a proper estimation of the decision distribution. The smooth right-skewed beta distribution is easiest to estimate, while the procedure has most difficulties with the discontinuities and sharp corners of the two other nondecision distributions. Appendix C contains these plots for data sets with sample sizes not shown here. In Figs. 5 and 6, we show the implied estimates of the mean and variance of the nondecision distributions versus their true mean and variance. It can be seen that increasing the sample size leads to convergence of the estimated mean and variance to their true values.
Discussion
In this paper, we have introduced the R package DstarM to estimate the diffusion model without the need to assume a specific nondecision distribution. In an extensive simulation study, we have shown that the package performs as intended. In the following sections, we discuss some limitations of the D*M method and of traditional analyses. We also provide some design advice for researchers interested in using the D*M method. It is worthwhile to emphasize that the D*M method is not specific to DDM analyses, but could be used with any decision model (e.g., ballistic accumulator models).
Consequences of bias in traditional analyses due to a misspecified nondecision distribution
Results from traditional analyses may correlate highly with true values but can be severely biased as well, depending on the underlying nondecision distribution. As a consequence, conditions with misspecified and different nondecision distributions can no longer be compared meaningfully. Potential differences between conditions could be negated or increased through bias introduced by misspecification of the nondecision distributions. Effectively, results from traditional analyses should not be compared across conditions with different nondecision distributions.
Limitations of the D * M method
It is theoretically possible that the (observed) data distributions of condition-response pair p and condition-response pair p′ are equal. D*M builds on the difference between these (observed) distributions (see Eq. 2). Hence, in this scenario, the D*M method will likely encounter convergence issues or return improper parameter estimates. Of course, such situations are rare and imply that in Eq. 2 f_p equals f_p′, with the consequence that the models m_p and m_p′ can have any parameters as long as their distributions are equal. This is mainly a theoretical problem and is unlikely to occur in practice. D*M does not use an explicit likelihood function. Therefore, obtaining standard errors can only be done via bootstrapping. Furthermore, it is not straightforward to employ the method in a Bayesian paradigm. D*M currently introduces a large number of parameters for the nondecision distributions. It is important to realize that these additional parameters do not impact the estimation of the decision model parameters, because the decision and nondecision processes are separated.
The idea behind D*M may be compared to the simple paired-sample design in which two measurements are made for each of a number of persons. Let us denote an observation from person i in group j (with j = 1, 2) as y_ij. To account for individual differences, a person-specific parameter τ_i is added to the model formulation of both measurements: y_ij = α_j + τ_i + ε_ij, where α_j is the condition effect and ε_ij (with ε_ij ∼ N(0, σ²), independently of τ_i) is the error term. Usually we are interested in the difference between the condition effects, α_2 − α_1. A simple way to make inferences about this quantity is by analyzing the person-specific differences y_i2 − y_i1. By doing so, we remove the person-specific τ_i's from the equation, as well as any information about their distribution.
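Written out, the differencing step makes the cancellation explicit:

\[ y_{i2} - y_{i1} = (\alpha_2 + \tau_i + \epsilon_{i2}) - (\alpha_1 + \tau_i + \epsilon_{i1}) = (\alpha_2 - \alpha_1) + (\epsilon_{i2} - \epsilon_{i1}), \]

so the τ_i's cancel and the difference is informative about α_2 − α_1 without requiring any assumption about the distribution of the τ_i's, just as D*M avoids specifying the nondecision distribution.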
For model selection, we advise using cross-validation techniques, to avoid confusion about the status of the added nondecision parameters when calculating typical information criteria such as the AIC.
Estimation of the trial-to-trial variability parameters
The variance parameters sz and sv can only be reliably estimated with very high sample sizes. This issue is not inherent to the D*M method but to the Ratcliff model, since the same problem was encountered in traditional analyses. This has also been observed in reports on other estimation methods (Voss & Voss, 2007; Ratcliff & Tuerlinckx, 2002). Therefore, we discourage interpreting these parameters.
Comparison to other estimation methods

Our implementation differs from other software used to estimate diffusion models in two aspects (aside from using a different theoretical framework). The first and most notable difference is that most other software applications obtain parameter estimates by minimizing a difference between observed data statistics (e.g., the quantiles of the observed data) and their model counterparts (e.g., quantiles of the model distribution). In contrast, DstarM minimizes a chi-square distance between (convolutions of) observed data distributions and model distributions. This chi-square distance can roughly be interpreted as the difference between two distributions in terms of all moments.
A second difference is the optimizer used; DstarM uses a global optimization algorithm (Differential Evolution) which should make it more robust to local optima than software applications that use local search algorithms.
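Returning to the first difference: the exact distance is not restated here, but purely as an illustration, one common discretized chi-square distance between two densities f and g evaluated on a grid t_1, ..., t_m has the form

\[ \chi^2(f, g) = \sum_{i=1}^{m} \frac{\big(f(t_i) - g(t_i)\big)^2}{g(t_i)}, \]

which grows with any discrepancy between the two curves; the precise variant used by DstarM may differ from this illustrative form.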
Design Advice
The D*M method performs best when there are multiple conditions that share one nondecision distribution. Conditions with quasi-identical nondecision distributions can be obtained via experimental design. If an experiment consists of conditions that differ only in stimulus difficulty, then the expected effect in the parameter estimates should only be present in the drift rate v of the Ratcliff diffusion model (Ratcliff et al., 2004). In a similar fashion, giving participants a speed or accuracy instruction is believed to result only in a change in the boundary separation a of the Ratcliff diffusion model.
Conclusions
To summarize, we have made D*M analyses more accessible with the R package DstarM, which implements a method for diffusion model analyses that circumvents specifying a distribution for the nondecision processes. In simulation studies, we have shown that it performs well at retrieving the parameters of a decision model and also properly estimates nondecision distributions.
Open Practices Statement
The complete script for carrying out the analyses, as well as the simulated data sets, can be found at https://osf.io/ypcqn/. The source code of the R package DstarM can be downloaded from https://CRAN.R-project.org/package=DstarM.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

[Appendix A table caption: The table is identical to Tables 6 and 7, except that subtables are now shown for every unique combination of nondecision distributions.] | 2019-05-08T13:27:20.871Z | 2019-05-06T00:00:00.000 | {
"year": 2019,
"sha1": "91045bb822c684c72dd9a6ae2d3093a13d77ee12",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.3758/s13428-019-01249-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "cf150881e62e3a063194b2d51e5dc135d68c6a4e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
226299712 | pes2o/s2orc | v3-fos-license | EGAD: Evolving Graph Representation Learning with Self-Attention and Knowledge Distillation for Live Video Streaming Events
In this study, we present a dynamic graph representation learning model on weighted graphs to accurately predict the network capacity of connections between viewers in a live video streaming event. We propose EGAD, a neural network architecture to capture the graph evolution by introducing a self-attention mechanism on the weights between consecutive graph convolutional networks. In addition, we account for the fact that neural architectures require a huge amount of parameters to train, thus increasing the online inference latency and negatively influencing the user experience in a live video streaming event. To address the problem of the high online inference of a vast number of parameters, we propose a knowledge distillation strategy. In particular, we design a distillation loss function, aiming to first pretrain a teacher model on offline data, and then transfer the knowledge from the teacher to a smaller student model with less parameters. We evaluate our proposed model on the link prediction task on three real-world datasets, generated by live video streaming events. The events lasted 80 minutes and each viewer exploited the distribution solution provided by the company Hive Streaming AB. The experiments demonstrate the effectiveness of the proposed model in terms of link prediction accuracy and number of required parameters, when evaluated against state-of-the-art approaches. In addition, we study the distillation performance of the proposed model in terms of compression ratio for different distillation strategies, where we show that the proposed model can achieve a compression ratio up to 15:100, preserving high link prediction accuracy. For reproduction purposes, our evaluation datasets and implementation are publicly available at https://stefanosantaris.github.io/EGAD.
I. INTRODUCTION
Nowadays, live video streaming has emerged as a prominent communication solution for several companies worldwide. For example, live video streaming is employed for corporate internal communications, marketing announcements, and so on [1], [2]. Delivering a high quality video to enterprise offices is a challenging task, which stems from the bandwidth requirement increasing along with the number of viewers in each office. To overcome this challenge, distributed live video streaming solutions were proposed (e.g., by Hive Streaming AB) to deliver high quality video content to several enterprise offices [3], [4]. As shown in Figure 1, Viewers 1, 4 and 7 download the video content of the presenter directly from the Content Delivery Network (CDN) server. Thereafter, Viewers 1, 4 and 7 have to distribute the video content to the rest of the viewers, that is, Viewers 2, 3, 5, 6, 8, and 9. To efficiently distribute the video content, each viewer should establish connections with other viewers of the same office and exploit the internal high-bandwidth network of the office (1 GB/s). However, to efficiently establish connections between viewers, Viewer 1 requires the information that Viewers 2 and 3 share the same office. Without this information, Viewer 3 might erroneously establish a connection to Viewer 5 of a different office through a low-bandwidth network (10 MB/s). This would negatively impact the video distribution process of Viewer 5, as the only established connection of Viewer 5 would not satisfy the bandwidth requirements of a high quality live video streaming event [4]. Nonetheless, this requires the information of the customers' network topology during the live video streaming event, for instance, that Viewers 1, 2 and 3 are in Office 1. However, it is not always feasible to acquire this information; for example, large enterprises provide limited information about their network topologies for security reasons, or enterprises constantly adapt their networks to assure the desired business outcomes and improve the user experience [5], [6]. In addition, complying with the recent data protection regulations (GDPR) [7], live video streaming providers, such as Hive Streaming AB, are prohibited from retrieving certain network characteristics, such as private and public internet protocol (IP) addresses. Therefore, it is important to predict the network capacity (bandwidth) of each connection during a live video streaming event, based on the limited information provided by the already established connections. In doing so, we can infer whether the viewers are located in the same office, so as to establish connections through the internal high-bandwidth network.
During a live video streaming event, each viewer has a limited number of connections. In addition, the viewers adapt their connections in real time so as to improve the distribution of the video content [4]. For example, in Figure 1, Viewers 6 and 8 are connected through a low-bandwidth network at time step t = 1. As a consequence, Viewer 8 drops the connection with Viewer 6 at time step t = 2 to establish a connection with Viewer 7 via a high-bandwidth network connection. The effectiveness of a distributed live video streaming solution depends on the accuracy of each viewer's predictions, that is, on predicting the connections between viewers in the same office. Moreover, the predictions of the viewers' connections have to be performed in nearly real-time computational time; otherwise they will negatively impact the user experience during the live video streaming event. In this study, we model an enterprise live video streaming event as a dynamic undirected and weighted graph, where the edge weights correspond to the throughput of the connection between two nodes/viewers. The graph nodes/viewers emerge and leave at unexpected rates, and each node/viewer adapts its edges/connections so as to identify the nodes/viewers that are located in the same office and efficiently distribute the video content. Provided that an enterprise live video streaming event has thousands of viewers, such graphs are high-dimensional and sparse.
Graph representation learning. Recently, graph representation learning approaches emerged that compute compact latent node representations to solve the graph dimensionality problem [8]- [10]. Calculating the latent node/viewer representations has proven a successful means to address the link prediction problem on graphs [8], [11]- [15]. Baseline graph representation learning approaches exploit random walks to learn the latent node/viewer representations [8], [10]. More recently, several studies design different neural network architectures to calculate complex patterns in graph structures [13], [15]- [19]. However, these neural network architectures work on static graphs. To capture the graph evolution, recent approaches employ Recurrent Neural Networks (RNN) [12], [20] and self-attention mechanisms [11] between consecutive graph snapshots. Although dynamic graph representation learning approaches achieve high accuracy in link prediction, the underlying neural networks require to train a large amount of parameters. Therefore, these approaches incur high latency during the online inference of the node/viewer representations due to the large model sizes of the underlying neural network architectures [21]- [26]. As a consequence, state-of-the-art approaches are not applicable to real-world live video streaming solutions, as the high online latency inference increases the computational time of link prediction during a live video streaming event, resulting in high complexity when adapting the viewers' connections.
Knowledge distillation. Alternatively, to reduce the high online latency inference, graph representation learning approaches could employ neural networks of smaller sizes with less parameters. However, such models might fail to accurately capture the structure of an evolving graph, resulting in low link prediction accuracy. Knowledge distillation has been recently introduced as a model-independent strategy to generate a small model that exhibits low online latency inference, while preserving high accuracy [21], [23], [27]. The main idea of knowledge distillation is to train a large model, namely teacher, as an offline training process. The teacher model is a neural network architecture that requires to train a large number of parameters, so as to learn the structure of offline data. Having pretrained the teacher model, the knowledge distillation strategies compute a smaller student model with less parameters, that is more suitable for deployment in production. In particular, the student model is trained on online data, and distills the knowledge of the pretrained teacher model. This means that the student model mimics the teacher model and preserves the high prediction accuracy, while at the same time reduces the online inference of the model parameters due to its small size [27], [28]. A few attempts have been made on graph representation learning with knowledge distillation strategies to reduce the model sizes of the underlying neural network architectures [29]- [31]. As we will show in Section IV-D, such approaches fail to achieve a high compression ratio on the student model, that is the size of the student model remains high when compared with the size of the teacher model. This occurs because these approaches learn low dimensional representations on static graphs, which do not correspond to the dynamic case of live video streaming events.
Contribution. To overcome the limitations of existing models, in this work we present a knowledge distillation strategy for dynamic graph representation learning, namely EGAD, for the link prediction task during live video streaming events. Our main contributions are summarized as follows:
• EGAD employs a self-attention mechanism on the weights of consecutive Graph Convolutional Networks (GCNs), to capture the graph evolution and learn accurate latent node/viewer representations during a live video streaming event.
• To the best of our knowledge, we are the first to study knowledge distillation for dynamic graph representation learning. We train the EGAD teacher model in an offline process and formulate a distillation loss function to transfer the pretrained knowledge to a smaller student model on online data. In doing so, we significantly reduce the number of parameters when training the student model on online data, and achieve high link prediction accuracy.
Our experiments on real-world datasets of live video streaming events demonstrate the superiority of the proposed model in accurately capturing the evolution of the graph and reducing the online inference latency of the model parameters, when compared with other state-of-the-art methods. The remainder of the paper is organized as follows: in Section II we present the live video streaming data collected at Hive Streaming AB, and in Section III we detail the proposed model. Our experimental evaluation is presented in Section IV, and we conclude the study in Section V.
II. LIVE VIDEO STREAMING DATA
During a live video streaming event at Hive Streaming AB, various data are collected, such as connections per viewer, throughput per connection, and so on, to provide valuable insights to customers. Each viewer periodically reports the data to centralized servers. To evaluate the performance of the proposed model, we collected real-world datasets based on the reports of three live video streaming events, that is, LiveStream-4K, LiveStream-6K and LiveStream-16K. All datasets are anonymized and publicly available. The duration of each live video streaming event is 80 minutes. Each generated dataset consists of 8 weighted undirected graph/viewing snapshots, corresponding to the viewers' connections every 10 minutes. The weight of a graph/viewing edge corresponds to the throughput of the connection between two viewers at each snapshot. The LiveStream-4K dataset has 3,813 viewers, distributed over 15 different offices, and 11,066 connections. In the LiveStream-6K dataset, 6,655 viewers attended the live video streaming event from 29 different offices. The viewers established 787,291 connections. The LiveStream-16K dataset consists of 17,026 viewers and 482,185 connections in total. The viewers participated in the live video streaming event from 46 different offices. Figure 2 illustrates the different patterns of how viewers emerge during the three live video streaming events. LiveStream-4K has more viewers than LiveStream-6K and LiveStream-16K during the first 10 minutes of the live video streaming event. This indicates that in LiveStream-4K the majority of the viewers attended the live video streaming event from the beginning. In LiveStream-6K, the first 2 graph/viewing snapshots change significantly in terms of the number of viewers: during 0-20 minutes, 2.8K new viewers emerged, whereas in LiveStream-4K and LiveStream-16K 0.5K and 1K viewers emerged, respectively. LiveStream-4K is less informative, as the viewers establish the lowest number of connections. Finally, we can observe that viewers in LiveStream-16K emerge at the lowest pace during the live video streaming event. As we will demonstrate in Section IV-C, the effectiveness of the proposed knowledge distillation strategy and the baseline approaches depends not only on the graph sizes but also on the different patterns in which viewers emerge during the live video streaming events.
III. PROPOSED METHOD
A live video streaming event is represented as a sequence of K graph/viewing snapshots G_k = (V_k, E_k, X_k), where V_k corresponds to the set of n_k = |V_k| viewers, E_k is the set of connections, and X_k ∈ R^{n_k × m} is the matrix of the m features of each viewer. For each graph G_k, we consider a weighted adjacency matrix A_k ∈ R^{n_k × n_k}, where A_k(u, v) > 0 for viewers u ∈ V_k and v ∈ V_k if e_k(u, v) ∈ E_k. The weight A_k(u, v) corresponds to the bandwidth measured between viewers u ∈ V_k and v ∈ V_k at the k-th snapshot. Given a sequence of l graph/viewing snapshots {G_{k−l}, ..., G_k}, the goal of the proposed model is to compute d-dimensional latent representations Z_k ∈ R^{n_k × d}, with d ≪ m [11], [12], [14]. The constructed latent representations should capture both the structure of the graph at the graph/viewing snapshot k and the evolutionary behavior of the viewers up to the k-th minute.
Dynamic graph representation learning models employ deep neural network architectures, which require training a large number of parameters [12], [32], [33]. Such models are computationally expensive to deploy to a large number of viewers in live video streaming events, as they incur significant online latency to calculate the viewers' representations [22], [23], [29], [30]. The problem of knowledge distillation is to generate an online student model S that is smaller than a pretrained large offline teacher model T. The goal is to reduce the number of trainable parameters of the student model S, so as to minimize the online inference latency [27], [34]. In practice, the teacher model T is pretrained using a computationally expensive deep neural network architecture to calculate the latent representations Z^T_k of the offline data. Having trained the teacher model offline, the student model S learns the latent representations Z^S_k by minimizing a distillation loss function L_D. The distillation loss function L_D calculates the prediction error of the student model S and the deviation from the latent representations Z^T_k generated by the teacher model T. This means that the student model S is able to mimic the already pretrained teacher model T with fewer parameters [27], [28]. In Section III-A we present the offline teacher model EGAD-T, and in Section III-B we describe the distillation process of the online student model EGAD-S.
A. EGAD-T Teacher Model
The teacher model EGAD-T learns the viewer representations Z^T_k at the k-th graph/viewing snapshot using l consecutive Graph Convolutional Network (GCN) models [12], [35], [36], with EGAD-T = {GCN_{k−l}, ..., GCN_k} and l being the number of previous graph/viewing snapshots. The input of each GCN_k model is the normalized adjacency matrix Ã_k ∈ R^{n_k × n_k} and the viewers' features X_k. Provided that the graphs during the live video streaming events have nodes with no features, the node feature matrix X_k is replaced by the identity matrix I ∈ R^{n × n}, with m = n. Each GCN_k model calculates the viewer representations Z^T_k by applying two convolutional layers to Ã_k and X_k, as follows:

\[ Z^T_k = \tilde{A}_k \,\mathrm{ReLU}(\tilde{A}_k X_k W^1_k)\, W^2_k \quad (1) \]

where W^i_k ∈ R^{d_{i−1} × d_i} is the weight parameter matrix of the i-th convolutional layer, with d_i < d_{i−1} < m. Following [13], [35], we employ two convolutional layers (i = 1, 2) to learn the weight parameter matrices W^1_k ∈ R^{m × d_1} and W^2_k ∈ R^{d_1 × d_2}, with d_2 = d, so as to compute the d-dimensional representations Z^T_k. The symmetrically normalized adjacency matrix Ã_k is calculated as follows:

\[ \tilde{A}_k = D_k^{-1/2} (A_k + I) D_k^{-1/2} \quad (2) \]

where D_k is the diagonal degree matrix of A_k + I. The l consecutive GCN models are connected in a sequential manner through the weights W^1_{k−1} and W^1_k of the first convolutional layers [37]. For each node u ∈ V_k we calculate h independent self-attention heads, that is, vectors z^j_k(u) = Σ_{v∈N_u} α_{u,v} H_k W^1_{k−1}(v), with j = 1, ..., h. To compute the weights W^1_k of each GCN_k model, we average the h independent self-attention vectors z^j_k(u) [16], as follows:

\[ W^1_k(u) = \mathrm{ELU}\Big(\frac{1}{h} \sum_{j=1}^{h} z^j_k(u)\Big) \quad (3) \]

where ELU is the Exponential Linear Unit activation function [38]. Variable H_k ∈ R^{d_1 × d_1} is the shared weight transformation matrix applied to the previous weights W^1_{k−1}(u) of each node u ∈ V_k, and N_u is the neighborhood set of node u. Variable α_{u,v} is the normalized attention coefficient between u ∈ V_k and v ∈ N_u, which is calculated based on the softmax function [11], as follows:

\[ \alpha_{u,v} = \frac{\exp\big(A_k(u,v)\,\sigma(a^{\top}_k [H_k W^1_{k-1}(u)\,\|\,H_k W^1_{k-1}(v)])\big)}{\sum_{w \in N_u} \exp\big(A_k(u,w)\,\sigma(a^{\top}_k [H_k W^1_{k-1}(u)\,\|\,H_k W^1_{k-1}(w)])\big)} \quad (4) \]

where σ is the sigmoid function, A_k(u, v) is the edge weight between u and v, a_k ∈ R^{2d_1} is a 2d_1-dimensional weight vector applied to the attention process between nodes u and v [11], [16], and ‖ is the concatenation operation. The attention coefficient α_{u,v} measures the importance of the connection between nodes u ∈ V_k and v ∈ N_u. A high attention coefficient value α_{u,v} corresponds to a connection e_k(u, v) ∈ E_k that is maintained over several consecutive graph/viewing snapshots and has a high edge weight in the adjacency matrix A_k(u, v). This means that the learned weights W^1_k reflect the importance of the existing connection between nodes u and v, processing the convolution accordingly.
To train the teacher model EGAD-T, we initialize l GCN models and connect the consecutive GCN models using the self-attention mechanism in Equation 3. As aforementioned, the k-th GCN model takes as input the normalized adjacency matrix Ã_k and the feature vectors X_k. When training EGAD-T, each GCN_k model computes the weights W^1_k in Equation 3, and then calculates the latent representations Z_k based on Equation 1. Note that the weights W^1_0 for the first GCN model are randomly initialized. To train our teacher model EGAD-T, we adopt a Root Mean Square Error loss function with respect to the latent representations Z_k generated by the last GCN model [37], as follows:

\[ L_T = \sqrt{\frac{1}{n_k^2} \sum_{u,v \in V_k} \big(\sigma(Z^T_k(u) \cdot Z^T_k(v)) - A_k(u,v)\big)^2} \quad (5) \]

where · represents the inner product operation between all possible pairs of latent representations, and the term σ(Z^T_k · Z^T_k) − A_k calculates the error of the latent representations Z^T_k in capturing the structure of the graph snapshot G_k. In our implementation, we optimize the parameters H_k and a_k between consecutive GCN models, based on the loss function in Equation 5 and the backpropagation algorithm.
B. EGAD-S Student Model
We train the student model EGAD-S to compute the online latent representations Z^S_k by exploiting the knowledge of the pretrained teacher model EGAD-T. As we train the student model only on online data, the student model EGAD-S requires a significantly smaller number of trainable parameter weights than the teacher model EGAD-T. The student model EGAD-S consists of l consecutive GCN models, with EGAD-S = {GCN_{k−l}, ..., GCN_k}. We calculate the weights W^1_k (Equations 3 and 4), and compute the latent representations based on Equation 1.
The knowledge acquired by the teacher model EGAD-T is transferred to the student model EGAD-S via the distillation loss function L_D, adopted by the student model during the online training process. We formulate the distillation loss function as a minimization problem for the student model EGAD-S as follows:

\[ L_D = \gamma L_S + (1 - \gamma) L_T \quad (6) \]

where L_T is the inference error of the teacher model in Equation 5, and L_S is the root mean squared error computed with the latent representations Z^S_k generated by the student model. The hyper-parameter γ ∈ [0, 1] balances the training of the student model EGAD-S when inferring the knowledge of the teacher model EGAD-T. A higher value of γ puts more emphasis on the student model EGAD-S and distills less knowledge from the teacher model EGAD-T. The distillation loss function L_D in Equation 6 allows the student model EGAD-S to overcome any bias introduced by the teacher model EGAD-T [21], [23], [27], [28]. This means that EGAD-S can achieve similar or better accuracy than the teacher model EGAD-T. As we will show later in Section IV-D, the student model EGAD-S consistently outperforms the teacher model EGAD-T in terms of accuracy, while significantly downsizing the number of parameters.
IV. EXPERIMENTAL EVALUATION

A. Evaluation Setup
In our experiments, we evaluate the performance of the proposed model on the link prediction task. To examine the two different components of our model, we train the teacher and student models EGAD-T and EGAD-S using l consecutive graph/viewing snapshots up to the k-th graph G_k. The task of link prediction is to forecast the unobserved connections, denoted by O_{k+1} = E_{k+1} \ {E_{k−l}, ..., E_k}, that will occur in the next graph/viewing snapshot G_{k+1}. Following the evaluation protocol of [11], [12], [14], for each unobserved connection o(u, v) ∈ O_{k+1} of viewers u and v ∈ V_k, we combine the latent representations Z_k(u) and Z_k(v) based on the Hadamard operator. The combined latent representations are then applied to a Multi-Layer Perceptron (MLP), to calculate the weight of the connection. To measure the online inference efficiency, we report the number of parameters that each model requires to train. Moreover, regarding the prediction accuracy, we evaluate the examined models based on the Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) metrics:

\[ \mathrm{RMSE} = \sqrt{\frac{1}{|O_{k+1}|} \sum_{o(u,v) \in O_{k+1}} \big(\hat{A}(u,v) - A_{k+1}(u,v)\big)^2}, \qquad \mathrm{MAE} = \frac{1}{|O_{k+1}|} \sum_{o(u,v) \in O_{k+1}} \big|\hat{A}(u,v) - A_{k+1}(u,v)\big| \quad (7) \]

where Â(u, v) denotes the predicted connection weight. Following [11], [12], [37], for each snapshot k we train each examined model on the l previous graph/viewing snapshots G_{k−l}, ..., G_k, which are considered the offline data for each time step. We randomly select 20% of the unobserved links O_{k+1} as the validation set to tune the model hyper-parameters. The remaining 80% of the unobserved links are considered the test set, which constitutes the online data for each time step. We repeated our experiments five times, and we report the average RMSE and MAE over the five trials.
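The two metrics amount to one line of R each; pred and obs are hypothetical vectors of predicted and observed connection weights on the test links.

    rmse <- sqrt(mean((pred - obs)^2))  # root mean squared error
    mae  <- mean(abs(pred - obs))       # mean absolute error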
B. Examined Models
We compare the performance of the proposed EGAD-T and EGAD-S models with the following baseline strategies:
• DynVGAE [14] is a dynamic joint learning model that shares the trainable parameters between consecutive variational graph auto-encoders [35]. We implemented DynVGAE from scratch and publish our code 2, as there is no publicly available implementation.
• EvolveGCN 3 [12] is a dynamic graph representation learning model with Gated Recurrent Units (GRUs) between the convolutional weights of consecutive GCNs.
• DySAT 4 [11] is a dynamic self-attention model that captures the evolution of the graph using multi-head self-attention between consecutive graph snapshots.
• DMTKG-T [29] is the teacher model of the DMTKG knowledge distillation strategy. DMTKG-T employs the Heat Kernel Signature (HKS) on static graph/viewing snapshots and uses Convolutional Neural Network layers to calculate the latent representations based on DeepGraph [39]. To ensure a fair comparison, we train DMTKG-T per snapshot, with each snapshot containing the aggregated graph history up to the k-th snapshot. As the source code of the DMTKG distillation strategy is not available, we made our implementation publicly available 5.
• DMTKG-S [29] is the student model of the DMTKG strategy, where the goal is to minimize a distillation loss function based on the weighted cross-entropy.
Settings. In Tables III-V we report the performance of each examined model in terms of RMSE when calibrating the hyper-parameters of the examined models, following a cross-validation strategy. For each model, we tuned the hyper-parameters based on a grid selection strategy and selected the best configuration. In particular, in DynVGAE we set the
C. Performance Evaluation
In Figure 3, we evaluate the performance of the student model EGAD-S against the non-distillation strategies, that is, DynVGAE, EvolveGCN and DySAT, in terms of RMSE and MAE. We observe that all models have a higher prediction error in terms of RMSE and MAE in LiveStream-6K than in the other datasets. This occurs because the viewers in the LiveStream-6K dataset attended the live video streaming event in a completely different pattern (Section II) than in LiveStream-4K and LiveStream-16K. More precisely, in LiveStream-6K the number of viewers that emerge in 0-20 minutes is significantly higher than in the other events, which negatively impacts the prediction accuracy of the examined models.
The student model EGAD-S significantly outperforms the baseline approaches on all datasets. This suggests that the proposed student model EGAD-S can efficiently capture the evolution of the graph in the learned latent representations Z^S_k. The second best approach is DySAT, demonstrating the ability of self-attention mechanisms to generate accurate latent representations. DySAT calculates the latent representations Z_k by applying self-attentional aggregations to the local node neighborhoods. Instead, the proposed EGAD-S model performs self-attention on the convolutional weights between consecutive GCNs. Thus, our model is able to efficiently capture the different graph evolution patterns of the live video streaming events. Compared to the second best method DySAT, the proposed EGAD-S model achieves relative drops of 9.8 and 13.5% in terms of RMSE and MAE on the LiveStream-4K dataset. Similarly, EGAD-S achieves relative drops of 10.2 and 3.5% on LiveStream-6K, and of 17.3 and 6.2% on LiveStream-16K.
In Table I, we present the numbers of parameters, in millions, that are required to train the examined models. As mentioned in Section II, the majority of the viewers in the LiveStream-4K dataset started to attend the live video streaming event within the first 10 minutes. Therefore, all models have fewer trainable parameters on the first graph snapshots (k = 0-30 minutes) in LiveStream-6K and LiveStream-16K than in the LiveStream-4K dataset. We observe that EGAD-S clearly outperforms the baseline approaches in terms of the required parameters. Evaluated against DynVGAE, EvolveGCN and DySAT, the average compression ratios of the student model EGAD-S are 12:100, 7:1000, and 7:100, respectively. Provided that EGAD-S consistently outperforms all the baseline approaches in terms of RMSE and MAE, the high compression ratios demonstrate the ability of the proposed knowledge distillation strategy to significantly reduce the model size in terms of required parameters. Moreover, it is clear that the EvolveGCN model requires a significant number of trainable parameters to generate the latent representations. This means that EvolveGCN does not scale well when increasing the number of viewers in live video streaming events. As DySAT employs multi-head attention on consecutive graph/viewing snapshots, and not on consecutive GCNs as the proposed EGAD model does, DySAT requires a much higher number of parameters by following a non-distillation strategy.
D. Distillation Evaluation
In Figure 4, we study the impact of the proposed knowledge distillation strategy on the student model EGAD-S in terms of RMSE, when compared with the teacher model EGAD-T . In addition, in this set of experiments we evaluate our model against DMTKG [29], a baseline graph representation approach with knowledge distillation, comparing with both the teacher model DMTKG-T and student model DMTKG-S.
On inspection of Figure 4, we observe that the EGAD-T and EGAD-S models outperform DMTKG-T and DMTKG-S on all datasets. This occurs because DMTKG applies knowledge distillation on top of DeepGraph [39], which is a static graph representation learning approach. Therefore, DMTKG ignores the graphs' evolution when learning the latent representations. An interesting observation is that the student models EGAD-S and DMTKG-S achieve higher performance than the respective teacher models EGAD-T and DMTKG-T. This indicates the effectiveness of the examined distillation strategies in correctly transferring the knowledge of the teacher models to the respective student models. This occurs because the student models remove the bias of the teacher models toward the offline data and achieve high prediction accuracy, complying with similar observations made in relevant studies [21], [40]. Compared to the EGAD-T model, EGAD-S achieves 6.5, 3.6 and 5.7% relative drops in terms of RMSE for LiveStream-4K, LiveStream-6K and LiveStream-16K, respectively.
In Table II, we present the maximum number of parameters, in millions, that are required to train the examined models during the live video streaming events. EGAD-S significantly reduces the number of required parameters, achieving compression ratios of 15:100, 17:100 and 21:100, on average, in LiveStream-4K, LiveStream-6K and LiveStream-16K, respectively. This occurs because the student model EGAD-S uses a lower number of attention heads h and a smaller representation size d than the teacher model EGAD-T (Section IV-C). Therefore, EGAD-S has lower online inference latency than the teacher model EGAD-T. Instead, the DMTKG distillation strategy achieves an average 1:2 compression ratio for the student model DMTKG-S. The DMTKG distillation strategy is not able to further reduce the student model size, because DMTKG is designed for static graphs. This indicates that the DMTKG-S model requires more trainable parameters to learn accurate latent representations than the proposed EGAD-S model. In Figure 5, we evaluate the influence of the hyper-parameter γ of Equation 6 on the student model EGAD-S. We vary the hyper-parameter γ from 0.1 to 0.9 in steps of 0.1, to balance the impact of the student loss L_S and the teacher loss L_T on the distillation loss function L_D. For each value of γ, we report the RMSE averaged over all the graph snapshots of the live video streaming events. On all datasets, the student model EGAD-S achieves the highest performance when we balance the influence of the student and teacher models equally (γ = 0.5). For larger values of the parameter γ, the student model EGAD-S puts more emphasis on the loss L_S than on the loss L_T. As a consequence, the student model EGAD-S distills less knowledge from the teacher model EGAD-T, which negatively impacts the performance of the EGAD-S model in terms of RMSE. Instead, decreasing the hyper-parameter γ prevents the student model EGAD-S from training on the online graph data and at the same time introduces the bias toward the offline data of EGAD-T. This means that for small values of γ, EGAD-S mainly distills the knowledge of the teacher model EGAD-T, resulting in limited prediction accuracy.
V. CONCLUSION
In this paper, we presented a knowledge distillation strategy to overcome the problem of high online inference latency of dynamic graph representation learning approaches in live video streaming events. Evaluated against several baseline approaches on three real-world live video streaming events, the proposed model achieves a 7:100 compression ratio on average. Moreover, the proposed student model preserves high prediction accuracy, achieving average relative drops of 12.4 and 7.7% in terms of RMSE and MAE over all events, when compared with the second best approach. Distributed live video streaming providers, such as Hive Streaming AB, can significantly benefit from our model by reducing the required parameters and computational time of the link prediction task. In doing so, viewers can exploit the offices' internal high-bandwidth network from the beginning of the live video streaming event, avoiding the establishment of low-bandwidth connections. Provided that several offices have limited network capacity, our model can significantly reduce the generated network traffic. Therefore, enterprises can distribute high quality video content to their offices without network limitations, improving the user experience. Moreover, the proposed model allows enterprises to distribute video content of high resolution, such as 4K.
There are several interesting future directions for graph representation learning in live video streaming events.
• For instance, as future work we plan to evaluate the performance of the proposed model on evolving graphs of social networks. In particular, given the limited duration of live video streaming events, the main challenge resides in identifying the differences between how viewers emerge during live video streaming events and the pace at which users establish connections in social networks over time.
• Another interesting future direction is to study the performance of our model on graph snapshots over time steps of different durations. For example, in a live video streaming event the duration of the time steps between two consecutive snapshots might vary, depending on the network demand. This means that different time steps might require an adaptive learning strategy of the time window w when training our model.
• In our model, training is performed on the graph data of a single live video streaming event. In practice, though, there are several live video streaming events that take place on a daily basis. The question that we have to answer is how to exploit the knowledge acquired from different live video streaming events when training our model on a new event. More precisely, we plan to study various transfer learning strategies to exploit the knowledge from different events when training our model. This is a challenging task, because not only do the internal network topologies of several companies vary, but viewers also emerge at various paces during different live video streaming events.
"year": 2020,
"sha1": "1c003dd4b044aac639d3a9b283c92d0dea21d5bb",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2011.05705",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1c003dd4b044aac639d3a9b283c92d0dea21d5bb",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
213032976 | pes2o/s2orc | v3-fos-license | Cores Produced by Geopolymer Technology and Impact to Casting Quality in Comparison with PUR Cold Box Amin
Michal Vykoukal1,2, Alois Burian1, Markéta Přerovská1, Milan Luňák3,4, Štefan Kyselka3 1SAND TEAM, spol. s r.o., Holubice 331, 683 51 Holubice, Czech Republic. E-mail: vykoukal@sandteam.cz, burian@sandteam.cz, prerovska@sandteam.cz 2VŠB – Technical University of Ostrava, Faculty of Materials Science and Technology, 17.listopadu 15, 708 33 Ostrava, Czech Republic 3BENEŠ a LÁT a.s., Tovární 463, 289 14 Poříčany, Czech Republic. E-mail: milan.lunak@benesalat.cz, stefan.kyselka@benesalat.cz 4Faculty of Mechanical Engineering J. E. Purkyně University in Ústí nad Labem, Pasteurova 1, 400 96 Ústí nad Labem, Czech Republic
Generally about geopolymers
The geopolymers were discovered, and the terminology introduced, by Davidovits in the seventies of the last century [1]. Earlier, in 1957, Gluchovskij investigated the problem of alkali-activated slag binders; he called the technology "soil silicate concretes" and the binders "soil cements" [2]. These are materials that belong to the alkaline aluminosilicates, so they are purely inorganic materials. The geopolymers contain silicon, aluminium and some alkaline element, such as sodium or potassium. In nature, such materials appear and are called zeolites. The geopolymers are not formed by geological processes; they are artificially prepared and are called so because their composition approaches that of natural rocks. The geopolymers consist of chains of SiO4 and AlO4 tetrahedra. The geopolymers are the focus of interest in a number of industries. The ratio of aluminium to silicon ranges from 1:1 to 1:35 (various ratios of SiO4 and AlO4 tetrahedra). The chemical and physical properties of the resulting polymer, as well as its applications, vary with the aluminium content. The usage of geopolymers is extensive. Especially in the construction industry, these alkali-activated aluminosilicates are given considerable attention. In these applications, a geopolymer is formed during the process: the geopolymer is created in the reaction between a silicon- and aluminium-containing material (fly ash, slag) and an alkaline activator. The resulting product has many advantages in comparison with conventional materials. Geopolymers are, for example, also used in the solidification of hazardous waste, in ceramics, and in the refractory materials industry. Generally speaking, the main properties for which the geopolymers are used are fire resistance, high heat resistance, and low thermal expansion [1][3][4].
The geopolymers with a high molar ratio of SiO2/Al2O3, sometimes called geopolymer resins, are liquid substances with properties similar to colloidal solutions of alkali silicates (water glass). One of the possibilities for using geopolymer resins is as a foundry binder. Hardening is achieved either at elevated temperatures or chemically [3] [5].
According to some archaeological publications, the Egyptian pyramids are not made of carved blocks but cast from geopolymers, and similarly the Venus of Dolní Věstonice [1] [5]; it is an interesting idea.
Geopolymers for foundry industry
More and more emphasis is put on clean and environment-friendly processes, and many foundries are exposed to huge pressure. This leads to the introduction of new technologies, most often based on inorganic chemistry, which are more acceptable in terms of the environment and sustainable development. The geopolymer binder systems and the geopolymer technology are undoubtedly among these new technologies. A new environmentally friendly binder system using a geopolymer inorganic binder for the production of conventional moulds and cores has been developed in the Czech Republic. These polymers are also referred to as polysialates and are composed of chains of SiO4 and AlO4 tetrahedra (Fig. 1).
The resulting properties of the binder depend on the ratio of these components and on the preparation of the geopolymer. The basic structural units consist of monomers, dimers and higher polymers.
The binder is an inorganic geopolymer precursor with a low degree of polymerization. Hardening occurs by the action of heat or hardeners. During the hardening reaction there is an increase in the degree of polymerization and the formation of an inorganic polymer.

Fig. 2 Scheme and model of the inorganic polymer by Davidovits (left) [1], updated by Barbosa [6] and later by Rowles (right) [7]
The geopolymer technology hardened by dehydration is an odourless technology and generates no pollutants, so it has a minimal negative impact on the environment. Due to the chemical nature of the geopolymer binder, the mechanical reclaimability of the used sand mixture is feasible. Emissions are one of the fundamental environmental troubles in foundries. Foundries have to take into account increasing costs related to solving these environmental problems. They are increasingly interested in technologies with more favourable environmental characteristics and are trying to introduce them into operation.
The environmental pressure is even greater in economically developed countries. There is also increased interest in the development of new technologies and their implementation [3].
In general, it is expected that the inorganic binder systems achieve significant reductions in emissions. A comparison of the binder systems from the point of view of BTEX and PAH emissions is shown in the graphs in Fig. 3 [9].
The geopolymer technology is currently used in the foundries for three basic production processes/technologies: (1) self-hardening moulding mixtures, (2) sand mixtures hardened by gaseous carbon dioxide and (3) the hot box technology with hot air hardening [3].
Hot box and hot air hardening, geopolymer technology hardened by dehydration
Geopolymer binders are used for core production with hardening by heat. In this technology, the hardening is caused by dehydration, that is, by a physical process. The technology is suitable for serial and mass core production. The whole technology is purely inorganic; thus it has a minimal impact on the environment and ensures favourable hygienic conditions [8]. The principle of this technology is as follows: the sand mixture is shot into a heated core box, and the hardening of the sand mixture in the hot core box is sped up by simultaneously blowing hot air through it. Suitable temperatures of the core and the hot air range from 100 to 200 °C. Temperatures from 150 to 200 °C allow a long storage time to be obtained and prevent reverse hydration of the cores. Dehydration can also be achieved by microwave hardening [10].
It is recommended to use the GEOTEK powder additive, which reduces the wettability of the cores and increases their cold and hot strengths [10].
When compared with the PUR cold box amine technology, comparable (or higher) strengths are achieved at the same or shorter hardening times, and the collapsibility of the cores after pouring is significantly better. Core strength and other properties depend on the binder addition level in the sand mixture and on the parameters of the production process. The flexural strength after hardening and cooling reaches up to 5.5 MPa [10] [11].
The composition of the sand mixture for core production by the geopolymer technology hardened by dehydration [11]:
· Sand. Generally quartz sand.
· Geopolymer binder, addition level ranging from 1.4 to 1.8%, based on the sand quantity (quartz sand).
· Accelerator GEOTEK W, addition level ranging from 0.3 to 0.9%, based on the sand quantity, generally 50% of the binder weight.
Tab. 1 Addition level of geopolymer binder for technology hardened by heat on different foundry sands [11].
Geopolymer technology hardened by heat
Foundry sand: Range of addition level [wt. % on sand weight]
Quartz sand: 1.4 - 1.8
CERABEADS: 1.8 - 2.5
Addition levels of the additive GEOTEK are from 0.3 to 0.9%, based on sand.
The addition of 1.8% of binder and 0.9% of accelerator ensures optimum strength, which was confirmed by the production process [11].
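As a worked example of these addition levels (assuming a hypothetical batch of 100 kg of quartz sand):

\[ m_{\mathrm{binder}} = 0.018 \times 100\ \mathrm{kg} = 1.8\ \mathrm{kg}, \qquad m_{\mathrm{GEOTEK\ W}} = 0.5 \times m_{\mathrm{binder}} = 0.9\ \mathrm{kg}, \]

which matches the rule that the accelerator is dosed at about 50% of the binder weight.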
Very good results are achieved in the production of castings from aluminium and non-ferrous alloys. We are currently working on the development of a binder system for castings made of steel and cast iron [11]. The geopolymer binder system is suitable for most quartz and non-quartz sands, such as CERABEADS, olivine sand, chromite sand, and aluminosilicate sands. The addition levels are in Tab. 1. The scheme of core production hardened by heat is shown in Fig. 4 [11].
Experimental procedure
A new grade of geopolymer binder was used for the core production. It is a sodium-potassium grade with higher hot and cold strengths, improved humidity resistance and a long storage life of the cores. The casting of the turbocharger no. 399 4401 523 was chosen by the foundry for the verification of the geopolymer technology. Standard cores made by the PUR cold box amin technology directly at the foundry were used for comparison.
The targets of the experiment were as follows:
· Verify the core production in the core machine modified for the hot box and hot air hardening.
· Observation of the technological properties.
· Verification of long storage of the cores.
· Application of the refractory alcohol-based coating.
· Demonstrating the harmlessness of the geopolymer technology throughout the entire production process, especially at pouring and decoring.
· And finally, the main target: casting quality, both the surface and the internal quality of the castings.
Materials
As has already been mentioned, the sodium-potassium type geopolymer W20 was used as the binder. The following materials were used for the core sand mixture:
· quartz sand BK31, AFS 43,
· geopolymer binder W20, a sodium-potassium type of geopolymer binder,
· inorganic powder additive GEOTEK W303.
Core production
The modified core machine was used for the core production at the core shop. The metal core box is heated by two electrically heated plates, which are controlled by independent temperature regulators. The heating power of both electrical plates is max. 15 kW; the maximum working temperature is 250 °C. Hot air is generated by an air heater, which is connected to compressed air at 6 bar. The heating power is max. 12 kW (1100 l/min), and the temperature can be set up to 600 °C (real temperatures up to 200 °C). The core shooter with the installed turbocharger core box is shown in Fig. 5. A detail of the metal core box is shown in Fig. 6.
The core sand mixture composition is shown in Tab. 2, and the parameters set for core shooting and hardening are shown in Tab. 3. The temperature of the core box was set to 190 °C, but the real temperature was around 155 °C. The electrical heating plates heat the core box continuously during the entire core production procedure; in spite of this, small temperature fluctuations of several °C occur. These are caused by the hardening of the cores and by cooling of the core box when both halves are open and the cores are taken out. The temperature of the hot air was set to 190 °C; the real temperature was about 145 °C. The shooting pressure was 5.5 to 6.0 bar, and the hot air pressure for hardening was 3.0 bar. The core sand mixture shot into the heated core box was kept for 20 seconds and then hardened by purging with hot air for 75 seconds. In total, 231 cores were produced and 217 were supplied to the foundry, ready for packing, see Fig. 7. The manufactured cores were stored for one week, then packed in the core shop as usual and supplied to the foundry for pouring. No core was damaged, despite a transport distance of more than 200 km.
The cores were stored at the foundry warehouse for another two weeks under standard conditions. Just 24 hours before pouring, the standard alcohol-based refractory coating was applied by dipping, see Fig. 7. The cores were then ready for pouring.
Pouring and decoring
Pouring machine LPDC internal no. 51 was chosen for casting production because it was in current standard production of castings with PUR cold box amine cores. The machine was therefore at operating temperature, and the geopolymer cores could be used for pouring immediately, without a break in production. The mould has two parts, see Fig. 12. The aluminium alloy and the pouring parameters were as follows:
· Aluminium alloy for castings: EN AC-45400.
· Melt temperature: 720 ±10 °C.
· Filling pressure: 18 ±10 kPa; 26 ±5 s.
· Pressure: 20 ±10 kPa; 120 ±20 s.
· Solidification: 120 ±20 s.
Decoring was carried out manually with a jackhammer according to the standard procedure regularly used in the foundry. The decoring time (the clearing time of the casting) serves as a measure of the break down of the cores. The casting with cores before decoring can be seen in Fig.
Castings quality evaluation
Casting quality evaluation was carried out in the foundry.
The principal objective is the comparison of the casting surfaces made with cores of the existing organic PUR cold box amine technology and the new inorganic geopolymer technology. The foundry laboratory has a casting surface roughness measuring device, a Mitutoyo SJ-410. The roughness was measured at four points, at the same locations, see Fig. 9 and Fig. 10. The internal quality of the castings was also evaluated by X-ray examination in the foundry laboratory.
Castings made with geopolymer cores were finally tested in the customer's laboratory by machining and a pressure test, and were compared with standard castings made with PUR cold box amine cores.
A comparative assessment of the casting surface quality was also considered, using a measuring set of six surface samples graded by Ra or Rz roughness values in μm, see Fig. 11 [12].
Fig. 9 The surface roughness measuring apparatus Mitutoyo SJ-410
Core production and flexural strength
Tab. 4 presents the flexural strength properties of both the geopolymer binder and PUR cold box amine core sand mixtures. The geopolymer binder has higher strengths at the presented addition levels, very good strength at 100% relative humidity, and even when the test samples are put into water the strength remains at about 1.9 MPa. The addition level can be changed depending on the application and the requirements on cores and castings.
Tab. 4 The flexural strength properties of geopolymer binder and PUR cold box amine core sand mixtures
Core production technology

The cores made by the geopolymer technology can be manufactured in the same production cycle as the cores made by the PUR cold box amine technology. In this case, the geopolymer cores had a total working time of 135 seconds (two cores in the core box), while one PUR cold box amine cycle took about 150 seconds. Optimisation of the working procedures could achieve even better times and further improve productivity. A great advantage is that there is no odour, smell, or fume during the entire core production. It has been observed and verified by weighing that the geopolymer cores are about 100 g heavier than PUR cold box amine cores (10% of the whole core weight). This confirms the better compaction of the cores, which helps to avoid penetration of the aluminium melt and leads to better surface quality.
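For a rough productivity comparison, the snippet below converts the measured cycle times into cores per hour. It assumes two cores per cycle for both technologies; for the PUR cycle this is our assumption.

```python
def cores_per_hour(cycle_s, cores_per_cycle=2):
    # cores produced per hour at a given cycle time in seconds
    return 3600.0 / cycle_s * cores_per_cycle

geo = cores_per_hour(135.0)   # geopolymer: ~53.3 cores/h
pur = cores_per_hour(150.0)   # PUR cold box amine: ~48.0 cores/h
print(f"productivity gain: {(geo / pur - 1) * 100:.0f} %")  # ~11 %
```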
Core storage, storage life, use of coatings
As mentioned, the geopolymer cores were stored for one week in the core shop as usual and then at the foundry warehouse for another two weeks under standard conditions. No core was damaged during storage and handling, either in the core shop or in the foundry, nor during transportation to the foundry.
The manufactured geopolymer cores do not need any extra care, meaning that conventional coatings can be applied (conventional alcohol-based coatings based on graphite, aluminosilicates, corundum, zirconium, etc., or their mixtures). After coating application, the cores were visually fully comparable with PUR cold box amine cores in terms of core surface quality. The cores can be stored under standard foundry conditions without affecting the final casting quality, and no deformation of the cores occurs. The conclusion made in [13] (that geopolymer binders might be more sensitive to storage conditions, with higher sensitivity to air moisture) was not confirmed.
Pouring
The more favourable effect of the geopolymer binder system on the work environment and the environment in general can be seen in Fig. 12. The differences between the inorganic and organic binder systems are significant. Cores made by the geopolymer technology do not generate smoke, fume, odour, or smell during pouring and at the opening of the mould. Only a hardly noticeable aroma is formed.
We positively evaluate the following:
· working times were the same as the standard with PUR cold box amine,
· there were no difficulties during pouring,
· there was no breaking of the cores when inserting them into the die or during pouring itself,
· the foundrymen confirmed that the cores did not crumble and no grains of sand fell into the die (sand did not adhere to the mould cavity).
The very good collapsibility of the cores after pouring and the substantial reduction of the decoring time were not confirmed. The decoring times were 3 to 4 times longer than standard. This is the main disadvantage found in the experiment and the primary task for further development and optimization of the whole process, especially decoring. Better core break down could be achieved by a reduced binder addition level and a new type of additive.
Castings quality evaluation
The surface roughness Ra values for castings made with both geopolymer binder and PUR cold box amine cores are shown in Tab. 5. Castings made with geopolymer binder cores have a surface roughness Ra from 5.215 to 6.227 μm, compared with values from 12.573 to 29.178 μm for PUR cold box amine cores. Higher surface roughness values occur in the central area of the castings for both technologies. It can be stated that castings made with geopolymer cores reach roughly three times lower surface roughness values. The difference in favour of the geopolymer binder can be seen in the details of the casting surfaces, Fig. 13. These are very positive results, which can be related to less gas formed during pouring and the higher compaction of the geopolymer cores.
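The "three times lower" claim can be checked from the midpoints of the ranges just quoted; the quick calculation below is illustrative only, since Tab. 5 gives the measured point-by-point values.

```python
geo = (5.215 + 6.227) / 2      # midpoint of the geopolymer Ra range, in um
pur = (12.573 + 29.178) / 2    # midpoint of the PUR cold box amine Ra range, in um
print(round(geo, 3), round(pur, 3), round(pur / geo, 1))
# 5.721 20.876 3.6  -> roughly a three- to four-fold reduction in Ra
```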
Fig. 14 presents X-ray examination pictures of castings for both technologies. There is no difference: castings made by both technologies are free of internal defects.
The castings were successfully tested in the customer's laboratory. The machining and pressure tests were carried out as standard, and all 41 castings were classified as OK.
It would be very interesting to use cores without refractory coating in standard production. It could bring economic benefits, less handling, and time savings in the production process.
Conclusion
The presented research and the obtained results support the effort to introduce geopolymer technology, geopolymers, and inorganic binders for core production, with the goal of replacing organic binders in the near future. The new sodium-potassium geopolymer grade, binder W20, appears to be the right way forward; the final surface and internal casting quality confirm it.
The obtained results were compared with the PUR cold box amine technology widely used in foundries. It can be argued that geopolymers have great potential in the field of core production. The geopolymer technology is completely inorganic, and geopolymers for hot box and hot air hardening can guarantee good core and casting production with the same or better productivity while being much more environmentally friendly throughout core and casting production.
On the basis of the results, the following conclusions can be drawn:
· The flexural strength of the geopolymer core sand mixture is higher than or equal to that of the PUR cold box amine, depending on the addition level. This means the geopolymer technology is an adequate alternative from the viewpoint of strength.
· Cores can be handled as usual, and conventional refractory coatings can be used as well.
· All cores were stored under standard conditions for almost one month without any problems, strength reduction, or abrasion. No special conditioned room, tent, or extra care is needed.
· The impact on working conditions and the environment in general is very positive. No smell, smoke, fume, or hazardous odour is generated throughout the production process, even during pouring and decoring.
· Core break down was identified as the weak property in this research, despite the very good break down properties observed in similar experiments. This disadvantage can be addressed by reducing the amount of binder in the core sand mixture, which the high flexural strength allows, or by using new types of additives that improve core break down after pouring.
· The internal quality of the castings evaluated by X-ray examination demonstrated castings without any defects.
· A significant difference was observed in the evaluation of the casting surface roughness on the core side. The roughness produced by the geopolymer binder cores was three times lower than that of the PUR cold box amine. This opens the possibility of using cores without refractory coatings in the standard production process, which can improve productivity and save costs, handling, and storage capacity.
· Final machining and pressure tests at the customer were evaluated positively for all castings.
"year": 2019,
"sha1": "c5be54d19a8d011b1c2cfe6534ec8f2e3b9d7cf3",
"oa_license": "CCBYNC",
"oa_url": "http://journalmt.com/doi/10.21062/ujep/420.2019/a/1213-2489/MT/19/6/1071.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6035a0a82190b3a659b29081efbd1827e1f961d3",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Some notes on the equivalence of first-order rigidity in various geometries
These pages serve two purposes. First, they are notes to accompany the talk "Hyperbolic and projective geometry in constraint programming for CAD" by Walter Whiteley at the "Janos Bolyai Conference on Hyperbolic Geometry", 8-12 July 2002, in Budapest, Hungary. Second, they sketch results that will be included in a forthcoming paper that will present the equivalence of the first-order rigidity theories of bar-and-joint frameworks in various geometries, including Euclidean, hyperbolic and spherical geometry. The bulk of the theory is outlined here, with remarks and comments alluding to other results that will be included in the final version of the paper.
Introduction
In this paper, we explore the connections among the theories of first-order rigidity of bar and joint frameworks (and associated structures) in various metric geometries extracted from the underlying projective space of dimension n, or R n+1 . The standard examples include Euclidean space, elliptical (or spherical) space, hyperbolic space, and a metric on the exterior of hyperbolic space.
In his book, Pogorelov explored more general issues of uniqueness, and local uniqueness, of realizations in these standard spaces, with some first-order correspondences as corollaries [11]. We will take the opposite tack in this paper, beginning directly with the first-order theory. We believe this presents a more transparent and accessible starting point for the correspondences. In a second paper, we will use the additional technique of 'averaging' in combination with the first-order results to transfer results about pairs of objects with identical distance constraints in one space to corresponding pairs in a second space. Like Pogorelov (and perhaps for related reasons) we will begin with the correspondence between the theory in elliptical or spherical space and the theory in Euclidean space ( §4). This correspondence of configurations is direct, using gnomic projection (or central projection) from the upper half sphere to the corresponding Euclidean space. This correspondence between spherical frameworks and their central projections into the plane is also embedded in previous studies of frameworks in dimension d and their one point cones into dimension d + 1 [18].
With a firm grounding for the first-order rigidity in spherical space, it is simpler to work from the spherical n-space to the other metrics extracted from the underlying R n+1 ( §5). The correspondence works for any metric of the form ⟨p, q⟩ = Σ_{i=1}^{n+1} a_i p_i q_i, with a_i ≠ 0, in addition to the special case of Euclidean space (with a_{n+1} = 0). It has a particularly simple form for selected normalizations of the rays as points in the space, such as ⟨p, p⟩ = ±1, which is the form we present.
Having examined the theory of first-order motions, we pause to present the motions as the solutions to a matrix equation R_X(G, p)x = 0 for the metric space X ( §6). In this setting, we have the equivalent theory of static rigidity working with the row space and row dependences (the self-stresses) of these matrices, instead of the column dependencies (the motions). The correspondence is immediate, but it takes a particularly nice form for the 'projective' models in Euclidean space of the standard metrics. In this setting, the rigidity correspondence is a simple matrix multiplication: R_Y(G, p) = R_X(G, p)[T_XY] for the same underlying configuration p, where [T_XY] is a block diagonal matrix with a block entry for each vertex, based on how the sense of 'perpendicular' is twisted at that location from one metric to the other. As a consequence of this simple correspondence of matrices, we see that row dependencies (the static self-stresses) are completely unchanged by the switch in metric. As a by-product of this static correspondence, there is a correspondence for the first-order rigidity of the structures with inequalities, the tensegrity frameworks, which are well understood as a combination of first-order theory and self-stresses of the appropriate signs for the edges with pre-assigned inequality constraints.
As this shared underlying statics hints, there is a shared underlying projective theory of statics (and associated first-order kinematics) [4].
We will not present that theory here but we note the projective invariance, in all the metrics, of the first-order and static theories ( §7). There are various extensions that follow from this underlying projective theory, such as inclusion of 'vertices at infinity' in Euclidean space [4], and the possibility that polarity has a role to play (see below).
As an application of these correspondences, we consider a classical theory of rigidity for polyhedra -the theorems of Cauchy, Alexandrov, and the associated theory of Andreev. This theory provides theorems about the first-order rigidity of convex polyhedra and convex polytopes with either rigid faces, or 2-faces triangulated with bars and joints in dimensions d ≥ 3, in Euclidean space. Since the basic concepts of convexity transfer among the metrics (if we remove the equator on the sphere, or the corresponding line at infinity in Euclidean space), this first-order and static theory immediately transfers to identical theorems in the other metric spaces ( §7). There are some first-order extensions of Cauchy's Theorem to versions of local convexity, which will automatically extend to the various metrics and on through to hyperplanes and angles, giving additional generalizations. Moreover, this theory for hyperplanes and angles will be projectively invariant, if we are careful with the transfer of concepts such as 'convexity' through the projective transformations.
In hyperbolic space, there is a correspondence between rigidity of 'bar-and-joint frameworks' with vertices and distance constraints in the exterior hyperbolic space (or ideal points) and planes and angle constraints in the interior hyperbolic space. We present this correspondence directly, although it can be viewed as a polarity about the absolute. With this correspondence, the first-order Cauchy theory in exterior hyperbolic space gives a first-order theory for planes and angles in hyperbolic space. This result turns out to be a generalization of the first-order version of Andreev's Theorem. In this setting, the constraint that angles be less than π/2 disappears and the angles have the full range of angles in a convex polyhedron (< π).
Moreover, as this hints, there is a correspondence, via spherical polarity, which connects the first-order Cauchy Theorem in the spherical or elliptic space with an Andreev style first-order theorem for planes and angles of a simple convex polytope in elliptical geometry. The effect of polarity in Euclidean space is drastically different. It has interesting and distinctive interpretations in dimensions d = 2 and d = 3 [22, 23].
The general problem of characterizing which graphs have some (almost all) realizations in d-space as first-order rigid frameworks is hard for dimensions d ≥ 3. With these correspondences, we realize that this problem is identical in all the metric spaces and we will not get additional leverage by comparing first-order behaviour under the various metrics.
On the other hand, in general geometric constraint programming in fields such as CAD, there is an interest in more general systems of geometric objects and general constraints. For example, circles of variable radii with angles of intersection as constraints are of interest in CAD. As people familiar with hyperbolic geometry may realize, these are equivalent, both at first-order and at all orders, to planes and angles in hyperbolic 3-space. The correspondence presented here provides the final step in the correspondence between circles and angles in the plane and points and distances in Euclidean 3-space [13].
The basic first-order correspondence among metrics should extend to differentiable surfaces from these discrete structures. The major difference here is that static rigidity and first-order rigidity are distinct concepts in this world, which corresponds to infinite matrices. Still, the correspondence should apply to both theories, and all the metrics.
2. First-Order Rigidity in E n

2.1. Euclidean n-space. Let E n denote the set of vectors in R n+1 with x_{n+1} = 1, that is, E n = {x ∈ R n+1 | e · x = 1}, where e denotes the (n+1)-st standard basis vector.

2.2. Frameworks and rigidity in E n. A graph G = (V, E) consists of a finite vertex set V = {1, 2, . . . , v} and an edge set E, where E is a collection of unordered pairs of vertices. A bar-and-joint framework G(p) in E n is a graph G together with a map p : V → E n.

2.3. Motivation for first-order rigidity in E n. If p(t) is a motion of the framework G(p) that preserves the bar lengths, then for all t and {i, j} ∈ E, (p_i(t) − p_j(t)) · (p_i(t) − p_j(t)) is constant, where x · y denotes the Euclidean inner product of the vectors x and y; differentiating at t = 0 gives (p_i(0) − p_j(0)) · (p′_i(0) − p′_j(0)) = 0. Since the framework lies in E n during the motion (p_k(t) ∈ E n for all k ∈ V), p_k(t) satisfies e · p_k(t) = 1 for all k ∈ V. Hence its derivative satisfies e · p′_i(0) = 0 for each i ∈ V. This motivates the following definition.

2.4. First-order rigidity in E n. A first-order motion of the framework G(p) in E n is a map u : V → R n+1 satisfying, for each {i, j} ∈ E and each k ∈ V,

(1) (p_i − p_j) · (u_i − u_j) = 0 and e · u_k = 0,

where u_i denotes u(i). A trivial first-order motion of E n is a map u : E n → R n+1 satisfying (x − y) · (u(x) − u(y)) = 0 and e · u(z) = 0 for all x, y and z in E n. G(p) is first-order rigid in E n if all the first-order motions of the framework G(p) are restrictions of trivial first-order motions of E n.
2.5. Remark. Any rigid motion of E n yields a trivial first-order motion of a given framework: the isometry restricts to a motion of the framework whose derivative satisfies the equations in (1).

2.6. Remark. First-order rigidity is a good indicator of rigidity: first-order rigidity implies rigidity, but not conversely.
3. First-Order Rigidity in S n+

3.1. Spherical n-space. Let S n+ denote the upper half sphere, S n+ = {x ∈ R n+1 | x · x = 1, x_{n+1} > 0}. The distance between two points x, y ∈ S n+ is given by the angle subtended by the vectors x and y, d_{S+}(x, y) = arccos(x · y).

3.2. Frameworks and rigidity in S n+. A bar-and-joint framework G(p) in S n+ is a graph G together with a map p : V → S n+.
3.3. Motivation for first-order rigidity in S n+. To extend the definitions of first-order motion and first-order rigidity to frameworks in S n+, mimic the motivation presented in section 2.3. If p(t) is a motion of a framework G(p) in S n+, then for all t and {i, j} ∈ E,

d_{S+}(p_i(t), p_j(t)) = c_ij,

where c_ij is constant for all {i, j} ∈ E, and for all t and k ∈ V, p_k(t) ∈ S n+. Equivalently, for all t, {i, j} ∈ E and k ∈ V,

p_i(t) · p_j(t) = cos c_ij and p_k(t) · p_k(t) = 1.

If the motion p(t) is differentiable at t = 0, then p(t) must satisfy

p_i(0) · p′_j(0) + p_j(0) · p′_i(0) = 0 and p_k(0) · p′_k(0) = 0.

This leads to the following definition.
3.4. First-order rigidity in S n+. A first-order motion of the framework G(p) in S n+ is a map u : V → R n+1 satisfying, for each {i, j} ∈ E and for each k ∈ V,

(2) p_i · u_j + p_j · u_i = 0 and p_k · u_k = 0.

A trivial first-order motion of S n+ is a map u : S n+ → R n+1 satisfying x · u(y) + y · u(x) = 0 and z · u(z) = 0, for all x, y and z in S n+. The framework G(p) is first-order rigid in S n+ if all first-order motions of G(p) are restrictions of trivial first-order motions.
3.5. Remark. Note that the equations in (2) are equivalent to the following conditions,

(p_i − p_j) · (u_i − u_j) = 0 and p_k · u_k = 0,

which are similar to the equations defining first-order rigidity in E n.
3.6. Remark. If G(p) is a bar-and-joint framework in S n+, then the graph obtained from G by adjoining a new vertex with edges incident with all vertices of G, together with the map p̂ : V ∪ {v + 1} → E n+1 given by p̂_k = p_k for k ∈ V and p̂_{v+1} = 0 (the cone point at the origin), is first-order rigid in E n+1 iff G(p) is first-order rigid in S n+. That is, frameworks in S n+ can be modeled by the cone on the same framework in E n+1.
4. Equivalence of First-Order Rigidity in S n+ and E n

This section presents two maps: a map carrying a framework G(p) in S n+ into a framework G(q) in E n, and a map carrying the first-order motions of G(p) into first-order motions of G(q). The latter map carries trivial first-order motions of S n+ to trivial first-order motions of E n, yielding the result that G(p) is first-order rigid iff G(q) is first-order rigid.
4.1. Mapping frameworks and first-order motions. If G(p) is a framework in S n+, then G(ψ ∘ p) is a framework in E n, where ψ : S n+ → E n is given by ψ(x) = x/(e · x). The inverse of ψ is given by ψ⁻¹(y) = y/√(y · y). If u is a first-order motion of the framework G(p) in S n+, let ϕ denote the map

ϕ(u_i) = (u_i − (e · u_i) e)/(e · p_i).

If G(q) is a framework in E n with first-order motion v, then ϕ⁻¹ is given by

ϕ⁻¹(v_i) = (e · p_i) v_i − (p_i · v_i) e, where p_i = ψ⁻¹(q_i).

Observe that ϕ and ϕ⁻¹ map into the appropriate tangent spaces: ψ⁻¹(q_i) · ϕ⁻¹(v_i) = 0 and ϕ(u_i) · e = 0.
4.2. Theorem. u is a first-order motion of the framework G(p) in S n+ iff ϕ ∘ u is a first-order motion of the framework G(ψ ∘ p) in E n. Moreover, u is a trivial first-order motion iff ϕ ∘ u ∘ ψ⁻¹ is a trivial first-order motion.

Pf. Note that

(3) (ψ(p_i) − ψ(p_j)) · (ϕ(u_i) − ϕ(u_j)) = (p_i · u_i)/(e · p_i)² + (p_j · u_j)/(e · p_j)² − (p_i · u_j + p_j · u_i)/((e · p_i)(e · p_j)).

If u is a first-order motion of G(p), then u_i · p_i = 0 for all i ∈ V, and p_i · u_j + p_j · u_i = 0 for all {i, j} ∈ E. By (3), (ψ(p_i) − ψ(p_j)) · (ϕ(u_i) − ϕ(u_j)) = 0 for all {i, j} ∈ E. The definition of ϕ ensures that ϕ(u_i) · e = 0. Therefore, ϕ ∘ u is a first-order motion of G(ψ ∘ p).

Suppose u is a trivial first-order motion. Then x · u(x) = 0 for all x ∈ S n+ and x · u(y) + y · u(x) = 0 for all x, y ∈ S n+. Let v : E n → R n+1 denote the composition ϕ ∘ u ∘ ψ⁻¹. If x̄, ȳ ∈ E n with x denoting ψ⁻¹(x̄) and y denoting ψ⁻¹(ȳ), then (3) gives (x̄ − ȳ) · (v(x̄) − v(ȳ)) = 0, and e · v(z̄) = 0 for all z̄ ∈ E n. So v is a trivial first-order motion. The converse follows similarly.
Corollary. G(p) is first-order rigid in S n+ iff G(ψ ∘ p) is first-order rigid in E n.

4.3. Remark. S n+ versus S n: Given a discrete framework, there exists a rotation of the n-sphere such that no vertex of the framework lies on the equator of the sphere. Therefore, we need not restrict our frameworks to a hemisphere.
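The theorem and its corollary are easy to check numerically. The Python sketch below (all function names are ours, and the formula for ϕ is the reconstruction given in section 4.1) builds a single bar on the upper hemisphere, equips it with a first-order motion, and confirms that the transferred motion satisfies the Euclidean first-order equations.

```python
import numpy as np

e = np.array([0.0, 0.0, 1.0])                   # n = 2: S^2_+ and E^2 sit inside R^3

def psi(x):                                      # central projection S^2_+ -> E^2
    return x / (e @ x)

def phi(u, p):                                   # motion transfer (our reconstruction)
    return (u - (e @ u) * e) / (e @ p)

rng = np.random.default_rng(7)
p = rng.normal(size=(2, 3))
p[:, 2] = np.abs(p[:, 2]) + 0.5                  # keep both joints above the equator
p /= np.linalg.norm(p, axis=1, keepdims=True)    # p_i, p_j in S^2_+

u = rng.normal(size=(2, 3))
u -= np.sum(p * u, axis=1, keepdims=True) * p    # tangency: p_k . u_k = 0
w = p[0] - (p[0] @ p[1]) * p[1]                  # tangent at p_j, not orthogonal to p_i
u[1] -= ((p[0] @ u[1] + p[1] @ u[0]) / (p[0] @ w)) * w   # enforce bar equation (2)

q = np.array([psi(x) for x in p])
v = np.array([phi(u[k], p[k]) for k in range(2)])
print(np.isclose((q[0] - q[1]) @ (v[0] - v[1]), 0.0))    # Euclidean bar equation: True
print(np.allclose(v @ e, 0.0))                            # e . v_k = 0: True
```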
5.1. Geometries. For x, y ∈ R n+1, let ⟨x, y⟩_k denote the function

⟨x, y⟩_k = x_1 y_1 + · · · + x_{n−k+1} y_{n−k+1} − x_{n−k+2} y_{n−k+2} − · · · − x_{n+1} y_{n+1},

and let X n_{c,k} denote the set

X n_{c,k} = {x ∈ R n+1 | ⟨x, x⟩_k = c, x_{n+1} > 0},

for some constant c ≠ 0 and k ∈ N. We write X n to simplify notation, if c and k are understood. If k = 1 and c = −1, then X n is hyperbolic space, H n. If k = 1 and c = 1, then X n is exterior hyperbolic space, D n. Spherical space S n+ is the case k = 0, c = 1. Note that E n ≠ X n for any choice of c and k.
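To make the bookkeeping concrete, here is a minimal sketch of the form ⟨·, ·⟩_k; the helper name and the sample point are ours.

```python
import numpy as np

def form_k(x, y, k):
    """<x, y>_k: +1 signature on the first n+1-k coordinates, -1 on the last k."""
    s = np.ones(len(x))
    if k > 0:
        s[-k:] = -1.0
    return float(np.sum(s * x * y))

x = np.array([0.3, 0.4, 1.0])      # a point of R^3 (n = 2)
print(form_k(x, x, 0))             # 1.25: x / sqrt(1.25) lies on S^2 (k = 0, c = 1)
print(form_k(x, x, 1))             # -0.75: x / sqrt(0.75) lies in H^2 (k = 1, c = -1)
```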
5.3. First-order rigidity in X n. A metric d_X can be placed on X n so that d_X(x, y) is a function of ⟨x, y⟩_k. A sufficient condition for the distance d_X(x, y) to remain constant is the requirement that ⟨x, y⟩_k remain constant. Therefore, the same analysis motivates the following extensions of the definitions of first-order rigidity to X n.
A bar-and-joint framework G(p) in X n is a graph G together with a map p : V → X n. A first-order motion of the framework G(p) in X n is a map u : V → R n+1 satisfying, for each {i, j} ∈ E,

(4) ⟨p_i, u_j⟩_k + ⟨p_j, u_i⟩_k = 0,

and for each i ∈ V,

(5) ⟨p_i, u_i⟩_k = 0.

A trivial first-order motion of X n is a map u : X n → R n+1 satisfying ⟨x, u(y)⟩_k + ⟨y, u(x)⟩_k = 0 and ⟨z, u(z)⟩_k = 0 for all x, y, z ∈ X n. G(p) is first-order rigid in X n if all first-order motions of G(p) are the restrictions of trivial first-order motions of X n.
5.4. X n and E n. In section 4 we established the equivalence between first-order rigidity in E n and first-order rigidity in S n+. We need only demonstrate that the equivalence holds between the first-order rigidity theories of X n and S n+.

5.5. X n and S n+. Let ψ_{S+} : X n → S n+ denote the map x ↦ x/√(x · x), and let ϕ_{S+} denote the map u_i ↦ J u_i/√(p_i · p_i), where J is the diagonal matrix with ⟨x, y⟩_k = x · (J y). A direct computation gives

ψ_{S+}(p_i) · ϕ_{S+}(u_j) + ψ_{S+}(p_j) · ϕ_{S+}(u_i) = (⟨p_i, u_j⟩_k + ⟨p_j, u_i⟩_k)/√((p_i · p_i)(p_j · p_j)).

As in the proof of Theorem 4.2, the above equation and the definitions of ψ_{S+} and ϕ_{S+} give that ϕ_{S+} ∘ u is a first-order motion of G(ψ_{S+} ∘ p) iff u is a first-order motion of G(p).
It is clear that trivial motions of S n+ map to trivial motions of X n. However, a trivial motion of X n maps onto a "trivial motion" of a proper subset of S n+. The following fact finishes the proof.

Figure 4. Mapping first-order motions of a framework in S n+ to first-order motions of a framework in H n.
Fact. Given a first-order motion u of K n+1 , the complete graph on n + 1 vertices in E n , there exists a unique trivial first-order motion of E n extending u.
(This result and the equivalence of the first-order theories of E n and S n+ give the corresponding result for S n+, which was needed to finish the proof of the preceding theorem.)
5.7. Remark. There is no obstruction to defining a framework with vertices in both H n and D n: the equations defining first-order motions provide formal constraints between these vertices, although the geometric interpretations of these constraints may not be obvious. In general, the theorem holds for frameworks with vertices on the surfaces ⟨x, x⟩_k = ±1, but not with vertices on ⟨x, x⟩_k = 0.
6. The Rigidity Matrix

6.1. Projective models of X n. The projective model of X n is the subset of E n obtained by projecting the points of X n from the origin onto E n: P X n = ψ_E(X n), where ψ_E(x) = x/(e · x). The projective model of hyperbolic n-space H n is the interior of the unit n-ball B n of E n, and the projective model of exterior hyperbolic n-space D n is the exterior of B n. The unit (n − 1)-sphere S n−1 is the absolute, the points at infinity of hyperbolic geometry. Spherical n-space is modeled projectively by E n.

Figure 5. Mapping first-order motions of a framework in S n+ to first-order motions of a framework in E n.
Since we are now restricting our attention to points in E n, we identify E n with R n and write P X n to denote the projective model of X n as a subset of R n. Distance in P X n is calculated by normalizing the points into X n and applying the definition of distance in X n. For example, the distance between points x and y in P S n+ (so x, y ∈ R n) is

d_{P S}(x, y) = arccos((x · y + 1)/√((x · x + 1)(y · y + 1))),

and for points x and y in P H n,

d_{P H}(x, y) = arccosh((1 − x · y)/√((1 − x · x)(1 − y · y))).

6.2. The rigidity matrix of a framework. A first-order motion u : V → R n of the framework G(p) in R n satisfies, for each {i, j} ∈ E,

(p_i − p_j) · (u_i − u_j) = 0.

This system of homogeneous linear equations, indexed by the edges of G, induces a linear transformation with matrix R_E(G, p), called the rigidity matrix of G(p): it has one row for each edge {i, j}, with the entries of p_i − p_j in the n columns corresponding to vertex i, the entries of p_j − p_i in the columns corresponding to vertex j, and zeros elsewhere. The kernel of R_E(G, p) is precisely the space of first-order motions of G(p).
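The distance formulas above can be sanity-checked by comparing them against "normalize into X n, then measure"; the closed forms coded below are the ones displayed above (our reconstruction from the stated normalization).

```python
import numpy as np

def d_PS(x, y):     # spherical distance between x, y in P S^n (affine coordinates)
    return np.arccos((x @ y + 1.0) / np.sqrt((x @ x + 1.0) * (y @ y + 1.0)))

def d_PH(x, y):     # hyperbolic distance between x, y in P H^n (inside the unit ball)
    return np.arccosh((1.0 - x @ y) / np.sqrt((1.0 - x @ x) * (1.0 - y @ y)))

x, y = np.array([0.2, -0.1]), np.array([-0.3, 0.4])
X, Y = np.append(x, 1.0), np.append(y, 1.0)       # lift into R^3

Xs, Ys = X / np.linalg.norm(X), Y / np.linalg.norm(Y)        # normalize onto S^2_+
print(np.isclose(d_PS(x, y), np.arccos(Xs @ Ys)))            # True

J = np.diag([1.0, 1.0, -1.0])                                 # matrix of < , >_1
Xh, Yh = X / np.sqrt(-X @ J @ X), Y / np.sqrt(-Y @ J @ Y)    # normalize into H^2
print(np.isclose(d_PH(x, y), np.arccosh(-(Xh @ J @ Yh))))    # True
```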
A first-order motion u : V → R n of the framework G(p) in P H n, P D n or P S n+ satisfies, for each {i, j} ∈ E,

k_ij · u_i + k_ji · u_j = 0, where k_ij = ((1 + K p_i · p_j)/(1 + K p_i · p_i)) p_i − p_j,

with K = 1 for P S n+ and K = −1 for P H n and P D n. The matrix of the linear transformation induced by this system of linear equations is the rigidity matrix R_X(G, p) of G(p): one row per edge {i, j}, with k_ij in the columns of vertex i, k_ji in the columns of vertex j, and zeros elsewhere. Note that k_ij depends on X.
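As an illustration of the Euclidean case, the sketch below assembles R_E(G, p) for a triangle in the plane (names are ours); first-order rigidity appears as the kernel of the matrix having exactly the dimension of the trivial motions, n(n + 1)/2 = 3 for a framework spanning E 2.

```python
import numpy as np
from itertools import combinations

def rigidity_matrix(p, edges):
    """R_E(G, p): one row per edge {i, j}; p_i - p_j in the columns of
    vertex i, p_j - p_i in the columns of vertex j, zeros elsewhere."""
    v, n = p.shape
    R = np.zeros((len(edges), v * n))
    for row, (i, j) in enumerate(edges):
        R[row, n * i:n * (i + 1)] = p[i] - p[j]
        R[row, n * j:n * (j + 1)] = p[j] - p[i]
    return R

p = np.array([[0.0, 0.0], [1.0, 0.0], [0.3, 0.8]])      # a triangle in E^2
R = rigidity_matrix(p, list(combinations(range(3), 2)))
dim_motions = p.size - np.linalg.matrix_rank(R)          # dim ker R_E(G, p)
print(dim_motions == 3)  # True: only trivial motions, so first-order rigid
```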
6.3. Transforming rigidity matrices. Let T_K(G, p) denote the block diagonal matrix with one n × n block T_{p_k} = I + K p_k p_k^T for each vertex k (I is the n × n identity matrix, and the (i, j) entry of p_k p_k^T is the product of the i-th and j-th components of p_k), where K = 1 for P S n+, K = −1 for P H n and P D n, and K = 0 gives T_0 = I for P E n. For example, for n = 3 and p_k = (x_1, x_2, x_3),

T_{p_k} =
[ 1 + K x_1²   K x_1 x_2    K x_1 x_3  ]
[ K x_1 x_2   1 + K x_2²   K x_2 x_3  ]
[ K x_1 x_3   K x_2 x_3    1 + K x_3² ].

Theorem. Let G(p) be a framework with p ∈ R n. Then (1) T_K(G, p) satisfies R_X(G, p) T_K(G, p) = R_E(G, p); (2) G(p) is first-order rigid in P S n+ iff G(p) is first-order rigid in P E n; (3) G(p) is first-order rigid in P H n ∪ P D n iff G(p) is first-order rigid in P E n and p_i · p_i ≠ 1 for all i ∈ V (no vertex is on the absolute).

Pf. (1) Since T_{p_i} multiplies only the columns corresponding to vertex i, we need only verify k_ij T_{p_i} = p_i − p_j. This is a straightforward calculation. (2), (3): Since the determinant of T_K(G, p) is the product ∏_{i=1}^{v} det(T_{p_i}) and det(T_{p_i}) = 1 + K(p_i · p_i), the dimension of the vector space of first-order motions of G(p) is the same in each geometry iff 1 + K(p_i · p_i) ≠ 0 for all i ∈ V.
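Both the key step in part (1) and the determinant formula can be confirmed numerically, with the block T_{p_k} = I + K p_k p_k^T and the row entry k_ij as reconstructed above:

```python
import numpy as np

def T_block(p_k, K):
    return np.eye(len(p_k)) + K * np.outer(p_k, p_k)

def k_entry(p_i, p_j, K):
    return (1 + K * (p_i @ p_j)) / (1 + K * (p_i @ p_i)) * p_i - p_j

K = -1.0                                                  # hyperbolic case
p_i, p_j = np.array([0.2, -0.1, 0.3]), np.array([-0.4, 0.1, 0.2])  # inside the ball
print(np.allclose(k_entry(p_i, p_j, K) @ T_block(p_i, K), p_i - p_j))   # (1): True
print(np.isclose(np.linalg.det(T_block(p_i, K)), 1 + K * (p_i @ p_i)))  # det lemma: True
```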
6.4. Remark. It is well-known that the rank of the rigidity matrix, and thus first-order rigidity, of a framework in E n is invariant under projective transformations of E n. Due to the equivalence of first-order theories, the same is true of frameworks in X n. (In fact, there exists an underlying projective theory.) Intuitively at least, this projective invariance suggests the equivalences presented in this paper, since all the geometries discussed can be obtained from projective geometry by choosing an appropriate set of transformations.

Figure 7. A visual summary of the underlying projective theory: hyperbolic space H, Euclidean space E and spherical space S can be realized as subgeometries of projective geometry.
7. The First-Order Uniqueness Theorems of Andreev and Cauchy-Dehn
An immediate consequence of the equivalence of these first-order rigidity theories is the ability to transfer results between the theories.

7.1. The Cauchy-Dehn Theorem. The Cauchy-Dehn theorem for polytopes in E n states that a convex, triangulated polyhedron in E n, n ≥ 3, is first-order rigid. Before the generalization of this theorem can be stated, convexity in X n needs to be defined. A set S ⊂ X n is convex if, for any line L of X n, L ∩ S is connected. Therefore, S ⊂ X n is convex iff ψ_E(S) ⊂ E n is convex.
7.2. A first-order version of Andreev's uniqueness theorem. If p denotes a point of D n, then the set of points x in R n+1 satisfying ⟨p, x⟩_1 = 0 (orthogonal in the hyperbolic sense) defines a unique hyperplane of R n+1 through the origin. Therefore, to each point p of D n, there corresponds a unique hyperplane of H n, P = {x ∈ H n | ⟨p, x⟩_1 = 0}, and conversely.
If q is another point of D n with Q the corresponding hyperplane of H n, the angle of intersection of the hyperplanes P and Q is defined to be arccos(⟨p, q⟩_1). So equations (4) and (5) defining a first-order motion u of a framework G(p) in D n,

⟨p_i, u_j⟩_1 + ⟨p_j, u_i⟩_1 = 0 and ⟨p_i, u_i⟩_1 = 0,

are precisely the conditions defining a "first-order motion" of a collection of planes under angle constraints (a bar-and-joint framework is merely a collection of points under distance constraints). Polyhedra with fixed dihedral angles are examples of such objects.
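In coordinates, the angle is read directly off the poles. A small helper (names ours) makes this explicit for two hyperplanes of H 2 meeting at 60°:

```python
import numpy as np

def hyperplane_angle(p, q):
    """Angle of intersection of the hyperplanes of H^n polar to the
    points p, q of D^n (normalized so <p, p>_1 = <q, q>_1 = 1)."""
    J = np.diag([1.0] * (len(p) - 1) + [-1.0])   # matrix of < , >_1
    return np.arccos(p @ J @ q)

p = np.array([1.0, 0.0, 0.0])                    # <p, p>_1 = 1
q = np.array([0.5, np.sqrt(3.0) / 2.0, 0.0])     # <q, q>_1 = 1
print(np.degrees(hyperplane_angle(p, q)))        # 60.0 degrees
```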
Under this point-plane correspondence of D n and H n, the Cauchy-Dehn theorem for D n gives a first-order version of Andreev's uniqueness theorem. Indeed, a simple, convex polytope in H n corresponds to a triangulated, convex polytope in D n. We use stiff to denote the analogous definition of first-order rigid.
Theorem. (Andreev) If M is a simple, convex polytope in H n, n ≥ 3, then M is stiff.

7.3. Remark. The usual hypothesis of Andreev's theorem requires the polytope M to have dihedral angles not exceeding π/2. This supposition implies M is simple.

7.4. Remark. The point-plane correspondence described above is known as polarity. There is a version of this result for the sphere that requires a better discussion of polarity on the sphere.
"year": 2007,
"sha1": "a57f3c4b9e0e403da879d6525f20a4a9d79988f6",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a57f3c4b9e0e403da879d6525f20a4a9d79988f6",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Mental Resilience and Coping With Stress: A Comprehensive, Multi-level Model of Cognitive Processing, Decision Making, and Behavior
Aversive events can evoke strong emotions that trigger cerebral neuroactivity to facilitate behavioral and cognitive shifts to secure physiological stability. However, upon intense and/or chronic exposure to such events, the neural coping processes can be maladaptive and disrupt mental well-being. This maladaptation denotes a pivotal point when psychological stress occurs, which can trigger subconscious, "automatic" neuroreactivity as a defence mechanism to protect the individual from potential danger, including overwhelming unpleasant feelings and disturbing or threatening thoughts. The outcomes of maladaptive neural activity are cognitive dysfunctions such as altered memory, decision making, and behavior that impose a risk for mental disorders. Although the neurocognitive phenomena associated with psychological stress are well documented, the complex neural activity and pathways related to stressor detection and stress coping have not been outlined in detail. Accordingly, we define acute and chronic stress-induced pathways, phases, and stages in relation to novel/unpredicted, uncontrollable, and ambiguous stressors. We offer a comprehensive model of the stress-induced alterations associated with multifaceted pathophysiology related to cognitive appraisal and executive functioning in stress.
INTRODUCTION
The impact of minor and major stressors on psychological and physical health is well documented. It is clear from this literature that stressors are salient stimuli, including events and behavior, that can evoke strong negative emotions and feelings such as fear, betrayal, confusion, and powerlessness (i.e., psychological stress), which in turn can lead to significant morbidity including depression, PTSD, coronary heart disease, and ischemic stroke (e.g., Stansfeld and Candy, 2006; Hamer et al., 2012; Richardson et al., 2012; Brainin and Dachenhausen, 2013; Henderson et al., 2013; Wei et al., 2014a). Psychological stress is an appropriately evoked biological reaction intended to recalibrate and optimize executive functions to stay focused on the stressor at hand, and thus mitigate the potential harm to the organism. Although this mechanism is intended to be adaptive, it is not perfect, particularly in the case of intense and/or chronic stress. In this context, the neuroactivity can constrain cognition and increase the risk of mental and social dysfunction, as well as neural and systemic inflammation (e.g., Shin and Handwerger, 2009; Hassija et al., 2012; Latack et al., 2017; Auxéméry, 2018; Mills et al., 2019; Quinones et al., 2020; Slavich, 2020; Vaillancourt and Palamarchuk, 2021). The origin of this type of stress-associated cognitive maladjustment lies in attentional tunneling (i.e., stressor preoccupation; e.g., Chajut and Algom, 2003; Roelofs et al., 2007; Pilgrim et al., 2010; Tsumura and Shimada, 2012; Shields et al., 2019), which restricts cognitive flexibility (e.g., Alexander et al., 2007; Shields et al., 2016; Marko and Riečanský, 2018), and distorts memory because aversive information is prioritized over neutral or positive information (e.g., de Quervain et al., 2009, 2017; Palamarchuk and Vaillancourt, under review; Vaillancourt and Palamarchuk, 2021). Moreover, despite the shift in cognitive defence mechanisms to liberate the emotional burden via the downplaying of aversive feelings and thoughts, the attempted suppression of the stressor's influence can still affect mental health. For instance, internalizing can lead to dysphoria or anhedonia (Salmon and Bryant, 2002), core symptoms of major depressive disorder (American Psychiatric Association, 2013).
The effect of a psychological stressor is primarily related to the level of perceived stress severity, i.e., the cognitive appraisal/interpretation of the stressor. Stressors can represent various aversive events regardless of their proximity (i.e., direct or remote, such as in witnessing or learning), which commonly disrupt emotional integrity (Figure 1). This mechanism and its development have not been described comprehensively in one integrated model. In this review, we outline the central neural dynamics and highlight the main phases of stress development. We define a neuropathophysiological mechanism of psychological stress that represents a complex cognitive construct beyond the classic fear-conditioning model. We detail neural dynamics in stress and, in doing so, propose a multi-level model to describe the accumulated neuronal alterations underlying cognitive dysfunctions. Our review highlights the importance of improving psychological assessment, clinical screening, prevention, and treatment of altered adaptive-learning abilities in psychologically distressed and depressed individuals.
STRESSOR DETECTION AND AROUSAL
Psychological stress is a challenge, but the nervous system stands its homeostatic ground. First, it facilitates the detection of a stressor with noradrenergic signaling via the locus coeruleus-norepinephrine (LC-NE) system (e.g., Sara and Bouret, 2012; Bari et al., 2020; Poe et al., 2020). The LC-NE system is formed by the LC in the brainstem, which is a cluster of neurons containing NE. The axons of the LC neurons are organized in several modules that project across the brain and form a noradrenergic system with extensive collateralization. Thus, LC activation results in a diffuse NE surge in the cerebral networks (e.g., Sara and Bouret, 2012; Szabadi, 2013; Schwarz et al., 2015; Bari et al., 2020; Poe et al., 2020), which is linked to cognitive (e.g., attention and flexibility) and behavioral outcomes (e.g., Skosnik et al., 2000; Morilak et al., 2005; Alexander et al., 2007; Figure 2).
The LC neurons can be subconsciously activated in response to fear, which is likely linked to the corticotropin-releasing factor (CRF) afferents from the amygdala (e.g., Pacak et al., 1995; Dunn et al., 2004; Valentino and Van Bockstaele, 2008; Sara and Bouret, 2012; Szabadi, 2013; Godoy et al., 2018; Reyes et al., 2019). The amygdala is principally associated with a fear response (e.g., Etkin and Wager, 2007; Godoy et al., 2018; Palamarchuk and Vaillancourt, under review). Chronic psychological stress strengthens the functional connectivity between the LC and amygdala that relates to fear learning. Specifically, via hypothalamic orexin, LC activity facilitates amygdala-dependent aversive/fear memory (e.g., Sears et al., 2013), with early retrieval (up to 6 h) associated with activated prelimbic prefrontal cortex (PFC) → basolateral amygdala circuits and later retrieval (up to 28 days) associated with activated prelimbic PFC → thalamic paraventricular nucleus → central amygdala circuits (rat model, Do-Monte et al., 2015). At the same time, prolonged severe stress has been found to impair amygdalar inhibition, seen in reduced PFC → basolateral amygdala connectivity, which hyperactivated the amygdala and resulted in aggressive behavior (Wei et al., 2018). That is, in chronic stress, the amygdala is released from PFC control, yet thalamic pathways reconnect the pair, at least for fear memory retrieval.
The LC-amygdala connectivity is reciprocal, as the amygdala can phasically activate LC neurons as well (e.g., Bouret et al., 2003). Liddell et al. (2005) showed that subliminal fear stimuli (i.e., fearful faces) coactivate the LC, amygdala, pulvinar, and frontotemporal areas related to orienting an "alarm system" (hereafter referred to as cognitive defence that is induced by the "alarmed" LC-NE system; see Figure 2). Leuchs et al. (2017) validated previous findings that phasic pupil dilations, which are related to LC activity (e.g., Murphy et al., 2014) in response to aversive (e.g., Wiemer et al., 2014) and emotionally arousing stimuli (e.g., Bradley et al., 2008), are a physiological marker of fear learning/conditioning. Fear learning is associated with a functional coactivity between the amygdala, anterior cingulate cortex (ACC), insula, thalamus, and PFC (e.g., Etkin and Wager, 2007; Fullana et al., 2016; see Figure 2). At the same time, almost all of the neocortex (e.g., the PFC related to cognitive appraisal and stress controllability, and the ACC together with the insula related to the social monitoring/pain network; Palamarchuk and Vaillancourt, under review) can modulate LC activity via passing already processed/encoded information about salient sensory and behavioral stimuli (e.g., Sara and Bouret, 2012; Szabadi, 2013; Schwarz et al., 2015).
The LC neuronal activity is bimodal, with tonic (sensory-orientated) and phasic (action-orientated) firing that regulates attention and ongoing behavior. Specifically, the levels of tonic activity relate to drowsiness and disengagement (low), arousal (moderate), and hyperarousal (high; Sara and Bouret, 2012; Hofmeister and Sterpenich, 2015; Bari et al., 2020). Hyperarousal has been found to be associated with an increased effort to face challenges (Varazzani et al., 2015). The phasic activity increases in response to relevant behavior and hence prioritizes goal-directed attentional processing over stimulus-driven attention, which serves adaptive behavioral performance (Sara and Bouret, 2012; Hofmeister and Sterpenich, 2015). The phasic activity also reacts to fear, nociception (e.g., Valentino and Van Bockstaele, 2008; Sara and Bouret, 2012), and motivation (i.e., anticipated reward size; Bouret and Richmond, 2015), which modulate behavioral performance. However, upon detecting a stressor, the LC drops its phasic activity and increases its tonic activity, which is seen in hyperarousal and hypersensitivity and relates to scanning attention and the analysis of behavior (Valentino and Van Bockstaele, 2008). That is, when facing a stressor, the LC puts goal-directed attentional processing (the dorsal frontoparietal network) on hold so the challenge can first be inspected (the ventral/mesial frontoparietal network, mainly the dextral part including the inferior frontal gyrus, frontal/insula regions, and basal ganglia; Corbetta and Shulman, 2002; Corbetta et al., 2008; Shulman et al., 2009; see also Godoy et al., 2018). Therefore, we define cognitive defence as the ventromedial fronto-temporo-parietal network driven by fear, which can emerge when fearful stimuli (frontotemporal circuits) and novel/unexpected stimuli (frontoparietal circuits; Figure 2) are presented.

FIGURE 1 | A simplified schema of the neurocognitive reactivity to a psychological stressor. Note. This schema presents major neurocognitive dynamics during stress development phases (light blue blocks) and stages (yellow blocks). Neurocognitive stress reactivity is facilitated by two principal neural limbs, the LC-NE system and the HPA axis. Phase I: (1) The LC-NE system detects a challenging stimulus (i.e., stressor) and "informs" the neocortex related to cognition. (2) Automatically, it triggers subconscious cognitive defence mechanisms to activate the HPA axis. Phase II: (3) Further engagement of cognitive appraisal defines the severity of a stressor. Phase III: (4) Severe stress perception distresses emotions. (5) Fear promotes selective attention and aversive memory, which aggravates cognitive defence and (6) can result in psychological problems. (7) Insufficient fear downregulation in chronic and/or intense stress (alarm-to-threat stage), as well as chronic uncertainty (risk-to-escape stage) and/or losing hope (surrender-in-defeat stage), can lead to psychiatric disorders and cognitive alterations, e.g., poor memory and executive dysfunctions. Phase IV: (8) Consequently, poor neurocognitive functioning affects decision-making, as well as alters recognition (phase I), appraisal (phase II), and response (phase III) of/to a novel stressor. Legend: HPA, hypothalamic-pituitary-adrenal; LC-NE, locus coeruleus-norepinephrine; ↑: hyperactivity/increase; ↓: decrease; black arrows, adaptive path; blue arrows and blocks, maladaptive path.
Unexpected novel stimuli that do not have predictive value will elicit larger event-related potential responses measured by electroencephalography and a prolonged reaction time to the subsequent target (i.e., larger arousal), which in turn will modulate behavior (Knight and Nakada, 1998). The findings in shocked rats are that, compared to expected stressors, unpredictable stressors evoke greater LC-NE reactivity, seen in higher levels of the principal NE metabolite in the amygdala, hypothalamus, and thalamus, and higher levels of corticosterone in plasma. In contrast, predictable stressors do not elevate NE metabolite levels in the LC and thalamus, nor corticosterone levels in plasma, the way unpredictable stressors do, compared to non-shocked rats (Tsuda et al., 1989). The potential mechanism of the higher impact of unpredictable stress may relate to altered serotoninergic (5-HT) signaling that preserves β-adrenoreceptor upregulation (e.g., Asakura et al., 2000; Yalcin et al., 2008), which is also seen in conditioned fear and inescapable stress (Kaehler et al., 2000). However, McDevitt et al. (2009) showed that although stress controllability modulates NE levels, it does not affect NE signaling in the LC neurons; whereas stressor controllability relates to the medial PFC function to downregulate the amygdalar hyperactivity associated with altered 5-HT signaling (e.g., Amat et al., 2005; see also Puig and Gulledge, 2011; Leiser et al., 2015; Garcia-Garcia et al., 2017; Palamarchuk and Vaillancourt, under review). The findings collectively highlight that neurocognitive stress reactivity is orchestrated by the LC-NE system, fueled by the fear-driven amygdala, and regulated by the PFC/5-HT circuits.
FIGURE 2 | Highlights of the neural dynamics and topology in neurocognitive stress reactivity. Note. Schematic diagram of the main co-occurrences (1-5) in neurocognitive reactivity and cerebral topology in psychological stress. (1) Detection of a threat by the LC-NE system and (2) its sensory processing triggers (3) the amygdala (fear), which in turn affects (4, 5) cognition and behavior via the ventromedial fronto-temporo-parietal network [cognitive defence] directed towards fearful stimuli (the fronto-temporal circuits) and novel/unexpected stimuli (the fronto-parietal circuits). Novelty detection encompasses the following circuits: (a) the mesial temporoparietal network for phasic attention to novel stimuli such as auditory and somatosensory, but to a lesser degree visual; (b) the prefrontal-hippocampal-diencephalic network (i.e., frontocentral hippocampal regions, adjacent fusiform, lingual gyri, fornix-mammilothalamic-cortical pathways and calcarine) for novelty processing and encoding. By contrast, the posterior hippocampal region is associated with spatial processing and encoding. Legend: A, amygdala; dACC, dorsal anterior cingulate cortex; H, hippocampus; I, insula; LC, locus coeruleus; NE, norepinephrine; T, thalamus; vm, ventromedial; ↑: hyperactivity/increase; ↓: decrease; ↔: functional coactivity.
COGNITIVE APPRAISAL OF STRESS SEVERITY
Elevation of cortisol levels in response to a stressor is associated with perceived stress severity (e.g., Sladek et al., 2016; Gabrys et al., 2018, 2019; Woody et al., 2018). That is, a psychological threat "exists" to the extent cognition "sees" it. Though cognitive capability may help with avoiding dangerous situations, it is cognitive appraisal that helps reduce psychological stress, via a self-appraisal perspective that conquers challenges, but not the challenging stimulus per se. Slattery et al. (2013) tested the associations between three neurocognitive variables, IQ, academic achievement, and verbal/visual short-term memory, which were measured at age 14, during a standardized psychosocial stress paradigm delivered at age 18. Results indicated that poor cognitive appraisal, but not cognitive skill, predicted stress responses. Specifically, stress-coping abilities during stress anticipation depended on "secondary" cognitive appraisal related to the perception of poor self-efficacy (we term this appraisal, related to the perception of self-efficacy to deal with the stressor, self-appraisal), but not on "primary" cognitive appraisal (greater threat/challenge perception, which we term stressor-appraisal). Poor self-appraisal independently predicted lower cortisol reactivity during the test, indicating an insufficient stress response in adolescents. At the same time, poor visual memory predicted cortisol hyperreactivity to stress, whereas internalizing disorders increased the links between verbal memory and cortisol reactivity. These results denote an important fact: intelligence alone is not likely a marker of emotion regulation that is sufficiently related to stress outcome. Rather, the outcome associated with stress is principally influenced by an individual's cognitive self-appraisal.
Other findings support the impact of self-appraisal on stress severity. In adolescents, Sladek et al. (2016) showed that higher levels of perceived daily stress severity were linked to elevated cortisol levels, compared to diurnal patterning, only: (1) in individuals with low self-appraisal; and (2) in situations with higher "engagement" coping (i.e., support seeking). The situational variation of cortisol reactivity likely indicates that engagement coping may be due to lower self-belief in coping capacity and thus lower self-appraisal. Coping efficacy related to self-belief in one's capacity to deal with a stressful situation has been found to be linked to psychological problems in children of divorced parents (Sandler et al., 2000). In another study, compared to peers with high coping efficacy, adolescents with increased loneliness and low coping efficacy presented flatter diurnal cortisol slopes, a marker of poor cortisol regulation, later on in college; while higher coping efficacy predicted lower levels of the cortisol awakening response in college (Drake et al., 2016). In their subsequent work, Sladek et al. (2017b) found that girls with an active engagement coping style in response to interpersonal stress had lower cortisol levels (measured by the diurnal cortisol slope, total output across the day (AUCg), and the cortisol awakening response). However, higher rates of using active coping related to higher cortisol awakening responses the next morning. For women with attentional avoidance of social threat cues, Sladek et al. (2017a) showed that increased use of social support coping predicted lower cortisol responses to social stress and flatter average diurnal cortisol slopes compared to women with attentional vigilance (i.e., a bias toward threat). Similar cortisol patterns were found in children who had more social problems compared to their peers, which was seen in flatter slopes of cortisol decline from wakening to bedtime; as well, children presented with higher cortisol at wake-up time the next morning after higher than usual rates of peer or academic problems at school (Bai et al., 2017; see Figure 3).
The impact of self-appraisal on stress response/severity is in keeping with meta-analytic results by Kammeyer-Mueller et al. (2009), which demonstrated that core self-evaluations (i.e., a stable personality trait that encompasses self-efficacy, locus of control, self-esteem, and neuroticism) related to lower perceived stress, higher rates of problem-solving coping, reduced strain, and lower levels of engagement in avoidance coping. In this meta-analysis, self-appraisal was not significantly linked to emotion-focused coping, and emotional stability moderated the association between stress and strain and was uniquely linked to the coping process and stress. A meta-analysis by Connor-Smith and Flachsbart (2007) adds to the idea that personality traits can predict higher rates of specific coping strategies, including problem-solving and cognitive restructuring (for extraversion and conscientiousness), support seeking (for extraversion), and wishful thinking (i.e., mental avoidance), withdrawal, and emotion-focused coping (for neuroticism).
The effect of self-appraisal may be related to the aforementioned sensory-driven shift in LC firing in response to stress, which suppresses goal-orientated actions that need to be balanced with the action-orientated switch (i.e., the subconscious "cognitive defence task"). In other words, sufficient self-appraisal supports self-belief and reduces the "mental barriers", which in turn facilitates active, problem-solving coping. Further research is needed to lend more clarity on these associations (see Figure 2). A meta-analysis by Penley et al. (2002) showed that problem-solving coping, but not emotion-focused coping, was associated with positive outcomes on general physical and psychological health. The nuances were that deliberate actions or analytical efforts and problem-focused coping were helpful only in acute interpersonal stress, correlating positively to psychological health outcomes.
The effect was the opposite in chronic stress, correlating negatively to psychological health outcomes. This highlights the fact that chronically distressed individuals do require social/psychological assistance. In contrast, seeking social support, confrontation, self-blame, mental or physical avoidance/distancing, self-control, and positive reappraisal in which emphasis is placed on a positive side of a situation correlated with poor psychological self-reported outcomes in acute stress.
The major role of self-appraisal aligns with Social Self Preservation Theory. For instance, in social evaluative stress, both acceptance threat and status threat can elicit a cortisol response (Smith and Jordan, 2015), and threats to the social self can induce shame and reduce self-esteem, which correlates with stress-induced cortisol levels. It has also been demonstrated that high cortisol in social evaluative stress is accompanied by sympathetic activation (i.e., hyperarousal due to the NE surges), but not parasympathetic activation (i.e., measured by heart rate variability, which can relate to affective responses; Bosch et al., 2009; Mackersie and Kearney, 2017; Poppelaars et al., 2019). Further, the magnitude of the stress response has been shown to increase in women with the size of the audience (Bosch et al., 2009), whereas sympathetic hyperreactivity was found to predict increased reactivity of the hypothalamic-pituitary-adrenal (HPA) axis, again in women (Poppelaars et al., 2019).
Stress perception also moderates the impact of a stressor on neurocognitive function. For instance, Jiang et al. (2017) showed that higher levels of stress perception correlated with poor episodic memory and frontal executive function in older adults free of mild cognitive impairment and dementia. Higher stress severity can be experienced in novel/unpredictable and inescapable conditions (e.g., Sauro et al., 2003; Lupien et al., 2007; Slattery et al., 2013) and is distinguished by hyperarousal. This is consistent with Tsuda et al.'s (1989) rodent studies, where these types of conditions, but not predictable stress, elevated NE in the LC and corticosterone in plasma. The apparent effect of the compromised feeling of control over unknown/novel challenges or in learned helplessness aligns well with the self-appraisal influence discussed above. Meta-analytic evidence indicates that uncontrollable social threat relates to the highest levels of cortisol and adrenocorticotropin hormone responses to stress and the longest post-stress recovery.
Aversive emotions in both stress and stress anticipation that result in an NE surge affect the influence of cortisol on attention, cognitive flexibility, memory, and learning, and thus aggravate the intensity of a stressor (Skosnik et al., 2000; Morilak et al., 2005; Alexander et al., 2007; Kvetnansky et al., 2009; Gray et al., 2017). That is, in intense stress, negative emotions enhance aversive memories and withdraw the cognitive focus from the ''peripheral'' details. Such selective attention is associated with poor working memory and memory retrieval (de Quervain et al., 1998, 2009; Roozendaal et al., 2006, 2008). The effect of emotional valence in stress involves concurrent activation of glucocorticoid receptors (GRs) and adrenoreceptors: specifically, central β-adrenergic receptor activation linked to long-term declarative memory for emotionally arousing information (e.g., Cahill et al., 1994, 2004; Maheu et al., 2005a,b; see also Summers, 2000, 2002; Schwabe et al., 2009; Smeets et al., 2009; Lonergan et al., 2013), and activation of α1-adrenoreceptors previously insensitive to NE in the medial entorhinal cortex, linked to hippocampal memory dysregulation (e.g., Carrion and Wong, 2012; Hartner and Schrader, 2018). As well, a deletion variant of the gene that encodes the α2B adrenoceptor, ADRA2B, contributes to the cognitive processing of emotional information (see the meta-analytic review by Xie et al., 2018). Levels of hyperarousal and its proximity to the occurrence of stress modulate memory formation; higher hyperarousal can be seen in children due to neurodevelopmental sensitivity (e.g., Palamarchuk and Vaillancourt, under review; Vaillancourt and Palamarchuk, 2021), and in women due to the specifics of the LC-NE system (e.g., Bangasser et al., 2016, 2018, 2019; Bangasser and Wicks, 2017; see also Mulvey et al., 2018). Additionally, the sex differences are that emotionally influenced memory relates to a hyperactivated amygdala, with a stronger effect in the left hemisphere for women and in the right hemisphere for men (e.g., Cahill et al., 2004).

FIGURE 3 | Major cognitive determinants of the cortisol responses linked to stress psychopathology. Note. This diagram represents the major factors influencing the cortisol response to stress that can lead to stress disorders. Stress responses depend on the particular challenge, one's perception of the stressor, and the ability to cope with the stressor. The stressor's intensity, acuity, and persistence relate to cortisol responses, which are moderated by cognitive appraisal that is associated with self-efficacy and coping abilities. The stressor's novelty (i.e., unknown predictive value) and inescapability (i.e., negative ''learned'' value) increase negative predictive values (i.e., fear and powerlessness, respectively), which hinder self-appraisal and aggravate stress severity. Repeated exposure to homotypic stressors resets the hypothalamic-pituitary-adrenal axis. Chronic stress can result in blunted cortisol responses to a stressor, flattened diurnal slopes, and increased cortisol awakening responses. Legend: * - not limited to the emotional aspect that reduces stress perception (e.g., motivation, compassion 1, and sense of belonging 2-4) but also social and physical aspects directed to a reduction in the stressor's influence (e.g., physical or financial help); ** - risk of PTSD and suicidal ideation; ↑: increase; ↓: decrease; ''-'': negative; 1 Vaillancourt and Palamarchuk (2021).
Animal studies on fear conditioning show that mild-to-low levels of hyperarousal can impair spatial recognition memory, yet moderate-to-strong levels of hyperarousal can enhance the memory (e.g., Baars and Gage, 2010; Conrad, 2010). Therefore, stress reactivity has interindividual variations that can be mild or more pronounced depending upon the individual's stress appraisal and the valence of aversive emotions, which are moderated by age and gender. Additionally, glucocorticoid stimulation preceded hours earlier by NE secretion has been shown to inhibit the arousal effect on memory (Osborne et al., 2015).
DECISION MAKING AND STRESS
Executive functioning facilitates adaptation through decision-making based on the evaluated external (environmental) and internal (sensory) information (e.g., De Kloet et al., 1998; Wager and Smith, 2003; Collins and Koechlin, 2012; Barbey et al., 2013; Dajani and Uddin, 2015). Executive functioning integrates memory, cognitive flexibility (such as rapid attention- and task-shifting, as well as behavioral adjustments, e.g., Palamarchuk and Vaillancourt, under review), learning fortification, reasoning, insecurity predictability, and the monitoring of behavioral strategies (e.g., Collins and Koechlin, 2012; see also Grissom and Reyes, 2019). The distinctions are that the ventromedial PFC integrates the memory and emotional systems that are needed for decision-making, whereas the striatal and ACC inputs can affect it with bias (e.g., Gupta et al., 2011; Ho et al., 2012; Shimp et al., 2015; Goulet-Kennedy et al., 2016; Fitoussi et al., 2018; Hiser and Koenigs, 2018; Palamarchuk and Vaillancourt, under review). At the same time, the amygdala mediates emotional responses that engage the insula, which relates to social pain, empathy, and anger (e.g., Palamarchuk and Vaillancourt, under review). In a social context, the medial PFC and amygdala, but not the ventral striatum, moderate decision-making (Ho et al., 2012; see also Hiser and Koenigs, 2018), whereas high levels of fear or anger (i.e., an amygdalar hyper-response to a stressor) can affect decision-making with impulsivity/immediate actions (e.g., Gupta et al., 2011). Conversely, the stress associated with uncertainty and unknown power over a situation involves the frontostriatal circuits, where task-sets and actions are driven by references to cognitive/behavioral strategies stored in long-term memory as a script (this relates to the dorsal striatum/left caudate nucleus, engaged in reward and motivation). Thus, in the context of stress-related ambiguity, the choice depends on predicted outcome values (related to the ventral striatum/the nucleus accumbens and ventral putamen, engaged in cognitive control) to maximize their utilization, i.e., reinforcement learning/instrumental conditioning (O'Doherty et al., 2004; see also Hollerman et al., 2000; Brovelli et al., 2011; Vogel et al., 2015, 2017). The strategy is selected if it is absolutely reliable (the ventral striatum, nucleus accumbens) among the assortment of scripts (the dorsal striatum, caudate nucleus); and if none is available, a new task-set is created, because decision-making is binary when the stimulus is ambiguous (e.g., Collins and Koechlin, 2012).
Emotional state/mood can affect the interpretation of the stressor, i.e., the mood-incongruent effects. Anxiety can lead to attentional bias toward threat due to a higher predicted negative outcome of the stressor (i.e., ambiguity interpreted as fear; e.g., Blanchette and Richards, 2003; Barazzone and Davey, 2009). An anxious state also increases speed in the detection of aversive changes on a subliminal level and increases attention and conscious awareness on a supraliminal level (Gregory and Lambert, 2012). For example, in adults with high trait anxiety, the anxious state lowers awareness thresholds. In particular, fearful faces, or non-threat faces presented among threatening faces, are detected faster (Ruderman and Lamy, 2012). Neurocognitive functioning in stress thus reduces cognitive flexibility (i.e., reduced functions of the dorsolateral PFC) to stay focused on the stressors; this attentional tunneling during emotional arousal allows the individual to detach from the ''peripheral'' information unrelated to the stressor that might distract the individual who is under pressure (e.g., Palamarchuk and Vaillancourt, under review; see also Brosch et al., 2013; LeBlanc et al., 2015). However, attentional tunneling and enhanced memory for aversive experiences can lead to psychological maladjustment, for instance, emotion-focused coping, anxiety, and PTSD (e.g., Palamarchuk and Vaillancourt, under review).
Hypothesis: Coping Mechanisms Are Driven by the Stress Stages
We define coping styles as intra-individual neurocognitive variability moderated by stress development across three main stages: (1) alarm-to-threat stage → (2) risk-to-escape stage → (3) surrender-in-defeat stage. Potentially, the full development can be observed in chronic, intense, and homotypic stress associated with the HPA resetting and a decline in circulating cortisol. It is likely that these stress stages can be disrupted/attenuated, escalated, and/or distorted according to the level of perceived stress severity and neuropsychological status, whereas novel stressors can restart the cycling of the stress phases (e.g., stress detection, phase I; see Figure 1). Therefore, coping styles can fluctuate in a predictable intra-individual manner, and recognizing the stress stage can expedite adequate interventions to prevent or treat maladaptive coping.
Alarm-to-Threat (Check) Stage
Acute intense stress triggers right-amygdalar fear-related effects such as tunneling attention, anxiety, and impulsivity, seen in reactive aggression as a sympathetic fight-or-flight response that is driven by high cortisol and NE levels (e.g., Palamarchuk and Vaillancourt, under review). The core mechanism is that fear can initially serve adaptation by reducing risky behavior (e.g., Pabst et al., 2013a,b; Yu, 2016; Vogel and Schwabe, 2019), because, in contrast, positive emotions can increase the probability of risk-taking (e.g., LeBlanc et al., 2015). Specifically, aversive emotions during mild psychological stress can facilitate the most reliable cognitive strategy via a narrowed scope of attention (which can also be induced by pre-goal desire, e.g., LeBlanc et al., 2015), reduced configural associative learning (i.e., a reduction in tri-/biconditional discrimination), and enhanced binary (uniconditional, irrelevant vs. relevant) discrimination (e.g., Byrom and Murphy, 2016). Of relevance, social stress has been shown to increase activity in the anterior PFC associated with parallel processing during decision-making performance (e.g., the Game of Dice Task, Gathmann et al., 2014; see also Schiebener and Brand, 2015; Shimp et al., 2015). However, stimuli associated with extreme/traumatic experiences can trigger inadequate responses and reduce responses to contextual cues, such as focusing on an aversive sound and disregarding the safety of the environment, which promotes automatic retrieval of traumatic experiences (e.g., Cohen et al., 2009; Otgaar et al., 2017). This is an example of an alarm-to-threat stage accentuated by a rigid binary cognitive strategy, whereby improving cognitive flexibility through configural associative learning could be a key element of the psychotherapeutic approach. Another example is that strong fear can elicit avoidance behavior related to left lateral amygdala and anterior hippocampal hyperactivity (Abivardi et al., 2020). In other words, ''cold'' executive functioning is set to prioritize the most reliable decision-making to avoid danger when confronting a threat, yet it limits attention and flexibility. The mechanism is facilitated by the promotion of dorsal striatum-dependent (''habit'') learning and behavior over hippocampus-dependent (''cognitive'') memory encoding and retrieval, which leads to stereotypical ideas and thus maladaptive functioning in chronic stress (e.g., Packard, 2009; Vogel and Schwabe, 2016; Vogel et al., 2017; Zerbes et al., 2020; see also Schiebener and Brand, 2015; Shimp et al., 2015; Fitoussi et al., 2018). In particular, poor consequences can be seen in attentional set-shifting deficits, poor memory, anxiety, and depression (e.g., Palamarchuk and Vaillancourt, under review).
If acute stress subsides, attention can improve with the decline of cortisol (e.g., Zandara et al., 2016). Conversely, intense stress can hyperactivate the LC, which is associated with anxiety (Borodovitsyna et al., 2018; Morris et al., 2020) due to limbic dysregulation (e.g., Herman et al., 2005). In particular, it is related to the functional connectivity between the bed nucleus of the stria terminalis (BNST) and the amygdala (e.g., Clauss, 2019; Knight and Depue, 2019; Hofmann and Straube, 2021). The nuances are that the amygdala is involved in explicit threat processing (i.e., threat confrontation), whereas the BNST is involved in ambiguous threat processing (i.e., threat anticipation; Herrmann et al., 2016; Klumpers et al., 2017; Naaz et al., 2019; see also Fox et al., 2015; Fox and Shackman, 2019; Luyck et al., 2019). As well, the BNST → central amygdala projections relate to cued-fear inhibition (Gungor et al., 2015; see also Clauss, 2019). The BNST plays a critical role in fear acquisition/expression, which relates to stress maladaptation and the development of stress-related disorders like PTSD (e.g., Miles and Maren, 2019) and involves CRH signaling (e.g., Hu et al., 2020). This functional interplay between the BNST and amygdala relates to inter-individual differences in threat processing and trait anxiety (Brinkmann et al., 2018), which likely influences the development of the next stage in chronic intense stress.
Risk-to-Escape (Stalemate) Stage
The evidence is that stress, predominantly chronic stress, can increase risk-taking behavior (Starcke et al., 2008; Lighthall et al., 2009; Pabst et al., 2013c; Ceccato et al., 2016; see also Brand et al., 2006; Starcke and Brand, 2012; Yu, 2016). We predict that stress-induced risk-taking is largely driven by threat anticipation due to a hyperactivated BNST. The BNST integrates limbic information and valence monitoring and plays a central role in the hippocampus-hypothalamic paraventricular nucleus circuit that activates the HPA axis and has a psychogenic effect (e.g., Lebow and Chen, 2016). The BNST is sexually dimorphic; its activity is heritable and relates to anxiety in ambiguous and sustained threat (e.g., Clauss, 2019). The neurophysiological background is that the BNST receives multiple signals, including, but not limited to, dopamine and 5-HT from the dorsal raphe and NE from the nucleus tractus solitarii (e.g., Glangetas and Georges, 2016). Moreover, increased impulsivity relates to alterations in the central amygdala → BNST dopaminergic projections that inhibit impulsive behavior (Kim et al., 2018).
We thus predict that in prolonged homotypic stress, a hyperactivated BNST covers a shift from the front-line stress-care medial PFC-amygdalar circuits. This is likely a now-or-never response to escape the burden of anticipated threat, driven by dopamine reductions in uncertain conditions, which recruit the dorsal PFC-striatal circuits related to impulsive and risky behavior. Our reasoning is that, in contrast to fear, ambiguity can be perceived as a dormant threat that increases approach behavior (hippocampal reactivity, e.g., O'Neil et al., 2015) and risky behavior (ventral striatal reactivity moderated by impulsivity traits, e.g., Mason et al., 2014; Goulet-Kennedy et al., 2016). As well, the activity of the ventral striatum is associated with motivational control of performance and is regulated by the dorsolateral PFC (Hart et al., 2014). Therefore, it could be part of an adaptive mechanism to confront the challenge, although it requires adequate executive functioning and, by extension, goal-oriented actions. The pitfalls are that poor cognitive control and insular risk-processing can increase perceived stress and, in turn, risk-taking behavior (e.g., among adolescents, Maciejewski et al., 2018). In contrast, risk-taking behavior is inversely associated with a cortisol increase for boys/men but not girls/women (e.g., Daughters et al., 2013; Kluen et al., 2017). This effect relates to greater activity and novelty preferences due to higher sensation seeking in boys/men compared to girls/women, who are more punishment sensitive (meta-analysis by Cross et al., 2011). The developmental moderation of stress-induced responses can also lead to impulsive errors in girls (e.g., Lukkes et al., 2016), which is also moderated by personality traits related to impulsivity (e.g., negative urgency, which correlates with impulsivity, Berg et al., 2015; see also Cyders and Smith, 2008a,b; Herman et al., 2018). The levels of impulsivity in healthy young adults inversely correlate with the levels of dopamine released from the ventral striatum in low to moderate stress; yet high stress reduces dopamine responses (e.g., Oswald et al., 2007; see also Palamarchuk and Vaillancourt, under review).
In sum, poor cognitive functioning and cortisol decline can promote a burden of uncertainty (stalemate), and as dopamine drops, risk-taking ensues, to which young men are more prone than young women. The mechanism is that the striatal networks can serve decision-making with the learned behavior/''script'' when facing explicit danger in acute stress. In contrast, when dealing with prolonged uncertainty, decision-making can be impulsive and risky due to poor risk-processing and, potentially, the motivation/urge to terminate the status quo in chronic intense stress. Accordingly, improving cognitive control with proper risk-processing (psychological help) and facilitating adequate options to avoid predictable danger (social assistance) could be key interventions to prevent poor outcomes. Although our hypothesis has yet to be tested, it sheds light on why stress can induce risk-taking behavior.
Surrender-in-Defeat (Checkmate) Stage
We interpret that in acute and extreme stress associated with a loss or defeat, as well as in chronic stress with prolonged ambiguity, executive functioning ''surrenders'' in the absence of absolutely reliable task-sets and given the incapacity to create new ones (i.e., defeat/checkmate), which is why serotonin levels drop and depression emerges. Of relevance, Yu et al.'s (2016) findings in rodent models demonstrate that repeated social defeats, but not social threats, increase cortisol and NE levels but decrease dopamine, its metabolites, and serotonin levels in the striatum and hippocampus (see also Palamarchuk and Vaillancourt, under review).
On a molecular level, stress adaptation relates to negative feedback of the HPA axis, seen in cortisol hyposynthesis as ACTH sensitivity declines (e.g., Juruena et al., 2003; McEwen, 2012; Gray et al., 2017). In particular, the duration of exposure to a homotypic stressor displays a linear and inverted U-shaped dose-effect on the stress response: (1) a novel stressor can increase ACTH sensitivity; (2) a repeated stressor can initially desensitize ACTH; and (3) a chronic stressor relates to an unceasing ACTH sensitivity (Aguilera, 1994, 1998; Aguilera and Liu, 2012). Prior exposure to homotypic stressors can compromise the stress response to a novel stressor (e.g., García et al., 2000), which in turn can expose a previous stress-induced latent behavioral sensitization that often surpasses the HPA axis sensitization (Belda et al., 2015; also see McCarty, 2016). Not surprisingly, an intense stressor can facilitate certain cognitive functions and thus promote stress resilience (e.g., Ellis et al., 2017), although chronic exposure is associated with mood disorders such as depression and anxiety (e.g., Juruena et al., 2020). According to the aforementioned findings on stress responses, we hypothesize that intra-stage expressions and inter-stage transitions in our model of stress development depend on the novelty, intensity, timing, and chronicity of the stressor. Stress stages can be desensitized in subchronic exposure to the same stressor (or homotypic stressors) but accelerated/exacerbated in chronic exposure to homotypic stressors, which in turn can also hypersensitize the stages toward a novel stressor.
We acknowledge that sex/gender may affect the coping-related neural pathways due to the co-signaling of sex and stress hormones. In particular, neurocognitive variability during stress development can be affected by the levels of circulating estradiol/estrogen. Estrogen signaling influences memory, social learning, and aggressive/defensive behavior associated with hippocampal and medial PFC functioning (e.g., Milner et al., 2008; Luine and Frankfurt, 2012; Laredo et al., 2014; Almey et al., 2015) and thus contributes to sex differences in stress coping. In females, circulating estradiol levels mediate stress resilience (e.g., Wei et al., 2014b; Luine, 2016; Yuen et al., 2016) and facilitate cerebro- and cardio-protection (e.g., Guo et al., 2005; Murphy, 2011; Adlanmerini et al., 2014) in a linear and inverted U-shaped dose-effect (e.g., Bayer et al., 2018), where high estrogen levels increase cognitive sensitivity to stress (e.g., Graham and Scott, 2018; Hokenson et al., 2021). On the one hand, this may help explain why the prevalence of PTSD (the surrender-in-defeat stage in our model) is two times higher in women than in men (e.g., Breslau, 2002; Zlotnick et al., 2006; Pooley et al., 2018). On the other hand, the androgen effect may explain the findings of why men are inclined toward impulsive behavior (i.e., the risk-to-escape stage in our model, e.g., Hernandez et al., 2020) and are more affected by stress magnitude, compared to women, who are more affected by stress frequency (e.g., Grissom and Reyes, 2019; see also Hidalgo et al., 2019).
Our hypotheses need to be tested to further clarify the various factors interfering with stress reactivity and resilience, such as sex hormones and genetic polymorphisms related to serotonin and dopamine signaling reviewed above, as well as stressor type and stress timing/continuity (single, repeated intermittent, or chronic) that can involve different neural pathways and different reactivity of the HPA axis and LC-NE system. Nevertheless, these hypotheses can help explain why active coping is negatively linked to psychological health, as reviewed above (Figure 1). They also support the fact that chronically stressed individuals with depression/anxiety and poor cognition require psychological and social assistance.
Concluding Remarks
Neurocognition plays a vital role in adaptation and monitors the severity of the challenges faced. When cognitive appraisal assigns a negative value to salient stimuli, that is the moment they become psychological stressors and stress arises. Thus, psychological stimuli can vary in nature, because it is the level of cognitive ''attention'' that determines stress and its severity; that is, the stress appraisal/interpretation, and not the stimuli per se.
To address the nuances underlying stress severity, we propose to update a dichotomy in the cognitive appraisal terminology: self-appraisal (i.e., the perception of self-efficacy to deal with the stressor) and stressor-appraisal (i.e., the perception of threat/challenge). This dichotomy is intended to facilitate cognitive behavioral therapy, as well as translational research on stress and mental resilience. Specifically, self-appraisal relates to successful emotional downregulation and enables cognitive flexibility, whereas stressor-appraisal can contribute to emotional dysregulation and attentional tunneling that restricts/alters executive functioning. The noted specifics of the cognitive appraisal duality are associated with the PFC and amygdala interplay during the processing of aversive emotions and fear, which is linked to stress sensitization and psychiatric consequences (e.g., Palamarchuk and Vaillancourt, under review).
To advance our understanding of mental resilience and stress development, we offer new insights to the scholarly literature on psychological stress coping with respect to previously published reviews. First, we differentiate the neurocognitive aspects of stress development with four key phases: (i) stressor detection, (ii) stress appraisal (assessment of stress severity), (iii) stress reactivity, and (iv) decision making. Clinical analysis of each phase may help with ruling out primary and secondary causes of behavioral maladaptation. For instance, it is important to keep in mind that a sudden and inadequate behavioral reaction to an event (i.e., detection of a novel stressor) may be related to a totally different event that occurred chronically in the past and latently compromised psychological health (i.e., prior chronic exposure to homotypic stressors can trigger cognitive ''defence,'' see Figure 1). Another example is that prolonged uncertainty increases the chances of risky/impulsive behavior. Second, we model a complex concept of stress development that introduces an intra-individual variability factor in the stress reactivity phase, which is based on the neural dynamics in cognitive processing. In particular, we hypothesize that coping styles are influenced by intra-individual neurocognitive variability moderated by stress reactivity (phase iii) across three major stages: (1) alarm-to-threat [check] stage → (2) risk-to-escape [stalemate] stage → (3) surrender-in-defeat [checkmate] stage (Figure 1). The alarm-to-threat stage, denoting the cortisol and NE surges in response to psychological stress, must not be confused with the alarm phase classically referring to the triphasic allostasis process, which originated from the ''general adaptation syndrome'' concept by Selye (1998, reprint of 1936) that described a ''typical syndrome'' following ''diverse nocuous agents,'' that is, a general alarm reaction within ''6-48 h'' in rat models of acute nonspecific stress. Finally, we emphasize that stress coping can fluctuate in a predictable intra-individual manner. Identifying the stressor's novelty/chronicity and the stress stage/phase can help with early prevention and appropriate therapy of maladaptive stress coping and, in turn, prevent mental disorders.
AUTHOR CONTRIBUTIONS
TV encouraged, supported, and supervised ISP to investigate stress impact on cognition. ISP planned and carried out the project, the main conceptual ideas, developed the theoretical models and hypotheses, and designed the figures. ISP wrote the manuscript with support from TV. ISP and TV provided critical feedback, helped shape the manuscript, and contributed to the final version. The authors are accountable for the content of the work. All authors contributed to the article and approved the submitted version. | 2021-08-06T13:26:18.305Z | 2021-08-06T00:00:00.000 | {
"year": 2021,
"sha1": "cb2d89b99e939a61feb62b389af9b1a7dd3910e7",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnbeh.2021.719674/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cb2d89b99e939a61feb62b389af9b1a7dd3910e7",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53051018 | pes2o/s2orc | v3-fos-license | A class of spherical, truncated, anisotropic models for application to globular clusters
Recently, a class of non-truncated radially-anisotropic models (the so-called $f^{(\nu)}$-models), originally constructed in the context of violent relaxation and modeling of elliptical galaxies, has been found to possess interesting qualities in relation to observed and simulated globular clusters. In view of new applications to globular clusters, we improve this class of models along two directions. To make them more suitable for the description of small stellar systems hosted by galaxies, we introduce a 'tidal' truncation (by means of a procedure that guarantees full continuity of the distribution function). The new $f_T^{(\nu)}$-models are shown to provide a better fit to the observed photometric and spectroscopic profiles for a sample of 13 globular clusters studied earlier by means of non-truncated models; interestingly, the best-fit models also perform better with respect to the radial-orbit instability. Then we design a flexible but simple two-component family of truncated models, to study the separate issues of mass segregation and of multiple populations. We do not aim at a fully realistic description of globular clusters, to compete with the description currently obtained by means of dedicated simulations. The goal here is to try to identify the simplest models, that is, those with the smallest number of free parameters, but still able to provide a reasonable description for clusters that are evidently beyond the reach of one-component models: with this tool we aim at identifying the key factors that characterize mass segregation or the presence of multiple populations. To reduce the relevant parameter space, we formulate a few physical arguments (based on recent observations and simulations). A first application to two well-studied globular clusters is briefly described and discussed.
Introduction
As a zeroth-order dynamical description, a class of models (King 1966) has long and successfully been applied to globular clusters (McLaughlin & van der Marel 2005; Carballo-Bello et al. 2012; Miocchi et al. 2013). Standard spherical King models are meant to describe round, nonrotating stellar systems made of a single stellar population, for which the role of internal two-body relaxation has had time to act, bringing the system close to a quasi-Maxwellian, isotropic distribution function; a truncation is considered, to mimic the presence of tidal effects. The success of the King models is largely based on their ability to fit the observed photometric profiles (but see McLaughlin & van der Marel 2005 for a photometric test in favor of models characterized by a milder truncation); the models are then used to infer the general internal kinematical structure of globular clusters, which is largely beyond the reach of direct observational tests. In recent years, with the advent of high-resolution space and ground-based observations, the great progress made in the acquisition of detailed information on the line-of-sight and proper-motion kinematics of these stellar systems has prompted a demand for more complex dynamical models. In particular, many galactic globular clusters are known to be characterized by significant rotation (Bellazzini et al. 2012; Bianchini et al. 2013) and/or pressure anisotropy. Often, clusters that are known to be characterized by longer relaxation times turn out to be more anisotropic (see for example Zocchi et al. 2012, hereafter ZBV12).
Regardless of their success, King models exhibit several internal inconsistencies. The models are meant to describe tidally truncated stellar systems, but in their original form they are spherical, in spite of the stretching that tides are expected to impose. The models are chosen to reflect the conditions of a collisionally relaxed state, but actually, outside their half-mass radius, globular clusters and the models themselves are associated with very long relaxation times (Harris 2010). These models are generally applied as one-component models, that is, they are suited to describe stellar systems made of a single homogeneous stellar population, yet, if collisional relaxation is at work, it should generate significant mass segregation, with heavier stars characterized by a distribution more concentrated than that of lighter stars (Spitzer 1969).
Physically motivated models able to resolve some of the above-noted inconsistencies, in relation to the shape and rotation of globular clusters, have been constructed (in particular, see Heggie & Ramamani 1995; Bertin & Varri 2008). As to the possible presence of pressure anisotropy, for the case of nonrotating clusters, so far most studies have resorted to the so-called Michie-King models (Michie 1963), which introduce significant radial pressure in the outer parts by multiplication of the underlying distribution function by a suitable angular-momentum-dependent factor (see also the models recently proposed by Gieles & Zocchi 2015). In this general picture, we might then consider models, such as those known as the f^(ν) models, developed to represent the final state of collisionless collapse under incomplete violent relaxation and successfully applied to the study of bright elliptical galaxies. Even though it remains to be proved that the formation of globular clusters, or at least of some globular clusters, follows this route, some recent investigations have looked into this possibility. A general trend in the direction of radial pressure in the outer regions has been noted also in recent simulations of the evolution of globular clusters (Tiongco et al. 2016). [Eventually, if external tidal fields are present, the outermost regions of the cluster may be characterized by isotropy or mild tangential anisotropy, as also suggested by Vesperini et al. (2014).] In a recent paper (ZBV12), the class of spherical f^(ν) models has been used to study a sample of Galactic globular clusters under different relaxation conditions and compared to the performance of the standard spherical King models. This exploratory investigation indicates that for some clusters the use of f^(ν) models is encouraged, although, being non-truncated, these models are obviously at a disadvantage in describing the outer parts of the available photometric profiles. In addition, some of the best-fit radially anisotropic models thus identified actually turn out to be too anisotropic, so that they might be prone to the radial-orbit instability (and thus not acceptable for interpreting the observations). The first goal of the present paper is to introduce a truncation to the f^(ν) models and to test whether this new class of models is capable of a more satisfactory fit to the sample of globular clusters studied by ZBV12.
The second objective of the paper is to extend the newly constructed f_T^(ν) models to the case of two-component systems. For globular clusters, there are at least two important reasons to address more complex models of this kind.
One of the main effects related to collisionality is that of mass segregation. Thus a more realistic dynamical framework for the modeling of globular clusters has been sought in terms of multi-component models (e.g., see Da Costa & Freeman 1976; Gunn & Griffin 1979; Merritt 1981; Miocchi 2006), which basically represent an extension of the standard King (or Michie-King) models. Naively (i.e., in the normal context of kinetic systems), we would expect collisions to enforce a sort of equipartition, in which the velocity dispersion σ of stars of mass m should scale as σ ∼ m^{−1/2}. The process is complicated by the global and inhomogeneous nature of self-gravitating systems. It has also been argued that in the core of globular clusters complete equipartition cannot be achieved as a result of the so-called Spitzer instability. In particular, Spitzer (1969) suggested that, in a two-component system in virial equilibrium, the condition of equipartition in the core is precluded if the total mass of the heavy stars exceeds a certain fraction of the total mass of the cluster. Spitzer's criterion has been extended by Vishniac (1978) to cover systems with a continuous distribution of masses. These theoretical arguments have been revisited by means of recent simulations (see Trenti & van der Marel 2013), in which only partial equipartition is "observed" to follow from the cumulative action of star-star collisions. In any case, a certain degree of mass segregation appears to emerge from the observations of several globular clusters (see Anderson & van der Marel 2010; Goldsbury et al. 2013; Di Cecco et al. 2013; Bellini et al. 2014).
A second, physically separate reason to address the issue of two-component models is given by the relatively recent finding that globular clusters host multiple stellar populations. In many observed cases, the suggested interpretation is that clusters have been the site of multiple generations of stars (see Lardo et al. 2011; Gratton et al. 2012), so that the stars can be divided into the groups of the first and the second generation, and these groups may be associated with different dynamical properties, such as concentration or degree of anisotropy (see Richer et al. 2013; Bellini et al. 2015).
For the second goal of the paper, that is, the construction of two-component models of the f_T^(ν) form, to keep the number of free parameters low, we formulate some physical hypotheses (based on observations and/or simulations) that correspond to the picture of mass segregation. A comparison with observed cases should be able to support or disprove the physical assumptions made in the modeling procedure. Our approach is complementary to that of constructing multiparameter models as diagnostic tools (see Da Costa & Freeman 1976; Gunn & Griffin 1979; Gieles & Zocchi 2015).
The paper is organized as follows. In Sect. 2 we introduce and construct the new class of truncated anisotropic f_T^(ν) models. In Sect. 3 we extend it to the two-component case. In Sect. 4 we apply the one-component models to fit a sample of 13 galactic globular clusters. For NGC 5139 (ω Cen) and NGC 104 (47 Tuc), we also present the results of the fits performed by means of two-component f_T^(ν) models. Finally, in Sect. 5, we draw our conclusions.
One-component models
Studies of the dynamics of elliptical galaxies have investigated the picture of galaxy formation by incomplete violent relaxation from collisionless collapse. There are ways to translate this picture into an appropriate choice of the relevant distribution function to represent the current state of ellipticals. The choice is not unique and various options have been explored. One particular choice reflects a conjecture on the statistical foundation of the relevant distribution function (see Stiavelli & Bertin 1987). This is a family of partially relaxed models. The models are called f^(ν) models and their properties have been studied extensively in more recent papers (see Bertin & Trenti 2003). They are based on the following distribution function

f^(ν)(E, J) = A exp[−aE − d J^ν/(−E)^{3ν/4}] ,    (1)

where A, a, and d are positive constants. For applications, a given value of ν ≈ 1 is usually taken as a fixed parameter. Here E = v²/2 + Φ(r) < 0 and J = |r × v| represent the specific energy and the magnitude of the specific angular momentum of a single star subject to a spherically symmetric mean potential Φ(r). The self-consistent models based on this distribution function define a family of anisotropic, non-truncated models. The following subsections are devoted to the formulation of a truncated distribution function as a generalization of Eq. (1) and to the analysis of the main dynamical properties found for the resulting new classes of anisotropic truncated models.
Truncation
As also noted by Davoust (1977), the truncation prescription is not unique (the structural properties associated with different types of truncation are described by Hunter 1977); indeed, the distribution functions considered in that article differ from one another for the smoothness of their energy gradients in correspondence with the energy cut-off. In this respect, we decided to proceed to the truncation of the f^(ν) models with ν = 1 in the following way. The distribution function

f_T^(ν)(E, J) = A exp[−aE − d J/(E_e − E)^{3/4}]  for E < E_e, and f_T^(ν) = 0 otherwise,    (2)

for J ≠ 0, vanishes at the cut-off energy E_e together with all its derivatives (the quantities A, E_e, a, and d are constants). The two-parameter family of one-component models is then constructed by solving the Poisson equation

∇²Φ = 4πG ∫ f_T^(ν)(E, J) d³v    (3)

for the gravitational potential Φ(r). In our case, the distribution function is anisotropic, so that the density on the right-hand side of Eq. (3) can be reduced to a two-dimensional integral, which depends on radius explicitly and implicitly, through the unknown Φ(r). Thus, if we define dimensionless quantities such as the potential ψ = −a(Φ − E_e), the radius ξ = a^{1/4} d r, and the velocity ω² = (a/2)v², the integral is proportional to

ρ̂(ξ, ψ) = ∫_0^{√ψ} ∫_0^π f̂_T(ξ, ψ, ω, ζ) ω² sin ζ dζ dω ,    (4)

where

f̂_T(ξ, ψ, ω, ζ) = exp[ψ − ω² − √2 ξ ω sin ζ/(ψ − ω²)^{3/4}]    (5)

and ζ is the angle between the position r and the velocity v of a single star. The resulting dimensionless form of Eq. (3) is given by

(1/ξ²) d/dξ (ξ² dψ/dξ) = −(C/γ) ρ̂(ξ, ψ) ,    (6)

with C a positive numerical constant fixed by the normalization of ρ̂, where we have introduced the dimensionless parameter γ = ad²/(4πGA). This differential equation is integrated under the boundary conditions ψ(0) = Ψ and (dψ/dξ)(0) = 0 out to the truncation radius ξ_tr, where the dimensionless potential vanishes. Hence, the self-consistent problem for the dimensionless potential reduces to a family of second-order differential equations defined by two structural parameters: the central dimensionless potential Ψ and γ. We have performed the integration of Eq. (6) with an adaptive fourth-order Runge-Kutta method. At every integration step, the two-dimensional integral on the right-hand side has been computed by means of the Cuhre routine in the C-package Cuba (see Hahn 2015).
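As an illustration of the numerical scheme just described, the following Python sketch integrates the reconstructed dimensionless Poisson equation, Eq. (6), with the constant C set to unity, using SciPy's adaptive Runge-Kutta integrator and a general-purpose 2D quadrature in place of the Cuhre/Cuba routine; the function names and the example parameter values are illustrative assumptions, not taken from the paper.

import numpy as np
from scipy.integrate import dblquad, solve_ivp

SQRT2 = np.sqrt(2.0)

def rho_hat(xi, psi):
    """Dimensionless density rho_hat(xi, psi): double integral of the
    reconstructed Eqs. (4)-(5) over 0 <= omega <= sqrt(psi), 0 <= zeta <= pi."""
    if psi <= 0.0:
        return 0.0

    def integrand(zeta, omega):
        arg = psi - omega**2
        if arg <= 0.0:
            return 0.0
        return (np.exp(arg - SQRT2 * xi * omega * np.sin(zeta) / arg**0.75)
                * omega**2 * np.sin(zeta))

    val, _ = dblquad(integrand, 0.0, np.sqrt(psi), 0.0, np.pi)
    return val

def solve_model(Psi, gamma, xi_max=100.0):
    """Integrate Eq. (6) outward from the center until psi = 0,
    which defines the truncation radius xi_tr."""
    rho0 = rho_hat(0.0, Psi)
    xi0 = 1e-4  # start slightly off-center to avoid the 2/xi singularity
    # Series expansion near the center: psi ~ Psi - rho0*xi^2/(6*gamma)
    y0 = [Psi - rho0 * xi0**2 / (6.0 * gamma), -rho0 * xi0 / (3.0 * gamma)]

    def rhs(xi, y):
        psi, dpsi = y
        return [dpsi, -rho_hat(xi, psi) / gamma - 2.0 * dpsi / xi]

    def hit_edge(xi, y):  # event: psi crosses zero at the truncation radius
        return y[0]
    hit_edge.terminal = True

    sol = solve_ivp(rhs, (xi0, xi_max), y0, events=hit_edge,
                    rtol=1e-6, atol=1e-9)
    return sol  # if the event fired, sol.t_events[0][0] is xi_tr

# Example (toy values, purely illustrative; slow, since each RHS
# evaluation performs a full 2D quadrature):
# sol = solve_model(Psi=4.0, gamma=0.3)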
The parameter space
The non-truncated models are characterized by a specific relation between the parameters Ψ and γ (see the corresponding plot of γ(Ψ)). In particular, for a given value of Ψ the corresponding value of γ is fixed by the requirement of a Keplerian decay of the gravitational potential (Φ ∼ −1/r) at large radii. For the models with ν = 1, in the range 0 ≲ Ψ ≲ 15, the function γ(Ψ) presents a pronounced peak at Ψ ≈ 5.5; for higher values of Ψ, γ decreases, reaches about half of its peak value at Ψ ≈ 10, and then stays approximately constant. In our models γ is left as a free parameter. However, since, for a given Ψ, there is a maximum value γ_max beyond which the models do not present any truncation, the parameter space is confined to the region that is under the curve γ(Ψ) found for the non-truncated models. For a given Ψ, the non-truncated models are recovered in the limit γ → γ_max; indeed, as shown in Fig. 1, the ratio of the truncation radius r_tr to the half-mass radius r_M is an increasing function of γ.
The parameter Ψ is identified with the concentration of the model. Another measure of the central concentration is the ratio ρ(0)/ρ(r_M) of the central density to the density calculated at the half-mass radius r_M. In Fig. 2 we plot this quantity as a function of Ψ. We note that for high values of γ the relation is non-monotonic. For 5.5 ≲ Ψ ≲ 8.5 the relation is monotonic and characterized by a weak dependence on γ.
Intrinsic profiles
All the radial profiles of physical interest can be derived by taking moments of the distribution function f. If we consider the natural velocity coordinate system (v_r, v_θ, v_φ), the velocity dispersion tensor is diagonal, with σ²_θθ = σ²_φφ. Explicitly, by defining a tangential component of the velocity dispersion tensor as σ²_T = σ²_θθ + σ²_φφ, we have, up to a common normalization factor,

σ²_r ∝ ∫_0^{√ψ} ∫_0^π f̂_T(ξ, ψ, ω, ζ) ω⁴ cos²ζ sin ζ dζ dω ,
σ²_T ∝ ∫_0^{√ψ} ∫_0^π f̂_T(ξ, ψ, ω, ζ) ω⁴ sin³ζ dζ dω ,

where we have used the definitions given in Eqs. (4)-(5) and the relations v²_r = v² cos²ζ and v²_T = v² sin²ζ. For simplicity, in the following we will use the notation σ²_r = σ²_rr. Once the dimensionless potential profile is obtained by solving the Poisson equation, the velocity dispersion profiles can be calculated as two-dimensional integrals with the same procedure described in Subsect. 2.1.
For the one-component f_T^(ν) models, in Fig. 3 and Fig. 4 we plot some intrinsic profiles of the density and of the total velocity dispersion (defined by σ² = σ²_r + σ²_T).
Anisotropy
A local measure of the pressure anisotropy is given by the function α(r) = 2 − σ²_T/σ²_r. In Fig. 5 we show some representative anisotropy profiles. The models are characterized by an isotropic core and a radially-biased anisotropic envelope.
The radial extent of the isotropic core can be measured by means of the anisotropy radius r_α, defined as the radius where α = 1. The ratio r_α/r_M of the anisotropy radius to the half-mass radius as a function of γ is shown in Fig. 6. At fixed Ψ, models with higher γ are characterized by lower values of r_α/r_M. This trend is confirmed by the behavior of the ratio κ = 2K_r/K_T of twice the total radial kinetic energy K_r to the total tangential kinetic energy K_T, which is often used to measure the degree of global anisotropy of the system. This parameter is related to a well-known criterion for the onset of the radial-orbit instability (Polyachenko & Shukhman 1981): instability occurs if κ exceeds a model-dependent threshold, κ ≳ 1.7 ± 0.25. Figure 7 shows the monotonic increasing dependence of κ on γ. Therefore, truncated models are generally more isotropic than the corresponding non-truncated models.
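A minimal sketch of the global anisotropy diagnostic just discussed: given tabulated profiles ρ(r), σ²_r(r), and σ²_T(r) (e.g., from a solver like the one sketched earlier), κ = 2K_r/K_T follows from two radial integrals; the helper name and the tabulated-grid interface are assumptions for illustration.

import numpy as np

def kappa(r, rho, sig2_r, sig2_T):
    """Global anisotropy kappa = 2*K_r/K_T from tabulated radial profiles.

    K_r = (1/2) * Int 4*pi*r^2 * rho * sigma_r^2 dr, and analogously for
    K_T; the common 2*pi factors cancel in the ratio, so they are omitted.
    """
    K_r = np.trapz(r**2 * rho * sig2_r, r)
    K_T = np.trapz(r**2 * rho * sig2_T, r)
    return 2.0 * K_r / K_T

# Radial-orbit instability becomes a concern when kappa exceeds ~1.7 +/- 0.25.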
Virial coefficient
The virial coefficient K_V can be defined as in Bertin et al. (2002), to which we refer for the details. In Fig. 8 we show the value of K_V as a function of the central dimensionless potential Ψ for selected values of γ and for the King models. The difference between the various curves can be significant, particularly for low values of Ψ.
Fig. 3. The left frame shows the normalized density profile for selected values of γ at fixed Ψ; the right frame shows the normalized density profile for selected values of Ψ at fixed γ.
Fig. 5. The left frame shows the anisotropy profile α(r) for selected values of γ at fixed Ψ; the right frame shows the anisotropy profile for selected values of Ψ at fixed γ. Where a curve terminates, the truncation radius is reached.
Fig. 7. Global anisotropy parameter κ = 2K_r/K_T for selected values of the parameter Ψ. The grey area indicates the region of the threshold for the onset of the radial-orbit instability.
Two-component models
Starting from the truncated models described in the previous subsections, we introduce the two distribution functions: Each distribution function depends on four constants A i , a i , d i , E i (with i = 1, 2), so that in total the solution for the self-consistent potential Φ from the Poisson equation requires a study with eight arbitrary constants. In practice, from the point of view of dimensionless parameters, by means of physical arguments we will reduce our investigation to a two-parameter space; of course, if desired, we could loosen some of the physical constraints that we are going to impose and thus extend our discussion. As noted in the Introduction, different physical arguments can motivate the study of two-component models. Here we focus on the case in which we distinguish one population of lighter stars (let m 1 be the representative mass of its individual stars and M 1 its associated total mass) from a second population of heavier stars (with m 2 > m 1 and, in general, M 2 ≤ M 1 ), so that the total mass of the cluster is M = M 1 + M 2 . As for the one-component case, we rescale the problem to a dimensionless form, by referring to a length scale and to an energy scale based on the constants associated with the lighter component. In particular, we define the dimensionless radius ξ = r a1^(1/4) d1 and the dimensionless potential ψ = −a1(Φ − E1). After such rescaling, we are left with six independent constants. To reduce the number of parameters and thus to work in the simplest mathematical context, we make the following assumptions:
-We consider a common truncation radius, that is, we take E 1 = E 2 ≡ E e . Such an assumption is frequently made as a starting point for the construction of multi-mass models (e.g., see Da Costa & Freeman 1976).
-We consider two-component models in which the total masses associated with the two components are in a given ratio M 1 /M 2 . Reasonable values for this ratio are suggested by models of the evolution of stellar populations, as briefly described in Appendix A. Obviously, this can be seen as a requirement on the ratio of the normalization factors A 1 /A 2 . In practice, for a globally self-consistent model this constraint can be written in terms of the densities ρ i of the two components (for the notation, see Eq. (4)). For a desired mass ratio, the resulting equation is basically a relation for the constant A2 a2^(−3/2) in terms of A1 a1^(−3/2), but the precise relation has to be worked out iteratively from the global solution.
-We choose a given value for the single-mass ratio m 1 /m 2 (reasonable values for this ratio are suggested by stellar-population models, as described in Appendix A) and impose partial energy equipartition in the central regions of the system by means of the dimensionless parameter η = 0.2 (the definition of η is given a few lines below). The way in which equipartition is incorporated is not unique (e.g., see Kondratev & Ozernoi 1982). In its simplest form, as proposed by Da Costa & Freeman (1976), energy equipartition is sometimes imposed by means of a relation between the energy scales of the form a 2 /a 1 = m 2 /m 1 . Here we prefer to follow the argument of Miocchi (2006), which recognizes that equipartition is best ensured in the central, more relaxed regions. On the other hand, given the support of recent observations (see Bellini et al. 2014) and simulations (see Trenti & van der Marel 2013), it may be wiser to refer to only partial equipartition, by imposing [(a 2 /a 1 ) γ(5/2, Ψ) γ(3/2, (a 2 /a 1 )Ψ) / (γ(3/2, Ψ) γ(5/2, (a 2 /a 1 )Ψ))]^(1/2) = (m 2 /m 1 )^η, where γ(s, x) denotes the lower incomplete gamma function, γ(s, x) = ∫_0^x t^(s−1) e^(−t) dt. The left-hand side of the above equation represents the ratio σ 1 (0)/σ 2 (0) of the central velocity dispersions for the two-component model (a numerical sketch of this condition is given just after this list). Note that at r = 0 the one-component distribution function is trivial, because the dependence on J drops out and Φ = Φ(0), so that Eq. (14) is expressed in closed form in terms of the relevant constants and of the concentration parameter Ψ = −a 1 [Φ(0) − E e ]. Full equipartition is marked by η = 1/2; from their simulations, also in view of an argument by Spitzer (1969), Trenti & van der Marel (2013) suggest η = 0.2 for specific cases. In the following we will refer to this case of partial equipartition (for a recent investigation on energy equipartition in globular clusters, see also Bianchini et al. 2016).
-We assume that the radial scales that define the size of the radially biased anisotropic outer envelope are the same for the two components. This is only a qualitative argument, meant to recognize that one of the possible causes of radially-biased pressure anisotropy is incomplete violent relaxation, which is a collisionless relaxation process that acts in the same way on stars of different masses (see also Gunn & Griffin 1979). For convenience in the numerical calculation of the models, we decided to adopt the radial scale associated with d a^(1/4) as a proxy for the radius of transition from isotropic core to anisotropic envelope; by inspecting one-component and two-component models, we confirm that indeed this scale identifies approximately the anisotropy radius r α .
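As an illustration, the partial-equipartition condition can be solved numerically for the ratio of the energy scales a 2 /a 1 . The following is a minimal Python sketch, assuming the reconstructed form of the condition written above; the function names, the example values (Ψ = 5, m 2 /m 1 = 1.7, η = 0.2), and the root-finding bracket are illustrative choices, not part of the original paper.

```python
from scipy.special import gammainc, gamma
from scipy.optimize import brentq

def lower_gamma(s, x):
    # Lower incomplete gamma function gamma(s, x) = int_0^x t^(s-1) e^(-t) dt;
    # scipy's gammainc is the regularized version, hence the factor Gamma(s).
    return gammainc(s, x) * gamma(s)

def sigma_ratio_squared(a21, psi):
    # (sigma_1(0)/sigma_2(0))^2 as a function of a21 = a2/a1 and of the
    # central dimensionless potential Psi, following the reconstructed Eq. (14).
    return (a21 * lower_gamma(2.5, psi) * lower_gamma(1.5, a21 * psi) /
            (lower_gamma(1.5, psi) * lower_gamma(2.5, a21 * psi)))

def solve_energy_scale_ratio(psi, m21, eta):
    # Find a2/a1 such that sigma_1(0)/sigma_2(0) = (m2/m1)^eta, i.e. partial
    # equipartition; eta = 1/2 corresponds to full equipartition.
    target = m21 ** (2.0 * eta)  # compare the squared dispersion ratio
    return brentq(lambda a21: sigma_ratio_squared(a21, psi) - target, 1.0, 1e3)

# Illustrative values: Psi = 5, m2/m1 = 1.7, partial equipartition eta = 0.2.
print(solve_energy_scale_ratio(5.0, 1.7, 0.2))
```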
To summarize, our two-component models depend on eight constants. In practice, by taking a common truncation radius and a common pressure anisotropy scale for the two components and by fixing the values of the ratios M 1 /M 2 , m 1 /m 2 (and of η), the relations introduced above reduce the number of free constants to four. Two of them are used to rescale the Poisson equation to a dimensionless form, the remaining two define two independent dimensionless parameters, so that the parameter space explored by the family of two-component models considered in the present study is two-dimensional. As in the one-component models, we use as independent structural parameters the central dimensionless potential Ψ = −a 1 [Φ(0) − E e ] and the parameter γ = a1 d1²/(4πGA1).
Mass segregation
The third condition imposed in the construction of two-component models is meant to incorporate the role of collisions in establishing some sort of equipartition. It is well known that this effect should be accompanied by mass segregation, that is, by a general trend of the lighter component to exhibit a more diffuse distribution with respect to the heavier component. In particular, we note that for our models the central density ratio is given by ρ1(0)/ρ2(0) = (A1/A2) (a2/a1)^(3/2) e^Ψ γ(3/2, Ψ) / [e^((a2/a1)Ψ) γ(3/2, (a2/a1)Ψ)], which, under the conditions listed in the previous subsection, would be expected to fall below unity from a simple picture of mass segregation (in which the central parts should be dominated by the heavier component). As we noted in Subsect. 2.2, when we introduced the concentration parameter Ψ for the one-component models, there are several ways of describing the concentration of a given density profile. Here, we illustrate the result of different definitions that may be adopted. In Fig. 9 we plot the ratio r M1 /r M2 of the half-mass radii of the two components and the ratio of the quantities associated with the parameter illustrated in Fig. 2, that is, of the density contrast of the lighter component ρ 1 (0)/ρ 1 (r M1 ) to that of the heavier component ρ 2 (0)/ρ 2 (r M2 ), as a function of Ψ, for selected values of γ. The ratio r M1 /r M2 exceeds unity for all the models considered and thus it is the more natural parameter to be used to describe the relative concentration of the two components.
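For completeness, the central density ratio written above can be evaluated in the same way. This is a minimal sketch under the same assumptions and notation as the previous snippet; the prefactor A1/A2 follows from the normalization constants of the two distribution functions.

```python
import numpy as np
from scipy.special import gammainc, gamma

def lower_gamma(s, x):
    # Lower incomplete gamma function gamma(s, x) = int_0^x t^(s-1) e^(-t) dt.
    return gammainc(s, x) * gamma(s)

def central_density_ratio(A_ratio, a_ratio, psi):
    # rho_1(0)/rho_2(0) for A_ratio = A1/A2 and a_ratio = a2/a1; values below
    # unity indicate a centre dominated by the heavier component.
    return (A_ratio * a_ratio ** 1.5 * np.exp(psi) * lower_gamma(1.5, psi) /
            (np.exp(a_ratio * psi) * lower_gamma(1.5, a_ratio * psi)))

# Illustrative check: A1/A2 = 1, a2/a1 = 2, Psi = 5 gives a ratio well below
# unity, consistent with a centre dominated by the heavier component.
print(central_density_ratio(1.0, 2.0, 5.0))
```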
In order to highlight how different types of mass segregation can result from the condition of partial energy equipartition imposed on our models, we report the cases of two selected globular clusters: 47 Tuc and ω Cen. We have found the two-component dynamical models that best reproduce the observed photometric and kinematic profiles of the two clusters. In Fig. 10 we plot the density profiles of the two best-fit models found by the procedure in which Red Giant stars are not included among the heavy stars (for a discussion of this fitting procedure, see the next section). The best-fit model of 47 Tuc is characterized by a density profile with a larger density of heavy stars in the central regions. Indeed, this is the type of mass segregation traditionally associated with the tendency of the system to establish energy equipartition. The model of ω Cen exhibits a qualitatively different mass distribution.
In the next section, devoted to setting the correspondence between dynamical models and observations, we briefly describe how mass segregation has a counterpart in the gradient of the profile of the cumulative mass-to-light ratio, defined as the total mass-to-light ratio for a sphere of given radius r.
Fitting the data with dynamical models
We have performed a combined photometric and kinematic fit to the data available for a set of globular clusters, following a procedure very similar to that used in ZBV12. In the present analysis we have decided to minimize a combined chi-square function, which is defined as the sum of the photometric and the kinematic contributions. Differently from the fits reported in ZBV12 by means of one-component non-truncated f (ν) models, the fits presented here, based on the f (ν) T models, are characterized by one additional parameter (γ) strictly connected with the truncation.
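The combined figure of merit can be sketched as follows. This is a schematic Python illustration only: the data containers and the model interface (mu, sigma_los) are hypothetical, not the actual fitting code used for the paper.

```python
import numpy as np

def combined_chi_square(model, phot, kin):
    # 'model' is any object exposing mu(R) (predicted surface brightness) and
    # sigma_los(R) (predicted projected velocity dispersion); hypothetical API.
    chi2_ph = np.sum(((phot["mu"] - model.mu(phot["R"])) / phot["err"]) ** 2)
    chi2_kin = np.sum(((kin["sigma"] - model.sigma_los(kin["R"])) / kin["err"]) ** 2)
    # The quantity minimized over the model parameters (e.g. Psi and gamma,
    # plus the physical scales and the mass-to-light ratio).
    return chi2_ph + chi2_kin
```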
Mass-to-light ratios for one-component models
In the application of one-component models, we follow the general assumption that a constant mass-to-light ratio adequately describes the stellar population, imagined to be homogeneous. This assumption allows us to convert projected mass densities Σ(R) into surface luminosity densities l(R) by means of a simple relation of proportionality. Then, the mass-to-light ratio is found as one of the parameters determined by the fit (see Appendix B of ZBV12).
Mass-to-light ratios for two-component models
In general, for the two-component models we consider the surface luminosity profile as the sum of two contributions, l(R) = Σ 1 (R)/(M/L) 1 + Σ 2 (R)/(M/L) 2 (Eq. (17)). Then, we have performed two different types of fit: (i) In the first procedure, we consider the heavier component made of only dark remnants. Therefore, the fit is similar to that for elliptical galaxies in the presence of a dark matter component. In other words, the photometric fit is carried out by omitting the Σ 2 -term in Eq. (17). Then the kinematic fit is performed by considering only the velocity dispersion profile relative to the lighter component, which is the only component assumed to be visible. (ii) In the second type of fit, we include the Red Giant stars (RGs) in the group of the heavier stars (see Appendix A). In this case, in the photometric fit both components contribute to the surface brightness. Thus, we have explored two possible options: either (a) to assign a reasonable value for the ratio (M/L) 1 /(M/L) 2 , based on the fraction of luminosity expected to come from the RGs and the main-sequence stars present in the system; or (b) to leave the mass-to-light ratio of the heavier component to be determined as a parameter of the best-fit model, and thus to make a prediction on the number of RGs contained in the system. In this paper we report only the results given by option (a), as the best-fit models found with the other option tend to underestimate the contribution of RGs present in globular clusters. In this procedure the kinematic fit considers the heavier component as the kinematic tracer, because most kinematic data come from spectroscopic observations of RGs (i.e., the line-of-sight velocities of RG stars are usually those that are detected for the construction of the observed velocity dispersion profiles).
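Schematically, the two fitting procedures differ only in which surface-density terms enter the predicted luminosity profile. The sketch below is illustrative: the functional form of Eq. (17) is assumed as written above, and Sigma1, Sigma2, ML1, ML2 are hypothetical callables and parameters.

```python
def surface_luminosity(Sigma1, Sigma2, ML1, ML2, include_heavy=True):
    # Returns l(R) = Sigma_1(R)/(M/L)_1 + Sigma_2(R)/(M/L)_2 (assumed form of
    # Eq. (17)); procedure (i) omits the heavy term (dark remnants only).
    def l(R):
        light = Sigma1(R) / ML1
        if include_heavy:  # procedure (ii): heavy stars (incl. RGs) are visible
            light = light + Sigma2(R) / ML2
        return light
    return l
```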
Note that, for the two-component models, the conversion from density profiles to luminosity profiles is not as straightforward as in the one-component case, because it depends on the structural characteristics of the system. In particular, it reflects the interconnection between mass segregation and the gradients of mass-to-light ratios. In Fig. 11, we plot the cumulative mass-to-light ratio for two selected globular clusters in their central regions; the behavior of this quantity as a function of the intrinsic radius r changes according to the type of fit considered. On the one hand, in the case in which RGs are not included in the heavier component, the ratio M/L decreases with r (for the more relaxed cluster 47 Tuc, this trend is more evident). On the other hand, the case in which RGs are included in the heavier component (and in the fitting procedure) is characterized by a mild increase of the cumulative mass-to-light ratio. For the former case we recover a behavior of the cumulative mass-to-light ratio profile similar to that found by van den Bosch et al. (2006) for the globular cluster M15 (NGC 7078); they suggest that the gradient of the ratio M/L at small radii is likely to be due to the presence of a centrally concentrated population of dark remnants, an interpretation that is also suited to describe the result of our fit.
We wish to emphasize that in this paper we are not aiming at providing improved dynamical models for selected clusters. Rather, we wish to demonstrate, by means of the mathematically simplest framework, how different ways of using a multi-component dynamical model actually lead to different pictures of the internal structure of globular clusters, especially in relation to mass segregation and gradients of mass-to-light ratios.
Fits with one-component models
The data sets considered in this paper are the same as used by ZBV12. For convenience, in Table 1 we report some distinctive quantities for the sample of 13 Galactic GCs selected for this paper.
In Fig. 12 we show the best-fit surface brightness and velocity dispersion profiles for 3 of the selected GCs, which are displayed in order of increasing core relaxation time. The dimensionless parameters of the fits and the values of the reduced chi-squared are listed in Table 2. For the statistical analysis we have followed the procedure used by ZBV12. From an inspection of the way the best-fit models are identified, we note that the present models are characterized by significant degeneracy in parameter space: this is a natural consequence of the introduction of the additional parameter related to the truncation.
Fig. 11. The cumulative mass-to-light ratio as a function of the intrinsic radius r for the best-fit models of two globular clusters. The best-fit models are found by means of two different procedures, that is, by taking the heavier component as made of only dark remnants or by including in the heavier component the presence of Red Giants. The vertical lines indicate the position of the total half-mass radius.
Notes (Table 1). For each globular cluster the following quantities are recorded: distance from the Sun (kpc); logarithm of the core relaxation time (years); logarithm of the half-mass relaxation time (years); number of points in the surface brightness profile; and number of points in the velocity dispersion profile (adapted from ZBV12).
In general, the photometric fits by the f (ν) T models are more satisfactory than those performed by means of the King and f (ν) models, for every relaxation class considered (for a comparison of the values of the reduced chi-squared, see Table 4 in ZBV12); indeed, for the majority of the clusters, the minimum chi-squared is inside the 90% confidence interval. The improvement with respect to the King and the f (ν) models is mainly related to the outer regions of the system, where the truncation of our models accommodates well the observed brightness profiles.
In addition, the general trends found by ZBV12 for the non-truncated models are not affected by the truncation significantly. In particular, our models remain able to reproduce the central peak in the velocity dispersion profiles that is characteristic of the least relaxed clusters in the sample (NGC 2419 and NGC 5139).
Notes (Table 2). In Cols. (2) and (3) we provide the best-fit parameters that define the dynamical models, together with their formal errors. We then list the values of the photometric reduced chi-square χ̃²_ph (Col. 4) and the kinematic reduced chi-square χ̃²_k (Col. 5).
In Table 3 we report the values of the truncation radius r tr , the projected core radius R c (that is, the radial location where the surface brightness equals half its central value), and the intrinsic half-mass radius r M . Then we list other relevant quantities, in particular, the total mass M, the central density ρ 0 , and the V-band mass-to-light ratio (M/L) V . For our anisotropic models we have also calculated the intrinsic anisotropy radius r α defined as the radius where α(r α ) = 1 and the global anisotropy parameter κ (see Subsect. 2.4).
A comparison with the King models
No systematic trends are found. The only exception is represented by the truncation radius, which is generally larger for the f (ν) T models, in line with the general finding that the photometric profiles appear to possess a smoother truncation than that of King models (see McLaughlin & van der Marel 2005).
Fig. 12. Photometric and kinematic fits for three globular clusters of the sample. Each cluster is representative of its relaxation class as identified by the core relaxation time T c (for NGC 6341, log T c ≈ 7.96; for NGC 6656, log T c ≈ 8.53; for NGC 2419, log T c ≈ 9.87). The curves represent the surface brightness profile (left panels) and the velocity dispersion profile (right panels) calculated by means of dynamical models. In particular, dotted lines correspond to King models; dashed lines to the non-truncated f (ν) models; and solid lines to the f (ν) T models. In all panels, the dots are the observed data. For each data-point, errors are shown as vertical bars; in the case of the velocity dispersion profile, the horizontal bars indicate the size of the radial bin used to calculate each data point. The King profiles, the f (ν) profiles, and the observed data are taken from ZBV12.
Notes (Table 4). For two clusters considered either by including or by not including RG stars in the heavier component, we provide the best-fit parameters that define the dynamical models (Ψ, γ). We then list the values of the reduced photometric chi-square χ̃²_ph and the reduced kinematic chi-square χ̃²_k.
Radial-orbit instability
One of the points noted in the analysis by ZBV12 is a general concern about the possible occurrence of the radial-orbit instability. Polyachenko & Shukhman (1981) argued that this instability would occur when the anisotropy parameter κ = 2K r /K T , the ratio of the radial contribution to the tangential contribution to the total kinetic energy, exceeds 1.7 ± 0.25. In this respect, for some of the globular clusters considered by ZBV12 (e.g., NGC 6254) the non-truncated f (ν) models might not be applicable. The truncation in our f (ν) T models tends to reduce the global value of the radial contribution to the kinetic energy (see Fig. 7), bringing κ down to values typically associated with stability. Of course, a test by N-body simulations would be desired to confirm this point, but obviously this would bring us well beyond the goals of the present paper.
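As a practical note, the global anisotropy parameter can be estimated from tabulated model profiles by direct quadrature. The following is a minimal sketch, assuming spherical symmetry and profiles sampled on a radial grid; the function name and interface are illustrative.

```python
import numpy as np

def global_anisotropy(r, rho, sigma_r2, sigma_t2):
    # kappa = 2 K_r / K_T, with K_r = 0.5 * int rho sigma_r^2 dV and K_T the
    # tangential analogue; dV = 4 pi r^2 dr under spherical symmetry.
    # sigma_t2 must contain the full tangential term, sigma_theta^2 + sigma_phi^2.
    shell = 4.0 * np.pi * r ** 2
    K_r = 0.5 * np.trapz(rho * sigma_r2 * shell, r)
    K_T = 0.5 * np.trapz(rho * sigma_t2 * shell, r)
    return 2.0 * K_r / K_T

# Values above roughly 1.7 +/- 0.25 (Polyachenko & Shukhman 1981) flag models
# potentially subject to the radial-orbit instability.
```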
Fits with two-component models
As anticipated in the previous sections, in order to address the issue of mass segregation in the simplest mathematical framework, we have studied the performance of our two-component models in fitting two globular clusters characterized by different relaxation conditions: 47 Tuc (NGC 104) and ω Cen (NGC 5139).
The photometric and kinematic fits for these clusters are presented in Fig. 13. The fits are performed by means of the two procedures outlined in Subsect. 4.1. In particular, for the procedure in which RGs are included in the heavier component, we have assumed that RGs contribute 60% of the total luminosity of the cluster in the V-band.
As in the previous subsection, we report the best-fit parameters (see Table 4) and some relevant physical quantities (see Table 5).
The two-component models appear to provide good fits to the observed profiles, thus supporting the hypotheses imposed in their construction. For both clusters the fits performed with the procedure that includes RG stars in the heavier component appear to be better. This is particularly evident for the case of 47 Tuc, for which the best-fit model corresponding to the case without RGs in the heavier component does not reproduce the kinematic profile adequately. We then argue that the role of the stars used as kinematic tracer becomes important when we consider more relaxed environments. In turn, the fit to ω Cen suggests that its stellar population is reasonably homogeneous and mass segregation is probably negligible.
Notes (Table 5). The truncation radius r tr and the half-mass radius are expressed in pc; the core radius R c is expressed in units of arcsec. The total mass is expressed in units of 10^5 M☉ and the central mass density ρ 0 in M☉ pc^(−3). Finally, the mass-to-light ratio is given in solar units M☉/L☉.
Conclusions and perspectives
In this paper we have constructed a new class of truncated anisotropic models as an extension of the so-called f (ν) models, introduced by Stiavelli & Bertin (1987) to describe elliptical galaxies interpreted as the result of incomplete violent relaxation. Such f (ν) T models have been applied to perform a combined photometric and kinematic study of a sample of Galactic globular clusters.
In the first part of the paper, we have constructed one-component truncated models, to describe a stellar system made of a single homogeneous stellar population. From our analysis, the new class of models is found to be well suited to describe the globular clusters of a sample studied earlier. We have compared our fits with those performed for the same sample of globular clusters by ZBV12 by means of King and f (ν) models. In general, the new truncated models represent the surface brightness profiles better, especially in the outer parts of the systems. In addition, the models tend to reproduce the inner parts of the velocity dispersion profiles better than the King models. As also noted by ZBV12, this is probably related to the role played by radially-biased pressure anisotropy in partially relaxed clusters. In the f (ν) and in the f (ν) T models, such radial anisotropy is a signature of the process of incomplete violent relaxation, which may have occurred during the initial stages of the evolution of globular clusters; of course, we should be aware that other mechanisms may be responsible for radially-biased pressure anisotropy. In contrast to some cases found earlier by application of the non-truncated f (ν) models, the f (ν) T models identified by the fits appear to be stable with respect to the radial-orbit instability.
In the second part of the paper, we have extended our analysis by constructing a family of two-component models, with the aim of characterizing in the simplest way a stellar system made of stars with different masses. In fact, if some collisionality is present, stars of different masses are expected to differ in their dynamical evolution, by exhibiting phenomena associated with equipartition and mass segregation. In particular, we have assumed that the stellar system under consideration is made of only dark remnants and main sequence stars, with the possible inclusion of Red Giant stars. RG stars would naturally belong to the component of heavier stars, but obviously differ from the heavy dark remnants from the point of view of their visibility. This raises an interesting modeling problem, that is, the question of the optimal comparison between the two-component models thus constructed and the available photometric and kinematic data. To explore the relevant underlying modeling issues, the new two-component models have been tested on two globular clusters characterized by different relaxation conditions. They generally provide satisfactory fits to the observed photometric and kinematic profiles, in particular when RGs are included in the fitting procedure, by considering their contribution as heavy stars to the photometric profile and their role in tracing the kinematics of the clusters. Interestingly, from our two-component models only the more relaxed cluster (47 Tuc) exhibits the signature of mass segregation in a prominent way.
Notes (Table 3). For each cluster listed in the first column, in double-column form we provide the relevant physical quantities derived from the King models (as reported in ZBV12, left columns) and from our truncated anisotropic models f (ν) T (right columns). In single-column form, as last items, we provide the anisotropy radius for the best-fit f (ν) T models and the global anisotropy parameter κ. The truncation radius r tr and the core radius are expressed in units of arcsec; the intrinsic half-mass radius and the anisotropy radius in pc. The total mass is expressed in units of 10^5 M☉ and the central mass density ρ 0 in M☉ pc^(−3). Finally, the mass-to-light ratio is given in solar units M☉/L☉. (*) Most values of the half-mass radii for the King models reported in ZBV12 are incorrect; in the present paper we report the corrected values.
The two-component models that we have introduced address the effects induced by collisionality on stars characterized by different masses. This is only one particular application of two-component models. We plan to consider soon the construction of two-component models aimed at addressing the issue of dark matter in globular clusters and of others able to touch on the issue of the recently observed multiple stellar populations (generally thought to represent different episodes of star formation; see Gratton et al. 2012). | 2016-03-18T21:00:00.000Z | 2016-03-18T00:00:00.000 | {
"year": 2016,
"sha1": "52f8eed25d64805998c575bde6654f96e5d352af",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2016/06/aa28274-16.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "52f8eed25d64805998c575bde6654f96e5d352af",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
237578683 | pes2o/s2orc | v3-fos-license | SULP (SUSTAINABLE URBAN LOGISTICS PLAN) AS A TOOL FOR SHAPING SUSTAINABLE URBAN LOGISTICS: A REVIEW OF EUROPEAN PROJECTS SUPPORTING THE CREATION OF SULP
Few European cities have logistics plans. However, many European projects involve cities to create such plans and share the results with others. This article reviews the most important projects regarding the creation of Sustainable Urban Logistics Plans (SULP) as the fundamental element of all logistics plans. Having a sustainable logistics plan is necessary to achieve the ambitious goals set in the White Paper on Transport – zero-emission logistics in cities by 2050.
Introduction
Transport is the second largest energy-intensive sector, with 32% share in final energy consumption. In the 2001 White Paper on Transport, the European Commission presented an ambitious goal of CO2-free urban logistics to 2030. Eighty-two percent of Europeans will live in cities by 2050, so it is important to address city transport issues in functional urban areas (FUAs), taking into account the functional transport and economic relations between urban centres and surrounding metropolitan areas. This way, the developed strategies will have an impact on the territorial and economic development of urban regions in Europe. Sustainable Urban Logistics Plan (SULP) is a policy-support tool aimed at a large number of cities in Europe that may not have the resources to undertake a serious policy evaluation and modelling of work for sustainable urban logistics. The SULP methodology is connected to the Sustainable Urban Mobility Plan (SUMP), which has been widely implemented in many European cities. The article summarizes the results of NOVELOG, ENCLOSE, and SULPiTER projects.
Methodology and theory
When analysing the preparation of European cities for urban logistics planning, the desk research method was used, which involved analysing, verifying, and merging existing data and information from the results of European projects.
The collected material was ordered and presented in the form of a summary. The paper can be used as a road map to proceed with planning urban logistics in cities and to create Sustainable Urban Logistics Plans.
Results
The most important conclusions include: -Few cities in Europe have a logistics plan, -There is no set of measures that suits every city, -The action plan, however, can serve all kinds of cities.
Discussion
The main elements of SULP understand the participatory approach and political engagement. A bottom-up approach is assumed, starting with the needs of users, the requirements of operators, and urban objectives. This methodology has been used (and tested) by the nine cities of the ENCLOSE Project and seven cities of SULPiTER project to develop local SULPs and is now available for eventual adoption by other European cities willing to address the issues of urban transport in general urban mobility planning. ENCLOSE (ENergy efficiency in City LOgistics Services) for small and mid-sized European Historic Towns is a project funded by the European Commission under the Intelligent Energy -Europe (Intelligent Energy Europe -IEE) programme, which supports efforts to increase energy efficiency. The ENCLOSE project started in May 2012, and its implementation lasted until November 2014. The main objective of the project was to raise awareness about the challenges faced by energy efficient and sustainable urban logistics in small and medium-sized historical cities in Europe and about concrete opportunities to achieve significant improvements and benefits through the implementation and application of appropriate and effective measures specifically adapted to such urban environments. Certain solutions and innovative programmes already exist in Europe, proving that this approach is feasible and beneficial. Using and developing existing experiences, the ENCLOSE project will help to eliminate potential barriers, promote possible solutions for many small and medium-sized historical European cities, and explore and demonstrate the possibility of transferring these solutions among cities. The project was planning to promote and make future use of energy-efficient and sustainable solutions for urban logistics in as many small and medium-sized historical cities in Europe as possible. Over the past years, the Sustainable Urban Mobility Plan (SUMP) concept has been worked into various European Union documents (e.g. Action Plan on Urban Mobility-COM, 2009, 490; White Paper Roadmap-COM, 2011, 0144), as well as in European projects, notably those financed by the IEE. The current and detailed information on SUMP and its applications are all available on www. mobilityplans.eu website. There too, among other things, one can find guides on how to create SUMP and information about the proposed methodology. SUMP can be partly combined with the previous mobility and transport plans, which have emerged in cities (with specific size and characteristics) in recent years in order to cope with transport problems. Therefore, the ENCLOSE along with the ongoing SULPiTER projects view the sustainable urban logistics plan as an essential part of the urban mobility outlook and aim to develop the SULP in every project city, while respecting the SULP compatibility with SUMP (ENCLOSE, 2015).
SULP in practice is a detailed plan of management of urban logistics processes, a medium-term-solutions design. It is also a tool to define a common vision, a tool for designing a set of appropriate measures, and ultimately a tool to reduce air and noise pollution and energy consumption.
The SULP methodology is defined following the SUMP approach. SULP elements are (Ambrosino, Liberato, Pettinelli, 2015): 1. Setting the target. 2. Urban mobility scenario and priorities. 3. Analysing context and logistic processes. 4. Setting the logistics baseline requirements and baselines. 5. Appropriate measures and services versus requirements. 6. Design of identified solutions. 7. Business model, actor role and responsibility. 8. Evaluation of services/solutions and impacts. 9. Obligations, implementation plan. 10. Promotion and communication plan. 11. Action plan for the adoption of SULP.
Every element of the above has to include justification, tasks, times, and methods. The SULPiTER project, lasting from 2016 to 2019, was supposed to support policymakers in improving their understanding of FUAs commodity phenomena in the energy and environmental perspective. The project increased their capacity to plan urban mobility for freight transport in order to develop and adopt sustainable urban logistics plans. Policymakers in Bologna, Budapest, Poznan, Brescia, Stuttgart, Maribor, and Rijeka engaged in cooperation with other local, regional, and national non-partner authorities and with technical partners. . The partners worked to build the potential of transnational policy and to develop transnational analytical and management tools, thereby contributing to the improvement and adoption of policies for future sustainable energy and sustainability in Freight transport in Central European FUAs. In May 2019, the project ended, and all seven cities were preparing their SULP. Each of them was written on the basis of research in the area -traffic, suppliers, LSP, and business at the location. Depending on the conditions, profiled measures were introduced (SULPiTER, 2019).
The NOVELOG project, which ran from 2015 to 2018, provided knowledge and understanding of urban freight distribution and business travel in order to enable cities to implement effective and sustainable policies and to facilitate cooperation between stakeholders in the field of sustainable urban logistics.
The aim of the project was to strengthen the capacity of local authorities and stakeholders to create sustainable policy by providing tools to manage the "implementation chain" (problem capture, decision, planning, testing, assessment, corrections, implementation). The overall objective was a sustainable transport policy, grounded in a better understanding of cost-effective strategies, measures, and business models for reducing the carbon footprint of logistics operations in cities. The NOVELOG project focused on providing knowledge and understanding of the distribution of goods and business travel, offering guidance on the implementation of effective and sustainable policies and measures. These guidelines support the selection of the most suitable and proven solutions for urban freight transport and related services, and they facilitate stakeholder cooperation as well as the development, field testing, and transfer of best governance and business models.
The key concept of the project was to initiate and enable urban logistics policy formulation and decision-making within the framework of sustainable urban mobility planning, and to support the implementation of appropriate policies and measures. This was achieved by guiding policymakers with sustainable business and logistics models and by facilitating cooperation and consensus between stakeholders.
The triangular pyramid (Fig. 1) presents the interests of and interactions between stakeholders. Municipal authorities (policymakers) are placed at the top vertex as the central stakeholders. The other stakeholders are divided into senders/receivers, logistics service providers (freight forwarders, operators) and society (citizens). The pyramid is a cooperation scheme for each pair of stakeholders, assuming "administrative and regulatory programmes and incentives" (admin) and/or "logistical cooperation" (CoLog), the two concepts under which all the policies and remedial measures discussed in the project are grouped. Consensus is placed at the centre of the pyramid, as it constitutes the main ambition of creating the SULP; it is pursued in each work package by building a platform for exchanging information and knowledge about urban logistics at different levels.
A communication platform or a freight quality partnership can be used to achieve consensus. These tools allow for the accumulation of factors influencing urban freight transport (UFT) and for the identification and collection of key performance indicators (KPIs) and methodologies used to assess the effectiveness of policies and measures for each stakeholder group. They disclose the objectives, priorities, and views of stakeholders and the future development plans of the UFT, and they open a discussion arena that can evolve into a common decision-making process.
The results of selected or tested policies and measures are assessed on the basis of an integrated evaluation framework built on stakeholder objectives, priorities, and perceptions. Each city employs the most suitable and enforceable evaluation method, depending on its characteristics and on the multi-stakeholder decision-making techniques it has elaborated and implemented.
Based on pilot cases and case studies, a guide to best practices has been created, increasing knowledge of the conditions in a given area so that freight movements can be better coordinated and managed. The project investigated a number of schemes and practices, such as incentives for low-emission vehicles, overnight deliveries, parking law enforcement and service cooperation programmes, rescheduling of deliveries, and consolidation within existing systems.
The NOVELOG project developed a toolkit that helps cities to identify measures implemented in other similar cities and facilitate the selection of the most suitable measure or combination of measures for implementation. The toolkit helps the cities focus on the specific measures that would provide the greatest benefit to the city or to specific impact areas that are a city's priority.
As seen in Figure 2, the toolkit allows the user to filter the available measures in the database using a set of city parameters such as objectives, problems, city morphology, UFT logistics profile, UFT market, key stakeholders, and measures of implementation. A drop-down list for each of these parameters makes it as simple as possible for the user to select the most appropriate parameters and search for suitable measures implemented in other cities. Users may also filter results by specific criteria to see impacts achieved elsewhere. To use the tool, the user selects the suitable parameters (Fig. 3) and simply presses "search" to see the list of measures and impacts implemented in cities with the same parameters.
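A lookup of this kind can be sketched in a few lines of code; the parameter and field names below loosely mirror the filters listed above but are otherwise hypothetical and are not the toolkit's actual data model.

```typescript
// Hypothetical sketch of the toolkit's measure search: every parameter the
// user selects must match for a measure implemented elsewhere to be returned.
interface Measure {
  name: string;
  objectives: string[];
  cityMorphology: string;
  uftProfile: string;
  impacts: string; // impact reported where the measure was applied
}

function searchMeasures(db: Measure[], query: Partial<Measure>): Measure[] {
  return db.filter(m =>
    (!query.cityMorphology || m.cityMorphology === query.cityMorphology) &&
    (!query.uftProfile || m.uftProfile === query.uftProfile) &&
    (!query.objectives || query.objectives.every(o => m.objectives.includes(o)))
  );
}
```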
The NOVELOG toolkit also allows users to see the impacts of specific measures, providing information on where and when each measure was implemented and what its impact was. The results for the chosen parameters (Gdańsk features) are shown in Figure 4.
Conclusions
For now, only a dozen or so cities have their own SULPs, but thanks to the outcomes of the NOVELOG, ENCLOSE and SULPiTER projects and their enlarged transfer programmes, the good practices applied in these projects can be spread and reproduced. Every city requires different measures; therefore, no single solution can be applied everywhere. The decision on the selection of measures should be preceded by research and surveys together with stakeholder engagement. However, by following the roadmap adopted by SULPiTER, sustainable goals can be achieved to the satisfaction of all stakeholders. Without implementing a sustainable urban logistics plan, the ambitious goals of the White Paper on Transport cannot be achieved. | 2021-09-20T18:52:04.813Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "76268e63a4c43fe7914b96db9366d0d740600572",
"oa_license": "CCBYNCSA",
"oa_url": "https://czasopisma.bg.ug.edu.pl/index.php/znetil/article/download/5407/4735",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fdd2e4880698702e1c00529241d370396d3bb40e",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
247318597 | pes2o/s2orc | v3-fos-license | Demonstration of VegaPlus: Optimizing Declarative Visualization Languages
While many visualization specification languages are user-friendly, they tend to have one critical drawback: they are designed for small data on the client-side and, as a result, perform poorly at scale. We propose a system that takes declarative visualization specifications as input and automatically optimizes the resulting visualization execution plans by offloading computationally intensive operations to a separate database management system (DBMS). Our demo emphasizes live programming of visualizations over big data, enabling users to write or import Vega specifications, view the optimized plans from our system, and even modify these plans and compare their performance via a dedicated performance dashboard.
INTRODUCTION
Developing interactive visualizations of large datasets requires significant effort. Besides making effective visualization design choices, the developer also needs to implement the underlying architecture to support interactivity at large scale, requiring expertise in client and server development, data management, and user interface design. Ideally, visualization tools should be expressive enough to rapidly prototype a variety of designs, not only in terms of interface capabilities but also dataset scale and efficiency [6].
Visualization specification languages such as D3 [4] and Vega [11] make the process of designing interactive visualizations more systematic, precise, and simple on the client-side. They often require the browser to load and process the data being visualized, which works well for designing responsive interfaces for small data. However, when handling data that is too large for the browser, these languages lack built-in support for coordination between client- and server-side data management and processing. Existing visualization systems like imMens, Falcon, and Kyrix address this problem for specific dataset and interface scenarios (e.g., geospatial datasets, crossfilter interfaces) [3], but they fail to support the level of expressiveness that D3 and Vega provide for innovative designs.
We propose an alternative solution that increases scalability while also aiming to preserve the flexibility of the underlying language. Specifically, we present a series of visualization- and interaction-aware optimizations that can be integrated directly with existing visualization specification languages. Our approach automatically determines which computations to keep on the client and which to offload to a separate DBMS, minimizing unnecessary network data transfers. Furthermore, our optimizer leverages knowledge of the compiled language structure to adapt and deploy database optimizations for interactive exploration contexts. In this way, we can reap the computational benefits of DBMSs while making it significantly easier for users to connect visualization tools with data processors that are already available to them. We demonstrate our approach by implementing it for Vega; we call the resulting system VegaPlus. However, our approach can generalize to other declarative visualization languages as well.
Building a hybrid client-server application via a declarative language (Vega) is a promising approach for the following reasons. First, the underlying visualization system can automatically generate the necessary client and server components. Vega's declarative design enables us to easily reason about and modify the data transformations that the Vega runtime executes. It decouples specification from a runtime execution model that utilizes a dataflow graph, providing optimization opportunities by partitioning dataflow operators to move data and computation across client and server. As a result, the visualizations remain lightweight, stand-alone, and agnostic of the optimization work behind the scenes. Second, Vega is also expressive enough to capture the computational complexity of most visualization interfaces, including those tested in recent DBMS benchmarks designed for visual exploration scenarios [2,5]. Third, Vega is the backbone of a popular ecosystem of visualization tools, including Vega-Lite [10], Voyager [13], and Falcon [7]; thus, making improvements to Vega is of interest to thousands of data enthusiasts, researchers, and companies worldwide.
In this demo, users can use VegaPlus to implement scalable visualizations with no additional effort: they can focus on making nice visualizations, while VegaPlus automatically makes performance decisions for them. VegaPlus takes a dataset and a user-defined Vega specification as inputs and automatically loads the data into a user-selected DBMS. Its optimizer partitions the dataflow workload across client and server to minimize latency, both at initialization and upon user interactions. Furthermore, it comes with a dedicated performance dashboard that users can explore. The dashboard includes a graph visualization showing how the underlying execution plan is partitioned across client and server, as well as tooltips showing the details behind the nodes in the execution plan. Users can also try their own ways of partitioning the execution plan by modifying the server/client partitioning scheme displayed in the dashboard, and compare it with our optimizer's performance.
SYSTEM OVERVIEW
This section reviews the Vega dataflow and summarizes the system architecture (see Figure 1). The UI will allow users to create visualizations, visualize and interact with the visualization plans generated by the middleware, and compare the performance with their customized plans. Specifications and interactions are passed to our middleware, which (1) automatically instantiates a dataflow graph containing SQL translations for inner data operations from the declarative specification, (2) dynamically optimizes the partitioning of visualization plans, (3) prefetches data in anticipation of the following interactions and coordinates the cache, (4) evaluates the dataflow and handles communication across the client and server components.
Dataflow
The dataflow is a common data model in visualization systems (e.g., Vega [11], VTK [12]) where its operators form a directed graph. A dataflow graph executes a series of data transformations (i.e., processing a data stream) through its operators (e.g., filter, map, aggregate) before the result is mapped to visual encodings. In Vega, a dataflow is automatically constructed based on the user's declarative specification. Streaming data objects pass through the edges and are processed by the operators. Parameters that define an operator can either be fixed values or live references to other operators. Interaction events update operator parameters or data inputs, and the changes are only re-evaluated by the necessary operators.
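To make the dataflow model concrete, the sketch below implements a toy pull-based dataflow in TypeScript. The `Op` interface and the helper names are our own illustration and are far simpler than Vega's actual runtime, which propagates changes through the graph rather than pulling them on demand.

```typescript
// A toy dataflow: each operator caches its output and recomputes it only
// when marked dirty (e.g., after an interaction updates one of its parameters).
type Row = Record<string, number | string>;

interface Op {
  inputs: Op[];      // upstream operators (the edges of the dataflow graph)
  dirty: boolean;    // set when a parameter or an input stream changes
  cached: Row[];
  run(data: Row[]): Row[];
}

function evaluate(op: Op): Row[] {
  const upstream = op.inputs.flatMap(evaluate); // pull from parents first
  if (op.dirty) {                               // re-evaluate only if needed
    op.cached = op.run(upstream);
    op.dirty = false;
  }
  return op.cached;
}

// Example: a filter operator whose threshold is a live parameter; an
// interaction would update `min` and set `dirty = true` on this node.
function makeFilter(source: Op, field: string, min: number): Op {
  return {
    inputs: [source],
    dirty: true,
    cached: [],
    run: rows => rows.filter(r => (r[field] as number) >= min),
  };
}
```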
Middleware Optimization Dynamic
Inspired by previous work in [6], VegaPlus contains a middleware server that takes the user-provided declarative visualization specification and instantiates an optimized dataflow graph in which operations are partitioned across client and server. VegaPlus optimizes how to partition the dataflow based on the dataflow graph, estimated data sizes, and current network latencies. The optimization dynamic is described in the following steps, where steps 1-3 handle startup (i.e., visualization creation) and a loop is introduced in step 4 to handle changes and redundancies in the pipeline due to user interactions.
(1) SQL rewriting: A visualization specification can be roughly seen as processing raw data into the intended format so that the visual encoding specification maps processed attributes to target visual component properties. We assume rendering is not the dominant overhead due to the data reduction resulting from data processing and transformation, which in Vega is achieved via transform operators. To extend the dataflow model to a client-server architecture that exploits the scalability advantage of DBMSs, we automatically translate each transform operator into SQL queries and provide an extension to offload intensive calculations to the DBMS. The SQL translations are used in further optimization steps to decide whether they will eventually be executed in a DBMS or whether their equivalent transform operations will be carried out by the client.
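As a rough illustration of this step, the sketch below folds a simplified transform chain into a single SQL string. The transform shapes are abridged from Vega's transform definitions, and the generated SQL is just one of many possible translations, not VegaPlus's actual output.

```typescript
// Simplified, Vega-like transform descriptions (illustrative only).
type Transform =
  | { type: "filter"; expr: string } // e.g. "distance > 500"
  | { type: "aggregate"; groupby: string[]; op: "count" | "avg"; field?: string };

// Fold a transform chain over a base table into one nested SQL query.
// Note: the "avg" case assumes `field` is provided.
function toSQL(table: string, transforms: Transform[]): string {
  let sql = `SELECT * FROM ${table}`;
  for (const t of transforms) {
    if (t.type === "filter") {
      sql = `SELECT * FROM (${sql}) AS sub WHERE ${t.expr}`;
    } else {
      const agg = t.op === "count" ? "COUNT(*)" : `AVG(${t.field})`;
      const keys = t.groupby.join(", ");
      sql = `SELECT ${keys}, ${agg} AS value FROM (${sql}) AS sub GROUP BY ${keys}`;
    }
  }
  return sql;
}

// toSQL("flights", [
//   { type: "filter", expr: "distance > 500" },
//   { type: "aggregate", groupby: ["origin"], op: "count" },
// ]) produces a nested SELECT that the DBMS can execute in one round trip.
```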
(2) Partitioning visualization plans: For datasets of up to a few million records, client-side visualization tools are able to perform fast data processing and visualization entirely on the client. For large datasets that cannot be fully loaded or processed quickly in the browser, raw data and computations can be offloaded to a backing DBMS. Based on a small experiment we conducted, for datasets of up to 4M rows Vega is faster than VegaPlus without our optimizations, for 4M-10M rows performance is comparable, and for 10M+ rows VegaPlus is much faster. With the raw data on the server side, deciding when to bring the dataflow back to the client side remains an important problem for optimizing the overall cost and minimizing latency. For static visualizations (or the initial visualization) over very large data, the optimal point to split the dataflow is after all data processing, to minimize the network cost, since the encoding mapping and rendering that follow do not dominate the latency.
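A minimal cost model for this partitioning decision might look like the sketch below; the cost formula, the row budget, and all parameters are invented for illustration and do not reflect VegaPlus's actual optimizer.

```typescript
// For each operator after which the dataflow could be cut, estimate the cost
// of shipping the intermediate result to the client, and pick the cheapest
// cut whose result still fits the browser's processing budget.
interface SplitCandidate {
  name: string;       // operator after which the dataflow would be cut
  outputRows: number; // estimated cardinality of the intermediate result
}

function chooseSplit(
  candidates: SplitCandidate[],
  bytesPerRow: number,
  netBytesPerMs: number,  // current measured network throughput
  clientRowBudget: number // rows the browser can process responsively
): SplitCandidate {
  let best = candidates[0]; // simplification: fall back to the first cut
  let bestCost = Infinity;
  for (const c of candidates) {
    if (c.outputRows > clientRowBudget) continue; // too big for the browser
    const transferMs = (c.outputRows * bytesPerRow) / netBytesPerMs;
    if (transferMs < bestCost) {
      best = c;
      bestCost = transferMs;
    }
  }
  return best;
}
```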
(3) Optimizing server queries: We optimize the part of the plan that is assigned to the server by node merging and SQL statement rewriting. By merging the SQL queries from individual operations, we can avoid unnecessary network round trips for data transfers. As for SQL statement rewriting, we optimize the subqueries by applying standard rule-based optimizations, including pushing down derived conditions from outer subqueries, pruning projections, simplifying expressions, etc.

(4) Prefetching and re-partitioning: Interactions impose an even stricter latency requirement on visualizations [2]. Based on the idea of partial execution in subsection 2.1, partially processed data can be brought back to the client earlier, so that a downstream interaction parameterized by such data only triggers a faster partial execution. To optimize partitioning for each interaction, we construct a prediction model of potential user actions with user modeling techniques [1], and we prefetch and cache the requested data during idle times. Based on the prediction, we re-partition the dataflow to generate potential plans that split right before the interaction handlers in the dataflow. When an interaction triggers, we pick a plan based on the interaction and the cache state, and evaluate it accordingly.
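The prefetching step can be pictured as a small cache keyed by the query each predicted interaction would issue; this is a hypothetical sketch, and `runQuery` stands in for whatever executes SQL against the chosen DBMS.

```typescript
// During idle time, speculatively run the queries the prediction model
// expects; an interaction then resolves from the cache when it hits.
const cache = new Map<string, Promise<unknown[]>>();

function prefetch(
  predictedQueries: string[],
  runQuery: (q: string) => Promise<unknown[]>
): void {
  for (const q of predictedQueries) {
    if (!cache.has(q)) cache.set(q, runQuery(q)); // fire and forget
  }
}

function onInteraction(
  q: string,
  runQuery: (q: string) => Promise<unknown[]>
): Promise<unknown[]> {
  return cache.get(q) ?? runQuery(q); // fall back to the DBMS on a miss
}
```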
DEMONSTRATION WALKTHROUGH
In this section, we describe our demonstration scenarios. Our demo comes with several real-world datasets and commonly used visualization designs. In addition, the examples demonstrated are augmented by real ones from Vega visualization creators. Users will also be able to customize various types of visualizations using any dataset they choose.
Our tool is motivated by the goal of enabling web-based interactive visualization of large data with minimal edits to the original Vega specification. For the sake of space, we describe how to achieve this goal with just two of our examples.
US Airline Flights: The dataset consists of flight arrival and departure details for all commercial flights in the USA from 1987 to 2008. We use a simple record-count histogram to explore the data distribution of each data field. Users can select the target data field from a drop-down menu and use the slider to find a desired binning range to summarize the record count.
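To give a flavor of what the underlying specification looks like, below is an abridged, TypeScript-style fragment of a Vega spec for such a histogram. It is incomplete by design (scales, axes, and mark encodings are omitted), and the data URL and field name are hypothetical.

```typescript
// Abridged Vega-style histogram spec: a slider-bound signal drives the bin
// transform, and an aggregate counts records per bin (illustration only).
const histogramSpec = {
  signals: [
    { name: "maxbins", value: 20, bind: { input: "range", min: 5, max: 100, step: 1 } },
  ],
  data: [
    {
      name: "flights",
      url: "data/flights.csv", // hypothetical dataset location
      transform: [
        { type: "bin", field: "depdelay", extent: [-60, 180], maxbins: { signal: "maxbins" } },
        { type: "aggregate", groupby: ["bin0", "bin1"], ops: ["count"], as: ["count"] },
      ],
    },
  ],
  marks: [{ type: "rect", from: { data: "flights" } /* encodings omitted */ }],
};
```

In VegaPlus, the bin and aggregate transforms in such a spec are exactly the operators that the optimizer may rewrite into SQL and offload to the DBMS.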
Census-based Occupation History: The dataset comprises details collected by the U.S. Census on occupations reported between 1850 and 2000. To show all of the occupations reported in a given year, an aggregate transform operation was used to stack the occupations atop each other, with the height of each band determined by the occupation's frequency in the respective year. The data are illustrated with an interactive stacked area graph, in which the user can perform multiple tasks. Filtering by gender can be performed by means of radio buttons. Users can also use our search box, which supports regular expressions, to filter the jobs.
Demo Workflow
Our demo interface consists of two major views: a visualization editing view ( Figure 2) and a performance view (Figure 3). In this section, we summarize how SIGMOD attendees can interact with each view and their significance in showcasing the contributions of our new visualization-aware optimizations.
Visualization Editing View: This view allows the user to create on-the-fly visualizations over large data, as well as inspect intermediate data results. It can be used as an independent visualization tool that enables fast visualization authoring and interaction with large data. Users can create visualizations using one of our pre-loaded datasets and choose a DBMS back-end. We currently support PostgreSQL [8], OmniSciDB, and DuckDB [9].
To use the visualization editing view, the user uploads a specification and/or uses the live editor (Figure 2-left) to modify the current specification. Existing example specifications (including the above examples) will also be made available to users. When modifying a specification in the editor, the user will see the updated visualizations rendered live (Figure 2-middle). The flight example is shown in Figure 2, where the user can also inspect the data results defined in the specification. For example, the "binned" data (Figure 2-right) results from the aggregation and is mapped to the rect visual marks (i.e., bars).
Performance View: This view is an interactive dashboard of the overall visualization plan, showing both the dataflow graph overview (top-left of Figure 3) and the execution plan with its performance chart (top-right of Figure 3). The main view at the bottom presents the dataflow graph; specifically, different colors indicate which operators are placed on the server and which are not. The execution plan in Figure 3 continues the flights example, where the extent, bin, and aggregate operators are all placed on the server. Operator parameters and rewritten SQL queries are shown as tooltips when users hover over the nodes. Further, users can toggle the operators to customize the partitioning. For instance, the user could assign the bin operator to be executed on the client. In this case, data would be requested from the DBMS so that they can be allocated into buckets on the client, which makes execution much slower because of increased data transfer and less efficient SQL queries. Additionally, for the user's reference, we show a stacked bar chart (top-right of Figure 3) displaying the overall result and which components (i.e., client, server or network) take up the most time. We have one bar for each plan, and, for each stacked bar, we map different colors to the server, client, and network components. The user can compare the performance of Vega alone, our recommendation, and the user's own partitioning made by interacting with the dataflow graph or by simulating different network latencies.
CONCLUSION
We present a new approach to scalable interactive visualization by optimizing declarative visualization specification languages. We demonstrate our approach by applying it to the Vega visualization language, and we refer to the resulting system as VegaPlus. Our proposed demonstration provides an intuitive, real-time experience with implementing and interacting with visualizations over large data, and a unique environment for interactively optimizing the underlying execution plans that is easily adjustable for demo users. | 2022-01-19T02:16:17.368Z | 2022-01-18T00:00:00.000 | {
"year": 2022,
"sha1": "308c00b498da2a2afe00de692b21a8a173386f51",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "308c00b498da2a2afe00de692b21a8a173386f51",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
59527325 | pes2o/s2orc | v3-fos-license | A suitable marking method to achieve lateral margin negative in endoscopic submucosal dissection for undifferentiated-type early gastric cancer
Background and study aims Delineating undifferentiated-type early gastric cancer (UD-type EGC) from noncancerous areas is difficult. Therefore, the lateral margin negative (LM−) resection rate of endoscopic submucosal dissection (ESD) is lower for UD-type EGC than for differentiated-type EGC. This study aimed to retrospectively evaluate the effectiveness of a marking method with circumferential biopsies in ESD for UD-type EGC. Patients and methods We analyzed the clinical outcomes of ESD in 127 patients with UD-type EGC between April 2013 and 2017. We performed diagnostic delineation of cancerous areas using magnifying endoscopy with narrow-band imaging, and four or more circumferential biopsies approximately 5 mm apart from the estimated lesion border were obtained to confirm noncancerous areas. The markings were placed on the circumferential biopsy scars, and a mucosal incision line was made outside the markings. Results The median sizes of the tumors and ESD specimens were 12 and 35 mm, respectively. The en-bloc resection rate was 100 % (127/127), and the LM− and curative resection rates were 97.6 % (124/127) and 80.3 % (102/127), respectively. Circumferential biopsy in preoperative esophagogastroduodenoscopy successfully identified misdiagnosis of the cancerous area in four patients (3.2 %), three of whom (2.4 %) achieved LM− resection. LM + resection was pathologically identified in three patients (2.4 %), all of whom underwent non-curative resection because of a tumor > 20 mm. The proportion of patients with a shortest distance ≥ 5 mm from the lesion edge to the specimen edge was 88.2 % (112/127). Conclusion Our marking method with circumferential biopsies may reduce LM + resections in ESD for UD-type EGC.
In previous reports, the lateral margin negative (LM−) resection rate of endoscopic submucosal dissection (ESD) for undifferentiated-type early gastric cancer (UD-type EGC) has been lower than that for differentiated-type early gastric cancer (D-type EGC) [10-12].
Regarding the ESD procedure, the markings were commonly placed entirely around the lesion before the mucosal incision. Generally, for ESD of D-type EGC, the markings are placed 2 to 3 mm apart or more from the estimated border of the lesion area [10]. For UD-type EGC, some previous studies reported that the markings were placed 5 to 10 mm apart from the estimated border of the lesion area [5,7]. However, the details of a suitable marking method have not yet been clarified.
We previously reported that magnifying endoscopy with narrow-band imaging (M-NBI) diagnosis and circumferential biopsies, which confirm the non-neoplastic mucosa in preoperative esophagogastroduodenoscopy (EGD), are needed to achieve LM− resection in ESD for UD-type EGC [5]. Therefore, we routinely perform M-NBI diagnosis and circumferential biopsies in ESD cases of UD-type EGC. In this study, we aimed to retrospectively evaluate the effectiveness of this marking method with circumferential biopsies in ESD for UD-type EGC.
Patients
In our institution, we carried out ER for EGC in 1,731 patients between April 2013 and 2017. Of these patients, 1,570 were preoperatively diagnosed with D-type EGC, whereas 161 were preoperatively diagnosed with UD-type EGC. Of the 161 patients, we excluded 14 with differentiated dominant-type EGC on ESD specimens, seven with prior gastrectomy and reconstructive surgery involving the stomach for esophageal cancer, four in whom circumferential biopsies were not performed in preoperative EGD, three with lesions located near an ulcer scar or anastomosis, three in whom the tumor edge was in contact with the esophagogastric junction or pyloric ring, two with simultaneous multiple EGCs, and one who underwent endoscopic mucosal resection. We analyzed clinical outcomes of ESD in 127 patients with UD-type EGC who fulfilled the expanded indications for ESD (▶ Fig. 1). Written informed consent was obtained from all patients prior to undergoing the procedure. This study was approved by the Institutional Review Board of Cancer Institutional Hospital (IRB No. 2017-1113).
Strategy of preoperative EGD
Preoperative EGD was performed in all patients. We mainly used the LUCERA-ELITE system (Evis Lucera Elite System; Olympus Medical Systems, Tokyo, Japan) and GIF-H260Z or GIF-H290Z endoscope (Olympus Medical Systems). We carried out diagnostic delineation of cancerous areas using M-NBI and dye spraying endoscopy with application of indigo carmine after white light imaging (WLI) observation. In the M-NBI diagnosis, we also carried out diagnostic delineation of cancerous areas using expansion of the intervening parts, which enhances the diagnostic capability [13]. Following the M-NBI diagnosis, circumferential biopsies approximately 5 mm apart from the estimated border of the lesion were obtained at even intervals. Four circumferential biopsies were the standard, but if the interval between the circumferential biopsies was widened or the border of the lesion was unclear, five or six circumferential biopsies were performed. If the result of the circumferential biopsy was cancer-positive, even in only one of the specimens, we performed a secondary preoperative EGD to confirm whether the additional circumferential biopsies were cancer-negative.
ESD procedure
ESD was carried out under the supervision of an expert endoscopist certified by the Japan Gastroenterological Endoscopy Society, with the patient anesthetized with midazolam and pethidine hydrochloride. We performed ESD using the insulated-tip (IT) knife2 (Olympus Medical Systems) as the primary resection device, and ERBE VIO 300 D (Erbe, Tubingen, Germany) or ESG-100 (Olympus Medical Systems) as the electrosurgical generator. To lift the mucosa, 0.4 % sodium hyaluronate solution (Mucoup, Boston Scientific Co., Tokyo, Japan and Seikagaku Co., Tokyo, Japan) was injected into the submucosal layer. Markings were placed on the circumferential biopsy scars using the GIF-H260Z or GIF-H290Z endoscope and an argon plasma coagulator (APC) probe or snare tip. When circumferential biopsy scars were not recognizable, the preoperative EGD images of the biopsies were considered. Subsequently, the markings were placed at least 5 mm apart from the estimated border of the lesion area. A mucosal incision line was made outside the markings. The mucosal incisions and submucosal dissections were mainly carried out using GIF-Q260J (Olympus Medical Systems), which also has a water-jet function. After lesion dissection, a preventive hemostatic procedure to coagulate vessels on the artificial ulcers was immediately performed using hemostatic forceps. ▶ Fig. 2 shows an example of the marking method.

[▶ Fig. 1 Flowchart of inclusion of the patients with undifferentiated-type early gastric cancer: of 1,731 patients who underwent ER for EGC between April 2013 and 2017, 161 were preoperatively diagnosed with UD-type EGC, and, after exclusions, the clinical outcomes of ESD in 127 patients (110 with signet ring cell carcinoma, 17 with poorly differentiated adenocarcinoma) were analyzed. ER, endoscopic resection; EGC, early gastric cancer; UD-type EGC, undifferentiated type early gastric cancer; ESD, endoscopic submucosal dissection; EMR, endoscopic mucosal resection; EGJ, esophagogastric junction.]
Pathological diagnosis
Pathological findings of ESD specimens were evaluated using version 14 of the Japanese Classification of Gastric Carcinoma [14]. All ESD specimens were sectioned into 2-mm slices and evaluated through histopathological examinations. The Japanese classification system categorizes histological types of gastric carcinoma into the following groups: differentiated and un-differentiated. The differentiated group consists of well-differentiated carcinoma, moderately differentiated carcinoma, and papillary adenocarcinoma, whereas the undifferentiated group consists of poorly differentiated adenocarcinoma (PDAC) and signet ring cell carcinoma (SRC). LM was considered negative if no cancer cells were present within 2 mm from the specimen edge.
Therapeutic outcome parameters
We evaluated characteristics of the patients with UD-type EGC and their lesions in terms of the following parameters: age, sex, location, gross type, tumor size, circumferential biopsies, and histology. Definitions used for the evaluation of ESD therapeutic outcomes were as follows: en-bloc resection was defined as the successful resection of a lesion in one piece, irrespective of the pathological findings; R0 resection as en-bloc resection with both the lateral and vertical margins negative for cancer cells; and curative resection as resection that satisfied the expanded indications for ESD. ESD operation time was defined as the duration from endoscope insertion to its removal.

▶ Fig. 2 A case of undifferentiated-type early gastric cancer with successful lateral margin negative resection in a 56-year-old man. a In preoperative esophagogastroduodenoscopy, a discolored lesion is located on the greater curvature of the antrum (white arrow). b In M-NBI diagnosis, a diagnostic delineation of cancerous areas (yellow dotted line) using expansion of the intervening parts is performed. c Four circumferential biopsies (numbers 1 to 4) approximately 5 mm apart from the estimated border of the lesion are obtained, which are identified as cancer-negative samples. d The markings are placed on the circumferential biopsy scars and ESD is performed. e Histological mapping: the ESD specimen is excised as indicated by the black lines. The area of the lesion itself is represented by the red lines. The shortest distance from the lesion edge to the specimen edge is 7 mm. It is diagnosed as successful LM− resection. Pathological findings: ESD specimen size, 35 × 33 mm; tumor size, 12 × 12 mm; and type 0-IIb, signet ring cell carcinoma, UL-, M, ly0, v0, LM-, VM-. f On retrospective consideration, the area of the lesion itself is depicted by the blue-dotted line. M-NBI, magnifying endoscopy with narrow-band imaging; ESD, endoscopic submucosal dissection; LM, lateral margin; VM, vertical margin.
We then investigated the shortest distance from the lesion edge to the specimen edge. The pathologist serially sectioned the post-ESD specimens at 2-mm intervals to histologically estimate the area of the lesions, which was mapped on a photograph for measurement of the shortest distance (▶ Fig. 2e). The overall median shortest distance was 7 mm (range, 0-13 mm). In 112 patients (88.2 %), the shortest distance was ≥ 5 mm, which was considered a safe LM. In 15 patients (11.8 %), the shortest distance was < 5 mm, which was considered an insufficient LM (▶ Fig. 3).
Clinical characteristics of patients with LM +
Characteristics of the three patients (2.4 %) with pathologically identified LM + are shown in ▶ Table 3. All LM + patients had lesions outside the indications for ESD, i.e., a tumor size > 20 mm. In all LM + patients, the cancerous area had spread within the atrophic mucosa in the lesser curvature of the stomach from the angle to the lower gastric body. Two of the three LM + patients underwent additional treatment. One patient underwent secondary ESD 3 weeks after the initial ESD, and the residual carcinoma, which was close to the ESD scar, was successfully resected. Another patient underwent laparoscopic distal gastrectomy, and a 7 × 7-mm residual carcinoma without lymph node metastasis was found on the oral side of the post-ESD scar (▶ Fig. 4). The remaining patient, who was elderly, refused additional treatment because of his age (> 70 years) and chronic heart failure. This patient was followed up for 38 months, and no recurrence has been observed to date.
Discussion
Generally, the LM− resection rate in ESD for D-type EGC has been reported as 96.9 % to 99.0 % [10-12], whereas that for UD-type EGC has been reported as 72.7 % to 94.8 % [5-9], which is lower than the result for D-type EGC. Our marking method for UD-type EGC, which secures an LM of approximately 5 mm from the estimated border of the lesion after M-NBI diagnosis and circumferential biopsies, achieved a high LM− resection rate (97.6 %), which is an excellent result compared with previous reports. Hwang et al. reported that the residual/recurrent tumor rate was 34.5 % in cases with pathologically diagnosed LM + resection, and undifferentiated histology was an independent risk factor for the development of residual/recurrent tumors [15]. Therefore, the standard secondary management in cases with LM + resection for UD-type EGC is surgical gastrectomy [3], and LM− resection is important.

Following recent advances in M-NBI, our institution previously reported [13] the by-growth pattern on the M-NBI features of UD-type EGC and identified and compared significant expansion of the intervening parts within neoplastic lesions with noncancerous regions. Expansion of the intervening parts enhanced the diagnostic capability, and accurate diagnosis rates for the diagnostic demarcation of UD-type EGC improved from 53.9 % to 81.5 % with the addition of M-NBI to WLI [16]. Thus, accurate identification of the cancerous areas has become possible even in UD-type EGC. As another procedure for precise diagnostic delineation of cancerous areas, circumferential biopsies confirming the noncancerous areas are useful to identify a misdiagnosis of the cancerous areas. Cancerous areas of lesions with lateral extension beneath the noncancerous mucosa, which carry a high risk of LM + resection by ESD [4,11], are sometimes difficult to delineate precisely, even when using M-NBI. In particular, achieving LM− resection in such lesions necessitates the consideration of circumferential biopsies on preoperative EGD. In this study, four patients (3.2 %) were cancer-positive by circumferential biopsy; of these, three achieved LM− resection by undergoing secondary preoperative EGD to confirm that the additional circumferential biopsies were negative (▶ Fig. 5).

[▶ Table 2 Therapeutic outcomes of 127 patients who underwent endoscopic submucosal dissection for undifferentiated-type early gastric cancer (UD-type EGC). Specimen size, median: 35 mm; the shortest distance from the lesion edge to the specimen edge, median (range): 7 (0-13) mm. SRC, signet ring cell carcinoma; PDAC, poorly differentiated adenocarcinoma. 1 Some patients had more than one curative factor.]

[▶ Table 3 Clinical characteristics of undifferentiated-type early gastric cancer with pathologically positive lateral margin (case, age, the shortest distance from the lesion edge to the specimen edge). SRC, signet ring cell carcinoma; ESD, endoscopic submucosal dissection.]

▶ Fig. 4 A case of undifferentiated-type early gastric cancer with pathologically positive lateral margin in a 64-year-old man. a In preoperative esophagogastroduodenoscopy, a discolored lesion is located on the lesser curvature of the lower gastric body (white arrow). b A diagnostic demarcation using M-NBI is performed and then, four circumferential biopsies (numbers 1 to 4) are obtained 5 mm apart from the lesion, which were identified as cancer-negative samples. c The markings are placed on the circumferential biopsy scars and ESD is performed.
In this study, the cancerous area in the three patients with LM + resection had spread under the atrophic mucosa in the lesser curvature of the stomach from the angle to the lower body. Diagnostic demarcation in the lesser curvature of the stomach between the angle and gastric body is difficult due to the strong atrophic changes in the background mucosa and endoscopic observation from a tangential angle. Moreover, in those patients with LM + resection, circumferential biopsies could not identify the misdiagnosis of cancerous areas, because the cancerous area had spread between the sites of the circumferential biopsies (▶ Fig. 4). Although larger resection is a simple strategy for LM− resection, unnecessarily large resection causes more bleeding, as more vessels are exposed in the base of the ulcers after ESD. Several studies have reported that an ESD specimen size of > 40 mm is a significant risk factor for post-ESD bleeding [17-19]. In this study, 2.3 % of patients had post-ESD bleeding, which is lower than the rates in previous studies. This could be because the median size of the ESD specimens in this study was 35 mm, which is < 40 mm. Keeping the specimen size to the minimum required for ESD may have contributed to the low post-ESD bleeding rate.

▶ Fig. 5 A case of undifferentiated type early gastric cancer with cancer-positive circumferential biopsy results. a In preoperative esophagogastroduodenoscopy (EGD), a discolored lesion is located on the anterior wall of the lower gastric body (white arrow). b A diagnostic demarcation using M-NBI is performed and then, four circumferential biopsies (numbers 1 to 4) are obtained 5 mm apart from the lesion. Numbers 2 to 4 biopsies are identified as cancer-negative samples, whereas the number 1 biopsy is identified as a cancer-positive sample. c In secondary preoperative EGD, one additional circumferential biopsy (number 5) is obtained approximately 3 mm outside from the lesion, which is identified as cancer-negative. d The markings are placed on the circumferential biopsy scars (numbers 2 to 5) and ESD is performed. e Pathological findings: ESD specimen size, 46 × 40 mm; tumor size, 24 × 23 mm; and type 0-IIc, poorly differentiated adenocarcinoma, UL-, M, ly0, v0, LM-, VM-. f On retrospective consideration, the area of the lesion itself is depicted by the blue-dotted line. Number 1 biopsy is located on the edge of the lesion. By placing the markings on the number 5 biopsy scar, LM− resection is performed successfully. M-NBI, magnifying endoscopy with narrow-band imaging; ESD, endoscopic submucosal dissection; LM, lateral margin; VM, vertical margin.
The current study had some limitations. First, it was a nonrandomized, retrospective, single-center study. Second, it included a small number of patients. Third, patient-selection bias might have been present; to reduce it, we included consecutive patients except for those who satisfied the exclusion criteria. However, a future prospective, randomized, multicenter study is needed to confirm the benefits of this marking method for LM− resection in ESD for UD-type EGC.
Conclusion
In conclusion, our marking method, which secures an LM of approximately 5 mm after M-NBI diagnosis and four or more circumferential biopsies, may reduce LM + resections in ESD for UD-type EGC. | 2019-02-06T21:28:10.224Z | 2019-01-30T00:00:00.000 | {
"year": 2019,
"sha1": "4b7b487c289286a7ed4aa01d66f1be8a76651013",
"oa_license": "CCBYNCND",
"oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/a-0812-3222.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "19a31f3151c4702756bb2b0cf8c1ca9b9a46829f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245820237 | pes2o/s2orc | v3-fos-license | Nuclear Envelope Alterations in Myotonic Dystrophy Type 1 Patient-Derived Fibroblasts
Myotonic dystrophy type 1 (DM1) is a hereditary and multisystemic disease characterized by myotonia, progressive distal muscle weakness and atrophy. The molecular mechanisms underlying this disease are still poorly characterized, although there are some hypotheses that attempt to explain the multisystemic features observed in DM1. An emergent hypothesis is that nuclear envelope (NE) dysfunction may contribute to muscular dystrophies, particularly to DM1. Therefore, the main objective of the present study was to evaluate the nuclear profile of DM1 patient-derived and control fibroblasts and to determine the protein levels and subcellular distribution of relevant NE proteins in these cell lines. Our results demonstrated that DM1 patient-derived fibroblasts exhibited altered intracellular protein levels of lamin A/C, LAP1, SUN1, nesprin-1 and nesprin-2 when compared with the control fibroblasts. In addition, the results showed an altered localization of these NE proteins accompanied by the presence of nuclear deformations (blebs, lobes and/or invaginations) and an increased number of nuclear inclusions. Regarding the nuclear profile, DM1 patient-derived fibroblasts had a larger nuclear area and a higher number of deformed nuclei and micronuclei than control-derived fibroblasts. These results reinforce the evidence that NE dysfunction is a highly relevant pathological characteristic observed in DM1.
Introduction
Myotonic dystrophy type 1 (DM1) is the most common adult-onset muscular dystrophy, with an estimated prevalence of 1:8000 [1,2]. DM1 is characterized by slowly progressing muscle weakness, loss of muscle mass and myotonia. Additionally, DM1 is categorized as a multisystemic disease, affecting other organs, namely the eyes (cataracts), heart (conduction problems leading to cardiomyopathies) and respiratory system, and causing metabolic alterations (insulin insensitivity and diabetes) [3-6]. DM1 is a genetic disease caused by an abnormal, unstable expansion of a CTG trinucleotide repeat in the 3′ UTR of the Myotonic Dystrophy Protein Kinase (DMPK) gene [1,7]. The central protein of DM1, DMPK, is a protein kinase that consists of seven distinct isoforms (DMPK A to G) in humans, which are generated by alternative splicing. DMPK's subcellular localization is confined to either the endoplasmic reticulum or nuclear envelope (NE) (DMPK A and B), mitochondria (DMPK C and D) or cytoplasm (DMPK E, F and G) [6,8,9].
Several studies have been carried out to unravel the molecular mechanisms underlying this pathology. To date, there are three widely accepted hypotheses explaining the pathogenesis of DM1: RNA toxic gain-of-function, haploinsufficiency of DMPK and rearrangement of the DM1 locus [6,10-12]. Despite the great deal of effort devoted to unravelling the molecular mechanism underlying DM1, none of these hypotheses can explain all the multisystemic signs and symptoms. A defect in the positioning of myonuclei, resulting from alterations in nuclear envelope (NE) proteins, has also been proposed as a potential pathological mechanism of DM1, similar to other muscular dystrophies [13-16]. Previous studies have reported that some muscular dystrophies result from alterations in NE stability [13-16]. A common feature of these diseases is the presence of nuclei usually located and grouped in the muscle cells' center, compromising myonuclear movement [13,17]. NE proteins are essential for gene regulation, nuclear structure and muscle function [18,19]. In the case of DM1, very few studies have been carried out to assess alterations of NE proteins [20-22], and the contribution of NE dysfunction to DM1 has not been fully elucidated.
Therefore, the main objectives of this study were to evaluate the nuclear profile in DM1 patient-derived and control fibroblasts and to determine the intracellular protein levels and immunolocalization of the disease-associated DMPK protein and other NE proteins, namely lamin A/C, emerin, lamin-associated polypeptide 1 (LAP1), Sad1/unc-84 protein-like (SUN1), nesprin-1 and nesprin-2, in both cell lines. The results obtained here may provide new insights on the potential contribution of NE dysfunction to DM1 pathogenesis.
Evaluation of Intracellular DMPK Protein Levels in DM1 Patient-Derived Fibroblasts
The precise molecular mechanism underlying DM1 is still elusive. The toxic gain of function of expanded CUG repeats of mutant DMPK mRNA and haploinsufficiency are two well accepted proposed mechanisms [6]. As a consequence, the protein levels of DMPK are found to be decreased in DM1 tissues [23]. To confirm these changes, intracellular DMPK protein levels were evaluated by immunoblotting in DM1 patient-derived and control fibroblasts. Briefly, in this study, two cell lines were used with approximately 1000 CTG repeats, hereafter referred to as DM1_1000 (1) and DM1_1000 (2), two cell lines with approximately 2000 CTG repeats, hereafter designated DM1_2000 (1) and DM1_2000 (2), and one control cell line that comprised between 5 and 27 CTG repeats.
The results presented in Figure 1 show that the intracellular DMPK protein levels were significantly decreased in the DM1_1000 (p = 0.0332) and DM1_2000 (p = 0.0332) fibroblasts when compared to the control (Figure 1).

Figure 1. Intracellular DMPK protein levels in DM1 patient-derived and control fibroblasts. The intracellular protein levels in DM1 patient-derived fibroblasts were estimated in relation to the protein levels detected in the control condition and are presented as mean ± SEM of four independent experiments. Ponceau S staining was used to assess gel loading. The statistical analysis was performed using one-way ANOVA followed by the Tukey's multiple comparison test, used to compare between DM1_1000, DM1_2000 and the control groups. * p < 0.05; DM1-myotonic dystrophy type 1; DMPK-myotonic dystrophy protein kinase; SEM-standard error of the mean.
As expected, significantly lower levels of DMPK protein were observed in the DM1 patient-derived fibroblasts. Therefore, we decide to further explore the contribution of NE dysfunction to DM1 through the evaluation of the nuclear profile as well as protein levels and the subcellular distribution of several relevant NE proteins.
Evaluation of the Nuclear Profile in DM1 Patient-Derived Fibroblasts
The presence of nuclear architectural alterations in DM1 patient-derived fibroblasts was assessed through DAPI staining, followed by the monitoring of several nuclear parameters, namely the occurrence of nuclear deformations, number of micronuclei, nuclear circularity, crossed diameter ratio and nuclear area (Figure 2). Nuclear circularity is a quantitative measure that assesses the circular shape of nuclei, with a maximum value of 1 corresponding to a perfect circle. Concerning nuclear deformations, the existence of blebs, lobed nuclei, micronuclei and nuclear invaginations was taken into consideration. Several visible small nuclei were also quantified as micronuclei.

Figure 2. (A) The nuclear profiles of DM1 patient-derived and control fibroblasts were analysed using fluorescence microscopy and representative images are presented; fibroblasts' nuclei were stained with DAPI (blue). Quantitative evaluation of (B) deformed nuclei, (C) micronuclei, (D) nuclear circularity, (E) crossed diameter ratio, (F) nuclear area compared between DM1 patient-derived fibroblasts (DM1_1000 and DM1_2000) and the control group and (G) nuclear area of < 200 µm² and ≥ 200 µm². The quantitative data are presented as mean ± SEM and were obtained by analysing 100 cells per condition from four independent experiments. The statistical analysis was performed using one-way ANOVA followed by Tukey's multiple comparison test used to compare between DM1_1000, DM1_2000 and the control groups. * p < 0.05, ** p < 0.01. Scale bar = 10 µm. A.U.-arbitrary units; DM1-myotonic dystrophy type 1; SEM-standard error of the mean.
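The text does not state the formula behind the circularity measure; a common choice, and the assumption made here, is the ImageJ-style definition computed from the nuclear area A and perimeter P:

```latex
\text{circularity} = \frac{4\pi A}{P^{2}}
```

Under this definition a perfect circle yields exactly 1, and the value decreases as the nucleus becomes elongated, lobed or invaginated. The crossed diameter ratio is presumably the ratio between two orthogonal nuclear diameters, which likewise equals 1 for a circular nucleus.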
Regarding the presence of nuclear deformations (Figure 2A,B), there was a significant increase in the percentage of deformed nuclei in DM1 patient-derived fibroblasts (DM1_1000, p = 0.0066; DM1_2000, p = 0.0012) relative to control fibroblasts. The results indicated that DM1 patient-derived fibroblasts carrying a higher number of CTG repeats seemed to present a higher number of nuclear deformations (Figure 2B). The number of micronuclei also seemed to increase in DM1 patient-derived fibroblasts in comparison to control fibroblasts (Figure 2C) and was correlated with CTG repeat length. Regarding the crossed diameter ratio, this parameter was significantly increased in the fibroblast nuclei derived from DM1_1000 (p = 0.0328) and DM1_2000 (p = 0.0127) fibroblasts when compared with the control fibroblasts (Figure 2E). Finally, the mean nuclear area of DM1 patient-derived fibroblasts appeared to be larger than that of the control (Figure 2F). Knowing that the average nuclear area of fibroblasts is around 200 µm² [24] (Figure 2G), we quantified the number of cells with a nuclear area < 200 µm² and ≥ 200 µm². Interestingly, it was found that there was a significant increase in the nuclear area in DM1_2000 fibroblasts compared to the control (p = 0.0041) (Figure 2G).
Since important nuclear changes were observed in DM1 patient-derived fibroblasts in relation to control fibroblasts, it seemed important to evaluate some relevant proteins of the NE.
Evaluation of Intracellular Levels and Localization of NE Proteins in DM1 Patient-Derived Fibroblasts
To investigate the intracellular protein levels and subcellular localization of NE proteins in DM1 patient-derived and control fibroblasts, immunoblotting and immunocytochemistry techniques were used, respectively. Essentially, the following NE proteins were evaluated: the nuclear lamin protein lamin A/C (Figure 3); three inner nuclear membrane proteins, emerin (Figure 4), LAP1 (Figure 5) and SUN1 (Figure 6); and two outer nuclear membrane proteins, nesprin-1 (Figure 7A) and nesprin-2 (Figure 7B).

Figure 3. Intracellular protein levels and nuclear localization of lamin A/C in DM1 patient-derived and control fibroblasts. (A) Intracellular lamin A/C protein levels in DM1 patient-derived and control fibroblasts were analysed using immunoblotting. The intracellular protein levels in DM1 patient-derived fibroblasts were estimated in relation to protein levels detected in the control condition and are presented as mean ± SEM of four independent experiments. Ponceau S staining was used to assess gel loading. The statistical analysis was performed using one-way ANOVA followed by Tukey's multiple comparison test to compare between DM1_1000, DM1_2000 and the control groups. ** p < 0.01 (B) Subcellular distribution of lamin A/C in DM1 patient-derived and control fibroblasts was analysed using fluorescence microscopy. Lamin A/C was detected using a specific primary antibody linked to an Alexa Fluor 488-conjugated secondary antibody (green). Nucleic acids were stained using DAPI (blue). Evaluation of lamin A/C-positive (C) nuclei with nuclear inclusions, (D) nuclei with 1-2 or ≥3 nuclear inclusions, (E) deformed nuclei, (F) nuclear invaginations and (G) mild or moderate nuclear invaginations. The quantitative data are presented as mean ± SEM and were obtained by analysing 50 cells per condition from four independent experiments. The statistical analysis was performed using one-way ANOVA followed by Tukey's multiple comparison test to compare between DM1_1000, DM1_2000 and the control groups. * p < 0.05; ** p < 0.01; Scale bar = 10 µm; ↑ represents nuclear invaginations; ^ represents nuclear inclusions; DM1-myotonic dystrophy type 1; SEM-standard error of the mean.

Figure 4. Intracellular protein levels and nuclear localization of emerin in DM1 patient-derived and control fibroblasts. (A) The intracellular protein levels in DM1 patient-derived fibroblasts were estimated in relation to protein levels detected in the control condition and are presented as mean ± SEM of four independent experiments. Ponceau S staining was used to assess gel loading. The statistical analysis was performed using one-way ANOVA followed by Tukey's multiple comparison test to compare between DM1_1000, DM1_2000 and the control groups. (B) Subcellular distribution of emerin in DM1 patient-derived and control fibroblasts was analysed using fluorescence microscopy. Emerin was detected using a specific primary antibody and an anti-mouse Alexa-488-conjugated secondary antibody (green). Nucleic acids were stained using DAPI (blue). Evaluation of emerin-positive (C) nuclei with nuclear inclusions, (D) nuclei with 1-2 or ≥3 nuclear inclusions, (E) deformed nuclei, (F) nuclear invaginations and (G) mild or moderate nuclear invaginations. The quantitative data are presented as mean ± SEM and were obtained by analysing 50 cells per condition from four independent experiments.
The statistical analysis was performed using one-way ANOVA followed by Tukey's multiple comparison test to compare between DM1_1000, DM1_2000 and the control groups. * p < 0.05, ** p < 0.01; Scale bar = 10 µm; ↑ represents nuclear invaginations;ˆrepresents nuclear inclusions. DM1-myotonic dystrophy type 1; SEM-standard error of the mean. The intracellular protein levels in DM1 patient-derived fibroblasts were estimated in relation to protein levels detected in the control condition and are presented as mean ± SEM of four independent experiments. Ponceau S staining was used to assess gel loading. The statistical analysis was performed using one-way ANOVA followed by Tukey's multiple comparison test to compare between DM1_1000, DM1_2000 and the control groups. * p < 0.05 (B) Subcellular distribution of LAP1 in DM1 patient-derived and control fibroblasts was analysed using fluorescence microscopy. LAP1 was detected using a specific primary antibody linked to an anti-mouse Alexa-594-conjugated secondary antibody (red). Nucleic acids were stained using DAPI (blue). Evaluation of LAP1-positive (C) nuclei with nuclear inclusions, (D) nuclei with 1-2 or ≥3 nuclear inclusions, (E) deformed nuclei, (F) nuclear invaginations and (G) mild or moderate nuclear invaginations. The quantitative data are presented as mean ± SEM and were obtained by analysing 50 cells per condition from four independent experiments. The statistical analysis was performed using one-way ANOVA followed by Tukey's multiple comparison test to compare between DM1_1000, DM1_2000 and the control groups. * p < 0.05, ** p < 0.01; Scale bar = 10 µm; ↑ represents nuclear invaginations;ˆrepresents nuclear inclusions. DM1-myotonic dystrophy type 1; LAP1-lamin-associated polypeptide 1; SEMstandard error of the mean. Figure 5. Intracellular protein levels and nuclear localization of LAP1 in DM1 patient-derived and control fibroblasts. (A) Total LAP1, LAP1B and LAP1C intracellular protein levels in DM1 patientderived and control fibroblasts. The intracellular protein levels in DM1 patient-derived fibroblasts were estimated in relation to protein levels detected in the control condition and are presented as mean ± SEM of four independent experiments. Ponceau S staining was used to assess gel loading. The statistical analysis was performed using one-way ANOVA followed by Tukey's multiple comparison test to compare between DM1_1000, DM1_2000 and the control groups. * p < 0.05 (B) Subcellular distribution of LAP1 in DM1 patient-derived and control fibroblasts was analysed using fluorescence microscopy. LAP1 was detected using a specific primary antibody linked to an antimouse Alexa-594-conjugated secondary antibody (red). Nucleic acids were stained using DAPI (blue). Evaluation of LAP1-positive (C) nuclei with nuclear inclusions, (D) nuclei with 1-2 or ≥3 nuclear inclusions, (E) deformed nuclei, (F) nuclear invaginations and (G) mild or moderate nuclear invaginations. The quantitative data are presented as mean ± SEM and were obtained by analysing 50 cells per condition from four independent experiments. The statistical analysis was performed using one-way ANOVA followed by Tukey's multiple comparison test to compare between DM1_1000, DM1_2000 and the control groups. * p < 0.05, ** p < 0.01; Scale bar = 10 µm; ↑ represents nuclear invaginations; ^ represents nuclear inclusions. DM1-myotonic dystrophy type 1; LAP1lamin-associated polypeptide 1; SEM-standard error of the mean.
Figure 6. Intracellular protein levels of SUN1 in DM1 patient-derived and control fibroblasts. The intracellular protein levels in DM1 patient-derived fibroblasts were estimated in relation to protein levels detected in the control condition and are presented as mean ± SEM of three independent experiments. Ponceau S staining was used to assess gel loading. The statistical analysis was performed using one-way ANOVA followed by Tukey's multiple comparison test to compare between the DM1_1000, DM1_2000 and control groups. ** p < 0.01; DM1-myotonic dystrophy type 1; SEM-standard error of the mean; SUN-Sad1/Unc-84.
Figure 7. Intracellular protein levels and nuclear localization of nesprin-1 and nesprin-2 in DM1 patient-derived and control fibroblasts. (A) Intracellular nesprin-1 protein levels in DM1 patient-derived and control fibroblasts. The intracellular protein levels in DM1 patient-derived fibroblasts were estimated in relation to protein levels detected in the control condition and are presented as mean ± SEM of four independent experiments. Ponceau S staining was used to assess gel loading. The statistical analysis was performed using one-way ANOVA followed by Tukey's multiple comparison test to compare between the DM1_1000, DM1_2000 and control groups. * p < 0.05. (B) Intracellular nesprin-2 protein levels in DM1 patient-derived and control fibroblasts. The intracellular protein levels in DM1 patient-derived fibroblasts were estimated in relation to protein levels detected in the control condition and are presented as mean ± SEM of four independent experiments. Ponceau S staining was used to assess gel loading. The statistical analysis was performed using one-way ANOVA followed by Tukey's multiple comparison test to compare between the DM1_1000, DM1_2000 and control groups. ** p < 0.01. (C) Subcellular distribution of nesprin-1 in DM1 patient-derived and control fibroblasts was analysed using fluorescence microscopy. Nesprin-1 was detected using a specific primary antibody and an anti-mouse Alexa-488-conjugated secondary antibody (green). Nucleic acids were stained using DAPI (blue). Evaluation of nesprin-1-positive (D) nuclei with nuclear inclusions, (E) nuclei with 1-2 or ≥3 nuclear inclusions, (F) deformed nuclei, (G) nuclear invaginations and (H) mild or moderate nuclear invaginations. The quantitative data are presented as mean ± SEM and were obtained by analysing 50 cells per condition from four independent experiments. The statistical analysis was performed using one-way ANOVA followed by Dunnett's test to compare between the DM1_1000, DM1_2000 and control groups. * p < 0.05, ** p < 0.01; Scale bar = 10 µm; ↑ represents nuclear invaginations; ^ represents nuclear inclusions. DM1-myotonic dystrophy type 1; SEM-standard error of the mean.
Regarding the intracellular protein levels of lamin A/C, an increase was observed in the DM1 patient-derived fibroblasts DM1_1000 and DM1_2000 (p = 0.0058) in relation to the control fibroblasts, and this increase was more pronounced in fibroblasts with the higher CTG repeat length (Figure 3A). The results also demonstrated that lamin A/C was located in the NE and nucleoplasm, and an increase in lamin A/C immunolabelling was observed in DM1 patient-derived fibroblasts (Figure 3B). The increase in the percentage of lamin A/C-positive nuclear inclusions in DM1 patient-derived fibroblasts was evident, with a significant alteration being observed between DM1_2000 and control fibroblasts (p = 0.0254) (Figure 3C). The number of nuclei with three or more inclusions (≥3) was significantly different between the DM1_2000 and control fibroblasts (p = 0.0012) (Figure 3D).
Concerning deformed nuclei, the DM1 patient-derived fibroblasts showed more lamin A/C-positive deformations than the control fibroblasts, with this increase being significant between the DM1_2000 and control fibroblasts (p = 0.0328) (Figure 3E). Regarding nuclear invaginations, DM1_1000 and DM1_2000 fibroblasts had a higher number of nuclear invaginations than the controls, with this difference being significant in the DM1_2000 fibroblasts in comparison to the control (p = 0.033) (Figure 3F). The patient-derived fibroblasts (DM1_2000) also demonstrated a significant increase in the number of moderate invaginations compared to the control fibroblasts (p = 0.0014) (Figure 3G).
Upon nuclear lamina evaluation, several important alterations were observed in type A lamins, indicating that the NE structure and function could be compromised. Therefore, the subsequent analysis of their functional partners was of paramount importance. The intracellular protein levels of emerin remained apparently unchanged in DM1 patient-derived fibroblasts when compared with control fibroblasts (Figure 4A). Furthermore, our results also demonstrated that emerin was located not only in the NE but also in the nucleoplasm, in which nuclear inclusions were more evident and present in higher number in DM1 patient-derived fibroblasts (Figure 4B-D).
Additionally, the DM1 patient-derived fibroblasts that were immunolabelled for emerin presented a higher percentage of deformed nuclei than the control fibroblasts (DM1_1000 vs. control: p = 0.0245; DM1_2000 vs. control: p = 0.0032) (Figure 4E). The results also showed that the DM1_1000 (p = 0.0040) and DM1_2000 (p = 0.0026) fibroblasts presented a significantly higher number of nuclei with invaginations than the controls (Figure 4F). The DM1 patient-derived fibroblasts showed a significant increase in mild (DM1_1000 vs. control: p = 0.0033; DM1_2000 vs. control: p = 0.0062) and moderate (DM1_2000 vs. control: p = 0.0089) invaginations when compared with control-derived fibroblasts (Figure 4G).
LAP1 is another important inner nuclear membrane (INM) protein, belonging to a dynamic and complex network of interactions spanning the perinuclear space and connecting the nuclear lamina, the NE, the cytoskeleton and the nucleoskeleton. At least two human LAP1 isoforms are known, namely LAP1B and LAP1C [25,26]. LAP1 interacts with several proteins relevant to this study, such as nuclear lamins and emerin [27,28].
The intracellular protein levels of total LAP1, as well as of the individual LAP1B and LAP1C isoforms, were increased in DM1 patient-derived fibroblasts. Furthermore, this alteration seems to be correlated with CTG repeat length, being statistically significant between DM1_2000 patient-derived fibroblasts and control fibroblasts (total LAP1: p = 0.0302; LAP1B: p = 0.0137; LAP1C: p = 0.0210) (Figure 5A). Our results also showed that LAP1 was not only located in the NE; immunostaining of both the NE and the nucleoplasm was observed in DM1 patient-derived fibroblasts (Figure 5B). However, the overall number of nuclear inclusions in fibroblasts derived from patients with DM1 was identical (Figure 5C). When analysing the two established categories, it was observed that the proportion of cells with one or two inclusions tended to decrease in patients with DM1_1000 and DM1_2000 (p = 0.0310) when compared to the control fibroblasts. Concomitantly, the presence of three or more nuclear inclusions was significantly increased in DM1_2000 fibroblasts when compared to the control (p = 0.0472) (Figure 5D).
The percentage of LAP1-positive deformed nuclei in DM1 patient-derived fibroblasts tended to be higher than in the control fibroblasts, and this increase was more pronounced in fibroblasts with a higher CTG repeat length (DM1_2000 vs. control: p = 0.0301) (Figure 5E). However, most deformities observed in LAP1-positive nuclei in patient-derived fibroblasts tended to be mild (Figure 5G).
SUN1 was another inner nuclear membrane protein evaluated. There was a significant increase in the intracellular protein levels of SUN1 in fibroblasts from patients with DM1_1000 (p = 0.0082) compared to the control fibroblasts (Figure 6).
Following the assessment of nuclear lamins and inner nuclear membrane proteins, we carried on with the evaluation of two important outer nuclear membrane proteins, namely nesprin-1 and nesprin-2. SUN1, nesprin-1 and nesprin-2 are the core components of the linker of nucleoskeleton and cytoskeleton (LINC) complex and were therefore evaluated [29].
Regarding nesprin-1, a statistically significant decrease in nesprin-1 intracellular protein levels was observed in DM1_1000 (p = 0.0179) and DM1_2000 (p = 0.0129) patient-derived fibroblasts in relation to the control fibroblasts (Figure 7A). Concerning nesprin-2, the results showed a significant decrease in intracellular protein levels in DM1_2000 patient-derived fibroblasts when compared to the control fibroblasts (p = 0.0059) (Figure 7B).
Regarding immunocytochemistry, this study demonstrated the NE and nucleoplasm localization of nesprin-1 in DM1 patient-derived fibroblasts (Figure 7C). The results demonstrated an increase in the number of nesprin-1-positive nuclear inclusions in DM1 patient-derived fibroblasts, which was statistically significant between DM1_2000 and control (p = 0.0226) (Figure 7D). When we analysed the inclusions by groups, we found that the proportion of cells with three or more (≥3) nuclear inclusions tended to increase in DM1 patient-derived fibroblasts when compared to the control-derived fibroblasts (DM1_1000 vs. control: p = 0.0281; DM1_2000 vs. control: p = 0.0002) (Figure 7E).
Taking nuclear deformity into consideration, the DM1-derived fibroblast nuclei were significantly more deformed than the control nuclei (DM1_1000 vs. control: p = 0.0280; DM1_2000 vs. control: p = 0.0287) (Figure 7F). Regarding nesprin-1-positive nuclear invaginations, DM1_1000 (p = 0.0280) and DM1_2000 (p = 0.0106) patient-derived fibroblasts presented a percentage of nuclei with invaginations significantly higher than the control fibroblasts (Figure 7G). When we distinguished between mild and moderate invaginations, there was an increase in both in patients, which was significant between the control and DM1_2000 for mild (p = 0.0498) and moderate invaginations (p = 0.044) (Figure 7H).
Discussion
In this study, we demonstrated that DM1 patient-derived fibroblast nuclei presented an aberrant nuclear morphology. Further, alterations in NE proteins, namely DMPK, lamin A/C, emerin, LAP1, SUN1, nesprin-1 and nesprin-2, were observed (Figures 1-7). Our results showed decreased DMPK intracellular protein levels in DM1 patient-derived fibroblasts (Figure 1). Since the DMPK protein is encoded by the DMPK gene, which is abnormally expanded in the 3′ UTR region in DM1, the mutant transcripts with abnormal CUG expansions are not efficiently transported to the cytoplasm and accumulate in cell nuclei; therefore, they are not translated into protein [6,10,23,30,31]. Thus, intracellular protein levels of DMPK are reduced in patients with DM1, regardless of the length of the CTG repeat, as shown in Figure 1. This result is particularly important given that alterations in DMPK protein levels in DM1 patient-derived fibroblasts have not been previously reported.
The nuclear architectural alterations, namely nuclear deformation, the number of micronuclei, the crossed diameter ratio and the nuclear area, were increased in DM1 patient-derived fibroblasts when compared to control fibroblasts, suggesting that these alterations represent relevant features of DM1 (Figure 2). The expanded RNA in DM1 patient-derived fibroblasts may explain the changes in nuclear integrity, since the expanded mutant RNA acts on chromatin dynamics by remodelling [32] and changing the positioning of nucleosomes [33-35]. One of the effects of these conformational changes in chromatin is a decrease in barrier-to-autointegration factor (BAF) availability in cells as a consequence of BAF downregulation [36]. In addition, our results regarding DM1 patient-derived fibroblasts revealed a large number of cells with micronuclei (Figure 2C), which may be due to errors in the NE reassembly process after cell division, since an altered reassembly of the NE accompanied by the abnormal incorporation of chromosomes may result in the encapsulation of separate, smaller pieces of genetic material (i.e., micronuclei) [37]. Further, BAF deletion has been associated with defects in NE reassembly [37]. BAF interacts with the NE proteins emerin, MAN1, LAP2 and lamin A/C [38-40]. Thus, the expanded mutant RNA and BAF interference with these NE proteins may explain the increased occurrence of deformed nuclei and micronuclei in DM1 patient-derived fibroblasts. Furthermore, it has also been suggested that the linker of nucleoskeleton and cytoskeleton (LINC) complex dampens forces in the NE, preserving nuclear morphology and constraining nuclear expansion, while also being involved in nuclear positioning and in the disassembly of the NE during mitosis [41-44]. Therefore, alterations in the LINC complex (such as the decrease in intracellular levels of nesprins and the increase in SUN1) may be responsible for the altered localization of nuclei in the cells and the increase in nuclear area in fibroblasts derived from patients with DM1 [45], which is in accordance with our results regarding nesprin and SUN1 protein levels.
In our study, a nuclear lamina protein, lamin A/C, was also evaluated, which demonstrated increased intracellular levels, localization in both the NE and the nucleoplasm, and nuclear deformations in DM1 patient-derived fibroblasts (Figure 3). This may be correlated with the significantly decreased levels of DMPK protein observed in DM1 patient-derived fibroblasts, since, according to a previous study [46], DMPK seems to be essential in maintaining the stability of the NE, and strict regulation of DMPK levels is necessary to stabilize the NE structure. Nonetheless, more studies are needed to decipher the role of increased lamin A/C in DM1 and to understand whether the increase in lamin A/C intracellular protein levels is an attempt to stabilize the nuclear structure. The nuclear deformations observed may be due to a deregulation of chromatin organization caused by the abnormal expansion of the CTG repeat. Furthermore, lamin A/C, as well as emerin, are NE proteins that regulate the organization of chromatin, and this regulation is essential for normal cell functioning. However, chromatin can undergo changes in its organization due to DNA damage [47]. When this damage occurs, cells stop proliferating and consequently enter senescence, forming foci of heterochromatin in the cell nuclei [48,49]. In the case of DM1, the abnormal expansion of the CTG repeat may have a role in the alteration of the subcellular localization of lamin A/C and in the higher number of nuclear inclusions and deformations in DM1 patient-derived fibroblasts, leading to chromatin dysregulation and, consequently, the formation of senescence-associated heterochromatin foci.
The results regarding the three inner nuclear membrane proteins, namely emerin, LAP1 and SUN1, were also interesting. Concerning emerin protein levels, no significant differences between DM1 patient-derived and control fibroblasts were observed (Figure 4A). In addition, we found that emerin in DM1 patients' nuclei was located at the NE and in the nucleoplasm, accompanied by an increase in the number of inclusions and nuclear deformations (Figure 4B-G). Our results are in agreement with a previous study that also used DM1 fibroblasts as a cell model [20]. The fact that our results did not demonstrate changes in intracellular emerin levels leads us to speculate that the structural changes observed in the nucleus are not correlated with emerin intracellular protein levels, but rather are associated with a destabilized nuclear lamina and other NE proteins. Interestingly, it was previously reported that a destabilized nuclear lamina leads to greater nuclear fragility [50,51], resulting in an increase in deformed nuclei and in nuclear breaks accompanied by abnormal chromatin organization and chromatin extrusion [51,52].
LAP1, to our knowledge, has not been previously evaluated in DM1. Our study demonstrated that total LAP1 intracellular levels are increased in DM1 patient-derived fibroblasts, and this increase seems to be correlated with the CTG repeat length (Figure 5A). In addition, we observed an altered localization of the protein, an increase in nuclear inclusions, and deformed nuclei (Figure 5B-G). LAP1 has been associated with processes regulating the development and maintenance of skeletal muscle and the integrity of the NE [53-55]. However, the cause of the increase in this protein was not determined and should be addressed in future studies. Furthermore, LAP1 and torsinA interact with each other, and LAP1 stimulates the activity of torsinA as a AAA+ ATPase [56]. Both proteins have been identified as mediators of the assembly of the LINC complex and are responsible for the localization of nesprins in the NE [57-60]. Thus, torsinA and LAP1 belong to a dynamic network of interactions that connects the nuclear lamina, the NE and the cytoskeleton [57,58,61]. The altered localization of LAP1 and the increase in deformed nuclei observed in our study may therefore be associated with an abnormal positioning of the nuclei and/or nuclear deformations resulting from the mechanical stress exerted on cells with an NE weakened by alterations in the nuclear lamina and in proteins of the LINC complex.
In turn, SUN1 is responsible for locating torsinA in the NE [58,60] and is involved in the connection of the nucleoplasm with the cytoskeleton, in nuclear anchorage and in nuclear migration [62-64]. Our results demonstrated an increase in SUN1 intracellular levels in DM1 patient-derived fibroblasts when compared to control fibroblasts (Figure 6). Mutations in SUN1 (with decreased levels of intracellular protein) have been associated with Emery-Dreifuss muscular dystrophy (EMD), which is histologically manifested by the alteration of nuclear position and, consequently, the degradation of muscle function [65]. Given the alterations in lamin A/C localization reported in DM1, SUN1 may be increased due to the impossibility of interacting with lamin A/C, mimicking what occurs in Hutchinson-Gilford progeria syndrome (HGPS) fibroblasts carrying mutated lamin A and thus sharing mechanisms with other laminopathies.
This hypothesis is reinforced by previous results showing that myoblasts from DM1 patients with positive nuclei labeled for SUN1 did not present changes in the localization of this protein [22]. However, it should be noted that these myoblasts and the cells used in this study are distinct models and the results may differ; thus, further studies will be needed to evaluate this hypothesis.
Finally, the two evaluated outer nuclear membrane proteins, namely nesprin-1 and nesprin-2, showed decreased intracellular levels in DM1 patient-derived fibroblasts. This result is in accordance with previous studies using myoblasts and myotubes from patients with DM1, in which a tendency for nesprin-1 and -2 to decrease with an increasing number of CTG repeats was reported [22]. We also found that nuclei positively labelled for nesprin-1 showed an altered protein localization, an increased number of deformed nuclei and an increased number of nuclear inclusions (Figure 7C-H). It was also previously reported that some muscular dystrophies associated with mutations in SYNE-1 and SYNE-2, the genes encoding nesprin-1 and nesprin-2, respectively, are characterized by abnormal nuclear morphology, micronuclei and fragmented nuclei [15,66], similar to what we observed in our results. These changes are usually due to an incorrect localization of the LINC complex proteins (nesprins and SUN1/2) or their interactors (lamin A/C) [15,67,68]. Low intracellular levels of nesprins result in a defective interaction of LINC complex proteins and/or complex-associated proteins (for example, emerin, LAP1 and lamin A/C) with nuclear actin. With this function impaired, nuclear positioning, NE architecture, gene expression and the maintenance of muscle fibres in patients with muscle diseases are affected [69]. Therefore, the decrease in nesprin-1 intracellular protein levels might be related to the structural changes in the NE (deformed nuclei and nuclear inclusions) observed in our study.
Our results strengthen the hypothesis that NE dysfunction is an important contributor to DM1. Therefore, the identification of the signaling events underlying the NE dysfunction will be of extreme importance for the identification of novel molecular targets for DM1.
Human Samples
Fibroblasts derived from skin biopsies of adult male DM1 donors with different numbers of CTG repeats, and from a healthy control subject, were obtained from the Coriell Institute for Medical Research, Newark, NJ, USA. The clinically affected patients' cell lines selected for this study included two cell lines with approximately 1000 CTG repeats, referred to as DM1_1000 (1) (GM04033) and DM1_1000 (2) (GM04647), and two cell lines with approximately 2000 CTG repeats, designated DM1_2000 (1) (GM03759) and DM1_2000 (2) (GM03989). The DM1 patient-derived fibroblasts with approximately 1000 and 2000 CTG repeat lengths represented the adult and congenital phenotypes, respectively. In turn, the control cell line used in this study comprised between 5 and 27 CTG repeats (GM02673).
Cell Culture
Fibroblast cultures were maintained in T75 flasks with Dulbecco's Modified Eagle Medium (DMEM; Gibco, Thermo Fisher Scientific, Waltham, MA, USA) supplemented with 15% fetal bovine serum (FBS; Gibco™), at 37 °C in a humidified atmosphere with 5% CO2. The medium was changed every other day and all washes were performed using Dulbecco's phosphate buffered saline (PBS; Thermo Scientific, Thermo Fisher Scientific, Waltham, MA, USA). Whenever fibroblast cultures reached a confluence of 80-90%, they were subcultured using 0.05% trypsin-EDTA, plated in complete medium and maintained at 37 °C in a CO2 incubator [70].
Antibodies
The list of primary and secondary antibodies used for Western blotting and immunocytochemistry is summarized in Table 1.
Immunoblotting
Fibroblast cultures were grown in T75 flasks until they reached a confluence of 80-90%. Cell lysates were collected in 1% sodium dodecyl sulphate (SDS) and boiled at 90 °C for 10 min. The total protein content was quantified using the Pierce bicinchoninic acid (BCA) protein assay kit (Thermo Scientific, Thermo Fisher Scientific, Waltham, MA, USA). Protein samples were separated on a 5-20% SDS-PAGE gradient gel and electrotransferred onto nitrocellulose membranes. Reversible staining of the nitrocellulose membranes with Ponceau S (Sigma-Aldrich, Saint Louis, MO, USA), followed by scanning in a calibrated GS-800 imaging densitometer (Bio-Rad, San Jose, CA, USA), was performed to assess gel loading [73,74]. For immunoblotting analysis of target proteins, upon blocking in 5% bovine serum albumin (BSA; Nzytech, Lisbon, Portugal)/1× Tris-buffered saline with 0.1% Tween-20 (TBS-T) for 3 h, the membranes were incubated with the primary antibodies (Table 1) in 3% BSA/1× TBS-T for 2 h at room temperature, followed by overnight incubation at 4 °C. On the next day, the membranes were incubated with the appropriate HRP-conjugated secondary antibody (Table 1) in 5% fat-free dry milk/1× TBS-T for 2 h at room temperature. For the detection of target proteins, the enhanced chemiluminescence ECL™ Select Western blotting detection reagent (GE Healthcare, Waukesha, WI, USA) was used, and immunoblots were scanned in a ChemiDoc imaging system (Bio-Rad, Hercules, CA, USA) [27].
The quantification of intracellular protein levels was achieved with ImageLab software (Bio-Rad, Hercules, CA, USA), and Ponceau S staining was used as a protein loading control for data normalization [27]. Relative protein levels were calculated by comparing the DM1 patients' samples with the control samples.
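As a rough illustration of this normalization step, the following Python sketch divides each lane's band density by its Ponceau S signal and expresses the result relative to the mean of the control lanes. This is not the authors' actual analysis (which was performed in ImageLab); all values, lane assignments and function names are hypothetical.

```python
import numpy as np

def relative_protein_levels(band, ponceau, control_idx):
    """Normalize band densities to total-protein (Ponceau S) loading,
    then scale so the control condition equals 1."""
    band = np.asarray(band, dtype=float)
    ponceau = np.asarray(ponceau, dtype=float)
    loading_corrected = band / ponceau            # per-lane loading correction
    control_mean = loading_corrected[list(control_idx)].mean()
    return loading_corrected / control_mean       # fold change vs. control

# Example: lanes 0-1 control, 2-3 DM1_1000, 4-5 DM1_2000 (made-up numbers)
levels = relative_protein_levels(
    band=[1.0, 1.1, 1.6, 1.5, 2.0, 2.2],
    ponceau=[1.0, 1.0, 1.0, 0.9, 1.0, 1.1],
    control_idx=[0, 1],
)
print(levels)  # control lanes ≈ 1; DM1 lanes give fold change vs. control
```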
Immunocytochemistry
Fibroblasts were plated in 6-well plates containing glass coverslips (Corning, New York, NY, USA) at a cell density of 75,000 cells/well for 24 h. Then, cells were fixed using 4% paraformaldehyde for 20 min and permeabilized with 0.2% Triton X-100/1× PBS for 10 min. After blocking with 3% BSA/1× PBS for 1 h, the cells were incubated with specific primary antibodies (Table 1) in 3% BSA/1× PBS for 2 h at room temperature, followed by incubation with the appropriate secondary antibody (Table 1) in 3% BSA/1× PBS for 1 h in the dark. The coverslips were mounted on microscope slides using Vectashield® mounting medium with 4′,6-diamidino-2-phenylindole (DAPI) (Vector Laboratories, Burlingame, CA, USA) [27,74]. Image acquisition was performed using a Zeiss AxioImager Z1 motorized epifluorescence microscope (Zeiss, Jena, Germany) equipped with a Plan-ApoCHROMAT 63×/1.4 oil objective lens. Microphotographs were taken with a digital AxioCam HR3 camera (Soft Imaging System).
Morphological Analysis
Two hundred nuclei from each cell line were analyzed. From the morphological point of view, the nuclear form and the number of nuclear inclusions were evaluated.
The number of nuclear inclusions was assessed globally and by categories (1-2 inclusions and ≥3 inclusions). Nuclei were considered normal when they presented a typical ring-shaped immunostaining pattern for NE proteins or an ellipsoid shape when stained with DAPI. In turn, nuclei were considered deformed when they presented nuclear alterations/deformations, such as invaginations, blebs, lobes and micronuclei. Additionally, deformed nuclei were subdivided into two categories according to the presence of mild invaginations (very soft deformations) or moderate invaginations (severe deformations). Representative images of mild (Figure 3; DM1_2000) and moderate (Figure 5; DM1_2000) invaginations are presented.
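These scoring rules can be summarized schematically. The short Python sketch below is our own illustrative encoding of the categories described above; the class, field names and example values are hypothetical, and the study's scoring was performed by visual inspection, not by software.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Nucleus:
    inclusions: int              # number of nuclear inclusions counted
    deformed: bool               # invaginations, blebs, lobes or micronuclei seen
    invagination: Optional[str]  # None, "mild" (soft) or "moderate" (severe)

def score(n: Nucleus) -> dict:
    """Apply the categories used in the morphological analysis."""
    if n.inclusions == 0:
        inclusion_class = None
    elif n.inclusions <= 2:
        inclusion_class = "1-2"
    else:
        inclusion_class = ">=3"
    return {
        "shape": "deformed" if n.deformed else "normal",
        "inclusion_class": inclusion_class,
        "invagination": n.invagination,
    }

print(score(Nucleus(inclusions=3, deformed=True, invagination="moderate")))
# {'shape': 'deformed', 'inclusion_class': '>=3', 'invagination': 'moderate'}
```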
Morphometric Analysis
For morphometric analysis, four hundred nuclei from each cell line were evaluated. Quantitative analyses of circularity ((4π × area)/perimeter²), nuclear area and crossed diameter ratio (length/width) were performed automatically using Fiji/ImageJ software.
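Both descriptors reduce to simple arithmetic. The minimal Python sketch below reproduces the formulas with made-up measurements (in the study these quantities were computed automatically by Fiji/ImageJ):

```python
import math

def circularity(area: float, perimeter: float) -> float:
    """(4π·area)/perimeter²: 1.0 for a perfect circle, <1 as shape deviates."""
    return 4 * math.pi * area / perimeter ** 2

def crossed_diameter_ratio(length: float, width: float) -> float:
    """Length/width of the nucleus; 1.0 indicates a round nucleus."""
    return length / width

# A hypothetical nucleus of 180 µm² with a 50 µm perimeter and 18 × 13 µm diameters:
print(circularity(180.0, 50.0))            # ≈ 0.90
print(crossed_diameter_ratio(18.0, 13.0))  # ≈ 1.38
```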
Statistical Analysis
Statistical analysis was conducted using GraphPad Prism 9 software (GraphPad Software, San Diego, CA, USA), and data were analyzed using one-way ANOVA followed by Tukey's multiple comparison test. Quantitative data were presented as mean ± standard error of the mean (SEM) of at least three independent experiments. Values of p < 0.05 were considered statistically significant.
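The same pipeline can be reproduced outside GraphPad Prism. The sketch below uses SciPy (scipy.stats.tukey_hsd requires SciPy ≥ 1.11) on placeholder values to illustrate a one-way ANOVA followed by Tukey's multiple comparison test; the numbers are not study data.

```python
from scipy import stats

# Placeholder relative protein levels for the three groups
control  = [1.00, 0.95, 1.05, 1.02]
dm1_1000 = [1.30, 1.25, 1.40, 1.35]
dm1_2000 = [1.60, 1.55, 1.70, 1.65]

f_stat, p_value = stats.f_oneway(control, dm1_1000, dm1_2000)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

if p_value < 0.05:  # post hoc pairwise comparisons if the ANOVA is significant
    tukey = stats.tukey_hsd(control, dm1_1000, dm1_2000)
    print(tukey)    # pairwise p-values between the three groups
```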
Conclusions
In summary, our results clearly demonstrate that the nuclear profile and nuclear envelope proteins are altered in DM1 patient-derived fibroblasts. Concerning the nuclear profile, an increased nuclear area, a high number of deformed nuclei and a high number of micronuclei were the most prominent alterations observed in DM1 patient-derived fibroblasts.
Regarding the NE protein alterations, the protein levels of lamin A/C, LAP1 and SUN1 were increased, while the levels of emerin and nesprin-1/nesprin-2 remained unaltered and decreased, respectively. Additionally, the results showed an altered localization of these NE proteins, accompanied by the presence of nuclear deformations, including blebs, lobes and/or invaginations that were well correlated with the structural differences in the nuclei observed in DM1-derived fibroblasts.
Our study has strengthened the hypothesis that changes in the NE are important hallmarks of DM1 and supports further studies and the exploitation of NE dysfunction in DM1 as a target for the development of DM1 therapies.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2022,
"sha1": "4f561610823f244335add1d4d48ece2170e62d0e",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "22a638091786751ac1f8aa680ce9d9fae668cfa7",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Experience of Family Members Providing Care for HIV-Exposed Children: Beginning of the Trajectory
During and after pregnancy, mothers with HIV can undergo treatment that is capable of preventing vertical transmission (VT) to their babies. The purpose of this study was to analyze the experience of family members that provide care for children whose mothers have HIV, to reduce the risk of VT, with emphasis on the beginning of this trajectory. This study was based on the qualitative approach, and Symbolic Interactionism was adopted as its theoretical framework. A total of 36 family members participated in the study, all of whom were carers of children aged up to 18 months and waiting for confirmation of the HIV diagnosis. Data were collected in a hospital in northeastern Brazil, between December 2012 and February 2013, and examined by means of content analysis. Child care began during pregnancy, when the possibility of the child having HIV was already anticipated. Some carers had previous experience in providing care for exposed children. Understanding the early trajectory of care will help find ways to provide better support for carers during the trajectory of diagnosis confirmation.
INTRODUCTION
In Brazil, vertical transmission (VT) is the main cause of HIV infection in children under the age of 13 (1). There are nationally established measures during and after pregnancy that can reduce the possibility of HIV transmission from mother to child (2).
In the exercise of maternity, a pregnant woman with HIV completes a long trajectory to ensure the sero-negativity of her child and, during pregnancy, she focuses on self-care and drug therapy to protect her child from VT. Fear of transmitting the virus to the child is very present and causes despair (3). Pregnancy with HIV sero-positivity is characterized as a moment of apprehension in relation to contaminating the child (3,4). This apprehension lasts until the child's diagnosis results, and the mother must also deal with the uncertainty, guilt and demands of care to prevent VT (5,6).
In addition to the demands of being a mother with HIV, discovering the infection usually affects the individual's social network and, in the family context, determines the approximation or distancing of family members due to the stigma of this disease (7). Social support and family relationships are important resources in the experience of living with HIV and have proved powerful in improving the quality of life of people who live with the virus (5,8). HIV directly affects the lives of infected individuals and their families and the process of interacting with and caring for the child, especially for the parents, but it also involves other individuals (9). However, few studies have approached the subject of families affected by HIV (8,10) and, primarily, the care provided by families for exposed children. There is, therefore, a need to expand the focus to how this disease is understood by family members, who act, interact and adapt to the condition of living with HIV (9).
There are recommendations to extend scientific exploration of the subjectivity of the experience of living with HIV (11). Most studies have investigated the perception of mothers (3-5,7,8,12,13) on subjects related to pregnancy, being a mother with HIV and breastfeeding, although it should also be considered that the experience of HIV/AIDS is not limited to those who are infected with the virus, but also extends to those who live with and assume the role of carers of children.
This study acknowledges the importance of professional support for families who experience HIV and the post-natal preventive treatment for children. It is therefore important to understand from which moment these family members consider that they are providing care for children and how they experience this initial stage. In light of the question 'how do mothers with HIV and/or carers experience the start of care for HIV-exposed children?', the aim of this paper is to analyse the experience of family members who are providing care for children whose mothers are infected with HIV to reduce the risk of VT, with emphasis on the beginning of this trajectory.
METHODOLOGY
This research is based on a larger study (14) on care for HIV-exposed children, in which a qualitative descriptive approach was used and Symbolic Interactionism was adopted as the theoretical framework. This framework basically comprises three premises: man acts according to the meaning things have for him, including everything he can perceive in his surroundings; the meaning of these things is derived from interaction with others; and meanings are manipulated and modified through interpretative processes used by individuals to deal with the things they encounter (15).
Data were collected in a teaching hospital, founded in the 1970s and maintained with resources from the Unified Health System (SUS), in north-eastern Brazil, which is considered a benchmark in the care, prevention and treatment of patients with HIV/AIDS and of exposed children, both in the state and in the municipalities of neighbouring states.
Collection occurred from December 2012 to February 2013. A total of 24 mothers, five fathers and seven carers (grandparents, aunts and great grandparents) participated in the study, totalling 36 family members of children born to mothers with HIV. To be selected, participants had to be carers of children under 18 months, without a fully defined diagnosis of HIV and receiving care at the referred service. Exclusion criteria were carers under the age of 18 and lack of knowledge of the HIV serological condition of the mother and of the vertical exposure of the child. The number of participants was not pre-defined, and collection was interrupted upon theoretical saturation of the data, that is, when new data were no longer introduced during the interviews and the purpose of the study was achieved, assuring a full understanding of the phenomenon (16).
Persons responsible for caring for the child and for accompanying care at the healthcare service were approached in the waiting room prior to the child's appointment and invited to a private room. In this room, the objectives and the data collection strategy were explained to carers using an Informed Consent Form (ICF), and the confidentiality and secrecy of their statements were guaranteed. Those who agreed to participate in the study signed the ICF. Mothers who agreed to participate and were accompanied by another family member of the child (father, grandmother or aunt) almost always asked if these members could participate in the interview. As they were also carers and were aware of the child's condition, they were included in the research and asked to sign an ICF.
Data were collected by means of semi-structured recorded interviews in a single encounter of approximately 50 minutes. The interviews were collective, when other family members were present, or individual, and were conducted by the first author, who asked the following guiding questions: 'How have you experienced care to prevent the transmission of HIV to the child? Tell me when it all began and how you have experienced this process.' Data were examined according to inductive content analysis, which consists of preparing, organizing and reporting results (17). The first steps of analysis, referring to the preparation stage, consisted of collecting, transcribing and reading the data to get the overall sense of everything and subsequently select units of analysis and meaning. In the following stage, called organization, the inductive path resulted in coding and creating categories that were grouped to formulate a general description of the phenomenon. In the last stage, results based on the contents of the categories were reported, describing the phenomenon along a historical line of events. This study describes the beginning of this time line, which is the start of the child care trajectory.
To protect the identity of participants, they were identified according to their position in relation to the child (mother, father, grandmother, aunt), followed by the number of the order in which they entered the study. To guarantee reliability, authentic statements of the participants were used to show how the categories were formulated; however, some statements were not listed because they did not specifically contribute to the time frame of this study, although they were fundamental to obtain an understanding and delimitation of the child care trajectory.
This research was approved by the Human Research Ethics Committee of the Universidade Federal de São Carlos (no. 112.500/2012), and the requirements of Resolution CNS 196 of 1996, in force during the preparation and execution of the research project, were observed.
RESULTS AND DISCUSSION
This study reveals the beginning of the care trajectory of HIV-exposed children, from pregnancy to birth. The categories 'Imagining the child with HIV', 'Having previous experience' and 'Wanting the child, in spite of everything' portray this experience and are defined below.
Imagining the child with HIV
Imagining oneself with a child with HIV triggers emotions, especially guilt, in the mother and family members involved in taking care of the child because they feel responsible or incapable of doing anything to change the outcome: having a child that runs the risk of being infected.
It's really tough! I am always worried about him, about the baby. I'm always thinking, afraid he might have HIV, of giving it to him (Mother1).
I was really upset when we found out she [granddaughter] has the disease. So many things came to mind. I thought, if she [granddaughter] were with me, if she lived with me, none of this would be happening to her [great granddaughter] (Great grandmother 24).
Self-guilt is a way of confronting the situation (5). According to Symbolic Interactionism, reality is imposed on us from the moment in which we interpret it. It is precisely continuous social interaction that interferes in the meaning of things and in human action (15). The presence of the virus is reflected in the experience of the carer and lays the foundation for the fear and guilt of having a child with HIV. In spite of advancements in treatment, studies show that a positive diagnosis of this disease creates suffering in the individual and family members (8,9), as it is related to the loss of identity, death, isolation and the impossibility of social interaction, which ultimately becomes a part of the daily lives of families and children (9).
Other family members also blamed themselves for the risk to which the child was exposed and, especially, for the fact that the mother acquired HIV, as if the family environment were associated with containing the infection, which shows that the family is also affected by the disease. Studies reveal that the family experiences feelings of disarray, uncertainty, guilt and impotence that are generally transformed into a reorganization, based on knowledge of the disease (9,10).
When sero-positivity is discovered, before or during pregnancy, the possibility of transferring HIV to the child arises, which generates, in addition to guilt, fear, concern and suffering, often experienced with drug use, thoughts of death and despair. These same feelings and uncertainties related to the possibility of transmitting the virus to the child were also found in other studies (3,18).
To forget, I drank a lot, went out, refused to speak. I thought about being punished, about him not being born healthy;
because I did this and did not protect myself. I wanted to take my own life (Father26). He [husband] drank a lot when I was pregnant, because he thought something would happen to the baby (Mother26).
My companion got so upset when he saw the state I was in that he started to use 'powder' [drugs]. After some time, I noticed and talked to him about why that was happening. 'Because I found out you had the virus and I didn't,' he said. That's what hurt the most. He was blaming himself because I had HIV and because his son could also have it and he didn't (Mother25).
In the experience of paternity with or without sero-positivity, there were differences in relation to feelings of guilt. The father with HIV felt guilty and responsible for the birth of an HIV-exposed child, while sero-negative fathers felt guilty because they did not have HIV, for 'not sharing' the suffering of having the same diagnosis, and for the risk to which the child was exposed.
Research (8) conducted on individuals with HIV and their sero-negative family members revealed that people who live with the virus present a greater number of depressive symptoms. In spite of the stress caused by the disease to the whole family, individuals with HIV are the more susceptible to psychological disorders (8).
Faced with this diagnosis and the possibility of contaminating the child, thoughts of ending one's own life emerged, as equally found in other studies (4,17). Drug use was also found in another study, which showed that this problem is more common in people with HIV (12). In the United States, the key risk factors of families affected by HIV are drug abuse, mental health and parental challenges (10).
Having previous experience
Some participants had already taken care of HIV-exposed children; having previous experience triggered negative or positive feelings that affected the confrontation of the new situation. According to the adopted theoretical framework (15), this occurs because our past enters our actions and we think about it to define the present situation, although the cause of the action unfolds in the present.
Familiarity with the treatment, the time passed since discovery of the HIV diagnosis and successful treatment of the previous child resulting in sero-negativity are aspects that positively affect care of the studied child, in the sense of bringing tranquillity and confidence. Statements showed that stress, fear and lack of knowledge were more evident when providing care for the first HIV-exposed child.
Taking care of him is a lot easier because I already knew what I had. I had already finished all the treatment during my first pregnancy. I'm a lot more confident now. But with the first [child], it was very stressful (Mother26).
A study shows that having or not having previous experience of maternity with exposed children does not seem to reduce anxiety in relation to the HIV diagnosis of the child (6). However, mothers who discovered their serology when pregnant with the first child more clearly expressed fear of infection, due to the more recent impact of discovering they had the disease and because they still did not have much information on the disease and its treatment (6).
When women are aware of their diagnosis and have already experienced pregnancy while infected with the virus, this moment seems less intense, as past experiences showed them that, after completing preventive treatment, their children can be born healthy (18). This increases the confidence of these mothers in relation to taking care of another HIV-exposed child, considering they will receive free prophylaxis during pregnancy, birth and breastfeeding, which can reduce the VT rate to less than 1% (2).
In contrast, apprehension in relation to the diagnosis and the risk of having yet another child with HIV is present, and is heightened when the results of the first child are sero-positive. Feelings of guilt are sometimes relived in the new experience and considered a divine punishment for having, once again, exposed another child to HIV.
I felt bad, and still do to this day. During prenatal care, I refused to do the HIV test. I feel guilty; if I hadn't refused, I wouldn't have a child with that problem today. It's different with her, though. I did my prenatal like I was supposed to (Mother27).
I think about divine punishment. The girl [previous child]
was lucky, the test was negative; now, there's the boy, who wasn't planned. We should have stopped with the girl and taken care of her. We made a mistake, and she [wife] got pregnant [...]. We'd already gotten rid of that 'burden' because we found out she [previous child] didn't have it [HIV]. Now, we found out she's pregnant. It's like a punishment. I'll
get my punishment with this child, because I did lots of bad things in the past (Father26).
The mother felt responsible for transmitting the virus to the child and blamed herself for getting pregnant again or for not having completed prophylactic treatment of the previous child. When this occurs, the blaming of other people as well as shame, pain and anguish are also present (5). The belief that HIV is a punishment from God and related to bad actions from the past was also mentioned in another study (19), which stated that it may appear with feelings of rage, guilt and the desire to take one's own life.
A new pregnancy forces the family to relive feelings of fear, guilt and sadness that seemed forgotten after discovering the sero-negativity of the previous child. This is because the pregnancy triggers the constant reminder of the diagnosis, which causes anxiety and concern about the serology of the child. This is a negative interaction with the disease that results in despair, due to the suffering caused by the result they will obtain for the child, the stigma of living with the virus and the perceived risk of disease or death (3).
Wanting the child in spite of everything
In spite of the presence of HIV, from the moment of discovery and the risk of VT, the child was considered desired in all situations, even when unplanned or when the child was the second case of vertical exposure in the family.
My dream was to have a child and God gave one to me [...]. She was planned. [...] Sometimes, I can't believe I have her (Mother16).
My daughter was planned; we did the entire programme because my husband is sero-negative. If you ask me, she is a Godsend. Every day I spend with her is a day of victory (Mother27).
When I found out about the problem, I said, 'My God, I can't have kids, I want a baby so badly!'. As time passed, I realized it didn't have to be that way! [...] That's when I talked to my husband: It's time, let's have a baby! I completed the treatment correctly, the prenatal; I took the medication, never missed an appointment. I did it all, certain that he would be born without problems (Mother9). I don't feel any rejection. As soon as I found out I was pregnant and had the disease, I thought: I'm going to do everything in my power to make sure he is born healthy. Everything I can do for him, I'll do. Whatever is within my reach, I'll do, because he isn't to blame (Mother25).
The statements showed that the presence of HIV did not negatively interfere with maternity and the bond with the child. This was also reported in an investigation that compared the foetal-maternal attachment of pregnant women with HIV and pregnant women not infected with the virus; the results showed that the presence of HIV did not negatively affect the mother-child relationship (13).
The birth of a child, whether planned or not, brings joy to the environment, especially for the mother. In spite of the HIV diagnosis, a study shows that, in general, maternity is an inherent desire of women or a way of continuing their own lives that was interrupted by the virus (4). Consequently, they try to complete VT preventive treatment with the hope of ensuring a healthy life for their children (3). Planning to have children was also reported in another study (4).
In this study, some mothers did not plan their pregnancies. This raises the question of whether family planning is being discussed with people with HIV. The concept of family planning in healthcare policies, which consists of actions intended to help individuals with conception or to prevent unwanted pregnancies, is still a challenge (2).
When they discovered they were pregnant, the mothers did everything in their power to prevent transmission, as the child was considered a victim of the situation. The child was welcome amidst the suffering caused by the presence of HIV and the difficulties experienced during treatment.
She is a victory. The girl's great grandmother says: 'that little girl is a "sweetheart" that was born. If it weren't for her [baby girl], we would've lost my granddaughter [wife]'. Because a friend of ours, when he found out [he had HIV], he soon died. If it weren't for the child, maybe, when we found out, we would've died (Father3).
Statements showed that the child is viewed as a possibility to recover the lives of the parents, as its arrival allowed the parents to know their serological condition, providing them with the opportunity to take care of themselves and initiate the baby's treatment.
This aspect was also found in a study (3) in which discovering the pregnancy was related to the chance of discovering sero-positivity. The desire to understand the experience of suffering influenced the process, the results of becoming sick and the confrontation of the situation (20).
In this study, discovering HIV was given a new meaning with the arrival of the child, who was the source of strength and joy and the main motive for confronting the disease. The mother found motives to complete prenatal care and follow all the recommendations of the healthcare professionals, who appeared at this time to provide guidance and give hope that it was possible to prevent infection of the child.
In relation to the child, I was very happy; she gave me strength. I was desperate, I cried all the time. Then I stopped to think about her. Now, my sole concern was the fear of transmitting it to her; that's why I did everything by the book [...]. The doctor said: 'If you complete the treatment correctly, there are only two risks: breastfeeding and the moment of birth. That's why you have to do it right, so you don't run that risk' (Mother3).
I didn't kill myself because I thought about him first [child], but it was tough, it was difficult! (Mother25).
Today, I try to be strong. He gave me strength, something to be happy about, but I still get a little depressed (Father26).
Moved by the hope of avoiding VT, the mother did everything in her power to prevent transmission to the child and minimize the guilt that it would produce. Maternal commitment to care to prevent contamination of the child is similar to the findings of other studies (3,13,18), in which women initiated drug therapy as soon as possible and attended all prenatal appointments to reduce the probability of VT. They received information that, if treatment was completed correctly, the probability of a sero-negative child was greater (3,18). The recommendation is for women to initiate VT prophylaxis between the 14th and 28th week of pregnancy (2), with emphasis on the importance of completing prenatal care and on the incentives of women's healthcare professionals for these preventive measures to be effective.
Healthcare services value the birth of healthy babies, and that reinforces the mother's decision to take care of herself in the hope of preventing VT. Moreover, the perception of the child's innocence, as not deserving this condition, and the wish that the child not experience the stigma of HIV are also evident (3). Focusing on the child at this moment, and on the possibility of the child not having HIV, represents, from the interactionist standpoint, the idea that individuals know what they are doing and interpret the meanings of their own acts and the reactions that their actions provoke in others, which, in this case, is the child. Consequently, the reactions of others are used to control their own behaviour (15).
FINAL CONSIDERATIONS
It was possible to learn how mothers with HIV and/or carers experienced the start of care for HIV-exposed children. The care experience began with the mother's struggle to prevent transmission of the virus during pregnancy. This beginning was intense, and there was a re/organization of the disease and pregnancy. Several emotions emerged or were relived during this experience, especially when there was another birth of an HIV-exposed child in the family. Results revealed a commitment to treatment, love and dedication of the mother/carer in relation to the child, conceived as a source of joy and life for the family.
Study limitations are the data collection strategy, completed in a single encounter and during the child's appointment. Collection at a different time from that of the appointment, and with other members of the family involved in the child's care who are aware of the mother's diagnosis, could enhance apprehension of the study object.
As implications for future research, we suggest the inclusion of other family members and the home context of care as potential aspects to expand the body of knowledge of Nursing/Healthcare and the phenomenon of living with HIV and the risks involved.
Understanding the care trajectory of these children can create awareness in healthcare professionals, especially nurses, in relation to the needs of their mothers and families. Obtaining a deeper understanding and clarity of this process of experiencing the start of child care can reduce the risk of VT, as it would also enable us to discover paths that help with the difficulties and the implementation of preventive treatment during the entire trajectory of diagnosis confirmation. | 2017-03-31T10:31:22.868Z | 2014-09-01T00:00:00.000 | {
"year": 2014,
"sha1": "41f42310a6ae902a03cad44a586f5d0c60b293b9",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/rgenf/a/NV9FDwzwnpqNZ3JXr6NDY4C/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Grobid",
"pdf_hash": "41f42310a6ae902a03cad44a586f5d0c60b293b9",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
241385515 | pes2o/s2orc | v3-fos-license | of Interferometers and Comparison of Radio Interferometers with Analog and Digital Extraction of Recorded Signal
Introduction. Radio telescopes incorporated in very long baseline interferometry (VLBI) networks are used to record several narrowband signals (up to 32 MHz), which are extracted by means of base band converters (BBC) from an analog noise signal of an intermediate frequency (IF) with bands up to 1 GHz. When processing the as-obtained data, the method of frequency band synthesis is used. Novel compact radio telescopes (e.g., RT-13) digitalize wideband IF signals. A digital narrowband signal extraction module developed in 2019 provides the possibility of integrating RT-13 radio telescopes with the Russian Quasar VLBI Network. Aim. To assess the accuracy of measuring the interferometric group delay of a signal by a radio interferometer equipped with a digital narrowband signal extraction module, and to compare the sensitivity of interferometers with analog and digital signal extraction systems. Materials and methods. Sensitivity losses of interferometers with different systems for extracting recorded signals were calculated. The accuracy of a multichannel interferometer with synthesis of a frequency band was compared with that of an interferometer recording digital broadband IF signals without band synthesis. The results were confirmed by VLBI observations at the observatories of the Quasar VLBI Network. Results. When replacing the analog system of signal extraction with the digital system, the sensitivity losses of the interferometer decreased slightly. The measurement accuracy of the interferometric group delay remained unchanged. An increase in accuracy was achieved when broadband IF signals were recorded digitally and when a frequency band significantly larger than the IF bandwidth was synthesized. Conditions and minimum synthesized bands were determined under which the accuracy of the interferometer registering narrowband signals exceeds that of the interferometer registering wideband IF signals. Conclusion. The problem of integrating RT-13 radio telescopes with VLBI networks that record video frequency signals was solved. The feasibility of installing digital signal conversion systems on radio telescopes was shown.
Introduction. Data acquisition systems (DAS) are widely used in radio telescopes of very long baseline interferometry (VLBI) networks. These systems are capable of extracting signals with relatively narrow (up to 32 MHz) ∆F bands from a wide (up to 1 GHz) band of intermediate frequencies (IF), followed by their conversion to base band frequencies and digital recording [1,2]. This class of systems also includes the R1002M 16-channel DAS [3], which is installed on RT-32 VLBI radio telescopes of the Quasar VLBI Network [4]. Signals with ∆F bands are extracted from a noise IF signal with a band of 0.1…1 GHz using analog quadrature frequency converters (QFC) and digital downconverters. The extracted signals are amplitude-quantized and formatted according to the international VDIF standard [5] or the VSI-H format [6], followed by transmission of the received observation data for processing by VLBI correlators [7,8]. For VLBI observations under astrometry and geodesy programs, the signals of 5–8 frequency channels with bands of ∆F = 8 or 16 MHz are extracted from the IF band and processed using the method of wideband frequency synthesis [9].
In recent years, the transition to compact radio telescopes with digital systems for recording broadband signals (from 0.5 to 1 GHz) has become the main direction in the development of VLBI [10,11]. Such systems are essential both for the creation of new-generation VLBI complexes [12,13] and for the development of radiometry as a whole [14]. For example, the RT-13 13-meter VLBI radio telescopes were equipped with digital systems for converting broadband signals that have eight channels and are able to record IF signals with 0…512 MHz bands at a sampling frequency of F_d = 1024 MHz [11]. Processing of the high-speed data streams received by the system channels (2048 Mbit/s per channel) is carried out by specialized software VLBI correlators [15].
In order to integrate radio telescopes with broadband channels into existing VLBI networks, where narrowband signals of base band frequencies are recorded and processed, signals with relatively narrow bands should be extracted from the high-speed digital IF signal and converted to base band frequencies (0…∆F). This operation can be performed by digital modules on a field-programmable gate array (FPGA) containing polyphase filters (PPF) and base band converters (BBC) [16]. In terms of structure and clock frequency, the data stream generated by such modules is similar to that produced by the R1002M DAS. As a result, it becomes possible to integrate RT-13 radio telescopes registering broadband signals both with the Quasar VLBI Network and with international VLBI networks that record narrowband signals.
In this regard, it is important to determine the effect of replacing analog DAS with digital signal extraction modules on the sensitivity of radio interferometers and the measurement accuracy of the interferometric group delays τ of the received radio signal. This information is essential both for the rational planning of VLBI observations using heterogeneous signal conversion systems and for the selection of reference radio sources. In addition, this information can be used for developing multifunctional digital systems for converting wideband signals with the aim of upgrading the existing RT-32 radio telescopes and equipping new compact radio telescopes.
In this article, we set out to investigate the loss of instrumental sensitivity in radio interferometers equipped with different systems for extracting recorded narrowband signals from a wide IF band. To this end, we compare the sensitivity and the accuracy of measuring interferometric group delays for an interferometer that extracts the registered narrowband signals digitally and an interferometer with an R1002M DAS.
In connection with the development of VLBI radio telescopes with ultra-wideband radio astronomy receivers (RR) [17,18] and registration systems for broadband signals [10,11], the possibility of synthesizing a frequency band exceeding the passband of the receiving channel (up to 1 GHz) is of particular significance. The receivers of RT-13 radio telescopes [19] have three receiving channels for each frequency range and for either of the two circular wave polarizations, thus allowing frequency bands up to 2.5 and 6 GHz wide to be synthesized in the X (7…9.5 GHz) and K (28…34 GHz) ranges, respectively. Since the effectiveness of such an approach has not yet been clarified, it is of interest to compare the parameters of a multichannel interferometer that registers narrowband signals (up to 16 MHz) digitally for subsequent synthesis of a broadband signal with those of an interferometer recording in parallel (without synthesis) up to three broadband (0.5 or 1 GHz) signals.
Determination of the sensitivity of interferometers based on different systems for extracting recorded signals. The sensitivity of a radio interferometer is characterized by the ratio of the correlation response peak to the root mean square deviation (RMSD) of the residual noise at the output of the correlator. For a single-channel interferometer with a ∆F band of signal recording, the signal-to-noise ratio at the peak of the correlation response is defined as R = χ √(2∆F t_o q₁ q₂), where χ ≤ 1 is the coefficient taking into account the loss of sensitivity in the receiving and recording channels of the radio telescopes and in the correlator of the interferometer; q = T_s/T_n is the ratio of the received signal noise temperature T_s to the temperature T_n of the radio telescope set noise at the RR input; and t_o is the source observation (scanning) time [8]. Subscripts 1 and 2 indicate the serial numbers of the interferometer radio telescopes. VLBI measurements are usually performed at R > 17.
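As a quick numerical check of this relation as reconstructed above, the sketch below evaluates the peak correlation SNR for a pair of telescopes; the function name and the example values of χ, q and the scan time are illustrative assumptions, not parameters taken from the article.

```python
import math

def correlation_snr(chi, q1, q2, delta_f_hz, t_obs_s):
    """Peak SNR of a single-channel interferometer,
    R = chi * sqrt(2 * delta_f * t_obs * q1 * q2)."""
    return chi * math.sqrt(2.0 * delta_f_hz * t_obs_s * q1 * q2)

# Illustrative numbers: chi = 0.92 * 0.76 (analog extraction, four-level
# quantization), q1 = q2 = 0.01, a 16 MHz channel and a 60 s scan.
print(round(correlation_snr(0.92 * 0.76, 0.01, 0.01, 16e6, 60.0), 1))
```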
Let us represent the coefficient of hardware sensitivity loss by the product χ = χ_E χ₀, where the first term (χ_E) represents the losses in the broadband receiving and amplifying channels, as well as in the device for separating the recorded base band frequency signals, while the second term (χ₀) represents the losses in the digital processing and correlation devices of the extracted base band frequency signals.
The χ₀ value is determined mainly by losses arising during amplitude quantization of the digital samples of the noise signal (12 or 36.3 % for four- or two-level quantization, respectively), as well as by losses involved in the correlation processing of the base band frequency signals, together accounting for about 13 % [9]. For radio interferometers with narrowband channels, including those in the Quasar VLBI Network with R1002M systems, as a first approximation, χ₀ ≈ 0.76 or χ₀ ≈ 0.55 can be taken for four- or two-level quantization, respectively. These χ₀ values remain valid for an interferometer based on digital signal extraction systems, since the methods of amplitude quantization, formatting and correlation processing of narrowband signals remain the same.
For assessing the quality of the signal extraction channels, it is sufficient to compare the χ_E loss coefficients for interferometers with digital signal extraction systems (χ_D) and those with analog systems (χ_A). In RT-32 radio telescopes, the RR is connected to the DAS by a coaxial transmission line containing power amplifiers with corrections for the nonuniformity of attenuation in the 0.1…1 GHz IF band (Fig. 1). In the DAS, the IF signal is distributed over base band converters, each of which comprises a quadrature frequency converter (QFC) equipped with diode mixers and ∆F-band analog low-pass filters (LPF), a pair of analog-to-digital converters (ADC) and a digital phase signal splitter (PSS) separating the signals of the upper and lower sidebands. After four-level quantization of the amplitudes, the digital signals with ∆F bands are fed to the data formatter of the Mark 5B+ recording terminal [20], followed by transmission of the observation data to the correlation processing center.
When the χ_A coefficient is calculated for an interferometer with an R1002M DAS, account is taken of the losses associated with the distortion of signals in the receiving-amplifying channel from the input of the RR to the ADC in the DAS base band converter. In general, η_i denotes the losses related to the i-th factor (Fig. 1). Losses of about 3 % result from distortion of the signal by the phase noise of the super-high-frequency heterodynes of the RR. Distortions of the narrowband signal in the RR with a wide passband can be neglected, since the amplitude-frequency characteristic (AFC) and phase-frequency characteristic (PFC) of the receiving channel are formed by the narrowband low-pass filter of the base band converter. In the IF signal transmission line, due to the non-uniform AFC of the power amplifiers and the residual slope in the AFC of the coaxial cable (uncompensated attenuation nonuniformity), signals with ∆F bands in individual frequency channels are susceptible to distortion. The distortion of the channel AFC due to the slope of the spectrum leads to a loss of interferometer sensitivity of up to 2 %. A significant loss of interferometer sensitivity may occur due to the non-identity of the AFCs of the analog filters in the base band converters of an interferometer radio telescope pair. Technological variation in the filter parameters, temperature changes and aging of circuit elements can also result in ripple and slope of the AFC in the channel passband. In R1002M DAS base band converters, the AFC identity and PFC linearity of the channels are significantly improved by digital filters forming the ∆F band and by a digital PSS separating the sideband signals with an isolation of more than 42 dB. The noise of the mirror channel is practically eliminated, while moire noise is suppressed by a pre-filter (switchable filters) at the input of the base band converter. Nonlinear distortion of signals in a channel with digital filters is also practically absent. The quantization noise of the analog signal can be neglected when the number of ADC bits is at least 8. Losses arising for the aforementioned reasons account for about 2 %. Minor (about 1 %) sensitivity losses occur due to the noise of the heterodyne signals, the RMSD of which is reduced to 2° [3]. The loss of interferometer sensitivity due to signal distortion in the R1002M DAS thus comprises about 3 % in total.
In general, for an interferometer with analog signal extraction channels of base band frequencies, the coefficient of hardware sensitivity loss can be taken as χ_A ≈ 0.92 χ₀. In interferometers based on digital conversion systems for broadband signals, the ADC operates at a sampling frequency of F_d = 2048 MHz. From the received high-speed (broadband) digital signal, narrowband signals are extracted using an FPGA-based PPF module and BBC (Fig. 2). The digital input signal with sampling frequency F_d is distributed by a demultiplexer (DM) over the N channels of the PPF, decreasing the frequency to the FPGA clock frequency F_т ≤ 550 MHz.
Complex signals at the PPF outputs are divided into N real signals with B_s = B₀/N bands by the splitters (PSS) on phase-shifting filters. From the obtained band signals, signals with the specified ∆F bands are extracted by the BBCs. The selected signals are quantized in amplitude and formatted similarly to those in the R1002M DAS channels.
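The sketch below illustrates the PPF idea with a minimal critically sampled polyphase filter-bank channelizer in Python; the prototype-filter design, tap counts and test-tone frequency are illustrative assumptions, and the real FPGA module additionally applies the code-truncation and sideband-separation logic described in the text.

```python
import numpy as np

def ppf_channelize(x, n_ch=8, taps_per_branch=8):
    """Minimal critically sampled polyphase filter bank: split a real
    stream x sampled at F_d into n_ch complex sub-band streams of
    width F_d / n_ch each."""
    L = n_ch * taps_per_branch
    n = np.arange(L) - (L - 1) / 2.0
    h = np.sinc(n / n_ch) * np.hamming(L)      # prototype low-pass FIR
    h /= h.sum()
    rows = len(x) // n_ch
    x = x[: rows * n_ch].reshape(rows, n_ch)   # one row per output step
    hp = h.reshape(taps_per_branch, n_ch)      # per-branch sub-filters
    out = []
    for k in range(taps_per_branch - 1, rows):
        window = x[k - taps_per_branch + 1 : k + 1]
        branch = (window[::-1] * hp).sum(axis=0)   # polyphase filtering
        out.append(np.fft.fft(branch))             # sub-band samples
    return np.asarray(out)

# A 256 MHz tone sampled at F_d = 1024 MHz lands in sub-band 2
# (its real-signal image appears in sub-band 6).
fd = 1024e6
t = np.arange(16384) / fd
sub = ppf_channelize(np.cos(2 * np.pi * 256e6 * t))
print(np.argmax(np.abs(sub).mean(axis=0)))   # -> 2
```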
In FPGA-based radio astronomy equipment, it is convenient to use BBCs operating at a clock frequency of F_т = 128 or 256 MHz [11], which are tuned by digital heterodynes [21] within frequency bands up to 64 or 128 MHz, respectively. When heterodynes with a clock frequency of 512 MHz are used [22], the tuning range of the BBC is expanded to 256 MHz. However, in the latter case, preliminary filtering of the input broadband signal is also necessary.
During polyphase filtering, the signals are distorted in the near-zero frequency range and at frequencies that are multiples of F_т. Distortions also occur near frequencies that are multiples of 0.5F_т, where the signal spectra partially overlap at the edges of adjacent B_s bands. Thus, in order to be capable of extracting signals of any frequency without distortion, the module computes the output codes of the PPF channels, where n is the serial number of the output signal code, L is the order of the filters with weight functions h_n(r), and j is the imaginary unit. The weight function h_n(r) determines the distribution of energy between the main and side lobes of the spectral function of the output signal. In RT-13 radio telescopes, the digital broadband IF converter is located next to the RR and connected by a fixed short (less than 1.5 m) coaxial cable. The signal spectrum at the ADC input is formed by a broadband IF filter. Here, the sensitivity losses associated with distortions of the signals in IF transmission lines are eliminated; however, the losses (up to 3 %) due to signal distortion in frequency converters by heterodyne phase noise are still present. All filters in the signal extraction channel are digital, thus ensuring high stability of the receiving and recording channel parameters during antenna movement and changes in external climatic conditions. Therefore, the PFC linearity is guaranteed, and distortions in the AFC shape of the receiving and recording channel are minimized (ripple and slope of the AFC, deviations in the passband, frequency tuning shift). The loss of sensitivity due to the AFC non-identity of the interferometer channels is lower than 0.3 %. The AFC side lobes of the PPF channel are attenuated by 30 dB. Due to out-of-band noise penetrating through the side lobes, the signal-to-noise ratio in the 8-channel PPF decreases by about 0.7 %.
In digital signal extraction systems, insignificant losses appear due to the bit-depth limitations (truncations) of the codes in the PPF, PSS and BBC. In the calculations according to (1), the products are summed; the codes obtained at this stage are truncated to 9 bits. At the stage of multiplying these codes by 16-bit codes of the exponential function and adding the N = 8 obtained results, the output signal codes are truncated to 12 bits. As a result of the code truncations, the signal-to-noise ratio in the PPF channel decreases by 0.16 %. A decrease in the bit depth of the codes to 14 in the band signal phase selector does not result in any noticeable loss of sensitivity. In the BBC heterodyne, the resolution of the current phase codes decreases to 10, corresponding to an RMS phase noise of the heterodyne signal of 0.1°. Losses associated with such a phase noise are negligible. Amplitude fluctuations of heterodyne signals represented by 10-bit codes have little effect on the signal-to-noise ratio in the frequency converter. Almost no change in the signal-to-noise ratio of the base band frequency signals is observed at the outputs of the QFC when their bit depth is limited to 15. The total sensitivity loss introduced by the digital narrowband signal extraction module does not exceed 0.5 %.
In the digital module, the total decrease in the signal-to-noise ratio is significantly lower than the loss resulting from signal distortion by the phase noise of the RR heterodynes. Taking into account all losses, for an interferometer with digital narrowband signal extraction systems the loss coefficient can be taken as χ_D ≈ 0.96 χ₀. For an interferometer with the same antennas and RR, but with different types of narrowband signal extraction systems, χ ≈ 0.94 χ₀. The sensitivity of interferometers can therefore be slightly increased (up to 4 %) by replacing the standard R1002M DAS in RT-32 radio telescopes with the considered modules for digital extraction of narrowband signals. A slight improvement in sensitivity has little effect on the accuracy of determining the interferometric group delay τ of the received radio signal. At R > 17, the greatest effect is produced by factors unrelated to the signal registration system, including errors in tracking systems for Doppler frequencies and ephemerides, errors in measuring group delays of signals in the receiving and recording channels of radio telescopes, and discrepancies in the time scales of the data formatters of the radio telescopes. In addition, for angular and coordinate-time measurements by VLBI methods, account should be taken of the state of the atmosphere; however, the accuracy of such corrections may be insufficient.
A radio telescope with a digital signal extraction module can be operated both in a multichannel interferometer mode with registration of narrowband signals and in an interferometer mode registering broadband IF signals. During the VLBI observations conducted at the Zelenchukskaya and Badary observatories, the parameters of a standard Quasar radio interferometer (two RT-32 radio telescopes with R1002M DAS) and of an interferometer with different types of radio telescopes (an RT-32 with a DAS and an RT-13 with a digital signal extraction module) were compared. The tests confirmed the possibility of integrating radio telescopes with different systems of signal conversion into a VLBI network and the possibility of operating an RT-13 radio telescope in the Quasar VLBI Network.
Accuracy assessment of a multichannel interferometer providing digital signal extraction. For an M-channel radio interferometer with the synthesis of a wide frequency band, the RMSD of the interferometric delay calculated by the correlator is defined in [8]. The observation frequencies are chosen such that it is possible to extrapolate the signal phases from one frequency to another without 2π uncertainty and to construct a linear dependence of the recorded signal phases on frequency with the greatest possible accuracy. In one of the recommended options, the frequency spacing between adjacent channels doubles as the channel number r increases [8]. In the RT-13 radio telescope, the former option can be implemented using one RR channel and two channels of the standard digital signal recording system; the latter option requires one channel with a 512 MHz band.
In the synthesis of a frequency band not exceeding the passband of the receiving channel, the mean square error of the M-channel interferometer is always greater than that of a single-channel interferometer with a B₀ recording band, as defined in [9]. Provided that the interferometer contains m parallel channels registering broadband IF signals, the RMSD of the calculated interferometric group delay decreases after averaging the m results. Based on (2) and (3), for the synthesis of a frequency band within the passband of the receiving channel, relation (4) is formed. This formula can be used to determine the minimum value of the synthesized frequency band above which the accuracy of determining the interferometric delay exceeds that obtained by an interferometer with broadband channels without band synthesis.
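A common rule of thumb consistent with the scaling discussed here is that the group-delay RMSD falls off as the inverse of both the SNR and the effective synthesized bandwidth, σ_τ ≈ 1/(2π R B_eff). The sketch below uses this generic estimate; the function name and the SNR value are illustrative assumptions, not quantities from relations (2)–(4).

```python
import math

def delay_rmsd_ps(snr, b_eff_hz):
    """Rule-of-thumb group-delay RMSD, sigma_tau ~ 1/(2*pi*SNR*B_eff),
    returned in picoseconds."""
    return 1e12 / (2.0 * math.pi * snr * b_eff_hz)

print(round(delay_rmsd_ps(20, 512e6), 2))   # one 512 MHz broadband channel
print(round(delay_rmsd_ps(20, 2.5e9), 2))   # 2.5 GHz synthesized X-band span
```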
Results. The use of the digital method for extracting narrowband signals at radio telescopes provides a minor (about 4 %) reduction in the sensitivity losses of the radio interferometer, without affecting the accuracy of measuring the interferometric group delay of the signal. When analog narrowband signal extraction systems are replaced with digital systems, the accuracy of a multichannel radio interferometer with frequency band synthesis remains unchanged.
As follows from (4), synthesizing a sufficiently wide frequency band makes the multichannel interferometer more accurate than the broadband one. One direction in the development of VLBI (the international projects VLBI2010 and VGOS) involves the synthesis of a frequency band significantly exceeding 1 GHz. The antenna irradiator and three-channel RR of the RT-13 radio telescope, with their bandwidths [19], allow a frequency band of up to 2.5 and 6 GHz to be synthesized in the X and K wavelength ranges, respectively. This task can be achieved by using three RR channels, four ADCs of a standard signal recording system with bands of 512 MHz, and a module for digital extraction of narrowband signals (Fig. 4). The signal extraction device is realized in an FPGA of the XC7K325T type. After the extracted signals with ∆F bands are formatted according to the VDIF standard, an Ethernet 10G data stream is generated and transmitted through an X2 electro-optical transceiver to the radio telescope server and to the center of correlation data processing.
The RR 1 channel, which extracts a broadband signal in the lower part of the operating frequency range, is connected to two ADCs through filters with adjacent passbands (1024–1536 and 1536–2048 MHz). It is sufficient to connect one ADC with a 1024–1536 MHz filter to each of the two remaining RR channels.
In the X frequency range, when synthesizing a frequency band of up to 2.5 GHz, two RR channels with 1 GHz bandwidths, three ADCs digitizing the signals, three PPF modules and seven BBCs can be used (Fig. 5, a). In the K frequency range, a frequency band of up to 6 GHz can be synthesized using three RR channels, four PPF modules and seven BBCs with ∆F = 16 MHz bands (Fig. 5, b). At w = 45 MHz and a similar arrangement of the signals in frequency, the number of extracted signals could potentially be increased to 8. However, due to the absence of a fourth RR channel, the registration is limited to 7 signals (excluding the signal at a frequency of …). The highest sensitivity is achieved with the interferometer operating in the registration mode of broadband IF signals. However, in the mode of registering several narrowband ∆F signals, the total speed of the data stream transmitted to the correlation data processing center decreases significantly, thus permitting the connection of the radio telescope to VLBI networks using narrowband signal correlators [8]. For example, in an interferometer recording 8 signals with 16 MHz bands (see Fig. 5, b), the total speed of the information stream under four-level quantization is 512 Mbit/s. An interferometer recording 4 signals with 512 MHz bands produces a stream with a total speed of 8192 Mbit/s. An increase in the speed of the data stream imposes stricter requirements on radio telescope servers, fiber-optic communication lines between radio telescopes and data processing centers, as well as VLBI correlators.
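The two stream rates quoted above follow directly from Nyquist sampling of real signals and the quantization depth; the minimal calculator below (function name assumed) reproduces them.

```python
def stream_rate_mbps(n_signals, band_hz, bits_per_sample=2):
    """Aggregate recorded data rate for real signals Nyquist-sampled at
    2*band and quantized to bits_per_sample (2 bits = four-level)."""
    return n_signals * 2 * band_hz * bits_per_sample / 1e6

print(stream_rate_mbps(8, 16e6))    # 512.0  Mbit/s  (Fig. 5, b case)
print(stream_rate_mbps(4, 512e6))   # 8192.0 Mbit/s  (broadband case)
```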
Discussion. The obtained results suggest the advisability of installing digital signal conversion systems on the RT-32 and RT-70 (Ussuriysk) radio telescopes instead of the standard R1002M DAS with analog extraction of narrowband signals. This replacement will slightly improve the sensitivity of the interferometer (by approximately 4 %), while leaving the accuracy of measuring the interferometric group delays of the received signals practically unchanged. Moreover, in digital systems, the complex channels for amplifying and transmitting broadband analog IF signals are replaced with fiber-optic digital signal transmission lines. Therefore, in terms of performance and reliability, digital systems provide distinct advantages.
In addition, the use of the developed digital system allows radio telescopes to be operated in the registration mode of broadband IF signals, thus significantly increasing the sensitivity of interferometers and expanding the list of available reference sources used in VLBI observations.
After completion of the ongoing development of antenna irradiators and ultra-wideband receivers for RT-32 radio telescopes, it will be possible to synthesize frequency bands wider than 1 GHz and to improve the accuracy of VLBI measurements.
The developed method for digital extraction of narrowband signals from the IF band is applied in a new multifunctional signal conversion and registration system aimed at upgrading the existing radio telescopes of the Russian Quasar VLBI Network and equipping novel compact radio telescopes [24]. This system is capable of rapidly switching between modes of radio astronomy observations. | 2020-04-30T09:08:02.985Z | 2020-04-28T00:00:00.000 | {
"year": 2020,
"sha1": "14d0ec9c6bedfcba95a356134dde511f3dd39c7f",
"oa_license": "CCBY",
"oa_url": "https://re.eltech.ru/jour/article/download/413/439",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d29f57e4109647a5c86febbf8ea60d916b98d7b7",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": []
} |
116940719 | pes2o/s2orc | v3-fos-license | Threshold current for switching of a perpendicular magnetic layer induced by spin Hall effect
We theoretically investigate the switching of a perpendicular magnetic layer by in-plane charge current due to the spin Hall effect. We find that, in the high damping regime, the threshold switching current is independent of the damping constant, and is almost linearly proportional to both effective perpendicular magnetic anisotropy field and external in-plane field applied along the current direction. We obtain an analytic expression of the threshold current, in excellent agreement with numerical results. This expression can be used to determine the physical quantities associated with spin Hall effect, and to design relevant magnetic devices based on the switching of perpendicular magnetic layers.
The spin transfer torque [1,2] can cause current-induced magnetization switching in magnetic tunnel junctions (MTJs) consisting of an insulator sandwiched by two ferromagnetic layers; a freely switchable layer (= free layer) and a fixed layer. An electric current running perpendicular to the layers is spin-polarized by the fixed layer and transfers its spin angular momentum to the free layer. This spin-angular-momentum transfer can exert a spin torque large enough to switch the free-layer magnetization. Numerous studies on this subject have addressed its fundamental physics [3][4][5][6][7][8][9][10][11][12], and explored its potential applicability to magnetic random access memories (MRAMs) [13][14][15][16][17]. Up to now, most studies have focused on the current-perpendicular-to-plane (CPP) geometry described above.
Several experiments have shown recently that it is also possible to switch the magnetization in current-in-plane (CIP) geometry [18][19][20]. Liu et al. [19,20] demonstrated that an in-plane current flowing in a heavy metal layer attached to a free layer can selectively switch the free-layer magnetization, and reported that these results can be quantitatively explained by spin torque from the spin Hall effect. In ferromagnet|nonmagnet bi-layer systems, an in-plane charge current passing through the nonmagnet is converted into a perpendicular spin current due to the spin Hall effect [21]. The ratio of spin current to charge current is parameterized by spin Hall angle. This spin current, injected perpendicularly to the free layer, transfers its spin-angular-momentum and exerts a spin torque to the free layer magnetization as in the CPP geometry.
Two magnetic configurations have been tested for the magnetization switching induced by the spin Hall effect. One is the in-plane free-layer configuration, where the free layer has in-plane magnetic anisotropy [19], and the other is the perpendicular free-layer configuration, where the free layer has perpendicular magnetic anisotropy [18,20]. For the in-plane free-layer configuration, the in-plane charge current running in the x direction yields a spin current in the z direction due to the spin Hall effect (see Fig. 1(a) for the coordinate system). The spins injected into the ferromagnet (= free layer) are aligned in the ±y direction, generating a damping or anti-damping torque when the magnetic easy axis of the free layer is in the y direction. In this CIP configuration, the threshold current for free-layer switching has the same form as in the conventional CPP geometry with in-plane free and fixed layers, I_SW ∝ (α M_S t_F/θ_SH)(H_K,in + N_d M_S) [3,4,20], where α is the damping constant, M_S is the saturation magnetization, t_F is the thickness of the free layer, θ_SH is the effective spin Hall angle of the system, H_K,in is the in-plane magnetic anisotropy field, and N_d is the demagnetization factor, which depends on the patterned shape of the free layer.
For the perpendicular free-layer configuration, in addition to an in-plane current, an external in-plane field H_x along the direction of current flow should be applied for deterministic switching [18,20]. Although this in-plane field does not favor either perpendicular magnetic orientation by itself, it breaks the symmetry in the response to the spin torque and provides deterministic switching [20]. In this case, however, an explicit expression for the threshold switching current like Eq. (1) has not been reported yet. To design and interpret experiments based on the spin Hall spin torque, it is of critical importance to find such an analytic expression.
In this Letter, we present an analytic expression for the threshold switching current in the perpendicular configuration of the CIP geometry. The expression is verified by comparison with numerical results obtained from macrospin simulations. We study the switching of a perpendicular nanomagnet on top of a heavy metal layer carrying an in-plane current (left panel of Fig. 1(a)). To gain insight into the perpendicular switching induced by the spin Hall effect, we numerically solve the modified Landau-Lifshitz-Gilbert (LLG) equation [20], dm/dt = −γ m × H_eff + α m × dm/dt + γ H_J m × (σ × m), where γ (= 1.76×10⁷ Oe⁻¹ s⁻¹) is the gyromagnetic ratio, H_eff is an effective magnetic field including an effective perpendicular anisotropy field H_K,eff (= H_K − N_d M_S) and an external in-plane field H_x, σ is the unit vector of the injected spin polarization, H_J is the spin Hall torque amplitude, and θ_SH (= 0.3 [22]) is the effective spin Hall angle. The switching process in this CIP geometry is completely different from that in the conventional CPP geometry. Figure 1(b) shows the temporal change of the magnetization components at a switching current in the CIP geometry. For comparison, the case of a conventional CPP geometry with perpendicular free and fixed layers is shown in Fig. 1(c). For each case, a current corresponding to the threshold switching current is applied. An interesting difference can be found in the switching process: the switching occurs via many precessions in the CPP geometry, as is well known (Fig. 1(c)), whereas almost no precession is observed for the switching in the CIP geometry (Fig. 1(b)). In the CPP geometry, the spin torque has the same form as, and directly competes with, the damping torque. By contrast, the situation in the CIP geometry resembles that of a structure consisting of a perpendicular fixed layer and an in-plane free layer with the current running perpendicular to the plane [23,24]. In that case, the threshold current is independent of the damping constant and proportional to the anisotropy field of the free layer. Consequently, one may expect that the switching current for the CIP geometry studied in this work is also independent of the damping constant. To check this, we conducted simulations of the magnetization dynamics with a current pulse (pulse width of 5 ns and rise/fall time of 0.5 ns), varying the damping constant. Figure 2(a) shows the switching current as a function of the damping constant. For high damping (α > 0.03), the switching current I_SW is independent of α, as expected. However, for low damping (α < 0.03), the switching current jumps randomly between two levels. This is because the magnetization direction at t = 5 ns deviates significantly from its equilibrium direction, m_z ≈ ±1 (i.e., |m_y| >> |m_z|). For α = 0.03 (Fig. 2(c)), m_z eventually goes to −1 and thus switching occurs. Note that m_z is slightly negative at t = 5 ns, so the magnetization is on the downhill slope towards the point m_z = −1 in the energy landscape. For high damping, the magnetization moves towards the energy minimum (m_z ≈ −1) due to the high energy dissipation rate. However, the energy dissipation rate for low damping is relatively small, giving rise to nonlinear dynamics. For example, when α = 0.028 (Fig. 2(b)), m_z switches back to +1. This switching-back may be understood as a highly nonlinear precession dynamics disturbing the complete switching. The reason for the two-level fluctuations of the switching current in the low-damping regime (Fig. 2(a)) is not clearly understood but may be related to the period-doubling bifurcation of chaos theory.
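A minimal macrospin integrator along these lines is sketched below; the field values, the spin Hall torque amplitude H_J and the Euler time step are illustrative assumptions rather than the parameters used in the paper, and sign conventions for the torque vary in the literature.

```python
import numpy as np

gamma = 1.76e7          # gyromagnetic ratio, 1/(Oe s), as in the text
alpha = 0.1             # Gilbert damping (high-damping regime)
Hk_eff = 1000.0         # effective perpendicular anisotropy field, Oe (assumed)
Hx = 100.0              # in-plane field along the current, Oe (assumed)
HJ = 600.0              # spin Hall torque amplitude, Oe (assumed)
sigma = np.array([0.0, 1.0, 0.0])   # injected spin polarization (+/- y)

def dm_dt(m):
    """LLG with a damping-like spin Hall torque, solved for dm/dt:
    dm/dt = (T + alpha m x T)/(1 + alpha^2), with
    T = -gamma m x H_eff + gamma HJ m x (sigma x m)."""
    h_eff = np.array([Hx, 0.0, Hk_eff * m[2]])   # anisotropy + in-plane field
    torque = -gamma * np.cross(m, h_eff) \
             + gamma * HJ * np.cross(m, np.cross(sigma, m))
    return (torque + alpha * np.cross(m, torque)) / (1.0 + alpha**2)

m = np.array([0.01, 0.0, 1.0]); m /= np.linalg.norm(m)  # start near +z
dt = 1e-13
for _ in range(100000):                 # 10 ns of dynamics
    m += dt * dm_dt(m)                  # simple Euler step
    m /= np.linalg.norm(m)              # keep |m| = 1
print(m)                                # a reversed m[2] signals switching
```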
The results shown in Fig. 2 suggest that high damping is required for practical applications of CIP switching. We note that the damping constant of perpendicular magnetic materials is usually large, and the heavy metal in contact with the nanomagnet can further increase the damping through the spin pumping effect [25] or the spin motive force [26]. For instance, Mizukami et al. [27] reported experimental results showing that the damping constant of a Pt|Co|Pt structure with perpendicular anisotropy is larger than 0.1. Based on this, we assume high damping (α = 0.1) in the remaining part of this paper.
The damping-independent switching current in the high-damping regime indicates that the threshold switching current can be obtained from a static solution of Eq. (2), which yields Eq. (5). When H_x << H_K,eff, Eq. (5) is further simplified to J_SW = (2e/ħ)(M_S t_F/θ_SH)(H_K,eff/2 − H_x/√2) (Eq. (6)). Equations (5) and (6) were compared with numerical results. We find that when SH > 40, the errors are less than 2 % for Eq. (5) and 4 % for Eq. (6), respectively. These errors may be comparable to or even less than the inaccuracy in experiments. We also test the applicability of Eqs. (5) and (6) for various D (10 nm to 30 nm) and t_F (1 nm to 3 nm), and find that the analytic expressions describe the threshold switching current density with sufficiently good accuracy, as in the example shown in Fig. 3(c). Therefore, Eqs. (5) and (6) can be used over a wide range of device parameters. We note that Eq. (5) is applicable to any ferromagnet|nonmagnet bilayer structure when the nonmagnetic layer is able to supply a sufficient spin Hall spin current. The nonmagnetic layer could be either a heavy metal such as Pt, Ta, and W [19,20,22] or an alloy consisting of light host materials and heavy metal impurities, such as CuIr [28] and CuBi [29]. The magnitude and sign of the spin Hall angle as well as the damping constant of the bilayer structure strongly depend on the properties of the nonmagnetic layer. For a nonmagnetic layer with the opposite sign of the spin Hall angle, one finds exactly the same symmetry of the torques by reversing the direction of H_x. Concerning the high-damping condition, the authors of Ref. [20] reported that there are two switching boundaries depending on H_x: one occurs for a small H_x (type I), and the other occurs for a large H_x (type II). We found that the high-damping condition for the controllable switching described in Fig. 2(a) is required only for type I. Therefore, if a bilayer structure has a low damping constant, one can simply increase H_x to avoid uncontrollable switching. The different switching boundaries for the different types of switching in Ref. [20] are caused by a very slow ramping rate of the current [30]. In our case, we assumed a fast ramping rate (i.e., current rise time = 0.5 ns), which is a reasonable assumption for realistic device applications. We found that in this fast-ramping condition, the switching current of both types of switching is successfully described by Eq. (5).
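To visualize the reconstructed linear form, the sketch below tabulates the relative threshold reduction with H_x; it presumes the H_K,eff/2 − H_x/√2 dependence quoted above and is only meaningful for H_x << H_K,eff.

```python
import numpy as np

def jsw_ratio(hx_over_hk):
    """Threshold current relative to its H_x -> 0 value for
    J_SW proportional to H_K,eff/2 - H_x/sqrt(2):
    J_SW(H_x)/J_SW(0) = 1 - sqrt(2) * H_x/H_K,eff."""
    return 1.0 - np.sqrt(2.0) * np.asarray(hx_over_hk)

for r in (0.0, 0.05, 0.10, 0.20):
    print(f"H_x/H_K,eff = {r:.2f} -> J_SW ratio = {jsw_ratio(r):.3f}")
```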
We next discuss the usefulness of the perpendicular switching induced by the spin Hall effect. It has also been argued that a Rashba-type spin-orbit coupling is mainly responsible for the spin torque resulting in the perpendicular switching induced by an in-plane current [18]. The existence of this Rashba effect in metallic ferromagnets is a subject under extensive discussion [36][37][38][39][40][41][42]. Although we do not consider the Rashba effect in this work, further investigation of this possibility may be valuable.
To summarize, we investigate the threshold switching current density for the in-plane current-induced perpendicular switching of magnetization due to the spin Hall effect. We find that the switching current in the high-damping regime is independent of the damping constant and is in an almost linear relation with both the effective perpendicular anisotropy field and the external magnetic field applied along the current direction. We derive an explicit analytic expression for the threshold switching current and verify its applicability by testing various cases numerically.
This expression will be of importance for both fundamental physics and applications, since it can be used to estimate essential physical quantities such as the spin Hall angle and to design practical devices utilizing the spin Hall effect. | 2019-04-13T04:03:15.193Z | 2012-10-12T00:00:00.000 | {
"year": 2012,
"sha1": "8ddb03f0f0094b4dd70fad1266650c0b3cc1d62e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1210.3442",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "da14d3a12c9c1eb0d6c2a5626c7ec18d8ceac85a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
218817598 | pes2o/s2orc | v3-fos-license | The efficiency of the phytoremediation process combination of horsetail plants (Equisetum hyemale) and natural filtration media to reduce the concentration of iron (Fe) in the leachate of Cilowong’s Landfill Area of Banten Province
This research aims to determine the results of leachate sample testing and the efficiency of the decrease in iron (Fe) concentration in the phytoremediation process using horsetail plants (Equisetum hyemale) and the filtering process with zeolite, activated charcoal, and palm fibre. After testing a number of samples, a heavy metal Fe content with a concentration of 7.20 mg/L was found. The wastewater quality standard for activities at the landfill area, based on Regulation of the Minister of Environment No. 5 of 2014 concerning the quality standard of wastewater for businesses and/or activities that do not yet have a stipulated wastewater quality standard, is 5 mg/L. Based on previous research, the horsetail plant was selected as the phytoaccumulator because it has high absorption capacity and is resistant to various external influences. Based on measurements of leachate samples treated experimentally in leachate ponds with the phytoremediation process and filtration media, a 54% reduction in the iron (Fe) concentration was obtained relative to the total iron (Fe) concentration of the leachate before the phytoremediation process. The utilization of the horsetail plant in overcoming environmental pollution is expected to be developed into an environmentally friendly and inexpensive alternative, so that it can be applied optimally to the management of leachate in landfills.
Introduction
Leachate treatment technology at the landfill area still uses pond-system technology, consisting of a storage pond, an anaerobic pond, an aerobic pond and a stabilization pond. The concentrations of several types of heavy metals contained in landfill leachate are very toxic and dangerous for humans and the surrounding environment.
Based on the Regulation of the Minister of Environment and Forestry of the Republic of Indonesia Number P.59/Menlhk/Setjen/Kum.1/7/2016 concerning leachate quality standards for businesses and/or activities at landfill areas, the landfill area produces leachate with the potential to pollute the environment, so that leachate treatment is needed before discharge to environmental bodies.
Based on the results of preliminary checking of leachate in Cilowong's landfill area for the metal parameters Fe, Cr, Cd and Pb, the Fe content had the highest level and exceeded the wastewater quality standard. Industrial growth in the Serang Regency area is estimated to affect the heavy metal Fe content of the waste generation located in Cilowong's landfill area, considering that access to the landfill area is currently still shared by Serang City and Serang District. Some factories, such as pharmaceutical and chemical processing plants, are likely to influence the characteristics of solid waste generation with very high Fe content, so that the Fe concentration in leachate also increases. According to Ronquillo (2009) in Dhimas Firmansyah et al. (2013), dissolved iron can be in the form of suspended compounds, such as colloidal grains of Fe(OH)3, FeO, Fe2O3 and others. If the concentration of dissolved iron in water exceeds the limit, it will cause various problems, namely technical disturbances in the form of corrosive deposits, physical disturbances in the form of bad color, odor and taste, and health problems that can cause nausea, damage the intestinal wall and irritate the eyes and skin.
To reduce the Fe levels contained in leachate, a suitable treatment is necessary. This study uses phytoremediation with horsetail plants, with 12, 18 and 24 plants in the respective ponds. Ponds containing mud were planted with horsetail, and leachate water testing with these variations in the number of plants was carried out from the seventh to the fourteenth day. Leachate water testing was conducted at the Serang District Health Service, Integrated Health Laboratory Service Unit, using an AAS (Atomic Absorption Spectrophotometer) Shimadzu AA-7000. The use of horsetail plants is expected to reduce the Fe levels contained in the leachate. The phytoremediated leachate is then filtered, with the filtration process using a tub containing zeolite media, activated charcoal, and palm fibers. The phytoremediation and filtration methods are combined in order to compare their efficiencies and determine which method has the greater effect on decreasing the Fe content of the leachate itself.
The problems addressed in this scientific paper are: 1) What are the test results and differences in leachate Fe concentration before and after treatment with the phytoremediation process using horsetail plants (Equisetum hyemale) and the filtration media? 2) What is the efficiency of the leachate water treatment in decreasing the iron (Fe) concentration after treatment in each pond, both with the phytoremediation by horsetail plants and with the filtration media?
The objectives of this scientific paper are: to determine the ability and efficiency of the horsetail plant as a phytoaccumulator in reducing the leachate iron (Fe) concentration; to determine the effectiveness of the filtering reactor pond, through the composition of zeolite, activated charcoal, and fibers, on the efficiency of reducing the leachate iron (Fe) concentration after the phytoremediation process; and to identify the physical changes that might occur in leachate after receiving treatment in the ponds.
Method
This research is a laboratory-scale experimental study of the efficiency of decreasing the iron (Fe) concentration of leachate, using experiments in leachate water ponds with the phytoremediation process of horsetail plants (Equisetum hyemale) and the use of filter media. The data obtained are processed in a quantitative descriptive manner.
The subjects in this study were leachate originating from Cilowong's landfill area in Serang City, Banten Province, which was treated with a Constructed Wetlands reactor to reduce the specific Fe heavy metal content of the leachate through phytoremediation with horsetail plants and water filter media in the form of zeolite, activated charcoal, and palm fibers. The research takes the form of leachate water treatment through phytoremediation using prototype Constructed Wetland (artificial wetland) leachate ponds filled with mud and leachate, with 12, 18 and 24 horsetail plants planted in the respective ponds. Leachate measurements were carried out from the 7th day to the 14th day, with samples taken twice a day, at 10 am and 10 pm.
In the filtration pond using zeolite, activated charcoal, and fibers, leachate that has been processed in the phytoremediation stage is passed into the filtration pond. Each test was carried out at the Regional Technical Implementation Unit of the Regional Health Laboratory of the Serang District Health Office with the AAS-7000 series atomic absorption spectrophotometer; prior to input into the spectrophotometer, sample preparation is required.
General conditions of the sample
Initial extraction and testing of Fe concentrations in the leachate and mud samples were conducted separately, in order to be able to compare the leachate before and after the treatment. In addition, differences in air temperature, soil type and topography between the location of Cilowong's landfill area and the research site will affect the changes that occur in the leachate water.
The mud placed into the reservoir was not sterilized, so the iron (Fe) content of the mud was very high. This reflects the environmental and regional conditions as well as the characteristics and type of the soil, which is clay. The measured Fe concentration of the mud was 14.128 mg/L.
Concentration of Iron (Fe) leachate before treatment
Initial sample testing was carried out to determine the condition and characteristics of the leachate before the phytoremediation process in the reactor pond. Sampling was done by mixing leachate water and mud in 1000 ml sample bottles. The type of sample taken for initial testing is the same as that used for the phytoremediation process in the treatment reactor. The preliminary test results for the Fe concentration of leachate before treatment are presented in Table 1 below. Based on these results, the Fe concentration before treatment reached 13.5867 mg/L, higher than the average of 7.20189 mg/L obtained in the preliminary tests at the seven sampling points of Cilowong's landfill area. This is because the mud already contains Fe, which affects the condition of the leachate water when the two are mixed in the pond.
The concentration of iron (Fe) leachate in phytoremediation pond
The results of testing the iron (Fe) concentration in the three leachate ponds mixed with mud/wet soil media, with the same variables but different numbers of plants and a stay time of 7 days with 2 samplings per day, are summarized in Table 2 below. Based on Table 2, the iron (Fe) concentration in the first pond tended not to decrease; on the 7th sampling there was even an increase in the Fe concentration, reaching 12.9613 mg/L. In the second pond, the results showed an increase from the 4th day to the 7th day. The lowest Fe concentrations were found in the third pond, which reached an average of 11.10 mg/L. When the test results listed in Table 2 are averaged per day, an increase in the iron (Fe) concentration from day 4 to day 7 occurs only in the second and third ponds. The first storage pond tended to be stable, but compared with the second and third storage ponds, it had the largest average iron (Fe) concentration. The average value of the iron (Fe) concentration per day is shown in the bar chart in Figure 2 below. Table 3 shows that the lowest average Fe concentration was found in the 3rd phytoremediation pond, with an average Fe concentration of 11.38 mg/L, whereas the 1st pond had the highest average Fe concentration, 12.66 mg/L. The average Fe concentration in each leachate collection pond is shown in Figure 3 below.
Figure 3. Diagram of average leachate iron (Fe) concentration in phytoremediation pond
The absorption rate of horsetail plants for iron (Fe) decreases as the Fe concentration in the leachate increases. This occurs because differences in temperature and air affect the growth of the plants, which can then no longer absorb optimally. As shown in Figure 4 below, color changes occurred in some horsetail plants after the 7th day, and moss appeared in the mud of the leachate treatment pond.
Filtration Reactor
The results of testing the iron (Fe) content of leachate in the reactor pond using zeolite, activated charcoal, and palm fiber can be seen in Table 4 below. Source: Results of regional health laboratory testing, Serang District Health Office, 2019. Table 4 shows that the iron (Fe) concentration of the leachate at the first collection was 11.65 mg/L, while the result of the second leachate test, four days later, was 6.27 mg/L. Figure 6 shows that optimization of the detention time affects the decrease in the Fe concentration of leachate resulting from the absorption reaction of the filtration media in the reactor pond. The treatment in the filtration pond is a further stage of processing after the phytoremediation reactor pond. With a detention time of 4 days, the test results clearly show that the second sampling decreased by 45% from the previous sampling. This shows that adding the filtering process after the phytoremediation by horsetail plants can reduce the iron (Fe) concentration of the leachate and change the color characteristics of the leachate from jet black to golden yellow, as shown in Figure 6 below. Leachate that passes through the filtration media begins to show separate golden-brown particles. As shown in Figure 7, the sedimentation of leachate in a filter pond containing zeolite, activated charcoal and palm fibers, with a detention time of 4 days, is carried out to maximize the binding by the three filter media of organic and inorganic substances still dissolved in the leachate. The Fe concentration test results for the four leachate ponds are summarized in Table 5.
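For reference, the stage and overall removal efficiencies quoted in this paper follow from the standard percent-reduction formula; the sketch below (function name assumed) recomputes them from the concentrations reported above.

```python
def removal_efficiency(c_in, c_out):
    """Percent reduction of a pollutant across a treatment stage."""
    return (c_in - c_out) / c_in * 100.0

# Concentrations taken from the text (mg/L):
print(round(removal_efficiency(11.65, 6.27), 1))     # filtration stage, ~46 % (reported as 45 %)
print(round(removal_efficiency(13.5867, 6.27), 1))   # overall train, ~53.9 % (reported as 54 %)
```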
Simple Linear Correlation Test
A simple correlation test is performed to determine whether there is a relationship between the Fe concentration resulting from the leachate treatment process and the efficiency of the decrease in Fe concentration. The simple (Pearson) correlation coefficient is calculated as r = (nΣxy − Σx Σy) / √[(nΣx² − (Σx)²)(nΣy² − (Σy)²)],
while the data to be tested with the simple correlation are given in Table 7. Based on the calculation results, the correlation coefficient between the Fe concentration and the efficiency is −0.986, which means that the two variables have a strong negative linear correlation: if the Fe concentration decreases, the efficiency of the decrease in Fe concentration increases, and vice versa. The corresponding coefficient of determination is 97.21% (r² = 0.9721), which means that 97.21% of the variation in the efficiency of the system is explained by the Fe concentration in the leachate treatment process.
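The coefficient can be reproduced numerically as sketched below; since Table 7 is not reproduced here, the paired values are hypothetical stand-ins used solely to illustrate the computation, whereas the study's own series yields r = −0.986.

```python
import numpy as np

# Hypothetical stand-in for the paired series of Table 7.
fe  = np.array([13.59, 12.66, 11.38, 6.27])   # Fe concentration, mg/L
eff = np.array([0.0, 7.0, 16.0, 54.0])        # efficiency of decrease, %

r = np.corrcoef(fe, eff)[0, 1]                # Pearson correlation coefficient
print(f"r = {r:.3f}, r^2 = {r**2:.4f}")       # strong negative linear relation
```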
Conclusion
The selection of leachate treatment through phytoremediation using the horsetail plant (Equisetum hyemale), with the filtering process using zeolite, activated charcoal and palm fiber as an advanced stage, is based on the use of biological biota and simple wastewater treatment technology, and represents an effort to reduce the level of environmental pollution caused by leachate. | 2020-04-16T09:11:58.061Z | 2020-03-01T00:00:00.000 | {
"year": 2020,
"sha1": "9565501f748450c126552eb7c3832c1daedea05d",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1477/5/052060",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "b79967fb9c355211226e92d3da40c4a2295d20fa",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics",
"Environmental Science"
]
} |
40416499 | pes2o/s2orc | v3-fos-license | A diabatic definition of geometric phase effects
Electronic wave-functions in the adiabatic representation acquire nontrivial geometric phases (GPs) when corresponding potential energy surfaces undergo conical intersection (CI). These GPs have profound effects on the nuclear quantum dynamics and cannot be eliminated in the adiabatic representation without changing the physics of the system. To define dynamical effects arising from the GP presence the nuclear quantum dynamics of the CI containing system is compared with that of the system with artificially removed GP. We explore a new construction of the system with removed GP via a modification of the diabatic representation for the original CI containing system. Using an absolute value function of diabatic couplings we remove the GP while preserving adiabatic potential energy surfaces and CI. We assess GP effects in dynamics of a two-dimensional linear vibronic coupling model both for ground and excited state dynamics. Results are compared with those obtained with a conventional removal of the GP by ignoring double-valued boundary conditions of the real electronic wave-functions. Interestingly, GP effects appear similar in two approaches only for the low energy dynamics. In contrast with the conventional approach, a new approach does not have substantial GP effects in the ultra-fast excited state dynamics.
I. INTRODUCTION
Ubiquitous in molecules beyond diatomics, conical intersections (CIs) of electronic states act as "funnels" 1-4 that enable rapid conversion of the excessive electronic energy into nuclear motion. Also, CIs lead to the appearance of the geometric phase (GP) [5][6][7] in both electronic and nuclear wave-functions of the adiabatic representation. The GP presence leads to a sign change of adiabatic electronic wave-functions along a closed path of nuclear configurations encircling the CI seam. 6,8 This sign change affects evaluation of nonadiabatic couplings (NACs) necessary to complete the nuclear kinetic energy part of the adiabatic representation to define a nuclear Schrödinger equation. Changes in NACs due to the GP can lead to profound modification of nuclear dynamics even in situations when the nuclear wave-function is localized far from the region of CI. For example, the GP causes an extra phase accumulation for fragments of the nuclear wave-packet that move around the CI on opposite sides. 9,10 This leads to destructive interference that gives rise either to a spontaneous localization of the nuclear density 10 or slower nuclear dynamics 11 than in the case where the GP is neglected.
To distinguish unambiguously the effect of the GP on the nuclear dynamics, one can study the exact quantum dynamics, which necessarily incorporates all GP effects, in comparison with dynamics that does not include the GP. This comparison allows one to formulate unique dynamical features related to the CI topology, which gives rise to the GP. A natural question is how to modify a computational scheme to remove the GP with a minimal effect on other parts of the dynamics. Previously, GP-excluded versions have been constructed for the analysis of GP effects by switching to the adiabatic representation. [12][13][14][15][16] A straightforward simulation of the nuclear dynamics that ignores the double-valued character of the electronic and nuclear wave-functions in the adiabatic representation excludes the GP. 6 As shown by Mead and Truhlar, the only change needed to obtain the correct nuclear dynamics in the adiabatic representation is a phase modification of both the electronic and nuclear wave-functions that restores single-valued boundary conditions to these functions. 6 This phase change modifies only the kinetic energy terms, the NACs, in the nuclear Hamiltonian and leaves the potential energy terms unchanged. A practical difficulty with this approach is that it requires performing quantum nuclear dynamics in the adiabatic representation, where many NAC components diverge at the CI. The necessity of working in the adiabatic representation creates technical challenges for the investigation of GP effects in realistic systems beyond simple low-dimensional models.
In this paper we propose an alternative way of investigating GP effects by introducing a modification of the system's diabatic Hamiltonian; this modification removes the GP in the corresponding adiabatic representation without altering the potential energy surfaces. Our modification is not equivalent to ignoring the double-valued boundary conditions in the adiabatic representation and provides a new set of results characterizing GP effects in CI problems.
The rest of the paper is organized as follows. In Sec. II we introduce our approach for a two-dimensional linear vibronic coupling model problem with CI. Section III provides numerical results comparing GP effects obtained in the new diabatic and old adiabatic approaches on a set of model systems parametrized using real molecular systems. Finally, Sec. IV concludes the work by summarizing main results.
II. THEORETICAL ANALYSIS
We introduce two models within the two-dimensional linear vibronic coupling (LVC) framework,
$$\hat H_{\rm LVC} = \hat T_N\,\mathbf{1}_2 + \begin{pmatrix} V_{11} & V_{12} \\ V_{12} & V_{22} \end{pmatrix},$$
where $\hat T_N = -\tfrac{1}{2}\left(\partial^2/\partial x^2 + \partial^2/\partial y^2\right)$ is the nuclear kinetic energy operator, and $\mathbf{1}_2$ is a 2 × 2 unit matrix. 17 $V_{11}$ and $V_{22}$ are the diabatic potentials represented by identical 2D parabolas shifted with respect to each other in the x-direction by $a$ and in energy by $\Delta$; consistent with the definitions of $b$ and $\gamma$ below, they can be written as
$$V_{11} = \frac{\omega_1^2}{2}\left(x+\frac{a}{2}\right)^2 + \frac{\omega_2^2}{2}y^2, \qquad V_{22} = \frac{\omega_1^2}{2}\left(x-\frac{a}{2}\right)^2 + \frac{\omega_2^2}{2}y^2 + \Delta.$$
To have the CI in the adiabatic representation, $V_{11}$ and $V_{22}$ are coupled by $V_{12} = cy$ in model 1 and by $V_{12} = c|y|$ in model 2. The adiabatic representation is obtained with the unitary transformation
$$U = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},$$
with $\theta = \theta(x, y)$ as a rotation angle between the diabatic electronic states $|1\rangle$ and $|2\rangle$, so that the adiabatic electronic states are
$$|\psi_-\rangle = \cos\theta\,|1\rangle + \sin\theta\,|2\rangle, \qquad |\psi_+\rangle = -\sin\theta\,|1\rangle + \cos\theta\,|2\rangle.$$
The transformation in Eq. (4) gives rise to the 2D LVC Hamiltonian in the adiabatic representation $\hat H_{\rm adi} = U^\dagger \hat H_{\rm LVC} U$, where
$$W_\pm = \frac{V_{11}+V_{22}}{2} \pm \sqrt{\frac{(V_{11}-V_{22})^2}{4} + V_{12}^2}$$
are the adiabatic potentials, which are exactly the same for models 1 and 2, and $\hat\tau$ are the nonadiabatic couplings. For two-electronic-state models we can express $\hat\tau_{ij}$ as
$$\hat\tau_{ij} = \langle \psi_i | \nabla \psi_j \rangle \cdot \nabla + \tfrac{1}{2}\langle \psi_i | \nabla^2 \psi_j \rangle.$$
The diagonal non-adiabatic couplings, $\hat\tau_{11}$ and $\hat\tau_{22}$, represent a repulsive potential known as the diagonal Born-Oppenheimer correction (DBOC). [18][19][20] The off-diagonal elements, $\hat\tau_{12}$ and $\hat\tau_{21}$ in Eq. (11), couple dynamics on the adiabatic potentials $W_\pm$ and are responsible for nonadiabatic transitions. All $\hat\tau_{ij}$ terms involve derivatives of $\theta$, which is given by two different functions for models 1 and 2,
$$\theta_1 = \frac{1}{2}\arctan\frac{\gamma y}{x - b}, \qquad \theta_2 = \frac{1}{2}\arctan\frac{\gamma |y|}{x - b},$$
respectively. Here, $b = \Delta/(\omega_1^2 a)$ is the x-coordinate of the CI point, and $\gamma = 2c/(\omega_1^2 a)$ is a dimensionless coupling strength. For simplicity of the subsequent analysis we set $b = 0$, which corresponds to centring the coordinates at the CI point. To see the difference between $\theta_1$ and $\theta_2$ we continuously track their changes along a contour encircling the CI. For the CI located at the origin we have taken a set of points on a circle $(x_j, y_j)$ parametrized by the polar representation of complex numbers $x_j + iy_j = re^{i\phi_j}$, where $r = 1$ and the $\phi_j$'s are taken from the discretized $[0, 2\pi]$ interval. Figure 1 illustrates that $\theta_1$ changes by $\pi$ when we complete the full circle while $\theta_2$ returns to its initial value, 0. For the adiabatic electronic functions [Eqs. (5)-(6)] this means that these functions change their signs in model 1 and return to their original values in model 2. Therefore, models 1 and 2 have electronic functions which are double- and single-valued functions of the nuclear parameters, respectively. In terms of differentiability, $\theta_2$ clearly has issues at the $y = 0$ line. However, we will not compute $\hat\tau_{ij}$ elements for model 2 because all simulations for this model will be done in the diabatic representation.
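The contour test described above is straightforward to reproduce numerically. The sketch below is our own illustration rather than the authors' code; it assumes γ = 1 and b = 0 (coordinates centred at the CI) and accumulates the continuous change of θ1 and θ2 around the unit circle:

```python
import numpy as np

gamma, b = 1.0, 0.0                        # assumed coupling strength; CI at origin
phi = np.linspace(0.0, 2.0 * np.pi, 2001)  # contour x + i*y = e^{i phi}, r = 1
x, y = np.cos(phi), np.sin(phi)

theta1 = 0.5 * np.arctan2(gamma * y, x - b)          # model 1: odd coupling c*y
theta2 = 0.5 * np.arctan2(gamma * np.abs(y), x - b)  # model 2: even coupling c*|y|

def winding(theta):
    """Total continuous change of the mixing angle along the closed contour."""
    dtheta = np.diff(theta)
    # remove the artificial branch jumps of arctan (multiples of pi) so that
    # only the physical, continuous change of theta is counted
    dtheta -= np.pi * np.round(dtheta / np.pi)
    return dtheta.sum()

print(winding(theta1) / np.pi)  # ~1: theta_1 changes by pi -> sign change (GP)
print(winding(theta2) / np.pi)  # ~0: theta_2 returns to its initial value
```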
Another possible concern for our approach could be that the modification of the diabatic model removing the GP breaks the smoothness of the diabatic coupling as a function of the nuclear coordinates. This raises a question about the physical meaning of a diabatic model with such a coupling term. It is important to understand that the GP is a significant part of the CI topology, and removing it in any way is expected to produce an incomplete and thus, in some sense, unphysical picture. To illustrate this point even further, we will show that the diabatic model which is mathematically equivalent to the adiabatic model with the GP removed in the conventional way has divergent diabatic potentials with discontinuous derivatives. First, let us clarify that to obtain the adiabatic Hamiltonian that will produce results equivalent to the initial diabatic LVC Hamiltonian [Eq. (1)] in the space of single-valued functions, one needs to use the following single-valued transformation, $U^{(1)} = e^{i\theta} U$. Note that both functions $e^{i\theta}$ and $U$ [Eq. (4)] in this product are double-valued, but they give a single-valued resulting transformation. In contrast to $U$, $U^{(1)}$ allows us to move between the representations while staying in the space of single-valued functions; hence, one can construct the diabatic counterpart $\hat H_{\rm dia}$ of the conventionally GP-removed adiabatic Hamiltonian. $\hat H_{\rm dia}$ is similar to $\hat H_{\rm LVC}$ but has extra terms containing derivatives of the mixing angle $\theta$. It is well known that all these derivatives diverge at the CI point, 21 thus giving rise to a diabatic representation that is unphysical. For example, there are two potential-like terms in Eq. (15), $\frac{1}{2}(\nabla\theta)^2 + \frac{i}{2}\nabla^2\theta$, which can be formally considered as a modification of the diabatic surfaces $V_{11}$ and $V_{22}$. This modification produces divergent diabatic surfaces with nuclear derivative discontinuities. These problems in the diabatic representation of the conventional GP removal have not been discussed before because the diabatic Hamiltonian $\hat H_{\rm dia}$ does not provide any advantage compared to its adiabatic counterpart $\hat H_{\rm adi}$ and thus has not been used in simulations. This example illustrates that although introducing the absolute value of the coupling term leads to nuclear derivative discontinuities, this modification is still better than the conventional approach with its divergent diabatic potential terms.
III. NUMERICAL EXAMPLES
We will consider three molecular systems with CIs that are well described by multi-dimensional LVC models: the bis(methylene) adamantyl (BMA) 22 and butatriene 2,21 cations, and the pyrazine molecule. 21,23 N-dimensional LVC models for these systems are taken from the literature 22,24,25 . Although our approach to removing the GP can easily be applied to a multi-dimensional LVC, for the sake of simplicity, and to be able to compare with our previous simulations, 21 we will use 2D effective LVC Hamiltonians for these systems (see Table I).
To quantify GP effects we solve the time-dependent nuclear Schrödinger equation for three model Hamiltonians: 1) model 1 using the diabatic representation (Diab-wGP); 2) model 2 using the diabatic representation (Diab-noGP); and 3) model 1 using the adiabatic representation [Eq. (8)] while ignoring the double-valued character of the electronic and nuclear wave-functions (Adiab-noGP). The first two Hamiltonians were treated using the split-operator approach, while for the third the exact diagonalization in a finite basis was employed. 21 In what follows we consider two dynamical regimes that differ in the energy of the initial wave-packet: 1) a low energy case, where dynamics occurs mostly near the CI on the ground electronic state; and 2) a high energy case, where the wave-packet proceeds from the excited electronic state to the ground state through the CI.
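To make the diabatic propagation concrete, the following sketch implements one symmetric split-operator step for a generic two-state diabatic Hamiltonian on a 2D grid. It is a minimal illustration under standard assumptions (ħ = m = 1, pointwise exact exponentiation of the 2 × 2 potential matrix), not the production code used for the simulations:

```python
import numpy as np

def split_operator_step(chi1, chi2, V11, V22, V12, k2, dt):
    """One step exp(-iV dt/2) exp(-iT dt) exp(-iV dt/2) for a two-state diabatic
    wave-packet (chi1, chi2) on a 2D grid; k2 = kx^2 + ky^2 on the FFT grid."""
    def half_potential(c1, c2):
        # write the 2x2 potential as avg*I + dif*sigma_z + V12*sigma_x and
        # exponentiate it analytically at every grid point
        avg, dif = 0.5 * (V11 + V22), 0.5 * (V11 - V22)
        w = np.sqrt(dif**2 + V12**2)
        sw = np.where(w == 0.0, 1.0, w)          # avoid 0/0 exactly at the CI
        cos, sin = np.cos(w * dt / 2), np.sin(w * dt / 2)
        phase = np.exp(-1j * avg * dt / 2)
        a, b = cos - 1j * sin * dif / sw, -1j * sin * V12 / sw
        return phase * (a * c1 + b * c2), phase * (b * c1 + np.conj(a) * c2)

    c1, c2 = half_potential(chi1, chi2)
    kin = np.exp(-1j * 0.5 * k2 * dt)            # exact kinetic propagator
    c1 = np.fft.ifft2(kin * np.fft.fft2(c1))
    c2 = np.fft.ifft2(kin * np.fft.fft2(c2))
    return half_potential(c1, c2)
```

Switching between models 1 and 2 then amounts to passing V12 = c*y or V12 = c*np.abs(y) built on the same grid.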
A. Low energy dynamics
For low energy dynamics we analyze only the BMA case because the other systems have a non-symmetric diabatic well structure that would freeze the dynamics if one starts in the lower energy well. The ground vibrational state of the uncoupled V11 diabatic potential was chosen as the initial wave-packet. The diabatic population of the initial state is monitored as a function of time to assess the dynamics (Fig. 2); for BMA this population correlates well with the well population in the adiabatic representation.
For the discussion of the diabatic population evolution (Fig. 2) it is convenient to introduce a notation for the uncoupled diabatic vibrational levels: (n, m)_s refers to a level with n vibrational quanta on the x (tuning) coordinate and m vibrational quanta on the y (coupling) coordinate for the diabatic state s = D, A, where s = D (A) corresponds to the V11 (V22) diabat. In this notation the initial state is (0, 0)_D, and in model 1 it is coupled only with (n, 1)_A states, where n is any positive integer. Since all (n, 1)_A states are higher in energy than (0, 0)_D, the transfer is negligible in the Diab-wGP method. On the other hand, in model 2, owing to the even coupling function c|y|, the initial state (0, 0)_D is coupled with (n, 2k)_A states, where n and k are arbitrary integers. Thus there is a resonance channel (0, 0)_D → (0, 0)_A that is responsible for a donor population decay quadratic in time in the Diab-noGP method. These results can also be obtained using time-dependent perturbation theory, which is applicable here due to the small value of the coupling constant c. Both the Diab-wGP and Diab-noGP methods have small bumps on the population plot with a period of 20 fs corresponding to the tuning coordinate frequency ω1 = 2π/20 fs⁻¹. These features come from the off-resonance transitions (0, 0)_D → (n, 1)_A and (0, 0)_D → (n, 2k)_A for n ≥ 1 in the Diab-wGP and Diab-noGP methods, respectively. Using time-dependent perturbation theory and summation over harmonic oscillator states, it can be shown that the off-resonance channel should induce population dynamics with a frequency corresponding to ω1. 22 The Adiab-noGP method has very similar dynamics to that of Diab-noGP. This can be attributed to the absence of destructive interference between the two pathways around the CI located between the wells when the double-valued boundary conditions are ignored, as in the Adiab-noGP approach. Thus, in Adiab-noGP, one observes coherent tunnelling between the wells as in any single-electronic-state double-well problem.
B. Excited state dynamics
All three systems presented in Table I are assessed here so that the results can be compared with those of our previous study. 21 The initial state is the ground vibrational wave-function placed vertically on the excited electronic state (Table I and Fig. 3). The quantity characterizing excited state dynamics is the adiabatic electronic state population $P_{\rm adi}(t) = \langle \chi_2^{\rm adi}(t) | \chi_2^{\rm adi}(t) \rangle$, where $\chi_2^{\rm adi}(x, y, t)$ is the time-dependent nuclear wave-function corresponding to the excited adiabatic electronic state (Fig. 4).
For BMA, due to the low diabatic coupling, the exact dynamics (Diab-wGP) corresponds to coherent oscillations on the donor diabatic surface. Once the wave-packet crosses the diabatic state intersection, the adiabatic population switches from the excited to the ground state, but the wave-packet resides almost completely on the same diabat. The period of these oscillations corresponds exactly to the tuning mode frequency ω1 = 2π/20 fs⁻¹. Switching to the Diab-noGP approach does not change the dynamics on a sub-100 fs time-scale because the small c makes transitions between diabatic levels inefficient. In other words, the difference in the coupling structure, (n, m)_s → (n′, m ± 1)_s′ for model 1 versus (n, m)_s → (n′, m ± 2k)_s′ for model 2, does not cause large differences in population dynamics until population transfer between diabatic states becomes appreciable. Differences between the results of Adiab-noGP and Diab-wGP have been extensively discussed in Ref. 21; in BMA, they correspond to the compensation of the DBOC by GP-induced terms in the NACs. Without the GP, the DBOC has a significant repulsive character that prevents the wave-packet from approaching the CI region and thus hinders nonadiabatic transfer.
In the butatriene cation and pyrazine, the initial wave-packets are much closer to the CI (Fig. 3) and the diabatic coupling constant c is more than 5 times larger than in the BMA case. Thus, the time-scale of the adiabatic population dynamics is regulated by the nonadiabatic transition rather than by oscillations on a diabatic surface. Pyrazine, whose Franck-Condon point is further from the CI, has a small plateau region in the initial population dynamics; this plateau corresponds to the wave-packet approaching the CI. As in the BMA case, differences between Diab-wGP and Diab-noGP appear on a longer time-scale than that of the initial nonadiabatic transition. The absence of a difference between Diab-wGP and Diab-noGP can be attributed to averaging over transitions of the many diabatic vibrational states forming the wave-packet on the excited state. Although these vibrational states may individually differ in how they transfer population to accepting states in the two models, for the overall transfer such differences are averaged out. The difference between Adiab-noGP and Diab-wGP is apparent even at the ultrafast initial transitions and originates from the GP-induced enhancement of nonadiabatic transfer for some parts of the nuclear wave-packet. 21
IV. CONCLUDING REMARKS
We presented a new method of analyzing GP-induced effects in dynamics. It has conceptually important aspects and practical advantages. Conceptually, it is interesting to see what the possible ways to remove the GP are and how different these ways are in terms of quantum dynamics. Previously, to remove the GP one could ignore the double-valued boundary conditions of the electronic and nuclear wave-functions; this modified both the low energy dynamics and the fast excited state dynamics. The new approach shows the same effect of GP removal for the low energy dynamics, but does not have a substantial effect in the fast excited state dynamics. Practically, the new approach gives an opportunity to study GP effects in the diabatic representation, where simulation methods are much more developed (e.g., the multi-configuration time-dependent Hartree approach). Thus we can easily explore N-dimensional scenarios without the necessity for additional transformations. Going beyond linear vibronic coupling is also possible because our main modification puts an absolute value on the coupling term, so that in the two-electronic-state problem V12 transforms into |V12| without changing the adiabatic potential energy surfaces. | 2016-07-01T16:11:07.000Z | 2016-05-05T00:00:00.000 | {
"year": 2016,
"sha1": "3ab9ebcf27b28f4b83ea60a1ce65d8c58e9f9d34",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1605.01487",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3ab9ebcf27b28f4b83ea60a1ce65d8c58e9f9d34",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine",
"Chemistry"
]
} |
59603254 | pes2o/s2orc | v3-fos-license | Identification of Novel Regulatory Genes in APAP Induced Hepatocyte Toxicity by a Genome-Wide CRISPR-Cas9 Screen
Acetaminophen (APAP) is a commonly used analgesic responsible for more than half of acute liver failure cases. Identification of previously unknown genetic risk factors would provide mechanistic insights and novel therapeutic targets for APAP-induced liver injury. This study used a genome-wide CRISPR-Cas9 screen to evaluate genes that are protective against, or cause susceptibility to, APAP-induced liver injury. HuH7 human hepatocellular carcinoma cells containing CRISPR-Cas9 gene knockouts were treated with 15 mM APAP for 30 minutes to 4 days. A gene expression profile was developed based on the 1) top screening hits, 2) overlap of expression data from APAP overdose studies, and 3) predicted affected biological pathways. We further demonstrated the implementation of intermediate time points for the identification of early and late response genes. This study illustrated the power of a genome-wide CRISPR-Cas9 screen to systematically identify novel genes involved in APAP-induced hepatotoxicity and to provide potential targets to develop novel therapeutic modalities.
CRISPR-Cas9 knock-out screen and deconvolution. HuH7-Cas9 cells (1.62 × 10^8 total) were transduced with the lentiviral sgRNA library at an MOI of 0.5, resulting in >630x total library coverage at the time of transduction. The first replicate contains plasmid and samples collected at 0 h, 30 min, 3 h, 6 h, 12 h, 24 h, and 4d (end) of APAP treatment. The second replicate contains samples collected at 0, 24 h, and 4d of APAP treatment. A minimum of 2 × 10^7 cells were collected per sample, resulting in 160x library coverage per sample as template for the 1st PCR (Supplementary Fig. 2). The average library coverage of aligned reads calculated from the amount of isolated DNA per sample was 205x and 284x for replicates 1 and 2, respectively. On average, 70% of the sequence reads aligned to the reference sgRNA library, resulting in 230.9x average library coverage per replicate (Supplementary Table 1).
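The coverage figures quoted above follow from simple arithmetic. The sketch below reproduces them under one assumption flagged explicitly: the GeCKOv2 library size of 123,411 sgRNAs is the commonly cited figure and is not stated in this excerpt.

```python
LIBRARY_SIZE = 123_411                   # assumed GeCKOv2 sgRNA count (not stated above)

cells_transduced = 1.62e8 * 0.5          # total cells x MOI at transduction
print(cells_transduced / LIBRARY_SIZE)   # ~656x, consistent with ">630x"

cells_per_sample = 2e7                   # minimum cells collected per sample
print(cells_per_sample / LIBRARY_SIZE)   # ~162x, consistent with "160x per sample"
```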
After 4 days of APAP treatment and 21 days of outgrowth, the endpoint sample is significantly different from the plasmid library or T0 (p < 10^-10, Wilcoxon rank-sum test), and there is a noticeable increase in the variation of read counts after 4 days of drug treatment (Fig. 1D,E, Supplementary Table 2). Scatter plots of the read counts between the untreated and 24 h samples and between the untreated and 4d samples show an increase in differential sgRNA counts between 24 h and 4d of drug treatment (Supplementary Fig. 3a,b).
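A distribution comparison of this kind can be reproduced with a standard rank-sum test on the per-sgRNA read counts. The sketch below is purely illustrative: random negative-binomial counts stand in for the real T0 and endpoint count vectors.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# placeholder counts: the endpoint sample has lower, more dispersed counts
t0_counts = rng.negative_binomial(20, 0.1, size=120_000)
end_counts = rng.negative_binomial(5, 0.1, size=120_000)

stat, p = ranksums(t0_counts, end_counts)  # Wilcoxon rank-sum test
print(stat, p)                             # p far below 1e-10 for shifted data
```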
sgRNA read counts were analyzed to determine the gene-level and protein-level negative and positive screen rankings of individual time points and combined time points using RRA (Supplementary …). The 4d (end) samples were compared with the untreated sample, revealing a number of genes containing sgRNAs that are significantly decreased with APAP treatment (negatively selected, potentially essential) or significantly increased with APAP treatment (positively selected, potentially susceptible) (Fig. 2A,B). The gene knockouts that were significantly differentially represented after 4d of APAP treatment correspond to a small population of cells remaining after most cells were killed by APAP. The ranked gene lists underwent GSEA pathway analysis against the All Gene Ontology and KEGG pathway gene sets, which returned statistically significant, highly ranked essential pathways in the negative screen analysis as well as a number of novel pathways in both the negative and positive screen analyses (Fig. 2C-E). Essential KEGG pathways are highly ranked in the negative screen after drug treatment, including the ribosome and spliceosome pathways. Analysis of Gene Ontology pathways reveals that other pathways important to cellular function are highly negatively selected and that apoptotic processes are highly positively selected.
At 24 h of APAP treatment, we observed a significantly different distribution of genes representing highly significant positive and negative changes in sgRNA expression (Fig. 3A,B). Pathway analysis by GSEA using the KEGG and Gene Ontology gene sets returned a number of novel pathways (Fig. 3C-E). The top negatively selected pathway after 24 hours of APAP treatment was regulation of skeletal muscle contraction. The top biological network identified from this pathway by Ingenuity Pathway Analysis (Qiagen) was lipid metabolism, small molecule biochemistry, and organ morphology, centred on calcium signaling (Fig. 3F). We suspect this may be important to the injury introduced by the APAP overdose, and further study of the calcium signaling genes identified from this screen (including SLC8A3, ATP2A1, CASQ1) is warranted. This correlates with existing literature suggesting that calcium imbalance may affect APAP-induced hepatotoxicity 30,31 . Our data provide new and previously unrevealed targets for further experimentation. We next sought to rank genes by time groups rather than specific time points with two main goals: 1) identify genes that are ranked highly (positively or negatively) at early time points (30 min-24 h APAP exposure) vs. no treatment, and 2) identify genes that are ranked highly (positively or negatively) in all pooled APAP-treated samples vs. no treatment. A literature search of the top 100 ranked genes (positively and negatively ranked, respectively) for each of these combinations of time points identified 44 unique genes (of 716 total unique genes queried) that are already associated with APAP; the vast majority are novel (Table 1).
We then grouped genes that were highly ranked at independent time points to isolate early- and late-acting genes. While a few genes contained sgRNAs that were significantly enriched (or depleted) across all early time points, many are unique to individual time points. Although the sensitivity of the screen at very early times is likely lower than at later time points, early- and late-acting gene groups that are shared between time points, or that are unique to specific time points but represent statistically significant pathways, may be important to the drug response (Fig. 4A,B). To identify knocked-out genes with global significance we compared all APAP-treated samples to the T0 samples (Fig. 4C,D). To identify knocked-out genes important for the early APAP response we compared the 30 min-24 h APAP-treated samples to the T0 samples (Fig. 4E,F). These comparisons resulted in 5,791 unique positively or negatively enriched significant genes (p < 0.05) in the combined 24 h APAP vs. T0, 4d APAP vs. T0, 30 min-24 h APAP vs. T0, or all APAP treatments vs. T0 gene rankings.
The RRA statistical method was chosen to rank gene knockouts because of its superior performance when compared with RSA and RIGER 32 . To validate our choice of statistical analysis method, we compared the Maximum Likelihood Estimate algorithm (MLE) to RRA, which has been shown to produce comparable gene ranking to RRA 33 . In a MLE analysis of all APAP time points compared with the T0 sample, 683 genes were statistically significant (p < 0.05), of which 442 (65%) were also statistically significant (p < 0.05) using the RRA method (v0.5.6) (Supplementary Data 9).
Cross-referencing of other datasets. Using Cuffdiff, GSE110787 RNA-seq data from mice with and without APAP exposure were compared to assess the effect of APAP exposure on gene transcription. 1,626 of 46,073 gene probes showed significant differential gene expression after APAP exposure with an unadjusted p-val < 0.05: 1,025 genes have -log2 fold change with p < 0.05 and 601 genes have +log2 fold change with p < 0.05 (Supplementary Fig. 4a, Data 10). Genes in the overlap between those highly ranked in the CRISPR screen at 24 h APAP treatment (2,082 gene knockouts, p < 0.05) and GSE110787 (p < 0.05) warrant validation in vivo (Fig. 5A,B). Overall, 63 enriched gene knockouts and 55 depleted gene knockouts (24 h, p < 0.05) overlap with the significantly differentially expressed genes in the mouse model of ALF after 24 h of drug treatment. Secondary data from human sources were used to cross-validate the CRISPR screen findings. In GEO2R, microarray data from 3 APAP-induced ALF liver samples were compared to 2 healthy liver samples (GSE74000). 1,679 of 54,675 probes have an FDR-adjusted p-value < 0.05: 1,251 probes have -log2 fold change with p < 0.05 and 428 probes have +log2 fold change with p < 0.05 (Supplementary Fig. 4b). We compared genes with p < 0.05 to genes that were significantly enriched or depleted in our CRISPR screen (p < 0.05) to identify overlap and ascertain the relationship between sgRNA depletion or enrichment and gene expression (Fig. 5C,D). Overall, 63 enriched gene knockouts and 55 depleted gene knockouts (24 h, p < 0.05) overlap with the significantly differentially expressed genes in the human ALF data. A second dataset, GSE70784, was chosen to filter genes identified in the CRISPR screen that have also been identified in blood from humans dosed with APAP. In GEO2R, microarray data from 12 APAP responder blood samples were compared to 32 non-responders using days 1 and 8 independently (GSE70784). No probes had an FDR-adjusted p-val < 0.05, so the unadjusted p-values were referenced. After 1 day of APAP dosing, 362 of 20,173 probes have an unadjusted p-val < 0.05, of which 148 probes have -log2 fold change with p < 0.05 and 214 probes have +log2 fold change with p < 0.05 (Supplementary Fig. 5a). After 8 days of APAP dosing, 2,445 of 20,173 probes had an unadjusted p-val < 0.05, of which 314 probes have -log2 fold change with p < 0.05 and 2,131 probes have +log2 fold change with p < 0.05 (Supplementary Fig. 5b). We compared genes with p < 0.05 to genes that were significantly enriched or depleted in our CRISPR screen (p < 0.05) to identify overlap and ascertain the relationship between sgRNA depletion or enrichment and gene expression at 24 h APAP treatment (Fig. 6A-D). Overall, 11 enriched gene knockouts and 15 depleted gene knockouts (24 h, p < 0.05) overlap with the significantly differentially expressed genes in non-acute overdose (drug responders vs. non-responders) after 1d of exposure. 101 enriched CRISPR gene knockouts and 117 depleted gene knockouts (24 h, p < 0.05) overlap with the significantly differentially expressed genes between drug responders and non-responders after 8d of exposure.
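Operationally, each of these overlap counts is a set intersection between the significant CRISPR hits and the significant differentially expressed genes. The sketch below shows the pattern with placeholder gene symbols; the real lists come from the screen rankings and GEO2R output at p < 0.05.

```python
# placeholder gene sets standing in for the real p < 0.05 lists
crispr_enriched = {"LZTR1", "BMPR1A", "HSD11B1", "EGR1"}
crispr_depleted = {"PGM5", "PROZ", "NAMPT", "VNN1"}
deg_significant = {"EGR1", "PGM5", "VNN1", "NR1I3"}   # e.g. GEO2R, p < 0.05

print(sorted(crispr_enriched & deg_significant))  # enriched knockouts also DE
print(sorted(crispr_depleted & deg_significant))  # depleted knockouts also DE
print(len(crispr_enriched & deg_significant),
      len(crispr_depleted & deg_significant))     # the overlap counts reported
```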
Using the same GSE70784 dataset in GEO2R, microarray data from 12 APAP responder blood samples were compared to 10 placebo controls using days 1 and 8 independently. After 1 day of APAP dosing, 697 of 20,173 probes had an unadjusted p-val < 0.05. Of these, 244 probes have -log2 fold change with p < 0.05 and 453 probes have +log2 fold change with p < 0.05 (Supplementary Fig. 5c). After 8 days of APAP dosing, 1,801 of 20,173 probes had an unadjusted p-val < 0.05, of which 1,248 probes have -log2 fold change with p < 0.05 and 553 probes have +log2 fold change with p < 0.05 (Supplementary Fig. 5d). We compared genes with p < 0.05 to genes that were significantly enriched or depleted in our CRISPR screen (p < 0.05) to identify overlap and ascertain the relationship between sgRNA depletion or enrichment and gene expression at 24 h APAP treatment (Fig. 6E-H). 30 enriched gene knockouts and 34 depleted gene knockouts (24 h, p < 0.05) overlap with the significantly differentially expressed genes in non-acute overdose (responders vs. placebo) after 1d of exposure. 89 enriched CRISPR gene knockouts and 86 depleted gene knockouts (24 h, p < 0.05) overlap with the significantly differentially expressed genes in non-acute overdose after 8d of exposure. Of the genes overlapping the CRISPR screen at 24 h APAP exposure (p < 0.05) and 1d APAP exposure vs. placebo in GSE70784, 7 upregulated genes and 8 downregulated genes remain significantly up- or down-regulated after 8d of APAP treatment (GSE70784, p < 0.05). These overlaps rise to 10 and 20 genes, respectively, when the CRISPR gene knockout list is expanded to include all significant gene knockouts across all treatment times. Similarly, 6 downregulated genes remain significantly downregulated after 8d of APAP treatment when the CRISPR-overlapping APAP responders are compared with non-responders (GSE70784, p < 0.05); 13 genes are downregulated when the CRISPR gene knockout list is expanded to include all significant gene knockouts across all treatment times. Overall, our CRISPR screen data best overlap the long-term exposure (8d). We additionally observe that there is little overlap between the differentially expressed genes in the early (1d) and late (8d) chronic exposure data of GSE70784 when filtered by gene knockouts that are significantly enriched or depleted in the CRISPR screen. This suggests a dramatic shift in gene expression between early and longer-term exposure. We also observe better overlap when we include significant gene knockouts from other time points observed in the CRISPR screen.
We then isolated only genes (or gene knockouts, in the case of the CRISPR screen) that were significantly differentially expressed across the CRISPR, mouse, and human studies. 523 genes (369 unique, 6% of CRISPR-Cas9 screen genes with p < 0.05) overlap the mouse RNA-seq and CRISPR "top lists" (4d, 24 h, Int, and All, p < 0.05, representing 5,791 unique genes with significant enrichment or depletion in the CRISPR screen). 57 of the 67 unique genes overlapping the CRISPR, mouse, and GSE74000 p < 0.05 lists (0.1% of CRISPR-Cas9 screen genes with p < 0.05) have not previously been reported to have a role in APAP metabolism, and 51/67 have consistent expression in mouse and GSE74000 and within the CRISPR lists. When we compare the GSE70784 1 day responder vs. placebo data to the CRISPR and mouse RNA-seq datasets, 12 of the 16 overlapping unique genes are novel (0.3% of CRISPR-Cas9 screen genes with p < 0.05 that overlap the main CRISPR analyses and the mouse RNA-seq) and 10 of the 16 have consistent expression between CRISPR analyses or between gene expression datasets. When we compare the GSE70784 8 day responder vs. placebo data to the CRISPR and mouse datasets, 36 of the 38 overlapping unique genes are novel (0.7% of CRISPR-Cas9 screen genes with p < 0.05 that overlap the main CRISPR analyses and the mouse RNA-seq) and 22 of the 38 have consistent expression between CRISPR analyses or between gene expression datasets. The largest number of genes overlapping the CRISPR-Cas9 screen data was observed with the GSE70784 8 day responder vs. non-responder and responder vs. placebo datasets (Supplementary Table 3). A number of the genes with statistically significant differential expression in the in vivo datasets have known relationships with APAP (top 100 genes per dataset), although, as previously seen with the CRISPR screen, many are novel findings (Supplementary Table 4). These candidates, which show consistent and significant differential expression in ALI (GSE70784) and ALF (mouse RNA-seq and GSE74000) and whose knockout impacts survival of APAP overdose, need further study to evaluate the mechanisms and pathways by which they function. We suspect that NAD metabolism may play an important role in survival of acetaminophen injury, and to this end we identified a number of genes involved in NAD metabolism that are also highly ranked in the CRISPR screen time points. A list of 48 genes identified based on Nikiforov et al., 2015 was compared with statistically significant CRISPR hits (p < 0.05) 34 . We identified 9 NAD metabolism genes in our screen data (Supplementary Table 5). Additionally, data from our lab suggest that overexpression of NAMPT, a gene involved in NAD salvage, is protective against APAP-induced hepatotoxicity in vivo 35 .
We considered genes for functional validation that were in the top 10 of a CRISPR list and were also significantly differentially expressed in the GEO or mouse RNA-seq datasets (p < 0.05), with a preference for genes with p < 0.05 in multiple positively or negatively ranked lists. Novelty was assessed by literature search and essentiality was determined from essentialgene.org. A number of genes that were highly ranked in the CRISPR screen (positively or negatively), and that overlapped with other gene sets (human and mouse gene expression with and without APAP, p < 0.05), are identified as essential genes (essentialgene.org). These genes include PGM5, KIF23, C19orf60, BMPR1A, PDSS2, CXADR, SSR2, TMCC2, RDH13, and EGR1 (Supplementary Data 11). Additional genes that were highly ranked in the CRISPR screen, and that overlapped with the other gene sets (human and mouse gene expression with and without APAP), have previously published relationships with APAP metabolism (pubmatrix.irp.nia.nih.gov). These genes include EGR1, VNN1, and NR1I3. Genes ranked highly in both our screen and previous publications support the selection method used to filter candidate genes. Novel, non-essential genes identified for further study include LZTR1, NAAA, ATG2B, MYOZ3, EFNB3, OR5M11, FCGR3A, PROZ, EEF1D, ACAD11, and TMCC2 (Supplementary Data 11). These genes are pathogenic (positively ranked) or protective (negatively ranked) and have potential utility in the development of diagnostic, risk-assessment, or therapeutic biomarkers.
Genes containing significant APAP SNPs. 133 gene names were identified from the literature as containing, or being nearest neighbors of, 147 APAP injury-associated single nucleotide polymorphisms (SNPs) 36 . 22 of the genes were significantly enriched or depleted in the screen time points (Supplementary Table 6). Analysis of the candidate genes (Supplementary Tables 5-6) identified a number of candidate genes whose existing drugs may be suitable for re-purposing to treat APAP-induced hepatotoxicity. Of the 54 unique candidate genes that were analyzed, 153 drug-gene interactions were identified for 19 genes (Supplementary Data 12). Of these, 14 genes were annotated with drug-gene interactions of known effects (Table 2). Notably, 3 novel genes are targets of existing drugs, which may be suitable re-purposed therapeutics against APAP-induced hepatotoxicity. BMPR1A, identified as a susceptible gene by the CRISPR-Cas9 screen, is inhibited by CHEMBL3186227. PROZ, identified as a protective gene by the CRISPR-Cas9 screen, is activated by menadione. HSD11B1, a gene that was susceptible in the CRISPR-Cas9 screen, is inhibited by carbenoxolone, CHEMBL222670, CHEMBL2153191, CHEMBL2177609, and phenylarsine oxide. An additional 3 genes, NR1I3, SIRT3, and GSTP1, have known roles in APAP hepatotoxicity that were correctly predicted by our CRISPR-Cas9 screen and are targets of existing drugs that may be suitable for re-purposing 37-39 . These 6 genes are excellent candidate targets for re-purposing existing drugs to treat APAP-induced ALI and ALF. A further 3 genes, SIRT1, GPX4, and GSS, were identified as targets of drugs with known gene interactions; however, the CRISPR-Cas9 screen did not agree with the published gene role (protective or susceptible) in APAP-induced hepatotoxicity 40-42 .

Functional validations of candidate genes. Mouse Lztr1, Nampt, and Pgm5 were selected for further in vitro validation of their functional effect on survival of APAP injury in primary mouse hepatocytes. Nampt knockdown by siRNA was significantly pathogenic when compared with a scramble control after 3 h APAP treatment (Fig. 7A,B, Supplementary Fig. 6a). Lztr1 knockdown by siRNA was significantly protective when compared with a scramble control after 3 h APAP treatment (Fig. 7C,D, Supplementary Fig. 6b). Pgm5 knockdown by siRNA resulted in a significant increase in cellular survival after 3 h of APAP treatment when compared with the scrambled control (Fig. 7E,F, Supplementary Fig. 6c).
Discussion
This study has identified a number of novel and previously unrevealed regulators of APAP-induced hepatotoxicity by employing a state-of-the-art genome-wide CRISPR-Cas9 screen in a hepatocyte cell line. Selected targets have been validated in primary hepatocytes and cross-referenced against other available datasets from human and mouse studies. Our study has illustrated the power of a genome-wide CRISPR-Cas9 screen to systematically identify novel genes involved in APAP-induced hepatocyte toxicity and, most importantly, it provides a rich resource for further experimentation to identify potential new diagnostic targets or to develop novel therapeutic modalities for APAP-induced hepatocyte toxicity. Validation of the screen findings was sought at multiple steps in the analysis and by siRNA in primary hepatocytes. Inspection of the significant genes revealed overlap with human microarray and mouse RNA-seq studies of APAP overdose. Additionally, several top genes identified from the screen for further study already had known associations with APAP in the literature. Lastly, some of the genes identified from the screen for further study have been previously identified as essential. While these genes were not essential in our study, their relationship with APAP treatment would support their roles in critical cellular functions that, when disrupted, result in cell death.
Although few genes were completely removed from the pooled mutant cell population prior to APAP treatment, thousands were missing after 4 days of APAP treatment. Based on the kill curve, 4 days of APAP treatment results in about 1% of cells surviving, indicating that the majority of cells are killed. The survival of cells with low numbers of sgRNAs is only statistically important if their proportion within the surviving population is significantly different from the starting population, consistently across multiple sgRNAs per gene. The early time points (30 min to 24 h) in this screen are based on traditional gene expression screening techniques. By considering the impact of drug selection at early time points we can better assess the early- and late-response genes involved in drug toxicity. We propose that a Wilcoxon rank-sum value of p < 10^-10 may be too stringent for addressing finer-scale effects of gene knockout.
Using GSEA pathway analysis, our screen identified WNT signaling (KEGG pathway) as a very strongly depleted pathway and also identified positive regulation of Notch signaling (All Gene Ontology gene set) as a significantly depleted pathway (p < 0.05). Notch signaling has previously been identified as essential to survival of APAP 43 . Further validating our screening methodology, both the spliceosome and ribosome KEGG pathways are among the most strongly depleted pathways after 4 days of APAP treatment. Our top negatively selected pathway after 24 h APAP treatment, regulation of skeletal muscle contraction, corroborated existing work suggesting that intracellular calcium may be important to the response to APAP. However, the role of this pathway in APAP-induced hepatotoxicity is unclear.
The 3 gene expression datasets all used distinct sampling methodologies and, when combined with the CRISPR-Cas9 screen data, produced a comprehensive picture of changes in gene expression after APAP overdose. GSE70784 consists of blood samples from participants dosed with the daily maximum of APAP for an extended time. [Figure 6F-H caption fragments, recovered: overlap of the negatively and positively selected CRISPR/Cas9 screen hits (p < 0.05, 24 h) with the GSE70784 responders vs. placebo comparisons at 1 day and 8 days (p < 0.05), shown as heatmaps of differential log2 fold change of the most positively or negatively selected sgRNAs (left to right).] These data reflect a more chronic drug exposure, and response to the drug is measured by ALT. GSE74000 consisted of liver biopsies from livers explanted after APAP-induced ALF and liver biopsies obtained from non-ALF donors. This dataset, although it contains few samples, represents differential gene expression in humans at the end-point of the disease. The mouse RNA-seq data (GSE110787) provided an extremely controlled population with a controlled APAP dosage, avoiding the inter-population variability that may affect studies in human populations. The local inflammatory response and accumulation of neutrophils, while not considered necessary for the initiation or progression of ALF, play a major role in clearing necrotic cells and alter the liver injury micro-environment. In addition, the inflammasome contributes greatly to the late stage of injury through activation of caspase-1 and IL1β, with further cytokines and chemokines contributing to the recruitment of neutrophils and monocytes 44 . This late stage of injury would be better captured by the mouse RNA-seq (ALF, GSE110787) and human microarray (ALF, GSE74000) datasets, since they represent a late-stage disease in a whole organism, which includes inflammatory and immune interactions not present in hepatocytes alone. It is therefore unsurprising that we observed the best overlap of the CRISPR screen data with the human liver injury microarray data (GSE70784).
This approach addresses APAP-induced liver injury in 2 distinct ways. First, we identified genes with a role in APAP metabolism by assessing the effect of gene knockouts on cell proliferation and survival. Next, we identified genes that were differentially expressed in response to APAP. The combination helps us build hypotheses about the role of these genes in the disease process. This cross-validation with other APAP datasets is targeted at identifying genes that are important to APAP metabolism and may be novel diagnostic or therapeutic biomarkers. Genes that are highly ranked in the CRISPR screen (p < 0.05), and whose RNA is differentially expressed at high enough levels, could allow a blood sample (preferable) or liver biopsy (less preferable) to be used to rapidly detect changes in expression levels resulting from APAP overdose in the clinic. Novel genes identified by this method that were highly ranked in the CRISPR-Cas9 screen and in the gene expression data are the strongest candidates for further study.
We tested the effect of siRNA knockdown of Lztr1, Nampt, Pgm5, and Naaa in primary mouse hepatocytes to validate our screen findings. We demonstrate that Leucine Zipper Like Transcription Regulator 1 (LZTR1) knockout in HuH7 and knockdown in mouse cells increase cellular survival of APAP-induced injury. LZTR1 has a positive LFC in the APAP-exposed human microarray data GSE70784, suggesting that while the gene knockout increases survival of APAP, the gene is also elevated in APAP-treated subjects (Supplementary Data 11). LZTR1 mutations are associated with Noonan syndrome 10, schwannomatosis-2, gastric cancer, and ventricular septal defects, and deletion of the gene may be associated with DiGeorge syndrome [45][46][47][48][49] . The GO annotations for LZTR1 include transcription factor activity and sequence-specific DNA binding. The protein localizes to the Golgi, where it is thought to have a stabilizing effect. Nicotinamide phosphoribosyltransferase (NAMPT, PDB ID 4LVF.A) was selected for further study because, although it is not significant in this screen, other data from our lab demonstrate a protective effect of its overexpression against APAP-induced hepatotoxicity. In mice, Nampt has reduced expression after APAP treatment (LFC = -0.476, p < 0.05). This, in combination with the number of other NAD metabolism genes that are significantly ranked in this screen, led us to validate the observed effect of NAMPT knockout in HuH7 with knockdown in mouse hepatocytes, which we found to increase susceptibility to APAP-induced injury. The NAMPT protein catalyzes a step in the biosynthesis of nicotinamide adenine dinucleotide. NAMPT's role in NAD salvage is thought to be important to a number of metabolism- and aging-related conditions [50][51][52][53][54][55][56][57] . It is involved in the NAD metabolism and common cytokine receptor gamma-chain family signaling pathways. GO annotations include protein homodimerization activity and drug binding. NAMPT's role in APAP-induced hepatotoxicity does, however, need further study in whole organisms to evaluate its role during the different stages of liver injury. The secreted form of Nampt functions as both a cytokine and an adipokine and inhibits neutrophil apoptosis, which is implicated in the second phase of acetaminophen-induced injury 58 .
Phosphoglucomutase 5 (PGM5) knockdown increased cellular survival of APAP treatment, validating our CRISPR/Cas9 screen finding that knockout of the gene is protective (Supplementary Data 11). PGM5 has a negative LFC in the APAP-exposed human microarray data GSE70784, suggesting that the gene knockout increases survival of APAP exposure and that gene expression is decreased after APAP exposure. PGM5 does not exhibit phosphoglucomutase activity and is a component of cell-cell and cell-matrix junctions. It is expressed at high levels in smooth muscle, is annotated to the metabolism of galactose and glycogen, and is involved in the porphyrin and chlorophyll metabolism pathway. GO annotations include structural molecule activity, intramolecular transferase activity, and phosphotransferase activity. Abnormal expression and mutation of PGM5 are associated with a number of diseases, including Duchenne muscular dystrophy and colorectal tumorigenesis 59,60 .
Although we were able to confirm knockdown of mouse Naaa in vitro, we were not able to validate the increase in susceptibility observed in the CRISPR-Cas9 screen. It is possible that the effect was too small under the conditions used for the validation experiments, or that a true knockout is needed to observe the effect.
It is widely accepted that cytochrome P450 isoforms play an important role in APAP metabolism. While we expected to see the cytochrome P450 isoforms higher in the gene rankings of the negative screen, it is unsurprising that they are not highly ranked. Multiple isoforms are suspected to regulate the metabolism of APAP, so it is possible that others compensate for the knocked-out isoform. The low, though not totally absent, expression of some CYPs in HuH7 arguably increases the potential for this system to reveal non-canonical mechanisms of survival and susceptibility 61 . HuH7 additionally metabolizes APAP by glucuronidation and sulfation at low levels 7,61 . Although there are always concerns when using a cell line to study a biological mechanism, HuH7 has been used successfully for studies of drug metabolism 61,62 . To carry out the CRISPR-Cas9 screen it was necessary to use a cell line that could be transduced and did not require differentiation. Whenever possible, we validated our findings in primary mouse hepatocytes.
To better control for potential differences in drug metabolism across systems and to identify the most promising candidate genes, the CRISPR-Cas9 gene knockout rankings were cross-referenced with multiple human and mouse datasets. We also identified genes with likely and known associations with APAP-induced hepatotoxicity (NAD metabolism genes and genes containing polymorphisms). Further study of the polymorphisms in these genes could result in a diagnostic or prognostic SNP panel, and further study of the roles of these genes could inform their use in targeted therapies. These candidate genes were assessed for druggability by existing drugs as a means to more quickly bring forward new therapies. Indeed, 6 candidate genes (3 novel and 3 known) are targets of existing drugs with an interaction predicted to be protective against APAP-induced hepatotoxicity.
Conclusions
Collectively, this study has illustrated the power of a genome-wide CRISPR-Cas9 screen to systematically identify novel genes involved in APAP-induced hepatocyte toxicity and to provide potential new targets to develop novel therapeutic modalities. Combined with functional validations, this screening technique offers a robust and dynamic way to identify candidate genes for a variety of disease models. In this study we demonstrate that LZTR1 and PGM5 knockout and knockdown are protective against APAP-induced hepatotoxicity.
The gene NAMPT is protective against APAP-induced ALI in vivo; although it was not identified directly by the sgRNA screen, we show that its knockdown increases susceptibility to APAP-induced hepatotoxicity. NAMPT has a known role in NAD salvage that warrants further study to determine whether its protective effect results from increased NAD supporting glutathione production and CYP function, or from a novel mechanism.
These genes represent novel diagnostic and therapeutic targets for improving the care of acetaminophen overdose. Gene expression could be used to determine susceptibility to APAP hepatotoxicity as well as to diagnose and predict disease severity and outcome. Expression- and function-associated variants in these genes could be used in risk-assessment genotyping panels. Furthermore, these genes represent novel biomarkers for personalized therapeutics. In silico analysis identified a number of the candidate genes as targets of existing drugs. These existing drugs could be quickly re-purposed to treat and prevent APAP-induced ALF. Further studies are needed to better understand the functional roles of the genes and pathways highlighted in this study.

Cell Culture. HEK293FT cells (Thermo Fisher cat. R70007, Waltham, MA) were maintained in high-glucose DMEM (Thermo Fisher cat. 11965118) supplemented with 100 U/ml penicillin and streptomycin (Thermo Fisher cat. 15140122), non-essential amino acids (Thermo Fisher cat. 11140050), 2 mM L-glutamine (Thermo Fisher cat. 25030081), 1 mM sodium pyruvate (Thermo Fisher cat. 11360070), and 10% fetal bovine serum (Atlanta Biologicals cat. S11150, Atlanta, GA). Cells were detached with trypsin-EDTA (Thermo Fisher cat. 25200056). HuH7 was obtained from the Japanese Collection of Research Bioresources Cell Bank 66 . The HuH7 human hepatocellular carcinoma cell line (JCRB cat. 0403, Osaka, Japan) was chosen as a model for APAP toxicity studies because it is more robust than primary hepatocytes, allowing efficient lentiviral transduction, transfection, and genome editing with CRISPR/Cas9 62,67-70 .
Cells were maintained in DMEM (Thermo Fisher cat. 111885092) supplemented with 100 U/ml penicillin and streptomycin (Thermo Fisher cat. 15140122), non-essential amino acids (Thermo Fisher cat. 11140050), and 10% fetal bovine serum (Atlanta Biologicals cat. S11150) as previously described, with the addition of 2 mM L-glutamine (Thermo Fisher cat. 25030081) and 1 mM sodium pyruvate (Thermo Fisher) 71 . Cells were detached with trypsin-EDTA (Thermo Fisher cat. 25200056). All incubations were performed at 37 °C and 5% CO 2 .

Cell Transduction Using the GeCKOv2 Library. HuH7 cells were detached using 0.25% trypsin-EDTA (Thermo Fisher cat. 25200056) and seeded the day prior to transduction at 6E6 cells per T-150 TPP flask (MidSci cat. TP0151, Valley Park, MO). The flasks were then transduced for 48 h in culture media + 8 µg/ml polybrene (Thermo Fisher cat. 107689-10 G) + Cas9 lentivirus at an MOI <0.1. HuH7 underwent monoclonal selection in 1 µg/ml blasticidin (Thermo Fisher cat. A1113903) before Cas9 expression was confirmed by western blot. HuH7-Cas9 was transduced with the packaged GeCKOv2 lentiviral library as described above at an MOI of 0.5. The pooled, transduced cells were selected with 1.5 µg/ml puromycin (Invitrogen cat. Ant-pr-1) for 3 days alongside cells transduced with the empty vector lentiGuidePuro and the positive fluorescent control PLJM1-EGFP. PLJM1-EGFP fluorescence was verified 48 h post-transduction.

APAP Screen and Sample Collection. After 8 days of transduction a T0 sample was collected (N = 2) and the remaining library-transduced cells were treated with 15 mM APAP for 30 minutes up to 4 days (2 biological replicates for the T0, 24 hour, and 4 day samples). Samples that underwent 4 days of APAP treatment were outgrown for 21 days prior to collection. Genomic DNA was isolated from samples of a minimum of 2E7 cells using the Blood and Cell Culture Midi Kit (Qiagen cat. 13343, Valencia, CA), resulting in a minimum of 136 µg DNA per sample. DNA was quantified using the Qubit high-sensitivity DNA quantification assay (Thermo Fisher cat. Q32851) and Take3 microspot plate reader (BioTek Epoch, Winooski, VT).
Lentivirus Production and Purification.
Sequencing. 3.33 µg of the isolated genomic DNA was used to amplify the bar-coded amplicons in 39 Herculase II DNA polymerase (Agilent cat. 600679, Santa Clara, CA) reactions per sample (primers described in Supplementary Data 13). 5 µl of amplicon or 1 µl of diluted plasmid library was used as template in 13 × 50 µl Herculase II DNA polymerase reactions per sample to attach pooled variable-length spacers and Illumina indexes (primers described in Supplementary Data 13). 24 cycles were used to amplify DNA in the first and second PCRs. The amplicon fragments after PCR 2 have the following sequence (354-362 bp library with a variable 20 bp sgRNA sequence in the middle) (SF1). DNA was pooled by sample and purified using the Nucleospin Gel and PCR Clean-up kit (Clontech cat. 740609.250, Mountain View, CA). DNA was quantified using a Qubit high-sensitivity DNA quantification assay (Thermo Fisher cat. Q32851) and Take3 73 . Read counts were normalized to the median with T0 as control and analyzed using sgRNA- and gene-level RRA (Robust Rank Aggregation) in MAGeCK v0.5.6. In comparisons between 2 time points the biological replicates were handled as independent replicates, and in the pooled T0 vs. 30 min-24 h and 30 min-end comparisons the replicates were combined. Gene-level analysis was validated using the Maximum Likelihood Estimate (MLE) method in MAGeCK v0.5.6. Genes with fewer than 3 sgRNAs were removed from the gene-level analysis but were included in the Gene Set Enrichment Analysis (GSEA) pathway analysis implemented in MAGeCK v0.5.6 32,74 . Box plots, scatter plots, and heat maps were generated in R. Venn diagrams were generated using http://bioinformatics.psb.ugent.be/webtools/Venn/.
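For readers unfamiliar with the normalization step, the sketch below shows a median-ratio normalization in the spirit of MAGeCK's default; it is a schematic re-implementation, not MAGeCK source code.

```python
import numpy as np

def median_ratio_normalize(counts):
    """Median-ratio (DESeq-style) normalization of a (n_sgRNA, n_samples) array."""
    logc = np.log(counts + 1.0)                   # pseudo-count guards against zeros
    ref = logc.mean(axis=1, keepdims=True)        # per-sgRNA log geometric mean
    size = np.exp(np.median(logc - ref, axis=0))  # per-sample size factors
    return counts / size

raw = np.array([[100.0, 210.0], [50.0, 95.0], [400.0, 820.0]])
print(median_ratio_normalize(raw))                # deeper sample scaled down ~2-fold
```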
Pathway analysis. Analysis of pathway-level effects of APAP treatment in the 24 h and 4d samples individually vs. T0 was accomplished using GSEA in Mageck v0.5.6 using the MsigDB "KEGG gene sets" and "all GO gene sets". Ingenuity Pathway Analysis of 24 h vs. T0 (genes with p < 0.05) and 4d vs. T0 (genes with p < 0.05) was also used to predict pathway-level effects of APAP treatment.
Statistical analysis of GEO datasets. Human APAP analysis. We analyzed samples from 2 publicly available human datasets of acetaminophen overdose from the Gene Expression Omnibus, GSE74000 and GSE70784 9,75 . Gene candidates identified using the genome-wide CRISPR-Cas9 screen were cross-referenced with genes that were significantly correlated with APAP overdose in 2 human microarray datasets identified in the Gene Expression Omnibus (https://www.ncbi.nlm.nih.gov/geo/). Of the available gene expression datasets assessing the effect of APAP, these were selected because they address hepatotoxicity at a range of stages. These datasets were analyzed in GEO2R using the microarray data normalized and deposited by the original authors. GSE70784 contains gene expression data from blood in patients receiving a daily dose of APAP or placebo. These data compare patients at a higher risk of injury (responders) to non-responders and placebo after 1 day and 8 days of dosing. Genes with differential expression in blood, especially early after dosing, are ideal diagnostic biomarkers. GSE74000 contains gene expression data from liver biopsies from healthy patients and patients with APAP-induced ALF. These data address differential gene expression in end-stage disease and better inform the biological mechanisms active in APAP-induced ALF.
In GEO2R, microarray data from 12 APAP-responder blood samples were compared to 32 non-responders and 10 placebo controls at days 1 and 8 of APAP treatment (GSE70784). Subjects were treated with 4 g APAP or placebo daily for 7 days and were followed for 14 days. Responders were classified as patients with ALT (alanine aminotransferase) >2 times the upper limit of normal during days 4-9 after the start of APAP dosing. Background correction and normalization were completed by the depositing authors. Data were log2 transformed prior to analysis, and the unadjusted p-values were used for comparison with the CRISPR screen.
Microarray data from 3 APAP-induced ALF liver samples and 2 healthy liver samples were obtained from the GEO dataset GSE74000 and compared using GEO2R. Background correction, median polish summarization, and quantile normalization were completed by the depositing authors. Data were log2 transformed prior to analysis, and the FDR-adjusted p-values were used for comparison with the CRISPR screen. Heat maps were generated in R. Box plots were generated in GEO2R.
Mouse APAP analysis. RNA-seq data from mice previously published by our lab (GSE110787), evaluating the effect of APAP overdose on RNA expression changes in the liver, were analyzed: 7 male 11-week-old C57BL/6 mice (4 saline-treated control mice and 3 mice 24 h after exposure to 200 mg/kg APAP (Sigma cat. A7085, St. Louis, MO) via intraperitoneal injection) underwent RNA-sequencing on an Illumina HiSeq 1500 35. RNA was isolated from liver using the MirVana miRNA isolation kit (Thermo Fisher cat. AM1561, Waltham, MA).
Samples were prepared using the TruSeq Stranded Total RNA Sample Preparation Kit (Illumina cat. RS-122-2201, San Diego, CA), and clusters were generated using the TruSeq Paired-End Cluster Kit v3-cBot-HS (Illumina cat. PE-401-3001, San Diego, CA). Paired-end sequencing (2 × 101 cycles) was completed using the TruSeq SBS kit v3-HS (Illumina cat. FC-401-3001, San Diego, CA). The raw base calling (.bcl) files were converted to demultiplexed compressed FASTQ files using Illumina's bcl2fastq v2.17 software. TopHat 2.0.9 was used to map RNA-seq reads against the mouse reference genome (mm10) using default parameters 76,77. Transcript assembly, abundance estimation, and expression comparison were conducted with Cufflinks v2.2.1 and reported in Fragments Per Kilobase of exon per Million fragments mapped (FPKM). Cuffdiff, part of the Cufflinks package, was used to calculate the statistical significance of gene expression changes between treated and untreated mice. Box plots and heat maps were generated in R.
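For reference, the FPKM unit used above reduces to simple arithmetic; a minimal sketch with illustrative numbers (not Cufflinks' internal code):

```python
def fpkm(fragments, exon_length_bp, total_mapped_fragments):
    """Fragments Per Kilobase of exon per Million mapped fragments."""
    kb = exon_length_bp / 1e3
    millions = total_mapped_fragments / 1e6
    return fragments / (kb * millions)

# Example: 500 fragments on a 2 kb transcript in a 25M-fragment library.
print(fpkm(500, 2_000, 25_000_000))  # -> 10.0 FPKM
```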
This RNA-seq study of APAP-induced ALI identified genes that were differentially expressed in a genetically and drug-dosage-controlled environment after liver injury had occurred but prior to ALF. These data better illustrate the changes in gene expression due to drug overdose, absent the variation that is unavoidable in human studies.
Functional validations in primary mouse hepatocytes and analysis. Cryopreserved hepatocytes (Lonza cat. MBCP01, Allendale, NJ) from 8-week-old male C57BL/6 mice were thawed in thawing media (Lonza cat. MCRT50) and immediately seeded at a density of 15,000 cells per 96-well and 250,000 cells per 12-well in Williams E media with thawing and plating supplement (Thermo cat. A1217601 and cat. CM3000, respectively). After 4 h the cells were transfected using the standard Polyplus INTERFERin protocol for 4 h (VWR cat. 89129-930, | 2019-02-05T15:30:35.584Z | 2019-02-04T00:00:00.000 | {
"year": 2019,
"sha1": "4d143d4fcfba169b7ed64c7f7b8461d7d49ca23f",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-37940-6.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3749835272dd3354fd1a1403b6b54c4445e76f54",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
229157990 | pes2o/s2orc | v3-fos-license | Effects of Semitransparent Window Aspect Ratio on Interaction of Collimated Beam with Natural Convection: Part I
The effects of the semitransparent window's aspect ratio on the interaction of the collimated beam with natural convection have been investigated numerically in the present work. Combinations of the geometrical parameters of the semitransparent window, i.e., height ratio ($h_r$) and window width ratio ($w_r$), and Planck numbers of the medium have been considered. The other parameters, i.e., the flow parameter (Ra$=10^5$), fluid parameter (Pr=0.71), thermal parameter (N), irradiation (G=1000 $W/m^2$), angle of incidence ($\phi=135^\circ$), geometrical parameter of the geometry ($A_r$=1), and the wall conditions, have been kept constant. A collimated beam with irradiation value G=1000 $W/m^2$ is applied on the semitransparent window at an azimuthal angle $\phi=135^\circ$. The cavity is convectively heated from the bottom with heat transfer coefficient 50 $W/m^2K$ and free-stream temperature 305 $K$. A semitransparent window is created on the left wall, and isothermal conditions (T=296 $K$) are applied on the semitransparent window and the left and right vertical walls, whereas adiabatic conditions are applied on the upper wall of the cavity. The dynamics of the two vortices inside the cavity change considerably with combinations of the semitransparent window's aspect ratio and the Planck number (Pl) of the medium. The left vortex breaks into two parts and remains confined in the upper and lower left corners for some combinations of aspect ratios and Planck numbers of the medium. The thermal plume flickers depending on the dynamics of the two vortices inside the cavity. Localized heating of the fluid happens mostly for large height ratios of the semitransparent window. The conduction, radiation, and total Nusselt numbers are also greatly affected by the semitransparent window's aspect ratio and the Planck number of the medium.
Introduction
Natural convection flows in square/rectangular enclosures have been extensively investigated in many experimental and numerical studies due to their wide variety of engineering applications, such as nuclear reactors, cooling of electronic equipment, heat exchangers, HVAC systems, and building energy management. The flow in these applications is mainly buoyancy-driven. Buoyancy-driven flow exhibits complex fluid flow phenomena which mainly depend on the temperature gradient, the aspect ratio, the geometric shape and size (square/circular/rectangular/triangular), and the position of the energy source inside the enclosure. Although practical applications involve complex geometries, most of the complex fluid flow and heat transfer phenomena of natural convection can be understood in simplified geometries such as squares and rectangles.
Two heated cylindrical geometries (elliptical and cylindrical) inside a square enclosure with different aspect ratios (0.25 to 4.00) were simulated for natural convection over the range of Rayleigh numbers $10^4$ to $10^6$ using the immersed boundary method by Cho et al. [1,2]. A second-order accurate central difference scheme and a fractional time-step method were used to simulate the flow field. The advection and diffusion terms were treated by the second-order Adams-Bashforth scheme and the Crank-Nicolson scheme, respectively. The authors observed the formation of secondary vortices above the top cylinder for Rayleigh number $10^5$ due to the increase in the convection rate. The flow and thermal fields became unsteady for Rayleigh number Ra $= 10^6$.
Cheong et al. [3] numerically studied natural convection in an inclined enclosure with a sinusoidal temperature profile on the left wall and an isothermal condition on the right wall, whereas the top and bottom walls were adiabatic. They found that convection dominated for aspect ratios $0.25 \le A_r \le 5$, and that heat transfer was mostly by conduction for $A_r = 10$ over the whole Rayleigh number range under study. Yigit et al. [4] investigated the effect of the enclosure aspect ratio on the yield stress of a Bingham fluid in natural convection in a cavity heated from the bottom.
It was reported that the number of convective vortices formed was greatly affected by the aspect ratio of the geometry. Comprehensive reviews of natural convection in various non-square enclosures and in practical geometries for engineering applications were compiled by Das et al. [5] and Rahimi et al. [6], respectively.
Webb and Viskanta [7] performed an experiment to observe the temperature distribution and flow field of water inside a rectangular enclosure subjected to radiative flux on one wall, with the remaining walls kept adiabatic. Their experimental results showed the formation of a thin hydrodynamic boundary layer at the vertical walls and a stagnant, stably stratified temperature field at the core of the cavity. Numerical studies of the interaction of thermal radiation with buoyancy-driven flows in different geometrical configurations with a heated cylinder at the centre have been reported in [8,9,10]. It was reported that radiation exchange homogenized the temperature field inside the cavity. The average Nusselt numbers for the square and triangular geometries were higher than for the cylindrical geometry. The effect of the orientation of the cavity on the combined modes of heat transfer was studied in [11]. The flow and temperature fields were significantly altered by radiation. The conductive Nusselt number increased with increasing optical thickness, whereas the radiative and total Nusselt numbers decreased. Sun et al. [12] investigated the performance of the $P_1$, $SP_3$, $P_3$, discrete ordinates method (DOM), and finite volume method (FVM) for two-dimensional combined natural convection with radiation in an absorbing-emitting medium and compared the results with those of the Monte Carlo method. The effects of Rayleigh number, Planck number, and optical thickness on the flow field and heat transfer rates were analyzed in detail. It was reported that the FVM took twice the CPU time of the DOM. The $SP_3$ and $P_3$ methods produced higher accuracy than the FVM for optically thick media but could not predict the radiative heat flux accurately at lower Planck numbers because of oscillatory convergence behaviour.
Mondal and Mishra [13] used the lattice Boltzmann method (LBM) to simulate natural convection in a square cavity coupled with radiation, studying the effects of parameters such as the extinction coefficient and the scattering albedo on the flow field and temperature distributions inside the cavity. The FVM was used for the RTE. It was observed that the flow field was symmetric and that the scattering coefficient had no significant effect, whereas the extinction coefficient had a pronounced effect on the temperature distributions. A coupled numerical investigation of natural convection with volumetric radiation in a gray, isotropically scattering medium in a two-dimensional rectangular cavity was carried out for various Planck numbers, scattering albedos, tilt angles, and aspect ratios of the cavity in [14]. It was observed that the emissivity of the horizontal wall and the scattering albedo have a significant effect on the flow and temperature patterns, and that heat transfer decreased with increasing scattering albedo. Hakan and Derbentil [15] investigated combined natural convection and radiation in rectangular enclosures with different aspect ratios. They also proposed correlations for the mean Nusselt number and observed that the mean Nusselt number increased with the aspect ratio of the cavity.
Nia and Nassab [16] made a numerical study of natural convection due to temperature and concentration gradients in a square cavity with radiation over a range of optical thicknesses of the medium. It was stated that the optical thickness affected both the thermal and mass transfer in the cavity, and that the thermal field reached steady state faster than the concentration field. To give better insight into the flow and concentration fields, the time evolution of the isotherms, streamlines, and iso-concentration lines was presented for Ra $= 10^4$ and optical thickness 10. Transient simulations of the effect of solar radiation on a reservoir sidearm were performed by Lei et al. [17]. Three distinct regimes were observed: 1) an initial stage dominated by conduction at the bottom, 2) a transitional stage, in which circulation was established and instabilities appeared, and 3) a quasi-steady stage. Ming and Zang [18] implemented a modified finite volume method for combined natural convection and radiation on hybrid grids and validated their solver against benchmark cases.
Wang et al. [19] compared the first- and second-order formulations of the radiative transfer equation (RTE) using the diffuse approximation meshless (DAM) method without upwinding treatment for interpolation. Two- and three-dimensional geometries were considered to investigate the accuracy and computational resource utilization. It was observed that the first-order formulation of the RTE is faster than the second-order one. They used the moving least squares meshless method to investigate the accuracy for coupled natural convection with radiation in a semitransparent medium, considering the vorticity-stream function formulation and the vorticity-vector potential formulation for 2-D and 3-D geometries, respectively, and the discrete ordinates method to solve the radiative transfer equation. Their results showed that the moving least squares meshless method was stable and accurate for natural convection coupled with radiation.
In all the above works either pure natural convection or combined natural convection with diffuse radiation was considered; however, little work is available on collimated beam radiation. The discrete transfer method (DTM) was used to solve the radiative transfer equation in a medium of varying refractive index by Ben and Dez [20]. Anand and Mishra [21] used the DTM to solve the RTE in participating media and derived the exact radiative flux field expression for a linearly varying refractive index. Ilyushin [22] studied numerically the collimated beam in a medium of varying refractive index. In [24], the presence of an absorbing, emitting, and anisotropically scattering medium within a two-dimensional rectangular domain was considered. A collimated beam was irradiated on the top wall over a small width, whereas the remaining top, left, right, and bottom walls were maintained at constant temperature. The collimated beam treatment was applied only on the wall, whereas a diffuse radiation treatment was used inside the cavity. Recently, a collimated beam feature was developed in OpenFOAM by Chanakya and Kumar [25], and its effects on natural convection were investigated. They further investigated the thermal adiabatic boundary condition [26] on the semitransparent wall of the cavity.
From the above literature it is noticed that, to the best of the authors' knowledge, there is at present no significant work available on collimated beam radiation inside a cavity. The collimated beam has a wide range of applications, such as laser treatment, solar cavity receivers, laser solidification and melting, and illumination from the head lamp of a car. In these applications, the collimated beam travels through an optical window, also known as a semitransparent window. The effects of the semitransparent window's aspect ratio on the interaction of the collimated beam with natural convection have been studied numerically in the OpenFOAM framework in the present work. This paper is outlined as follows: the problem statement is defined in section 2, followed by the mathematical modeling and numerical scheme in section 3. The verification and grid independence tests are explained in sections 4 and 5, respectively. Section 6 elaborates the results and discussion for the Planck numbers and all aspect ratios. Finally, the conclusions of this numerical study are provided in section 7.
Problem Description
Consider the buoyancy-driven flow of a Newtonian fluid within the square enclosure depicted in Fig. 1. The enclosure's bottom wall is subjected to a convective boundary condition with free-stream temperature 305 K and heat transfer coefficient 50 W/m²K, the top wall is taken as adiabatic, and the right and left walls are subjected to an isothermal (296 K) boundary condition.
The Cartesian coordinate axes are along the bottom and left vertical walls of the enclosure, with the origin at the junction of these two walls. The acceleration due to gravity acts vertically downward (negative direction). All walls of the enclosure are treated as opaque with emissivity 0.9 for radiation, except the semitransparent window on the left wall where the collimated beam is applied. Four combinations of the semitransparent window's height ratio ($h_r = H_w/L$) and window width ratio ($w_r = W_w/L$) have been considered.
Mathematical formulation and Numerical procedures
The following assumptions have been made in the mathematical modeling of the above problem:
1. The flow is two-dimensional, steady, laminar, and incompressible.
2. The flow is driven by the buoyancy force, which is modeled by the Boussinesq approximation.
3. The thermophysical properties of the fluid are constant.
4. The fluid medium absorbs and emits but does not scatter radiative energy.
5. The transmissivity of the semitransparent window is one for the incoming radiation and zero for the other walls.
Based on the above assumptions, the governing equations in the Cartesian coordinate system are given as

$$\frac{\partial u_i}{\partial x_i} = 0 \qquad (1)$$

$$u_j \frac{\partial u_i}{\partial x_j} = -\frac{1}{\rho}\frac{\partial p}{\partial x_i} + \nu\,\frac{\partial^2 u_i}{\partial x_j \partial x_j} + g\,\beta_T\,(T - T_c)\,\delta_{i2} \qquad (2)$$

$$\rho c_p\, u_j \frac{\partial T}{\partial x_j} = \kappa\,\frac{\partial^2 T}{\partial x_j \partial x_j} - \frac{\partial q^R_i}{\partial x_i} \qquad (3)$$

where $u$ is velocity, $p$ is pressure, $\rho$ is density, $\beta_T$ is the thermal expansion coefficient, $g$ is gravity, $c_p$ is the specific heat capacity at constant pressure, and $\kappa$ is the thermal conductivity of the fluid; $i, j$ are tensor indices in the Cartesian coordinate system. The $\delta_{i2}$ is the Kronecker delta, given as

$$\delta_{i2} = \begin{cases} 1, & i = 2 \\ 0, & i \neq 2 \end{cases}$$

The $\partial q^R_i/\partial x_i$ term in eq. (3) is the divergence of the radiative flux, which is calculated as

$$\frac{\partial q^R_i}{\partial x_i} = \kappa_a \left( 4\pi I_b - G \right) \qquad (4)$$

where $\kappa_a$ is the absorption coefficient, $I_b$ is the blackbody intensity, and $G$ is the irradiation, which is evaluated by integrating the radiative intensity ($I$) over all directions, i.e.,

$$G = \int_{4\pi} I \, d\Omega \qquad (5)$$

The intensity field inside the cavity can be obtained by solving the following radiative transfer equation (RTE)

$$\frac{dI(\vec{r}, \hat{s})}{ds} = \hat{s}\cdot\nabla I = \kappa_a \left( I_b - I \right) \qquad (6)$$

where $\vec{r}$ and $\hat{s}$ are the position and direction vectors, respectively, and $s$ is the path length in the beam direction.
The non-dimensional forms of equations (1)-(3) are obtained with the following scales: the scales for length, velocity, temperature, and conductive flux are $L$, $u_o$, $(T_{free} - T_c)$, and $\kappa(T_{free} - T_c)/L$, respectively, where $u_o$ is the convective velocity scale. The non-dimensional parameters involved are the Rayleigh number Ra, the Prandtl number Pr, the conduction-radiation parameter $N$, the Planck number Pl, and the optical thickness $\tau$ of the medium.

Fig. 2: Pictorial representation of (a) the cell arrangement for the finite volume method for the partial differential equations and (b) the angular discretization for the radiative transfer equation.

The non-dimensional irradiation is defined accordingly; however, the radiative transfer equation is always solved in dimensional form. This may be due to the fact that radiative quantities depend on absolute values of temperature and irradiation rather than scaled values.
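To make the non-dimensional groups concrete, the sketch below recovers Ra ≈ 10^5 and Pr ≈ 0.71 from assumed air-like properties; all property values and the cavity size are illustrative choices of this sketch, not values given in the paper:

```python
import math

# Illustrative evaluation of the governing non-dimensional groups.
# Property values are assumed air-like (~300 K); L is chosen so Ra ~ 1e5.
g      = 9.81          # m/s^2
beta_T = 1 / 300.0     # 1/K (ideal-gas approximation)
nu     = 1.6e-5        # m^2/s, kinematic viscosity
alpha  = 2.2e-5        # m^2/s, thermal diffusivity
L      = 0.023         # m, illustrative cavity size
dT     = 305 - 296     # K, T_free - T_c from the problem statement

Ra  = g * beta_T * dT * L**3 / (nu * alpha)
Pr  = nu / alpha
u_o = math.sqrt(g * beta_T * dT * L)   # buoyant (convective) velocity scale

print(f"Ra = {Ra:.2e}, Pr = {Pr:.2f}, u_o = {u_o:.3f} m/s")
```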
The Navier-Stokes equation (2) and the temperature equation (3) are subjected to the boundary conditions described in section 2, where $q_c = -\kappa\,\partial T/\partial n$ and $q_r = \int_{4\pi} I(\vec{r}_w, \hat{s})\,(\hat{n}\cdot\hat{s})\,d\Omega$. The radiative transfer equation (6) is subjected to the following boundary condition on all the walls except the semitransparent window, for $\hat{n}\cdot\hat{s} < 0$:

$$I(\vec{r}_w, \hat{s}) = \varepsilon\, I_b(\vec{r}_w) + \frac{1-\varepsilon}{\pi} \int_{\hat{n}\cdot\hat{s}' > 0} I(\vec{r}_w, \hat{s}')\,(\hat{n}\cdot\hat{s}')\, d\Omega'$$

where $\hat{n}$ is the unit surface normal and $\varepsilon$ is the emissivity of the walls, taken as 0.9 in the present study.
The semitransparent window is subjected to a collimated irradiation ($G_{co}$) of 1000 W/m². The boundary condition for the RTE on the semitransparent window is

$$I(\vec{r}_w, \hat{s}) = I_{co}\,\delta(\hat{s} - \hat{s}_{co})$$

where $\delta$ is the Dirac delta function and $I_{co}$ is the intensity of the collimated irradiation, calculated from the irradiation value as

$$I_{co} = \frac{G_{co}}{d\Omega}$$

where $d\Omega$ is the collimated beam width.
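A small numerical check of the relation $I_{co} = G_{co}/d\Omega$, using the angular mesh selected later in the grid-independence test; the per-quadrant binning details are an assumption of this sketch:

```python
import math

G_co = 1000.0                 # W/m^2, collimated irradiation
n_phi_total = 5 * 4           # 5 azimuthal bins per quadrant x 4 quadrants
d_phi = 2 * math.pi / n_phi_total

# One polar bin spans [0, 90] deg (n_theta = 2 over 180 deg), so its
# solid angle is (cos 0 - cos 90) * d_phi.
d_omega = (math.cos(0.0) - math.cos(math.pi / 2)) * d_phi

I_co = G_co / d_omega         # W/(m^2 sr)
print(f"dOmega = {d_omega:.4f} sr, I_co = {I_co:.1f} W/(m^2 sr)")
```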
In the current work, the solid angle of the discretized angular space (Fig. 2b) in the collimated direction is taken as the beam width. Diffuse emission, reflection, and collimated beam radiation from the wall are also represented pictorially. OpenFOAM uses the finite volume method (FVM) to solve the Navier-Stokes (eq. 2) and energy (eq. 3) equations. The FVM integrates a partial differential equation over a control volume (Fig. 2a) to convert it into a set of algebraic equations of the form

$$a_p \phi_p = \sum_{nb} a_{nb}\, \phi_{nb} + S$$

where $\phi_p$ is any scalar, $a_p$ is the central coefficient, $a_{nb}$ are the coefficients of the neighbouring cells, and $S$ is the source term. The RTE (eq. 6) is converted into a set of algebraic equations by double integration over a control volume (Fig. 2a) and over a control angle (Fig. 2b). The resulting algebraic equations are solved by the preconditioned bi-conjugate gradient (PBiCG) method; the details of the algorithm and its implementation in OpenFOAM can be found in the books by Patankar [27] and Moukalled [28], respectively.
In the present simulation, the linear upwind scheme, which is second-order accurate, has been used to interpolate face-centered values from the cell-centered values. The linear upwind scheme can be written as

$$\phi_f = \phi_P + (\nabla\phi)_P \cdot \vec{r}_{Pf}$$

where $\phi_f$ is the face value of the scalar $\phi$ used to evaluate its flux $f_\phi$ through a face (Fig. 2a), $\vec{r}_{Pf}$ is the vector from the centre of the upwind cell to the face centre, and $P$ and $nb$ (including E, W, N, S, NE, NW, SE, and SW, i.e., East, West, North, South, North-East, North-West, South-East, and South-West cells) indicate the present and neighbouring cells.
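The algebraic system $a_p \phi_p = \sum a_{nb}\phi_{nb} + S$ can be solved with any iterative linear solver; the paper uses PBiCG, but a minimal Gauss-Seidel sketch on a toy 1-D diffusion problem illustrates the structure (all coefficients below are illustrative):

```python
import numpy as np

def gauss_seidel(a_p, a_nb, nb_idx, S, phi, iters=500):
    """Solve a_p*phi_p = sum(a_nb*phi_nb) + S cell by cell.

    A minimal stand-in for the PBiCG solver used in OpenFOAM; only the
    algebraic form matches, not the solver itself.
    """
    for _ in range(iters):
        for p in range(len(phi)):
            phi[p] = (np.dot(a_nb[p], phi[nb_idx[p]]) + S[p]) / a_p[p]
    return phi

# Toy 1-D diffusion, 4 cells; boundary values folded into the source S.
a_p    = np.array([2.0, 2.0, 2.0, 2.0])
a_nb   = [np.array([1.0]), np.array([1.0, 1.0]),
          np.array([1.0, 1.0]), np.array([1.0])]
nb_idx = [np.array([1]), np.array([0, 2]),
          np.array([1, 3]), np.array([2])]
S      = np.array([0.0, 0.0, 0.0, 1.0])
print(gauss_seidel(a_p, a_nb, nb_idx, S, np.zeros(4)))  # ~[0.2 0.4 0.6 0.8]
```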
The conductive and radiative fluxes on the walls are converted into Nusselt numbers as

$$Nu_C = \frac{q_{Cw}\, L}{\kappa\,(T_{free} - T_c)}, \qquad Nu_R = \frac{q_{Rw}\, L}{\kappa\,(T_{free} - T_c)}$$

where $Nu_C$ and $Nu_R$ are the conductive and radiative Nusselt numbers, $q_{Cw}$ and $q_{Rw}$ are the conductive and radiative wall fluxes, respectively, and $L$ is the characteristic dimension of the present problem. Further, the total Nusselt number is defined as

$$Nu_{tot} = Nu_C + Nu_R$$

where $Nu_{tot}$ is the total Nusselt number.
Verification
In the absence of any standard benchmark test case for the present problem, the validation has been performed in three steps: first, the standalone collimated beam irradiation feature; second, a pure natural convection problem heated from the bottom; and third, combined natural convection and radiation in a differentially heated cavity. The collimated irradiation feature [29] has been tested in a square cavity as shown in Fig. 4a. The left wall has a small window of non-dimensional size 0.05 at a non-dimensional height of 0.6.
The walls of the square cavity are black and cold, and the medium is non-participating. A collimated beam is irradiated on the window in the $135^\circ$ azimuthal direction. It is expected that the beam travels obliquely at $135^\circ$ without any attenuation and strikes the bottom wall exactly at a non-dimensional distance of 0.6 from the left wall. Figure 4b shows the contour of irradiation, which clearly shows the travel of the collimated beam without any attenuation. For the second step, fluid flow with heat transfer (without any radiation) is validated against Aswatha et al. [30], and combined diffuse radiation and natural convection in a cavity whose top and bottom walls are adiabatic and whose vertical walls are radiatively opaque and isothermal at different temperatures has been validated against Lari et al. [31]. The present results for both cases (see Fig. 5) are in good agreement with the published results.
Grid Independence Tests
Numerical solutions of the Navier-Stokes, energy, and radiative transfer equations are sensitive to the spatial discretization. Additionally, the radiative transfer equation requires angular discretization, which provides the directions along which it is solved. Thus, the optimum numbers of grid points and directions have been obtained through independence tests in two steps. 1. Spatial grid independence test: three spatial grid sizes were chosen to calculate the area-averaged total Nusselt number on the bottom wall, as shown in Table 1, for the present natural convection problem. The percentage error between the first and second grid sizes is 0.8%, whereas that between the second and third grid sizes is 0.15%. Thus, the spatial grid of 80×80 points is selected for further study.
2. Angular direction independence test: the polar ($n_\theta$) discretization does not have any effect in a two-dimensional analysis; thus, it has been fixed to 2 for the polar angle of $180^\circ$ in OpenFOAM. The effect of the azimuthal discretization in one quadrant of the angular space on the area-averaged total Nusselt number on the bottom wall is shown in Table 2. The percentage difference in the area-averaged Nusselt number between the first and second azimuthal discretizations is 0.09%, whereas between the second and third it is 0.22%. Thus, $n_\theta \times n_\phi = 2 \times 5$ per quadrant of the angular space is finally selected for the study of the present problem.
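The percentage differences quoted in both tests are plain relative errors between successive refinements; a one-line sketch with illustrative values rather than the actual Table 1-2 entries:

```python
def pct_diff(nu_coarse, nu_fine):
    """Relative difference (%) between successive grid/angular refinements."""
    return abs(nu_coarse - nu_fine) / abs(nu_fine) * 100.0

# Illustrative only; the paper's Tables 1 and 2 hold the real Nusselt values.
print(f"{pct_diff(5.80, 5.75):.2f} %")   # ~0.87 %
```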
Results and Discussion
In the present numerical simulation, the parameters have been fixed as follows: Rayleigh number Ra $= 10^5$, Prandtl number Pr = 0.71, Planck numbers Pl = 0, 1, 10, and 50, collimated irradiation $G_{co} = 1000$ W/m², and angle of collimated irradiation $135^\circ$. The simulations have been performed for the different aspect ratios of the semitransparent window and Planck numbers, and the corresponding fluid flow and heat transfer characteristics have been studied. Unlike cases A and B, for the transparent medium (Pl = 0) the left vortex is slightly smaller than the right vortex (compare (7a) and (15a) with (17a)). This is mainly because the collimated beam is incident between the non-dimensional lengths 0.3 and 0.5 on the bottom wall. This incidence length is below the left vortex; thus, the higher buoyancy reduces the size of the left vortex. Interestingly, the fluid velocity turns almost $90^\circ$ in the right vortex at the end point of the collimated strike. Also, the flow rate in the right vortex is higher than in the left vortex, unlike cases A and B for the transparent medium. With increasing Planck number of the medium, the flow rate in the right vortex keeps increasing up to Pl = 10. The size of the left vortex also keeps decreasing with the Planck number of the medium, and the left vortex breaks into two for Pl = 10. However, a totally different situation appears for these two vortices at Pl = 50: the left vortex grows and the right vortex shrinks. The flow rate in the left vortex also increases, whereas it reduces in the right vortex.
The plume arises from the collimated incidence length and is almost vertical for the transparent medium (Pl = 0) (see fig 18a); this plume bends towards the left for Planck number Pl = 1 (see fig 18b). It becomes fully bent for Pl = 10 (see fig 18c). On the contrary, the plume bends towards the right for Pl = 50 (see fig 18d). The isotherm lines are also clustered and parallel to the semitransparent window for Planck number Pl = 50. In this case also, the maximum temperature rise can be seen on the bottom wall, as in cases A and B, at the point of incidence of the collimated beam for Planck numbers 0 and 1 and at the point of plume rise for Planck numbers 10 and 50.
6.4 Case D: $h_r$=0.4, $w_r$=0.4
No major change occurs in the fluid flow behaviour with the increase of the window width ratio for the transparent medium case (fig 19). Nevertheless, the flow rate in both vortices increases (compare fig 17a and 19a). Similarly, the maximum non-dimensional temperature inside the cavity is shown in Table 4. In a few scenarios, the maximum non-dimensional temperature increases beyond one. The maximum non-dimensional temperature is found for case D at Planck number Pl = 0, and the minimum for case A at Planck number Pl = 10. The Nusselt number curve disappears for cases A and B (fig 23c); however, the shapes for cases C and D become Gaussian, with case D having a higher value than case C. Furthermore, these peaks disappear in cases C and D (see fig 23d). Table 5 shows the area-averaged Nusselt numbers on the bottom wall of the cavity. The area-averaged conduction, radiation, and total Nusselt numbers on the right and left walls, excluding the semitransparent window, are presented in Table 6. The maximum and minimum total Nusselt numbers on the semitransparent wall are found for case D at Pl = 0 and Pl = 50, respectively.
Conclusions
The effects of the semitransparent window's aspect ratio on the interaction of a collimated beam with natural convection have been studied numerically in a square cavity heated from the bottom, for Ra $= 10^5$ and Pr = 0.71. Four combinations of the height ratio ($h_r$) and window width ratio ($w_r$) and a range of Planck numbers have been considered. A collimated beam is irradiated on the semitransparent window at an azimuthal angle of $135^\circ$, and the interaction of this collimated irradiation with natural convection is studied. The following conclusions are drawn: 1. The left vortex is bigger than the right vortex for cases A and B for the transparent medium (Pl = 0). Furthermore, the fluid flow in the right vortex turns almost at a right angle at the junction of the two vortices for cases C and D.
2. The left vortex changes its dynamics with the Planck number of the participating medium. It breaks into two parts for cases B, C, and D at Planck number Pl = 10. Moreover, the left vortex remains confined to the lower left corner for case B and breaks into two parts for case D at Planck number 50.
3. The thermal plume flickers from right to left as the medium changes from non-participating to participating for all cases.
5. The temperature rise on the bottom wall at the location of collimated incidence increases from case A to case D for the transparent medium; this rise diminishes in the same order for Planck number Pl = 1, and finally no major temperature rise appears at the bottom wall for Pl = 50.
6. The maximum non-dimensional temperature inside the cavity increases beyond one; the maximum non-dimensional temperature is found for Pl = 0, case D, and the minimum for Pl = 10, case A.
7. The conduction Nusselt number curve at the collimated incidence location on the bottom wall is Gaussian for case A and square for case D for the transparent medium. The rise diminishes quickly, and no such rise in the conduction Nusselt number is seen for any case at Pl = 50.
8. The rise in the radiative Nusselt number at the collimated beam incidence on the bottom wall is almost the same for all cases for the transparent medium. However, this curve is Gaussian for cases A and C and square for cases B and D; the square distribution is wavy, with peaks at the ends and a trough in the middle.
9. The peak in the radiative Nusselt number diminishes quickly from case D to case A for Pl = 1, and no peak appears for Pl = 50.
10. The maximum total Nusselt number, 5.76, is found for Pl = 50 for case D, and the minimum total Nusselt number, 0.42, for Pl = 0 for case B on the bottom wall. | 2020-12-15T02:15:52.928Z | 2020-12-14T00:00:00.000 | {
"year": 2020,
"sha1": "99b5d1637fa22d6dfde0fb9228309506a3d53ae8",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "99b5d1637fa22d6dfde0fb9228309506a3d53ae8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
232059111 | pes2o/s2orc | v3-fos-license | Clinical Profile and Outcome of Childhood Autoimmune Hemolytic Anemia: A Single Center Study
Objective To analyze clinical and laboratory parameters, and treatment outcomes, of children with autoimmune hemolytic anemia (AIHA). Methods Retrospective analysis of 50 children aged 0–18 years. Monospecific direct antiglobulin test (DAT) and investigations for secondary causes were performed. Disease status was categorized based on Cerevance criteria. Results Median (range) age at diagnosis was 36 (1.5–204) months. AIHA was categorized as cold (IgM+, C3+/cold agglutinin+) (35%), warm (IgG+ with/without C3+) (28%), mixed (IgG+, IgM+, C3+) (15%), and paroxysmal cold hemoglobinuria (4%). Primary AIHA accounted for 64% of cases. Treatment modalities included steroid (66%), intravenous immunoglobulin (IVIg) (4%), steroid+IVIg (4%), and steroid+rituximab (4%). Treatment duration was longer for secondary AIHA than primary (11 vs 6.6 months, P<0.02) and in patients needing polytherapy than steroids only (13.3 vs 7.5 months, P<0.006). During a median (range) follow-up period of 73 (1–150) months, 29 (58%) remained in continuous complete remission and 16 (32%) remained in complete remission. Conclusion Infants with AIHA have a more severe presentation. Monospecific DAT and a thorough search for an underlying cause help optimize therapy in most patients with AIHA.
Autoimmune hemolytic anemia (AIHA) is caused by the presence of auto-antibodies directed against antigens on the surface of red blood cells, leading to their premature destruction [1,2]. AIHA is the main cause of acquired extracorpuscular hemolysis in children [1]. AIHA can be subdivided into primary (or idiopathic) and secondary. AIHA presenting with thrombocytopenia (Evans syndrome) tends to have a more chronic and relapsing clinical course [3][4][5]. There is a scarcity of data on Indian children with AIHA and their treatment outcomes. We present data on children with AIHA from a single center in India.
METHODS
This is a retrospective analysis of data from January, 2007 to April, 2019 from our unit's database. Fifty children less than 18 years of age diagnosed with AIHA were enrolled in the study. AIHA was diagnosed based on the clinical presentation, a positive direct antiglobulin test (DAT), and at least one of the following: reticulocytosis, haptoglobin <10 mg/dL, and total bilirubin >1 mg/dL [7]. Glucocorticoids were used as the first-line therapy in both warm and cold AIHA. In cold AIHA, additionally, treatment of the underlying disease was prioritized. The patient was kept warm, and in cases with very severe anemia, packed red cell transfusion was given with a heat generator inside the tubing. Intravenous methylprednisolone (2 mg/kg 8-hourly for 3 days, followed by oral prednisolone 2 mg/kg/day for 4 weeks, then tapered gradually) was used for patients who were sick, unable to take orally, or had very severe hemolysis. If there was complete remission after 4 weeks, prednisolone was tapered by 10% with each dose change over a period of 6 months. If there was no remission/steroid dependence at a prednisolone dose of 0.2 mg/kg/day, second-line treatment was used. Second-line therapy comprised either intravenous immunoglobulin (IVIg), rituximab, cyclosporine, mycophenolate mofetil (MMF), or azathioprine. In steroid-dependent cases, one of the immunosuppressants (cyclosporine, MMF, azathioprine) was used. In common variable immunodeficiency (CVID), IVIg was additionally used. Packed red blood cell transfusion as supportive therapy was given if the child had a hemoglobin value less than 3 g/dL, or 3-6 g/dL with cardiac failure or respiratory distress needing intensive care unit (ICU) care. All patients received folic acid and vitamin B12 during treatment to support hematopoiesis.
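For illustration, the taper arithmetic implied above (a sketch only; the interval between dose changes is not specified in the text): reducing 2 mg/kg/day by 10% per change reaches the 0.2 mg/kg/day second-line threshold after about 22 changes, which would correspond to roughly weekly changes over ~6 months.

```python
# Sketch of the prednisolone taper described above. Only the 10% step and
# the 2 -> 0.2 mg/kg/day endpoints come from the text; everything else is
# illustrative.
dose = 2.0                        # mg/kg/day, starting oral prednisolone
changes = 0
while dose > 0.2:                 # threshold at which second-line therapy applies
    dose *= 0.9                   # 10% reduction per dose change
    changes += 1
print(changes, round(dose, 3))    # -> 22 changes, ~0.197 mg/kg/day
```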
Patients were followed every month until complete remission and then 3-monthly until 1 year. Clinical and laboratory parameters at the last follow-up were classified based on the Cerevance criteria [6] into 4 categories: no remission (NR), partial remission (PR), complete remission (CR), and continuous complete remission (CCR).
Statistical analysis: The chi-square test and Student t test (two-tailed, unpaired) were used to compare variables between primary and secondary AIHA. A P value less than 0.05 was considered significant. SPSS version 20.0 was used for the analyses.
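The comparisons described can be reproduced in outline as below; the numbers are invented placeholders, not the study's data (scipy is assumed available; the study itself used SPSS):

```python
from scipy import stats

# Two-tailed, unpaired t test, e.g. treatment duration (months),
# primary vs. secondary AIHA. Values are illustrative only.
primary   = [5, 6, 7, 8, 6, 7]
secondary = [9, 12, 11, 13, 10]
t, p = stats.ttest_ind(primary, secondary)
print(f"t = {t:.2f}, p = {p:.4f}")

# Chi-square test on a 2x2 table, e.g. CCR status by AIHA type
# (counts are illustrative only).
table = [[20, 9],    # primary:   CCR, not CCR
         [ 9, 12]]   # secondary: CCR, not CCR
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```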
The direct antiglobulin test was 4+ positive in 26 children, 3+ in 11 children, 2+ in 5 children, and 1+ in 5 children. In three DAT-negative children, the diagnosis was based on clinicopathological suspicion after ruling out other causes of hemolytic anemia and on the basis of the response to treatment. Two of the three DAT-negative patients were positive on the Donath-Landsteiner test. Monospecific DAT was performed in 24 children after it became available in 2013; IgG ± C3 was present in 10 (41%) children, IgM and C3 in 3 children (13%), and both IgG and IgM with C3 in 4 children (17%). Cold agglutinin testing was performed in 21 children and was positive in 13. Based on the above results, cold, warm, and mixed AIHA and PCH were seen in 35%, 28%, 15%, and 4% of children, respectively. In the other 18% of children, seen prior to 2010, AIHA was unclassified.
Among infants, hemolysis was found to be much more severe than in those who developed AIHA after infancy (mean (SD) hemoglobin, 3.96 (1.18) vs. 5.13 (1.65) g/dL, P=0.01). In primary AIHA, the mechanism of hemolysis was more often IgM- and combined antibody-mediated than in children with secondary AIHA, wherein it was mainly IgG-mediated.
Steroids alone were used in 33 (66%) children; other regimens were IVIg in 2 children, steroid and IVIg in 2, steroid and rituximab in 2, steroid, rituximab, and cyclosporine in 1, and steroid and other drugs (three or more) in 7 children. Other immunosuppressive medications used were mycophenolate mofetil and azathioprine. Among the three patients with Evans syndrome, two responded to first-line glucocorticoid therapy and one responded to second-line therapy with IVIg followed by rituximab. One patient improved spontaneously and was not given any therapy.
DISCUSSION
We present our institutional data on pediatric AIHA and underscore the preponderance of AIHA in younger children, although the median age at diagnosis in our study was higher than in previous studies (10.8-16 months) [6,7]. Patients younger than one year required ICU care in view of severe anemia and hypoxia, similar to the report by Fan, et al. [7].
In our study, 94% of cases had a positive DAT, similar to another Indian study by Naithani, et al. [8]. A negative DAT in some patients may be due to a low titer of IgG antibodies or to IgA- or IgM-mediated hemolysis. Reticulocytopenia, seen in 26% of cases, was probably due to destruction of erythroid progenitors by autoantibodies [9]. A French national study [6] also observed a high incidence of reticulocytopenia (39%), indicating that although reticulocytosis is an important marker of hemolysis, its absence alone should not rule out AIHA.
We found that hemolysis was severe whenever combined or IgM-mediated hemolysis occurred. This observation is similar to a previously published study by Sokol, et al. [10], which showed that, compared to IgG-mediated hemolysis alone, IgG along with IgM or IgA leads to more severe hemolysis. Secondary AIHA was due to infection in 10%, whereas Fan, et al. [7] showed that infection accounted for 97.6%. In contrast, a French study [6] showed that secondary forms of AIHA were mainly due to immunological causes (53%), with infections contributing only a small portion (10%). This observation may be due to the increased burden of infection and early exposure to viruses like EBV in low- or middle-income countries.
Aladjidi, et al. [6] showed 90% remission rates, with 39% achieving CCR and 51% attaining CR. This may be due to prolonged usage of steroids (median duration 8 months). Two patients with PCH had early disease remission. This may be due to the self-limiting nature of the condition; however, as per unit policy, they were also treated with a short course of steroids. If there was no remission/steroid dependence at a prednisolone dose of 0.2 mg/kg/day, second-line treatment was used [11]. We needed immunosuppressants as second-line treatment in 26% of cases. Rituximab was used at the standard dose of 375 mg/m² per day [12].
Our study is limited by the fact that it is retrospective, comprises a small cohort of patients, and lacks protocol uniformity. Treating AIHA in children can be challenging and may need prolonged and complicated therapy, especially in secondary AIHA. We suggest that relapsed or refractory cases of AIHA should be cared for by pediatric hematologists in a tertiary care center. | 2021-02-27T06:16:21.954Z | 2021-01-02T00:00:00.000 | {
"year": 2021,
"sha1": "22f8d7b5896e2c13795d256b7230ea69b560e353",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13312-021-2282-7.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "92dc8cb18ee4aaa14e995522b5a362b6887ebfc6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14003940 | pes2o/s2orc | v3-fos-license | The Role of Fibroblast Growth Factors in Tooth Development and Incisor Renewal
The mineralized tissue of the tooth is composed of enamel, dentin, cementum, and alveolar bone; enamel is a calcified tissue with no living cells that originates from oral ectoderm, while the three other tissues derive from the cranial neural crest. The fibroblast growth factors (FGFs) are critical during the tooth development. Accumulating evidence has shown that the formation of dental tissues, that is, enamel, dentin, and supporting alveolar bone, as well as the development and homeostasis of the stem cells in the continuously growing mouse incisor is mediated by multiple FGF family members. This review discusses the role of FGF signaling in these mineralized tissues, trying to separate its different functions and highlighting the crosstalk between FGFs and other signaling pathways.
Introduction
Organogenesis is a complex physiological process. An intricate array of signaling molecules from the FGF, bone morphogenetic protein (BMP), Wnt, and Hedgehog (Hh) families is known to regulate the formation, differentiation, and maintenance of the tooth and alveolar bone during development and throughout adulthood [1][2][3][4].
FGF signaling occupies a significant position in inducing the proliferation and differentiation of multiple cell types during embryonic stages [5][6][7][8][9][10], as well as in regulating development in different animals [11][12][13][14]. In addition, FGFs have been shown to regulate mouse tooth development [2,[15][16][17]. Nevertheless, a comprehensive description of the mechanisms by which FGFs regulate the different mineralized tissues of the tooth during embryonic stages, as well as incisor renewal in adulthood, is still needed. Here, we summarize the roles of FGF signaling in mouse tooth development and the ways FGFs control the stem cells in incisor renewal, trying to separate its different functions and highlighting the crosstalk between FGFs and other signaling pathways.
Development of Tooth and Supporting Bone Structure
Most vertebrate groups have the ability to replace their teeth. Mammals have two sets of teeth: primary and adult teeth. In contrast, mice have one set with two different types: molars located in the proximal area and incisors located in the distal area, separated by the toothless diastema region. Mouse incisors grow continuously throughout life, in sharp contrast to the molars. It has been demonstrated that stem cells located at the proximal end of the incisor give rise to the differentiated tooth cell types, thus promoting the continuous growth of this tooth [18]. It is widely held that tooth morphogenesis is characterized by sequential interactions between the mesenchymal cells, derived from the cranial neural crest, and the stomadial epithelium [19,20]. This process consists of several phases, i.e., the bud, cap, and bell stages. In mice, the dental mesenchyme derives from neural crest cells originating in the midbrain and hindbrain regions around embryonic day 8.5 (E8.5) [21][22][23][24]. The determination of tooth-forming sites at E10.5 [25][26][27] and the thickening of the dental epithelium at E11.5 have been considered the first signs of tooth development [28]. During the bud stage (E12.5-E13.5), in both the incisor and the molar, the thickened dental epithelium buds into the underlying mesenchyme, thus forming the epithelial tooth bud around the condensed mesenchymal cells. At the subsequent cap stage (E14.5-E15.5), the epithelial component undergoes specific folding. A central event during the transition between the bud and cap stages is the formation of the enamel knot (EK), a structure composed of a group of nondividing cells. Several signaling molecules, such as Shh, FGF4, FGF9, BMP4, and BMP7, as well as Wnt10a/b, are restrictedly expressed in the enamel knot. Several studies have shown that the EK, as a signaling center, has an important role in controlling tooth cusp patterning [29,30]. During the following bell stage, the ameloblasts and odontoblasts originate from the dental epithelium and mesenchyme, respectively [2]. At this stage, the secondary EKs (sEK) succeed the primary EK (pEK) in the molar. In addition, the condensed mesenchymal cells around the developing epithelial tooth germ at the bud stage go on to differentiate into the supporting alveolar bone that forms the sockets for the teeth at the bell stage [31][32][33].
With reference to its origin, it has been reported that the alveolar bone is formed by intramembranous ossification [32,33]. Intramembranous ossification starts with mesenchymal cells derived from the corresponding embryonic lineages, which migrate towards the locations of the future bones. Here, they form condensations of high cellular density that outline the size and shape of the future bones. The mesenchymal cells subsequently differentiate into osteoblasts, thus forming bone directly within the condensations [3].
Stem Cells in Incisor Renewal and Osteogenesis
As previously mentioned, adult mouse incisors grow unceasingly throughout life, and this growth is counterbalanced by continuous abrasion. Essential to this phenomenon is the presence of active somatic stem cells residing at the proximal end of the incisor. Extensive studies have uncovered that the epithelial and mesenchymal stem cells of the incisor give rise to ameloblasts and odontoblasts, which are in turn responsible for producing new tissue that replaces worn enamel and dentin [1]. The epithelial stem cells reside in a niche called the cervical loop. According to the contemporary understanding of ameloblast development and maturation, these stem cells are located in the outer enamel epithelium (OEE) and the stellate reticulum (SR) of the labial cervical loop. These stem cells give rise to the transit-amplifying (TA) cells, which divide for several generations and then differentiate into preameloblasts.
In turn, these cells give rise to mature ameloblasts, whose development is characterized by three component stages: the presecretory, secretory, and maturation zones [34]. In contrast to their epithelial counterparts, the stem cells that derive from the mesenchyme and reside in the dental pulp are relatively poorly characterized [1].
In addition to incisor renewal, stem cells also show powerful osteogenic potential owing to their ability to differentiate into osteoblasts. For instance, the condensation of mesenchymal stem cells (MSCs) from the neural crest or mesoderm has been shown to initiate mammalian skeletal development [4]. Alveolar bone tissue regenerates during bone repair and synostosis after implantation, exodontia, and orthodontic treatment, indicating the importance of stem cells in bone repair and regeneration. Numerous techniques have been used to stimulate stem cell-driven osteogenesis [35], including direct implantation of undifferentiated cells or of cells differentiated in vitro, as well as stimulation of native stem cell differentiation through cytokine introduction. Adult bone marrow-derived mesenchymal stem cells are potentially useful for craniofacial mineralized tissue engineering [36]. It has been shown that, compared with conventional guided bone regeneration, implanted tissue repair cells induce regeneration of alveolar bone and decrease the need for secondary bone grafting [37]. Adipose-derived stem cells (ADSCs), like bone marrow stem cells (BMSCs), are derived from the mesenchyme, provide a supportive stroma for cell differentiation, and may be extensively used in osteogenesis; moreover, larger quantities of ADSCs may be harvested, with less pain, than BMSCs [38]. In the clinical setting, further investigation into optimizing stem cell harvesting and scaffold-based delivery is required, given the challenges in stem cell transplantation [36].
Fgfr1-3 undergo alternative mRNA splicing events and thereby generate alternative versions of the immunoglobulin-like domain III (IIIb or IIIc) [45]. This process diversifies the ligand-binding properties of the receptors in a tissue-dependent manner [46][47][48]. The IIIb splice variant is predominantly expressed in epithelial lineages and is responsible for transducing signals initiated by FGFs present in the mesenchyme. Conversely, the IIIc splice variant is restricted to mesenchymal lineages and transduces signaling from epithelial FGFs [49][50][51][52][53]. By contrast, Fgfr4 is not alternatively spliced [54].
Triggered by the dimerization of receptors, the transphosphorylation and activation of FGFRs initiate signaling via multiple downstream intracellular pathways [55]. By binding to various arrays of adaptor proteins such as SHP2 and growth factor receptor-bound protein 2 (GRB2) [56][57][58][59], the activated receptor's cytosolic domain in turn mediates Ras signals to activate the downstream signaling cascades, such as PI3K/AKT and MAPK pathways [60].
While FGF signaling, encompassing FGFs and FGFRs, occupies a critical position in regulating diverse cellular functions, it can itself be regulated by various upstream regulators. The best-investigated group are the Sprouty genes, which encode antagonists of FGF signaling that bind GRB2, thus preventing Ras activation [61]. Other signaling pathways, for example the Wnt pathway, have recently been identified as positive regulators of FGF signaling [62].
Expression Patterns of FGFs during Tooth Development
FGFs are expressed in the dental epithelium throughout tooth development (Figure 1). During the initiation stage of odontogenesis, the expression of Fgf8, Fgf9, Fgf10, Fgf17, and Fgfr2IIIb is detected in the prospective tooth region around E10.5 to E11.5 [63][64][65][66]. In the same region, following the formation of the dental lamina, Fgf8, Fgf9, Fgf15, and Fgf20 are expressed, while the expression of Fgf10 in the epithelium decreases [63]. As the epithelial bud continues to form in the dental lamina, Fgf9 and Fgf20 expression persists while Fgf3 and Fgf4 expression is initiated [65,66]. Fgf3, Fgf4, Fgf9, Fgf15, and Fgf20 are expressed in the pEK after its formation, while the expression of Fgfr1IIIb, Fgfr1IIIc, and Fgfr2IIIb is found in the dental epithelium. Fgf16 and Fgf17 are expressed in the cervical loop epithelium [65]. In the sEK at the bell stage, Fgf4 and Fgf20 expression is restricted to the forming cusps. The expression of Fgf9, Fgf16, Fgfr1IIIb, and Fgfr1IIIc is detected in the differentiating ameloblasts. At the same time, the expression of Fgf1, Fgf9, Fgf16, and Fgf17 can be found in the cervical loop epithelium of the incisor [65,66]. During tooth development, FGFs are also expressed in the mesenchyme (Figure 1). Fgfr1IIIc and Fgf10 expression is detected in the prospective tooth region during the early stage [63,66]. While the epithelium of the prospective tooth region thickens and then forms the dental lamina, the expression of Fgf10 and Fgf18 is found in the mesenchyme [63,65]. After the formation of the epithelial bud, the expression of Fgf10 and Fgf18, as well as that of Fgf3, is found; in addition, Fgfr2IIIc expression appears [65]. After pEK formation, Fgf3, Fgf10, and Fgf18 are found in the mesenchyme [65]. The expression of Fgf16 and Fgf17 is detected in the cervical loop mesenchyme, while Fgfr1IIIc and Fgfr2IIIc are expressed in the mesenchyme of the buccal side [63,65,66]. At the late bell stage, Fgf3 is expressed in the dental papilla, while Fgf10 is expressed in the differentiating odontoblasts. In addition, Fgf15 is restricted to the mesenchyme, while the expression of Fgfr1IIIb and Fgfr1IIIc is located in odontoblasts [63,65]. Moreover, Fgf3, Fgf7, Fgf10, Fgf16, Fgf18, and Fgf21 are also detected in the incisor [65].
The mesenchymal-derived alveolar bone is histologically detectable after E13.0, and its early formation occurs by E14.0. After E15.0, the development of the alveolar bone is well progressed. Comparative PCR array analysis showed a statistically significant increase (14-fold) in Fgf3 expression levels between E13.0 and E15.0 [67]. In addition, Fgf7 transcripts have been detected in the developing bone surrounding the tooth germ [63].
During tooth development, Sprouty (Spry) genes, encoding FGF antagonists, are also expressed in different tissues [68]. During the cap stage, Spry1 is expressed in diastema buds and is highly expressed in the tooth germ of the first molar (M1), whereas Spry2 is strongly expressed in the epithelium of both the M1 tooth germ and the diastema. Spry4 is expressed uniquely in the mesenchyme of the M1 tooth germ and the diastema. Nevertheless, Spry3 is not detected within the tooth germ.
The Role of FGFs during Tooth Development
6.1. The Role of FGFs during the Formation of Enamel. Tooth formation begins with the first signals from the future tooth epithelium at E9.5 [69]. In the area where a prospective tooth forms, the oral ectoderm thickens; the epithelial expression of Fgf8, Fgf9, and Fgf17 suggests that these FGFs may take part in the initiation of tooth development [65,66]. An early study showed that FGF8 can induce the expression of Pax9 in mice, which marks the prospective sites of odontogenesis and is essential beyond the bud stage of tooth development [25]. In the first branchial arch (BA1), conditional knockout of Fgf8 with ectodermal Nestin-Cre leads to a decrease in Pax9 expression in the expected molar region, and molar formation is arrested. The deletion of Fgf8 does not affect Pax9 expression within the presumptive incisor region, and thus the incisor forms normally. A recent study indicated that Fgf8-expressing cells labeled during the initiation stage of molars can furnish epithelial cells that migrate collectively towards the dental lamina site, which is important for prospective molar positioning [70]. In addition, conditional deletion of Fgf8 by E11.5 leads to an arrest in the formation of the dental lamina, affects further development of the dental primordium, and leads to a shorter invaginated structure [70]. At this early stage, Fgf10, a member of another FGF subfamily, is expressed in the epithelium [63]. Teeth develop in Fgf10-deficient mice, although a defect of the stem cell compartment in the incisor cervical loop has been observed [71], and deletion of Fgf9, which is also expressed at the early stage, does not affect tooth formation either [72,73]. Fgf17, expressed at the early stage, is another member of the FGF8 subfamily. Fgf17 is expressed in the prospective molar rather than the incisor epithelium, indicating that FGF17, like FGF8, is involved in positioning the presumptive molar site [65]. It is believed that FGF8 is essential in determining the tooth type [25,74], and FGF17 may also take part in this process. At E10, before dental ectoderm thickening, BMP2 and BMP4 counteract the FGF8-mediated induction of Pax9 transcription. Furthermore, it has been shown that the initiation of odontogenesis only occurs in regions where the inducer FGF is present, its antagonists (BMPs) are absent, and the mesenchyme can react to the inducer.
The epithelium thickens at the future tooth-forming site and subsequently forms the multilayered epithelium that contributes to dental lamina formation. Fgf10 expression is downregulated at this stage [63]. In the meantime, Fgf8 and Fgf9 are maintained in the epithelium. In the dental lamina, the initiation of Fgf15 expression is detected on the lingual side, whereas the expression of Fgf20 is detected at the tip, implying that these FGFs participate in epithelial thickening [65]. Interestingly, it appears that knockout of Fgf9, Fgf10, or Fgf20 does not affect epithelial thickening or lamina formation [73,75]. This may result from compensation among these FGFs, and combined conditional deletions at this stage are necessary to investigate the roles of these FGFs in lamina formation. In addition, Fgfr2IIIb is detected in the odontogenic epithelium at the early stage.
Subsequently, invagination of the dental lamina occurs in the underlying mesenchyme, while the cells in the mesenchyme condense around the dental epithelium, thus contributing to the formation of the tooth bud and cap. FGF expression patterns suggest that the binding of FGF3 and FGF10 to FGFR2IIIb activates FGF signaling from the epithelium at the invagination and tooth bud stages [64,65]. In Fgfr2-deficient mice, tooth formation is inhibited after thickening of the epithelium. Although Fgf3 and Fgf10 in the mesenchyme can still be observed in Fgfr2 mutants, the Fgf3 expression in the epithelium is decreased [76].
Given that FGF3 and FGF10 bind to FGFR2IIIb, these FGFs are likely involved in the transitional process to the tooth bud [77,78]. Surprisingly, a single deletion of Fgf3 or Fgf10 in mice does not affect early tooth development, which proceeds normally to the cap stage. The deletion of both Fgf3 and Fgf10 has revealed that molar development is inhibited prior to the bud stage, suggesting possible compensation between Fgf3 and Fgf10 during invagination of the dental epithelium [79,80]. At this stage, Fgf9 is highly expressed in the tip of the bud. The deletion of Fgf9 does not affect tooth bud invagination in mice; nevertheless, it affects progenitor cell differentiation in the incisor [72,73]. The defective invagination of the dental epithelium in Runx2-deficient mice is rescued by exogenous FGF9 protein [72,81], which suggests that FGF9 functions downstream of RUNX2 as an important factor during tooth invagination. These results imply potential compensation between FGF9 and other FGFs in the epithelium. In addition, FGF9 upregulates Msx1, a homeobox-containing transcription factor essential for invagination of the tooth bud [66,82].
During bud invagination, FGF signaling also regulates PITX2, an important transcription factor, whose expression in the oral epithelium is initially controlled by FGF8 and BMP4. FGF8 upregulates the expression of Pitx2 whereas BMP4 represses it [83]. Fgf8 expression in the oral epithelium decreases with the absence of Pitx2 [84,85]. In addition, the expression of Fgf20 is restricted to the tip of the tooth bud. Early tooth development is not arrested in mice with deletion of Fgf20 or Fgf9 [73]. Considering these redundant roles, it would be useful to analyze double or triple FGF deletion to gain a better understanding of gene function at this stage.
A recent study has shown that in an explant slice culture system, treatment with the pan-FGF receptor inhibitor SU5402 at E11.5 results in a significantly shallower tooth bud. Interestingly, SU5402 treatment at E12.5 only results in narrower tooth bud formation, indicating that FGF signaling takes part in epithelium stratification but not placode invagination [86,87]. This finding has been further complemented by gain-of-function experiments with FGF10-soaked beads placed on the single-layered tongue epithelium [86,87]. At the bell stage, FGF signaling is important in the differentiation of ameloblasts. The expressions of Fgf4 and Fgf9 are detected in the inner enamel epithelium (IEE) [66], while the expression of Fgf2 is found in the SR, and Fgfr1 and Fgfr2IIIb are expressed in the ameloblasts. With inactivation of Fgfr1, dysfunctional ameloblasts produce disorganized enamel [88]. In cultured embryonic molars, Fgf2 overexpression leads to a decrease in amelogenin expression, whereas amelogenin expression and enamel formation increase with inhibition of FGF2 [89]. In tooth cultures, exogenous FGF2 and FGF4 promote the expression of Tbx1, which is expressed in the epithelium and encodes a transcription factor. However, the expression of Tbx1 decreases in Fgfr2 −/− mice [90]. Besides, in vitro cultured incisors from Tbx1-deficient mice lack ameloblasts and do not form enamel; thus, Tbx1 is necessary for the differentiation of ameloblasts [91]. As downstream targets of FGFs, members of the Ras superfamily are also involved in amelogenesis. With conditional Rac1 deactivation, a decreased level of amelogenin is expressed in ameloblasts, which also attach loosely to the secreted enamel matrix and thus cause enamel hypomineralization [92].
Decreasing the Sprouty expression level can increase FGF signaling, which results in ectopic enamel and supernumerary teeth formation [68]. Ameloblast differentiation occurs and subsequently forms ectopic enamel on the lingual side of the incisor in Spry2 +/− ;Spry4 −/− mice [93,94]. Furthermore, HRas acts downstream of FGFs, and increased HRas signaling in mice causes enamel hypomineralization and disorganization, which can be rescued by inhibition of the MAPK pathway [95].
6.2. The Role of FGFs during the Formation of Dentin and Supporting Bone Structure. During the initiation stage, apoptosis occurs in mesenchymal cells in the BA1 proximal region in the absence of FGF8, which thus has an important role in the survival of mesenchymal cells [96]. Fgf10 is also expressed in the mesenchyme at this early stage [63]. As mentioned previously, the deletion of Fgf10 in mice does not affect the formation of teeth [71], nor does deletion of Fgf9, which is expressed in the epithelium at the same stage [72,73]. Given these data, neither FGF9 nor FGF10 takes part in tooth site positioning; alternatively, these FGFs may play redundant roles when the tooth initiates.
FGF18 is another member of the FGF8 subfamily. At the lamina stage, Fgf18 expression is observed in the mesenchyme on the buccal side, unlike other FGFs from the FGF8 subfamily that are expressed in the epithelium. The function of FGF18 in tooth development is still unknown, and further studies are necessary to determine its role in odontogenesis. Moreover, Fgfr1IIIc is found to be expressed in the mesenchyme at these early stages [66]. The application of FGFs such as FGF2, FGF4, and FGF9 onto mandibular explants at this stage induces the expression of CCN2, one of the CCN proteins, which are cell-associated and extracellular molecules relevant to several developmental processes, and can in turn promote dental mesenchymal proliferation [97].
At the bud stage, the expression of FGF4 initiates in the epithelium. However, in Lef1-null mice, the expression of Fgf4 is reduced in tooth germs at E13, which in turn causes an arrest in mesenchymal condensation [98]. With exogenous FGF4, Fgf3 expression is rapidly induced in the mesenchyme and the defect in Lef1 −/− tooth germs is fully rescued [99]. These data suggest that Fgf4 may function as a transcriptional target gene of WNT signaling. At this stage, FGF18 is expressed in the mesenchyme, except for the region underneath the epithelium of the tooth bud. Further studies are necessary to understand the role of this FGF in odontogenesis [65].
During the cap stage and early bell stage, the expressions of Fgf3, Fgf10, and Fgfr2 are detectable in the mesenchyme. Recent studies have demonstrated that Twist1, which is expressed in the mesenchyme, could bind to Fgf10 and Fgfr2 promoters and in turn regulate the Fgf10 and Fgfr2 expressions. In Twist2 Cre/+ ;Twist1 fl/fl mice, the expressions of Fgf3, Fgf10, and Fgfr2 were significantly reduced at E14.5 and E15.5, indicating that FGF signaling could be affected by Twist1 [100][101][102].
At the bell stage, cells of the dental papilla differentiate into odontoblasts, which secrete a dentin matrix. This matrix promotes the differentiation of the epithelium into ameloblasts, which produce an enamel matrix [103]. The differentiation of odontoblasts is induced by FGFs from the EK [104,105]. In addition, the expressions of Fgf3 and Fgf10 are found in the mesenchyme, and their expression is negatively regulated when dental papilla cells differentiate into odontoblasts [63,106].
As mentioned earlier, the supporting alveolar bone is derived from condensed mesenchymal cells around the developing epithelial tooth germ, and it subsequently forms sockets for the teeth at the bell stage. During the formation of a molar root, FGF2 expressed in differentiating osteoblasts of the adjacent developing alveolar bone can stimulate the proliferation of chondrocytes, osteoblasts, and periosteal cells and stimulate the production of type I collagen [107]. FGF7, detected in the developing bone surrounding the molar tooth germ and in the mesenchyme adjacent to the incisor cervical loop, is involved in the formation of alveolar bone [63]. Furthermore, the addition of FGF4 or FGF8 beads to mouse dental mesenchymal cells can promote their osteogenic differentiation and the expression of CBFA1, which belongs to the CBFA family and functions as an important regulator of differentiating osteoblasts in vertebrates [81]. Given the strong expression of CBFA1 in osteoblasts of the tooth alveolar bone at the late bell stage, FGF4 and FGF8 signaling from the epithelium may also have an important role during alveolar bone formation. It has also been reported that increased β-catenin signaling is related to the fate of dental mesenchymal cells, while FGF3 can sustain the odontogenic fate of incisor mesenchymal cells by downregulating intracellular β-catenin signaling [108]. Therefore, the lack of FGF3 could enable mesenchymal cells to differentiate into osteoblasts, which are responsible for the formation of the supporting bone structure. Since the role of FGFs in supporting alveolar bone remains largely unexplored, further investigations are still needed.
6.3. The Role of FGFs in Tooth Size, Shape, Number, and Arrangement. The signaling center pEK, which regulates the size and shape of the tooth, consists of nonproliferative cells [109]. Different signaling molecules and their antagonists, including FGFs, Shh, Sprouty genes, BMPs, several WNTs, and follistatin, are expressed in the pEK [110]. pEK cells cannot respond to FGFs since no FGF receptors are expressed in these cells [66]. The nonproliferative cells of the pEK and the extensively proliferating surrounding cells may explain the epithelial folding and the transition between the tooth bud and cap stages [15,109]. Afterwards, the pEK induces the sEK in multicuspid teeth. The spatial arrangement of the sEK has also been shown to involve a network of activators and inhibitors [111,112]. The location and shape of the cusps are determined by the proliferation and differentiation of the epithelial cells, which are regulated by the sEK; thus, the shape of the tooth crown is determined.
In molars, pEK size can affect the shape of the invaginated epithelium. Tooth size and cusp number decrease if the pEK is too small, since a small size can affect the folding of the dental epithelium as well as the formation of the cervical loop and sEK. Ectodysplasin (Eda) and Traf6 are two members of the TNF-α family involved in the regulation of tooth development. Mice lacking either of these proteins exhibit a small pEK, which then results in reduced tooth size and cusp number [113,114]. The arrangement of the sEK will change if signaling from the pEK is compromised by a change in its size or shape; thus, cusp defects occur. Furthermore, molar shape and cusp patterns are altered when the levels of gene expression in BMP, SHH, and WNT signaling are modulated [62,[115][116][117][118][119].
In the mesenchyme, the expression of Fgf3 is maintained by FGF4 and FGF9, which are highly expressed in the pEK and sEK [63,66]. FGF4 from the EK promotes proliferation and has a role in the development of tooth cusps [30,109]. Besides, FGF4 can also prevent cell apoptosis in the dental epithelium and mesenchyme [120,121]. Nevertheless, inactivation of either Fgf4 or Fgf9 does not affect tooth shape or number [72,73]. Moreover, epiprofin, a transcription factor of the Sp family, can promote dental epithelial FGF9, which elicits proliferation of dental mesenchymal cells through FGFR1c; this is essential for tooth morphogenesis with the correct shapes and proper sizes [122].
FGF20 is another member of the FGF9 subfamily, and its expression is found in the anterior bud of the lamina and the EK, along with the expressions of Fgf3, Fgf4, Fgf9, and Fgf15 [65,66,123]. During tooth development, FGF20 functions as a downstream target of EDA: in Eda mutant mice, the Fgf20 expression was reduced in molars, while it was increased in Eda-overexpressing (K14-Eda) mice [73]. In addition, Fgf20 knockout mice exhibited molar teeth with reduced size and a mild change in the anterior cusp, while the overall cusp pattern was normal in Fgf20 mutants. Therefore, FGF20 has been shown to have a crucial role in fine-tuning the anterior cusp pattern and functions as a regulator of tooth size. Double knockout of Fgf9 and Fgf20 has shown strong additive effects by strikingly shortening the EK in comparison with either single deletion mutant, which implies redundancy between these two FGF ligands [73].
In the mesenchyme, FGFs have been shown to be involved in tooth shaping. Like Fgf20-deficient mice, Fgf3 −/− ;Fgf10 +/− mice exhibit small molars [73,80], and the Eda −/− molar phenotype can be partially rescued by FGF10 in vitro [113]. Consequently, a decrease in FGF signaling in either the epithelium or the mesenchyme can lead to similar effects during tooth formation.
Tooth number and arrangement are also found to be tightly regulated by FGF signaling within the dentition. Supernumerary teeth, which are mainly positioned at the prospective site of the premolar, have been found in several mutant mice. K14-Eda has been discovered as the first transgenic mouse line with ectopic teeth [124]. The following studies have reported that in this genetic background, the formation frequency of an extra tooth increased with lack of Fgf20, while single deletion of Fgf20 could hardly promote the formation of an extra molar [73]. Supernumerary incisors and teeth anterior to the first molar have also been discovered in mice with deletion of Sprouty genes [68,125]. To sum up, these findings indicate that FGFs function as stimulators, while Sprouty genes function as endogenous antagonists of FGF signaling in the development of the tooth.
The Role of FGFs in Incisor Stem Cell Renewal
It is well known that the continuous growth of the rodent incisor is counterbalanced by wear, which is promoted by the lack of enamel on the lingual side of the tooth surface. The absence of lingual ameloblasts results in the lack of enamel on that side [126]. Asymmetric wear maintains the length of the incisor and leads to a sharp tip. The cervical loop includes various cell types: IEE cells, OEE cells, SR cells, TA cells, and stratum intermedium (SI) cells. In addition, an extra group of cells has been found between the SR and OEE [127]; however, their exact function still remains unknown. FGF signaling is known to have an important role in the regulation of incisor cervical loop maintenance (Figure 2). During incisor development, an overlapping expression of Fgf3 and Fgf10 is initially detected in the dental papilla and is maintained through E14 in the incisor bud [79]. The expression of Fgf10 remains stable in the mesenchyme adjacent to both labial and lingual IEE of the developing cervical loops from E16 to adulthood, while Fgfr1b and Fgfr2b are expressed in the forming cervical loops. Fgf3 is expressed only in the mesenchyme adjacent to the labial IEE [18,63,79,80]. These mesenchymally expressed FGFs are essential for the survival and proliferation of epithelial stem cells in the forming cervical loops; nevertheless, they are not essential for early ameloblast differentiation [79,80]. This is consistent with Fgf10 −/− embryos, whose cervical loop initially forms and then regresses due to increased apoptosis and decreased growth [79]. However, teeth in Fgf3-deficient mice are generally normal, which may result from the redundancy of Fgf10. Interestingly, Fgf3 −/− ;Fgf10 +/− mutants develop a severely hypoplastic LaCL and a thin or missing enamel layer, suggesting that FGF signaling levels have an important role in the maintenance of the epithelial stem cell pool in the incisor [80]. Consistent with this result, mice without FGFR2IIIb have no distinct incisors at birth [77]. In addition, Fgf9 is expressed in the epithelium of the incisor [65,66] and may function as a key factor in activating FGF expression in the mesenchyme [80,128]. Consistent with this view, Fgf3 and Fgf10 in the dental mesenchyme are reduced with the genetic ablation of core binding factor β, which binds to Runx transcription factors and is essential for Fgf9 expression in the epithelium [72]. FGF9 and FGF10 signaling both function through FGFR2b. Defects in ameloblasts and enamel, suppression of Shh expression, and decreased cellular proliferation all occur with the conditional knockout of Fgfr2b or a decrease in signaling via FGFR2b [129,130]. This coincides with the idea that FGF9 regulates the proliferation and differentiation of the progenitors in the cervical loop.
It has also been suggested that the spatial and quantitative balance of FGF signaling is important in maintaining the asymmetry of the incisor, where ameloblasts and enamel are located on the labial side. The intracellular antagonists encoded by Sprouty genes (Spry1, 2, and 4) are important regulators of FGF. As mentioned earlier, the expressions of Sprouty genes are detected in both labial and lingual epithelia and the adjacent mesenchyme [93]. In Spry4 −/− ;Spry2 +/− mutants, both labial and lingual epithelial and mesenchymal cells reveal a large increase in sensitivity to FGF signaling. As a result, ectopic mesenchymal expressions of Fgf3 and Fgf10 as well as lingual ameloblast formation were observed [93]. The Sprouty genes may partially function by indirect regulation of BCL11B and TBX1, transcription factors which are, respectively, down- and upregulated in the LiCL of Spry4 −/− ;Spry2 +/− mutants at E16.5 [91,106]. At E16.5, deletion of Bcl11b results in an inverted expression of Fgf3/10 in the labial and lingual mesenchymes, leading to an expanded LiCL and lingual ameloblast formation, with a smaller LaCL and abnormal development of labial ameloblasts [106]. Moreover, a hypomorphic Bcl11b mutation has been shown to induce the proliferation of adult TA cells and to maintain the quantity of epithelial stem cells. Yet, whether this mechanism includes FGF3 remains unknown [131]. On the other hand, TBX1 induces the proliferation of incisor epithelial cells by inhibiting the transcriptional activity of PITX2, which in turn supports the expression pattern of p21, a cell cycle inhibitor [132]. Supporting this view, incisors of Tbx1-deficient mutants cultured in kidney capsules exhibit hypoplasia and a complete lack of enamel [91].
The expression of E-cadherin is negatively regulated by FGFs in the stem cells, which causes these cells to migrate out of the niche, followed by proliferation and differentiation into TA cells, which can afterwards become ameloblasts. In Fgf3 −/− ;Fgf10 +/− mice, no downregulation of E-cadherin expression is detected in the TA region, while cell proliferation decreases dramatically [127]. However, an abnormal expression of Fgf3 has been found on the lingual side of the mesenchyme in Spry2 +/− ;Spry4 −/− mice, which in turn leads to loss of lingual E-cadherin and the formation of TA cells and ameloblasts [93,127].
The Shh expression is partly regulated by Fgf9 in the epithelium. Mice with deletion of Fgf9 exhibit a reduction in the size of the labial cervical loop, and the Shh expression area expands to a more posterior location [72]. Shh mRNA expression is significantly downregulated by ectopic FGF9 in incisor explants [72]. Given the essential role of TA region Shh expression in ameloblast differentiation [133], FGF9 may take part in protecting progenitor cells from the Shh signal so as to keep them undifferentiated in the cervical loop. This would be parallel to the forming limb, where FGF-dependent Etv4/5 is necessary to repress Shh expression in the mesenchyme of the anterior limb bud and limit Shh expression posteriorly [134,135]. Yet, it is not clear whether Etv family molecules have similar roles during the development of the incisor. BMP4 and activin, two proteins of the TGFβ family, modulate the activity of FGF and the regulation of incisor asymmetry during incisor development. The symmetrical expression of BMP4 occurs throughout the mesenchyme and indirectly suppresses the expression of Fgf3 in the lingual mesenchyme. The expression of activin is more robust in the labial mesenchyme, and a bead implantation study in incisor explants at E16 indicates that activin offsets the effect of BMP4 [80]. This can maintain the expression of Fgf3 on the labial side of the mesenchyme and in turn increase the proliferation of stem cells. In addition, the activity of residual activin on the lingual side is counteracted by follistatin, which is detected in the lingual epithelium and functions to preserve the effect of BMP4 in repressing the Fgf3 expression in the lingual mesenchyme. Consequently, embryos lacking the Fst gene, which encodes follistatin, exhibit ectopic expression of Fgf3 in the lingual mesenchyme; this results in an expanded LiCL and lingual ameloblasts as well as enamel formation [80]. Conversely, Fst misexpression in the epithelium leads to a reduction in the expression of Fgf3 and subsequently reduces the proliferation and size of the LaCL [80]. BMP4 can also increase the differentiation ability of ameloblasts on the more distal side of the labial epithelium, while in the lingual epithelium this process is repressed by locally expressed follistatin to maintain the asymmetry of the incisor [136]. Consistent with the view that BMP4 acts in two regions of the incisor during its development, misexpression of noggin (the inhibitor of BMP) leads to incisor hyperplasia because the proliferation of the progenitor cell population in the cervical loop is promoted. However, as the ameloblast differentiation normally promoted by BMP signaling is inhibited, the incisors do not form enamel in the mutant [137]. Furthermore, mesenchymal TGFβ receptor type I (Alk5/Tgfbr1) can modulate the proper initiation of the tooth and the epithelial development of the incisor [138,139]. Mesenchymal Fgf3 and Fgf10 expressions were downregulated when Alk5 was knocked out specifically in the mesenchyme, causing fewer label-retaining cells and decreased proliferation in the cervical loop. Exogenous FGF10 protein could rescue this phenotype in incisor explant culture [138]. The mesenchymal expression of Fgf is partially activated via the transcription factors MSX1 and PAX9, which can initiate Fgf3 and Fgf10 by E12.5 and in turn contribute to subsequent incisor development [128,139,140].
Moreover, with epithelial deletion of Isl1, FGF signaling is upregulated and is associated with both ectopic enamel generated from the lingual cervical loop and premature enamel formation on the labial side [141]. FGF signaling and downstream signal transduction pathways are also suppressed in Ring1a −/− ;Ring1b cko/cko incisors [142].
It has also been reported that FGF signaling is required for stem cell self-renewal and can prevent differentiation of dental epithelial stem cells (DESCs) in the cervical loop and in DESC spheres. Inhibition of the FGF signaling pathway can decrease proliferation and increase apoptosis of the cells in the DESC spheres. On the other hand, inhibiting FGFR or its downstream targets can decrease Lgr5-expressing cells in the cervical loop and induce cell differentiation in both the cervical loop and the DESC spheres [143]. In addition, FGF signaling may also be required for YAP-induced proliferation in TA cells [144].
The Importance of FGF Signaling in Human Tooth Development
Clinical evidence shows that FGFs are required for human tooth development. Their dysregulation seriously affects tooth development in humans, leading to enamel defects and tooth agenesis. Lacrimo-auriculo-dento-digital (LADD; Online Mendelian Inheritance in Man (OMIM) database no. 149730) syndrome, a congenital autosomal dominant disorder, results from heterozygous missense mutations in FGF10, FGFR2, and FGFR3. LADD is characterized by aplasia, hypoplasia/atresia of the salivary/lacrimal glands, cup-shaped ears, and hearing loss [145][146][147][148], as well as various dental phenotypes, including hypodontia, peg-shaped teeth, and hypoplastic enamel [149]. In addition, compound heterozygous or homozygous FGF3 mutations cause congenital deafness with labyrinthine aplasia, microtia, and microdontia (LAMM; OMIM no. 610706) syndrome, which is characterized by malformed external ears, malformed/missing inner ears, and peg-shaped teeth of reduced size [150][151][152].
Mutations in FGFRs can also cause several syndromes such as Apert and Crouzon syndromes. Among them, Apert syndrome (OMIM no. 101200) derives from gain-of-function mutations in FGFR2 and is characterized by midface hypoplasia, craniosynostosis, and syndactyly of the hands and feet [153]. Mutations in FGFR2 can also cause Crouzon syndrome (OMIM no. 123500), characterized by craniosynostosis leading to hypertelorism, mandibular prognathism, maxillary hypoplasia, and a short upper lip [154]. Patients with Apert and Crouzon syndromes usually exhibit hypodontia, mostly of the third molar, the maxillary second incisor, and the mandibular second premolar [155,156].
It has also been reported that the application of FGF2 can promote the regeneration of periodontal tissues [157,158]. In one study, a clinical trial was performed in 253 adult periodontitis patients. A modified Widman periodontal surgery was carried out, and during the surgery a 200 μL investigational formulation containing FGF2 at different concentrations was applied to 2- or 3-walled vertical bone defects. The application of FGF2 showed a significant effect over the placebo-control group (p < 0.01) for the bone fill percentage 36 weeks after administration. The results demonstrate that topical FGF2 application can treat bone defects caused by periodontitis and can be efficacious in human periodontal tissue regeneration [158]. In addition, FGF2 can also promote the neovascularization of severed human dental pulps [159]. Human molars without caries were used to prepare tooth slices, which were then treated with 0-50 ng/mL recombinant human FGF2 for a week in vitro. The results showed that the microvessel density in the dental pulps was enhanced by FGF2 treatment compared with untreated controls, indicating that topical application of FGF2 in advance of replantation might be efficacious in the treatment of avulsed teeth [159]. Another study isolated and characterized stem cells from inflamed pulp tissue of human functional deciduous teeth (iSHFD) in order to investigate the role of FGF2 in the regenerative potential of these cells [160]. Application of FGF2 to iSHFD during their expansion improved the colony-forming efficiency of the cells and increased their migration and proliferation potential, but decreased their differentiation potential in vitro. This provides a promising stem cell source for future clinical applications and a new way to use inflamed tissue that previously had to be discarded.
Given the results of these studies, the application of FGFs can be a potential treatment for human dental diseases, including defects in tooth development as well as the syndromes caused by mutations in FGFs and FGFRs. The delivery of FGFs to the primary nidus still needs to be improved, and further clinical trials are also required.
Conclusion
FGF signaling has been the focus of intense interest over the past years and has been investigated both in vitro and in vivo, using different cell and genetic mouse models. FGF expression has an important role in different stages of tooth development, including tooth initiation and mineralized tissue formation. Uniquely in rodents, FGFs are essential to maintaining the stem cell niche fueling the unceasingly growing incisor throughout their lifetime. The tooth offers an attractive model to further dissect the regulation and transduction of FGFs in developmental as well as stem cell biology. Despite advances in understanding the role of FGF signaling, many questions remain. Thus, it is necessary to further investigate the molecular mechanisms that regulate FGFs and to examine their other pathways. In addition, mirroring the irreplaceable function of FGFs in regeneration and tissue homeostasis in the mouse model, FGFs have also been found to be involved in these processes in humans. By controlling the activity of FGFs, it could be possible to develop novel methods to treat human diseases. Studies on the underlying mechanisms of FGF regulation in teeth may extend the current knowledge of other organ systems and may also offer insights into disease progression, presenting new therapeutic approaches.
Conflicts of Interest
The authors confirm that this article content has no conflicts of interest.
"year": 2018,
"sha1": "71e3484e7d062b2baf6540d38e51a1890716aae2",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/sci/2018/7549160.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "71e3484e7d062b2baf6540d38e51a1890716aae2",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Fingerprint Recognition with Edge Detection and Dimensionality Reduction Techniques
At present, fingerprint recognition is used widely, for example, as an authentication means for mobile phone usage and for monitoring working hours. However, the recognition performance of existing systems is low. We thus propose techniques to improve the recognition. We notice that edge detection techniques applied to fingerprint images can enhance the quality of the images and improve image recognition. We thus study four edge detection techniques: sobel, prewitt, robert, and canny. For faster classification, we also apply two dimensionality reduction techniques: principal component analysis and linear discriminant analysis. Then, we identify fingerprint images with a support vector machine using a linear kernel function. Experimental results showed that pre-processing fingerprint images using canny edge detection with principal component analysis increased the recognition rate from 64.3% to 88%. Using canny edge detection with linear discriminant analysis, the fingerprint image recognition rate was improved from 73.8% to 88%.
Introduction
The combination of biological, medical, and computer technologies can be used to identify a person from his/her unique features. An individual can be automatically identified by comparing such features to those stored in a database. A system that authenticates or identifies a person this way is called a biometric system. The physical characteristics of people do not change over time, but physical behaviors may change. Thus, identifying a person by physical characteristics is more reliable than by behavioral ones. At present, fingerprint identification is used to access smart devices such as cellular phones. The fingerprint biometric system is, however, not secure enough because the accuracy is less than a hundred percent. In the past, many researchers proposed edge detection techniques to enhance the recognition. The recognition performance has been improved by the application of the wavelet transform with prewitt edge detection (1). Edge detection with a gray level watershed approach enables faster data classification and better performance (2). Edge detection is one of the important image processing techniques for fingerprint recognition. It influences image extraction and affects the matching of images. The algorithm should be chosen according to the characteristics of the image for perfect detection (3).
We thus study a variety of edge detection techniques to enhance the recognition rate of fingerprint images. However, fingerprint images obtained from different individuals can look very similar, which makes correct classification very difficult. Therefore, we separate elements of the images that maintain the major characteristics of the fingerprints. We separate the image elements by applying four edge detection methods: sobel, prewitt, robert, and canny. Edge detection is expected to make the fingerprint images look clearer. Then we apply two dimensionality reduction techniques: principal component analysis and linear discriminant analysis. Dimensionality reduction is applied to make classification faster and save memory. The dimensionality-reduced image data are then fed into the algorithm used to perform the classification task. We use the support vector machine with a linear kernel and then compare the performance of each model.
2. Theories
2.1 Edge detection
Edge detection (4) finds the lines around objects in an image. When we know the lines around an object, we can calculate its area (size) or recognize the type of the object. However, finding a perfect edge detection result is not an easy task. In particular, finding the edges of an image with low quality or uneven lighting is even harder. Edges can be detected by the difference in light intensity from one point to another. If there is a large difference in light intensity, the edges can be outlined clearly. If the difference in light intensity is low, the edges are not clear. Edge detection techniques can be divided into two main groups: gradient methods and Laplacian methods.
In this study, we use four gradient methods: sobel edge detection, prewitt edge detection, robert edge detection, and canny edge detection.
Sobel edge detection
Sobel (4) is used to find changes of color between object and background in an image. The gradient value in each band is calculated by convolving the image with a filter of size 3×3. The result of applying Sobel edge detection is shown in Fig. 1.
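As an illustration, the gradient filtering described above can be sketched in a few lines of Python; this is a minimal example assuming `img` is a 2D grayscale array, not code from the paper. The same structure yields the prewitt detector (unit-weight masks) or the robert detector (2×2 kernels) by swapping the masks.

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Return the Sobel gradient magnitude of a 2D grayscale image."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal 3x3 Sobel mask
    ky = kx.T                                 # vertical mask is the transpose
    gx = convolve(img.astype(float), kx)      # gradient along x
    gy = convolve(img.astype(float), ky)      # gradient along y
    return np.hypot(gx, gy)                   # per-pixel gradient magnitude
```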
Prewitt edge detection
Prewitt (4) performs edge detection by calculating the gradient vector at each point of the original image. Higher gray level intensity indicates the border between object and background. The gradient is calculated with a filter of size 3×3. An example of Prewitt edge detection is shown in Fig. 2.
Robert edge detection
The Robert (4) edge detection technique is similar to Sobel edge detection but uses a smaller filter of size 2×2. An example of applying the Robert edge detection technique is shown in Fig. 3.
Canny edge detection
Canny (4) edge detection first smooths the image with a Gaussian filter to remove noise, which enables better edge finding. It then calculates the magnitude and orientation of the gradient. The next step applies non-maxima suppression to the gradient magnitude to make the edges thinner. Finally, a double thresholding algorithm identifies edge pixels and connects consecutive edges. A demonstration of Canny edge detection on the sample image is shown in Fig. 4.
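The staged Canny pipeline above can be reproduced with OpenCV as sketched below; the kernel size, sigma, and hysteresis thresholds are illustrative assumptions rather than values taken from the paper.

```python
import cv2
import numpy as np

img = (np.random.rand(80, 80) * 255).astype(np.uint8)  # placeholder 8-bit grayscale image
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)  # Gaussian smoothing to remove noise
edges = cv2.Canny(blurred, 50, 150)           # gradient + NMS + double-threshold hysteresis
```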
In this study, we use two dimensionality reduction techniques: principal component analysis and linear discriminant analysis.

Principal Component Analysis (PCA)
PCA (5,6) is a technique for multivariate data analysis without segmenting variables. It is commonly used to reduce the matrix of variables to a smaller size appropriate for further analysis of the data. PCA creates new variables, each made up of a combination of the original variables that captures their variance. To find relations among images, the covariance matrix of the image data is normally used to build eigenfaces from the eigenvectors.
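A minimal, hedged sketch of this step with scikit-learn is shown below; the random matrix stands in for the 126 flattened 80×80 training images, and the component count of 50 is an illustrative choice, not the paper's setting.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_train = rng.random((126, 6400))       # placeholder for 126 flattened 80x80 images

pca = PCA(n_components=50)              # keep 50 principal components (illustrative)
X_reduced = pca.fit_transform(X_train)  # project onto the learned eigenvector basis
print(X_reduced.shape)                  # -> (126, 50)
```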
Linear Discriminant Analysis (LDA)
LDA (7) is a technique used for supervised learning. It is commonly used for dimensionality reduction over data variables and also for classifying data. It projects the data onto a subspace in such a way that data coming from different classes are well separated, while data from the same class are brought closer together to allow easy classification. It considers the within-class and between-class distributions. LDA can identify pictures that are affected by factors such as lighting and shooting conditions.
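The following sketch shows the supervised counterpart with scikit-learn; the placeholder labels assume 21 subjects with 6 images each (matching the 126 training images), and since LDA allows at most n_classes − 1 components, the maximum here is 20.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X_train = rng.random((126, 6400))        # placeholder flattened fingerprint images
y_train = np.repeat(np.arange(21), 6)    # assumed: 21 subjects x 6 training images

lda = LinearDiscriminantAnalysis(n_components=20)  # at most n_classes - 1 = 20
X_reduced = lda.fit_transform(X_train, y_train)    # class-separating projection
print(X_reduced.shape)                             # -> (126, 20)
```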
Support Vector Machine (SVM)
SVM (8,9) is a process of selecting the optimal model for inducing patterns. The support vector machine is popular in pattern recognition and data classification. For classification, support vector machines use an optimal hyperplane to separate the data. Hyperplanes can be created in various ways, but there will be one optimal hyperplane that maintains the greatest distance between the two groups. The optimal hyperplane can be found by locating the support vectors that are used as representatives of the entire data set. These support vectors are used to divide the data with a plane that separates the two data groups as much as possible. The plane with the maximum margin is then taken as the one suitable for classification.
We assume a set of n data points, (x_1, y_1), …, (x_n, y_n), where x_i ∈ R^m and y_i ∈ {−1, +1}; m is the dimension, x is the data input, and y is the class label −1 or +1. The plane that splits the data can be calculated using Equations 1 and 2.
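Since Equations 1 and 2 are not reproduced here, a small runnable illustration of the maximum-margin hyperplane with a linear kernel may help; the toy data points are invented for demonstration.

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 3.0], [4.0, 4.0]])  # toy 2D points
y = np.array([-1, -1, 1, 1])                                    # class labels

clf = SVC(kernel="linear").fit(X, y)
# w.x + b = 0 is the separating hyperplane; the support vectors bound the margin
print(clf.coef_, clf.intercept_)   # hyperplane parameters w and b
print(clf.support_vectors_)        # points that define the maximum margin
```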
Fingerprint Recognition Accuracy of Non-Edge versus Edge Detection
We split the data into two datasets: training data containing 126 images and test data consisting of 42 images. We then perform classification using the support vector machine algorithm with a linear kernel. The classification accuracy is 52.4%. This is the baseline for comparison because it is the classification performance on the original image data. After applying edge detection (sobel, canny, and prewitt), the classification accuracy increases.
Recognition Accuracy of Full-Feature Fingerprints versus Dimensionality Reduction
When we use dimensionality reduction techniques, the classification accuracy increases significantly. LDA increases the accuracy from 52.4% to 73.80%, and PCA increases the accuracy from 52.4% to 64.30%.
Accuracy Improvement with Edge Detection and Dimensionality Reduction
We have shown the results before and after edge detection with two dimensionality reduction techniques, principal component analysis (PCA) and linear discriminant analysis (LDA), as graphs in Fig. 8. Fig. 8 shows the accuracy as a function of the number of components. We are interested in the most accurate model, comparing edge detection methods with the non-edge detection method. Without edge detection, LDA + SVM gives the most accurate model at 73.8%. After using sobel edge detection + LDA + SVM, the model's accuracy increases to 88%. Without edge detection, PCA + SVM has the highest accuracy at 64.3%. After using sobel edge detection + PCA + SVM, the accuracy increases to 88%. The accuracy is increased after applying edge detection because the fingerprint patterns can be noticed more clearly. It can be seen from the examples in Fig. 7 that the sobel edge detection technique converts prominent ridges into gray scale, which facilitates the classification algorithm.
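For concreteness, the best-performing pipeline reported above (sobel edge detection, then LDA, then a linear SVM) can be assembled as follows; the random image array is a stand-in for the 168 fingerprint images of 21 people, so the printed score is not meaningful, and the split sizes simply mirror the 126/42 partition used in the experiments.

```python
import numpy as np
from scipy.ndimage import sobel
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
images = rng.random((168, 80, 80))      # placeholder for the 168 fingerprint images
labels = np.repeat(np.arange(21), 8)    # 21 subjects x 8 images each

def sobel_features(stack):
    """Per-image Sobel gradient magnitude, flattened to feature vectors."""
    return np.stack([np.hypot(sobel(im, axis=0), sobel(im, axis=1)).ravel()
                     for im in stack])

X_train, X_test, y_train, y_test = train_test_split(
    sobel_features(images), labels, test_size=42, stratify=labels, random_state=0)

model = make_pipeline(LinearDiscriminantAnalysis(n_components=20),
                      SVC(kernel="linear"))
model.fit(X_train, y_train)
print(model.score(X_test, y_test))      # accuracy on the held-out 42 images
```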
Conclusions
Fingerprint recognition technology is widely used in many real-life applications such as access to mobile devices, border control, entering buildings, and so on. Adopting fingerprints for identification and authentication is, however, still inaccurate. In this paper, we propose the improvement of fingerprint image recognition through the use of edge detection and dimensionality reduction techniques. We use four edge detection methods: sobel, prewitt, robert, and canny. We apply dimensionality reduction to enable faster identification using two techniques: principal component analysis and linear discriminant analysis. We then classify fingerprint images with a support vector machine using a linear kernel. In a series of experiments, we use fingerprint images of size 80x80 pixels (6400 components). The dataset contains 168 images obtained from 21 people. The experimental results showed that LDA + SVM gave the classification model with the highest accuracy at 73.8%; after using sobel edge detection + LDA + SVM, the accuracy increased to 88%. For PCA + SVM, the model had the highest accuracy of 64.3%; after using sobel edge detection + PCA + SVM, the model accuracy increased to 88%. In summary, when sobel edge detection is used in conjunction with a dimensionality reduction technique, either principal component analysis or linear discriminant analysis, the recognition of fingerprint images can be significantly enhanced.
Fig. 5. Optimal hyperplane for classification. (a) Accuracy versus the number of components with edge detection and LDA. (b) Accuracy versus the number of components with edge detection and PCA.
Table 2. Before applying edge detection.
Table 3. After applying LDA with edge detection.
Table 4. After applying PCA with edge detection.
"year": 2015,
"sha1": "5ff94db197469c08ebb63d14e25982a86a83fd90",
"oa_license": "CCBY",
"oa_url": "https://www2.ia-engineers.org/conference/index.php/iciae/iciae2015/paper/download/553/470",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "5ff94db197469c08ebb63d14e25982a86a83fd90",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Comprehensive Characterization of Shredded Lithium-Ion Battery Recycling Material
Abstract Herein we report on an analytical study of dry-shredded lithium-ion battery (LIB) materials with unknown composition. Samples from an industrial recycling process were analyzed concerning the elemental composition and (organic) compound speciation. A deep understanding of the base material for LIB recycling was obtained by identification and analysis of transition metal stoichiometry, current collector metals, base electrolyte and electrolyte additive residues, aging marker molecules, and polymer binder fingerprints. For reverse engineering purposes, the main electrode and electrolyte chemistries were traced back to pristine materials. Furthermore, possible lifetime application and the accompanying aging were evaluated based on target analysis of characteristic molecules described in the literature. With this, the reported analytics provided valuable information for value estimation of the undefined spent batteries and enabled tailored recycling process deliberations. The comprehensive feedstock characterization shown in this work paves the way for targeted process control in LIB recycling processes.
Introduction
Since its commercialization three decades ago, the lithium-ion battery (LIB) has been a key technology of the digitalized 21st century. Hand-held consumer electronics are battery-powered examples of this. Improvements in energy and power density, cycle and calendar life, energy efficiency, and safety also shifted the application of LIBs towards electromobility to achieve greener mobility. [1][2][3] After initial restraint, (plug-in) hybrid electric vehicles ((P)HEVs) and fully battery-powered electric vehicles (BEVs) are gaining popularity. Reasons are manifold, ranging from an increase in socio-ecological awareness over improved LIB performance to lowered acquisition costs, partially enabled by government subsidies. [2] Starting mainly from ecological motivations enforced by law, original equipment manufacturers (OEMs) also realized the economic potential of electromobility and are expanding their EV portfolios. [4] Beyond that, OEMs also invest in battery cell production to meet their rising demands. [2,5,6] Accompanying the massive growth of LIB cell production and application, the amounts of end-of-life LIBs will also increase with a time delay. Therefore, recycling of LIBs will play an important role, not only to further reduce the ecological footprint of electromobility, but also for a more secure raw material supply of geographically unevenly distributed elements like cobalt and nickel. [7][8][9][10][11] However, to lower the overall ecological footprint of LIBs, recycling processes also need to be improved or even regulated regarding sustainability, for example, by moving towards circular processes. [12,13] One current state-of-the-art LIB recycling procedure can start with deactivation and shredding of the spent battery modules. After discharge and dismantling, modules are shredded under inert conditions to avoid thermal runaways, and volatile electrolyte residues are removed. [14][15][16][17][18] Afterward, hydrometallurgical procedures and classification are applied to regain valuable active and inactive materials. If the recycling process starts with pyrometallurgical treatment, discharge and deactivation are not mandatory, but are in some cases performed. [16,[19][20][21] The detailed organization and implementation of future LIB recycling on industrial scales is not clear yet. In addition, responsibilities for complying with the required recycling rates remain unclear. Presumably, decentralized deactivation and larger recycling plants will combine safe treatment and transport of spent LIBs with economies of scale for recycling plants. [10,22] LIB material characterization is inevitable in the context of material recycling. Evaluation and adjustment of recycling procedures require reliable and comprehensive information on the feedstock, which means reverse engineering in most cases, since no information about, for example, cell chemistry is available. [23,24] For example, elemental analysis of the starting material is needed to calculate recycling rates of targeted elements and possible impurities over the process. Furthermore, with possible LIB lifetimes of up to > 15 years, the return flow of spent LIBs will not represent state-of-the-art (SOTA) materials, but various cell chemistries from more than a decade. Therefore, a reasonable value estimation of the present scrap requires elemental analysis of the contained metal stoichiometries. [1] For elemental analysis, inductively coupled plasma (ICP)-based methods enable the best sensitivity.
Moreover, analytical methods like atomic absorption spectroscopy or X-ray-based methods like total reflection X-ray fluorescence or energy dispersive X-ray spectroscopy (EDX) can also give sufficient information and were successfully applied for LIB material characterization. [25][26][27][28] Beyond elemental analysis, speciation enables deeper insights, especially into the present organic compounds. Analyses of the organic electrolyte, electrolyte additives, binder, and their degradation species enable conclusions regarding cell aging conditions and material aging history. Moreover, species that possibly interfere with the recycling process, like binder polymer residues, or potential dangers from hazardous species can be identified. [23] Afterward, repeated analysis within the recycling procedure enables reliable process control by investigating the removal of these interferences. Especially chromatography-based investigations are well-established for the speciation of organic compounds in LIBs. After separation, mass spectrometric (MS) detection combines sensitivity and structural information for best compound identification. [29,30] Analytical investigations on LIBs were mainly applied to lab-built and lab-aged cells or lab-aged commercial cells. These samples secured information on materials, handling, and aging to identify causal relationships between treatment and observed decomposition reactions. However, for more complex recycling material samples, studies on the transferability and adaptability of known methods are needed, as well as developments of new approaches for sample-specific characterization. [31] In this work, we report the application of analytical methods, previously established for laboratory aging and post mortem cell studies, to investigate unknown shredded LIB material from an industrial recycling process. Solely the positive electrode active material of the shredded cells was declared as LiNi0.6Co0.2Mn0.2O2 (NCM622) by the material supplier. Elemental analysis and speciation by chromatography-based techniques were applied for detailed material characterization. Further, extraction- and pyrolysis-based methods were conducted to maximize the accessible range of species. Based on the obtained information, the material history was accessed and starting points for analytical process control during subsequent recycling were highlighted.
Extractions of volatile and soluble species for chromatographic analysis: Solely dry-shredded material was obtained. Therefore, extraction methods were applied to access electrolyte residues as well as decomposition species.
For analysis of volatile species, solid phase microextraction (SPME) was performed with acrylate fibers in headspace mode with short (10 s) and long (600 s) sampling durations to preconcentrate main constituents and further detectable compounds, respectively. The solid sample was held at room temperature to prevent further aging by thermal decomposition during the sampling procedure. A SPME setup from CTC Analytics (Switzerland) controlled by the cycle composer software of the AOC 5000 autosampler (Shimadzu, Japan) was used. Further parameters were applied according to Horsthemke et al. [32] For analysis of nonvolatile species, liquid extraction was performed. ACN was chosen as solvent, since it is typically used during sample preparation for liquid chromatography (LC) analysis and dissolves most literature-known decomposition species. The pure shredded material (6.5 g) was transferred into a 50 mL vial and 5 mL ACN were added. The mixture was intensively shaken for 5 min and filtered with a syringe filter (22 μm) to obtain a clear liquid solution that was analyzed by LC-MS (undiluted) and ion chromatography-conductivity detection (IC-CD) (diluted 1/100 v/v) (Figure S1).
Further, the shredded material was extracted analogously with nonpolar DCM to dissolve organic carbonate residues and prevent conducting salt dissolution for subsequent gas chromatography (GC)-MS analysis with liquid injection.
Analytical investigations
Pyr-GC-MS: For investigations with pyrolysis (Pyr)-GC-MS, a PY-3030D pyrolyzer (Frontier Laboratories, Japan) was used. Measurements were conducted according to Stenzel et al. [36] Adjusted pyrolysis temperatures of 200, 300, and 515°C represented a compromise for the simultaneous measurement of positive and negative electrode materials in the shredded material mixture, based on evolved gas analysis results in a previous study. [36] GC-MS: GC-MS experiments were performed on a Shimadzu GCMS-QP2010 Ultra with an AOC-5000 Plus autosampler and a nonpolar Supelco SLB-5 ms (30 m × 0.25 mm, 0.25 μm; Sigma Aldrich) column. Further parameters were applied according to Horsthemke et al. [32] GC investigations with high resolution (HR)MS detection were performed on a Q Exactive GC Orbitrap GC-MS/MS system with a TRACE 1310 GC and a TriPlus RSH autosampler (all Thermo Fisher Scientific, USA). Experimental parameters were applied according to Peschel et al. [37] and target analysis was performed based on extracted ion chromatograms (EICs) of measured accurate masses with a mass window of 5 ppm.
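To make the 5 ppm criterion concrete, the sketch below converts a relative window into an absolute m/z range; the example value of 122.0032 corresponds to the calculated exact mass of the PS [M]+ ion (nominal m/z 122, as discussed later), but the helper function itself is an illustration, not part of the original workflow.

```python
def eic_window(mz: float, ppm: float = 5.0) -> tuple[float, float]:
    """Return the lower/upper m/z bounds of a +/- ppm extraction window."""
    delta = mz * ppm / 1e6
    return mz - delta, mz + delta

print(eic_window(122.0032))  # -> (~122.00259, ~122.00381)
```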
LC-MS: For LC investigations with ion trap-time of flight (IT-TOF)-MS detection, a Nexera X2 UHPLC system (Shimadzu) hyphenated to an LCMS-IT-TOF (Shimadzu) was used. Reversed-phase (RP) chromatography was conducted on a ZORBAX SB-C18 column (100 × 2.1 mm, 1.8 μm; Agilent, USA) at 40°C and a flow rate of 0.5 mL min−1. The analyte target list and further experimental parameters were applied according to Henschel et al. [38] IC-CD-MS: IC investigations were performed on an 850 Professional IC (Metrohm, Switzerland) with conductivity detection (CD). For MS detection, the system was further hyphenated to the IT-TOF-MS. A Metrosep A Supp 7 column (250 × 4.0 mm, 5 μm; Metrohm) was used for isocratic anion separation at 65°C, and a flow rate of 0.7 mL min−1 was applied. The applied method is based on Kraft et al. [39] and further parameters were applied according to Henschel et al. [40] ICP-OES: ICP-optical emission spectroscopy (OES) measurements were performed using an ARCOS (Spectro Analytical Instruments, Germany) with an axially positioned plasma torch. For analysis, multiple emission lines were observed. All other parameters and sample preparations were applied according to Vortmann et al. and Evertz et al. [41,42] SEM and EDX: For scanning electron microscopy (SEM) and EDX analysis, material from a sieved fraction (0.5-1 mm) was optically presorted. Coppery and silvery colored flakes were separately attached to the sample trays. SEM measurements were performed with an Auriga electron microscope (Carl Zeiss Microscopy, Germany) at an accelerating voltage of 3 kV, and EDX measurements were carried out at an accelerating voltage of 20 kV with an energy dispersive X-ray detector (Oxford Instruments, United Kingdom).
Results and Discussion
End-of-life LIBs obtained from an industrial shredding process were investigated to gain insights into the material composition. For a first impression of the inhomogeneous shredded material, optical presorting was performed. The material showed larger coppery and silvery colored flakes with attached black mass, different plastic pieces, remaining hard housing, and black mass (Figure S2). The optically presorted material was chosen for some experiments, as well as two sieved material fractions (0.5-1.0 mm and 0.100-0.315 mm).
More detailed optical impressions were obtained via SEM imaging. The SEM image of an optically presorted coppery colored flake (0.5-1.0 mm fraction, assumed to be of negative electrode origin) is shown in Figure 1. The SEM image illustrates partially mixed positive (round-shaped NCM) and negative electrode (graphite flakes) material also on the particle level. In contrast to disassembled cells, major material inhomogeneities have to be considered for analytical sample complexity, but also for material recycling.
Analysis of the elemental composition
To analyze the elemental composition, representative samples of the shredded material were measured via ICP-OES after microwave-assisted digestion (threefold determination). The main focus was to reconstruct the active material stoichiometry of the positive electrode. Conclusions regarding current collector metals, electrolyte constituents, and binder materials were also drawn.
The determined transition metal stoichiometry was consistent with the limited information obtained from the material supplier. It has to be stated that mixtures of NCM or additional LiNixCoyAlO2 (NCA) stoichiometries could make this analysis more complicated. If the ICP-OES/MS results hint at a mixture of NCM and/or NCA materials, for example, single particle investigations, recently introduced by Kröger et al., [43] could give further insights regarding mixed stoichiometries and varying active materials.
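As a minimal sketch of how such a stoichiometry reconstruction can be carried out, the snippet below converts transition metal weight fractions into a normalized Ni:Co:Mn molar ratio; the weight-percent inputs are illustrative placeholders, not the measured values from this study.

```python
ATOMIC_MASS = {"Ni": 58.693, "Co": 58.933, "Mn": 54.938}  # g/mol

def ncm_stoichiometry(wt_percent: dict[str, float]) -> dict[str, float]:
    """Normalize ICP-OES weight fractions to a molar Ni:Co:Mn ratio."""
    moles = {el: w / ATOMIC_MASS[el] for el, w in wt_percent.items()}
    total = sum(moles.values())
    return {el: round(n / total, 2) for el, n in moles.items()}

# Illustrative input chosen to land near a 6:2:2 ratio, i.e., NCM622
print(ncm_stoichiometry({"Ni": 21.0, "Co": 7.0, "Mn": 6.6}))
# -> {'Ni': 0.6, 'Co': 0.2, 'Mn': 0.2}
```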
The quantified Li proportion can originate from the NCM material as well as from the conducting salt. The common commercially used conducting salt is LiPF6, whose occurrence was analyzed in more detail by IC-CD analysis. Further, copper and aluminum were identified, commonly applied as current collector materials for the negative and positive electrode, respectively. This conclusion was further proven by a combination of SEM and EDX measurements of optically presorted flakes. The identified proportion of sodium indicated the use of carboxymethyl cellulose (CMC) as a binder material, which is commonly applied as its sodium salt. [44,45] Binder polymers were further analyzed by Pyr-GC-MS. Sulfur could originate from additionally applied sulfur-containing conducting salt anions like bis((trifluoromethyl)sulfonyl)imide (TFSI−) or bis(fluorosulfonyl)imide (FSI−), which was further investigated by IC-CD-IT-TOF-MS, and from the application of sulfur-containing electrolyte additives, as further investigated by Pyr-GC-MS and GC-HRMS.
In addition to representative samples of the shredded material, sieved fractions (0.5-1 mm and 0.100-0.315 mm) were also analyzed by ICP-OES to determine changes in elemental composition caused by the choice of sample constitution. Significant differences were observed for the current collector metals. The 0.5-1 mm fraction contained higher Al (10.19 (±0.60) wt %) and Cu (19.91 (±0.95) wt %) contents with lower deviations for multiple digestions. The fraction (see Figure S2) mainly consisted of small, coated electrode flakes, as reflected by the higher current collector metal contents in the ICP-OES measurements. In contrast, the fine (0.100-0.315 mm) fraction showed lower contents compared to the representative unsieved sample, with 2.39 (±0.05) wt % and 1.98 (±0.95) wt % for Al and Cu, respectively. The significant differences illustrate the inhomogeneous sample constitution after shredding and the relevance of representative sample choice to obtain reliable insights into the elemental material composition.
For recycling purposes, elemental composition analysis of starting material with unknown history is inevitable. The value of unknown LIB scrap highly depends on the applied positive electrode material due to the different material values of Ni, Co and Mn. [8] Accordingly, higher cobalt contents as applied in LiNi 0.33 Co 0.33 Mn 0.33 O 2 or LiCoO 2 materials could account for higher scrap prices compared to NCM622-based material. Not only with the return flow of LIB chemistries from multiple decades, but also with the continuous reduction of inactive material contents for improved energy densities, the elemental value of the scrap varies. [1] Moreover, the identification of mixed positive electrode materials is relevant for robust hydrometallurgical process control, for example by consideration of LiFePO 4 contents. [46]

Analysis of organic compounds

Elemental analysis was informative for recycling value estimation by identification and quantification of positive electrode-based (transition) metals and current collector materials. However, only 41.16 (±3.88) wt % of the overall sample mass was accounted for by the quantified elements. Further, graphite, applied as negative electrode active material, was identified by SEM-EDX imaging. However, fluorine species and further organic residues are present in the shredded material, and knowledge of these is valuable for tailored treatment. Further sample characterization can be obtained via speciation of the organic substances. Therefore, chromatographic techniques, mainly coupled to MS detection, were applied. [29] Moreover, pyrolysis, preconcentration and extraction methods were employed.
Pyr-GC-MS investigations
Pyr-GC-MS was applied to analyze polymeric binder residues in the shredded material. After shredding, electrode materials are randomly mixed, and separate analysis of positive and negative electrode materials as described in the literature was not practicable. [36] For easy sample handling, a random sample of the sieved fraction (0.100-0.315 mm) was analyzed first. The pyrograms obtained after pyrolysis at 200 and 300°C, mainly showing literature-known electrolyte residues and electrolyte decomposition products, are shown in the Supporting Information (Figure S3). The identification of electrolyte residues and decomposition species will be discussed based on the (SPME-)GC-MS results later. Significant amounts of 1-propanesulfonic acid ethyl ester (EPS) and PS were identified after pyrolysis at a temperature of 300°C by database comparisons (NIST 11 scores > 94 %; Figure S4). To corroborate these findings, optically presorted coppery colored flakes were additionally analyzed. An excerpt of the overlay of the TIC and the EICs of marker fragment ions, magnified by a factor of 100, is depicted in Figure 2. In addition, 1-propanesulfonic acid methyl ester (MPS) was identified (NIST 11 score 92 %); its identification in the previous measurement had suffered from peak overlapping caused by the sample complexity. The obtained background-subtracted GC-single quadrupole (SQ)-MS mass spectra of the three identified sulfur-containing analytes are shown in the Supporting Information (Figures S6 and S7). Target analysis of EPS and PS was also performed by GC-HRMS and will be discussed in a following section. MPS, EPS and PS were identified at a pyrolysis temperature of 300°C. Regarding PS, the M⁺ ion with m/z 122 was used for identification due to the limited fragmentation behavior on the SQ-MS system. PS is applied as a film forming additive, and ring opening reactivity during cell formation has been described. After ring opening of PS, lithium alkyl/alkenyl sulfonates are formed, which were reported to improve the lithium ion conductivity of the SEI for graphite-based negative electrodes. [47-50] With regard to reverse engineering approaches, PS can be used alone or in combination with further substances such as fluoroethylene carbonate (FEC) and vinylene carbonate (VC) as a film former. [47] No clear evidence for either of the latter was detected, as discussed in a following section. Concerning recycling of the shredded LIB material, PS exposure at elevated temperatures has to be considered. PS is highly toxic, and volatile derivatives should also be treated as potential hazards. [51,52] Further, degradation reactions of PS resulting in toxic and highly volatile SO 2 or H 2 S are conceivable at higher temperatures, but were not observed in these experiments. Besides the hazard potential, sulfur-containing additives represent a further heteroatom-containing species, relevant for example for hydrometallurgical treatment.
The pyrogram obtained at a pyrolysis temperature of 515°C is depicted in Figure 3. The high sample complexity resulted in crowded pyrograms with peak overlapping.
Target analysis of previously reported markers of typical binder materials was conducted. [36,53] In particular, the benzylic fingerprint of styrene-butadiene rubber (SBR), with benzene (6.45 min), toluene (9.28 min), ethylbenzene (12.65 min), styrene (13.80 min), propylbenzene (16.08 min), prop-1-en-2-ylbenzene (17.14 min) and phenol (17.22 min), was identified, indicating SBR as an applied binder material. SBR is usually applied in combination with CMC in aqueously processed SOTA graphite-based negative electrodes. [54,55] The detection of Na by ICP-OES already hinted at CMC usage, but only 1,4-dioxane (7.40 min), probably originating from CMC, was found by Pyr-GC-MS. [36] However, it was reasonable to conclude a mixture of CMC and SBR as the main negative electrode binder material. Regarding the identification of the positive electrode binder material, hints of the SOTA material polyvinylidene difluoride (PVdF) were found at a pyrolysis temperature of 515°C. The EIC of m/z 64, reported as C 2 F 2 H 2 in the literature, is shown in the Supporting Information (Figure S5), but suffered from major peak overlapping with further highly volatile species at short retention times in the total ion chromatogram (TIC). [36,53] For recycling of the active materials, organic compounds like carbonate or binder residues are usually removed. [56,57] Pyr-GC-MS investigations proved to be a powerful tool to identify electrolyte and binder residues in the inhomogeneous shredded material. The parallel identification of electrolyte (additive) residues, decomposition species and binder materials enabled fast and broad-ranging screening for organic compounds. Furthermore, polymer electrolytes or separators could also be investigated by Pyr-GC-MS.
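The benzylic-fingerprint reasoning above amounts to a simple retention-time matching step. The sketch below illustrates it with the SBR marker retention times reported in this section; the detected peak list and the matching tolerance are hypothetical assumptions.

```python
# Minimal sketch: retention-time-based target screening for SBR binder markers.
# Marker retention times are taken from the text; the peak list and tolerance
# are illustrative assumptions.

SBR_MARKERS_MIN = {
    "benzene": 6.45, "toluene": 9.28, "ethylbenzene": 12.65, "styrene": 13.80,
    "propylbenzene": 16.08, "prop-1-en-2-ylbenzene": 17.14, "phenol": 17.22,
}

def match_markers(detected_peaks, markers, tol_min=0.05):
    """Return the markers with a detected peak within +/- tol_min minutes."""
    return {name: rt for name, rt in markers.items()
            if any(abs(rt - p) <= tol_min for p in detected_peaks)}

peaks = [6.44, 7.40, 9.29, 12.66, 13.81, 16.07, 17.15, 17.23]  # hypothetical
hits = match_markers(peaks, SBR_MARKERS_MIN)
print(f"{len(hits)}/{len(SBR_MARKERS_MIN)} SBR markers matched")
```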
Subsequent material treatment can be tailored based on these investigations; for example, thermal treatment temperatures can be adjusted based on the identified materials. Furthermore, this paves the way for customized quality control of process steps aiming at organic compound removal during material recycling. Target analysis by Pyr-GC-MS after thermal, mechanical and/or chemical treatment could be performed to verify the successful removal of the species.
GC-MS investigations
For more detailed insights into species with significant vapor pressure at room temperature, SPME-GC-MS was performed. SPME-GC-MS enables fast screening even of dry materials without any sample preparation. [32] The headspace above a random solid sample was analyzed without heating to avoid ongoing decomposition. The SPME-GC-MS chromatogram obtained after preconcentration for 10 s is depicted in Figure 4.
Five main peaks representing the analyzed sample and an additional peak caused by detected air (1.56-1.77 min) were found. The observed linear carbonates dimethyl carbonate (DMC) (2.41 min), ethyl methyl carbonate (EMC) (3.34 min) and diethyl carbonate (DEC) (4.99 min) are typical SOTA LIB electrolyte solvent molecules. The electrolyte(s) applied in the investigated cells might have consisted of mixtures of these linear carbonates, but it seems more plausible that the symmetric linear carbonates were formed by transesterification reactions of EMC and that the pristine electrolyte formulation was solely EMC-based. [58] In this case, the relatively low degree of transesterification also indicated the application of an interphase film forming additive. PS was identified, but further commercially applied film forming additives such as FEC and VC were not observed. [59] However, their complete consumption cannot be excluded.
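For reference, the transesterification pathway invoked here converts two molecules of the asymmetric carbonate into the two symmetric ones:

2 EMC ⇌ DMC + DEC

so detecting both DMC and DEC alongside dominant EMC is consistent with an originally EMC-only solvent formulation.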
The SOTA electrolyte solvent ethylene carbonate (EC) (9.52 min) was also detected. Besides linear and cyclic carbonates, cyclohexylbenzene (CHB, 12.56 min) was detectable after a short extraction time. The large peak compared to the carbonate-based solvent molecules is caused by the good ionization efficiency of the aromatic ring structure. CHB is known as an overcharge protective shuttle additive and was found in aged electrolytes of LIBs for high power applications in (P)HEVs in quantities in the low percent range. [40,60] In conclusion, the identification of CHB gave an indication of the possible field of application of the analyzed LIB material during its lifetime.
For the analysis of less concentrated volatile species, the sample was also extracted for 600 s. Magnified sections of the obtained chromatogram are depicted in the Supporting Information (Figure S8). Besides the previously described substances, further literature-known decomposition species like C 3/4 carbonates, [61,62] dimethyl- (DMDOHC), ethylmethyl- (EMDOHC) and diethyl-2,5-dioxahexane dicarboxylate (DEDOHC), [58,63] and applied electrolyte components like biphenyl (BP) [64] and propylene carbonate (PC) [65] were detected. Carbonates with elongated alkyl chains and oligo carbonates are typical electrolyte decomposition species after electrochemical (= cyclic) aging. [62,63] Possible explanations for the occurrence of PC and BP range from combined application with other compounds (e.g., BP together with CHB), over impurities (e.g., BP in CHB), to simultaneous shredding of different cells or carry-over contamination during the shredding process. [40,64] SPME enabled fast and simple characterization of highly volatile substances. However, for sampling at room temperature, analytes with low vapor pressures suffer from sensitivity discrimination due to lower headspace extraction yields. Therefore, liquid extraction with a nonpolar solvent (DCM) was performed to enable liquid injection into the GC system despite the dry sample material. DCM was chosen as it has been proven to minimize conducting salt injection due to the low solubility of LiPF 6 . [66] The resulting GC-MS chromatogram is shown in the Supporting Information (Figure S9). In addition to the previously performed SPME-GC-MS analysis, further benzylic species were identified by NIST 11 database comparisons of background-subtracted mass spectra (scores > 90 %). Moreover, adiponitrile (ADN) was detected, which has been reported as a high voltage-compatible and high flashpoint component in LIB electrolytes. [67,68] The solely qualitative data hindered conclusions regarding reverse engineering, but the identifications illustrated the possible material complexity after industrial shredding, with high probabilities of cross contamination.
For improved sensitivity and selectivity, the DCM extract was also investigated by GC-HRMS. Structures previously identified based on GC-SQ-MS database comparisons were confirmed utilizing the accurate mass capabilities. Moreover, the pyrolysis findings on PS were investigated further. A commercially available PS standard was analyzed to obtain exact knowledge of its retention and fragmentation behavior on the GC-HRMS system. Based on these data, target analysis by EICs of characteristic sulfur-containing fragment ions was conducted. The overlay of the characteristic EICs is depicted in Figure 5. For chromatographic data of the PS standard material and the obtained GC-HRMS mass spectrum, the reader is referred to the Supporting Information (Figure S10).
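To illustrate the EIC-based target analysis step, the sketch below reconstructs an extracted ion chromatogram from centroided scan data for a single target m/z within a ppm tolerance. The scan contents, the tolerance and the target value (the summed atomic monoisotopic masses of C 3 H 6 O 3 S, neglecting the electron mass) are illustrative assumptions, not data from this work.

```python
import numpy as np

# Minimal sketch: building an extracted ion chromatogram (EIC) from centroided
# HRMS scans for one target m/z. Scan contents are illustrative assumptions.

def eic(scans, target_mz, tol_ppm=5.0):
    """scans: iterable of (retention_time, mz_array, intensity_array).
    Returns (rt, intensity) arrays, summing ions within +/- tol_ppm."""
    tol = target_mz * tol_ppm * 1e-6
    rts, intensities = [], []
    for rt, mzs, ints in scans:
        mask = np.abs(mzs - target_mz) <= tol
        rts.append(rt)
        intensities.append(ints[mask].sum())
    return np.array(rts), np.array(intensities)

# Two hypothetical scans around the PS retention time of 11.39 min
scans = [
    (11.38, np.array([122.0035, 150.0210]), np.array([1.2e3, 4.0e2])),
    (11.39, np.array([122.0040, 180.5000]), np.array([5.6e3, 3.1e2])),
]
rt, inten = eic(scans, target_mz=122.0038)  # exact mass of C3H6O3S (PS)
print(list(zip(rt.tolist(), inten.tolist())))
```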
Target analysis of PS (11.39 min) resulted in very low intensities of the chosen sulfur-containing marker fragment ions at the expected retention time. The identification of PS was at the limit of detection. Combining the Pyr-GC-MS and GC-HRMS results, PS and propyl sulfonates have to be considered as occurring sulfur-containing species in the shredded material. EPS was synthesized for reliable identification by retention time and characteristic fragment ions by means of GC-HRMS. The resulting chromatogram of the reaction mixture, the GC-HRMS mass spectrum and the identification of EPS in the DCM extract are depicted in the Supporting Information (Figure S11). The identification of EPS in the DCM extract proved that the esterification of the alkyl sulfonate also occurs without pyrolysis; EPS is therefore concluded to be a PS aging marker. However, this will be investigated in separate works with aged but unshredded PS-containing cells with precise knowledge of the electrolyte, active material and aging conditions. The combined results from the performed GC-MS measurements underline the value of comprehensive analytical investigations for both highly informative and highly reliable material characterization, even for the same chromatographic technique.
LC-MS and IC-CD(-MS) investigations
For LC-MS target analysis of electrolyte decomposition species, the shredded material was extracted with ACN. As for the GC analysis, data evaluation was simplified by previously defined target marker molecules. [38] Target analysis by EICs was performed based on literature-known species and adduct formation. Among others, oligo carbonate, phosphate carbonate and oligo phosphate species were detected. Further, ether oligomers and carbonate ether co-oligomers, described as thermal strain markers, were identified. Altogether, more than 50 species were identified within the observed mass range, solely based on the target lists introduced by Henschel et al. [38] (Tables S1-S4). As an example, Figure 6 shows an overlay of EICs of characteristic adducts formed for diphosphates with varying alkylation ((Me) 4 → (Et) 4 ). In previous studies, oligo phosphates were found in EMC- and VC-containing electrolytes after > 1000 cycles, but not in film forming additive-free electrolytes or solely after cell formation. Based on this, these species were described as possible VC marker molecules. [38] Later, their detection in DEC-based electrolytes after > 500 cycles was also described, with low intensities. [63] For DEC-based electrolytes, it has to be considered that only tetraethyl species are formed, and therefore the overall detection limits for this substance class suffer from statistical effects depending on the linear carbonate. Nevertheless, conclusions related to the cell operating and aging history of the analyzed material were enabled based on the detection of these species. Obviously, the material underwent long-term cyclic aging before shredding. Moreover, the detection of significant intensities of oligo phosphates correlated with the Pyr-GC-MS and GC-HRMS indications of the usage of at least one film forming additive. The identification of thermal strain markers could not be clearly assigned to conditions during cycling, as thermal stress during the shredding and electrolyte removal processes is also a reasonable explanation.
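The adduct-based target generation used here can be made concrete with a short sketch: given a neutral monoisotopic mass, the m/z values of common singly charged positive-mode adducts follow by adding fixed mass shifts. The example neutral mass is computed for C 4 H 12 O 7 P 2 , the tetramethyl diphosphate; treating it as an LC-MS target in this way is our illustration, not a value taken from the cited target lists.

```python
# Minimal sketch: generating target m/z values for EIC screening via common
# singly charged positive-mode adducts. The neutral mass is computed for
# C4H12O7P2 (tetramethyl diphosphate) as an illustrative example.

ADDUCT_SHIFTS = {            # mass shift in Da
    "[M+H]+": 1.007276,
    "[M+Na]+": 22.989218,
    "[M+NH4]+": 18.033823,
}

def target_mzs(neutral_mass, adducts=ADDUCT_SHIFTS):
    """Return a dict of adduct label -> target m/z for a neutral species."""
    return {label: neutral_mass + shift for label, shift in adducts.items()}

neutral_mass = 234.0058  # monoisotopic mass of C4H12O7P2
for label, mz in target_mzs(neutral_mass).items():
    print(f"{label}: {mz:.4f}")
```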
In addition to RPLC separation, IC was conducted to gain insights into detectable conducting salt anions and possible anionic decomposition species. The IC-CD chromatogram obtained from the diluted (1/100 v/v) ACN extract is depicted in Figure 7.
The qualitative IC-CD measurement proved PF 6 ⁻ (16.3 min) to be the dominating conducting salt anion, which is in line with the literature. [69,70] A further peak with a retention time of 26.5 min was detected. Identification was performed via HRMS detection. The background-subtracted mass spectrum of the peak at 26.5 min is shown in the Supporting Information (Figure S12). The EIC of the M⁻ ion is depicted in Figure 8.
The measured M⁻ ion with m/z 179.9236 belongs to FSI⁻, which is also applied as a conducting salt anion in LIBs. [70,71] The origin of FSI⁻ could not be clearly determined. Application as a co-conducting salt anion or cross contamination during shredding are possible explanations. Nevertheless, the identification represents another example of the complexity of the obtained recycling starting material. Anionic decomposition species like PF 6 ⁻ hydrolysis products were not detected. Reasons could be their absence or low extraction yields with ACN. For recycling of the conducting salt anions, the detection of significant amounts of undecomposed PF 6 ⁻ after shredding and rough electrolyte removal paves the way for a possible early-stage recovery of the conducting salt via extraction methods. However, the reproducibility and recovery rates of the performed solvent extraction require further evaluation.
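As a quick plausibility check of such an accurate-mass assignment, the monoisotopic mass of the FSI⁻ formula (F 2 NO 4 S 2 ) can be computed from atomic masses and compared with the measured m/z. The sketch below neglects the electron mass for simplicity and uses the m/z 179.9236 reported above.

```python
# Minimal sketch: checking the FSI- assignment by comparing the monoisotopic
# mass of F2NO4S2 with the measured m/z. Electron mass neglected for brevity.

MONOISOTOPIC = {"F": 18.998403, "N": 14.003074, "O": 15.994915, "S": 31.972071}

def mono_mass(formula):
    """formula: dict of element symbol -> atom count."""
    return sum(MONOISOTOPIC[el] * n for el, n in formula.items())

theoretical = mono_mass({"F": 2, "N": 1, "O": 4, "S": 2})
measured = 179.9236
ppm = (measured - theoretical) / theoretical * 1e6
print(f"theoretical m/z: {theoretical:.4f}, deviation: {ppm:+.1f} ppm")
```

With a sub-ppm deviation, the assignment is plausible on mass accuracy alone.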
Conclusions
In this study, inhomogeneous shredded LIB material from an industrial recycling process was analyzed in detail. The comprehensive application of a wide range of analytical methods enabled a deep understanding of the elemental composition and the organic species present. On the one hand, quantitative elemental analysis of the material was informative for both the elemental recycling value and reverse engineering approaches. To illustrate sample inhomogeneities after shredding, sieved fractions of the shredded material were also analyzed, showing significantly different Al and Cu contents. On the other hand, organic speciation via chromatography-based methods was applied, and conclusions regarding electrolyte and binder aging history were drawn. Plausible pristine materials were discussed, and long-term cyclic aging of the material was proven by several examples. Furthermore, the obtained data enabled the evaluation of challenges for recycling purposes, such as safety aspects and process-interfering sulfur-containing species. Thereby, the comprehensive analysis combined the advantages of gas and liquid chromatography, as well as extraction and pyrolysis methods, for a maximized information output. The performed comprehensive analysis demonstrated both the demand for and the capabilities of analytical methods for the characterization of shredded LIB material. Despite the material complexity and the lack of information on the material history, a deep understanding of the present recycling material can be obtained. Beyond that, approaches for subsequent recycling product and process control were pointed out based on the same analytical methods.
"year": 2022,
"sha1": "12e7913d79295c1a522e6d6af47a180a3cb02cbd",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/chem.202200485",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "d5f7bff5ceecab6d0d3763f4a2d3818c0d20af6f",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Crowdsourcing Public Engagement for Urban Planning in the Global South: Methods, Challenges and Suggestions for Future Research
Crowdsourcing could potentially have great benefits for the development of sustainable cities in the Global South (GS), where a growing population and rapid urbanization represent serious challenges for the years to come. However, to fulfill this potential, it is important to take into consideration the unique characteristics of the GS and the challenges associated with them. This study provides an overview of the crowdsourcing methods applied to public participation in urban planning in the GS, as well as the technological, administrative, academic, socio-economic, and cultural challenges that could affect their successful adoption. Some suggestions for both researchers and practitioners are also provided.
Introduction
Although the concept of crowdsourcing is fairly recent [1], the idea of engaging the public and non-experts in problem-solving and data collection has a long history in both research and practice. In 1936, the Japanese company Toyota (then Toyoda) organized a public contest for the design of its new logo [2]. In total, 27,000 designs were submitted, and the best logo was selected and used between 1936 and 1989. In the 1960s, public advocacy theory [3] emphasized the importance of public participation in urban planning. Other concepts, such as citizen science and Public Participation Geographic Information Systems (PPGIS), follow the same principle of engaging the public to participate in the design and implementation of solutions to various problems regardless of their level of expertise. With the upsurge of the Internet, researchers and practitioners had to rethink the ways in which public participation is carried out and re-assess the societal transformation that comes with it. This led to the emergence of crowdsourcing, "a web-based business model requiring voluntary open collaboration to develop innovative solutions" [1]. By tapping into a large and diverse pool of stakeholders through the Internet and Web 2.0 technologies, crowdsourcing, as a public participation method, has alleviated the spatial and temporal constraints associated with the aforementioned methods.
The term Global South (GS) has several definitions with economic, geopolitical, and cultural implications. Economically speaking, the GS groups developing countries characterized by, among other indicators, a medium or low human development index (HDI less than 0.8). Geographically speaking, most of the GS is in the southern hemisphere and regroups African, Southern and Central American, and Asian countries (with the exception of Japan, South Korea, and Singapore). Due to their limited resources, these countries struggle to develop plans that could effectively address the challenges faced by contemporary cities. For example, in the Asia-Pacific region, over 50% of the Sustainable Development Goals (SDGs) cannot be measured due to a lack of data [4]. Fraisl et al. [5,6] have demonstrated that crowdsourcing could help monitor SDG indicators. Thus, crowdsourcing could be very useful to these countries as it allows gathering useful data, which in turn could support better-informed policies. From a historical point of view, most of these countries are former European colonies. As such, the traditional urban planning method consisted mainly of copying strategies implemented in the former colonial power [7,8]. However, these strategies are rarely successful as they fail to take into account the unique challenges faced by developing countries [9]. With its participatory approach, crowdsourcing could provide a platform that taps into the citizens' local knowledge to identify the main challenges faced by cities in the GS. This, in turn, could help planners better define their priorities and implement policies that meet the needs of the local communities. Furthermore, this democratized planning process through public participation can lead to more transparency and greater citizens' acceptance of public decisions [10].
In the Global North, crowdsourcing has helped democratize the planning process, empower citizens, provide low-cost data for real-time planning, and mitigate the limitations of traditional data collection methods such as census data [11-15]. In America, Thiagarajan et al. [12] used low-cost, grassroots GPS tracking solutions to improve riders' transit experience (e.g., reduction of waiting time), while Griffin and Jiao [15] demonstrated that collecting data through crowdsourcing increased the inclusiveness of the participatory planning process from the perspective of geography and equity. In Australia, K. Hu et al. [14] developed a low-cost participatory sensing system (called HazeWatch) for urban air pollution monitoring, which yields more accurate measurements than the existing government system. The HazeWatch system provides a better understanding of the health impact of air pollution in metropolitan areas. In Italy, MiraMap [13], a we-government platform, helps facilitate the collaboration between the public and the administration while promoting social inclusion, transparency, and accountability in smart city management. These examples in the Global North show the potential value crowdsourcing could have for the GS, which is characterized by limited resources, low or inexistent citizen participation, and a lack of transparency, accountability, and data-driven planning methods.
However, despite these possible advantages, the potential of crowdsourcing remains to be exploited in the GS. This is even more true in Africa, where most urban studies rely on qualitative analysis or traditional data collection methods (survey questionnaires) instead of quantitative methods that require abundant and reliable data [16]. This affects the reliability of the findings and limits the effectiveness of the policies that could be implemented from the existing literature. Furthermore, the existing reviews on crowdsourcing in urban studies mainly provide a global overview of the literature [17-22]. These studies provide a clear understanding of the methods and challenges associated with the use of crowdsourcing. However, the GS faces specific cultural, technological, political, and administrative challenges which could greatly impact the successful use of crowdsourcing in this part of the world. To fill this gap, this paper presents a review of the crowdsourcing research efforts conducted in the GS. More specifically, this study describes the crowdsourcing methods adopted in the GS as well as the main areas of application. The methods described focus on public engagement to support urban planning. Therefore, crowdsourcing in this context mainly consists of data collected and shared by the public through mobile devices (GPS tracking, crash reporting, environment monitoring, etc.) and/or local knowledge shared through collaborative websites (crime mapping, flood mapping, idea generation for smart city management, etc.). Furthermore, drawing from the descriptive statistics of the reviewed papers as well as the characteristics of the GS, the paper also discusses the challenges that could hinder the implementation of crowdsourcing. Finally, it suggests some solutions that could be useful to developing countries in general. This approach has some advantages, which could lead to significant contributions to the existing literature:
• By providing an overview of the main areas of research, we identify the domains where more research is needed in the future;
• Drawing lessons from countries that share the same historical, social, and economical experiences seems more logical than copying methods adopted in the developed world and could lead to more realistic solutions.
The remainder of this paper is organized as follows. The next section discusses the concept of crowdsourcing and adapts it to the context of this study. Section 3 outlines the review method and provides a summary of the reviewed literature. Sections 4 and 5 identify the main areas of application and methods, respectively. Section 6 discusses the challenges associated with the development of crowdsourcing methods in the GS and provides suggestions for future implementations. Section 7 concludes this study.
Crowdsourcing: Definitions
Since Howe [1], several studies have provided different definitions of crowdsourcing. These definitions are important as they provide a basis for what should be considered crowdsourcing and what should not. For example, some studies perceive YouTube and Wikipedia as crowdsourcing [23] while others do not [24].
In urban planning, concepts such as problem-solving, idea generation, and collaborative mapping are widely accepted as crowdsourcing [23,25-27], while data collection methods such as social media scraping and crowdsensing are subject to debate [25,28]. Brabham [25] defines crowdsourcing as a top-down approach to solving planning problems. This definition includes approaches such as idea generation for smart city solutions [23,29] but excludes data collection methods such as crowdsensing, Public Participation Geographic Information Systems (PPGIS), social media, etc. Nakatsu et al. [28] argue for a broader definition that includes "geo-located data collection" (e.g., GPS tracking, a form of crowdsensing) but excludes social media. Their main argument for excluding social media was the absence of explicit outsourcing of a task to the crowd. Furthermore, although social media have been widely adopted as crowdsourced data, the method usually consists of extracting people's posts (social media scraping) through Application Programming Interfaces (APIs) without their consent. This could raise some ethical concerns, as the people whose posts are extracted may not be willing to participate in data collection. Besides, Howe, who introduced the concept of crowdsourcing, also defined it as a voluntary process. Finally, Estellés-Arolas and González-Ladrón-De-Guevara [24] have provided a definition of crowdsourcing based on a thorough review of the existing literature. They found voluntary participation and a clearly defined task among the main criteria for crowdsourcing. Based on the aforementioned studies, the adoption of methods that do not necessarily require voluntary participation (such as social media scraping and crowdsensing) may be problematic. However, it would be too simplistic to discard all studies using social media or crowdsensing without exploring cases where the participation is voluntary and the task clearly defined. The next subsections address this issue in detail.
Social Media Data
Although most studies use social media scraping, there are specific cases in which the methods used meet the criteria described above. These cases are:
• Voluntary participation in dedicated social media groups or pages. Dedicated social media pages can be open platforms for citizen engagement. In this case, the task could consist of submitting complaints (e.g., HarassMapEgypt, a Facebook page [30]), participating in e-governance or sharing citizen sensing data (e.g., pictures, videos, etc.) (see Section 5.3).
• Studies using social media scraping as a primary data collection method and another crowdsourcing method (usually Open Street Map, OSM) as a secondary dataset. We believe such studies to be of importance as they demonstrate how crowdsourcing could complement other datasets.
Crowdsensing
Crowdsensing leverages the proliferation of low-cost sensing devices and citizen engagement for collecting and sharing data in different domains (environment monitoring, traffic management, waste management, etc.). Participation in crowdsensing can be voluntary or non-voluntary. For example, a crowdsensing application can combine sensing data (e.g., GPS data) with location-based service network datasets such as social media check-ins [31]. Thus, similar to social media, we will carefully identify the studies in which participation in crowdsensing is voluntary.
Web-based PPGIS has also been used to crowdsource data for urban planning [32]. Some web-based PPGIS projects provide an online platform where participants can share local knowledge through open calls, which is consistent with the basic principles of crowdsourcing.
Therefore, in line with the arguments discussed above, we adopt a broader definition of crowdsourcing which covers voluntary crowdsensing, dedicated social media campaigns, and collaborative websites (web-based PPGIS, collaborative mapping, and idea generation).
Literature Search
In this section, we adopt the PRISMA [33] method to search for the core literature used in this study. PRISMA (Figure 1) consists of the following steps: identification, screening, eligibility, and inclusion. We discuss each step below. During the identification, we used the SCOPUS database to search for articles corresponding to our keywords. Based on the research objectives, keywords related to "crowdsourcing," "urban planning," and "Global South" should be identified (Table 1). Drawing from the discussion in Section 2, the main keywords related to "crowdsourcing" are VGI, PPGIS, PGIS, crowdsensing, etc. "Urban planning" was split into two keywords: "urban" (town, city, etc.) and "planning" (management, planning, and policy). The main difficulty was finding keywords related to "Global South," as several studies do not have specific keywords for the GS. Instead, they use the name of the country or city in their abstract, title, or keywords. In order to include as many articles as possible, "Global South" was left out of the keywords and checked during the screening stage. Using the keywords displayed in Table 1, we generated the following query:

TITLE-ABS-KEY ((crowd*sourc* OR "participatory sensing" OR crowd*sensing OR vgi OR "volunteered geographic information" OR "participatory gis" OR "participatory geographic information system*" OR "public participation geographic information system*" OR *pgis OR "user*generated content") AND (urban OR residential OR city OR cities OR town) AND (planning OR management OR policy OR policies)) AND (LIMIT-TO (SRCTYPE, "j")) AND (LIMIT-TO (DOCTYPE, "ar")) AND (LIMIT-TO (LANGUAGE, "English"))

The query above searches for journal articles written in English and corresponding to the keywords described in Table 1. The literature search was first performed in March 2021 and repeated in late December 2021 in order to find the latest articles. In total, 591 articles were obtained from SCOPUS. An additional 29 papers were obtained from other sources (references of selected papers and other reading materials), giving a total of 620 articles.
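To make the screening logic reproducible outside of SCOPUS, the sketch below applies the same three AND-connected keyword groups to a local set of records (e.g., titles and abstracts). The regular expressions approximate the wildcard patterns of the query, and the records are invented examples.

```python
import re

# Minimal sketch: local keyword screening mirroring the three AND-connected
# groups of the SCOPUS query above. Regexes approximate the wildcards.

CROWD = re.compile(r"crowd.?sourc|participatory sensing|crowd.?sensing|\bvgi\b|"
                   r"volunteered geographic information|participatory gis|"
                   r"geographic information system|user.?generated content", re.I)
URBAN = re.compile(r"\burban\b|residential|\bcity\b|\bcities\b|\btown\b", re.I)
PLAN = re.compile(r"planning|management|polic(y|ies)", re.I)

def matches(text):
    """True if the record satisfies all three keyword groups."""
    return bool(CROWD.search(text) and URBAN.search(text) and PLAN.search(text))

records = [
    "Crowdsourcing flood maps for urban disaster management in Brazil",
    "A survey of deep learning for image classification",
]
print([matches(r) for r in records])  # [True, False]
```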
After removing the duplicates, the remaining articles were screened for relevance and geographic location. The articles corresponding to our research objectives and investigating cities of the GS were retained. A total of 144 articles was obtained at the end of this process.
The available articles from the remaining 144 were downloaded and checked for eligibility based on the following criteria. First, one of the main objectives of this study was to explore the crowdsourcing methods adopted in the GS and their associated challenges. Thus, only studies with a clearly detailed methodology were retained. Second, studies where the data were extracted without the knowledge of the users (social media scraping, non-voluntary crowdsensing, etc.) represented a large portion of the available literature and had to be removed manually. Using the aforementioned criteria, we further screened the database and obtained a final core literature of 78 papers (see Supplementary Materials).
The following section provides the descriptive statistics of the reviewed papers.
Source Titles and Article Frequency
Table 2 shows that the most represented journals are Remote Sensing (6 articles), Sustainability (5), and IEEE Access, Cities, and the International Journal of Geographical Information Science (3 each). Sustainability, IEEE Access, and PLoS ONE are all Open Access (OA) journals. Furthermore, 25 out of the 78 papers were published in OA journals (about 32%). More OA journals are needed as most researchers in the GS cannot afford journal subscriptions. Open Access journals would be a good way to democratize access to the latest findings and methods in this research area.
Table 2. Most represented source titles (number of articles):
Remote Sensing: 6
Sustainability: 5
IEEE Access: 3
Cities: 3
International Journal of Geographical Information Science: 3
GeoJournal: 2
Journal of Flood Risk Management: 2
Journal of Universal Computer Science: 2
PLoS ONE: 2

There are also many GIS/engineering journals, which may seem surprising. Given the research topic, one may expect more journals with a planning focus. However, several studies also tried to demonstrate how crowdsourcing could complement other methods, such as remote sensing, to solve the data scarcity problems of the GS [34]. These studies may target non-planning journals such as Remote Sensing. Furthermore, several studies tried to optimize the crowdsourcing methods (through new incentive mechanisms, more privacy, better coverage, etc.) using advanced computer and engineering methods [35]. Such studies may target more engineering-oriented journals such as the IEEE series. The presence of GIS journals is mainly due to the fact that several crowdsourcing methods use VGI data (e.g., OSM, web-based PPGIS, etc.). However, all reviewed papers address important urban planning issues and could be of tremendous value for the GS.
All reviewed papers were published in the last 12 years, and the increasing numbers are evidence of a growing interest in the application of crowdsourcing methods in the GS (see Figure 2a).
Large Contribution from China and Researchers Outside the GS
Figure 2b shows that most of the research was conducted by researchers affiliated with Chinese institutes or those outside the GS (United States, United Kingdom, Germany, etc.). In terms of study areas (Figure 2c,d), 38% of the research was conducted in China, followed by 25% in Central and South America (Brazil, Argentina, Guatemala, etc.), 19% in the other parts of Asia (India, Iran, Pakistan, etc.), and 18% in Africa (South Africa, Egypt, Morocco, etc.). There was no contribution from the Pacific Islands.
These numbers show the large domination of China in this research field (both in terms of affiliations and study areas). Meanwhile, the other areas of the GS are largely covered by researchers from outside the region.
Research Areas
Table 3 shows the main research areas covered in the reviewed papers. We can see that urban morphology and transportation are the most represented areas (16 papers each), followed by environmental monitoring (13 papers). Papers that demonstrate the potential of crowdsourcing as a data collection method, as well as techniques to optimize it, and those that assess crowdsourcing tools/methods represent an important portion of the reviewed papers (9 papers each). Other areas such as urban demographics, disaster detection and management, and smart city management are also covered. The next section provides a detailed description of the main research areas and their key aspects.
Main Research Areas and Key Aspects
In this section, we use Table 3 to describe the main areas and key aspects covered in the reviewed papers.
Urban Morphology
These studies use data shared by the public to examine urban forms, their formation and evolution, as well as their impact on different aspects of urban life. The main elements of urban form investigated in the reviewed papers are land use, infrastructure, and housing. The GS experiences fast urbanization, which negatively affects the aforementioned elements, and strong measures need to be taken in order to overcome the challenges. In terms of land use, studies in the GS focused on the classification of functional zones so as to determine the main areas where human activities usually occur [36-38]. Such studies are important for the GS as they can help, among other things, detect rapid urbanization and can therefore support better management of the existing resources. Crowdsourcing is, in this case, a source of training datasets for the classification algorithms. Infrastructure should be a major domain of investigation due to the lack of basic infrastructure in many areas of the GS [39]. Some studies investigated the effects of the road network on cyclist behavior [40]. Studies on urban design focus on the effects of the urban landscape and street configuration on human activities and/or behavior. For example, Mohamed & Stanek [30] examined the effects of street configuration on sexual harassment, while other researchers analyzed the impact of the urban landscape on physical activities [41,42]. Such studies can help guide future urban design so as to build safer, more equitable, and healthier urban environments. Housing has been a major cause of concern in the GS, mainly due to the lack of affordable housing and the proliferation of informal settlements. Sub-Saharan Africa has the highest proportion of slums in the world (50.2%), followed by Central and Southern Asia (48.2%) [43]. To tackle these challenges, some studies have involved the public in the mapping of informal settlements in the GS. However, they usually rely on the most basic forms of community mapping, with paper drawings and limited sample sizes [44,45]. With the proliferation of smartphones in some parts of the GS, more advanced methods through crowdsourcing could help reach larger samples.
Urban Transportation
Due to its importance and its several implications for different aspects of urban life, transportation is among the most represented areas in the reviewed papers (16 papers). The wide variety of domains covered also explains the large number of papers in the reviewed literature. As a service designed for the public, transportation is heavily impacted by the way people behave through time and space, as well as by their response to different transportation-related services. Investigating travelers' behavior could help understand their impact on the urban space (e.g., through their travel patterns) and help draw more data-driven policies to support better transportation planning in the GS. In some cities of the GS, crowdsourcing has been used to examine users' travel behavior through travel patterns [46], route choice [47], travel behavior's impact on congestion [48], etc. Travelers' responses to mobility services, as well as strategies to improve them, were also investigated. Musakwa and Selala [49] used crowdsourced GPS data to investigate cycling patterns, while other studies developed multimodal or public transportation networks with crowdsourced data [50,51]. Other studies also focus on traffic signal optimization [52], traffic density estimation [53], etc. Given the large number of social media users among young people, researchers have also looked for ways to involve the youth in transportation planning by crowdsourcing through dedicated social media pages.
Environmental Monitoring and Management
In an era of sustainable urban planning, research on how public engagement could foster the development of more sustainable cities has become a trend in some cities of the GS. This is also in line with the United Nations' 2030 Agenda for Sustainable Development Goals (SDGs) regarding sustainable cities and communities [54], which supports the improvement of urban planning in participatory and inclusive ways. For this reason, researchers have leveraged the power of public engagement through crowdsourcing to monitor the environment and, in some cases, develop decision support systems for both the public and decision-makers. The proliferation of smartphones has made this process easier, as smartphones can capture and share data without any technical knowledge from the users. This made possible the collaborative collection of noise data [55] and of air temperature from smartphone batteries [56,57], the reporting of pollution in coastal zones [58], etc.
Data Collection and Optimization
These studies demonstrate the potential of crowdsourcing as a source of data for the GS, as well as ways to optimize the data collection methods. For example, in China, several research efforts have developed new methods to increase the spatio-temporal coverage of voluntary crowdsensing tasks in order to obtain larger and more representative datasets while minimizing the cost and improving privacy. These methods include protecting participants' privacy, increasing the coverage distribution of sensing tasks through incentive mechanisms [59], and enhancing data forwarding performance through cooperative data forwarding mechanisms [60,61]. Taking into consideration the characteristics of the GS, other studies showed different solutions to involve the public in data gathering and experiment design [62]. Recently, there has been a growing interest in the potential of crowdsourcing as a data collection method for monitoring sustainable development goals (SDGs) in the GS. Pateman et al. [63] provided a review of the use of citizen science for monitoring SDGs in low-and-middle-income countries, while Fraisl et al. [6] introduced a citizen science tool (Picture Pile) for monitoring SDGs.
Assessment of Crowdsourcing Methods for Urban Planning
Some studies have assessed crowdsourcing methods in the context of urban planning in the GS. Given the novelty of crowdsourcing in the GS, such studies are crucial for assessing its applicability and usefulness for cities in this part of the world. While most studies adopt a more objective approach using statistical evaluations (through the density, accuracy, nature of the crowd, etc.), others opt for a subjective method based on users' perceptions (perceived usefulness, perceived ease of use, perceived satisfaction, etc.). The objective assessments mainly focused on collaborative mapping and were conducted in China [64,65], Turkey [66], Kenya [67], as well as cities in Argentina and Uruguay [68], most of them focusing on OSM. Regarding the subjective assessments, Cilliers & Flowerday [69] investigated the subjective factors affecting the intention to use the Interactive Voice Response (IVR) system in South Africa, while Bugs et al. [70] examined the perceived ease of use, perceived usefulness, and satisfaction with a web-based PPGIS platform for urban planning in Brazil.
Smart City Management
Smart cities put the public at the center of the planning process. Therefore, participatory approaches such as crowdsourcing play an important role, as they allow the public to share their ideas and opinions for more efficient planning practices. However, the GS is behind the rest of the world in terms of smart city management due to a lack of basic infrastructure and of a clear understanding of what a smart city should be in local contexts. For this reason, crowdsourcing could start with an exchange on steps towards smart city transformation in the context of the GS. This is the method adopted by Kumar et al. [29], who crowdsourced ideas (idea generation) for smart city transformation in India. Another step would be to consult the public on the efficient management of the existing resources, as demonstrated by other studies in the GS [71].
Urban Demographics
The rapid population growth in many cities of the GS, especially African cities, raises challenges which could be mitigated with data-driven methods. Such methods could help monitor changes in the population, predict future trends and implement proactive policies to face future challenges. However, despite the potential advantages for the GS, urban population estimation has not been widely investigated in the region, as all reviewed studies were conducted in China [72-75]. In the aforementioned studies, crowdsourcing (collaborative mapping through OSM) was adopted as supplementary open data so as to improve the accuracy of the mapping algorithms.
Disaster Detection and Management
While natural disasters are common in all regions of the world, the GS is particularly vulnerable to them due to the lack of resources for disaster detection and management. Crowdsourcing, especially collaborative mapping, has played an important role in helping the GS face these challenges. One of the main examples is the use of OSM for disaster relief during the 2010 earthquake in Haiti. Some studies have shown how public engagement can help improve flood mapping in the GS [76-78]. Crowdsourced data can supplement other datasets (e.g., wireless sensor network data) to develop spatial decision support systems (SDSS) for flood management, as demonstrated by Horita et al. [78].
Other Areas
Some research areas that could have tremendous effects on urban planning have not been widely investigated in the reviewed papers. Although lack of security is often an issue in the GS, only one study among the reviewed papers has addressed it [79]. This is also the case for urban tourism, urban governance, and facility location selection.
Several studies that were excluded also discussed different aspects of urban planning using social media data. As we explained in Section 2, social media data collected without explicit consent from the crowd are excluded from the reviewed literature. The topics discussed in those studies include urban health (e.g., COVID mapping in urban areas), urban tourism, and disaster detection and management (earthquake and flood detection, etc.).
Crowdsourcing Methods
This section provides an overview of the crowdsourcing methods applied in the GS. More specifically, for each method, we discuss the basic principles, its potential value for urban planning (especially in the context of the GS), the type of data obtained and, finally, its main areas of application. The methods identified in this study can be regrouped into three categories: collaborative websites, voluntary crowdsensing, and dedicated social media campaigns.
Collaborative Websites
Collaborative websites are web-based platforms that allow participants to share local knowledge, maps, geo-tagged pictures, etc., within a specific framework. Globally, they have the benefit of providing a participatory planning process that allows the end users to comment on planning projects [70], map the areas they are most familiar with (e.g., OSM), report violations and crimes [80], suggest innovative ideas for smart city planning [23], etc. Collaborative websites are generally in line with the concept of collective intelligence [81], as individual knowledge is openly available to other participants who can access, edit, discuss and improve it. As a result, they yield better outcomes than knowledge from single individuals. In the GS, government agencies usually lack the equipment and workforce to adequately handle the many problems they face on a daily basis [58,80]. Collaborative websites can provide a low-cost and effective solution to these problems while raising awareness among the people.
For more clarity, we distinguish between mapping as a separate endeavor and the provision of user-generated content to support a specific area of planning (crime reporting, environmental monitoring, etc.) using web-based PGIS/PPGIS, for example. This is because some maps are collaboratively built for general purposes (e.g., OSM). The resulting map could then be applied to different areas, including urban planning. The next subsection discusses collaborative mapping as a separate endeavor.
Collaborative Mapping
Collaborative mapping is a collective effort in which volunteers with various levels of expertise and motivations participate in the creation, editing, and dissemination of digital maps [27,82]. The mapping process relies on the collection (through sensors), assemblage, and annotation of geographic information using web mapping tools (e.g., the OSM website). Since these web mapping tools are easy to use, even non-experts can take part in collaborative mapping, which helps reach a potentially larger crowd.
These mapping endeavors have several potential advantages for the GS, including accessibility and accuracy. Accessibility refers to the possibility for any user to freely use the maps. OSM provides this possibility as long as the user acknowledges their use of the service [82]. Furthermore, many areas of the GS still use outdated maps due to the costliness of professional mapping services [67]. Collaborative platforms can offer the same level of accuracy as commercial mapping services [83]. These factors make collaborative mapping a potential source of reliable and up-to-date urban data with minimal cost for areas with limited resources, such as the GS.
The main data used from collaborative mapping in the GS consist of road networks (road types, stations, etc.) [47,84,85]. Datasets related to building footprints have also been used [86]. Collaborative mapping has been used to investigate urban morphology [40], land use [84,87], transportation planning [47,50], urban population estimation [74], etc. Recently, there has been an increasing trend in the contributions of corporate editors to OSM, especially in Southeast Asia [88]. Finally, it is worth noting that some collaborative mapping platforms also apply to specific areas such as disaster relief, election monitoring, etc. One example of such platforms in the GS is the use of the Ushahidi platform for flood mapping in Brazil [77].
Web-Based PPGIS
A web-based PPGIS framework consists of four main concepts: GIS, public participation, web development, and the domain of application (e.g., crime monitoring, environmental monitoring, flood mapping, etc.). As such, it is a multidisciplinary area that allows participants to share local knowledge through online GIS platforms. Participants can use the platform to post or comment on different urban problems, including infrastructure damage, crime, natural disasters, etc. The importance of the aforementioned problems and the risk they could represent for a community are among the main factors that explain the need for the public to actively participate in these PPGIS projects. Furthermore, artificial sensors (e.g., cameras), which are often used to monitor crime and other types of violations, do not have the intelligence to provide an in-depth and real-time interpretation of events. In this case, the public could provide a better response to the issue (e.g., crime reporting, helping the victim, etc.) than an artificial sensor. The posts can be in the form of geotagged text, audio, video, or a combination of these. The fact that other participants can also comment on a post helps ensure the reliability of the information provided.
There are also bottom-up decision support systems that allow the public to get involved throughout the decision-making process. In Iran, the web-based Spatial Decision Support System (WebGIS-SDSS) allows citizens to access, discuss, review and submit their opinions about urban development applications based on multi-criteria decision-making [90]. The authorities then aggregate the opinions to make their final decision (accept or reject the application).
Idea Generation/Idea Contest
Some collaborative websites provide a platform for innovative ideas to support modern and sustainable city planning. They allow non-experts to actively participate in the planning process by identifying their needs, submitting innovative ideas, commenting on and/or rating other users' suggestions [23], or helping choose the location of future facilities. Unlike collaborative mapping and web-based PPGIS, these websites do not necessarily require the use of geo-location services, and users can directly participate without any prior GIS knowledge. Despite its potential advantages for citizen empowerment, this type of collaborative website was rare among the reviewed papers. Examples in the GS include idea generation for smart city transformation in India [29] and facility location selection [91].
Voluntary Crowdsensing
The latest smartphones are equipped with a variety of sensors, including cameras, accelerometers, microphones, a global positioning system (GPS), air quality sensors, etc.Furthermore, there have been tremendous improvements in the memory size and computational capabilities of these mobile devices in the last few years.These factors, coupled with the increasing accessibility of smartphones, have turned mobile phone owners from simple users to contributors of rich sensing data.In addition to smartphones, these sensors can also be installed in other devices (laptops, tablets, etc.) or locations such as cars and help gather data for traffic or environmental monitoring from a potentially large group of participants.Crowdsensing uses the power of the crowd and the ubiquity of sensing technologies to collect data for various urban planning activities, including traffic management, environmental monitoring, etc.
Crowdsensing has advantages in investigating modes with a low share, such as cycling. Given the small number of cyclists in many cities (between 1 and 2% for work trips), it may be difficult to find a representative sample for analyzing cycling patterns using traditional methods such as cycle counts [92]. In this case, crowdsensing could help cover a larger and more diverse sample (e.g., Strava in Johannesburg, South Africa [49]). Crowdsensing is also beneficial in traffic control and management. It could be a time- and cost-efficient alternative to roadside cameras and loop detectors for detecting traffic congestion. Most developing countries cannot afford these cameras or other roadside sensors and could benefit greatly from such methods [52]. In Africa, recent studies have used GPS devices to address the lack of reliable data [93,94]. However, these methods are expensive and suffer from a limited sample size [93,94]. Crowdsensing could provide a low-cost solution to these problems [95].
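The sampling difficulty for low-share modes is easy to quantify: with mode share p, a random survey of n travelers captures about np cyclists, so reaching a target count requires n ≈ target/p. A quick check using the 1-2% share quoted above (the target of 500 observed cyclists is an illustrative assumption):

```python
# Back-of-envelope survey size needed to observe a target number of
# cyclists when the cycling mode share is 1-2% (share values from the
# text; the target of 500 observed cyclists is an assumption).
for share in (0.01, 0.02):
    target = 500
    print(f"mode share {share:.0%}: survey ~{target / share:,.0f} travelers")
```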
Data from crowdsensing usually consist of GPS tracks, temperature, noise levels, particulate matter (PM 2.5 and PM 10), geotagged pictures/videos/audio/comments shared by participants, etc. Crowdsensing can also provide large urban datasets for planners. Examples of such datasets in the GS include datasets for environmental sensing [55,57], GPS data for cycling pattern analysis [49], data for smart city management [96], Waze data for urban mobility [48], etc.
As specified in Section 2, we only investigated voluntary participation in crowdsensing throughout our analysis.
Dedicated Social Media Campaigns
Social media campaigns (SMC) are not to be confused with social media scraping (through Twitter or Weibo). By SMC, we refer to specific social media pages, groups, etc. that are launched with a clear goal (e.g., addressing an urban planning issue), where the task is clearly defined and outsourced to the crowd (submission of proposals, comments, complaints, votes, etc.), and where participation is always voluntary. They are, to some extent, similar to collaborative websites. However, unlike many collaborative websites, which require technical skills and/or financial resources (that are not always available in the GS), SMC leverage existing social media platforms and are easy to set up and manage. A concrete example would be the use of a dedicated Facebook group to discuss public safety in an urban area. The crowdsourcer can launch a page that invites all inhabitants of a specific area to report issues, comment, and suggest solutions. The information provided could be valuable to policymakers and the public without any financial burden.
Examples of applications in the GS include Facebook pages for public participation in transportation planning [97] and e-government [98], and sexual harassment reporting on Facebook and Twitter [30], etc.
Challenges
Although some challenges, such as sample representativeness, privacy, access, and data processing, are applicable to all crowdsourcing projects [99], some issues are specific to or more severe in the GS. The challenges discussed here are drawn from the characteristics of the GS and corroborated with the 78 reviewed papers.
The Digital Divide
Despite the recent improvements, the digital divide is still present in many parts of the world [63]. The GS is also characterized by a technological gap with respect to the rest of the world. For example, in 2022, Internet penetration rates were lower in Africa (43.2%), the Caribbean/South America (80.5%), the Middle East (77.1%) and Asia (67.0%) [100]. In comparison, North America has a 93.4% penetration rate, while 89.2% of Europeans have access to the Internet. Furthermore, access to mobile Internet is also lower in low- and middle-income countries [101]. Given the importance of the Internet in crowdsourcing, the GS suffers a severe disadvantage compared with the rest of the world. In addition to access, the ability to use the technology is also an indicator of the digital divide. A lack of literacy and digital skills is a barrier to mobile Internet use in low- and middle-income countries [101]. As a result, many people lack the knowledge or means to effectively handle the technology needed to participate in collaborative mapping [27] or web-based planning decision support systems. This is evidenced by Young et al. [102], whose crowdsourcing effort across Africa was hindered by slow and expensive Internet connections, regular disruptions due to power outages, and participants' limited digital skills. Zia et al. [66] also found a strong correlation between literacy level and the density of OSM in Turkey. Thus, the digital divide could lead to limited mapping coverage and/or reliance on armchair mappers. Despite their efforts, armchair mappers lack local knowledge, which can have significant effects on the accuracy of the produced maps. De Leeuw et al. [67] found that participants with local knowledge (including laypeople) achieved significantly higher accuracies than those without local knowledge, including professional mappers.
Academic Challenges and Digital Colonialism
Figure 2b shows that, besides China, most of the studies were conducted by researchers outside the GS. The possible reasons for the limited number of researchers from the GS (besides China) are a lack of access to the technologies (see the section on the digital divide), limited resources for undertaking data collection campaigns, and a lack of trained experts able to process the data. For these reasons, countries of the GS are at the mercy of NGOs, funding agencies, and institutes of the Global North whose interests may not be aligned with the challenges faced by cities of the GS. This predominance of foreign entities could be an opportunity if they fully involved their local counterparts in the process. However, since foreign institutes and NGOs also provide the funding, their collaboration with local scholars usually turns into a top-down relationship where local researchers are merely used as "glorified data collectors" [103]. This hierarchical relationship affects the way research is conducted in the GS and could leave out important issues that affect the local people. For example, only a few studies among the reviewed literature address issues related to public safety (e.g., crime mapping), illegal dumping, gender-related issues, or the lack of basic facilities and services in many GS cities (e.g., good roads, adequate public transportation systems). Beyond the academic area, this also raises several questions about the possible "exploitation" of GS citizens whose efforts to contribute data to crowdsourcing campaigns may only serve the interests of foreign (usually Global North) organizations. Do the data contributed result in solutions that help the participants? Do the participants have access to the data they contributed? Do these external scholars and NGOs give an accurate and unbiased portrayal of the Global South? All these issues have raised concerns over new forms of "digital and data colonialism" [102,104]. Moreover, the growing influence of corporate editors [88] could be problematic, especially in the vulnerable areas of the GS, if the generated maps serve corporate interests rather than the local people. Digital colonialism poses challenges to citizen empowerment, data ownership, and academic excellence. If these challenges are not addressed, they could contribute to enforcing the same North-South inequalities that democratic processes such as crowdsourcing were supposed to mitigate.
It is important to clarify that we did not see any direct relationship between a particular crowdsourced method and the former colonial ruler. This is due to the fact that the crowdsourcing projects are usually launched by different Western organizations regardless of who the former colonial power was. For example, a project in Niger (a former French colony) was initiated by a German organization [105]. Another project in Mexico (a former Spanish colony) was initiated by a Belgian organization [62]. All these projects have certain characteristics in common.
• Besides China, most projects were initiated by foreign, western universities, an indication of dependence on western countries for crowdsourcing.
• Such dependence has implications in terms of data ownership, research design, and administration (as we explained in Section 6), which lead to the phenomenon of digital colonialism.
Thus, digital colonialism is not necessarily associated with a particular former colonial ruler; it is due to the presence and practices of western organizations whose control over the data may not serve the interests of the local communities.
It is, however, important to point out that digital colonialism's impact on the Global South varies depending on the region. As shown in Figure 2b, China is less dependent on foreign researchers.
Socio-Economic and Cultural Challenges
The issues raised here are related to income, age, and gender. Regarding age and income, about 25% of adults in low- and middle-income countries are unaware of mobile Internet, while for more than half of the population mobile Internet does not meet the UN's affordability target [101]. Lack of awareness and/or affordability of mobile Internet is an obstacle to people's participation in mobile crowdsourcing and a source of bias. The potential cultural issues are mostly related to gender. Gender issues in the GS include, for example, the assumption in many cultures that women lack the ability to provide useful information in mapping projects [106]. Furthermore, women from low- and middle-income countries are 20% less likely than men to use mobile Internet. Since one of the main objectives of crowdsourcing is to democratize the planning process and empower the public, all stakeholders need to actively and freely participate regardless of gender. In addition, crowdsourcing could help address many of the problems (primarily) faced by women, such as sexual harassment. One example of such a case in the GS is HarassMap, a platform for reporting sexual harassment in Egypt. Although such platforms could help raise awareness and encourage the authorities to address this issue, the authors also pointed out the lack of female participation as a major challenge [30]. Another cultural challenge in the GS is the expectation of financial reward in exchange for participation, even when the project has clear value for the community [102]. This could seriously limit the number of participants, as many citizen engagement projects do not offer any financial reward.
Administrative Challenges
Public participation should integrate the input of all stakeholders into the planning process. In this regard, citizens should not be simple providers of census data or travel diaries, nor should their role be limited to a simple consultation prior to decision making. Instead, they should be involved throughout the decision-making process. However, the GS is mostly characterized by top-down governance systems, which give little to no room for full citizen participation [90]. This is also evidenced in the literature corpus analyzed in this study, where only a few studies provide platforms for citizen participation in problem-solving or decision-making.
The existence of top-down governance systems also makes it difficult to break away from traditional authoritative data collection methods. Many projects often seek approval from local authorities before implementation [102]. It is important for the public to know that the data/information they collect and/or share will remain accessible to them, or that it will result in solutions, technologies, or policies that help them, not exploit them. A failure to meet these requirements could further reinforce digital colonialism and affect the public's motivation to participate in any crowdsourcing initiative [102]. In the GS, some scholars address this issue by looking for ways to help the public directly appropriate their data and/or participate in the interpretation of the results. For example, in Niamey (Niger, Africa), participants and their families were also involved in the analysis and interpretation of the crowdsourced photos [105]. In China, Li et al. [55] stated that data shared by the public through their collaborative environmental sensing network (CESN) would be publicly available, which could increase the number of participants. Unlike Strava Metro, which is a commercial platform, OSM data is available to the public. Thus, participants in OSM projects can directly access the fruit of their labor and use it for future endeavors. This makes OSM more suitable for the GS than other commercial services. Experts should also use the data contributed by the public to develop solutions that will benefit them. In Kenya, Williams et al. [107] used the data collected through the Digital Matatus project to help local experts build a mobile crowdsourcing application (Ma3Route), which shares real-time traffic data with users [108]. In Morocco, El Alaoui El Abdallaoui et al. [109] designed an air quality decision support system that uses collaborative environmental sensing data to recommend the least polluted route and display information pertaining to public health. Finally, it is important to get participants more involved in the design of the methodologies. This could help design methods that are more suitable for the participants and ensure that the projects' goals are in line with community priorities [63]. An example is provided in Mexico, where participants and experts co-designed the crowdsourcing experiment for SenseCityVity [62].
For Governments and Research Institutes
In addition to the public, digital colonialism also affects local scholars. Crowdsourcing could be an opportunity for strong North-South research collaboration if the challenges related to digital and data colonialism are addressed. To do so, a new framework that integrates equal inputs from both sides is needed. More concretely, local researchers should not be "exploited" for data collection. Instead, they should fully participate in the definition of the objectives and methodology so that the research is in line with the needs of the GS and does not repeat the same North-South inequalities that characterize digital and data colonialism.
This review has shown that there is an urgent need for more research in many areas. The Caribbean, Central America, the Pacific Islands, the Middle East, and Sub-Saharan Africa are barely covered, with the research largely concentrated in China and Brazil. This opens an opportunity for more research, and perhaps more insight into the urban dynamics of these areas of the world, which remain largely unexplored. Furthermore, as discussed in Section 4, several topics have not been widely investigated despite their importance for the GS and the potential benefits of public engagement in these areas. This shows the tremendous potential that is yet to be fulfilled in the application of crowdsourcing for urban studies in the GS.
For governments, it is important to include public participation in all aspects of their programs to raise awareness and encourage citizen engagement. A good example in the GS is Brazil, which encourages public engagement via legislation [70]. This seems to have a major effect, as a large part of the platforms for public engagement were found in Brazil [70,71,77,78,110].
Finally, authorities, as well as researchers, should adopt more affordable methods such as open-source software (which allows access and replication of the methods) [111], as well as accessible and easy-to-use platforms such as Ushahidi (https://www.ushahidi.com/about/pricing, accessed on 9 May 2022). Since most crowdsourcing projects are not sustained due to a lack of resources, these solutions could be useful to the GS.
Solutions to Socio-Economic and Cultural Challenges
Sample representativeness is a challenge in all crowdsourced projects. The problem is more acute in the GS, where some segments of the population are excluded due to local customs (see Section 6.1.3). Including all members of society helps mitigate sample bias and provides a more accurate analysis of the problem under study, which in turn helps implement more suitable policies. A change of mentality (especially regarding women) is required. Another way to include women is to adapt the public engagement process to their temporal and spatial constraints, for example, by allowing them to participate with their children [63].
Other solutions to socio-economic and cultural barriers include using emojis, pictures, checkmarks, and voice messages to reach the illiterate [63,69,111]. An interesting example in the GS is the adoption of the interactive voice response (IVR) system in South Africa [69]. The IVR is a crowdsourcing system that allows users to report public safety concerns by directly recording and sharing a voice message, without having to write or deal with a web interface. Furthermore, the IVR system can be made available through a toll-free number (accessible to the poor) and be adapted to local languages.
Finally, the use of mixed methods is also important. This includes combining crowdsourcing with traditional methods to obtain more representative samples. For example, in areas with low literacy levels, the project could involve calling, chatting on social media platforms (WhatsApp, for example), and having personal meetings with the participants in order to explain in detail the basics of the participation process [102], or combining the crowdsourced data with census data.
Conclusions
Although crowdsourcing has several advantages, it is important for planners to have a clear understanding of the target population [15]. This could help anticipate some of the challenges that could affect the quality of the crowdsourced data. Given the unique characteristics of the GS, it was thus crucial to conduct a review of the crowdsourcing methods adopted in this region of the world and highlight their potential benefits and challenges, so as to provide some suggestions for future research.
To achieve this goal, we reviewed 78 English-written journal articles focusing on voluntary participation in crowdsourcing in the GS. The reviewed articles were mainly contributed by researchers affiliated with Chinese institutes or with institutes outside the GS. Among the crowdsourcing methods, collaborative mapping (through OSM) was most widely adopted, while the studies covered a variety of areas, including urban transportation, event detection and crisis management, urban tourism, urban health, environmental monitoring, gender, etc. Based on the descriptive statistics of the reviewed papers and the characteristics of the GS, we discussed the potential administrative, academic, technological, socio-economic, and cultural challenges that could affect the successful adoption of crowdsourcing in the GS. Solutions to these challenges were provided as suggestions for future implementations, including new collaboration frameworks with foreign experts so as to avoid digital colonialism, the inclusion of all segments of the population (especially women), the use of more accessible platforms to foster public participation in urban planning (e.g., Ushahidi), the development of methods that are more in line with the needs and characteristics of the GS, etc.
Overall, this study has demonstrated that, even though crowdsourcing has been heralded as a means for less developed countries to gather large urban data at minimal cost and foster citizen empowerment and awareness through public platforms, several challenges still need to be addressed. The needed datasets and/or platforms include VGI datasets (e.g., OSM) to complement remote sensing datasets for investigating the challenges of the GS (e.g., informal settlements, lack of data for disaster response, crime, lack of clean water), citizen sensing data for a better understanding of mobility patterns (e.g., GPS) and for environmental monitoring (noise, temperature, etc.), and recommendations/solutions for better planning practices.
This study is, however, not without limitations. The inclusion of China in the literature could be misleading, as a large portion of the reviewed papers is from this country. This might give the impression that the GS, in general, has produced a large part of the studies on this topic. The main reason for including China was the fact that, despite its rapid growth in the last four decades, the country still displays some characteristics of the GS [112]. Thus, its inclusion could help compare it with the rest of the GS and stress the progress that is to be made in order to perhaps reach the same standards in terms of innovation and academic achievements. Furthermore, although a careful and well-justified literature search was adopted, the small number of studies from French-speaking African countries and Latin America could be due to the fact that only articles written in English were included in the review. Given the large number of French-speaking countries in Africa and Spanish-speaking countries in Latin America, some valuable contributions could have been left out. English is, however, the language adopted in most studies and literature reviews [113], and we believe that the reviewed articles offer a clear overview of the crowdsourcing methods adopted for urban planning in the GS, as well as the associated challenges. Our suggestions are expected to encourage more research in this area.
Figure 1. Literature search based on the PRISMA framework.
Figure 2. Descriptive statistics. (a): Number of articles by year of publication; (b): Number of articles by affiliation; (c): Number of case studies for each country of the Global South; (d): Number of case studies for each area of the Global South and China.
6.2. Suggestions for Future Implementations
6.2.1. Data Ownership and Benefits for the Public: A Solution to Digital Colonialism and Low Participants' Motivation
Table 1. List of possible keywords for each theme.
Table 2. Top source titles (with at least two articles).
Table 3. Main research areas.
Research Areas | Key Aspects | Number of Papers
Urban morphology | Land use, urban landscape, effects of urban forms on physical activities, housing & urban development (neighborhood infrastructure planning, housing schemes, urban development control), resource management, smart city transformation in the Global South. | 2022-09-16T15:22:56.430Z | 2022-09-13T00:00:00.000 | {
"year": 2022,
"sha1": "da990a6feddc227dda81a2e50567542c5fc29586",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/14/18/11461/pdf?version=1663075522",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3677a9e891733bf83a3668e16fb5d1be33babe3e",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": []
} |
253233876 | pes2o/s2orc | v3-fos-license | Enhancing the Performance of Piezoelectric Wind Energy Harvester Using Curve‐Shaped Attachments on the Bluff Body
Abstract This paper presents a piezoelectric wind energy harvester that operates by a galloping mechanism with different shaped attachments attached to a bluff body. A comparison is made between harvesters that consist of different shaped attachments on a bluff body; these include triangular, circular, square, Y-shaped, and curve-shaped attachments. Simulation of the pressure field and the velocity field variation around the different shaped bluff bodies is performed, and it is found that a high pressure difference creates a high lift force on the bluff body with curve-shaped attachments. A theoretical model based on a galloping mechanism is presented, which is verified by experiments. It is observed that the proposed harvester with curve-shaped attachments provides the best performance, giving the highest voltage and power output compared to the other shaped harvesters examined in this study. This paper provides a new concept for improving the power performance of piezoelectric wind energy harvesters with modifications made on the bluff body.
Introduction
Small-scale energy harvesting techniques are promising for the supply of continuous energy to wireless sensors and low-power portable electronic devices. [1][2][3][4] Powering these devices using traditional chemical batteries has limitations, as batteries have a limited lifespan and can be difficult to replace or manage. Piezoelectric, [5] triboelectric, [6] and electromagnetic [7] approaches to energy harvesting have been examined to provide the electrical energy for small electronic devices by utilizing the mechanical vibration of the system. Flow-induced energy harvesting has gained enormous popularity in microelectromechanical systems (MEMS) for a variety of industrial, biomedical and agricultural monitoring applications. [8][9][10] Piezoelectric energy harvesters based on vibration due to wind energy are an important approach to energy harvesting because of their simple structure, without the need for any rotating components; they can also provide a high output in terms of voltage. [11][12][13] Wind-induced vibration involving the piezoelectric effect can be achieved by a variety of mechanisms, including vortex-induced vibration (VIV), [14][15][16] flutter, [17][18][19] and galloping. [20][21][22] Energy harvesters based on both vortex-induced vibration and galloping phenomena have been studied in order to obtain optimum output power. [23][24][25] Wind energy harvesters operating on vortex-induced vibration and galloping generally involve the construction of a simple cantilever beam with piezoelectric sheets attached onto it. A bluff body is attached at the free end of the beam in order to produce vibration when wind flows over it. When the structures are subjected to wind flows, large-amplitude oscillations at low frequency are produced. This phenomenon, known as galloping, can lead to failures in structures with asymmetric cross sections. Failures of transmission lines in cold places, due to the accumulation of snow on the wires, can lead to vibrations with high amplitude and are a good example of galloping. [26] However, galloping in energy harvesting is considered to be beneficial, as it produces large deflections of piezoelectric beams and subsequently a higher electrical output.
Harvesting wind energy from a piezoelectric cantilever beam with a bluff body attached to the free end is an effective approach for achieving galloping. A number of researchers have investigated the shape of the bluff body and proposed the
optimum shapes to obtain a high output in simple wind harvesting systems. A comparison of equilateral triangular, square, rectangular and D-shaped bluff bodies has been performed by Yang et al. [27] A square-shaped bluff body was found to perform the best for the small-scale galloping phenomenon, which was validated experimentally. Abdelkefi et al. [28] examined galloping on a cantilever beam with square, triangular and D-shaped cross-sections. A distributed-parameter model considering nonlinearities was used to study the effects of different shapes for optimum power at different electrical load resistances. A Y-shaped bluff body made of thin sheets was designed and attached to the beam of a wind energy harvester by Liu et al. [29] Several simulations and experiments were carried out by altering the angle of the blades, which confirmed that the proposed wind harvester performed better than the harvester with a square cross-section bluff body. Zhou et al. [30] developed a bluff body with a curved thin plate to enhance the performance of a harvester at low cut-in speed, and comparisons were made with the square, triangle and D-shaped harvesters. The shape of the bluff body was modified to achieve an improved output of the wind energy harvester by attaching different shaped rods at certain angles to the bluff body. Hu et al. [31] investigated the efficiency of the harvester by attaching two small rods to the main circular cylinder, and the study showed that the output voltage can be greatly influenced by adding attachments to the bluff body. A high-performance harvester with Y-shaped attachments was proposed by Wang et al. [32] and the transition from VIV to galloping was studied. A comparison of the harvester with and without attachments revealed that a simple modification of the bluff body was sufficient to achieve optimum performance at low wind speeds. Wang et al. [33] developed a harvester with spindle-like and butterfly-like cross-sections for power enhancement at low speeds. Compared to other existing galloping harvesters, this proposed harvester was able to achieve high power by coupling both the VIV and galloping phenomena.
Besides a high electrical voltage and power output, the reliability of flow energy harvesters in real applications is also essential. Gong et al. [34] presented an energy harvester based on vortex-induced swing that is capable of measuring the flow speed from the power harvested in water. A piezoelectric wind energy harvester that operates on fluctuating wind speed was developed by Xu et al., [35] where the wind was considered as a time-dependent random process. The wind speed was studied using a mean component and a fluctuating component under stochastic averaging, and the framework developed in the study can be applied in real, complex applications of highly efficient galloping energy harvesters. Zhang et al. [36] developed a rotating piezoelectric energy harvester that is capable of powering a sensor and also of detecting faults on rotating bearings.
In order to achieve optimum performance, this paper designs a harvester with curve-shaped attachments on the bluff body. A comparison of the harvester is undertaken with circular, triangular, square, and Y-shaped attachments. A theoretical model of the harvester is developed, and a variety of experiments are carried out to validate the model and undertake a parametric study of different design parameters. In Section 2, a mathematical model of the proposed wind energy harvester is discussed. The simulation of the harvester with different attachments is performed in Section 3. In Section 4, the experimental setup is described, along with the procedure involved. The experimental results obtained are then analyzed and compared among the harvesters with different shaped bluff bodies. We will see that the energy harvester with curved attachments on the bluff body has high performance, with increased aerodynamic properties and high output voltage and power.
Mathematical Model
The design of the harvester is based on the galloping phenomenon that occurs when air flows over the bluff body of the harvester. Figure 1a,b shows a piezoelectric wind energy harvester that consists of a piezoelectric beam fixed at one end, and a bluff body attached at the other end. The direction of the wind is perpendicular to the surface of the bluff body, as shown in Figure 1. Encouraged by the design of a bluff body with two circular rods at different angles, [37] we present the concept of adding different shaped attachments to the main circular bluff body to obtain efficient galloping and hence an improved electrical output. Different shaped attachments, namely circular, square, triangular, Y-shaped, and curve-shaped, are used in this study and are shown in Figure 1c. Considering Euler-Bernoulli beam theory, the piezoelectric effect, Kirchhoff's law and self-induced galloping vibration, a distributed-parameter model of the harvester can be obtained, which can be further converted into a lumped-parameter model. [12,32] The modeling of a piezoelectric wind energy harvester couples the piezoelectric effect with the fluid-structure interaction between the flow and the vibrating structure. Therefore, it is suitable to model the system as a lumped-parameter one, Figure 1b, which greatly simplifies the complexity of the system.
The structure is modeled as a single-degree-of-freedom (sdof) system with equivalent mass, stiffness and damping constants, as shown in Figure 1b. The system can be described by a set of coupled electromechanical equations, Eqs. (1) and (2), where the equivalent mass M_eff is obtained from m_1, m_2, and m_3, the masses representing the piezoelectric beam, the bluff body and the two attachments fixed to the bluff body, respectively. The attachments are fixed on the bluff body and make an angle of 2θ = 120° with each other, as shown in Figure 1c. The equivalent damping is C_eff = 2ξω_n M_eff and the equivalent stiffness is K_eff = ω_n² M_eff. From Eqs. (1) and (2), we can obtain the electromechanical coupling coefficient as θ_c = [C_p M_eff (ω_oc² − ω_sc²)]^(1/2), where the natural frequency ω_oc, defined at the open-circuit condition, and the natural frequency ω_sc, defined at the short-circuit condition, can be calculated from experimental measurements. The capacitance of the piezoelectric patch, C_p, is obtained from the manufacturer's formula.
By performing experiments and using the principles discussed above, the model parameters of the energy harvester are obtained. The masses of the cantilever beam and the bluff body were measured to be 2.54 and 2.52 g, respectively. Similarly, the mass of the curve-shaped attachments was 3.45 g. The effective mass, effective damping, and effective stiffness of the lumped-parameter model are 7.5 g, 0.0059 N s m⁻¹, and 6.8359 N m⁻¹, respectively. In addition, the electromechanical coupling coefficient θ_c is calculated to be 2.24 × 10⁻⁵ N V⁻¹. The capacitance C_p of the PZT piezoelectric patch is 1.3574 × 10⁻⁸ F. [38] In addition, the natural frequency f_oc measured at the open-circuit condition and the natural frequency f_sc measured at the short-circuit condition are 4.821 and 4.814 Hz, respectively. The damping ratio ξ is experimentally measured to be 0.013.
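To make the model concrete, the coupled equations can be integrated numerically. The sketch below assumes the standard lumped galloping-harvester form, M_eff ÿ + C_eff ẏ + K_eff y + θ_c V = F_y(t) and C_p V̇ + V/R = θ_c ẏ (Eqs. (1)-(2) are not reproduced in this excerpt), together with the cubic aerodynamic force law introduced in the following paragraphs. The sign conventions and the values of a_1 and a_3 are assumptions (the fitted coefficients appear only in Table 1 of the paper), while the remaining parameters are those quoted above.

```python
# Minimal sketch of the lumped-parameter galloping harvester. The coupled
# equations below are the standard form (assumed here); parameter values
# are those quoted in the text, while a1 and a3 are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

M, C, K = 7.5e-3, 0.0059, 6.8359          # kg, N s/m, N/m
theta, Cp, R = 2.24e-5, 1.3574e-8, 1e6    # N/V, F, ohm
rho, d, h, U = 1.225, 0.032, 0.120, 3.0   # air density, frontal dims (m), wind speed (m/s)
a1, a3 = 2.5, -70.0                       # assumed galloping coefficients

def rhs(t, x):
    y, ydot, V = x
    alpha = ydot / U                                  # quasi-static angle of attack
    Fy = 0.5 * rho * U**2 * d * h * (a1 * alpha + a3 * alpha**3)
    yddot = (Fy - C * ydot - K * y - theta * V) / M   # mechanical equation
    Vdot = (theta * ydot - V / R) / Cp                # Kirchhoff's law at the load
    return [ydot, yddot, Vdot]

sol = solve_ivp(rhs, (0.0, 30.0), [1e-4, 0.0, 0.0], max_step=1e-3)
V = sol.y[2][sol.t > 20.0]                            # steady-state window
print(f"RMS voltage ~ {np.sqrt(np.mean(V**2)):.2f} V")
```

With these placeholder coefficients the aerodynamic negative damping at U = 3 m s⁻¹ exceeds the mechanical plus electrical damping, so the model self-excites and settles on a limit cycle, mirroring the galloping behavior described in Section 4.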
The aerodynamic force F_y(t) that acts on the bluff body due to the galloping phenomenon is given as [39,40] F_y(t) = (1/2)ρU²dh C_Fy, where d and h are the frontal dimensions of the bluff body facing the direction of the wind, ρ is the density of the air and U represents the speed of the wind. C_Fy denotes the coefficient of the aerodynamic force in the y-direction. The aerodynamic force coefficient is an important parameter in the design of the energy harvester and depends on the shape of the bluff body. The galloping phenomenon that is responsible for the motion of the bluff body can be explained by considering the force acting on a body with a non-circular cross-section, as shown in Figure 2.
The force F_y acting on a bluff body is expressed in terms of the lift and drag forces [39]. Assuming a quasi-static hypothesis, the aerodynamic force acting on the oscillating body is equivalent to the force acting on a steady body, measured at an equivalent angle of attack, considering low oscillation of the body. For a body undergoing only translational vibration, without any rotational motion, the angle of attack is given as α = arctan(ẏ/U) ≈ ẏ/U. The value of C_Fy depends on the angle of attack, and an approach to obtain it is to express it in a cubic polynomial form as [39,41] C_Fy = a_1 α + a_3 α³. The empirical coefficients a_1 and a_3 are obtained by curve fitting the C_Fy versus α curve, which is measured experimentally in a static test with varying angle of attack. Den Hartog [42] explained the instability of the galloping phenomenon, which can be expressed as ∂C_L/∂α + C_D < 0, where C_L = 2L/(ρU²hd) is the lift coefficient, C_D = 2D/(ρU²hd) is the drag coefficient, and L and D are the lift and drag forces acting on the body, respectively. Equation (8) indicates that, for a specified orientation of a bluff body with small oscillation and a small change in angle of attack, the galloping instability of the body requires a negative slope of the lift coefficient. [43] If we consider the rotation effect of the bluff body, the polynomial expansion of C_Fy can be revised using y′(t) = μy(t), where μ is the coefficient that relates the transverse displacement and rotation at the free end of the cantilever beam and is given as μ = 1.5/l. A linear analysis of the coupled electromechanical equations is carried out to obtain the solution of the energy harvester model. For this, we define a state vector X and, rearranging Equations (1) and (2), obtain the governing equations of the harvester in matrix form, Eq. (11). Equation (11) is solved using MATLAB Simulink for the vibration response of the beam, along with the calculation of the output voltage across different electrical load resistances; the average power output of the harvester is then calculated from the voltage across the load. The theoretical model discussed above incorporates both the electromechanical and the aerodynamic behavior of the piezoelectric wind energy harvester system. The electromechanical model presents the coupled equations, based on the lumped-parameter model, and predicts the electrical voltage and electrical power of the energy harvester. The force that drives the wind energy harvester is defined by the aerodynamic model; the force acting on harvesters with different shaped bluff bodies differs through the value of the aerodynamic force coefficient C_Fy. The empirical coefficients a_1 and a_3 related to the aerodynamic force coefficient are calculated experimentally and listed in Table 1. The complete theoretical model presented here thus predicts the response of the piezoelectric wind energy harvester under the galloping phenomenon.
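The static-test fit for a_1 and a_3 reduces to an odd-polynomial least-squares problem. In the sketch below, the (α, C_Fy) samples are synthetic placeholders standing in for the measured static-test curve; only the fitting procedure itself follows the text.

```python
# Least-squares fit of the galloping coefficients a1, a3 from static-test
# data, C_Fy = a1*alpha + a3*alpha**3. The (alpha, C_Fy) samples are
# synthetic placeholders for the measured curve.
import numpy as np

rng = np.random.default_rng(0)
alpha = np.deg2rad(np.arange(-10, 11, 2))             # tested angles of attack, rad
CFy = 2.5 * alpha - 70.0 * alpha**3 + 0.01 * rng.standard_normal(alpha.size)

A = np.column_stack([alpha, alpha**3])                # odd powers only
(a1, a3), *_ = np.linalg.lstsq(A, CFy, rcond=None)
print(f"a1 = {a1:.2f}, a3 = {a3:.1f}")
# Den Hartog: galloping requires dC_Fy/dalpha > 0 at alpha = 0, i.e. a1 > 0,
# equivalent to a negative slope of the lift coefficient (dC_L/dalpha + C_D < 0).
print("gallops:", a1 > 0)
```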
Simulation Analysis of the Bluff Body
To demonstrate the mechanism of galloping-based wind energy harvesting using different shaped attachments, a two-dimensional model was developed. We performed simulations with the standard k-ε turbulence model in order to obtain better computational accuracy with high stability. The length and width of the computational domain used for the simulation are 100 mm and 80 mm, respectively. A free triangular mesh is adopted, with three different mesh sizes, namely 65 652 (coarse), 89 532 (medium), and 102 568 (fine) elements. When the coarse mesh is replaced by the medium mesh, the lift and drag coefficients change by nearly 12%. Similarly, when the resolution of the medium mesh is adjusted to fine, the coefficients of the lift and drag forces change by less than 2%. Thus, the medium mesh resolution is chosen for our simulation analysis.
The incoming air velocity at the inlet boundary is taken as 3 m s⁻¹ and the air is supposed to flow in the direction perpendicular to the inlet domain. The pressure is considered zero at the outlet boundary of the domain, and the top and bottom boundaries are considered to be fixed. The variation of the pressure field and the velocity field around the bluff body with different shaped attachments is shown in Figure 3. Figure 3a shows the pressure and velocity variation when the air blows past the plain cylindrical bluff body. The pressure difference between the upstream and the downstream side of the bluff body reveals that there exists a lift force on the bluff body, which creates the transverse force component that is necessary for the galloping mechanism. Recently, Liu et al. [44] performed a CFD analysis based on COMSOL on a circular bluff body with double flat plates placed ahead of the bluff body, utilizing the wake flow produced by such plates for improved harvesting performance. Figure 3b1-f1 shows the pressure field variation around the bluff body with different shaped attachments, and the velocity field variation can be observed in Figure 3b2-f2. If we observe the pressure variation in bluff bodies with different shaped attachments, it is found that the minimum pressure occurs at the downstream side of the body. The occurrence of negative pressure and a high lift force makes the system aerodynamically unstable, and this instability eventually increases the amplitude of vibration of the body. The velocity reaches its maximum at the upper and lower sides of the body and its minimum just behind the bluff body; this pattern, observed for all the different shaped attachments, signifies that the bluff body vibrates.
Experimental Studies
In order to verify the results achieved from the theoretical model, an experimental setup was developed, and a range of experiments was carried out. Figure 4 shows the experimental setup used for energy harvesting using different shaped attachments on the bluff bodies. A centrifugal air blower was used to supply the wind required to drive the harvester. The speed of the wind was measured using a digital anemometer. The required wind speed is maintained by making appropriate adjustments to the blower. A PZT-5A (SP-5A, India) lead zirconate titanate piezoelectric sheet with dimensions of 50 × 20 × 0.4 mm³ was placed at one end of a pure aluminium beam with dimensions of 200 × 25 × 0.6 mm³; this is a relatively soft ferroelectric material that has high piezoelectric activity, making it suitable for sensing and harvesting applications. A circular bluff body with a height of 120 mm and a diameter of 32 mm was made of expanded polystyrene (EPS) material with low density, and the different shaped attachments required were fabricated using a 3D printer. The material used in the attachments is polylactic acid (PLA), and their length was made equal to that of the bluff body, with a diameter of 5 mm for the circular attachments. Similarly, the sides of the triangular and square attachments were chosen to be 5 mm, and similar dimensions were considered for the Y-shaped and curve-shaped attachments. The output voltage obtained across the electric load resistance was measured using a digital oscilloscope (InfiniiVision DSO-X 3034A) with an input impedance of 10 MΩ. The experimental results obtained are analyzed and compared with the simulation results. The experimental output voltage of the harvester with curve-shaped attachments is found to be ≈25 V and the simulated output voltage is about 29 V when the wind speed is kept at 4 m s⁻¹, as shown in Figure 5a. The difference between the simulated and experimental values can be attributed to the assumptions used in the modeling of the harvester. The modeling is based on the lumped-parameter model, and a quasi-steady hypothesis is assumed in the derivation of the transverse force component with a small angle of attack. In addition, the experiments are performed in an open environment, where it is difficult to predict the wind behavior accurately. Figure 5b compares the experimental and simulated output voltage of the harvester with Y-shaped attachments. Similarly, Figure 5c-e illustrate the comparisons for square, circular and triangular attachments, respectively. If we compare the output voltage produced by the different shaped harvesters, we can conclude that the harvester with curve-shaped attachments provides the best performance, while the harvester with triangular attachments leads to the lowest output voltage and power. It can be seen that it takes a few seconds for the harvesters to produce a stable output voltage in both simulation and experiment, as seen in Figure 5. Figure 6a illustrates the variation of output voltage with wind speed produced by the harvesters with different attachments. It can be seen that the output voltage increases when the wind speed is increased for the harvesters with attachments subject to the galloping phenomenon. Galloping occurs when the wind speed is greater than the threshold value required for galloping, and the experimentally measured threshold value is ≈1.65 m s⁻¹. However, for the harvester without any attachments, the vortex-induced vibration (VIV) phenomenon occurs instead.
The maximum output voltage for the harvester with a plain circular bluff body occurs at a velocity where a lock-in region exists. In this lock-in region, the oscillating frequency is locked to the natural frequency of the harvester. Figure 6a demonstrates that the lock-in region exists at ≈1.2-1.5 m s⁻¹, where the harvester has a maximum voltage of 6 V. It can also be observed that a post-synchronization stage exists after the lock-in region, where the output voltage starts decreasing with increasing velocity. It is important to understand that the harvester operating under vortex-induced vibration will enter the lock-in region at a velocity less than that required for galloping of the harvesters with shaped attachments. However, the harvesters operating under galloping will perform better at higher velocities compared to the harvester with only a plain circular bluff body. The variation of output voltage with different load resistances is shown in Figure 6b. Experiments were carried out for five different load resistances (0.1, 0.5, 1, 2.5, and 5 MΩ), and the results show that the output voltage increases when we shift from 0.1 to 1 MΩ resistance, but it remains almost constant as the resistance is increased from 1 to 5 MΩ. The harvester with curve-shaped attachments provides the highest output voltage across the different load resistances.
The variation of the power output with wind speed and load resistance is illustrated in Figure 7a,b, respectively. The power output provided by the harvesters operating at wind speeds lower than 1.5 m s⁻¹ is low; beyond it, the power increases with increasing wind speed. The power output of the harvester with curve-shaped attachments reaches the maximum value of ≈0.105 mW while operating at a wind speed of 4 m s⁻¹. Similarly, the lowest power output is generated by the harvester with triangular attachments, and its value is 0.048 mW. If we observe the variation of the power output with the electric load resistance, it is found that the harvester with curve-shaped attachments performs best under the different load resistances. The maximum power output is obtained with the curve-shaped attachments operating at a 4 m s⁻¹ wind speed with an electrical load resistance of 0.5 MΩ, and its value is 0.46 mW, as seen in Figure 7b.
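The load sweep of Figure 7b can be reproduced with the time-domain model by rerunning it for each resistance and recording P = V_rms²/R over the steady-state portion. The sketch below reuses the parameters and the assumed a_1, a_3 of the earlier model sketch, so the numbers it prints are illustrative rather than the paper's.

```python
# Load-resistance sweep: average power P = V_rms^2 / R from the lumped
# model sketched earlier (same global parameters; a1, a3 remain assumed).
import numpy as np
from scipy.integrate import solve_ivp

def avg_power(R):
    def rhs(t, x):
        y, ydot, V = x
        alpha = ydot / U
        Fy = 0.5 * rho * U**2 * d * h * (a1 * alpha + a3 * alpha**3)
        return [ydot, (Fy - C * ydot - K * y - theta * V) / M,
                (theta * ydot - V / R) / Cp]
    sol = solve_ivp(rhs, (0.0, 30.0), [1e-4, 0.0, 0.0], max_step=1e-3)
    V = sol.y[2][sol.t > 20.0]            # discard the start-up transient
    return np.mean(V**2) / R

for R in (0.1e6, 0.5e6, 1e6, 2.5e6, 5e6):  # the five loads used in the experiments
    print(f"R = {R/1e6:.1f} Mohm: P_avg ~ {avg_power(R)*1e3:.4f} mW")
```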
The frequency-domain diagrams plotted in Figure 8 are obtained using the fast Fourier transform (FFT) method. Figure 8a,b shows the frequency of vibration of the harvester with curve-shaped attachments at wind speeds of 1.5 and 3.5 m s⁻¹, found to be 4.05 and 4.75 Hz, respectively. Similarly, the frequencies of vibration of the harvester with a cylindrical bluff body at wind speeds of 1.5 and 3.5 m s⁻¹ are found to be 4.72 and 6.06 Hz, respectively, as shown in Figure 8c,d. The plot of the frequencies of the two different harvesters, consisting of a plain cylindrical bluff body and of curve-shaped attachments on the bluff body, at different wind speeds is shown in Figure 8e. The natural frequency of vibration for both harvesters is 4.8 Hz. It can be seen that, for the harvester with curve-shaped attachments, the frequency of vibration is less than the natural frequency of the system at low wind speeds. As the wind speed increases beyond 2 m s⁻¹, the frequency of vibration is close to the natural frequency of the harvester, thus providing oscillations with higher amplitude. In the case of the harvester without attachments, the frequency of vibration is close to the natural frequency in the wind speed range of 1 to 1.5 m s⁻¹. The frequency of the harvester is locked to the natural frequency of the system in this wind speed range, thus providing a large voltage output from large-amplitude oscillations. As the wind speed is increased beyond 1.5 m s⁻¹, the oscillating frequency of the harvester also increases, as shown in Figure 8e. Figure 9 presents a bar diagram comparing the output voltage produced by the harvesters with the different shaped attachments under study. The output voltage and the output power of the piezoelectric wind energy harvester depend on the amplitude of oscillation of the bluff body attached to the beam. The shape of the bluff body greatly influences the oscillations produced, because the force acting on the bluff body depends on it. In the case of curve-shaped attachments, the force coefficient acting on the bluff body is higher, because the lift force that drives the harvester is larger compared to the other shaped harvesters. It can be observed that the curve-shaped attachments provide a high output voltage and power, whereas the triangular attachments provide a comparatively low output voltage and power.
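The FFT step is straightforward to reproduce for any measured or simulated voltage trace. In the sketch below, a synthetic 4.75 Hz signal (one of the frequencies reported above) stands in for the oscilloscope data, and the sampling rate is an assumption.

```python
# Dominant oscillation frequency via FFT, as in the paper's Figure 8
# analysis. A synthetic 4.75 Hz trace stands in for the measured voltage;
# the 1 kHz sampling rate is an assumption.
import numpy as np

fs = 1000.0
t = np.arange(0.0, 20.0, 1.0 / fs)
v = 10.0 * np.sin(2 * np.pi * 4.75 * t) \
    + 0.5 * np.random.default_rng(1).standard_normal(t.size)

spec = np.abs(np.fft.rfft(v - v.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
print(f"dominant frequency: {freqs[np.argmax(spec)]:.2f} Hz")  # ~4.75 Hz
```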
Conclusions
Wind energy harvesting approaches based on aerodynamic instabilities are widely used for providing power to wireless sensor networks and small electronic devices. Research has revealed that galloping-based harvesters are favorable for harvesting ambient wind energy due to their simple structure and ease of device fabrication. In this paper, a comparative analysis of different shaped attachments on a bluff body is performed to illustrate their effect on the output voltage and output power of a piezoelectric wind energy harvester. A theoretical model considering lumped parameters is developed, which is verified with results obtained from experiments, with good agreement. The voltage and power produced by the harvester with several different shaped attachments are compared; these include circular, triangular, square and Y-shaped attachments. The harvester with curve-shaped attachments provides an output voltage of 25 V and a power output of 0.105 mW when operating at a wind speed of 4 m s⁻¹, which is higher than the output produced by harvesters with the other shaped attachments under consideration. Similarly, the overall output produced by the harvester consisting of triangular attachments on a bluff body is the lowest, when compared with the other shaped attachments. Simulation analysis performed on the bluff bodies with different attachments shows a higher pressure difference on the bluff body with curved attachments compared to the other attachments, producing the larger vibration required for improved performance. The proposed curved attachments on a bluff body lead to improved aerodynamic efficiency with design flexibility. We have performed the experimental analysis in an open environment, which offers good potential for different applications of the harvested power, as it is not always possible to harvest wind power in a wind tunnel. | 2022-10-31T15:24:46.838Z | 2022-10-28T00:00:00.000 | {
"year": 2022,
"sha1": "fec3f42ade6fb7e5d89e881e5b622f1f5d36a09c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Wiley",
"pdf_hash": "90fb199bab04d77dab977ddc9a601cb88d11cac2",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
250279693 | pes2o/s2orc | v3-fos-license | Unambiguously Testing Positivity at Lepton Colliders
The diphoton channel at lepton colliders, $e^+e^- (\mu^+\mu^-) \to \gamma \gamma$, has a remarkable feature that the leading new physics contribution comes only from dimension-eight operators. This contribution is subject to a set of positivity bounds, derived from the fundamental principles of Quantum Field Theory, such as unitarity, locality, analyticity and Lorentz invariance. These positivity bounds are thus applicable to the most direct observable -- the diphoton cross section. This unique feature provides a clear, robust, and unambiguous test of these principles. We estimate the capability of various future lepton colliders in probing the dimension-eight operators and testing the positivity bounds in this channel. We show that positivity bounds can lift certain flat directions among the effective operators and significantly change the perspectives of a global analysis. We also discuss the positivity bounds of the $Z\gamma/ZZ$ processes which are related to the $\gamma\gamma$ ones, but are more complicated due to the massive $Z$ boson.
In this letter, we identify a specific process, e⁺e⁻ → γγ (or µ⁺µ⁻ → γγ), in which the dim-8 operators provide the leading new physics contribution, and a test of the positivity bounds can be unambiguously carried out. The measurements of this simple process at lepton colliders thus have profound implications. A confirmed violation of the positivity bounds would be more revolutionary than any particle discovery, as it would indicate a breakdown of at least one of the foundations of QFT.
The diphoton channel.- The leading new physics contributions to e⁺e⁻ → γγ appear at dim-8. This can be easily deduced in the massless tree-level limit as follows, and we postpone a more detailed discussion of the dim-6 effects to the next section. Neglecting the electron mass, the tree-level SM amplitude takes only the A(f⁺f⁻γ⁺γ⁻) helicity configuration (the superscripts denote the helicity), where f = e_{L,R} is the left- or right-handed electron. The lowest order new physics contribution to the same helicity amplitude (required to generate an interference term with the SM) is a contact interaction that has mass dimension four, which is generated by dim-8 operators. Denoting with e the electric coupling and v the Higgs vacuum expectation value (vev), the amplitude of the diphoton process can be written as in Eq. (1), where the effective parameter a (denoted as a_{L,R} later for f = e_{L,R}) depends on the dim-8 coefficients, and s, t, u are the Mandelstam variables. Here we only highlight the key features of the helicity amplitude formalism used in Eq. (1) and refer the readers to recent reviews [20][21][22] for more details. The two-component spinor |p] (|p⟩) has mass dimension 1/2 and helicity +1/2 (−1/2). The total helicities of the amplitude need to be consistent with the ones of the external particles (labelled in numerical order). This uniquely fixes the form of the dim-8 contact term, which has an overall mass dimension of four, while a positive a corresponds to a constructive interference between the SM and the dim-8 amplitudes. Positivity bounds can be derived from a twice-subtracted dispersion relation, assuming that the UV completion obeys the fundamental principles of QFT [1]. The dispersion relation connects the second s-derivative of an elastic amplitude to an integration of its discontinuity, which is positive definite. Rotating the diphoton amplitude to the elastic process eγ → eγ and taking the forward limit, we obtain Eq. (2), where M_SM ≡ 2e²[34]²/(⟨12⟩⟨32⟩)|_{t→0} is the SM amplitude in the forward limit. An important feature of the eγ → eγ process is that, in the forward limit where the positivity bound is derived, the SM amplitude is a nonzero finite constant, and one could explicitly show that (see Appendix A) M_SM = −2e². This is in contrast with the examples in Ref. [1], where the dim-4 Lagrangian is a free theory and the interference term does not exist. In other cases (such as the scattering of two fermions), the SM elastic amplitude may have a t-channel pole from the exchange of a massless particle, and the forward limit is not well defined. In such cases, additional treatments are needed to obtain meaningful positivity bounds, for instance by systematically subtracting all calculable SM contributions to the amplitude before taking the forward limit. It is also possible for the SM amplitude to have s-channel poles that may contaminate the positivity bounds of dim-8 coefficients, which again need to be systematically subtracted. These cases introduce additional steps and subtleties in understanding the implications of positivity bounds on observables. The fact that the SM eγ → eγ forward amplitude is a finite constant means that the positivity bound also uniquely fixes the relative sign of the SM and dim-8 contributions, as suggested by Eq. (2). Due to crossing, a positive a here corresponds to a destructive interference between the SM and dim-8 terms. Since M_SM = −2e² < 0, the positivity bound implies a ≥ 0.
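The sign logic can be checked symbolically. The sketch below writes the forward eγ → eγ amplitude schematically as M(s) = −2e² + 2as²/v⁴; the s² form and its normalization are assumed placeholders for the dim-8 contact term of Eq. (2), but they suffice to show that the second s-derivative is insensitive to the constant SM piece and fixes only the sign of a.

```python
# Symbolic illustration of the positivity logic. The s^2 form and
# normalization of the dim-8 piece are assumed placeholders; the constant
# is the SM forward amplitude M_SM = -2 e^2.
import sympy as sp

s, a = sp.symbols("s a", real=True)
e, v = sp.symbols("e v", positive=True)

M = -2 * e**2 + 2 * a * s**2 / v**4
d2M = sp.diff(M, s, 2)
print(d2M)  # -> 4*a/v**4, so d^2M/ds^2 >= 0 enforces a >= 0
```

The SM constant drops out of the second derivative, so positivity constrains a alone; its sign then fixes the character of the interference.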
The interference between the SM and dim-8 contributions is thus bounded to be destructive in eγ → eγ and constructive in e+e− → γγ. One could work in the amplitude basis [23,24] and directly connect Eq. (1) with the massless amplitudes of the W and B fields in the unbroken electroweak phase. Alternatively, using the basis of Ref. [25] (see also Ref. [26]), a_L and a_R are given by Eq. (4), where s_W ≡ sin θ_W, c_W ≡ cos θ_W, the c_i's are the coefficients of the five dim-8 operators Q_i defined in Ref. [25] (see Appendix A), and Λ denotes the scale of the potential new physics. These are the only relevant operators in the full dim-8 basis, not only for the diphoton process but also for the CP-even A(ē_L e_L V₁⁺V₂⁻)_{d8} and A(e_R ē_R V₁⁺V₂⁻)_{d8} amplitudes in the massless limit, where V_{1,2} = Z, γ.
Dim-6 contributions.-A dim-6 tree-level contribution to the diphoton process can be generated only by a dipole operator, which has a different fermion helicity configuration than the SM one, A(f+f−γ+γ−). The dim-6 interference term therefore does not exist [27]. At the one-loop level, several dim-6 contributions arise, but they are all strongly constrained by other measurements and, with the additional loop suppression, can be safely ignored. For instance, the operator $\mathcal{O}_{3W} = \frac{1}{3!}\,g\,\epsilon_{abc}\,W^{a\,\nu}_{\mu} W^{b}_{\nu\rho} W^{c\,\rho\mu}$ contributes to A(f+f−γ+γ−) at one loop, but it can be very well probed by the e+e− → WW process. A rough estimate with the projections from Ref. [28] (≲ 10⁻⁴ in terms of the anomalous triple-gauge couplings) suggests that its impact on the diphoton cross section is at most around δσγγ/σγγ ∼ 10⁻⁷, much smaller than the expected precision at a realistic lepton collider (see Appendix A). Similarly, the modifications of the Ze+e− (Zµ+µ−) couplings are already stringently constrained at the 10⁻⁴ (10⁻³) level even with current measurements [28], and their loop contributions to the diphoton process can be safely neglected. The one-loop contributions involving the Higgs boson are also irrelevant, since they are suppressed by the square of the electron (muon) Yukawa coupling. While the four-fermion operators involving two electron and two top-quark fields are poorly constrained, their contribution to A(f+f−γ+γ−) is forbidden by the angular momentum selection rules, since they cannot produce the J = 2 state of two photons [29]. The interference between the one-loop SM amplitude and the tree-level dipole contribution is also absent with massless electrons. Contributions with two insertions of dim-6 operators are formally indistinguishable from dim-8 operators. At the tree level, the only such contribution which is not equivalent to a contact dim-8 operator insertion comes from two insertions of the electron dipole couplings. They are strongly constrained by the electron g−2 and electric dipole moment measurements [31,32]. A rough estimate suggests that their impact on the diphoton cross section is at most $\delta\sigma_{\gamma\gamma}/\sigma_{\gamma\gamma} \sim (E/10^7\,{\rm TeV})^2$, where E is the center-of-mass energy, and can be safely ignored. Finally, we note that dim-8 operators involving Higgs fields do not contribute to A(f+f−γ+γ−) either, as the insertion of a Higgs vev effectively lowers the mass dimension of the amplitude, for which a contact term of this helicity configuration does not exist. Naively, one expects that the dim-6 contributions from new physics will be observed first in some other processes. What then is the motivation to look for dim-8 deviations in e+e− → γγ? First, testing positivity at dim-8 provides more fundamental information about the nature of new physics, namely whether it is consistent with the QFT framework, which one cannot tell from a SMEFT analysis truncated at dim-6. An observation of a dim-6 deviation elsewhere would only strengthen the motivation to test dim-8 deviations in e+e− → γγ. Second, dim-6 effects from different UV states could be suppressed due to dynamics [33], certain symmetries [18], or accidental cancellations. In contrast, constraining the positively bounded dim-8 effects could lead to unambiguous exclusion limits on all possible UV particles, as each of them contributes positively, assuming the QFT framework is valid [12].
Positivity bounds on cross sections.-The positivity bounds, a_L ≥ 0 and a_R ≥ 0, restrict the interference between the SM and dim-8 contributions to be constructive in e+e− → γγ. As such, they can be directly related to the cross section. Since the helicities of the two photons cannot be measured in practice, we work with the folded distribution of the production polar angle θ, dσ(e+e− → γγ)/d|cos θ|, given in Eq. (5), where s is the square of the center-of-mass energy, P_{e−} (P_{e+}) is the polarization of the electron (positron) beam, and c_θ ≡ |cos θ|. The a_L, a_R terms come from the interference between the SM and dim-8 operators, while the dim-8-squared contributions can be safely neglected due to the high measurement precision of this channel at lepton colliders. It is now clear that the positivity bounds a_L ≥ 0, a_R ≥ 0 have a simple consequence, namely dσ/d|cos θ| ≥ dσ_SM/d|cos θ| for any beam polarizations and any |cos θ|. We see that the e+e− → γγ channel is special in that the positivity of Wilson coefficients can be directly translated into positivity of realistic observables, without being contaminated by any other non-positive operators. The diphoton process thus provides a simple, clear, and unambiguous test of the fundamental principles of QFT.
It is also interesting to note that the LEP2 measurements show an overall signal strength of e+e− → γγ about 1.5 standard deviations below the SM expectation [34]. While statistically insignificant, this deviation is in mild tension with the positivity bound.
Collider reach.-To estimate the reach at future lepton colliders, we perform a simple binned analysis in the range |cos θ| ∈ [0, 0.95], with a bin width of 0.05, and consider only statistical uncertainties. A differential analysis in |cos θ| helps discriminate the dim-8 contribution from the SM one, as the latter dominates the forward region due to the t/u-channel electron exchange. We expect the largest background to be Bhabha scattering (e+e− → e+e−). Assuming a sufficiently small rate (≲ 1%) for an electron to be misidentified as a photon, this background is more than two orders of magnitude smaller than the signal (see Appendix A). The cut on the production polar angle (|cos θ| < 0.95) is also very effective in removing the beamstrahlung and ISR effects. We note that the reach on Λ is only mildly sensitive to the measurement uncertainties (∼ ∆^{−1/4}) due to the 1/Λ⁴ dependence of the dim-8 contribution, so our analysis gives a reasonable projection as long as the systematic uncertainties are not overwhelmingly large. As a validation, we apply it to the LEP2 run scenarios and find very good agreement with the result of Ref. [34] (with a 10% difference in the reach on Λ).
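For orientation, the sketch below reproduces the binned setup just described with the tree-level QED shape of the folded SM distribution, $d\sigma_{\rm SM}/d|\cos\theta| = (2\pi\alpha^2/s)\,(1+c_\theta^2)/(1-c_\theta^2)$; the collider inputs (√s = 240 GeV, 5 ab⁻¹) are illustrative placeholders rather than the letter's exact run scenarios, and the dim-8 signal shape of Eq. (5) is not included:

```python
import numpy as np

alpha = 1.0 / 137.036           # fine-structure constant
s = 240.0**2                    # GeV^2 (illustrative)
GEV2_TO_PB = 0.3894e9           # 1 GeV^-2 in pb
lumi_pb = 5.0e6                 # 5 ab^-1 in pb^-1 (illustrative)

def dsigma_dc(c):
    """Folded tree-level SM distribution dsigma/d|cos(theta)| in pb."""
    return 2 * np.pi * alpha**2 / s * (1 + c**2) / (1 - c**2) * GEV2_TO_PB

edges = np.arange(0.0, 0.95 + 1e-9, 0.05)   # 19 bins of width 0.05
for lo, hi in zip(edges[:-1], edges[1:]):
    c = np.linspace(lo, hi, 200)
    n_events = np.trapz(dsigma_dc(c), c) * lumi_pb
    print(f"|cos| in [{lo:.2f},{hi:.2f}]: {n_events:10.0f} events, "
          f"rel. stat. unc. {1/np.sqrt(n_events):.2%}")
```

The per-bin counts grow rapidly toward |cos θ| → 0.95, illustrating the forward dominance from the t/u-channel electron exchange noted above.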
To illustrate the interplay between the measurements and the positivity bounds, we show the ∆χ² = 1 contours in Figure 1 for the collider scenarios CEPC/FCC-ee 240 GeV [35][36][37] and ILC 250 GeV [38]. According to Eq. (5), if the beams are unpolarized (P_{e−} = P_{e+} = 0), only the combination a_L + a_R is probed, leaving a flat direction along a_L = −a_R, as shown by the diagonal band (indicating ∆χ² ≤ 1) for CEPC/FCC-ee. This flat direction can be lifted by having multiple runs with different beam polarizations, as for example at the ILC. Clearly, beam polarization is desirable, because it allows the signs of a_L and a_R (or the two polarized cross sections) to be tested individually.
From a different perspective, assuming the UV completion is consistent with the QFT principles, which imply a_L, a_R ≥ 0, the two parameters can be simultaneously constrained even without beam polarization, as illustrated in Figure 1. This is a general feature that also applies to many other processes, such as fermion scattering [12] or Higgs production [39]. Positivity thus provides important information for future global SMEFT analyses, complementary to the experimental inputs.
High-energy lepton colliders can probe these operators even further. The precision reach on the parameter a scales with the energy E and luminosity L as ∆a ∼ E⁻³L^{−1/2}, as the energy dependence of the dim-8 contribution gives an E⁻⁴ dependence, and the measurement uncertainties are proportional to (σ_SM · L)^{−1/2} ∼ E · L^{−1/2}. Since a ∼ 1/Λ⁴, the reach on Λ thus scales as Λ₁/Λ₂ = (E₁/E₂)^{3/4} (L₁/L₂)^{1/8}, assuming all other variables are the same for the two scenarios 1 and 2.
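The scaling of Eq. (7) is straightforward to evaluate; the snippet below (the energies and luminosities are hypothetical inputs, not the letter's run scenarios) compares the reach of two machines:

```python
# Reach scaling Lambda_1/Lambda_2 = (E1/E2)^(3/4) * (L1/L2)^(1/8),
# which follows from Delta a ~ E^-3 L^-1/2 together with a ~ 1/Lambda^4.
def reach_ratio(E1, L1, E2, L2):
    """Ratio of the new-physics scales probed by two collider scenarios."""
    return (E1 / E2) ** 0.75 * (L1 / L2) ** 0.125

# Hypothetical example: 3 TeV with 1 /ab vs. 240 GeV with 5 /ab.
print(reach_ratio(E1=3000.0, L1=1.0, E2=240.0, L2=5.0))   # ~ 5.4
```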
In Figure 2, we show the 95% CL reach for Λ₈ ≡ v/a^{1/4} for various collider scenarios, where a = a_L, a_R is defined in Eq. (1). Λ₈ corresponds directly to the scale of new physics which modifies the e+e− → γγ amplitudes. The band covers integrated luminosities of 1 to 5 ab⁻¹ and various beam polarization scenarios, and is consistent with Eq. (7). We also show the best reach for each collider scenario listed in Appendix A from any linear combination of a_L and a_R. For linear colliders, a_L and a_R can be independently constrained, and the corresponding Λ_L and Λ_R are also shown. Similar analyses can be carried out for muon colliders, which probe operators associated with muon fields. Constraints on the muon dipole moments [32] are significantly weaker than those on the electron ones. We find that two insertions of the electric dipole operator could generate a deviation in the µ+µ− → γγ cross section comparable to the expected precision reach. However, future improvements on the muon electric dipole moment measurement could make this contribution irrelevant ($\delta\sigma_{\gamma\gamma}/\sigma_{\gamma\gamma} \sim (E/10^5\,{\rm TeV})^2$) [40,41]. In contrast, the current muon g−2 measurements [42][43][44] sufficiently constrain the magnetic dipole operator, so that the latter can be safely ignored for the diphoton measurement ($\delta\sigma_{\gamma\gamma}/\sigma_{\gamma\gamma} \sim (E/10^5\,{\rm TeV})^2$), independent of whether the apparent discrepancy with the SM is confirmed.
Interplay with Zγ and ZZ measurements.-The same operators that enter Eq. (4) also contribute to the Zγ and ZZ processes. These processes are, however, more complicated due to the massive Z boson, which enables contributions to multiple helicity states from both the SM and dim-8 operators (including those responsible for neutral triple-gauge-boson couplings [45][46][47][48]). Dim-6 operators could also contribute at the tree level via modifications of the Ze+e− couplings. At very high energies (√s ≫ m_Z), the Z boson is effectively massless, and the +− final helicity states dominate the Zγ and ZZ cross sections [7]. In this limit, the ZZ process also exhibits a similar positivity bound, $a^{ZZ}_{L,R} \ge 0$. For the Zγ process, we focus on the CP-even elastic amplitude A(eV → eV), where V is an arbitrarily mixed state of γ and Z. Requiring positivity for all such mixtures gives $(a^{Z\gamma}_{L,R})^2 \le a^{\gamma\gamma}_{L,R}\,a^{ZZ}_{L,R}$, where $a^{Z\gamma}_{L,R}$ ($a^{ZZ}_{L,R}$) is defined as in Eq. (1) with γ+γ− replaced by Z+γ− (Z+Z−), together with $a_{L,R} \to a^{\gamma\gamma}_{L,R}$ to distinguish them. This implies a simple relation among the Zγ, γγ and ZZ cross sections for any fixed collider scenario (again only in the √s ≫ m_Z limit), schematically $(\Delta\sigma_{Z\gamma})^2 \lesssim \Delta\sigma_{\gamma\gamma}\,\Delta\sigma_{ZZ}$, where ∆σ ≡ dσ/d|cos θ| − dσ_SM/d|cos θ|. We note here again that a proper treatment of the Zγ and ZZ processes requires the inclusion of all helicity states of the gauge bosons. The decay of the Z boson also provides new observables sensitive to the interference of different Z helicity states [48][49][50]. The mapping between positivity bounds and observables in the Zγ/ZZ processes is generally more complicated, and we leave such an analysis to future studies. By contrast, the positivity bound of the diphoton process is simple and unambiguous, as we emphasized above.
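The mixed-state argument above is simply positivity of a quadratic form; the following sympy check (a generic mathematical illustration, not code from the letter) verifies that $a\,x^2 + 2b\,xy + c\,y^2 \ge 0$ for all real x, y is equivalent to a ≥ 0, c ≥ 0 and b² ≤ ac, which is the origin of the Zγ condition quoted above:

```python
import sympy as sp

a, c = sp.symbols('a c', nonnegative=True)
b, x, y = sp.symbols('b x y', real=True)

Q = a * x**2 + 2 * b * x * y + c * y**2   # generic elastic-amplitude quadratic form

# Completing the square (for a > 0): Q = a*(x + b*y/a)**2 + (c - b**2/a)*y**2,
# so Q >= 0 for all x, y iff a >= 0, c >= 0 and b**2 <= a*c.
completed = a * (x + b * y / a)**2 + (c - b**2 / a) * y**2
print(sp.simplify(Q - completed))          # 0 -> the decomposition is exact

# Spot check: b**2 > a*c allows Q < 0 at a suitable (x, y).
print(Q.subs({a: 1, c: 1, b: 2, x: -2, y: 1}))   # -3
```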
Violation of positivity.-The observation of dσ/d|cos θ| < dσ_SM/d|cos θ| does not necessarily establish the violation of positivity bounds. It is important to check whether the EFT description itself is invalid, for instance due to contributions from new light particles. In this process, however, it is difficult for such particles to generate a sizable destructive interference term while evading current and future search constraints. A t-channel fermion exchange only generates a constructive interference. Another possibility is the s-channel exchange of a light composite spin-2 particle, which could be very well probed by the resonance searches e+e− → Xγ/XZ, X → γγ/e+e− [51]. Measuring the diphoton process at multiple center-of-mass energies also helps probe or exclude these light-particle contributions and further verifies that the observed deviations are generated by dim-8 operators. Once all other possibilities are excluded, the result would indicate a breakdown of the fundamental principles of QFT. Interestingly, a recent study [52] shows with an explicit example that order-one violations of positivity bounds could be generated if the Lorentz symmetry is spontaneously broken.
Summary and outlook.-Positivity bounds require that the diphoton cross section at lepton colliders be no smaller than the SM prediction, which offers a rare opportunity to clearly and unambiguously test the fundamental principles of QFT. While high-energy colliders provide the best reach, such probes are robust even for a collider at around 240-250 GeV, a feature that is unique to the diphoton process. In addition, imposing these bounds could lift the flat directions among operators, indicating that positivity could provide important information for future global analyses with dim-8 operators.
Hadron colliders, such as the LHC or a future 100-TeV collider, have a large center-of-mass energy and could potentially provide powerful probes of the dim-8 operators and their associated positivity bounds [7,8,14]. In particular, a similar process with quarks, qq̄ → γγ (or qq̄ → Zγ/ZZ [7]), is already probed at the LHC with a larger center-of-mass energy than the ones of most future lepton colliders. However, these measurements usually suffer from low measurement precision, which makes the EFT interpretation problematic, and a consistent EFT treatment often results in much reduced sensitivities to the new physics scale [53,54]. This is particularly important for probing the positivity bounds, for which the contributions of dim-10 operators, not subject to the same bounds, are a potential source of contamination. On the other hand, a potential future high-energy photon collider [55] could measure the reverse process γγ → f f̄ for different fermion final states, and probe a wider range of operators and their associated positivity bounds. We leave the detailed analyses of these colliders to future studies. SM e−γ → e−γ in the forward limit: We perform an explicit calculation of the amplitude M(e−γ → e−γ) in the SM in the forward limit with massless electrons. Our calculation follows closely Sec. 5.5 of Ref. [56]. The general amplitude is given by the sum of the s- and u-channel diagrams, $i\mathcal{M} = -ie^2\,\epsilon^*_\mu(k')\,\epsilon_\nu(k)\,\bar{u}(p')\big[\gamma^\mu(\slashed{p}+\slashed{k})\gamma^\nu/(p+k)^2 + \gamma^\nu(\slashed{p}-\slashed{k}')\gamma^\mu/(p-k')^2\big]u(p)$ (A1),
where p, k, p′ and k′ are the momenta of the incoming e−, γ and the outgoing e−, γ, respectively, as shown in Figure 3. We assume that both e− and γ have + helicity (right-handed); it can be shown that the results are the same for the other helicity configurations. To calculate the amplitude in the forward limit, it is most convenient to choose a particular reference frame and a basis for the spinors. We choose the initial (and final) momenta to be along the z-axis, as shown in Figure 4. The Dirac matrices are given in the Weyl representation by $\gamma^\mu = \begin{pmatrix} 0 & \sigma^\mu \\ \bar{\sigma}^\mu & 0 \end{pmatrix}$ (A2). Since we have chosen e− and γ to be right-handed, the Dirac spinor takes the form $u(p) = \sqrt{2E}\,(0,\xi)^T$ with $\xi = (1,0)^T$, following the spinor choice of Ref. [56] (Eqs. (A3)-(A4)); the photon polarization vectors are the standard circular ones, $\epsilon^\mu_\pm = (0,1,\pm i,0)/\sqrt{2}$ up to the appropriate direction of motion (Eq. (A5)). Plugging everything into Eq. (A1), we obtain $\mathcal{M}_{\rm SM} = -2e^2$ in the forward limit, as quoted in the main text. Operators: Eq. (A8) lists the five dimension-8 operators mentioned in the main text, as defined in Ref. [25]. The lepton flavor indices are omitted as they are not relevant for our study. The Lagrangian is written in the form $\mathcal{L} = \sum_i (c_i/\Lambda^4)\,Q_i$. Differential cross section: Figure 5 shows the differential cross section dσ/d|cos θ| for the SM and the dim-8 contributions for √s = 240 GeV with unpolarized beams. The SM contribution dominates in the forward region due to the t/u-channel electron exchange.
Run scenarios: Table I lists the run scenarios of the lepton colliders considered. For ILC and CLIC, the numbers in the brackets are the values of the beam polarizations P(e−, e+). For simplicity, we assume the Z-pole or WW-threshold runs are at a single energy. Numbers are taken from Refs. [28,57]. Further details can be found in Refs. [35-38, 58, 59].
Measurement uncertainties:
We have only considered statistical uncertainties in our analysis. Here we provide further verification of this assumption. Figure 6 shows the total cross sections of the diphoton process and of the main background from Bhabha scattering (e+e− → e+e−); the latter contributes to the diphoton channel if both final-state particles are mistagged as photons. Even with a conservative 1% mistag rate for both electrons and positrons, this background is more than two orders of magnitude smaller than the signal. This is consistent with the LEP analysis in e.g. Ref. [60], which stated that the contamination from the major background, Bhabha events, was estimated to be less than 0.5% after selection cuts. Another potential source of background is double hard FSR, e+e− → e+e−γγ, where the two photons take most of the energy of the scattered electrons. Applying $m^2_{\gamma\gamma} \ge 0.9\,s$ on the invariant mass of the two photons, along with other reasonable cuts, we estimate the cross section of this process to be 6-7 orders of magnitude lower than the signal process. The statistical uncertainties of the total diphoton cross section measurements are provided for each collider scenario in Table II for reference. Table II: The projected (relative) statistical uncertainties ∆σtot/σtot of the total cross section measurement of e+e− → γγ with the run scenarios in Table I. A cut on the production polar angle |cos θ| < 0.95 is applied. | 2020-11-09T02:00:12.691Z | 2020-11-05T00:00:00.000 | {
"year": 2020,
"sha1": "de83d6b612d4e3fff9c01c4beedcacabd24997a6",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevLett.129.011805",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "de83d6b612d4e3fff9c01c4beedcacabd24997a6",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
256157560 | pes2o/s2orc | v3-fos-license | Exploration of Analgesic, Antiinflammatory, Laxative, and Anthelmintic Activities of Hygrophila phlomoides
To identify novel bioactivities, we studied the analgesic, antiinflammatory, laxative, and anthelmintic properties of Hygrophila phlomoides (H. phlomoides) crude extract. The phytochemical study of the H. phlomoides extract revealed the existence of numerous secondary metabolites, such as reducing sugars, phenolic compounds, flavonoids, tannins, proteins, alkaloids, glycosides, saponins, steroids, terpenoids, and acidic compounds, which might play a role in its medicinal properties. The extract of H. phlomoides at 250 and 500 mg/kg exhibited significant writhing inhibition of 35.82% and 58.96%, respectively. The H. phlomoides extract showed significant antiinflammatory activity at 250 and 500 mg/kg in formaldehyde-induced paw edema for up to 4 h. The experimental data showed that the H. phlomoides extract at 250 and 500 mg/kg significantly increased stool production by 61.58% and 77.03%, respectively. In the anthelmintic study, the H. phlomoides extract paralyzed and killed the parasites dose-dependently. The outcomes of this study indicated that the ethanol extract of H. phlomoides aerial parts possesses analgesic, antiinflammatory, laxative, and anthelmintic properties.
INTRODUCTION
Nature has blessed us with a plethora of medicinal plants. The use of medicinal plants to treat diseases is as old as human civilization. The majority of medications are still derived from plants. People in third-world countries frequently rely on medicinal plants for disease cures due to the high cost of treatment. In developing countries, up to 80% of the population still uses herbal medicines for primary health care 1 . Hygrophila phlomoides (H. phlomoides), belonging to the family Acanthaceae, is about 1 m tall and erect. The leaf blade is elliptic, obovate, or oblong, and the flowers are axillary, several clustered or in whorls upward. The plant is native to Cambodia, India, Bangladesh, Indonesia, Laos, Myanmar, Pakistan, the Philippines, Thailand, and Vietnam 2, 3 . To date, no chemical compound or biological activity has been reported from the species. However, the genus has been identified with the presence of alkaloids, steroids, tannins, proteins, flavonoids, carbohydrates, fats, oils, glycosides, and phenolic compounds 4 . The genus has also been found to have analgesic, antiinflammatory, laxative, and anthelmintic potentialities 5,6 . Strobilanthes anamallaica belongs to the same family and has been reported with laxative property 7 . Based on the literature review, H.
phlomoides was selected to study the analgesic, antiinflammatory, laxative, and anthelmintic activities.
Pain is regarded as a distressing sensory and emotional experience connected to actual or potential tissue injury. An analgesic is a substance that, acting either centrally or peripherally on the sensory nervous system, lessens or removes pain without appreciably affecting awareness. Non-steroidal antiinflammatory drugs (NSAIDs), opioids, and corticosteroids, which are frequently used in contemporary medicine to decrease pain and inflammation, only offer symptomatic relief. Additionally, the use of these medicines is linked to major side effects 8 .
The local reaction of living mammalian tissues to harm caused by any substance is known as inflammation. The complex array of enzyme activation, mediator release, fluid extravasation, cell migration, tissue disintegration, and repair that the inflammatory response entails is triggered in the majority of disease conditions and is directed at the host's defense 9 . Synthetic medications, such as NSAIDs, opioids, and corticosteroids, are clinically the most significant medications used to treat inflammatory disorders. However, prolonged use of these medications may result in toxic side effects, such as gastrointestinal ulcers, bleeding, renal disorders, and other issues 10 . Even though synthetic medications currently rule the market, the possibility of some degree of harm still exists. Moreover, their prolonged use may cause severe adverse effects 11 . Laxatives, such as stimulants, lubricants, and saline laxatives, are used to help the colon empty for rectal and bowel examinations. Diarrhea may result from taking laxatives in sufficiently high doses 12 . Currently, lifestyle changes and dietary fiber consumption are utilized as traditional mainstays to manage constipation together with medicine. However, none of these choices result in satisfying answers [13][14][15] .
Human helminth infections are among the most common infections, affecting a huge portion of the global population. Even though the majority of helminth infections are often only found in tropical areas, they pose a serious health risk and increase the risk of malnutrition, anemia, eosinophilia, and pneumonia 16 . The population in endemic areas is most affected by the severe morbidity caused by parasitic diseases 17 . The treatment of helminth disorders faces a major challenge because gastrointestinal helminths have become resistant to the anthelmintic medications that are currently on the market 18 . Therefore, based on the literature on the plant and the need for new treatments, we sought to perform a phytochemical analysis and investigate the plant's analgesic, antiinflammatory, laxative, and anthelmintic properties.
Plant Collection and Crude Extract Preparation
The aerial part of H. phlomoides was collected from the Khulna University campus. The collection took place in January 2018 during the daytime. Any form of adulteration was strictly avoided during collection. Experts at the Bangladesh National Herbarium in Mirpur, Dhaka, identified the plant, and a voucher specimen (45951 DACB) was deposited there for future reference.
The desired aerial parts were freed of undesirable elements, foreign plants, and plant fragments. The plant material underwent shade drying to prevent the breakdown of the active components. The dried aerial part was ground into a coarse powder and stored in an airtight container in a cool, dark, and dry environment. H. phlomoides powder weighing 100 g was placed in a clean, flat-bottomed glass container and allowed to soak in 600 ml of ethanol. The container and its contents were sealed and kept for 15 days while being occasionally shaken and stirred. After that, a piece of clean cloth was used to perform a coarse filtration on the entire mixture. The filtrate was obtained and evaporated after being passed through filter paper. This produced a paste concentrate that was greenish-black in color (crude extract yield 10.12%).
Phytochemical Tests
Identification of the types of chemicals present in the crude extract is crucial to assess the extract's pharmacological activity. Standard techniques were used to identify the chemical components of the plant extract [19][20][21] .
Animals
The experiment was conducted using young Swiss albino mice (both male and female), aged 4-5 weeks, with an average weight of 28-35 g. The mice were purchased from the International Centre for Diarrhoeal Disease Research, Bangladesh (ICDDR,B). After purchase, they were acclimatized for a week under standard conditions in the animal house of the pharmacy department at Khulna University, Bangladesh. The animals were kept under a normal day-night cycle and fed regular laboratory food and water. All of the tests were carried out in a quiet, secluded environment. Live parasites (nematodes) were procured from recently butchered calves at nearby abattoirs for the anthelmintic test. The parasites were cleaned and then stored at 37±1 °C in 0.9% phosphate-buffered saline (PBS), prepared with 8.01 g of sodium chloride, 0.20 g of potassium chloride, 1.78 g of sodium biphosphate, and 0.27 g of potassium biphosphate in 1 litre of distilled water.
Analgesic activity
Test samples (H. phlomoides extract) at 250 and 500 mg/kg, a positive control (diclofenac sodium), and a negative control (1% tween-80 in water) were administered orally. A 30-minute interval was allowed for the administered substances to be properly absorbed. Then, acetic acid solution (0.7%) was injected intraperitoneally into each mouse in a group. After a 5-minute interval for acetic acid absorption, the number of writhes was recorded for 15 minutes 22 .
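For reference, the percent writhing inhibition reported in the Results follows the standard formula (1 − W_test/W_control) × 100; a minimal sketch is shown below, where the writhing counts are hypothetical values chosen only to reproduce the reported percentages:

```python
# Percent writhing inhibition relative to the negative control.
# The counts below are hypothetical illustrations, not the study's raw data.
def writhing_inhibition(w_test: float, w_control: float) -> float:
    return (1.0 - w_test / w_control) * 100.0

mean_writhes = {"control": 67.0, "extract_250": 43.0, "extract_500": 27.5}
for group in ("extract_250", "extract_500"):
    pct = writhing_inhibition(mean_writhes[group], mean_writhes["control"])
    print(f"{group}: {pct:.2f}% inhibition")   # 35.82% and 58.96%
```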
Antiinflammatory activity
The antiinflammatory potential of the H. phlomoides extract was investigated using formaldehyde-induced paw edema in mice as described by Jahan et al., 2021 23 . Briefly, the test groups of mice received the extract at 250 and 500 mg/kg orally. Ibuprofen (100 mg/kg) and 1% v/v tween 80 (10 ml/kg) were given to the mice in the positive control and negative control groups, respectively. The linear circumference of the right hind paw was then determined using a slide caliper. After 30 minutes, 0.1 ml of formalin (2% v/v) was administered into the sub-plantar region of the right hind paw to cause edema. The linear circumference of the treated paw was recorded at the 1st, 2nd, 3rd, and 4th hours after formalin injection. The paw size change was then estimated as (paw size after formalin injection − paw size prior to formalin injection). % inflammation of paw edema = (mean paw size change of the treated group / mean paw size change of the control group) × 100; % inhibition of inflammation = 100 − % inflammation.
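The two formulas above can be applied as in the following sketch; the paw-size changes are hypothetical examples, not measurements from this study:

```python
# Percent inhibition of formaldehyde-induced paw edema, per the formulas above.
# Paw-size changes (in mm) are hypothetical placeholders.
def inhibition_of_inflammation(delta_treated: float, delta_control: float) -> float:
    pct_inflammation = delta_treated / delta_control * 100.0
    return 100.0 - pct_inflammation

delta_control = 2.10   # mean paw-size change of the control group (hypothetical)
for label, delta in {"ibuprofen_100": 0.95, "extract_250": 1.45, "extract_500": 1.10}.items():
    print(f"{label}: {inhibition_of_inflammation(delta, delta_control):.1f}% inhibition")
```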
Laxative activity
This activity was investigated following the approach of Capasso et al. 1986 with minor modification 24 . The experimental mice were distributed into four groups comprising 6 mice each. The first group (control) was administered normal saline and the second group (standard) was administered bisacodyl. The other two groups were administered the plant extract at two different doses. The concentrations of bisacodyl and of the test extract were adjusted so that each mouse consistently received 2 ml of solution. After 16 hours, the feces were weighed for each group, and the test groups were compared with the control and standard groups 25 .
Anthelmintic activity
The anthelmintic potential of the crude extract was assessed using live cattle parasites based on Utpal et al. 2020 with minor modification 26 . The parasites were separated into four test groups of six each. The standard, albendazole (15 mg/ml, 10 ml in PBS), and the extract at 25 and 50 mg/ml were prepared and placed in Petri dishes. The control group was treated with 0.1% tween-80 in PBS. The time of paralysis was noted when no movement was visible except upon vigorous shaking. The time of death was recorded when there was no movement in response to external stimulation, vigorous shaking, or immersion in warm water (50 °C). The anthelmintic effect was measured as the time required for paralysis and death of the parasites compared with the control.
Analgesic activity
The outcome of this study showed that the H. phlomoides ethanol extract at 250 and 500 mg/kg exhibited significant writhing inhibition of 35.82% and 58.96%, respectively, while the writhing inhibition by the standard, diclofenac sodium, was found to be 79.10% at 25 mg/kg compared with the negative control. Therefore, it can be summarized that this extract shows dose-dependent analgesic activity.
Antiinflammatory activity
The H. phlomoides extract showed significant antiinflammatory activity at 250 and 500 mg/kg in formaldehyde-induced paw edema at 1 h, which persisted up to 4 h. Ibuprofen also showed a similar level of antiinflammatory activity from 1 h onwards, which persisted up to 4 h after administration of 100 mg/kg per oral (Table 2). Values are expressed as mean ± standard error of the mean (n = 3); * indicates p<0.05, ** indicates p<0.01, and *** indicates p<0.001 when compared with control.
Laxative activity
The experimental data showed that the H. phlomoides extract at 250 and 500 mg/kg produced a significant increase in stool production of 61.58% and 77.03%, respectively, whereas the standard drug bisacodyl (10 mg/kg) caused an 80.7% increase in stool production. The total amount of stool produced was significant, with soft consistency, in comparison with the standard and negative control groups (Table 3).
Anthelmintic activity
The crude extract of H. phlomoides both paralyzed and killed the parasites, and the recorded times revealed the effect to be dose-dependent. Paralysis times for parasites at 25 and 50 mg/ml of H. phlomoides extract were 24.55 and 16.47 minutes, respectively, while the paralysis time for albendazole was 8.43 minutes. The death times for parasites at 25 and 50 mg/ml were 40.02 and 27.69 minutes, respectively, whereas the time for albendazole was 15.21 minutes (Table 4). Values are expressed as mean ± standard error of the mean (n = 3); * indicates p<0.05, ** indicates p<0.01, and *** indicates p<0.001 when compared with control.
DISCUSSION
The preliminary phytochemical study of the H. phlomoides crude extract revealed the existence of reducing sugars, phenolic compounds, flavonoids, tannins, proteins, alkaloids, glycosides, saponins, steroids, terpenoids, and acidic compounds. Among these phytochemicals, phenolic compounds, flavonoids, tannins, and alkaloids are the most beneficial for therapeutic activity. It has already been reported that polyphenolic compounds, such as phenolic acids, flavonoids, and tannins, exert multiple biological responses, including antioxidant, antiinflammatory, laxative, and anthelmintic activity 26 . Phytochemicals like terpenoids, flavonoids, and tannins contribute to analgesic activity 27,28 . Therefore, the presence of different phytochemical groups in the H. phlomoides extract encouraged us to carry out various pharmacological assays. In the acetic acid-induced analgesic activity evaluation, acetic acid was injected into mice to cause pain of peripheral origin 29 . This model is extremely useful for discovering promising peripherally acting antinociceptive compounds at doses where the analgesic and antiinflammatory properties of medicines would be unproductive in other pain models 30 . When acetic acid is injected intraperitoneally, peripheral nociception is activated through the direct stimulation of non-selective cationic channels or the indirect release of various endogenous mediators, including prostaglandins, cytokines, and bradykinin, along with higher production of the enzymes lipoxygenase (LOX) and cyclooxygenase (COX) 31 , which stimulate nociceptive neurons sensitive to nonsteroidal antiinflammatory drugs 32 . Therefore, this assay can be used to investigate novel NSAIDs. The analgesic potential of the H. phlomoides extract was compared with that of diclofenac sodium in this experiment, which suggests that the extract may have peripheral antinociceptive properties. Its mode of action may involve peripheral inhibition of LOX and/or COX, a decrease in prostaglandin synthesis, and interference with the transduction process in primary afferent nociceptors. Phytoconstituents such as terpenoids, flavonoids, and tannins present in the extract of H. phlomoides could be responsible for the analgesic activity.
The antiinflammatory study showed that the H. phlomoides ethanol extract has antiinflammatory characteristics. Alkaloids, flavonoids, tannins, steroids, and phenols are some of the phytochemicals that may be responsible for these activities 33 . Ibuprofen works by inhibiting COX enzymes, which transform arachidonic acid into prostaglandin H2 (PGH2). PGH2 is then transformed by other enzymes into several different prostaglandins (mediators of pain, inflammation, and fever) and thromboxane A2. Ibuprofen is a nonselective COX inhibitor, similar to aspirin and indomethacin, as it inhibits both the COX-1 and COX-2 cyclooxygenase isoforms 34 . The extract of H. phlomoides may exert its antiinflammatory activity via a similar mechanism.
Bisacodyl is a member of the polyphenolic group of stimulant laxatives. To cause a bowel movement, it acts directly on the colon. It is frequently recommended for the treatment of constipation, the control of neurogenic bowel dysfunction, and as a bowel preparation measure before pathological examinations such as colonoscopy 35 . The effect of bisacodyl on the small intestine is minor; stimulant laxatives mainly stimulate evacuation of the colon 36 . In this study, the H. phlomoides extract produced a significant increase in stool production in mice. The H. phlomoides extract may act via a mechanism similar to that of bisacodyl, which could be due to the polyphenolic compounds observed in our phytochemical study.
Helminths, including nematodes, are exceedingly prevalent and inhabit the gastrointestinal tracts of mammals. These parasites deprive their hosts' bodies of blood, nutrition, vitamins, and other essential elements, leading to the development of various diseases. These parasitic diseases afflict around 3.5 million individuals 37 . The anthelmintic effects of plants are typically attributed to secondary metabolites such as proanthocyanidins, alkaloids, and terpenoids [38][39][40] . This study showed that the extract of H. phlomoides paralyzed and killed the parasites. Alkaloids, terpenoids, and phenolic compounds were identified in the phytochemical investigation, and these may be responsible for the anthelmintic effects of the crude extract.
CONCLUSION
The current study reveals that the H. phlomoides crude extract is enriched with analgesic, antiinflammatory, laxative, and anthelmintic constituents, based on a variety of methodological approaches. Additional research into isolation and characterization may aid in the identification of new bioactive molecules from natural resources. | 2023-01-24T16:45:02.355Z | 2022-12-15T00:00:00.000 | {
"year": 2022,
"sha1": "b21d25d7d1a84a816ea34ee8caf63a838b06c7ff",
"oa_license": null,
"oa_url": "https://doi.org/10.47583/ijpsrr.2022.v77i02.006",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "178b1c0cab6053fcffec0b3e5fc8913202c3ccf5",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
226250806 | pes2o/s2orc | v3-fos-license | Down-regulating NEAT1 inhibited the viability and vasculogenic mimicry formation of sinonasal squamous cell carcinoma cells via miR-195-5p/VEGFA axis
Abstract The role of long non-coding RNA nuclear-enriched abundant transcript 1 (lncRNA NEAT1) in sinonasal squamous cell carcinoma (SNSCC) remained obscure. Target genes and potential binding sites of NEAT1, microRNA (miR)-195-5p and VEGFA were predicted using StarBase and TargetScan, and confirmed by dual-luciferase reporter assay. Quantitative real-time polymerase chain reaction (qRT-PCR) was performed to detect the expressions of NEAT1, vascular endothelial growth factor A (VEGFA) and miR-195-5p. Pearson’s correlation analysis of NEAT1, miR-195-5p and VEGFA was conducted. Cell viability, apoptosis and tube formation capability were assessed by MTT assay, flow cytometry and capillary-like tube formation assay, respectively. Expressions of VEGFA and proteins related to the phosphatidylinositide 3-kinase/Protein Kinase B (PI3K/AKT) pathway were measured by Western blot. In SNSCC tissues and cells, the expressions of NEAT1 and VEGFA were up-regulated while the expression of miR-195-5p was down-regulated, and NEAT1 was negatively correlated with miR-195-5p yet positively correlated with VEGFA. Overexpressed VEGFA promoted the viability and capillary-like tube formation of SNSCC cells yet suppressed their apoptosis, while silencing VEGFA led to the opposite results. MiR-195-5p could bind to NEAT1, and down-regulating miR-195-5p reversed the effects of silencing NEAT1 on the expressions of NEAT1 and miR-195-5p, cell viability, apoptosis and capillary-like tube formation as well as PI3K/AKT pathway activation. VEGFA was the target of miR-195-5p, and overexpressed VEGFA reversed the effects of miR-195-5p. Down-regulating NEAT1 inhibited the viability and vasculogenic mimicry formation of SNSCC cells yet promoted their apoptosis via the miR-195-5p/VEGFA axis, providing a possible therapeutic target for SNSCC treatment.
Introduction
Sinonasal malignancies are rarely occurring tumors which account for less than 3% of overall head and neck cancers [1]. Squamous cell carcinoma is the most common type of sinonasal malignancy, occupying up to 75% of all sinonasal malignancies [2]. Sinonasal malignancies are categorized as aggressive tumors because patients usually remain asymptomatic and are generally diagnosed at an advanced stage, when the tumor has grown large enough to manifest symptoms [3]. Though remarkable progress has been made in surgery and therapy, the prognosis of patients with sinonasal squamous cell carcinoma (SNSCC) remains poor [4]. Therefore, it is of great urgency and significance to further discover the molecular mechanisms related to SNSCC development and progression.
Long non-coding RNAs (lncRNAs) are a family of transcripts which are longer than 200 nucleotides and lack protein-coding ability [5]. It has been addressed that many lncRNAs may play a pivotal role in tumor carcinogenesis. For instance, Xiong et al. pointed out that lncRNA MYOSLID could promote the invasion and metastasis of head and neck squamous cell carcinoma via modulating epithelial-to-mesenchymal transition [6]. It was also reported that lncRNA ZFAS1 may be an oncogene in head and neck squamous cell carcinoma [7]. Besides, lncRNA AC091729.7 has been identified as a novel lncRNA that promotes the proliferation and invasion of SNSCC cells through binding to serine/arginine-rich splicing factor 2 (SRSF2) [8]. As for nuclear-enriched abundant transcript 1 (NEAT1), Wang et al. elucidated that it could promote laryngeal squamous cell cancer through regulation of the miR-107/CDK6 pathway [9]. However, the role of NEAT1 in SNSCC requires further investigation.
Previous studies showed that lncRNAs may act as competitive endogenous RNAs (ceRNAs) of microRNAs (miRNAs; miRs); that is, lncRNAs may 'sponge' miRNAs and inhibit miRNA functions by competitively binding to microRNA response elements (MREs) [10]. Among them, NEAT1 was found to promote the progression of aggressive endometrial cancer cells via a network mediated by miR-361 [11]. It was also reported to play a key role in sepsis-induced acute kidney injury via targeting miR-204 and modulating the NF-κB pathway [12]. In addition, NEAT1 could target miR-34a-5p and promote the progression of nasopharyngeal carcinoma [13]. Vascular endothelial growth factor A (VEGFA), a member of the VEGF family, has been found implicated in the progression of many human malignancies [14]. However, the molecular mechanisms of NEAT1 and its correlations with miR-195-5p and VEGFA in SNSCC remain inadequately discussed. Therefore, the present study aimed to discover the correlations among NEAT1, miR-195-5p and VEGFA by unveiling the molecular mechanisms via which NEAT1 plays a role in SNSCC, so as to determine their roles in SNSCC and find a possible therapeutic method for SNSCC.
Clinical samples
In the present study, tumor tissues from patients diagnosed with SNSCC (n=30) and turbinate mucosal tissues from healthy individuals (n=20) were collected from the Affiliated Hospital of Chengde Medical University between June 2019 and December 2019. All patients enrolled met the following criteria: (a) the patients had not received chemotherapy or radiotherapy; (b) the patients had no other cancers, autoimmune diseases, contagious diseases or other diseases. Clinical samples were obtained at the initial resection, washed with phosphate-buffered saline (PBS), and preserved in a refrigerator at −80 °C.
Target gene prediction and dual-luciferase reporter assay
Using StarBase and TargetScan, we predicted the target genes and potential binding sites of NEAT1, miR-195-5p and VEGFA, which were then confirmed by dual-luciferase reporter assay. The 3′ UTRs of NEAT1 and VEGFA harboring miR-195-5p target sites were synthesized by GenePharma (Shanghai, China) and inserted into the pMirGLO luciferase vector (AM5795; Thermo Fisher Scientific, U.S.A.) to form wild-type NEAT1 and VEGFA (NEAT1-wt; VEGFA-wt) reporter plasmids. A site-directed mutagenesis kit (F541; Thermo Fisher Scientific, U.S.A.) was used to perform 3′ UTR mutagenesis at the miR-195-5p target site so as to form mutated NEAT1 and VEGFA (NEAT1-mut; VEGFA-mut) reporter plasmids.
Flow cytometry
After transfection for 48 h, 1 × 10⁵ RPMI-2650 cells were treated with 5 μl of Annexin V and 5 μl of propidium iodide (PI) for 15 min in the dark at room temperature. Cell apoptosis was detected using an Annexin V-FITC cell apoptosis kit (130-092-052; Miltenyi Biotec, Waltham, MA, U.S.A.) and data were analyzed using Kaluza C Analysis Software (Beckman Coulter, Indianapolis, IN, U.S.A.).
Capillary-like tube formation assay
Capillary-like tube formation assay was performed as previously described [15]. In detail, after being cultured alone for 6-8 h, HUVECs were co-cultured with RPMI-2650 cells (2 × 10⁴ cells/well) in a 96-well plate. The cells were then plated on pre-chilled Matrigel (50 μl; BD Biosciences, Franklin Lakes, NJ, U.S.A.) in MEM at 37 °C for 1 h. Next, the plate containing the medium was exposed to niclosamide (5 μM; N3510; Sigma-Aldrich, U.S.A.) for 8 h. Photos of tubular structures were taken and observed using an optical microscope with a recording camera (DP27; Olympus, Tokyo, Japan). Five fields were randomly selected from each well for evaluation of tube formation, and the data were further analyzed using Tube Formation ACAS Image Analysis Software (v.1.0, ibidi GmbH, Gräfelfing, Germany).
RNA isolation and qRT-PCR
Total RNA from SNSCC tissues and cells was extracted with TRIzol reagent (A33250, Invitrogen, U.S.A.) in accordance with the manufacturer's manuals, and then preserved in a −80 °C refrigerator. The concentration of the total RNA was quantified using a biological spectrometer (NanoDrop 2000, Thermo Fisher Scientific, U.S.A.). One microgram of the total RNA was reverse-transcribed into cDNA using a First-Strand cDNA Synthesis Kit (04379012001; Roche Life Sciences, Mannheim, Germany) following the manufacturer's manuals. Then the qRT-PCR experiment was conducted using a qScript One-Step RT-qPCR kit (95057-050, Quanta Bio, Beverly, MA, U.S.A.) in a real-time PCR detection system (LineGene 9600 Plus; Biosan; Riga, Latvia) under the following conditions: 95 °C for 10 min, followed by 40 cycles at 95 °C for 10 s, 60 °C for 15 s and 72 °C for 10 s. Primer sequences used in this experiment are listed in Table 2. GAPDH and U6 were used as internal controls. Relative gene expression was quantified by the 2^−ΔΔCT calculation method [16].
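As a reference for the quantification step, the 2^−ΔΔCT method of ref. [16] can be sketched as follows; the Ct values are hypothetical placeholders, not data from this study:

```python
# Relative expression via the 2^(-ddCt) method (Livak & Schmittgen, ref. [16]).
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    d_ct_sample = ct_target_sample - ct_ref_sample      # normalize to GAPDH/U6
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical example: NEAT1 in an SNSCC sample vs. normal tissue (GAPDH-normalized).
fold = relative_expression(ct_target_sample=24.1, ct_ref_sample=17.8,
                           ct_target_control=26.5, ct_ref_control=17.6)
print(f"NEAT1 fold change: {fold:.2f}")   # > 1 indicates up-regulation
```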
Western blot
In our study, Western blot was applied to measure the expression of the related proteins as previously described [17].
Statistical analysis
Experiments in our study were performed at least three times independently. Data are expressed as mean ± standard deviation (SD). Statistical analysis was performed using SPSS 21.0 software (IBM Corporation, Armonk, NY, U.S.A.). Statistical significance was assessed by one-way ANOVA and Student's t test followed by Dunnett's post hoc test. Correlation analyses of NEAT1, miR-195-5p and VEGFA were performed by Pearson's correlation test. P<0.05 was considered statistically significant.
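The Pearson analyses reported below can be reproduced with a few lines of scipy; the expression vectors here are hypothetical illustrations, not the study's data:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired relative-expression values for two genes across samples.
neat1 = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.7])
mir195 = np.array([1.6, 0.9, 1.9, 0.7, 1.1, 0.8])

r, p_value = pearsonr(neat1, mir195)
print(f"r = {r:.3f}, p = {p_value:.4f}")   # negative r -> inverse correlation
```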
MiR-195-5p could competitively bind to NEAT1 and target VEGFA
Using StarBase and TargetScan, we identified miR-195-5p as the candidate miRNA which could not only competitively bind to NEAT1 but also target VEGFA. The complementary binding sites are shown in Figure 1A,C. Then, for the dual-luciferase reporter assay, wild-type or mutated NEAT1 and VEGFA (NEAT1-WT; NEAT1-MUT; VEGFA-WT; VEGFA-MUT) as well as miR-195-5p mimic or mimic control were co-transfected into RPMI-2650 cells. It was found that the luciferase activity in RPMI-2650 cells of the M+NEAT1-WT group was down-regulated as compared with the MC+NEAT1-WT and M+NEAT1-MUT groups (Figure 1B, P<0.05), whereas that in the M+NEAT1-MUT group was not affected. The same result was found in the M+VEGFA-WT group as compared with the MC+VEGFA-WT and M+VEGFA-MUT groups (Figure 1D, P<0.01). These results suggested that miR-195-5p could bind to NEAT1 and target VEGFA.
Effects of silenced or overexpressed VEGFA on SNSCC cell viability, apoptosis and capillary-like tube formation
To discover the role of VEGFA in SNSCC cells, we transfected VEGFA overexpression plasmid as well as small interfering RNA for VEGFA (siVEGFA) into SNSCC RPMI-2650 cells. As shown in Figure 3A, VEGFA expression was down-regulated after transfection with siVEGFA, whereas transfection with the VEGFA overexpression plasmid led to an opposite effect (P<0.01), suggesting that overexpressed VEGFA up-regulated VEGFA expression in SNSCC cells, whereas silencing VEGFA caused an opposite effect.
We then detected the effects of overexpressed or silenced VEGFA on SNSCC cell viability, apoptosis and vasculogenic mimicry (VM) formation by MTT assay, flow cytometry and capillary-like tube formation assay, respectively. As shown in Figure 3B, there was a decrease in the viability of SNSCC cells after silencing VEGFA, whereas overexpressed VEGFA resulted in an opposite effect (P<0.05), indicating that silencing VEGFA suppressed SNSCC cell viability while overexpressed VEGFA exerted an opposite effect. It was discovered from the results of flow cytometry that after silencing VEGFA, the apoptosis rate of SNSCC cells was significantly increased, while overexpressed VEGFA caused a decrease in the apoptosis rate of SNSCC cells (Figure 3C, P<0.001), suggesting that silencing VEGFA promoted SNSCC cell apoptosis, whereas overexpressed VEGFA had an opposite effect.
According to the results from the capillary-like tube formation assay in Figure 3D, the relative angiogenesis rate of HUVECs mediated by SNSCC cells in the siVEGFA group was decreased (P<0.01). Overexpressed VEGFA, however, led to a contrary result (Figure 3D, P<0.01). Therefore, it could be summarized that silencing VEGFA suppressed capillary-like tube formation in SNSCC cell-mediated HUVECs, whereas overexpressed VEGFA led to the opposite result.
Down-regulating miR-195-5p reversed the effects of silencing NEAT1 on NEAT1 and miR-195-5p expressions, and SNSCC cell viability, apoptosis and capillary-like tube formation
To find out the role of NEAT1 and its correlation with miR-195-5p in SNSCC cells, we transfected siNEAT1 as well as miR-195-5p inhibitor into SNSCC RPMI-2650 cells. As shown in Figure 4A,B, NEAT1 expression was down-regulated yet miR-195-5p expression was increased after transfection with siNEAT1, whereas transfection with miR-195-5p inhibitor led to an opposite effect (P<0.01). Furthermore, we also found that the effects of silencing NEAT1 on NEAT1 and miR-195-5p expressions in SNSCC cells were reversed by down-regulating miR-195-5p ( Figure 4A,B, P<0.001).
Using MTT assay, flow cytometry and capillary-like tube formation assay, we then detected the effects of NEAT1 and miR-195-5p on SNSCC cell behavior. As shown in Figure 4C, the results from MTT assay showed that silencing NEAT1 caused a decrease in SNSCC cell viability, whereas down-regulating miR-195-5p resulted in an opposite effect (P<0.05). In addition, we discovered that down-regulating miR-195-5p could reverse the effects of silencing NEAT1 on SNSCC cell viability ( Figure 4C, P<0.01).
Flow cytometry results revealed that after silencing NEAT1, the apoptosis rate of SNSCC cells was significantly increased, while down-regulating miR-195-5p led to an opposite effect (Figure 4D, P<0.05). Besides, down-regulating miR-195-5p was found to reverse the effects of silencing NEAT1 on SNSCC cell apoptosis (Figure 4D, P<0.001).
Results from the capillary-like tube formation assay exhibited a decrease in the angiogenesis rate of SNSCC cell-mediated HUVECs after silencing NEAT1, whereas down-regulating miR-195-5p caused an opposite effect (Figure 4E, P<0.01). In addition, we discovered that down-regulating miR-195-5p could reverse the effects of silencing NEAT1 on capillary-like tube formation of SNSCC cell-mediated HUVECs (Figure 4E, P<0.05).
Down-regulating miR-195-5p reversed the effects of silencing NEAT1 on VEGFA, p-PI3K and p-AKT expressions in SNSCC cells
To discover the effects of NEAT1 and miR-195-5p on VEGFA, PI3K and AKT expressions in SNSCC cells, we measured these expressions after silencing NEAT1 and down-regulating miR-195-5p by Western blot. As shown in Figure 5A, VEGFA, p-PI3K and p-AKT expressions were decreased after silencing NEAT1, whereas down-regulating miR-195-5p led to the opposite result (P<0.001). We also found that the effects of silencing NEAT1 on VEGFA, p-PI3K and p-AKT expressions in SNSCC cells were reversed by down-regulating miR-195-5p (Figure 5A, P<0.01).
Furthermore, we verified PI3K and AKT phosphorylation in SNSCC cells after silencing NEAT1 and down-regulating miR-195-5p. In this section, we found that silencing NEAT1 markedly decreased the phosphorylation levels of PI3K and AKT in SNSCC cells (Figure 5B,C, P<0.01). However, an opposite result was obtained after down-regulating miR-195-5p, which suggested that down-regulating miR-195-5p could enhance the levels of PI3K and AKT phosphorylation in SNSCC cells (Figure 5B,C, P<0.01). In conclusion, down-regulating miR-195-5p reversed the effects of silencing NEAT1 on the phosphorylation levels of PI3K and AKT in SNSCC cells.
Overexpressed VEGFA reversed the effects of miR-195-5p up-regulation on VEGFA and miR-195-5p expressions in SNSCC cells
To find out the role of miR-195-5p and its correlation with VEGFA in SNSCC cells, we transfected miR-195-5p mimic and the VEGFA overexpression plasmid into SNSCC RPMI-2650 cells. As shown in Figure 6A-C, overexpressed VEGFA increased the protein and mRNA expressions of VEGFA (P<0.01) yet had no significant effect on miR-195-5p expression, while transfection with miR-195-5p mimic led to an opposite effect (P<0.05). Furthermore, we also found that overexpressed VEGFA reversed the effects of up-regulating miR-195-5p on VEGFA and miR-195-5p expressions ( Figure 6A-C, P<0.05).
Overexpressed VEGFA reversed the effects of miR-195-5p up-regulation on SNSCC cell viability, apoptosis and capillary-like tube formation
Then, we detected the effects of VEGFA and miR-195-5p on SNSCC cell behavior by MTT assay, flow cytometry and capillary-like tube formation assay. As shown in Figure 6D, the results from MTT assay exhibited an increase in SNSCC cell viability after transfection with the VEGFA overexpression plasmid, whereas up-regulating miR-195-5p resulted in an opposite effect (P<0.05). In addition, we discovered that the effects of up-regulating miR-195-5p on SNSCC cell viability were reversed by overexpressed VEGFA (Figure 6D, P<0.01).
Flow cytometry results indicated that overexpressed VEGFA significantly decreased the apoptosis of SNSCC cells, while up-regulating miR-195-5p led to a contrary result (Figure 7A, P<0.05). Besides, VEGFA overexpression was found to reverse the effects of miR-195-5p up-regulation on SNSCC cell apoptosis (Figure 7A, P<0.01).
Through capillary-like tube formation assay, we observed an increased angiogenesis rate of SNSCC cell-mediated HUVECs after transfection with the VEGFA overexpression plasmid, whereas up-regulating miR-195-5p caused an opposite effect ( Figure 7B, P<0.05). In addition, we discovered that overexpressed VEGFA could reverse the effects of up-regulating miR-195-5p on capillary-like tube formation of SNSCC cell-mediated HUVECs ( Figure 7B, P<0.01).
Overexpressed VEGFA reversed the effects of miR-195-5p up-regulation on p-PI3K and p-AKT expressions in SNSCC cells
To discover the effects of VEGFA and miR-195-5p on PI3K and AKT expressions in SNSCC cells, we measured PI3K and AKT expressions after transfection with miR-195-5p mimic and the VEGFA overexpression plasmid by Western blot. As shown in Figure 8A, there was an increase in p-PI3K and p-AKT expressions after transfection with the VEGFA overexpression plasmid, whereas up-regulating miR-195-5p led to the opposite result (P<0.05). We also found that overexpressed VEGFA could reverse the effects of up-regulating miR-195-5p on p-PI3K and p-AKT expressions in SNSCC cells (Figure 8A, P<0.001).
Furthermore, we verified PI3K and AKT phosphorylation in SNSCC cells after transfection with miR-195-5p mimic and the VEGFA overexpression plasmid. In this section, we found that overexpressed VEGFA markedly raised the phosphorylation levels of PI3K and AKT in SNSCC cells (Figure 8B,C, P<0.01). However, an opposite result was obtained after up-regulating miR-195-5p, which suggested that up-regulating miR-195-5p could suppress PI3K and AKT phosphorylation in SNSCC cells. Also, overexpressed VEGFA could reverse the effects of up-regulating miR-195-5p on the phosphorylation levels of PI3K and AKT in SNSCC cells (Figure 8B,C, P<0.01).
Discussion
Among all head and neck tumors, sinonasal malignancies occur rarely, yet they are highly fatal, with an overall survival rate of less than 5% in patients, and it is therefore highly necessary to develop novel therapeutic targets for disease management [18]. SCC, the most common histological variant of sinonasal malignancies, has been reported to account for a high proportion (>50%) of nasal and paranasal sinus tumors [19]. SNSCC has been found to be most prevalent in the maxillary sinus, followed by the nasal cavity [20]. The prognosis of SNSCC patients remains poor even after surgical treatment, and it is therefore significant to further discover the molecular mechanisms related to SNSCC development and progression [21].
Multiple studies have suggested that lncRNAs might be implicated in sinonasal malignancies [22]. As an architectural component of nuclear paraspeckles, lncRNA NEAT1 has been found implicated in many human malignancies [23,24]. However, the role of NEAT1 in SNSCC development and progression remained elusive. In our present study, we found that NEAT1 could competitively bind to miR-195-5p, and NEAT1 expression was up-regulated yet miR-195-5p expression was down-regulated in SNSCC tissues. Also, the effects of silencing NEAT1 on the viability, apoptosis and capillary-like tube formation of SNSCC cells were reversed by miR-195-5p down-regulation.
The PI3K/AKT pathway has been found involved in the progression of many diseases [25], and previous studies suggested that NEAT1 could activate the PI3K/AKT pathway so as to promote disease progression. Xu et al. pointed out that NEAT1 could participate in the development of multiple myeloma (MM) through activating the PI3K/AKT pathway [26]. Also, Xia et al. found that the NEAT1/PI3K/AKT pathway might be implicated in sepsis-related inflammation [27]. In addition, NEAT1 down-regulation could result in the inactivation of the PI3K/AKT pathway [28]. However, the roles of NEAT1 and the PI3K/AKT pathway in SNSCC remained poorly understood. In our present study, we found that NEAT1 down-regulation could inhibit the activation of the PI3K/AKT pathway and thereby inhibit SNSCC progression. However, the effects of silencing NEAT1 on the PI3K/AKT pathway were reversed by miR-195-5p down-regulation in SNSCC cells.
VM is defined as a process in which invasive tumor cells mimic endothelial cells and form a pipeline structure, and its presence has been associated with high tumor grade, short survival, invasion and metastasis [29]. Since it was first discovered in melanoma, VM has been found in multiple human malignancies, such as breast cancer [30], hepatocellular carcinoma [31] and glioma [32]. Many discoveries have brought novel insights into the molecular mechanisms governing VM, and VEGFA has been shown to play a pivotal role in VM formation [33]. VEGFA binds to and activates two tyrosine kinase receptors, namely VEGF receptor 1 (VEGFR1) and VEGF receptor 2 (VEGFR2), which have been reported to be implicated in many signaling capacities in VM channel formation [34,35]. It was also found that VEGFA could be regulated by miRNAs in multiple human malignancies. As Li et al. pointed out, VEGFA was targeted by miR-200b and its down-regulation might contribute to the amelioration of diabetic retinopathy [36]. In colorectal cancer progression, miR-150-5p was discovered to act as a suppressor via targeting VEGFA [37]. In addition, miR-15a-5p could suppress peritoneal dialysis-induced inflammation and fibrosis of peritoneal mesothelial cells by targeting VEGFA [38]. As for miR-195, Liu et al. found that VEGF was the target of miR-195 and that miR-195 could suppress the metastasis and angiogenesis of squamous cell lung cancer (SQCLC) cells [39]. However, the roles of miRNAs, miR-195-5p in particular, and VEGFA in SNSCC remain to be further explored. In our present study, we found that VEGFA was the target gene of miR-195-5p, and overexpressed VEGFA reversed the effects of miR-195-5p up-regulation on the viability, apoptosis and capillary-like tube formation of SNSCC cells.
Other studies have suggested that VEGFA is also able to activate several signaling pathways, the PI3K/AKT pathway in particular [40]. In our present study, we found that VEGFA expression was up-regulated in SNSCC tissues, and that overexpressed VEGFA promoted the viability and capillary-like tube formation yet suppressed the apoptosis of SNSCC cells, suggesting that VEGFA might also play a pivotal role in SNSCC development and progression. In addition, we found an increase in the phosphorylation levels of PI3K and AKT after transfection with the VEGFA overexpression plasmid, suggesting that VEGFA may promote VM formation in SNSCC via activating the PI3K/AKT signaling pathway. Moreover, overexpressed VEGFA reversed the effects of miR-195-5p up-regulation on the phosphorylation of PI3K and AKT in SNSCC cells, and the inhibitory effects of silencing NEAT1 on VEGFA levels were reversed by miR-195-5p down-regulation.
However, there were some limitations to our study. The effects of NEAT1, miR-195-5p and VEGFA in SNSCC were examined mainly by in vitro experiments, which have yet to be validated in animal models; hence, in vivo studies are required to further verify our results. Besides, correlating NEAT1, miR-195-5p and VEGFA levels with clinicopathological data from SNSCC patients and healthy controls is also worth further study. In addition, as the transcription factor NF-κB plays an important role in regulating VEGF expression, it will be interesting to observe the effect of silencing or overexpressing NEAT1 on NF-κB signaling in future studies.
In conclusion, our study revealed a new role of NEAT1 in SNSCC. We found that NEAT1 was up-regulated in SNSCC, and that down-regulating NEAT1 inhibited the viability and vasculogenic mimicry formation yet promoted the apoptosis of SNSCC cells via the miR-195-5p/VEGFA axis. These results unveil possible molecular mechanisms of NEAT1 in SNSCC and suggest a potential therapeutic target for SNSCC treatment.
Data Availability
The analyzed datasets generated during the study are available from the corresponding author on reasonable request. | 2020-11-05T09:05:35.781Z | 2020-11-04T00:00:00.000 | {
"year": 2020,
"sha1": "82dc852c8ffe9af69d00182a06e29eba70a8a552",
"oa_license": "CCBY",
"oa_url": "https://portlandpress.com/bioscirep/article-pdf/40/11/BSR20201373/897686/bsr-2020-1373.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4500b14a77b4f94b54889d53bd3a0878bfddeb9f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
231839621 | pes2o/s2orc | v3-fos-license | On the generalized Andrews-Curtis-Problem -- A Disproof of the Relative Case
The generalized Andrews-Curtis Conjecture expects that finite PLCW 2-complexes which are simple-homotopy equivalent can be 3-deformed into each other. If in addition subcomplexes are required to be kept fixed during the deformation, this is not possible.
ց L, provided n ≥ 3; see Wall [Wa66]. In Hog-Angeloni/Metzler, Chapter I of [LMS 197], 1993, pages 45-46, the case n = 1 is also listed. But in several publications of which I am (co-)author, the case n = 2 is called questionable. This is the so-called generalized Andrews-Curtis problem. A positive expectation is called the Andrews-Curtis Conjecture (AC').
For n ≥ 3 Wall even proved that a common subcomplex of K and L can be kept fixed during the deformation. In the case n = 2 this relative version is what we disprove in the present paper. It was mentioned as open in Chapter 2 of [LMS 446]. In addition to my own previous work I strongly use two results of Allan J. Sieradski. Whereas in higher dimensions the relative case needs extra labour, dimension 2 does so in the absolute one. The end of the present paper contains hints towards this goal.
I dedicate this paper to friends and colleagues who were and are partners of my work on (AC'), in particular to Cynthia Hog-Angeloni, and to my wives Ingrid Baumann-Metzler as well as to the memory of Helga Metzler (1942-1994), who accompanied the development of [LMS 446] resp. [LMS 197].
§1 Bias
For terminology we refer to earlier publications, in particular [LMS 197] and [LMS 446]. This covers the algebraic counterpart of 3-deformations, namely Q-, Q*- and Q**-transformations of finite presentations. Q*- and Q**-transformations were first defined in [Me76]. The notion of bias is due to Michael N. Dyer and Allan J. Sieradski and concerns how spherical elements lie in the second homology of complexes. An overview can be found in M. Paul Latiolais' Chapter III of [LMS 197].
Let K^2, L^2 be 2-complexes with isomorphic abelian π_1 and let α : π_1(K^2) → π_1(L^2) be an isomorphism. If a presentation of π_1(K^2) with generators a_i and x_{ij}, y_{ij} prime to m, and likewise a presentation of π_1(L^2) with generators α(a_i) and x'_{ij}, y'_{ij} are given, then
(1) K^2 and L^2 are at most Q**- (or homotopy-) equivalent, if a k with ∏ x'_{ij} y'_{ij} ≡ ± k^{g-1} ∏ x_{ij} y_{ij} mod m exists.
Definition: m is called the bias modulus, and the residual class of ± ∏ x'_{ij} y'_{ij} · (∏ x_{ij} y_{ij})^{-1} is called the bias in this situation.
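For concreteness, here is a small worked example with hypothetical numbers (ours, purely to fix the notation; they are not taken from [Me76]): take m = 5 and suppose
∏ x_{ij} y_{ij} ≡ 2 and ∏ x'_{ij} y'_{ij} ≡ 3 mod 5.
Since 2 · 3 ≡ 1 mod 5, we have 2^{-1} ≡ 3, so the bias is the residual class of ± 3 · 3 ≡ ± 4 mod 5, i.e. {4, 1} inside the units mod 5.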
(1) is the main result in [Me76]. For topological interpretations and generalizations of the bias invariant see [Dyer86] and [Me00].
(2) For Q^(*)-equivalence of K^2 and L^2 the bias even has to be ±1.
In his paper [Si77] Allan J. Sieradski showed that the criteria (1) and (2) are also sufficient and generalized them to a finite number of free products of finite abelian groups with the same m and g. This corresponds to forming one-point unions of standard 2-complexes.
We now use this example for the
Theorem. There is no Q**-transformation P_1 ∨ P_2 → Q_1 ∨ Q_2 rel the joint 1-skeleton of the standard 2-complexes K^2(P_1 ∨ P_2) and L^2(Q_1 ∨ Q_2) whose map is homotopic to the one given by the initial Q-transformation from P_1 ∨ P_2 to Q_1 ∨ Q_2.
§2 Proof of the Theorem
The bias is a homotopy invariant of maps (see M. Paul Latiolais, Chapter III in [I]). Because it is induced by the Q-transformation P_1 ∨ P_2 → Q_1 ∨ Q_2, its fundamental group map is (homotopic to) the identity, and by (2) the bias has the value ±1.
For the proof we need in addition from [Si85] that for finite abelian π_1 an automorphism can be decomposed into row transformations and diagonal ones (I don't know a generalization of Sieradski's result for more than one free factor). Such a decomposition is possible even if the automorphism is the identity but the commutators of the presentations contain nontrivial exponents.
Keeping the 1-skeleton of P_1 ∨ P_2 and Q_1 ∨ Q_2 fixed (up to homotopy) during a Q**-transformation would mean that the free factors of π_1 would be fixed. There would hold an equation
(3) ± 1 ≡ k^{g-1} · 2 · 2 mod 5,
one factor 2 belonging to P_1 ∨ P_2, the other one to Q_1 ∨ Q_2.
But fixing the free factors, by (1) above and [Si85] the two factors 2 in (3) aren't quadratic residues mod 5, although their product is. This is a behaviour similar to the fact that a "product" of two Möbius bands results in a torus. Hence a Q**-transformation with the properties of the theorem doesn't exist.
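The quadratic-residue computation behind this step is elementary and can be checked directly:
1^2 ≡ 1, 2^2 ≡ 4, 3^2 ≡ 4, 4^2 ≡ 1 mod 5,
so the quadratic residues mod 5 are exactly {1, 4}. Neither +2 nor −2 ≡ 3 lies in this set, so no single factor 2 in (3) is a square mod 5; but their product 2 · 2 = 4 = 2^2 is, which is why the product can be absorbed while the individual factors cannot.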
Remark:
The example of our theorem and similar ones, which are based on bias, give rise to the two special cases for (AC'):
A) In general it is impossible to fix a subcomplex during a deformation.
B) In general it is likewise impossible to choose the final map to be homotopic to the initial (simple-)homotopy equivalence.
As Cynthia Hog-Angeloni has mentioned, in non-bias situations the cases may disagree.
§3 An outlook to the absolute case
Our theorem stimulates the idea to show the necessity of 4-expansions in the absolute case, which - astonishingly enough - could be avoided in the relative one. This idea may be made concrete by thickening the above example at those subcomplexes that were fixed so far. And the Möbius bands may provide assistance from algebraic topology. Of course, other strategies may be useful in addition. | 2021-02-08T02:16:07.664Z | 2021-02-05T00:00:00.000 | {
"year": 2021,
"sha1": "4d4f3a534acaffac6bb819f32bec44c0408739ea",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4d4f3a534acaffac6bb819f32bec44c0408739ea",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
5422097 | pes2o/s2orc | v3-fos-license | Review of the safety, efficacy and patient acceptability of the levonorgestrel-releasing intrauterine system.
The levonorgestrel-containing intrauterine system is an extremely effective, reversible and safe form of long-term yet reversible birth control. In view of its efficacy, it is a safer alternative to permanent contraceptive methods such as sterilization. It is especially useful in situations where use of estrogen-containing contraceptives is contraindicated. While menstrual disturbances are a common side effect, proper counseling improves compliance. In addition to its contraceptive effect, the levonorgestrel intrauterine system offers potential therapeutic benefits in other clinical contexts, including menorrhagia, symptomatic fibroids, endometriosis, and endometrial protection.
Introduction
The levonorgestrel-containing intrauterine device is a very effective and safe form of reversible long-term birth control. In addition to its contraceptive effect, it offers potential non-contraceptive therapeutic benefits.
We reviewed the efficacy, safety and clinical applications of the levonorgestrel-containing intrauterine device (LNG-IUS, Mirena®). The search included the PubMed, Cochrane Controlled Trials Register and WHO publications on contraception. We included randomized controlled trials, controlled clinical trials and systematic/clinical reviews published in the English language in peer-reviewed journals and guidelines published by the WHO and the National Institute for Clinical Excellence (NICE). The key words used to search data included IUD, IUD-IUS, contraception.
Rationale for IUD
Among the several long-acting contraceptive methods, the intrauterine device (IUD) is the most popular, and overall it is the second most popular contraceptive method worldwide after sterilization (Progress in Reproductive Health Research 2002). The popularity of the IUD stems from the fact that in addition to providing long-lasting, highly effective, rapidly reversible contraception, it has no known effects on breast milk or breastfeeding; it does not interfere with sexual intercourse or with any type of medication; it is widely available throughout the world; it can be used by women of any age or parity and following an abortion or miscarriage; and finally, once in place, its user can more or less forget about it with no further costs (WHO 2007). There is no evidence that use of the IUD increases tubal infertility (Grimes 2000). In addition to its contraceptive effect, the LNG-IUS offers potential therapeutic benefits in other clinical contexts, including menorrhagia, symptomatic fibroids, endometriosis, and endometrial suppression.
Development and pharmacology of Mirena ® intrauterine contraceptive device
The aim of progesterone-releasing intrauterine systems initially was to reduce IUCD expulsion, by the addition of 'uterine relaxing hormones' (Odlind 1996). This led to the development of Progestasert® (the first hormonally impregnated device, releasing 65 μg of progesterone per day) and the Mirena® LNG-IUS (releasing 20 μg of levonorgestrel per day).
The Mirena® LNG-IUS was developed by Leiras Oy, Turku, Finland, and was launched first in Finland in 1990. It has a T-shaped body (32 × 32 mm) made of polyethylene with an elastomer sleeve consisting of a 1 to 1 mixture of polydimethylsiloxane and 52 mg of levonorgestrel mounted around its vertical part. The sleeve is covered with a drug-release-controlling membrane of medical-grade polydimethylsiloxane that releases levonorgestrel over an extended time of up to 5 years at a practically constant rate. The initial release rate of levonorgestrel is 20 μg per 24 h, and at the end of 5 years the release rate is still above 10 μg per 24 h (Lähteenmäki et al 2000). The distal end of the T-frame contains 2 removal threads. The device also contains barium sulfate, which makes it visible on X-ray examination. Local delivery of LNG results in low but detectable serum levels of LNG (0.1-0.4 ng/mL), much lower than peak levels observed with other combined or progestin-only contraceptives containing levonorgestrel (ESHRE Workshop 2008).
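These release figures are internally consistent with the 52 mg reservoir; as a rough back-of-envelope check (our arithmetic, not from the cited sources): even at the initial rate, 20 μg/day × 365 days × 5 years ≈ 36.5 mg, and with the rate declining toward 10 μg/day the cumulative release over 5 years is closer to 25-30 mg, comfortably below the 52 mg loaded into the sleeve.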
Levonorgestrel is a highly effective progestin, with an estimated progestational potency 10 times greater than that of progesterone; it also exhibits some androgenic properties (Sitruk-Ware 2007). The recommended duration of use of the Mirena® coil is 5 years. Even though the licensed duration of action is 5 years, evidence suggests that it is effective as a contraceptive for up to 7 years (Sivin et al 1991). Furthermore, women who are aged 45 years or older when their LNG-IUS is inserted and are amenorrhoeic may keep it until they no longer need contraception, even if this is beyond the duration of UK Marketing Authorisation (NICE guidelines 2005).
The Mirena® LNG-IUS is currently licensed in the UK as a 5-year contraceptive agent (license awarded 1995), as a treatment for idiopathic menorrhagia (license awarded 2001), and to provide uterine protection during estrogen replacement therapy in perimenopausal and postmenopausal women (license awarded 2005). The second and third applications for the Mirena® LNG-IUS are not licensed in the US or Canada (Varma et al 2006).
Mechanism of action
The LNG-IUS acts predominantly by preventing implantation and sometimes by preventing fertilization. The contraceptive effects of the LNG-IUS are mediated via its progestogenic effect on the endometrium. Local intrauterine delivery of levonorgestrel (LNG) results in extensive decidualization of endometrial stromal cells, atrophy of the glandular and surface epithelium, and changes in vascular morphology (suppression of spiral artery formation and presence of large dilated vessels), along with down-regulation of sex steroid receptors in all cellular components (Guttinger and Critchley 2007). The result is a thin decidualized endometrium, an environment that is unsuitable for sperm survival, fertilization and implantation. The endometrial changes develop in the first month after insertion and persist until the device is removed (Guttinger and Critchley 2007). By inactivating the endometrium and suppressing proliferation, it also decreases menstrual blood loss (MBL) and pain. The levonorgestrel released locally alters the quality of cervical mucus, making it hostile to the movement of sperm through the cervix (Jonsson et al 1991). Thus, the number and quality of sperm reaching the site of fertilization in the tube seems to be reduced in LNG-IUS users.
Ovulation is not suppressed, as the device has little influence on ovarian activity; women have normal estradiol values from the time of insertion through its 5-year life span (Luukkainen et al 1990), ensuring that the LNG-IUS would not expose the user to hypoestrogenism leading to osteoporosis (Bahamondes et al 2006). Although the anovulation rate is almost 85% at the beginning of use, this rate falls to less than 15% at the end of the first year (Nilsson et al 1980). As ovulatory cycles occur in most, even amenorrheic, users, ovulation suppression is not the primary mode of action (Lähteenmäki et al 2000). Serum levels of LNG are usually not sufficient to suppress ovulation, as a release of 50 μg per 24 h of LNG would be necessary to completely inhibit ovulation (Lähteenmäki et al 2000). The local progestative effect of the LNG-IUS on the endometrium manifests over a period of 3 months and more after insertion (Zalel et al 2003). This means that it can take up to 3 months for the initial menstrual disturbances to settle. Women should be counseled accordingly so as to decrease the discontinuation rate of the LNG-IUS due to the initial menstrual disturbances.
Insertion
The LNG-IUS can be inserted at any time in the menstrual cycle if it is reasonably certain the woman is not pregnant. However, compared with the Cu-IUD, which is effective immediately, it takes 7 days to provide effective contraceptive protection. Hence additional contraception or abstinence should be advised for 7 days after inserting the LNG-IUS unless it is inserted in the first 7 days of the cycle, or when switching from a different method of contraception, unless the current contraceptive method is still effective (FSRH Guidance 2007). While the insertion procedure may be relatively easy compared with insertion of other IUDs, some women may need analgesia and cervical dilatation (Jensen et al 2008). IUDs can be inserted immediately after first or second trimester abortion and from 4 weeks post partum, irrespective of the mode of delivery (El Tagy 2003; NICE 2005). In complicated valvular heart disease, prophylactic antibiotics should be used at the time of insertion to prevent endocarditis (WHO 2004).
Contraindications
The LNG-IUS should be avoided in patients with unexplained vaginal bleeding. It is preferably avoided in the presence of sexually transmitted diseases such as chlamydia and gonorrhea. In a systematic review, Mohllajee et al (2006) reported that with IUD insertion in the presence of chlamydia infection or gonorrhea, subsequent pelvic inflammatory disease (PID) rates were 0%-5%, compared with insertion in the absence of infection (0%-2%).
Contraceptive benefi ts
The LNG-IUS provides highly effective contraception and is equally efficient in all age groups, with the risk of failure similar throughout the life span of the device. The 5-year cumulative pregnancy rate per 100 users is 0.5 and the 5-year Pearl rate 0.11 (Backman et al 2004). The cumulative pregnancy rate at 5 years is <0.5% (Thonneau and Almont 2008). Its use in lactating women provides highly effective and acceptable contraception and does not negatively influence breast-feeding or the growth and development of breast-fed infants (Shaamash et al 2005). Women with an intrauterine pregnancy with an LNG-IUS in situ should be advised to have the LNG-IUS removed before 12 completed weeks' gestation whether or not they intend to continue the pregnancy (NICE 2005).
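For readers unfamiliar with the metric, the Pearl index is the number of pregnancies per 100 woman-years of exposure, conventionally computed as (number of pregnancies × 1200) / (woman-months of exposure). The two figures quoted above are mutually consistent: a Pearl rate of 0.11 per 100 woman-years accumulated over 5 years gives roughly 0.11 × 5 ≈ 0.55 pregnancies per 100 women, the same order as the 5-year cumulative rate of 0.5 per 100 users (this cross-check is ours, not taken from the cited studies).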
It can be safely used in women with a past history of PID or ectopic pregnancy, women with fibroids, and young nulliparous women (WHO 2004). The LNG-IUS is medically safe for women to use if estrogen is contraindicated (NICE 2005).
The LNG-IUS is both safe and extremely efficacious for use in nulliparous women, with no greater risk of perforation or expulsion (Prager and Darney 2007). In fact, the LNG may be protective against infection via thickening of the cervical mucus (Jonsson et al 1991) and decreased menstrual blood loss. Nulliparous users are at no greater risk of infection and infertility than multiparous users, and it is safe to offer post-abortion placement of the LNG-IUS to nulliparous women (Prager and Darney 2007). In a randomized study of young nulliparous women (Suhonen et al 2004), the safety and acceptability of the LNG-IUS for contraception was observed to be as good as with oral contraceptives, with a high continuation rate. The discontinuation rate in the first year of the LNG-IUS is 20%, indicating that acceptability is similar among nulliparous and parous women (Prager and Darney 2007).
In contrast to the copper IUDs, the LNG-IUS is not recommended for emergency contraception. The absence of embryotoxic copper ions and the relatively low serum levels of levonorgestrel obtained immediately following LNG-IUS insertion compared with standard hormonal emergency contraception suggests that it may not be effective (ESHRE Capri Workshop Group 2008).
The LNG-IUS compares favorably with other contraceptive methods. In randomized comparative trials (RCTs), pregnancy rates were significantly lower with the LNG-IUS than with copper devices (Sivin et al 1991; Andersson et al 1994; Pakarinen et al 2003), though a Cochrane review of 21 RCTs (French et al 2004) concluded that there is insufficient evidence that the LNG-IUS is more effective than copper IUDs. In a recent systematic review of the Cochrane Library for all IUD-related reviews (Grimes et al 2007), the LNG-IUS was found to have efficacy comparable to that of IUDs with >250 mm² of copper; immediate post-partum and post-abortal insertion appeared safe and effective, and prophylactic antibiotics at the time of insertion appeared unwarranted except in populations with a high prevalence of sexually transmitted infections (STIs). The LNG-IUS and tubal sterilization have comparably high effectiveness, with the LNG-IUS the safer option, and all women, particularly young women, who are at high risk for sterilization regret, should be encouraged to consider the LNG-IUS in place of a surgical procedure that is potentially irreversible (Grimes and Mishell 2008).
Repeated use of the device has had favorable outcomes. The initial bleeding problems that are frequently observed after the insertion of the first LNG-IUS do not recur after an immediate change from the first IUS to the second (Rönnerdag and Odlind 1999). In contrast to other long-acting progestin-only contraceptives, the LNG-IUS has no effect on bone mineral density (Inki et al 2007).
There does not seem to be a delay in the return of fertility following removal of the Mirena® coil, with conception rates of 79.1 per 100 women at 12 months after removal (Andersson et al 1992).
Non-contraceptive benefits of the LNG-IUS
Menorrhagia
Available medical treatments for menorrhagia include the LNG-IUS, non-steroidal anti-inflammatory drugs, antifibrinolytic drugs, progestogens, oral contraceptives, and danazol. The choice of medical treatment can depend on individual factors such as the requirement for contraception and dysmenorrhea. However, first-line therapy with drugs has variable efficacy and, at best, oral medication reduces menstrual blood loss by only 50% (Istre and Qvigstad 2007). The immediate and intense suppression of the endometrium leads to over 90% reduction of menstrual blood loss over a period of 12 months (Anderson and Rybo 1990), along with a significant beneficial increase in hemoglobin and ferritin levels (Xiao et al 2003).
In a randomized controlled trial (Hurskainen et al 2004), health-related quality-of-life outcomes associated with the LNG-IUS and hysterectomy were similar, with financial benefits in favor of the LNG-IUS. In a Cochrane review of 10 randomized controlled trials (Lethaby et al 2005), the LNG-IUS was more effective than other medical interventions, with a 90% reduction from baseline in menstrual blood loss. Although the LNG-IUS results in a smaller reduction in menstrual blood loss than endometrial ablation, there are no differences in women's rates of satisfaction or quality of life. In a Cochrane systematic review of 8 trials (Marjoribanks et al 2006), use of the LNG-IUS was more cost effective, with levels of satisfaction and quality of life similar to those after surgical treatment such as transcervical endometrial resection, balloon ablation or hysterectomy. Further long-term studies are needed to compare the effectiveness of the LNG-IUS against conservative surgical treatments.
Inherited bleeding disorders may be the cause of menorrhagia in up to 13% of women and the LNG-IUS is an effective treatment option in such women (Kadir and Chi 2007), as medical treatments may otherwise be contraindicated and surgery carries additional risks. It is also an effective treatment for menorrhagia in women receiving oral anticoagulation (Pisoni 2005).
The LNG-IUS is cost effective in the treatment of menorrhagia, while offering reliable contraception. Compared with oral contraceptives and surgical treatment, treatment strategies employing the LNG-IUS are the most cost-effective in managing dysfunctional uterine bleeding in women not desiring additional children (Blumenthal et al 2006). LNG-IUS followed by endometrial ablation may be the most cost-effective treatment for menorrhagia, when compared with immediate surgery (Clegg et al 2007).
Endometriosis
The LNG-IUS delivers significant amounts of levonorgestrel into the peritoneal fluid (Lockhat et al 2005), and this may explain the pain relief in patients with peritoneal endometriosis.
Medical treatments that are based on the reduction of lesions or on ovarian estrogen suppression cause profound hypoestrogenism, inducing a decrease in bone mineral density; hence treatment is limited to 6 months (d'Arcangues 2006), although longer treatment with add-back hormone therapy is possible. In addition, there are systemic side effects, and the need for regular administration could affect compliance. In such patients the LNG-IUS can be a useful alternative.
In a systematic review on the use of the LNG-IUS for symptomatic endometriosis following surgery, post-operative use of the LNG-IUS reduced the recurrence of painful periods in women who had had surgery for endometriosis, while there was insufficient evidence for other benefits such as a reduced likelihood of further surgery for endometriosis and improved long-term fertility (Abou-Setta et al 2006). In a randomized controlled trial, insertion of the Mirena® coil significantly reduced the medium-term risk of recurrence of moderate or severe dysmenorrhea compared with expectant management following operative laparoscopy for symptomatic endometriosis (Vercellini et al 2003). In another RCT (Petta et al 2005), the LNG-IUS and a depot GnRH analog were equally effective in significantly decreasing endometriosis-related pain. However, an advantage of the LNG-IUS is that it does not provoke hypoestrogenism while being effective for 5 years. There is insufficient information on the efficacy of the LNG-IUS in the possible prevention of endometriosis recurrence.
Adenomyosis
The LNG-IUS has been reported to be useful in women with adenomyosis, although studies have been limited by small numbers. Its use may significantly reduce pain and abnormal bleeding associated with adenomyosis, along with significantly reduced adenomyotic lesions, as evaluated by the thickness of the junctional zone (Braghetoa et al 2007). A long-term study showed that the use of the LNG-IUS led to significant pain relief, reduction in uterine volume and menstrual blood loss, and improvements in hematologic indices in patients with adenomyosis (Cho et al 2008); however, there was a gradual increase in uterine volume, pain scores, and pictorial blood loss assessment chart scores at 2 years after insertion, and the authors suggested that to maintain the efficacy of the LNG-IUS for the management of adenomyosis, a new device might be needed after 3 years.
Fibroids
The LNG-IUS appears safe and effective in the treatment of menorrhagia in women with uterine cavities distorted by submucosal fibroids (Soysal and Soysal 2005). A recent review (Kaunitz 2007) of the published literature suggested that, in women with uterine fibroids, with or without menorrhagia, the LNG-IUS reduces menstrual blood loss and likely reduces menstrual pain while maintaining high contraceptive efficacy. However, expulsion rates are higher and there is inconsistent evidence on whether the LNG-IUS decreases uterine/fibroid dimensions. Although symptomatic improvement may not be uniform, these findings indicate that the LNG-IUS is a useful therapeutic option for selected women with menstrual symptoms associated with uterine fibroids.
Endometrial protection
The targeted delivery of progestagen into the uterine cavity is a preferred route in women who need endometrial protection, owing to its high efficacy and the absence of systemic side effects.
Use of oral tamoxifen as adjuvant therapy for women with breast cancer has improved survival rates. However, it exerts a weak estrogenic effect on the endometrium and hence is associated with endometrial pathologies such as polyps, hyperplasia and endometrial cancer. In view of its progestational effects, the LNG-IUS is an effective prophylaxis in the prevention of endometrial pathology in women receiving tamoxifen (Chan et al 2007; Gardner et al 2000).
The LNG-IUS adequately suppresses the endometrium during hormone replacement therapy with estrogens (Riphagen et al 2000) while avoiding the potential adverse systemic effects of progestogens. A literature review by Riphagen et al (2000) and a subsequent long-term study of post-menopausal women by Wildemeersch et al (2007a) highlighted the endometrial protection offered by the LNG-IUS in women receiving estrogen replacement therapy.
The LNG-IUS has been investigated in the treatment of non-atypical and atypical hyperplasia as a useful alternative to hysterectomy, especially in younger women who still wish to become pregnant or in women who refuse operation or are in poor health. While studies (Wildemeersch et al 2007b; Varma et al 2008) suggest it may be an effective option for suppressing the endometrium, there have been reports of progression of atypical endometrial hyperplasia to adenocarcinoma despite intrauterine progesterone treatment with the levonorgestrel-releasing intrauterine system (Kresowick et al 2008). Hence extreme caution should be exercised, and robust randomized controlled trials are needed to evaluate the effectiveness of the LNG-IUS in treating endometrial hyperplasia.
Side effects
The adverse events of interest fall into 2 categories: those related to an intrauterine device, such as dysmenorrhea, irregular bleeding, ectopic pregnancy, and expulsion of the device; and those related to progestogens, such as bloating, weight gain, and breast tenderness. In a systematic review of the literature, reported cumulative discontinuation rates with the LNG-IUS were as high as 24% after 1 year and 33% after 2 years (NICE 2005).
Bleeding complications
Overall, the commonest reason for discontinuation is unacceptable bleeding patterns.
Up to 60% of women stop using the LNG-IUS within 5 years, which is similar to other IUDs, unacceptable vaginal bleeding and pain being the most common reasons for discontinuation (NICE 2005). Even though irregular bleeding and spotting are common during the first 6 months following LNG-IUS insertion, oligomenorrhea or amenorrhea is likely by the end of the first year of LNG-IUS use (NICE 2005). Since frequent irregular bleeding is common during the first few months following system insertion, proper counseling of the patient about possible bleeding patterns is crucial in order to minimize premature LNG-IUS removals. Since amenorrhea is an expected outcome (occurring in about 20% of users at 12 months), adequate counseling provides reassurance that the absence of bleeding does not generally signify pregnancy or other problems, leading to a high continuation rate and a high level of patient satisfaction (Jensen et al 2008).
Information received at the insertion visit is strongly associated with increased user satisfaction among users of the LNG-IUS (Backman et al 2002), with the association strongest for advance information on the possibility of missing periods.
Uterine perforations
The incidence of uterine perforation related to the insertion of an LNG-IUS is around 2.6 per 1000 insertions (Van Houdenhoven et al 2006). Insertion in lactating women, even beyond 6 weeks after delivery, is an important risk factor. The manufacturer of the LNG-IUS currently recommends that post-partum insertions should be postponed until 8 weeks after delivery. Uterine perforation at insertion seems less likely to occur if a withdrawal rather than a push-out technique - the recommended technique for an LNG-IUS - is used.
Expulsion and displacement
Expulsion of an IUD occurs in approximately 1 in 20 women, and is most common in the first 3 months after insertion (NICE 2005). Patients at increased risk of expulsion include nulliparous women, women with severe dysmenorrhea, and those with insertions immediately post partum or post abortion. There is insufficient evidence to indicate that expulsion rates are lower with the LNG-IUS (Chrisman et al 2007). There are no differences in the rates of expulsion between Cu-IUDs and the LNG-IUS (FSRH 2007). As expulsion generally occurs within the first few months, women should be encouraged to attend follow-up within 12 weeks of insertion.
It is rare for the LNG-IUS to become displaced, and there is conflicting evidence on how best to manage these patients. An intraperitoneally dislocated LNG-IUS results in plasma levonorgestrel levels 10 times higher (4.7 nmol/L) than those seen with an LNG-IUS placed in utero. This high plasma levonorgestrel level suppresses ovulation, and it has therefore been suggested that a misplaced LNG-IUS should be removed when pregnancy is desired, as opposed to the copper IUD, which may be left intraperitoneally, especially if asymptomatic (Haimov-Kochman et al 2003). However, pregnancies have also been documented with a displaced LNG-IUS (Budiman et al 2007).
Ectopic pregnancy
The LNG-IUS is a very effective contraceptive and the absolute risk of pregnancy (intrauterine and ectopic) is very low. A previous ectopic pregnancy is not a contraindication to the use of intrauterine contraception (FSRH 2007). The risk of ectopic pregnancy when using the LNG-IUS is lower than when using no contraception. The overall risk of ectopic pregnancy when using the LNG-IUS is very low, at about 1 in 1000 in 5 years. If a woman becomes pregnant with the LNG-IUS in situ, the risk of ectopic pregnancy is about 1 in 20 (NICE 2005). Similar rates of ectopic pregnancy are reported for the LNG-IUS and Cu-IUDs (French et al 2004).
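As a rough cross-check (ours, combining figures from the different studies quoted in this review): if about 0.5 per 100 users (5 per 1000) become pregnant over 5 years and roughly 1 in 20 of those pregnancies is ectopic, the expected ectopic rate is 5/1000 × 1/20 ≈ 0.25 per 1000 over 5 years, the same order of magnitude as the quoted figure of about 1 in 1000 in 5 years.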
Infection
The risk of developing PID following LNG-IUS insertion is very low (less than 1 in 100) in women who are at low risk of STIs, and removal due to PID among LNG-IUS users is below 1% at 1 year and below 1.5% at 5 years (NICE 2005). A systematic review reported that there is conflicting evidence on whether the levonorgestrel IUD is associated with a lower risk of PID than other IUDs, and any risk of upper-genital-tract infection after the first month is small (Grimes 2000). The protective effect of the LNG-IUS may be due to impenetrable cervical mucus, endometrial changes, or reduced retrograde menstruation (Toivonen et al 1991). If a woman were to develop PID with the IUD in place, it may be reasonable to offer initial treatment without immediate removal (WHO 2004). In rare cases of pelvic infection secondary to Actinomyces israelii, device removal in conjunction with antibiotic treatment is more successful at clearing the colonization than antibiotics alone (Bonacho et al 2001).
A woman who currently has an STI such as gonorrhea or chlamydia, or is at very high risk, should not have an IUD inserted, as insertion may increase the risk of PID. If a high-risk patient screens negative, then an IUD can be inserted; if the screen is positive, then an IUD can be inserted after treatment, if she is not at risk of reinfection by the time of insertion (WHO 2007). In exceptional circumstances, if other, more appropriate methods are not available or not acceptable, an IUD can be inserted in high-risk individuals even if STI testing is not available. Presumptive treatment should be considered, with a full curative dose of antibiotics effective against both gonorrhea and chlamydia and insertion of the IUD after completion of treatment. The patient should be carefully checked for signs of infection at follow-up and treated accordingly, while being advised to return at once if there are any signs of infection (WHO 2007).
Farley et al (1992) reported that PID among IUD users is most strongly related to the background risk of STI, and hence screening for chlamydia should always be considered prior to inserting the LNG-IUS.
A systematic review to assess the effectiveness of prophylactic antibiotic administration before IUD insertion in reducing IUD-related complications and discontinuations within 3 months of insertion highlighted the low risk of IUD-associated infection, with or without use of antibiotic prophylaxis (Grimes and Schulz 2001). Another systematic review (Mohllajee et al 2006) suggested that women with chlamydial infection or gonorrhea at the time of IUD insertion were at increased risk of PID relative to women without infection, the absolute risk of PID being low for both groups. However, whether IUDs increase the risk of PID in women with an STI at the time of insertion is not known (Mohllajee et al 2006).
Ovarian cysts
The use of the LNG-IUS is associated with a small risk of development of ovarian cysts (Inki et al 2002). The precise mechanism by which the ovarian cysts are caused is not known, but it may be secondary to disturbances in the normal growth and rupture of follicles during LNG-IUS use. However, in a prospective, randomized trial by Inki et al (2002) these were symptomless and showed a high rate (94%) of spontaneous resolution, and hence no routine ultrasound screening of women using the LNG-IUS is necessary.
Other rare side effects
Unrecognized retention in the uterine cavity of the active part (hormone-releasing capsule) of an LNG-IUS may lead to secondary amenorrhea. Although LNG-IUSs are inserted and removed without particular difficulty in most cases, it may be prudent to check the device following removal to ensure that the capsule remains attached to the rest of the device (Forrest et al 2008).
Hormonal complications
The systemic absorption of levonorgestrel has the potential to cause hormonal side effects. The LNG-IUS releases 20 μg per day of levonorgestrel, so drug-related adverse events are less frequent than with oral preparations of progesterone, which result in higher serum concentrations.
However, discontinuation due to hormonal (non-bleeding) problems is rare. While changes in mood and libido or weight gain are similar whether using the LNG-IUS or IUDs, there is an increased possibility of developing acne (NICE 2005). While some women may complain of headaches, women who have migraine with or without aura may use the LNG-IUS.
The use of the LNG-IUS has not been associated with an increased risk of breast cancer (Backman et al 2005). In women with a past history of breast cancer, Trinh et al (2007) reported that, overall, there was no increased risk of breast cancer recurrence associated with use of the LNG-IUS; subgroup analysis suggested that while the LNG-IUS is not associated with an increased risk of recurrence in patients who start using the LNG-IUS after completing their breast cancer treatment, women who developed breast cancer while using an LNG-IUS and who continued to use it showed a higher risk of recurrence of borderline statistical significance. Hence additional research is needed to confirm or refute these findings (Trinh et al 2007).
Conclusion
Women contemplating sterilization or hysterectomy seek a long-term solution for contraception or treatment of menorrhagia. The LNG-IUS is one of the most versatile forms of long-acting reversible contraception. New developments in the delivery of levonorgestrel-releasing intrauterine devices, such as the Femilisk® (parous women), the Femilisk Slim® (nulliparous women), and the frameless FibroPlant® levonorgestrel LNG-IUS, possess features that may solve the main problems encountered with conventional IUDs (eg, expulsion, abnormal or excessive bleeding, and pain) (Wildemeersch 2007). The LNG-IUS is an extremely effective contraceptive and has many non-contraceptive health benefits, including suppression of menstruation, maintenance of iron stores, improvement in dysmenorrhea, and endometrial protection for women on estrogen replacement therapy.
Disclosures
Neither author has any conflicts of interest to disclose. | 2018-04-03T04:53:11.782Z | 2008-02-02T00:00:00.000 | {
"year": 2008,
"sha1": "71ed79e81d1d0cfb48b8bd62b24401e9180b66f7",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=3675",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "71ed79e81d1d0cfb48b8bd62b24401e9180b66f7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |