| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
231607761 | pes2o/s2orc | v3-fos-license | Proteome-wide and lysine crotonylation profiling reveals the importance of crotonylation in chrysanthemum (Dendranthema grandiforum) under low-temperature
Background: Low temperature severely affects the growth and development of chrysanthemum, a well-known ornamental plant widely cultivated around the world. Lysine crotonylation is a recently identified post-translational modification (PTM) with multiple cellular functions. However, lysine crotonylation under low-temperature stress has not been studied. Results: The proteome and lysine crotonylome of chrysanthemum under low temperature were analyzed using TMT (Tandem Mass Tag) labeling, sensitive immuno-precipitation, and high-resolution LC-MS/MS. The results showed that 2017 crotonylation sites were identified in 1199 proteins. Treatment at 4 °C for 24 h followed by −4 °C for 4 h resulted in 393 upregulated proteins and 500 downregulated proteins (1.2-fold threshold, P < 0.05). Bioinformatics analysis showed that lysine crotonylation was involved in photosynthesis, ribosomes, and antioxidant systems. The crotonylated proteins and motifs in chrysanthemum were compared with those of other plants to obtain orthologous proteins and conserved motifs. To further understand how lysine crotonylation at K136 affects APX (ascorbate peroxidase), we performed site-directed mutation at K136 of APX. These experiments showed that simulated decrotonylation at K136 reduced APX activity, whereas simulated complete crotonylation at K136 increased APX activity. Conclusion: In summary, our study comparatively analyzed the proteome and crotonylome of chrysanthemum under low-temperature stress and provided insights into the mechanism by which crotonylation positively regulates APX activity to reduce the oxidative damage caused by low-temperature stress. These data provide an important basis for studying how crotonylation regulates antioxidant enzyme activity in response to low-temperature stress, and they offer new research directions for molecular breeding of chilling- and freezing-tolerant chrysanthemum. Supplementary Information: The online version contains supplementary material available at 10.1186/s12864-020-07365-5.
Background
Plants are often subjected to various environmental stresses, including low temperature, that can seriously affect their growth and development. It has been established that plants respond to environmental stresses through a complex set of biological mechanisms [1][2][3][4][5].
Recently, a growing number of PTMs have been shown to play important roles in plant abiotic stress responses [6][7][8]. As proteomics technology has matured, the joint analysis of the proteome and protein modifications has helped clarify the mechanisms by which plants respond to environmental stress.
PTMs of proteins include phosphorylation, acetylation, ubiquitination, sumoylation, glycosylation, methylation, and so forth. They participate in cellular activities mainly through signal transduction, the regulation of protein stability and activity, the regulation of gene expression, and the maintenance of genome integrity [9][10][11][12]. Among them, acylation of lysine is the most studied modification [13]. Lysine crotonylation is a new type of histone lysine acylation; 28 crotonylation sites were first identified on histones in human somatic cells and mouse male germ cells. Crotonylation of histones is closely related to gene transcription and replication. The chemical group added by crotonylation on histones is crotonyl, with crotonyl-CoA as its main donor [14]. Research in mice has shown that histone crotonylation is associated with acute kidney injury [15]. There are also a large number of crotonylation modifications on non-histone proteins. In non-histone proteins of HeLa cells, 1185 crotonylation sites were identified, and they are closely related to DNA and RNA metabolism and the cell cycle [16]. A total of 2696 crotonylation sites were identified on 1024 non-histone proteins in H1299 cells; they are involved in multiple signaling pathways and cell functions, such as RNA processing, nucleic acid metabolism, chromosome assembly, gene expression, and Parkinson's disease pathways [17]. Current studies have shown that the writers that promote crotonylation include p300/CBP, PCAF, and hMOF [17][18][19]. The erasers for decrotonylation mainly include SIRT1-3, HDAC1, and HDAC3 [17,20,21]. In plants, lysine crotonylation was first identified in tobacco in 2017, and little research has been done on plant crotonylation so far. A total of 2044 and 5995 crotonylation sites were identified on 637 and 2120 proteins of tobacco and papaya, respectively [22,23]. In rice seedlings, 1265 crotonylation sites were identified on 690 proteins; these modifications are crucial in the regulation of rice gene expression [24]. A total of 45 crotonylation sites were identified on rice histones, which have important functions in rice gene activation under starvation and submergence [25]. In tea leaves, 120 and 151 crotonylated proteins were differentially expressed after 3 h and 3 d of ammonium resupply, respectively [26]. These findings suggest that lysine crotonylation may play a potentially important role in plant responses to environmental stress. However, lysine crotonylation related to low-temperature stress has not yet been studied.
Chrysanthemum is one of the most important ornamental plants in the cut-flower market, and it is susceptible to chilling stress during flowering. It is therefore important to understand the cold-tolerance mechanism of chrysanthemum and to cultivate new chrysanthemum varieties. In this study, the dynamic changes of proteome-wide crotonylation in chrysanthemum were quantified by TMT labeling, sensitive immuno-precipitation, and high-resolution LC-MS/MS. The results showed that 2017 crotonylation sites were identified in 1199 proteins. The crotonylated proteins and motifs in chrysanthemum were compared with those of tea, rice, papaya, and tobacco to obtain orthologous proteins and conserved motifs. To further study the mechanism of crotonylation in chrysanthemum under abiotic stress, we investigated the effects of different crotonylation states on the activity of APX in the antioxidant system through site-directed mutation.
Results
Chrysanthemum seedling survival rate and physiological changes under low-temperature treatment
As shown in Fig. 1a, the chrysanthemum seedlings grew normally, and the phenotype of chrysanthemum showed no noticeable difference from normal conditions after 24 h of treatment at 4 °C. After 4 h of treatment at −4 °C, the leaves of the seedlings appeared wilted and dehydrated, and the whole plant died. Moreover, the survival rate of the control group (CK) was 100% after 2 weeks of recovery from low-temperature stress, while that of the treatment group (T) was only 62% (Fig. 1b).
Through histochemical staining of chrysanthemum leaves, we further characterized the oxidative damage to chrysanthemum under low-temperature stress. As shown in Fig. 1c, d, chrysanthemum leaves accumulated more H2O2 and O2− at low temperature, indicating that chrysanthemum suffers more severe oxidative stress at low temperature. The activities of antioxidant enzymes (APX, peroxidase (POD), superoxide dismutase (SOD), and catalase (CAT)) in chrysanthemum treated at low temperature were significantly higher than those under normal conditions (Fig. 1e-h). The content of glutathione (GSH) was also significantly higher than under normal conditions (Fig. 1i). In addition, a decrease in chlorophyll content was observed under low temperature (Fig. 1j). These results indicate that chrysanthemum counters the negative effects of low-temperature stress by adjusting the activities of antioxidant enzymes.
Identification of proteome-wide and lysine crotonylation sites in chrysanthemum under low-temperature
This study used a proteomic method based on sensitive immuno-precipitation and high-resolution LC-MS/MS to identify the crotonylated proteins and their modification sites in chrysanthemum (Fig. S1a). The distribution of peptide lengths in the proteomic analysis shows that most peptides were between 7 and 13 amino acids long, consistent with the properties of tryptic peptides (Fig. S1b). At the same time, Fig. S1c shows that the peptide mass error distribution was close to zero, confirming the validity of the MS data.

Fig. 1 The phenotype, survival rate, and ROS content of chrysanthemum under low-temperature stress. T refers to chrysanthemums treated first at 4 °C for 24 h and then at −4 °C for 4 h, while CK is chrysanthemum without low-temperature treatment. The value measured without low-temperature treatment is taken as 1, and the relative content refers to the ratio between the low-temperature treatment and the untreated control. a Phenotypic changes of chrysanthemum plants under low temperature; (b) survival rate of chrysanthemum under low temperature; (c-d) histochemical staining with DAB and NBT for assessing the accumulation of H2O2 and O2−, respectively, under low temperature. e Relative activity of SOD in chrysanthemum leaves before and after low-temperature treatment; (f) relative activity of POD; (g) relative activity of CAT; (h) relative activity of APX; (i) relative content of glutathione; (j) relative content of chlorophyll. Data represent means and standard errors of three replicates. Different letters above the columns indicate significant (P < 0.05) differences according to Duncan's multiple range test.

A total of 6693 proteins were identified, of which 5339 were quantified. Among the quantified proteins, 393 were upregulated and 500 were downregulated (1.2-fold threshold; P < 0.05). During low-temperature treatment, the number of downregulated proteins was greater than the number of upregulated proteins (Table S1). In addition, the total number of peptide sequences was 2122, the total number of peptides (including modified and non-modified) was 2238, the number of modified peptides was 2173, and the enrichment efficiency was 97.1%. A total of 2017 crotonylation sites were identified in 1199 proteins, among which 1787 lysine crotonylation sites in 1076 proteins were quantified, and 1089 sites in 572 proteins were normalized. Proteins with quantitative ratios above 1.2 or below 1/1.2 were deemed significantly changed. Among the quantified proteins after proteome normalization, 89 lysine crotonylation sites in 61 proteins were upregulated and 87 sites in 72 proteins were downregulated in the treatment group (Table S2).
We further analyzed the functions and characteristics of the identified and quantified proteins by gene ontology annotation, domain and pathway analysis, and predicted subcellular localization. Several proteins were found to contain a large number of lysine crotonylation sites in the detailed list of sites and their matching proteins (Table S2). For example, a 'histone H2B-like' protein, a 'probable ATP synthase 24 kDa subunit, mitochondrial' protein, and a 'stromal 70 kDa heat shock-related, chloroplast-like' protein contained 14, 14, and 13 crotonylation sites, respectively.
Functional classification analysis of differentially quantified crotonylated proteins in chrysanthemum under low-temperature
Crotonylated proteins were annotated by bioinformatics analysis of GO terms and predicted subcellular localization. The functions of all crotonylated proteins after GO annotation can be divided into three main categories: biological process, cellular component, and molecular function. In the cellular component category, the majority of crotonylated proteins were predicted to be related to the cell, macromolecular complexes, organelles, and membranes. In the biological process category, many of the crotonylated proteins were enriched in metabolic processes, cellular processes, and single-organism processes. The analysis of molecular function showed that most crotonylated proteins were related to binding, catalytic activity, structural molecule activity, and transporter activity (Fig. 2a-c). A more detailed classification of differentially quantified crotonylated proteins is shown in Table S3. Prediction of subcellular localization showed that 44% of crotonylated proteins were located in the chloroplast, 38% in the cytoplasm, and 12% in the nucleus (Fig. 2d).
Functional enrichment analysis of differentially quantified crotonylated proteins in chrysanthemum under low-temperature
Enrichment analysis of GO terms and KEGG pathways was used to further understand the biological functions of the crotonylated proteins. GO enrichment analysis showed that, for molecular function, many crotonylated proteins were significantly enriched in structural constituent of the ribosome, structural molecule activity, and proton-transporting ATP synthase activity. Based on cellular component enrichment, most crotonylated proteins were mainly enriched in the proton-transporting ATP synthase complex, catalytic core F(1), the proton-transporting ATP synthase complex, and the large ribosomal subunit. Based on biological process enrichment, most crotonylated proteins were mainly enriched in the cellular nitrogen compound biosynthetic process, the nucleobase-containing compound biosynthetic process, and the organonitrogen compound biosynthetic process (Fig. 2e). KEGG pathway enrichment implicated the ribosome and RNA degradation pathways (Fig. 2f). Thirteen crotonylated proteins were enriched in the ribosome pathway alone.
Conservation analysis of crotonylated proteins of chrysanthemum compared with other plants
We first used BLASTP to compare the crotonylated protein sequences of chrysanthemum (1199) against the crotonylated protein sequences of four other species: tea (971), rice (690), papaya (2219), and tobacco (637). By applying a reciprocal best BLAST hit approach, we determined the orthologous proteins among these datasets. We found that chrysanthemum has 683, 562, 853, and 442 orthologous crotonylated proteins with tea, rice, papaya, and tobacco, respectively (Fig. 3, Table S4). Meanwhile, among these orthologous crotonylated proteins, those related to the ribosome pathway, the photosynthesis pathway, and the antioxidant system were selected (Tables S5-S7).
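To make the orthology-calling step concrete, here is a minimal sketch of the reciprocal best BLAST hit logic, assuming BLASTP has already been run in both directions with tabular output (outfmt 6); the file names and the choice of bit score as the ranking criterion are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of reciprocal-best-hit (RBH) orthology calling, assuming
# BLASTP was run in both directions with tabular output (-outfmt 6).
# File names and the use of bit score for ranking are illustrative.

def best_hits(blast_tab_path):
    """Return {query: best_subject}, keeping the highest bit score per query."""
    best = {}
    with open(blast_tab_path) as fh:
        for line in fh:
            fields = line.rstrip("\n").split("\t")
            query, subject, bitscore = fields[0], fields[1], float(fields[11])
            if query not in best or bitscore > best[query][1]:
                best[query] = (subject, bitscore)
    return {q: s for q, (s, _) in best.items()}

# Forward: chrysanthemum Kcr proteins vs. the other species; reverse: vice versa.
fwd = best_hits("chrysanthemum_vs_tea.tsv")
rev = best_hits("tea_vs_chrysanthemum.tsv")

# A pair is called orthologous only if each sequence is the other's best hit.
orthologs = [(q, s) for q, s in fwd.items() if rev.get(s) == q]
print(len(orthologs), "reciprocal best hit pairs")
```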
To further analyze the orthologous crotonylated proteins of chrysanthemum and other plants, we conducted a conservation analysis of crotonylated lysines. The results showed that chrysanthemum and papaya shared the highest number of conserved lysines, and chrysanthemum and tobacco the lowest (Table S8). However, compared with the other plants, the orthologous proteins of chrysanthemum and tea had the highest conservation percentages for both crotonylated and non-crotonylated lysines (Fig. 4). This shows that lysines were most conserved among the orthologous crotonylated proteins of chrysanthemum and tea.
Motif analysis of lysine-crotonylated peptides
After evaluating the characteristics of all identified crotonylated peptides, it was found that 792 (66.06%) of the 1199 Kcr proteins contained a single lysine crotonylation (Kcr) site, and 97 (6.59%) contained four or more Kcr sites. A total of 14 conserved crotonylation motifs were identified (KcrK, KcrD, FKcrE, EKcrG, KcrE, FKcr, YKcr, EKcr, DKcr, NKcr, AKcr, PKcr, WKcr, and GKcr), and they exhibited different abundances. Analysis of fold increase showed that FKcrE was significantly enriched (Fig. 5a). Positively charged K residues were enriched at positions −10 to −6 and +1 to +8, while enrichment of the negatively charged residues D and E was observed at positions −1 to +5. In accordance with these findings, crotonylation preferentially occurred on lysine residues adjacent to aspartate, glutamate, and lysine (Fig. 5b).
Comparing the conserved motifs identified in chrysanthemum with those of other plants revealed a large number of shared conserved motifs (Fig. 6, Table S9) [22][23][24]26]. It is worth noting that chrysanthemum and the four other studied plants all contain the conserved motifs KcrD, KcrE, and EKcr, indicating that these three motifs are generally conserved in plants. Meanwhile, KcrK, FKcrE, PKcr, and WKcr are new plant crotonylation motifs found in chrysanthemum.
Crotonylation and proteome crosstalk analysis
A total of 6693 proteins were identified, of which 5339 were quantified. Among the quantified proteins, 393 were upregulated and 500 were downregulated (1.2-fold threshold, P < 0.05). In the crotonylation analysis of T vs. CK, there were 2017 lysine crotonylation sites in 1199 proteins in total, and 1787 sites in 1076 proteins were quantified. After removal of changes attributable to protein-level changes, 1089 sites in 572 proteins were normalized. Among the quantified proteins after proteome normalization, 89 lysine crotonylation sites in 61 proteins were upregulated and 87 sites in 72 proteins were downregulated (1.2-fold threshold, P < 0.05) (Table S2). In aggregate, most proteins (53) showed opposite changes at the protein and crotonylation levels, and very few (3) showed consistent changes. A total of 671 proteins were identified in both the proteome and crotonylome datasets.
Meanwhile, correlation analysis between the T vs. CK proteome and crotonylome showed more points in quadrants 2 and 4 than in quadrants 1 and 3. This suggests that the trends of crotonylation and the proteome were not completely consistent, which may be caused by the low-temperature treatment (Fig. S2).
Lysine crotonylation affects APX activity
Significant upregulation of the crotonylation level of K136 on APX was detected at low temperature (Table S2). APX is an important enzyme for plants to resist ROS, and studies have shown that post-translational modifications can regulate APX activity [27,28]. However, how lysine crotonylation affects APX activity under low-temperature stress has not been reported. Based on NCBI analysis (https://www.ncbi.nlm.nih.gov/Structure/cdd/wrpsb.cgi) of the DgAPX domain, DgAPX contains an ascorbate peroxidase conserved domain of 245 amino acids spanning positions 5 to 250, with all 24 heme binding sites, all 8 substrate binding sites, and all 6 K+ binding sites. Multiple sequence comparison with the APX protein sequences of other plants revealed that DgAPX is highly identical to other plant APX sequences, and K136 is located in the ascorbate peroxidase domain (Fig. 7).
To further understand how crotonylation at K136 affects APX, we performed site-directed mutation at K136 (K136 was mutated to arginine to simulate decrotonylation and to asparagine to simulate complete crotonylation). We first carried out Western blotting and APX activity assays on wild-type tobacco (WT) and tobacco infiltrated with the empty vector (pSuper1300-GFP) at normal temperature. The GFP antibody recognized only the tobacco carrying the empty vector, with no signal in wild-type tobacco (Fig. 8a), and there was no significant difference in APX activity between wild-type and empty-vector tobacco (Fig. 8b). The Western blot results showed that, with consistent protein expression, the APX activity of tobacco expressing the simulated fully crotonylated APX was 3.38-fold higher than that of tobacco expressing unmutated APX, while the APX activity of tobacco expressing unmutated APX was 4.23-fold higher than that of tobacco expressing the simulated decrotonylated APX. Meanwhile, the APX activities of all three were significantly higher than those of wild-type tobacco and empty-vector tobacco (Fig. 8b).
Discussion
Response mechanism of lysine crotonylation and the ribosome under low-temperature stress
When plants are subjected to environmental stress, ribosomes may affect protein synthesis, and ribosomes play an important biological role in plant cold tolerance [29]. The ribosomal protein Rpl33 plays an essential role when tobacco is exposed to low-temperature stress [30]. In Arabidopsis under low-temperature stress, ribosomal protein S5 plays a key role: many proteins related to the low-temperature stress response are greatly reduced in rps5-1, and overexpression of plastid RPS5 improves the cold tolerance of transgenic plants [31]. The rice ribosomal protein TCD11 is involved in the low-temperature response [32]. Under low-temperature stress, rice adapts to environmental changes by inhibiting ribosomal biological processes [33]. In tomato, low-temperature stress affects translational elongation on ribosomes and prevents plants from repairing damaged proteins [34]. These studies indicate that regulation of ribosome-related proteins is important for plants coping with low-temperature stress.
In chrysanthemum, a total of 35 proteins were associated with ribosomes, of which 25 were identified with upregulated crotonylation sites, and 14 were significantly downregulated at the protein level (Table S10). The crotonylation levels of the chaperone-related proteins were almost all downregulated, while their protein levels were not significantly changed. As molecular chaperones, annexin D2-like, annexin D1-like, membrane-associated 30 kDa protein, heat shock cognate 70 kDa protein 2, and temperature-induced lipocalin-1-like assist proteins to fold correctly so that they can perform their normal functions. This indicates that crotonylation may play a potential role in biological processes such as ribosomal protein synthesis and processing, and may be related to the plant response to low-temperature stress; the mechanism needs further study.
Interestingly, after comparison with the crotonylated proteins of other plants, among the 35 ribosome-related crotonylated proteins of chrysanthemum, 8 orthologous proteins (22.85%) were also crotonylated in tea, rice, papaya, and tobacco, and 12 (34.28%) were also crotonylated in three of these plants (Table S5). These results suggest that these 20 ribosome-related proteins are extensively modified by crotonylation in plants. Further research on the crotonylation of these proteins will help clarify the potential role of crotonylation in the ribosome pathway.
Response mechanism of lysine crotonylation and chrysanthemum photosynthesis under low-temperature stress
Abiotic stresses (such as low temperature, drought, and salt) lead to the degradation of photosynthesis-related proteins in plants, further reducing light-energy utilization and making plants more susceptible to photoinhibition [35][36][37]. In photosynthesis, ribulose-1,5-bisphosphate carboxylase/oxygenase (Rubisco) is a key enzyme involved in the fixation of CO2 [38]. Rubisco is involved in various physiological responses such as plant growth, photorespiration, glucose metabolism, heat tolerance, low-temperature stress, and salt stress [39][40][41][42][43]. Many studies have shown that the Rubisco large subunit is easily degraded under adverse conditions. Under cold stress, 19 protein spots in rice were identified as Rubisco large subunit degradation fragments [44]. Drought stress caused significant upregulation of Rubisco subunit degradation products in the beet leaf proteome [45]. Under water stress, the expression level of the Rubisco large subunit in black pine was upregulated, an increase that may be due to the degradation of Rubisco [46]. The capture and transfer of light energy in higher plants is achieved mainly through the light-harvesting pigment-protein complexes (LHC I and LHC II), of which LHC II is the most abundant antenna protein in the chloroplast thylakoids of green plants [47]. The chlorophyll a/b-binding protein is a member of the LHC that captures external energy and transmits it to the photosystems for photosynthesis [48]. Many studies have shown that Photosystem I (PS I) and Photosystem II (PS II) are important components of photosynthesis in plants. PS I photoinhibition is thought to be the main reason for photosynthetic decline in cold-sensitive plants under low temperature [49][50][51][52], whereas PS II can maintain photosynthetic efficiency and thus respond to low-temperature stress.
Under low-temperature stress, photosynthesis in chrysanthemum leaves was inhibited, and the chlorophyll content decreased significantly (Fig. 1j). Altogether, 20 crotonylated proteins associated with photosynthesis were identified in chrysanthemum under low-temperature treatment (Table S11). Our results indicate that many photosynthesis-related proteins, such as chlorophyll a/b-binding protein, PS I P700 chlorophyll a apoprotein A1, and PS I reaction center subunit II, showed opposite trends in protein level and crotonylation level. In addition, 45% (9/20) of the crotonylated proteins are related to ATP synthesis, mainly including ATP synthase subunit b and ATP synthase CF1 alpha chain. Plants photosynthesize mainly through chloroplasts. Chloroplast ATP synthase (CF1 and CF0) is present on the chloroplast thylakoid membrane and is a key enzyme for physiological activity in plants; it converts light energy into chemical energy, forming ATP to power a variety of life activities. A previous study showed that increased synthesis of the ATP synthase beta subunit improved plant stress tolerance [53]. Most crotonylated proteins associated with photosynthesis contained one crotonylation site. ATP synthase subunit b was crotonylated at four sites, all upregulated, but none of them was accompanied by a significant change at the protein level. These results indicate that crotonylation may play a potential role in the photosynthesis pathway, and that changes in crotonylation state may be related to the response of photosynthesis-related proteins under low-temperature stress. The mechanism of crotonylation in the plant photosynthesis pathway under low-temperature stress needs further study.
Interestingly, among the 20 photosynthesis-related crotonylated proteins in chrysanthemum, 8 orthologous proteins (40%) were also crotonylated in tea, rice, papaya, and tobacco, and 6 (30%) were also crotonylated in three of these plants (Table S6). These results suggest that these 14 photosynthesis-related proteins are extensively modified by crotonylation in plants. Further research on the crotonylation of these proteins will help clarify the potential role of crotonylation in the photosynthesis pathway.
Response mechanism of lysine crotonylation and the chrysanthemum antioxidant system under low-temperature stress
To prevent oxidative damage caused by low-temperature stress, plants eliminate or reduce excess reactive oxygen species (ROS) and ROS-induced toxic substances through the antioxidant enzyme system [54,55]. Many proteomic studies have shown that low temperature causes differential expression of antioxidant defense-related proteins [56,57]. Papaya proteome analysis indicates that lysine crotonylation is involved in the antioxidant system. Altogether, 32 antioxidant-related proteins were identified here, nine of which showed significant changes in crotonylation level (Table S12). The ROS-related proteins identified in chrysanthemum leaves under low-temperature stress mainly involved POD, CAT, APX, and GST. SOD is an important component of the reactive oxygen defense and clearance system, disproportionating superoxide anions into molecular oxygen, which is harmless to cells [58,59]. POD, SOD, CAT, and APX are important components of this system; under abiotic stress, including low temperature, the activities of these antioxidant enzymes change accordingly, helping plants clear excess ROS and increasing plant resistance [60][61][62][63]. GST also belongs to the antioxidant enzyme system: GSTs catalyze the conjugation of glutathione to toxic xenobiotic or oxidized products, thereby promoting the metabolism or elimination of such substances to reduce oxidative damage [64]. Among the nine antioxidant-related proteins with significant changes in crotonylation level, five (one each of CAT, APX, and SOD, and two GSTs) showed no significant change at the protein level. However, the activities of CAT, APX, and SOD increased significantly under low-temperature stress (Fig. 1e-h), which suggests that crotonylation plays a potential role in regulating the enzyme activities of the chrysanthemum antioxidant system and is related to the plant response to low-temperature stress.
At the same time, our analysis of the orthologous crotonylated proteins of chrysanthemum, tea, rice, papaya, and tobacco showed that, among the 32 antioxidant-related crotonylated proteins of chrysanthemum, 9 orthologous proteins (28.12%) were also crotonylated in tea, rice, papaya, and tobacco, and one (3.12%) was also crotonylated in three of these plants; together these cover six enzyme classes: POD, CAT, APX, GST, GPX, and SOD (Table S7). These results suggest that these 10 antioxidant-related proteins are extensively modified by crotonylation in plants. Given the important role of these antioxidant enzymes in abiotic stress, further studies of their crotonylation will help clarify the potential role of crotonylation in abiotic stress.
Lysine crotonylation regulates APX activity in chrysanthemum under low-temperature stress
Among the antioxidant proteins, APX is an important enzyme for plants to resist ROS. APXs use ascorbic acid as a substrate to convert hydrogen peroxide into water, which is harmless to plants [65,66]. For example, OsAPX2 can remove ROS from rice and protect rice from low-temperature stress [67,68]. The S-nitrosylated residue Cys32 is present in pea APX, and APX can be inactivated or activated by such nitric oxide-related modifications [27]. Persulfidation can increase the activity of APX in Arabidopsis [28]. These studies show that post-translational modifications can regulate APX activity. Similarly, this study indicates that lysine decrotonylation reduces APX activity and that complete crotonylation increases it (Fig. 8).
Under low-temperature stress, physiological measurements of chrysanthemum showed increased APX activity. Moreover, proteome analysis showed that the APX protein level was not significantly changed, whereas crotonylation at site K136 was significantly upregulated, indicating that the lysine crotonylation level of APX increased under low-temperature stress and that APX activity increased with the modification level. Crotonylation thus positively regulated APX activity under low-temperature stress, thereby reducing the oxidative damage caused by low-temperature stress.
Conclusions
In summary, our study comparatively analyzed proteome-wide crotonylation in chrysanthemum under low-temperature stress and further explored the roles of crotonylation in chrysanthemum's biological processes, especially the ribosome pathway, photosynthesis, and the antioxidant system. The data provided by this research can serve as an important resource for analyzing lysine crotonylation functions in chrysanthemum. The crotonylated proteins and motifs in chrysanthemum were compared with those of tea, rice, papaya, and tobacco to obtain orthologous proteins and conserved motifs. In addition, we mutated a specific crotonylation site and preliminarily explored the regulatory mechanism of crotonylation on an important enzyme (APX) of the antioxidant system. Given the extensive and complex roles of crotonylation in cellular processes, understanding how crotonylation participates in the low-temperature stress response will require further research on more specific lysine crotonylation sites.
Plant materials and low-temperature treatments
Chrysanthemum materials for this study were provided by the plant tissue culture room of Sichuan Agricultural University. Plants were grown on MS medium (200 μmol m−2 s−1, a 16 h photoperiod, 25 °C/22 °C day/night temperature) for 30 d, then transferred to flowerpots filled with a 1:1 peat and perlite mixture and acclimated under the same conditions for 3 d. Leaf samples were taken as CK-01, CK-02, and CK-03 at 0 h of treatment. After low-temperature treatment of the seedlings (200 μmol m−2 s−1, 16 h photoperiod, first at 4 °C for 24 h and then at −4 °C for 4 h) [69], leaves were sampled as T-01, T-02, and T-03. These two groups of samples, recorded as CK and T, were used for histochemical staining, physiological index determination, and proteome extraction. For histochemical staining, the reader can refer to Ma's method [70]. Physiological indicators (SOD, POD, CAT, and APX activities, GSH content, and chlorophyll content) were determined using kits from Nanjing Jiancheng Bioengineering Institute according to the instructions. The survival rate was calculated after the seedlings had recovered for 2 weeks. The experiment was performed in triplicate.
Protein preparation, TMT labeling, and HPLC fractionation
Chrysanthemum leaves were ground in liquid nitrogen and lysed in 5 mL of lysis buffer (8 M urea, 2 mM ethylenediaminetetraacetic acid, 10 mM dithiothreitol, and 1% protease inhibitor (Protease Inhibitor Cocktail VI, Merck Millipore, USA)), then centrifuged (4 °C, 20,000 × g, 10 min) to collect the supernatant. The supernatant was precipitated with 15% trichloroacetic acid (TCA) at −20 °C for 2 h, and the pelleted precipitate was washed three times with cold acetone and dissolved in buffer (8 M urea, 100 mM triethylammonium bicarbonate (TEAB), pH 8.0). Protein concentration was determined with a BCA kit (catalog no. P0011, Beyotime Biotechnology).
The prepared chrysanthemum protein solution (700 μg) was reduced with 10 mM DL-dithiothreitol (DTT) for 1 h at 37 °C and then alkylated with 10 mM iodoacetamide (IAM) in the dark for 45 min. Next, the sample was diluted until the urea concentration was below 2 M. Trypsin was added at a trypsin-to-protein mass ratio of 1:50 (14 μg) for overnight digestion, followed by a second digestion at a ratio of 1:100 (7 μg) for 4 h. After trypsin digestion, peptides were desalted on a Strata X C18 SPE column (Phenomenex, Los Angeles, USA), vacuum-dried, and reconstituted in 0.5 M TEAB. The samples were labeled using a TMT 6-plex kit according to the manufacturer's instructions, and the labeled peptides were then separated and fractionated by high-pH reversed-phase HPLC on an Agilent 300Extend C18 column (5 μm particles, 4.6 mm inner diameter, 250 mm length). The peptides were separated into 80 fractions using a 2-60% acetonitrile gradient (adjusted to pH 10 with ammonia) over 80 min and then combined into 18 fractions, which were finally vacuum-dried.
Affinity enrichment
Next, Kcr peptide enrichment was performed starting from 4 mg of peptides. Pre-washed antibody beads (PTM Biolabs) and tryptic peptides dissolved in NETN buffer (100 mM NaCl, 1 mM EDTA, 50 mM Tris-HCl, 0.5% NP-40, pH 8.0) were gently shaken at 4 °C overnight. The beads (25 μL) were washed four times with NETN buffer (0.5 mL per wash) and twice with ddH2O, and the bound peptides were then eluted with 0.1% trifluoroacetic acid (TFA; elution volume 400 μL). The eluted fractions were combined and vacuum-dried. The resulting peptides were cleaned with C18 ZipTips (Merck Millipore, ZTC18S960, Billerica, USA) according to the manufacturer's instructions, followed by LC-MS/MS analysis.
LC-MS/MS analysis
The fractionated peptides were dissolved in 0.1% formic acid (FA) and loaded onto a reversed-phase analytical column (Acclaim PepMap RSLC, 2 μm particles, 50 μm inner diameter, 15 cm length, Thermo Scientific, USA). The gradient increased from 6% to 22% solvent B (0.1% FA in 98% acetonitrile) over 19 min, from 22% to 35% over 10 min, and then to 80% over 4 min, holding at 80%, all at a constant flow rate of 800 nL/min.
The peptides were subjected to NSI followed by tandem mass spectrometry (MS/MS) on a Q Exactive™ Plus (Thermo Scientific, USA) coupled online to the UPLC. Intact peptides were detected at a resolution of 70,000, and ion fragments were detected at a resolution of 17,500 (NCE setting: 30). A data-dependent procedure alternating between one MS scan and 20 MS/MS scans was applied to the top 20 precursor ions above a threshold intensity of 1E4 in the MS survey scan, with 30.0 s dynamic exclusion. The applied electrospray voltage was 2.0 kV. Automatic gain control (AGC) was used to prevent overfilling of the Orbitrap; 5E4 ions were accumulated for generation of MS/MS spectra (traditional data-dependent acquisition (DDA) mode). For MS scans, the m/z scan range was 350 to 1800, and the fixed first mass was set to 100 m/z. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD010297.
Database search
The MS/MS data were processed using MaxQuant with its integrated Andromeda search engine (v1.5.2.8), and tandem mass spectra were searched against the Dendranthema grandiflorum database. The mass tolerance was set to 10 ppm for precursor ions and 0.02 Da for fragment ions. Carbamidomethylation on Cys was specified as a fixed modification, and crotonylation on Lys as a variable modification. The false discovery rate (FDR) threshold was set to 1%, and the minimum peptide length to 7. For quantification, TMT-6plex was selected. All other parameters in MaxQuant were set to default values. The site localization probability threshold was set to > 0.75.
Quantitative analysis of protein and lysine crotonylation
For protein quantification, the ratios of the TMT reporter ion intensities in MS/MS spectra from the raw data sets were used to calculate fold changes between the T and CK samples. For each sample, quantification was mean-normalized at the peptide level to center the distribution of quantitative values. Protein quantification was then calculated as the median ratio of the corresponding unique peptides for a given protein. The relative quantitative values of each sample were log2-transformed so that the data conformed to a normal distribution, and the P value was then calculated by a two-sample, two-tailed t-test. When the P value was < 0.05 and the expression ratio was > 1.2, the protein was considered upregulated; conversely, when the P value was < 0.05 and the expression ratio was < 1/1.2, the protein was considered downregulated.
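The significance call described above can be sketched as follows; the replicate values and the use of scipy's independent two-sample t-test are illustrative assumptions consistent with the description (log2 transform, two-tailed test, 1.2-fold and P < 0.05 cutoffs).

```python
# Sketch of the significance call described above: log2-transform the
# relative quantitative values, run a two-sample two-tailed t-test between
# the T and CK replicates, and apply the 1.2-fold / P < 0.05 cutoffs.
# Array contents and variable names are illustrative assumptions.
import numpy as np
from scipy import stats

def call_regulation(t_vals, ck_vals, fold=1.2, alpha=0.05):
    """t_vals, ck_vals: replicate quantitative values for one protein."""
    t_vals, ck_vals = np.asarray(t_vals, float), np.asarray(ck_vals, float)
    _, p = stats.ttest_ind(np.log2(t_vals), np.log2(ck_vals))  # two-tailed
    ratio = t_vals.mean() / ck_vals.mean()
    if p < alpha and ratio > fold:
        return "up"
    if p < alpha and ratio < 1 / fold:
        return "down"
    return "unchanged"

print(call_regulation([1.35, 1.41, 1.30], [1.00, 0.97, 1.05]))  # -> "up"
```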
For crotonylation site quantification, the ratios of the TMT reporter ion intensities in MS/MS spectra from the raw data sets were used to calculate fold changes between the T and CK samples; the relative quantitative value of each site was the ratio of its quantitative values in the two samples. To remove modification changes caused by changes in protein level, the data were normalized against the proteome, meaning that the ratio of each crotonylation site was divided by the ratio of the corresponding protein in the T and CK samples. When the normalized ratio was greater than 1.2 the site was defined as upregulated, and when it was less than 1/1.2 it was defined as downregulated. The resulting crotonylation sites were used for subsequent bioinformatics analysis.
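Written out as a formula (with symbols introduced here for clarity, not taken from the paper), the normalization step is:

```latex
% Normalization of a crotonylation-site ratio by its parent protein ratio;
% I denotes reporter-ion intensity, superscripts the T and CK samples.
\[
  R^{\mathrm{norm}}_{\mathrm{site}}
  = \frac{I^{T}_{\mathrm{site}}/I^{CK}_{\mathrm{site}}}
         {I^{T}_{\mathrm{prot}}/I^{CK}_{\mathrm{prot}}},
  \qquad
  \text{up if } R^{\mathrm{norm}}_{\mathrm{site}} > 1.2,
  \quad
  \text{down if } R^{\mathrm{norm}}_{\mathrm{site}} < 1/1.2 .
\]
```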
Annotation methods
For gene ontology (GO) annotation, we used the UniProt-GOA database (https://www.ebi.ac.uk/GOA/). For pathway analysis, we used the Kyoto Encyclopedia of Genes and Genomes (KEGG) database; KEGG annotation results were mapped onto pathways with KEGG Mapper. Subcellular localization was predicted with the WoLF PSORT software, and eukaryotic sequences were additionally analyzed with the updated PSORT/PSORT II to complete the analysis. To test the enrichment of differentially expressed proteins against all identified proteins, a two-tailed Fisher's exact test was used, with correction for multiple hypothesis testing by standard false discovery rate control methods. An enriched cluster was considered significant when its P value was < 0.05.
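As a rough illustration of the enrichment test, the following sketch applies a two-tailed Fisher's exact test to a 2×2 contingency table for one hypothetical annotation term; except for the totals taken from the text (133 differentially modified proteins, 61 up + 72 down, out of 1199 identified Kcr proteins), the counts are invented for the example.

```python
# Sketch of the enrichment test described above: a two-tailed Fisher's
# exact test on a 2x2 contingency table for one annotation term.
from scipy.stats import fisher_exact

diff_in_term = 15    # differentially modified proteins with the term (invented)
diff_total = 133     # 61 up + 72 down differentially modified proteins (from text)
bg_in_term = 60      # all identified proteins with the term (invented)
bg_total = 1199      # all identified Kcr proteins (from text)

table = [
    [diff_in_term, diff_total - diff_in_term],
    [bg_in_term - diff_in_term,
     (bg_total - diff_total) - (bg_in_term - diff_in_term)],
]
odds, p = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds:.2f}, P = {p:.3g}")
```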
The ten amino acids upstream and downstream of each modified site in all identified protein sequences were used as the analysis objects and were analyzed with the Motif-X software [71]. All database protein sequences were used as the background parameter, and default values were used for the other parameters.
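A minimal sketch of how such Motif-X input windows can be built, taking the ten residues on each side of a modified lysine (a 21-mer centred on Kcr); the padding character used for sites near sequence ends is an assumption for illustration.

```python
# Sketch of preparing Motif-X input: for each crotonylated lysine, take
# ten residues upstream and downstream, padding with '_' when the site
# sits near a sequence end (padding convention assumed).

def kcr_window(sequence, site_index, flank=10, pad="_"):
    """site_index: 0-based position of the modified K in `sequence`."""
    assert sequence[site_index] == "K", "site must be a lysine"
    left = sequence[max(0, site_index - flank):site_index]
    right = sequence[site_index + 1:site_index + 1 + flank]
    return (pad * (flank - len(left)) + left
            + "K" + right + pad * (flank - len(right)))

print(kcr_window("MSTFKEAPLDKGHWQRTYVIN", 4))  # -> '______MSTFKEAPLDKGHWQ'
```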
Conservation analysis
To determine the degree of evolutionary conservation of crotonylation, we first used BLASTP to compare the crotonylated protein sequences of Dendranthema grandiforum (PXD010297) against the crotonylated protein sequences of four species: Camellia sinensis (PXD011610), Oryza sativa (PXD008716), Carica papaya (PXD008166), and Nicotiana tabacum (IPX0000889000). By applying a reciprocal best BLAST hit approach, we determined the orthologous proteins among these datasets. For each orthologous group, we performed multiple sequence alignment with MUSCLE v3.8.31. We then determined lysine conservation for each species by counting the total numbers of conserved crotonylated and conserved non-crotonylated lysines; a lysine was considered conserved if both the chrysanthemum protein and the query protein carried a lysine residue at the aligned position. All lysine residues of the proteins identified in this study were used as controls. The mean conservation of the crotonylated and control lysines between the chrysanthemum sequences and the sequences of the other plants was plotted separately, and P values were calculated for each comparison using Fisher's exact test.
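The per-alignment counting step can be sketched as below; the input format (two equal-length gapped strings from the alignment plus the set of Kcr positions in the ungapped chrysanthemum sequence) is an assumption made for illustration.

```python
# Sketch of the lysine-conservation count: walk two aligned (gapped)
# sequences from the MUSCLE alignment and record, for each lysine in the
# chrysanthemum sequence, whether the orthologue also has K at that column.

def conserved_lysines(chrys_aln, other_aln, kcr_positions):
    """kcr_positions: 0-based indices of crotonylated K in the ungapped sequence."""
    conserved_kcr = conserved_ctrl = total_kcr = total_ctrl = 0
    ungapped = -1
    for a, b in zip(chrys_aln, other_aln):
        if a == "-":          # gap in the chrysanthemum row: skip the column
            continue
        ungapped += 1
        if a != "K":
            continue
        is_kcr = ungapped in kcr_positions
        total_kcr += is_kcr
        total_ctrl += not is_kcr
        if b == "K":          # orthologue conserves the lysine
            conserved_kcr += is_kcr
            conserved_ctrl += not is_kcr
    return conserved_kcr, total_kcr, conserved_ctrl, total_ctrl

print(conserved_lysines("MAK-KLSK", "MVKQKISR", {2}))  # -> (1, 1, 1, 2)
```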
Site-directed mutagenesis and vector construction
The conserved domain analysis of DgAPX was performed in NCBI (https://www.ncbi.nlm.nih.gov/Structure/cdd/wrpsb.cgi), and the amino acid sequence of DgAPX was compared with the APX amino acid sequences of other plants using DNAMAN. L-ascorbate peroxidase (APX) was obtained by whole-gene synthesis, and site K136 of APX was mutated to arginine (R) or asparagine (N); based on the charge properties of the substituted residues, K136N was used to simulate complete crotonylation and K136R complete decrotonylation. The two mutants and the unmutated sequence were verified by DNA sequencing, and the three sequences were each cloned into the expression vector pSuper1300-GFP. Gene synthesis and expression vector construction were carried out by Sangon Biotech (Shanghai) Co., Ltd.
Expression of recombinant APX
The three constructs (pSuper1300-DgAPX-GFP, pSuper1300-DgAPXK136R-GFP, and pSuper1300-DgAPXK136N-GFP) and the empty pSuper1300-GFP were each transformed into Agrobacterium GV3101 cells. The cells were cultured in Luria-Bertani (LB) medium (100 mg/L kanamycin, 100 mM MES, 40 μM AS) until the OD600 reached 0.8, then centrifuged (4000 rpm, 10 min, 25 °C), collected, and resuspended in an equal volume of buffer (10 mM MgCl2, 200 μM AS). After incubation in the dark for 4 h, the suspension was infiltrated into the leaves of N. benthamiana, and the plants were co-cultured for 48 h. Infiltrated and non-infiltrated tobacco plants were kept at normal temperature (25 °C; 12 h). Tobacco leaves were then sampled for Western blotting and enzyme assays.
Western blot and enzyme assay
Protein extraction was performed, and the concentration of each sample was determined, as described above. The prepared protein samples were separated on a 15% SDS-PAGE gel and transferred to a polyvinylidene fluoride (PVDF) membrane (0.22 μm, Millipore Cat. No. ISEQ00010). The membrane was blocked with TBST containing 5% skim milk for 1 h. A ProteinFind™ Anti-GFP mouse monoclonal antibody was used for the Western blot analysis, and detection was performed with a horseradish peroxidase-conjugated anti-mouse IgG (H+L) secondary antibody and the EasySee® Western Blot Kit. APX activity of the different samples was measured using an ascorbate peroxidase activity assay kit from Nanjing Jiancheng Bioengineering Institute according to the instructions. | 2021-01-15T14:15:01.661Z | 2020-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "37981231431359c68ac0fafe23962fee13208747",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/s12864-020-07365-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b741238178b6afa5005ccf9eb2943ae21572465f",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
15292760 | pes2o/s2orc | v3-fos-license | Reply to: “Tropical cirrus and water vapor: an effective Earth infrared iris feedback?”
In assessing the iris effect suggested by Lindzen et al. (2001) (hereafter LCH), Fu et al. (2002) (hereafter FBH) found that the response of high-level clouds to the sea surface temperature had the effect of reducing the climate sensitivity to external radiative forcing, but the effect was not as strong as LCH found. The approach of FBH to specifying longwave emission and cloud albedos appears to be inappropriate, and the derived cloud optical properties may not have real physical meaning. The cloud albedo calculated by FBH is too large for cirrus clouds and too small for boundary-layer clouds, which leads to an underestimate of the iris effect.
Introduction
In assessing the iris effect suggested by Lindzen et al. (2001) (hereafter LCH), Fu et al. (2002) (hereafter FBH) found that the response of high-level clouds to the sea surface temperature had the effect of reducing the climate sensitivity to external radiative forcing, but the effect was not as strong as LCH found. This weaker reduction in climate sensitivity was due to the smaller contrasts in albedos and effective emitting temperatures between cirrus clouds and the neighboring regions. FBH specified the albedos and the outgoing longwave radiation (OLR) in the LCH 3.5-box radiative-convective model by requiring that the model radiation budgets at the top of the atmosphere (TOA) be consistent with those inferred from the Earth Radiation Budget Experiment (ERBE) (Barkstrom, 1984). In point of fact, the constraint by radiation budgets alone is not sufficient for deriving the correct contrast in radiative properties between cirrus clouds and the neighboring regions, and the approach of FBH to specifying those properties is, we feel, inappropriate for assessing the iris effect.
Albedo contrast between cirrus and boundary-layer clouds
In the LCH 3.5-box model for studying the iris effect, the tropics is divided into a moist region and a dry region, each covering half of the tropics. The moist region is further divided into a region covered with high-level cirrus clouds (cloudy moist) and a region without cirrus clouds (clear moist); the areal coverage of the former is assumed to be 22% of the tropics and of the latter 28%. Low-level boundary clouds are assumed to have an areal coverage of 25% throughout the tropics, and in the cloudy-moist region they overlap with the high-level cirrus clouds. The iris effect depends primarily on the contrasts in longwave emission and albedo among the three tropical regions. The specification of the areal coverage, as well as of the effective emitting temperatures and cloud albedos, is required to be consistent with the overall ERBE radiation budgets. FBH estimated the OLR of the dry region and the clear-moist region from radiation model calculations with appropriate temperature and humidity profiles for the tropics. The OLR of the cloudy-moist region was then derived by requiring that the mean OLR of the model tropics be the same as that of the ERBE, which is 255 W m−2. Based on both ERBE data and model calculations, they assumed that the effects of high cirrus clouds on the shortwave and longwave radiation (the cloud radiative forcing, CRF) nearly cancelled each other, so that the net effect on the TOA radiation budget was negligible. They then derived the albedos of the high-level cirrus clouds and the low-level boundary-layer clouds by requiring that the following two conditions were met. (1) The radiative forcing of high-level clouds for shortwave radiation and longwave radiation was equal in the cloudy-moist region.
(2) The mean albedo of the tropics was equal to the ERBE-inferred value of 0.241.
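For clarity, these two constraints can be written compactly as follows; the notation is ours, introduced here for illustration, not FBH's.

```latex
% The two FBH albedo constraints, written compactly.
% Notation (ours): f_i = areal fraction and A_i = albedo of region i;
% "hc" = high (cirrus) cloud in the cloudy-moist region.
\[
  \text{(1)}\;\; \lvert \mathrm{CRF}^{\mathrm{SW}}_{\mathrm{hc}} \rvert
               = \lvert \mathrm{CRF}^{\mathrm{LW}}_{\mathrm{hc}} \rvert
  \qquad\text{and}\qquad
  \text{(2)}\;\; \sum_i f_i A_i = 0.241 .
\]
```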
Cirrus clouds reduce both the longwave cooling and the shortwave heating of the Earth, and the magnitude of these two competing effects depends on the optical thickness of the clouds. Thin cirrus with an optical thickness of, say, less than 1 in the visible spectral region are relatively transparent to shortwave radiation but not necessarily transparent to longwave radiation. Thus, thin cirrus clouds have a stronger longwave warming effect than shortwave cooling effect, which leads to a net warming of the climate. On the other hand, thick cirrus clouds are highly reflective to shortwave radiation and generally have a net cooling effect on the climate. The overall effect of cirrus on the Earth radiation budget therefore depends strongly on the areal coverage of thin cirrus clouds relative to that of thick cirrus clouds.
There are two sources of thin cirrus clouds. One is the detrainment of deep convective anvil clouds, which spread, precipitate, and evaporate to become thin cirrus in the neighborhood of cumulus cloud clusters. The other is thick cumulus clouds that are left behind by propagating large-scale atmospheric disturbances and decay rapidly to become thin cirrus, contributing to the supply of water vapor in the upper troposphere. This upper-tropospheric water vapor may later form thin cirrus clouds through atmospheric wave motions (Boehm and Verlinde, 2000). These thin cirrus clouds are widespread and can persist for long periods owing to large-scale lifting of air in the tropics (Boehm et al., 1999). Although it is generally believed that thin cirrus are widespread in the tropics, detection and retrieval of these clouds from satellite-measured radiances are largely unreliable, as it is very difficult to distinguish thin cirrus from broken clouds at lower levels. As a result, the net effect of high-level cirrus clouds on the Earth radiation budget is hard to assess, either directly from satellite radiation measurements or in combination with radiation model calculations as was done by FBH. The assertion by FBH that the radiative forcings of high clouds in the shortwave and longwave spectral regions cancel each other has yet to be validated with reliable cloud and radiation data.
The model climate sensitivity, as calculated by LCH, depends only weakly on the subjectively specified areal extents of the three tropical regions. However, if the longwave emission of the three regions and the shortwave albedos of clouds are derived by imposing radiation budget constraints as in FBH, these parameters become sensitive to the subjectively specified areal coverage of the three regions, and so does the model climate sensitivity. Figure 1 shows the longwave emission of the cloudy-moist region as a function of the cirrus cloud fraction relative to the entire tropics. It is derived by assuming OLR values of 293 W m−2 in the dry region and 268 W m−2 in the clear-moist region, as calculated by FBH, and an areal coverage of 50% for the dry region, as assumed by LCH. To match the ERBE-inferred mean tropical OLR of 255 W m−2, FBH calculated the OLR to be 154 W m−2 for a 22% coverage of the cloudy-moist region, which is marked by a circle in the figure. If the cloudy-moist coverage increases by 5%, from 22% to 27%, the longwave emission of that region increases by 22 W m−2 according to Figure 1.
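The numbers quoted above follow from the area-weighted OLR balance; the worked check below is ours, and the small difference from the quoted 154 W m−2 presumably reflects rounding of the input values.

```latex
% Area-weighted OLR balance used for Fig. 1
% (f_dry = 0.50, f_clear = 0.28, f_cloudy = 0.22, OLR in W m^-2):
\begin{align*}
  255 &= 0.50\,\mathrm{OLR}_{\mathrm{dry}}
       + 0.28\,\mathrm{OLR}_{\mathrm{clear}}
       + 0.22\,\mathrm{OLR}_{\mathrm{cloudy}},\\
  \mathrm{OLR}_{\mathrm{cloudy}}
      &= \frac{255 - 0.50\times 293 - 0.28\times 268}{0.22}
       \approx 152\ \mathrm{W\,m^{-2}} .
\end{align*}
% With a 27% cloudy fraction (clear-moist 23%) the same balance gives
% (255 - 146.5 - 0.23*268)/0.27 ~ 174 W m^-2, i.e. the ~22 W m^-2
% increase quoted above.
```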
Considering the large extent of thin cirrus clouds, a high-level cloud cover of 27% in the tropics is not at all unrealistic. The albedos of both high clouds and low clouds would then have to change if the FBH approach were followed, which requires that the net CRF of high clouds be zero and that the mean albedo of the tropical region equal the ERBE-inferred value of 0.241. The contrast between the cloud albedos thus varies with the subjectively specified coverage of the three regions, and the simulated climate sensitivity may not have any physical meaning. Both the contrast in longwave emission among the three tropical regions and the contrast between the albedos of high-level cirrus and low-level boundary-layer clouds derived by FBH are smaller than those specified by LCH. LCH assigned an albedo of 0.24 to high cirrus clouds and a significantly higher albedo of 0.42 to low boundary clouds. In contrast, FBH derived nearly equal albedos for high-level and low-level clouds: 0.342 for the former and 0.331 for the latter. FBH found that these differences in cloud albedos and longwave emission caused the negative feedback factor estimated by LCH to decrease by 50%.
The optical thickness of low-level boundary clouds varies with fractional cloud cover. The visible optical thickness inferred from high-spatial-resolution Landsat imagery of marine boundary-layer clouds ranges from ∼5 for scattered cumulus to ∼20 for overcast stratocumulus (Barker et al., 1996). Szczodrak et al. (2001) derived the optical thickness and effective radius of marine stratocumulus clouds using NOAA Advanced Very High Resolution Radiometer (AVHRR) radiance measurements over the eastern Pacific Ocean and the Southern Ocean near Tasmania; they found that the majority of clouds have a visible optical thickness in the range 5-30. Our model calculations show that, for a solar zenith angle of 60°, the albedo corresponding to this optical thickness range is 0.35 to 0.60. Buriez et al. (2001) studied the cloud optical thickness retrieved from the Advanced Earth Observing Satellite-Polarization and Directionality of the Earth's Reflectances (ADEOS-POLDER) observations and found that overcast low-level clouds were bright, with an albedo of 0.4-0.7. Thus, the albedo of 0.42 used by LCH is in agreement with these satellite observations. The specification of an albedo of 0.24 for high clouds by LCH takes into consideration the extended coverage of thin cirrus clouds in the tropics, which have a low albedo. On the other hand, the albedos of 0.342 for high clouds and 0.331 for low clouds used by FBH are inconsistent with observations. LCH specified the OLR to be 263 W m−2 in the clear-moist region and 303 W m−2 in the dry region, whereas FBH calculated the OLR of these regions to be 268 W m−2 and 293 W m−2, respectively, using a radiation model. When we apply the OLR values computed by FBH to the LCH 3.5-box model, the negative feedback factor is reduced only moderately, by ∼20%, 10%, and 16% for γ = 1.0, 0.5, and 0, respectively. In commenting on this response, Baker (2002) suggests that the contrast in TOA radiation between the tropical moist-cloudy and dry regions is highly overestimated in LCH. From the ERBE data, she finds that, for a given latitude band, the longitudinal contrast of the net TOA radiation does not exceed 40 W m−2, significantly smaller than the radiation contrast of 110 W m−2 between the moist-cloudy and dry regions in LCH. Baker then concludes that the feedback factor of climate sensitivity estimated by LCH is significantly exaggerated.
In the ERBE data archive, the TOA radiation budgets are monthly mean values averaged over 2.5° × 2.5° latitude-longitude boxes. Tropical cloud systems associated with easterly waves and Madden-Julian Oscillations propagate in zonal directions. The MJOs, for example, propagate from west to east at a speed of ∼5-6 m s−1. At this speed, cloud systems propagate by >250 km within a day. Therefore, the probability that a 2.5° × 2.5° latitude-longitude box in the convection region is continuously and totally covered by clouds over a one-month period is nearly zero. The ERBE monthly radiation in convective regions represents a mixture of radiation from both the moist-cloudy and moist-clear regions, and the net TOA radiation of the moist-cloudy region cannot be identified from the longitudinal distributions of the ERBE fluxes, as is attempted by Baker (2002).
Conclusions
The approach of FBH to specifying longwave emission and cloud albedos appears to be inappropriate for studying the iris effect. These radiation properties are sensitive to the subjectively specified areal coverage of the three tropical regions, and the properties derived by FBH may not have real physical meaning. Given that thin cirrus are widespread in the tropics and that low boundary clouds are optically thick, the cloud albedo calculated by FBH is too large for cirrus clouds and too small for boundary layer clouds. The near-zero contrast in cloud albedos derived by FBH has the effect of underestimating the iris effect. On the other hand, the contrast of longwave emission among the three regions as derived by FBH is smaller than that of LCH. If the longwave emission derived by FBH is appropriate, then LCH may indeed have overestimated the iris effect somewhat, though hardly by as much as suggested by FBH. | 2018-05-30T06:56:20.420Z | 2002-05-30T00:00:00.000 | {
"year": 2002,
"sha1": "5667d058037b667202d2c4915af0f1b56981ea33",
"oa_license": "CCBYNCSA",
"oa_url": "https://acp.copernicus.org/articles/2/99/2002/acp-2-99-2002.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1edfb61fd072bdadadd78f8259b7e2ae40af047b",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
55327862 | pes2o/s2orc | v3-fos-license | Reaching high sensitivity of radio-acoustic spectroscopy using "strong microwaves"
Gas molecular spectroscopy is a powerful instrument for both fundamental studies and applications, such as qualitative and quantitative gas analysis, non-invasive medicine, atmospheric remote sensing, etc. Sensitivity is one of the key parameters of any spectrometer, which determines the range of its possible applications for resolving both fundamental and applied problems. The higher the sensitivity, the higher the accuracy of measurement of the spectral line parameters and the greater the number of lines that can be observed in the experiment (the smaller the number of molecules in a gas mixture needed for their lines to appear in the spectrum) and the higher the accuracy with which the properties of the molecules can be explored.
One can recall quite a large number of currently known wideband spectrometers used for the study of the spectra of various molecules in the mm/submm wave range. They can be divided into two types according to the principle of molecular spectra registration: from variations in the characteristics of either the probing radiation (first type) or the gas being studied (second type).
For most of the mm/submm spectrometers, in which either the radiation transmitted through a cell with gas or the radiation reradiated by the gas is detected, a sensitivity close to the limit determined by fundamental physical principles is achieved (see, e.g., [1], chapter 15, p. 414). The only method that permits one to advance in solving the problem of high sensitivity achievement is known as optoacoustic (photoacoustic or radioacoustic) detection of absorption [2].
The sensitivity of any spectrometer is determined by several factors. The most crucial ones are radiation power, detection system noise, and spectral purity of the radiation. In spectrometers of the first type (e.g., a classical video-spectrometer), the sensitivity increases with increasing radiation power until the detection system noise becomes surpassed by the radiation noise. Typically, for mm/submm video spectrometers with a liquid helium-cooled bolometer, the upper limit of radiation power is about 1 mW. Further improvement of the spectrometer sensitivity can be achieved by increasing the optical path length and by data averaging (if ambient and gas conditions are stable enough).
The situation is radically different in spectrometers of the second type (e.g., in a spectrometer with radioacoustic detection of gas absorption - RAD spectrometer). The limiting sensitivity of the spectrometer is obtained when thermal fluctuations of the membrane are determined primarily by the Brownian motion of the gas. These fluctuations do not depend on the radiation power passing through the gas cell. Meanwhile, the useful signal amplitude is directly proportional to the radiation power absorbed by the gas and, therefore, increases linearly with the power. Thus, the sensitivity of the RAD method also increases linearly.
In this report, we present our recent results on reaching high sensitivity of the RAD spectrometer using several approaches: increasing the radiation power, modifying the cell and measurement parameters for noise reduction, and data averaging.
Backward-wave oscillators (BWOs) are used as radiation sources in the RAD spectrometers; a series of these tubes covers the very wide range of 35 to 1500 GHz. High stability, a low level of phase noise, and exact knowledge of the frequency are provided by the use of a phase-locked loop system for the BWO radiation, referenced to a microwave frequency synthesizer signal, which is synchronized with a frequency and time standard signal. The BWO power can vary from a fraction of a mW to tens of mW (reaching more than 100 mW in the best tubes) within the operating frequency range. Larger radiation power suitable for molecular spectroscopy was not available for a long time because of the limited choice of mm/submm radiation sources.
Progress in the development of high-power mm/submm radiation sources, achieved in recent years, made it possible to test the "power" approach of increasing the RAD sensitivity [3]. In particular, an automated facility [4] based on a gyrotron operated at a frequency of about 263 GHz in the CW regime with a radiation power up to ~1 kW was developed at IAP RAS. This gyrotron permits smooth tuning of the radiation frequency, although within small (by spectroscopy standards) limits of the order of 0.2 GHz, by varying the operating voltage and the temperature of the gyrotron cavity.
Experimental spectra of the SO2 and argon mixture obtained using the RAD spectrometer for different pressures of gas in the cell at two significantly different values of radiation power are shown in Fig. 1. It is clearly demonstrated that an increase of radiation power by about three orders of magnitude leads to a proportional increase of the RAD spectrometer sensitivity. The achieved sensitivity of the spectrometer is not a limit. The spectrometer sensitivity obtained in this study is determined only by the SO2 line saturation effect, which can be substantially reduced by proper selection of the molecule, the transition, and the experimental conditions.
To reduce the influence of external acoustic and mechanical noise, the cell weight was increased about tenfold by adding small lead balls 3 mm in diameter (the initial cell weight was about 2.5 kg). Analysis of experimental spectra recorded using cells of significantly different weights shows (Fig. 2) an almost sevenfold increase of the SNR of the experimental spectra. Finally, the high stability of the radiation parameters (frequency and power) and of the experimental conditions (room and cell temperature, pressure in the cell) allowed averaging a large number of repeated experimental recordings for each chosen pressure, multiplying the SNR of the spectra recordings (see Fig. 3). The achieved sensitivity of the RAD spectrometer allowed the first observation of the manifestation of the speed dependence of the collision cross section of the 118-GHz oxygen fine-structure line [5].
Fig. 1. An example of the experimental spectra of the SO2 and argon mixture obtained using the RAD spectrometer for different pressures of gas in the cell at a constant level of radiation power. The partial pressure of SO2 in the mixture is 0.01 mbar. The spectra obtained using a BWO with a radiation power of about 0.01 W (typical power of an OB-24 type tube) and using a gyrotron with an output radiation power of about 7 W are shown by gray broken and black smooth lines, respectively. The synchronous detection time constant is 1 s.
Fig. 2. Recordings of the 183 GHz water line obtained for two significantly different cell weights (2.5 kg and 25 kg). Residuals of the fit of the model function to the experimental spectra, zoomed in 20 times, are shown in the lower part of the figure.
Fig. 3. Experimental spectra of pure oxygen near 118.75 GHz. The lower plot is a zoomed-in part of the upper plot. Twenty spectra recorded at 1.1 Torr of pure oxygen are shown in grey. The black curve is the result of their averaging. The dotted curve shows the baseline recorded under the same conditions with 1.1 Torr of pure nitrogen. It is worth noting that all the approaches were tested separately; combined in one setup, they can give record sensitivity of the RAD spectrometer. The study was supported by the Russian Science Foundation (project 17-19-01602). | 2018-12-11T14:37:51.758Z | 2017-08-01T00:00:00.000 | {
"year": 2017,
"sha1": "9ede6b5cdc78789c80968aedb8b79e89321fe9cc",
"oa_license": "CCBY",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2017/18/epjconf_smp2017_02028.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9ede6b5cdc78789c80968aedb8b79e89321fe9cc",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
244316919 | pes2o/s2orc | v3-fos-license | Categorical Vehicle Classification and Tracking using Deep Neural Networks
The classification and tracking of vehicles is a crucial component of modern transportation infrastructure. Transport authorities make significant investments in it, since it is one of the most critical transportation facilities for collecting and analyzing traffic data to optimize route utilization, increase transportation safety, and build future transportation plans. Numerous novel traffic evaluation and monitoring systems have been developed as a result of recent improvements in fast computing technologies. However, camera-based systems still lag in accuracy, as most systems are constructed using limited traffic datasets that do not adequately account for weather conditions, camera viewpoints, and highway layouts, forcing the system to make trade-offs in terms of the number of actual detections. This research offers a categorical vehicle classification and tracking system based on deep neural networks to overcome these difficulties. The capabilities of the generative adversarial network framework to compensate for weather variability, Gaussian models to identify roadway configurations, a single shot multibox detector for categorical vehicle detection with high precision, and the boosted efficient binary local image descriptor for tracking multiple vehicle objects are all incorporated into the research. The study also includes the publication of a high-quality traffic dataset with four different perspectives in various environments. The proposed approach has been applied to the published dataset and its performance has been evaluated. The results verify that using the proposed flow of approach one can attain higher detection and tracking accuracy. Keywords—Vehicle classification; generative adversarial networks; single shot multibox detector; vehicle tracking; deep neural networks
I. INTRODUCTION
With a rising count of vehicles on the road, in a huge variety, resulting in traffic congestion and a slew of related difficulties, it is necessary to address these issues [1]. This motivates us to consider an intelligent and smart traffic monitoring system that could assist traffic agencies in addressing issues such as routing traffic based on the density of vehicle movement on the road, collecting traffic data like the count of vehicles, vehicle type, and vehicle motion parameters, and managing roadside assistance in the event of an accident or other anomalous incident. Such a system conducts traffic analysis using the acquired data to optimize the use of highway networks, forecast future transportation demands, and enhance transportation safety [2]. The primary functions of an intelligent traffic monitoring system are vehicle categorization and tracking on a category basis. Due to the substantial technological problems associated with these tasks, several research topics have been studied, resulting in the creation of numerous vehicle categorization and tracking systems. Classifying vehicles and maintaining their trajectories properly in a variety of environmental circumstances is critical for efficient traffic operation and transportation planning.
Scientific advancements have resulted in the development of several novel vehicle categorization systems. Three types of categorical vehicle classification systems may be found in use today: in-road, over-road, and side-road. Each category of vehicle classification is further divided into subcategories depending on the sensors utilized, the techniques used to deploy the sensors, and the processes used to classify vehicles [3]. While both in-road and side-road approaches are capable of accurate categorical vehicle classification, they differ significantly in terms of sensor types, hardware configurations, configuration process, parameterization, operational requirements, and even cost, making it difficult to determine the most suitable solution for a given application in the first instance. These techniques have limitations when more than one vehicle is in the same location at the same time [4], so they cannot be utilized for tracking vehicles.
To circumvent these restrictions, over-road-based methods for categorical vehicle classification and tracking are used. Camera-based systems are the most popular technology for over-road-based systems [5][6]. The cameras are mounted at a height sufficient to cover the road's wide field of vision and can span several lanes. There are two primary obstacles associated with camera-based systems in attaining our aim. First, their performance is significantly impacted by weather and lighting conditions, resulting in blurred, hazy, and rainy observations in captured pictures; similar degradations are observed when vehicles are travelling at high speed on the road. Second, a higher viewing angle allows more distant road surfaces to be considered, but the vehicle's object size then changes significantly, and the detection accuracy for tiny objects located far down the road suffers because of this shift. We focus on the above two difficulties in this work to provide a feasible solution, and we demonstrate how to adapt the categorical vehicle recognition findings to multiple object tracking.
A. Image Restoration
Image restoration problems such as image deblurring, dehazing, and deraining are all focused on creating an accurate representation of a clear final picture out of an insufficiently clear input image. Numerous studies have been conducted in this area. A multi-layer perceptron technique was proposed for deblurring that eliminates noise and artefacts [7]. To cope with outliers, a CNN based on single value dissemination is used [8]. Certain techniques [9], [10] begin by estimating blur kernels with convolutional neural networks and subsequently deblur images using traditional restoration methods. Many edge-adaptive neural networks have been developed for the purpose of recovering clear images directly [11], [12]. Recent deep learning-based approaches for image dehazing [13], [14] estimate transmission maps first and subsequently restore clear images using conventional methodologies [15]. Typically, traditional methods for image deraining are created using the statistical characteristics of rain streaks [16][17][18][19]. The author in [20] built a neural network for removing rain and/or dirt from pictures. Building on ResNet [21], the authors in [22] built a deep network for image deraining. The author in [23] introduced the Generative Adversarial Network (GAN) architecture for generating realistic pictures from random noise. Numerous techniques for visual tasks have been developed on the basis of this framework [24][25][26][27]. The authors in [28][29][30][31] have also applied the GAN framework to low-level vision issues. We chose to apply the capabilities of the GAN framework with a physics model [32] for picture restoration tasks due to these positive findings.
B. Detection of Vehicles
Vehicle detection can now be accomplished using both standard machine vision techniques and sophisticated deep learning techniques. Traditionally, machine vision techniques employ a vehicle's motion to distinguish it from a fixed backdrop picture. This approach may be classified into three categories [33]: background subtraction [34], continuous frame subtraction [35], and optical flow [36]. Variance is determined by applying the frame subtraction technique, which compares pixel data of two or three successive frames; additionally, a threshold separates the shifting foreground region [35]. By employing this technique and reducing noise, the vehicle's halt may also be recognized [37]. When the video's backdrop picture is stationary, background data is used to build the model [37]. Following that, it is possible to segment the moving object from the frame pictures by comparing each frame image to the backdrop model. The optical flow approach is exploited to detect a motion area in frames; the resulting optical flow field encodes the direction of motion and speed of each pixel [36]. While the classic machine vision approach detects the vehicle more quickly, it does not perform well in cases where the image brightness varies, there is continuous motion in the backdrop, or there are vehicles moving at low speed or complicated scenes. Vehicle identification using deep convolutional neural networks [52] may be classified into two broad groups. The two-stage technique begins by generating candidate boxes for the object using multiple methods and then classifying them using a CNN. The single-stage technique does not produce candidate boxes but instead turns the object bounding box placement problem directly into a regression problem. Region-CNN (R-CNN) [38] employs a two-stage technique that utilizes selective search of regions [39] in the image. The CNN image input must be of fixed size, and the network's deeper structure needs a lengthy training period and uses a significant amount of storage capacity. SPP-NET [40], which is based on the concept of spatial pyramid matching, enables the network to accept pictures of varying sizes and provide fixed outputs. Among the one-stage techniques, the Single Shot Multibox Detector (SSMD) [41] and You Only Look Once (YOLO) [42] frameworks are the most important. SSMD is significantly faster than the preceding state-of-the-art for single shot detectors (YOLO) and as accurate as slower techniques that undertake explicit region proposals and pooling, such as Faster R-CNN [43]. SSMD's central idea is to forecast category scores and box offsets for a specific set of default bounding boxes by applying tiny convolutional filters on feature maps. We chose to use the SSMD framework [43] for categorical vehicle identification and classification tasks due to these positive findings.
C. Tracking of Vehicles
Aspects of the functioning of an intelligent traffic system that need advanced vehicle object identification applications, such as multiple object tracking, are also crucial [44]. Detection-Based Tracking (DBT) and Detection-Free Tracking (DFT) are the two most common methods of initializing objects in multi-object tracking systems. To detect moving objects in video frames, the DBT method first uses background modelling to detect them before tracking them. The DFT technique, however, is only capable of initializing the tracking object and cannot deal with the addition of new objects or the removal of current ones. Multi-object tracking algorithms must consider the similarity of items within a frame, as well as the association problem of objects across frames. The normalized cross-correlation function may be used to determine the similarity of objects within a frame. As shown in [45], the Bhattacharyya distance is used to calculate the distance between two objects based on the colour histograms of their respective images. When connecting inter-frame items, it is critical to specify that each item may appear on no more than one track at a time and that each track may include no more than one object; this constraint can be enforced by using either detection-level exclusion or trajectory-level exclusion. SIFT and ORB feature points were used for object tracking to overcome the difficulties caused by size and illumination changes in moving objects in [46] and [47]; however, this approach is slow and requires many feature points. The feature point description technique Boosted Efficient Binary Local Image Descriptor (BEBLID) is proposed for use in this study [48]. BEBLID is considerably faster than SIFT and ORB in extracting feature points.
D. Our Contributions Comprise the Following Items
On the foundation of this work, a large-scale dataset of vehicle movement on roads has been developed, which offers many distinct categorical vehicle objects that have been thoroughly annotated under diverse situations captured by high-mounted cameras. The dataset can be used to test the performance of a variety of vehicle detection methods.
For recovering blurred, hazy, or rainy images recorded in road scenes, a method based on the GAN framework for image restoration has been developed. This approach is utilized to increase the accuracy of vehicle detection in road scenes.
A technique based on convolutional neural networks, i.e., SSMD, is implemented for categorical vehicle detection.
A system for tracking and analyzing several vehicles is presented for road situations. The BEBLID method extracts and matches the detected objects' feature points.
The findings of this investigation are discussed in further detail in the following sections. Section 3 introduces the vehicle dataset utilized in this work. Section 4 describes the general procedure of the proposed system. Section 5 presents the results of the experiments along with the relevant analyses. Section 6 provides a comprehensive summary of the complete method.
III. VEHICLE DATASET
Because of concerns about copyright, privacy, and security, traffic datasets are rarely made public, despite the widespread use of traffic surveillance cameras on highways across the world. With images of highway sceneries and typical road scenes, the KITTI benchmark dataset [31] aids in the solution of issues such as 3D object identification and tracking, which are commonly encountered in automated vehicle driving applications. The Tsinghua-Tencent Traffic-Sign Dataset [32] contains pictures captured by automobile cameras in a variety of lighting and weather situations; however, no vehicles are annotated. The Stanford Car Dataset [33] and the Comprehensive Cars Dataset [34] are vehicle datasets captured by non-monitoring cameras and featuring a clear car appearance; they are used in research and development. Some datasets are captured by security cameras; one such dataset is the BV Dataset [35]. Even though this dataset categorizes vehicles into 6 categories, the shooting angle is positive, and the vehicle object is too tiny in each image, making generalization impossible for CNN training. A dataset called Traffic and Congestions [36] comprises photos of cars on roads collected by security cameras; however, most of the images have some degree of occlusion in them. This dataset has a small number of images and contains no information on vehicle classification, making it less helpful. As a result, only a few datasets have pertinent annotations, and there are only a few images of traffic scenes available. This section provides an overview of the vehicle dataset that we created from the standpoint of road surveillance footage. The dataset is available at: https://drive.google.com/drive/folders/1vYwLPkZZ2OX1cIIPQZA4SgB3dum7vPwV?usp=sharing. The video in the dataset is taken from the DND road in Delhi, India, as shown in Fig. 1. The road monitoring camera was put on the side of the road at a height of 10 meters with a fixed angle of view. The photos taken from this vantage point span a large portion of the road in the distance and include cars of all types. The pictures in the dataset were taken from four surveillance cameras at different times of day and under varied lighting situations to provide a diverse range of photographs. The vehicles in this dataset are divided into three categories: two-wheelers; Light Motor Vehicles (LMV), which include three-wheelers, automobiles, minivans, and other similar vehicles; and Heavy Motor Vehicles (HMV), which include buses, trucks, and other similar vehicles (Fig. 2). The composition of the dataset is summarized in Table I.
IV. METHODOLOGY
The technique of the categorical vehicle classification and tracking system is described in detail in this section. First, the video data from the road traffic scenario is imported into the system. Second, the GAN framework is used to restore the captured pictures. After that, the road area is extracted. The SSMD deep learning object detection technique is then used to recognize the presence of vehicles belonging to three different categories in the road traffic environment. Finally, BEBLID feature extraction is carried out on the identified vehicle boxes to complete the tracking of numerous vehicle objects. In the proposed technique, the essential components of picture restoration, vehicle detection, propagating object states into future frames, linking current detections with existing objects, and controlling the lifespan of tracked objects are all discussed in detail. A block diagram of the methodology is depicted in Fig. 3.
A. Image Restoration
As previously stated, weather and lighting circumstances have a significant impact on the performance of camera-based systems, resulting in blurring, hazing, and rain streaks in the captured pictures. Similar degradations are observed in images of high-speed vehicle movement on the road. The former scenario is caused by environmental changes and is thus less likely to occur, but the latter situation occurs almost without fail, necessitating restoration. To achieve precise vehicle detection, it is necessary to repair the images to eliminate these issues. Following a study of the literature on picture restoration approaches, we were encouraged by the positive results to apply the capabilities of the GAN framework with a physics model [32] to image restoration problems in our own research.
1) Image Restoration with GAN:
An image restoration task is to predict a clear picture x from a provided input image y. Fundamentally, the estimated x should be compatible with the input y under the image formation paradigm, which is as follows:

y = H(x)   (1)

The operator H is used to transfer the unknown outcome x to the observed picture y. Depending on the situation, the blur, haze, or rain operation may be used. It is required to apply extra constraints on x to regularize it, since the estimation of x from y is not well-posed. In the maximum a posteriori (MAP) paradigm, one frequently used method is predicated on the assumption that x may be solved by

x̂ = arg max_x p(x|y) = arg max_x p(y|x) p(x)   (2)

In the above equation, p(y|x) and p(x) are probability density functions, which are referred to as the likelihood term and the image prior in the scientific literature, respectively. Alternatively, the mapping function between x and y can be learned directly using mathematical approaches, x = G(y), where G is the mapping function. The function G can be considered an inverse operator of H. If the mapping function can be predicted accurately, G(y) should, theoretically speaking, be close to the ground truth.
The adversarial learning method used by the GAN algorithm is used to learn a generative model. It trains a generative network and a discriminative network at the same time by optimizing

min_G max_D E_{x∼p_data}[log D(x)] + E_{z∼p_z}[log(1 − D(G(z)))]   (3)
in which z represents random noise, x represents a genuine picture, and D represents a discriminative network. For the sake of convenience, we will also refer to the generative network as G. As part of the training process, the generator generates samples G(z) that may be used to deceive the discriminator, while the discriminator learns to discriminate between actual data and samples generated by the generator. A binary classifier is used as the discriminator. If the observed image y serves as the input to the generator, then the adversarial loss is

D(G(y))   (5)

The value of (5) is near zero if the distribution of the produced picture G(y) differs considerably from the distribution of clear images, and it is larger otherwise. It is possible to address the image restoration difficulty by applying the negative log,

−log D(G(y))   (6)

which serves as a data-driven prior. The full objective then becomes

min_x ||y − H(x)||² + λ ρ(x) − μ log D(x)   (7)

where the data term ||y − H(x)||² ensures that the recovered image x and the input image y are consistent under the appropriate image degradation model. The regularization of the recovered image x is denoted by ρ(x) and models the characteristics of the recovered image. In vision tasks, the term −log D(x) acts as a discriminator-based prior, with its value being considerably smaller if x is clear and much larger otherwise. In other words, minimizing the objective function encourages a clearer x. As a result, the predicted intermediate picture will be significantly more detailed. Accordingly, in order to regularize the solution space of picture restoration, the adversarial loss can be employed as a prior for the restoration. Fig. 4 depicts the major components of the GAN method, which include two discriminative networks, one generative network, and one picture degradation model [32], as well as their interactions.
For image deblurring, H(x) = k ⊗ x, where k is the blur kernel and ⊗ represents the convolution operator. For image dehazing and deraining, ỹ = x ∘ t + A(1 − t), where A represents the atmospheric light and t is the transmission map. The discriminative network D_g is used to determine if the distributions of the generator G outputs are comparable to those of the ground truth images. The discriminative network D_h is required to classify whether the regenerated result ỹ is consistent with the observed image y. All the networks are trained in a collaborative manner from beginning to end.
During training, we rely on an Adam optimizer, which starts with a learning rate of 0.0002, following the method outlined in [24]. We choose a batch size of one and a slope of 0.2 for the Leaky-ReLU. We use the same weight initialization strategy as [24]. We must first get the generator G to create G(y) and ỹ; we may employ the generator with the relevant physics model parameters, as we know the training data as well as the physics model parameters for ỹ. The discriminators D_g and D_h accept the input data sets {x, G(y)} and {y, ỹ}, respectively. We update the discriminators using a history of produced pictures (rather than the images from the most recent generative networks), according to the methods discussed in [24]. The update ratio between the generator and the discriminators is set to one-to-one.
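A minimal PyTorch sketch of one such training step is given below, showing how the data-consistency term and the adversarial prior combine. The toy networks, the box-blur stand-in for the degradation operator H, and the adversarial weight of 1e-3 are illustrative placeholders, not the architecture of [32].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy generator/discriminator stand-ins (the real networks are deep CNNs).
G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(16, 3, 3, padding=1))
D = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(16, 1, 3, stride=2, padding=1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

def H(x):
    # Placeholder degradation model: a fixed 5x5 box blur standing in for k * x.
    k = torch.full((3, 1, 5, 5), 1.0 / 25)
    return F.conv2d(x, k, padding=2, groups=3)

x_clear = torch.rand(1, 3, 64, 64)   # ground-truth clear image (batch size 1)
y = H(x_clear)                       # observed degraded image

# --- discriminator step: real vs. generated ---
opt_d.zero_grad()
d_real = D(x_clear)
d_fake = D(G(y).detach())
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
loss_d.backward()
opt_d.step()

# --- generator step: data-consistency term plus adversarial prior ---
opt_g.zero_grad()
x_hat = G(y)
d_out = D(x_hat)
loss_g = F.mse_loss(H(x_hat), y) + 1e-3 * bce(d_out, torch.ones_like(d_out))
loss_g.backward()
opt_g.step()
```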
B. Extraction of the Road Area
This section covers the procedure for extracting the road surface. We developed it using an image processing approach based on the Gaussian mixture model, which, combined with the deep learning object detection method, yields superior vehicle detection results, as shown in Fig. 2. The video picture of traffic on the road has a wide field of vision. In this investigation, the vehicles are the primary centre of attention, and the road area is the region of interest in the resulting image. Meanwhile, depending on the camera's view angle, the road area is confined to a certain range of the image's horizontal and vertical extent. We were able to extract the road segments from the video using this property. In a traffic scenario, a perfect background is not always accessible and may be altered in crucial circumstances by the introduction or removal of items from the picture, as well as by the presence of objects that are either slow moving or immobile. The Gaussian mixture model (GMM) was used to account for all these factors correctly. According to this method, the background is visible more frequently than the foreground, and its model variance is small [49].
The recent history of the intensity values of each pixel, X_1, ..., X_t, is modeled by a mixture of K Gaussian distributions. The probability of observing the current pixel value is given by the formula:

P(X_t) = Σ_{k=1}^{K} ω_{k,t} · η(X_t, μ_{k,t}, Σ_{k,t})   (9)

where K gives the number of Gaussian distributions, ω_{k,t} is the weight of the k-th Gaussian in the mixture at time t, having mean μ_{k,t} and covariance matrix Σ_{k,t}, and η is a Gaussian probability density function, which is given by

η(X_t, μ, Σ) = (2π)^{−n/2} |Σ|^{−1/2} exp(−(1/2)(X_t − μ)^T Σ^{−1} (X_t − μ))   (10)

where n is the dimension of the colour space. As soon as the parameters have been initialized, the K Gaussians are sorted in order of the ratio ω_k/σ_k. Because backgrounds are more prevalent in scenes than moving objects, and because their values are almost constant, a background pixel corresponds to a high weight with low variance. The first B Gaussian distributions whose accumulated weight surpasses a specific threshold T_1 are kept as the background distribution:

B = arg min_b ( Σ_{k=1}^{b} ω_k > T_1 )   (11)

The other distributions represent the foreground. The system computes and compares every new X_t value to the K Gaussian distributions until a match is found; a pixel value matches a Gaussian distribution if it lies within 2.5 standard deviations of that distribution's mean. After the road section has been extracted as the background picture, the background image is smoothed using a Gaussian filter, and the MeanShift method smooths the image's colour. The final step is to fill the holes and carry out morphological procedures in order to obtain most of the road surface. We applied the extraction to a variety of scenes, with the results shown in Fig. 5.
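As an illustration, the adaptive GMM background model and the subsequent smoothing steps can be sketched with OpenCV as below; the video path, intensity thresholds, and kernel size are hypothetical placeholders.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.mp4")  # hypothetical input video

# OpenCV's MOG2 implements an adaptive Gaussian-mixture background model.
mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                          detectShadows=False)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mog2.apply(frame)  # update the per-pixel mixture model

background = mog2.getBackgroundImage()              # estimated static background
background = cv2.GaussianBlur(background, (5, 5), 0)
# Mean-shift colour smoothing, then morphological closing to fill holes.
background = cv2.pyrMeanShiftFiltering(background, sp=15, sr=30)
gray = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
kernel = np.ones((7, 7), np.uint8)
# The 60-200 road-intensity band is an illustrative threshold choice.
road_mask = cv2.morphologyEx(cv2.inRange(gray, 60, 200), cv2.MORPH_CLOSE, kernel)
```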
C. Categorical Vehicle Detection using SSMD
Here we describe the object detection approach employed in this study. The SSMD network was utilized in the development and deployment of the categorical vehicle detection framework. The approach's final detections are created by feeding the bounding boxes and scores of object class occurrences produced by a fixed-size feed-forward convolutional network into a non-maximum suppression phase. The addition of an auxiliary structure to the base network, such as VGG-16, results in detections with the following important characteristics:
1) Multi-scale feature maps for detection: At the end of the truncated base network, convolutional feature layers are added to complete the network. These layers decrease in size progressively and allow predictions of detections at multiple scales.
2) Convolutional prediction techniques: A sequence of convolutional filters is associated with each feature layer, and it creates a discrete set of detection results. The 3 × 3 × p small kernels produce either a score for each category or a shape offset relative to the default box coordinates, and are the essential element used for the prediction of parameters in a feature layer of size m × n with p channels. At each of the m × n kernel locations, an output value is generated. The bounding box offset output values are measured relative to a default box position at each feature map location; hence it is crucial to understand the differences between measurements made on different feature maps.
3) Default boxes and aspect ratios: Each feature map cell is equipped with a set of default bounding boxes, for each of the multiple feature maps employed. The feature map is tiled with boxes whose position relative to their associated cell is fixed, so the arrangement of boxes in the feature map is fixed. For each feature map cell, we predict the offsets relative to the default box shapes as well as the per-class scores; from there, we calculate the c class scores and four offsets to get the final bounding box, as seen in the illustration. The (c + 4)k filters applied around each location in the feature map thus amount to (c + 4)kmn outputs for an m × n feature map.

a) Training: For SSD training to be effective, the ground truth information must be assigned to specific detector outputs in the fixed set of detector outputs. Once this assignment has been determined, the loss function and back propagation are applied end-to-end. One must also pick the set of default boxes and scales to use, as well as the data augmentation and hard negative mining methods.
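A tiny sketch of such a convolutional prediction head makes the (c + 4)k output arithmetic concrete; the channel count p = 256 and k = 6 below are arbitrary illustrative values.

```python
import torch
import torch.nn as nn

c, k = 4, 6          # classes (3 vehicle classes + background), default boxes per cell
feat = torch.rand(1, 256, 10, 10)   # an m x n = 10 x 10 feature map with p = 256 channels

# A 3 x 3 x p small kernel producing (c + 4) * k outputs per location.
head = nn.Conv2d(256, (c + 4) * k, kernel_size=3, padding=1)
out = head(feat)
print(out.shape)  # torch.Size([1, 48, 10, 10]) -> (c + 4) * k * m * n values in total
```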
i) Matching strategy for training: During training, we need to determine which default boxes correspond to the ground truth boxes and train the network accordingly. Each default box is predefined with a variety of attributes, such as box size, aspect ratio, and placement. Each ground truth box is first matched to the default box with the best jaccard overlap. Then, any default boxes whose jaccard overlap with a ground truth exceeds a threshold (0.5) are also matched to that ground truth. This simplifies the learning problem, since the network may predict high scores for multiple overlapping default boxes instead of needing to select only the single box with the biggest overlap.
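The following NumPy sketch illustrates this matching strategy under the stated 0.5 jaccard threshold; the [x1, y1, x2, y2] box layout is an assumption for illustration.

```python
import numpy as np

def jaccard(boxes_a, boxes_b):
    """IoU between two sets of [x1, y1, x2, y2] boxes -> (len_a, len_b) matrix."""
    a, b = boxes_a[:, None, :], boxes_b[None, :, :]
    lt = np.maximum(a[..., :2], b[..., :2])      # intersection top-left corners
    rb = np.minimum(a[..., 2:], b[..., 2:])      # intersection bottom-right corners
    wh = np.clip(rb - lt, 0, None)
    inter = wh[..., 0] * wh[..., 1]
    area_a = (a[..., 2] - a[..., 0]) * (a[..., 3] - a[..., 1])
    area_b = (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area_a + area_b - inter)

def match(defaults, gts, thresh=0.5):
    iou = jaccard(defaults, gts)
    matched = iou.argmax(axis=1)                 # best ground truth for each default box
    pos = iou.max(axis=1) > thresh               # defaults with overlap above 0.5
    best_def = iou.argmax(axis=0)                # best default box for each ground truth
    matched[best_def] = np.arange(gts.shape[0])  # force-match it even below the threshold
    pos[best_def] = True
    return matched, pos
```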
ii) Loss function: The training objective is designed to deal with a variety of vehicle types. Let x_{ij}^p = {1, 0} be an indicator for matching the i-th default box to the j-th ground truth box of category p. Under the matching strategy shown above, Σ_i x_{ij}^p ≥ 1 holds. The overall objective loss function is the weighted sum of the localization loss (loc) and the confidence loss (conf):

L(x, c, l, g) = (1/N) (L_conf(x, c) + α L_loc(x, l, g))   (12)

where N is the number of matched default boxes, and the weight term α is set to 1 by cross validation. If N equals 0, the loss is set to zero. The localization loss is computed between the predicted box (l) parameters and the ground truth box (g) values.
iii) Scales and aspect ratios for default boxes: To manage diverse object scales, feature maps from many distinct layers in a single network are used for prediction, with parameters shared across all object scales. This allows the network to handle several object scales at the same time. In addition, it has been shown that feature maps from the lower layers can help to enhance the quality of semantic segmentation, since the lower layers capture finer details of the input objects. For detection, we make use of both the lower and higher feature maps. With the tiling of default boxes, individual feature maps learn to be responsive to objects of particular sizes and shapes. Suppose we wish to make predictions using m feature maps. The scale of the default boxes for each feature map is computed as:

s_k = s_min + ((s_max − s_min)/(m − 1)) (k − 1),  k ∈ [1, m]   (13)

where s_min equals 0.2 and s_max equals 0.9; that is, the lowest layer has a scale of 0.2, the topmost layer has a scale of 0.9, and all layers in between are evenly spaced. We impose various aspect ratios on the default boxes, denoted by a_r, from which we can determine the width (w_k^a = s_k √a_r) and height (h_k^a = s_k/√a_r) of each default box. The centre of each default box is set to ((i + 0.5)/|f_k|, (j + 0.5)/|f_k|), where |f_k| denotes the size of the k-th square feature map and i, j ∈ [0, |f_k|).
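A short sketch of Eq. (13) and the width/height computation follows; the aspect-ratio set shown is the common SSD choice and is assumed here.

```python
import math

def default_box_shapes(m, aspect_ratios=(1.0, 2.0, 3.0, 0.5, 1.0 / 3.0),
                       s_min=0.2, s_max=0.9):
    """Per-layer default box (width, height) pairs, in units of the image size."""
    shapes = []
    for k in range(1, m + 1):
        s_k = s_min + (s_max - s_min) * (k - 1) / (m - 1)    # Eq. (13)
        shapes.append([(s_k * math.sqrt(a), s_k / math.sqrt(a))
                       for a in aspect_ratios])
    return shapes

for k, boxes in enumerate(default_box_shapes(6), start=1):
    print(k, [(round(w, 3), round(h, 3)) for w, h in boxes])
```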
iv) Hard negative mining: We rank the default boxes according to their largest confidence loss and choose just those at the top of the list, ensuring that the ratio of negatives to positives is no more than 3:1. This results in a speedier optimization process and more stable training.
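A NumPy sketch of this selection rule, assuming a per-box confidence loss array and a boolean positive mask:

```python
import numpy as np

def hard_negative_mask(conf_loss, pos, neg_pos_ratio=3):
    """Keep only the highest-loss negatives so that negatives <= 3x positives."""
    loss = conf_loss.astype(float).copy()
    loss[pos] = -np.inf                      # exclude positives from the ranking
    order = np.argsort(-loss)                # negatives sorted by confidence loss
    n_neg = min(neg_pos_ratio * int(pos.sum()), int((~pos).sum()))
    neg = np.zeros_like(pos)
    neg[order[:n_neg]] = True                # top-ranked negatives are retained
    return neg
```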
v) Data augmentation: To make the model more robust to a broad range of input object sizes and shapes, each training image is randomly sampled using one of the following methods: Use the whole original input image.
Sample a patch such that the minimum jaccard overlap with the objects is 0.1, 0.3, 0.5, 0.7, or 0.9.
Take a sample of a patch at random.
Each sampled patch is between [0.1, 1] of the original image's size, with an aspect ratio between 1/2 and 2. Following the preceding sampling step, each sampled patch is resized to a fixed size and horizontally flipped with a probability of 50%.
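The sampling procedure can be sketched as rejection sampling; the code below reuses the jaccard helper from the matching sketch above, and the subsequent fixed-size resize and 50% horizontal flip are omitted.

```python
import random
import numpy as np

def sample_patch(img_w, img_h, gt_boxes,
                 min_ious=(None, 0.1, 0.3, 0.5, 0.7, 0.9)):
    """Return a crop [x1, y1, x2, y2], or None to keep the whole image."""
    min_iou = random.choice(min_ious)
    if min_iou is None:
        return None                              # option 1: whole original image
    for _ in range(50):                          # rejection sampling
        w = int(img_w * random.uniform(0.1, 1.0))
        h = int(w * random.uniform(0.5, 2.0))    # aspect ratio in [1/2, 2]
        if not (1 <= w <= img_w and 1 <= h <= img_h):
            continue
        x1 = random.randint(0, img_w - w)
        y1 = random.randint(0, img_h - h)
        patch = np.array([[x1, y1, x1 + w, y1 + h]], dtype=float)
        # jaccard() is the helper defined in the matching sketch above.
        if jaccard(patch, gt_boxes).min() >= min_iou:
            return patch[0]
    return None
```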
D. Multiple Vehicle Object Tracking
This section describes how numerous vehicle objects are tracked using the object boxes discovered in the preceding section. At this stage, the BEBLID algorithm was employed to extract vehicle characteristics, and good results were achieved. The BEBLID method surpasses the competition by a considerable margin in terms of computing performance and matching cost, and is a superior alternative to other image description algorithms previously described in the literature. The BEBLID features are computed from differences in grey values between pairs of box image regions, with the integral image serving as the basis for these computations. The technique makes use of AdaBoost to train a descriptor on an imbalanced data set to handle the challenge of highly asymmetric image matching. Binarization of the descriptor is achieved by minimizing a similarity loss in which all weak learners share a common weight. The coordinate system is established by taking the feature point as the centre of a circle and using the centroid of the point region to define the coordinate system's x-axis. Thus, when the image is rotated, the coordinate system may be adjusted to match the image's rotation, resulting in rotation consistency of the feature point descriptor, so that a consistent point can be obtained when viewed from a different angle. After binarization, the feature points are matched using the XOR operation, which improves the overall efficiency of the matching process. Fig. 6 illustrates the tracking method. When the number of matching points reaches a predefined threshold, the point is regarded as successfully matched, and the object's matching box is drawn around it. The prediction box is obtained as follows: purification of the feature points and estimation of the homography matrix are performed using the Maximum Likelihood Estimator Sample Consensus (MLESAC) algorithm, which can exclude incorrect noise points caused by matching errors. A perspective transform using the estimated homography matrix is applied to the location of the original object detection box to obtain a matching prediction box. For the prediction box in the first frame and the detection box in the second frame to be matched to the same item, the distance between their centre points must fall below a criterion. To be more specific, we define a threshold T equal to the greatest pixel change allowed between the centre point of the vehicle object box in two subsequent video frames. The positional movement of the same vehicle between two successive frames is less than the threshold T; when the centre point of the vehicle object box moves by more than T between two subsequent frames, the vehicles in those two frames are considered unrelated, and the data association fails. The threshold T value is proportional to the size of the vehicle object box, taking scale shift into account, so the threshold for each vehicle object box is set individually. This definition is flexible enough to accommodate vehicle mobility and a variety of video input sizes.
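A sketch of BEBLID description and Hamming matching using the opencv-contrib implementation is shown below. The frame filenames are hypothetical, the 0.75 scale follows the OpenCV sample for ORB keypoints, and since OpenCV exposes RANSAC/LMEDS rather than MLESAC, RANSAC is used here as a stand-in for the robust homography estimation step.

```python
import cv2
import numpy as np

img1 = cv2.imread("frame_t.png", cv2.IMREAD_GRAYSCALE)    # hypothetical frame at t
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frame at t+1

detector = cv2.ORB_create(5000)              # keypoint detector (BEBLID only describes)
kp1 = detector.detect(img1, None)
kp2 = detector.detect(img2, None)

# BEBLID descriptor from opencv-contrib (OpenCV >= 4.5.1).
beblid = cv2.xfeatures2d.BEBLID_create(0.75)
kp1, des1 = beblid.compute(img1, kp1)
kp2, des2 = beblid.compute(img2, kp2)

# Binary descriptors are matched with Hamming distance (XOR + popcount).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Robust homography from the matched points (RANSAC standing in for MLESAC).
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
```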
When T = box height/0.25 is used, the height of the vehicle object box is utilized as the input parameter for the calculation. We discard any trajectory that has not been updated in ten consecutive frames, which is suitable for a camera scene with a wide-angle image collected along the route under investigation. If the prediction box does not match any item in future frames, it is determined that the object is absent from the video scene, and the prediction box is removed. The procedure outlined above yields global object identities and tracking trajectories from the viewpoint of the whole road surveillance video.
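A greedy sketch of this centre-distance association with the per-box threshold T follows; the [x1, y1, x2, y2] box layout is an assumption.

```python
def associate(pred_boxes, det_boxes):
    """Greedy centre-distance association with a per-box threshold T = h / 0.25."""
    assignments, used = {}, set()
    for i, pb in enumerate(pred_boxes):                 # pb: [x1, y1, x2, y2]
        cx, cy = (pb[0] + pb[2]) / 2, (pb[1] + pb[3]) / 2
        T = (pb[3] - pb[1]) / 0.25                      # threshold scales with box height
        best, best_d = None, T
        for j, db in enumerate(det_boxes):
            if j in used:
                continue
            dx = (db[0] + db[2]) / 2 - cx
            dy = (db[1] + db[3]) / 2 - cy
            d = (dx * dx + dy * dy) ** 0.5
            if d < best_d:                              # closest detection within T wins
                best, best_d = j, d
        if best is not None:
            assignments[i] = best
            used.add(best)
    return assignments                                  # unmatched tracks age out later
```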
E. Analysis of Trajectories
This section discusses both the analysis of moving objects' trajectories and the gathering of statistics on the numerous objects in a traffic flow. The majority of roadways are split into two directions, separated by isolation barriers. We identify the vehicle's orientation in the world coordinate system based on its tracking trajectory and mark it as approaching or receding from the camera. A straight line is drawn across the traffic scene image to serve as a detection line for the purpose of calculating vehicle classification statistics. The detection line is centred at the half-height of the traffic image. Concurrently, the road's traffic flow in both directions is counted: when an object's trajectory crosses the detection line, the object's record is updated. At the end of the operation, the number of objects in different orientations and categories over a certain time period may be calculated.
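A small sketch of the detection-line counting logic is shown below; the track-history layout is a hypothetical structure for illustration.

```python
def update_counts(track_history, line_y, counts):
    """Count each track once when its centre crosses the horizontal detection line."""
    for tid, pts in track_history.items():          # pts: list of (cx, cy, category)
        if len(pts) < 2 or tid in counts["seen"]:
            continue
        (x0, y0, cat), (x1, y1, _) = pts[-2], pts[-1]
        if (y0 - line_y) * (y1 - line_y) < 0:       # sign change => line crossed
            direction = "approaching" if y1 > y0 else "receding"
            key = (cat, direction)
            counts[key] = counts.get(key, 0) + 1
            counts["seen"].add(tid)                 # never double-count a track

counts = {"seen": set()}                            # e.g. counts[("LMV", "approaching")]
```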
V. SIMULATION AND RESULTS
Many measures have been developed in the past for evaluating system performance quantitatively. The proper one depends heavily on the application, and the search for a single, universal evaluation criterion is still underway. On the one hand, it is ideal to condense results into a single number that can be compared directly. On the other hand, one does not want to lose knowledge about the algorithms' specific faults, and so presents a large number of performance estimates, which makes a clear verdict impossible. We therefore evaluate the performance with more than one parameter.
A. For Image Restoration

1) Peak Signal-to-Noise Ratio (PSNR): Considering a reference image f and a test image g, both of resolution M × N, the PSNR score between f and g is calculated as:

PSNR(f, g) = 10 log10(255² / MSE(f, g))   (14)

MSE(f, g) = (1/MN) Σ_{i=1}^{M} Σ_{j=1}^{N} (f_{ij} − g_{ij})²   (15)

The PSNR score increases as the mean squared error (MSE) decreases; this indicates that a greater PSNR value corresponds to higher image quality.
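A direct NumPy implementation of Eqs. (14)-(15):

```python
import numpy as np

def psnr(f, g, peak=255.0):
    """Peak signal-to-noise ratio between reference f and test g (Eqs. 14-15)."""
    mse = np.mean((f.astype(np.float64) - g.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```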
2) Structural Similarity Index (SSIM): The SSIM is a well-known quality statistic used to compare two images, and is thought to be connected to the human visual system's perception of quality. The SSIM score is calculated as:

SSIM(f, g) = l(f, g) · c(f, g) · s(f, g)   (16)

l(f, g) = (2 μ_f μ_g + C1) / (μ_f² + μ_g² + C1)   (17)

c(f, g) = (2 σ_f σ_g + C2) / (σ_f² + σ_g² + C2)   (18)

s(f, g) = (σ_fg + C3) / (σ_f σ_g + C3)   (19)

where l, c, and s are the luminance, contrast, and structural comparison functions, μ and σ denote the mean and standard deviation of the corresponding image, σ_fg is the covariance of f and g, and C1, C2, C3 are small stabilizing constants. A few results of the GAN framework for image restoration are shown in Fig. 7.
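In practice, the SSIM score used for such comparisons can be obtained from scikit-image, as in this sketch; the random arrays are placeholders for a reference/restored image pair, and for colour images recent scikit-image versions take channel_axis=-1.

```python
import numpy as np
from skimage.metrics import structural_similarity

f = np.random.randint(0, 256, (128, 128), dtype=np.uint8)   # reference (placeholder)
g = np.random.randint(0, 256, (128, 128), dtype=np.uint8)   # restored  (placeholder)
score = structural_similarity(f, g, data_range=255)          # SSIM in [-1, 1]
```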
The images are randomly selected, and their restoration quality is quantified in terms of PSNR and SSIM. The averages of the two parameters' scores are shown in Table II.
B. For Vehicle Detection
It was necessary to use the test set to compute the mean average precision (mAP). mAP is the mean over classes of the Average Precision (AP), which corresponds to the area under the precision-recall curve for a given object class [43]. The experiment involves three classes: two-wheelers, light motor vehicles, and heavy motor vehicles. For each category, AP is computed as the mean of the precision values at 11 points on the category's precision/recall curve, using the set of recall thresholds [0, 0.1, 0.2, ..., 1]. For recall values larger than each threshold, there is a corresponding maximum precision value, denoted by pmax(recall); in this experiment, the threshold for counting a detection as correct is 0.25. These 11 maximum precision values are computed, and AP is their average. This number was used to describe the overall quality of our model. The calculation of precision, recall, and IoU (Intersection over Union) is as follows:

Precision = TP / (TP + FP)   (22)

Recall = TP / (TP + FN)   (23)

IoU = Area of Overlap / Area of Union   (24)

in which TP, FN, and FP denote the number of true positives, false negatives, and false positives, respectively. We computed the parameter scores for both of the following configurations: 1) The dataset was sent directly into the object detection algorithm, that is, no image restoration procedure was used to restore the images.
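An 11-point interpolated AP and the IoU of Eq. (24) can be computed as in this NumPy sketch, assuming precision/recall arrays traced along the category's precision-recall curve:

```python
import numpy as np

def eleven_point_ap(precisions, recalls):
    """Mean of the maximum precision at recall >= t for t in {0, 0.1, ..., 1}."""
    ap = 0.0
    for t in np.arange(0.0, 1.01, 0.1):
        mask = recalls >= t
        ap += precisions[mask].max() if mask.any() else 0.0
    return ap / 11.0

def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes (Eq. 24)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0
```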
2) The dataset was restored using the GAN framework before being fed into the object detection algorithm.
Tables III and IV provide the results of the parameters for each of the two configurations. Comparing them, there is a 13.7 percent difference in the metric mAP between the two configurations. This improvement clearly demonstrates that restoring the pictures has a significant influence on the quality of object detection and, indirectly, on the accuracy of object tracking.
A few results of the SSMD approach for categorical vehicle detection are depicted in Fig. 8.
C. Multiple Vehicle Object Tracking
The performance evaluation for multiple vehicle object tracking is done through the following parameters [51]:
1) Multiple Object Tracking Accuracy (MOTA):

This parameter takes into account three different types of errors: false positives, missed targets, and identity switches. A high MOTA value indicates better tracking accuracy. It is calculated as:

MOTA = 1 − (Σ_t (FN_t + FP_t + IDSW_t)) / (Σ_t GT_t)   (25)

where t is the frame index and GT_t is the count of ground truth objects in frame t. MOTA can be negative if the count of mistakes produced by the tracker exceeds the total object count in the scene. The MOTA score is a solid indicator of a tracking system's overall performance.

2) Multiple Object Tracking Precision (MOTP):

MOTP = (Σ_{t,i} d_{t,i}) / (Σ_t c_t)   (26)

where d_{t,i} is the bounding box overlap of target i with its assigned ground truth object, and c_t is the count of matches in frame t. MOTP gives the average overlap between all correctly matched hypotheses and their corresponding objects, and ranges between t_d = 50% and 100%.
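Given per-frame error counts and matched overlaps, Eqs. (25)-(26) reduce to the following sketch; the frame-record layout is a hypothetical structure.

```python
def mota_motp(frames):
    """frames: list of dicts with keys 'fn', 'fp', 'idsw', 'gt', 'overlaps'."""
    errors = sum(f["fn"] + f["fp"] + f["idsw"] for f in frames)
    gt_total = sum(f["gt"] for f in frames)
    mota = 1.0 - errors / gt_total                       # Eq. (25)
    d_sum = sum(sum(f["overlaps"]) for f in frames)      # matched IoUs d_{t,i}
    c_sum = sum(len(f["overlaps"]) for f in frames)      # match counts c_t
    motp = d_sum / c_sum if c_sum else 0.0               # Eq. (26)
    return mota, motp
```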
3) False Alarms per Frame (FAF):
It reflects the per-frame number of false alarms. A lower value of FAF is desirable for better tracking.
4) Mostly Tracked (MT):
It indicates the number of trajectories that are mostly tracked, i.e., the target has had the same label for at least 80% of its lifespan. A high value of the MT parameter is desirable for better tracking.
5) Mostly Lost (ML):
It indicates the number of trajectories that are mostly lost, i.e., the target is tracked for at most 20% of its lifespan. A lower value of the ML parameter is desirable for better tracking.

8) IDsw: The number of times an ID switches to a formerly tracked object. A lower value of the IDsw parameter is desirable for better tracking.

9) Frag: The number of times a track is fragmented due to a missed detection. A lower value of the Frag parameter is desirable for better tracking.

The scores of the various tracking parameters are given in Table V. Trajectory estimation on the dataset is depicted in Fig. 9; it summarizes the movement of vehicles with direction information and maps the future state predictions.
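As referenced under the MOTA entry, a minimal sketch of the two CLEAR-MOT scores computed from per-frame counts (the input containers and names are assumptions of this illustration):

```python
def mota(fn, fp, idsw, gt):
    """MOTA = 1 - sum_t(FN_t + FP_t + IDSW_t) / sum_t GT_t; may be negative."""
    return 1.0 - (sum(fn) + sum(fp) + sum(idsw)) / sum(gt)

def motp(overlaps, matches):
    """MOTP = sum_{t,i} d_{t,i} / sum_t c_t, the mean overlap of matched pairs.

    overlaps: per-frame lists of bounding-box overlaps d_{t,i}
    matches:  per-frame match counts c_t
    """
    return sum(sum(frame) for frame in overlaps) / sum(matches)
```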
VI. CONCLUSION
This research developed, from the standpoint of surveillance cameras, a dataset of vehicle objects and presented a technique for image restoration, object detection, and tracking in road traffic video scenes. The use of the GAN framework for image restoration, together with the GMM for road area extraction, resulted in a more effective detection system. The annotated road vehicle object dataset was used to train the SSMD object detection algorithm, yielding an end-to-end vehicle detection model. The location of the object in the image is evaluated by the BEBLID feature extraction method based on the results of the object detection technique and the image data, so the trajectory of a vehicle can be determined by tracking the binary features of multiple objects. Lastly, the vehicle trajectories were examined to obtain information on the road traffic scene, such as driving direction, vehicle category, and traffic density. The experimental results confirmed that the suggested vehicle detection and tracking approach for road traffic scenes performs well and is practicable. Compared to the traditional method of monitoring vehicle traffic with dedicated hardware, the method described in this paper is low in cost and high in stability, and it requires no large-scale construction or installation work on existing monitoring equipment, which is a significant advantage. | 2021-10-18T16:59:50.704Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "50147d38b7d2515fef7eec438006e2f0a40e515a",
"oa_license": "CCBY",
"oa_url": "http://thesai.org/Downloads/Volume12No9/Paper_64-Categorical_Vehicle_Classification_and_Tracking.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "880d8b04ccaeaa61e07ad963eddffbe71c66906c",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
257805220 | pes2o/s2orc | v3-fos-license | Viscous effects on morphological and thermodynamic non-equilibrium characterizations of shock-bubble interaction
A two-fluid discrete Boltzmann model with a flexible Prandtl number is formulated to study the shock-bubble interaction (SBI). This paper mainly focuses on the viscous effects on morphological and thermodynamic non-equilibrium (TNE) characterizations during the SBI process. Due to the rapid and brief nature of the SBI process, viscosity has a relatively limited influence on macroscopic parameters but significantly affects the TNE features of the fluid system. Morphologically, viscosity affects the configuration of the vortex pair, increases the amplitudes of the gradients of both the average density and the average temperature of the fluid field, and reduces the circulation of the bubble. As a higher-viscosity fluid absorbs more energy from the shock wave, it leads to an increase in both the proportion of the high-density region and the corresponding boundary length for a fixed density threshold. The spatiotemporal features of TNE quantities are analyzed from multiple perspectives. The spatial configuration of these TNE quantities exhibits interesting symmetry, which aids in understanding the way and extent to which a fluid unit deviates from the equilibrium state. Theoretically, viscosity influences these TNE quantities by affecting the transport coefficients and the gradients of macroscopic quantities. Meanwhile, the viscosity increases the entropy production rate originating from the non-organized momentum flux mainly through amplifying the transport coefficient, and enhances the entropy production rate contributed by the non-organized energy flux by raising the temperature gradient. These multi-perspective results collectively provide a relatively comprehensive depiction of the SBI.
I. Introduction
SBI arises in a wide variety of natural and engineering settings. [2][3][4][5][6][7][8] For example, in astrophysics, the Puppis A supernova remnant interacts with a complex system of interstellar clouds. [9] In combustion systems, the shock wave ignites a mixture bubble composed of H2 and O2. [10] In inertial confinement fusion, a laser-induced shock wave impacts isolated defect bubbles inside the capsule, aggravating hydrodynamic instability. [11] In the field of medicine, shock waves are directed into the body to fragment kidney stones inside the patient. [12,13] A fundamental setup for studying SBI involves a spherical bubble accelerated by a planar shock wave. The deformation of the bubble is influenced by numerous physical factors, including the type and strength (expressed as the Mach number, Ma) of the incident shock, the initial bubble shape, the boundary types of the flow field, the density ratio between the bubble and the ambient gas (Atwood number), the specific-heat ratio, the viscosity, and the heat conduction. Due to the significance of SBI in engineering applications, substantial efforts have been dedicated to unraveling its evolutionary mechanisms. Researchers have employed a variety of approaches, primarily encompassing theoretical methods, [14,15] experimental investigations, [2,16-19] and numerical simulations. [20-27] Among these, Samtaney et al. [14] provided analytical expressions for the circulation Γ within and beyond the regular refraction regime. Ding et al. [28,29] conducted experimental and numerical investigations into the effects of initial interface curvature on the interaction between planar shock waves and heavy/light bubbles. Additionally, other scenarios, such as bubbles impacted by a converging shock wave [30] and spherical/cylindrical bubbles interacting with a planar shock wave under re-shock conditions, [31,32] have also been explored.
Numerical simulations of SBI can be categorized into three types based on their theoretical foundation: macroscopic, mesoscopic, and microscopic methods. In previous studies, the traditional macroscopic modeling method, founded on the continuum hypothesis (or the equilibrium and near-equilibrium hypothesis), has been widely employed. Such macroscopic fluid models are often represented by the Euler equations and the Navier-Stokes (NS) equations, wherein the former assumes equilibrium and the latter assumes near-equilibrium. The hydrodynamic equations in the traditional macroscopic modeling method describe only the hydrodynamic behaviors corresponding to the conservation laws of mass, momentum, and energy. However, with increasing degrees of non-equilibrium and non-continuity, more appropriate hydrodynamic equations are the Extended Hydrodynamic Equations (EHEs), which encompass not only the evolution equations of the conserved kinetic moments but also those of the most relevant non-conserved kinetic moments of the distribution function. [33] The traditional macroscopic model describes the SBI process from a macroscopic perspective, primarily through the fields of density, temperature, velocity, and pressure, and has been instrumental in advancing our understanding of the physical aspects of SBI. For example, Ding et al. [28,29] demonstrated good agreement between the interface structures obtained from the compressible Euler equations and experimental data. Zou et al. [21] investigated the Atwood number effects and the jet phenomenon caused by shock focusing through the multi-fluid Eulerian equations. In contrast, a smaller portion of SBI research utilizes mesoscopic methods, such as the Direct Simulation Monte Carlo method. [34] Two challenges are encountered in previous SBI numerical studies. (i) Most of these studies describe mainly the flow morphology and the SBI process from a macroscopic view. They are concerned more with dynamic processes such as bubble deformation, interface motion, vortex motion, and mixing degree. These physical quantities are helpful for understanding the flow morphology during the SBI process but are far from sufficient. With increasing non-continuity and Thermodynamic Non-Equilibrium (TNE), the complexity of system behaviors increases sharply; to ensure that the ability to describe the system does not decrease, it is necessary to incorporate additional physical quantities, such as TNE parameters, to fully characterize its state and behavior. [36][37][38][39][40][41][42][43][44][45][46][47][48][49][50] Among these, Zhang et al. [47] studied the specific-heat ratio effects on kinetic processes of SBI from multiple perspectives. (ii) The viscous effects on small-scale structures and kinetic features of SBI need further investigation. Research by Zhang et al. [45] presented the significant role of viscosity in material mixing in a single-mode Rayleigh-Taylor (RT) system. Zhang et al. [34] demonstrated that viscous effects lead to the disappearance of some typical phenomena in the reacting shock-bubble interaction. The bulk viscosity associated with the viscous excess normal stress, reflecting the different physical properties of diatomic and polyatomic gases, significantly changes the flow morphology and results in complex wave patterns, vorticity generation, vortex formation, and bubble deformation. [51]
Moreover, studying the viscous effects on kinetic features, particularly on TNE features, is crucial for understanding the fundamental mechanism of viscous effects.
Kinetic methods such as the lattice Boltzmann method (LBM) have developed along two complementary lines. [63][64][65] One is to construct physical models that connect microscopic and macroscopic descriptions; the other aims to develop new kinds of schemes that numerically solve various partial differential equations. The current DBM evolved from the physical modeling branch of LBM, with some additions and some omissions. The DBM serves as a physical modeling and complex physical field analysis method. Its tasks mainly include two aspects. (i) Capturing the main features of the problems to be studied. In addition to the mass, momentum, and energy conservation moments, the DBM considers the most relevant non-conserved kinetic moments that describe the main TNE behaviors of the system. With increasing TNE, more non-conserved kinetic moments are required to ensure that the description of the system state and behavior does not degrade significantly. Through the Chapman-Enskog (CE) multiscale analysis, the kinetic moments describing the system state and features can be quickly determined. It should be noted that in complex systems each kinetic moment represents one perspective, and complex systems require multi-perspective research; the results from these perspectives together constitute a relatively complete description of the system. (ii) Trying to extract more valuable physical information from massive data and complex physical fields. Based on non-equilibrium statistical physics, the DBM uses the non-conserved moments of (f − f^eq), i.e., the TNE quantities, to describe how and how much the system deviates from the thermodynamic equilibrium state and to check the corresponding effects of this deviation. [53] The TNE quantities open a high-dimensional phase space, and this phase space, along with its sub-spaces, provides an intuitive geometric framework for understanding complex behaviors. Figure 1(a) shows the schematic space opened by independent components of TNE quantities. In the phase space, the origin represents the thermodynamic equilibrium state, and a state point indicates a specific TNE state. The distance D describes the degree to which a specific TNE state deviates from the equilibrium state. The distance d between two state points can be defined as the difference between two TNE states, and its reciprocal, 1/d, as the similarity between the two TNE states. The mean distance d during the kinetic process can roughly characterize the difference between two corresponding kinetic processes, and its reciprocal, 1/d, the process similarity. As shown in Fig. 1(b), the phase space description methodology can be extended to any set of system characteristics. Many kinds of TNE quantities can be defined according to the research requirements, and each TNE characteristic quantity describes the TNE behaviors from its own perspective.
FIG. 1. Schematic space opened by independent components of TNE quantities, where "TE" represents thermodynamic equilibrium and "NE" indicates non-equilibrium.

Other analysis methods, such as the morphological analysis method based on the Minkowski measures [66] and the tracer particle description, [45] are also coupled into DBM. These analysis approaches together constitute a relatively complete description of a complex physical field. In summary, DBM modeling surpasses traditional macroscopic modeling in at least two aspects: (i) extended descriptive ability for non-equilibrium flows: the physical function of DBM corresponds to the EHEs, which consider not only the evolution of the conserved kinetic moments but also the evolution of the most relevant non-conserved moments, allowing DBM to describe flows with higher TNE more accurately; and (ii) a set of analysis methods for complex physical fields.
In numerical simulation research, the NS model is responsible only for the physical modeling before simulation and does not address the analysis of the complex physical fields after simulation, whereas the DBM is responsible for both pre-simulation modeling and post-simulation analysis.
In the following, Section II outlines the discrete Boltzmann method construction process, which includes the coarse-grained modeling part and the analysis scheme construction part, corresponding to steps (i) and (iii) in Fig. 2(a). Section III presents the numerical results, and Sec. IV concludes.
II. Discrete Boltzmann method construction process
As shown in Fig. 2(a), numerical experimental research mainly includes three parts: (i) physical modeling, (ii) selection/design of the discrete format, and (iii) numerical experimental study and analysis. The DBM is a combined physical modeling and complex physical field analysis method, so it works only for parts (i) and (iii). Research on discrete formats in step (ii) is not included in DBM; the DBM is merely a user of discrete formats. Therefore, the complete interpretation of DBM is Discrete Boltzmann modeling and analysis Method. In summary, there are two main tasks for DBM: (i) for the specific physical problems to be investigated, the DBM should ensure the rationality of the theoretical model while giving consideration to simplicity, and (ii) for the massive data and complex physical fields, the DBM aims to extract more valuable and helpful information. Physically, to describe the physical image of SBI and to investigate the viscous effects on SBI, we choose a first-order two-fluid DBM with a flexible Prandtl (Pr) number, where "first-order" means that only the first order of TNE is considered in the modeling process. [47] Therefore, in the following, we take this first-order two-fluid DBM as an example and demonstrate its physical modeling steps in detail. The method construction process of DBM contains two parts: the coarse-grained physical modeling before simulation and the complex physical field analysis methods after simulation. Among them, the coarse-grained physical modeling includes simplification of the collision operator and discretization of the particle velocity space.
A. Coarse-grained modeling part
Simplification of collision operator
To facilitate the solution of the Boltzmann equation, the original complex collision operator should be simplified. A common method, which introduces a local equilibrium distribution function f^eq and writes the collision operator in a linearized form, was presented by Bhatnagar, Gross, and Krook (BGK). [67] The simplification step requires that the kinetic moments describing the physical problem do not change their values after the simplification. It should be noted that the original BGK model describes only situations corresponding to quasi-static and quasi-equilibrium flows; the BGK-like models currently used for non-equilibrium flows are modified versions that incorporate a mean-field theory description. [53,55,68] To realize an adjustable Pr number, the Ellipsoidal Statistical Bhatnagar-Gross-Krook (ES-BGK) Boltzmann equation is adopted in this work. [57,69,70] Up to this step, the Boltzmann equation in ES-BGK form is obtained, i.e.,

∂f/∂t + v · ∂f/∂r = −(1/τ)(f − f^ES),

where v, r, t, and τ represent the particle velocity, particle position, time, and relaxation time, respectively. The ES distribution function f^ES is built from the mass density ρ, the flow velocity u_α (u_β) in the α (β) direction, and a flexible parameter b that adjusts the Pr number, i.e., Pr = 1/(1 − b). When b = 0, f^ES reduces to f^eq and the ES-BGK form simplifies to the BGK form.
Discretization of the particle velocity space
For simulation, the continuous Boltzmann equation should be discretized in its velocity space. In this step, the continuous kinetic moments are transferred into summation form. The DBM must ensure that the reserved kinetic moments describing the system behaviors keep their values after discretizing the velocity space, i.e., ∫ f Ψ′(v) dv = Σ_i f_i Ψ′(v_i). Because the distribution function f can be expressed through f^eq, the reserved kinetic moments of f^eq must also keep their values, i.e., ∫ f^eq Ψ″(v) dv = Σ_i f_i^eq Ψ″(v_i), where Ψ′ and Ψ″ represent the conserved kinetic moments. The conserved kinetic moments depend on the specific physical problem, and they give the most necessary physical constraints for the discretization of the velocity space. The discrete Boltzmann equation in ES-BGK form is

∂f_i/∂t + v_i · ∂f_i/∂r = −(1/τ)(f_i − f_i^ES).

In DBM, the CE analysis is used to determine the type and number of the reserved kinetic moments. Specifically, five kinetic moments, M_0, M_1, M_2, M_3, and M_{4,2}, are identified when constructing a first-order DBM; their expressions are provided in Appendix A. Because f^ES is related to Δ*_{2,αβ}, the modeling process also requires the kinetic moments of f^eq, i.e., M^eq_0, M^eq_1, M^eq_2, M^eq_3, and M^eq_{4,2}. For convenience, the kinetic moments can be written as matrix equations, [71] i.e., C · f^{σ,eq} = f̂^{σ,eq} and C · f^{σ,ES} = f̂^{σ,ES}, which are Eqs. (4) and (5); they represent the physical constraints imposed by the discretization step in DBM modeling. According to the number of reserved kinetic moments, the dimension of f̂^{σ,eq} (f̂^{σ,ES}) is N_moment = 13, and C represents the matrix of the discrete velocity model (DVM). It should be noted that the DBM retains the use of discrete velocities but does not adhere to a specific discrete format; it only gives the most necessary physical constraints for the selection of the discrete velocities. In summary, during the coarse-grained physical modeling process, the DBM provides the necessary physical constraints that should be followed.
According to the CE multiscale analysis, the Boltzmann equation can be reduced to the macroscopic hydrodynamic equations, as detailed in Appendix B. It should be noted that recovering the corresponding hydrodynamic equations is only one aspect of the DBM's physical function; physically, the physical function of DBM corresponds to the EHEs, which reserve not only the evolution equations of the conserved kinetic moments but also the evolution equations of some of the most closely related non-conserved moments. The modeling method that derives EHEs from the Boltzmann equation is referred to as the Kinetic Macroscopic Modeling (KMM) method, whereas the DBM is a Kinetic Direct Modeling (KDM) method. In DBM, deriving the hydrodynamic equations is used only to verify the correctness of the physical modeling; the DBM does not need to solve the hydrodynamic equations.
Introduction of the two-fluid model
To describe the interaction between two different fluid components, we introduce two sets of distribution functions. Each distribution function describes one fluid component and corresponds to one set of hydrodynamic quantities (density ρ^σ, flow velocity u^σ, temperature T^σ, pressure p^σ). After this step, the two-fluid discrete Boltzmann equation in ES-BGK form is obtained, i.e.,

∂f_i^σ/∂t + v_i · ∂f_i^σ/∂r = −(1/τ^σ)(f_i^σ − f_i^{σ,ES}),

where the superscript σ denotes the fluid component, i.e., σ = A or B, and the subscript i indexes the discrete velocities.
Here f_i^{σ,ES} = f_i^{σ,ES}(ρ^σ, u, T), where ρ^σ, u, and T are the mass density of component σ, the flow velocity of the mixture, and the temperature of the mixture, respectively. In the two-fluid DBM, the mass density and flow velocity of each component are calculated from the first two conserved kinetic moments, i.e., ρ^σ = Σ_i f_i^σ and ρ^σ u^σ = Σ_i f_i^σ v_i. The mass density and flow velocity of the mixture are ρ = Σ_σ ρ^σ and u = Σ_σ ρ^σ u^σ / ρ. The temperatures of each component and of the mixture are obtained from the third conserved kinetic moment, where E^σ = (D/2) ρ^σ R^σ T^σ is the internal energy of component σ and D represents the spatial dimension.
B. Analysis scheme construction part
In the DBM, the non-conserved kinetic moments reflect the manner and extent to which the system deviates from thermodynamic equilibrium. By analyzing the non-conserved moments of (f − f^eq), we can effectively characterize the TNE state and extract valuable TNE information from the fluid system. Because the DBM used in this paper considers only a limited number of non-conserved kinetic moments, it captures only part of the TNE behavior of non-equilibrium systems; however, these captured TNE behaviors are the most relevant and critical aspects for understanding the system's overall dynamics and characteristics.
In a first-order DBM, four fundamental TNE quantities can be defined, i.e., Δ^{σ*}_2, Δ^{σ*}_{3,1}, Δ^{σ*}_3, and Δ^{σ*}_{4,2}. They are the non-conserved central moments of (f^σ − f^{σ,eq}), evaluated with the central velocity v*_i = v_i − u, where u is the macroscopic flow velocity. The subscript "m,n" means that the m-order tensor is contracted to an n-order tensor. The first two, Δ^{σ*}_{2,αβ} e_α e_β and Δ^{σ*}_{3,1,α} e_α, are the most typical TNE quantities, where e_α (e_β) is the unit vector in the α (β) direction. Physically, they correspond to the more generalized viscous stress (or non-organized momentum flux, NOMF) and heat flux (or non-organized energy flux, NOEF), respectively. The latter two TNE quantities contain more condensed information. The spatiotemporal evolution of these four TNE quantities is a common scheme for describing the way and degree to which a fluid unit deviates from the equilibrium state.
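As an illustration of these definitions, here is a sketch of Δ*_2 and Δ*_{3,1} at a single fluid unit, assuming a 2-D model without extra internal degrees of freedom (the array shapes and function names are assumptions of this illustration):

```python
import numpy as np

def tne_delta2(f, f_eq, v, u):
    """NOMF Delta*_2: second central-velocity moment of (f - f_eq).

    f, f_eq: (Ni,) discrete distributions at one fluid unit
    v:       (Ni, 2) discrete velocities; u: (2,) local flow velocity
    """
    v_star = v - u                       # central velocities v* = v - u
    return np.einsum('i,ia,ib->ab', f - f_eq, v_star, v_star)

def tne_delta31(f, f_eq, v, u):
    """NOEF Delta*_{3,1}: contraction of the third central moment of (f - f_eq)."""
    v_star = v - u
    v2 = np.sum(v_star ** 2, axis=1)     # |v*|^2 for each discrete velocity
    return 0.5 * np.einsum('i,i,ia->a', f - f_eq, v2, v_star)
```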
Mathematically, first-order expressions for Δ^{σ*}_{2,αβ} and Δ^{σ*}_{3,1,α} can be found in the corresponding hydrodynamic equations: to first order, Δ^{σ*}_{2,αβ} reduces to the NS viscous stress with dynamic viscosity coefficient μ^σ, and Δ^{σ*}_{3,1,α} to the Fourier heat flux −κ^σ ∂_α T with heat conductivity κ^σ. Please refer to Appendix B for the derivations. These two expressions are valuable for grasping the concept of a TNE quantity. However, it is important to note that they represent only the first order of Δ^{σ*}_{2,αβ} and Δ^{σ*}_{3,1,α}. The TNE quantities defined by DBM constitute a mesoscopic description method originating from non-equilibrium statistical physics and hold significant physical interpretations within the phase space.
In addition, other coarse-grained TNE quantities can be defined. For example, the total TNE strengths describing the extent to which each fluid unit deviates from equilibrium are obtained as |Δ^{σ*}_2|, |Δ^{σ*}_{3,1}|, |Δ^{σ*}_3|, and |Δ^{σ*}_{4,2}|, where the operator "| |" indicates summing over all the components. Furthermore, by summing their non-dimensional values over the whole fluid field, the global TNE strengths of the fluid system can be obtained, e.g., d^{σ*}_{3,1} = Σ_{ix,iy} |Δ^{σ*}_{3,1}|/T³ and d^{σ*}_{4,2} = Σ_{ix,iy} |Δ^{σ*}_{4,2}|/T⁴, where "ix" and "iy" represent the positions of the fluid units. In addition, by summing the four non-dimensional TNE quantities, another global TNE strength quantity containing more condensed information can also be defined, i.e., d^{σ*}. For reading convenience, Table I summarizes the definitions, elements, and physical meanings of the corresponding TNE quantities. It should be noted that these TNE quantities describe the TNE features/behaviors from their own perspectives; the fluid system requires multi-perspective research, and the results from multiple perspectives together constitute a relatively comprehensive description of the system. Therefore, in this paper, a non-equilibrium degree/strength/intensity vector D, with its elements composed of several TNE quantities, is introduced to provide a multi-perspective non-equilibrium description of the fluid system. In summary, in this subsection the DBM provides the most relevant TNE effects, which are not convenient to obtain from the NS model.
A. Selecting the discrete scheme
To obtain the values of f_i^{σ,eq} and f_i^{σ,ES}, a specific DVM must be selected, i.e., the matrix C in Eqs. (4) and (5). The selection of the DVM must obey the physical constraints imposed by the reserved kinetic moments, and it should also take numerical stability, computational efficiency, etc., into account. Once a specific DVM is determined, the values of f_i^{σ,eq} can be obtained from f^{σ,eq} = C^{-1} · f̂^{σ,eq}, where C^{-1} is the inverse matrix of C. By calculating the values of Δ^{σ*}_{2,αβ} and substituting them into f̂^{σ,ES}, we get f_i^{σ,ES}, i.e., f^{σ,ES} = C^{-1} · f̂^{σ,ES}. It should be noted that solving the inverse matrix is one common way to obtain f^{σ,eq}, but it is neither the standard nor the best method. In this paper, the number of discrete velocities is chosen equal to the number of kinetic moments, i.e., N_i = N_moment = 13, and the D2V13 model is used; its sketch can be seen in Fig. 3. The specific discrete velocities of D2V13 are parameterized by the adjustable parameters c_1, c_2, and c_3 of the DVM. It is then very convenient to obtain C^{-1} using mathematical software such as Mathematica.
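Echoing the remark that explicitly inverting C is not the best route, the sketch below instead solves the linear system for the discrete equilibrium; the square 13 x 13 case (N_i = N_moment) and the names are assumptions of this illustration:

```python
import numpy as np

def discrete_equilibrium(C, f_hat_eq):
    """Solve C @ f_eq = f_hat_eq for the discrete equilibrium distribution.

    C:        (N_moment, N_i) moment matrix of the DVM, square here (13 x 13)
    f_hat_eq: (N_moment,) required kinetic moments of f_eq
    """
    if np.linalg.cond(C) > 1e12:
        raise ValueError("DVM moment matrix ill-conditioned; adjust c1, c2, c3")
    # np.linalg.solve is numerically preferable to forming C^{-1} explicitly
    return np.linalg.solve(C, f_hat_eq)
```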
The standard Lattice Boltzmann Method (LBM) inherits a physical image of "propagation + collision" realized by "virtual particles". [72,73] In LBM, the directions of the discrete velocities represent the motion directions of these virtual particles, and this concise image is beneficial for the computational efficiency of LBM. In DBM, however, there is no "propagation + collision" image: although DBM retains the use of discrete velocities, their directions do not indicate the motion directions of particles; the function of the DVM in DBM is to preserve the values of the reserved kinetic moments. The specific discrete formats for the spatial derivative, the time integration, and the discrete velocities should be selected reasonably according to the specific situation. The specific discrete format presented in this paper is only one set of choices, based on a series of attempts to meet the current research needs, and is by no means a standard or optimal template. In this paper, the first-order forward Euler finite difference scheme and the second-order non-oscillatory, non-free-parameter dissipative (NND) scheme are used to solve the two partial derivatives in Eq. (6), respectively.
B. Configuration and initial conditions
Figure 4 illustrates the configuration of the interaction between a planar shock wave and a cylindrical bubble. The flow field is rectangular with a dimensionless size of L_x × L_y = 0.24 × 0.12, where the left side corresponds to the high-pressure region and the right area is the low-pressure region. It is divided into an N_x × N_y = 800 × 400 grid; the grid numbers used in the manuscript have passed a grid-independence test. After the initial moment, a planar shock wave with a strength of Ma = 1.23 propagates downstream and impacts the high-density bubble. The initial macroscopic conditions of the flow field are as follows: (ρ, T, u_x, u_y)_bubble = (5.0168, 1.0, 0.0, 0.0), (ρ, T, u_x, u_y)_1 = (1.29201, 1.303295, 0.393144, 0.0), (ρ, T, u_x, u_y)_0 = (1.0, 1.0, 0.0, 0.0), where the subscript "0" ("1") represents the low-pressure (high-pressure) region. Other parameters used for the simulation are: c_1 = 0.
C. Morphological features
Shown in Fig. 5 are the density contours (left column) and schlieren images (right column) at five different moments, with Pr = 1.0. The SBI can be divided primarily into two stages, i.e., the shock compression stage (t < 0.05) and the post-shock stage (t > 0.05). At t = 0, the incident shock impacts the bubble interface, generating a downstream-traveling transmitted shock (TS) and an upward-moving reflected shock (RS) due to the refraction of the shock wave. The shock compression stage is characterized by the misalignment between the density gradient and the pressure gradient, leading to the vorticity deposition effect, which is the intrinsic mechanism behind the formation of two pairs of vortex rings. Afterward, as the TSs move downstream, they converge near the downstream pole, causing shock focusing and the generation of the jet structure. Subsequently, due to the deposited vorticity, a pair of counter-rotating vortices is produced. Figure 6 plots the density contours at three moments for cases with five different Pr numbers (corresponding to different viscosities). It can be observed that, until t = 0.05, there are almost no differences among cases with different viscosities because the viscous effects are relatively weak compared to the shock compression effects. However, during the post-shock stage, the viscous effects gradually become apparent. A discernible difference (marked by the red circles in the figures) around the undeveloped vortex pair can be observed at t = 0.15: in the case with higher viscosity, the undeveloped vortex pair appears smaller. As these vortices continue to evolve, the influence of viscosity on their shape becomes increasingly pronounced.
To further investigate the viscous effects on the macroscopic fluid fields, we analyze the density and temperature fields by averaging ρ(ix, iy) and T(ix, iy) along the y axis (i.e., the average density ρ_s = Σ_iy ρ(ix, iy)/N_y and average temperature T_s = Σ_iy T(ix, iy)/N_y) and obtain their corresponding gradients (i.e., ∂_x ρ_s and ∂_x T_s). As shown in Figs. 7(a) and (b), profiles of ∂_x ρ_s and ∂_x T_s at two different moments are plotted, respectively. At t = 0.02, there are no obvious differences between cases with different viscosities for either the density or the temperature gradient; however, some subtle but discernible differences can be observed in the corresponding sub-figures, with the higher-viscosity case exhibiting larger peaks. At t = 0.4, the differences between cases with different viscosities become more visible. It can also be seen that the viscosity increases the amplitudes of both the density and temperature gradients. This is because a higher-viscosity fluid is more effective at absorbing energy from the shock wave. To investigate the viscous effects on the velocity field, Fig. 7(c) shows the evolution of the bubble circulation during the SBI process. Γ⁺ = Σ_{ω>0} ω ΔxΔy (Γ⁻ = Σ_{ω<0} ω ΔxΔy) denotes the positive (negative) circulation and Γ = Σ ω ΔxΔy the total circulation, where ω = (∂u_y/∂x − ∂u_x/∂y) e_z is the vorticity. The two sub-figures in Fig. 7(c) are the vorticity contours at t = 0.42; they show subtle but discernible differences in vorticity between the Pr = 1.0 and Pr = 10.0 cases. The circulation can be used to describe the strength of the vorticity and the velocity shear effect. During the shock compression stage, the circulations increase rapidly due to the vorticity deposition effect, and viscosity contributes little to them. However, as the shock wave sweeps through the bubble and the vortex pair continues to develop, the influence of viscosity becomes apparent: the viscosity reduces the circulation values, indicating that it inhibits the shear motion of the bubble.
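A sketch of the vorticity and circulation diagnostics on a uniform grid (taking array axis 0 as x is an assumption of this illustration):

```python
import numpy as np

def circulations(ux, uy, dx, dy):
    """Vorticity w = d(uy)/dx - d(ux)/dy and circulations Gamma = sum(w) dx dy."""
    duy_dx = np.gradient(uy, dx, axis=0)   # axis 0 is taken as x
    dux_dy = np.gradient(ux, dy, axis=1)   # axis 1 is taken as y
    w = duy_dx - dux_dy
    cell = dx * dy
    gamma_pos = w[w > 0].sum() * cell      # positive circulation
    gamma_neg = w[w < 0].sum() * cell      # negative circulation
    return gamma_pos, gamma_neg, w.sum() * cell
```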
The utilization of Minkowski measures is an effective method to extract information from a complex physical field. [35,40,66,74] It provides a complete description of a Turing pattern. In a D-dimensional space, a set of convex sets satisfying motion invariance and additivity can be fully described by D + 1 Minkowski measures. In the two-dimensional case, the three Minkowski measures are the proportion A of the high-Θ region (i.e., A = A_h/A_total, where A_h is the area of the high-Θ region and A_total is the area of the fluid field), the boundary length L between the high- and low-Θ regions, and the Euler characteristic χ, where Θ can refer to density, temperature, velocity, or pressure.
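A sketch of the first two Minkowski measures for a thresholded 2-D field; the edge-counting estimate of L and the omission of the Euler characteristic are simplifications of this illustration:

```python
import numpy as np

def minkowski_A_L(field, threshold, dx, dy):
    """Area fraction A of the high-field region and boundary length L,
    estimated by counting interfaces between unlike neighboring cells."""
    high = field > threshold
    A = high.mean()                                   # A_h / A_total
    edges_x = np.count_nonzero(high[1:, :] != high[:-1, :])
    edges_y = np.count_nonzero(high[:, 1:] != high[:, :-1])
    L = edges_x * dy + edges_y * dx                   # interface segment lengths
    return A, L
```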
In the following analysis, we focus on density Turing patterns (i.e., Θ is density). Fig. 8 investigates the viscous effects on the proportion A and boundary length L of density Turing patterns, where "th" indicates the "threshold" value. The selection of the threshold value depends on the specific physical effects to be considered. For example, when ρ_th > ρ_0 (where ρ_0 = 5.0168 is the initial density of the bubble), the proportion A and boundary length L reflect the ability of viscosity to achieve a high-density state due to the shock compression. When ρ_th < ρ_0, the viscous effects on bubble deformation and diffusion, which result in a lower density, are also taken into account. In Fig. 8(a), for cases ρ_th = 0.01 and ρ_th = 0.5, the proportion A decreases rapidly because the bubble is compressed by the incident shock wave. After the shock wave sweeps through the bubble, the proportion A gradually increases due to the bubble deformation and gas diffusion effects. During the earlier stage, the viscosity has little impact on the proportion A because the shock compression dominates. In the later stage, the viscosity slightly reduces the values of the proportion A due to its adverse effect on deformation. When ρ_th = 5.5, the values of the proportion A increase from zero due to the shock compression, and then decrease slowly because of the deformation and diffusion. When further increasing ρ_th, the high-density region appears around t = 0.05 and almost disappears at other times. In contrast to cases with ρ_th < ρ_0, in cases where ρ_th > ρ_0 the viscosity slightly increases the proportion A. Overall, the viscosity reduces the lower-ρ_th region but increases the higher-ρ_th region. The reason is that a fluid with a higher viscosity can absorb more energy from the shock wave, making it easier to increase the density during the compression process. Fig. 8(b) shows the evolutions of the boundary length L. Similar to the proportion A, the viscous effects are more pronounced in the later stage. In addition, the viscosity reduces the boundary length L for cases with ρ_th = 0.01 and ρ_th = 0.5, but increases it for cases with ρ_th = 5.5 and ρ_th = 7.5.
D. TNE features
Understanding the spatial distribution and temporal evolution of TNE quantities is crucial for further investigating the kinetic behaviors during the SBI process. In the following, we introduce a non-equilibrium strength vector D, whose elements are composed of the TNE strength quantities defined above, to analyze the TNE strength of the fluid system from multiple perspectives. To provide an intuitive representation of the TNE quantities, Fig. 9 shows the contours of the total TNE strengths at three different moments. In this figure, the odd and even rows represent the quantities of components A and B, and the first and last two rows correspond to the total TNE strengths |Δ^{σ*}_2| and |Δ^{σ*}_{3,1}|, respectively. It can be observed that the values of the total TNE strengths are greater than zero in regions where the gradients of macroscopic quantities are pronounced, indicating significant deviations from the thermodynamic equilibrium state. The larger the value of a TNE quantity, the higher the degree of deviation from the thermodynamic equilibrium state.
To qualitatively investigate the viscous effects on the spatial distribution of TNE quantities, two average TNE strength quantities are analyzed. Shown in Figs. 10(a) and (b) are the average TNE strengths from the perspective of Δ^{σ*}_2; at the early moment t = 0.02, the gradients of the macroscopic quantities of component A are weak. As time goes on, the viscous effects on the macroscopic quantity gradients gradually become more apparent. Figures 10(c) and (d) show the average TNE strengths from the perspective of Δ^{σ*}_{3,1,α}. Compared to the strength of Δ^{σ*}_{3,1,x}, the strength of Δ^{σ*}_{3,1,y} exhibits a more significant amplitude, indicating that the average heat flux in the y direction is stronger than that in the x direction. The profiles of Δ^{σ*}_{3,1,x} are symmetric about the line y = 0.6 and the profiles of Δ^{σ*}_{3,1,y} are symmetric about the origin point. The Pr number has limited effects on the strengths of Δ^{σ*}_{3,1,x} and Δ^{σ*}_{3,1,y}, since the viscosity does not directly change their transport coefficients (as shown in Eq. (16)).
There are two points to note: (i) the above descriptions are based on results at a typical moment, t = 0.02; descriptions at other moments are equally important for a comprehensive understanding of the kinetic behaviors during the SBI process. (ii) Higher-order TNE quantities (Δ^{σ*}_{3,αβγ} and Δ^{σ*}_{4,2,αβ}) contain more condensed information; they are also closely related to the gradients of macroscopic quantities.
To qualitatively investigate the viscous effects on the TNE strength of the fluid system, Fig. 11(a) plots the evolutions of the global TNE strength d^{σ*}. It can be seen that the peaks (valleys) of the d^{A*} and d^{B*} profiles are closely related to the location of the shock wave; here t_1 indicates the moment when the incident shock has just passed the bubble, and t_2 the moment when the incident shock exits the fluid field. It is also shown that the viscosity increases the global TNE strengths of both components. Further, Fig. 11(b) shows the evolutions of the global TNE strengths d^{σ*}_2 and d^{σ*}_{3,1}; lines with different colors indicate different TNE components, and different symbols represent different viscosities. For component B (the bubble), the viscosity significantly enhances the strength of d^{B*}_2 but does not change the strength of d^{B*}_{3,1}; the reason is that adjusting the Pr number does not directly modify the transport coefficient of d^{B*}_{3,1}. For component A (the ambient gas), the viscosity significantly increases the strength of d^{A*}_{3,1} but does not affect the strength of d^{A*}_2. The reason is attributed to the fact that a fluid with higher viscosity exhibits an enhanced capacity for absorbing energy from shock waves, resulting in an elevated heat flux.
The entropy production rate and the entropy production, which are significant in the compression science field, are also analyzed. There are two kinds of entropy production rates: [58] Ṡ_NOEF, caused by the NOEF and the temperature gradient, and Ṡ_NOMF, contributed by the NOMF and the velocity gradient. Fig. 12(a) plots the evolutions of the entropy production rates Ṡ_NOEF and Ṡ_NOMF. It can be seen that the changes in the entropy production rate curves are closely related to the position of the shock wave. For Ṡ_NOMF, its value increases continuously while the incident shock wave acts on the bubble (t < t_1) and decreases afterwards; when the incident shock wave exits the fluid field (t > t_2), Ṡ_NOMF drops rapidly and subsequently maintains its value for a longer time. For Ṡ_NOEF, before t_2 its value decreases and then increases; when t > t_2, Ṡ_NOEF shows an upward trend. The viscosity significantly raises Ṡ_NOMF by increasing the transport coefficient, and amplifies Ṡ_NOEF by increasing the temperature gradient.
Summing the entropy production rates over this period gives the entropy productions S_NOEF and S_NOMF. Fig. 12(b) shows the profiles of the entropy productions versus viscosity (Pr number). Both types of entropy production increase as the viscosity increases. When Pr < Pr_c, S_NOMF is larger than S_NOEF; when Pr > Pr_c, the situation is reversed.
IV. Conclusions
A two-fluid DBM with a flexible Pr number is designed to investigate the influence of viscosity on the morphological and TNE characterizations during the SBI process. Different from most previous numerical research relying on traditional macroscopic models/methods, this paper studies the dynamic and kinetic processes of SBI from a mesoscopic view. Overall, for the rapid SBI process, the viscosity contributes subtle but discernible influences on macroscopic quantities while strongly affecting some TNE features of the fluid system. Morphologically, (i) the viscosity affects the shape of the vortex pair, increases the amplitudes of the gradients of both density and temperature of the fluid field, and reduces the values of the circulation of the bubble; (ii) the viscosity increases both the proportion of the area occupied by the high-density region and the boundary length separating the high- and low-density regions. The underlying reason is that a fluid with higher viscosity possesses an enhanced ability to absorb energy from shock waves, thus facilitating an increase in density during the compression process. Introducing a non-equilibrium strength vector D, whose elements are composed of several TNE quantities, offers diverse non-equilibrium descriptions of the fluid system from multiple perspectives. The spatial distributions of TNE quantities demonstrate that the contact interfaces with macroscopic quantity gradients deviate significantly from thermodynamic equilibrium, while the regions far from the interface remain close to the equilibrium state. The spatial configurations of the average TNE strengths show interesting symmetry. Different TNE quantities describe the TNE strength from their corresponding perspectives, and the results from various perspectives together constitute a relatively comprehensive description of the system. Theoretically, the viscosity influences these TNE quantities by affecting the transport coefficients and the gradients of macroscopic quantities. Through raising the transport coefficient and the temperature gradient, respectively, the viscosity increases the two types of entropy production rates and their corresponding entropy productions. The fundamental research presented in this paper enhances our understanding of the SBI mechanism in various applications, such as inertial confinement fusion, supersonic combustors, and underwater explosions. The dynamic viscosity coefficient is μ^σ = Pr^σ τ^σ p^σ = τ^σ p^σ/(1 − b^σ), with p^σ = ρ^σ R^σ T^σ, and the heat conductivity is κ^σ = C^σ_p τ^σ p^σ, with C^σ_p = (D + 2)R^σ/2. It should be pointed out again that recovering the hydrodynamic equations is only one part of the physical function of DBM; the physical function of DBM corresponds to the EHEs, and the hydrodynamic equations do not need to be solved in a DBM simulation.
Figure 2(b) is the flowchart of DBM simulation. It shows the two steps (the red boxes in the figure) that are the focus of DBM numerical simulation: (i) for a specific physical problem, determining the control equations and physical constraints, and (ii) computing statistics of and analyzing the needed physical quantities. These two steps correspond to physical modeling and complex physical field analysis in numerical experimental research, respectively.
FIG. 2. (a) The three steps of a numerical study; the DBM works for parts (i) and (iii). (b) Flowchart of DBM simulation.
TABLE I. The definitions, elements, and physical meanings of the corresponding TNE quantities in DBM, where the operator Σ_ix represents summing the TNE quantities over each row of the fluid field and the operator Σ_{ix,iy} indicates summing the TNE quantities over all fluid units.
FIG. 4. The computational configuration of the SBI.
FIG. 5. Density contours (left column) and schlieren images (right column) at five different moments.
FIG. 7. (a) Profiles of ∂_x ρ_s at t = 0.02 and t = 0.4, respectively. (b) Profiles of ∂_x T_s at t = 0.02 and t = 0.4, respectively. (c) Evolution of the bubble's circulation during the SBI process. The two sub-figures are the vorticity contours at t = 0.42; the upper sub-figure is the case with Pr = 1.0 and the bottom one with Pr = 10.0.
FIG. 8. (a) Evolutions of the proportion A. (b) Evolutions of the boundary length L. Four cases with different density threshold values are plotted. | 2023-03-30T01:16:19.634Z | 2023-03-29T00:00:00.000 | {
"year": 2023,
"sha1": "82a9b1e76edc8031f3118ab8f0c795942670968f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "82a9b1e76edc8031f3118ab8f0c795942670968f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
23665343 | pes2o/s2orc | v3-fos-license | Bose-Fermi Mixtures in a Three-dimensional Optical Lattice
We have studied mixtures of fermionic $^{40}$K and bosonic $^{87}$Rb quantum gases in a three-dimensional optical lattice. We observe that an increasing admixture of the fermionic species diminishes the phase coherence of the bosonic atoms as measured by studying both the visibility of the matter wave interference pattern and the coherence length of the bosons. Moreover, we find that the attractive interactions between bosons and fermions lead to an increase of the boson density in the lattice which we measure by studying three-body recombination in the lattice. In our data we do not observe three-body loss of the fermionic atoms. An analysis of the thermodynamics of a noninteracting Bose-Fermi mixture in the lattice suggests a mechanism for sympathetic cooling of the fermions in the lattice.
Quantum liquids and quantum gases are remarkable objects which reveal macroscopic quantum phenomena, such as superfluidity and Bose-Einstein condensation. These fundamental concepts have profoundly influenced our understanding of quantum many-body physics. The distinct behaviour observed for purely bosonic or purely fermionic systems sheds light on the role played by quantum statistics. New insights can be attained by mixing bosonic and fermionic species. One of the most prominent examples is a mixture of bosonic 4 He and fermionic 3 He. There it has been observed that with increasing admixture of 3 He the critical temperature of the transition between the superfluid and the normal fluid phase is lowered and below the tricritical point phase separation is encountered [1].
In trapped atomic gases mixing of bosonic and fermionic species has led to the observation of interaction induced losses or collapse phenomena [2,3] and collisionally induced transport in one-dimensional lattices [4]. In this work we report on the creation of a novel quantum system consisting of a mixture of bosonic and fermionic quantum gases trapped in the periodic potential of a three-dimensional optical lattice. The optical lattice allows us to change the character of the system by tuning the depth of the periodic potential. This leads to a change of the effective mass and varies the role played by atom-atom interactions. The interaction between bosonic and fermionic atoms interconnects two systems of fundamentally different quantum statistics and a wealth of physics becomes accessible which is beyond that of the purely bosonic [5,6] or purely fermionic [7,8] case. A variety of theoretical work has been devoted to Bose-Fermi mixtures in optical lattices and new quantum phases have been predicted at zero temperature [9,10,11,12]. Moreover, the coupling between a fermion and a phonon excitation in the Bose condensate mimics the physics of polarons [13]. At finite temperature phase transitions to a supersolid state and phase separation are expected [14].
In our experiment, we prepare fermionic 40 K atoms together with a cloud of Bose-Einstein condensed 87 Rb atoms. The qualitative behaviour when changing the mixing ratio between bosons and fermions is depicted in figure 1. The momentum distribution of the pure bosonic sample shows a high contrast interference pattern reflecting the long-range phase coherence of the system. Adding fermionic particles results in the loss of phase coherence of the Bose gas, i.e. a diminishing visibility of the interference pattern and a reduction of the coherence length.
FIG. 1: Interference pattern of bosonic atoms released from a three-dimensional optical lattice for varying admixture of NF fermionic atoms at a value UBB /zJB = 5. The bosonic atom numbers are NB = 1.2 × 10 5 (a and b) and NB = 8 × 10 4 (c) and the image size is 660 µm×660 µm. The coordination number of our lattice with simple cubic geometry is z = 6.
Our experimental setup used to produce a degenerate mixture of a Bose and a Fermi gas has been described in detail in previous work [8]. In brief, fermionic 40 K atoms are sympathetically cooled by thermal contact with bosonic 87 Rb atoms, the latter being subjected to forced microwave evaporation. The potassium atoms are in the hyperfine ground state |F = 9/2, m_F = 9/2⟩ and the rubidium atoms in the hyperfine ground state |F = 2, m_F = 2⟩. After reaching quantum degeneracy for both species we transfer both clouds into a crossed-beam optical dipole trap operating at a wavelength of 826 nm. The laser beams for the optical dipole trap are aligned in the horizontal plane and their elliptical waists have 1/e^2 radii of approximately 50 µm and 150 µm in the vertical (z) and horizontal (x,y) directions, respectively. In the optical trap we perform evaporative cooling by lowering the power in each of the laser beams to ≃ 35 mW. After recompression, the optical dipole trap has the final trapping frequencies (ω_x, ω_y, ω_z) = 2π × (30, 35, 118) Hz for the rubidium atoms. We estimate the condensate fraction to be 90% and use this value to obtain the temperature of both clouds. The Fermi temperature T_F in the optical dipole trap is set by the number of potassium atoms and the trapping frequencies, and we obtain T/T_F ≃ 0.3, which is in agreement with a direct temperature measurement of the fermionic cloud.
The three-dimensional optical lattice is generated by three mutually orthogonal laser standing waves at a wavelength of λ = 1064 nm with mutual frequency differences of several tens of MHz. Each of the standing wave fields is focused onto the position of the quantum degenerate gases, and the 1/e^2 radii of the circular beams along the (x,y,z) directions are (160, 180, 160) µm. To load the atoms into the optical lattice we increase the intensity of the lattice laser beams using a smooth spline ramp with a duration of 100 ms. This ensures adiabatic loading of the optical lattice with populations of bosons and fermions in the lowest Bloch band only. We have checked the reversibility of the loading process into the optical lattice by reversing the loading ramp and subsequently letting the particles equilibrate during 100 ms in the optical dipole trap without evaporation. We measure that for both the pure Bose gas and the Bose-Fermi mixture the condensate fraction decreases by ≃ 1.4% per E_R of lattice depth, where E_R = h^2/(2 m_Rb λ^2) denotes the recoil energy and m_Rb the mass of the rubidium atoms.
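For orientation, the recoil energy quoted here can be evaluated directly; a sketch using CODATA constants (about 2.0 kHz × h for 87 Rb at 1064 nm):

```python
import scipy.constants as const

m_rb = 86.909 * const.atomic_mass      # 87Rb mass in kg
lam = 1064e-9                          # lattice wavelength in m
E_R = const.h ** 2 / (2 * m_rb * lam ** 2)
print(E_R / const.h)                   # recoil energy in Hz, about 2.0e3
```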
The physics of the Bose-Fermi mixture in an optical lattice can be described by the Bose-Fermi Hubbard model (e.g. [10]). The parameters of the model are the tunnelling matrix elements J B,F for bosons and fermions, respectively, and the on-site interaction strength U BB between two bosons and U BF between bosons and fermions. Using the most recent experimental value of the K-Rb swave scattering length [15] we obtain U BF /U BB ≈ −2.
We have studied the phase coherence of the bosonic atoms in the optical lattice for various admixtures of fermionic particles. We switch off the optical lattice quickly and allow for 25 ms of ballistic expansion before taking an absorption image of the atomic cloud. From the absorption image we measure the visibility of the interference pattern. We determine the maximum n_max and the minimum n_min of the density of the atoms at momentum |q| = 2ħk with k = 2π/λ (see inset in Figure 2a) [17]. From this we calculate the visibility V = (n_max − n_min)/(n_max + n_min).
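A sketch of this visibility measurement from an absorption image, summing small boxes at the first-order interference peaks and at reference spots of equal distance from the center (the coordinate conventions and names are assumptions of this illustration):

```python
import numpy as np

def visibility(image, peak_coords, ref_coords, r):
    """V = (n_max - n_min)/(n_max + n_min) from boxes of half-width r pixels,
    centered at the |q| = 2*hbar*k peaks (n_max) and at reference spots of
    equal distance from the image center (n_min)."""
    def box_sum(cy, cx):
        return image[cy - r:cy + r + 1, cx - r:cx + r + 1].sum()
    n_max = np.mean([box_sum(y, x) for y, x in peak_coords])
    n_min = np.mean([box_sum(y, x) for y, x in ref_coords])
    return (n_max - n_min) / (n_max + n_min)
```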
For the purely bosonic case we obtain results similar to previous measurements [16,17,18]. In our data (see figure 2a), the visibility V starts to drop off at a characteristic value (U BB /zJ B ) c ≈ 6.5. For larger values of U BB /zJ B the decrease in visibility is approximated by V ∝ (U BB /zJ B ) ν with ν = −1.41 (9), which is consistent with our earlier measurement in a different lattice setup giving ν = −1.36(5) [16] but different from the exponent ν = −0.98(7) obtained in [17]. For a mixture of bosonic and fermionic atoms the results change, see also [19]. Whereas in the superfluid regime for very low values of U BB /zJ B the visibility is similar to the pure bosonic case, the presence of the fermions decreases the characteristic value (U BB /zJ B ) c beyond which the visibility drops off significantly. Nevertheless, the visibility still shows a power-law dependence on U BB /zJ B with an exponent in the range of −1 < ν < −1.5.
To quantify the shift of the visibility data towards smaller values of U BB /zJ B we have fitted the power-law decay for large values of U BB /zJ B and extrapolated the slope to the visibility for the superfluid situation (dashed lines in figure 2a). The intersection defines the characteristic value (U BB /zJ B ) c which depends on the mixing ratio between fermions and bosons N F /N B as shown in figure 3. From this graph it is evident that even very small admixtures of fermionic atoms change the coherence properties of the bosonic cloud significantly.
The phase coherence of the bosons in the lattice is not only described by the visibility of the interference pattern but also by the coherence length of the sample [16,20]. In a superfluid state the coherence length is comparable to the size of the system. The coherence length is related to the inverse of the width of the zero-momentum peak plus a small contribution from the repulsive interaction between the bosonic atoms. For the pure bosonic case we find that this width starts to increase at a value of (U BB /zJ B ) c ≈ 9 (see Fig. 2b). For the case of a Bose-Fermi mixture this value is dramatically altered: for an increasing admixture of fermionic atoms to the bosonic sample, the value decreases and it behaves very similar to the corresponding value (U BB /zJ B ) c for the visibility (see figure 3). Our interpretation of the simultaneous decrease of both the coherence length and the visibility with increasing admixture of fermions is that the system leaves the superfluid phase. While for the pure bosonic case this indicates a Mott insulator transition [16,17], for the mixture the analysis is more delicate due to the different interactions and the different quantum statistics of the two species. The full understanding of the observed effects including strong interactions and finite temperature is challenging. We will consider two limiting situations, namely a strongly interacting Bose-Fermi mixture at T = 0 in which polarons and composite fermions are formed, and a non-interacting mixture at finite temperature. In both explanations we encounter a destruction of the superfluid with increasing fermionic admixture which qualitatively reflects our results.
At zero temperature several quantum phases of the system are predicted [10,11,12,13,14], depending on the sign and the strength of the Bose-Fermi interaction. At low depth of the optical lattice the interaction of the Bose-Einstein condensate with the Fermi gas leads to the depletion of the condensate [21] and to the formation of polarons where a fermion couples to a phonon excitation of the condensate [13]. The coupling strength of the fermions to the phonon modes depends on U BF and the ratio U BB /J B . If the coupling becomes very strong the system is unstable to phase separation (U BF > 0) or to collapse (U BF < 0). In the stable regime, the polarons can form a p-wave superfluid or induce a charge density wave, as has been analyzed in one spatial dimension [13]. The enhanced bosonic density around a fermionic impurity increases the effective mass of the fermion and might enhance the tendency of the bosons to localize. For our parameters, the phonon velocity is comparable to the Fermi velocity, a regime that is usually unaccessible in solids. On the other hand, the interaction of the Bose gas with the second species leads to an effective attractive interaction between the bosons which would favor a Mott insulator transition at a larger depth of the optical lattice [13]. At a larger depth of the optical lattice other effects come also into play. Composite fermions consisting of one fermion and n B bosons form when the binding energy of the composite fermion exceeds the gain in kinetic energy that the particles would encounter by delocalizing. An effective Hamiltonian for these (spinless) composite fermions with renormalized tunneling and nearest neighbor interaction has been derived and their quantum phases have been investigated theoretically [9,10]. In this situation, the Bose-Einstein condensate can be completely depleted by the interactions between bosons and fermions.
For the finite-temperature model of the noninteracting gas we consider the entropy of the cloud of bosons and fermions, which is $S = \alpha N_F (T/T_F) + \beta N_B (T/T_c)^3$. Here the Fermi temperature for N_F fermions in a trap with mean frequency $\bar{\omega}_F$ is given by $k_B T_F = \hbar\bar{\omega}_F (6N_F)^{1/3}$, and the critical temperature for Bose-Einstein condensation by $k_B T_c = \hbar\bar{\omega}_B (N_B/\zeta(3))^{1/3}$, with α and β being numerical constants. When increasing the depth of the optical lattice adiabatically, the temperatures of the two species remain equal to each other due to collisions, while T_c and T_F evolve very differently. This is due to the fact that the tunnelling rates for the fermions are up to an order of magnitude larger than for the bosons for our lattice parameters. Since the effective masses m*_{B,F} ∝ 1/J_{B,F} enter into the degeneracy temperatures, T_c decreases much faster than T_F. At constant entropy, this results in adiabatic heating of the bosonic cloud and a reduction of the condensate fraction. Simultaneously the fermionic cloud is cooled adiabatically, similar to the situation considered without a lattice in [22]. For the noninteracting mixture with our parameters one expects a reduction of T/T_F by a factor of approximately 2 at a lattice depth of 20 E_R.
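A minimal numerical sketch of this constant-entropy argument (our own illustration, not the authors' calculation): using the ideal-gas coefficients α = π² (degenerate harmonically trapped Fermi gas) and β = 4ζ(4)/ζ(3) ≈ 3.6 (trapped Bose gas below T_c), one can solve S = const for the common temperature after T_c and T_F have been rescaled by the lattice-modified tunnelling rates. All numerical values below are illustrative assumptions, not the experimental parameters.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import zeta

ALPHA = np.pi ** 2              # S/kB ~ ALPHA * NF * (T/TF) for T << TF
BETA = 4 * zeta(4) / zeta(3)    # S/kB ~ BETA * NB * (T/Tc)^3 for T < Tc

def entropy(T, NF, NB, TF, Tc):
    return ALPHA * NF * T / TF + BETA * NB * (T / Tc) ** 3

# Illustrative numbers (assumptions):
NF, NB = 3e5, 1e5
TF0, Tc0, T0 = 250e-9, 100e-9, 30e-9        # kelvin, before the lattice ramp
S0 = entropy(T0, NF, NB, TF0, Tc0)

# Degeneracy temperatures scale with the tunnelling (T_{c,F} ~ J ~ 1/m*);
# J_B drops much faster than J_F, so Tc falls much faster than TF.
TF1, Tc1 = 0.8 * TF0, 0.1 * Tc0             # assumed rescaling in the deep lattice
T1 = brentq(lambda T: entropy(T, NF, NB, TF1, Tc1) - S0, 1e-12, 1e-5)

print(f"T/TF: {T0 / TF0:.3f} -> {T1 / TF1:.3f} (fermions cooled)")
print(f"T/Tc: {T0 / Tc0:.3f} -> {T1 / Tc1:.3f} (bosons heated)")
```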
In the experiment we have further studied the occupation of the optical lattice by measuring three-body recombination. Lattice sites with an occupation higher than two atoms are subject to inelastic losses in which a deeply bound molecule is formed and ejected from the lattice together with an energetic atom. Independent of their occupation, all lattice sites are furthermore subject to loss processes such as off-resonant light scattering, background gas collisions, or photo-association due to the trapping laser light. The attractive interaction between the bosons and the fermions changes the occupation of bosons on the sites of the optical lattice. For the given ratio of the on-site interaction strengths of U_BF/U_BB ≃ −2, it is energetically favorable to have up to five bosons per site if a fermion is present.
The experimental sequence to study the three-body decay starts from an initially superfluid Bose gas at a potential depth of 10 E_R. We use a ramp time of 30 ms to increase the potential depth of the lattice from zero to 10 E_R, during which we do not observe a loss of atoms. Subsequently, we freeze the atom number distribution by quickly changing the lattice depth to a large value of 18 E_R, where the tunnelling time of the bosons is τ_B = 1/zJ_B = 23 ms. We monitor the total atom number as a function of the hold time in the deep optical lattice (see Fig. 4) and observe two distinct time scales in the decay of the atoms. The fast initial time scale is due to three-body losses from multiply occupied lattice sites. The slower decay is due to single-particle loss processes.
To extract quantitative information from the loss curves we fit the data with the following model. We assume that any singly or doubly occupied site decays with a single-particle loss rate Γ_1. Multiply occupied sites decay with a rate determined by the three-body loss constant K_3^B = 1.8 × 10⁻²⁹ cm⁶/s [23] and the three-body density [n(r)]³ at the lattice site. Since we start from a superfluid, the number distribution at the lattice sites can be approximated by a coherent state with a third-order correlation function equal to unity [24]. Assuming Gaussian ground-state wave functions at each lattice site, we calculate the three-body loss rate to be Γ_3 = 0.24 × n_B³ s⁻¹, where n_B is the number of bosons on the site. By fitting the data with this model we extract the occupation of the lattice. We obtain n_{1,2} = 67(3)% of the sites with single or double occupation, n_3 = 23(9)% of sites with triple occupation, and n_4 = 10(8)% of lattice sites with occupation four. A mean-field calculation neglecting tunnelling yields the theoretical values n̄_{1,2} = 58%, n̄_3 = 33%, and n̄_4 = 17%, which gives reasonable agreement given the simplicity of the model. The slow decay rate is determined to be 0.35(7) s⁻¹.
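A schematic version of this fitting model (our reconstruction under stated assumptions, not the authors' analysis code): atoms on sites with n ≤ 2 decay at the single-particle rate Γ₁; a triply occupied site loses all three atoms at Γ₃(3) = 0.24·3³ s⁻¹; a quadruply occupied site loses three atoms at Γ₃(4) = 0.24·4³ s⁻¹, leaving one slowly decaying atom behind. Treating the losses as this simple cascade is our simplification.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

G3 = lambda n: 0.24 * n ** 3          # three-body loss rate of an n-occupied site (1/s)

def total_atoms(t, f12, f3, f4, gamma1):
    """Normalized atom number vs hold time; f12, f3, f4 are the initial
    fractions of atoms sitting on sites with n<=2, n=3 and n=4."""
    def rhs(y, _):
        n12, s3, s4 = y               # atoms on n<=2 sites; numbers of 3- and 4-sites
        return [-gamma1 * n12 + 1.0 * G3(4) * s4,   # a decayed 4-site leaves 1 atom
                -G3(3) * s3,
                -G3(4) * s4]
    y0 = [f12, f3 / 3.0, f4 / 4.0]    # convert atom fractions to site numbers
    sol = odeint(rhs, y0, t)
    return sol[:, 0] + 3 * sol[:, 1] + 4 * sol[:, 2]

# Toy data standing in for the measured decay curve:
t = np.linspace(0.0, 10.0, 30)
rng = np.random.default_rng(1)
n_meas = total_atoms(t, 0.60, 0.25, 0.15, 0.35) + 0.01 * rng.standard_normal(t.size)

popt, _ = curve_fit(total_atoms, t, n_meas, p0=[0.7, 0.2, 0.1, 0.3],
                    bounds=(0.0, [1.0, 1.0, 1.0, 5.0]))
print("f12, f3, f4, Gamma1 =", np.round(popt, 3))
```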
FIG. 4: Decay of a pure bosonic gas (squares) and a Bose-Fermi mixture (circles) in the optical lattice. The fast initial decay of the bosons is much more pronounced in the mixture, reflecting the higher density due to Bose-Fermi attraction. For the fermions hardly any loss is observed. The error bars indicate statistical errors from three repetitive measurements.
Upon adding fermions to the system we find a much faster initial decay due to three-body loss for the rubidium atoms. The single-particle loss constant is, however, the same. In contrast, for the fermionic atoms we do not observe particle loss of a comparable order of magnitude. This suggests that the observed loss is only due to three-body recombination between three rubidium atoms. Recent results have suggested that the three-body loss constant K_BF | 2018-04-03T03:00:12.632Z | 2006-04-05T00:00:00.000 | {
"year": 2006,
"sha1": "08a96219754a72116ed973d8478bfe5ff0232680",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0604139",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "08a96219754a72116ed973d8478bfe5ff0232680",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
133991752 | pes2o/s2orc | v3-fos-license | Efficacy of using Radar Induced Factors in Landslide Susceptibility Analysis: case study of Koslanda, Sri Lanka
Through recent technological developments of radar and optical remote sensing in the areas of temporal, spectral, spatial, and global coverage, the availability of such images either at a low cost or free of charge, and the advancement of tools developed in image analysis techniques and GIS for spatial data analysis, a large variety of applications using remote sensing and GIS as tools are possible. Hence, this study aims to assess the efficacy of using Radar Induced Factors (RIF) in identifying landslide susceptibility using the bivariate Information Value method (InfoVal method) and multivariate Multi-Criteria Decision Analysis based on the Analytic Hierarchy Process. Using identified landslide causative factors, four landslide prediction models are generated: bivariate without and with RIF, and multivariate without and with RIF. Twelve factors (topographical, hydrological, geological, land cover and soil) plus three RIF are considered. The predicted susceptibility regions are distinguished and categorized into four classes: very low, low, moderate, and high susceptibility to landslides. With the integration of RIF, boundary detection between high and very low areas increased by 7% and 4%, respectively, and the bivariate analysis shows improvements of 2.45% in prediction and 1.12% in validation performance over the multivariate analysis.
Introduction
Landslides are one of the major types of geo-hazards in the world, as approximately 9% of global natural disasters are recorded as landslides (Chalkias et al. 2014). The recent statistics on landslide disasters per continent from the year 2000 to 2017, summarized in the Emergency Disaster Database (EM-DAT), indicate that landslides cause around 16,500 deaths and affect 4.5 million people worldwide, with property damages of about US $3.5 million (OFDA/CRED 2016). The spatial prediction of landslide disasters, incorporating statistical analysis to identify areas that are susceptible to future landsliding based on the knowledge of past landslide events, topographical parameters, geological attributes, and other possible environmental factors, is one of the important areas of geo-scientific research (Park et al. 2013).
Presently, remote sensing technology has been used extensively to provide landslide-specific information for emergency managers and policy makers in terms of disaster management activities in the world (Baroň et al. 2014, Martha 2011). In recent years, there has been an increasing demand for high resolution satellite data to be used for extracting geometric object information and mapping. The spatial resolution of space-borne optical data is now less than 1 m in panchromatic images, and at the same time, the interest in Synthetic Aperture Radar (SAR) sensors and related processing techniques has also increased. Radar is considered to be unique among remote sensing systems, as it is all-weather, independent of the time of day, and able to penetrate into objects. Additionally, radar images have been shown to depend on several natural surface parameters such as the dielectric constant and surface roughness. The dielectric constant is highly dependent on soil moisture, due to the large difference in dielectric constant between dry soil and water (Kseneman et al. 2012). The forest and vegetation cover of the earth's surface is well sensed by remote sensing techniques, where the shorter wavelength regions, such as the X and C radar bands, identify the forest canopy clearly in radar remote sensing.
It is accepted in the scientific community that remote sensing techniques offer an additional tool for extracting information on the causes of landslides and their occurrences. Especially for deriving various parameters related to landslide predisposing and triggering factors at global and regional scales, remote sensing plays a vital role (Corominas et al. 2014, Muthu et al. 2008). Most importantly, landslide susceptibility analysis has greatly aided the prediction of future landslide occurrences, which is important for humans who reside in areas surrounded by unstable slopes. It is therefore identified that remote sensing techniques are significant for extracting landslide susceptibility regions by providing the most suitable landslide predisposing factors at smaller scales.
It can be observed that there is massive potential for applied research in the area of disaster management if conventional remote sensing data and radar are integrated. This is because each method has its inherent disadvantages and shortcomings, as well as advantages, and integrating the two could potentially complement each other. As such, this study combines the predisposing factors derived from both optical and radar satellite data for landslide susceptibility analysis. Furthermore, significant landslide predisposing factors like the soil moisture content, surface roughness, and forest biomass will be derived from radar images, and the impacts of these factors on landslide susceptibility will be examined. Hence, this study aims to investigate the efficacy of using Radar Induced Factors (RIF) for landslide susceptibility analysis in both bivariate and multivariate settings.
Statistical Methods for Landslide Susceptibility Analysis
There are inherent limitations and uncertainties in landslide susceptibility analysis, and yet several methods have been utilized and successfully applied in the past (Kanungo et al. 2009). These methods have been of both qualitative and quantitative nature. Generally, qualitative methods are based on expert opinions, while quantitative approaches, such as statistical and probabilistic approaches, depend on past landslide experiences. Qualitative methods simply make use of landslide inventories to identify areas with similar geological and geomorphologic properties that show susceptibility to land failures. These methods can be divided into two groups: geomorphologic analysis and map combination. In geomorphologic analysis, the landslide susceptibility is determined directly either in the field or by the interpretation of images (Bui et al. 2011). Map combination is based on combining a number of predisposing factor maps for landslide susceptibility analysis. However, map combination analysis is semi-quantitative in nature, integrating the ranking and weighting of landslide susceptibility (Ayalew et al. 2004, Kavzoglu et al. 2014, Saaty 1980). Analyses based on quantitative approaches depend on numerical data and statistics, expressing the relationship between instability or predisposing factors and landslides (Reis et al. 2012). These methods are categorized into two groups: bivariate and multivariate statistical analysis. Within the context of this work, the popular Information Value method (InfoVal) as a bivariate method and Multi-Criteria Decision Analysis (MCDA) based on the Analytic Hierarchy Process (AHP) as a multivariate method are compared with respect to their performances in landslide susceptibility modelling.
The InfoVal method determines the susceptibility at each point or pixel, jointly considering the weight of influence of all predisposing factors. The weight of influence is based on the landslide inventory map of the particular area. When constructing a probability model for landslide prediction, it is necessary to assume that the landslide occurrence is determined by landslide-related factors, and that future landslides will also occur under the same, or almost similar, conditions as past landslides (Remondo et al. 2013, Saha et al. 2005). Hence, at the beginning of the analysis, the landslide inventory map is divided into two samples, training and validation, enabling the use of these data for landslide susceptibility analysis and validation of results, respectively. The log function is used to control the large variation of weights in the calculations. The larger the weight of influence, the stronger the relationship between landslide occurrence and the given factor's attribute.
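The text does not reproduce the InfoVal weight expression itself; for reference, the standard bivariate information value form (our addition, following the usual formulation of the method, e.g. van Westen 1997) is:

```latex
W_i \;=\; \ln\!\left(\frac{\mathrm{Densclass}_i}{\mathrm{Densmap}}\right)
     \;=\; \ln\!\left(\frac{N_{\mathrm{slide},i}/N_{\mathrm{pix},i}}
                           {\sum_i N_{\mathrm{slide},i}\,\big/\,\sum_i N_{\mathrm{pix},i}}\right)
```

where N_slide,i and N_pix,i are the landslide and total pixel counts in class i of a factor map; the logarithm controls the large variation of the weights, as noted above.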
The MCDA method integrates all the independent predisposing factors with the inclusion of the relative contribution of each factor, putting more emphasis on the predisposing factors that contribute most to landslide occurrence. The same predisposing factors, without or with radar, are used to investigate the landslide susceptibility regions with the AHP technique within the GIS domain.
In AHP, each pair of factors in a particular factor group is examined one at a time in terms of their relative importance. Relative weights for each factor are calculated based on a questionnaire survey of experts in the field. However, expert knowledge can be subjective at times, or may lead to different weights being assigned to each factor when dealing with a large number of causative factors. Hence, in order to assess this inconsistency, the Consistency Ratio (CR) is calculated. For better predictive models, the CR should be less than 0.1; otherwise the pairwise comparisons for the factors have to be regenerated.
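A compact sketch of the CR computation (our illustration of the standard AHP procedure with Saaty's random index, not code from the study; the 3×3 matrix below is a made-up example):

```python
import numpy as np

# Saaty's random consistency index RI, indexed by matrix size n
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_weights_and_cr(A):
    """Principal-eigenvector weights and consistency ratio of a reciprocal
    pairwise comparison matrix A (n >= 3)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    lam_max = eigvals[k].real
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    ci = (lam_max - n) / (n - 1)       # consistency index
    cr = ci / RI[n]                    # consistency ratio (accept if < 0.1)
    return w, cr

# Hypothetical comparison of three factors (e.g. slope, geology, rainfall):
A = [[1,   1/2, 2],
     [2,   1,   3],
     [1/2, 1/3, 1]]
w, cr = ahp_weights_and_cr(A)
print("weights:", np.round(w, 3), " CR:", round(cr, 3))
```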
Landslide predisposing factors
It is understood that landslides may occur as consequences of complex predisposing and triggering factors. Topographical and geological factors, together with local climatic conditions, lead to landslide occurrences. The selection of these factors, and the preparation of corresponding thematic data layers, are vital for models used in landslide susceptibility analysis (Jakob et al. 2006, Lee et al. 2017). There are no universal guidelines regarding the selection of predisposing factors in landslide susceptibility analysis. Some parameters may be important factors for landslide occurrences in a certain area but not in another. Scientists (van Westen 1997, van Westen and Getahun 2003, van Westen et al. 2003) show that every study area has its own particular set of predisposing factors that condition landslides. Determination of appropriate causal factors is a difficult task, and no specific rule exists to define how many factors are sufficient for a specific landslide susceptibility analysis. Hence, the selection of predisposing factors is dependent on the nature of the study area, the opinions of experts, and the availability of data for generating the appropriate spatial and thematic information (Kavzoglu et al. 2015, Shahabi and Hashim 2015).
Study Area
Koslanda in Sri Lanka is located at the geographical coordinates of 06° 44' 00" North and 81° 01' 00" East, at an elevation of around 700–1000 m above Mean Sea Level (MSL). It is a remote, hilly area with harsh weather conditions, where the monthly rainfall ranges from 60 mm to 200 mm and the average temperature is 20 °C. The area has rain for most of the year, with a very short dry period during the months of February to April. The population is around 5000 people, and the study area has an extent of 19 km² within the Koslanda area. Koslanda has been the site of several massive landslides over the years; both the Naketiya landslide in 1997 and the Meeriyabedda landslide in 2014 are very distinct in Fig. 1, and within a span of two years, major landslides occurred three times at the same location (NBRO 2016).
The geomorphology of the area is described as a gently inclined talus slope, with a thick, loosely compacted colluvium deposit at the foot of the near-vertical rocky scarp. Koslanda is situated at the middle part of the slope, with the lower area showing a fairly steep surface as well. The composition of the colluvium deposit in the area includes a randomly arranged mixture of weathered clayey and sandy materials, with the organic matter making the deposit act as a sponge with high water content.
"The study area was an abandoned tea land in which the properly maintained surface drainage system has been neglected" (Somaratne 2016).
Data and methodology
The most important phases in landslide prediction analyses are the collection of data from different sources and the construction of a spatial database for these data on a common platform (Lan et al. 2004). The data utilized for the landslide prediction analysis include topographical, hydrological, geological, soil, and land cover factors. All factors are derived from optical images (Landsat-8, Sentinel-2), radar images (Sentinel-1, TerraSAR-X), a Digital Elevation Model (DEM) derived from aerial triangulation, and other available data sources (geology, rainfall). Stereo aerial photographs from 1993 are used to generate the DEM using aerial triangulation. An inventory map of landslides for the study area was constructed by integrating the interpreted multi-temporal aerial photographs, satellite images, and some temporal images from Google Earth.
Verifications were carried out through field investigations. In this research, the predisposing factors were selected from among the factors most widely considered in the literature, together with opinions from experts, as depicted in Table 1.
Most data are derived as primary data from remote sensing techniques for a large area with up-to-date information. As such, fifteen predisposing factors are selected for the landslide susceptibility analysis using bivariate and multivariate statistical techniques. Of these, twelve factors are derived from optical images, the DEM and auxiliary data, while three more factors are derived from radar images. These factors were then combined in order to analyse the performance of this integration for landslide susceptibility analysis.
Topographical Factors
The topographical factors include elevation, slope, aspect, planar curvature, profile curvature, and surface roughness of the terrain. The elevation is important for studying the local relief of the terrain and ranges from 446–1537 m above MSL in the study area. Since the area contains high mountains, more than 1000 m difference in elevation can be observed. The basic parameter for slope stability analysis is the slope angle. The slope angle of the study area ranges from 0° to 80°, showing a significant increase of slope within a relatively small area. Additionally, an area with steep slopes ranging from 60°–80° can be seen in the northern part of Koslanda. Aspect is defined as the direction of maximum slope of the terrain surface, or the compass direction of a particular slope. The curvature is theoretically defined as the rate of change of slope (gradient) of the focused slope. Planar curvature describes convergence and divergence of the flow across a surface, while the profile curvature refers to acceleration or deceleration of the flow across a surface.
Under radar configuration, the magnitude of radar backscatter is defined as a function of surface roughness and moisture content. Similar studies from Rahman et al. (2008) and Septiadi and Nasution (2009) emphasized the extraction of surface roughness from radar data using textural analysis. Hence, to estimate the surface roughness without the use of any ancillary field data, a radar image from 12th March 2015, under dry climatic conditions, was used to reduce the effect of the moisture component on the radar backscatter. The texture is the structure, or appearance, of the surface, and as such describes the coarseness or the homogeneity of the image structure. One of the most prominent methods for texture analysis is the Grey Level Co-occurrence Matrix (GLCM), which is based on the second order probability density function. Hence, the GLCM texture analysis is performed using a window size of 9×9 pixels, and the homogeneity or dissimilarity criterion is used to determine the surface roughness of the study area.
Hydrological Factors
Distance to hydrological features, rainfall, and the TWI defined by Eq. (1) are selected as the hydrological factors for this landslide susceptibility analysis. Proximity to hydrological features is an important factor when considering landslide susceptibility analyses (Sar et al. 2016, Shahabi and Hashim 2015). TWI is a solid index that is capable of predicting areas susceptible to saturation or wetness of land surfaces, and the areas that have the potential to produce overland flow. Within the Sri Lankan context, heavy and prolonged rainfall is the main triggering factor for landslides. The monthly average rainfall data for the years 2014 to 2016 from 10 stations near Koslanda were used in this study. Monthly rainfall data from the 10 rain gauge stations are averaged, and the average rainfall map for the study area is generated using the Inverse Distance Weighting (IDW) interpolation method within the ArcGIS environment. TWI has been used to study the spatial scale effects, or topographic control, on hydrological processes. This index was developed by Beven and Kirkby (1979) and can be defined in Eq. (1) as:

TWI = ln(α / tan β)    (1)

where α is the local upslope area draining through a certain point per unit of contour length, and β is the gradient of the local slope in degrees. The applicability of the TWI in the calculation and validation of landslide susceptibility analysis has been shown by Kavzoglu et al. (2014) and Sørensen et al. (2006), among others.
Soil Factors
The Soil Moisture Index (SMI) defined in Eq. (2) and the Delta Index defined in Eq. (5) are the soil factors focused upon in this research. Surface soil moisture is one of the most important parameters in land susceptibility analysis (Carlson et al. 1994, Zhan et al. 2002). Several methods have been proposed to estimate the surface soil moisture conditions accurately with in situ measurements. However, these methods are time consuming and costly when the area of interest is large and the scale of work is small. Hence, this research uses the Universal Triangle relationship between soil moisture, Normalized Difference Vegetation Index (NDVI) and Land Surface Temperature (LST), derived from Landsat-8 image bands, as an optical remote sensing approach, and the Delta Index, derived from two radar images under wet and dry conditions, as a radar remote sensing approach. Band 5 (Near Infrared (NIR), 30 m resolution), band 4 (Red, 30 m resolution) and band 11 (Thermal, TIR-2, 100 m resolution) of the Landsat-8 image of 3rd July 2015 are processed for extracting the soil moisture index in the Thermal-NDVI space. The SMI is "0" along the dry edge and "1" along the wet edge. According to the studies from Wang and Qu (2009) and Zenga et al. (2004), SMI can be defined in Eq. (2) as:

SMI = (Tmax − T) / (Tmax − Tmin)    (2)

where Tmax and Tmin are the maximum and minimum surface temperature for a given NDVI, and T is the remotely sensed surface temperature at a given pixel for that NDVI. The simple regression relationships between T and NDVI are formulated in Eq. (3) and Eq. (4) as:

Tmax = a1 · NDVI + b1    (3)
Tmin = a2 · NDVI + b2    (4)

where a1 = −5.2362, b1 = 300.14, a2 = 2.9254, and b2 = 289.11.
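A schematic raster implementation of Eqs. (2)-(4) (our sketch; the inputs are assumed to be co-registered NDVI and LST grids derived from the Landsat-8 bands listed above):

```python
import numpy as np

# Regression coefficients of the dry/wet edges from Eqs. (3)-(4)
A1, B1 = -5.2362, 300.14   # dry edge: T_max = A1*NDVI + B1
A2, B2 = 2.9254, 289.11    # wet edge: T_min = A2*NDVI + B2

def soil_moisture_index(ndvi, lst):
    """SMI per pixel: 0 on the dry edge, 1 on the wet edge (Eq. 2)."""
    t_max = A1 * ndvi + B1
    t_min = A2 * ndvi + B2
    smi = (t_max - lst) / (t_max - t_min)
    return np.clip(smi, 0.0, 1.0)   # clamp pixels falling outside the triangle

# Toy example with random grids standing in for the Landsat-8 products:
ndvi = np.random.uniform(0.1, 0.8, (4, 4))
lst = np.random.uniform(289.0, 300.0, (4, 4))   # kelvin
print(soil_moisture_index(ndvi, lst))
```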
Radar remote sensing provides advantages for extracting near surface soil moisture (0–5 cm), including timely coverage with repeat passes during day and night, under all weather conditions. Technically, the surface roughness and vegetation affect radar backscatter much more than soil moisture. Hence, both the surface roughness and vegetation have to remain unchanged between the image acquisitions for soil moisture estimation (Thoma et al. 2006). The Delta Index is a modified image differencing technique, and many studies (Barrett et al. 2009, Sano et al. 1998, Thoma et al. 2004) have proven it to be a good predictor for near surface soil moisture extraction. This index describes the change of wet scene backscatter relative to the dry scene backscatter, and is defined by Thoma et al. (2004) in Eq. (5) as:

Delta Index = |σ⁰wet − σ⁰dry| / σ⁰dry    (5)

where σ⁰wet is the radar backscatter (decibels) from a pixel in the radar image representing wet soil conditions, and σ⁰dry is the radar backscatter (decibels) from a pixel in the same geographic location representing dry soil conditions at a different time. Sentinel-1 images with 10 m spatial resolution and VV polarization are used in the presented study. The dry reference image was acquired on 12th March 2015 and the wet image was acquired on 24th November 2014, after the landslide in Meeriyabedda, Sri Lanka. Therefore, the topographical changes like roughness and vegetation density showed no significant changes during these four months' time.
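The radar-side counterpart, Eq. (5), is equally direct on co-registered backscatter grids (sketch; σ⁰ arrays in dB are assumed, and taking the absolute value in the denominator is our convention for the negative dB values):

```python
import numpy as np

def delta_index(sigma0_wet_db, sigma0_dry_db):
    """Delta Index of Thoma et al. (2004), Eq. (5): change of wet-scene
    backscatter relative to the dry reference scene, per pixel."""
    return np.abs(sigma0_wet_db - sigma0_dry_db) / np.abs(sigma0_dry_db)

wet = np.array([[-11.2, -9.8], [-10.5, -12.1]])   # toy dB values
dry = np.array([[-14.0, -13.2], [-13.5, -14.8]])
print(delta_index(wet, dry))
```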
Land Use
The major land uses existing in this study area are identified as tea, scrub, forest, rock, rice, water, and residential. The Sentinel-2A image from 10th October 2016 is used to extract the desired land uses of the study area by applying supervised classification. Scrub areas are typically the tea estates that are in abundance, while the residential areas are the rooms of tea workers. It is noted that most of the devastating landslides in this area had occurred within the extensive tea estates. Hence, the main reason for the continuous occurrence of these landslides can be identified as the lack of proper land use management in the area.
Forest biomass is a significant factor that can control landmass failures or landslides. The main limitations of using optical remote sensing for forest biomass estimation are the near-constant tropical cloud cover and the insensitivity of reflectance to changes of biomass in older and mixed forests. Radar has the potential to overcome the above limitations due to its all-weather, day-and-night capability, together with the positive relationship between radar backscatter and forest biomass. Kuplich et al. (2005) and Caicoya et al. (2016) related the radar image texture derived from GLCM to the forest biomass. An experiment was conducted by Kuplich et al. (2005) with seven texture measures, but only the GLCM-derived contrast increased the correlation between the backscatter and the log of biomass, in Eq. (6) as:

log(biomass) = 2.24 + 0.33 b + 0.0001 c    (6)

where b is the radar backscatter and c is the GLCM contrast texture for the particular radar image. A TerraSAR-X spotlight image from 2nd November 2014, with 3 m resolution and dual polarization (HH and VV), was used to estimate the forest biomass in this work.
Geological Factors
Geology refers to the physical structure and substance of the Earth. In order to investigate landmass failures, the geological structure of the particular area has to be analysed carefully. In addition to the geology of the area, lineament density has also been considered as a factor. The geological information of the particular area is obtained from the geological map available at the Geological Survey and Mines Bureau (GSMB), Sri Lanka, at 1:100000 scale, and seven types of different geological structures are contained in the selected study region. Lineaments are extractable linear features which are correlated with the geological structures of the earth. When considering the analysis of lineaments with respect to landslide potentiality, lineaments exhibit the zones of weakness surfaces such as faults, fractures, and joints (Mandal and Maiti 2015). This study uses the Sentinel-2A optical satellite image, with 10 m resolution, for the extraction of lineaments of the study area.
After a decisive analysis of the types of predisposing factors, the presented work proceeded to consider fifteen predisposing factors that are derived from optical, radar and other available auxiliary data sources. Three significant causative factors, surface roughness, soil moisture from the Delta Index, and forest biomass, were estimated using radar satellite images. Thus, this work investigated the performance of landslide susceptibility analysis using bivariate and multivariate methods with the inclusion of RIF; the processing steps are described in Fig. 2.
The weights of influence of all predisposing factors, as thematic maps, are added in bivariate and multivariate fashion to obtain the contribution of all predisposing factors to the landslide susceptibility analysis. After calculating the cumulative percentage of failures of the weighted susceptibility maps, value ranges for each percentage of failure are obtained from a quantile classification with 10 classes. The entire study area of each landslide susceptibility map is then discretized into four classes, at 0%, 10%, 30% and 60% of failure regions, for the very low, low, moderate, and high susceptibility classes, respectively.
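A sketch of this discretization step (our illustration; the mapping of cumulative failure shares to the four classes is schematic):

```python
import numpy as np

LEVELS = ["very low", "low", "moderate", "high"]

def classify_susceptibility(score, failures):
    """Quantile-classify a susceptibility map into 10 classes, then merge
    them into four levels so that, counted from the top decile down, the
    cumulative failure shares of roughly 60/30/10/0 % map to the high,
    moderate, low and very low classes described above (schematic)."""
    shape = np.shape(score)
    s = np.ravel(score).astype(float)
    f = np.ravel(failures).astype(bool)
    edges = np.quantile(s, np.linspace(0, 1, 11))
    decile = np.clip(np.digitize(s, edges[1:-1]), 0, 9)   # 0 lowest .. 9 highest
    n_fail = max(f.sum(), 1)
    level_of, cum = {}, 0.0
    for b in range(9, -1, -1):                            # scan from the top decile
        level_of[b] = ("high" if cum < 0.60 else
                       "moderate" if cum < 0.90 else
                       "low" if cum < 1.0 - 1e-9 else "very low")
        cum += f[decile == b].sum() / n_fail
    return np.array([level_of[b] for b in decile]).reshape(shape)

# Toy demo with a random map and sparse "failure" pixels:
rng = np.random.default_rng(0)
score = rng.random((50, 50))
failures = score + 0.2 * rng.standard_normal((50, 50)) > 1.0
lab = classify_susceptibility(score, failures)
print({lvl: round(float((lab == lvl).mean()), 3) for lvl in LEVELS})
```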
Results and Discussions
Four landslide prediction models are discussed: (i) bivariate without RIF (BiNR), (ii) bivariate with RIF (BiWR), (iii) multivariate without RIF (MNR), and (iv) multivariate with RIF (MWR). The region has been analysed and classified into four landslide susceptibility classes: high, moderate, low, and very low.
Bivariate InfoVal method Without and With RIF
Susceptible regions are identified from the bivariate InfoVal method without RIF as 12% high, 45% moderate, 38% low, and 5% very low, as shown in Fig. 3(a). Hence, 57% of the total study area is predicted as having high or moderate susceptibility to the landslide hazard. Very steep slope mountains in the North, North West, and East regions are identified as very low susceptibility areas, given that the area was free from historical landslides. The middle regions with 30°–50° slope are detected as having a high probability of landslide occurrences. The bivariate InfoVal method with RIF identified 19% of failure regions as having high susceptibility, 39% moderate, 33% low, and 9% very low, as presented in Fig. 3(b). Therefore, 58% of the total study area is predicted as having high or moderate susceptibility to landslides. Very steep slope mountains in the North, North West, East, and South East regions, and the area near the Eruwendumpola Oya, are identified as having very low susceptibility to landslides. Similar to the bivariate analysis without RIF, the middle regions with 30°–50° slope are detected as having a high probability of landslide occurrences; the reason for this is mainly the past experience of the Naketiya and Meeriyabedda landslides, which took place in the same area. As mentioned before in Sec. 1.1, the presented work utilizes the historical database of landslides and land failures in the same region.
Multivariate MCDA based on AHP Without and With RIF
All fifteen weighted predisposing factors were grouped as without and with RIF, and weighted overlay was performed separately in order to obtain the landslide susceptibility regions. The calculated weights for elevation, slope, aspect, planar curvature, profile curvature, TWI, land use, lineament density, distance to water bodies, SMI in the NDVI-T domain, geology, and rainfall are 0.030, 0.172, 0.022, 0.018, 0.014, 0.074, 0.149, 0.052, 0.045, 0.094, 0.185, and 0.145, respectively. The Consistency Ratio (CR) is a measure of consistency in subjective judgement; a CR of 0 indicates perfectly consistent relative judgements, while values up to 0.1 are considered acceptable. For the present work, the CR for the relative judgement of weighting the predisposing factors is 0.089 in the pairwise comparison. The weights for the fifteen predisposing factors with RIF, as elevation, slope, aspect, planar curvature, profile curvature, TWI, land use, lineament density, distance to water bodies, SMI in the NDVI-T domain, geology, rainfall, soil moisture (Delta Index), surface roughness, and forest biomass, are 0.022, 0.145, 0.016, 0.013, 0.011, 0.053, 0.126, 0.039, 0.033, 0.065, 0.153, 0.124, 0.088, 0.088, and 0.027, respectively. When considering the fifteen predisposing factors, the CR is 0.092, which is less than 0.1, thereby showing a realistic level of consistency in the pairwise comparison matrix.
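The weighted overlay itself reduces to a weighted sum of the reclassified factor rasters. A minimal sketch using the twelve without-RIF weights quoted above (our illustration; the random 1-9 ratings are placeholders for the reclassified factor maps):

```python
import numpy as np

WEIGHTS = {                      # AHP weights, without-RIF model (from the text)
    "elevation": 0.030, "slope": 0.172, "aspect": 0.022,
    "planar_curvature": 0.018, "profile_curvature": 0.014, "twi": 0.074,
    "land_use": 0.149, "lineament_density": 0.052, "dist_water": 0.045,
    "smi_ndvi_t": 0.094, "geology": 0.185, "rainfall": 0.145,
}

def weighted_overlay(factor_rasters):
    """Sum of the factor rasters (each already reclassified onto a common
    rating scale) multiplied by their AHP weights."""
    out = None
    for name, w in WEIGHTS.items():
        layer = w * factor_rasters[name]
        out = layer if out is None else out + layer
    return out

# Toy demo: random 1-9 ratings standing in for the reclassified factors
rng = np.random.default_rng(1)
rasters = {k: rng.integers(1, 10, (100, 100)).astype(float) for k in WEIGHTS}
susceptibility = weighted_overlay(rasters)
print(susceptibility.min(), susceptibility.max())
```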
FIGURE 3
Figure 3(c) illustrates the landslide susceptibility map from the multivariate method without RIF, which identifies 18% high, 44% moderate, 36% low and 2% very low susceptibility regions. Hence, 62% of the total study area is predicted to be of high or moderate susceptibility to the landslide hazard. In the landslide susceptibility map from the multivariate method with RIF, 21% of the total area shows high susceptibility to landslides, with 40% moderate, 34% low, and 5% very low susceptibility, as shown in Fig. 3(d). Hence, 61% of the study area is predicted as having high or moderate susceptibility to the landslide hazard. In a similar manner to the InfoVal method, the tops of the mountains in the North, North West, East, and South East regions, and the area near the Eruwendumpola Oya, are identified as having very low susceptibility to landslide hazards, while the middle regions with 30°–50° slopes are detected as having high and moderate probability of landslide occurrences.
The areas identified as having high and moderate susceptibility in these four approaches (57%, 58%, 62%, and 61% respectively in BiNR, BiWR, MNR, and MWR) are close in value, but show an increase in multivariate analysis when compared with bivariate analysis, as tabulated in Table 2. Moderate and low landslide susceptibility areas show very small (1–2%) changes between these four types of analysis. With the integration of RIF as surface roughness, near surface soil moisture from the Delta Index, and forest biomass in the bivariate and multivariate analyses, the high and very low susceptibility areas increase significantly (high: 7% bivariate, 3% multivariate; very low: 4% bivariate, 3% multivariate). However, when comparing the high and very low susceptibility areas from bivariate and multivariate analysis, the high susceptibility areas show a considerable increase (without radar: 6%; with radar: 2%) while the very low susceptibility areas show a noteworthy decrease (without radar: 3%; with radar: 4%).
Results Validation
The landslide susceptibility maps derived from the bivariate and multivariate analyses are validated using the selected validation samples from the landslide failure map. The most commonly used and scientifically recognized Receiver Operating Characteristic (ROC) curves are used to analyse the prediction and validation performances. ROC is a graphical plot that illustrates the performance of a classification, and has been considered a powerful tool for the validation of landslide susceptibility analysis for many years (Neuhäuser et al. 2012). The Areas Under the Curves (AUC) for the four different approaches, as bivariate and multivariate without and with RIF, are calculated and graphed in Fig. 4.
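As an illustration of this validation step (our sketch; scikit-learn is our choice here, since the study does not specify its software):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# y_true: 1 where a validation landslide pixel exists, 0 elsewhere;
# y_score: the continuous susceptibility value at the same pixels.
rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, 500)
y_score = y_true * 0.4 + rng.random(500) * 0.8   # toy, loosely informative scores

auc = roc_auc_score(y_true, y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(f"AUC = {auc:.2f}")   # 1.0 is excellent, 0.5 is no better than chance
```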
FIGURE 4
The areas under the success rate curves measure how well the landslide prediction analysis fits the training data set, while the areas under the prediction rate curves measure how well the landslide prediction models and landslide causative factors predict the landslides. If the area under the ROC curve is closer to 1, the result of the test is excellent, and vice versa; when the AUC is closer to 0.5, the result of the test is fair or acceptable (Kamp et al. 2008).
The AUC of all the success rates are more or less near 0.80, indicating good performances according to the definition. The AUC of all the prediction rates have values above 0.50, thereby indicating that they are within the acceptable range as per the definition. As such, they indicate that the accuracy of the prediction rate of landslide susceptibility and the selection of landslide causative factors are acceptable, but not excellent, even though the fit between the landslide prediction and the training data set is excellent, as compared in Table 3. The incompleteness of the available landslide inventory map, as well as an insufficient number of validation samples in the study area, can be given as reasons for the discrepancy. From the results obtained, it can be concluded that the bivariate and multivariate statistical analyses, without and with RIF, can be used for landslide prediction analysis. However, with the integration of RIF as surface roughness, near surface soil moisture from the Delta Index, and forest biomass, the detection of the boundary between the high and very low susceptibility areas is increased. When comparing the bivariate analysis with the multivariate analysis, the areas identified as high susceptibility regions increased while the very low susceptibility regions decreased. As a whole, better prediction and validation performances are shown by the bivariate analysis when compared with the multivariate approaches.
This study focused on the applicability of remote sensing and GIS for rapid landslide prediction analysis at a finer scale. Further, considering the significance of radar data for landslide analysis, this study mainly investigates the efficacy of radar induced factors for landslide prediction analysis, which has not been well explored in current research. The most significant factors derivable from radar, surface roughness, soil moisture, and forest biomass, are incorporated to examine the landslide prediction analysis. Successful prediction and validation of the prediction analysis via ROC curves are achieved. Even though this study was tested on a sample area, the same methodology can be applied to any landslide-prone area to investigate landslide prediction using radar induced factors with bivariate and multivariate analysis. This is because the radar induced factors can be derived for any area, as long as the data are available, at any time and under any weather conditions, as radar is weather independent. Additionally, the technology can be learned easily and anyone can be trained to use this methodology to predict landslide susceptibility areas, which is especially helpful for developing countries that do not have up-to-date data at fine resolutions.
Figure 1 Topographical formation of Koslanda, Sri Lanka with its previous landslide signatures
Figure 2 Methodological flow of the landslide susceptibility analysis using bivariate and multivariate approaches
Figure 3 Landslide susceptibility maps from bivariate and multivariate analysis without and with RIF. (a) bivariate without RIF, (b) bivariate with RIF, (c) multivariate without RIF, and (d) multivariate with RIF
Figure 4
TABLE 2
TABLE 3

Conclusions

The main difference between bivariate and multivariate analysis is that in multivariate analysis, the selected predisposing factors are also weighted by considering how each of them influences landslides. This study investigated fifteen landslide predisposing factors: elevation, slope, aspect, planar curvature, profile curvature, TWI, land use, lineament density, distance to hydrology, SMI in the NDVI-T domain, geology, rainfall, soil moisture (Delta Index), surface roughness, and forest biomass. Most of the factors are derived from radar and optical remote sensing techniques, where smaller-scale studies with up-to-date information allow the work to be conducted at meter-level accuracy, with repeated analyses simultaneously. | 2019-08-05T10:02:04.232Z | 2019-01-03T00:00:00.000 | {
"year": 2019,
"sha1": "a7922b8abf2e2a03ee72259cacabac0f32d55e66",
"oa_license": "CCBY",
"oa_url": "https://nhess.copernicus.org/articles/19/1881/2019/nhess-19-1881-2019.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a7922b8abf2e2a03ee72259cacabac0f32d55e66",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
220518923 | pes2o/s2orc | v3-fos-license | Michael Acceptors Tuned by the Pivotal Aromaticity of Histidine to Block COVID-19 Activity
The question of whether the COVID protease (SARS-CoV-2 M pro) can be blocked by inhibitors has been examined, with a particularly successful performance exhibited by α-ketoamide derivative substrates like 13b of Hilgenfeld and co-workers (Zhang L. et al., Science 2020, 368, 409-412; PMID 32198291). Following the biological characterization, density functional theory calculations here explain not only how inhibitor 13b produces a thermodynamically favorable interaction but also how it is reached kinetically. The controversial and unprovable concept of aromaticity here enjoys being the agent that rationalizes the seemingly innocent role of histidine (His41 of M pro). It forms a hydrogen bond with the hydroxyl group and is the proton carrier for the thiol of Cys145 at almost zero energy cost, which favors the interaction with the inhibitor that acts as a Michael acceptor.
to 162 is quite rigid. On the other hand, there is His41, which is part of a helix located in a flexible region.
All in all, the analysis of the real systems was performed at three different levels, the first model including all the amino acid chains within the 10 Å sphere. Unfortunately, this was unaffordable, not only because of the thermal corrections, but also because of the geometry optimization, which entails the flexibility of the different parts between the chosen part of M pro and the inhibitor 13b. The second model still included the entire inhibitor, but removed the fragments around histidine His41, further from the active site where the C-S bond formation takes place. The third model is the same as the second model, but includes His41, given its active role in the C-S bond formation. In fact, this third model can use the same energy reference as the second, simply by considering the group containing histidine His41.
Benchmark
Actually, the study on COVID started with a benchmark based on a further simplified fourth model, where the SH group was just linked to a methyl, and the keto group of the amide of inhibitor 13b was surrounded just by the CONH(CH3) moiety on one side and CH(CH3)NHCO(CH3) on the other side. For the sake of consistency, the most widely used functional was employed, namely B3LYP, which combines Becke's three-parameter exchange with the Lee-Yang-Parr correlation, including the D3 dispersion correction by Grimme. The discrepancy with respect to the most accurate basis sets, taking into account the enormous size of the systems under study, had to be considered and given the corresponding weight. Solvent effects were also tested, including explicit and/or implicit models. Because water is the solvent, the use of explicit molecules that help proton transfers was mandatory.
The small models of the active center of M pro with the inhibitor 13b, simplified or not, allowed the right picture to be obtained for this benchmark reference point.
It should be stated at the outset that all the energies discussed are referenced to the separated fragments, although the adducts or intermediates have also been calculated; the separated fragments are used as the reference since in all cases the adducts are energetically disadvantaged. However, regarding where the activity resides, it could be controversial which value would be the most realistic. Here honesty is mandatory, even if it works against the results presented, and the values of the worst possible scenario are taken. Therefore, one might think that the values for the energy barriers of the transition states, as well as for the deprotonations, could be overestimated. However, free energies in solvent are calculated at P = 1354 atm to correct this overestimation, following the advice of Martin and coworkers [6]. Turning to the three models, and thus to the results that validate the conclusions of the manuscript, the Michael addition, where the SH group interacts with the keto group of the amide of inhibitor 13b, was tested with and without implicit solvent, and it was concluded that the results varied very slightly. This then allowed the study of systems of up to 500 atoms, which would not have been possible otherwise. In Table S1 it can be seen that the formation of the C-S bond requires only 1.7 kcal/mol less at P = 1 atm, and even 1.1 kcal/mol more at P = 1354 atm, validating the convergence of the values whether or not implicit solvent effects are included in the geometry optimizations. Interestingly, assistance by explicit water molecules did not affect the geometry optimizations much, and only in the transition state was the addition of one explicit water molecule favorable. In the other intermediates studied, the presence of water molecules did not imply additional stabilization from a thermodynamic point of view.
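For context on the P = 1354 atm choice (our own note, assuming T = 298.15 K): via P = cRT, that pressure corresponds to an ideal-gas concentration of roughly 55.3 mol/L, i.e. the concentration of liquid water, and shifts each species' free energy by RT ln(P/P°):

```python
import math

R_KCAL = 1.987204e-3      # gas constant, kcal/(mol K)
R_LATM = 0.0820574        # gas constant, L atm/(mol K)
T = 298.15                # K (assumed temperature)
P, P0 = 1354.0, 1.0       # atm

conc = P / (R_LATM * T)                 # ideal-gas concentration at P
dG = R_KCAL * T * math.log(P / P0)      # per-species free-energy shift
print(f"c = {conc:.1f} mol/L (~ liquid water), "
      f"RT ln(P/P0) = {dG:.2f} kcal/mol")
```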
The size of the calculation basis set was a major concern for this work, and although it started with the split-valence polarized def2-SVP basis, it had to be switched to 3-21G(d), as the systems would not [...] inhibitor and the simplified M pro is placed 3.3 kcal/mol above in energy (see Fig. S1a); therefore both species are driven to find each other to form a H-bond.
In addition, Fig. S1e shows the expected structure after the C-S bond formation, with a H-bond between the oxygen atom of the hydroxyl and the NH moiety of the nearby His41, while Fig. S1f displays a more stable conformation, by 7.7 kcal/mol, where there is a H-bond between the hydroxyl and the non-protonated nitrogen of His41. Geometrically, comparison with the corresponding X-ray data (see Table S3) provides evidence that the computed values in schemes e and f of Fig. S1 are feasible.
The analysis of the absolute and relative errors in Table S3 could lead to the conclusion that the protonation of the hydroxyl group goes along with the protonation of the NH group of His41 closest to it. However, the N···O distance is too short for systems where the NH fragment is the other one, compared to the X-ray structures. Therefore, this study confirms that the short piece of His41 is not enough, especially for the latter protonation scheme of His41. Probably the most realistic scenario is that the 5-membered ring of His41 rotates freely, and this would explain why the C-N distances are almost identical for both nitrogen atoms when only one is protonated.
The direct formation of the C-S bond, without deprotonation of the thiol group, using model 2, is displayed in Fig. S2. The reactant complex and the product are placed 19.2 and 24.0 kcal/mol below the separated reactants, with a transition state linking them with an energy barrier of 36.1 kcal/mol. The latter energy barrier decreases by 9.0 kcal/mol when assisted by an explicit water molecule. Explicit water molecules can help only in the kinetics here, because thermodynamically they represent a destabilization of at least 6.1 kcal/mol for the intermediate and the product.
In order to validate the results, calculations were also performed starting from the X-ray structure with space group P2₁2₁2₁, for both protomers, A and B. For protomer A, with model 1, the reactant complex is 15.2 kcal/mol more stable than the two moieties separately. When the direct formation of the C-S bond between them is checked, a transition state is localized with an energy barrier of 32.5 kcal/mol, which decreases by 7.1 kcal/mol when assisted by an explicit water molecule. This means proceeding via a 6-membered transition state instead of a constrained 4-membered one. The final product is placed 28.2 kcal/mol below the separated reactants. Interestingly, the latter product is again 2.3 kcal/mol more stable when the closest nitrogen of His41 assists with a H-bond to the hydroxyl group. For the sake of consistency, when freezing all heteroatoms (thus all atoms except the hydrogens) at the X-ray geometry, this difference, following the same trend, is 1.3 kcal/mol. Switching to the more simplified model 2, the reactant complex is just 7.9 kcal/mol more stable, and the product 10.2 kcal/mol. Including the imidazole in model 3, the reactant complex bearing the free electron pair of the histidine nitrogen closer to the thiol is favored by 3.0 kcal/mol. For the corresponding product this difference is nearly identical, 2.2 kcal/mol (and 2.8 kcal/mol when freezing all atoms except the hydrogens). For protomer B, no significant trend or energy difference was observed. It was also checked again for model 2 that the direct C-S bond formation assisted by a water molecule displays an energy barrier of 27.8 kcal/mol, which is not feasible at room temperature and is highly disfavored with respect to the deprotonation of the thiol by the imidazole group of His41.
Steric maps
To characterize the occupation around Cys145, using the sulfur atom as the center, steric maps were computed. The orientation for the steric map in Fig. 1g is plotted in Fig. S3 (the steric map is performed in the xy plane). Next, in Tables S4-S6 the steric maps, together with the %V_bur values (total and by quadrants), are included. The analyses reveal that the reactive pocket must be described by what is around the thiol group within a radius of at least 7 Å, and in more detail using 10 Å.
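%V_bur is typically evaluated as the fraction of a probe sphere occupied by the van der Waals volumes of the surrounding atoms (as done, for example, by tools such as SambVca). A minimal Monte-Carlo version of that idea (our sketch, not the program used in the study; the coordinates, radii and probe radius below are placeholders):

```python
import numpy as np

def percent_v_bur(coords, radii, center, sphere_r=3.5, n=200_000, seed=0):
    """Fraction (%) of a sphere of radius sphere_r around `center` that is
    buried inside the van der Waals spheres of the given atoms."""
    rng = np.random.default_rng(seed)
    # uniform points inside the probe sphere (rejection sampling from a cube)
    pts = rng.uniform(-sphere_r, sphere_r, (int(n * 2), 3))
    pts = pts[np.linalg.norm(pts, axis=1) <= sphere_r][:n] + center
    d = np.linalg.norm(pts[:, None, :] - coords[None, :, :], axis=2)
    buried = (d <= radii[None, :]).any(axis=1)
    return 100.0 * buried.mean()

# Placeholder geometry: three atoms near a sulfur-centered probe sphere
coords = np.array([[0.0, 0.0, 1.8], [1.2, 0.8, 2.5], [-1.0, 1.5, 0.5]])
radii = np.array([1.70, 1.52, 1.55])      # vdW radii (C, O, N), Angstrom
print(f"%V_bur = {percent_v_bur(coords, radii, np.zeros(3)):.1f}")
```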
NCIplots
To highlight the stabilization between inhibitor 13b and M pro, non-covalent interactions (NCI) are traced in Fig. S4, calculated using the NCIplot program developed by Contreras-Garcia and coworkers [21] on the optimized structure of the largest model of the X-ray structures, as well as on the transition state leading to the formation of the C-S bond and the preceding intermediate. The differences are very small, and there are basically favorable interactions where there are hydrogen bonds between the inhibitor 13b and M pro. Fig. S4 shows the NCI plot for the structure corresponding to what would be the product for model 3 from the X-ray structure with space group C2. Since it is difficult to distinguish which are the main interactions, we can summarize that 13b basically interacts with a quite long list of amino acids from M pro: Cys145, Asn142, Gly143, Phe140, His163, Glu166, Thr26, Gln189, and His164. Of course, for the calculated intermediates including His41, the favorable interactions with the thiol are particularly remarkable in the NCI plots.
Energy data and xyz coordinates
All xyz coordinates and absolute energy data of the computed geometry optimizations, and the single-point energy calculations in solvent, are included in Table S7.
Figure S2. Reaction profile of the direct C-S bond formation between inhibitor 13b and the thiol moiety for model 2 of the X-ray structure with space group C2 (free energies in kcal/mol; the structures are simplified for the sake of clarity).
Figure S3. Orientation of the x, y, z axes of the steric map of the active site in the crystallographic structure (space group C2) of the protease M pro. On the z axis is the sulfur atom of the cysteine that bonds to 13b, with the carbon atom of the carbonyl at the origin and its oxygen atom in the xz plane (in Å).
Figure S4. NCIplot of model 3 from the X-ray structure with space group C2 (left); detailed snapshot of the C-S bond (right). The isosurface represents a value of 0.4, with a color scale for the reduced density gradient from −0.05 (red) to 0.05 (blue).
Table S1. Interaction of the model system of the thiol moiety (CH3-SH) of M pro together with the inhibitor 13b, and the 5-membered ring of the histidine His41 (free energies in kcal/mol).
Table S3. Comparison of the imidazole for the computed models and X-ray data (in Å).
Table S5. Steric map of the active site in the crystallographic structure (protomer A, space group P2₁2₁2₁) of the protease M pro.
Table S6. Steric map of the active site in the crystallographic structure (protomer B, space group P2₁2₁2₁) of the protease M pro. | 2020-07-15T13:05:56.325Z | 2020-07-13T00:00:00.000 | {
"year": 2020,
"sha1": "89964d2e5ced9fd0306383cdbb26e206ea385c89",
"oa_license": null,
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acs.jpclett.0c01828",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "566bee49e77f16c0c4e7e56685f1a8557984d3d7",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
139102504 | pes2o/s2orc | v3-fos-license | [18F]fluorothymidine and [18F]fluorodeoxyglucose PET Imaging Demonstrates Uptake and Differentiates Growth in Neurofibromatosis 2 Related Vestibular Schwannoma
Supplemental Digital Content is available in the text
Objective: To investigate whether [18F]fluorothymidine (FLT) and/or [18F]fluorodeoxyglucose (FDG) positron emission tomography (PET) can differentiate growth in neurofibromatosis 2 (NF2) related vestibular schwannomas (VS), and to evaluate the importance of PET scanner spatial resolution on measured tumor uptake. Methods: Six NF2 patients with 11 VS (4 rapidly growing, 7 indolent) were scanned with FLT and FDG using a high-resolution research tomograph (HRRT, Siemens) and a Siemens Biograph TrueV PET-CT, with and without resolution modeling image reconstruction. Mean, maximum, and peak standardised uptake values (SUV) for each tumor were derived, and the intertumor correlation between FDG and FLT uptake was compared. The ability of FDG and FLT SUV values to discriminate between rapidly growing and slow growing (indolent) tumors was assessed using receiver operator characteristic (ROC) analysis. Results: Tumor uptake was seen with both tracers, using both scanners, with and without resolution modeling. FDG and FLT uptake was correlated (R² = 0.67–0.86, p < 0.01), and rapidly growing tumors displayed significantly higher uptake (SUVmean and SUVpeak) of both tracers (p < 0.05, one-tailed t test). All of the PET analyses performed demonstrated better discriminatory power (AUC ROC range = 0.71–0.86) than tumor size alone (AUC ROC = 0.61). The use of a standard resolution scanner with standard reconstruction did not result in a notable deterioration of discrimination accuracy. Conclusion: NF2 related VS demonstrate uptake of both FLT and FDG, which is significantly increased in rapidly growing tumors. A short static FDG PET scan with standard clinical resolution and reconstruction can provide relevant information on tumor growth to aid clinical decision making. Key Words: [18F]…

Neurofibromatosis 2 (NF2) is a dominantly inherited tumor predisposition syndrome affecting approximately 1 in 33,000 live births (1). The hallmark of this condition is the development of bilateral vestibular schwannomas (VS) (2,3), and once diagnosed, patients typically undergo annual magnetic resonance imaging (MRI) screening to evaluate size and growth, with a cohort of tumors displaying relatively rapid growth (3,4). The cornerstone of modern NF2 management is conservation of hearing function and quality of life (3,5). While surgery plays a role in the management of rapidly growing tumors, the decision to operate depends on multiple factors including hearing deterioration rate, tumor growth rate, and tumor size (3). Surgery carries significant risks such as facial nerve injury (3), but early surgery, before tumors become too large, reduces the complication risk and improves the outcome of adjunctive hearing preservation techniques such as auditory brainstem implantation (3,6–9). There is therefore a clinical need to identify rapidly growing VS early, but with current MRI screening regimens there is a danger of missing significant growth due to the time interval between scans. Furthermore, the accuracy and interobserver reproducibility of tumor measurements vary considerably depending on the measurement method used (10,11), and there is considerable debate within the literature as to what constitutes significant growth within lesions (12). An imaging biomarker that allows earlier identification of rapidly growing tumors would therefore be of clinical utility, particularly to patients harboring tumors that are approaching the threshold size for increased surgical risk.
In these cases, detecting further growth through serial MRI may mean that the optimum window for management has been missed, whereas predicting growth offers the best opportunity to maximize surgical outcomes and patient quality of life, and to avoid costly resulting treatment (13).
The positron emission tomography (PET) tracers 2-deoxy-2-[18F]fluoro-D-glucose (FDG) and 3'-deoxy-3'-[18F]fluorothymidine (FLT) have been increasingly used in oncology as imaging biomarkers of cellular metabolism and cellular proliferation, respectively. Tumor cells preferentially accumulate FDG due to increased expression of glucose membrane transporters and the enzyme hexokinase, alongside a tendency to favor the less efficient anaerobic pathway, resulting in greater metabolic demand (Warburg effect) (14-18). FLT is transported into cells by the same nucleoside transporters as thymidine and undergoes intracellular phosphorylation by the enzyme thymidine kinase 1. Thymidine kinase 1 is elevated in rapidly dividing cells, and consequently FLT uptake is a marker of cellular proliferation rate (19). Whereas FDG use within the central nervous system has been limited by constitutively high uptake within the normal brain (20,21), brain uptake of FLT is normally limited by the blood-brain barrier (22,23) but has been demonstrated in regions of blood-brain barrier disruption, such as within intrinsic glioma (24,25).
PET imaging in VS can be challenging, and previous FDG PET studies in non-NF2 patients with sporadic tumors have shown inconclusive results due to low uptake within the tumor compared with the adjacent cerebellum (20,21). Similarly inconclusive results have been reported when using other PET tracers relevant to central nervous system tumors, such as [11C]methionine (20). FLT or FDG PET has not, however, been previously described in NF2 patients, and there is growing evidence that sporadic and NF2 related tumors are biologically different, both at the macroscopic level (26) and with regard to their cellular proliferation indices (27,28).
The rationale of this pilot study was to investigate whether PET with FLT and/or FDG, in combination with MR, could be used in the future to assist in refining clinical decision making in NF2 related VS. The objective of this study was therefore first to assess whether VS in a cohort of NF2 patients have measurable FLT and/or FDG uptake, and second to determine whether rapidly growing tumors display differences in the uptake of these PET tracers compared with more indolent tumors. Given the comparatively small size and technically challenging location of VS in the context of PET imaging, a novel study design was adopted by which patients were scanned using both a conventional PET-CT and a high-resolution research tomograph (HRRT), which has the highest spatial resolution for human brain PET (29). Through such an approach, the effect of scanner spatial resolution and reconstruction methods on tracer uptake could also be assessed.
Patients
Patients were recruited via the nationally commissioned, specialized NF2 multidisciplinary team meeting in Manchester, UK. Adult patients (aged between 18 and 70 yr) with a confirmed diagnosis of NF2 and at least one vestibular schwannoma (VS) were recruited. Exclusion criteria included: female patients pregnant or intending to become pregnant; patients who had undergone previous radiotherapy or antiangiogenic treatment; and patients with contraindications to MRI. All patients gave informed written consent. The study was approved by an independent research ethics committee (REC 13/NW/0260) and by the United Kingdom Administration of Radioactive Substances Advisory Committee (ARSAC RPC 595/3586/30119).
All patients had undergone previous routine clinical assessment including MRI at 6 to 12 month intervals, and the median length of follow-up across all patients was 1.52 years (range = 0.60-7.30 yr). The study MRI scan was reviewed, in addition to the results of previous clinical MR imaging, by the multidisciplinary team, and tumors were classified as either rapidly growing or indolent. This classification reflected clinical decision making in these patients, with tumors classified as indolent undergoing further radiological surveillance and rapidly growing tumors being considered for either surgical resection or treatment with the antiangiogenic agent bevacizumab (Avastin). To confirm the differential growth pattern across these two cohorts, volumetric measurements of tumor size were made for the preceding clinical scan, the study MR scan, and a follow-up scan 1 year later (see Table 1). Volumetric measurements were made on T1-weighted (T1W) postcontrast imaging using the semiautomatic segmentation tool within the Brainlab iPlan software (Brainlab AG, Germany), and the results of segmentation were reviewed and, where necessary, edited by an experienced neuroradiologist (I.D.).
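For reference, the annualized growth rate used in this classification is simply the volume change divided by the scan interval in years; the following minimal Python sketch shows the arithmetic with illustrative numbers, not study data.

    def annual_growth_rate(v_prior_cm3, v_current_cm3, interval_yr):
        """Adjusted annual volumetric growth rate in cm^3/yr."""
        return (v_current_cm3 - v_prior_cm3) / interval_yr

    # Example: a tumor growing from 1.10 to 1.59 cm^3 over one year
    print(annual_growth_rate(1.10, 1.59, 1.0))  # -> 0.49 cm^3/yr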
PET Data Acquisition
Patients were scanned using FDG and FLT on two separate occasions, less than a week apart. For both tracers, 200 MBq was the target injected activity. Patients were scanned using both a conventional PET-CT scanner, the Truepoint TrueV Biograph PET-CT scanner (Siemens), with a spatial resolution of approximately 4.5 mm full width at half maximum (30), and a brain-dedicated scanner, the high-resolution research tomograph (HRRT, Siemens), with a spatial resolution of approximately 2.5 mm full width at half maximum (29). For each radiotracer, the scan sequence followed a 60-gap-30-gap-30 minute structure with alternated order of scanners, i.e., three scans for each radiotracer injection alternating between the PET-CT and the HRRT (with the sequence shown in supplementary Figure 1C, http://links.lww.com/MAO/A784). Patients were placed on one of the two scanners (with the initial scanner alternating between patients), injected with the radiotracer, and data acquired for 60 minutes (scan 1). Following a short break of between 10 and 20 minutes, patients were placed on the other scanner, with data acquired for 30 minutes (scan 2). Finally, following a second short break, the patient was placed on the original scanner with data acquired for a further 30 minutes (scan 3).
This scan sequence was devised to allow assessment of tracer uptake during scan 2 at approximately 75 minutes postinjection on both scanners, either from direct measurement or from linear interpolation of data from scans 1 and 3 (radioactivity concentrations in the VS followed an approximately linear relationship during this period for both tracers). For attenuation correction, a 6-minute transmission scan was acquired when using the HRRT (preinjection for scan 1 and postemission acquisition for scans 2 and 3) and a pre-emission CT scan when using the TrueV PET/CT scanner.
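The interpolation step described above amounts to fitting a straight line through the two measurements made on the same scanner and reading it off at the common time point; a minimal Python sketch (illustrative values only, not study data):

    def activity_at(t_target, t1, c1, t3, c3):
        """Linear interpolation of activity concentration between scans 1 and 3,
        assuming an approximately linear time course, as observed for the VS."""
        return c1 + (c3 - c1) * (t_target - t1) / (t3 - t1)

    # Example: scan 1 value at 50 min, scan 3 value at 110 min
    print(activity_at(75.0, 50.0, 4.2, 110.0, 3.6))  # -> 3.95 (kBq/ml)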
Image Reconstruction
Data from both scanners were reconstructed using implementations of three-dimensional iterative Ordinary Poisson Ordered Subset Expectation Maximisation (31), without (No-RM) and with resolution modeling (RM), reconstructing the data from the last 30 minutes of scan 1 and the data for scans 2 and 3, each into three 10-minute frames. For the TrueV scanner, the Siemens offline reconstruction package "e7_tools" was used with an image zoom of two, resulting in images with a voxel size of 1.33 mm × 1.33 mm × 2.03 mm and an image grid dimension of 256 × 256 × 107 voxels. HRRT data were reconstructed using the HRRT user community software, generating images consisting of 256 × 256 × 207 voxels, each of size 1.22 mm × 1.22 mm × 1.22 mm. In both cases, 10 iterations for No-RM and 12 iterations for RM were conducted, using 16 subsets for the HRRT and 21 for the TrueV. RM reconstruction is referred to as HD for the TrueV PET (32), while for the HRRT the user community software was used (33). The iterations and subsets selected reflect our standard image reconstruction protocols. Postreconstruction smoothing using Gaussian filters, which can be used to reduce image noise, was not performed since it could worsen image resolution, which was considered to be critical for this clinical application.
Reconstructions for both scanners were performed with full corrections including scatter and attenuation. In the case of the HRRT, attenuation correction was calculated from a reconstructed and segmented μ-map image using the total variation (TXTV) method (34). To minimize the effects of patient motion, particularly the deterioration of image resolution, image-based motion correction using frame-by-frame realignment of each 10-minute frame was used for both scanners (35).
Delineation of Tumor VOI for PET Quantification
Tumor volumes of interest (VOI) for PET analysis were manually drawn on contrast-enhanced T1W MR images (voxel size 0.9 mm × 0.9 mm × 0.8 mm), acquired as part of the study MRI. Regions were drawn to the edge of the enhancing tumor (care was taken when delineating the tumor to avoid partial volume effects from nearby structures or surrounding CSF) and subsequently were modestly eroded using a single iteration and a 3 × 3 × 1 erosion kernel. All manual outlining was done using Analyze version 11 and was performed under the supervision of AJ and ID, consultant neuroradiologists with over 40 years of combined experience. The study MRI was acquired on the same day as one of the PET scans for all the patients and therefore within 1 week of both PET scans. Using SPM 8 (http://www.fil.ion.ucl.ac.uk/spm), contrast-enhanced T1W MRIs were coregistered to the 30-minute motion-corrected PET images from each of the three scans, and the manually drawn VOIs were resliced to PET space using the rigid body transformations calculated from this coregistration and nearest-neighbor interpolation.
PET quantification was performed using the standardized uptake value (SUV), whereby the radiotracer concentration at 75 minutes posttracer injection within each voxel was normalized by the injected radioactivity dose and patient weight (36). The tumor VOIs were then applied to the PET data to calculate SUVmean (reflecting the overall regional tracer distribution), SUVmax (the maximum value of the tracer distribution), and SUVpeak within each tumor. The latter is considered to be less sensitive to the VOI boundary and the uptake distribution (37).
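A minimal sketch of this SUV computation in Python; the function name, the voxel inputs, and in particular the SUVpeak surrogate (mean of the hottest voxels) are illustrative assumptions, since the study follows the SUVpeak definition of reference (37):

    import numpy as np

    def suv_metrics(voxels_bq_per_ml, injected_dose_bq, weight_g):
        """SUV (g/ml) statistics for one tumor VOI from decay-corrected voxel values."""
        suv = np.asarray(voxels_bq_per_ml) / (injected_dose_bq / weight_g)
        # SUVpeak surrogate: mean of roughly the hottest 1 cm^3 worth of voxels
        # (an assumption for illustration; see reference (37) for the study definition).
        k = min(suv.size, 27)
        return suv.mean(), suv.max(), np.sort(suv)[-k:].mean()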
Statistical Analysis
SPSS version 23 was used for all statistical analyses. The normality and homogeneity of variance of derived values were assessed using the Shapiro-Wilk and Levene tests respectively. Intergroup differences in growth rate, SUVmean, SUVmax, and SUVpeak between indolent and growing tumors were compared using a Student's t test. Linear regression analysis was undertaken to assess the intertumor relationship between standardized uptake values for both FDG and FLT using each scanner, with and without RM. Finally, the ability of each tracer to classify VS as rapidly growing was assessed using the area under the curve (AUC) of the receiver operator characteristic (ROC) curve for each SUV parameter, using the multidisciplinary-team growth classification as the ground truth.
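For illustration, the ROC analysis reduces to scoring each tumor by an SUV parameter against its growth label; a sketch with invented values (not study data), assuming scikit-learn is available:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Invented values for 11 tumors: 1 = rapidly growing (n = 4), 0 = indolent (n = 7).
    growth = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
    suv_mean = np.array([3.1, 2.8, 2.6, 2.2, 1.9, 1.5, 1.4, 1.3, 1.2, 1.1, 2.4])
    # AUC equals the fraction of correctly ordered (growing, indolent) pairs (~0.96 here).
    print(roc_auc_score(growth, suv_mean))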
Patient Demographics
Six patients with NF2 participated in this study, three males and three females with an age range of 21 to 59 years. Five patients had bilateral VS, with the remaining patient having undergone previous surgical removal of a left-sided VS. Six tumors were intracanalicular at the time of the PET study, and among the 11 VS, 4 were classified as rapidly growing while the rest were indolent (see Table 1). Confirmatory measurements of tumor volume change between the preceding clinical MRI and the study MRI demonstrated that, compared with the indolent tumor group, rapidly growing tumors displayed a higher annual adjusted growth rate (0.00 versus 0.49 cm³/yr, p = 0.01, two-tailed t test). Figure 1 shows axial coregistered T1-weighted contrast-enhanced MRI, FDG PET, and FLT PET image sections for two patients (A and D) with bilateral tumors. All of the PET images shown were acquired using the TrueV scanner and show decay-corrected SUV (g/ml) at approximately 75 to 105 minutes postinjection, with the FDG images windowed to saturate the high brain uptake. Patient A (top row) had bilateral rapidly growing tumors, with the smaller right-sided VS scheduled for surgical removal at the time of the PET scans. High uptake of both tracers is observed in the larger left-sided VS, with a small area of focal FDG uptake within the right-sided tumor. Patient D (bottom row) also had bilateral VS, with the right-sided tumor classified as rapidly growing and the left-sided tumor classified as slow growing (indolent). For both tracers, clear uptake is observed for the right-sided rapidly growing tumor, while little uptake is observed for the left-sided tumor.
Group Comparison
Intergroup differences in tumor SUVmean, SUVmax, and SUVpeak between rapidly growing and indolent tumors for both FDG and FLT are shown in Table 2. The group comparison between FDG and FLT for both scanners using RM is presented in Figure 2. Rapidly growing tumors displayed significantly higher FDG SUVmean and SUVpeak compared with indolent tumors using both scanners, with and without RM (p < 0.05, one-tailed t test). With the exception of values derived using the HRRT scanner without RM, the FDG SUVmax values were also significantly higher in the rapidly growing tumor group (p < 0.05).
While use of the TrueV scanner without RM did not demonstrate a significant difference in FLT uptake between rapidly growing and indolent tumors (p > 0.05), use of the TrueV with RM did demonstrate significantly higher FLT SUVmean values in the rapidly growing tumors (p < 0.05, one-tailed t test). Similarly, use of the HRRT scanner with and without RM also demonstrated significantly higher FLT SUVmean and SUVpeak values in rapidly growing compared with indolent tumors (p < 0.05, one-tailed t test).
Scatter Plots
Scatter plots of SUV for FDG against FLT for the TrueV and HRRT scanners are shown in Figure 3. Each point of the graph represents one of the VS, with data shown for SUVmean without and with RM (rows). Lines of best fit for the linear relationships are shown, together with the fit equations and R² values. VS classified as rapidly growing are plotted as solid circles, while indolent tumors are plotted as squares.
Visual inspection of the scatter plots in Figure 3 suggests that FDG and FLT uptake are related to each other in a proportional manner, with the use of the higher resolution HRRT scanner and/or RM improving the correlation between FDG and FLT SUVmean values (TrueV: adjusted R² value of 0.67 vs 0.73 with RM; HRRT: adjusted R² value of 0.85 vs 0.86 with RM). Similar plots for both SUVmax and SUVpeak, without and with RM, can be found in supplementary Figures 1C and 2C, http://links.lww.com/MAO/A784, for the TrueV and HRRT scanners respectively.
In supplementary Figure 4C, http://links.lww.com/MAO/A784, scatter plots of SUVmean for FDG and FLT versus tumor volume for the TrueV scanner without RM are shown. A weak positive correlation between SUVmean and tumor volume is observed, with adjusted R² values of 0.18 (p = 0.11) and 0.08 (p = 0.17) for FDG and FLT respectively.
Area Under the Curve of the Receiver Operator Characteristic Curve
AUCs of the ROC curves for SUVmean, SUVmax, and SUVpeak, for both tracers and for both scanners, are shown in Table 3. Values ranged from 0.714 to 0.857 for SUVmean and from 0.786 to 0.821 for SUVpeak, suggesting a good ability of FDG and FLT SUV values to discriminate indolent from rapidly growing tumors. Use of RM for both scanners generally increased the AUC values. Overall, FDG displayed higher AUC values than FLT (0.750-0.893 vs 0.643-0.857), with the exception of SUVmean when using the HRRT scanner with RM, where FLT displayed greater discriminatory power (AUC 0.857 vs 0.821). Both FDG and FLT outperformed tumor volume in discriminating between rapidly growing and indolent tumors, which had an AUC ROC value of 0.601.
DISCUSSION
In this pilot study we have demonstrated for the first time that there is uptake of two commercially available PET radiotracers, FDG and FLT, within NF2 related VS, and that uptake of these tracers has the potential to discriminate rapidly growing VS from more indolent tumors. This was established through a complex study design intended to elucidate the relative contributions of tracer, noise, and spatial resolution to the PET signal. The data demonstrate, however, that a short PET acquisition with clinically available tracers on a standard scanner can yield clinically relevant information on tumor growth.
The finding of growth-dependent uptake of FDG in NF2 related VS is in clear contrast to the previous inconclusive results with FDG seen in sporadic VS (20,21). While differences in experimental design may partly underlie this discordance, greater uptake of FDG within NF2 related VS may also reflect fundamental biological differences between these two tumor groups at both the macroscopic and microscopic level. While sporadic VS are generally found as a single tumor arising from the vestibular nerve at the porus acousticus (38), NF2 related tumors are often multilobulated, originating from multiple sites on both the vestibular and cochlear nerve (26). At the cellular level, NF2 related VS display higher cellularity (27) and greater immunostaining for cellular proliferation indices (e.g., Ki-67, MIB-1) compared with sporadic tumors (28,39). Furthermore, there is evidence that pathophysiological mechanisms other than cellular proliferation, such as cyst formation (40,41), intratumoral hemorrhage (42-44), and inflammation (44-47), may play a greater role in the growth of sporadic VS.
While FDG and FLT uptake represent differing underlying biological processes, the uptake of both these tracers within NF2 related VS was strongly correlated in our study. One interpretation is that the uptake of FDG and FLT relates to a common factor or process such as tumor size or vascularity; however, the correlation between tracer uptake and tumor volume was comparatively weaker than the relationship between FDG and FLT uptake itself. Similarly, while increased neovascularization within growing tumors may result in greater early tracer delivery (48,49), with the later PET measurements (75-105 min) used in this study these effects would be minimal. As such, the increased uptake of both FLT and FDG seen in this study likely indicates that within growing NF2 related VS there is both concurrent cellular proliferation and increased metabolic demand.
Imaging VS with FDG and FLT has previously been viewed as challenging due to the limited spatial resolution of conventional PET, leading to potential contamination of tumor uptake from the surrounding brain and bone marrow respectively (50). To assess this, we used a complex scanning regime which incorporated two different PET scanners with different spatial resolutions, both with and without RM reconstruction, and without any postreconstruction image smoothing. One consequence of this approach is that noise in the images is increased, and this may explain the reduced discriminatory power of SUVmax when compared with SUVmean and SUVpeak. Use of either RM or the higher spatial resolution HRRT scanner improved the proportional relationship between FDG and FLT, suggesting that when tumor uptake contamination from neighboring tissues is reduced, a better correlation between the two imaged biological processes is observed. Use of the HRRT scanner or RM, however, resulted in only small improvements in AUC ROC values, suggesting that the degree of contamination from neighboring structures is small in comparison with the tumor uptake range, and that increased spatial resolution has only a modest effect on tumor growth classification. As such, use of more clinically available, lower spatial resolution PET scanners such as the TrueV PET-CT scanner may still show good ability to discriminate growing VS.
The results of this study demonstrate that both FDG and FLT uptake have merit for discriminating between rapidly growing and slow growing (indolent) tumors, and that this discriminatory ability exceeds that of tumor volume alone. While standard clinical practice in many institutions is radiological surveillance of tumor growth with serial MRI, there is a danger of missing significant tumor growth between interval scans, with the complication rate and difficulty of surgery increasing as tumors become larger (51,52). In many cases this strategy will be acceptable, but select patients exist in whom the ability to predict rather than detect growth may be valuable. The above results suggest that assessment of tumor proliferative and metabolic activity using FDG or FLT PET may have future clinical utility in allowing more timely identification of tumors requiring surgical intervention.
A limitation of this study is that the number of included patients was low, due in part to patient concerns regarding additional radiation exposure and the complexity of the scanning regime. Future, larger studies, which incorporate just one scanner and a single tracer injection of either FDG or FLT, should be performed. These studies could be performed on new generation PET-MR scanners, which allow both simultaneous MR image acquisition and also, potentially, reductions in the injected radioactive dose due to improved scanner sensitivity (53). Evaluation of FDG and FLT PET as predictive markers of future tumor growth is limited in part in this study by the loss of growth follow-up in resected tumors. It is, nonetheless, interesting to note that within this study the two non-resected rapidly growing tumors with high FDG and FLT uptake continued to demonstrate rapid growth, and larger, prospective studies should be undertaken to further evaluate the role of these tracers as growth predictors.

FIG. 2. Intertumor comparison boxplots of derived mean and peak SUV values (g/ml) between slow growing (indolent) and fast growing VS following the injection of FDG and FLT. Using the TrueV PET-CT scanner with RM for FDG (A) and FLT (C); using the HRRT scanner with RM for FDG (B) and FLT (D). * p < 0.05, one-tailed Student's t test, comparison between slow growing/indolent and fast growing VS for each SUV parameter.
CONCLUSIONS
Data from 6 NF2 patients, with a total of 11 VS, indicate that for both FLT and FDG an uptake signal above background can be detected, and that this uptake shows promise in providing additional and complementary information to serial MRI measurements for the classification of rapidly growing VS. Further studies should be undertaken to assess FLT and FDG PET as predictors of tumor growth and as a clinical imaging tool for early identification of tumors requiring consideration of early treatment. | 2019-04-30T13:03:42.829Z | 2019-04-25T00:00:00.000 | {
"year": 2019,
"sha1": "8df078598460187746e41e51680d43a9642c843c",
"oa_license": "CCBYNCND",
"oa_url": "https://journals.lww.com/otology-neurotology/Fulltext/2019/07000/_18F_fluorothymidine_and__18F_fluorodeoxyglucose.32.aspx",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8df078598460187746e41e51680d43a9642c843c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247690644 | pes2o/s2orc | v3-fos-license | Temporal Quantum Memory and Non-Locality of Two Trapped Ions under the Effect of the Intrinsic Decoherence: Entropic Uncertainty, Trace Norm Nonlocality and Entanglement
The engineering properties of trapped ions and their capacity to engender numerous quantum information resources determine many aspects of quantum information processing. We devise a setup of coherent and even coherent fields acting on two trapped ions to generate quantum memory, non-locality, and entanglement. Various effects, such as intrinsic decoherence, the Lamb–Dicke regime, and the dipole–dipole interaction, are investigated. The inter-coupling of the trapped ions, as well as the generation and dynamics of correlations between them, are analyzed. Using quantum memory assisted entropic uncertainty, trace-norm measurement-induced non-locality, and concurrence, we find that the coherent and even coherent fields successfully generate non-local correlations in the trapped ions, with the latter being more resourceful for the dynamics and preservation of the non-local correlations. Furthermore, we observe that the entropic uncertainty and the trace norm induced non-locality present symmetrical dynamics. The dipole–dipole interaction improves the generation and robustness of the correlations and the suppression of the entropic uncertainty.
The uncertainty principle [19] is a fundamental concept of quantum physics. This principle is one of the most notable illustrations of how quantum physics differs from classical physics. Robertson's version of the uncertainty relations [20], which extended Heisenberg's finding to two arbitrary observables, is possibly the most well known. Entropic uncertainty relations are of great interest since, as noted by Deutsch, the lower bound of Robertson's uncertainty relation is conditional on the state of the quantum system [21].
The Physical Model
We explore two coupled two-level cold ions beyond the Lamb-Dicke regime in this paper. Each cold ion is considered as a qubit. In the resolved sideband limit and the resonant case (where the laser and the k-th vibrational sideband have the same frequency) [51,52], the two-ion Hamiltonian is

Ĥ = ∑_{l=A,B} [(ω₀/2) σ̂_z^l + ω_l â_l†â_l] + ∑_{l=A,B} K_l E_l e^{(iθ_l − |μ_l|²)} ∑_m [(−iμ_l)^{2m+k} / ((m+k)! m!)] â_l^{†m} â_l^m (â_l^k |1_l⟩⟨0_l| + â_l^{†k} |0_l⟩⟨1_l|).

Here θ_l denotes the initial applied laser phase of the l-th trap, and σ̂_z^l denotes the atomic flip operator between the upper |1_l⟩ and lower |0_l⟩ (l = A, B) states, with atomic transition frequency ω₀. The transition dipole moment is denoted by K_l, and E_l is the amplitude of the applied laser field of the l-th trap. â_l and â_l† represent the lowering and raising operators of the center-of-mass vibrational modes with frequency ω_l (l = A, B), and μ_l is the Lamb-Dicke parameter of the trapped l-th ion. Based on the approach outlined in [53] for designing N two-level cold ions beyond the Lamb-Dicke limit, we investigate the case where the two-level cold ions (two-qubit system) are close to one another and their collective motion is the centre-of-mass vibration of the two cold ions. As a result, each ion's vibrational frequency ω_l (l = A, B) is replaced by the total centre-of-mass frequency ω.
For the resonant case, we take ω_l = ω₀ = ω and μ_l = μ. The proposed study is inspired by the fact that the dipole-dipole coupling cannot be neglected while using the rotating-wave approximation [54]. Consequently, the interaction-picture two-qubit Hamiltonian is

Ĥ_int = λ ∑_{j=A,B} F_μ(â†â) (â^k |1_j⟩⟨0_j| + â^{†k} |0_j⟩⟨1_j|) + D (|1_A 0_B⟩⟨0_A 1_B| + |0_A 1_B⟩⟨1_A 0_B|),

where we take λ_j = K_j E_j e^{iθ_j} e^{−|μ|²} = λ (j = A, B), and D represents the dipole-dipole coupling.
Using the associated Laguerre (k, μ)-polynomials L_m^k(μ²) (m = 0, 1, 2, ...), the diagonal elements of the nonlinear function operator F_μ(â†â) [51] are given by

⟨m|F_μ(â†â)|m⟩ = (−iμ)^k [m!/(m+k)!] L_m^k(μ²).

The dynamics of the two cold-ions interaction with intrinsic decoherence are analytically explored via the Milburn model [47]. On sufficiently short time steps, the Milburn model assumes that the system does not evolve continuously under unitary evolution but rather as a stochastic sequence of identical unitary phase changes. The Milburn model is used to investigate the dynamics of numerous real-world systems, including polar molecules in pendular states [55] and two superconducting qubits [56]. In terms of the system density matrix M̂(t), the Milburn master equation is given by

dM̂(t)/dt = −i[Ĥ, M̂(t)] − (γ/2)[Ĥ, [Ĥ, M̂(t)]],

where γ is the intrinsic decoherence parameter. This intrinsic decoherence has been proposed as a solution to decoherence problems in which quantum coherence is destroyed as the system progresses. Milburn developed an equation involving intrinsic decoherence, a term for quantum coherence loss that occurs without the system interacting with a reservoir and without energy decay. We provide a specific analytical solution of Equation (4) when the two trapped ions are originally in a disentangled state, in order to study the capacity of the two cold-ions interaction to form entangled two-qubit states, in particular from the upper state |1_A 1_B⟩. The nonclassical effects of the initial center-of-mass vibrational mode on the two ion-qubits dynamics are examined by considering the vibrational mode initially in two different coherent states. One of them is the coherent state, given by

|α⟩ = ∑_{n=0}^∞ e^{−N/2} (α^n/√(n!)) |n⟩.

The initial intensity of the coherent field is designated by N = |α|². The other initial state is the even coherent state, which is characterized by its high nonclassicality. It is given by

|α⟩_e = [2(1 + e^{−2N})]^{−1/2} (|α⟩ + |−α⟩).

Therefore, the initial density matrix of the two cold ions M̂(0) is

M̂(0) = ∑_{m,n} P_{mn} |1_A 1_B, m⟩⟨1_A 1_B, n|,

where P_{mn} represents the photon number distribution of the coherent/even-coherent state.
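For concreteness, the photon-number weights underlying P_{mn} can be generated numerically; in this minimal Python sketch (function name and cutoff are illustrative) the coherent state is Poissonian, and the even coherent state is obtained by deleting the odd components and renormalizing:

    import numpy as np
    from math import exp, factorial

    def photon_distribution(N, n_max=40, even=False):
        """P_n for a coherent state of intensity N = |alpha|^2; for the even
        coherent state the odd photon numbers carry zero weight."""
        p = np.array([exp(-N) * N**n / factorial(n) for n in range(n_max)])
        if even:
            p[1::2] = 0.0
            p /= p.sum()  # equivalent to P_n = N^n / (n! cosh N) for even n
        return p

    print(photon_distribution(2.0, even=True)[:6])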
Here, we use the eigenstates of the Hamiltonian given in Equation (2) to find an analytical solution of Equation (4).
In the basis of the two-level cold ions, the eigenstates |E_i⟩ = |U_j^m⟩ are expanded with coefficients X_{jn}^m satisfying the eigenvalue problem Ĥ_int |U_j^m⟩ = E_j^m |U_j^m⟩ (j = 1-4). Using the initial state of Equation (7) together with the eigenvalues E_j^m and the eigenstates |U_j^m⟩ of the Hamiltonian of Equation (2), the time-dependent two trapped-ions density matrix is described by

M̂(t) = ∑_{m,n} ∑_{j,k=1}^{4} Y_{mn}^{jk} Λ_{mn}^{jk}(t) T_{mn}^{jk}(t) |U_j^m⟩⟨U_k^n|,

where the coefficients Y_{mn}^{jk} = ⟨U_j^m|M̂(0)|U_k^n⟩ are fixed by the initial state. The unitary evolution Λ_{mn}^{jk}(t) and the intrinsic decoherence T_{mn}^{jk}(t) terms are defined by

Λ_{mn}^{jk}(t) = exp[−i(E_j^m − E_k^n)t], T_{mn}^{jk}(t) = exp[−(γ/2)(E_j^m − E_k^n)² t].

We aim to explore the dynamics of the correlation functions between the two trapped ions. We then need the two-ion reduced density matrix, obtained by tracing the vibrational mode states out of the system's density matrix M̂(t). The time-dependent two-ion state is

ρ^{AB}(t) = Tr_{vibration}{M̂(t)}.

For the one-photon case k = 1, we can investigate the effects of the Lamb-Dicke nonlinearity, the vibrational-mode nonclassicality, the trapped-ion coupling, and the decoherence on the non-local correlations between the two trapped ions.
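Given the eigenvalue differences, the two factors above (written here in the standard Milburn form) can be evaluated directly; a minimal Python sketch with illustrative arguments:

    import numpy as np

    def milburn_factor(e_j, e_k, t, gamma):
        """Product of the unitary phase and the intrinsic-decoherence damping
        between eigenstates of energies e_j and e_k (standard Milburn form)."""
        diff = e_j - e_k
        return np.exp(-1j * diff * t) * np.exp(-0.5 * gamma * diff**2 * t)

    # Larger energy gaps are damped faster, so coherences between widely
    # separated eigenvalues decay first.
    print(abs(milburn_factor(2.0, 0.5, t=3.0, gamma=0.02)))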
Quantum Memory and Non-Locality Quantifiers
The entropic uncertainty, measurement-induced non-locality, and concurrence entanglement are the commonly employed quantum memory and non-locality measures.
Two Trapped-Ions Entropic Uncertainty
For incompatible observables P and Q measured on qubit A of the two-qubit (A and B) state R̂_AB, Bob's uncertainty about the measurement outcomes satisfies the inequality [57]

S(P|B) + S(Q|B) ≥ S(A|B) + log₂(1/c), with c = max_{x,y} |⟨ψ_x|φ_y⟩|²,

where S(A|B) = S(R̂_AB) − S(R̂_B) represents the conditional von Neumann entropy of the density operator R̂_AB, with S(M̂) = −tr(M̂ log₂ M̂) for a density matrix M̂.
Here, R̂_B = tr_A(R̂_AB), |ψ_x⟩ and |φ_y⟩ represent the eigenvectors of P and Q respectively, and I is the identity operator appearing in the local measurement projectors |ψ_x⟩⟨ψ_x| ⊗ I. The left (UL) and right (UR) entropic uncertainty sides of Equation (12) can be represented as

UL = S(P|B) + S(Q|B) = L(t), UR = S(A|B) + log₂(1/c) = R(t),

where L(t) and R(t) are, respectively, the entropic uncertainty and its lower bound.
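A small numerical sketch of these quantities, assuming NumPy; it computes S(X|B) for a projective measurement on qubit A and checks the uncertainty sum on a maximally entangled state, for which S(A|B) = −1 and log₂(1/c) = 1 for the complementary σ_z, σ_x pair, so the bound is saturated at zero:

    import numpy as np

    def vn_entropy(rho):
        """von Neumann entropy in bits."""
        vals = np.linalg.eigvalsh(rho)
        vals = vals[vals > 1e-12]
        return float(-(vals * np.log2(vals)).sum())

    def measured_conditional_entropy(rho_ab, basis):
        """S(X|B): project qubit A onto `basis`, then S(rho_XB) - S(rho_B)."""
        rho_xb = np.zeros_like(rho_ab)
        for v in basis:
            proj = np.kron(np.outer(v, v.conj()), np.eye(2))
            rho_xb += proj @ rho_ab @ proj
        rho_b = rho_xb.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
        return vn_entropy(rho_xb) - vn_entropy(rho_b)

    z = [np.array([1, 0], complex), np.array([0, 1], complex)]
    x = [np.array([1, 1], complex) / np.sqrt(2), np.array([1, -1], complex) / np.sqrt(2)]
    psi = np.array([1, 0, 0, 1], complex) / np.sqrt(2)  # maximally entangled state
    bell = np.outer(psi, psi.conj())
    print(sum(measured_conditional_entropy(bell, b) for b in (z, x)))  # -> 0.0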
Trace Norm Measurement-Induced Nonlocality (Trace Norm-MIN)
For a two-qubit density matrix ρ^{AB}, the MIN based on the Schatten p-norm [58] is defined as

N_p(ρ^{AB}) = max_{Π^A} ||ρ^{AB} − Π^A(ρ^{AB})||_p,

where the maximization runs over the locally invariant von Neumann measurements Π^A, i.e., those that do not disturb the reduced state of subsystem A. Here, we use the 1-MIN, the trace-distance MIN, which is based on locally invariant measurements; it is therefore based on the maximization of the trace distance between the pre-measurement and the post-measurement states. The two trapped-ions trace-norm MIN is computed from the closed-form expression of [59,60], where the summation over β runs over all the cyclic permutations of 1, 2, 3, r represents the local Bloch vector, t_mn = tr{ρ^{AB}(σ_m ⊗ σ_n)} are the components of the correlation matrix T = [t_mn], and σ = (σ₁, σ₂, σ₃) is the vector of Pauli matrices.
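The ingredients of the closed form cited from [59,60] can be extracted numerically from any two-qubit state; a minimal Python sketch computing the local Bloch vector r and the correlation matrix T defined above:

    import numpy as np

    PAULI = [np.array([[0, 1], [1, 0]], complex),
             np.array([[0, -1j], [1j, 0]], complex),
             np.array([[1, 0], [0, -1]], complex)]

    def bloch_data(rho):
        """Local Bloch vector r of qubit A and correlation matrix T = [t_mn]."""
        r = np.array([np.trace(rho @ np.kron(s, np.eye(2))).real for s in PAULI])
        T = np.array([[np.trace(rho @ np.kron(sm, sn)).real for sn in PAULI]
                      for sm in PAULI])
        return r, T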
Concurrence Entanglement
Here, we investigate the entanglement (CE) between the two trapped ions by using the concurrence function C(t) [61]:

C(t) = max{0, R₁ − R₂ − R₃ − R₄},

with R_i ≥ R_{i+1}, where the R_i are the square roots of the eigenvalues of the matrix

M = ρ^{AB} (σ_y ⊗ σ_y) (ρ^{AB})* (σ_y ⊗ σ_y).

The CE value C(t) = 1 corresponds to a maximally entangled two trapped-ions state, and the zero value corresponds to an unentangled state. Intermediate CE values correspond to partial two trapped-ions entanglement.
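A minimal numerical sketch of this concurrence, following the M-matrix definition above:

    import numpy as np

    def concurrence(rho):
        """Wootters concurrence of a two-qubit density matrix."""
        sy = np.array([[0, -1j], [1j, 0]])
        yy = np.kron(sy, sy)
        m = rho @ yy @ rho.conj() @ yy
        r = np.sort(np.sqrt(np.abs(np.linalg.eigvals(m))))[::-1]
        return max(0.0, r[0] - r[1] - r[2] - r[3])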
Dynamics of Correlations
This section presents the results obtained from the time-dependent density matrix given in Equation (10) for two trapped ions coupled resonantly to a laser beam. The two cold ions are coupled with the field. The density matrix of the system is described by Equation (4), beyond the Lamb-Dicke regime and under intrinsic decoherence. Using Equations (13), (16) and (17), we study the generation and dynamics of temporal quantum memory, induced non-locality, and entanglement in two trapped ions coupled to a laser field resonantly beyond the Lamb-Dicke regime. We study separately the two cases of absence and presence of intrinsic decoherence, and in each case we explore the correlation properties of the two trapped-ion qubits for the relevant fields.
The effect of the laser l-field outside of the Lamb-Dicke regime, in the absence of intrinsic decoherence and dipole-dipole interaction, is investigated in Figure 1. Our results show that the quantum memory functions UL − 1 and UR − 1 increase initially. The emergence of quantum memory, non-locality, and entanglement is examined when the initial coherent field intensity N is kept minimal. Before the interaction between the two trapped ions and the field is switched on, the functions UL − 1 and UR − 1 are both zero. When the interaction is turned on, the quantum memory assisted entropic functions grow. In other words, in addition to the increase of entropic uncertainty, quantum memory becomes more prevalent. Similarly, the non-locality and entanglement functions, M(t) and C(t), arise and develop with time, ensuring that non-classical effects and entanglement are generated. It is worth noting that all the current measures appear to have similar qualitative dynamics initially. The dynamical behavior of the UL − 1 and UR − 1 functions becomes inverted with respect to that of the M(t) and C(t) functions. The minima of UL − 1 and UR − 1 coincide at comparable intervals with the maxima of the M(t) and C(t) functions, resulting in qualitatively opposite properties. This physically means that as the entropic uncertainty in the two trapped ions coupled with a coherent state field increases, the entanglement and non-locality in the system decrease. In the resonant cases, the qualitative dynamical behavior of all functions in Figure 1 shows a revival phenomenon. The values of quantum memory generation never reach a steady state, but rather fluctuate. The entanglement and non-locality functions show similar dynamics, indicating that entanglement and non-locality are continuously exchanged between the l-laser fields and the system. Furthermore, in the two-qubit system coupled with cavities in Ref. [62], the nonclassical correlations exhibit monotonic dynamical behavior, opposite to the current case. In the absence of intrinsic decoherence, the Lamb-Dicke parameter µ regulates the dynamical behavior, and as this parameter increases, the exchange between the two trapped ions decreases. The exchange rate for µ = 0.5 in Figure 1a is extremely high when compared to µ = 0.99 in Figure 1c. The trapped-ion qubits coupled with the laser field are more resourceful in terms of non-locality and entanglement preservation for lower values of µ, as shown in Figure 1a. For increasing values of µ, however, the state in large time intervals becomes separable, as shown in Figure 1b,c. The revival character of UL − 1, UR − 1, M(t), and C(t) is enhanced by increasing the interaction time, however with slightly different patterns, as can be seen in Figure 1a,b. The qualitative dynamical behavior of non-local correlations in the Lamb-Dicke regime has a different exchange rate than the current case [43]. The effects of initial even coherent state fields on the generated temporal quantum memory and non-locality are shown in Figure 2, where the centre-of-mass vibrational mode is in the even coherent state. The effects of varying the Lamb-Dicke parameter and the even coherent vibrational mode state on the time dynamics of entropic uncertainty, the related lower bound, non-locality, and entanglement in two trapped-ion qubits coupled with the laser field are investigated.
We find that the left and right sides of the entropic relation are initially zero, but grow as interaction time increases, indicating that initially the current configuration of trapped ions and field is completely separable. The rapid revival of the entropic relations implies that the entropic uncertainty increases, but the system's order is restored because of the action of the even coherent state fields. Compared to the coherent state field investigated in Figure 1, the current case appears to have a better recovery of the entropic uncertainty regimes. As shown in Figure 1, the entropic uncertainty, entanglement, and non-locality functions for the even coherent state field experience fast revivals compared to the coherent state field. Furthermore, in the current case, the sudden decreases of the UL − 1 and UR − 1 functions reach lower values than those observed in Figure 1, ensuring a quick decrease in the entropic uncertainty due to the even coherent state fields. In agreement, the maxima of the M(t) and C(t) functions grow, indicating an enhancement of entanglement and non-locality in the system. The generation and dynamics of entanglement and non-locality witnessed by M(t) and C(t) appear qualitatively similar and undergo rapid revivals in a similar pattern, suggesting the similar nature of the two phenomena. This implies that the state loses entanglement; however, as the entropic action of the field decreases, entanglement and non-locality are recovered. Besides, the exchange rate is sufficiently high for low µ values with respect to the normalized interaction time λt/π, as shown in Figure 2a. For higher values of µ, the system appears less entangled and non-local, with higher entropic uncertainty, as shown in Figure 2b. Because their quantitative values differ at comparable intervals, the current results also assess the different capacities of non-locality and entanglement to withstand the intrinsic decoherence. This means that once the interaction is enabled, the state of the system oscillates non-periodically between locality and non-locality. We find that the current dynamical behavior of two trapped-ion qubits differs significantly from that of a three-qubit XY chain model with lower exchange rates; see, for example, Ref. [63]. In the case of the entropic relations, the upper bound UL − 1 appears to have a larger amplitude than the lower bound UR − 1, ensuring that they are quantitatively unequal.

Figure 2. As Figure 1a,c, but for the initial even superposition coherent state.
In the presence of intrinsic decoherence, the dynamics of the correlation functions UL − 1, UR − 1, M(t), and C(t) for trapped-ion qubits coupled with the coherent field are shown in Figure 3. The intrinsic decoherence parameter is set as γ = 0.02λ in the existing case. The dynamical behavior of the correlations becomes completely changed when the intrinsic decoherence parameter is switched on. As the interaction between the system and the field is turned on, the entropic uncertainty rises quickly for low µ values. The functions UL − 1 and UR − 1 then gradually decline, but never reach zero, and take on a revival appearance. UL − 1 = UR − 1 = 0 at the initial time, which indicates that the two trapped ions and the field are initially uncorrelated, but as time evolves, the related functions increase, showing the generation of temporal quantum memory. In agreement, M(t) = C(t) = 0 initially, implying that the total configuration is comprised of the product states of the two trapped ions and the relative field. The M(t) and C(t) grow with time, indicating that the system is approaching high non-locality and entanglement. The initial generation of entanglement and non-locality in the current two trapped ions is qualitatively similar to the trapped ions and XY-spin chains investigated in Refs. [43,63]. According to the time evolution of the entropic relations, non-locality, and entanglement, the field's intrinsic decoherence constrains the system's revival character. The amount of induced entropic relations, as well as the corresponding stability of entanglement and non-locality, is affected by the Lamb-Dicke parameter, as illustrated in Figures 1 and 2. The lowest µ values produce less entropic uncertainty but more entanglement and non-locality generation, as shown in Figure 3a. For large values, such as those depicted in Figure 3b, entanglement, unlike non-locality, is greatly inhibited, eventually disappearing entirely at varied interaction intervals. For trapped ions, the current qualitative results are similar to those reported in [64], but with a different configuration setup. In Figure 3a, we notice that the UR − 1 function reaches zero in the interaction time range 1.5 < λt/π < 1.9, which means that the entropic uncertainty becomes minimal. In Figure 4, the dynamics of the entropic, trace norm-MIN, and entanglement concurrence functions are discussed in the presence of the two trapped-ion coupling. Here, we focus on the effect of the dipole-dipole interaction (D = 10λ) on the trapped-ion coupling with coherent state fields (a) and even coherent state fields (b) in the absence of intrinsic decoherence, i.e., γ = 0.0λ. The dynamical behaviors in Figure 4a,b differ from those in Figures 1a and 2a. As a result, the trapped-ion coupling and the strength of the dipole-dipole interaction have a significant impact on the generation and preservation of quantum memory in two trapped ions. Because the initial values of all the functions are zero, there is initially no quantum memory, entanglement, or non-locality between the trapped ions. The entropic uncertainty, entanglement, and non-locality functions grow with interaction time as the coherent and even coherent state fields influence the trapped-ion coupling. It is worth mentioning that in the coherent state field, entanglement is more resilient when 0 ≤ λt/π ≤ 0.5, while in the even coherent state field the entanglement resilience interval of the trapped-ion coupling is 1.5 ≤ λt/π ≤ 3.
The sudden increase in entanglement robustness in the existing case is solely due to the dipole-dipole interaction. When compared to Figures 1-3, the current case's entropic relations remain suppressed. In the current cases, the non-locality, along with the entanglement, in the trapped-ion coupling exposed to coherent and even coherent state fields remains more robust. The nonlocal correlations in two qubits have non-zero values when coupled with lossy cavities and classical fields, as seen in [62,65-68]. In comparison to the correlation functions in coherent state fields, the trapped-ion coupling experiences more sudden rises and drops in even superposition coherent state fields, generating results similar to Figures 2 and 3. Non-classical correlations perform better when the associated fields are prepared in even superposition coherent states rather than in the standard coherent states. It is also worth noting that, for the even superposition coherent states, the correlations' unpredictability increases dramatically. This means that the two trapped-ion couplings and the related centre-of-mass vibrational modes are quickly swapping their quantum information resources with the even coherent state field. Figure 5 illustrates the robustness of the generated temporal quantum memory, non-locality, and entanglement in two trapped ions coupled with external coherent fields in the Lamb-Dicke regime limit µ ≪ 1 for the two trapped-ions coupling D = 10λ. It should be noted that, in this case, the effect of intrinsic decoherence on the generation and dynamics of the correlation functions is taken into account. The entropic uncertainty relation functions UL − 1 and UR − 1 are initially zero, showing that quantum memory does not exist initially. However, when the interaction between the system and the coherent or even coherent states is activated, the entropic relations rapidly grow to their maximum values. The entropic uncertainty functions swing between their respective extrema over time. The random nature of the entropic relations, which strengthen with a higher revival rate at γ = 0.0λ as shown in Figure 4, contrasts sharply with the current situation, which shows fewer revivals. The entanglement and non-locality functions, M(t) and C(t), are initially zero but grow with time. Despite this, the corresponding measures show fewer revivals than those seen in Figure 4 in the absence of intrinsic decoherence. When intrinsic decoherence is present, the difference in the dynamical patterns of quantum memory assisted entropic uncertainty relations, entanglement, and non-locality appears insignificant when compared to that observed in Figure 4 for the same set of parameters but with γ = 0.0λ. The amplitude of the entropic relations, particularly the UL − 1 function, is significantly higher in the absence of dipole-dipole interaction (see Figures 1-3). As soon as the dipole-dipole interaction becomes active, the relative amplitudes and maxima of the UL − 1 function become suppressed, as shown in Figures 4 and 5. This indicates that the dipole-dipole interaction strengthens the effect of the coherent and even coherent fields, allowing the two trapped ions to become more entangled and non-local. The non-locality and entanglement functions grow over time as the entropic uncertainty is suppressed.
The relative maxima of the UL − 1 and UR − 1 functions coincide with the minima of the M(t) and C(t) functions, demonstrating that the increase in the entropic action of the fields corresponds to the decrease of the non-locality and entanglement, and vice versa. The UL − 1 function's amplitude remains higher than that of the UR − 1 function, ensuring the inequality. Entanglement is more vulnerable to the intrinsic decoherence than the non-locality in the two trapped ions. From Figure 5, we note that the entropic uncertainty and the trace norm induced non-locality present symmetrical dynamics, which confirms that the trace-norm MIN can be used as another indicator of the quantum memory assisted entropic uncertainty.
Comparison of Current and Previous Results
Two-qubit gate operations in trapped ions are typically based on exciting the ion motional sidebands with laser light, which results in a slow process. An experimental technique for creating rapid entangling gate procedures involves carefully selected pulsed lasers, raising difficult technological challenges. Recent experiments, however, show that in order to design fast gates with higher speeds, an ultrafast entangling-gate source controlled by an optical frequency comb synthesizer appears to be an encouraging prospect [69]. Ref. [70] investigates another approach, which employs a new gate-optimizing concept that runs with substantially decreased power consumption and so accomplishes fast entangling-gate operations. Other solutions for scaling multi-qubit operations are being considered, such as pulsed non-adiabatic gates based on coherent effects, or implementing techniques that extend repetitive indirect readout to qubits in a way that enables resilience to off-resonant photon scattering errors [71,72]. The paper [73] focuses on the impact of error mechanisms in laser-free one- and two-qubit gates in trapped ions and electrons. The effect of drive field inhomogeneities on one-qubit gate fidelities is investigated. Moreover, the paper presents an in-depth study of two-qubit gate errors, including static frequency shifts, trap anharmonicities, field inhomogeneities, and heating. Another experimental approach demonstrates high-fidelity laser-free universal control of two trapped-ion qubits by creating maximally entangled states, based on using RF magnetic field gradients overlapped with microwave magnetic fields. The scheme shows robustness against multiple sources of decoherence, while its advantages are flexibility, adaptability, and scalability, as it can be practically employed for almost any trapped-ion species [74]. Besides, the first demonstration of a qubit gate operation in a fault-tolerant control regime is achieved by Egan et al. in [75]. In this study, we have examined two trapped ions coupled with coherent fields. The setup can be traced back to the configuration presented in Ref. [73]; however, here it is affected by intrinsic decoherence. Furthermore, we assume the fields are beyond the Lamb-Dicke regime and in the presence of the dipole-dipole interaction. In addition, we investigate measurement-induced nonlocality, concurrence, and the dynamics of temporal quantum memory, nonlocality, and entanglement in systems of two coupled trapped ions. To this we add the additional effects caused by intrinsic decoherence and by different initial coherent states of the vibrational mode. Mihalcea et al. explored a similar configuration of two trapped ions in Ref. [76], which employs a model to analyze dynamical stability for systems of two coupled ions. The system under consideration here can be used to examine quantum dynamics of many-body systems composed of identical ions in 2D and 3D ion traps [77]. In the current context, and in light of the prior discoveries, we find that trapped ions are invaluable resources for quantum information processing, which can be linked back to the utility of trapped ions as described by Haffner et al. in Refs. [78,79]. Similarly, Kaushal et al. proposed that trapped ions can be employed for a microstructured array of RF traps, which can be connected to the implementation of scalable quantum information processing nodes. Furthermore, employing trapped ions, the authors of Ref.
[80] demonstrate procedures with ideal coherent control techniques that enable the realization of maximally dense universal parallelized quantum circuits. Thus, trapped ions are useful not only for quantum information processing and computing but also for various other quantum mechanical protocols, such as quantum error correction, sensing, and networks [81,82].
Conclusions
We have explored the generation and dynamics of quantum memory, non-locality, and entanglement in two trapped ions coupled to coherent or even coherent fields. The effects of the intrinsic decoherence, the Lamb-Dicke regime, and the dipole-dipole interaction have been investigated. To analyze the correlations between the two trapped ions, quantum memory assisted entropic uncertainty, trace norm measurement-induced nonlocality, and concurrence are utilized. Note that under the Lamb-Dicke regime, the two trapped ions are viewed as two dipole-coupled qubits. Initially, the vibrational mode is in a coherent field and the two trapped ions are fully uncorrelated, but owing to the field interaction, non-local correlations arise over time. The properties of the coherent fields influence the dynamics of the correlations between the trapped ions and the fields. In the absence of intrinsic decoherence, the correlations show larger revivals. The correlations are suppressed when intrinsic decoherence occurs. In contrast, in even coherent vibrational fields, the dynamical patterns of the correlation functions have a higher rate of revival, ensuring better correlation preservation in the two trapped ions. The non-local correlations appear to be less preserved as the Lamb-Dicke parameter increases, along with higher induced entropic uncertainty. Unlike the Lamb-Dicke parameter, the dipole-dipole interaction increases the non-locality and entanglement preservation while substantially reducing the entropic uncertainty. Thus, the robustness of the correlations between the two trapped ions and the coherent vibrational mode fields can be considerably enhanced by reducing the Lamb-Dicke and intrinsic decoherence parameters while raising the dipole-dipole interaction. | 2022-03-26T15:09:39.208Z | 2022-03-23T00:00:00.000 | {
"year": 2022,
"sha1": "0386218b6d42e92cca93f65574790fe9a71a87f4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-8994/14/4/648/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d9995b20aeaa7ccddca0e0f90f25fc39b36ba21f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
119580895 | pes2o/s2orc | v3-fos-license | A theorem of Roe and Strichartz for Riemannian symmetric spaces of noncompact type
Generalizing a result of Roe \cite{Roe}, Strichartz proved in \cite{Str} that if a doubly-infinite sequence $\{f_k\}$ of functions on $\R^n$ satisfies $f_{k+1}=\Delta f_k$ and $|f_{k}(x)|\leq M$ for all $k=0,\pm 1,\pm 2,\ldots$ and $x\in \R^n$, then $\Delta f_0 = -f_0$. Strichartz also showed that the result fails for hyperbolic 3-space. This negative result can indeed be extended to any Riemannian symmetric space of noncompact type. Taking this into account, we shall prove that for all Riemannian symmetric spaces of noncompact type the theorem actually holds true when uniform boundedness is modified suitably.
Introduction
Generalizing a result of Roe [14], Strichartz ([16]) proved the following theorem on R^n. (See also [11] and the references therein.) Theorem 1.0.1 (Strichartz). Let {f_j}_{j∈Z} be a doubly infinite sequence of measurable functions on R^n such that for all j ∈ Z, (i) ‖f_j‖_{L^∞(R^n)} ≤ C for some constant C > 0 and (ii) for some α > 0, Δf_j = α² f_{j+1}. Then Δf_0 = −α² f_0. Strichartz also observed in the same paper [16] that the result holds true for the Heisenberg groups H_n, but fails for hyperbolic 3-space. A slight generalization of the counter example given in [16] shows that in any Riemannian symmetric space X of noncompact type there is a sequence of functions {f_j} which satisfies the hypothesis (where the Laplace-Beltrami operator Δ on X replaces the Euclidean Laplacian), but f_0 is not an eigenfunction of Δ (see [12]). We take this negative result as our starting point. The aim of this paper is to prove an analogue of Strichartz's result for all Riemannian symmetric spaces X = G/K of noncompact type.
From the counter example provided by Strichartz in [16] it is not difficult to perceive that the failure is influenced by the spectral properties of the Laplacian Δ of the symmetric space. More precisely, the failure is due to the difference between the L²- and L^∞-spectrum of Δ, which in turn depends on the exponential volume growth of the underlying manifold X. This sets the task of searching for a possible analogue conducive to the structure of the space. We began our study in this direction with the rank one symmetric spaces in [12], where the situation was salvaged by substituting the L^∞-norm with the weak L²-norm. However, the use of the weak L²-norm seems to be restricted to the rank one case. The following observation can be considered as a first indication of this. We recall that the foremost examples of eigenfunctions of Δ are the elementary spherical functions φ_λ with λ ∈ a*. Unlike the rank one case, in general rank these eigenfunctions do not belong to the weak L²-space. We also recall that on X the objects which correspond to e^{i⟨λ,x⟩} (on R^n) are e_{λ,k} : x ↦ e^{−(iλ+ρ)H(x^{−1}k)}, k ∈ K, λ ∈ a*. They are the basic eigenfunctions of Δ which are constant on each horocycle ξ_{k,C} = {x ∈ G/K | H(x^{−1}k) = C}; but unlike their counterparts in the Euclidean case, the e_{λ,k} are not L^∞-functions; in particular they do not satisfy the hypothesis of Strichartz's theorem. As in the Euclidean case, we have another set of prominent eigenfunctions, namely the Poisson transforms P_λF(x) = ∫_{K/M} e_{λ,k}(x)F(k)dk, λ ∈ a*, of L^p-functions F on the boundary K/M of the space X for some p ≥ 1. Taking all these into account, we motivate ourselves to look for a size-estimate which accommodates a large class of eigenfunctions of Δ, including those mentioned above. One such is the so-called "Hardy-type" norm introduced in [15] and used effectively in [3]. We shall use this norm to formulate our main result, which is stated below. (For any unexplained notation see Section 2.) Theorem 1.0.2. Let {f_j}_{j∈Z} be a doubly infinite sequence of measurable functions on X such that for some real number α and for all j ∈ Z: (i) Δf_j = (α² + |ρ|²) f_{j+1} and (ii) (∫_K |f_j(ka)|^p dk)^{1/p} ≤ C_p φ_0(a) for some p ∈ [1, ∞], for all a ∈ A⁺ and for a constant C_p > 0 depending only on p. Then Δf_0 = −(α² + |ρ|²) f_0.
The particular case p = ∞ in condition (ii) of the hypothesis simplifies to |f_j(x)| ≤ Cφ_0(x) for all x ∈ X and j ∈ Z, and is close to the estimate used in Theorem 1.0.1. We take λ ∈ a* with |λ|² = α². Then for any fixed k ∈ K, the sequence of functions {(−1)^j e_{λ,k}}_{j∈Z} satisfies the hypothesis with p = 1. For such a λ and for any 1 ≤ p < ∞, the sequence of Poisson transforms {(−1)^j P_λF}_{j∈Z}, for any F ∈ L^p(K/M), also satisfies the hypothesis. (See Theorem 2.0.1 in Section 2.) The result above may also be viewed from the following perspective: in [3], eigenfunctions of Δ with eigenvalues of the form mentioned above were characterized in terms of this Hardy-type norm.
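Concretely, the verification for the sequence $\{(-1)^j e_{\lambda,k}\}$ is a one-line computation, using the standard eigenvalue relation for $e_{\lambda,k}$ together with condition (i) as stated in Theorem 1.0.2:

$$\Delta e_{\lambda,k} = -(\langle \lambda,\lambda\rangle + \langle \rho,\rho\rangle)\, e_{\lambda,k}, \qquad f_j := (-1)^j e_{\lambda,k} \ \Longrightarrow\ \Delta f_j = -(\alpha^2+|\rho|^2) f_j = (\alpha^2+|\rho|^2) f_{j+1},$$

since $|\lambda|^2 = \alpha^2$ and $f_{j+1} = -f_j$.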
The space X = G/K enjoys a dichotomy, as it can be viewed as a solvable Lie group S = N ⋊ A through the Iwasawa decomposition G = NAK. The group S is amenable like R^n and H_n, though unlike them S is nonunimodular. On S, one considers a second order right S-invariant differential operator L, which is known as the distinguished Laplacian. Unlike for Δ, the L^p-spectrum of L for 1 ≤ p < ∞ coincides with the L²-spectrum (see e.g. [7]). This motivates us to formulate a version of Theorem 1.0.2 substituting Δ by L, which is the next result.
Theorem 1.0.3. Let $\{f_j\}_{j\in\mathbb{Z}}$ be a doubly infinite sequence of measurable functions on $S$ such that for some $\alpha\ge 0$ and a constant $C>0$, for all $j\in\mathbb{Z}$,
(i) $Lf_j=\alpha f_{j+1}$ and
(ii) $|f_j(x)|\le C\,\delta^{1/2}(x)$ for all $x\in S$,
where $\delta$ is the modular function of $S$; then $Lf_0=\alpha f_0$.
Both of these results may be viewed as "exact" analogues of the Euclidean theorem. For Theorem 1.0.2, one can argue that on $\mathbb{R}^n$ any reasonable analogue of the object $\varphi_0$ is the constant function $1$, while for Theorem 1.0.3 one may recall that for $\mathbb{R}^n$ (and $H^n$) $\delta\equiv 1$.
Notation and Preliminaries
For two positive functions $f_1$ and $f_2$ we shall write $f_1\asymp f_2$ to mean that there are positive constants $C_1,C_2$ with $C_1f_1(x)\le f_2(x)\le C_2f_1(x)$ for all $x$. For a measurable function $f$ on $\mathbb{R}^n$ we define its Euclidean Fourier transform
\[
\widetilde{f}(\lambda)=\int_{\mathbb{R}^n} f(x)\,e^{-i\lambda\cdot x}\,dx
\]
(where $\lambda\cdot x$ is the Euclidean (real) inner product of $\lambda$ and $x$), whenever the integral converges. Let $\mathcal{S}(\mathbb{R}^n)$ be the Schwartz space on $\mathbb{R}^n$. Precisely, $\mathcal{S}(\mathbb{R}^n)$ is the set of functions $f\in C^\infty(\mathbb{R}^n)$ such that
\[
\mu_{r,s}(f)=\sup_{x\in\mathbb{R}^n}\,(1+|x|)^s\,|D^rf(x)|<\infty
\]
for every multi-index $r$ and every nonnegative integer $s$. Here $D^r=\frac{\partial^{r_1}}{\partial x_1^{r_1}}\cdots\frac{\partial^{r_n}}{\partial x_n^{r_n}}$, $r=(r_1,\dots,r_n)$, is a differential operator and $|x|$ is the Euclidean norm of $x$. The space $\mathcal{S}(\mathbb{R}^n)$ becomes a Fréchet space with respect to the topology generated by the seminorms $\mu_{r,s}$.
The dual space of $\mathcal{S}(\mathbb{R}^n)$ is called the space of tempered distributions and will be denoted by $\mathcal{S}(\mathbb{R}^n)'$.
The following facts are well known: (1) $f\mapsto\widetilde{f}$ is an isomorphism from $\mathcal{S}(\mathbb{R}^n)$ to itself; (2) using this isomorphism one can extend the notions of Fourier transform and derivative to $\mathcal{S}(\mathbb{R}^n)'$; (3) $L^p$-functions for $1\le p\le\infty$ are tempered distributions. The required preliminaries and notation related to noncompact semisimple Lie groups and the associated symmetric spaces are standard and can be found, for example, in [6,8,9]. To make the article self-contained we gather only those results which will be used throughout this paper. Let $G$ be a noncompact connected semisimple Lie group with finite centre and $K$ a maximal compact subgroup of $G$. Let $X=G/K$ be the associated Riemannian symmetric space of noncompact type. We let $G=KAN$ denote a fixed Iwasawa decomposition of $G$. Let $\mathfrak{g},\mathfrak{k},\mathfrak{a}$ and $\mathfrak{n}$ denote the Lie algebras of $G,K,A$ and $N$ respectively. Let $\mathfrak{g}_{\mathbb{C}}$ be the complexification of $\mathfrak{g}$ and $U(\mathfrak{g}_{\mathbb{C}})$ its universal enveloping algebra.
We recall that the elements of $U(\mathfrak{g}_{\mathbb{C}})$ are identified with the left-invariant differential operators on $G$, and that there exists an anti-isomorphism $\imath$ from $U(\mathfrak{g}_{\mathbb{C}})$ to the algebra of right-invariant differential operators on $G$. We choose, and keep fixed throughout, a system of positive restricted roots for the pair $(\mathfrak{g},\mathfrak{a})$, which we denote by $\Sigma^+$. The multiplicity of a root $\alpha\in\Sigma^+$ will be denoted by $m_\alpha$. As usual, the half-sum of the elements of $\Sigma^+$ counted with their multiplicities will be denoted by $\rho$. Let $H\colon G\to\mathfrak{a}$ be the Iwasawa projection associated to the Iwasawa decomposition $G=KAN$. Then $H$ is left $K$-invariant and right $MN$-invariant, where $M$ is the centralizer of $A$ in $K$. The Weyl group of the pair $(G,A)$ will be denoted by $W$. Let $\mathfrak{a}^*$ be the real dual of $\mathfrak{a}$ and $\mathfrak{a}^*_{\mathbb{C}}$ its complexification. Let $\mathfrak{a}^+$ (respectively $\mathfrak{a}^{*+}$) denote the positive Weyl chamber in $\mathfrak{a}$ (respectively $\mathfrak{a}^*$). Let $A^+=\exp\mathfrak{a}^+$ and let $\overline{A^+}$ be the closure of $A^+$ in $G$. We recall that the Killing form $B$ restricted to $\mathfrak{a}$ is a positive definite inner product on $\mathfrak{a}$ and gives a $W$-equivariant isomorphism of $\mathfrak{a}$ with $\mathfrak{a}^*$. For $\lambda\in\mathfrak{a}^*$ we denote the corresponding element of $\mathfrak{a}$ by $H_\lambda$. Let $\dim\mathfrak{a}=l$, i.e. $l$ is the rank of $X$. We will identify $\mathfrak{a}$ and $\mathfrak{a}^*$ with $\mathbb{R}^l$ as inner product spaces, the inner product on $\mathbb{R}^l$ being the pull-back of the Killing form. This inner product on $\mathbb{R}^l$, as well as on $\mathfrak{a}$ and $\mathfrak{a}^*$, will be referred to as the Killing inner product and will be denoted by $\langle\,\cdot\,,\,\cdot\,\rangle$. The associated norm will be denoted by $|\cdot|$; we hope that this symbol will not be confused with the absolute value symbol. Since $\exp\colon\mathfrak{a}\to A$ is an isomorphism, as a group $A$ can be identified with $\mathbb{R}^l$.
On $X$ we consider the distance function $d(\cdot,\cdot)$ coming from the Riemannian metric on $X$ induced by the Killing form restricted to $\mathfrak{p}$. Here $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$ is the Cartan decomposition and $\mathfrak{p}$ can be identified with the tangent space of $G/K$ at $eK$. The function $x\mapsto|x|:=d(eK,xK)$ on $G$ will be used below. On $X$ we fix the measure $dx$ induced by the metric obtained from $B$. As the metric is $G$-invariant, $dx$ coincides, up to a normalizing constant, with the projection of the Haar measure $dg$ of $G$, so that $\int_X f(x)\,dx=\int_G f(g)\,dg$ for every integrable function $f$ on $X$, which we also consider as a right $K$-invariant function on $G$. While dealing with functions on $X$, we may slur over the difference between the two measures.
Through the identification of $A$ with $\mathbb{R}^l$ we use the Lebesgue measure on $\mathbb{R}^l$ as the Haar measure $da$ on $A$. As usual, on the compact group $K$ we fix the normalized Haar measure $dk$ (i.e. $\operatorname{vol}(K)=\int_K dk=1$).
Finally we fix the Haar measure $dn$ on $N$ by the condition that the following integral formulae, corresponding to the Iwasawa and polar decompositions respectively, hold for integrable functions $f$ on $G$:
\[
\int_G f(g)\,dg=\int_K\int_{\mathfrak{a}}\int_N f(k\exp(H)n)\,e^{2\rho(H)}\,dk\,dH\,dn,
\]
\[
\int_G f(g)\,dg=C\int_K\int_{A^+}\int_K f(k_1ak_2)\,J(a)\,dk_1\,da\,dk_2,
\qquad J(a)=\prod_{\alpha\in\Sigma^+}\big(\sinh\alpha(\log a)\big)^{m_\alpha},
\]
where $dH$ is the Lebesgue measure of $\mathbb{R}^l$ with which $\mathfrak{a}$ is identified, $dk$ and $dn$ are the Haar measures of $K$ and $N$ fixed above, and $C$ is a normalizing constant.
For a measurable function $f$ on $X$ and $1\le p\le\infty$ we consider the Hardy-type size condition $\sup_{a\in A^+}\|f(\cdot\,a)\|_{L^p(K)}/\varphi_0(a)<\infty$, where $f(\cdot\,a)$ denotes the function $k\mapsto f(ka)$ on $K$; this is the norm alluded to in the introduction. For $\lambda\in\mathfrak{a}^*_{\mathbb{C}}\cong\mathbb{C}^l$, the elementary spherical function $\varphi_\lambda$ is given by
\[
\varphi_\lambda(x)=\int_K e^{-(i\lambda+\rho)H(x^{-1}k)}\,dk .
\]
It is a $K$-biinvariant eigenfunction of the Laplace-Beltrami operator $\Delta$; $\Delta\varphi_\lambda=-(|\lambda|^2+|\rho|^2)\varphi_\lambda$, and $\varphi_\lambda=\varphi_{w\lambda}$ for every $w\in W$. Moreover, suppose that a function $f$ on $G/K$ satisfies $\Delta f=-|\rho|^2f$ and the Hardy-type condition above for some $p$; then there exists a unique $F\in L^p(K/M)$ when $1<p\le\infty$, and a signed measure $\mu$ when $p=1$, such that $f=P_0F$ (respectively $f=P_0\mu$). For a suitable function $f$ on $G/K$ its spherical Fourier transform $\widehat{f}(\lambda)$, $\lambda\in\mathbb{C}^l$, is defined by
\[
\widehat{f}(\lambda)=\int_X f(x)\,\varphi_{-\lambda}(x)\,dx ,
\]
whenever the integral converges. It is then clear that $\widehat{f}(\lambda)=\widehat{f}(w\lambda)$ for all $w\in W$.
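For later use it is helpful to keep in mind the standard size of $\varphi_0$; the following well-known two-sided estimate is recorded here for orientation (it is not quoted from a specific line of this paper).

% Standard estimate for the basic spherical function; here $\Sigma_0^+$ denotes
% the set of positive indivisible roots:
\[
e^{-\rho(H)}\ \le\ \varphi_0(\exp H)\ \le\ C\Big(\prod_{\alpha\in\Sigma_0^+}\big(1+\alpha(H)\big)\Big)\,e^{-\rho(H)},
\qquad H\in\overline{\mathfrak{a}^+} .
\]
% Thus $\varphi_0$ decays exponentially along the Weyl chamber, so the Hardy-type
% condition accommodates eigenfunctions which lie far outside $L^2(X)$.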
We now need to introduce the notion of Schwartz spaces and distributions on $X$. The $L^2$-Schwartz space $\mathcal{C}^2(G)$ is defined as the set of all $C^\infty$-functions $f$ on $G$ such that
\[
\sup_{x\in G}\,(1+|x|)^r\,\varphi_0(x)^{-1}\,|f(g_1;x;g_2)|<\infty
\]
for all nonnegative integers $r$ and $g_1,g_2\in U(\mathfrak{g}_{\mathbb{C}})$, where $f(g_1;x;g_2)$ denotes $f$ differentiated on the left by $g_1$ and on the right by $g_2$. Let $\mathcal{C}^2(G//K)$ be the set of $K$-biinvariant functions in $\mathcal{C}^2(G)$. We recall ([1]) that the spherical Fourier transform is an isomorphism from $\mathcal{C}^2(G//K)$ onto $\mathcal{S}(\mathbb{R}^l)^W$, the subspace of $W$-invariant functions in $\mathcal{S}(\mathbb{R}^l)$. The dual space of $\mathcal{C}^2(G/K)$ will be denoted by $\mathcal{C}^2(G/K)'$ and its elements will be called $L^2$-tempered distributions. The translation of $T\in\mathcal{C}^2(G/K)'$ by an element $y\in G$ and its convolution with a function $g\in\mathcal{C}^2(G//K)$ are defined in the usual way by $(\ell_yT)\psi=T(\ell_{y^{-1}}\psi)$ for $\psi\in\mathcal{C}^2(G/K)$, and $T*g(y)=(\ell_yT)(g)$. The set of $K$-invariant $L^2$-tempered distributions on $G/K$ will be denoted by $\mathcal{C}^2(G//K)'$. The heat kernel $h_t$, $t>0$, is a $K$-invariant function in $\mathcal{C}^2(G/K)$ which is defined, using the isomorphism of $\mathcal{C}^2(G//K)$ with $\mathcal{S}(\mathbb{R}^l)^W$, by prescribing its spherical Fourier transform $\widehat{h_t}(\lambda)=e^{-t(|\lambda|^2+|\rho|^2)}$. It is well known that $h_t\in\mathcal{C}^2(G//K)$. Using [5, Theorem 4.1.1] it can be verified that $f*h_t\to f$ in $\mathcal{C}^2(G/K)$ as $t\to 0$, and hence $T*h_t\to T$ as $t\to 0$ in the sense of distributions.
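As a quick sanity check (our own, using only the definition above and the eigen-equation for $\varphi_\lambda$), $h_t$ solves the heat equation on the transform side.

% On the spherical transform side, $\Delta$ acts as multiplication by
% $-(|\lambda|^2+|\rho|^2)$, so
\[
\widehat{\Delta h_t}(\lambda)
  =-(|\lambda|^2+|\rho|^2)\,e^{-t(|\lambda|^2+|\rho|^2)}
  =\frac{\partial}{\partial t}\,\widehat{h_t}(\lambda),
\]
% i.e. $\Delta h_t=\partial_t h_t$, as expected of the heat kernel; moreover
% $\widehat{h_t}(\lambda)\to 1$ pointwise as $t\to 0$, reflecting $h_t\to\delta_{eK}$.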
Finally we need the notion of the Abel transform. For a $K$-invariant function $f$ on $G/K$ its Abel transform $\mathcal{A}f$ is defined by
\[
\mathcal{A}f(a)=e^{\rho(\log a)}\int_N f(an)\,dn,\qquad a\in A,
\]
whenever the integral makes sense. We recall that the slice projection theorem ([1]) states that for any $f\in\mathcal{C}^2(G//K)$ and $\lambda\in\mathfrak{a}^*\equiv\mathbb{R}^l$ we have the identity
\[
\widetilde{\mathcal{A}f}(\lambda)=\widehat{f}(\lambda),
\]
i.e. the Euclidean Fourier transform of $\mathcal{A}f$ coincides with the spherical Fourier transform of $f$. The following fact, proved in [1], will be crucial for this paper: $\mathcal{A}$ is a topological isomorphism from $\mathcal{C}^2(G//K)$ onto $\mathcal{S}(\mathbb{R}^l)^W$.
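For instance (an illustrative computation of ours, immediate from the slice projection theorem), the Abel transform turns the heat kernel of $X$ into a Euclidean Gaussian.

% Since $\widetilde{\mathcal{A}h_t}(\lambda)=\widehat{h_t}(\lambda)
% =e^{-t|\rho|^2}e^{-t|\lambda|^2}$, Fourier inversion on $\mathbb{R}^l$ gives
\[
\mathcal{A}h_t(\exp H)\;=\;e^{-t|\rho|^2}\,(4\pi t)^{-l/2}\,e^{-|H|^2/4t},
\qquad H\in\mathfrak{a}\equiv\mathbb{R}^l .
\]
% In other words, $\mathcal{A}$ intertwines $\Delta$ on $X$ with $L_1=L-|\rho|^2$
% on $\mathbb{R}^l$, which is the mechanism exploited in the proofs below.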
The use of Abel transform in our proof is somewhat similar to that of [2] (see also [10]).
Distribution-version of the Euclidean theorem.
Since the real rank of $G$ is $l$, it follows that $W$ is a subgroup of $O(l)$, as $W$ preserves the inner product induced by the Killing form. For a function $f$ on $\mathbb{R}^l$ and $w\in W$ we write $f^w(x)=f(w^{-1}x)$, and for $T\in\mathcal{S}(\mathbb{R}^l)'$ and $w\in W$ we define $T^w(f)=T(f^w)$ and
\[
T^{\#}=\frac{1}{|W|}\sum_{w\in W}T^w,\qquad
f^{\#}=\frac{1}{|W|}\sum_{w\in W}f^w,
\]
where $|W|$ denotes the cardinality of $W$.
A tempered distribution $T$ is called $W$-invariant if $T^{\#}=T$. It is easy to verify that for $T\in\mathcal{S}(\mathbb{R}^l)'$ and $f\in\mathcal{S}(\mathbb{R}^l)$ one has $T^{\#}f=Tf^{\#}$; in particular, when $T$ is $W$-invariant, $Tf=Tf^{\#}$. It is also not difficult to see that the Laplacian $L$ of $\mathbb{R}^l$ commutes with the $W$-action, and hence if $f\in\mathcal{S}(\mathbb{R}^l)^W$ (respectively, if $T$ is a $W$-invariant tempered distribution), then so is $Lf$ (respectively $LT$). We shall first prove the following version of the Euclidean theorem. Below $L_1=L-|\rho|^2$.

Theorem 3.2.1. Let $\{T_j\}_{j\in\mathbb{Z}}$ be a doubly infinite sequence of $W$-invariant tempered distributions on $\mathbb{R}^l$ such that for some $z\in\mathbb{C}$ with $|z|\ge|\rho|^2$:
(i) $L_1T_j=zT_{j+1}$ for all $j\in\mathbb{Z}$;
(ii) for all $\psi\in\mathcal{S}(\mathbb{R}^l)^W$, $|\langle T_j,\psi\rangle|\le M\mu(\psi)$ for some fixed seminorm $\mu$ of $\mathcal{S}(\mathbb{R}^l)$ and $M>0$.
Then $L_1T_0=-|z|T_0$.
This theorem is essentially proved in [16,11]. For the sake of completeness, we include here only a very brief sketch of the argument.
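The identity $T^{\#}f=Tf^{\#}$ claimed above is the following one-line computation, spelled out here for convenience.

% Using $T^w(f)=T(f^w)$ and linearity of $T$:
\[
T^{\#}f=\frac{1}{|W|}\sum_{w\in W}T^w(f)
       =\frac{1}{|W|}\sum_{w\in W}T(f^w)
       =T\Big(\frac{1}{|W|}\sum_{w\in W}f^w\Big)
       =T(f^{\#}).
\]
% In particular, a $W$-invariant $T$ is determined by its action on
% $\mathcal{S}(\mathbb{R}^l)^W$, which is why hypothesis (ii) of Theorem 3.2.1
% involves only $W$-invariant test functions.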
Theorem 3.2.2. Let $\{T_j\}_{j\in\mathbb{Z}}$ be a doubly infinite sequence of $L^2$-tempered distributions on $X$ such that $\Delta T_j=zT_{j+1}$ for all $j\in\mathbb{Z}$, for some $z\in\mathbb{C}$ with $|z|\ge|\rho|^2$, and such that $|\langle T_j,\psi\rangle|\le M\nu(\psi)$ for all $\psi\in\mathcal{C}^2(G/K)$, for a fixed seminorm $\nu$ of $\mathcal{C}^2(G/K)$ and some $M>0$. Then $\Delta T_0=-|z|T_0$.

Proof. We shall use frequently the fact that $\Delta$ commutes with translations and with the $K$-averaging operator $\mathcal{K}$ (the operator of averaging over the action of $K$) defined in Section 2. It is clear from the condition $\Delta T_j=zT_{j+1}$ that if one element of the sequence $\{T_j\}$ is zero, then every element of the sequence is zero and there is nothing to prove. Therefore we assume that none of the $T_j$ is zero. We fix $j\in\mathbb{Z}$. We claim that there is an $x\in G$ such that $\ell_xT_j$ has nonzero $K$-invariant part. Indeed, if $\mathcal{K}(\ell_xT_j)=0$ for all $x\in G$, then $\langle\ell_xT_j,h_t\rangle=0$ for all $t>0$, since the heat kernel $h_t$ is a $K$-invariant function (see Section 2); that is, $T_j*h_t\equiv 0$. But $T_j*h_t\to T_j$ as $t\to 0$ in the sense of distributions. Therefore $T_j=0$, which contradicts our assumption.
We note that this also shows that if for two $L^2$-tempered distributions $T$ and $T'$ we have $\mathcal{K}(\ell_xT)=\mathcal{K}(\ell_xT')$ for all $x\in G$, then $T=T'$.
Our aim now is to show that for any y ∈ G, the sequence {K(ℓ y T j )} of K-invariant distributions satisfies the hypothesis of Theorem 3.2.1. Since ∆ commutes with the K-averaging operator and translations, it follows from the hypothesis ∆T j = zT j+1 that ∆K(ℓ y T j ) = zK(ℓ y T j+1 ).
Transferring the sequence $\{\mathcal{K}(\ell_yT_j)\}$ to $\mathbb{R}^l$ via the Abel transform and applying Theorem 3.2.1, we conclude that $\Delta\mathcal{K}(\ell_y(T_0))=-|z|\mathcal{K}(\ell_y(T_0))$ for every $y\in G$. (Note that if $\mathcal{K}(\ell_y(T_0))=0$ for some $y\in G$, then the identity $\Delta\mathcal{K}(\ell_y(T_0))=-|z|\mathcal{K}(\ell_y(T_0))$ is trivial.) Again appealing to the fact that $\Delta$ commutes with translations and with the $K$-averaging operator, we have $\mathcal{K}(\ell_y(\Delta T_0))=\mathcal{K}(\ell_y(-|z|T_0))$ for all $y\in G$. This implies (see the first paragraph of the proof) that $\Delta T_0=-|z|T_0$, which is the assertion.
Roe-Strichartz theorem for Distinguished Laplacian
The main result of this section is an analogue of Theorem 1.0.2 for a right-invariant second-order differential operator which, in the context of $\mathbb{R}^l$, is nothing but the Laplace-Beltrami operator of $\mathbb{R}^l$. It is known as the distinguished Laplacian of $X$, and we shall make it precise now. Let $G=NAK$ be the Iwasawa decomposition of $G$ and let $S$ be the solvable Lie group $N\rtimes A$. We can identify the manifold $S$ with the Riemannian symmetric space $G/K$. Under this identification the $G$-invariant measure on $G/K$ corresponds to the left Haar measure on $S$, and the Riemannian metric on $G/K$ corresponds to a left-invariant metric on $S$. In a similar fashion we can identify functions and differential operators on $G/K$ with those on $S$. To define the distinguished Laplacian $L$ we first consider the inner product $\langle X,Y\rangle=-B(X,\theta Y)$ on $\mathfrak{g}$, where $B$ is the Cartan-Killing form and $\theta$ is a Cartan involution. With respect to this inner product the decomposition $\mathfrak{s}=\mathfrak{a}\oplus\mathfrak{n}$ is orthogonal. We choose an orthonormal basis $\{H_1,\dots,H_l,X_1,\dots,X_m\}$ of $\mathfrak{s}$ such that $\operatorname{span}\{H_1,\dots,H_l\}=\mathfrak{a}$ and $\operatorname{span}\{X_1,\dots,X_m\}=\mathfrak{n}$, and we view these elements as right-invariant vector fields in the usual way. The distinguished Laplacian $L$ is then defined as (see [4,7])
\[
L=-\Big(\sum_{i=1}^{l}H_i^2+\sum_{j=1}^{m}X_j^2\Big).
\]
The operator $L$ is essentially self-adjoint on $L^2(S)$ with respect to the left Haar measure of $S$ and enjoys a special relationship with the Laplace-Beltrami operator $\Delta$ when the latter is viewed as a left-invariant operator on the solvable group $S$. This relation is explained below, as it is crucial for our purpose. For a function $f$ on $S$ we define $\widetilde{f}(x)=f(x^{-1})$ for $x\in S$, where $x^{-1}$ is the inverse in the group $S$. We recall that $\Delta_1$ denotes the operator $-(\Delta+|\rho|^2)$. It then follows that (see [4, p.108])
\[
L\big(\delta^{1/2}\widetilde{f}\,\big)(x)=\delta^{1/2}(x)\,\widetilde{\Delta_1f}(x)\qquad\text{for all }x\in S,
\tag{4.0.1}
\]
where we recall that $\delta(an)=e^{-2\rho(\log a)}$ for $a\in A$ and $n\in N$. It follows trivially that $\Delta_1f=\lambda f$ for some $\lambda\in\mathbb{C}$ if and only if $L(\delta^{1/2}\widetilde{f})=\lambda(\delta^{1/2}\widetilde{f})$. This relation between the Laplacians yields Theorem 1.0.3 stated in the introduction.
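Concretely, the equivalence just stated is a two-line consequence of the displayed relation, spelled out here for convenience.

% If $\Delta_1f=\lambda f$, then applying (4.0.1) to $f$:
\[
L\big(\delta^{1/2}\widetilde{f}\,\big)
   =\delta^{1/2}\,\widetilde{\Delta_1f}
   =\delta^{1/2}\,\widetilde{\lambda f}
   =\lambda\big(\delta^{1/2}\widetilde{f}\,\big);
\]
% conversely, since $f\mapsto\delta^{1/2}\widetilde{f}$ is a bijection on functions
% on $S$ (with inverse $g\mapsto\widetilde{\delta^{-1/2}g}$), every eigenfunction
% of $L$ arises in this way.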
Using (4.0.1) again we get $Lf_0=\alpha f_0$, which is the assertion.
We conclude with the observation that, despite the fact that the distinguished Laplacian $L$ has some similarities with the Euclidean Laplacian on $\mathbb{R}^l$ (see the Introduction), a straightforward analogue of the Euclidean result of Strichartz [16] is not possible. The following counterexample establishes this.
Let $\psi_2$ be the constant function $1$. We shall show that $\psi_2$ is an eigenfunction of $L$ with eigenvalue $-4|\rho|^2$.
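The computation behind this claim, and the way such an eigenfunction breaks the naive statement, can be sketched as follows; this is our reconstruction of the standard mechanism, and the companion function $\psi_1$ below is hypothetical, supplied only to display how the counterexample would run.

% Via (4.0.1): $\psi_2=1=\delta^{1/2}\widetilde{f}$ with $f=\delta^{1/2}$, and
% $\Delta_1\delta^{1/2}=-4|\rho|^2\,\delta^{1/2}$ (take $i\lambda+\rho=-\rho$ in the
% eigen-equation for $x\mapsto e^{(i\lambda+\rho)A(x)}$, so that
% $\langle\lambda,\lambda\rangle=-4|\rho|^2$), whence $L\psi_2=-4|\rho|^2\psi_2$.
% Given any bounded eigenfunction $\psi_1$ with $L\psi_1=4|\rho|^2\psi_1$
% (hypothetical), the usual alternating trick
\[
f_j=\psi_1+(-1)^j\psi_2,\qquad j\in\mathbb{Z},
\]
% yields a uniformly bounded sequence with $Lf_j=4|\rho|^2f_{j+1}$, while
\[
f_0=\psi_1+\psi_2
\]
% is not an eigenfunction of $L$, since the eigenvalues $\pm 4|\rho|^2$ mix.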
Concluding Remarks
1. In view of the results in [12] and in [3], it is natural to expect the following result: if $\{f_j\}_{j\in\mathbb{Z}}$ is a doubly infinite sequence of measurable functions on $X$ such that
(i) $\Delta f_j=(4\rho^2/qq')\,f_{j+1}$, and
(ii) for a fixed $p\ge 1$, $\|f_j(\cdot\,a)\|_{L^p(K)}\le C_p\,\varphi_{i\gamma_q\rho}(a)$ for all $a\in A^+$ and for a constant $C_p>0$ depending only on $p$,
then $f_0$ is an eigenfunction of $\Delta$ with the corresponding eigenvalue $-4\rho^2/qq'$.
2. A recent paper [13] studies the $L^p$-Schwartz space isomorphisms and related analysis in the context of Heckman-Opdam hypergeometric functions, which generalize the analysis of $K$-biinvariant functions on a noncompact connected semisimple Lie group with finite centre. It should be possible to prove an analogue of our result in this set-up through similar steps.
"year": 2012,
"sha1": "3ba935c181c4ec1ccafdb1938af454cf713d66ab",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1207.6695",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3ba935c181c4ec1ccafdb1938af454cf713d66ab",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
MEFV Gene Profile in Northwest of Iran, Twelve Common MEFV Gene Mutations Analysis in 216 Patients with Familial Mediterranean Fever
Familial Mediterranean fever (FMF) is a hereditary autoinflammatory disease with an autosomal recessive inheritance pattern, often seen around the Mediterranean Sea. It is characterized by recurrent episodes of fever, polyserositis, and rash. MEFV gene analysis now provides the definitive diagnosis of FMF. In this study, we analyzed 12 MEFV gene mutations in more than 200 FMF patients, previously diagnosed by the Tel-Hashomer clinical criteria, in the northwest of Iran, located in the proximity of the Mediterranean Sea. In the northwest of Iran (Ardabil), 216 patients with an FMF diagnosis based on the Tel-Hashomer criteria were referred to the genetic laboratory to be tested for the following mutations: P369S, F479L, M680I (G/C), M680I (G/A), I692del, M694V, M694I, K695R, V726A, A744S, R761H, and E148Q. All patients were screened for MEFV gene mutations by a reverse hybridization assay (FMF StripAssay, ViennaLab, Vienna, Austria) according to the manufacturer's instructions. Among these FMF patients, no mutation was detected in 51 (23.62%) patients, but 165 (76.38%) patients had one or two mutations: 33 patients (15.28%) were homozygous, 86 patients (39.81%) compound heterozygous, and 46 patients (21.29%) heterozygous. The most common mutations were M694V (23.61%), V726A (11.11%), and E148Q (9.95%), respectively. MEFV gene mutations showed similarities and dissimilarities across ethnic groups, the profile here being close to that of Arabs and Armenians. Since analysis of the 12 common MEFV mutations could not genetically confirm up to 50% of our patients, who had FMF on the basis of the clinical Tel-Hashomer criteria, clinical criteria remain the best way to diagnose FMF in this area. The abstract of this article was presented at the 7th Congress of the International Society of Systemic Auto-Inflammatory Diseases in Lausanne, Switzerland, 22-26 May 2013.
Introduction
Familial Mediterranean fever (FMF) is a hereditary recurrent inflammatory disease with an autosomal recessive pattern of inheritance, characterized by self-limiting recurrent fever and pain in the serous membranes, pleurisy, peritonitis, arthritis, and erythema. 1,2 The most important complication of FMF is amyloidosis, which eventually leads to kidney failure. 3 Symptoms can occur in the first 10 years of life, and according to a recent study, in more than 80 percent of patients the symptoms appear in childhood. 1 FMF is often present in people of Mediterranean ancestry. 4 The disease is mostly seen among Turks, Arabs, Armenians, and Jews. 5 The gene associated with the disease, the MEFV (Mediterranean fever) gene, was identified in 1997; it is located on the short arm of chromosome 16 (16p13). The MEFV gene encodes pyrin, a protein expressed in neutrophils, eosinophils, monocytes, dendritic cells, and fibroblasts. In monocytes, pyrin co-localizes with the microtubule system; this may contribute to the therapeutic effect of colchicine, which destabilizes the cytoskeleton. The exact role of pyrin and the mechanisms by which MEFV mutations (more than 300 variants) exert their pathogenic effects are inadequately understood. Initial diagnosis of FMF is mainly based on the clinical Tel-Hashomer criteria, as described below. 6 Definitive diagnosis requires at least two major criteria or one major and two minor criteria.
Major criteria:
• Fever with peritonitis, synovitis, or pleurisy
• Recurrent febrile attacks
• AA amyloidosis (without risk factors or other chronic inflammatory diseases)
Minor criteria:
• Erysipeloid erythema
• Response to colchicine
• Family history of periodic syndrome
While identification of MEFV gene mutations provides the definitive diagnosis of FMF, the clinical manifestations of the disease are much more sensitive and specific for diagnosis, especially in practice. It should be kept in mind that failure to find mutations in the MEFV gene does not rule out the disease. 6 Bonyadi, in a cohort of 200 unrelated healthy individuals, screened for the five most common MEFV mutations (M694V, V726A, M680I, M694I, and E148Q). In this genotyping, the carrier rate among the Azeri people was 25.5%, with E148Q the most frequent (11.5%), followed by V726A (1.75%). This study showed that the E148Q mutation and the carrier rate are high among the Iranian-Azeri people. The remaining common mutations were not found in this cohort. 3 The allele frequencies of MEFV gene mutations are highly heterogeneous in the Turkish population, which is believed to be at high risk of developing FMF; the carrier-state frequency of FMF is 20%, and the five most common mutations in the MEFV gene are M694V, E148Q, M680I, V726A, and M694I. 7 The present study was conducted to evaluate 12 common MEFV gene mutations in patients with documented FMF in order to establish a common MEFV gene profile. Additionally, we aimed to evaluate the diagnostic value of the Tel-Hashomer clinical criteria versus genetic testing in patients of Azeri-Turkish background living in the northwest of Iran (near the Mediterranean Sea).
Materials and Methods
Based on the Tel-Hashomer clinical criteria, 300 patients diagnosed with FMF were enrolled in this descriptive case-series study between August 2011 and September 2012. The study took place at an FMF clinic in the northwest of Iran, and the participants were selected from the Azeri-Turkish community in Ardabil. In total, 216 patients participated in the study; the remaining 84 patients were unavailable, unwilling, or did not complete the consent form. The study was approved by the Ethical Committee of the Medical Faculty.
The patients were investigated and screened for the 12 known FMF mutations by a reverse hybridization assay (FMF StripAssay, ViennaLab, Vienna, Austria) according to the manufacturer's instructions. About 100 μl of peripheral blood was used for DNA extraction by a boiling-based method. Initially, exons 2, 3, 5, and 10 were amplified for each patient in a single multiplex PCR, with the primers supplied in the RDB (reverse dot blot) kit. The thermocycling program consisted of 35 cycles (94 °C for 15 seconds, 58 °C for 45 seconds, and 72 °C for 45 seconds) and a final extension at 72 °C for 7 minutes. Agarose gel electrophoresis confirmed the accuracy of the amplification by detecting four amplified DNA fragments of 206, 236, 295, and 318 bp. Biotinylated PCR products were then selectively hybridized to a test strip presenting a parallel array of allele-specific oligonucleotide probes. By this means, the twelve common mutations E148Q in exon 2, P369S in exon 3, F479L in exon 5, and M680I (G/C), M680I (G/A), I692del, M694V, M694I, K695R, V726A, A744S, and R761H in exon 10 were determined.
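For readers reproducing the protocol, the cycling program above is easy to sanity-check. The short sketch below is our own illustration: it simply encodes the stated program and estimates the minimum run time, ignoring initial denaturation, ramp rates, and hold steps, which the kit manual (not the paper) would specify.

# Minimal sketch: encode the stated PCR program and estimate its duration.
# Assumptions (not from the paper): no initial denaturation step, zero ramp time.

CYCLES = 35
# (temperature in °C, seconds) for each step within one cycle
CYCLE_STEPS = [(94, 15), (58, 45), (72, 45)]
FINAL_EXTENSION = (72, 7 * 60)  # 72 °C for 7 minutes

def run_time_seconds() -> int:
    """Return the minimum thermocycler run time implied by the program."""
    per_cycle = sum(seconds for _temp, seconds in CYCLE_STEPS)
    return CYCLES * per_cycle + FINAL_EXTENSION[1]

if __name__ == "__main__":
    total = run_time_seconds()
    print(f"{total} s ≈ {total / 60:.1f} min")  # 4095 s ≈ 68.2 min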
Statistical analysis was performed using SPSS v15.0. Comparisons between the different genotypes were assessed using the chi-square test. P < 0.05 was accepted as statistically significant.

Results and Discussion

In comparison with different ethnic groups, as well as different studies, the 12 common MEFV gene mutations show similarities and dissimilarities (Tables 3 and 4). In all ethnic groups, M694V was the most common mutation. M680I was the second most common mutation among Turks, whereas V726A was the second among Jews, Armenians, Arabs, and Iranian Azeris. While M694I is common among Arabs and least frequent among Jews, it displayed a higher rate among Iranian Azeris. Except for the E148Q mutation, Armenians and the population in our study appear to have the same pattern of mutations, although the overall MEFV gene frequency is also similar to that of Arabs. Bonyadi showed a high frequency of E148Q in healthy Azeri people, and our study shows a significant frequency of this mutation in FMF patients as well (Table 5). 8 In this study, the most common two-mutation genotypes were M694V/V726A (11.11%), M694V/M694V (8.33%), and E148Q/M694V (5.09%), respectively. However, in other studies, 7-10 M694V/M694V was the most common genotype. Turkish researchers 7,9 found that M694V/M680I was the second most common genotype, whereas Azeri-Turkish studies 8,10 found M694V/V726A to be the second most common.
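To make the genotype arithmetic above concrete, the short sketch below is ours: the Ardabil genotype counts are taken from the abstract, whereas the comparison cohort counts are invented placeholders (not data from any cited study), included only to show how a chi-square comparison of two genotype distributions would be run.

# Sketch: allele/genotype arithmetic for the reported Ardabil cohort (n = 216)
# and a chi-square comparison against a hypothetical second cohort.
from scipy.stats import chi2_contingency

# Reported genotype counts: no mutation, heterozygous, compound het, homozygous
ardabil = {"none": 51, "het": 46, "compound_het": 86, "hom": 33}
n = sum(ardabil.values())  # 216

# Mutated-allele count: het carries 1 mutated allele; compound het and hom carry 2.
mutated_alleles = ardabil["het"] + 2 * (ardabil["compound_het"] + ardabil["hom"])
print(f"mutated alleles: {mutated_alleles}/{2 * n} "
      f"({mutated_alleles / (2 * n):.1%})")

# Hypothetical comparison cohort (placeholder numbers for illustration only).
other = {"none": 30, "het": 50, "compound_het": 60, "hom": 60}

table = [
    [ardabil[k] for k in ardabil],
    [other[k] for k in ardabil],
]
chi2, p, dof, _expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3g}")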
In the present study, R761H, V726A, and I692del were rarely observed as single (heterozygous) mutations. Furthermore, to our knowledge there are no previous reports of the compound heterozygous genotypes K695R/V726A, M694I/R761H, I692del/R761H, and A744S/R761H found in our patients.
The key point of this study is a re-evaluation of the role of MEFV gene analysis in the diagnosis of FMF, since analysis of the 12 common mutations genetically confirmed (i.e., identified two mutations in) only 55.09% of the patients diagnosed with FMF. It seems that clinical criteria are still the best method for the diagnosis of FMF. Similarly, molecular analysis of FMF mutations in Chetrit's study confirmed the diagnosis in about 60% of referrals with suspected FMF. 12 Some individuals with paired MEFV mutations have no clinical symptoms, 13 and one molecular study found that the diagnosis of FMF based on clinical criteria was unlikely in 7% of patients. 14 Many diagnostic criteria for FMF were initially developed for adults (the Tel-Hashomer and Livneh criteria) and later for children (the Yalcinkaya criteria). All of these criteria have high sensitivity but low specificity, particularly in the pediatric age group, where recurrent fever attacks are more common than in adults. Since the identification of the MEFV gene, considerable progress has been made in the understanding of FMF. To date, more than 300 sequence variants (mutations and polymorphisms) have been identified in the MEFV gene. Gene analysis is a valuable diagnostic tool but is still unable to confirm the diagnosis in all patients; consequently, more attention should be given to the clinical diagnostic criteria. 15 On the whole, a new combined molecular-clinical approach to FMF seems inevitable in the future.
Conclusion
M694V is the most common mutation, and M694V/V726A is the most common two-mutation genotype in the northwest of Iran. The MEFV mutation profile is similar to that reported for Arabs and Armenians. We conclude that clinical criteria remain the best way to diagnose FMF.
"year": 2015,
"sha1": "4800a612910e4daec6426082499dbfcc2f8ca5cd",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "869f3481f823000c6dffa80a41480a5fe99e8f5b",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Cognitive impact of COVID-19: looking beyond the short term
COVID-19 is primarily a respiratory disease but up to two thirds of hospitalised patients show evidence of central nervous system (CNS) damage, predominantly ischaemic, in some cases haemorrhagic and occasionally encephalitic. It is unclear how much of the ischaemic damage is mediated by direct or inflammatory effects of virus on the CNS vasculature and how much is secondary to extracranial cardiorespiratory disease. Limited data suggest that the causative SARS-CoV-2 virus may enter the CNS via the nasal mucosa and olfactory fibres, or by haematogenous spread, and is capable of infecting endothelial cells, pericytes and probably neurons. Extracranially, SARS-CoV-2 targets endothelial cells and pericytes, causing endothelial cell dysfunction, vascular leakage and immune activation, sometimes leading to disseminated intravascular coagulation. It remains to be confirmed whether endothelial cells and pericytes in the cerebral vasculature are similarly targeted. Several aspects of COVID-19 are likely to impact on cognition. Cerebral white matter is particularly vulnerable to ischaemic damage in COVID-19 and is also critically important for cognitive function. There is accumulating evidence that cerebral hypoperfusion accelerates amyloid-β (Aβ) accumulation and is linked to tau and TDP-43 pathology, and by inducing phosphorylation of α-synuclein at serine-129, ischaemia may also increase the risk of development of Lewy body disease. Current therapies for COVID-19 are understandably focused on supporting respiratory function, preventing thrombosis and reducing immune activation. Since angiotensin-converting enzyme (ACE)-2 is a receptor for SARS-CoV-2, and ACE inhibitors and angiotensin receptor blockers are predicted to increase ACE-2 expression, it was initially feared that their use might exacerbate COVID-19. Recent meta-analyses have instead suggested that these medications are protective. This is perhaps because SARS-CoV-2 entry may deplete ACE-2, tipping the balance towards angiotensin II-ACE-1-mediated classical RAS activation: exacerbating hypoperfusion and promoting inflammation. It may be relevant that APOE ε4 individuals, who seem to be at increased risk of COVID-19, also have lowest ACE-2 activity. COVID-19 is likely to leave an unexpected legacy of long-term neurological complications in a significant number of survivors. Cognitive follow-up of COVID-19 patients will be important, especially in patients who develop cerebrovascular and neurological complications during the acute illness.
Background

COVID-19, caused by severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2), is primarily a respiratory disease but has the capacity to damage other organs, including the brain. Similarly to the severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS) viruses [1-3], SARS-CoV-2 targets the brain (reviewed [4]), and a growing number of case reports and cohort studies indicate significant neurological disturbance in COVID-19 patients (reviewed [5]). Central nervous system (CNS) involvement, including non-specific encephalopathy (headache, confusion, and disorientation), was first documented in 53/214 (25%) hospitalised patients in Wuhan, China [6]. More recent studies in Europe have reported higher rates of CNS involvement: 69% of 58 hospitalised patients in a French study [7], and 31% of 125 cases with altered mental state, including psychosis and neurocognitive changes, in a recent UK survey [8]. A recent report described a 'dysexecutive syndrome consisting of inattention, disorientation, or poorly organised movements in response to command' in 33% of 43 patients discharged from hospital [7]. Moreover, neuroradiological evidence of microstructural damage and disruption of functional brain integrity at 3-month follow-up in recovered COVID-19 patients [9] indicates potential long-term neurological consequences in severely affected COVID-19 patients (reviewed [10]). Acute cerebrovascular disease (CVD), typically presenting as ischaemic stroke but occasionally as intracerebral haemorrhage (ICH), has emerged as an important clinical feature in COVID-19 (reviewed in [5]). There have also been multiple case reports of encephalitis with brain-stem involvement (reviewed in [5]). CNS involvement with neurological presentation is more frequent in older and more severely ill COVID-19 patients [6]. Based on the minimum prevalence of neurological complications in SARS and MERS, Ellul et al. [5] estimated that of the 4.8 million COVID-19 cases reported at the time, 1805-9671 had developed CNS complications.
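The arithmetic behind extrapolations of this kind is simple. The sketch below is our own illustration: the prevalence bounds are back-calculated from the 1805-9671 figure quoted above, not taken from Ellul et al.'s methods, and serve only to show how a case count and a prevalence range yield such an estimate.

# Sketch: extrapolating a range of CNS-complication counts from a case count
# and a minimum-prevalence range. The bounds below are back-calculated from
# the quoted 1805-9671 figure, not taken from the source study's methods.

TOTAL_CASES = 4_800_000

# Implied minimum prevalence of CNS complications (per SARS/MERS data)
PREV_LOW = 1805 / TOTAL_CASES    # ~0.038%
PREV_HIGH = 9671 / TOTAL_CASES   # ~0.20%

def cns_estimate(cases: int, low: float = PREV_LOW, high: float = PREV_HIGH):
    """Return (lower, upper) expected counts of CNS complications."""
    return round(cases * low), round(cases * high)

print(cns_estimate(TOTAL_CASES))   # (1805, 9671)
print(cns_estimate(10_000_000))    # scaled to a larger epidemic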
Human coronaviruses are known to target the CNS and cause damage by direct neurotoxicity or activation of the host immune response [1]. The propensity of SARS-CoV-2 to cause cerebral vascular injury greatly increases the risk of chronic brain damage, not only because of the cumulative destructive effect of multifocal cerebral ischaemia or haemorrhage, but potentially also through chronic post-infective complications of CVD, including endothelial and blood-brain barrier (BBB) dysfunction and upregulation of pro-inflammatory cytokines within the brain [11]. Long-term cognitive decline and neurodegeneration, with associated hippocampal atrophy [12], were previously reported to complicate systemic inflammation associated with severe sepsis [13,14]. Acute respiratory distress syndrome (ARDS), a common clinical presentation in COVID-19 patients, is also associated with cognitive decline and neurodegeneration [15,16]. Long-term follow-up of COVID-19 patients that includes detailed cognitive assessment will be important, to determine the extent and prevalence of long-term neurological and psychiatric consequences of COVID-19 [17], especially in patients who develop cerebrovascular and neurological complications during the acute illness.
In this review, we discuss the pathophysiological processes and risk factors shared by COVID-19 and dementia, focussing particularly on the role of cerebrovascular disease and the involvement of the renin-angiotensin system (RAS) ( Table 1). We consider whether SARS-CoV-2 infection may increase the risk of later developing dementia, particularly in people with underlying cerebrovascular disease and high-risk co-morbidities, such as diabetes and hypertension.
Cerebral vascular disease (CVD) is common in severe COVID-19
Unlike in SARS and MERS, COVID-19 patients are at substantial risk of developing acute CVD. Studies to date indicate that CVD has affected 2-6% of hospitalised patients with COVID-19 (reviewed in [5]). In a Spanish cohort, 23 of 1683 patients (1.4%) developed CVD, with cerebral ischaemia accounting for 74% and ICH for 23% of the 23 cases [18]. Amongst COVID-19 patients with neurological complications, the reported incidence of CVD is much higher. Acute CVD was diagnosed in 77% of 56 patients admitted to a neurology ward in Italy [19]. In a recent UK-wide survey of 153 COVID-19 cases with neurological and/or psychiatric disturbance, most patients (62% of the 125 for whom a full clinical dataset was available) had had a cerebrovascular event, compared to 31% with encephalopathy; of those with CVD, 74% had presented with ischaemic stroke, 12% with ICH, and 1% with CNS vasculitis [8]. A common theme across most of these studies is the predominance of CVD in older patients, in association with more severe disease, and in those with co-morbidities including hypertension, diabetes and underlying cerebrovascular disease [20]. However, large-vessel stroke has also been reported in younger adults with COVID-19 [21]. The pathophysiology of CVD in COVID-19 has yet to be fully determined (Fig. 1). Inflammation-induced disseminated intravascular coagulation (DIC), often complicated by pulmonary embolism, has been documented in a high proportion of patients with neurovascular complications and is likely to be a major contributor to most acute CVD events in COVID-19 [22], especially in younger healthy adults. A recent review [23] highlighted a number of pathways involving localised endothelial cell dysfunction, vascular leakage and unregulated immune activation that contribute to DIC formation in ARDS in COVID-19 patients. Activation of the kallikrein-bradykinin system leading to reduced blood flow, upregulation of adhesion molecules that mediate leucocyte recruitment, activation of platelets and neutrophils, and increased inflammation and immune surveillance all potentially contribute to vascular damage (and pulmonary injury) in COVID-19 patients. SARS-CoV-2 has also been shown to target and infect endothelial cells in vascular beds in multiple tissues [24], but it remains to be confirmed whether endothelial cells in the cerebral vasculature are similarly targeted.

Table 1. Pathophysiological processes contributing to increased risk of chronic neurological disease, including dementia, in COVID-19 patients
1. Hypoxia and cerebral hypoperfusion secondary to cardiorespiratory disease - hypoxic-ischaemic brain injury, diffuse white matter damage [25,26]
2. Coagulopathy, with thrombotic occlusion of cerebral blood vessels - cerebral artery thrombosis, disseminated intravascular coagulation [22]
3. Cerebral microvascular damage and dysfunction - endotheliitis, pericyte damage, BBB leakiness, neurovascular dysfunction, impaired autoregulation, impaired vascular/para-vascular drainage [23,24]
4. Dysregulation of the renin-angiotensin system - loss of regulatory RAS and overactivity of classical RAS signalling [125-127,160,161]
5. SARS-CoV-2 encephalitis / post-infective encephalitis (rare) - CNS viral neuroinvasion via olfactory nerve fibres or vasculature / post-infective immune injury to CNS [27,28,38], reviewed in [5]
Most studies to date have implicated vascular dysfunction and ischaemic damage in the major neurological complications associated with COVID-19. Post-mortem neuropathological examination in 18 COVID-19 cases showed all to have acute hypoxic-ischaemic brain injury affecting the cerebrum and cerebellum, with rare foci of perivascular inflammation in two of the brains but no convincing evidence of virus within the CNS [25]. MR imaging of autopsy brains from deceased COVID-19 patients within 24 h of death revealed white matter changes, including foci of haemorrhage in two cases and evidence of posterior reversible encephalopathy syndrome in another [26]. Plasma markers of neuronal and astrocyte injury (neurofilament light chain protein and glial fibrillary acidic protein) were elevated in COVID-19 patients and associated with disease severity [27]. The authors concluded that further studies were needed to assess the relationship of brain damage to ischaemic and inflammatory processes. The key unanswered question is how much damage is mediated by direct effects of virus on the CNS parenchyma or vasculature (damage that would be expected to persist after the virus is cleared from the CNS), how much is indirect CNS vascular injury mediated by immune activation, and how much is hypoxic-ischaemic damage secondary to the extracranial effects of the virus on the respiratory and cardiovascular systems.

Fig. 1 Mechanisms of cerebrovascular damage in COVID-19. (a) The virus reaches the central nervous system through inhalation (1) and lung infection followed by haematogenous spread (2), or through the nasal mucosa and olfactory nerve fibres (3). (b) A high proportion of COVID-19 patients with severe disease develop cerebrovascular disease. In addition to hypoxic-ischaemic brain damage from compromised respiratory and cardiovascular function, the virus may cause large-vessel stroke, multiple small infarcts and foci of haemorrhage, and diffuse ischaemic white matter damage and oedema. Putative mechanisms are illustrated in the diagram. (c) Considerable progress has already been made in preventing or ameliorating cerebrovascular damage in COVID-19; some of the approaches are listed here.
SARS-CoV-2 infects human brain
Both SARS-CoV-2 antigen and RNA have been detected within brain tissue in human post-mortem studies, the antigen mainly within the medulla and lower cranial nerves [28]. SARS-CoV-2 was detected in cerebrospinal fluid from a patient with viral encephalitis [29] and was observed at autopsy in neural and capillary endothelial cells in COVID-19 brain tissue [30]. These observations need to be confirmed in further studies, particularly given the high Ct values used for the PCR detection of viral RNA and the difficulty of electron microscopic interpretation of virus-like particles. Although retrograde axonal transport via the olfactory bulb, associated with anosmia, is a potential route for neuroinvasion, it is likely that the cerebral vasculature plays a more important role in entry of the virus into the CNS. ACE-2, the principal SARS-CoV-2 receptor, is highly expressed by endothelial cells [31] and pericytes throughout the body [32] and analysis of publicly available databases indicates that ACE-2 is also expressed in the brain [33].
Despite apparently low levels of ACE-2 mRNA within the brain [34-37], SARS-CoV-2 infects induced pluripotent stem cell-derived human neural stem and progenitor cells, neurospheres, and cortical neurons within brain organoids (all of which express ACE-2) [38-41]. These data suggest that the mRNA levels do not necessarily reflect ACE-2 protein levels or enzyme activity within the brain, although we would point out that some of this information (e.g. [38-41]) has been published only on preprint servers; peer review may lead to modification of the conclusions. We and others have detected ACE-2 immunohistochemically within the cerebral vasculature in human post-mortem brains [38,42], and a pre-published study by the Betsholtz lab indicates that ACE-2 is also enriched in brain pericytes [43]. In addition to ACE-2, other docking receptors for SARS-CoV-2 have been identified, including basigin (BSG, CD147) (preprint [44]) and neuropilin-1 (NRP1) (preprint [45]), and these are highly expressed in endothelial cells and pericytes [46]. These receptors may have an important role to play, alongside or separately from ACE-2, in viral entry and disease pathogenesis (reviewed [46]).
Activation of the cerebral endothelium in a range of disease states, including AD, is associated with increased expression of integrins and selectins that are responsible for the attachment, tethering and passage of immune cells across the BBB. This leads to infiltration of brain tissue by immune cells, including neutrophils, monocytes, and lymphocytes, contributing to the pathogenesis of disease (reviewed [47]). In view of the activation of endothelium and inflammatory cell infiltration in lung and other tissues in COVID-19, it is conceivable that activation of the cerebral endothelium and infiltration by immune cells also contributes to neurological disturbance in many patients, although this too remains to be determined.
Pericytes are mural cells located within the basement membrane of microvessels [48]; they communicate with endothelial cells to maintain the integrity of the BBB [49] and regulate essential vascular functions: blood flow [50] and neurovascular coupling, endothelial transcytosis [51], and angiogenesis [52]. Transcriptomic analysis of murine heart [32,53] and brain [54-56] indicates that pericytes express high levels of ACE-2 and are therefore likely targets of SARS-CoV-2. Lung biopsies from 4 hospitalised COVID-19 patients revealed a dramatic reduction in pericyte coverage of alveolar capillaries, in addition to thickening of the capillary wall [57]. Pericyte degeneration, and the consequent disruption of endothelial signalling and homeostasis, is likely to be an important contributor to vascular instability in COVID-19 [32]. In pericyte-deficient mice (Pdgfrb ret/ret), levels of von Willebrand factor, which promotes platelet aggregation and coagulation, are elevated, suggesting that pericyte loss contributes to the pro-angiogenic response in COVID-19 patients [43]. These studies implicate pericyte dysfunction as a mediator of pathophysiology in COVID-19. It is not yet known whether pericytes within the brain degenerate or become dysfunctional in COVID-19 patients with neurological manifestations.
A recent study has provided further evidence of the neuroinvasive potential of SARS-CoV-2 [38]. The authors demonstrated ACE-2-dependent infection of nerve cells within human brain organoids, with hypoxia-like metabolic changes and damage in neighbouring uninfected cells. Expression of humanised ACE-2 within the brain of mice experimentally infected with SARS-CoV-2 caused vascular remodelling throughout the cortex and greatly increased their mortality. The authors examined brain tissue from three COVID-19 patients and reported that SARS-CoV-2 spike protein could be detected immunohistochemically within the walls of small cortical vessels adjacent to microinfarcts. They also reported spike protein immunopositivity in some cortical neurons. However, these findings need to be confirmed.
The types of cerebral ischaemic damage seen in COVID-19 are major contributors to cognitive decline and dementia
Pre-existing dementia ranks as one of the most significant risk factors, or co-morbidities, in COVID-19. Retrospective assessment of UK health records in the UK OpenSAFELY platform indicated a hazard ratio of 2.16 (fully adjusted model) in association with pre-existing dementia/stroke [58]. An odds ratio of 3.07 was associated with dementia in a UK biobank community study [59]. The reasons for the increased risk and mortality in patients with pre-existing dementia have been well reviewed [60]. What is perhaps not as widely appreciated is that the types of brain damage seen in COVID-19 are themselves major contributors to cognitive decline and dementia. Ischaemic brain damage is the defining pathological process in vascular dementia (VaD), and stroke is a major risk factor for dementia [61,62]. Thromboembolic occlusion of cerebral blood vessels, a major complication of DIC, can cause a wide spectrum of neurological deficits, including cognitive impairment or dementia. Single or multiple infarcts, as a result of thromboembolism affecting major cerebral arteries, are estimated to account for approximately 20% of dementia cases associated with stroke [63]. It is probable that acute large cerebral vascular occlusion associated with hypercoagulability in severely affected COVID-19 patients will increase the risk of dementia to some extent.
Small vessel disease (SVD) accounts for about 20% of all strokes [64] but about 80% of stroke-related dementia cases [63], and is the most common cause of vascular cognitive impairment. SVD-associated neuroimaging abnormalities of the white matter and arteriolosclerosis of cerebral microvessels can be demonstrated in about 50% of all patients with dementia [65,66]. Co-morbidities of SVD include hypertension and diabetes [67] (both also risk factors for severe COVID-19). Hypercoagulability and disseminated intravascular coagulation, which affect many patients with severe COVID-19, are likely to reduce perfusion through small intracerebral vessels more than through larger ones. SARS-CoV-2 induces endothelial dysfunction [23] and infects vascular beds in multiple tissues [24]. Cerebral white matter is particularly vulnerable to changes in cerebral blood flow, as would be expected in association with diffuse small vessel dysfunction, and white matter damage has been reported in COVID-19 [68-70]. The integrity of subcortical white matter is critically important for the maintenance of cognitive function [71,72], and one of the consequences of white matter damage in COVID-19 is likely to be cognitive impairment. This was highlighted by the neuroradiological demonstration that damage to white matter and disruption of functional integrity in brain regions such as the hippocampus, at 3-month follow-up in recovered COVID-19 patients, was associated with memory loss [9]. The pathophysiology of SVD remains incompletely understood, but damage to endothelial cells and pericytes, as well as BBB leakiness, contributes to SVD-related brain injury (reviewed [66,73]) and is likely to be exacerbated in severe COVID-19. Endothelial dysfunction and pericyte loss are associated with the cerebral influx and accumulation of toxic constituents of the plasma, such as fibrinogen, leading to oligodendrocyte damage and myelin loss [74]. Fibrinogen-mediated activation of the bone morphogenetic protein signalling pathway prevents maturation of oligodendrocyte precursor cells and restricts oligodendrocyte maturation and remyelination [75]. The infiltration of immune cells across a damaged BBB may also contribute to white matter damage and cognitive decline in dementia and perhaps in COVID-19 (reviewed [47]). In addition, endothelial dysfunction and loss of pericytes are likely to impair the clearance of cerebral metabolites, including Aβ peptides, that are toxic when present in excess. Impaired drainage of metabolites including Aβ is implicated in the development of cerebral amyloid angiopathy and Alzheimer's disease [76,77], and ineffective drainage of solutes probably accounts for enlargement of perivascular spaces in patients with SVD (reviewed [78,79]) and cerebral amyloid angiopathy [80-82].
Several other factors may impact on cerebral perfusion during systemic infection. Increased blood viscosity tends to slow capillary transit and limit oxygen delivery. Damage to the glycocalyx, a carbohydrate-enriched matrix on the luminal side of capillaries may impair perfusion and exacerbate ischaemia [83,84]. Many of the deleterious alterations to small vessels in systemic infection are exacerbated by the same risk factors that predispose to severe COVID-19, such as ageing, hypertension, diabetes, and obesity.
Post-mortem and neuroimaging studies indicate that ischaemic damage to the cerebral white matter is present in up to two thirds of patients with AD [85][86][87]. Although cerebral amyloid angiopathy may contribute to the damage, in most cases the ischaemia-related damage probably results from a combination of arteriolosclerotic SVD and non-structural vascular dysfunction (reviewed [88]). A series of recent neuroimaging studies has shown that ischaemic white matter damage occurs at a very early stage of AD, accelerates progression of the disease and contributes to the cognitive decline [89][90][91][92][93]. These clinical observations are supported by experimental studies showing that brain ischaemia accelerates Aβ accumulation, through a combination of dysregulated processing of amyloid-β precursor protein (APP) and impaired Aβ clearance (reviewed [87]) and that, in turn, Aβ peptides mediate vasoconstriction, by inducing contraction of pericytes [94] and vascular smooth muscle cells [95]. Studies of microvascular endothelial cell monolayers [96,97], human APP transgenic mouse models [98], and on human post-mortem brain tissue [99] indicate that Aβ peptides also impair BBB function, in part by reducing expression of tight junction proteins. Pericyte degeneration in AD is associated with BBB breakdown [100][101][102]. Pericyte loss accelerates Aβ pathology and induces tau pathology and cognitive decline in human APP mice [103]. In human brain tissue from AD patients, a decline in the level of the pericyte marker, platelet-derived growth factor-β (PDGFRβ), was associated with increased Aβ level and reduced cerebral perfusion [104]. Aβ peptides are toxic to human brain pericytes in culture [105] and analysis of cerebrospinal fluid indicates that the level of soluble PDGFRβ, a marker of pericyte injury and BBB leakiness, is elevated in the elderly in association with the earliest detectable changes in cognitive performance [89,90].
There is increasing evidence that cerebral hypoperfusion is also linked to tau pathology. In clinically normal adults with positron emission tomography (PET) evidence of cerebral Aβ accumulation, those who also had an increased cardiovascular disease risk score were significantly more likely to show evidence of tau accumulation as well [106]. In patients with mild cognitive impairment, increased cerebrovascular burden was associated with elevated PET-Tau signal and worse cognitive performance, independently of Aβ-PET [107]. Several experimental studies have shown that modelling of cerebral hypoperfusion increases tau phosphorylation: in adult Wistar rats [108], transgenic mice with Aβ and tau accumulation [109], and rat and human brain slices exposed to oxygen and glucose deprivation [110]. In a recent autopsy study, elevated levels of soluble tau and insoluble phosphotau were associated with lower levels of endothelial tight junction proteins, claudin-5 and occludin, in AD [111]. Tau-overexpressing mice were shown to have abnormal blood vessel morphology and increased vessel density [112]. A recent study found impaired neurovascular coupling, prior to neurodegeneration, in young (2-3 months) mice expressing mutant tau [113]. There are therefore clinical and experimental data suggesting a bidirectional relationship between cerebral vascular dysfunction and pathological tau. There is evidence from recent studies that TDP-43 pathology, likewise, is associated with cerebral vascular dysfunction, including pericyte loss and small vessel disease [114,115]. Lastly, cerebral ischaemia induces phosphorylation of α-synuclein at serine-129; this is the predominant disease-related modification of α-synuclein in Lewy bodies and neurites in Parkinson disease, and dementia with Lewy bodies, and also is significantly associated with concomitant AD pathology in these Lewy body diseases [116].
Both cerebral ischaemia and systemic inflammation can induce endothelial activation, with increased expression of integrins and selectins leading to adhesion and transendothelial migration of leukocytes into the brain parenchyma. Endothelial activation, with recruitment of leukocytes, has also been shown in AD [47,117]. Leukocytes enter the brain through post-capillary venules within the parenchyma, and to a lesser extent, vessels in the leptomeninges and choroid plexus. Activated neutrophils in cerebral vessels and parenchyma were found to contribute to gliosis and cognitive impairment in human APP mice [118]. In APP/PS1 mice, respiratory infection increased infiltration of the brain by interferon γand interleukin-17-producing T cells and natural killer T cells, accompanied by gliosis and enhanced deposition of Aβ [119]. Monocytes are the most common type of peripheral immune cell that migrates via the BBB into the brain in AD. CCL2, the major ligand for CCR2 expressed on monocytes, is upregulated in microvessels in AD and plays a role in Aβ clearance (reviewed [47]). Whether endothelial activation predicts not only acute but also longer neurological complications (including dementia) in COVID-19 remains to be determined.
Angiotensin-converting enzyme-2-mediated entry of SARS-CoV-2 into human cells and activation of the classical renin-angiotensin system
SARS-CoV-2 cell attachment and entry are initiated by binding of the virus to angiotensin-converting enzyme-2 (ACE-2) [120]. The expression of ACE-2 at the cell surface is therefore likely to be a critical determinant of viral tropism and pathobiology in COVID-19. ACE-2 is expressed in stem cell-derived neurons [121] and in neuronal and glial cells within the brain [33,42], potentially enabling virus entry and spread through the cribriform plate by retrograde axonal transport along olfactory nerves [33], or from sensory fibres that pass from the lungs to the brain stem via the vagal nerve and nodose ganglion [4]. ACE-2 is also expressed in the temporal lobe and hippocampus-brain regions that are involved in cognition and memory and are affected in AD [122]. Neuronal uptake and spread within the brain were demonstrated in human ACE-2 transgenic mice infected with SARS-CoV-1 [123] and SARS-CoV-2 [124]. However, as noted above, ACE-2 is also highly expressed on endothelial cells and pericytes, and haematogenous spread followed by endothelial uptake or influx of infected peripheral immune cells are further possible routes of entry of virus into the brain.
During SARS-CoV-1 infection, ACE-2 is cleaved from the cell surface [125] by ADAM-17 [126] during viral entry [127]. Although likely, it remains to be determined whether SARS-CoV-2 similarly causes loss of cell membrane-associated ACE-2. Ordinarily, ACE-2 is a key effector of the regulatory RAS that counters the actions of the classical RAS and reduces the risk of cardiovascular disease (reviewed [128,129]), stroke [130-132], and dementia [133] (Fig. 2). ACE-2 is reduced in Alzheimer's disease (AD) [42], and cognitive decline is pronounced in ACE-2 knock-out mice [134]. If SARS-CoV-2 entry leads to loss of ACE-2 (as in SARS), angiotensin II-mediated classical RAS activation would increase the risk of cerebrovascular and neurological disturbances in COVID-19 patients. This mechanism has also been proposed to explain other vascular and pulmonary manifestations of COVID-19 [135,136]. Internalisation of ACE-2, as a result of angiotensin II activation of angiotensin receptor type 1 (AT1R), may further exacerbate damage [137]. The available reservoir of ACE-2 may be an important determinant of clinical outcome in COVID-19. Studies in rodents indicate that ACE-2 expression declines with age and is lower in males [138-141]. In contrast, oestrogen upregulates ACE-2, which could help to protect pre-menopausal women against severe complications of COVID-19 [142,143]. It may be relevant that most co-morbidities that increase the risk of complications in COVID-19, including hypertension, obesity, and diabetes, are associated with overactivity of the classical RAS. Indeed, ethnicity and genetic variation also influence baseline ACE-2 levels [144] and could provide a biological explanation as to why some ethnic groups are at higher risk of COVID-19 [145]. This hypothesis would explain why angiotensin receptor blockers (ARBs) and angiotensin-converting enzyme-1 inhibitors (ACE-Is), which downregulate the classical RAS by blocking Ang-II signalling or Ang-II synthesis, respectively, and upregulate ACE-2, may reduce mortality in COVID-19 patients [146,147].
Recombinant soluble ACE-2 (rsACE-2) shows therapeutic promise in severe COVID-19 infection; administration was reported to reduce viral titre and serum Ang-II, elevate serum Ang-(1-7) levels, and markedly reduce pro-inflammatory cytokines [160]. In addition to preventing viral binding, rsACE-2-mediated reduction in Ang-II is likely to prevent AT1R-mediated ADAM-17 cleavage of membrane-bound ACE-2 and restore balance in the RAS [161].

Fig. 2 Altered balance between the classical and regulatory parts of the renin-angiotensin system (RAS) in COVID-19. (a) Ang-II is formed by the ACE-1-mediated cleavage of Ang-I. The binding of Ang-II to AT1R within the vasculature not only induces vasoconstriction but also affects vascular permeability and neurovascular coupling, and promotes neuroinflammation and oxidative stress within the CNS. Under normal circumstances, these actions are counteracted by ACE-2 activity, which leads to the production of Ang-1-9 and Ang-(1-7) and the activation of MasR. (b) Internalisation or cleavage of membrane-bound ACE-2 following the binding and cell entry of SARS-CoV-2 virus leads to downregulation of the regulatory RAS and overaction of the classical RAS, driving vascular dysfunction, inflammation, oxidative stress and CNS injury in COVID-19.
Potential neurological consequences of increased classical renin-angiotensin system activation in COVID-19
The RAS is expressed and functions independently within the brain. Overactivation of the classical RAS, with elevated ACE-1 and Ang-II, has been demonstrated in post-mortem human brain tissue in AD [162-164]. Intracerebroventricular infusion of Ang-II into adult Wistar rats promoted Aβ production and tau pathology [165,166], and ARBs and ACE-Is protect against cognitive decline and disease pathology in transgenic APP mouse models of AD (reviewed [167]). We previously reported that the reduction of ACE-2 in brain tissue in AD correlated strongly with parenchymal Aβ and tau levels and with increased ACE-1 activity [42]. We and others have since shown that induction of ACE-2, or administration of Ang-(1-7) or peptide analogues, protects against Aβ-related cognitive decline in mice, in association with reduced neuroinflammation and oxidative stress [133,168,169].
The RAS is a critical regulator of vascular function. Ang-II binds to AT1R on vascular smooth muscle cells [170,171] to induce cerebral artery constriction [172], and on pericytes to cause constriction of microvessels [173,174]. Ang-II also modulates BBB permeability: AT1R signalling induces leakiness in endothelial cell culture models of the BBB [175] and Ang-II infusion causes BBB leakiness in mice that can be reversed by adding a superoxide scavenger, indicating a role for oxidative stress [176]. Several mediators of BBB leakiness, including vascular endothelial growth factor and matrix metalloproteinase (MMP)-2 and MMP-9, are induced by Ang-II [177][178][179]. In mice, Ang-II was shown to impair neurovascular coupling (i.e. the blood flow response to increased neural activity) in the somatosensory cortex [180] and to interfere with cerebral autoregulation [181,182]. The accumulation of ACE-1 in the extracellular matrix around cerebral arterioles (particularly in AD patients with cerebral amyloid angiopathy) suggests that locally produced (as well as circulatory) Ang-II participates in cerebrovascular dysfunction mediated by overactivation of the classical RAS [163].
Overactivation of the classical RAS may also reduce clearance of Aβ. Intra-mural peri-arterial drainage (IPAD) and para-vascular glymphatic channels have been implicated in the removal of Aβ from the brain [183][184][185]. The functioning of these drainage pathways depends on the polarised expression of aquaporin-4 in astrocytic endfeet [186], which is regulated by pericytes [187]. Focal absence of pericytes results in the redistribution of aquaporin-4 to the cell soma [51]. The RAS modulates aquaporin-4 expression in astrocytes [188], and Ang-II was shown to act via AT1R to reduce ACE-2 expression in astrocyte cultures [189]. These alterations in pericyte and astrocyte function are likely to impair the clearance of Aβ. It remains to be determined whether this occurs in COVID-19.
Neuroinflammation is strongly implicated in the development of AD. Genome-wide association studies have identified several inflammatory pathway genes as risk factors for AD [190,191]. Complement [192] and inflammasome activation [193,194] are likely to contribute to cerebrovascular dysfunction, neuronal toxicity, and the accumulation of Aβ and tau in AD. Ang-II activates the complement system [195,196] and the NLRP3 inflammasome [197], and both complement [198] and inflammasome activation [11] have been proposed to contribute to neurological disease in COVID-19 patients. A recent in silico study implicated SARS-CoV-2 activation of Toll-like receptor 4 (TLR4) as a major contributor to the inflammatory response in COVID-19 [199]. Ang-II upregulates TLR4 [200], which is a critical determinant of Ang-II-mediated vascular remodelling [201]. Blocking of TLR4 signalling delayed the development of Ang-II-mediated hypertension in rats and was associated with a dramatic increase in ACE-2 [202]. Pericytes express high levels of TLR4, which is activated by free long-chain fatty acids [203]. The spike protein of SARS-CoV-2 has been shown to bind to linoleic acid, affecting the conformation of the protein and possibly the binding of the virus to ACE-2 (pre-published study [204]). It may therefore be relevant that linoleic acid is reduced in both COVID-19 [205] and AD [206], potentially influencing the progression of both diseases. Ang-II also acts as a molecular switch regulating microglial phenotype, switching between an M1 (pro-inflammatory) and an M2 (immunoregulatory) protective phagocytic phenotype [207], which is relevant to AD pathogenesis [208]. The role of microglia in neurological manifestations of COVID-19 has yet to be fully explored.
Ang-II-mediated endothelial activation promotes the binding and diapedesis of leukocytes across the BBB; these effects are mitigated by Ang-(1-7) [209]. Pericytes too have immune-regulating properties (reviewed in [210]) and their localisation within the cerebral vasculature suggests that they may serve a 'gate-keeper' role in regulating immune cell infiltration. Although pericytes express ACE-2, it remains to be established whether they are targeted by SARS-CoV-2. Because of the pivotal role of pericytes in the regulation of cerebrovascular function (and perhaps immune cell infiltration), it is likely that virus-induced pericyte damage would compromise cerebral perfusion, BBB integrity, and immune regulation. In addition to vascular effects, angiotensin peptides derived from Ang-II, including Ang-IV and Ang-(1-7), have neuromodulatory [211] (reviewed [212]) and neuroprotective properties (reviewed [213]). Ang-(1-7) activation of the Mas (regulatory RAS) receptor and Ang-IV activation of the c-Met and insulin-regulated aminopeptidase receptors (reviewed [214]) limit tissue damage in models of stroke (reviewed [215,216]). Similarly, ACE-2 activation and/or Ang-(1-7) infusion prevents cognitive decline and disease pathology in animal models of Aβ accumulation [133,169,217,218], independent of changes in blood pressure. There is therefore a wide range of mechanisms through which reduced regulatory RAS signalling may exacerbate brain damage in COVID-19.
Is APOE ε4, an established risk factor for AD and vascular dysfunction, also a risk factor for COVID-19?
APOE polymorphism greatly influences the risk of developing AD: the risk is increased with APOE ε4 and decreased with APOE ε2 [219,220]. The physiological roles of the encoded apolipoprotein (apolipoprotein E, ApoE) have still not been fully defined. Recent studies indicate that possession of APOE ε4 is associated with cerebrovascular dysfunction, including BBB leakiness and pericyte degeneration [221], and cerebral amyloid angiopathy with capillary involvement [222]. A recent UK study reported a higher prevalence of COVID-19 in people who were carriers of APOE ε4 [223]. We previously showed that APOE ε4 individuals also have the lowest ACE-2 activity [42]. Pericyte expression of APOE ε4 was reported to promote BBB leakiness because of deficient basement membrane formation [224]. Moreover, possession of APOE ε4 is associated with reduced cerebral blood flow and increased subcortical ischaemic white matter damage [225,226], as well as neuroinflammation in AD (reviewed [227]). Future studies should aim to clarify the relationships between APOE ε4, COVID-19, cerebral vascular dysfunction and AD.
Potential upregulation of ADAM-17 in COVID-19
ACE-2 is cleaved by ADAM-17 upon SARS-CoV-1 entry into cells [125][126][127]. It seems likely that this also occurs upon SARS-CoV-2 cell entry, although the data are not yet available. Ang-II-mediated activation of ADAM-17 and shedding of ACE-2 point to a positive feedback loop in which an increased Ang-II level is associated with loss of ACE-2 [228]. Yet, it is worthy of note that ADAM-17 cleaves many cell-associated proteins that are required for proper function of the vasculature, including the ApoE receptor low-density lipoprotein receptor-related protein 1 (LRP-1), involved in transendothelial clearance of Aβ (facilitated by ApoE), and PDGFRβ, needed for maintenance of pericytes. Upregulation of ADAM-17 may therefore potentially exacerbate vascular dysfunction in COVID-19. ADAM-17 also acts as one of the α-secretases and cleaves APP, preventing the generation of Aβ (reviewed [229]). The complex divergent roles of ADAM-17 in AD and potentially COVID-19 require further investigation.
Clinical management, clinical trials and possible future targets for therapeutic intervention in COVID-19 patients
In severe COVID-19, patients present with pneumonia and those most severely affected develop ARDS with features of septic shock and multiple organ failure, requiring oxygen treatment and/or mechanical ventilation. Infection-induced inflammatory and vascular changes associated with coagulopathy and thrombosis, including venous thromboembolism (VTE), DIC and thrombotic microangiopathy with thromboembolic microvascular complications, are common complications of severe COVID-19, as illustrated by reports of VTE in 25-27% of hospitalised patients [230,231]. The International Society on Thrombosis and Hemostasis (ISTH) recommends measuring levels of d-dimer, prothrombin time, partial thromboplastin time and platelet count in hospitalised COVID-19 patients [232]. A retrospective study in Wuhan, China, at the beginning of the pandemic, found that mortality rates were lower in patients given low-molecular-weight heparin [233]. Clinical management of severe COVID-19 patients now routinely includes low-dose subcutaneous heparin [233] and/or thrombotic prophylaxis [234], unless patients are at increased risk of bleeding.
Lymphopenia with marked loss of regulatory T and B cells and natural killer cells, reduction in monocytes, eosinophils and basophils, and an increase in neutrophils are typical in severe COVID-19 (reviewed [235]). There are also elevated levels of pro-inflammatory cytokines, sometimes marked (a so-called cytokine storm). Convalescent plasma [236] and plasma exchange [237,238] improve survival rates in severe disease, and immunomodulatory therapies such as tocilizumab, a monoclonal antibody against the IL-6 receptor [239], and sarilumab, an IL-6 receptor antagonist [240], may offer protection and are currently undergoing clinical trials. Neutralising antibodies targeting other pro-inflammatory cytokines (IL-1, IL-17) may also offer protection, as too may potential inhibitors of complement system activation. Intravenous transplantation of mesenchymal stem cells was reported to improve the outcome of pneumonia in 7 COVID-19 patients in Beijing, China [241]. It seems possible that mesenchymal stem cells might also ameliorate brain injury in severe COVID-19, given their immunomodulatory and anti-inflammatory properties, and their ability to attenuate BBB damage and neuroinflammation after cerebral ischaemia [242][243][244].
There is continued debate on the role of systemic or inhaled corticosteroids in COVID-19 patients. Although earlier studies indicated a lack of benefit from corticosteroids [245], a randomised clinical trial reported by the RECOVERY collaborative group, Oxford, UK, found that systemic dexamethasone reduced mortality in severely affected COVID-19 patients [246]. Inhaled steroids had previously been shown to reduce inflammation and tissue injury in ARDS [247] (reviewed [248]). In addition to their inherent anti-inflammatory properties, steroids may have anti-viral properties [249]. Ciclesonide, an inhaled corticosteroid, was shown to suppress replication of MERS-CoV, SARS-CoV and SARS-CoV-2 in vitro [250].
The expression and activity of interferon-β (IFN-β), an endogenous protein with anti-viral and anti-inflammatory properties, are impaired in COVID-19 [251]. Interferon inhibits SARS-CoV-2 replication in vitro [252]. In a Phase II clinical trial, IFN-β in combination with anti-viral drugs shortened the duration of viral shedding and the length of hospital admission [253]. A pharmaceutical company based in the UK, Synairgen, reported a lower risk of requiring ventilation and a reduction in mortality of about 79% in a Phase II clinical trial of SNG001, an inhaled form of IFN-β (these data are currently unpublished).
Since ACE-2 is a receptor for SARS-CoV-2, and ACE-Is and ARBs are predicted to increase ACE-2 expression, it was initially feared that the use of these drugs might exacerbate COVID-19 [254]. Recent meta-analyses have instead suggested that RAS-targeting medications are protective in COVID-19 [146,147,255]. This is likely to be due to the protective role of ACE-2 in lowering or preventing overactivation of the classical RAS and minimising consequent Ang-II-mediated ischaemic and inflammatory damage, as outlined above. Several clinical trials have been registered with the National Institutes of Health (NIH) to test ARBs such as losartan in COVID-19 patients: NCT04335123, NCT04312009 and NCT04311177. Two studies are also investigating the impact of discontinuation of ACE-Is on COVID-19 (EudraCT numbers 2020-001544-26 and 2020-001206-35). Boosting the regulatory arm of the RAS may ameliorate COVID-19 because of the protective effects of ACE-2 and Ang-(1-7); interventional trials with recombinant human ACE2 (rhACE2) and Ang-(1-7) have also been registered (NCT04287686 and NCT04332666, respectively), although the rhACE2 study, which sought to recruit people aged 18-80 years in China, has since been withdrawn. A further rhACE2 study (2020-001172-15) is registered on the EU Clinical Trials Register. Other strategies being explored include several studies seeking to inhibit TMPRSS2. TLR4 blockers and ADAM-17 inhibitors might also be worthy of future study. For a comprehensive review of the pharmacological targets that are currently being investigated as potential interventions and treatments in COVID-19, the reader should refer to recent reviews [256][257][258].
Conclusion
Cerebral vascular disease is emerging as a major complication of severe COVID-19. This is likely to cause lasting brain damage and to increase the risk of stroke and vascular cognitive impairment. Several of the metabolic abnormalities that affect COVID-19 patients may also increase the risk of developing AD. Dementia and COVID-19 share many co-morbidities and risk factors, including age, gender, hypertension, diabetes, obesity and possession of APOE ε4, most of which are associated with an overactive RAS, cerebrovascular dysfunction and neuroinflammation. These shared co-morbidities and similar mechanisms may also explain the high incidence and increased rates of mortality amongst people with dementia [59,259,260]. There is an urgent need for research to better understand the pathogenesis of neurological disturbances in COVID-19, some of which have probably been covert and the prevalence of which may be considerably underestimated. This understanding is essential to establish the long-term consequences of the disease (including the potential for increased risk of dementia in some cases) and to identify means of preventing or ameliorating the brain damage.
Authors' contributions
SM collected and collated most of the data. SM and SL wrote the manuscript. SL and PK participated in the critical revision of the manuscript for important intellectual content. All of the authors reviewed and approved the final manuscript.
Funding
The research by the authors on cerebrovascular and RAS dysfunction in dementia was supported by grants from Alzheimer's Society (AS-PG-17b-004), Alzheimer's Research UK (ARUK-PG2015-11 and ARUK-NAS2016B-1) and the Bright Focus Foundation (A2016582S).
Availability of data and materials
Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable. | 2023-01-22T14:35:01.826Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "c0554b6da164d4409c5fd90ecd30bfcccc43ebef",
"oa_license": "CCBY",
"oa_url": "https://alzres.biomedcentral.com/counter/pdf/10.1186/s13195-020-00744-w",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "c0554b6da164d4409c5fd90ecd30bfcccc43ebef",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
260950919 | pes2o/s2orc | v3-fos-license | Music listening while you learn: No influence of background music on verbal learning
Background Whether listening to background music enhances verbal learning performance is still disputed. In this study we investigated the influence of listening to background music on verbal learning performance and the associated brain activations. Methods Musical excerpts were composed for this study to ensure that they were unknown to the subjects, and they were designed to vary in tempo (fast vs. slow) and consonance (in-tune vs. out-of-tune). Noise was used as a control stimulus. 75 subjects were randomly assigned to one of five groups and learned the presented verbal material (non-words with and without semantic connotation) with and without background music. Each group was exposed to one of five different background stimuli (in-tune fast, in-tune slow, out-of-tune fast, out-of-tune slow, and noise). As the dependent variable, the number of learned words was used. In addition, event-related desynchronization (ERD) and event-related synchronization (ERS) of the EEG alpha band were calculated as measures of cortical activation. Results We did not find any substantial and consistent influence of background music on verbal learning. There was neither an enhancement nor a decrease in verbal learning performance during the background stimulation conditions. We found, however, a stronger event-related desynchronization around 800–1200 ms after word presentation for the group exposed to in-tune fast music while they learned the verbal material. There was also a stronger event-related synchronization for the group exposed to out-of-tune fast music around 1600–2000 ms after word presentation. Conclusion Background music varying in tempo and consonance had neither an enhancing nor a detrimental effect on the learning of verbal material. The EEG data suggest that the different acoustic background conditions evoke different cortical activations. The reason for these different cortical activations is unclear. The most plausible explanation is that when background music draws more attention, verbal learning performance is kept constant by the recruitment of compensatory mechanisms.
Background
Whether background music influences performance in various tasks is a long-standing issue that has not yet been adequately addressed. Most published studies have concentrated on typical occupational tasks like office work, labour at the conveyor belt, or driving a car [1][2][3][4][5][6][7][8][9][10][11][12][13]. These studies mainly concluded that background music has detrimental influences on the main task (here, occupational tasks). However, the influence of background music was modulated by task complexity (the more complex the task, the stronger the detrimental effect of background music) [4], personality traits (with extraverts being more prone to be influenced by background music) [2,[4][5][6], and mood [14]. In fact, these studies mostly emphasise that mood enhancement by pleasant and alerting background music enhances performance of monotonous tasks such as those during night shifts.
Whether background music influences performance of academic and school-related skills has also been investigated. A broad range of skills has been considered, including the impact of background music on learning mathematics, reading texts, solving problems, perceiving visual or auditory information, learning verbal material (vocabulary or poems), or during decision making [2,8,…]. The findings of these studies are mixed, but most of them revealed that background music exerts a detrimental influence on the primary academic task.
The present study was designed to readdress the question of whether background music enhances verbal learning. There are a number of reasons for this renewed interest. Firstly, only a few of the preceding studies have examined the effects of background music on verbal learning in particular [31,[41][42][43][44][45], reporting more or less detrimental effects of background music on verbal learning, whereas several scientifically weak contributions have been published suggesting that listening to background music (in particular classical music) should have beneficial effects on learning languages [46,47]. Since verbal learning is an important part of academic achievement, we find it important to study the influence of background music on verbal learning more thoroughly. Secondly, the published studies used music of different genres (pop, classical) with or without vocals, instrumental pieces chosen to elicit different emotions, music with different tempi, or simple tones as background music. No study has yet controlled for the effects of emotion, complexity, tempo, and associated semantic knowledge of the musical pieces. The major aim of the present study was therefore to control these variables.
1. We used musical pieces unknown to the subjects. For this, we composed new musical pieces, avoiding any resemblance to well-known and familiar tunes. In doing this, we circumvented the well-known effect that particular contents of episodic and semantic memory are associated with musical pieces [48][49][50]. Thus, hearing a familiar musical piece might activate episodic and semantic memory and lead to preferential/biased processing of the learned or to-be-learned verbal stimuli.
2. A further step in avoiding activation of a semantic or episodic network was to use meaningless words. In combination with using unfamiliar musical pieces, this strategy ensures that established (or easy to establish) associations between musical pieces and particular words are not activated.
3. The musical pieces were designed to evoke pleasantness and activation to different degrees. Based on the mood-activation hypothesis proposed by Glenn Schellenberg and colleagues, we anticipated that music evoking more pleasant affect might influence verbal learning more positively than music evoking negative emotions [51][52][53].
4. Within the framework of the theory of changing state effects [54,55], we anticipated that rapidly changing auditory information would distract verbal learning more seriously than slowly changing music. Thus, slower musical pieces would exert less detrimental effects on verbal learning than faster music.
5. Given that the potentially beneficial effects of background music are also explained by a more or less unspecific cortical activation pattern, which should be evoked by the music and would change the activation of the cortical network involved in controlling learning and memory processes, we also registered EEG measures during learning and recognition. Here, we used event-related desynchronization and event-related synchronization of the EEG alpha band as indices of cortical activation. Our interest in event-related synchronization and desynchronization in the alpha band relates to the work of roughly the last two decades on alpha power demonstrating the relationship between alpha-band power and cortical activity [56]. In addition, several recent combined EEG/fMRI and EEG/PET papers strongly indicate that power in the alpha band is inversely related to activity in lateral frontal and parietal areas [57][58][59], and it has been shown that the alpha band reflects cognitive and memory performance. For example, good memory performance is related to a large phasic (event-related) power decrease in the alpha band [56].
Methods
Subjects
77 healthy volunteers took part in this experiment (38 men and 39 women). Two subjects were excluded because of data loss during the experiment. All subjects were recruited through advertisements placed at the University of Zurich and ETH Zurich. All subjects underwent evaluation to screen for chronic diseases, mental disorders, medication, and drug or alcohol abuse. Normal hearing ability was confirmed for all subjects using standard audiometry. For intelligence assessment, a short test [60] was used that is known to correlate with standard intelligence test batteries (r = 0.7–0.8). In addition, the NEO-FFI [61] was used to measure the personality trait "extraversion" because of its strong correlation with dual-task performance [4][5][6]. All subjects were tested for basic verbal learning ability using a standard German verbal learning test (Verbaler Lern- und Merkfähigkeitstest) [62,63]. All subjects were consistently right-handed, as assessed with the Annett Handedness Questionnaire [64]. All subjects indicated that they had not received formal musical education for more than five years during their school years and that they had not played any musical instrument in the last 5 years. We also asked the subjects whether they had previously learned while listening to music. Most of them confirmed having done so, and a few (n = 5) indicated having done so frequently. The sample characteristics of the tested groups are listed in Table 1. There were no statistical between-group differences in these measures. Each subject gave written, informed consent and received 30 Swiss Francs for participation. The study was carried out in accordance with the Declaration of Helsinki principles and was approved by the ethics committee of the University of Zurich.
Study design
The basic principle of this study was to explore verbal memory performance under different acoustic background stimulation conditions. The subjects performed a verbal memory test (see below) while acoustic background stimuli were present (background+) or not present (background-). Four different musical pieces and a noise stimulus were used as acoustic background stimuli (in-tune fast, in-tune slow, out-of-tune fast, out-of-tune slow, noise; for a description of these acoustic stimuli see below). The 75 subjects were randomly assigned to one of these five groups, each group therefore comprising 15 subjects. These five groups did not differ in terms of age, IQ, or extraversion/introversion (tested with the Kruskal-Wallis test).
In the background+ condition, participants were required to learn while one of the above-mentioned background stimuli was present. Thus, the experiment comprised two factors: a grouping factor with five levels (Group: in-tune fast, in-tune slow, out-of-tune fast, out-of-tune slow, noise) and a repeated measurements factor with two levels (Background: without acoustic background [background-] and with acoustic background [background+]). We also measured the electroencephalogram (EEG) during the different verbal learning conditions to explore whether the different learning conditions are associated with particular cortical activation patterns. The order of background stimulation (verbal learning with or without background stimulation) was counterbalanced across all subjects. There was an intermittent period of 12-14 minutes between the two learning sessions, during which the subjects rated the quality of the background stimuli and rested for approximately 8 minutes.
Background stimuli
Several studies have shown that tempo and the level of consonance/dissonance of musical excerpts strongly determine the level of arousal and emotional feelings [51,65,66]. We therefore designed 4 different 16-minute-long musical pieces differing in musical tempo and tuning. The musical excerpts were computerised piano sounds designed using FL Studio 4 software [67]. We composed a musical excerpt in C major consisting of a melody and accompanying fundamental chords. This original musical excerpt was systematically varied in terms of tuning (in-tune, out-of-tune) and tempo (fast, slow), resulting in four different musical background stimuli (Figure 1). Two of these background stimuli were fast (in-tune fast, out-of-tune fast), and two of them were slow (in-tune slow, out-of-tune slow) (musical excerpts can be downloaded as supplementary material [68]). The in-tune excerpts comprised the typical semitone steps between the tones, while in the out-of-tune excerpts the melody was pitch-shifted by one quarter tone above the original pitch, resulting in an out-of-tune percept. The tempo of the musical excerpts was varied by changing the beats per minute (160 bpm for fast and 60 bpm for slow) [51]. In addition, we designed a noise stimulus (also 16 minutes long; brown noise) with a temporal envelope similar to that of the other four musical excerpts. In summary, we applied five different kinds of background stimuli: in-tune fast, in-tune slow, out-of-tune fast, out-of-tune slow, and noise.
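The two manipulations reduce to simple arithmetic, sketched below in Python for illustration (the actual stimuli were produced in FL Studio 4; the function names here are our own and not part of any stimulus-generation code): the quarter-tone upward shift multiplies every melody frequency by 2^(1/24), and the tempo manipulation rescales the beat period.

```python
# Illustrative arithmetic only; the stimuli themselves were built in FL Studio 4.
QUARTER_TONE = 2 ** (1 / 24)  # frequency ratio of one quarter tone (~1.0293)

def detune_quarter_tone(freq_hz: float) -> float:
    """Shift a melody pitch one quarter tone upward (the out-of-tune version)."""
    return freq_hz * QUARTER_TONE

def beat_period_s(bpm: float) -> float:
    """Seconds per beat: 160 bpm (fast) -> 0.375 s, 60 bpm (slow) -> 1.0 s."""
    return 60.0 / bpm

print(detune_quarter_tone(440.0))             # A4 -> ~452.9 Hz
print(beat_period_s(160), beat_period_s(60))  # 0.375 1.0
```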
In a pilot study, 21 subjects (who did not take part in the main experiment) evaluated these stimuli according to the experienced arousal and valence on a 5-point Likert scale (ranging from 0 to 4). In-tune music was generally rated as more pleasant than both out-of-tune music and noise (mean valence rating: in-tune fast: 3.04, in-tune slow: 2.5, out-of-tune fast: 1.7, out-of-tune slow: 0.9, noise: 0.6; significant differences between all stimuli). In terms of arousal, the slow musical excerpts were rated as less arousing than the fast excerpts and the noise stimulus (mean arousal rating: in-tune fast: 2.4, in-tune slow: 0.8, out-of-tune fast: 2.14, out-of-tune slow: 1.05, noise: 1.95; significant differences between all stimuli). These five acoustic stimuli were used as background stimuli for the main verbal learning experiment. In this experiment, background stimuli were binaurally presented via headphones (Sennheiser HD 25.1) at approximately 60 dB.
[Table 1 Mean sample characteristics of the five groups studied: in-tune fast (ITF), in-tune slow (ITS), out-of-tune fast (OTF), out-of-tune slow (OTS), and noise.]
Verbal memory test
Verbal learning was examined using a standard verbal learning test, which is frequently used for investigations with German-speaking subjects (Verbaler Lerntest, VLT). This test has been shown to validly measure verbal long-term memory [62,63]. The test comprises 160 items and includes neologisms which evoke either strong (80 items) or weak (80 items) semantic associations. In the test, most of the neologisms are novel (i.e., presented only once), while 8 of the neologisms are repeatedly presented (7 repetitions), resulting in a total of 104 novel trials and 56 repetition trials. In the procedure used here, subjects were seated in front of a PC screen in an electromagnetically shielded room, and they were asked to discriminate between novel neologisms (NEW) and those that had been presented in previous trials (OLD, i.e. repetitions). Subjects were instructed to respond after the presentation of every word by pressing either the right or left button of a computer mouse (right for OLD, left for NEW). Each trial started with a fixation cross (0-250 ms) followed by the presentation of a particular word (1150-2150 ms). The inter-trial interval, that is, the time between the onsets of the words of two consecutive trials, was 6 seconds. Performance in this memory test was measured as the number of correct responses in the recognition of new and old words. For this test we had two parallel versions (versions A and B), which allowed us to test the same subjects in the two different background conditions (i.e., [background-] and [background+]).
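For concreteness, the trial structure just described can be written down as a small Python sketch (the timing constants come from the text, while the names and the helper function are our own illustration; the real test used a fixed pseudo-random item order rather than anything generated here):

```python
# Trial structure of the recognition test (times in ms, relative to trial onset).
FIXATION_ON, FIXATION_OFF = 0, 250  # fixation cross
WORD_ON, WORD_OFF = 1150, 2150      # neologism on screen; respond OLD/NEW
ONSET_TO_ONSET = 6000               # inter-trial interval between word onsets

N_NOVEL_TRIALS = 104                # neologisms presented only once ("NEW")
N_REPEATED_WORDS, N_REPETITIONS = 8, 7
N_REPETITION_TRIALS = N_REPEATED_WORDS * N_REPETITIONS  # 56 "OLD" trials
N_TRIALS = N_NOVEL_TRIALS + N_REPETITION_TRIALS         # 160 trials in total

def word_onset_ms(trial_index: int) -> int:
    """Absolute onset time of the word in a given trial within the block."""
    return trial_index * ONSET_TO_ONSET + WORD_ON
```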
Psychometrical measures
Several psychological measures were obtained after each experimental condition. First, the participants rated their subjective mood state using the MDBF questionnaire (Multidimensionaler Befindlichkeitsfragebogen) [69]. The MDBF comprises 12 adjectives (content, rested, restless, bad, worn-out, composed, tired, great, uneasy, energetic, relaxed, and unwell) for which the subjects had to indicate on a 5-point scale whether the particular adjective described their actual feeling (1 = not at all; 5 = perfect). These evaluations are entered into summary scores along the three dimensions valence, arousal, and alertness. The acoustic background stimuli were evaluated using an adapted version of the Music Evaluation Questionnaire (MEQ) [70]. This questionnaire comprises questions evaluating the preference for the presented musical stimuli and how relaxing they are. In this questionnaire, subjects were also asked how they felt after listening to the music (i.e. cheerful, sad, aggressive, harmonious, drowsy, activated, and excited). All items were rated on 5-point Likert scales ranging from (1) not at all to (5) very strongly. The 10 scales were reduced to 3 scales (on the basis of a factor analysis): the subjective feelings of pleasantness, activation (arousal), and sadness.
EEG recording
The electroencephalogram (EEG) was recorded from 30 scalp electrodes (Ag/AgCl) using a Brain Vision amplifier system (BrainProducts, Germany). Electrodes were placed according to the 10-20 system. Two additional channels were placed below the outer canthi of each eye to record electro-oculograms (EOG). All channels were recorded against a reference electrode located at FCz. EEG and EOG were analogue filtered (0.1-100 Hz) and recorded with a sampling rate of 500 Hz. During recording, impedances on all electrodes were kept below 5 kΩ.
EEG preprocessing
EEG data were preprocessed and analysed using BrainVision Analyzer (BrainProducts, Munich, Germany) and Matlab (Mathworks, Natick, MA). EEG data were off-line filtered and re-referenced to a common average reference. Artefacts were rejected using an amplitude threshold criterion of ± 100 μV. Independent component analysis was applied to remove ocular artefacts [71,72]. EEG data were then segmented into epochs (−1000 to 4000 ms) relative to the onset of the word stimulus. Analysis of the time course of event-related desynchronization and event-related synchronization was performed according to the classical method described elsewhere [73,74]. We included NEW (i.e., neologisms) and OLD (i.e., presented previously) trials in the event-related synchronization/desynchronization analysis, and only those trials correctly identified as NEW and OLD by the participants. In this study, we calculated event-related synchronization/desynchronization in the alpha band. Several recent combined EEG/fMRI and EEG/PET papers strongly indicate that power in the alpha band is inversely related to activity in lateral frontal and parietal areas [57][58][59], and it has been shown that the alpha band reflects cognitive and memory performance. In the procedure used here, event-related synchronization/desynchronization in the alpha band was analysed by filtering the artefact-free segments with a digital band-pass filter (8-12 Hz). Amplitude samples were then squared and averaged across all trials, and a low-pass filter (4 Hz) was used to smooth the data. The mean alpha-band activity in the latency band −1000 to 0 ms relative to word stimulus onset was defined as the intra-experimental resting condition (i.e., the baseline condition). To quantify the power changes during verbal learning, event-related synchronization/desynchronization was calculated according to the following formula: event-related desynchronization (ERD)/event-related synchronization (ERS) = ((band power_task − band power_baseline) × 100) / band power_baseline. Note that negative values indicate a relative decrease in the alpha band (event-related desynchronization) during the experimental condition compared to the baseline condition, while positive values indicate an increase of alpha-band power during the experimental condition (event-related synchronization). In order to avoid multiple comparisons, event-related synchronization/desynchronization values were averaged over 10 time windows with a duration of 400 ms, and were collapsed for the frontal (FP1, FP2, F7, F3, Fz, F4 and F8), central-temporal (T7, C3, Cz, C4 and T8), and parieto-occipital (P7, P3, Pz, P4, P8, O1, Oz and O2) electrode locations [75] (see also Figure 2).
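The power analysis described above can be condensed into a few lines of NumPy/SciPy, shown here as a minimal sketch; note that the filter type and order were not specified in the text, so a 4th-order zero-phase Butterworth filter is an assumption of this illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500  # sampling rate (Hz), as in the recording

def erd_ers(epochs, baseline_idx, task_idx):
    """epochs: (n_trials, n_samples) array for one channel, stimulus-locked.

    Returns the ERD/ERS value in percent: negative values indicate
    desynchronization, positive values synchronization."""
    epochs = np.asarray(epochs, dtype=float)
    b, a = butter(4, [8, 12], btype="band", fs=FS)  # alpha band, 8-12 Hz
    alpha = filtfilt(b, a, epochs, axis=1)          # zero-phase band-pass
    power = (alpha ** 2).mean(axis=0)               # square, average over trials
    b_lp, a_lp = butter(4, 4, btype="low", fs=FS)   # 4 Hz smoothing filter
    power = filtfilt(b_lp, a_lp, power)
    p_base = power[baseline_idx].mean()             # -1000..0 ms baseline window
    p_task = power[task_idx].mean()                 # e.g. one 400 ms segment
    return (p_task - p_base) * 100.0 / p_base       # formula from the text
```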
Taken together, we obtained a time course of event-related synchronization/desynchronization changes over three different cortical regions (frontal, central-temporal, parieto-occipital) and over a time course of 4000 ms after stimulus presentation (10 event-related synchronization/desynchronization values for the entire time course). For this paper, we restrict our analysis to the first 5 time segments after word presentation, and thus concentrate on a time interval of 0-2000 ms after word presentation.
Statistical analysis of event-related synchronization/desynchronization
For the main analysis of the event-related synchronization/desynchronization, a four-way ANOVA with repeated measurements on the following factors was applied: Time Course (5 epochs after word stimulus presentation), Brain Area (3 levels: frontal, central-temporal, parieto-occipital), Background (2 levels: learning with acoustic background = background+, learning without acoustic background = background-), and the grouping factor Group (5 levels: in-tune fast, in-tune slow, out-of-tune fast, out-of-tune slow, noise). Following this, we computed a four-way repeated measurements ANOVA, including the event-related synchronization/desynchronization data obtained for the left- and right-sided electrodes of interest, to examine whether hemispheric differences might influence the overall effect. For this, we used the multivariate approach to handle the problem of heteroscedasticity [80]. Results were considered significant at the level of p < 0.01. We used this more conservative approach in order to guard against problems associated with multiple testing. All statistical analyses were performed using the statistical software package SPSS 17.01 (MAC version). In the case of significant interaction effects, post-hoc paired t-tests were computed using the Bonferroni-Holm correction [81].
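The Bonferroni-Holm step-down procedure applied to the post-hoc paired t-tests is a standard algorithm; a small sketch follows (this is the generic procedure, not the authors' SPSS implementation; the α of 0.01 matches the threshold stated above):

```python
def holm_bonferroni(p_values, alpha=0.01):
    """Return a list of booleans: True where the null hypothesis is rejected."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # ascending p-values
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):  # compare to alpha / (m - k + 1)
            reject[i] = True
        else:
            break  # the step-down stops at the first non-rejected test
    return reject

print(holm_bonferroni([0.001, 0.003, 0.03, 0.2]))  # [True, True, False, False]
```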
In order to assess whether there are between-hemisphere differences in the cortical activations during listening to background music, we subjected the event-related synchronization/desynchronization data of the frontal, central-temporal, and parietal-occipital ROIs separately to a four-way ANOVA with Hemisphere (left vs. right), Group, Background, and Time as factors (Hemisphere and Background are repeated measurements factors, and Group is the grouping factor). If background music, and especially background music of different valence, evoked different lateralization patterns, then the interaction between Hemisphere and Group, or between Hemisphere, Group, and Background, should become significant. Thus, we were only interested in these interactions.
Learning performance
Subjecting the verbal learning data to a 2-way repeated measurements ANOVA with repeated measurements on one factor (Background: background+ and background-) and the grouping factor (Group: in-tune fast, in-tune slow, out-of-tune fast, out-of-tune slow, noise) revealed no significant main effect or interaction.
The MEQ rating data were subjected to three 2-way ANOVAs with one repeated measurements factor (Background: background+ vs. background-) and the grouping factor (Group). There was a significant main effect of Group with respect to pleasantness, with the in-tune melodies receiving the highest pleasantness ratings and the noise stimulus the lowest (F(4,70) = 12.5, p < 0.001, η² = 0.42). There was also a trend for an interaction between Background and Group (F(4,70) = 2.40, p = 0.06, η² = 0.12), qualified by reduced pleasantness ratings for the in-tune melodies in the condition in which the subjects were learning. For the sadness scale we obtained a main effect of Background (F(1,70) = 8.14, p = 0.006, η² = 0.10), qualified by lower sadness ratings for the music heard while the subjects were learning.
The kind of acoustic background also influenced the subjective experience of pleasantness (F(1,70) = 24.6, p < 0.001, η² = 0.26), with less pleasantness during learning while acoustic background stimulation was present. The subjective feelings of arousal and sadness did not change as a function of the different acoustic background conditions.
EEG data
The event-related synchronization/desynchronization data of the alpha band were first subjected to a 4-way ANOVA with repeated measurements on three factors (Time = 5 levels; Brain Area = 3 levels: frontal, central-temporal, and parieto-occipital; Background = 2 levels: background+ and background-) and one grouping factor (Group; 5 levels: in-tune fast, in-tune slow, out-of-tune fast, out-of-tune slow, noise). Where possible, we used the multivariate approach to test the within-subjects effects, since this test is robust against violations of heteroscedasticity [80]. Figure 4 shows the mean ERDs and ERSs as topoplots, broken down for the 10 time segments and for learning with (background+) and without (background-) musical background. Figure 5 depicts the grand averages of event-related synchronization and desynchronization for the frontal, central-temporal, and parieto-occipital leads.
This complex ANOVA revealed several main effects and interactions. The event-related synchronization/desynchronization data showed a typical time course, with a strong event-related desynchronization peaking at the second time segment (400-800 ms after word presentation). After reaching the maximum event-related desynchronization, the alpha band synchronises again, with the strongest event-related synchronization at the 5th time segment (1600-2000 ms after word presentation). This time course is highly significant (F(4,67) = 106.2, p < 0.001, η² = 0.86). The time courses of event-related synchronization/desynchronization differ between the brain areas, with larger event-related desynchronization for the parieto-occipital leads at the second time segment and larger event-related synchronization at the later time segments. Figure 6 shows the mean event-related synchronization/desynchronization for time segments 3 and 5.
The ANOVA conducted to examine potential between-hemisphere differences in terms of event-related synchronization/desynchronization revealed that none of the interactions of interest (Hemisphere × Group, Hemisphere × Group × Background) turned out to be significant, even when the statistical threshold was relaxed to p = 0.10.
Discussion
The present study examined the impact of background music on verbal learning performance. For this, we presented musical excerpts that were systematically varied in consonance and tempo. According to the Arousal-Emotion hypothesis, we anticipated that consonant and arousing musical excerpts would influence verbal learning positively, while dissonant musical excerpts would have a detrimental effect on learning. Drawing on the theory of changing state effects, we anticipated that rapidly changing auditory information would distract verbal learning more seriously than slowly changing music. Thus, slower musical pieces were expected to exert less detrimental effects on verbal learning than faster music. In order to control for the global influence of familiarity with and preference for specific music styles, we designed novel in-house musical excerpts that were unknown to the subjects. Using these excerpts, we did not uncover any substantial and consistent influence of background music on verbal learning.
The effect of passive music listening on cognitive performance is a long-standing matter of research. The findings of studies on the effects of background music on cognitive tasks are highly inconsistent, reporting either no effect, declines, or improvements in performance (see the relevant literature mentioned in the introduction). The difference between the studies published to date and the present study is that we used novel musical excerpts and applied experimental conditions controlling for different levels of tempo and consonance. According to the Arousal-Emotion hypothesis, we hypothesised that positive background music arouses the perceiver and evokes positive affect. However, this hypothesis was not supported by our data, since there was no beneficial effect of positive background music on verbal learning. In addition, according to the theory of changing state effects, we anticipated that rapidly changing auditory information would distract verbal learning more seriously than slowly changing music. This theory was also not supported by our data.
As mentioned above, the findings of previously published studies examining the influence of background music on verbal learning and other cognitive processes are inconsistent, with most studies reporting no influence on verbal learning and other cognitive tasks. But before arguing too strongly for the non-existence of effects of background music on verbal learning, we will discuss the differences between our study and the previous studies in this research area. Firstly, it is possible that the musical excerpts used may not have induced sufficiently strong arousal and emotional feelings to exert beneficial or detrimental effects on verbal learning. Although the four different musical excerpts differed significantly in terms of valence and arousal, the differences were a little smaller in this experiment than in the pilot experiment. Had we used musical excerpts inducing stronger emotional and arousal reactions, the effect on verbal learning might have been stronger. In addition, the difference between the slow and fast music in terms of changing auditory cues might not have been strong enough to influence verbal learning. There are, however, no data available to date indicating "optimal" levels of arousal and/or emotion, or of changes in auditory cues, for facilitating verbal learning.
A further aspect that distinguishes our study from previous studies is that the musical pieces were unknown to the subjects. It has been shown that music plays an important role in autobiographical memory formation, such that familiar, personally enjoyable, and arousing music can elicit or more easily facilitate the retrieval of autobiographical memories (and possibly other memory contents) [48][49][50]. It is conceivable that the entire memory system (not only autobiographical memory) is activated (aroused) by this kind of music, which in turn improves encoding and recall of information. Recently, Särkämö et al. [82] demonstrated that listening to preferred music improved verbal memory and attentional performance in stroke patients, thus supporting the Arousal-Emotion hypothesis. However, the specific mechanisms responsible for improving memory functions while listening to music (if indeed present) are still unclear. It is conceivable that the unknown musical pieces used in our study did not activate the memory system, thus exerting no influence on the verbal learning system.
Although verbal learning performance did not differ between the different background stimulation conditions, there were some interesting differences in the underlying cortical activations. Before discussing them, we will outline the similarities in cortical activations observed for different background conditions. During learning, there was a general increase of event-related desynchronization 400-1200 ms after word presentation followed by an event-related synchronization in a fronto-parietal network. This time course indicates that this network is cortically more activated during encoding and retrieval in the first 1200 ms. After this, the activation pattern changes to event-related synchronization, which most likely reflects top-down inhibition processes supporting the consolidation of learned material [56].
Although the general pattern of cortical activation was quite similar across the different background conditions, we also identified some considerable differences. In the time window between 800 and 1200 ms after word presentation, we found stronger event-related desynchronization at frontal and parieto-occipital areas for the in-tune-fast group only. Clearly, frontal and parieto-occipital areas are more strongly activated during verbal learning in the ambient setting of in-tune fast music but not in the other conditions. Frontal and parietal areas are strongly involved in different stages of learning. The frontal cortex is involved in encoding and retrieval of information, while the parieto-occipital regions are part of a network involved in storing information [83][84][85][86][87]. One of the reasons for this event-related desynchronization increase could be that the fronto-parietal network devotes greater cortical resources to the verbal learning material in the context of in-tune fast music (which, incidentally, was the musical piece rated as most pleasant and arousing). But we did not find a corresponding behavioural difference in learning performance. Thus, the activation increase may have been too small to be reflected in a behaviour-enhancing effect. A further possibility is that the in-tune fast music is the most distracting of the background music types, therefore eliciting more bottom-up driven attention compared with the other musical pieces and constraining attentional resource availability for the verbal material. The lack of a decline in learning performance suggests that attentional resource capacity was such that this distracting effect (if indeed present) was compensated for.
A further finding is the stronger event-related synchronization in the fronto-parietal network at time segment 5 (1600-2000 ms after word presentation) for the out-of-tune groups, and especially for the out-of-tune-fast group. Event-related synchronization is considered to reflect the action of inhibitory processes after a phase of cortical activation. For example, Klimesch et al. [56] propose that event-related synchronization reflects a kind of top-down inhibitory influence. Presumably, the subjects exert greater top-down inhibitory control to overcome the adverse impact on learning in the context of listening to out-of-tune background music.
We did not find different lateralisation patterns for event-related synchronization/desynchronization in the context of the different background music conditions. This is in contrast to some EEG studies reporting between-hemisphere differences in neurophysiological activation during listening to music of different valence. Two studies identified a left-sided increase of activation, especially when the subjects listened to positively valenced music, as opposed to negative music, which evoked a preponderance of activation in the right hemisphere (mostly in the frontal cortex) [76,77,88]. These findings are in correspondence with studies demonstrating dominance of left-sided activation during approach-related behaviour and stronger activation on the right during avoidance-related behaviour [89]. Thus, music eliciting positive emotions should also evoke more left-sided activation (especially in the frontal cortex), but not all studies support these assumptions. For example, the studies of Baumgartner et al. [78] and Sammler et al. [79] did not uncover lateralised activations in terms of alpha power changes during listening to positive music. Sammler et al. identified an increase in midline theta activity and not a lateralised activation pattern. In addition, most fMRI studies measuring cortical and subcortical activation patterns during music listening did not report lateralised brain activations [90][91][92][93][94][95]. All of these studies report mainly bilateral activation in the limbic system. One study also demonstrates that the brain responses to emotional music change substantially over time [96]. In fact, a differentially lateralised activation pattern due to the valence of the presented music is not a typical finding. However, we believe that the particular pattern of brain activation and possible lateralisation patterns depend on several additional factors influencing brain activation during music listening. For example, experience with or preference for particular music are potential candidates as modulatory factors. Which of these factors are indeed responsible for lateralised activations cannot be clarified on the basis of current knowledge.
Future experiments should use musical pieces with which subjects are familiar and which evoke strong emotional feelings. Such musical pieces may influence memory performance and the associated cortical activations entirely differently. Future experiments should also seek to examine systematically the effect of pre-experimentally present or experimentally induced personal belief in the ability of background music to enhance performance and, depending on the findings, this should be controlled for in other subsequent studies. There may also be as yet undocumented, strong inter-individual differences in the modulatory impact of music on various psychological functions. Interestingly, subjects have different attention and empathy performance styles, and this may influence the performance of different cognitive functions.
Limitations
A methodological limitation of this study is the use of artificial musical stimuli, which were unknown to the subjects. In general, we listen to music we really like when we have the opportunity to choose music deliberately. Thus, if subjects were to learn while listening to music they really appreciate, the results might be entirely different. However, this has to be shown in future experiments.
Conclusion
Using different background music varying in tempo and consonance, we found no influence of background music on verbal learning. There were only changes in cortical activation in a fronto-parietal network (as measured with event-related desynchronization) around 800-1200 ms after word presentation for the in-tune-fast group, most likely reflecting a larger recruitment of cortical resources devoted to the control of memory processes. For the out-of-tune groups we found stronger event-related synchronization in a fronto-parietal network around 1600-2000 ms after word presentation, thought to reflect stronger top-down inhibitory influences on the memory system. We suggest that this top-down inhibitory influence is at least in part a response to the slightly more distracting out-of-tune music, and that it enables the memory system to adequately reengage in processing the verbal material. | 2018-05-30T21:28:19.071Z | 2010-01-07T00:00:00.000 | {
"year": 2010,
"sha1": "3a03670695d5f14c1c6214935b74279722481f05",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "8d9ff866ac665ca5c3b982ee368c64df75cc08f2",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
236488493 | pes2o/s2orc | v3-fos-license | The yields of light meson resonances in neutrinonuclear interactions at ⟨E_ν⟩ ≈ 10 GeV
The total yields of all the well-established light mesonic resonances (up to the $\phi$(1020) meson) are estimated in neutrinonuclear interactions at ⟨E_ν⟩ ≈ 10 GeV, using the data obtained with the SKAT bubble chamber. For some resonances, the yields in the forward and backward hemispheres in the hadronic c.m.s. are also extracted. From the comparison of the obtained data with the available higher-energy data, an indication is found that the resonance yields rise almost linearly as a function of the mean mass ⟨W⟩ of the neutrinoproduced hadronic system. The fractions of pions originating from the light resonance decays are inferred.
Introduction
The space-time pattern of the leptoproduced quark-string fragmentation into hadrons would be rather incomplete without discerning to what extent the hadrons detected in a given phase-space domain originate directly from the string fragmentation and to what extent they are decay products of other, higher-mass string fragments, i.e. resonances. Although it is generally accepted that the latter cause a significant fraction of the yields of stable hadrons (pions and kaons), a quantitative estimation of this fraction is not available yet, at least for neutrinoinduced reactions. At present, more or less detailed experimental data on the neutrinoproduction of mesonic resonances are available for ρ mesons ([1,2,3,4,5] and references therein) and for charged K*(892) mesons ([6,7] and references therein), while those for other resonances are rather scarce and obtained at high energies of the (anti)neutrino, ⟨E_ν⟩ ∼ 40-50 GeV [2]. The aim of this work is to measure, under the same experimental conditions, the yields of all the well-established light mesonic resonances (with masses up to ∼1 GeV/c²) in neutrinonuclear charged current interactions at intermediate energies (⟨E_ν⟩ ∼ 10 GeV). In Section 2, the experimental procedure is described. Section 3 presents the experimental data on the total yields of 12 mesonic resonances: η, ρ⁰, ρ⁺, ρ⁻, ω, K*(892)⁰, K̄*(892)⁰, K*(892)⁺, K*(892)⁻, η′(958), f₀(980) and φ. For some cases, the differential yields in the forward and backward hemispheres (in the hadronic c.m.s.) are also presented. The dependence of the resonance yields on their mass and on the invariant mass W of the created hadronic system is compared to the higher-energy neutrinoproduction and e⁺e⁻ annihilation data. Section 4 is devoted to the estimation of the fraction of pions originating from the decays of the light meson resonances. The results are summarized in Section 5.
Experimental procedure
The experiment was performed with the SKAT bubble chamber [8], exposed to a wideband neutrino beam obtained with 70 GeV primary protons from the Serpukhov accelerator. The chamber was filled with a propane-freon mixture containing 87 vol% propane (C₃H₈) and 13 vol% freon (CF₃Br), with the percentage of nuclei H:C:F:Br = 67.9:26.8:4.0:1.3%. A 20 kG uniform magnetic field was provided within the operating chamber volume. Charged current interactions containing a negative muon with momentum p_µ > 0.5 GeV/c were selected. Other negatively charged particles were considered to be π⁻ mesons, except for the cases explained below. Protons with momentum below 0.6 GeV/c and a fraction of protons with momentum 0.6-0.85 GeV/c were identified by their stopping in the chamber. Non-identified positively charged particles were considered to be π⁺ mesons, except for the cases explained below. Events in which the errors in measuring the momenta of all charged secondaries and photons were less than 60% and 100%, respectively, were selected. The mean relative error ∆p/p in the momentum measurement for muons, pions and gammas was, respectively, 3%, 6.5% and 19%. Each event was given a weight which corrects for the fraction of events excluded due to improper reconstruction. More details concerning the experimental procedure, in particular the estimation of the neutrino energy E_ν and the reconstruction of π⁰ → 2γ decays, can be found in our previous publications [9,5].
The events with 3 < E_ν < 30 GeV were accepted, provided that the reconstructed mass W of the hadronic system exceeded 1.8 GeV. No restriction was imposed on the momentum transfer squared Q². The number of accepted events was 5242 (6557 weighted events). The mean values of the kinematical variables were ⟨E_ν⟩ = 9.8 GeV, ⟨W⟩ = 2.8 GeV, ⟨W²⟩ = 8.7 GeV², ⟨Q²⟩ = 2.6 (GeV/c)². About 8% of neutrino interactions occur on free hydrogen. This contribution was subtracted using the method described in [10,11]. The effective atomic weight of the composite nuclear target is estimated [9] to be approximately equal to A_eff = 21±1, thus allowing a comparison of our results with those obtained in ν(ν̄)-Ne interactions at higher energies [2]. When considering the production of resonances decaying into charged kaon(s), the K⁻ and K⁺ hypothesis was applied, respectively, for negatively charged particles and non-identified positively charged particles (provided that the kaon hypothesis is not rejected by the momentum-range relation in the propane-freon mixture), introducing thereby proper corrections for the momenta of these particles.
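A schematic illustration of the event selection just described, in hypothetical Python (the event-record field names are invented for illustration and do not correspond to any actual SKAT data format):

```python
# Selection cuts and weighting as stated in the text (units: GeV, GeV/c).
def accept(event) -> bool:
    return (event["p_mu"] > 0.5             # negative muon momentum cut
            and 3.0 < event["E_nu"] < 30.0  # reconstructed neutrino energy
            and event["W"] > 1.8)           # hadronic invariant mass

def weighted_count(events) -> float:
    """Each event carries a weight correcting for reconstruction losses."""
    return sum(e["weight"] for e in events if accept(e))
```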
The events with 3 < E_ν < 30 GeV were accepted, provided that the reconstructed mass W of the hadronic system exceeds 1.8 GeV. No restriction was imposed on the momentum transfer squared Q². The number of accepted events was 5242 (6557 weighted events). The mean values of the kinematical variables were ⟨E_ν⟩ = 9.8 GeV, ⟨W⟩ = 2.8 GeV, ⟨W²⟩ = 8.7 GeV², ⟨Q²⟩ = 2.6 (GeV/c)². About 8% of the neutrino interactions occur on free hydrogen. This contribution was subtracted using the method described in [10,11]. The effective atomic weight of the composite nuclear target is estimated [9] to be approximately A_eff = 21 ± 1, thus allowing a comparison of our results with those obtained in ν(ν̄)-Ne interactions at higher energies [2]. When considering the production of resonances decaying into charged kaon(s), the K− and K+ hypothesis was applied, respectively, to negatively charged particles and to non-identified positively charged particles (provided that the kaon hypothesis is not rejected by the momentum-range relation in the propane-freon mixture), introducing thereby proper corrections to the momenta of these particles.

The decay modes used for the resonance reconstruction and the experimental widths Γ_exp of the corresponding mass peaks are summarized in Table 1.

Table 1.
resonance      ρ0      f0      ρ±      η         ω         η′       K*0     K*±      φ
decay mode     π+π−    π+π−    π±π0    π+π−π0    π+π−π0    π+π−γ    K±π∓    K0s π±   K+K−
Γ_exp (MeV)    47      55      110     66        90        100      28      30       10

In the cases when Γ_exp significantly exceeds the natural width of the resonance, Γ_0, the mass distributions were fitted as a sum of background (BG(m)) and Gaussian (G_R(m)) distributions,

dN/dm = BG(m) + G_R(m),   (1)

where for the Gaussian width σ_R the approximate relation Γ_exp ≈ 2σ_R √(2 ln 2) was used. Otherwise, the mass distributions were fitted by the form

dN/dm = BG(m) + BW_R(m),   (2)

where BW_R(m) is the corresponding Breit-Wigner function [12], with Γ_0 replaced by an effective Γ_eff estimated from simulations in which the BW function is smeared to account for the experimental resolution. The form (1) was applied for η, ω, η′ and φ, while the form (2) was used for ρ0, ρ±, K*(892)0 and K*(892)±. For f0(980), whose Γ_exp does not much exceed its natural width Γ_f0 = 35 MeV (taken from a recent NOMAD measurement [3]), both forms (1) and (2) were used, leading to compatible results. The pole mass of f0(980), m_f0 = 963 MeV, was also taken from [3], while for the other resonances considered in this paper the masses and widths are fixed to the PDG values [13].
In general, the background distribution was parametrized as

BG(m) = B (m − m_th)^β exp(Σ_{i=1..k} ε_i m^i),   (3)

where m_th is the threshold mass of the corresponding resonance; k = 1 or 2, depending on the statistics; and B, β and ε_i (i = 1, …, k) are fit parameters. In some cases, depending on the form of the mass distribution, the parameter β was fixed to 0.
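As an illustration of this fitting procedure, the sketch below fits form (1) with the background of eq. (3) (k = 1) to a synthetic spectrum; scipy is assumed, and all resonance parameters, binnings and counts are placeholders rather than the actual SKAT distributions.

```python
# Sketch of the fit of form (1): a Gaussian signal on the background of
# eq. (3) with k = 1. The resonance parameters, binning and data below are
# illustrative placeholders, not the actual SKAT spectra.
import numpy as np
from scipy.optimize import curve_fit

M_TH = 0.28                    # threshold mass (2 m_pi) for a pi+pi- spectrum, GeV/c^2
M_R, GAMMA_EXP = 0.782, 0.066  # omega-like example: pole mass and Gamma_exp, GeV
SIGMA_R = GAMMA_EXP / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # Gamma_exp ~ 2 sigma sqrt(2 ln 2)

def background(m, B, beta, eps1):
    """BG(m) = B (m - m_th)^beta exp(eps1 * m), eq. (3) with k = 1."""
    return B * np.clip(m - M_TH, 1e-9, None) ** beta * np.exp(eps1 * m)

def model(m, B, beta, eps1, A):
    """Form (1): background plus a Gaussian of fixed mass and width."""
    return background(m, B, beta, eps1) + A * np.exp(-0.5 * ((m - M_R) / SIGMA_R) ** 2)

rng = np.random.default_rng(1)
m_bins = np.linspace(0.3, 1.4, 56)                       # bin centres, GeV/c^2
counts = rng.poisson(model(m_bins, 400.0, 0.8, -2.0, 60.0)).astype(float)

popt, pcov = curve_fit(model, m_bins, counts, p0=[300.0, 1.0, -1.0, 30.0],
                       sigma=np.sqrt(np.clip(counts, 1, None)))
A_fit, dA = popt[3], np.sqrt(pcov[3, 3])
bin_w = m_bins[1] - m_bins[0]
print(f"amplitude = {A_fit:.1f} +- {dA:.1f}; "
      f"yield ~ {A_fit * SIGMA_R * np.sqrt(2 * np.pi) / bin_w:.0f} events")
```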
b) Non-strange resonances
The (π+π−) effective mass distribution is plotted in Figure 1 (the left panels) for the whole range of the Feynman variable x_F, as well as for the forward (x_F > 0) and backward (x_F < 0) hemispheres in the hadronic c.m.s. Signals for ρ0 and f0(980) production are visible, except for f0(980) at x_F < 0. The corresponding mean multiplicities are quoted in Table 2 (the data on f0(980) are corrected for the π+π− decay fraction). These values are, as expected, somewhat smaller than those estimated recently [4] with a slightly more severe cut on W (W > 2 instead of W > 1.8 GeV, or ⟨W⟩ = 3.0 instead of ⟨W⟩ = 2.8 GeV). As seen from Table 2, ρ0 and f0(980) production occurs predominantly in the forward hemisphere. The total yields of charged ρ mesons and the differential yields of ρ+ at x_F > 0 and x_F < 0 were measured in our previous work [5], where the same cut W > 1.8 GeV was applied as in the present study. In this work, an attempt is undertaken to estimate the ρ− yields at x_F > 0 and x_F < 0 as well. Figure 1 also shows the effective mass distributions for the π+γγ and π−γγ systems. The distributions are corrected for losses of reconstructed π0's and for the contamination from background γγ combinations (see [5] for details). The ρ− signal is observable at x_F < 0, but not at x_F > 0, which is occupied mainly by favorable mesons that, unlike the ρ− meson, can contain the current quark. The total and differential yields of ρ+ and ρ− are quoted in Table 2.
The signals for η and ω production were looked for in the decays η → π+π−π0 (with a 22.7% branching fraction) and ω → π+π−π0 (with an 89% branching fraction). The π+π−γγ effective mass distributions in three x_F ranges are plotted in Figure 2 (the left panels). As in the case of ρ± [5], the distributions are corrected for losses of reconstructed π0's and for the background γγ contamination. It is seen from Figure 2 and Table 2 that, as in the case of ρ0, the yields of η and ω are strongly suppressed at x_F < 0 as compared to those in the forward hemisphere. A similar pattern was observed earlier in ν(ν̄)-Ne interactions at higher energies [2]. The production of η′ was looked for in the channel η′ → ρ0γ → π+π−γ (with a branching fraction of 29.4%, including the non-resonant π+π− background). The effective mass of the (π+π−) system was restricted to the ρ0 mass range 0.6-0.9 GeV/c², these boundaries being close to those applied in other experiments in which the η′ → ρ0γ decay fraction had been measured (see references in [13]). The π+π−γ effective mass distributions, corrected for the γ detection efficiency, are plotted in Figure 2 (the right panels). Rather faint signals for η′ production are visible, more significant at x_F > 0. The corresponding estimated yields, corrected for the decay fraction, are presented in Table 2. Note that the yield of η′ at x_F > 0 is significantly smaller than that of η, amounting to (31 ± 25)% of the latter.
c) Strange resonances
The production of K*(892)0 and K̄*(892)0 was looked for in the channels K+π− and K−π+, respectively. The main problem in separating these resonances is the large background from pion pairs in which the kaon hypothesis is applied to one of the pions. The background from uncorrelated (π+π−) pairs can be approximated by a smooth curve (according to eq. (3)), while the background from correlated π+π− pairs, originating from the decay of the same parent particle, can induce peculiarities in the effective mass spectra calculated under the (Kπ) hypothesis.
To subtract the correlated background from η → π+π−π0, ω → π+π−π0 and ρ0 → π+π− decays, we calculated, using their yields extracted in this work, their contributions to the (π+π−) mass spectrum. Each (Kπ) combination was then weighted, depending on the mass of the corresponding (π+π−) pair. In a similar way, we also took into account the contribution from non-identified K0s → π+π− decays occurring very close to the neutrino interaction vertex (the magnitude of these 'close' decays was estimated from the analysis of the (π+π−) mass spectrum). The effective mass spectra for (K+π−) and (K−π+), corrected as described above, are plotted in Figure 3, while the estimated yields of K*(892)0 and K̄*(892)0 (corrected for the decay fractions) are presented in Table 2. Despite the large relative errors, one can deduce that the yield of these resonances at x_F > 0 is suppressed as compared to that at x_F < 0. This can be a direct consequence of their valence quark composition, K*0 (ds̄) and K̄*0 (d̄s). As can be estimated in the parton model (following [14,15,16]), the probability of creation of favorable K*0 and K̄*0, initiated by the subprocesses νū → µ−s̄ and νū → µ−d̄, respectively, and carrying relatively large positive x_F, is much smaller than that for unfavorable ones produced in the region x_F < 0 as a result of the fragmentation of the quark string formed in the main subprocess νd → µ−u.
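The weighting scheme described above can be sketched as follows; the binning, the placeholder spectra and the helper name kpi_weight are hypothetical, and in practice the correlated-pair spectrum would be built from the fitted η, ω, ρ0 and 'close' K0s contributions.

```python
# Sketch of the correlated-background weighting: each (K pi) combination is
# down-weighted by the expected fraction of correlated (pi+ pi-) pairs at the
# mass the same two tracks give under the pion hypothesis. All arrays below
# are placeholders for the measured and predicted spectra.
import numpy as np

edges = np.linspace(0.28, 1.60, 67)             # (pi+ pi-) mass bins, GeV/c^2
n_pipi = np.ones(len(edges) - 1) * 100.0        # measured (pi+ pi-) spectrum (placeholder)
n_corr = np.zeros(len(edges) - 1)               # predicted correlated pairs per bin
n_corr[20:30] = 30.0                            # e.g. a rho0-like bump (placeholder)

def kpi_weight(m_pipi):
    """Weight for a (K pi) combination whose tracks give mass m_pipi as (pi+ pi-)."""
    i = np.searchsorted(edges, m_pipi) - 1
    if i < 0 or i >= len(n_pipi) or n_pipi[i] <= 0:
        return 1.0
    return 1.0 - min(n_corr[i] / n_pipi[i], 1.0)

# Usage: histogram the (K pi) effective mass with these weights before fitting K*(892).
print(kpi_weight(0.75), kpi_weight(1.2))
```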
The production of K*(892)± was looked for in the channels K0sπ±. The effective mass distributions of the (K0sπ+) and (K0sπ−) systems for the full x_F range, and of the (K0sπ+) system at x_F > 0, are plotted in Figure 4 (the left panels). The shape of the background distributions was determined with the help of the mixed-event technique, combining K0s's with π mesons from other events in which another K0s was detected. The parameters of the background distribution were fixed from the fit by eq. (3), except for the normalization parameter B, which was treated as a free parameter when fitting the experimental distribution by eq. (2). The fit results for the mean multiplicities (corrected for the decay fractions) are presented in Table 2. Due to the limited K0s statistics, the yields cannot be estimated separately at x_F > 0 and x_F < 0.
If one assumes that the ratio R_V(S/N) of the summed total yields of strange (K*0, K̄*0, K*+, K*−) and non-strange (ρ0, ρ+, ρ−, ω) vector mesons is not significantly influenced by the contribution from higher-mass resonances decaying into these mesons, then R_V(S/N) can be considered as an approximate measure of the strangeness suppression in the quark-string fragmentation process. Using the data quoted in Table 2, one obtains R_V(S/N) = 0.25 ± 0.09. This estimate is compatible with the values of the strangeness suppression factor λ_s inferred from different experiments using various methods (see [17,18,19] for reviews, as well as the recent works [5] and [20]).
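The arithmetic behind R_V(S/N) reduces to a ratio of summed yields with uncorrelated error propagation; a sketch with placeholder (not Table 2) yields:

```python
# Ratio of summed strange to non-strange vector-meson yields with simple
# uncorrelated error propagation. The individual yields are placeholders.
import numpy as np

strange = {"K*0": (0.06, 0.02), "K*0bar": (0.05, 0.02),
           "K*+": (0.04, 0.02), "K*-": (0.03, 0.02)}          # (yield, error)
nonstrange = {"rho0": (0.24, 0.04), "rho+": (0.24, 0.05),
              "rho-": (0.12, 0.04), "omega": (0.12, 0.05)}    # (yield, error)

def summed(d):
    return sum(v for v, _ in d.values()), np.sqrt(sum(e ** 2 for _, e in d.values()))

s, ds = summed(strange)
n, dn = summed(nonstrange)
r = s / n
dr = r * np.sqrt((ds / s) ** 2 + (dn / n) ** 2)
print(f"R_V(S/N) = {r:.2f} +- {dr:.2f}")
```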
d) φ(1020) meson
The (K+K−) effective mass distributions for different ranges of x_F are plotted in Figure 4 (the right panels). The contamination from correlated (π+π−) pairs was subtracted as described in the previous subsection. The φ yields, corrected for the decay fraction, are presented in Table 2. As can be seen, the production of φ mesons occurs, in contrast to the open-strangeness K*(892)0 and K̄*(892)0 mesons, only in the forward hemisphere. This can happen if the φ meson is a decay product of a favorable meson carrying a significant fraction of the hadronic energy. The most probable candidate is the charmed strange meson D+s, created directly as a result of the subprocess νs → µ−c followed by the recombination of the current charmed quark with the strange antiquark from the nucleon remnant, or created indirectly as a result of the decay of the favorable D*+s meson, D*+s → D+s γ. The latter process (for the case of D*−s) was observed, with a sufficiently large probability (about 5%), in charged current ν̄-Ne interactions [21]. The results of a more detailed analysis of our experimental data on φ neutrinoproduction will be presented elsewhere [22].
e) The W-dependence of the resonance yields

Our data on η, ρ and ω neutrinoproduction, combined with those obtained in ν(ν̄)-Ne interactions at higher energies [2], as well as the data on K*(892)+ neutrinoproduction ([6] and references therein), allow one to trace the W-dependence of the yields of these resonances, plotted in Figure 5. These dependences can be approximately described by the simplest linear form b · (⟨W⟩ − W0) with the threshold value fixed at W0 = 1.8 GeV. The fitted slope parameters b are given in Table 3. It is interesting to note that the parameters b for the neutral and charged favorable ρ mesons coincide at x_F > 0 and significantly exceed those for η and ω, which, in their turn, are almost equal. One can expect that the main part of the light meson resonances in the forward hemisphere is a direct product of the quark-string fragmentation, with a comparatively small contribution from the decay of higher-mass resonances. Furthermore, their yields in the forward hemisphere are less influenced by intranuclear secondary interactions, in particular by interactions of pions leading to additional production of resonances at x_F < 0 (see [4] for the case of the ρ0 meson). Hence the data on the mass dependence of the resonance yields at x_F > 0 provide almost undistorted information on the dynamics of the quark-string fragmentation. The yields of favorable resonances at x_F > 0, normalized to the spin factor (2J + 1), are plotted as a function of the resonance mass m_R in Figure 6 (the left panel), together with the data of [2], which also include the tensor f2(1270) meson. Note that the scaling by the spin factor is introduced in view of the rather small spin alignment effects in the production of vector mesons ([7] and references therein). As seen from Figure 6, the mass dependence of the resonance yields can be approximately described by a simple exponential form A exp(−γ m_R), with the fitted values of the slope parameter γ (quoted in Figure 6) not exhibiting any significant dependence on the initial internal energy of the quark string. It might be interesting to note that a steeper mass dependence was observed in higher-energy e+e− annihilation (see [13] for the data review and [20] for the analysis of the mass dependence).
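Both parametrizations amount to simple least-squares fits; the sketch below performs them on invented data points in the quoted W and mass ranges (scipy assumed).

```python
# Sketch of the two fits discussed above: the linear rise b*(<W> - W0) with
# W0 fixed at 1.8 GeV, and the exponential mass dependence A*exp(-gamma*m_R)
# of the (2J+1)-scaled yields at x_F > 0. All data points are placeholders.
import numpy as np
from scipy.optimize import curve_fit

W0 = 1.8
W = np.array([2.8, 3.6, 4.8])                        # <W> values, GeV
y = np.array([0.24, 0.42, 0.65])                     # hypothetical rho0 yields
dy = np.array([0.04, 0.06, 0.08])
(b,), cov = curve_fit(lambda w, b: b * (w - W0), W, y, sigma=dy, p0=[0.2])
print(f"slope b = {b:.3f} +- {np.sqrt(cov[0, 0]):.3f} GeV^-1")

m_R = np.array([0.548, 0.775, 0.782, 0.892, 0.958, 1.020])   # eta ... phi, GeV/c^2
y2 = np.array([0.12, 0.08, 0.04, 0.02, 0.012, 0.006])        # hypothetical yields/(2J+1)
(A, g), cov2 = curve_fit(lambda m, A, g: A * np.exp(-g * m), m_R, y2, p0=[1.0, 4.0])
print(f"gamma = {g:.2f} +- {np.sqrt(cov2[1, 1]):.2f} (GeV/c^2)^-1")
```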
The fraction of pions originating from the decay of light resonances
The data presented in the previous section allow one to estimate the fractions of π0, π− and π+ originating from the decay of light resonances. These fractions were calculated using the corresponding branching ratios [13]. To avoid double counting, the contributions from indirect η, ρ and ω (from η′ and φ decays) were not included. The total yields of pions were taken from [5]. The calculated individual and summed fractions of decay pions are collected in Table 4. As can be seen, the fractions of decay π0 and π− are compatible, amounting to about 1/3, while that for π+ is significantly smaller, about 1/5. The W-dependence of the decay fraction can be obtained for pions from the decay of the lightest non-strange resonances (η, ρ, ω), which provide the main contribution to the decay pions (cf. Table 4) and for which data at higher energies are available [2]. This dependence is plotted in Figure 7, where the corresponding fractions in e+e− annihilation at 91 GeV (see [13] for references) are also shown for comparison. As can be seen, the decay fractions increase continuously with energy.
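The bookkeeping reduces to multiplying each resonance yield by its branching ratios and per-decay pion multiplicities, then dividing by the total pion yield; a sketch with placeholder numbers:

```python
# Bookkeeping sketch for the decay-pion fractions: each resonance contributes
# <n_R> * BR * (pions of a given charge per decay); eta'/phi feed-down to
# eta, rho, omega is excluded to avoid double counting, as stated above.
# All numerical inputs are illustrative placeholders, not the Table 2/4 values.
resonances = {
    # name: (<n_R>, [(branching ratio, n_pi+, n_pi-, n_pi0), ...])
    "rho0":  (0.24, [(1.00, 1, 1, 0)]),
    "rho+":  (0.24, [(1.00, 1, 0, 1)]),
    "rho-":  (0.12, [(1.00, 0, 1, 1)]),
    "eta":   (0.10, [(0.227, 1, 1, 1), (0.327, 0, 0, 3)]),
    "omega": (0.12, [(0.89, 1, 1, 1)]),
}
total_pi = {"pi+": 1.2, "pi-": 0.9, "pi0": 1.0}    # total pion yields (placeholders)

from_decays = {"pi+": 0.0, "pi-": 0.0, "pi0": 0.0}
for n_R, modes in resonances.values():
    for br, k_plus, k_minus, k_zero in modes:
        from_decays["pi+"] += n_R * br * k_plus
        from_decays["pi-"] += n_R * br * k_minus
        from_decays["pi0"] += n_R * br * k_zero

for pi, tot in total_pi.items():
    print(f"{pi}: decay fraction = {100 * from_decays[pi] / tot:.1f}%")
```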
Summary
The total yields of light meson resonances (with masses up to ∼1 GeV/c², including the φ meson) are measured in neutrino-nuclear reactions at ⟨E_ν⟩ ≈ 10 GeV (⟨W⟩ = 2.8 GeV). For some resonances, the differential yields in the forward (x_F > 0) and backward (x_F < 0) hemispheres in the hadronic c.m.s. are also obtained. The data on inclusive φ neutrinoproduction are presented for the first time. It is shown that the production of favorable resonances (which can contain the current quark) occurs predominantly in the forward hemisphere, except for the ρ+ meson, whose yield at x_F < 0 slightly exceeds that at x_F > 0. On the contrary, the production of unfavorable resonances occurs mainly at x_F < 0. An exception is the φ meson, whose production occurs practically only in the forward hemisphere. The combined analysis of the obtained and existing data shows that the yields of resonances increase approximately linearly with ⟨W⟩ in the range ⟨W⟩ = 2.8-4.8 GeV.
The yields of favorable resonances at x_F > 0, normalized to the spin factor (2J + 1), are found to fall exponentially as a function of the resonance mass. The fractions of π0, π− and π+ originating from the light resonance decays are estimated to be 34.7 ± 7.6, 31.2 ± 5.7 and 18.4 ± 3.0%, respectively.
Figure 2: The effective mass distributions for the π+π−γγ and π+π−γ systems. The explanation of the curves is the same as for Figure 1.
Figure 3: The effective mass distributions for the K+π− and K−π+ systems. The explanation of the curves is the same as for Figure 1.
Figure 4: The effective mass distributions for the K0sπ+, K0sπ− and K+K− systems. The explanation of the curves is the same as for Figure 1.
Figure 7 (caption fragment): the data for π+ (π−) in ν̄Ne interactions [2] are replaced by those for π− (π+). | 2008-11-14T13:57:58.000Z | 2008-11-14T00:00:00.000 | {
"year": 2008,
"sha1": "3b4dee07a9823c386afb21cce3a5ac4fdda7de63",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0811.2343",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3b4dee07a9823c386afb21cce3a5ac4fdda7de63",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
261881892 | pes2o/s2orc | v3-fos-license | Theoretical Study of Inelastic Processes in Collisions of Y and Y$^+$ with Hydrogen Atom
Utilizing a simplified quantum model approach, the low-energy inelastic collision processes between yttrium atoms (and ions) and hydrogen atoms have been studied. Rate coefficients for the mutual neutralization, ion-pair formation, excitation, and de-excitation processes of these collision systems are provided in the temperature range 1000-10000 K. For the collisions of yttrium atoms with hydrogen atoms, 3 ionic states and 73 covalent states are considered, comprising 6 molecular symmetries and 4074 partial inelastic reaction processes. For the collisions of yttrium ions with hydrogen atoms, 1 ionic state and 116 covalent states are included, corresponding to 3 molecular symmetries and 13572 partial inelastic collision processes. It is found that the rate coefficients for the mutual neutralization process have a maximum at T = 6000 K, which is an order of magnitude higher than those of the other processes. Notably, the optimal windows for the collisions of yttrium atoms and ions with hydrogen atoms are located near electronic binding energies of −2 eV (Y) and −4.4 eV (Y$^+$), respectively. The scattering channels located in or near these optimal windows have intermediate-to-large rate coefficients (greater than $10^{-12}$ cm$^3$s$^{-1}$). The reported data should be useful for non-local thermodynamic equilibrium modeling.
Introduction
The abundance and distribution of the Yttrium element significantly influence the investigations of the chemical evolution of the cosmos, particularly within stellar environments. This light neutron-capture element is readily observed in stars of B to K-type, and it is primarily produced via three recognized neutron-capture reactions: the rapid (r-) process, the main component of the slow (s-) process, and the weak component of the s-process (Kappeler et al., 1989). These processes occur in stars with different types and masses (Gallino et al., 1998;Kappeler et al., 2011), offering a unique perspective to understand the enrichment history of heavy elements within the interstellar medium, and providing opportunities to refine the theories of nucleosynthesis.
The investigation of Yttrium abundance is highly relevant to stars with various spectral types and populations. For thin disk stars and solar-type stars, an accurate estimation of Yttrium abundance, in conjunction with other spectroscopic indices, can provide a tool to estimate the ages of stars. This has always been a challenge since the inception of stellar astrophysics (da Silva et al., 2012;Nissen, 2015;Tucci Maia et al., 2016;Titarenko et al., 2019;Berger et al., 2022). Moreover, the abundance ratio of Yttrium to Europium ([Y/Eu]) can serve as an efficient indicator of the efficiency of chemical evolution; it can also characterize the relative contribution of low-to intermediate-mass stars compared to high-mass stars (Recio-Blanco et al., 2021).
In order to better understand the origin of the elements, Galactic chemical evolution (GCE) models for these elements have been constructed (see e.g. Kobayashi et al. (2020); Johnson & Bolte (2002); Honda et al. (2004); François et al. (2007); Hansen et al. (2012); Roederer et al. (2014); Bensby et al. (2014); Reggiani et al. (2017); Baratella et al. (2021); Li et al. (2022)). These studies have revealed a moderate underabundance of yttrium in stars where [Fe/H] is less than −1, accompanied by significant dispersion in the [Y/Fe] ratio. However, the yttrium abundances in these studies were obtained under the assumption of local thermodynamic equilibrium (LTE), which may lead to inaccurate results, especially in metal-poor stellar atmospheres.
The non-local thermodynamic equilibrium (NLTE) method is a realistic alternative (Asplund, 2005; Barklem, 2016). This method can reveal important information about stellar composition and provide abundance data for many elements. A large amount of precise atomic data is required for NLTE modeling, including atomic energy levels, radiative data, collision data with electrons, and collision data with hydrogen atoms. The greatest uncertainty in NLTE modeling arises from the atomic data for inelastic processes in ion/atom-hydrogen collisions (Asplund, 2005; Mashonkina, 2014; Barklem, 2016). In previous studies, the quantum cross sections for inelastic collisions between various atoms and hydrogen atoms have traditionally been estimated using the Drawin formula, which is derived from the classical Thomson model. Nevertheless, Barklem, P. S. et al. (2011) showed that the Drawin formula lacks an accurate physical foundation for low-energy collisions and cannot provide reliable results. Therefore, it is necessary to replace the Drawin formula with reasonable quantum calculations to obtain accurate atomic data for collisions involving hydrogen atoms.
Recently, full quantum mechanical methods have been employed to study collisions between complex neutral atoms (or ions) and hydrogen atoms at low collision energies. Due to the complexity of the collision systems, these computations are exceptionally time-consuming. This approach is only suitable for simple collision systems (with a few electrons) or for complex systems simplified by freezing their inner-shell electrons, such as Li+H (Li⁺+H⁻) (Croft et al., 1999a,b; Belyaev & Barklem, 2003), Na+H (Na⁺+H⁻) (Belyaev et al., 1999, 2010), Mg+H (Mg⁺+H⁻) (Belyaev et al., 2012; Guitou et al., 2015), and Ca+H (Ca⁺+H⁻) (Mitrushchenkov et al., 2017; Belyaev et al., 2019). However, for low-energy inelastic collisions between hydrogen atoms and other complex atoms (or ions), the non-adiabatic couplings only contribute to the cross sections at large internuclear distances. Based on this observation, Belyaev et al. proposed several quantum approximation models, including the quantum multi-channel analysis method (Belyaev, 1993), the quantum branching probability current method (Belyaev, 2013), and a simplified quantum model (Belyaev, A. & Yakovleva, S., 2017a,b), to deal with complex collision systems. Compared with full quantum mechanical calculations, these models are physically reasonable and greatly reduce the computational difficulty, making them appropriate for estimating the cross sections and rate coefficients for low-energy inelastic collisions between hydrogen atoms and other complex atoms (or ions).
Among these quantum approximation models, the simplified quantum model employs basic Coulomb and flat potentials to represent the ionic and covalent states of the collision systems, respectively. The transition probabilities between different reaction channels are described by the directly related non-adiabatic probabilities. This streamlined structure and non-adiabatic dynamics treatment make it more convenient and efficient for dealing with complex heavy-particle collision systems than the other two quantum approximation models (Belyaev, 1993, 2013). Moreover, compared with full quantum calculations, the simplified model provides reliable rate coefficients of intermediate-to-large magnitude and reasonable estimates for rate coefficients with small values. This model has been successfully applied to several complex collision systems, such as Fe+H (Fe⁺+H⁻) (Yakovleva et al., 2018, 2019), Co+H (Co⁺+H⁻) (Yakovleva et al., 2020) and Ni+H (Voronov et al., 2022). The simplified model can serve as an effective tool for studying inelastic collision processes between complex atoms and hydrogen atoms in cases where the relevant quantum-chemistry data are lacking. The motivation of the present work is to provide the rate coefficients for inelastic collision processes between yttrium atoms (and ions) and hydrogen atoms at low energies by employing the simplified model based on the Landau-Zener method (Belyaev, A. & Yakovleva, S., 2017a,b).
The organization of this paper is as follows: Section 2 outlines the details of the simplified model that we use. Section 3 discusses the results of our rate-coefficient calculations for the non-elastic collision processes in yttrium (and yttrium ions)-hydrogen collisions. Our conclusions are summarized in Section 4.
Simplified model
Quantum studies of collisions between neutral atoms and hydrogen atoms indicate that the interaction between ionic and covalent molecular states provides the primary mechanism for non-adiabatic transitions. This long-range ionic-covalent interaction mechanism plays a critical role in inelastic collisions involving hydrogen atoms (such as charge transfer, mutual neutralization, ion-pair formation, and excitation and de-excitation processes). Owing to the importance of this mechanism, Belyaev, A. & Yakovleva, S. (2017a,b) proposed a simplified model to estimate the relevant rate coefficients for the above collision systems. Here, we provide a brief overview of this model.
The simplified model, which is based on a semi-empirical ionic-covalent interaction theory, determines the long-range electronic structure of the collision systems. Consider the A^{Z+}(j)+H and A^{(Z+1)+}+H⁻ collision systems, where A represents the chemical element under consideration, Z = 0 corresponds to collisions of neutral atoms with hydrogen atoms, and Z ≥ 1 denotes collisions between ions and hydrogen atoms. In order to estimate the state-to-state transition probabilities that lead to large rate coefficients, the long-range adiabatic molecular potentials of the ionic state A^{(Z+1)+}+H⁻ and of the covalent states A^{Z+}(j)+H, denoted by H_ionic,ionic and H_jj respectively, must be represented as functions of the internuclear distance R. The molecular potential of the ionic state can be expressed by the Coulomb potential as

H_ionic,ionic(R) = E_H⁻ − (Z+1)/R   (in atomic units),   (1)

where E_H⁻ = −0.754 eV is the electronic binding energy of the hydrogen anion. The potential energy of a covalent molecular state can be described by a flat potential as

H_jj(R) = E_j,   (2)

where the electronic binding energy E_j is defined as E*_j − I_ionization; here I_ionization is the ionization energy of A^{Z+}, and E*_j is the electronic excitation energy measured from the ground state of A^{Z+} (taken as zero energy). The off-diagonal matrix element can be estimated from the potential energies of the covalent and ionic states. For the single-electron capture process, H_ionic,j can be determined using the semi-empirical formula proposed by Olson et al. (1971),

H_ionic,j = 1.044 √(|E_H⁻ E_j|) R*_j exp(−0.857 R*_j),  with  R*_j = ½ (√(2|E_H⁻|) + √(2|E_j|)) R_j,   (3)
where R_j is the center of the non-adiabatic region, i.e. the crossing point of the ionic and covalent diabatic potentials, which from eqs. (1)-(2) is R_j = (Z+1)/(E_H⁻ − E_j). The non-adiabatic dynamics of the nuclei are studied using the Landau-Zener model, and the non-adiabatic transition probabilities p_j can be calculated from the diabatic potential energy curves as

p_j = exp(−2π H²_ionic,j / (ħ v_j ΔF_j)),   (4)

where v_j is the radial velocity at the crossing point and ΔF_j = (Z+1)/R_j² is the difference of the slopes of the diabatic potentials at R_j. The cross section is obtained by summing over the total angular momentum quantum number J,

σ_if(E) = (πħ² / 2µE) Σ_J (2J+1) p_stat_i P_if(J, E).   (5)
Herein, p_stat_i denotes the statistical probability of the initial molecular state i, and Λ labels the different molecular symmetries. P_if represents the transition probability from state i to state f, which has different expressions for the mutual neutralization and de-excitation processes. The simplified method ignores the effects of all other non-adiabatic regions, so that the transition probability is represented only by the directly related non-adiabatic probabilities p_k. In the mutual neutralization process, an incident projectile in the initial ionic state passes through the non-adiabatic region around R_f and is then emitted through the covalent channel f, so that the transition probability is built from the two passages of this single crossing. In the de-excitation process, an incident projectile in the initial covalent channel i passes through the non-adiabatic regions around R_i and R_f and is then emitted through the covalent channel f, the transition probability being built analogously from the single-passage probabilities at both crossings. It is important to note that the expression for the velocity v in Equation (4) is different for these two collision processes (Belyaev, A. & Yakovleva, S., 2017b).
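The ingredients above can be assembled in a few lines. The sketch below works in atomic units; the Olson-Smith-Bauer constants (1.044, 0.857) and the double-passage form P = 2p(1−p) are our assumptions about the model, not quotations from it, and should be checked against Belyaev & Yakovleva (2017a,b).

```python
# Minimal sketch of the simplified-model ingredients, in atomic units
# (hbar = m_e = e = 1; 1 Hartree = 27.2114 eV). Assumptions: the crossing
# point follows from eqs. (1)-(2); the coupling is an Olson-Smith-Bauer-type
# formula with the commonly quoted constants; the double passage of a single
# crossing gives P = 2 p (1 - p).
import numpy as np

HARTREE_EV = 27.2114
E_HMINUS = -0.754 / HARTREE_EV          # binding energy of H^-, a.u.

def crossing_radius(E_j, Z=0):
    """R_j where the ionic curve E_H- - (Z+1)/R meets the flat covalent E_j."""
    return (Z + 1) / (E_HMINUS - E_j)   # valid for E_j < E_H-

def osb_coupling(E_j, R):
    """Semi-empirical ionic-covalent coupling H_ionic,j (assumed OSB form)."""
    g1, g2 = np.sqrt(-2 * E_HMINUS), np.sqrt(-2 * E_j)
    r_star = 0.5 * (g1 + g2) * R
    return 1.044 * np.sqrt(abs(E_HMINUS * E_j)) * r_star * np.exp(-0.857 * r_star)

def lz_probability(E_j, v_rad, Z=0):
    """Single-passage Landau-Zener probability, eq. (4)."""
    Rj = crossing_radius(E_j, Z)
    H12 = osb_coupling(E_j, Rj)
    dF = (Z + 1) / Rj**2                # slope difference of the diabats at R_j
    return np.exp(-2 * np.pi * H12**2 / (v_rad * dF))

def prob_neutralization(p_f):
    """Two passages of the single crossing R_f: P_if = 2 p_f (1 - p_f)."""
    return 2 * p_f * (1 - p_f)

# Example: a covalent channel bound by 2 eV, roughly thermal radial velocity.
p = lz_probability(-2.0 / HARTREE_EV, v_rad=1.0e-3, Z=0)
print(f"p = {p:.3f}, P_MN = {prob_neutralization(p):.3f}")
```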
Finally, the rate coefficients for both the neutralization and de-excitation processes at various temperatures are determined through Maxwell-Boltzmann integration over the partial cross section,

K_if(T) = √(8/(πµ)) (k_B T)^{-3/2} ∫₀^∞ σ_if(E) E exp(−E/k_B T) dE,   (7)

where k_B is the Boltzmann constant and µ is the reduced mass of the system. Neglecting the influence of p_stat_i, it can be seen that the rate coefficient for the transition i → f in the neutralization process depends only on the covalent-state binding energy E_f, while for the de-excitation process it depends only on the binding energies E_i and E_f of the two covalent states. The reduced rate coefficients can therefore be expressed as functions of the electron binding energies, denoted N_if(E_f) and D_if(E_i, E_f) for the mutual neutralization and de-excitation processes, respectively. The total rate coefficient for state-to-state transitions is then obtained by summing the contributions of the various molecular symmetries Λ and spins S. Employing the detailed balance relation, the rate coefficients for the ion-pair formation and excitation processes are deduced from those of the mutual neutralization and de-excitation processes, respectively, as

K_fi(T) = K_if(T) (g_i/g_f) exp(−ΔE_if / k_B T),   (8)

where g_i and g_f are the statistical weights of the states and ΔE_if = E_i − E_f > 0.

Table 1: YH molecular states, the corresponding asymptotic atomic states (scattering channels), the asymptotic energies (J-averaged values taken from NIST (Palmer, 1977)), and the molecular symmetries considered for the treated molecular states (columns: j; asymptotic atomic states; asymptotic energies (eV); molecular symmetry).
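A sketch of the Maxwell-Boltzmann average and the detailed-balance inversion follows; the toy cross section, the approximate Y-H reduced mass and the unit conversions are placeholders, not the paper's eq. (5).

```python
# Sketch of the Maxwell-Boltzmann average (eq. (7)) and the detailed-balance
# inversion (eq. (8)). The cross section below is a toy stand-in.
import numpy as np
from scipy.integrate import quad

HARTREE_EV = 27.2114
KB_AU = 3.1668e-6                    # Boltzmann constant, Hartree/K
MU_YH = 1813.0                       # Y-H reduced mass in electron masses (approx.)

def rate_coefficient(sigma, T, mu=MU_YH):
    """K(T) = sqrt(8/(pi*mu)) * (kB T)^{-3/2} * Int sigma(E) E exp(-E/kB T) dE."""
    kT = KB_AU * T
    val, _ = quad(lambda E: sigma(E) * E * np.exp(-E / kT), 0.0, 30 * kT, limit=200)
    return np.sqrt(8.0 / (np.pi * mu)) * kT ** -1.5 * val

def inverse_rate(K_forward, T, dE_eV, g_i=1.0, g_f=1.0):
    """Detailed balance: K_fi = K_if (g_i/g_f) exp(-dE/kB T), dE = E_i - E_f > 0."""
    kT_eV = HARTREE_EV * KB_AU * T
    return K_forward * (g_i / g_f) * np.exp(-dE_eV / kT_eV)

sigma_toy = lambda E: 50.0 if E > 1e-6 else 0.0   # constant cross section, a.u.

K_au = rate_coefficient(sigma_toy, T=6000.0)
K_cgs = K_au * (5.2918e-9) ** 2 * 2.1877e8        # a0^2 (cm^2) * v_au (cm/s)
print(f"K(6000 K) ~ {K_cgs:.2e} cm^3 s^-1; "
      f"inverse for dE = 1 eV: {inverse_rate(K_cgs, 6000.0, 1.0):.2e}")
```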
Y+H and Y + +H − collisions
The ionic molecular state Y⁺+H⁻ encompasses different electronic configurations: the ground state Y⁺(5s² a¹S)+H⁻(1s² ¹S) includes only one molecular symmetry, ¹Σ⁺; the first excited state Y⁺(4d5s a³D)+H⁻(1s² ¹S) includes 3 molecular symmetries, ³Σ⁺, ³Π and ³Δ; and the third excited state Y⁺(4d² a³F)+H⁻(1s² ¹S) includes 4 molecular symmetries, ³Σ⁻, ³Π, ³Δ and ³Φ. In our calculations, only covalent scattering channels having the same molecular symmetries as the ionic molecular states are considered. Because there are two symmetries for the Σ state, Σ⁺ and Σ⁻, the calculations for this symmetry are divided into two groups. The first group comprises the single-electron transfer processes involving the interaction between the covalent states Y[(4d²nl)/(4d5snl)/(5s²nl)/(5p²nl) ²,⁴L] + H and the ionic states Y⁺[(5s² a¹S)/(4d5s a³D)] + H⁻. The asymptotic atomic states and corresponding energies of the related molecular states are shown in Table 1. The second group comprises the single-electron transfer processes involving the interaction between the covalent states Y[(4d²nl)/(4d5snl)/(5p²nl) ²,⁴L] + H and the ionic state Y⁺(4d² a³F) + H⁻. The asymptotic atomic states and corresponding energies of the related molecular states are displayed in Table 2. Considering that the initial and final scattering channels of the same process may belong to different molecular symmetries, the rate coefficients of these inelastic processes are computed independently for each molecular symmetry and then summed to obtain the total rate coefficient of the corresponding process.

Table 2: YH molecular states for the yttrium ionic configurations Y⁺(5s² a¹S) and Y⁺(4d5s a³D), the corresponding asymptotic atomic states (scattering channels), the asymptotic energies (J-averaged values taken from NIST (Palmer, 1977)), and the molecular symmetries considered for the treated molecular states (first entry: Y(4d²(³F)5s a⁴F) + H(1s ²S)).
Figure 1: Rate coefficients (in cm³ s⁻¹) for the partial excitation, de-excitation, mutual neutralization, and ion-pair formation processes in Y⁺(5s² a¹S)+H⁻ and Y⁺(4d5s a³D)+H⁻ collisions at a temperature of T = 6000 K. Initial and final state (or scattering channel) labels are presented in Table 1.
We calculate the rate coefficients for the excitation, de-excitation, ion-pair formation, and mutual neutralization processes in Y + H and Y⁺ + H⁻ collisions at temperatures ranging from 1000 to 10000 K by employing the simplified model. In Fig. 1, we present the rate coefficients for inelastic processes in collisions of Y[(4d²nl)/(4d5snl)/(5s²nl)/(5p²nl) ²,⁴L] + H and Y⁺[(5s² a¹S)/(4d5s a³D)] + H⁻ at a temperature of T = 6000 K. The complete rate-coefficient data for temperatures ranging from 1000 to 10000 K can be found in the supplementary materials. In the present calculations, the processes Y⁺(5s² a¹S)+H⁻ → Y[(4d²nl)/(4d5snl)/(5s²nl)/(5p²nl) ²,⁴L] and Y⁺(4d5s a³D)+H⁻ → Y[(4d²nl)/(4d5snl)/(5s²nl)/(5p²nl) ²,⁴L] are considered individually because of the distinct spins (or multiplicities) of the associated molecular states. As mentioned above, if the molecular states corresponding to the initial and final scattering channels have different spins (or multiplicities), the summation of the rate coefficients from these states yields the total rate coefficient of the corresponding collision process.
It can be seen from Fig. 1 that the largest rate coefficients correspond to the mutual neutralization process. Specifically, for the mutual neutralization process of Y⁺(5s² a¹S) + H⁻, the final scattering channels with rate coefficients exceeding 10⁻⁸ cm³ s⁻¹ are labeled j = 12-23, and the corresponding states of the yttrium atom are listed in Table 1. The rate coefficient for the scattering channel denoted by j = 17 (Y(5s²(²D)5d e²D)+H) shows a peak value of 6.91 × 10⁻⁸ cm³ s⁻¹. For the mutual neutralization process of Y⁺(4d5s a³D) + H⁻, the final channels with rate coefficients exceeding 10⁻⁸ cm³ s⁻¹ are labeled j = 12, 14-25. The states corresponding to these channels can be seen in Table 1. It should be noted that the rate coefficient of the scattering channel marked j = 17 (Y(5s²(²D)5d e²D)+H) also shows a peak value, 6.77 × 10⁻⁸ cm³ s⁻¹. Figure 2 presents the rate coefficients for inelastic processes in collisions of Y[(4d²nl)/(4d5snl)/(5p²nl) ²,⁴L] + H and Y⁺(4d² a³F) + H⁻ at a temperature of T = 6000 K. In the mutual neutralization process of the Y⁺(4d² a³F) + H⁻ system, the final scattering channels with rate coefficients exceeding 10⁻⁸ cm³ s⁻¹ are denoted j = 20-34, and the corresponding states of the yttrium atom are listed in Table 2. The rate coefficient of the final scattering channel marked j = 29 (Y(4d²(³F)6s ⁴F)+H) shows a peak value of 6.95 × 10⁻⁸ cm³ s⁻¹. The other rate coefficients, with intermediate values (10⁻¹²-10⁻⁸ cm³ s⁻¹) and represented by green, cyan, yellow and orange squares, correspond to the neutralization, ion-pair formation, excitation, and de-excitation processes, respectively. Note that the maximum rate coefficients for the excitation and de-excitation processes are an order of magnitude smaller than those of the mutual neutralization process. The rate coefficients for the elastic processes are denoted by the white squares on the diagonals of Figs. 1 and 2.

Figure 2: Rate coefficients (in cm³ s⁻¹) for the partial excitation, de-excitation, mutual neutralization, and ion-pair formation processes in Y⁺(4d² a³F)+H⁻ collisions at a temperature of T = 6000 K. Initial and final state (or scattering channel) labels are presented in Table 2.
It should be noted that the rate coefficients with intermediate values of 10⁻¹²-10⁻⁸ cm³ s⁻¹ in Fig. 2 correspond to transitions between higher excited states of the collision systems. Belyaev, A. & Yakovleva, S. (2017a) clarified that the simplified model produces intermediate-to-high rate coefficients for processes involving atomic states whose asymptotic energies approach the positions of the "optimal windows". An "optimal window" is a range of atomic binding energy; scattering channels whose asymptotic energies lie near these positions have larger rate coefficients. These optimal windows are approximately 2 eV below the ionization energy of the Y atom. In the case of the ground state Y⁺(5s² a¹S) and the first excited state Y⁺(4d5s a³D), the positions of the optimal window are around 4.21 eV and 4.31 eV (excitation energy of Y atoms), respectively, whereas for the third excited state Y⁺(4d² a³F) the position of the optimal window is approximately 5.21 eV. This accounts for the maximum values of the rate coefficient obtained for the scattering channels j = 17 and j = 29 in the neutralization processes of the first group (Y⁺(5s² a¹S)+H⁻ and Y⁺(4d5s a³D)+H⁻) and the second group (Y⁺(4d² a³F)+H⁻), respectively. Thus, for different ionic scattering channels, the optimal window selects different final scattering channels (Voronov et al., 2022). Figure 3 presents the variation of the total rate coefficients of the three sets of mutual neutralization processes at a temperature of T = 6000 K. The left panel shows the variation of the rate coefficient with the excitation energy of yttrium atoms for collisions with the ground ionic state, Y⁺(5s² a¹S) + H⁻, which is characterized by a single molecular symmetry (¹Σ⁺). The middle panel describes the results for collisions between H⁻ ions and the first excited ionic state Y⁺(4d5s a³D), for which the [YH] molecule exhibits three symmetries (³Σ⁺, ³Π and ³Δ), while the right panel displays the results for collisions of H⁻ ions with the third excited ionic state Y⁺(4d² a³F), with four molecular symmetries (³Σ⁻, ³Π, ³Δ and ³Φ). In this figure, solid lines represent the reduced rate coefficients. Within the simplified model, the non-adiabatic nuclear dynamics remain the same for each molecular symmetry, and the reduced rate coefficients also remain consistent for each ionic scattering channel. The difference between the reduced rate coefficients and the rate coefficients calculated in this study originates from the fact that some covalent states do not possess all the molecular symmetries of the ionic states. When all the molecular symmetries of the initial ionic scattering channel are included among the final covalent scattering channels, the rate coefficients are equal to the reduced rate coefficients. On the contrary, if not all molecular symmetries are included, the rate coefficients are determined by the statistical probability of the contributing molecular symmetries multiplied by the reduced rate coefficients, resulting in values lower than the corresponding reduced rate coefficients. This can be observed in Fig. 3. Furthermore, this figure also illustrates that the optimal window is typically situated near the point where the electron binding energy equals −2 eV in atom-hydrogen collisions.
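Reusing the functions from the sketch above, the optimal-window position can be located by scanning the covalent binding energy E_f for the maximum neutralization probability; under the assumptions stated there, the maximum for Z = 0 falls near the −2 eV value quoted above.

```python
# Locating the "optimal window" by scanning the covalent binding energy E_f
# for the maximum of the double-passage probability 2p(1-p). Assumes the
# functions lz_probability / prob_neutralization and HARTREE_EV from the
# sketch above; the velocity is a rough thermal value.
import numpy as np

energies_eV = np.linspace(-6.0, -0.9, 200)     # candidate binding energies
v_rad = 1.0e-3                                 # radial velocity, a.u. (assumed)

best = max(energies_eV,
           key=lambda e: prob_neutralization(lz_probability(e / HARTREE_EV, v_rad, Z=0)))
print(f"optimal window (Z=0) near E_f = {best:.2f} eV")
# Repeating the scan with Z=1 shifts the window to more negative binding
# energies, consistent with the ~ -4.4 eV value quoted above for Y+ + H.
```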
Y + +H and Y 2+ +H − collisions
The ground state Y²⁺(4p⁶4d ²D)+H⁻ comprises three molecular symmetries: ²Σ⁺, ²Π, and ²Δ. In our calculations, we consider only covalent scattering channels that possess these molecular symmetries, which include 116 covalent states of Y⁺[(4dnl)/(5snl)/(5pnl) ¹,³L]+H. The asymptotic atomic states and the corresponding energies of the relevant molecular states are given in Table 3. Within the same process, the initial and final states may have different molecular symmetries. Therefore, for the calculation of the rate coefficients for inelastic processes, we need to calculate the rate coefficient for each molecular symmetry separately and then sum these results to obtain the total rate coefficient for the entire process. Using the simplified model, we calculate the rate coefficients for the Y⁺+H collision system for temperatures between 1000 and 10000 K, demonstrating the general trends of the rate coefficients for Y⁺-H collisions. As in the treatment of collisions between neutral atoms and hydrogen atoms, only single-electron transitions are considered in this model. Figure 4 illustrates the rate coefficients for the excitation, de-excitation, ion-pair formation, and mutual neutralization processes in Y⁺[(4dnl)/(5snl)/(5pnl) ¹,³L]+H and Y²⁺(4p⁶4d ²D)+H⁻ collisions at T = 6000 K. Complete rate-coefficient data for temperatures between 1000 and 10000 K can be found in the supplementary material. In this figure we display only the rate coefficients for transitions involving the covalent states from j = 1 to j = 102 and the ionic state Y²⁺(4p⁶4d ²D)+H⁻, since the rate coefficients for processes involving covalent states higher than j = 102 are negligibly small. The largest rate coefficients are found for the mutual neutralization processes of Y²⁺(4p⁶4d ²D)+H⁻; the rate coefficients for the final covalent scattering channels denoted j = 15-57 (with the exception of j = 55) exceed 10⁻⁸ cm³ s⁻¹ (see Table 3). The other processes, involving ion-pair formation, excitation, and de-excitation, with rate coefficients between 10⁻¹² and 10⁻⁸ cm³ s⁻¹, are denoted by green, cyan, yellow, and orange squares in the same figure. The intermediate-to-high rate coefficients for the excitation and de-excitation processes are primarily confined to initial and final channels marked j = 14-54, and the maximum is of the order of 10⁻⁹ cm³ s⁻¹, an order of magnitude smaller than the highest rate coefficient of the mutual neutralization process. The results for the elastic processes are represented by white squares on the diagonal of the same figure. Transitions between molecular states with different symmetries (grey squares in the same figure) are forbidden, so their rate coefficients are zero.
The mutual neutralization process in Y²⁺(4p⁶4d ²D)+H⁻ collisions exhibits the largest rate coefficients, as shown in Fig. 5. This figure illustrates the variation of the total rate coefficients for the mutual neutralization process with the excitation energy of Y⁺ ions at a temperature of T = 6000 K. The ground ionic state Y²⁺(4p⁶4d ²D)+H⁻, as the initial channel, has three molecular symmetries (²Σ⁺, ²Π and ²Δ). In this figure, solid lines denote the reduced rate coefficients for singly charged ions (Y⁺) colliding with hydrogen atoms, calculated with the simplified model. The figure also displays the rate coefficients for the different symmetries of the initial molecular states, the summation of which yields the total rate coefficients for the corresponding collision process. Furthermore, the figure reveals that the position of the optimal window in collisions between singly charged Y⁺ ions and hydrogen atoms is close to an electron binding energy of −4.4 eV, corresponding to Y⁺ excitation energies around 7.83 eV. The rate coefficient for the final scattering channel labelled j = 31 (Y⁺(4d5d e¹G)+H) demonstrates a peak of 7.67 × 10⁻⁸ cm³ s⁻¹ within the mutual neutralization process. Figure 6 shows the variation of the total rate coefficients for the excitation and de-excitation processes with the excitation energy of Y⁺ ions at T = 6000 K in Y⁺(4d6p ¹P°)+H collisions. This corresponds to the process in which the covalent state Y⁺(4d6p ¹P°)+H is the initial channel, and the molecular ion [YH]⁺ has two symmetries, ²Σ⁺ and ²Π. As in Fig. 5, this figure also exhibits the rate coefficients for the different symmetries of the initial molecular-ion states; the total rate coefficients are obtained by summing the results from the different molecular symmetries. The figure shows that the position of the optimal window again approaches an electron binding energy of −4.4 eV. The rate coefficient for the de-excitation process Y⁺(4d6p ¹P°)+H → Y⁺(4d5d e¹G)+H (j = 38 → j = 31) shows a peak of 6.22 × 10⁻⁹ cm³ s⁻¹.

Figure 4: Rate coefficients (in cm³ s⁻¹) for the partial excitation, de-excitation, mutual neutralization, and ion-pair formation processes in Y²⁺(4p⁶4d ²D)+H⁻ collisions at a temperature of T = 6000 K. Initial and final state (or scattering channel) labels are presented in Table 3.
Conclusions
The low-energy inelastic collision processes between yttrium atoms (and ions) and hydrogen atoms have been investigated using the simplified model for the first time. The rate coefficients for the mutual neutralization, ion-pair formation, excitation, and de-excitation processes of the above collision systems have been provided in the temperature range of 1000 to 10000 K. For the calculations of the Y-H collision system, we consider 3 ionic scattering channels and 73 covalent scattering channels, comprising 4074 partial inelastic reaction channels. For the Y⁺-H system, 1 ionic scattering channel and 116 covalent scattering channels are included in the calculations, and 13572 partial inelastic reaction channels are considered. The computations for different molecular symmetries and spins are treated separately; the total rate coefficients are obtained by summing the results from the different molecular symmetries and spins. It is found that the rate coefficients of the mutual neutralization processes exhibit the largest values in the considered temperature range.
In the inelastic processes involving the ionic state Y²⁺(4p⁶4d ²D), we consider three molecular symmetries: ²Σ⁺, ²Π, and ²Δ. The neutralization process Y²⁺(4p⁶4d ²D)+H⁻ → Y⁺(4d5d e¹G)+H exhibits the largest rate value, 7.67 × 10⁻⁸ cm³ s⁻¹, and the de-excitation process Y⁺(4d6p ¹P°)+H → Y⁺(4d5d e¹G)+H demonstrates a peak rate value of 6.22 × 10⁻⁹ cm³ s⁻¹. It is found that near the optimal windows (−2 eV and −4.4 eV), the excitation, de-excitation, and ion-pair formation processes also have intermediate rate coefficients (10⁻¹²-10⁻⁸ cm³ s⁻¹), which are an order of magnitude smaller than those of the mutual neutralization processes. It is anticipated that these intermediate-to-large rate coefficients will be useful for astrophysical applications. The simplified model provides highly accurate and reliable large-value rate coefficients, which is vital for non-LTE modeling.
"year": 2023,
"sha1": "039465b7d133149424bf9dc9ae8b4dce7fe60651",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/mnras/advance-article-pdf/doi/10.1093/mnras/stad2906/52727338/stad2906.pdf",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "039465b7d133149424bf9dc9ae8b4dce7fe60651",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
797336 | pes2o/s2orc | v3-fos-license | Comparison of first dimension IPG and NEPHGE techniques in two-dimensional gel electrophoresis experiment with cytosolic unfolded protein response in Saccharomyces cerevisiae
Background Two-dimensional gel electrophoresis (2DE) is one of the most popular methods in proteomics. Currently, most 2DE experiments are performed using an immobilized pH gradient (IPG) in the first dimension; however, some laboratories still use the carrier ampholyte-based isoelectric focusing technique. The aim of this study was to directly compare the IPG-based and non-equilibrium pH gradient electrophoresis (NEPHGE)-based 2DE techniques by using the same samples and identical second-dimension procedures. We used the commercially available Invitrogen ZOOM IPGRunner and WITAvision systems for IPG and NEPHGE, respectively. The effectiveness of the IPG-based and NEPHGE-based 2DE methods was compared by analysing differential protein expression during the cytosolic unfolded protein response (UPR-Cyto) in Saccharomyces cerevisiae. Results Protein loss during the 2DE procedure was higher in the IPG-based method, especially for basic (pI > 7) proteins. The overall reproducibility of spots was slightly better in the NEPHGE-based method; however, there was a marked difference when evaluating basic and acidic protein spots separately. Using Coomassie staining, about half of the detected basic protein spots were not reproducible by IPG-based 2DE, whereas the NEPHGE-based method showed excellent reproducibility in the basic gel zone. The reproducibility of acidic proteins was similar in both methods. The absolute and relative volume variability of separate protein spots was comparable in the two 2DE techniques. Regarding the proteomic analysis of UPR-Cyto, the results exemplified the parameters of the general comparison of the methods. A new, highly basic protein overexpressed during UPR-Cyto stress, Sis1p, was identified by the NEPHGE-based 2DE method, whereas the IPG-based method showed unreliable results in the basic pI range and did not provide any new information on basic UPR-Cyto proteins. In the acidic range, the main UPR-Cyto proteins were detected and quantified by both methods. The drawback of the NEPHGE-based 2DE method is its failure to detect some highly acidic proteins; its advantage is a higher protein capacity, with good reproducibility and spot quality at high protein load. Conclusion Comparison of the broad-range (pH 3-10) gradient-based 2DE methods suggests that the NEPHGE-based method is preferable to the IPG (Invitrogen) 2DE method for the analysis of basic proteins. Nevertheless, the narrow-range (pH 4-7) IPG technique is the method of choice for the analysis of acidic proteins.
Background
Two-dimensional gel electrophoresis (2DE) is one of the most widely used techniques for global protein separation and quantification [1,2]. More than 35 years ago, 2DE was developed independently by Klose [3] and O'Farrell [4], representing the combination of two orthogonal separation techniques. In the first dimension, proteins are separated by isoelectric focusing (IEF) according to their isoelectric point. In the second dimension, proteins are separated according to their electrophoretic mobility by conventional SDS-PAGE. There are two different first-dimension separation techniques: the method of Klose [3] and O'Farrell [4], where the pH gradient is formed by carrier ampholytes (CA) (amphoteric oligoamino-oligocarboxylic acids with high buffer capacity at their pI) during the focusing process, and the method described by Bjellqvist and Görg [5][6][7] using immobilized pH gradients (IPG). The protocol of 2DE with IPGs has been constantly refined, featuring a number of significant advances and applications over the past 30 years [8]. Due to its simple handling and commercialization, IPG-based IEF is typically used for 2DE-based proteome analysis and has widespread applications. Currently, various manufacturers provide a number of different IPG strips varying in length (7-24 cm) and pH range (narrow or broad, e.g. pH 4-7 or 3-10; linear or non-linear) [9]. In contrast, CA-based IEF, being a labour-intensive technique, has failed to achieve widespread application, but is still used in more specialized laboratories.
Despite its widespread application, the IPG-based 2DE method still has some limitations, especially in the analysis of basic proteins [10]. The separation of basic proteins by 2DE is still considered a challenge, and most gel-based proteomic studies are performed in the acidic range. In this regard, the CA-based 2DE method should still be considered for functional proteomics experiments in a broad pH range. The first CA-based technique, described by O'Farrell, was efficient mostly for acidic proteins, but later O'Farrell published the CA-based 2DE method for non-equilibrium pH gradient electrophoresis (NEPHGE) concerning the separation of basic proteins [11]. For the efficient analysis of basic proteins by this 2DE method, the proteins are applied to the anodic instead of the cathodic end of the IEF gel. This technique was further improved in the laboratory of Klose, and an updated protocol of the NEPHGE-based 2DE method was finally reported in 1995 [12]. The equipment necessary for performing this technique was later made commercially available from WITA GmbH as the "WITAvision" 2DE system (reviewed in detail in [13]). It therefore became possible to try out various formats (IEF gel lengths from 7 to 40 cm) of the NEPHGE-based 2DE method.
The aim of this study was to directly compare the IPG- and NEPHGE-based 2DE techniques by using the same samples and identical 2nd dimension procedures. For IPG-based 2DE we chose the Invitrogen "ZOOM IPGRunner" system. This mini-gel 2DE system is simple and inexpensive, and both its IEF gel length (7 cm) and recommended sample buffer composition are compatible with those of the NEPHGE-based 2DE "WITAvision" system. It should be noted that our results represent only the usage of the Invitrogen IPG-based 2DE system. It has been reported that commercially available IPG strips can vary considerably, leading to marked differences in subsequent protein resolution during 2DE [14].
Earlier comparisons of IPG-based versus NEPHGE-based 2DE techniques [15,16] were made as proteome analysis experiments. Here we performed a differential expression proteomics experiment using both methods and a broad (pH 3-10) gradient range on UPR-Cyto stress in yeast Saccharomyces cerevisiae cells. The results were compared with our previous study of the same phenomenon using Invitrogen narrow-range pH 4-7 IPG strips [17]. Our data suggest that the NEPHGE-based 2DE method is the method of choice for the analysis of basic proteins. The most dramatic demonstration of this statement was the identification of the differentially expressed, highly basic protein Sis1p by the NEPHGE but not the IPG technique. However, in the acidic pH range both techniques appeared to be similar, with some specific advantages and drawbacks. We hope that this study will help others to choose the most efficient system or strategy for their proteomics experiments.
Results and discussion
Overview of the protocols

The same samples of whole cell lysates from yeast cells expressing measles virus hemagglutinin (MeH) or nucleocapsid protein (MeN) and from control yeast cells (transformed with the empty vector pFGG3) were focused in broad-range (pH 3-10) IPG strips (Invitrogen) and in non-equilibrium pH gradient gels made according to the manufacturer's (WITA) recommendations. After equilibration, the strips and gels were applied onto uniform SDS-polyacrylamide mini-gels and run under the same conditions in the "Biometra" system. The second-dimension SDS-PAGE and the following gel staining, scanning and image analysis steps for the IPG and NEPHGE samples were performed in parallel. Therefore, the only difference between IPG- and NEPHGE-based two-dimensional electrophoresis (2DE) was the first-dimension isoelectric focusing step and some deviations in the equilibration protocol (IPG strips were equilibrated after, whereas NEPHGE gels before, freezing at −70°C). This allowed a direct comparison of the first-dimension IPG and NEPHGE techniques, as all other parameters, conditions and samples in both 2DE experiments were exactly the same.
Examples of 2D gel images are shown in Figures 1 and 2. We analysed various protein spot parameters under two different experimental conditions: at standard 1x protein load (50 μg of whole cell lysate protein per gel, as recommended by the manufacturer of the IPG strips) and at high 2x protein load (100 μg of total protein per gel). The general quantitative analysis of the IPG- and NEPHGE-based 2DE methods is presented in Tables 1 and 2, respectively, whereas their "trial" comparison in the concrete biological experiment [17] is summarized in Table 3.
Handling differences
When comparing the IPG- and NEPHGE-based methods, the specific differences between these procedures should be mentioned. In the case of IPG, we used commercial dried polyacrylamide gels with an immobilized pH gradient attached to plastic strips. After application of the sample, the gel is left to rehydrate overnight, and during this step the proteins enter the gel. Following rehydration, the first-dimension electrophoresis, isoelectric focusing (IEF), is performed in the IPG strips, and the proteins usually reach their isoelectric point, where their net charge equals zero. Such an IEF procedure could be defined as equilibrium pH gradient electrophoresis. The protocol for this method is simple and easy, because the IPG strips are practically identical and a convenient, well-defined procedure is used in every experiment. Therefore, it is easy to repeat the procedure in exactly the same way, and repeatable results can be expected. In the case of NEPHGE, the first-dimension gels are cast by the users themselves, and the gel length and quality (e.g. the presence or absence of bubbles in the gel) depend only on the handiness of the experimenter. Moreover, after the first-dimension IEF, the handling of IPG strips is safe and easy, whereas NEPHGE tube gels are fragile: they can easily break during extrusion from the tubes, slip out of the equilibration grooves, and fragment into pieces under careless treatment. Therefore, in terms of simplicity the IPG method is preferable to NEPHGE, which could be called "stressful" and requires serious skills. However, despite some troubles with the NEPHGE gels in our experiments (in the beginning we were losing about half of the first-dimension gels), the results presented here demonstrate the applicability of this method.
Spot reproducibility
We used Coomassie staining of 2D gels. This is not a very sensitive protein detection method, and in this case the manufacturer of the IPG strips (Invitrogen) recommends loading 20-50 μg of total protein per ZOOM strip. For the "standard" loading experiment, we used the maximal recommended protein amount (50 μg) per strip. It should be mentioned that a smaller amount of whole cell protein was used in the parallel NEPHGE-based 2DE experiment (~30 μg versus ~50 μg in IPG). The sample volume is limited by the narrow tube diameter in the NEPHGE procedure; therefore, it is impossible to load more protein if the sample is too diluted. An example of 2D gels from this experiment is shown in Figure 1. Under these conditions, the IPG- and NEPHGE-based 2DE methods are compared quantitatively in Table 1. A similar number of protein spots (~100) was detected in the 2D gels by both methods. As the study using the standard protein load did not represent enough yeast proteins to allow firm conclusions, we repeated the experiment by loading double amounts of protein on each gel. For this experiment, we used more concentrated whole cell lysates (see Methods). The same experimental variants were analysed by loading 100 μg of whole cell protein onto each IPG strip and NEPHGE gel. All other 2DE conditions were exactly the same. The number of detected spots at high protein load increased substantially: more than 400 different spots were detected by IPG- and over 500 spots by NEPHGE-based 2DE (an example of 2D gels is shown in Figure 2, whereas the quantitative analysis at high protein load is presented in Table 2).

Figure 1. 2DE of yeast whole cell lysates using IPG (A-C) and NEPHGE (D-F) based methods at standard protein load. The same samples from control cells (transformed with empty vector pFGG3; A, D) and MeH (pFGG3-MeH transformant; B, E) or MeN (pFGG3-MeN; C, F) expressing cells were loaded onto IPG strips (50 μg of total protein in each strip) and NEPHGE gels (30 μg of total protein in each gel). Approximate pI values are indicated below the gels (a pH 3-10 gradient was used in both methods). The dashed line indicates the approximate zone of neutral pI 7.0, which separates acidic (left, pI < 7) and basic (right, pI > 7) protein spots. Protein molecular weight markers (M) are loaded onto the IPG-based 2D gels; their masses are indicated at the right (kDa). Arrows point to the spots described in Table 3. Solid arrows indicate protein spots that were identified in our previous work [17], whereas dotted arrows point to additional spots identified by MS in this study. Quantitative analysis of each indicated protein spot is presented in Table 3.
A comparison of loaded and detected protein amounts suggests that more than 1/3 of the total protein amount was lost using IPG strips, in comparison to the total volume of protein spots detected in NEPHGE-based 2D gels (Tables 1 and 2). Analysis of the separate parts of the 2D gels reveals that the loss of total protein in the IPG-based 2DE method is mostly determined by the loss of basic proteins. Tables 1 and 2 show that the protein amount in the 2D gels is distributed unequally: in the case of NEPHGE, the detected basic protein amount is twice as large as in IPG-based 2D gels, whereas the total volume of acidic protein spots is rather similar in both techniques.

Figure 2. 2DE of yeast whole cell lysates using IPG (A-C) and NEPHGE (D-F) based methods at high protein load. The same concentrated samples from control cells (transformed with empty vector pFGG3; A, D) and MeH (pFGG3-MeH transformant; B, E) or MeN (pFGG3-MeN; C, F) expressing cells were loaded onto IPG strips (100 μg of total protein in each strip) and NEPHGE gels (100 μg of total protein in each gel). An original scan of one of the replicas is shown for comparison (six gels were scanned in parallel at the same time). The references are the same as in Figure 1.

Table 1 legend: 1 The same samples were analysed in IPG- and NEPHGE-based 2DE systems; ~50 μg of whole cell protein was loaded onto IPG strips and ~30 μg onto NEPHGE gels (due to the small space for sample application in NEPHGE tubes; see text). 2 Immobilized pH gradient (IPG) based 2DE method (Invitrogen pH 3-10 system). 3 Non-equilibrium pH gradient gel electrophoresis (NEPHGE) based 2DE method (WITAvision pH 3-10 system). 4 Parameters were calculated from 2-3 replicas (repeated analysis of the same samples). Each parameter was calculated both for the whole gel (pI 3-10, all detected proteins) and for its pI < 7 and pI > 7 parts (acidic and basic proteins, respectively). The neutral pI 7 line, separating acidic and basic protein spots, is indicated by a dashed line in Figure 1. 5 Number of detected separate protein spots in all samples (Control, MeH and MeN), from all replicas. 6 The same spots detected among replicas of the same sample (according to matches of the spots generated by the 2D image analysis software ImageMaster 2D Platinum 7.0); the percentage of matched spots (±SD) is given for the whole gel (pI 3-10) and for its acidic and basic parts. 7 Total volume (Vol; the product of spot area and intensity) of all protein spots in one gel, as calculated by the 2D analysis software; the average for the whole pI 3-10 gel is given as 100% (±SD), whereas pI < 7 and > 7 indicate the acidic and basic protein portions, respectively. 8 Variation of the volumes of the same spots in separate replicas; the calculation was made using all spots matched by the software, and then the average variation ΔVol ± SD was calculated. 9 %Vol indicates the percentage of the volume of a separate spot among the volume of all protein spots in a gel. In this case, all matched protein spots were evaluated in the same way as when calculating the variation of volumes (8), only the values of %Vol were used instead of Vol (the result is the average Δ%Vol ± SD). 10 Average saliency of detected protein spots per gel ± SD. 11 Detected protein spots with a saliency < 500 were considered low quality spots (see text). The percentage of such protein spots (±SD) was calculated for the whole gel (pI 3-10) and for its acidic and basic parts.
Table 2 (fragment of the last row): Low quality spots (saliency < 500) 11, %: 20 ± 5% (IPG, pI 3-10), 18 ± 5% (IPG, pI < 7), 28 ± 7% (IPG, pI > 7); 11 ± 4% (NEPHGE, pI 3-10), 13 ± 5% (NEPHGE, pI < 7), 6 ± 3% (NEPHGE, pI > 7).

Table 2 legend: 2x higher protein amounts were loaded onto the 1st dimension gels than for the standard application described in Table 1. Preparation of the concentrated whole cell lysates for this experiment is described in the Methods section. All other procedures and calculations were the same as for the 1x protein load described in Table 1. An example of 2D gel images from the high load experiment is shown in Figure 2. 1 The same samples were analysed in IPG- and NEPHGE-based 2DE systems; equal amounts of ~100 μg of whole cell protein were loaded onto the IPG strips and onto the NEPHGE gels.
2-11 The references are the same as in Table 1, and all parameters were calculated exactly as described in the Table 1 legend.
Table 3 legend: 1 Differentially expressed protein spots in this experiment are indicated by letters (see Figures 1 and 2). 2 The same protein spots are indicated by numbers in the referenced article ([17], see Figure nine and Table one). 3 Accepted name from the Saccharomyces Genome Database (SGD) and YPD. Spots 1 and 2 represent mixtures of the similar proteins Ssa1 and Ssa2 (97% identity) at an unknown ratio (see the Table one legend in [17]). 4 Cellular protein expression fold change in MeH expressing versus control cells, determined in previous work using the pH 4-7 IPG-based 2DE system (Invitrogen); the values are taken from Table one in reference [17]. 5 Expression fold changes of the same proteins determined from independent experiments in this work using pH 3-10 IPG strips (Invitrogen); ~50 μg of whole cell protein was loaded onto IPG strips in the "Standard" experiment, whereas ~100 μg was used in the "High load" experiment. 6 Expression fold changes of the same proteins determined from independent experiments in this work using pH 3-10 NEPHGE first dimension gels (WITAvision); 30 μg of whole cell protein was loaded onto NEPHGE gels in the "Standard" experiment, whereas ~100 μg was used for each gel in the "High load" analysis. 7 Not identified (N.I.) in the previous study, because an increased amount of this protein was observed only in cells expressing MeH, but not MuHN protein (the expression fold change in MeH/control cells determined by the IPG 4-7 system here is given from our unpublished data). 8 "?" and "!" indicate basic protein spots (pI > 7) that were not analysed in the previous experiment on the pH 4-7 platform (N.A., not assayed). Although protein spot "?" showed a false expression change ("artefact") in the IPG-based system (unreliable expression changes are apparent from the high error range), in this experiment it was identified as phosphoglycerate mutase 1 (Gpm1p). Protein Sis1p in this study was identified using the NEPHGE-based 2DE system, whereas it was not detected by the IPG-based 2DE method.
The drawbacks of the IPG method on the basic gel side are not limited to the protein amount. Different proteins are lost in separate experiments; this is clearly demonstrated by the reproducibility parameter in Tables 1 and 2. Only about half of the basic protein spots detected by the IPG-based method (~44% and ~51% at standard and high protein load, respectively) are reproduced, and still with large variation. Therefore, in an IPG-based 2DE experiment it is possible to quantitatively evaluate only ~50% of the detected basic protein spots, and even these tend to give unreliable results (described below). In contrast, the NEPHGE-based method is most reproducible on the basic gel side, with ~90% reproducibility and minimal gel-to-gel variation at high protein load (Table 2). A few spots in NEPHGE-based 2D gels at standard protein load were not evaluated due to our imperfect performance in the first dimension with a couple of the control sample replicas. Some bubbles introduced during loading of the sample, or a slightly shorter first dimension NEPHGE gel, resulted in incomplete focusing or impaired spot resolution (Figure 1, D, or not shown). These problems were avoided when running NEPHGE samples at high protein load.
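To make the reproducibility metric concrete, the following minimal sketch computes the percentage of spots matched across replicas of the same sample, as reported in Tables 1 and 2. The spot IDs and set sizes are invented for illustration and do not correspond to our actual data.

```python
# Minimal sketch of the spot reproducibility metric: the fraction of detected
# spots that are matched across all replicas of the same sample.
# Spot IDs below are invented for illustration.
replicas = [
    {"s01", "s02", "s03", "s05", "s08"},  # spot IDs detected in replica 1
    {"s01", "s02", "s04", "s05", "s08"},  # replica 2
    {"s01", "s02", "s05", "s07", "s08"},  # replica 3
]

matched = set.intersection(*replicas)     # spots present in every replica
detected = set.union(*replicas)           # spots seen in at least one replica
print(f"reproducibility: {100 * len(matched) / len(detected):.0f}% "
      f"({len(matched)} of {len(detected)} spots matched)")
```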
The analysis of acidic proteins shows good reproducibility in the IPG-based method at standard protein load, because the >80% spot reproducibility practically coincides with the variation of the total protein amount in the gel, which in our case reached almost 20% (due to loading errors or differences in gel staining intensities). A smaller amount of protein on the gel results in the disappearance of weak spots, and this is the main reason for the differences among replicas. It is evident from Figure 1 that the protein patterns in 2D gels of different samples (Control, MeH, MeN) analysed by the same method are very similar. By comparing our 2D gel images obtained using the IPG-based method with earlier IPG-2DE analyses of the yeast proteome [18,19], it can be noticed that the positions and relative amounts of the vast majority of protein spots in our experiments match well with the results of previous studies. The exceptions are protein spots differentially expressed due to different experimental conditions. Surprisingly, the high protein load experiment showed better reproducibility of acidic protein spots in NEPHGE- than in IPG-based 2DE, and the difference was significant. This resulted in a considerable difference in overall spot reproducibility, with ~87% in NEPHGE- versus only ~68% in IPG-based 2DE (Table 2). This suggests that the IPG strips were overloaded. Indeed, some areas in the acidic gel zone showed incomplete focusing and loss of resolution in IPG-based 2D gels at high protein load (see Figure 2, upper panel).
Spot quality and protein capacity of the 1st dimension gels
Spot quality is also an important parameter in 2DE analysis. We used the ImageMaster 2D Platinum 7.0 software (GE Healthcare), which calculates a saliency value for every detected protein spot. This parameter is a measure based on spot curvature. Real spots generally have high saliency values, whereas artifacts and background noise have small saliencies. Saliency is an efficient parameter for filtering and discarding spots, but it may also be used for the evaluation of spot quality. Other 2D gel analysis software packages provide similar "spot quality" values, which are also based on the spot curvature property. For example, the PDQuest software (Bio-Rad) calculates spot quality numbers, which are mainly based on a Gaussian fit assessing spot shape. Absolute values of spot saliency may vary depending on the brightness and contrast of the 2D images. However, in our case all image processing procedures for both IPG- and NEPHGE-based 2DE gels were the same, and therefore it was possible to compare the two methods using saliency as the spot quality parameter. We calculated the average saliency of protein spots and the percentage of low quality spots for every gel. These data are provided in Tables 1 and 2. The average spot saliency for a whole gel at standard protein load was similar in both methods, but again there was a difference when comparing the acidic and basic gel zones. The quality of acidic protein spots was higher in IPG, whereas basic proteins were better shaped in the NEPHGE-based 2DE method (Table 1). To count low quality spots, we set an arbitrary saliency value of 500. Saliency is highly dependent on the images, and, according to the software user manual, gels may need saliency values from 10 to 5000 for correct filtering. We discarded all spots with a saliency < 150, whereas protein spots with a saliency < 500 were defined as low quality spots. The percentage of low quality spots among acidic proteins at standard 1x protein load was considerably higher in the NEPHGE-based method, whereas the results for basic protein spots were similar in both methods (Table 1). However, the high protein load experiment showed substantially different results. The spot quality data confirmed that the IPG strips were overloaded by the 2x total protein amount. Thus, the double protein load significantly decreased the average spot saliency and increased the percentage of low quality spots in IPG-based 2D gels, whereas NEPHGE-based 2D gels demonstrated increased overall spot quality (see Table 2 and compare with Table 1). Especially convincing was the reduction of low quality spots in NEPHGE gels, with only ~6% of detected basic protein spots falling below the saliency value of 500 (Table 2). The lower spot quality in the standard protein load experiment may be at least partially explained by our imperfect performance with NEPHGE gels, which is reflected by the higher error ranges of the average saliency values than at high protein load (Tables 1 and 2, respectively). In any case, a high protein load onto NEPHGE gels is preferable, because both spot quality and reproducibility are then excellent. It seems that loading 100 μg of whole cell protein onto a NEPHGE gel is close to the optimal amount in a mini-gel format using Coomassie staining. Further attempts to increase the protein amount and detect even more spots in a small gel may result in overlapping of neighbouring proteins by spots of high-abundance proteins.
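The saliency-based filtering described above can be summarized in a few lines. The sketch below uses invented saliency values; only the two cut-offs (150 for discarding artifacts, 500 for flagging low quality spots) are taken from the text.

```python
# Sketch of the saliency-based spot filtering described in the text.
# Saliency values are invented; the cut-offs (150, 500) are those quoted above.
saliencies = [90, 140, 180, 320, 480, 510, 750, 1200, 2600]

kept = [s for s in saliencies if s >= 150]   # spots below 150 are discarded as artifacts
low_quality = [s for s in kept if s < 500]   # kept spots below 500 are flagged low quality
print(f"kept {len(kept)}/{len(saliencies)} spots; "
      f"{100 * len(low_quality) / len(kept):.0f}% of kept spots are low quality")
```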
Experiments with loading 1x and 2x protein amounts per gel revealed the different protein capacities of IPG strips and NEPHGE gels. The manufacturer's recommendation to load only up to 50 μg of total protein onto a broad pH range IPG strip seems to be correct, because a double protein amount resulted in overloading and loss of spot quality and reproducibility. Therefore, the protein capacity of a pH 3-10 IPG strip is limited to ~50 μg of total protein. Meanwhile, the protein capacity of a NEPHGE gel is at least ~100 μg of total protein from the same sample. It should be noted that the higher protein capacity of NEPHGE gels over IPG strips is not limited to ~2-fold. The volume of NEPHGE gels is much smaller than the volume of IPG strips of the same length. The volume of a NEPHGE tube gel (7 cm in length and 0.9 mm in diameter) is only ~45 mm³, whereas an IPG gel (70 mm × 0.5 mm × 3.3 mm) has a volume of ~115 mm³. This two-and-a-half-fold difference in volumes means that the protein capacity per unit volume of a NEPHGE gel is ~5-fold higher than that of a broad range first dimension IPG gel.
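For reference, the volume and capacity figures quoted above follow directly from the stated gel dimensions:

```latex
% Gel volumes from the stated dimensions
V_{\mathrm{NEPHGE}} = \pi r^{2} L = \pi \,(0.45\ \mathrm{mm})^{2} \,(70\ \mathrm{mm}) \approx 45\ \mathrm{mm}^{3}
V_{\mathrm{IPG}} = 70\ \mathrm{mm} \times 0.5\ \mathrm{mm} \times 3.3\ \mathrm{mm} \approx 115\ \mathrm{mm}^{3}
% Protein capacity per unit gel volume (>=100 ug in NEPHGE vs ~50 ug in IPG)
\frac{100\ \mu\mathrm{g} / 45\ \mathrm{mm}^{3}}{50\ \mu\mathrm{g} / 115\ \mathrm{mm}^{3}} \approx 5.1
```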
Impact of the procedure on experimental variations
The results of the quantitative analysis showed considerable variation in the relative volumes (%Vol) of the spots among experimental replicas of the same samples at standard protein load, constituting ~30 ± 25% (Table 1). This means that a ~1.3 ± 0.3 fold change in the %Vol of a protein spot between different experimental conditions is within the range of error and should not be interpreted as a biological effect. We noticed that at least half of this variation is caused by the different conditions of independent experiments. When comparing the same samples processed in parallel, the variation in %Vol constituted only 10-15%. Thus, the easiest way to minimize variation is to compare the experimental sample with a control processed in parallel, rather than in separate independent experiments. Experimental variation itself should not introduce false positive results when a larger number of replicas is used, because in this case it is apparent in the error range. However, it should be considered that the fold change threshold for a differentially expressed protein spot is at least 1.3 if samples are run in parallel and >1.5-1.6 fold if samples are processed in separate 2DE experiments. Lower fold changes fall within the experimental variation range and are unreliable values for differential expression. These thresholds should be considered at least when analysing whole cell lysates.
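As an illustration of how such a threshold follows from the replica-to-replica variation, consider the toy calculation below; the %Vol values are invented, not taken from Table 1.

```python
# Illustration of deriving a fold-change threshold from the %Vol variation of
# the same spot across replicas. The %Vol values are invented for the example.
import statistics

pct_vol = [0.65, 1.35, 1.00]                    # %Vol of one spot in three replicas
mean = statistics.mean(pct_vol)
variation = statistics.pstdev(pct_vol) / mean   # relative variation of %Vol
threshold = 1 + variation                       # smallest detectable fold change
print(f"%Vol variation ~{100 * variation:.0f}% -> fold-change threshold ~{threshold:.2f}")
```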
A high protein load did not considerably affect the experimental variation of spot volumes in the case of NEPHGE, except that the average %Vol variation of basic protein spots decreased from ~31% at standard load to ~21% at high load conditions (Tables 1 and 2). All other spot variation parameters were almost identical in NEPHGE-based 2DE at both standard and high protein loads. Accordingly, the threshold for considering a protein spot differentially expressed at high load should be the same as under standard conditions. A different situation was observed in the case of IPG, where the high sample load increased the variation of spot volumes, reaching ~50 ± 50% (Table 2). This indicates that a ~1.5 ± 0.5 fold change in the %Vol of a protein spot can be the result of experimental variation alone. The threshold for differential expression under such conditions should be increased to ~2.0-fold, and this may substantially complicate the analysis of biological variation.
Evaluation of procedure impact on biological variations
Both 2DE methods were examined in a biological experiment on the cytosolic unfolded protein response (UPR-Cyto), and the results were compared with our earlier study, where the same phenomenon was analysed using the narrow range (pH 4-7) Invitrogen IPG-based 2DE method [17]. Here we show the analysis of analogous samples in a broad range first dimension pH 3-10 gradient by both IPG- and NEPHGE-based 2DE methods. The main differentially expressed protein spots, already identified and evaluated in the previous experiment, are indicated by solid arrows in Figures 1 and 2, and their quantitative analysis is given in Table 3.
In previously published work comparing IPG- and NEPHGE-based 2DE methods, it was difficult to predict protein mobility in the different gel systems [15]. We suggest that this may be partially related to the different sample preparation and second dimension electrophoresis in each method, because in our experiment the overall protein patterns are rather similar, and most corresponding spots of high-abundance proteins can be cross-referenced (Figure 1). The essential results of our previous study on the UPR-Cyto response were reproduced by both the IPG and NEPHGE pH 3-10 systems; however, the quality of the results differs. At standard protein load, both pH 3-10 methods were less sensitive than the pH 4-7 method in the acidic pI range. The spots of the low-abundance proteins Sgt2p, Sti1p and Hsp104p were quantitatively evaluated and identified in the earlier pH 4-7 IPG-based 2DE study [17]. In this experiment, the spots of Hsp104p were near the limit of detection, preventing reliable quantitative analysis, whereas Sgt2p and Sti1p were entirely undetectable at standard protein load in the pH 3-10 based methods. The increased expression of the more abundant pI 4-7 proteins during UPR-Cyto was also determined by the IPG pH 3-10 method; however, the calculated fold changes were lower than in the previous study (Table 3). No new differentially expressed protein spots in the acidic pI range were detected by either pH 3-10 range method. In this study, we identified by mass spectrometry (MS) only one additional acidic protein, spot "h" (Figure 1), but it was also detectable in the earlier pH 4-7 based 2DE analysis, with an even higher fold change (see Table 3). The identified mixture of the similar cellular chaperones Ssa1 and Ssa2 in spot "h" does not add new proteins to UPR-Cyto, as their major forms are represented in spots a and b. The appearance of these minor isoforms of Ssa1/2p only suggests that the main overexpressed UPR-Cyto proteins undergo partial proteolysis in MeH expressing yeast cells.
The lower sensitivity of pH 3-10 versus pH 4-7 IPG in the acidic zone could be compensated by analysis in the basic pI protein zone. However, using the IPG pH 3-10 method, we did not identify any basic protein induced during UPR-Cyto stress, neither at standard nor at high protein load. Therefore, the use of the IPG pH 3-10 instead of the IPG pH 4-7 system is unsuitable, as its drawbacks are not compensated by any practical advantages. Comparison of the broad pH 3-10 range IPG and NEPHGE in the acidic protein zone reveals positive and negative features of both methods. Some of the main UPR-Cyto proteins showed higher fold changes in the NEPHGE-based method, and, in the case of the Ssa and Sse1 proteins, the results of the quantitative analysis are in better agreement with the earlier pH 4-7 IPG-based analysis than with the results of the pH 3-10 IPG-based method (Table 3). The main drawback of the NEPHGE-based method was its failure to detect the acidic UPR-Cyto protein Hsc/Hsp82 (Figures 1 and 2, spot f). On the basic side of the 2D gels, the results unambiguously demonstrated the advantage of NEPHGE over IPG. For example, the first replicas of IPG pH 3-10 based 2D gels at standard protein load showed increased expression of the basic protein Gpm1p in the MeH protein-expressing cells (Figure 1, A-C, spot "?"); however, this appeared to be an artefact, because the result was not repeatable (see the fold change error value for this spot in Table 3). Meanwhile, in the NEPHGE-based method, the corresponding protein spot was repeatable in all 2D gel replicas and did not show any considerable variation (Figure 1, D-F, spot "?" and Table 3). Instead of such artefacts, the NEPHGE-based method revealed a repeatable and statistically significant increase in the expression of the less abundant, highly basic (~pI 9) protein Sis1p in response to synthesis of MeH (spot "!" in Figures 1E, 2E and Table 3). Sis1p is a type II HSP40 co-chaperone that interacts with the HSP70 protein Ssa1p, which is the most abundant cellular protein overexpressed during UPR-Cyto stress ([17] and Figure 1, spots a and b). It can be noted that Sis1p was also identified as an overexpressed cellular protein during UPR-Cyto in another earlier study, using misfolded YFP expression [20]. Therefore, the identification of Sis1p is a convincing result and expands our knowledge of UPR-Cyto stress.
Although we did not perform independent experiments at high protein load, the fold changes of the same overexpressed UPR-Cyto proteins were calculated from three technical replicates for comparison with the standard load procedure (see Table 3, "High load" columns). Loading a 2x protein amount on IPG strips had two effects on the evaluation of differentially expressed UPR-Cyto proteins. First, the standard deviations of the fold changes of the overexpressed Ssa, Sse1 and Hsc/Hsp82 proteins were greatly increased. This shows that experimental gel-to-gel variation under these conditions far exceeds biological variation. Such variation among technical replicates almost reaches the level of variation between different biological states (i.e. the differential expression of UPR-Cyto proteins in MeH expressing versus control cells). The high protein load on IPG strips also significantly lowered the average fold change values for overexpressed Kar2p, Eno2p and the partially degraded Ssa1/2p form (spot h). The latter was not recognized as an overexpressed protein due to overloading and poor resolution, which resulted in overlapping protein spots in the corresponding 2D gel area (see Figure 2B, spot h). It is unclear why the well-shaped Kar2p spot showed the lower fold change, but this also seems an incorrect result, because all other quantitative analyses, including immunoblotting (described below), showed considerably higher fold change values. Finally, the abovementioned basic Gpm1p protein spot with false overexpression at standard protein load in IPG here showed the opposite result, repression. This once more confirmed that results in the basic zone of broad range IPG-based 2DE gels are unreliable. In summary, a high protein load in pH 3-10 IPG strips was not suitable for the analysis of UPR-Cyto stress; therefore, the standard sample load is preferable for this method.
A high protein load on NEPHGE gels resulted in lower fold changes and negligible standard deviations compared to 1x sample loading. The extreme fold change determined for Kar2p at standard protein load (9 ± 3.1) was now corrected to a more reliable value of 2.5 ± 0.2, which corresponds better to the Western blot result. This protein is not abundantly expressed under normal growth conditions; therefore, the Kar2p spot is underrepresented in the control sample at the lower protein amount, resulting in an imprecise quantitative comparison. The only exception to the lower fold changes of overexpressed UPR-Cyto proteins in the high load analysis by the NEPHGE-based method was the increased fold change value of spot h (Figure 2E and Table 3). Possibly, in this case the result of the 1x load experiment was also improved, because the higher fold change of spot h is practically identical to that of the earlier pH 4-7 IPG-based 2DE analysis. Taken together, the high protein load in NEPHGE-based 2DE showed reliable results in the UPR-Cyto analysis, with minimal experimental variation of differentially expressed protein spots. It seems worthwhile to use the high protein load as the optimal condition for routine analysis of yeast protein samples by the NEPHGE-based 2DE technique, because it has only advantages over the standard protein load.
Verification of proteomic results by immunoblotting
To confirm the results of the 2DE study, we performed immunoblots using commercially available antibodies against two overexpressed UPR-Cyto proteins, Kar2 and Sis1. Kar2p showed the highest overexpression during UPR-Cyto stress, but the determined fold change varied greatly, from 1.8 to 9, depending on the 2DE technique and protocol (Table 3). Sis1p was the most important protein identified in this study, because it was a new protein involved in UPR-Cyto stress and exemplified the main advantage of NEPHGE over IPG in the analysis of highly basic proteins. Representative images of the Western blot analysis of Kar2p and Sis1p expression in crude yeast lysates are shown in Figure 3. Alongside the overexpressed main Kar2p form, the immunoblot also revealed an additional Kar2p band of slightly higher molecular weight in the cells expressing MeH protein, which induces UPR-Cyto (Figure 3B). Most likely this was a precursor of the Kar2 protein with an uncleaved signal sequence. Therefore, we included both bands in the calculation of the Kar2p expression fold change. The results of three independent experiments showed a 3.6 ± 0.5 fold increase of Kar2p expression in yeast cells expressing MeH protein.
This corresponds to the fold change of Kar2p determined in our previous study using the narrow range pH 4-7 IPG-based 2DE method (Table 3). Western blotting using antibodies against Sis1p showed a 1.9 ± 0.2 fold expression increase in MeH expressing yeast. This confirmed the overexpression of Sis1p during UPR-Cyto.
General comparison of NEPHGE- and IPG-based 2DE methods
General characteristics of both methods are briefly summarized in Table 4. Considering all the data, the NEPHGE-based method seems preferable for a broad range pH 3-10 gradient. The examples given above illustrate the essential differences between the two methods in Tables 1 and 2: the IPG 3-10 (Invitrogen) 2DE method is reliable only for the analysis of acidic proteins, whereas the NEPHGE method produced acceptable results over the entire pI range and was especially suitable for the analysis of basic proteins. Comparing only the results at high protein load, we should state that NEPHGE is by far the better method, because all protein spot parameters are better than the parameters obtained using the IPG technique, even in the acidic range (see the values in the pI < 7 columns in Table 2). The results of the differential expression proteomics experiment with UPR-Cyto stress confirmed that a high sample load onto pH 3-10 IPG strips is unsuitable for studies of biological effects by 2DE. Therefore, it seems reasonable to exclude the high load IPG experiment from the comparison and to directly compare the IPG-based technique at standard 1x protein load with the NEPHGE-based method at 2x protein load. The comparison at optimal protein loading conditions (see Table 1 for IPG and Table 2 for NEPHGE) reveals very similar performance of both methods in the acidic range, with almost identical spot reproducibility and quality. The overall variation of the spot volume parameters is also similar in this case, as the higher variation averages in IPG-based 2DE are compensated by lower SD values. The only advantage of the NEPHGE method in this case is the substantially larger protein amount resolved on the 2D gel. As can be seen in Tables 1 and 2, it resulted in an almost five-fold higher number of separate acidic protein spots in NEPHGE (372 spots versus 79 spots detected by IPG). The parameters of the detected protein spots (reproducibility, quality, etc.) were nearly identical in both methods. This suggests that in the acidic zone both spot separation methods reached some optimal level, which results in similarly good parameters of the resolved protein spots.
An intrinsic property of the NEPHGE method is the higher protein capacity of the first dimension gel compared to an IPG gel of the corresponding format. It is worth discussing this in more detail, because it opens new opportunities. Detection of up to 500 good quality spots in a single small 2D gel by Coomassie staining, with its relatively low sensitivity, is a promising result. Taking into account the experimental procedure and protein detection method, it seems difficult to achieve a similar result using broad range IPG strips. Thus, the main problem of pH 3-10 IPG strips seems to be the limited amount of total protein that can be resolved into good quality spots on a 2D gel. Moreover, a significantly larger protein amount is lost during the IPG-based 2DE procedure than in the NEPHGE-based method, even at standard protein load. In fact, there are several specific steps at which proteins are lost during the IPG-based procedure. It was shown earlier that 20-55% of the loaded protein is lost due to attachment of proteins to the reswelling tray during the in-gel rehydration step [21]. Additionally, only 20-51% of the total protein amount loaded onto a pH 3-10 IPG strip was resolved on the 2D gel when a complex protein mixture was analysed [21]. The cited study was performed using IPG strips produced by Amersham (now GE Healthcare). This suggests that the effects observed with Invitrogen strips in our study may be inherent to all pH 3-10 IPG strips in general. The NEPHGE procedure does not include an in-gel rehydration step (gels are cast and used fresh), and this could explain the lower protein loss during 2DE. However, the most difficult thing to explain is why IPG strips are overloaded by much lower protein amounts than NEPHGE gels.
It should be noted that the detection and analysis of a large number of protein spots does not require a large amount of protein in the gel. In fact, we did not find any proteomic study on yeast proteins in which a large number of protein spots (at least as high as in our study) was analysed by IPG-based 2DE using the Coomassie staining method. For example, the detection of ~1200 spots and the creation of a 2D pattern as the yeast reference map was performed using radioactive labelling [18]. Another 2DE study using radioactive labelling of the yeast proteome reported the detection of ~1100 protein spots [22]. Silver staining of large IPG-based 2D gels of yeast lysates resulted in the visualisation of ~1000 spots per gel [19]. Yet another silver staining procedure was reported to visualise ~1500 spots of yeast proteins; however, in that case a narrow range pH 4.7-5.9 IPG strip of the largest possible 25 cm length was used [23]. The use of silver staining for medium, 13 cm length, pH 3-10 IPG-based 2D gels resulted in the detection and quantification of ~400 yeast protein spots per gel [24]. Finally, the most spots in the yeast proteome were detected using the fluorescent SYPRO Ruby staining method, which resulted in >2000 protein spots on each 2D gel [25]. However, the numbers of protein spots detected by IPG-based 2DE in any reported study seem small compared to the potential of the NEPHGE-based 2DE method. Over 10,000 protein spots were detected in one NEPHGE-based 2D gel using silver staining [12]. It is not clear why protein detection methods that are several times more sensitive should be necessary if it is possible to detect the same thousand proteins by simple Coomassie staining after loading a much larger amount of protein mixture onto the first dimension gel. This would be convenient for both quantitative analysis and mass spectrometry protein identification. Usually it is possible to unambiguously identify any protein spot visualised by Coomassie staining. Most likely, 2DE analysis of a whole proteome using Coomassie stain has not been used because of the limited protein capacity of IPG strips. In this context, the NEPHGE technique may offer an improvement in 2DE-based proteomic studies. When comparing the narrow range pH 4-7 IPG (Invitrogen) and pH 3-10 NEPHGE systems, it is more difficult to conclude which method is better. The inability to detect some less abundant acidic proteins by the NEPHGE-based method at standard protein load was easily solved by increasing the amount of total cell protein. The essential drawback of the NEPHGE-based method in acidic protein analysis is the disappearance of some differentially expressed protein spots; the examples here were the Hsc82 and Hsp82 protein spots. However, the NEPHGE-based method enabled the identification of the aforementioned basic differentially expressed Sis1 protein, and this compensates for the drawbacks in the acidic pI range. It should be mentioned that we used anodic isoelectric focusing (AIF; sample applied to the anodic side of the gel) according to the NEPHGE technique developed by Klose [3], in contrast to cathodic isoelectric focusing (CIF; sample applied to the cathodic side of the gel) developed by O'Farrell [4]. It was reported earlier that when using CIF, a whole class of proteins (very basic proteins) is lost, whereas when using AIF, a certain amount of each protein within a protein class (very acidic proteins) does not enter the gel [11,12].
Our results suggest that the broad range pH 3-10 IPG-based 2DE method suffers from the same limitation (loss of the very basic proteins) as the CIF technique of the NEPHGE method. It is important to note here that these specific problems are rather small compared to the main drawback of the basic 2DE method itself: a lot of proteins do not enter any 2D gel at all. Usually very few membrane proteins are detected by 2DE. Moreover, there are also other protein classes that are not represented on 2D gels. A good example is the recombinant viral proteins MeH and MeN, with pIs of ~6.6 and ~5.2, respectively. In this study they were overexpressed in the yeast cells and are present as strong bands in SDS-PAGE of crude yeast lysates (Figure 3A). If all proteins from the whole cell lysates entered the 2D gels, MeH and MeN should be present at microgram amounts. However, no traces of these proteins were observed in the 2D gels by either 2DE method (Figures 1 and 2). It is evident that the loss of more than half of the protein amount during the 2DE procedure [21] is rather specific, and many proteins are totally lost from the samples. Taken together, there is no ideal technique for the 2DE method, because all techniques have some drawbacks. In our case, the most efficient approach seems to be the use of large format NEPHGE gels for broad range pH 3-10 analysis, whereas in the acidic range the analysis could be complemented by narrow range IPG mini-gels (pH 4-7 or pH 4.5-5.5, etc.).
Conclusions
The first dimension IPG (Invitrogen) and NEPHGE (WITAvision) techniques were directly compared in a two-dimensional gel electrophoresis experiment using the same format mini-gels and the same samples of yeast whole cell lysates. The comparison of broad range pH 3-10 gradient based 2DE methods suggests that the NEPHGE-based method is preferable. The IPG 3-10 (Invitrogen) 2DE method is reliable only for the analysis of acidic proteins, because on the basic side of the 2D gels the results are not reproducible; meanwhile, the NEPHGE method is suitable over the entire pI range and is especially efficient for the analysis of basic proteins. In this study, this was exemplified by the identification of the highly basic protein Sis1p, overexpressed during UPR-Cyto stress in yeast cells. This protein was convincingly identified as a differentially expressed protein using the NEPHGE-, but not the IPG-based 2DE method. Overexpression of Sis1p was confirmed by immunoblot analysis. Nevertheless, the narrow range pH 4-7 IPG (Invitrogen) technique is a better method for the analysis of acidic proteins. Considering all the results derived from the tested techniques, the most efficient approach seems to be the use of large format NEPHGE gels for broad range pH 3-10 analysis, whereas in the acidic range the analysis could be complemented by narrow range IPG mini-gels (Invitrogen).
Methods
Plasmids, yeast strain, media and growth
Three plasmids were used in this study: pFGG3-MeH (for inducible expression of the MeH protein, which causes UPR-Cyto stress in yeast), pFGG3 (empty control vector) and pFGG3-MeN (an additional control for inducible expression of the MeN protein, which does not cause a stress response in yeast). The generation of these DNA constructs was described previously (see [17] for pFGG3-MeH and [26] for pFGG3 and pFGG3-MeN).
The plasmids were used for the transformation of the S. cerevisiae strain AH22 (MATa leu2-3 leu2-112 his4-519 can1 [KIL-o]) as described previously [27]. Yeast culture media, growth and induction of S. cerevisiae transformants were exactly the same as reported in the earlier study [17]. After induction of viral protein expression, transformed cells were harvested by centrifugation and stored at −70°C.
Design of the study
The aim of this study was to directly compare the first dimension IPG and NEPHGE techniques in the two-dimensional gel electrophoresis (2DE) method and to evaluate their impact on the results of a biological experiment. Commercially available systems, the "ZOOM IPGRunner" from Invitrogen and the "WitaVision g1D" from WITA GmbH (Teltow, Germany), were chosen for IPG and NEPHGE, respectively. The same platform, including the pI range (pH 3-10 gradient) and the gel length (7 cm mini-gels), was used for both methods.
The study was designed according to two main tasks: (i) to evaluate the experimental variation of both 2DE techniques by running the same samples several times; (ii) to assess the biological variation in protein expression during UPR-Cyto stress in yeast cells using both 2DE methods, and thereby compare their efficiency in a concrete biological experiment. The biological material was essentially the same as reported previously [17], except that here we used only measles virus proteins for expression in yeast (i.e. mumps virus proteins were not used). Briefly, UPR-Cyto stress was induced by the expression of the MeH protein, and the pattern of cellular proteins resolved by 2DE was compared to the protein pattern from control cells transformed with the empty expression vector pFGG3. In addition, yeast cells expressing the MeN protein, which does not induce cellular stress, were used as an internal control in this study. Experimental variation in both 2DE methods was evaluated using the same samples from the three experimental variants (MeH expressing, MeN expressing or control cells). This analysis was duplicated by loading 1x and 2x amounts of the protein samples (standard and high load conditions, respectively). 2DE of the same samples (from one independent experiment) was repeated three times, and various parameters were calculated for both methods, as shown in Tables 1 and 2. Biological variation in cellular protein expression was assessed by performing independent experiments (transformation of yeast cells with vectors, growing the yeast cells, induction of viral protein expression, preparation of whole cell lysates and 2DE with subsequent gel staining and image analysis) at standard 1x protein load. Fold changes for differentially expressed proteins were calculated from at least three independent experiments at standard conditions, and the results are given in Table 3. In addition, fold changes of the same protein spots were calculated from three replicas of one independent experiment at high protein load, and these values were also included in Table 3 for comparison.
In principle, all operations in both 2DE methods were performed in parallel, except for the first dimension electrophoresis (IEF, isoelectric focusing). The same samples were applied onto IPG strips and NEPHGE gels. After IEF in the different system equipment, the second dimension electrophoresis (SDS-PAGE) was performed in parallel for the corresponding IPG and NEPHGE samples (i.e. the SDS-PAA gels were cast and run simultaneously). All 2D gel staining and image analysis procedures were also identical for the comparable IPG- and NEPHGE-based 2DE samples. Therefore, the results should be influenced only by the differences in the first dimension techniques, and this enables their direct comparison.
Preparation of yeast lysates for 2DE
10-20 mg of cell pellet was collected into a 1.5 ml microcentrifuge tube by centrifugation, washed with distilled water and stored frozen at −70°C. After storage, cells were quickly thawed and resuspended in 10 volumes (vol/wt) of denaturing IEF buffer containing 7 M urea, 2 M thiourea, 2% CHAPS detergent, 2% ampholytes (pH 3-10, GE Healthcare), 0.002% Bromophenol Blue and 75 mM DTT (added just before use). Note that this IEF buffer composition was suitable for both methods according to the manufacturers' (Invitrogen and WITA, respectively) recommendations. An equal volume of glass beads was added, and the cells were lysed by vortexing at high speed, 8 times for 30 s, with cooling on ice for 10 s followed by keeping the sample for 30 s at room temperature between each round of vortexing. Cell debris was then removed by centrifugation at 16,000 × g for 15 min at 16°C. The supernatants (whole cell lysates) were applied onto 7 cm length IPG strips or onto 7 cm NEPHGE first dimension gels. Protein concentrations were determined by the Roti-Nanoquant protein assay (Carl Roth GmbH), which is a modification of Bradford's protein assay. Additionally, protein concentrations in the supernatants were checked by SDS-PAGE followed by staining with Coomassie Brilliant Blue R-250 and evaluation of the total protein amount in the 1D gel lanes using the ImageQuant TL 1D gel analysis software (GE Healthcare). Samples were diluted with IEF buffer if necessary, and equal protein concentrations were used for two-dimensional gel electrophoresis.
For the high load 2DE experiments, more concentrated samples were prepared by using a smaller volume of denaturing IEF buffer. Cells were resuspended in 5 volumes (vol/wt) of denaturing IEF buffer, and the further preparation procedure was the same as described above.
Running the first dimension
For the comparative analysis, the same samples from MeH expressing, MeN expressing or control cells were run by both the IPG and NEPHGE methods. The first dimension separation of proteins was performed according to the manufacturers' (Invitrogen and WITA, respectively) recommendations. Briefly, IPG strips (ZOOM strips pH 3-10NL, Invitrogen) were used for IEF in the Invitrogen ZOOM IPGRunner system. 50 or 100 μg of protein from whole cell lysate was diluted to 155 μl with IEF buffer and applied onto an IPG strip, followed by rehydration overnight. The next day, the ZOOM IPGRunner Mini-Cell was assembled and IEF was performed using the "PowerEase 500 Power Supply" (Invitrogen) with the following running conditions: 200 V for 20 min; 350 V for 10 min; 500 V for 4 h. Finally, a higher voltage step at 2000 V was performed as recommended by the manufacturer (for 2 h, using the power supply "Consort EV233"). Focused IPG strips were stored in a sealed container at −70°C. Before the second dimension, the strips were incubated in equilibration buffer (50 mM Tris-HCl pH 8.8, 2% SDS, 6 M urea, 30% glycerol, 0.002% Bromophenol Blue) containing, in turn, reducing (75 mM DTT) and then alkylating (125 mM 2-iodoacetamide) agents (15 min treatment with each). The equilibrated strips were applied onto SDS-polyacrylamide gels, and SDS-PAGE was run as the second dimension.
NEPHGE was performed with a non-linear pH 3-10 gradient formed by carrier ampholytes. The mixture of carrier ampholytes and the IEF gel solution composition were made according to Klose and Kobalz, 1995 [12]. Briefly, ampholytes of pH 5-6.5 were at the highest concentration, followed by ampholytes of pH 4-5 and pH 6.5-8, and then by further expansion of the pH gradient. Accordingly, this gives a wider separation zone at pH 5-6.5, followed by pH 4-5 and pH 6.5-8. This is similar to the pH 3-10NL IPG strips used; however, a small shift may be observed.
The first dimension NEPHGE was performed according to the protocol of the manufacturer, using a set of standardized materials (from WITA GmbH). Briefly, two gel solutions were cast in succession in a vertical device for the preparation of the two-layered rod gels of the first dimension (quantities sufficient for a total of eight rod gels): 1.5 ml of separation gel solution plus 36 μl of 0.8% ammonium persulfate (APS) was prepared for polymerization of the first gel layer, and 600 μl of cap gel solution (WitaVision) was mixed with 15 μl of 0.8% APS for formation of the second gel layer of the rod gels (all solutions were degassed by sonication). For complete polymerization, the first dimension gels were held at room temperature for 30 min and then kept in a damp chamber for an additional 72 h. The first dimension separation of proteins in the rod gels was performed in a vertical electrophoresis device according to the operating instructions of the manufacturer (WitaVision). Briefly, the lower chamber of the device was filled with 400 ml of degassed cathode buffer (prepared on a 40°C heating plate, containing 20 g of glycine and 216 g of urea in 200 ml of distilled water, filled up to 380 ml, with the addition of 20 ml of ethylenediamine). Following fixation of the rod gels in the device, the sample solutions containing 30 or 100 μg of protein from whole cell lysate in agarose-supplemented ampholyte phosphate buffer were applied to the anodic sides of the capillary gels, and the remaining volumes of the capillary glass tubes were then covered with a sample stabilizing overlay solution (WitaVision). Subsequently, 400 ml of degassed anode buffer (a solution of 72 g of urea in 250 ml of distilled water, filled up to 380 ml, with the addition of 20 ml of 85% phosphoric acid) was applied to the upper chamber of the device, and the electrophoretic separation of the first dimension was started using the following sequence of programmed running conditions: 100 V for 1 h 15 min; 200 V for 1 h 15 min; 400 V for 1 h 15 min; 600 V for 1 h 15 min; 800 V for 10 min; 1000 V for 5 min. After the termination of electrophoresis, the rod gels were carefully pushed out of the glass tubes onto plastic rails, and adaptation to the conditions of the second dimension was achieved by a series of three 15-min equilibrations in the corresponding equilibration buffer containing 75 mM DTT, followed by equilibration in the same buffer with 125 mM 2-iodoacetamide. The equilibrated rod gels of the first dimension were stored at −70°C before application to the second dimension of the 2DE system.
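For orientation, the two first dimension programs can be compared by their accumulated volt-hours; the small sketch below simply multiplies out the step voltages and durations quoted above.

```python
# Accumulated volt-hours of the two first-dimension focusing programs,
# computed from the step voltages and durations stated in the text.
ipg_steps = [(200, 20 / 60), (350, 10 / 60), (500, 4), (2000, 2)]        # (V, hours)
nephge_steps = [(100, 75 / 60), (200, 75 / 60), (400, 75 / 60),
                (600, 75 / 60), (800, 10 / 60), (1000, 5 / 60)]

for name, steps in (("IPG", ipg_steps), ("NEPHGE", nephge_steps)):
    volt_hours = sum(v * h for v, h in steps)
    print(f"{name}: ~{volt_hours:.0f} Vh")
```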
Running the 2nd dimension (SDS-PAGE), fixing and staining of 2D gels
For separation in the second dimension of 2DE, standard SDS-PAGE was performed with 11% (w/v) polyacrylamide gels using Minigel-Twin units (Biometra). Briefly, the IPG strips and first dimension rod gels were gently transferred from the equilibration and storage rails onto the top of the stacking gel zones and covered with 0.5% (w/v) agarose to fix the rod gels. The electrophoresis running conditions for the second dimension separation were set as follows: 15 mA per gel (~100 V) for ~15 min (until the dye reached the resolving gel); then 30 mA per gel (voltage gradually rising up to a 200 V limit) for about 1 h, until the bromophenol blue front reached the bottom of the gel.
After the 2DE protein separation was complete, the gels were fixed in fixation solution (50% ethanol, 40% HPLC grade water, 10% acetic acid) for at least 1 h under gentle agitation at room temperature (RT) and stained with Coomassie Brilliant Blue R-250 (50% ethanol, 10% acetic acid, 0.1% Coomassie BB R-250, 40% HPLC grade water) overnight under gentle agitation at RT. The next day, the gels were destained in destaining solution (5% ethanol, 12.5% acetic acid in HPLC grade water) for 1 h under gentle agitation at RT, followed by further destaining with storage solution (7% acetic acid in HPLC grade water) for 4 h (with at least two exchanges of the solution) at RT. The 2D gels were then scanned with an ImageScanner III (GE Healthcare) and stored sealed in plastic pouches at 4°C.
Analysis of 2D gel images
All 2D gels in this experiment were scanned with a calibrated ImageScanner III (GE Healthcare) under the same settings: blank filter, transparent mode and 300 dpi resolution. Gels that were run in parallel were scanned simultaneously (usually six gels at once: three from IPG-based and three from NEPHGE-based 2DE; an original scanned image of one of the replicas is shown in Figure 2). The image was then resolved into separate 2D gel images, and these were imported into the 2D gel analysis software. 2D gel images were analysed using the ImageMaster 2D Platinum 7.0 software (GE Healthcare). Protein spots were detected automatically by setting the same parameters (smooth, saliency and min area) for all analysed 2D gels. Artefact spots (mostly near the boundaries of the gels) were deleted manually in every 2D gel with detected spots. The gels were then matched in separate small groups of three gels (e.g. the IPG 3-10 analysis of the Control, MeH and MeN variants), followed by matches between the groups according to the required comparison. 2D gel images or match sets were grouped into classes according to the task of the analysis (e.g. the analysis of experimental variation between replicas of the same samples). Various comparisons and calculations of parameters were performed as indicated in the legends of Tables 1, 2 and 3. All 2D gels were divided into acidic and basic parts according to the positions of known cellular proteins with near-neutral pIs. The line of neutral pI 7.0 was applied to all gels at the same position of the protein 2D pattern, as shown in Figures 1 and 2. The acidic or basic gel parts were then selected, and the required calculations for acidic and basic protein spots were performed as indicated in Tables 1 and 2. Differentially expressed cellular proteins during UPR-Cyto stress were evaluated by calculating the "fold change", i.e. the ratio of %Vol between the corresponding spots of MeH expressing and control cells. The fold changes given in Table 3 represent data from three independent experiments (values are averages ± SD). Differentially expressed spots were also analysed in the internal control samples from cells expressing the MeN protein. The expression level of the differentially expressed protein spots indicated by arrows in Figures 1 and 2 was similar in both control and MeN expressing cells (data not shown).
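The %Vol and fold-change calculations described above amount to the following simple arithmetic. The spot volumes in this sketch are invented; in practice they come from the image analysis software.

```python
# Sketch of the fold-change calculation: %Vol is a spot volume normalized to
# the total volume of all spots in its gel, and the fold change is the ratio
# of %Vol in MeH-expressing versus control gels, averaged over independent
# experiments. All volumes below are invented for illustration.
import statistics

def pct_vol(spot_vol, all_vols):
    return 100 * spot_vol / sum(all_vols)

experiments = [  # ((spot vol, all vols) in MeH gel, (spot vol, all vols) in control gel)
    ((12.0, [12.0, 40.0, 48.0]), (4.0, [4.0, 44.0, 52.0])),
    ((10.5, [10.5, 42.0, 47.5]), (3.8, [3.8, 45.0, 51.2])),
    ((13.2, [13.2, 39.0, 47.8]), (4.5, [4.5, 43.0, 52.5])),
]

fold_changes = [pct_vol(*meh) / pct_vol(*ctrl) for meh, ctrl in experiments]
print(f"fold change = {statistics.mean(fold_changes):.1f} "
      f"± {statistics.stdev(fold_changes):.1f}")
```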
Protein identification
Protein identification was carried out at the Proteomics Center of the Institute of Biochemistry (Vilnius, Lithuania) by means of tryptic digestion and mass fingerprinting. Tryptic digestion was performed according to an earlier described procedure [28]. Briefly, protein spots were excised from the gel and cut into 1 × 1 mm pieces. The gel pieces were destained with 200 μl of 25 mM ammonium bicarbonate in 50% acetonitrile (ACN), dehydrated with ACN and incubated with 40 μl of 10 ng/μl trypsin solution in 25 mM ammonium bicarbonate overnight at 37°C. The next day, peptides were extracted with 2 × 100 μl of 5% trifluoroacetic acid (TFA), lyophilized and dissolved in 3 μl of 0.1% TFA in 50% ACN. Samples were applied to a 384-well MALDI plate: 0.5 μl of sample was overlaid with 0.5 μl of matrix (alpha-cyano-4-hydroxycinnamic acid, 4 mg/ml in 50% ACN with 0.1% TFA).
Proteins were identified by matrix-assisted laser desorption/ionization (MALDI) mass spectrometry using a 4800 MALDI TOF/TOF mass spectrometer (AB/Sciex). Peptide mass spectra were acquired in reflector positive ion mode in the m/z range 800-4000 Da; 400 laser shots were summed for each sample, with a mass accuracy of ±50 ppm. MS/MS spectra for the dominant peptides were acquired in positive mode; the ion collision energy was set to 1 keV, and 500 laser shots were accumulated for each spectrum with a mass accuracy of ±0.1 Da. Proteins were identified in the TrEMBL database (3-23-10 release) using the Mascot algorithm. A summary of the protein identification data is provided in Additional file 1.
"year": 2013,
"sha1": "9eaf109674a5c096370a4603bdf62c43944d9010",
"oa_license": "CCBY",
"oa_url": "https://proteomesci.biomedcentral.com/track/pdf/10.1186/1477-5956-11-36",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9eaf109674a5c096370a4603bdf62c43944d9010",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
Simulation-based evaluation of SAR and flip angle homogeneity for five transmit head arrays at 14 T
Introduction: Various research sites are pursuing 14 T MRI systems. However, both local SAR and RF transmit field inhomogeneity will increase. The aim of this simulation study is to investigate the trade-offs between peak local SAR and flip angle uniformity for five transmit coil array designs at 14 T in comparison to 7 T.
Methods: The investigated coil array designs are: 8 dipole antennas (8D), 16 dipole antennas (16D), 8 loop coils (8L), 16 loop coils (16L), 8 dipoles/8 loop coils (8D8L) and, for reference, 8 dipoles at 7 T. Both RF shimming and kT-points were investigated by plotting L-curves of peak SAR levels versus flip angle homogeneity.
Results: For RF shimming, the 16L array performs best. For kT-points, superior flip angle homogeneity is achieved at the expense of more power deposition, and the dipole arrays outperform the loop coil arrays.
Discussion and conclusion: For most arrays and regular imaging, the constraint on head SAR is reached before constraints on peak local SAR are violated. Furthermore, the different drive vectors in kT-points alleviate strong peaks in local SAR. Flip angle inhomogeneity can be alleviated by kT-points at the expense of larger power deposition. For kT-points, the dipole arrays seem to outperform loop coil arrays.
Supplementary Information: The online version contains supplementary material available at 10.1007/s10334-023-01067-1.
Introduction
Increasing the B0 field strength remains a focal point of interest within MRI research due to expected gains in SNR, CNR, and spectral dispersion. In the last 5 years, the first imaging results of 10.5 T and 11.7 T human systems have been published, and multiple other research sites are currently bringing 11.7 T MRI systems into operation [1-5]. More recently, various research sites across the world are planning to install 14 T MRI systems [6] with improved resolution of fMRI and spectroscopic imaging (MRSI) to provide a better understanding of human brain function.
Previous work at ultra-high field strengths has uncovered the challenges that come with increasing B0 field strength. Due to the shortening RF wavelength, both local SAR and transmit field inhomogeneity increase with field strength and have a negative impact on image quality and scan efficiency [7,8]. At 7 T-9.4 T, acquisition of uniform image contrasts throughout the brain is feasible with improved SNR compared to 3-4 T [9,10]. However, as soon as larger objects such as the body are studied at these field strengths, efficient acquisition of uniform images becomes very challenging [8]. A large amount of research over the past years has shown that improvements in RF coil design and parallel transmission can significantly alleviate the problems with image contrast uniformity and local SAR at 7 T-10.5 T [11-21].
When further increasing the B0 field strength to 14 T, it becomes questionable whether the short RF wavelength will still allow for efficient imaging of the brain. At 14 T, the RF wavelength in white matter is ~6 cm, which is significantly smaller than the dimensions of the head and which will likely cause destructive interference and standing wave patterns in the brain. To determine the feasibility of human brain imaging at 14 T, the purpose of this study is two-fold: we want to (1) investigate trade-offs between local SAR and B1+ uniformity in the human brain at 14 T for various RF shimming and parallel transmit approaches as compared to 7 T, and (2) investigate the impact of coil design on local SAR and flip angle uniformity at 14 T. Although no 14 T MRI system yet exists for human applications, the expected imaging performance and short-wavelength penalties can be well investigated using numerical simulations. As an example, the flip angle homogeneity and SAR levels for various body imaging targets of the 10.5 T system currently operational at CMRR, Minneapolis, have been investigated beforehand [22]. This study thoroughly investigated the flip angle homogeneity and peak SAR levels that were to be expected once the system would be available for human scanning. The authors investigated the performance for RF shimming and parallel transmit (spokes pulses) and used L-curves to depict the trade-off between flip angle homogeneity and peak local SAR (further referred to as peak SAR).
Similarly, another simulation study has been performed to compare SNR, SAR and flip angle uniformity in the brain at 1.5-14 T [23]. This study thoroughly demonstrated the expected gains in signal-to-noise ratio at increasing field strengths for realistic pulse sequences and demonstrated that it was feasible to achieve uniform flip angle distributions in the brain at 14 T. However, the expected penalties in terms of SAR and B1+ inhomogeneity represent an upper limit, as this study only included a single coil array design, which was not optimized for the task at hand. Also, it did not study the trade-off between B1+ field uniformity and peak SAR. Depending on the RF pulse design settings, users can choose either better uniformity or lower SAR levels. On top of this, the use of parallel transmission pulse designs (e.g. kT-points) allows for further optimization of flip angle homogeneity, which was only sparsely investigated (i.e. without evaluating peak SAR for these pulses). The aim of the current study is to investigate the trade-offs between peak SAR, B1+ and flip angle uniformity at 14 T in comparison to 7 T using numerical simulations. As the findings will depend heavily on the chosen coil array design, the investigation is performed for a range of potential coil arrays for brain imaging at 14 T. L-curves will show for each array the trade-off between peak SAR and flip angle homogeneity using either RF shimming or a 5-point kT-points pulse. Next to the expected RF transmit performance at 14 T, the study also provides initial directions in RF coil array design for 14 T brain imaging and provides a more complete picture of local SAR behavior at 14 T compared to 7 T.

Five transmit coil array designs were simulated at 14 T. An array of 8 fractionated dipoles was also simulated at 298 MHz (7 T) as reference. Figure 1 shows a visualization of the proposed coil designs. These simulations were performed on the brain of the human model Duke of the Virtual Family [24]. All antennas were placed on a ring with a diameter of at least 300 mm. In case of adjacent overlapping elements, one of the elements was placed 5 mm further away from isocenter. For 14 T, all arrays were kept within a total feet-head length between 16 cm (8L) and 20 cm (all other arrays), whereas at 7 T an optimal dipole length of 30 cm was used [16].
EM simulations setup
The dipole antennas in configurations I, II and V had a length of 20 cm, according to optimal results from a previous simulation study at 14 T [25]. The dipole antennas were matched with a parallel capacitor (2.45 pF) and two parallel inductors (14 nH). The dipole antennas at 7 T had an optimal length of 30 cm [16] and were matched with a parallel inductor of 40 nH and two series inductors of 25 nH.
The large loop coils of the 8L and 8D8L arrays had a height of 16 cm and a width of 14 cm (optimal for low SAR at 7 T [26]) and were tuned with 1.19 pF capacitors (12 per loop). The small loops had a height of 87.5 mm and a width of 14 cm. Loops were separated in the feet-head direction by a gap of 5 mm (total array length 20 cm). The small loops were tuned with 12 × 1.85 pF capacitors. To match the loops, the port reference impedance was set equal to the real part of the coil impedance. All loops were overlapped by 1 cm, which resulted in minimum nearest-neighbor coupling values for an array diameter of 300 mm at 14 T.
All the coils in this work include a 62 cm diameter RF shield (gradient shield) and are placed on a 30 cm diameter ring.
A grid size of 1-1.5 mm³ was used to voxelize the antennas, lumped elements and ports. The head and shoulders were surrounded by a bounding box which was voxelized at an isotropic grid size of 2.5 mm³ to ensure that all body parts where high local SAR could occur were properly included in the model. The total grid size was between 15 and 31 million cells depending on the array complexity. All simulations were performed on a GPU (Nvidia Titan RTX, Nvidia, Santa Clara, USA) and took up to 1 h per port.
Electric and magnetic fields were exported from Sim4Life into Matlab (Mathworks, Natick, USA). After calculating Q-matrices in Matlab, a custom 10 g averaging script [27] was used to calculate 10 g-averaged Q-matrices, which were then compressed into virtual observation points (VOPs) [28], which allow fast calculation of peak SAR. The compression caused a SAR overestimation of 2.5% for all investigated simulations.
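To make the VOP step concrete, the following minimal NumPy sketch shows how peak 10 g-averaged SAR is typically evaluated from a set of VOPs for a given drive vector: for each VOP Q the local SAR is x^H Q x, and the peak is the maximum over all VOPs. The array shapes and the toy VOPs are illustrative assumptions, not the study's actual data.

```python
import numpy as np

def peak_sar(vops, x):
    """Peak 10 g-averaged SAR for drive vector x (n_ch,) given
    VOPs of shape (n_vops, n_ch, n_ch); each VOP is Hermitian PSD."""
    # evaluate Re(x^H Q_k x) for every VOP k at once
    sar = np.real(np.einsum('i,kij,j->k', x.conj(), vops, x))
    return sar.max()

# toy example: 8 channels, 50 random Hermitian PSD "VOPs"
rng = np.random.default_rng(0)
n_ch, n_vop = 8, 50
a = rng.standard_normal((n_vop, n_ch, n_ch)) + 1j * rng.standard_normal((n_vop, n_ch, n_ch))
vops = np.einsum('kij,klj->kil', a, a.conj()) / n_ch   # A A^H is Hermitian PSD
x = np.exp(1j * rng.uniform(0, 2 * np.pi, n_ch))       # phase-only shim
print(peak_sar(vops, x))
```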
Numerical analysis of EM simulations
The simulated B1+ distributions and VOPs were used to find the optimal performance of each coil design. For this, either amplitude/phase shimming or the kT-points method [29] was used to design optimal non-selective pulses. Both methods were optimized and evaluated on the same 3D brain mask shown in Fig. 2.
RF shimming
For RF shimming, we chose to optimize the following cost function, which is a trade-off between flip angle uniformity (normalized root-mean-square error, NRMSE) and forward power; in the regularized magnitude-least-squares form used here it reads

min_x ‖ |A x| − θ_t ‖₂ / (√N θ_t) + λ ‖x‖₂²,

where x is the complex vector of shim coefficients, A maps the shim coefficients to the flip angle in the N voxels of the brain mask, and θ_t is the target flip angle.
The regularization parameter λ was varied between zero and 1/10 with 25 equidistant increments; this range was found empirically to reflect the best L-curve.
This minimization problem was solved by Conjugate Gradients [30] using random uniform shim coefficients as initial solution. To prevent the convergence to a single local minimum we repeated this for 25 random initial solutions.
With these solutions we numerically calculated the peak SAR and NRMSE, which form an L-curved shape when plotted on a 2D grid. From this, the optimal shim coefficients were visually selected near the strongest curvature, since the L-curves suffered from irregularities that complicated automatic selection.
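A minimal sketch of this L-curve procedure is shown below on a random toy system matrix. It uses SciPy's conjugate-gradient minimizer in place of the authors' own Conjugate Gradients implementation, and the objective is the regularized magnitude-least-squares form given above; all sizes and names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def objective(xr, A, theta_t, lam):
    # xr stacks the real and imaginary parts of the complex shim vector
    n = len(xr) // 2
    x = xr[:n] + 1j * xr[n:]
    fa = np.abs(A @ x)                                    # flip angle magnitude per voxel
    nrmse = np.linalg.norm(fa - theta_t) / (np.sqrt(len(fa)) * theta_t)
    return nrmse + lam * np.real(x.conj() @ x)            # uniformity + forward power

rng = np.random.default_rng(1)
n_vox, n_ch, theta_t = 500, 8, 40.0                       # toy sizes, 40 deg target
A = rng.standard_normal((n_vox, n_ch)) + 1j * rng.standard_normal((n_vox, n_ch))
l_curve = []
for lam in np.linspace(0.0, 0.1, 25):                     # lambda from 0 to 1/10, 25 steps
    runs = [minimize(objective, rng.standard_normal(2 * n_ch),
                     args=(A, theta_t, lam), method='CG') for _ in range(25)]
    best = min(runs, key=lambda r: r.fun)                 # keep best of 25 random starts
    l_curve.append(best)    # pair each solution with its peak SAR to plot the L-curve
```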
In addition to the optimized shim coefficients we also simulated 10 000 random shim settings. These were chosen from a random uniform distribution with an amplitude ranging between 0 and 4, and a phase variation between 0 and 2π. These were used to gain insight into the general response in SAR between 7 and 14 T.

Fig. 2: Three slices of the brain mask used for optimization. The region covered by the white area is the used mask. To aid in orientation, the gray matter of the brain is imaged by the light gray area inside this mask. The red line indicates the middle slice that is used for visualization.
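A sketch of how such a random-shim survey can be generated is shown below; combined with the `peak_sar` helper sketched earlier, it yields the per-shim peak SAR distribution summarized in Fig. 6. Shapes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ch, n_shims = 8, 10_000
amp = rng.uniform(0.0, 4.0, (n_shims, n_ch))           # amplitudes in [0, 4]
phs = rng.uniform(0.0, 2.0 * np.pi, (n_shims, n_ch))   # phases in [0, 2*pi)
X = amp * np.exp(1j * phs)                             # one drive vector per row
# with VOPs of shape (n_vops, n_ch, n_ch), the per-shim peak SAR is
#   sar = np.real(np.einsum('si,kij,sj->sk', X.conj(), vops, X)).max(axis=1)
# and a histogram of `sar` gives the SAR distribution per coil design
```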
kT-points
Non-selective kT-points pulses, consisting of multiple block subpulses with gradient blips in between, were optimized using the interleaved greedy and local algorithm [31]. The optimization of the complex shims for each subpulse and their kT-point locations targeted a flip angle of 40°, with an initial phase distribution equal to the phase of the summed B1 distributions. The first set of minimization problems, with an increasing number of kT-points subpulses, was regularized with forward power, where the regularization parameter had an initial value of 20 and was adjusted during the optimization process.
In subsequent optimizations, we optimized kT-points pulses containing a fixed number of 5 subpulses, with forward power as the regularization term. In this setting we varied the value of the regularization parameter in 20 exponential steps, to also demonstrate the trade-off between flip angle homogeneity and peak SAR.
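The quantity being optimized can be sketched with the standard small-tip-angle model: each block subpulse contributes its shimmed B1+ map multiplied by a spatial phase ramp set by the corresponding kT-point location. This is a simplified illustration with assumed variable names, not the interleaved greedy/local algorithm of [31].

```python
import numpy as np

def kt_flip_angle(b1, pos, kpts, shims, scale=1.0):
    """Small-tip flip angle map of a kT-points pulse.
    b1:    (n_vox, n_ch) complex transmit sensitivities
    pos:   (n_vox, 3) voxel positions
    kpts:  (n_pts, 3) kT-point locations in excitation k-space
    shims: (n_pts, n_ch) complex shim vector per subpulse."""
    fa = np.zeros(b1.shape[0], dtype=complex)
    for k, w in zip(kpts, shims):
        fa += (b1 @ w) * np.exp(1j * (pos @ k))   # shimmed B1 with a phase ramp
    return np.abs(scale * fa)

# sweeping the forward-power regularization in 20 exponential steps, e.g.
#   for lam in 20.0 * np.logspace(-2, 2, 20): ...
# traces the homogeneity-vs-SAR trade-off shown in Fig. 8
```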
Normalization flip angle
To compare the results of mere RF shimming versus kT-points, we used a linear scaling factor such that an average flip angle of 40° was reached inside the mask, and a pulse length of 0.76 ms was used. This pulse length corresponds to the pulse length from the kT-points simulations. Choosing these settings corresponds to an average B1+ level of 3.4 µT.

Fig. 4: Result from the L-curve method when optimizing RF shim coefficients. The optimal regularization parameter is found at the point of strongest curvature, denoted by the black star. The corresponding RF shim is used for final evaluation.
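These normalization settings are mutually consistent: with the proton gyromagnetic ratio, a 0.76 ms block pulse at 3.4 µT gives roughly the targeted 40° flip angle, as the short check below shows.

```python
gamma_bar = 42.58e6            # proton gyromagnetic ratio / (2*pi), in Hz/T
b1, tau = 3.4e-6, 0.76e-3      # average B1+ in tesla, pulse length in seconds
flip_deg = 360.0 * gamma_bar * b1 * tau
print(f"flip angle ~ {flip_deg:.1f} deg")   # ~39.6 deg, close to the 40 deg target
```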
Normalization to head SAR
We used the power deposition matrix to normalize the SAR distributions to 3.2 W/kg head SAR. This matrix is calculated by the following equation:

P_ij = ½ ∫ σ(r) Ē_i(r) · E_j(r) dr,

where σ is the conductivity of the tissue, E_i is the three-dimensional electric field of coil i, and the complex conjugate is denoted with a bar above the variable. All variables inside the integral are position dependent. The quantity x̄ᵀ P x was used to calculate the deposited power for a specific drive vector x.
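A direct NumPy transcription of this normalization, with assumed array shapes, could look as follows.

```python
import numpy as np

def power_matrix(E, sigma, dV):
    """P_ij = 0.5 * integral of sigma * conj(E_i) . E_j over the body.
    E: (n_ch, n_vox, 3) complex fields, sigma: (n_vox,) conductivity, dV: voxel volume."""
    return 0.5 * dV * np.einsum('ivc,v,jvc->ij', E.conj(), sigma, E)

def scale_to_head_sar(x, P, head_mass_kg, target=3.2):
    """Rescale drive vector x so that deposited power per head mass is 3.2 W/kg."""
    head_sar = np.real(x.conj() @ P @ x) / head_mass_kg
    return x * np.sqrt(target / head_sar)
```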
Results
The different coil designs were compared using the results of optimal RF shim drive vectors and the k T -points pulses. First the results from the RF shim solution are presented, followed by the results of the k T -points method. Furthermore, the scattering matrices are displayed in Fig. 3.
RF shimming
The L-curves of the optimization for RF shim coefficients are shown in Fig. 4 for each coil array. The peak SAR and NRMSE values are shown for all 25 solutions. Based on this, the 16L array shows the best performance. However, in the vicinity of its optimal solution we find that all other coil arrays are able to show similar performance. A visual impression of the SAR distribution for the selected optimal RF shims is given in Fig. 5, and a numerical comparison of the transmit performance for each coil design is given in Table 1. Here the minimization problem was regularized with the forward power and the coefficient of variation is evaluated over the brain mask. Table 1 shows that the peak SAR values for a 40° flip angle at 14 T are approximately a factor of 1.5-2 higher than at 7 T, while minor differences are observed between the proposed coil designs. The level of homogeneity reached for all coil designs is also relatively similar.

Fig. 5: Using the optimal RF shim coefficients we evaluated the local SAR distributions, normalized to a head SAR of 3.2 W/kg.

Fig. 8: Result from the L-curve method when optimizing for 5 kT-points. Although the different solutions do not follow a smooth curve, the overall performance demonstrates an advantage for the dipole coil array designs compared to the loop coil array designs. The chosen optimal setting for each coil is denoted with a black star.
Normalizing with respect to head SAR shows that the proposed designs at 14 T have a 20-40% increase in peak SAR compared to the 7 T array. We do observe that with this normalization all arrays are below the peak local SAR constraint of 10 W/kg (normal operating mode) using a 100% duty cycle with an average B1+ of 3.4 µT. In terms of maximum allowed duty cycle and head SAR levels, we see that the 8D8L and 8L arrays have a reduced performance compared to the other designs.
The distribution of SAR for the 10 000 randomly generated shims is presented in Fig. 6. These show how the SAR at 14 T relates to 7 T, where we see an average increase of a factor of 1-2. The shape of the histogram shows that at 14 T the distribution has a longer tail towards the higher SAR values. Further, the difference in mean value is minimal for each coil design where we see a range of 15-20 W/kg.
kT-points
The choice of the number of kT-points is derived from the results shown in Fig. 7. Here the coefficient of variation of the flip angle map is shown over the whole brain for each solution per coil design using a range of kT-points. This figure shows that pulses with more than 5 kT-points do not offer much improvement in the homogeneity of the flip angles.
In Fig. 8, we demonstrate the trade-off between the peak SAR and flip angle homogeneity. Although the resulting 'curves' suffer from discontinuities, they exhibit a particular order. Apart from the 7 T curve that is used for reference, we observe that coil arrays with 15 or 16 channels outperform all coil arrays with 8 channels, and that the dipole arrays show an improved performance compared to the loop coil arrays. This order of performance is also in line with the homogeneity found over the range of kT-points shown in Fig. 7.
The different SAR distributions of the individual kT-points subpulses are displayed in Fig. 9. Notice how the peak SAR location changes across the 5 different subpulses, which reduces high peak SAR values in the time-averaged SAR distribution.
A numerical comparison between the proposed coil designs is given in Table 2. Notably, the kT-points pulse shows a strong improvement in the homogeneity of the flip angle compared to the optimal RF shim drive vectors. However, when using kT-points pulses, more power is required, which is reflected by the increase in peak and head SAR levels.
Further, we observe a stronger variance across the proposed coil designs. For example, the 8D8L and 15D arrays show some of the best flip angle homogeneity, whereas the latter also has the lowest head SAR. On the other hand, the 8D has a relatively low peak SAR level and a beneficial duty cycle compared to the other proposed coil designs.

Fig. 9: The SAR distribution of each individual point for the optimal 5-point kT pulse, normalized to 3.2 W/kg. The red star in each image shows the location of the peak SAR; notice how this changes location for each consecutive point.
Discussion
This work provides an exploratory view on the trade-off between SAR and flip angle uniformity for various coil designs at 14 T and a reference coil at 7 T. As expected, flip angle inhomogeneity, peak local SAR and head SAR all increase when moving from 7 to 14 T. Other findings, however, are less obvious. Figures 4 and 8 show the trade-off between SAR and NRMSE for RF shimming and kT-points drive vectors. In both cases, it is clear that at 7 T, both lower NRMSE and peak SAR values can be achieved. When RF shimming is used, the best performing 14 T array for lowest NRMSE is the 16L array; for RF shimming, differences between the various coil arrays are small.
When using kT-points, the 15D array performs best, closely followed by the 8D8L array. Arrays with 15/16 transmit channels outperform the arrays with 8 channels, and dipoles outperform loop coils. The latter result is in line with work at 7 T by van Leeuwen et al. [26], who demonstrated that dipoles reached lower peak SAR values than loop coils in unshielded head arrays. However, this pattern is less clear for RF shimming. The reason for this discrepancy is unclear. Possibly, because the dipole antennas have a larger penetration depth than the loop coils, they suffer from higher SAR levels because of stronger constructive interference of the electric fields. On the other hand, looking at Fig. 6, the 16L and 15D arrays have very much the same SAR histograms, which contradicts this explanation.
Furthermore, the usage of various different drive vectors in kT-points averages out strong peaks in local SAR (Fig. 9), even though local SAR is not used as a regularization term when calculating the kT-points pulse. This finding highlights the impact of parallel transmission for 14 T brain imaging: using kT-points it becomes feasible to achieve uniform flip angles in the brain, while simultaneously alleviating local SAR hotspots. However, the use of kT-points also reduces overall power efficiency compared to normal RF shimming; therefore a lower duty cycle is allowed when using kT-points as compared to RF shimming.
Considering arbitrary shim settings, even when normalizing to a head SAR of 3.2 W/kg, peak local SAR on average becomes a factor of ~2.5 higher at 14 T than at 7 T, see Fig. 6. At 7 T, the peak local SAR limit is generally not exceeded when operating at the head SAR limit, whereas at 14 T the peak local SAR limit is exceeded for, on average, 20% of all RF shims. However, when trying to achieve a uniform flip angle with either RF shimming or kT-points, for most RF coils the global head SAR limit is exceeded before the local SAR limit, even at 14 T. Only when using kT-points with the 16L and the 8D8L arrays is the first level controlled mode peak SAR limit of 20 W/kg exceeded at a head SAR of 3.2 W/kg (Table 2).
Compared to available literature, the simulation study in this manuscript has several limitations which need to be placed in perspective and can be addressed in future studies. First, the simulated coil arrays exhibit fairly strong inter-element coupling, especially those with more than eight transmit channels, which likely leads to an underestimation of the achievable RF shimming performance. Although the loop coils were decoupled with nearest-neighbor decoupling, this did not reduce coupling between next-nearest neighbors or coils in the feet-head direction. Simulating the coils as ideally decoupled [23,32] would likely have introduced an overestimation of the shimming degrees of freedom, as full decoupling can never be achieved in a real coil array. Another option would have been the introduction of decoupling capacitors or inductors into the simulation; however, complex decoupling strategies would be required for next-neighboring elements. Furthermore, these decoupling strategies would apply mainly to the loop coils and are less applicable to arrays that include dipole antennas.
In general, it is assumed that because of stronger tissue loading, higher field strengths will lead to decreased interelement coupling. In this work, we show that coupling does not strongly decrease at 14 T compared to 7 T and for some 14 T arrays could even increase compared to the reference 7 T array. Possible explanations include waveguide modes that propagate within the bore or increased reflection of the signals from the coil array at the air-tissue interface. Note that since we did not exhaustively investigate all possible array sizes and configurations, definite statements about coupling levels at various field strengths cannot be made.
Increased coupling may reduce the degrees of freedom when using RF shimming or pTx, which would then translate into suboptimal flip angle homogeneity or higher peak local SAR levels. Coupling also impacts head SAR because scattered power is not deposited in the patient, resulting in lower head SAR levels. However, coupling also reduces B1 levels, so the reduced head SAR under coupling is not likely to be advantageous. All these effects are included in the presented L-curves. In practice, if coupling deteriorates performance, the coil array may be built with decoupling circuitry. This has not been investigated in this study.
Another point that was not fully addressed in this study is the complete range of freedom in the type, geometry and arrangement of the RF coil elements. For example, by modifying the loop coil elements into loopholes [33], self-decoupled coils [34] or coaxial loops [35][36][37], inter-element coupling could have been reduced. Dipole antennas can be decoupled by introducing passive decoupling elements [38], metamaterial structures [39] or using folded dipole antennas in combination with an RF shield [40]. Various methods have been suggested to reduce SAR for dipole antennas [3,4,39,41,42]; these could further improve array performance. The length and width of the antennas considered in the current manuscript have been based on previous optimization studies [25,26]; however, these studies were not specifically aimed at 14 T brain imaging. A parametric study involving a more complete range of antenna lengths and widths will likely show directions for further improvement. Different coil elements such as meander strip-line antennas [43], coaxial dipoles [44] or folded dipoles [41] could be another method to improve transmit performance. Finally, the coil arrays in this work include a 62 cm diameter RF shield (gradient shield) and are placed on a 30 cm diameter ring. Recent work by Zhang et al. [45] has indicated that for increasing field strength, the diameter of the RF shield has a very strong impact on SNR, and that the use of cylindrical RF transmit coils reduces the SNR of closely fitting receive arrays. It is likely that the implications of this work also hold for these transmit coil arrays.
In addition, there is room for improvement in the methodology for solving for the RF shimming coefficients. By solving for an absolute-valued target vector, the problem becomes non-convex and increasingly difficult to solve for a global optimum. The solutions could be improved either by reformulating the problem as a convex one, by choosing a reasonable complex-valued target B1 distribution, or by using an algorithm that is better able to solve non-convex minimization problems. Although solving the regularized magnitude least squares (MLS) problem has potential benefit [46], there are drawbacks to the current approach.
First, by regularizing only on forward power we are not able to conclude that an optimal peak SAR has been reached as well. Therefore, additional experiments are needed that regularize on peak SAR and show the trade-off between peak SAR and flip angle homogeneity.
Second, as opposed to the current Tikhonov regularization, algorithms that enforce hard constraints can be beneficial to adhere to the SAR restrictions. For example, the work of [47] used a primal-dual interior point method and stated that the results benefit from constraints on both power and SAR. Moreover, other work demonstrates that SAR-reducing optimization strategies can improve performance [48,49].
For the sake of completeness, L-curves when regularizing on peak local SAR are depicted in supplementary Figs. S1 and S2. Note that in our work, the use of regularization on peak SAR levels makes the optimization landscape very irregular, often resulting in solutions ending up in local minima and staggered L-curves. Therefore, smooth L-curves were created using a range of different starting values for each value of the Tikhonov parameter. The results show a reduced peak local SAR value for all coil arrays. However, the relative performance of all coil arrays stays more or less the same (Supplementary Fig. S1).
However, it remains unclear whether such improved optimization strategies would reveal a difference in the relative performance of the proposed coil designs.
As discussed above, there are still many design possibilities to explore for 14 T head coil arrays. With this work, we provide initial directions in choosing coil designs that improve flip angle homogeneity and reduce local SAR. Moreover, we demonstrate that at 14 T, the choice of RF coil has an impact on the achievable uniformity and peak SAR.
Conclusion
We investigated the trade-off between SAR and flip angle uniformity for various head coil designs and parallel transmit strategies at 14 T in comparison to 7 T. We demonstrate that the type of coil element and the number of transmit channels have a strong impact on the transmit performance at 14 T. For arbitrary drive vectors, peak SAR is on average 2.5 times higher at 14 T than at 7 T. When operating at the head SAR limit of 3.2 W/kg, the peak SAR limit is exceeded for 20% of all drive vectors at 14 T, whereas at 7 T the peak SAR limit is almost never exceeded. However, when using either RF shimming or kT-points to achieve a uniform flip angle in the brain, head SAR, and not peak SAR, becomes the limiting factor for 14 T head imaging. Especially for kT-points pulses, high head SAR leads to a reduction of the maximum achievable duty cycle. Nonetheless, to achieve a uniform flip angle at 14 T, kT-points pulses are required. Using a 5-point kT-points pulse reduces the flip angle coefficient of variation from 30 to 6% for the best performing array (15D) at 14 T, at maximum duty cycles of respectively 31% and 18%. Consequently, uniform flip angles can be obtained at a relatively high duty cycle within SAR guidelines in the human brain at 14 T.
Data availability
The simulation data that was generated and analyzed during the current study are available from the corresponding author on reasonable request.
Conflict of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and writing of this article.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Correlation between small and dense low-density lipoprotein cholesterol and cardiovascular events in Beijing community population
Abstract The relationship between small dense low-density lipoprotein cholesterol (sdLDL-C) and different cardiovascular events has been observed in several large community studies, and the results have been controversial. However, there is currently no cross-sectional or longitudinal follow-up study on sdLDL-C in the Chinese hypertensive population. We analyzed the association of plasma sdLDL-C levels with major adverse cardiovascular events in 1325 subjects from a longitudinal follow-up community-based population in Beijing, China. During the follow-up period, a total of 191 subjects had MACEs. Cox regression analysis showed that sdLDL-C is a major risk factor for MACEs independent of sex, age, BMI, hypertension, diabetes, smoking, SBP, DBP, FBG, and eGFR in the general community population (1.013 (1.001-1.025), P < .05), but the correlation disappeared after adjusting for TC and HDL-C in Model 3. Cox analysis showed that hypertension combined with a high level of sdLDL-C was still a risk factor for MACEs (2.079 (1.039-4.148)). Our findings in the Chinese cohort support that sdLDL-C is a risk factor for major adverse cardiovascular events in hypertensive subjects.
| Subjects
The research was conducted in the Pingguoyuan community in Beijing, China. A public recruitment announcement was first published on the community bulletin board with the consent of the community management committee. The subjects voluntarily participated in the study, which aimed to recruit community residents.
Initially, all the subjects received physical examinations at the community medical center. From September 2007 to January 2009, a total of 1828 subjects aged ≥18 years were recruited. Bedridden patients, or patients suffering from mental disorders or severe systemic diseases (including myocardial infarction, coronary artery disease, cerebrovascular events, severe liver insufficiency, severe renal insufficiency, immune system diseases, and endocrine and metabolic disorders [except for type 2 diabetes]), were excluded from this study. In the end, 1680 subjects were enrolled in this study, and the enrolled subjects were followed up a second time between September 2017 and October 2018 via a face-to-face questionnaire administered by two cardiovascular physicians. An average follow-up time of 9.5 years (8.7-10.5 years) was completed by 1325 subjects, during which 191 major adverse cardiovascular events (MACEs) occurred (including 84 deaths); 355 subjects were lost to follow-up. Where necessary, we followed up with subjects' families over the phone to obtain the data.
The proportion of subjects lost to follow-up was approximately 21.1%, and the follow-up rate was 78.9%. The entire protocol of this study was approved by the ethics committee of the PLA General Hospital, and each subject signed an informed consent form.
| Clinical data collection
Medical history was collected by cardiovascular specialists with unified training. All the subjects were investigated to determine the following: cardiovascular history, hypertension history, diabetes history, family history, smoking history, drinking history, and medication history. A standardized questionnaire was used to investigate the subjects' name, sex, nationality, age, education level, occupation, marital status, family address, contact information, and other information. Blood pressure, height, weight, and waist and hip circumference were measured by trained doctors. When measuring the height and weight, shoes, socks, and caps were removed, and the height measurements were accurate to 1 cm, and the weight meter measurements were accurate to 0.1 kg. To measure the waist and hip circumference, any coat was removed and belts were loosened, and the tape measure was placed between the upper part of the hip bone and the lower part of the chest. The measurements were made while the subject was exhaling and their abdomen was relaxed, and the readings were accurate to 1 cm. Prior to measuring their blood pressure, the subjects rested in a sitting position for 5 minutes. Then, blood pressure was measured twice with an interval of 1-2 minutes, and the mean value was calculated.
Blood was drawn by specially trained nurses from subjects in the fasting state. Patients who had not been diagnosed with diabetes mellitus were examined by an oral glucose tolerance test (OGTT).
Fasting blood glucose and postprandial blood glucose, serum TC, TG, HDL-C, LDL-C, and uric acid (UA) were measured with a Roche Diagnostics GmbH kit (Mannheim, Germany) using a Roche 6000 automatic biochemical analyzer (Roche 6000). Serum creatinine (CR) was determined by a Roche enzyme assay kit on the Hitachi 7600 automatic analyzer (Hitachi, Tokyo, Japan). The sdLDL "Seiken" kit (Denka Seiken Co. Ltd, Tokyo, Japan) was used to detect the plasma sdLDL levels on the Hitachi 7180 automatic biochemical analyzer (Hitachi, Japan) [6]. According to the standards of the WHO lipid reference laboratory, all the blood samples were analyzed in the same laboratory.
| Definition of variables
Smoking is defined as smoking at least 1 cigarette a day for at least 1 year. Exercise is defined as at least 1 hour of exercise performed 3 to 5 times a week. Hypertension is defined as mean systolic blood pressure (SBP) ≥140 mmHg and/or mean diastolic blood pressure (DBP) ≥90 mmHg, and/or regular use of antihypertensive drugs. Diabetes is defined as an intravenous fasting blood glucose level ≥7.1 mmol/L, and/or a blood glucose level 2 hours after eating ≥11.1 mmol/L, or use of hypoglycemic drugs or insulin. BMI = weight (kg) / height² (m²); non-HDL-C = TC (mmol/L) − HDL-C (mmol/L); eGFR = 141 × min(Scr/κ, 1)^α × max(Scr/κ, 1)^(−1.209) × 0.993^Age × 1.018 [if female] × 1.159 [if black], where Scr is blood creatinine (mg/dL); κ is 0.7 for women and 0.9 for men; α is −0.329 for women and −0.411 for men.
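The eGFR formula above is the standard CKD-EPI 2009 equation; a sketch of the calculation is given below for reference, with illustrative input values.

```python
def ckd_epi_egfr(scr, age, female, black=False):
    """CKD-EPI 2009 eGFR in ml/min/1.73 m^2; scr is serum creatinine in mg/dL."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr / kappa, 1.0) ** alpha        # min(Scr/kappa, 1)^alpha
            * max(scr / kappa, 1.0) ** (-1.209)     # max(Scr/kappa, 1)^-1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

print(round(ckd_epi_egfr(0.9, 59, female=True), 1))  # ~70 for a 59-year-old woman
```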
The definition of MACE comprised nonfatal myocardial infarction, newly diagnosed CHD (identified by coronary artery imaging or receiving coronary revascularization), stroke (ischemic or hemorrhagic), and cardiovascular mortality.
| Statistical analyses
Key messages
This study was granted an exemption from requiring ethics approval by the Chinese PLA General Hospital Ethics Committee because it was a retrospective observational study.

The categorical variables are expressed as numbers and percentages, and the continuous variables are expressed as the mean ± standard deviation or median (quartile). The t test was used for normally distributed data, the nonparametric Mann-Whitney U test was used for non-normally distributed data, and the chi-square test was used for categorical data. For the comparison between groups, analysis of variance was used when the data were homogeneous, and the Kruskal-Wallis test was used when the data were not normally distributed or the variance was not uniform. The quartiles of the sdLDL-C levels were labeled as Q1 (<27.6 ng/ml), Q2 (27.6 ng/ml ≤ Q2 < 37.7 ng/ml), Q3 (37.7 ng/ml ≤ Q3 < 53.37 ng/ml), and Q4 (≥53.37 ng/ml). The subjects were divided into four groups according to sdLDL-C level and the presence of hypertension: H4, high sdLDL-C (Q3 and Q4) and hypertensive; H3, high sdLDL-C (Q3 and Q4) and non-hypertensive; H2, low sdLDL-C (Q1 and Q2) and hypertensive; H1, low sdLDL-C (Q1 and Q2) and non-hypertensive. Spearman correlation analysis and multiple linear regression analysis were used to evaluate the correlation between the plasma sdLDL-C levels and biomarkers. The relationship between the sdLDL-C levels and MACEs was analyzed by a Cox proportional hazard regression model. Kaplan-Meier survival curves indicate the cumulative incidence of major adverse cardiovascular events (MACEs).
SPSS for Windows version 20.0 (SPSS, Chicago, IL, USA) was used for statistical analysis; P < .05 was considered statistically significant.
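The quartile cut-offs and the four hypertension/sdLDL-C groups described above can be reproduced with a few lines of pandas, as sketched below on toy data; column names and values are illustrative.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'sdldl': [22.0, 30.1, 40.5, 60.2],   # toy sdLDL-C values
                   'htn':   [0, 1, 0, 1]})              # hypertension: 0 = no, 1 = yes
df['sdldl_q'] = pd.cut(df['sdldl'],
                       bins=[-np.inf, 27.6, 37.7, 53.37, np.inf],
                       labels=['Q1', 'Q2', 'Q3', 'Q4'])
high = df['sdldl_q'].isin(['Q3', 'Q4'])                 # Q3/Q4 = high sdLDL-C
df['group'] = np.select(
    [~high & (df['htn'] == 0), ~high & (df['htn'] == 1),
      high & (df['htn'] == 0),  high & (df['htn'] == 1)],
    ['H1', 'H2', 'H3', 'H4'])
print(df)
```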
| Baseline characteristics of the study population
This study initially enrolled 1680 subjects, with an average followup time of 9.5 years (8.7-10.5 years). For various reasons, 355 subjects were lost to follow-up. A successful follow-up rate of 78.9% (1325 subjects) was achieved. One hundred ninety-one cases of MACEs occurred. In the end, the data of 1325 subjects were used in the analysis of this study, including the data of 642 males (48.5%) and 683 females (51.5%), and these subjects had an average age of 58.99 ± 11.28 years. A stratified analysis was made according to whether MACEs occurred and the baseline characteristics and laboratory data of the study population were statistically analyzed as shown in Table 1. There were differences in age, sex, smoking, diabetes, SBP, HDL-C, LDL-C, eGFR, and sdLDL-C between the patients with MACEs and those without MACEs (P < .05). The main characteristics of the population with MACEs were as follows: advanced age, male, smoking, diabetes, higher levels of SBP, LDL-C, and sdLDL-C, and lower levels of HDL-C and eGFR.
| Correlation analysis of baseline sdLDL-C and other indicators
In the univariate model, the baseline sdLDL-C level was positively correlated with age, BMI, SBP, DBP, TC, TG, LDL, FBG, and UA (P < .05).
Multiple linear regression analysis showed that TG, LDL-C and circulating sdLDL-C levels were positively correlated (P < .05) ( Table 2).
| Correlation analysis of sdLDL-C and major adverse cardiovascular events
During the follow-up period, a total of 191 subjects had MACEs. Cox regression analysis showed that sdLDL-C was a risk factor for MACEs independent of sex, age, BMI, hypertension, diabetes, smoking, SBP, DBP, FBG, and eGFR (1.013 (1.001-1.025), P < .05), although this association disappeared after adjusting for TC and HDL-C.
| Correlation analysis of sdLDL-C and major adverse cardiovascular events in hypertensive population
Cox analysis was performed according to whether the subjects were hypertensive. Subjects with both hypertension and high sdLDL-C levels had a significantly increased risk of MACEs (95% CI 1.122-6.027, P < .01), and hypertension combined with a high level of sdLDL-C remained a risk factor after adjustment (2.079 (1.039-4.148)). The data suggest that sdLDL-C is an independent risk factor for major adverse cardiovascular events in people with hypertension (Figure 1).
| DISCUSSION
LDL-C plays an important role in the occurrence and outcome of atherosclerosis and cardiovascular diseases. Reasonable application of statins can play a positive role in preventing cardiovascular events, 7,8 but many clinical studies have found that the risk of cardiovascular disease (CVD) is reduced by less than 30% after the application of statin lipid-lowering drugs, so there may be other important risk factors to be considered. [9][10][11] The occurrence and development of atherosclerosis (AS) are related not only to the total amount of cholesterol, but may also be closely related to its heterogeneity. 12 According to size and density, LDL can be divided into different subfractions. Whether sdLDL-C is a risk factor for CAD independent of other traditional lipid indicators is still controversial. A case-control study 13 enrolled 109 nonfatal myocardial infarction patients and 121 control patients and found that sdLDL-C was independently associated with the risk of myocardial infarction even after adjusting for risk factors such as age, sex, and weight. In a Japanese prospective study, 14 2034 patients without cardiovascular disease were followed for an average of 11.7 years. The results showed that an increased level of sdLDL-C was associated with an increased risk of CVD. For every 10 mg/dL increase in sdLDL-C, the risk of CVD increased by 1.21 (95% CI: 1.12-1.31). Other studies found that sdLDL-C is a risk factor for CAD in people with diabetes, 15 kidney disease, 16 poorly controlled hypertension, 17 and liver transplants. 18 Other research has shown that sdLDL-C has a stronger relationship with cIMT progression than LDL-C. 19 The ARIC study 20 included a total of 11,419 patients followed for 11 years and found that sdLDL-C is a risk factor for coronary heart disease, but after adjusting for LDL-C, apo B, and TC, the predictive value of sdLDL-C is weakened or disappears. Some scholars 21 believe that in patients after myocardial infarction, sdLDL is associated with a reduced risk of all-cause death and noncardiovascular death. In this study, we found that after adjusting for confounding factors (sex, age, BMI, hypertension, diabetes, smoking, TC, HDL-C, FBG, eGFR), the circulating sdLDL-C level was not an independent risk factor for MACEs. To further explore the relationship between sdLDL-C and MACEs in patients with hypertension, we observed that sdLDL-C was an independent risk factor for MACEs (2.079 (1.039-4.148), P < .05).
LDL-C is the main lipid risk factor for the development of AS, and sdLDL-C is considered to be the main LDL-C component that causes AS, 24 as it is more prone to oxidation reactions. Therefore, sdLDL is more effective in causing atherosclerosis than LDL-C, especially in people with metabolic syndrome, obesity, and diabetes. [25][26][27][28][29] sdLDL-C can participate in atherosclerosis through a variety of mechanisms. Because of its smaller particles, sdLDL-C is more likely to penetrate the endothelial barrier and migrate to the blood vessel wall, and it is more likely to be oxidized. Its decreased clearance rate increases the possibility of chemical modification in plasma, 30,31 which is considered to facilitate progression of the atherosclerotic plaque.
sdLDL-C can also participate in inflammation by activating immune cells and participate in the development of coronary heart disease. 32 In addition, sdLDL-C is related to the presence of macrophages in plaques, which may be related to unstable arterial plaques. 33 Increasing numbers of studies have confirmed the value of sdLDL-C in CVD risk prediction. sdLDL-C has been listed as one of the newly discovered important cardiovascular risk factors by the adult treatment panel of the National Cholesterol Education Program (NCEP). 34 Apparently, cholesterol- and LDL-lowering compounds are also effective in the reduction of sdLDL levels. In addition, improving the lipid profile, especially reducing triglyceride levels, adopting an appropriate regimen, and changing one's lifestyle can decrease sdLDL levels. 35 In this study, we reported a direct relationship between plasma sdLDL levels and MACEs in a large community population. However, since this is a community-based study, it is limited by race, ethnicity, and geographical location.
ACKNOWLEDGEMENTS
We thank colleagues at the Department of Laboratory Medicine, the PLA General Hospital, for help with biochemical measurements. We are also grateful to all study participants for their participation in the study.
CONFLICT OF INTERESTS
The authors declare that they have no competing interests.
AUTHORS' CONTRIBUTIONS
XW and PY designed the study; RC, XY, WX, LW, and YZ participated in acquisition of data; XW and PY researched and evaluated the literature; XW undertook the statistical analysis and wrote the first draft of the manuscript. All authors read and approved the final manuscript.
CONSENT FOR PUBLICATION
Not applicable.
ETHICS APPROVAL
The study was approved by the ethics committee of the People's Liberation Army General Hospital, and each subject provided informed written consent.
DATA AVAILABILITY STATEMENT
Not applicable.
Achieving the Landau bound to precision of quantum thermometry in systems with vanishing gap
We address estimation of temperature for finite quantum systems at thermal equilibrium and show that the Landau bound to precision $\delta T^2 \propto T^2$, originally derived for a classical, not too small, system being a portion of a large isolated system at thermal equilibrium, may also be achieved by energy measurement in microscopic quantum systems exhibiting a vanishing gap as a function of some control parameter. On the contrary, for any quantum system with a non-vanishing gap $\Delta$, the precision of any temperature estimator diverges as $\delta T^2 \gtrsim T^4 e^{\Delta/T}$.
Introduction
In the last decades, we have seen a constant improvement in the generation and control of engineered quantum systems, either to test quantum mechanics in a mesoscopic or macroscopic setting, or for the implementation of quantum-enhanced technologies. More recently, controlled quantum systems have become of interest to test and explore thermodynamics in the quantum regime, e.g. for the characterization of work and energy statistics. Indeed experiments in several optical and material systems have been suggested and implemented, with the aim of understanding relaxation, thermalisation, and fluctuations properties in systems exhibiting explicit quantum features or being at the classical-quantum boundary [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21].
In this framework, it has become increasingly relevant to have a precise determination of temperature for quantum systems [22,23,24,25,26,27,28], and to understand the ultimate bounds to precision in the estimation of temperature posed by quantum mechanics itself [29,30,31,32,33,34,35,36,37,38,39]. The problem cannot be addressed in elementary terms since, as a matter of fact, for a quantum system in equilibrium with a thermal bath, there is no linear operator that acts as an observable for temperature, and we cannot write down any uncertainty relation involving temperature. In turn, this is somehow connected with the fact that temperature, thought of as a macroscopic manifestation of random energy exchanges between particles, does not, in fact, fluctuate for a system at thermal equilibrium. Therefore, in order to retain the operational definition of temperature, one is led to argue that although the temperature itself does not fluctuate, there will be fluctuations for any temperature estimate, which is based on the measurement of some proper observable of the system, e.g. energy or population.
This line of thought has been effectively pursued in classical statistical mechanics where, upon considering temperature as a function of the exact and fluctuating values of the other state parameters, Landau and Lifshitz derived a relation for the temperature fluctuations of a finite system [40,41]. This is given by δT² = T²/C, where C is the heat capacity of the system itself. In turn, this appears as a fundamental bound to the precision of any temperature estimation. However, the relation has been derived for a system which represents a small portion (but not too small) of a large, isolated system in thermal equilibrium. Besides, it has been derived assuming the absence of any quantum fluctuation. Overall, its validity is thus questionable if the temperature is low enough or the system is known to exhibit quantum features [42].
A first attempt to establish the Landau bound for finite quantum systems has been pursued by inverting the energy dependence on temperature [43], however still assuming that the system is large enough. Earlier, the concept of temperature fluctuations had caused longstanding controversies [44,45,46,47,48,49,50,51], which have not really been solved to date, at least in fundamental terms [52]. In particular, it is not clear whether, and under which conditions, the Landau bound may be confirmed for quantum systems at low temperature (where quantum fluctuations become relevant), and whether the corresponding precision may be achieved in practice.
The Landau bound in quantum systems with vanishing gap
Our starting point is to recall that temperature is not an observable and therefore its value should be estimated through an indirect detection scheme, i.e. by measuring something else, say the observable X on M repeated preparations of the system, and then suitably processing the data sample x₁, ..., x_M in order to infer the value of temperature. The function T̂(x₁, ..., x_M) is usually referred to as an estimator and provides an operational definition of temperature for the system under investigation. However, as first noticed by Mandelbrot for closed systems [44,48], different inference strategies may be employed, e.g. by starting from different observables, or just by using different estimators (say, the mean or the mode) on the same data sample, thus leading to different, and perfectly acceptable, definitions of temperature. In other words, the temperature of a thermodynamic system cannot be uniquely defined, and no specific definition can give rise to consensus.
On the other hand, we may give a proper and unique definition to the notion of temperature fluctuations. In fact, the variance of any unbiased estimator of temperature is bounded by the Cramer-Rao theorem [53], stating that

δT² ≥ 1 / [M F(T)],   (1)

where δT² ≡ Var(T̂) = ⟨(T̂ − ⟨T̂⟩)²⟩, M is the number of repeated measurements and F(T) is the so-called Fisher information, given by

F(T) = ∫ dx p(x|T) [∂_T ln p(x|T)]²,   (2)

X being the quantity measured to infer the temperature and p(x|T) the conditional distribution of its outcomes given the true, fixed, value of temperature. The overall picture arising from the Cramer-Rao theorem is that the notion of temperature may indeed be imperfectly defined whereas, at the same time, the notion of temperature fluctuations may be given a unique meaning. We will fully exploit this approach to establish whether, and in which regimes, the Landau bound to thermometry may be established for quantum systems. To this aim, the crucial observation is that the quantum version of the Cramer-Rao theorem [54,55,56,57,58,59] provides tools to identify an optimal strategy to infer the value of temperature, i.e. to define a privileged observable related to temperature, which allows one to determine temperature with the ultimate precision. This is done in two steps: i) find the observable that maximizes the Fisher information and ii) find an estimator that saturates the Cramer-Rao bound. The first step may be solved in a system-independent way, upon considering the observable defined by the spectral measure of the so-called symmetric logarithmic derivative, i.e. the self-adjoint operator L_T obtained by solving the Lyapunov-like equation

2 ∂_T ρ_T = L_T ρ_T + ρ_T L_T,   (3)

where ρ_T is the density operator of the system under investigation. The second step is, in general, dependent on the system under investigation, though general solutions may be found in the asymptotic regime of large data samples, where Bayesian or maximum-likelihood estimators are known to saturate the Cramer-Rao bound.
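As a numerical illustration of the definitions above, the sketch below computes the Fisher information of an energy measurement on a canonical state, which equals Var(H)/T⁴ (as derived in the next paragraph), and cross-checks it against the definition (2) by finite differences. The toy spectrum is an assumption; the Boltzmann constant is set to one.

```python
import numpy as np

def fisher_energy(E, T):
    """F(T) = Var(H)/T^4 for a canonical state with spectrum E."""
    p = np.exp(-(E - E.min()) / T)    # shift by E.min() for numerical stability
    p /= p.sum()
    var_h = p @ E**2 - (p @ E)**2
    return var_h / T**4

def fisher_numeric(E, T, h=1e-6):
    """F(T) = sum_n p_n (d ln p_n / dT)^2, via central differences."""
    pm = lambda t: (lambda w: w / w.sum())(np.exp(-(E - E.min()) / t))
    dlogp = (np.log(pm(T + h)) - np.log(pm(T - h))) / (2.0 * h)
    return pm(T) @ dlogp**2

E = np.array([0.0, 1.0, 2.5])                          # toy spectrum
print(fisher_energy(E, 0.7), fisher_numeric(E, 0.7))   # the two values agree
```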
In approaching the issue of temperature fluctuations, one often assumes that the quantity to be measured is the energy of the system. The first thing we can prove using quantum estimation theory is that energy measurement is, in fact, the optimal one for any quantum system. The only assumption needed to prove this statement is that the system under examination may be described in the canonical ensemble. Let us denote by H the Hamiltonian of the quantum system under investigation and by H|e_n⟩ = E_n|e_n⟩ its eigenvalues and eigenvectors. At thermal equilibrium the density operator of the system is

ρ_T = e^{−βH} / Z,   (4)

where β = T^{−1}, the Boltzmann constant being set to one, and Z = Tr[exp{−βH}] is the partition function of the system. Inserting Eq. (4) in Eq. (3) we have a solvable equation, leading to

L_T = (H − ⟨H⟩) / T²,   (5)

where ⟨H⟩ is the average energy of the system. Eq. (5) shows that the optimal measurement is diagonal in the Hamiltonian basis, i.e. it may be achieved by measuring the energy of the system. The corresponding Fisher information is given by F(T) ∝ δH²/T⁴ = c_V(T)/T², c_V(T) being the specific heat at temperature T. In turn, this relation reveals that when the specific heat increases then the same happens to the Fisher information associated to temperature, e.g. temperature may be effectively estimated at classical phase transitions with diverging specific heat [60,61]. On the other hand, if the specific heat is bounded from above, then the precision of temperature estimation is bounded from below. We now proceed to investigate whether the Landau bound for quantum systems holds also in the low-temperature regime. In doing this, we analyze the microscopic origin of the behavior of the specific heat without making any assumptions about the size of the system. To this aim we assume that only the two lowest energy levels of the system are populated (we are in the low-temperature regime). The density operator may be written as

ρ_T = Z^{−1} (|e_0⟩⟨e_0| + e^{−Δ(λ)/T} |e_1⟩⟨e_1|),   (6)

where the partition function reads Z = 1 + e^{−Δ(λ)/T} and Δ ≡ Δ(λ) = E_1 − E_0 is the energy gap between the two levels. In writing (6) we also assumed that the energy levels of the system depend on some external control parameter λ, e.g. an internal coupling or an external field, which may be exploited to tune the energy gap Δ(λ) between the two levels. Using Eq. (2) and the fact that energy measurement is optimal, the Cramer-Rao bound (1) for temperature estimation says that the variance of any temperature estimator is bounded by

δT² ≥ (T²/M) g(Δ/T),  with  g(x) = e^x (1 + e^{−x})² / x².   (7)

The function g(x) is depicted in the left panel of Fig. 1. It diverges as e^x/x² for x → ∞ and as 4/x² for x → 0, whereas it shows a minimum g(x_m) ≈ 2.27 located at x_m ≈ 2.4. It follows that in systems where the gap Δ may be tuned to arbitrarily small values by tuning the external control λ, such that Δ/T ≈ x_m remains finite, optimal estimation of temperature with precision at the Landau bound δT² ∝ T² may be achieved by measuring energy with suitable data processing. On the contrary, in systems where the gap has a minimum, temperature may be estimated efficiently only down to a threshold, below which the variance of any estimator starts to increase as δT² ≳ T⁴ e^{Δ/T}. The above results are valid for arbitrarily small systems at low temperature and do not depend on the specific structure of the system Hamiltonian, nor on the size of the system. The only requirement is that the system exhibits a vanishing gap between its lowest energy levels as a function of some external control parameter.
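The quoted minimum of g is easy to confirm numerically, e.g. with SciPy:

```python
import numpy as np
from scipy.optimize import minimize_scalar

g = lambda x: np.exp(x) * (1.0 + np.exp(-x))**2 / x**2
res = minimize_scalar(g, bounds=(0.1, 20.0), method='bounded')
print(res.x, res.fun)   # ~2.4 and ~2.27, matching x_m and g(x_m)
```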
Results are also independent of any specific features of the two-level approximation, assuming that a gap above the first excited level is present in order to make sense of the two-level description. The range of temperatures where the results hold corresponds to the range of validity of the two-level approximation, roughly speaking T of the order of the gap above the first excited level. Results are, however, robust against this parameter. To confirm this statement, let us consider a three-level approximation where

ρ_T = Z^{−1} (|e_0⟩⟨e_0| + e^{−Δ₁(λ)/T} |e_1⟩⟨e_1| + e^{−Δ₂(λ)/T} |e_2⟩⟨e_2|),  Z = 1 + e^{−Δ₁(λ)/T} + e^{−Δ₂(λ)/T}.   (8)

The resulting Cramer-Rao bound, for energy measurement, is given by

δT² ≥ (T²/M) h(Δ₁/T, Δ₂/T),   (9)

where

h(x, y) = e^{−x−y} (e^x + e^y + e^{x+y})² / [(1 + e^y) x² − 2xy + (1 + e^x) y²].
The function h(x, y) is depicted in the right panel of Fig. 1. It is symmetric and shows a global minimum h_m ≈ 1.31 located at x_h = y_h ≈ 2.66. It also shows local minima at x_m ≈ 2.4 for increasing y. Upon tuning the gap Δ₁ to arbitrarily small values such that Δ₁/T ≈ x_m remains finite, we have y = Δ₂/T → ∞. On the other hand, h(x, y) approaches g(x) for increasing y, and we are thus smoothly back to the two-level case.
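Both the minimum of h and its large-y limit can likewise be checked numerically:

```python
import numpy as np
from scipy.optimize import minimize

def h(v):
    x, y = v
    num = np.exp(-x - y) * (np.exp(x) + np.exp(y) + np.exp(x + y))**2
    den = (1.0 + np.exp(y)) * x**2 - 2.0 * x * y + (1.0 + np.exp(x)) * y**2
    return num / den

res = minimize(h, x0=[2.0, 2.0])
print(res.x, res.fun)      # ~(2.66, 2.66) and ~1.31
print(h([2.4, 40.0]))      # ~2.27: h(x, y) -> g(x) for large y
```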
Remarks
Estimation theory has also been used to address properties of thermometers, rather than intrinsic properties of the system under investigation. In particular, the role of the number N of particles has been analyzed, showing that performing energy measurement on a non-thermalizing thermometer made of two-level atoms allows one to improve the scaling of precision from N^{−1/2} to N^{−1} [30]. The analysis has also been extended to thermometers made of multilevel atoms [35], either fully or partly thermalizing, showing that the sensitivity grows significantly with the number of levels, with the optimization over their energy spectrum playing a crucial role. We emphasize that these results pertain to properties of quantum thermometers, i.e. quantum systems used to probe the temperature of an external bath, whereas our focus has been on establishing intrinsic bounds to precision, thus providing benchmarks to assess any detection scheme. It should also be mentioned that, upon employing arguments similar to those used in [30], the analysis of the previous Section may be extended to degenerate systems, where Δ now represents the average energy per particle.
Another remark concerns a possible, alternative, explanation introduced to account for fluctuations in temperature measurements. The argument is based on the idea that an intrinsic distribution of temperatures may exist, which is consistent with a given thermodynamic state, without implying dynamical fluctuations of the temperature(s) themselves. The argument is usually referred to as the polythermal ensemble hypothesis [41], and it has been somewhat criticized [49] since it requires more hypotheses than just assuming the canonical ensemble.
Finally, it should be mentioned that the use of a Hamiltonian control parameter to improve thermometric strategies has already been implemented experimentally, e.g. for strongly interacting Fermi gases [62,63].
Conclusions
In recent years, schemes for temperature estimation involving the interaction of the system with an individual quantum probe have received attention [30,31,32,33,34,35,36,37,64,65], mostly because they provide a temperature estimate while adding minimal disturbance. Our results, which have been obtained with basically no assumptions on the structure of the system under investigation and on the measurement performed to extract information, provide a general benchmark to assess these schemes, and to design effective thermometers for quantum systems.
Our results also provide a framework to reconcile the different approaches to temperature fluctuations. As a matter of fact, temperature itself does not fluctuate; however, there are fluctuations for any temperature estimate based on an indirect measurement. In other words, temperature is a classical parameter which does not correspond to a quantum observable, and estimation of temperature necessarily involves the measurement of another quantity, corresponding to a proper observable. In turn, quantumness in temperature estimation lies in the measurement stage and in the nature of the fluctuations of the measured observable.
The optimal strategy to estimate the temperature of a small quantum system turns out to be measuring the energy of the system and suitably processing the data, e.g., by Bayesian analysis [66,67], in order to achieve the Cramér-Rao bound to precision. In this way, we have shown that the classical Landau bound to precision is recovered, in the low-temperature regime, for systems exhibiting a vanishing gap as a function of some control parameter. On the contrary, in systems with a non-vanishing gap ∆ between the lowest energy levels, temperature may be effectively estimated only down to a threshold, below which the variance of any estimator starts to increase as δT² ∼ T⁴ e^{∆/T}. Notice that this is true independently of the use of an external ancillary system to probe the temperature of the system under investigation. In other words, rather than being a property of the "thermometer" (i.e., of the chosen ancillary system and of the probing interaction scheme), the ultimate precision in temperature estimation is an intrinsic property of the quantum system itself. Our analysis shows the optimality of quantum thermometry based on energy measurements, and provides quantum benchmarks for high-precision temperature measurement, as well as an efficient operational quantification of temperature for quantum mechanical systems lying arbitrarily close to their ground state. | 2016-01-14T21:07:27.000Z | 2015-10-27T00:00:00.000 | {
"year": 2015,
"sha1": "5ac6ac3a79197af3aead1fbf5af5d1f42bbfc59b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1510.08111",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5ac6ac3a79197af3aead1fbf5af5d1f42bbfc59b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
44217200 | pes2o/s2orc | v3-fos-license | A systematic analysis of nucleosome core particle and nucleosome-nucleosome stacking structure
Chromatin condensation is driven by the energetically favourable interaction between nucleosome core particles (NCPs). The close NCP-NCP contact, stacking, is a primary structural element of all condensed states of chromatin in vitro and in vivo. However, the molecular structure of stacked nucleosomes, as well as the nature of the interactions involved in its formation, has not yet been systematically studied. Here we undertake an investigation of both the structural and the physico-chemical features of the NCP structure and of NCP-NCP stacking. We introduce an "NCP-centred" set of parameters (NCP-NCP distance, shift, rise, tilt, and others) that allows numerical characterisation of the mutual positions of the NCPs in the stacking and in any other structures formed by the NCP. NCP stacking in more than 140 published NCP crystal structures was analysed. In addition, coarse-grained (CG) MD simulations modelling NCP condensation were carried out. The CG model takes into account details of the nucleosome structure and adequately describes the long-range electrostatic forces as well as the excluded-volume effects acting in chromatin. The CG simulations showed good agreement with experimental data and revealed the importance of H2A and H4 N-terminal tail bridging and screening, as well as of tail-tail correlations, in the stacked nucleosomes.
a matter of debate [23][24][25][26][27][28][29]. The relevance of the 30-nm fibre in vivo has also recently been the subject of discussion, with some authors suggesting an irregular "melted" polymer phase as the major form of chromatin in vivo [30][31][32] (and references cited in 32). However, the in vivo existence of the 30-nm fibre is supported by recent work 33.
However, it is clear that a principal element of condensed chromatin is the close stacking contact between the surfaces of the wedge-shaped cylindrical NCPs. NCP-NCP stacking has been experimentally observed in NCP crystals 3,4,[34][35][36], in NCP liquid crystalline phases 37, in the crystals of the 197 bp chromatosome 38 and the tetranucleosome 24, in folded nucleosome arrays 23,25,39, and in cryo-electron microscopy (cryo-EM) images of frozen isolated native chromatin 40,41. Single-molecule measurements indeed demonstrate that nucleosome-nucleosome stacking is energetically favourable 29,[42][43][44][45]. Twenty years after the first atomic-resolution structure 3, a large number of NCP structures have been reported (now about 150 entries). Comparison between these structures reveals that the binding of the DNA on the histone core is flexible, but has certain structural restrictions. The majority of all 120 protein-DNA contacts occur between the charged phosphate groups of the DNA and the Lys+/Arg+ of the histone core [46][47][48], illustrating that electrostatic interactions dictate NCP formation.
However, a systematic analysis of the major structural elements in these structures, in particular of nucleosome stacking, has not been undertaken. Furthermore, no convention for the parameters that define these major structural elements has been suggested. The development of a convention for the description of NCP-NCP contacts, similar to the generally accepted scheme for the characterisation of double-stranded DNA and RNA 49, is therefore timely. Here, we introduce an "NCP-centred" coordinate system to describe the relative three-dimensional positions of atoms and molecules in the NCP system (including other NCPs). This proposed coordinate frame gives a general representation of the NCP as a flat cylinder, and the identification of these coordinates does not require calculation of the superhelical path of the nucleosomal DNA. We analyse over 140 crystal structures containing NCPs with respect to the DNA superhelical parameters as well as the NCP-NCP contacts. Furthermore, combining the information extracted from the analysis of NCP-NCP contacts in the crystals with the results of our CG Langevin molecular dynamics (MD) computer simulations of NCP self-association, we reveal and discuss the electrostatic forces responsible for nucleosome interactions. Our analysis demonstrates that the N-terminal tails of histones H4 and H2A play key roles in the stabilization of NCP stacking and that tail-tail coordination defines the preferences for NCP orientation in NCP stacks.
Results and Discussion
Overview of NCP structures. Supplementary Table S2 presents basic information for the PDB entries of crystal structures that include the NCP. The vast majority of the 147 entries listed in Supplementary Table S2 have been obtained for a single NCP, formed by 145-147 bp of DNA and having P2₁2₁2₁ crystal packing. Following the first crystallization of the NCP 34, the breakthrough report of the NCP structure at atomic resolution 3 led to the determination of NCPs with histones from various organisms, histone variants, as well as complexes of the NCP with small ligands. Important contributions included the tetranucleosome 24, a complex of the NCP with the peptide LANA (latency-associated nuclear antigen) 50, and the chromatosome 51. A number of protein-NCP crystal structures have been obtained and were recently reviewed 52. Progress in cryo-EM has resulted in the determination of atomic or nearly atomic resolution structures of the single NCP [53][54][55], the NCP with linker histones 38, and nucleosome arrays 39.
Coordinate system for the description of the NCP structure and NCP-NCP contacts. Figure 1A shows the scheme used to define the principal parameters in an "NCP-centred" coordinate system, placed at the centre of mass (COM) of the globular part of the histone octamer (gHO; shown as a red sphere in Fig. 1A). The symmetry axis is drawn as a line connecting the gHO COM and the COM of one (for the NCPs with 145 and 147 bp DNA) or two (for the crystals with 146 bp DNA) DNA base pairs in the middle of the DNA chain (red rod in Fig. 1A). The NCP plane (green surface in Fig. 1A) was defined using the coordinates of the DNA base pairs situated in the two opposite DNA chains about 90° from the symmetry axis (these DNA atoms are highlighted in Fig. 1A by light- and dark-green spheres). First, two vectors normal to the planes formed by the symmetry axis and each of the COMs of these base pairs were drawn. Next, the NCP plane was defined by setting the median of these two vectors as the normal vector to this plane.
In structural studies of the NCP, the symmetry of the particle is commonly based on a superhelical symmetry of the DNA with a specific diameter, pitch, and dyad axis that divides the superhelix into two symmetrical halves 2,7,34,35,[56][57][58]. Figure 1B compares the two coordinate systems: one based on the NCP symmetry and built as described above (Fig. 1A), and one using a least-squares fit to an ideal superhelix (SH) based on the coordinates of the 129 central dsDNA axis points. For most of the NCP structures, the distance between the origins of the two systems is small (in the range 3-4 Å; Fig. 2A). The centre of the SH (magenta sphere in Fig. 1B) is positioned almost on the symmetry plane of the NCP (green surface in Fig. 1B).
Interestingly, for most NCP structures, the origin of the coordinate system for the 145-147 bp DNA double helix (using the coordinates of the DNA axis) coincides very well with the COM of the globular part of the HO introduced in this work (within less than 2 Å; see Supplementary Fig. S1). For example, in the illustrative case of the 1KX5 structure, the distance between the centres is only 0.7 Å. The observation that the COM of the gHO coincides with the centre of the DNA superhelix is in fact not trivial and somewhat unexpected. Indeed, the atomic coordinates used to define these reference points are different, and the only connection one can find between the two sets of atoms is that the DNA positioning on the HO surface is quite strictly directed by the location of the positively charged arginines 2,3.
The angle between the dyad and the symmetry axes in the two coordinate systems is small, in the range 0-6° for most NCPs (shown respectively as cyan and red rods in Fig. 2B). The major difference between the coordinates suggested in this work and the frame based on the DNA SH is in the orientation of the NCP plane (or, equivalently, in the orientation of the SH axis and the normal vector of the symmetry plane). The angle between these axes is in the range 6-10° for the published NCPs (Fig. 2C, green and orange vectors and planes). If the SH direction is used to build the NCP plane, then this plane looks skewed and cuts through the DNA helices, clearly misrepresenting the orientation of the NCP cylinder (Fig. 1B). Consequently, we consider the present symmetry-based coordinate system to be a better representation of the NCP geometry, since it gives a correct orientation of the NCP cylinder.
The NCP-centred coordinate system can be used to characterise the position and orientation of any atom, molecule or structure relative to the NCP. Supplementary Figure S2 shows this set of parameters (distance to the centre, rise, shift, shift orientation). In this work we concentrate on the description of NCP-NCP contacts, which requires the extended set of parameters given below. However, we first analyse the parameters of the DNA superhelix in the published NCP structures.
DNA structure in the NCP crystals. Figure 3 presents statistics of the parameters of the DNA superhelix in the published NCP structures; Supplementary Table S3 gives the detailed data. Bending of the DNA in the inner 129 bp is clearly different from that of the short DNA stretches at the NCP entry/exit (Fig. 3A). The inner DNA has more than one degree per bp larger curvature than the 8-9 bp DNA sections at the ends. It has previously been shown that bending/kinking of the DNA is inhomogeneous and dependent on the DNA sequence [56][57][58]. However, the highly conserved structure of the HO forces the inner 129 bp DNA loop to conform to the restrictions posed by the precise positioning of the key HO residues, which results in the overall bending of the inner 129 bp being strictly fixed.

Figure 2. Statistics of the mutual orientation of the DNA superhelix (SH) and the NCP symmetry axes in the published NCP structures. (A) Distance between the centre of the DNA SH and the centre of the suggested NCP-symmetry coordinate system. (B) Angle between the SH dyad axis and the NCP symmetry axis. The inset in the graph shows the positions of the symmetry axis (red line marked "symm") and the SH dyad axis (cyan line marked "dyad") observed in the 1KX5 crystal 4. The green area is for positive values of the angle, the orange one for negative angles. (C) Angle between the SH axis and the normal to the NCP plane. The inset in the graph illustrates a typical case (1KX5 NCP 4) of the positions of the normal-to-plane vector (green arrow marked "symm") and the SH axis (orange arrow marked "SH").
Furthermore, the inner 129 bp of nucleosomal DNA is on average stretched compared to the double helix of the outer DNA (Fig. 3B). The radius (Fig. 3C) and the pitch (Fig. 3D) of the DNA superhelix, calculated from the coordinates of the axis of the inner 129 bp DNA, are well fitted by a normal distribution. The SH radius varies within less than 1 Å (between 39.5 and 40.5 Å), while the SH pitch shows a broader distribution (variation of about 2 Å, between 24.5 and 26.5 Å). The number of base pairs accommodating one full turn of the DNA superhelix in the NCP varies in the range 76.5-79 bp (Fig. 3E). This reflects the ability of the DNA sequence to stretch or contract to accommodate the restrictions posed by the DNA-binding sites of the HO.
Parameters describing the NCP-NCP contact. To describe the mutual orientation of the two NCPs in the NCP-NCP contact, one needs an extended set of parameters compared to the simpler situation when the position of a single point is defined in the NCP-centred coordinate system (Supplementary Figure S2). Furthermore, this set of parameters can be applied to the characterisation of multi-NCP structures that may also include other components (e.g. proteins). The NCP-NCP contact may be characterised by the following parameters (Fig. 4): 1. Distance between the COMs of NCP1 and NCP2 ("Dist" in Fig. 4A); 2. Rise, the distance from the COM of NCP2 to the NCP1 plane. The rise may be positive or negative, depending on whether NCP2 is above or beneath NCP1 (Fig. 4A). Since in this work only pairwise NCP-NCP contacts are analysed, the absolute value of the rise is used. 3. Shift of NCP2 relative to NCP1, measured as the distance between the COM of NCP1 and the projection of the COM of NCP2 on the NCP1 plane (Fig. 4B); 4. Orientation of the shift, defined as the angle (ϕ) between the NCP1 symmetry axis and the line connecting the shift point with the COM of NCP1. The angle ϕ varies in the range −180° to +180°, being positive or negative depending on a counter-clockwise or clockwise turn relative to the symmetry axis (Fig. 4B); 5. The angle (δ) between the symmetry axis of NCP1 and the projection of the symmetry axis of NCP2 on the NCP1 plane (Fig. 4C). The term symmetry axes orientation is used instead of "angle". The angle δ varies in the range −180° to +180°; 6. The NCP-NCP tilt, the angle between the planes of NCP1 and NCP2, which is equivalent to the angle between the normal vectors to the NCP planes (Fig. 4D, left); 7. The NCP-NCP tilt direction, the direction in which the plane of NCP2 is tilted relative to the symmetry axis of NCP1, defined as the angle between the symmetry axis of NCP1 and the projection of the plane-normal vector of NCP2 on the NCP1 plane (Fig. 4D, right).
In principle, six parameters are sufficient to fully define the mutual positions of two rigid bodies (such as NCPs). However, it may be practical to use the NCP-NCP distance instead of the combination of shift and rise.
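For concreteness, all of the parameters listed above can be computed directly from the per-particle frames introduced earlier (COM, unit symmetry axis, and unit plane normal). The following minimal sketch (Python/NumPy; the function and variable names are ours for illustration, not from the published analysis scripts) shows one way to do this:

import numpy as np

def signed_angle(a, b, n):
    # Angle from vector a to vector b, measured in the plane with unit
    # normal n; positive for a counter-clockwise turn, range (-180, 180]
    return np.degrees(np.arctan2(np.cross(a, b) @ n, a @ b))

def ncp_contact_parameters(com1, axis1, normal1, com2, axis2, normal2):
    # All inputs are 3-vectors; axes and normals must be unit vectors
    d = com2 - com1
    dist = np.linalg.norm(d)                      # NCP-NCP distance
    rise = d @ normal1                            # signed rise above the NCP1 plane
    in_plane = d - rise * normal1
    shift = np.linalg.norm(in_plane)              # shift
    phi = signed_angle(axis1, in_plane, normal1)  # shift orientation
    axis2_proj = axis2 - (axis2 @ normal1) * normal1
    delta = signed_angle(axis1, axis2_proj, normal1)  # symmetry axes orientation
    tilt = np.degrees(np.arccos(np.clip(normal1 @ normal2, -1.0, 1.0)))  # plane-plane angle
    return dist, rise, shift, phi, delta, tilt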
Parameters of NCP-NCP stacking in crystal structures. Using the approach described above, we analysed the geometry of the NCP-NCP contacts in the published crystal structures. The results are presented in Figs 5 and 6 and in Supplementary Table S4. Figure 5 shows examples of the NCP-NCP contacts. The most typical case of NCP stacking, observed in the vast majority of the crystals, is depicted in Fig. 5A using the first atomic-resolution structure 1AOI 3. The NCP-NCP distances are in the range 67-68 Å (Fig. 6A) and the absolute values of the rise between 54 and 57 Å (Fig. 6B). The plane-plane angle is in the range 8-16° (Fig. 6C), and NCP2 is tilted relative to NCP1 in the direction of the NCP1 symmetry axis (Supplementary Figure S3). In most NCP crystals the symmetry axes are oriented in a head-to-tail fashion (around ±180°; Fig. S1 and Supplementary Table S4; see also further discussion below). The shift varies from 38 to 44 Å (Fig. 6D). The NCP shift is generally equal to two diameters of dsDNA, substantially reducing the areas where the negative DNA surfaces of NCP1 and NCP2 are close to each other.
Figure 5B displays the NCP-NCP stacking observed in the crystal structure of the tetranucleosome 24. This sort of NCP stacking is expected to be common in chromatin 30-nm fibres folded in vitro and in vivo. Here, the NCP-NCP distance is less than 60 Å and the rise is around 55 Å. The NCP-NCP shift is about 20 Å and roughly corresponds to the diameter of the dsDNA. The concertina-like packing of the nucleosomes in the two-start 30-nm chromatin fibre displays head-to-head positions of the NCP symmetry axes, and this NCP orientation is expected to be very common. In general, very few structures with head-to-head axis orientation have been reported: besides the tetranucleosome, only three crystals show a symmetry axis orientation around 0°: the NCP with the centromeric histone CENP-A 59, an NCP complex with the centromeric protein CENP-C 60, and the 197-bp nucleosome in a complex with linker histone H1 38 (Fig. 7B, Supplementary Table S4).
A few crystal structures have both rather small values of the shift (around 20 Å, Fig. 6D) and a head-to-tail symmetry axis orientation. One of these structures, 5ay8 61, is shown in Fig. 5C. The difference in the NCP-NCP shift between the 1AOI and 5ay8 crystals (about 20 Å smaller for 5ay8) reduces the overlap between the NCPs in the crystal. Although CoHex3+ is not a biologically relevant cation, due to its symmetry and well-defined structure it is practical in modelling NCP aggregation in vitro and in silico. It was experimentally shown that the columnar hexagonal phase induced by the presence of this trivalent cation is the same as in the presence of the biological trivalent cation spermidine3+ 73. The results confirmed the experimental observations about the polyelectrolyte nature of the NCP and chromatin 74 and showed that electrostatic forces have a decisive influence on the NCP-NCP interaction. In the presence of K+, NCPs repel each other, while in the presence of Mg2+ cations the NCP-NCP interaction becomes slightly attractive, which is reflected by the formation of small (2-4 NCPs) aggregates. The effective screening of the NCP negative charge in the presence of the tricationic CoHex3+ resulted in the formation of a single aggregate both in systems with ten 72 and twenty 71 NCPs.
In the present work we further examine the results of the NCP + CoHex3+ simulations using the framework introduced above, focussing on an analysis of the internal structure of the aggregated NCPs. We compare the structures of the NCP-NCP contacts observed in the CG simulation of 20 NCPs in the presence of CoHex3+ with observations in the NCP crystals, nucleosome arrays and NCP liquid crystalline phases. The results of the CG modelling reveal the contributions of the H4 and H2A N-termini to the stabilisation of the NCP-NCP contact and allow predictions to be made about the preferential orientations of the NCPs within the condensed phase.
Effective screening leads to the formation of a single aggregate of NCPs in the presence of CoHex3+ (Supplementary Figure S5C), with NCP columns and some cases of 'perpendicular' conformations. The aggregation of the NCPs is dynamic, with frequent dissociation/association of NCPs, and the NCP-NCP distance distribution shows a maximum corresponding to the stacking conformation at about 60 Å (Supplementary Figure S5D).
Next we undertake a more detailed analysis of the NCP-NCP contacts observed in the aggregated NCP-CoHex3+ system and compare the simulation results with the available experimental data. For the CG NCP, the COM, the symmetry axis and the NCP plane of the coordinate system were calculated similarly to these parameters of the NCP atomic crystal structures, as shown in Supplementary Figure S5B.

(Fragments of the captions to Figures 5-7:) For the structure shown 51, the angle between the NCP planes is 85.5°, the NCP-NCP distance is 77.2 Å, the shift is 10.9 Å, and the symmetry axes orientation is 167°. In all graphs, orange and blue planes and normal vectors indicate the orientation of NCP1 and NCP2, respectively; green and cyan arrows on the planes show the positions of the symmetry axes; right-angle triangles are formed by lines connecting the centres of NCP1 and NCP2, the projection of the NCP2 centre on the NCP1 plane, and the NCP-NCP shift. The thin black arrow in the NCP1 plane shows the projection of the NCP2 symmetry axis on the NCP1 plane (see also Supplementary Table S3). Ovals highlight the areas where the data from most of the crystal structures are clustered; points outside the major clusters are labelled by PDB entry codes. Larger green points highlight the tetranucleosome structure 24; the magenta hexagon point in (B) marks one of the two NCP-NCP stacks in the 12-187 nucleosome array reported in the recent cryo-EM study 39; the cyan hexagon point in (B) marks the NCP-NCP stacking in the 12-187 nucleosome array 39; the shaded box illustrates the results for precipitated ordered NCP phases obtained from combined X-ray diffraction and cryo-EM studies 75.

Figure 7 shows the correlations between the NCP-NCP distance and the NCP-NCP shift (Fig. 7A) and between the NCP-NCP distance and the angle between the symmetry axes (Fig. 7B). Supplementary Figure S6 displays the correlation between the NCP-NCP distance and the plane-plane angle. In these figures, the coloured areas depict the density of the conformations observed in the CG simulations; circle points are for the NCP crystals, the hexagon points are for the cryo-EM structure of the nucleosome array 39, and the shaded boxes are for the electron microscopy (EM) and small-angle X-ray scattering (SAXS) data obtained in studies of ordered NCP phases 75.
Generally, for NCP-NCP contact distances below 75 Å, the plane-plane angle is small (Supplementary Figure S6). Three groups of distance-shift correlations can be observed: (i) more than 100 structures have an NCP-NCP distance in the range 65-69 Å and a shift of 35-45 Å (highlighted by an oval in Fig. 7A); (ii) a few crystals show both a shorter NCP-NCP distance (54-62 Å) and a smaller shift (10-22 Å); (iii) for the structures where the NCP was crystallized in complex with other proteins, NCP-NCP stacking is absent, the NCP-NCP distance is larger than 75 Å, and the shift varies considerably.
In the CG simulations, the stacked NCPs are separated by 56-66 Å and shifted relative to each other by 0-15 Å, showing a clearly defined area in the distance-shift correlation graph (Fig. 7A). The NCP-NCP shift is rather small and there are large areas of close DNA-DNA distances inside the NCP-NCP stack. Since both the electrostatic and the short-range force potentials of the CG model are repulsive, we conclude that the charge screening and ion-ion correlation contributions from bound CoHex3+ ions are efficient enough to establish a large area of NCP-NCP contact between nucleosomes. The high efficiency of CoHex3+ in promoting condensation of DNA and chromatin has been well established in a number of experimental studies 74,76 (and references cited therein). Figure 7B shows the correlation between the NCP-NCP distance and the mutual orientation of the symmetry axes of the stacked NCPs (the angle δ in Fig. 4C). In most of the NCP crystals, the symmetry axes of the NCPs in the NCP-NCP stack are oriented in a "head-to-tail" fashion, with the angle being very close to δ = ±180°. One of the structures, with monoubiquitinated histones H2B and H4 77 (pdb 5b40), has δ = 120°. Only four crystal structures with stacked NCPs show a "head-to-head" axis arrangement: the tetranucleosome 24 (δ = −25°), the 197 bp chromatosome 38 (δ = 0°), and the two NCP crystals with the centromere variant of histone H3, CENP-A 59,60 (δ = 0° and −8°; see Supplementary Table S4). The "head-to-head" axis orientation is statistically dominant inside the NCP aggregates in the CG simulations. This orientation is most common in the two-start 30-nm nucleosome fibres, as observed in the tetranucleosome crystal 24 and in the 12-167 array reported in the recent cryo-EM work 39 (hexagon point in Fig. 7B; since atomic coordinates are not available, the NCP-NCP distance and dyad-dyad angle cited in the paper are used). Furthermore, a combination of SAXS and electron microscopy data shows that this co-linear positioning of the dyad axes is typical for ordered precipitated liquid crystalline NCP phases 73,75 (shown as a shaded box in Fig. 7B). From a comparison of the NCP-NCP distance versus axis-angle correlations, it can be concluded that very good agreement is observed between the experimental and the CG modelling data. However, the simulation data show that the axis orientation can vary over a range of possible conformations, with a preference for the head-to-head and head-to-tail orientations. Interestingly, the simulation data display a local maximum for the perpendicular orientation of the axes (δ around ±90°). Below we analyse the contribution of electrostatic interactions and the role of the histone tails in the formation of the NCP and the NCP-NCP stacking.
Analysis of ionic contacts in the NCP and NCP-NCP structures.
The major component of the NCP, the 147 bp dsDNA, carries a charge of −292 e, and electrostatic forces make a major contribution both to the formation of the single NCP and to the NCP-NCP interaction. In order to analyse the histone tail interactions with the DNA and with the amino acids on the surface of the core, the HO amino acids were divided into two groups: the a.a. of the structured core domain and those of the histone tails. The charge content and the charge-charge contacts in the structured part of the NCP stacking were analysed (the assignment of the tails is given in Supplementary Table S1 and in Fig. S7A; all other a.a. belong to the gHO). As the criterion for the formation of an ion-ion contact, a distance of less than 7.5 Å between the P atoms of DNA, the CZ and NZ atoms of Arg and Lys, and the CD and CG atoms of Glu and Asp was defined.
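A contact count of this kind reduces to a distance test between two sets of atomic coordinates. A minimal sketch follows (Python/NumPy; the input arrays are assumed to have been extracted from a PDB file beforehand, and the names are ours for illustration):

import numpy as np

CUTOFF = 7.5  # Angstrom, the contact criterion defined above

def count_ion_contacts(coords_a, coords_b, cutoff=CUTOFF):
    # coords_a, coords_b: (N, 3) and (M, 3) arrays of atom positions,
    # e.g. DNA P atoms vs Lys NZ / Arg CZ atoms
    diff = coords_a[:, None, :] - coords_b[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return int((dist < cutoff).sum())

# e.g. count_ion_contacts(p_atoms, lys_nz_arg_cz) for DNA-Lys/Arg contacts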
Although the net charge of the gHO is positive (+52 e, Supplementary Table S1), the numerous contacts of Lys and Arg with the DNA phosphates (25 and 38, respectively) and with the Asp and Glu carboxylates (18 and 38) lead to practically complete neutralisation of the positive charge of the gHO, so that the surface of the NCP (DNA + gHO) is effectively negatively charged (Supplementary Figure S7B).
The NCP charged surface has two distinct negative patches formed on either side of the NCP cylinder by seven acidic a.a. (Glu56, 61, 64, 91, 92 and Asp90 of H2A, and Glu110 of H2B; Supplementary Figure S7B). This acidic patch is important for the formation of chromatin secondary structures and for the binding and transcriptional regulation of various nuclear proteins 52,78,79. A similar analysis of the ion-ion interactions in the NCP has been carried out 80,81, and our conclusions are in general agreement with these earlier results. The large negative charge on the NCP surface necessitates screening in order to enable stacking between the NCPs, which would otherwise be energetically unfavourable. Clearly, the histone tails, which carry most of the positive charge of the HO, make a significant contribution to this screening. Polyelectrolyte theory predicts that the tails must be electrostatically bound to the DNA at practically all physiologically relevant ionic conditions 74,82. The H2A and H4 N-terminal tails are situated on the top and on the bottom of the NCP flat cylinder, and these tails are particularly important for the formation and stabilization of the NCP-NCP contact since they may be distributed between the two surfaces inside the stack. Notably, the H2A and H4 tails have a similar length and a similar number and distribution of charged Lys and Arg residues (see Supplementary Figure S7C). This similarity indicates that these tails might cooperate and exchange their contributions and positions in the NCP stacking contact.
We combine data from the CG simulations with crystallography data to reveal the contributions of the tails to the stabilization of the NCP-NCP stacking. Figure 8 summarises the results of the CG simulations concerning the participation of the H2A and H4 tails in the stabilization of NCP stacking. Figure 8A and B displays spatial distribution functions (SDFs) of the histone tails around a central NCP and illustrates how the H2A (yellow) and H4 (green) tails cooperatively shield the DNA-DNA repulsion in the stacked NCPs. The SDF gives a three-dimensional picture of the averaged density of the particles of each of the tails around a central NCP and can distinguish the contributions from the tails of the central NCP (internal) from those of the neighbouring NCPs (external). The most common structural element is the columnar NCP stack with four tails (one H2A and one H4 tail from each of the two NCPs) distributed between the surfaces of each NCP pair. The tails do not overlap, and in Fig. 8B it is seen that one external H2A tail contacting the central NCP might occupy two different areas shifted relative to each other by about 90° (indicated in the figure by black arrows). Consequently, the necessity for tail-tail cooperation in screening the NCP negative surfaces gives preference to three distinct NCP-NCP orientations, corresponding to angles between the symmetry axes of 0°, ±180° and ±90°. The head-to-head and head-to-tail positions are observed experimentally in the crystal structure of the tetranucleosome 24, and the H4 tails are located in the same areas as in the CG simulations (the coordinates of the H2A tails were not resolved experimentally).
Our CG simulations are based on a model that adequately describes electrostatic and excluded-volume effects, which we believe are the most important factors defining the NCP-NCP interaction. The good agreement between the modelling data and the available experimental information confirms the validity of this simplified approach. However, the present CG NCP model does not include specific factors such as hydrogen bonding, dipole-dipole and hydrophobic interactions, which undoubtedly make important contributions to NCP-NCP stacking. Specifically, in our simulations, the binding of the H4 R17-R23 basic domain to the H2A/H2B acidic patch and the specific and exceptional influence of H4 K16 [83][84][85][86] on the formation of folded chromatin structures are not adequately captured. It is reasonable to suggest that there exist multiple energetically favourable stacked NCP conformations, and those that include the specific H4 tail-acidic patch binding may only contribute to a subset of all possible NCP-NCP conformations. While the role and contributions of the H4 tails to nucleosome-nucleosome interaction are widely appreciated and studied, similar contributions from the H2A N-termini, revealed in the present CG simulations, have so far not been observed experimentally. Another factor that is important for the NCP-NCP interaction but has not yet received much attention is the ability of the αC helix of the H2B histone, which juts out of the HO surface, to interact with various DNA and histone domains of the neighbouring nucleosomes 87.
In the majority of the crystals, the overlap between the NCP surfaces in the stack is reduced due to the substantial shift (36-44 Å, Fig. 6D). The H2A and H4 tails seem to play an important role in the stabilisation of this conformation. Supplementary Table S5 lists the availability and binding modes of the H2A and H4 tails in the NCP crystal structures. The data show that, when the coordinates of at least some part of the tails are available, the H2A N-termini interact with the DNA of their own NCP and frequently with the DNA of the other NCP in the stack. One of the H4 tails binds both to its own DNA and to the acidic patch of the neighbouring NCP, while the coordinates of the second H4 tail are often unresolved (Supplementary Table S5). This illustrates the dynamic binding of this tail, which can contribute to screening the electrostatic repulsion from the DNA of the other NCPs in the crystal.
In most crystals, visualisation of the NCP1-NCP2 stack shows two symmetrical internucleosome contacts between the H2B αC helix (a.a. 101-122) and the DNA (Supplementary Figure S8). Four lysines and a number of polar amino acids of the H2B αC helix can form ionic and hydrogen bonds with the DNA (as shown in Fig. S8), and this interaction, as well as other possible inter-NCP H2B αC helix binding modes 87, might contribute favourably to the formation of the NCP stack.
Conclusions
In this work a simple and universal coordinate system based on the NCP atomic structure and its cylindrical symmetry was suggested. Our approach enables a numerical description of the relative NCP-NCP positions in condensed NCP or chromatin systems. Using this new NCP-centred coordinate system, a set of parameters was introduced to numerically characterise the mutual positions of the NCPs in the published crystal structures (Fig. 4).
An analysis of the NCP stacking revealed that in most NCP crystals the nucleosomes are in an almost perfect head-to-tail orientation (the angle between the symmetry axes is close to ±180°) and there is a significant shift of the NCPs relative to each other (36-44 Å). However, there are some examples of a head-to-head orientation and a smaller (about 20 Å) shift, notably the tetranucleosome 24 and the nucleosome array 39. The dominance of the "large shift − head-to-tail orientation" structures might be caused by restrictions imposed by crystal packing, particularly the importance of the contacts between DNA ends. However, in nucleosome arrays with a nucleosome repeat length (NRL) of less than 190 bp, folding into the two-start 30-nm fibre should lead to head-to-head stacking. Furthermore, the two-start and interdigitated structures of the 30-nm chromatin fibre also display the head-to-head orientation 26. This might mean that in vivo the head-to-head orientations are common. However, it is also likely that the head-to-tail stacking observed in the crystals might be frequent in vivo, where the melted liquid-like state of chromatin 32 facilitates in-trans interdigitated contacts between the nucleosomes.
The results of the CG MD simulations are in good agreement with the structural data obtained for the NCP crystal structures, the NCP liquid crystalline phases, and the nucleosome arrays. The CG MD simulations give valuable insights into the nature of nucleosome interaction and demonstrate that electrostatic interactions play a decisive role not only in the general phase behaviour of chromatin/nucleosome systems but are also essential for specific structural arrangements inside condensed chromatin. The simulations revealed the novel finding that correlations between the H2A and H4 N-terminal tails are important for shielding the DNA repulsion in the stacked NCPs. The areas sampled by these tails do not overlap, which makes the head-to-head and head-to-tail NCP orientations more populated than the other stacking arrangements.
Analysis of NCP and NCP-NCP stacking structures.
Structures of the nucleosome core particles were downloaded from the Protein Data Bank (PDB; http://www.rcsb.org/pdb/) maintained by the Research Collaboratory for Structural Bioinformatics 88. Since there is no convention for the naming and numbering of the histone proteins and amino acids, or of the DNA strands and nucleotides, every PDB structure was analysed individually using home-written Fortran scripts. The Chimera software 89 was used to build molecular structures and surfaces and to create figures. Structural parameters of the NCP double-stranded (ds) DNA were calculated with Curves+ 90,91. Parameters of the DNA superhelix (radius, pitch, number of DNA base pairs in one turn, dyad axis) were determined using the coordinates of the 129 central dsDNA axis points (of the total 145-147 bp) and fitting them to an ideal superhelix using a modified version of the approach developed by Kahn 92,93. The details of this procedure are given in the Supplementary Material. The choice of this particular number of DNA base pairs for the SH fitting is based on the detailed analysis by Richmond and Davey 56, who excluded a few bp at the entry/exit of the NCP since these dsDNA stretches exhibit reduced bending and detachment from the HO surface. We apply this standard 129-point frame to define the parameters of all reported NCP structures (except one case where fewer than 129 bp coordinates were reported 59). In the Results and Discussion we show that using this 129 bp length does reflect a real division of the nucleosomal DNA into a tightly bound central domain and a looser entry/exit domain.
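The full fitting procedure (the modified Kahn approach cited above) is given in the Supplementary Material; the following simplified sketch (Python/NumPy, our own illustration rather than the published scripts) conveys the idea for axis points already rotated into a frame whose z-axis coincides with the superhelix axis:

import numpy as np

def superhelix_parameters(points):
    # points: (129, 3) array of dsDNA axis points, one per bp, expressed
    # in a frame whose z-axis is the superhelix axis
    radius = np.linalg.norm(points[:, :2], axis=1).mean()
    theta = np.unwrap(np.arctan2(points[:, 1], points[:, 0]))
    # Pitch = rise along z per full 2*pi turn of the helical angle
    slope = np.polyfit(theta, points[:, 2], 1)[0]
    pitch = abs(slope) * 2 * np.pi
    bp_per_turn = (len(points) - 1) * 2 * np.pi / abs(theta[-1] - theta[0])
    return radius, pitch, bp_per_turn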
Structures of the stacked NCPs were obtained by making copies of the single NCP according to its crystal packing and selecting the pair of closest NCPs. In a few cases, when the NCP was crystallized in a complex with another protein, close NCP-NCP stacking was absent or several NCP-NCP arrangements were possible; these cases were also analysed. NCP-NCP parameters were not determined for PDB structures that contain an incomplete NCP 69 or that were obtained by the cryo-EM method 53,54.
All the scripts used for the analysis in the present work are available from the authors to be shared with interested users.
Coarse-grained MD simulations of NCP-NCP interaction.
A detailed description of the CG NCP model, the force field, and the setup of the MD simulations, as well as the methods of analysis of the MD data, is given in our earlier work 71,72 and in the Supplementary Methods.
Briefly, the NCP model comprised 1350 particles and consisted of a CG model of DNA with a resolution of 5 particles per two DNA bp (one central bead for the four nucleosides forming 2 bp and four beads representing the phosphate groups), together with a CG model of the HO core and histone tails with one site per a.a. The DNA, consisting of 74 such units and modelling 148 bp, was wrapped around the histone core. The DNA structure was maintained by harmonic bond and angle potentials; the beads of the gHO were placed according to the 1KX5 crystal structure, and the integrity of the HO was maintained by applying an elastic network scheme 94. The histone tails were modelled as 10 strings of linearly connected beads with length and a.a. sequence according to the 1KX5 structure 4. Electrostatic interactions were treated explicitly by a Coulombic potential, using a dielectric continuum description of the solvent water and assigning unit charges to the phosphate beads of the DNA and to the charged a.a. (Lys, Arg, Glu and Asp). The net charge of the CG NCP was −150 e, with the DNA, gHO and histone tails carrying charges of −296 e, +52 e and +94 e, respectively. This CG model of the NCP and the description of the CoHex3+ ion have been thoroughly validated in our previous work and adequately describe a range of experimental observations and data, both quantitatively and qualitatively 71,72.
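The explicit Coulomb treatment described above amounts to summing unscreened pair interactions over all charged beads in a uniform dielectric. A minimal sketch follows (Python/NumPy; the Coulomb constant and the dielectric value of water are standard choices, while the actual force-field parameters are those given in the cited Methods):

import numpy as np

K_COUL = 1389.35  # e^2/(4*pi*eps0) in kJ/mol * Angstrom

def coulomb_energy(coords, charges, eps_r=78.0):
    # Total electrostatic energy of CG beads carrying unit charges
    # (DNA phosphates, Lys, Arg, Glu, Asp) in continuum water.
    # coords: (N, 3) array in Angstrom; charges: (N,) array in units of e
    energy = 0.0
    for i in range(len(charges) - 1):
        d = np.linalg.norm(coords[i + 1:] - coords[i], axis=1)
        energy += (K_COUL / eps_r) * np.sum(charges[i] * charges[i + 1:] / d)
    return energy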
Radial (RDF) and spatial (SDF) distribution functions were calculated using scripts described in our earlier simulation work 72.

Data availability statement. The datasets (NCP structures) analysed during the current study are available in the RCSB Protein Data Bank repository: http://www.rcsb.org/pdb/home/home.do. The datasets (Langevin MD simulations) generated and analysed during the current study are available from the corresponding author on reasonable request. | 2018-04-03T01:29:12.819Z | 2018-01-24T00:00:00.000 | {
"year": 2018,
"sha1": "06bd3ab5db914eee2c5c861ce81a64c08ac53f0b",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-19875-0.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f749e755670716db30783f735688d6ede3c2de59",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
224947534 | pes2o/s2orc | v3-fos-license | Young Scientist Pioneer 2019: Evaluation of the Effectiveness of Program at SMK Wira Penrissen, Sarawak
Science, technology, engineering and math (STEM) is a new subject module intended to prepare students with the high-level skills demanded by industry, based on the Malaysian Education Blueprint 2013-2025. The purpose of this study is to review and analyse the effectiveness of the program conducted at SMK Wira Penrissen, Kota Samarahan, Sarawak, which was held in July 2019. The aim of the program is to attract students to learn STEM according to the syllabus in a fun way. The items reviewed concerned the overall program, its objectives, its activities, and the facilitators. The program was attended by 100 Form One students, while the facilitators were students from multiple fields of study at Universiti Teknologi Malaysia (UTM). The results show that student satisfaction increased gradually for the best-managed modules, indicating that the proactive implementation of appropriate responses to student feedback on their learning journey is effective in improving both student satisfaction and learning.
INTRODUCTION
Science, Technology, Engineering and Mathematics (STEM) is an educational program developed to prepare primary and secondary students for college and graduate study in the fields of science, technology, engineering and mathematics. During the 21st century, the workforce related to science, technology, engineering, and mathematics (STEM) fields has become increasingly important [1][8][13]. Many countries have integrated STEM education into the school curriculum, providing a meaningful learning environment. However, according to a report in The Star newspaper (17 March 2019), the number of students taking STEM subjects is dropping yearly, with only 44% of students in 2018 compared to 48% in 2012.
Nowadays, STEM education is a matter of concern: as stated by [10], STEM education is a critical tool for improving student learning. In parallel with the fourth industrial revolution (IR 4.0), the government is committed to preparing STEM workers to be ready for the challenges of the global talent pool. Thus, the government has given its support by developing a pedagogical framework focused on the key principles of teaching the subject effectively, starting from secondary school. STEM education has spread widely, especially in Sarawak, where it is fully supported by the state minister. According to Bernama (2017), Abang Johari stated that the setting up of STEM labs is intended to raise awareness of STEM in the rural areas of Sarawak, which would encourage the modernization of farming. STEM needs to be implemented in more schools statewide to promote the "fun" side of the stream and encourage students to take it up.

This program aims to attract and expose secondary students to STEM so that they could choose the pure science stream as their top choice at the upper-secondary level as well as at university level. The STEM program was accordingly based on the Malaysian Education Development Plan 2013-2025 for the provision of highly skilled workers in line with industry demands. Through this program, the committee is eager to work with schools, universities, and government as well as private agencies to tackle the STEM gap independently. While the committee brainstormed to merge resources, ideas and energy, it drove much more sustainable change by developing a systematic approach that created an assortment of initiatives focusing on different STEM aspects and interests. The committee worked to create a joint national strategy for investing public funds in STEM education by increasing public and youth STEM engagement. It is important to improve the STEM experience to reach a wider demographic in STEM fields while proposing better graduate education for the STEM workforce. Therefore, this paper focuses on evaluating the effectiveness of the program at SMK Wira Penrissen, Sarawak. First, previous studies are reviewed to inform the modules, and the results are analysed in the following sections. The evaluation draws on pre- and post-questionnaires based on the prepared STEM modules; interviews and the facilitators' overview were also considered in this study.
Science, Technology, Engineering and Mathematics (STEM)
STEM is a curriculum designed to educate students in four specific disciplines, science, technology, engineering and mathematics, through an applied approach. STEM integrates these subjects into a cohesive learning paradigm based on real-world application [5]. STEM is used to understand students' misconceptions and the best ways to address them. STEM can also work by teaching science through stories, for example from Charlie and the Chocolate Factory to The Gruffalo; such children's stories provide a great context for learning science. It might seem simple, but the assessment of secondary students' perceptions is needed to evaluate STEM achievement in school [9]. Under the Obama administration's 2009 "Educate to Innovate" campaign, efforts were made to motivate and inspire students to excel in STEM subjects while addressing the shortage of adequately skilled teachers in these subjects. The goal was to raise American students from the middle of the pack to the top of the international arena ("Achieving a Sustainable STEM Workforce"). Hence, learning modules need to be designed with wide-ranging flexibility to accommodate different student interests [3]. Meanwhile, in Malaysia, statistics show that only 23% of high school students studied pure science. According to estimates, Malaysia would need to produce at least 5,000 graduates in Science, Technology, Engineering and Mathematics to reach the top 20 countries in the economy, social development and innovation sectors [7]. Therefore, teachers and leaders need to explore programs and resources to support professional development and drive the impact of teaching across schools and colleges [2]. Thus, the Malaysian Education Curriculum needs to be changed towards an interdisciplinary approach, STEM, in order to achieve Malaysian 21st-century skills [12]. Research trends indicate that STEM education is more focused on university graduates, where the emphasis on technology and engineering fields is well established, with less emphasis at the school level [6]. Due to this concern, the government needs to develop an outstanding primary science curriculum. This is because different approaches are needed to identify learners' thinking and clarify learning goals in order to probe students' understanding, both in and between lessons. The curriculum and assessment should not be confined to traditionally taught material, but should relate more to the surrounding natural world. In addition, inspiring groups and communities should be developed to discover projects, activities and challenges that engage young people of all ages with the wonders of STEM subjects and careers. Such groups can share ideas, stories and best practices to attract students' interest in learning STEM, for example STEM clubs, STEM ambassadors, and STEM enrichment. These activities allow students to be involved in exploring STEM subjects in innovative and inventive ways outside the curriculum, and can be a great platform and an enjoyable way to engage with students [11].
Research design and sample
This program employed pre- and post-questionnaires as well as a basic survey on the effectiveness of the program, the modules, and the facilitators. The sample comprised 100 Form One students, all of whom completed the survey form. The students consisted of 57 females and 43 males. Table 1 shows the distribution of the students by gender, while Table 2 shows the list of modules conducted.
Research instrument
The questionnaire is divided into two sections, Section A and Section B. Section A contained items on the students' demographic data, such as gender. Section B contained 18 items assessing STEM-related subjects according to their relevance to each module shown to the students. These items are divided into six (6) modules, with three questions per module. Examples of STEM-related subjects in the secondary school curriculum are Biology, Chemistry, Physics, Science, Mathematics and Additional Mathematics. All the items in the survey form had a four-point Likert-scale response option, namely strongly disagree, disagree, agree and strongly agree. The students' responses to each item received a weighted value from 1 (strongly disagree) to 4 (strongly agree).
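This weighting scheme maps each answer to a score from 1 to 4, from which per-item agreement rates and mean scores follow directly. A minimal sketch (Python; the sample responses at the end are purely illustrative, not the collected data):

WEIGHTS = {"strongly disagree": 1, "disagree": 2, "agree": 3, "strongly agree": 4}

def item_summary(responses):
    # Percentage of 'agree'/'strongly agree' answers and the mean
    # weighted score for one questionnaire item
    scores = [WEIGHTS[r] for r in responses]
    pct_agree = 100.0 * sum(s >= 3 for s in scores) / len(scores)
    return pct_agree, sum(scores) / len(scores)

# Illustrative only: 88 'agree', 11 'strongly agree', 1 'disagree'
print(item_summary(["agree"] * 88 + ["strongly agree"] * 11 + ["disagree"]))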
Algebraic Expression
• Using 'Sticky Tiles' is one of the easy-to-understand and practical learning approaches for the classroom. • Integers and Algebraic Expressions are among the topics that can be combined for learning using this method.
Crystal Snowflakes
• This experiment forms a snow-shaped crystal made from a chenille craft stick. Crystals form over a minimum of 8 hours, up to overnight, and require only a few ingredients. • The main ingredient in this experiment is Borax powder, a compound commonly used in laundry products.
Photosynthesis
• To study the presence of starch in leaves, since sunlight helps plants in the process of photosynthesis and starch production. • Materials: leaves, tissue paper, syrup and methanol.

Friction

• Frictional force is the force produced when two surfaces come into contact with each other; it is the force that opposes, stops, and obstructs the movement of an object. • The friction force also prevents an object from moving and can cause the movement of an object to slow down and eventually stop.
Oobleck
• Oobleck is a mixture that has fluid and solid properties depending on the force applied to it. • If the applied force is high, Oobleck shows solid properties.
Meanwhile, if the applied force is low, it shows liquid characteristics. • Materials: corn flour, water and food colouring.
Plant Reproduction
• The aims are to communicate the structure and function of each part of the flower and to learn about the flowering process, including self-pollination and cross-pollination.
To enable all students to participate fully in the program, six (6) stations were set up and the students were divided into six (6) small groups. The program focused on school-based learning topics such as photosynthesis, friction force, the plant reproduction system and more. Before the students were divided into their groups, they were given timed questions to test their understanding prior to joining the STEM program. Then, at each station, the students were given information about the module involved. The facilitator gave live instruction and let the students engage in a hands-on activity within the module. Before a group departed to the next station, it was again tested with five (5) questions from that station to elicit the students' level of understanding after the module.
RESULTS AND FINDINGS
The effectiveness of the program is shown through the feedback survey forms. The feedback survey form is divided into three important elements: (A) the program's aim and objectives, (B) the program's management, and (C) the students' perspectives on the program's effectiveness. All these criteria are important for assessing the program's effectiveness and for upgrading the activities in the future. Table 3 shows the detailed elements in the feedback form, while the survey results are presented in Figure 2. Table 3. The detailed elements assessed for the program
Code | Assessment
A: Program
A1 | The objective of this program is to provide exposure to Science, Technology, Engineering and Mathematics (STEM).
A2 | The content of the program is in line with school learning.
A3 | The activity of each station is effectively managed.
A4 | Use of effective teaching aids / experimental materials.
A5 | Deliverable and effective facilitator.
B: Management
B1 | The program execution journey went smoothly.
B2 | The time allocated for each station is appropriate.
C: Effectiveness
C1 | My understanding of Science, Technology, Engineering and Mathematics (STEM) has improved since before joining the program.
C2 | After this program I can apply the knowledge learned.
C3 | I can tell about the knowledge learned throughout this program to my family and friends.
C4 | Overall the program was successful and rewarding.
Figure 2. The results from the feedback form
Based on the analysis, between 88% and 100% of students answered either 'Agree' or 'Strongly Agree' for all the elements in the feedback form. The remaining 'Disagree' responses were recorded for A2 (1%), A3 (1%), A4 (1%), A5 (0%), B1 (1%), B2 (12%), C1 (5%), C2 (2%), C3 (6%) and C4 (0%). Even though the majority of students agreed that the modules presented were in line with school learning, there was a 1% 'Disagree' share for element A2. Therefore, some modules will be revised to ensure that they are syllabus-based. In the meantime, 12% of students disagreed with each station's time allocation (element B2). On the event day, each group was given 15 minutes at each station, and the students needed to move to the next station when the time ended. Thus, the number of students in each group and each station's activities have to be considered when allocating time.
CONCLUSION
Throughout this study, it emerged that more research regarding STEM is required to explore approaches that connect employers with educators while bridging the gap between today's learners and tomorrow's careers. The Young Scientist Pioneer Program is a large program that not only involved activities among UTM students but also engagement with another university, schoolchildren and teachers. It has also given UTM students new opportunities not only to increase their knowledge about STEM but also to enhance their communication, leadership and thinking skills. In conclusion, the committee believes that creating impact through collaboration is a challenging task, but it is worth coming together to develop strategies that build, attract and preserve a more diverse and sustainable STEM program. | 2020-10-10T19:38:06.865Z | 2020-09-22T00:00:00.000 | {
"year": 2020,
"sha1": "efd40ba96d4b985f425a9fd718afacd5a65155e5",
"oa_license": "CCBYNC",
"oa_url": "https://www.atlantis-press.com/article/125944679.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "efd40ba96d4b985f425a9fd718afacd5a65155e5",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
54912506 | pes2o/s2orc | v3-fos-license | The Unfavorable Influence of Transport on the Environment
Transport has become an important factor in the development of society, both in a positive sense (transport of people, raw materials, products, and information) and in a negative sense (traffic accidents, emissions). The rapid growth of transport capacity and of the number of passenger and freight vehicles is reflected in an increasing environmental burden. Hence, compounds emitted by all transport modes occur in all environmental components. These compounds often have adverse effects on ecosystems, animals, plants, and human health, and it is therefore important to monitor their occurrence in the environment.
INTRODUCTION
Air pollution is discussed most often in connection with the increasing environmental burden due to transport; however, the pollution of other environmental components, such as surface and ground water, soil, and biota, is central as well. The use of land for transport infrastructure and the fragmentation of the countryside influence the migration of animals and biodiversity, and cannot be overlooked. The production of vehicles, and the considerable quantity of waste produced when they are scrapped, containing a whole range of dangerous substances, also represent a considerable impact on the environment. Whereas the above-mentioned consequences are long-term negative effects, increasing mobility also brings about a rise in the number of cases of acute pollution, mainly during traffic accidents. Such accidents may have far-reaching consequences for the environment, particularly during the transportation of dangerous goods. In this respect, the issue of transport in relation to the environment and human health has become more important in recent times.
AIR POLLUTION
One of the most important transport issues is air pollution by emissions, mainly because of the significant risk they pose to human health. Recently, the share of automobile transport in air pollution has been rising significantly, which is particularly reflected in urban areas with high traffic volume. The source of pollutant emissions from vehicle engines into the open air are the exhaust gases formed during fuel combustion. They are complex mixtures containing hundreds of chemical compounds in various concentrations, contributing to the long-term warming of the atmosphere, the so-called "greenhouse effect", and often having toxic, mutagenic, and carcinogenic properties for humans. The most significant harmful pollutants contaminating the air from traffic can be divided into limited compounds, to which emission limits apply, and unlimited compounds. Carbon monoxide (CO), nitrogen oxides (NOx), non-methane volatile organic compounds (NM VOC), and particulate matter for diesel vehicles (PM) are rated as the limited pollutants. With the exception of PM in new vehicles, they have decreased as a consequence of the stricter limits required by the EURO standards, but considering the increasing volume of traffic, principally freight traffic, the total amount of emissions has nonetheless grown. The graphs depicting the development of limited pollutant production by the individual modes of transport are shown in Figure 1 (Adamec, Dufek, 2002). The unlimited pollutants often have more serious effects on human health, but their production is currently not monitored, due to the lack of information about these compounds and the far higher demands on the measuring technology. Within this group of pollutants we count the compounds contributing to the long-term warming of the atmosphere, i.e. carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O). The other pollutants dangerous to human health, produced mainly during the imperfect combustion of fuel, are polyaromatic hydrocarbons (PAH), phenols, ketones, tar, 1,3-butadiene, and benzene, toluene, and xylenes (BTX). When fuel is burned, polychlorinated dibenzo-p-dioxins/furans (PCDD/F) and polychlorinated biphenyls (PCB) can also be produced if chlorine is present in the combustion system. The highest increase is recorded for the emissions of the greenhouse gases CO2 and N2O, where newer vehicles show higher measured values than older types. The reason, in the case of CO2, is higher fuel consumption as a consequence of the increase in transport performance; in the case of N2O emissions, the increase is caused by the introduction of catalytic converters. Some organic pollutants (PAH) emitted by traffic, which are dangerous mainly due to their toxic, genotoxic, and mutagenic effects, show a similar increasing trend. On the other hand, CH4 emissions are still decreasing because new vehicles have to meet the stricter EURO limits. Pb and SO2 emissions depend on the quality of the burnt fuel, and their production by traffic is virtually negligible thanks to the legislative introduction of unleaded fuels on the market since 2001 and the gradual regulation of the sulphur content in fuels since 2000. As is apparent from Table 1, the share of transport in the total air pollution is 37% for CO, approximately 30% for NOx, and 24% for volatile organic compounds. In comparison to 1993, there is an important increase in the emissions of almost all monitored pollutants, predominantly PM, which is based on the growth of transport performance. In an effort to reduce the dangerous exhaust gases produced by burning fuel, catalytic converters, used for the treatment of automobile exhaust gases, were introduced in the first half of the 1990s. They are devices located in the exhaust pipeline with a metal carrier covered by a catalytic substance on a highly porous layer. This substance speeds up the oxidation of the produced CO and hydrocarbons and the reduction of NOx. So-called three-way controlled catalytic converters are currently used the most; they use platinum metals, namely platinum (Pt), rhodium (Rh), and palladium (Pd), as the catalytically effective substances. Considering the fact that catalytic converters are exposed to high temperature differences, the platinum metals are released into the environment (Farago et al., 1996; Gómez et al., 2001), which may have a negative influence on human health (Barefoot, 1997).
Particulate matter emissions
The above-mentioned issue concerns the pollutants produced by combustion processes, the so-called combustion emissions. However, the release of other pollutants, especially PM, is connected with other processes as well, such as the abrasion of various exposed components (brake and clutch linings), during which copper (Cu), antimony (Sb), barium (Ba), iron (Fe), aluminium (Al), zinc (Zn), molybdenum (Mo), manganese (Mn), magnesium (Mg), cadmium (Cd), and others are released into the air (Lamoree, Turner, 1999). The abrasion of tyres containing various types of rubber is a source of zinc; other metals like calcium (Ca) and iron (Fe), as well as elementary carbon, are released too. A whole range of metals also gets into the environment through mechanical separation from rusting automobile body-shells and street accessories (litter bins, road signs, lighting, crash barriers, etc.) (Janssen et al., 1997). The resuspension of PM deposited on the roadway and in its near surroundings, initiated by passing vehicles or by wind flow, poses a significant burden to air quality (Nicholson, 1988). The dust on the road pavement contains particles of larger fractions, composed both of metals of geological origin from the surrounding soil (Al, Si, Ca, Mg) and of the above-mentioned metals from the operation of automobiles (Janssen et al., 1997; Vallius, 2005). Particles of chemical material (salt) and inert material (sand, gravel, slag) used in road maintenance during the winter period also play an indispensable role, as do dirt falling off vehicles and parts of transported material. In this case we speak of non-combustion emissions. A brief overview of the pollutants produced by traffic, including their potential origin, is given in the following Table 2; a sketch converting the per-kilogram emission factors into per-kilometre values follows the table.
Carbon dioxide (CO2)
Combustion of fuel containing carbon. Petrol passenger cars produce 3 183 g of this pollutant by combusting 1 kg of fuel; the same holds for diesel engines and freight vehicles.
Carbon monoxide (CO)
Combustion of fuel containing carbon with insufficient access of air or at high temperatures. Petrol passenger cars produce 18 to 168 g of this pollutant per kg of fuel, diesel cars 2.5 to 9 g.kg-1 of fuel, and trucks 7 to 221 g.kg-1 of fuel. The value always depends on the EURO limits observed.
Sulfur dioxide (SO2)
Combustion of fuel containing sulphur; however, production is currently minimal thanks to quality fuels.
Nitrogen oxides (NOx)
Combustion of the fuel and air mixture; oxidation of atmospheric nitrogen at high temperatures. Petrol passenger cars produce 1 to 45 g of this pollutant per kg of fuel, diesel cars 4.3 to 18.3 g.kg-1, and trucks 10 to 93.3 g.kg-1 of fuel.
Nitrous oxide (N2O)
Reaction of atmospheric nitrogen with atmospheric hydrogen, mainly in the presence of catalytic converters based on platinum-group metals. Petrol passenger cars produce 0.3 to 1.1 g of this pollutant per kg of fuel, diesel cars 0.1 to 0.3 g.kg-1, and the same holds for trucks.
Ammonia (NH3)
Reaction of atmospheric nitrogen with hydrogen contained in the fuel. Petrol passenger cars produce up to 1.4 g of this pollutant per kg of fuel; diesel cars and trucks then produce approximately hundredths of g.kg-1 of fuel.
Ozone (O3)
Secondary chain radical reactions in the ground-level layers of the atmosphere from molecular oxygen in the presence of exhaust gas components, nitrogen oxides, and volatile hydrocarbons under the influence of solar radiation.
Lead (Pb)
In the past, mainly the combustion of leaded petrol, in which it was present as tetraethyl lead. Anti-knock agents based on it have not been used since 2001. Its current sources are tyre balancing weights, greases, oils, and particles produced by the wearing out of bearings.
Cadmium (Cd)
Wear of various car components.
Nickel (Ni)
Abrasion of brake pads and various stressed joints.
Chromium (Cr)
Mechanical separation from rotating engine parts and brake pads.
Platinum metals (platinum Pt, rhodium Rh, palladium Pd)
Released from car catalytic converters.
Polycyclic aromatic hydrocarbons (PAH)
Imperfect fuel combustion or abrasion of the road pavement surface. Diesel cars and trucks produce hundredths of grams of this pollutant group by combusting 1 kg of fuel; in the case of petrol, approximately thousandths of g.kg-1 of fuel.
Methane (CH4)
Imperfect fuel combustion. Petrol passenger cars produce 0.1 to 0.9 g of this pollutant per kg of fuel, diesel cars hundredths of grams, and trucks 0.1 to 0.6 g.kg-1 of fuel.
Volatile organic compounds (NM VOC)
Combustion of fuels and evaporation from cars. Petrol passenger cars produce 1.3 to 40 g of this pollutant per kg of fuel, diesel cars 0.6 to 2.3 g.kg-1, and trucks 3 to 42 g.kg-1 of fuel.
Benzene (C6H6)
Fuel combustion and evaporation during handling, distribution, and storage. In Europe it is present in automobile petrol at a share of around 5%, sometimes even more than 10%.
Toluene (C6H5-CH3)
Combustion of fuels containing mixtures with benzene and xylene, used as an additive to increase the octane rating of petrol.
Particulate matter (PM)
PM 2.5-10 (large fraction): predominantly dust swirled up from road pavements, tyre abrasion, and combustion processes; it stays in the close proximity of the source. PM 2.5 (fine fraction): formed as a consequence of chemical reactions during fuel combustion. PM 0.02 (ultra-fine fraction): from gas emissions during combustion processes; it can be transported by air even over long distances. PM 0.01 (nano-particles): fuel combustion, mainly in petrol engines. Diesel passenger cars produce 0.3 to 4.8 g of this pollutant per kg of fuel, freight vehicles from 0. up to 6.3 g.kg-1 of fuel, depending on the EURO limit observed.
Polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/F)
Oxidation of carbon particles in burnt gases at temperatures of 250-350 °C in the presence of hydrogen, water vapour, and chlorine.
Polychlorinated biphenyls (PCB)
Synthesis from particles of carbon and hydrogen with water vapour in the presence of chlorine.
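The emission factors in Table 2 are quoted per kilogram of fuel burnt; to relate them to driven distance they can be multiplied by the vehicle's fuel consumption. Below is a minimal sketch of that conversion in Python. The consumption figure (about 0.055 kg of fuel per km, roughly 7.3 L/100 km for a petrol car) and the mid-range factor values are illustrative assumptions, not data from the article.

```python
# Convert emission factors from g per kg of fuel to g per km driven.
# All numbers below are illustrative assumptions, not values from the article.

FUEL_USE_KG_PER_KM = 0.055  # assumed petrol car consumption (~7.3 L/100 km)

# Assumed mid-range emission factors in g per kg of fuel for a petrol
# passenger car, picked from within the ranges quoted in the text.
EF_G_PER_KG_FUEL = {
    "CO2": 3183.0,   # text: 3 183 g/kg
    "CO": 90.0,      # text: 18-168 g/kg
    "NOx": 20.0,     # text: 1-45 g/kg
    "NM VOC": 20.0,  # text: 1.3-40 g/kg
}

for pollutant, ef in EF_G_PER_KG_FUEL.items():
    print(f"{pollutant}: {ef * FUEL_USE_KG_PER_KM:.2f} g/km")
```

For CO2 this gives roughly 175 g/km, a plausible order of magnitude for a petrol passenger car of that period.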
Particulate matter (PM) has come into the spotlight lately due to its negative influence on human health and its increasing content in the air, mainly in urban areas. Therefore, greater attention is paid to PM in this chapter.
PM comprises particles of solid and liquid material, from 1 nm up to 100 µm in size, that stay in the air for a certain period of time. They appear in the atmosphere as a complicated heterogeneous mixture in terms of particle size and chemical composition. PM is characterized by specific physical properties (form, size, electric charge, particle surface, and solubility) and chemical properties (inorganic and organic components), which depend on the source, the mechanism of origin, and other conditions influencing their occurrence (distance from sources, meteorological conditions). Among the physical properties, the representation of the individual size fractions, comprising ultra-fine, fine, and large fractions, is principally critical for the emitted particles. Of the total quantity of total suspended particulate matter (TSP) in the air, 60-65% is formed by the PM10 fraction, i.e. particles with a diameter below 10 µm. Particles with an aerodynamic diameter below 2.5 µm (the PM2.5 fraction) form 72% of the PM10 fraction, and the smaller particles of the PM1.0 fraction (aerodynamic diameter under 1 µm) make up 52% (Harrison et al., 2003). Figure 2 shows photos of PM taken with a scanning electron microscope (SEM).
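The nested fraction shares quoted above can be chained to express each finer fraction relative to total suspended particulates. Below is a minimal sketch of that arithmetic; the 62% value is an assumed midpoint of the 60-65% range given in the text.

```python
# Chain the size-fraction shares reported above (Harrison et al., 2003)
# to express each fraction relative to total suspended particulates (TSP).

PM10_OF_TSP = 0.62   # assumed midpoint of the 60-65 % range
PM25_OF_PM10 = 0.72  # 72 % of PM10 is PM2.5
PM1_OF_PM10 = 0.52   # 52 % of PM10 is PM1.0

print(f"PM10  share of TSP: {PM10_OF_TSP:.0%}")                 # ~62 %
print(f"PM2.5 share of TSP: {PM10_OF_TSP * PM25_OF_PM10:.0%}")  # ~45 %
print(f"PM1.0 share of TSP: {PM10_OF_TSP * PM1_OF_PM10:.0%}")   # ~32 %
```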
The possible effects on human health, and the health risks they may pose to the exposed population, are connected with the size of the particles and their composition (Weijer et al., 2001). The dangers of PM lie not only in their mechanical properties but mainly in their hazardous organic content (particularly PAH) and in a whole range of inorganic pollutants, such as molybdenum, copper, nickel, cadmium, and platinum (Adamec et al., 2004; Krzyzanowsky et al., 2005; Adamec et al., 2006). Apart from the particles emitted directly by primary sources (traffic, industry), the resuspension of deposited solid particles contributes to the total particle content in the air in a very significant way. This process significantly increases their content in the air and, according to various studies, could be the source of up to 60% of the particles of the PM10 fraction (Braaten et al., 1990). The blowing up of sedimented particles by passing cars depends on specific local conditions: road surface quality, vehicle speed, vehicle weight, and air humidity. The danger of the resuspended particles lies mainly in the sorption of other pollutants onto their surface during their longer stay on the road or in its surroundings.
PM can be removed from the air by wet deposition during rain, when mainly large particles with an aerodynamic diameter over 5 µm are "washed out", or by a change of air masses, when "clean" air comes to the given area. Another mechanism eliminating particles from the air in urban areas is deposition, when particles leave the air as a consequence of contact with a solid or liquid surface. The presence of vegetation, which greatly enhances the circulation of air in the lowest layer above the terrain surface, supports this deposition considerably. In densely built-up urban areas, grassed areas, often the only alternative for permanent particle deposition in these places, play an important role in particle capture. We can generally conclude that particles of smaller dimensions (under 1 µm) are captured more easily, whereas particles larger than 5 µm tend to be reflected by the surface.
Research in the field of nano-particles, i.e. particles with dimensions under 100 nm and special properties such as bio-persistence, adsorption, or diffusion, is currently developing very quickly and is the subject matter of a range of research institutes worldwide. Nano-particles are produced both by natural processes (erosion) and by anthropogenic activities (fossil fuel burning, mining, production of nano-materials, etc.), and they are released into the environment. They are then exposed to various biological, chemical, and physical changes and come into contact with living organisms (Dreher et al., 2003). Currently, the concentrations of ultra-fine particles in the air of large urban areas are monitored and analyzed at only some stations in Europe, due to the very expensive equipment. The evaluation of the level of air pollution is based on monitoring the concentrations of polluting compounds in the ground layer of the atmosphere in a network of measuring stations.
The evaluation of air pollution is based on relating the measured imission values to the corresponding imission limits and target imission limits. In 2006, government resolution No. 597/2006 Sb. came into force, following the Clean Air Act (Act No. 86/2002 Sb.), and determines the imission limits in accordance with the requirements of the EU directives on environmental quality. Regular measurements of particulate matter (mainly the PM10 fraction) take place in the Czech Republic within this legislative framework; in 2006 the measurements were executed in 148 localities. These places represent various settings in terms of landscape morphology, the presence of residential areas, industrial enterprises, etc. The background localities, which are the most numerous, are located mostly in the residential zones of cities and in rural and natural areas. Industry is considered to be the dominant source of particulate matter in 12 localities. Thirty-three localities are directly influenced by traffic; in these cases, the measuring devices are close to roads and junctions with high traffic volumes. The primary sources of particulate matter in these places are processes related to traffic, mainly fossil fuel combustion in car engines and the mechanical production of particles by the abrasion of tyres, brake pads, and the tarmac surface of roads. The monitoring network stations are operated by the Czech Hydrometeorological Institute, the Health Institutes of the given areas, or the municipal authorities of the urban areas where the measurements take place. The measured data are then freely available on the websites of the Czech Hydrometeorological Institute (ČHMÚ).
As is clear from certain long-term measurements (Adamec et al., 2005; Adamec et al., 2007) of the concentrations of the PM2.5 fine fraction, the determined annual average concentrations (38.07 µg.m-3 and 32.49 µg.m-3, respectively) are significantly higher than the limit concentration for this pollutant (25 µg.m-3) proposed in the prepared EU directive for air quality. The results also show a significant seasonal dependence of the PM2.5 concentrations (Figure 3): the highest concentrations were determined at the lowest measured temperatures, and vice versa. This difference could be caused by the loss of volatile components of PM (e.g. ammonium salts) in the summer months; in periods of low temperatures these components coagulate and can be caught by filters. However, the above-described trend could also be connected with other aspects, mainly vertical atmospheric stability. Thanks to better ventilation, the particles are better dispersed during hot periods (convection), whereas during colder months (inversion) the ventilation is limited, so PM "accumulates" in the lower layers of the atmosphere near the place of origin. Some local sources, such as fireplaces and other sources of heating, could also contribute significantly to the presence of particles in winter.
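The comparison of the measured annual means with the proposed limit can be written out directly. Below is a minimal sketch using the two annual averages quoted above; the locality labels are placeholders, since the article does not name the measuring sites here.

```python
# Compare measured annual mean PM2.5 concentrations (ug/m3) with the
# 25 ug/m3 limit proposed in the prepared EU air-quality directive.

PROPOSED_LIMIT_UG_M3 = 25.0
annual_means = {"locality A": 38.07, "locality B": 32.49}  # placeholder labels

for site, mean in annual_means.items():
    excess_pct = 100.0 * (mean - PROPOSED_LIMIT_UG_M3) / PROPOSED_LIMIT_UG_M3
    print(f"{site}: {mean:.2f} ug/m3 ({excess_pct:.0f} % above the limit)")
```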
The knowledge of the distribution of the individual size fractions of PM, i.e. the representation of particles in the size ranges 2.5 to 10 µm, 1 to 2.5 µm, and 0 to 1 µm within the total content of particles smaller than 10 µm, is very important for understanding PM behaviour and, mainly, for estimating the health risks. The shares of the individual fractions in PM10 during the summer period and during autumn, at the turn of November and December, are shown in Figure 4. The share of the finer PM2.5 fraction in the total concentration of PM10 is higher during cold periods, when these particles make up 92.9% of PM10, whereas during hot periods of the year this share was only 59% (Adamec et al., 2007). The annual average concentrations of PM10 (50.24 µg.m-3 and 35.56 µg.m-3, respectively) determined within the long-term measurements are in accordance with similar towns and cities in Europe, where the concentrations reach values between 44.4 µg.m-3 and 53.8 µg.m-3 (Martuzzi et al., 2002). The concentrations of PAH and of the chosen risk elements linked to the particle surface, namely cadmium (Cd), arsenic (As), chromium (Cr), nickel (Ni), Mo, and Pb, correspond to values determined in environments with medium traffic volume or industry in a number of European cities, such as Manchester and London (Harrison et al., 1996), Helsinki (Yli-Tuomi et al., 2005), and Budapest (Salma, Maenhaut, 2006). As has already been mentioned, the evaluation of air pollution monitors the relationship between air pollution and the relevant limit values (Table 3), which are determined in connection with the Clean Air Act in the Czech Republic (Act No. 86/2002 Sb.) by government resolution No. 597/2006 Sb., as amended. The pollution limits are in accordance with the requirements of the EU directives on ambient air quality valid in all EU member states (Act No. 86/2002 Sb.).
The overall air pollution is evaluated by emission balances, which compare the production of selected harmful compounds from all sources. The basis for the national emission balance is the Register of Emissions and Air Polluters (REZZO), which has been methodically conducted and operated by the Czech Hydrometeorological Institute (ČHMÚ) since 2003. The pollution sources are recorded in four categories in the REZZO database: large, medium, small, and mobile. The emissions from traffic, together with emissions from agriculture, forestry, the civil engineering industry, and the military, form part of the mobile sources balance. These emissions are calculated as the product of the so-called active data and emission factors. The active data are expressed as the consumption of fuel per kilometre for a given category of means of transport. The emission factor is the quantity of a given pollutant emitted per weight unit of a given fuel or per unit of driven distance. The CDV methodology (Dufek et al., 2001), based on calculating emissions from fuel consumption, is used for the determination of emissions at the national and regional levels. The program MEFA, which calculates the emission factors per 1 km of driven distance (Šebor et al., 2002), is used at the local level.
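The balance calculation described here, i.e. emissions as the product of active data and emission factors, can be sketched in a few lines. This is a minimal illustration of the general formula only; the numbers are placeholders, and the functions are not the actual CDV or MEFA implementations.

```python
# Emission balance: E = activity data x emission factor.

def emissions_from_fuel(fuel_kg: float, ef_g_per_kg: float) -> float:
    """National/regional (CDV-style): activity = fuel consumed (kg)."""
    return fuel_kg * ef_g_per_kg

def emissions_from_distance(km: float, ef_g_per_km: float) -> float:
    """Local (MEFA-style): activity = distance driven (km)."""
    return km * ef_g_per_km

# Placeholder inputs purely for illustration:
print(emissions_from_fuel(fuel_kg=1000.0, ef_g_per_kg=3183.0))  # CO2, grams
print(emissions_from_distance(km=15000.0, ef_g_per_km=0.08))    # NOx, grams
```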
Table 3: Valid imission limits for the individual pollutants.
Table 4 gives an overview of the emission factors of selected pollutants for petrol and diesel passenger cars meeting the EURO 3 limits, e.g. the Škoda Fabia, produced between 2001 and 2005. The emission factors for heavy vehicles of the same production period are also given for comparison. The PM emission factors for petrol passenger cars are not determined, considering the fact that these types of engines produce very fine, or even ultra-fine, particles, whose production is not currently monitored due to the high costs of the measuring equipment.
WATER POLLUTION
Surface and ground water form an important component of the environment and are one of the fundamental raw material sources necessary for maintaining life on Earth. However, human influence gradually reduces its quality, and various modes of transport are among the negative factors affecting it. The seas and oceans may be contaminated by ships, mainly as a consequence of accidents of huge tankers, when a considerable amount of oil leaks into the water, bringing extensive pollution with serious environmental effects. The cases of the tankers Exxon Valdez (1989) and Prestige (2002) are among the most infamous accidents. A source of long-term pollution may also be large ports, through the handling of transported material or during vessel repairs.
In connection with the pollution of both surface water and groundwater, the issue of rail transport also comes into consideration. The pollution sources are, in this case, rail transport itself, power supply and switching stations, places for the washing of rail cars, fuel stations, and, in the case of diesel traction, the railway vehicles themselves and their accidents on the rail tracks. Pollution by road transport can have a random character in the form of car accidents with a leakage of fuel, motor oils, operating liquids, and other pollutants, but it can also arise through the long-term impact of exhaust gases, the abrasion of tyres and the road pavement surface, and fuel dripping. The indicators of the maximum permissible level of surface and ground water pollution are evaluated according to government resolution No. 61/2003 Sb., as amended, and the guidelines of the Ministry of the Environment of the Czech Republic (Appendix of the Bulletin of the Ministry of the Environment, No. 8, year 6, 1996).
Surface water
The pollution of surface water is caused by rain water drained from road surfaces with a high traffic volume, mainly motorways and dual carriageways. The pollution is highly dependent on the amount of rain falling on the motorway surface: the concentrations of pollutants are highest in the "first drain" after rainfall and rapidly decrease over time, and the pollution is reflected directly behind the drain, where the drained water is not yet sufficiently diluted. A whole range of pollutants, including metal elements and suspended solid particles produced by the operation of traffic, mainly through the abrasion of road pavement surfaces and tyres, has been identified in surface drainage outlets (Sansalone, Buchberger, 1995). The pollutants can also come from the materials used for the maintenance of roads, lay-bys, and car parks, mainly during winter, when the contamination can be connected with the application of defrosting agents and antifreeze mixtures. A key source of pollutants are also leaks and spills of fuel, when, besides a whole range of organic pollutants like PAH, hydrocarbons and metals are released into the environment (Shinya et al., 2000). Another important risk of possible environmental contamination is represented by filling stations, in whose proximity and on the adjacent car parks the highest concentrations of PAH were found (Smith et al., 2000). According to a range of studies, motorways are also the main source of chlorides, which do not drain away through the watercourse but mainly soak into the soil and rock environment where, under certain conditions, they accumulate and are subsequently gradually washed out (Runge et al., 1989). In order to monitor the pollution of the drained water, a method of passive sampling with the help of semi-permeable membrane devices (SPMD) is often used. This method simulates the diffusion process via bio-membranes, which is considered to be crucial in the bioconcentration of pollutants in living organisms. This method was also used for monitoring the pollution level of the water drained from motorway D5 into retention tanks. The graph in Figure 5 clearly shows the growth of the PAH concentration within the monitored period (Adamec et al., 2005); nevertheless, the determined contents are lower than the figures recorded in some foreign studies, and so far they do not exceed the limit concentrations set by government Directive No. 61/2003 (NV, 2003), as amended.
Ground water
A similar, although less urgent, problem is the contamination of ground water by pollutants from materials used for road building. In the past, a whole range of materials which met the technical construction requirements of that time was used during construction; however, as a consequence of currently stricter measures (limits), they could now have a negative influence on environmental components and human health (Legret et al., 2005). Especially the water migrating in the roadway, drained by drainage systems, could be contaminated by pollutants released from the construction materials. This process, however, depends on many factors, like the subgrade type and the characteristics of the surrounding terrain, ground water management, the capillary rise of the ground water level, and the amount of precipitation which can penetrate the roadway, mainly as a consequence of damage to its surface layer (cracks, fissures). In the Czech Republic, the construction of asphalt roads uses unmodified and modified road asphalts made of raw materials which influence their final properties, i.e. they also influence the release of hazardous substances into the environment. These substances can form a basic part of the produced binder (e.g. PAHs, hydrocarbons) or can be added to modify the asphalt and obtain better properties, e.g. the reduction of plastic deformations. For these purposes, e.g. elementary sulphur, polyethylene, polypropylene, powdered polyvinyl chloride, and others are used. The leaching of some PAHs and metals from sample asphalt surfaces and other construction materials is shown in some foreign studies, and their results correlate with the results of Czech research (Ličbinský et al., 2007). The release of pollutants from the used materials can currently be considered low, depending on the pollutant properties and on the primary content of organic and inorganic pollutants in the given material. Lately, due to the increasing traffic volume, the major source of pollution has been traffic accidents with a leakage of fuel, motor oils, operating liquids, and transported dangerous goods, such as acids, lyes, and other chemicals.
SOIL POLLUTION
As in the case of water pollution, the threat to soil quality in the surroundings of roads comes basically from three sources: long-term pollution caused by everyday road traffic, seasonal pollution, mainly through the influence of winter road maintenance, and traffic accidents, during which environmentally harmful materials can leak.
Long-term soil contamination in the surroundings of roads is connected mainly with pollutant drainage from the road surface and with water splashed to the surroundings by passing cars. Thus, the soil can be contaminated by PAH and its derivatives, particularly nitrated ones (nitro-PAH), by hydrocarbons, and by some metals as well. Contamination may also occur through the use of abrasive winter maintenance materials and chemical defrosting materials. For this purpose sodium chloride, calcium chloride, and their mixtures are used, in the form of spreading, sprays, or dampened salt (sprinkling salt); however, during their application it is not possible to prevent dispersion around the road, and thus contamination with chlorides. Subsequently, their presence causes the corrosion of metal elements of the road equipment and the increased release of pollutants from their protective coating, which can lead to subsequent pollution by heavy metals. The issue of soil contamination is also closely related to pollutants leaching from roadways: under the influence of water soaking into the road body, the pollutants permeate and are subsequently transported into the environment. Soil contamination is particularly problematic in large urban areas with a high volume of car transport. Metals from the platinum group (PGE), like platinum (Pt), palladium (Pd), and rhodium (Rh), are classified as toxic metals, and their increasing concentrations could nowadays represent a serious risk. The concentration of Pt emitted from car catalytic converters at sites with high traffic volumes in the Czech Republic ranges from 9.20 to 21.57 µg.kg-1 and is comparable with the levels in other European cities (Zereini et al., 1997). At these sites, higher contents of PAH were found, even higher than in the close proximity of motorways (Tuháčková et al., 2001). The negative impact of traffic on soil is reflected not only in the chemical pollution of the surroundings of the road infrastructure, but also in the construction of roads and the whole transport network. The land designated for construction loses its original function; it is converted into construction areas, which, from the viewpoint of nature, results in its degradation. The arable and forest land use figures clearly show (Table 5) that in 2004 almost 988 hectares were designated for road construction, which represented almost a fifth of the area of the capital city, Prague.
The alignment of important roads (nowadays mainly motorways) is considerably conditioned by terrain obstacles. It is easier to build such motorways mainly in the lowlands and in valleys along important rivers; however, these places often contain soil that is highly valuable for agriculture. Motorways are commonly built with four lanes and a central reserve, so the motorway itself and its adjacent strip require the use of approximately 3 hectares of land per 1 km of motorway length. Apart from the motorway itself, other accompanying constructions are also included in the land use: ditches and embankments compensating the road alignment, grade-separated junctions, petrol stations and other commercial activities linked to providing services to passengers, as well as constructions used to mitigate the negative impacts of traffic, like anti-noise screens or retention tanks for drained water.
FLORA AND FAUNA
Currently, due to transport development and increasing building activity, a debated topic is the impact on biological diversity, i.e. on the number of flora and fauna species. Biodiversity is threatened not only by the reduced size of ecosystems or by the hunting of endangered animal species, but also by the fragmentation of localities.
This fragmentation is understood as the division of natural localities into smaller and more isolated units, which threatens the survival of some species. Apart from agriculture and urbanization, one of the main reasons for locality fragmentation is the construction and use of transport infrastructure. The transport network divides natural localities into smaller, isolated segments, which are often smaller than some species need for their survival. The road then acts as a physical barrier for animals and is particularly restrictive for those species that need large areas to live. This concerns not only smaller animals, like amphibians, reptiles, and small mammals, but also larger ones, like deer or wild boar, which are very often hit by vehicles. The pollution of the road environment by chemical pollutants also influences the abundance and species diversity of the soil micro-flora and fauna (Tuháčková et al., 2001).
LANDSCAPE PATTERN
Fast, high-quality transport shortens the "perceived distance", because destinations are much easier to reach than years ago; therefore many human activities which were previously concentrated in the cities become deconcentrated. Currently, the most typical demonstration is the process of suburbanization, characterized by spatial urban growth on the outskirts, which is enabled by easily available individual car transport and which sometimes even develops into its uncontrolled form called "urban sprawl". This process is currently not occurring in a coordinated manner, because the role of land use planning is not powerful enough to ensure positive city development in terms of traffic and land use. An everyday problem of urban areas is the large number of cars, which the current city road network is no longer able to accommodate. In huge cities, residential areas and extensive commercial activities visibly move from the centres to the outskirts without appropriate connections to other urban areas. This causes increased demands on traffic, particularly individual road traffic, because these zones are usually designed to be reached predominantly by personal vehicles, and public transport is usually not provided for these areas at all or only to a limited extent.
The landscape is very negatively influenced by media advertisements, i.e. billboards, placed in the proximity of roads, particularly along the busiest motorway sections, where the traffic flow exceeds 15 000 vehicles per 24 hours. The advertising messages are seen daily by tens of thousands of people, and these places are therefore very attractive for the advertising industry. Legally constructed billboards, for which their owners signed proper contracts, have passed through the approval procedure of the administrative bodies, and they are therefore usually not located in inappropriate places. Act No. 114/1992 Sb., on environment and landscape protection, states that it is not possible to place an advertising facility in places where it could have an impact on important landscape features or protected areas.
As a regulation complementing this Act, the Agency for Nature Conservation and Landscape Protection (AOPK) produced the "Methodology for the Evaluation of the Scenic Landscape" in 1999, which further specifies terms such as the natural, cultural, and historical characteristics of a landscape, the aesthetic value of the scenic landscape, the natural value of a landscape, the cultural dominant features of a landscape, etc.
However, besides legal advertising areas, there are a large number of advertising facilities placed without the proper permits, which are very often located in places unsuitable for advertisements. The Road and Motorway Directorate of the Czech Republic, as the road administrator, has recently made an effort to deal with this problem and to reduce the number of billboards in the road environment by up to 80%. However, the owners of mega-boards visible from a long distance are not too threatened by this effort, because their facilities are often located outside the designated safety zone of roads, often on private land. The assessment of the disturbance of landscape character has the big disadvantage of relying on subjective assessments of specific situations. Therefore, there are unified rules for determining the places where an advertising facility considerably affects the landscape and where it should not be placed. An example of how a huge advertising panel can aesthetically violate the landscape is shown in a photo from the D1 motorway (see Figure 9 in the Appendix), which is, for comparison, supplemented with a computer photo-montage from which the advertising panel was removed.
ACCIDENTS
Dangerous chemical substances and chemical products (toxic, flammable, explosive), which can have a negative impact on human health and the environment, are handled in everyday activities, including industry and trade, and during their transport. Due to the increasing traffic volume, there is more and more contamination of surface and ground water and of the rock environment through vehicle accidents on roads. Their consequences are the leakage of dangerous materials, mainly fuel, motor oils, and operating liquids, and also of transported dangerous items, e.g. sulphuric acid. Currently, the majority of earlier regulations concerning this issue have been cancelled and replaced by new regulations or have been amended. Act No. 59/2006 Sb., on the prevention of serious accidents, which reflects the development of EU legislation, is currently mandatory; when it became effective, the earlier legislative regulations were superseded.
The transport of chemical, toxic, flammable, and explosive materials requires great attention, considering the risks of traffic accidents and the subsequent leakage of these materials during transport. All hazardous substances have their specific properties and, consequently, acquire different degrees of danger under different conditions, which is particularly important during their transportation and handling. The international road transport of hazardous goods complies with the European Agreement concerning the International Carriage of Dangerous Goods by Road (ADR), to which the Czech Republic is also a party. For rail, the Regulation concerning the International Carriage of Dangerous Goods by Rail (RID) is valid as a supplement to the uniform legal regulations of the Agreement concerning the Contract for the International Carriage of Goods by Rail (CIM). Air transport of dangerous cargo follows regulations issued by the ICAO (International Civil Aviation Organization) and the regulations of the IATA (International Air Transport Association). The maritime transport of dangerous goods follows the International Maritime Dangerous Goods Code (IMDG Code).
Traffic accidents with the leakage of oil products and dangerous substances have the highest share in the creation of environmental accidents and crashes in transport. The graph in Figure 6 shows the number of leakages of dangerous substances, including oil products, with the intervention of fire brigade units between 2001 and 2005. The graph clearly shows a steady trend in the leakage of dangerous chemical materials and oil products.
Each accident which causes the deterioration of environmental conditions needs to be assessed as an environmental accident which could cause the instability of the ecosystem (Kvarčák et al., 2000). The principles determined in the above-mentioned international regulations are the first step towards reducing the number of collisions of vehicles transporting hazardous substances. But despite the observance of the regulations, there is still a risk of vehicle accidents, which can be increased, e.g. by aggravated climatic conditions, an increase in road traffic volume, and the consequent congestion. In order to avoid traffic accidents and subsequent environmental accidents, it is necessary to search for other ways to maintain safe traffic, which will lead to the minimization of environmental accidents.
WASTE FROM TRANSPORT
The production of a huge amount of waste in the form of car wrecks is becoming one of the priority issues of every developed society. Car wrecks consist of up to 80% recyclable materials usable as secondary raw materials, e.g. metals or plastics. However, dangerous types of waste, such as lead accumulators, oil filters, brake and antifreeze fluids, components containing mercury or PCB, or brake pads containing asbestos, can have a negative effect on the environment through improper handling or leakage. In the Czech Republic, this issue is dealt with by Act No. 185/2001 Sb., which defines a car wreck as any complete or incomplete motor vehicle intended for use on roads to transport people, animals, or goods which has become waste. Everyone who disposes of such waste is obliged to hand it over only to persons who operate facilities for the re-use, removal, collection, and repurchase of car wrecks. The fundamental document regulating the handling of this waste in the EU is Directive 2000/53/EC of the European Parliament and of the Council on end-of-life vehicles, together with its appendices issued in the form of Decisions of the Commission of the European Communities, which are integrated into Czech legislation in the form of the Waste Act (No. 185/2001 Sb.), as amended by Implementation Programme of the Czech Republic No. 4. The goal of all legal measures is to complete and improve the existing system of handling car wrecks, which can be an important source of secondary raw materials and energy.
The handling of car wrecks concerns several target groups at various legislative levels: the public (as the origin of the waste), local authorities, regional authorities (as the authority which issues licenses to facility operators for handling this waste), and business entities (the authorized operators). The system of deregistration and environmental handling of scrapped passenger cars consists of the following steps. The principle is to hand the vehicle over to the collection network; then the gradual disassembly of the car wreck is executed so that the individual parts, which are of higher value individually, can be obtained separately. Apart from this, it is possible to separate other waste containing hazardous substances, and thus reduce the total amount of dangerous waste. Crushing the vehicle body itself allows pure steel scrap to be gained; this step can also be replaced by cutting and pressing, which are less demanding in terms of operation and investment costs. Consistent sorting is required for a higher level of metallurgical processing of non-ferrous metal components. Approximately 160 thousand cars are decommissioned from operation yearly in the Czech Republic, and up to 9 million vehicles in the whole EU (Božek et al., 2003; Šooš, 2006). Their material structure depends on a range of properties, such as the size and type of the vehicle, the vehicle producer, the model year, the age of the vehicle, or the efficiency of the processing and sorting technologies. In terms of quantity and recycling, ferrous metals (steel and cast iron) are the most important components, forming around 62-68%; non-ferrous metals (for example aluminium, magnesium, copper) and their alloys make up 3.5-6%, and other components, such as paints, leather, wood, and paperboard, make up 5-15% of the car wreck weight. The changes in the proportions of individual components in the material structure are given by the technological development of the vehicles, where the share of plastics and of the so-called light metals is currently increasing. An overview of the average material weight composition is shown in the following Figure 7. In the EU countries, the total expenses for car disassembly are between € 150 and 450 per vehicle, and, with the use of waste crushers, which are mostly used in Germany, France, and Great Britain, between € 50 and 70 per vehicle. Currently, there are approximately 80 collection points and 8 facilities for car wreck processing (disassembly devices and crushers) in the Czech Republic. For the environmental processing of passenger cars with an expired lifespan, a fee of CZK 1200 is charged. The costs of environmental car wreck processing are around CZK 3000, of which approximately one third is used for transport and handling; the other costs include the car wreck processing and the removal of the remaining parts, which include hazardous waste (Sýkora, 2005).
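The material shares and the cost split quoted above can be turned into a rough per-vehicle estimate. Below is a minimal sketch; the 1 000 kg wreck mass and the midpoints of the quoted ranges are assumptions for illustration.

```python
# Rough material breakdown of an average car wreck, using the ranges
# quoted in the text. The vehicle mass and range midpoints are assumed.

VEHICLE_MASS_KG = 1000.0  # assumed average wreck mass

shares = {  # assumed midpoints of the quoted ranges
    "ferrous metals (62-68 %)": 0.65,
    "non-ferrous metals (3.5-6 %)": 0.0475,
    "paints, leather, wood, paperboard (5-15 %)": 0.10,
}
for material, share in shares.items():
    print(f"{material}: ~{VEHICLE_MASS_KG * share:.0f} kg")

# Cost split quoted in the text: ~CZK 3000 total, about one third of
# which goes to transport and handling.
total_czk = 3000.0
transport_czk = total_czk / 3
print(f"transport/handling: ~CZK {transport_czk:.0f}, "
      f"processing and disposal: ~CZK {total_czk - transport_czk:.0f}")
```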
The waste produced by transport is currently a very topical issue, mainly due to the increase in the number of registered vehicles and their average age, which is currently 13.5 years. As far as the handling of future generations of vehicles is concerned, waste production will continue to rise, which has a range of negative effects. Therefore, the prevention and minimization of waste, which will lead to a less harmful impact on the environment, are very important issues in waste handling. There is also a connection with vehicle construction, which should, already at the production stage, be focused on a more effective use of secondary raw materials and energy; the share of materials with dangerous or toxic properties should be reduced to the maximum degree, so that new vehicles are more environmentally friendly and have a higher potential for prevention, re-use, and material and energy recovery (Adamec et al., 2006).
SUMMARY
Several thousand various chemical compounds, often with mutagenic and carcinogenic effects, have been identified in the environment, and a considerable number of them come from traffic (e.g. the combustion of fuel, the abrasion of exposed parts of vehicles and of the road surface). The concentrations of some of them are regularly monitored, and the Czech Republic has to reduce their amounts according to its commitments as an EU member. Nevertheless, the amount of harmful compounds released into the environment by human activity is increasing rapidly. This unfavourable situation is apparent mainly in big cities with high traffic intensity, where a considerable deterioration of air quality occurs, influencing the health of their citizens, mainly children and the elderly. In this respect, it is necessary to pay attention to this issue, which means caring more about the fate of the pollutants produced by traffic and the associated potential health and environmental risks.
Figure 1: Production of limited pollutants by individual modes of transport.
Figure 4: The share of the individual PM10 fractions during the period 25 June -5 July 2007 (left graph) and during the period 28 November -5 December 2007 (right graph). | 2018-12-05T06:35:51.378Z | 2011-06-01T00:00:00.000 | {
"year": 2011,
"sha1": "a4493f00b9ccdbc9bfdc2f5c4e655e011a3f9e1c",
"oa_license": "CCBY",
"oa_url": "http://tots.upol.cz/doi/10.2478/v10158-011-0010-z.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a4493f00b9ccdbc9bfdc2f5c4e655e011a3f9e1c",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
231619996 | pes2o/s2orc | v3-fos-license | The Effectiveness of Orange Essential Oil Aromatherapy on Blood Pressure, Pulse Rate, and Respiratory Rate of Patients Scheduled for Coronary Angiography: A Clinical Trial
Background: Patients scheduled for angiography experience pre-operative anxiety, which affects vital signs. Objectives: The current study aimed to investigate the effectiveness of orange essential oil aromatherapy on the blood pressure, pulse rate, and respiratory rate of patients before coronary angiography. Methods: In this clinical trial, 70 patients scheduled for coronary angiography were divided into an intervention and a control group. For those in the intervention group, two drops of orange essential oil were dripped with a dropper onto a polyethylene handkerchief attached to the collar of the participants, who were then asked to breathe normally for 20 minutes. Vital signs were recorded before and after the intervention. Data were analyzed in SPSS version 16 using the chi-square test, Fisher's exact test, the independent t-test, the Mann-Whitney U test, the Wilcoxon signed-rank test, and the paired t-test. Results: There was no significant difference between the two groups concerning the mean vital signs before the intervention. However, after the intervention, a significant difference was found between the two groups concerning pulse rate (71.3±26.79), respiratory rate (15.2±11.04), and systolic blood pressure (117.9±31.47) (P<0.001). In the control group, no significant difference was found concerning the pulse rate (75.3±06.72), respiratory rate (17.2±18.28), and systolic blood pressure (123.8±91.91) before and after the intervention (P>0.001), and this difference was significant between the two groups (P<0.001). Also, there was no significant difference between the two groups concerning diastolic blood pressure (P>0.001). Conclusion: This study demonstrated that orange essential oil, as a complementary method, can reduce the pulse rate, respiratory rate, and systolic blood pressure of people undergoing coronary angiography. Therefore, it can be considered a safe and cheap alternative therapeutic option.
Introduction
Coronary angiography is a definitive, reliable, and invasive method of diagnosing coronary artery diseases [1]. It is the most widely used diagnostic modality for cardiac patients [2]. In 2010, over 3 million angiographies were performed in the United States [3]. In Iran, more than 260,000 coronary angiographies are performed annually [4]. Although it has several advantages, it also has disadvantages, such as high anxiety, mainly due to its invasive nature, which in turn affects the vital signs, causing increased pulse rate, respiratory rate, and systolic and diastolic blood pressure [5]. Vital signs are objective criteria for assessing essential physiological functions of the body, which are also useful for assessing the need for angiography [6]. Elevated vital signs are common before angiography; this worsens the patient's health condition and thereby affects the angiography outcomes [7]. Anxiety stimulates the sympathetic nervous system, causing an increase in heart rate, heart excitability, and blood pressure, which in turn increases the heart's need for oxygen and disrupts the blood supply to heart cells [2]. Anxiety and vital signs mutually reinforce each other [6]. There are pharmaceutical and non-pharmaceutical strategies to control anxiety [8]. Since pharmaceutical strategies may cause side effects, the tendency towards non-pharmaceutical strategies, e.g. complementary medicine, is increasing [9]. Aromatherapy is a common method in complementary medicine [10] that, if needed, can be used by nurses. Complementary medicine is a low-risk, cost-effective treatment with limited side effects, and its application is increasing [11,12]. Orange essential oil, also known as Citrus aurantium, is widely used for aromatherapy [13]. It stimulates the central nervous system and has sedative, anti-inflammatory, antispasmodic, and anti-bloating effects. Besides, it can reduce blood pressure and cause vasodilation [14]. Some studies have also mentioned the anxiolytic effects of orange essential oil during labor [15] and during dentistry services [16]. Findings about the effects of orange essential oil on vital signs are controversial [17,18], and further studies are needed. Furthermore, based on the field observations of the researcher, in most cases the respiratory rate and heart rate of patients increase before angiography, which in some cases results in delaying or canceling the angiography. On the other hand, monitoring and controlling vital signs are among the important nursing care tasks. The current study aimed to investigate the efficacy of orange essential oil aromatherapy on the blood pressure, pulse rate, and respiratory rate of patients before coronary angiography in Fatemeh Zahra Hospital in the city of Sari in 2019.
Methods
The current clinical trial was approved by the Ethics Committee of the Islamic Azad University, Sari Branch (code: 11-1397 R.IAU.SARI.REC). The study investigated 70 patients scheduled for coronary angiography who were admitted in 2018 to the angiography ward of Fatemeh Zahra Hospital in the city of Sari, which is the referral center of Mazandaran province. Inclusion criteria were full consciousness, undergoing coronary angiography for the first time, not receiving invasive procedures before angiography, and not using sedatives or other therapeutic interventions (e.g. herbal essential oils and medications) in the six weeks before angiography. Exclusion criteria were sudden changes in vital signs, dangerous cardiac dysrhythmias, and unwillingness to continue the study. The sample size was calculated using the formula given in [19], with a type I error of 5% and a study power of 80%. Participants were randomly divided into two groups of 35 subjects: control and intervention. Two subjects of the control group were excluded because they could not fill in the questionnaire, due to chest pain and receiving sedatives. Data were collected using a questionnaire on demographic information (i.e. age, gender, marital status, history of diseases, and type of health insurance) and disease information, a vital signs sheet, and medical equipment such as a sphygmomanometer. Information was recorded before and immediately after the angiography. The validity of the mercury sphygmomanometer was assessed by calibrating the device, and its reliability was assessed using the test-retest method: the blood pressure of 10 randomly selected patients was measured twice, five minutes apart, and the mean correlation was compared to the lower 95% correlation coefficient of the instrument. Blood pressure was measured after a short rest in the lying position, and pulse and respiratory rates were counted manually for one minute. The orange essential oil was obtained from the Gorgan Essence Company. For those in the intervention group, one hour before the angiography, two drops of orange essential oil were dripped with a dropper onto a polyethylene handkerchief attached to the collar of the participants, who were then asked to breathe normally for 20 minutes [20]. Subjects of the control group received two drops of distilled water (placebo) of a similar color to the orange essential oil. The vital signs were measured before and after the intervention. It is worth noting that a cardiologist supervised the study. Data were analyzed in SPSS version 16 using the chi-square test, Fisher's exact test, the independent t-test, the Mann-Whitney U test, the Wilcoxon signed-rank test, and the paired t-test. Statistical significance was set at p-value < 0.05. The chi-square and Fisher's exact tests were used to compare qualitative variables. As the data were not normally distributed, the Mann-Whitney U and Wilcoxon signed-rank tests were used for the between-group and within-group comparisons before and after the intervention, respectively.
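The analysis plan described above, Mann-Whitney U for between-group comparisons and Wilcoxon signed-rank for within-group before/after comparisons, can be sketched with SciPy. This is an illustration on made-up pulse-rate vectors, not the authors' original SPSS analysis.

```python
# Sketch of the non-parametric tests named in the Methods, using SciPy.
# The vital-sign vectors below are made-up placeholders.
from scipy import stats

pulse_intervention_after = [68, 70, 72, 71, 69, 73, 70, 74]  # beats/min
pulse_control_after      = [78, 76, 75, 79, 77, 80, 74, 76]

# Between-group comparison after the intervention (Mann-Whitney U):
u_stat, p_between = stats.mannwhitneyu(pulse_intervention_after,
                                       pulse_control_after)
print(f"Mann-Whitney U = {u_stat}, p = {p_between:.4f}")

# Within-group before/after comparison (Wilcoxon signed-rank):
pulse_intervention_before = [80, 79, 82, 81, 78, 83, 80, 84]
w_stat, p_within = stats.wilcoxon(pulse_intervention_before,
                                  pulse_intervention_after)
print(f"Wilcoxon W = {w_stat}, p = {p_within:.4f}")
```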
Results
In total, 70 subjects participated in the present study, but two subjects in the control group were removed due to incomplete questionnaires caused by chest pain and receiving sedatives. Therefore, the data of 35 subjects in the orange essential oil group and 33 subjects in the control group were analyzed. The mean age of the participants was 58.8±6.4 years, and most of them were female (56.4%) and married (77.2%). Most of the participants were educated up to primary level only (46.5%), and 89.1% of them had background diseases. There was no significant difference between the two groups concerning demographic information (Table 1).
Vital signs (pulse rate, respiratory rate, and systolic and diastolic blood pressure) were measured on the day the patient was scheduled for angiography (before any intervention). Based on the results, there was no significant difference between the two groups before the intervention; however, after the intervention, the two groups differed significantly concerning pulse rate, respiratory rate, and systolic blood pressure (Table 2).
Discussion
This study demonstrated that inhaling orange essential oil could significantly reduce systolic blood pressure, pulse rate, and respiratory rate in the intervention group compared to the control group. Mi-Ra et al. (2018), in a study entitled "Effectiveness of aromatherapy on vital signs, physical and psychological stress relief, and sleep quality of students", concluded that aromatherapy had a positive physiological effect on vital signs, physical and psychological stress relief, and sleep quality of students [21]. Also, Jafari et al. (2013), in a study entitled "Effect of aromatherapy with orange essential oil on salivary cortisol and pulse rate in children during dental treatment", reported a significant association between decreased anxiety and heart rate [16]. Moradi et al. (2015), in a study entitled "Effects of lavender aromatherapy on the anxiety and vital signs of patients with ischemic heart diseases hospitalized in cardiac intensive care units", showed that this intervention could reduce the anxiety level of patients with ischemic heart disease and had a positive effect on vital signs [22]. Yadegari et al. (2015), in a study aimed at investigating the effect of inhaling jasmine flower on some physiological parameters in patients before laparotomy in the general surgery ward, reported a significant difference between the two groups concerning physiological variables such as pulse rate, respiratory rate, and systolic and diastolic blood pressure [11]. These studies demonstrated the positive effect of aromatherapy on reducing anxiety and improving the vital signs of patients before medical interventions. In contrast, Çevik (2017) reported that inhaling orange and lavender essential oil did not affect the anxiety and vital signs of nursing students giving an injection for the first time [17]. Also, Hekmatpou et al. (2017) investigated the effect of orange essential oil on the pain and vital signs of patients with limb fractures and concluded that although blood pressure was lower in the intervention group, the difference was not statistically significant compared to the control group [18]. As mentioned before, findings concerning the effect of aromatherapy with orange essential oil are controversial. This difference can be attributed to various reasons, such as the use of different oils, the duration of the study, and the administration method. Therefore, the authors suggest performing further studies on special patients and different groups.
Conclusion
This study demonstrated that aromatherapy can be used as a complementary method along with routine treatments, or even as an alternative method. Aromatherapy is thus an effective and uncomplicated strategy for reducing anxiety in anxiety-provoking situations, and nurses can use this method to help stabilize patients' vital signs. | 2021-01-15T22:45:13.990Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "54360a2b776f68a9f6d804df7c9c837f01322277",
"oa_license": "CCBYNC",
"oa_url": "http://zums.ac.ir/nmcjournal/files/site1/user_files_eafc2e/tahmasebi-A-10-756-1-e2c69c4.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "54360a2b776f68a9f6d804df7c9c837f01322277",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
247096060 | pes2o/s2orc | v3-fos-license | A critical realist theory of ideology: Promoting planning as a vanguard of societal transformation
This article explores the potential value of a critical realist theory of ideology for the analysis of planning issues. In particular, it argues for its usefulness in promoting planning as a vanguard of societal transformation. The critical realist theory of ideology revitalizes the epistemological inquiry of beliefs, which enables an evaluation of the social, economic and environmental impacts of the ideas and beliefs embedded in planning. Furthermore, the essence of the critical realist theory of ideology is to explain the (re)production of ideology, which paves the way for transformative planning, as transformation cannot be realized without eliminating constraining social conditions. Finally, critical realism situates its critique of ideology within the wider transformation process by rendering visible the dimensions that can contribute to eradicating the ideology in question and shaping better planning ideas, including ethical reasoning, utopian thinking and transformative agency. A meta-theoretical framework based on critical realism is proposed to guide a critique of ideology in planning. Using the example of planning for sustainable urban development in Copenhagen and Oslo, the paper demonstrates the ways in which the meta-theoretical framework can be applied to planning in a quest for societal transformation.
Introduction
Planning studies have repeatedly engaged with ideology throughout history. The recent decades have clearly witnessed a renewed interest in inquiries on ideology in planning (Davoudi et al., 2020; Grange, 2014; Gunder, 2005, 2010; Sager, 2015, 2020; Shepherd, 2018; Xue, 2018). The latest attempt at bringing ideology to the forefront of planning analysis is the Special Issue of Planning Theory (volume 19, issue 1) on Narratives of power: bringing ideology to the fore of planning analysis, which examines the politics of contemporary planning through the lens of ideology. Agreeing with the premise set forth in the Special Issue, namely that a variety of theories of ideology exists which serve as valuable analytical tools with which to shed light on different planning issues, this paper sets out to explore the distinct values of a critical realist theory of ideology in the analysis of planning, particularly regarding planning in quest of societal transformation. By doing so, the paper aims to enrich the ongoing debates on ideology and planning by demonstrating the ways in which a realist approach to ideology provides a different angle when addressing planning challenges, issues and solutions, as well as promotes the transformative edge of planning.
We live in an era of multiple crises and challenges (e.g. climate change, ecological degradation, pandemics, social injustice, poverty and political instability). To cope with them, it is widely acknowledged that our urban societies need to undergo an urgent transformation. However, mainstream planning has often been criticized for lacking a transformative edge or even being a hindrance to progressive social change (Albrechts, 2015; Rydin, 2013). This sets the scene for the inquiry of ideology in planning in this paper, in which I seek to answer the question: if planning purports to be a vanguard of societal transformation, what kind of theory of ideology might be necessary, illuminating and adequate? Drawing on critical realism, a deep realist philosophy of science, I propose a meta-theoretical framework for the critique of ideology, which is particularly useful in fostering the role of planning as a driving force of progressive social change. Although a well-established philosophy of science, critical realism has inspired few planning studies (e.g. Boonstra and Rauws, 2021; Naess, 2015; Xue, 2012); the potentials of its theory of ideology have been underexplored within the planning field. This paper represents a preliminary attempt at joining the two domains, which also contributes to expanding the application of critical realism in planning studies. Therefore, by bringing in a realist position, this article contributes to the recent ongoing debates on ideology and planning, which are dominated by post-structuralist traditions.
The paper is structured as follows. In A Missing Realist Approach to Ideology and Planning, based on a brief review of the recent literature on ideology and planning, and from the perspective of societal transformation, I argue for the advantages of bringing a realist approach to ideology to the debates. In A Critical Realist Approach to the Critique of Ideology, I introduce the core concepts of a critical realist theory of ideology and arrive at a meta-theoretical framework for conducting the critique of ideology in planning. This is followed by an example revolving around a critique of the green growth ideology in urban planning that aims at sustainability transformation (An Example: The Green Growth Ideology in Planning for Urban Sustainability Transformation). The example is based on several studies conducted by the author over a few years, which have investigated, examined and criticized the prevalent sustainable urban development ideas and strategies in the context of the Nordic cities of Copenhagen and Oslo (Mete and Xue, 2020; Naess and Xue, 2016; Xue, 2014, 2015; Xue et al., 2017). The purpose of the example is to illustrate ways in which the critical realist theory of ideology could be applied in planning. As such, the example is by no means a full and systematic account of the critique itself. In Conclusion, the paper concludes by summarising its contributions to the existing academic debates on ideology in planning.
A missing realist approach to ideology and planning
A dominance of the post-structuralist approach to ideology in planning analysis
The concept of 'ideology' has been heavily debated in the history of philosophy and political science. Although the concept is imbued with different connotations, two large traditions can be distinguished (Eagleton, 2007). One tradition concerns the ontological definition of ideology. Davoudi et al. (2020) summarize this tradition as a focus on 'what an ideology is'. Originating from Hegel and Marx, who viewed ideology as false consciousness, this tradition takes an epistemological perspective and is preoccupied with the truth and falsehood of ideas and beliefs. The other tradition has less of an epistemological focus and more of a sociological one, with a concern over the function of ideas and beliefs within society. Therefore, this tradition can be said to focus on 'what an ideology does' (Davoudi et al., 2020). However, the distinction between the two traditions is not clear-cut, with the boundary between them being blurred.
Despite the blurred distinctions, it is currently fair to say that the epistemological notion of ideology as a distortion of reality is a rather unpopular conceptualization (Eagleton, 2007; Grange, 2014; Norval, 2000). The reinvigorated interest in the theory of ideology after the 1960s has been dominated by a linguistic approach and, more generally, post-structuralist thinking, which is preoccupied with the function of ideology (Gunder, 2010; Grange, 2014; Laclau, 1996; Norval, 2000). This sociological and political notion of ideology is bound up with the post-structuralist perception of our world as one which lacks intrinsic meaning, essence and absolute truth. Apparently, this position on social ontology renders it incompatible with the epistemological theory of ideology, as the latter presumes the existence of one reality and some order of truth. In light of post-structuralist thinking, the task of ideology analysis is to investigate the formation of representation, paradigms and symbolization, and the ways in which these shape our world (Norval, 2000).
Within the planning literature, there is not an abundance of attempts to reflect on ideologies. Depending on the focus and intention of inquiries, planning scholars have utilized different theories of ideology to shed light on the planning issues in question. Harvey (1985), adopting the Marxist interpretation of ideology, contends that, without being aware of it, planners are committed to the ideology of harmony within the capitalist social order. Therefore, planners contribute to the process of capitalist social reproduction through producing, maintaining and managing the built environment. According to Harvey, the ideological presupposition of social harmony is the most imposing and effective mystification. Hence, he calls for planners to rethink ideology particularly at those historical turning points at which one might identify the 'crisis of ideology' (Harvey, 1985: 182).
Resonating with the dominant post-structuralist approach to ideology, common to the more recent ideological engagement with planning is the adoption of a broadly sociological perspective on ideology that is neutral to epistemology but focuses on its functions and effects. Much literature is devoted to revealing the role ideological mechanisms play in shaping, (de)legitimizing and conditioning planning practices within specific historical and geographical contexts. The unpacking of the ideological mechanisms has been informed by different theories of ideology, such as Freeden's morphological approach (Shepherd, 2018), a rhetorically-informed political theory of ideology (Davoudi et al., 2020), the Laclauian post-Marxist interpretation of ideology (Grange, 2014), Hall's approach to conjunctural analysis and the Lacanian theory of psychoanalysis (Gunder, 2005, 2010). These inquiries on ideology have covered a wide array of planning issues ranging from national planning reforms (Davoudi et al., 2020; Shepherd, 2018), planning culture (Grange, 2014) and housing development (Davy, 2020; Zanotto, 2020) to urban policies (Gunder, 2003, 2005, 2010).
Arguments for bringing a realist approach to ideology in planning analysis
The abovementioned post-structuralist theories of ideology have proven to be very valuable as regards unpacking the ideological mechanisms that shape, navigate and shift planning practices. Without denying their values, I argue that a realist approach to ideology, as represented by critical realism, is particularly advantageous to strengthening the potentiality of planning as a driving force of societal transformation.
Firstly, the post-structuralist approach to ideology is more interested in understanding the ways in which ideological mechanisms are manifested in planning than in evaluating the contents of the beliefs and values, as well as their resultant impacts on substantive planning strategies. For example, as stated in the editorial by Shepherd et al. (2020), all the articles in the Special Issue focus on the role and effects of ideology instead of tackling it 'head on'. However, the substantive dimension of planning is of importance when it concerns either facilitating or hindering societal transformation, as seen, for example, in land use strategy. Compared to urban sprawl, a compact land use structure has ramifications for travel behaviour and dwelling patterns which are more in favour of climate mitigation and environmental protection (Newman and Kenworthy, 1999; Naess, 2012). The choice of land use strategy is strongly affected and shaped by beliefs, values and rationales in planning. As beliefs reflect what one believes to be true, good, and ought to be, they play a significant role in shaping, framing and constraining planners' formulation of meaningful aims and objectives, choice of tools and methods, planning process, adoption of actions and strategies, as well as the criteria chosen to evaluate those strategies (Fagence, 1983; Gunder, 2010; Kramer, 1975). Therefore, the content of beliefs and ideas affects the transformative impacts of planning practices. Identifying and criticizing the beliefs through scrutiny, evaluation, justification, or falsification has considerable implications for materializing planning's potentiality in fulfilling societal transformation goals, such as climate mitigation, social justice and ecological preservation. This calls for a theory of ideology that enables the evaluation of the beliefs in planning against certain norms and values. Arguably, only a realist position that presumes the existence of moral truth could make an evaluative stance towards beliefs possible.
Secondly, through unpacking the functions of ideology in planning, the post-structuralist approach indeed enhances planners' awareness of ideological forces, thus laying a necessary foundation for planners' resistance to them. To further strengthen planning's potential for counteracting ideological pressures or even eradicating them, it will benefit from a critique of ideology that provides deeper insights into the social structures and mechanisms generating and sustaining that ideology. As I will discuss in A Critical Realist Approach to the Critique of Ideology, the theory of ideology informed by critical realism enables an exploration of the social roots forming the ideology in question.
Thirdly, societal transformation is a normative project which indispensably involves an envisioning of desirable futures and of the normative values on which those futures are built. This emphasis on normativity aligns with the very nature of planning, which is laden with value inquiries and heavily dependent on normative judgements. Particularly for planning that is ambitious for societal transformation, it needs to engage in inquiries on value issues and challenge existing values, norms and beliefs which appear as hindrances to transformation. Post-structuralist theories of ideology that predominantly focus on 'what an ideology does' and are silent on epistemological issues do not explicitly (if not implicitly) engage in an inquiry into defining what the normative values and ethical premises should be. In contrast, a moral realist position, holding that the moral real exists, will subject actually existing values to critique, enabling planners to challenge certain ethical premises and articulate alternatives.
A critical realist approach to the critique of ideology
Originating with Roy Bhaskar in the 1970s, critical realism is a school of thought in the philosophy of science in parallel with, but distinct from, philosophies such as positivism, empiricism, hermeneutics and post-structuralism. Critical realism claims to combine ontological realism, epistemological relativism and judgemental rationalism. Through ontological realism, critical realism embraces the idea that reality exists independently of human consciousness and knowledge of it, and further identifies three layers of reality: the empirical, the actual and the real (Bhaskar, 1975). The empirical layer is comprised only of what is experienced. Nevertheless, not all events can be experienced. Therefore, the actual layer is constituted of all the events or occurrences that happen, independently of whether we experience them or not. The real layer includes the mechanisms which generate the events and experiences. The generative mechanisms are one of the central notions of critical realism, explaining the formation of events and phenomena. Epistemologically, critical realism holds a relativist position, meaning that our knowledge about reality is socially determined and conceptually mediated; it attempts to approach reality, but is always fallible. However, not all knowledge claims are equally fallible. Through some theoretical and methodological tools, we can rationally judge the ability of knowledge to inform us about external reality. These basic ideas constitute the foundations of critical realism's understanding of ideology.
A critical realist theory of ideology
Bhaskar's concept of ideology follows a Marxist account, defined as 'lived systems of false or inadequate ideas' (Hartwig, 2007: 252). In other words, the critical realist definition of ideology focuses on the epistemological dimension of ideas. Following this definition, not all systems of ideas, values and beliefs can be categorized as ideology. Certain requirements must be fulfilled for them to be considered ideology. Firstly, these ideas are false in the sense of being misleading, inadequate, untrue or representing a form of misconception of reality. Secondly, the existence of these false ideas is necessary, which means that there are social conditions generating and sustaining their existence. Thirdly, the maintenance of the false ideas fulfils certain functions; namely, they are beneficial in their effects on the reproduction of the social relations that generate them (Hartwig, 2007). As such, the critical realist understanding of ideology embraces its content, function and formation, which will be explained in more detail in the following paragraphs.
In terms of the content of ideology, critical realism involves an evaluative stance on beliefs and ideas. Evaluating the validity of the beliefs can be conducted by identifying a theory-practice inconsistency that undermines or even deconstructs the validity of the assumptions, values, or contents of the beliefs; or by identifying a weak, incorrect or blind point in the beliefs (Bhaskar, 2016). It is by virtue of this evaluation that the foundation is laid for the rational shaping of alternative values and beliefs directed at removing, displacing, or transforming the false beliefs at a later stage of the critique of ideology.
The pivotal element of the critical realist theory of ideology is arguably its emphasis on explaining the formation of the ideology, in other words, the reasons why a false belief prevails despite its falsity. This explanatory endeavour is framed as an explanatory critique which enquires into the causal role of social structures in the wide acceptance of a false belief. Critical realism provides the possibility of developing a non-reductionist understanding of social explanations for the generation as well as the sustaining of an ideology. The underlying generative mechanisms on which these false beliefs are grounded can include features such as economic structures; cultural norms; the political landscape; social structures, for example rules and laws; institutional settings; and discourses (Bhaskar, 2010). Knowledge of the structural causes of the false beliefs paves the way for transformation, as the false beliefs cannot be removed without eradicating their causes.
Critical realism further provides a general philosophical account of the ways in which ideology functions in society, framed as a TINA (which stands for 'there is no alternative') formation (Bhaskar, 2016). A TINA formation occurs when a false belief is first presented as a necessity, expressed as 'there is no alternative'. The point is that this is a false necessity, as there indeed exist alternatives, whether conventional, reformist, or radical. Once this false necessity is accepted, it entails defensive or supporting mechanisms to sustain the original false necessity and build a defensive shield (Norrie, 2010). In other words, the TINA formation indicates the process by which a false necessity is undermined by and must be protected against its own falsity (Bhaskar, 2016). With the TINA analysis, Bhaskar aims to unpack 'the internal nature and patterning of ideology as thinking moves and negotiates the world' (Norrie, 2010: 109). Bhaskar (2016) committed the critique of ideology to transformation and emancipation. With this purpose in mind, Bhaskar further develops the theory of ideology by adding the elements of ethical naturalism, concrete utopianism and transformative agency.
The evaluation of beliefs or ideas can only be conducted against certain ethical standpoints. Critical realism applies a realist thinking to the domain of ethics and morality, known as ethical naturalism. It suggests that there exist moral truths, and we can move from knowledge of the way the world works to moral claims of the way it ought to be through rational ethical reasoning (Bhaskar, 2016). In the criticism of beliefs, ethical reasoning constitutes part of the rational inquiry and judgement through argument. According to Bhaskar (2016), although moral truth exists at the level of the real, it is constellationally embedded within the geo-historical development of human society and its interpretation at the level of the actual is always contingent and variable, dependent on the actual contexts. The judgement of moral claims can be informed by science, but also requires public debate and democracy (Price, 2019). Further, the exploration of moral truth should compare competing alternatives and use judgemental rationality.
The negative criticism of ideology already postulates the possibility of better alternatives. These better alternatives can be suggested as 'an exercise in concrete utopianism, postulating an alternative to the actually existing state of affairs, incorporating unacknowledged and even hitherto unimagined possibilities for the satisfaction of wanted needs and wanted possibilities for development, grounded in sustainably potentially disposable resources in the context of a different social order' (Bhaskar, 2016: 263). Utopia constitutes part of the exercise of normative thinking, by imagining a future state that can better fulfil the moral truth. As Archer (2019) interprets, 'concrete' in concrete utopianism means 'realizable, but non-actualized possibilities', the realization of which may be prevented by identifiable constraints.
For the social structures that generate and sustain the ideology to be removed or transformed, merely identifying and explaining them is insufficient. Agents' actions are necessary. Although agents are born into the social structures which condition (though do not determine) their actions, 'the agents can, through the powers they possess, form projects that transgress the situation in which they live' (Danermark et al., 2019: 86). The transformative power of agency is not only embedded in the practice of dismantling the existing ideology and its structural causes, but also in actively searching for and bringing in new norms and ideas that are hitherto unactualized. According to Archer (2000), the 'corporate agency', that is, agents who are organized to collectively express interests and strategic pursuits, plays a strategic role in transforming the established structures and cultures. The ultimate outcome of social change or stability is dependent on the social interaction between corporate agents. With the potential bargaining power determined by the initial societal distribution of resources, a corporate agent strategically negotiates with other agents to promote its vested interest (Archer, 1995).
To encapsulate, the aforementioned core concepts of the critical realist theory of ideology can be synthesized into a meta-theoretical framework. As shown in Figure 1, the framework consists of two major blocks: negative criticism of ideology and positive action to eradicate ideology. As a meta-theoretical framework, a critical realist approach to ideology provides no substance other than a conceptual framework guiding the undertaking of the critique. As such, the different steps can be fleshed out with concrete theories, dependent on the specifications of the issues in question.
A critical realist informed critique of ideology in planning
In this section, I will briefly explain the ways in which the meta-theoretical framework can inform the critique of ideology in planning by following the steps illustrated in Figure 1.
The critique will start with the endeavour to render explicit the underlying beliefs and values in a planning strategy or practice, then subject them to scrutiny as well as criticism (Step 1). Beliefs, values, ideas and rationales are indispensable and necessary elements constituting planning, either implicitly or explicitly. Although a planning strategy may appear to be adopted due to 'muddling through' without following definable beliefs, this pragmatism may involve a process of struggles between competing rationales. The critical realist approach will contribute to augmenting the intellectual capability of planners to recognize and criticize the beliefs, especially those widely accepted ones which can significantly impact environment and society.
Identifying the beliefs as ideology in planning would be followed by an attempt at understanding the function of ideology by addressing the ways in which planning is subject to ideological influence and the ways ideology manifests itself (Step 2). Although the TINA formation provides an overarching explanation of the way that ideology functions in society, more concrete theories can illuminate the ways in which ideological mechanisms are manifested within different planning contexts. Most of the existing studies on ideology and planning offer a range of theoretical perspectives to reveal these ideological mechanisms. For instance, Davidson (2010) uses the concept of the empty signifier to explicate the cynical, ideological functioning of the use of sustainability in city planning. By referring to Hall's conjunctural analysis, Inch and Shepherd (2020) unpack the way by which ideology reaffirms hegemony in the face of challenge. Here, critical realism shows its potential to bridge the trenches between a realist and a post-structuralist approach to ideology.
The next step moves to an explanation of the social structural conditions that (re)produce the ideology (Step 3). The generation and sustaining of an ideology in planning can be a consequence of multiple mechanisms simultaneously at play which, however, are not readily observable but have to be investigated by looking beneath the empirical manifestation (Danermark et al., 2019). Different theoretical perspectives that can best illuminate the planning issues can be drawn on to shed light on the generative mechanisms. For example, a discourse theory can be adopted to unpack the discursive mechanisms that lead to the shaping and shifting of ideology in planning (Gunder, 2005; Zanotto, 2020).
Following the critical realist approach, the criticism of the ideological beliefs will naturally lead to a formulation of substantive ethical values on which a better future can be built (Step 4). Despite planning being a profession that is deeply ethical, the actually existing moralities of planning are characterised by a strong moral relativism (Campbell and Marshall, 1999; Forester, 1999). Ethical debates in planning have primarily focused on process and denied a formulation of universal substantive values and, in turn, the distinction between better and worse planning outcomes (Campbell, 2012; Forester, 1999). This is partly because answering normative questions of 'what should be done' or 'what is better' risks being accused of imposing moral superiority and excluding other perspectives, which generates tensions in a world of difference, diversity and multiple ethical claims. Consequently, the actually existing moralities of planning to a great extent align with the prevailing ethical values embedded in the ideological beliefs, and planners lose the vantage point of providing foundations for progressive and radical subversion as well as transformation. For planners to be able to confront and eradicate the dominant ideological beliefs in planning, an articulation of alternative normative values and a rational reasoning of them are of necessity.
To transform the ideology in planning, planners can go further to envision futures that can better fulfil the articulated normative values (Step 5). Harvey (2000) argued that the existing utopian visions of urban development, such as the compact city, new urbanism, and competitive and liveable cities, are characterized as a 'degenerate utopia', as they either maintain mainstream capitalist market values or are in the process of degenerating into a utopia of neoliberalism. The intriguing question for planners is whether they could construct a 'progressive utopia' capable of providing a rebuttal to the ideological claim of 'there is no alternative' (TINA), thereby functioning as a catalyst for transformation. Methodologically, scenario planning can be a fruitful approach to strategically and tactically exercising utopian thinking, by envisioning and devising different futures (Börjeson et al., 2006). Particularly, normative scenario planning aims at formulating better urban future alternatives which embed values precious to the world. It can also include working back from the desirable future to check the feasibility of the scenarios under the current physical, social, political and economic conditions in order to devise the necessary measures and actions for their realization (Höjer and Mattsson, 2000).
The last step in the critique of ideology in planning will be to enable transformative planners (Step 6). Studies on the agency side of planners have demonstrated a growing discrepancy between planners' professional values and their actual ability to realize them (Grange, 2013; Inch, 2010). However alarming this lack of action room may be, more worrying is that planners conform to the dominant ideological beliefs promoted by strong political or societal forces, despite these beliefs possibly lying in conflict with the planners' own professional values. This is the case as depicted in most planning literature on ideology, which appears to describe planners as passive recipients of hegemonic political ideas, susceptible to political penetration (Davoudi et al., 2020; Grange, 2014; Gunder, 2010; Shepherd, 2018). Planners' commitment to the political hegemony in turn reinforces the stability and reproduction of the dominant ideology. The critical realist concept of corporate agency will not only contribute to a greater awareness of the way that ideology shapes planning but also facilitate planners in confronting the ideological illusions and mobilizing forces for transformative actions.
An example: The green growth ideology in planning for urban sustainability transformation
In this section, I will illustrate the ways by which the meta-theoretical framework can inform a critique of ideology in planning by drawing on the example of the green growth idea in planning for urban sustainability transformation. Urban planning for sustainability has a strong intention and ambition to transform urban development towards a long-term sustainable future. Therefore, the critical realist theory is well suited to an inquiry into ideology in this planning issue. The approach adopted is to develop a step-by-step critique following Figure 1 by drawing on theoretical arguments and generalized research evidence, as well as by frequently referring to two Nordic cases of urban planning: the region of Copenhagen and Oslo. As already mentioned, this example illustrates the application of the meta-theoretical framework and is neither a comprehensive critique of the ideology nor an in-depth study of the cases.
Step 1: Green growth as an ideology in planning for sustainable urban development
The concept of sustainable development and the ways of achieving it have been very much contested. Since the 1980s, the mainstream and prevailing thinking about sustainable development has been based on the idea of green growth, which holds a strong belief in decoupling economic growth from negative environmental impacts (OECD, 2019; Xue, 2016). Although increasingly challenged in the more recent decade by alternative thinking, such as degrowth (Schneider et al., 2010), the notion of green growth still has a deep foothold in global and local sustainability politics. Dematerialization through improving eco-efficiency and substitution is theoretically argued to be the main path towards green growth (Nordic Council of Ministers, 1999). Associated with this notion is an acceptance of growth and competitiveness as something inherently good.
It is not difficult to identify this green growth belief in the sustainability planning of both Copenhagen and Oslo, where the belief is automatically accepted in planning documents at different levels. It is exemplified in the following statements:
'We will show the world and especially other big cities that CO2 emissions can be reduced effectively without adversely affecting economic growth.' (Municipality of Copenhagen, 2007)
'Urban development will facilitate a versatile and competitive business sector that ensures value creation and workplaces for all.' (Municipality of Oslo, 2018: 43)
Meanwhile, 'Oslo is Europe's leading environmental city and takes care of biodiversity, cultural heritage and the city's distinctiveness', and 'Oslo is a zero-emission city with reduced noise pollution and has equipped itself to meet a changing climate.' (Municipality of Oslo, 2018: 17)
Being applied to urban development, green growth implies pursuing urban growth in the demographic, economic and physical aspects whilst reducing the burdens on the environment. This has led to typical land use strategies that aim at enhancing resource-use efficiency. In both Copenhagen and Oslo, densification within the inner city, brownfield development, a compact structure and new development close to urban transport nodes are major land use strategies to increase land use efficiency and meanwhile accommodate a growing need for residential and non-residential buildings as well as infrastructure (Naess et al., 2011; Xue, 2015). In addition, investing in public transport infrastructure is a substitution strategy to replace car driving, thus reducing energy consumption and CO2 emissions (ibid.).
In light of critical realism, a critical-evaluative stance on the green growth belief is required in order to identify whether it is ideological. There is no space here to provide a full-range critique of this belief other than, in a simplified manner, drawing on the conclusions of existing studies. From different perspectives, the idea of green growth is evaluated as a false belief. Empirically, evidence from even the forerunners of environmental sustainability in urban development does not suggest that green growth has occurred (Tapio, 2005). In the Copenhagen region, despite the strong compact city strategy, growth in the housing stock during the period 1991-2008 was only relatively decoupled from residential energy consumption (Xue, 2015). Similarly, only relative decoupling has been achieved between growth in the size of the urbanized area and economic growth in the Copenhagen region, thus suggesting that urban growth still leads to encroachment on undeveloped land. A similar trend has been found in the Oslo region, in which traffic growth was only relatively decoupled from economic growth between 1996 and 2008 (Naess et al., 2011). One may argue that the evidence of relative decoupling suggests that these cities are on the trajectory towards green growth. However, there are limits to what the decoupling strategies can achieve in terms of environmental sustainability. For example, densification as a land-saving strategy to accommodate growth in the building stock is only possible when opportunities are available, such as the availability of low-density neighbourhoods or brownfields. When these opportunities for densification are used up, further growth in the building stock will have to take undeveloped land (Naess, 2021). In addition, the current achievement of relative decoupling is a result of low-hanging fruit being picked. To obtain the same or higher degrees of decoupling seems difficult unless advanced technological innovation can be developed and implemented both on a large scale and rapidly. However, time is what we are lacking, as indicated by the Intergovernmental Panel on Climate Change (2021): we need to change in the very near future in order to avoid climate catastrophe. Apart from these empirical findings that suggest a gap between the belief in the theoretical achievement of green growth and the reality of urban development, the theoretical plausibility of the green growth assumption has also been heavily questioned. The idea of green growth through dematerialization is refuted as being ontologically flawed due to its failure to acknowledge the asymmetrical reliance of human socio-economic activities on nature (Venkatachalam, 2007; Xue, 2014).
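The distinction between relative and absolute decoupling drawn on above can be made precise with the decoupling elasticity from the cited Tapio (2005) study; the simplified reading below is a common shorthand rather than Tapio's full taxonomy, which uses finer cut-offs (e.g. 0.8 and 1.2):

e = (ΔI / I) / (ΔGDP / GDP),

where I is an environmental pressure such as traffic volume, residential energy use or built-up land. Relative (weak) decoupling corresponds roughly to 0 < e < 1: the pressure still grows, only more slowly than the economy, as found for Copenhagen and Oslo above. Absolute (strong) decoupling requires e < 0 under positive economic growth, that is, the pressure must decline in absolute terms, which is the condition green growth would actually have to meet.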
Step 2: The function of the green growth ideology in planning
The TINA formation can be used to illustrate the ways in which the green growth ideology obtains hegemony in planning for urban sustainability. The necessity is first asserted: future urban development must pursue a sustainable development paradigm based on economic growth. During the 1980s in Denmark, and following a similar trajectory to many European countries, a conservative-liberal coalition government moved away from a welfarist approach addressing balanced and equal development across the country. Instead, it introduced a neoliberal agenda in pursuit of competitiveness by stimulating growth in major cities and regions (Galland, 2012; Olesen and Richardson, 2012). The promotion of the capital region as a growth locomotive is framed as a 'must': 'A strong and competitive capital city is an important prerequisite for Denmark's spatial development. Denmark must have a capital that can attract companies, jobs and employees in global competition.' (Ministry of the Environment, 2006: 11).
Meanwhile, being sustainable has also been a political agenda since the 1980s. Discursively, the idea of green growth that aims at reconciling growth and sustainability becomes the only and necessary way to embrace both imperatives. As argued above, there is a fundamental contradiction in the green growth belief. Therefore, a series of TINA defence mechanisms, discursive and practical, must be adopted to shield this false necessity as well as to resolve the internal tensions and contradictions. Here, a tactical strategy framed as a 'sustainability fix' (While et al., 2004) or 'green fix' (Holgersen and Malm, 2015) is invented to guard the false necessity. The strategy employs sustainability policies to sustain and enhance urban competitiveness or to overcome a crisis of capital accumulation during economic downturns. This strategy is well embraced in the sustainability planning of Copenhagen, in which being sustainable is framed as a way to strengthen the region's competitive edge. As stated in the 2006 national planning report (Ministry of the Environment, 2006: 17): 'Green spaces, recreational areas and attractive urban environments are key prerequisites for attracting companies, jobs and employees.' Even green growth per se is regarded as a field of competition: 'The world's cities are fighting hard to top the green growth agenda… If the city does not have a clear strategy and remain in control, Copenhagen is at risk of becoming marginalised in low-growth Europe.' (Municipality of Copenhagen, 2011: 26). In this process, the false necessity was sustained by 'green fix' practices that contradict some, if not all, of the legitimating ideas (here, the environmental sustainability idea). The green fix strategy results in a selective adoption of environmental strategies that can potentially boost (or at least not hinder) economic competitiveness. Its success as measured in growth promotion will most likely aggravate environmental deterioration and weaken the sustainability commitment (Holgersen and Malm, 2015; Xue, 2018).
Despite the TINA defensive mechanisms, the internal tensions and contradictions in the false necessity can only be covered over, not removed (Bhaskar, 2016). The tensions between the fulfilment of growth and sustainability goals are manifested as fragmentation and even contradictions in planning strategies. In the case of the Copenhagen region, densification in the inner city has been accompanied by low-density urban sprawl in the surrounding municipalities, the latter being a strategy to compete for investment and residents (Xue, 2018). In Oslo, strategies for sustainable mobility, such as public transport investments and reductions in car parking lots, have been combined with road capacity increases which counteract the goal of sustainability (Naess et al., 2011; Xue et al., 2017). Furthermore, the Oslo region has planned several high-speed transportation infrastructures that will enlarge job and service catchment areas as well as promote regional economic growth. This regional enlargement policy will also likely result in an increase in total travel distance, contradicting the densification strategy that aims at reducing travel volume (Xue et al., 2017).
Step 3: Social conditions producing the addiction to the green growth ideology in planning
Despite the belief in a green growth urban future being illusory, why does it still have such a persuasive dominance in urban sustainability planning? Following the explanatory critique of critical realism, the explanation of the production of, and commitment to, the ideology should move beyond the sphere of ideas and beliefs and dig into underlying social structures. Here, in accounting for the social necessity of the fallacious belief in urban green growth, the growth compulsion emerging from the capitalist market economic structure plays a crucial role in forming a growth necessity (Gordon and Rosenthal, 2003; Xue, 2012). Notwithstanding differences in trajectory, both Copenhagen and Oslo have experienced a shift in the political context from Keynesianism to neoliberalism, leading to the dominance of the market economy (Naess et al., 2011). Fierce competition in the market economy sets the 'growth or die' dynamic in motion and forms the profit-driven economy. Therefore, the growth imperative has a structural characteristic inscribed into the capitalist economic system (Harvey, 2010). The growth imperative from the economic sphere further expands into other social planes, building a growth-based society. A growing economy is the premise for achieving a sufficient level of employment, avoiding the collapse of the housing market, and maintaining a functional welfare state. In the cultural domain, the inflation of a consumerist lifestyle is closely linked to the capitalist growth compulsion. The idea of green growth, through improving the resource efficiency of production and consumption to solve environmental problems, does not essentially threaten the growth model and the interests of the business sector, and demands no radical changes in lifestyle (Fournier, 2008). Instead, the green shift is often hailed as a new driver of global and local economic growth.
Urban development, the building sector and the transportation system have played an important role in perpetuating, expanding and saving the capitalist mode of production and consumption (Harvey, 1973/2009). The construction and maintenance of buildings and infrastructure contribute significantly to capital surplus absorption and capital accumulation. For example, in Denmark, even during the economic downturn in 2020, when the aggregate national GDP declined by 2.1%, the housing sector contributed 0.5% of growth to the national GDP. In addition, dwellings, the built environment and transport infrastructure are necessary material preconditions for capitalist production, circulation and accumulation. In Norway, the recently published National Transport Plan (Samferdselsdepartementet, 2021) states that an effective transport system lays the foundation for building up a competitive business sector, thus contributing to economic growth. Therefore, planning, given its potential power in shaping land use, infrastructure and the built environment, is often subject to the mainstream political propaganda for economic growth and green growth.
Step 4: Post-growth informed normative values in planning for sustainability transformation
Through the above steps, a deep insight has been developed into the falsity of the green growth ideology as well as its formation and function. Hence, what could be the alternatives that might lead to the fulfilment of the sustainability targets? Below, I attempt to develop an alternative ethical reasoning for sustainability planning, informed by the recently emerged post-growth debates (Holden et al., 2017).
Regarding environmental sustainability, respecting environmental limits is a normative value stemming from the fact that we live on a limited planet with finite natural resources and ecological boundaries. Assessed in different ways, human beings have exceeded several environmental limits, perhaps the most precarious being climate change (Steffen et al., 2015). From an anthropocentric perspective, we have a moral obligation to preserve the quality of our planet in order to leave future generations the living conditions necessary to meet their needs and live a decent life (Arler, 2001). Adopting a non-anthropocentric view, humans have responsibilities not to overexploit non-human resources, for the flourishing of other species which have inherent value even in the absence of humans to enjoy it. Since maintaining economic growth through decoupling cannot remain within ecological boundaries, the key to minimising environmental impacts is to reduce the scale of the total economic output (Martinez-Alier et al., 2010). A sufficiency strategy seeking to lower the affluence level in the Global North is proposed as a necessary way to cope with environmental unsustainability. With regard to urban development, to respect environmental limits through addressing sufficiency, the level of housing consumption per capita should be reduced, the total volume of the urban built environment shrunk and individual mobility decreased. Residents in both the Copenhagen and Oslo regions have a very high average living standard, measured in housing consumption and mobility level. In both city regions, per capita residential floor area has exceeded 50 m², being among the highest in the world. In 2012, Norway reached the top in Europe in per capita daily driving distance, with an average of 33.5 km (Xue et al., 2017). Considering the ethical arguments above, transformation towards environmental sustainability, both locally and globally, will require a reduction in consumption levels in both city regions.
Another ethical value to be argued for in a sustainable future is to ensure social justice both intra-generationally and inter-generationally. Inter-generational equity is a main moral concern when judging the necessity for our generation to respect environmental limits. Safeguarding intra-generational equity is based on the moral premise that we are responsible to population groups beyond our specific community or nation. However, it appears to be more challenging and demanding to achieve an equitable distribution when the total resource and consumption level must be reduced. Unlike the situation in a growth society in which everyone will have their share increased through 'making the cake bigger' by economic growth, an equitable distribution within limits cannot be achieved without redistribution from the wealthy to the poor (Naess and Xue, 2016). A case in point would be housing; with a ceiling for average per capita housing consumption, an increase in floor area for those who already overconsume housing space will lead to worsening living conditions for those lacking a dwelling or to them living in substandard housing.
Redistribution within limits is not only for securing social equity but is also based on an ethical premise of satisfying human needs. This value is explicitly articulated in the definition of sustainable development by the Brundtland Commission, with priority laid on the world's poor (World Commission on Environment and Development, 1987). Unlike human wants, desires and preferences, needs are objective, non-negotiable, and universal across cultures and over time (Doyal and Gough, 1991). Failure to satisfy them will always result in serious harm. Thus, there exists a threshold in the sense of 'the minimum quantity of any given intermediate need satisfaction required to produce the optimal level of basic need satisfaction' (Gough, 2014). This implies that future urban development and planning need to secure everyone's ability to obtain a minimum quality of life, for example by safeguarding everyone's access to adequate housing (UN, 1948).
Step 5: A post-growth scenario of sustainable urban development and planning
An exercise of utopian thinking can be conducted by developing a post-growth urban sustainability scenario embedding the above formulated values (Table 1). A general principle for post-growth urban development and planning draws on both the 'eco-efficiency' and 'sufficiency' dimensions in relation to mobility, land use, residential floor space and infrastructure, with an emphasis on the latter. The scenario principles shown in Table 1 will be briefly fleshed out by using the Oslo region as a case.
[Table 1. Principles of the post-growth scenario of sustainable urban development and planning; for a full account of the scenarios, see Mete and Xue, 2020; Naess and Xue, 2016; Xue et al., 2017.]
Concerning urban spatial structure, dense development would be combined with setting a cap on urban land consumption per capita. If the urban area per capita within the continuous urbanized area of Oslo could be reduced from 277 m² (the 2015 figure) to 200 m² in 2035, accommodating a projected population growth of 350,000 would save 66 km² of undeveloped land from being converted to built-up areas. This would in practice mean stricter land regulations for the regional cities surrounding Oslo. The construction of new and more environmentally friendly buildings through densification should be combined with the abolishment of some of the environmentally least favourable built environments, such as car-dependent office parks, shopping malls and single-family home areas. This could widen the possibilities for nature regeneration projects as well as enhance the integrity and coherence of the natural areas and landscapes surrounding the city. Post-growth housing policies would primarily focus on setting maximum standards for housing consumption per capita out of concern for climate mitigation and environmental protection. A mere reduction of 6 m² in residential floor area per capita, from 50 to 44 m², would stabilize the total residential energy consumption, even in the face of population growth in the Oslo region (Mete and Xue, 2020). Meanwhile, the ethical premises of securing an equal distribution of housing and everyone's access to housing suggest setting a minimum standard for housing consumption. Regarding transportation, in contrast to green growth-oriented transport planning in which continually increasing mobility is considered essential for competitiveness and growth, a post-growth scenario assumes a reversal of the present trend, with stabilized or even declining traffic volumes, particularly for passenger cars. This would necessitate a change in our travel behaviours towards a certain degree of collective self-limitation, confining most of our major activities to the local area. In the Oslo region, this suggests containing the current regional enlargement infrastructure policy, in addition to densification that enhances the proximity of activity destinations. Simultaneously, the planned expansion of motorways in Oslo should not only be halted, but the existing road spaces could be replaced by environmentally friendly elements, such as biking infrastructure, walkable neighbourhoods and better public transport services.
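To make the arithmetic behind these two per-capita caps explicit, the sketch below recomputes them. The baseline population of the continuous urbanized area is a hypothetical placeholder (the text does not state it), so the land figure will not exactly reproduce the 66 km² reported above, which depends on the study's actual baseline and accounting; the mechanics, however, are the same.

# Hedged sketch of the Step 5 per-capita cap arithmetic.
base_pop = 900_000       # HYPOTHETICAL 2015 population of the urbanized area
growth = 350_000         # projected population growth (from the text)
a_now, a_cap = 277, 200  # urban land per capita in m^2 (from the text)

future_pop = base_pop + growth
# New land converted if 277 m^2 per capita persists:
land_trend = future_pop * a_now - base_pop * a_now
# New land converted under the 200 m^2 per capita cap (cannot be negative):
land_cap = max(future_pop * a_cap - base_pop * a_now, 0)
print(f"undeveloped land saved: {(land_trend - land_cap) / 1e6:.0f} km^2")

# Floor-area cap: cutting from 50 to 44 m^2 per capita keeps total residential
# floor area (a rough proxy for residential energy use) constant as long as
# the population grows by no more than 50/44 - 1, i.e. about 13.6%.
print(f"population growth absorbed by the 50->44 m^2 cut: {50 / 44 - 1:.1%}")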
Step 6: Transformative planners
The articulated post-growth normative values and urban development scenario suggest an alternative to sustainability planning based on the green growth ideology. In the course of resisting and even eradicating the false belief and pursuing urban sustainability transformation beyond green growth, planners have to overcome the 'passivity' depicted in the literature and strategically engage in transformative actions. Inspired by critical realism, the premise for planners to perform transformative actions is to become self-conscious of their values, interests and projects so as to form a corporate agency. Compared to the primary agency of planners who passively react and respond to their given context, a corporate agency of planners is aware of its role in representing wider public interests, articulates the values of planning and strategically organizes their pursuit. Tactically, corporate planners, whilst pursuing the eradication of the green growth ideology and the materialization of post-growth urban sustainability, need to know the distribution of resources among relevant corporate agents and their own (in)accessibility to certain resources. With the bargaining power endowed by their accessibility to certain resources, corporate planners can convert this into negotiating strength in particular relationships with other corporate agents. Based on these strengths, planners can engage in strategic interaction with other corporate agents to build alliances to attain joint or mutually compatible goals.
Conclusion
In this paper, I have sought to bring a critical realist approach into the planning debates on ideology, arguing for its value in promoting the transformative strength of planning. To recap, the critical realist ideology critique brings the following distinctive values into the debates. Firstly, ideology inquiries in planning have predominantly focused on the functions and effects of ideology. Critical realism goes beyond this focus by drawing attention to the content of the beliefs as well as their impacts on planning strategies. By adopting an evaluative stance on ideas and beliefs, critical realism revitalizes the epistemological inquiry that has been dispensed with by most recent ideology theorists. As beliefs shape planning strategies which, in turn, generate interventions into spatial reality, it is vital to scrutinize the content of beliefs to understand the effectiveness of planning ideas in influencing environment, people and society. This is essential as regards planning strategies that aim at transforming urban development towards desirable futures.
Secondly, critical realism allows the development of a holistic explanation of the (re)production of ideology. Ideologies in planning can thus be explained by delving into deep social structures beyond the sphere of our observations and experiences. In doing so, the critical realist approach makes it possible to embrace interdisciplinary explorations of explanations and, in a fruitful way, to accommodate concrete theories from different disciplines relevant to the exploration. This explanatory critique can also deepen and broaden the understanding of the wider social roots that shape ideas in planning theory and practice.
Thirdly, critical realism shows the potential of reconciling post-structuralist and realist approaches to ideology, combining the epistemological dimension of beliefs (by evaluating ideas) with the sociological one (by unpacking the ideological mechanisms).
Last but not least, the critical realist ideology critique moves beyond a pure critique of beliefs and ideas to include positive action to transform the ideology. For critical realism, positive action is a necessary and natural step subsequent to negative criticism, as an evaluation of ideas and an explanation of the social conditions for their generation cannot be conducted without normative values and an imagining of better alternatives. Although ideology theories that primarily 'diagnose' ideologies can also inform transformative actions, the critical realist approach arguably offers 'prescriptions' by providing planners with a stronger and more solid theoretical ground on which to carry out transformative actions.
All in all, these distinct values reinforce the transformative edge of the critical realist ideology critique, and the approach is particularly fruitful for planning, whose ultimate goal is achieving societal progress and transformation.
However, it should be emphasized that any attempt at proposing alternative values and concrete utopias should avoid the trap of the TINA formation; in other words, they should not be framed as presenting the only possible way. The intention of proposing alternatives is to refute the claim in the ideology that 'there is no alternative'. They do not represent the only possible pathway to the future and should not be reduced to imperatives for everyone to select a certain action (Sayer, 2012). In the context of urban sustainability, due to limited space, I have only presented one post-growth urban development scenario. Several urban future scenarios could have been framed to demonstrate possible strategic choices as well as to compare them. Such an approach would liberate planners and citizens from existing ideological constraints, provide more space for debate and secure ground for promoting collective and conscious choices. Furthermore, the ethical reasoning is part of an attempt at exploring moral truth and, by its nature, the proposed ethical propositions are subject to contestation and debate. Unlike the TINA formation, alternative ethical propositions are formulated without the intention to exclude, instruct, preach or impose. Instead, they invite the engagement of ethical arguments and rational judgement. Through this process, we can discover ethical premises that are close to moral truth in a particular geo-historical context. | 2022-02-26T00:15:45.666Z | 2022-02-22T00:00:00.000 | {
"year": 2022,
"sha1": "c15dc8e98df74bebad77ec9e551448b5c5eb17de",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/14730952211073330",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "a861d85db59177a91cb64e8d238bcda42778cbd3",
"s2fieldsofstudy": [
"Political Science",
"Philosophy",
"Environmental Science"
],
"extfieldsofstudy": []
} |
265064651 | pes2o/s2orc | v3-fos-license | Spatially resolved mapping of proteome turnover dynamics with subcellular precision
Cellular activities are commonly associated with dynamic proteomic changes at the subcellular level. Although several techniques are available to quantify whole-cell protein turnover dynamics, such measurements often lack sufficient spatial resolution at the subcellular level. Herein, we report the development of the prox-SILAC method, which combines proximity-dependent protein labeling (APEX2/HRP) with metabolic incorporation of stable isotopes (pulse-SILAC) to map newly synthesized proteins with subcellular spatial resolution. We apply prox-SILAC to investigate proteome dynamics in the mitochondrial matrix and the endoplasmic reticulum (ER) lumen. Our analysis reveals a highly heterogeneous distribution of protein turnover dynamics within macromolecular machineries such as the mitochondrial ribosome and respiratory complexes I-V, thus shedding light on their mechanism of hierarchical assembly. Furthermore, we investigate the dynamic changes of the ER proteome when cells are challenged with stress or undergoing stimulated differentiation, identifying subsets of proteins with unique patterns of turnover dynamics, which may play key regulatory roles in alleviating stress or promoting differentiation. We envision that prox-SILAC could be broadly applied to profile protein turnover at various subcellular compartments, under both physiological and pathological conditions.
Feng Yuan 1,4, Yi Li 2,4, Xinyue Zhou 1, Peiyuan Meng 2 & Peng Zou 1,2,3

Eukaryotic cells are highly compartmentalized. To achieve proper biological functions, proteins are constantly trafficking within the elaborate cellular architecture throughout their life cycle. Take the secretory pathway as an example. Membrane receptors are initially synthesized in the endoplasmic reticulum (ER) 1 and subsequently transported to the Golgi apparatus before reaching the plasma membrane 2, where they replace "older" receptors that are sorted to the endosomes and destined for degradation in the lysosome 3 or the proteasome 4. This journey covers nearly half of the cellular space. In each of these subcellular compartments, protein turnover dynamics is carefully orchestrated to ensure the proper folding and functioning of these biomolecules. The rate of turnover may vary substantially between different proteins [5][6][7] and different compartments, and may change rapidly when cells are under stress or undergo differentiation 8. Thus, mapping protein turnover with subcellular precision is valuable for understanding cellular activities under physiological and pathological conditions.
A number of chemical tools have been developed to measure protein turnover dynamics at the whole-cell level. Newly synthesized proteins can be tagged with non-canonical amino acids 9,10. For example, the application of bioorthogonal non-canonical amino acid tagging (BONCAT) 11 to hippocampal synapses discovered around 300 differentially regulated proteins involved in biological processes such as neurite outgrowth and axonal guidance 11. More recently, stable isotope labels (pulse-SILAC 12,13) have been used to quantify protein turnover in mammalian cell lines, primary neuronal culture 6 and in vivo systems 14,15, revealing broad distributions of protein turnover half-lives in NIH3T3 cells (median 46 h) 16 and HeLa cells (mean 20 h) 13. Applications of pulse-SILAC have also resolved the assembly kinetics of the mitochondrial respiratory complexes 17, the mitochondrial ribosome 18, the nuclear pore complex 7, etc. While the above tools are powerful for analyzing protein turnover, they are only applicable at the whole-cell proteome level and lack subcellular spatial resolution.
To address this issue, pulse-SILAC labeling has been combined with organelle purification workflows to specifically measure protein turnover dynamics in the mitochondria 17 and the nucleus 19. However, organelle purification methods are prone to introducing contamination and are not generally applicable to other subcellular locations, which motivated us to explore alternative methods for studying subcellular proteome turnover. Over the past decade, enzyme-mediated proximity labeling techniques, including engineered promiscuous biotin ligases (BioID 20, TurboID 21) and peroxidases (APEX2 22,23/HRP 24), have been developed for spatially resolved profiling of subcellular proteomes. These genetically targetable enzymes are capable of generating highly reactive intermediates in situ (e.g. biotinyl 5'-adenylate 20,21 or biotin-conjugated phenoxyl free radicals [22][23][24]), which rapidly react with proximal proteins to form covalent bonds. However, TurboID labeling typically requires over 10 min, which is unsuitable for studying protein turnover dynamics. The high reactivity and short lifetime (<1 μs) of the phenoxyl free radical have enabled APEX2/HRP to achieve 10-nanometer-scale spatial resolution within 1 min of labeling 25. While these methods have been broadly applied to profile the abundance of proteins within subcellular structures (e.g. mitochondria 26,27, ER 28, primary cilia 29, etc.), they have not been used to investigate the turnover dynamics of subcellular proteomes 25. In a previous work, APEX2 labeling was coupled with multi-isotope imaging mass spectrometry to reveal the heterogeneity of protein turnover in lysosomes 30. However, that work was not extended to the proteome level.
In the current study, we aim to develop such a method, named prox-SILAC, which combines proximity labeling with pulse-SILAC metabolic tagging of the nascent proteome. We chose APEX2/HRP as the proximity labeling method for its superior reaction kinetics. As outlined in Fig. 1A, APEX2/HRP is targeted to a specific subcellular location via fusion with a signal sequence or protein markers. To quantify protein turnover dynamics, the cell culture medium is replaced with heavy SILAC medium containing isotope-labeled lysine and arginine for several hours 31,32. Proximity labeling is triggered at the end of pulse-SILAC to label all proteins, both old and new, that are in the vicinity of APEX2/HRP. Biotinylated proteins are subsequently enriched and analyzed by liquid chromatography-tandem mass spectrometry (LC-MS/MS). The H/L ratios of mass spectrometric intensities represent the metabolic replacement fractions (MRFs), which are used to quantify protein turnover.
Characterizing the prox-SILAC method in the mitochondrial matrix
We chose the mitochondria as a model to examine the spatial specificity and quantitation precision of the prox-SILAC method. In human embryonic kidney 293T (HEK293T) cells, APEX2 is targeted to the mitochondrial matrix (mito-APEX2) via N-terminal fusion with the mitochondrial targeting sequence of human cytochrome c oxidase (COX4) 33,34. Following pulse-SILAC and proximity labeling, biotinylation was visualized with immunofluorescence, which showed highly colocalized signals between biotinylated proteins and the mitochondrial marker TOMM20 (Fig. 1B). To quantify protein turnover, we chose 4 h, 8 h and 12 h as the durations of pulse-SILAC labeling and performed replicated experiments for each time point. Following proximity labeling, cells were lysed and biotinylated proteins were enriched with streptavidin-coated beads. Both western blot analysis and silver staining confirmed successful biotinylation and protein enrichment across replicates at 8 h and 12 h (Supplementary Fig. 1). For the 4 h experiment, silver staining revealed low protein enrichment efficiency, but a sufficient amount of peptides was obtained for subsequent MS analysis (Supplementary Fig. 1). In negative control samples omitting APEX2, the biotin-phenol substrate, or hydrogen peroxide, only endogenous biotinylated proteins were detected (Supplementary Fig. 2). Enriched proteins were digested with trypsin and analyzed by LC-MS/MS.
A total of 229, 612 and 410 proteins were identified and quantified across the 4-, 8- and 12-h replicated pulse-SILAC experiments, respectively. The overlap of these three groups yielded a list of 183 proteins (Supplementary Fig. 3, Supplementary Data 1), including 162 (85%) proteins with mitochondrial annotation in the Uniprot database, which has an established inventory of mitochondrial proteins (Fig. 1C, Supplementary Fig. 4). This level of spatial specificity is comparable to previous reports 26. Notably, the coverage of the mitochondrial proteome is lower than in a previous report using mito-APEX 26, which we attribute to differences in the sample preparation workflow and LC-MS/MS methods between the two experiments (see Methods). As a further demonstration of mitochondrial matrix specificity, proteins identified by mito-APEX2 were mapped onto the structures of the respiratory complexes in the inner mitochondrial membrane (IMM), revealing that only matrix-exposed protein subunits are captured by mito-APEX2 labeling (Fig. 1G). This observation is consistent with the view that the IMM restricts the diffusion of phenoxyl free radicals. For each quantified protein, we define its metabolic replacement fraction as MRF = (H/L) / (1 + H/L), where H/L is the measured SILAC ratio. The calculated MRF values are highly correlated between replicates, with Pearson's correlation coefficients ranging between 0.90 and 0.95 (Fig. 1D and Supplementary Fig. 5). Taken together, the above analysis demonstrates the high spatial specificity, good proteomic coverage, and high reproducibility of the prox-SILAC method.
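To make the quantitation step concrete, the following minimal Python sketch converts SILAC H/L ratios into MRF values and checks replicate agreement, as described above. The input file name and column names are hypothetical placeholders, not part of the published workflow.

import pandas as pd
from scipy.stats import pearsonr

# Hypothetical input: one row per protein, with H/L ratios from two replicates.
ratios = pd.read_csv("mito_prox_silac_8h.csv")  # columns: protein, HL_rep1, HL_rep2

# MRF = (H/L) / (1 + H/L), per the definition in the text.
for rep in ("HL_rep1", "HL_rep2"):
    ratios[rep.replace("HL", "MRF")] = ratios[rep] / (1.0 + ratios[rep])

# Keep only proteins quantified in both replicates, as in the paper,
# and report the Pearson correlation between replicate MRF values.
both = ratios.dropna(subset=["MRF_rep1", "MRF_rep2"]).copy()
r, _ = pearsonr(both["MRF_rep1"], both["MRF_rep2"])
both["MRF_mean"] = both[["MRF_rep1", "MRF_rep2"]].mean(axis=1)
print(f"n = {len(both)}, Pearson r = {r:.2f}")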
As we increased the duration of pulse-SILAC, the overall MRF values increased substantially, from an average of 22 ± 11% (mean ± s.d.) at 4 h to 40 ± 10% at 12 h (Fig. 1D and Supplementary Fig. 5). At each time point, we observed a broad and asymmetric distribution of MRFs: whereas the majority of proteins have low MRFs, a small subset of proteins exhibits distinctively higher MRFs (Fig. 1E). For example, the highest and lowest MRFs measured at 4 h (MRF 4h) differ by as much as 7-fold (79 ± 2% vs. 11 ± 4%) (Fig. 1E, Supplementary Data 1). Since, at steady state, the abundance of cellular proteins should double once per cell cycle, the baseline MRF is expected to be approximately 2% per hour. Indeed, the slowest turnover, as observed for several metabolic enzymes (e.g. FADH1 and ETFB), occurs at the low level of 3% per hour. Intrigued by the broad MRF distribution, we divided our dataset into three categories according to the MRF values measured at 8 h (MRF 8h): "high MRF" (>25%), "medium MRF" (between 10% and 25%), and "low MRF" (<10%). Gene Ontology analysis reveals that proteins involved in the electron transport chain tend to have higher MRFs, whereas proteins related to mitochondrial organization and lipid metabolism are characterized by low MRFs (Supplementary Fig. 6).
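The three-way binning used for the Gene Ontology analysis can be written out as a small helper. A minimal sketch follows, with thresholds taken from the text; the example values are illustrative rather than measured data.

def mrf_category(mrf_8h: float) -> str:
    # Thresholds from the text: >25% high, 10-25% medium, <10% low.
    if mrf_8h > 0.25:
        return "high MRF"
    if mrf_8h >= 0.10:
        return "medium MRF"
    return "low MRF"

# Illustrative values only (NDUFB9 is described as fast-turnover in the text).
examples = {"NDUFB9": 0.78, "ETFB": 0.24, "hypothetical_protein": 0.08}
for protein, mrf in examples.items():
    print(protein, "->", mrf_category(mrf))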
To further analyze the heterogeneity of MRF values within protein complexes, we mapped the MRF 8h values onto the structures of respiratory complexes (RC) I to V (Fig. 1G, Supplementary Data 1). Protein components of RC-I overall have the highest MRF 8h (57 ± 16%), whereas those of RC-V have the lowest (35 ± 14%) (Supplementary Fig. 7, Supplementary Data 1). This observation agrees with the conclusion drawn from a previous study using HeLa cells as a model 17. Within RC-I, we observed the highest MRF 8h for protein subunits NDUFA13, NDUFB8 and NDUFB9, with an average of 78 ± 6%. Similarly, we observed large variations in MRF 8h among protein components of the mitochondrial ribosome (Fig. 1F, Supplementary Data 1), with proteins in the large subunit having higher MRF 8h (58 ± 14%) than proteins in the small subunit (47 ± 11%). Mapping the MRF 8h values onto the mitochondrial ribosome structure reveals patches of high and low MRF regions, indicating that neighboring proteins tend to share similar turnover dynamics (Fig. 1F).
Profiling ER protein turnover with prox-SILAC

Having established our prox-SILAC method in the mitochondrial matrix, we next focused on the ER, which is a hub of secretory pathway proteins (Supplementary Fig. 11). To label ER proteins, we chose HRP instead of APEX2 for its higher labeling efficiency in the oxidizing environment of the ER lumen 23. HRP is targeted to the ER lumen via both N-terminal fusion with the signal sequence derived from immunoglobulin Igκ and C-terminal fusion with the ER retention motif KDEL (ss-HRP-KDEL). Immunofluorescence imaging of HEK293T cells stably expressing ss-HRP-KDEL confirmed the good co-localization between protein biotinylation and the ER marker, calnexin (Fig. 2A).
We performed duplicated prox-SILAC experiments with isotope tagging for 4 h, 8 h and 12 h. Both western blot and silver staining confirmed successful protein biotinylation and affinity enrichment (Supplementary Fig. 12). LC-MS/MS analysis identified a total of 510, 210 and 265 proteins across replicated experiments of 4-, 8- and 12-h prox-SILAC, respectively, with an overlap of 186 proteins (Fig. 2B and Supplementary Fig. 13, Supplementary Data 2). Gene Ontology cellular component (GOCC) analysis revealed a remarkable secretory pathway specificity of 97% (Fig. 2B). As in the mitochondrial matrix, the MRF values of identified ER proteins are highly correlated between replicated experiments at the three time points, with Pearson's correlation coefficients ranging between 0.96 and 1.00 (Fig. 2C and Supplementary Fig. 14). Interestingly, the MRF 8h values of ER proteins (38 ± 18%) are overall substantially higher than those of mitochondrial matrix proteins (31 ± 12%) (Fig. 2D, E). This phenomenon may be attributed in part to anterograde protein trafficking from the ER to the Golgi apparatus, which causes a rapid clearance of "older" secretory pathway proteins from the ER lumen.
According to this model, ER resident proteins such as chaperones would have significantly lower MRFs than trafficking proteins such as plasma membrane receptors (Fig. 2E). To test this model, we repeated the above ER prox-SILAC experiments with shorter time windows (i.e. 1 h and 2 h) of isotope labeling, to more accurately characterize the MRF values of ER proteins (Supplementary Fig. 14). Indeed, the MRF 2h values of post-ER secretory pathway proteins (27 ± 20%) are on average 2-fold higher than those of ER resident proteins (14 ± 10%). In addition, we calculated the half-lives (t 1/2) of the 183 proteins identified across all five time points. Their t 1/2 values are broadly distributed from 0.9 h to 24.7 h, with a mean of 14.3 h (Supplementary Data 2, Supplementary Fig. 15). The protein with the shortest t 1/2 (0.9 h) in the ER is amyloid-beta precursor protein (APP), a cell surface receptor relevant to neurite growth and neuronal adhesion. In contrast, the protein with the longest t 1/2 (24.7 h) is calreticulin, an ER resident protein.
Notably, the measured MRF values reflect the rates of both protein degradation and trafficking. To further evaluate the contribution from protein degradation, we analyzed the turnover dynamics of the whole-cell proteome. In two replicated pulse-SILAC experiments, we identified and quantified the MRF 2h values of 2253 proteins in the whole-cell lysate (Supplementary Fig. 16), among which 89 proteins overlapped with our ER prox-SILAC dataset (Supplementary Fig. 17). When comparing their MRF 2h values measured by either whole-cell pulse-SILAC or ER prox-SILAC, we observed distinct patterns for post-ER trafficking proteins (e.g. proteins destined for the plasma membrane, Golgi apparatus or lysosome) versus ER resident proteins (Fig. 2F). The MRF 2h values of post-ER trafficking proteins are significantly higher in prox-SILAC than in pulse-SILAC, indicating that their rapid clearance from the ER is caused by trafficking rather than by degradation. Since many cellular proteins are constantly trafficking between multiple subcellular compartments, it is more informative to measure protein turnover at the subcellular level, where values may differ substantially from those measured at the whole-cell level.
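The logic of this comparison can be expressed as a simple rule: a protein whose ER (prox-SILAC) MRF greatly exceeds its whole-cell (pulse-SILAC) MRF is being cleared from the ER mainly by trafficking rather than degradation. A minimal sketch follows; the 2-fold cutoff and the example values are illustrative assumptions, not thresholds from the paper.

def trafficking_dominated(mrf_prox: float, mrf_pulse: float,
                          fold_cutoff: float = 2.0) -> bool:
    # ER clearance dominated by trafficking if the prox MRF greatly
    # exceeds the whole-cell pulse MRF.
    return mrf_prox >= fold_cutoff * mrf_pulse

examples = [("post-ER receptor (illustrative)", 0.40, 0.10),
            ("ER resident (illustrative)", 0.12, 0.11)]
for name, prox, pulse in examples:
    print(name, "trafficking-dominated:", trafficking_dominated(prox, pulse))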
To further investigate protein turnover at the plasma membrane, we performed 2-h pulse-SILAC labeling in HEK293T cells and labeled the cell surface protein population with the membrane-impermeant chemical reagent, Sulfo-NHS-LC-Biotin. As in our prox-SILAC workflow, biotinylated proteins were subsequently enriched with streptavidin-coated beads and analyzed by LC-MS/MS to derive the MRF 2h values (Supplementary Fig. 18). In two replicated experiments, we identified and quantified 508 cell surface proteins (Supplementary Fig. 19). For the 34 proteins identified in both the ER lumen and cell surface prox-SILAC datasets, the MRF 2h values differ substantially (Supplementary Fig. 20). Membrane receptors and ion pumps such as TFRC (71 ± 1% vs. 4 ± 0%), ATP1A1 (68 ± 1% vs. 19 ± 0%) and IGF2R (55 ± 3% vs. 5 ± 3%) exhibit much higher ER lumen MRF 2h values than cell surface MRF 2h values, which is attributed to the time required for them to traffic from the ER lumen to the plasma membrane. To our surprise, two protein chaperones, SERPINH1 (5.0 ± 0% vs. 86 ± 0%) and PDIA6 (4 ± 5% vs. 56 ± 16%), exhibit substantially lower ER MRF 2h values than cell surface MRF 2h values. SERPINH1 (Hsp47) is a collagen-specific molecular chaperone and plays an important role in collagen biosynthesis 35. While PDIA6 mainly functions as a chaperone that inhibits the aggregation of misfolded proteins during the unfolded protein response (UPR) 36, it has also been reported to bind the integrin β3 subunit on the cell surface to promote platelet activation 37. We thus speculate that the higher MRF values of these protein chaperones may be associated with their distinct functions on the cell surface. Together, the above comparisons of prox-SILAC at various subcellular compartments highlight its advantage for investigating spatially resolved protein turnover.
Mapping subcellular protein turnover under ER stress

The ER stress response, which can be induced through multiple pathways including defective protein folding, has been implicated in the onset and progression of an array of diseases ranging from cancer to neurodegeneration 38. It is known that proteome homeostasis changes considerably when cells are under stress. Specifically, the rates of protein synthesis, degradation and trafficking are tuned to better cope with the stress condition. To depict a detailed picture of protein turnover remodeling, we applied prox-SILAC to HeLa cells under ER stress induced by thapsigargin, a small-molecule inhibitor of Ca2+ transport in the ER (Fig. 3A, B). Successful induction of ER stress was confirmed by the up-regulation of the chaperone marker BiP (Supplementary Fig. 21).
We performed duplicated ER prox-SILAC experiments for both thapsigargin-treated (2 h) and control samples, identifying 306 and 320 proteins, respectively (Fig. 3C). The overlapping 265 proteins have a secretory pathway specificity of 97% (Fig. 3D and Supplementary Fig. 22, Supplementary Data 3). Compared to the control sample, ER stress causes a global down-regulation of protein turnover in the ER (Fig. 3E), which is consistent with the translational shutdown observed in previous studies of the UPR 39,40 and ER-associated degradation (ERAD) 41 pathways. Notably, against this slow-turnover background, we observed significantly elevated MRF 2h values for a handful of proteins, including HSPA5, HSPA6, JAGN1 and LMAN1 (Fig. 3E). For example, the MRF 2h of the protein chaperones HSPA5 and HSPA6 increased from approximately 4 ± 0% at the basal level to 15 ± 1% and 16 ± 0% under ER stress, respectively. It has been reported that the activation of UPR pathways promotes the expression of chaperones including HSP proteins and BiP 42. Consistent with this view, our observed increase in MRF 2h indicates the rapid entry of these proteins into the ER lumen. Similarly, the observed MRF 2h of the mannose-specific lectin LMAN1 doubled from 5 ± 0% to 10 ± 1% upon ER stress, which is in agreement with stress-induced LMAN1 transcription 43 and its enrichment in the Golgi 44, causing rapid replacement of the ER population with newly synthesized LMAN1 protein.
We also performed 2-h pulse-SILAC experiments using the HeLa ss-HRP-KDEL cell line with or without thapsigargin treatment. We quantified the MRF 2h values of 1429 and 1598 proteins in three replicates of thapsigargin-treated and control samples, respectively, resulting in 1202 overlapping proteins (Supplementary Figs. 23-24, Supplementary Data 3). We extracted the secretory pathway proteins from the pulse-SILAC experiments using GOCC annotations, generating a pulse-SILAC × subcellular information dataset of 584 proteins. Overall, much higher MRF 2h values were observed for post-ER trafficking proteins in the prox-SILAC experiments than in pulse-SILAC (Supplementary Fig. 25). While ER stress causes a significant global reduction of the MRF 2h values measured by prox-SILAC (mean values 18% in thapsigargin-treated vs. 26% in the control), the decrease is less noticeable in pulse-SILAC (13% in thapsigargin-treated vs. 14% in the control) (Fig. 3E).
Mapping subcellular protein turnover during neurite growth
The cellular proteomic landscape changes dramatically during differentiation 8. As a neuroblastoma-derived cell line, SH-SY5Y cells can be induced to differentiate into mature neuron-like cells and are often used as a Parkinson's disease model 45,46. Since neurite growth is often accompanied by rapid turnover of secretory pathway proteins involved in membrane structural maintenance, we sought to apply ER prox-SILAC to quantify protein turnover changes in SH-SY5Y cells undergoing stimulated neurite growth. We created an SH-SY5Y cell line stably expressing ss-HRP-KDEL. Confocal immunofluorescence imaging confirmed the co-localization between HRP-mediated protein biotinylation and the ER marker calnexin (Supplementary Fig. 26). To induce neurite growth, we treated SH-SY5Y cells with 10 μM all-trans retinoic acid (ATRA) [47][48][49] for 7 days, followed by applying brain-derived neurotrophic factor (BDNF) 50 for another 7 days (Fig. 4A). At the end of the 14-day differentiation protocol, the neurite length increased from 20 ± 6 μm to 53 ± 12 μm (Fig. 4A).
We focused on the early stage of differentiation and performed duplicated prox-SILAC experiments at days 0 (D0), 7 (D7) and 10 (D10), with cells pulse-labeled with heavy isotope-encoded lysine and arginine for 2 h. A total of 486, 356 and 314 proteins were identified and quantified at D0, D7 and D10, respectively (Fig. 4B, Supplementary Data 4), with secretory pathway specificity ranging between 90% and 94% (Supplementary Fig. 27). Overall, protein turnover in the ER lumen gradually decreased as cells underwent differentiation, as revealed by the leftward shift in the cumulative distribution curves of the MRF 2h values (Fig. 4C).
Finally, for the 187 proteins overlapping across the six replicated experiments performed at D0, D7 and D10, the MRF 2h values were clustered into six categories (Fig. 4F, Supplementary Data 4). Category I, which includes several cell adhesion molecules (e.g. N-CAM-L1, N-CAM-1, Nr-CAM), features steadily decreasing MRF 2h values throughout the differentiation process. In contrast, the MRF 2h values of category IV proteins gradually increase during differentiation. Members of this category include prolyl 4-hydroxylase subunits (P4HA1, P4HA2 and P4HB), peptidyl-prolyl cis-trans isomerases (FKBP7 and FKBP10), and reticulocalbins (RCN1, RCN2 and RCN3). Interestingly, category VI proteins feature a peak in their MRFs at D7 and account for 14% (26 out of 187) of the proteins in our prox-SILAC dataset. Together, the above data demonstrate the power of prox-SILAC to resolve protein turnover dynamics in a model of neuronal differentiation. Our data shed light on the cellular capacity for remodeling protein turnover dynamics as a means of fine-tuning the membrane proteome, which may serve to better adapt to morphological changes such as neurite growth.
In parallel, we performed 2-h pulse-SILAC experiments at D0, D7 and D10 during neurite growth, resulting in the identification of 1687, 1381 and 1496 proteins at these time points, respectively (Supplementary Figs. 28-29, Supplementary Data 4). Similar to our previous comparisons between prox-SILAC and pulse-SILAC in the ER, overall higher MRF 2h values were observed in prox-SILAC at D0 and D7, particularly for post-ER trafficking proteins (Supplementary Fig. 30). However, the difference is much smaller at D10, where the MRF 2h values are lower in both prox-SILAC and pulse-SILAC (Supplementary Fig. 30).
Discussion
To summarize, we have developed the prox-SILAC method, which combines peroxidase-mediated proximity labeling with pulse-SILAC tagging of newly synthesized proteins. Conventional pulse-SILAC labeling at the whole-cell level reveals protein turnover dynamics arising from protein synthesis and degradation. In contrast, the MRF values measured with prox-SILAC reflect the combined effects of not only protein synthesis and degradation, but also local trafficking at specific subcellular localizations. In the past, subcellular protein turnover dynamics was measured by combining fractionation with pulse-SILAC labeling. This has been successfully implemented for organelles that can be readily purified, such as the mitochondria 17 and the nucleus 19. However, such an approach cannot easily be extended to other compartments, including the highly tubular ER structure. Isolated microsomes have been used to profile the ER proteome, but generated incomplete datasets with poor overlap between independent studies 51,52. Over the past decade, APEX2 and HRP have been applied to resolve the proteome in multiple subcellular compartments, including both membrane-bound spaces (e.g. the ER-PM junction 53, the synaptic cleft 54, etc.) and membraneless condensates (e.g. stress granules 55, nuclear speckles 56, etc.). We envision that prox-SILAC could add to this toolbox by providing protein turnover information in addition to protein abundance.
In the mitochondrial matrix, we have demonstrated the high spatial specificity of proximity labeling and the high accuracy of protein turnover quantitation. In the mitochondria of HEK293T cells, we observed a heterogeneous distribution of protein turnover dynamics among respiratory complex I-V subunits, with overall higher MRF in RC-I and lower in RC-V. The observed differences in the MRF 8h values of OXPHOS subunits between pulse-SILAC (average 34%) and prox-SILAC (average 50%) may be attributed to protein translocation. Our data suggest that a small sub-population of OXPHOS proteins, such as NDUFB9 and ATP5C1, may not be accessible to mitochondrial matrix-targeted APEX2 (e.g. residing on the outer membrane as newly synthesized proteins, or outside mitochondria), although we do not have experimental data to support this view.
Notably, remarkable variations in the MRF values of protein subunits within mitochondrial ribosome complexes and respiratory complexes have also been reported in previous work using pulse-SILAC labeling in HeLa cells 17. In those studies, the H/L ratios exhibited a 5-fold difference among ribosomal proteins and a 4-fold difference among respiratory complex proteins, indicating that these proteins are synthesized and imported into mitochondria in considerable excess over the minimum amount required to support assembly. This observation reveals a complicated assembly process that warrants further investigation. As OXPHOS components are liable to be oxidized by electrons leaking from the electron transport chain, excess copies in the mitochondria may act as a reserve to quickly replace damaged subunits, thus keeping the OXPHOS machineries functioning smoothly.
As the MRF values of duplicated prox-SILAC experiments are highly correlated, with Pearson's correlation coefficients >0.9, the mean MRF values were used for subsequent analysis. Notably, negative changes in MRF values during the prox-SILAC time course were observed for some proteins, which typically happens when their MRF values are high or nearly saturated. A similar trend has been reported in previous pulse-SILAC studies 17 and could be attributed to measurement errors in mass spectrometry quantitation. To avoid measurements near the MRF saturation level, we focused on an early time point of 2 h for the subsequent prox- and pulse-SILAC experiments.
Application of prox-SILAC to the ER lumen reveals distinct distributions of MRF for post-ER trafficking proteins versus ER resident proteins. The substantially higher MRF values of post-ER trafficking proteins are consistent with the model of protein trafficking through the secretory pathway. Such a distinction could not be achieved with whole-cell pulse-SILAC methods, highlighting the unique advantage of the subcellular spatial resolution provided by prox-SILAC.
When cells are challenged with ER stress, the UPR and ERAD pathways are activated to alleviate stress, leading to global translational shutdown and the expression of specific proteins involved in stress-response pathways 57. Indeed, our prox-SILAC dataset reveals a general trend of suppressed protein turnover during thapsigargin-induced ER stress. Exceptions to this trend include several proteins relevant to the ER stress response (e.g. heat shock protein chaperones), which exhibit substantially elevated MRFs under stress compared to physiological conditions. For example, the ER resident protein involved in vesicular trafficking, JAGN1 (protein jagunal homolog 1), has recently been implicated in the ER stress response 58. In our dataset, the MRF 2h of JAGN1 increases by nearly 3-fold (from 6 ± 0% to 16 ± 3%) upon ER stress.
Finally, we applied prox-SILAC to investigate protein turnover dynamics in cells undergoing stimulated differentiation, using the neuroblastoma cell line SH-SY5Y as a model. SH-SY5Y cells have been commonly used for studying neural differentiation and the pathogenesis of Parkinson's disease 59. Our data reveal a gradual decrease in protein turnover as cells progress through ATRA/BDNF-induced neurite growth. This trend is particularly evident for cell adhesion proteins, which are involved in regulating cell motility. In contrast, several secretory pathway enzymes involved in protein maturation (i.e. post-translational modification, prolyl cis-trans isomerization, etc.) tend to slowly increase their turnover during differentiation. Together, the above examples demonstrate the power of prox-SILAC for mapping subcellular protein turnover dynamics in both acute (on the time scale of hours) stress responses and long-term (on the time scale of days) adaptation scenarios.
For all the above prox-SILAC experiments, we performed the equivalent pulse-SILAC experiments at the whole-cell level. Prox-SILAC offers unique coverage of the subcellular proteome compared to pulse-SILAC. For example, our mitochondrial matrix prox-SILAC dataset contains 72 proteins that are missed by pulse-SILAC, despite the overall larger coverage of the latter (407 vs. 162 mitochondrial proteins). Similarly, our ER lumen prox-SILAC datasets in HEK293T cells, HeLa cells, and thapsigargin-stressed HeLa cells uniquely identified 94, 186, and 172 proteins, respectively, compared to their pulse-SILAC counterparts. In SH-SY5Y cells undergoing differentiation, a total of 262, 180 and 90 proteins were uniquely quantified by ER lumen prox-SILAC at D0, D7 and D10, respectively, compared to the corresponding pulse-SILAC datasets.
Prox-SILAC also provides the high spatial specificity required for resolving subcellular MRF values. A comparison between our ER lumen prox-SILAC and ER lumen pulse-SILAC datasets reveals that post-ER trafficking proteins overall have substantially higher MRF values in prox-SILAC than in pulse-SILAC, whereas ER resident proteins have similar prox-SILAC and pulse-SILAC MRF values. This agrees with the vesicular trafficking model: while older proteins are rapidly replaced by nascent proteins in the ER, they are still retained in other subcellular compartments.
In HeLa cells stressed with thapsigargin (Fig. 3E) or SH-SY5Y cells undergoing chemically induced differentiation (Fig. 4D, E), prox-SILAC reveals a global feature of slower protein turnover for ER proteins, whereas this trend is less conspicuous in the whole-cell pulse-SILAC datasets (Fig. 3E and Supplementary Fig. 31). We note, however, that the difference between prox-SILAC and pulse-SILAC is less striking in the SH-SY5Y differentiation experiment than in the case of thapsigargin-induced ER stress in HeLa cells. One reason might be that the thapsigargin stress lasts for only 2 h, whereas the differentiation protocol takes days, allowing sufficient time for the global proteome to change in response to altered protein trafficking dynamics.
Probe synthesis
Synthesis of biotin-phenol.
(Reaction scheme: biotin-NHS → biotin-phenol)
Biotin-NHS was prepared by dissolving biotin (4.89 g, 24.0 mmol), N-hydroxysuccinimide (2.30 g, 20.0 mmol) and 1-(3-dimethylaminopropyl)-3-ethylcarbodiimide hydrochloride (4.6 g, 24.0 mmol) in 120 mL of warm DMF. The reaction mixture was stirred overnight at room temperature. After the solvent was concentrated to 20 mL under vacuum, the concentrate was poured into 100 mL of cold ethanol, and the precipitate was collected by filtration as a white solid (71% yield). Biotin-phenol was prepared by adding biotin-NHS (0.17 g, 0.5 mmol), tyramine (0.082 g, 0.6 mmol) and triethylamine (210 μL, 1.5 mmol) to 15 mL of DMF. The reaction mixture was stirred overnight at room temperature. After the solvent was removed under vacuum, the crude mixture was purified on a C18 reverse-phase column by semi-preparative UPLC with a gradient of 0 to 60% methanol in water to give a white solid (65% yield).
Proximity labeling and fluorescence microscopy
Cells stably expressing APEX2 or HRP were plated on glass coverslips placed in the wells of a 24-well plate. For proximity-dependent labeling, cells were incubated in 250 µL of 500 µM biotin-phenol probe in complete medium for 30 min at 37 °C. Then 2.5 µL of freshly prepared 100 mM H 2 O 2 was added to the medium with brief agitation to achieve a final concentration of 1 mM. After 1 min of labeling, the reaction medium was replaced with 500 µL of quencher solution (1 mM sodium azide, 1 mM sodium ascorbate and 500 µM Trolox in PBS) for 2 min. The cells were then washed again with quencher solution and twice with PBS. Cells were fixed with 4% (w/w) formaldehyde for 15 min at 4 °C.
Stained cells were imaged on an inverted fluorescence microscope (Nikon-TiE) equipped with a spinning disk confocal unit (Yokogawa CSU-X1) and a scientific complementary metal-oxide-semiconductor camera (Hamamatsu ORCA-Flash 4.0 v.2). The system was controlled with customized software written in LabVIEW v.15.0 (National Instruments).
Prox-SILAC labeling in cells
For SILAC medium preparation, L-arginine and L-lysine were replaced by heavy-labeled L-arginine-13 C 6 -15 N 4 and heavy-labeled L-lysine-13 C 6 -15 N 2. Each L-arginine-13 C 6 -15 N 4 introduces a +10.0083 Da mass difference in a tryptic peptide, and each L-lysine-13 C 6 -15 N 2 introduces a +8.0142 Da mass difference. The 1000× concentrated stock solutions of L-arginine-13 C 6 -15 N 4 (88.8 mg/mL) and L-lysine-13 C 6 -15 N 2 (154 mg/mL) in PBS were filtered through a 0.22-µm syringe filter and stored at 4 °C. The pulse-SILAC medium was generated by adding the heavy-labeled amino acid stocks to SILAC DMEM with 10% SILAC fetal calf serum. Cells were cultured in normal DMEM medium for 5-7 generations before prox-SILAC experiments. The culture medium was then replaced with the pulse-SILAC medium for the indicated duration. For instance, when measuring the MRF 4h of proteins, cells were incubated in pulse-SILAC medium for 3.5 h, and the medium was then replaced with pulse-SILAC medium containing 500 µM biotin-phenol probe for another 30 min. APEX labeling was triggered by adding 100× H 2 O 2 solution (100 mM) to the cell culture with gentle rocking. After 1 min of labeling, the reaction medium was replaced with 500 µL of quencher solution (1 mM sodium azide, 1 mM sodium ascorbate and 500 µM Trolox in PBS) for 2 min. The cells were then washed again with quencher solution and twice with PBS.
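As a quick arithmetic check of the working concentrations implied above (1000× amino acid stocks, 100× H 2 O 2 stock), the sketch below reproduces the dilution math; the stock values come from the text, and the check itself is purely illustrative.

# Stock concentrations and dilution factors taken from the protocol text.
stocks = {"Arg10 stock (mg/mL)": (88.8, 1000),
          "Lys8 stock (mg/mL)": (154.0, 1000),
          "H2O2 stock (mM)": (100.0, 100)}

for name, (stock, fold) in stocks.items():
    print(f"{name}: {stock} diluted {fold}x -> final {stock / fold:g}")
# Expected: Arg10 0.0888 mg/mL, Lys8 0.154 mg/mL, H2O2 1 mM final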
For cell surface protein labeling, HEK293T-SS-HRP-KDEL cells were incubated in pulse-SILAC medium for 2 h. The medium was then exchanged for HBSS buffer containing 0.5 mg/mL sulfo-NHS-LC-biotin reagent for 2 min, and the cells were washed three times with PBS buffer.

Cells were lysed with RIPA buffer containing 25 mM Tris•HCl pH 7.6, 150 mM NaCl, 1% NP-40, 1% sodium deoxycholate, 2% SDS and protease inhibitor cocktail for 15 min at 4 °C. The high concentration of SDS facilitates the extraction of membrane proteins. Cells were scraped and lysed via ultrasonication in an ice bath. The lysate was centrifuged at 20,000 × g at 4 °C for 10 min. For western blotting analysis, the collected supernatant was mixed with 5× loading buffer and heated at 95 °C for 10 min. For subsequent proteomic experiments, protein samples were purified by cold methanol precipitation at −80 °C for 3 h.
Western blotting analysis
Protein samples were separated on a 10% SDS-PAGE gel at 110 V for 70 min and transferred to a PVDF membrane (Bio-Rad) at 240 mA for 70 min. The blots were blocked with 5% BSA in TBST (TBS buffer, pH ~7.4, containing 0.1% Tween-20) overnight at 4 °C. The blots were then incubated with 0.25 µg/mL streptavidin-HRP (Thermo Fisher Scientific) at room temperature for 1 h. To detect the expression of the fused V5 or HA epitope tag, the blots were incubated with mouse anti-V5 monoclonal antibody (Biodragon, 1:5000) or mouse anti-HA monoclonal antibody (Biodragon, 1:5000) as the primary antibody at room temperature for 1 h, followed by HRP-conjugated goat anti-mouse IgG as the secondary antibody for 1 h. Finally, the blots were washed three times with TBST, developed with Clarity Western ECL substrate (Bio-Rad) and imaged with a ChemiDoc imager (Bio-Rad).
Enrichment of biotinylated proteins and sample preparation for MS analysis
Purified proteins were resolubilized in 0.5% (w/v) SDS aqueous solution and quantified via BCA protein assay. 5 mg of protein was incubated with 250 µL of streptavidin beads for 3 h at room temperature with gentle rotation. The beads were then washed twice with 2% (w/v) SDS aqueous solution, twice with 8 M urea, and twice with 2 M sodium chloride. Proteins on the beads were incubated with 6 M urea and 10 mM dithiothreitol for 15 min, and subsequently alkylated with 20 mM iodoacetamide in the dark at 35 °C for 30 min. After washing twice with triethylammonium bicarbonate buffer, proteins on the beads were digested with 4 µg of trypsin (Promega) for 16 h at 37 °C. Small fractions of the input, supernatant, flow-through, and eluate samples (biotin competition at 95 °C) were kept for western blotting analysis. Thereafter, the released peptides were collected from the supernatant following centrifugation at 15,000 × g for 10 min. The digested peptides were fractionated using a high-pH reversed-phase peptide fractionation kit. After loading onto the column, the samples were eluted with 5%, 7.5%, 10%, 12.5%, 15%, 17.5%, 20% and 50% acetonitrile solutions in 0.1% triethylamine, respectively. Finally, the eluted samples were pooled pairwise (the nth and (n+4)th fractions) to yield 4 fractions for LC-MS/MS analysis.
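The pairwise pooling step (the nth and (n+4)th fractions combined into four LC-MS/MS samples) can be illustrated as follows; this is a toy rendering of the scheme, not part of any published script.

# Eight step-elution fractions, pooled pairwise (n with n+4) into four samples.
acn_steps = ["5%", "7.5%", "10%", "12.5%", "15%", "17.5%", "20%", "50%"]
pooled = [(acn_steps[i], acn_steps[i + 4]) for i in range(4)]
for k, (low, high) in enumerate(pooled, start=1):
    print(f"LC-MS/MS fraction {k}: pooled eluates {low} + {high} ACN")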
Protein digestion in solution
Purified proteins were resolubilized in 8 M urea solution (urea dissolved in 50 mM Tris-HCl, pH ~8.5) and quantified via BCA protein assay (approximately 200 µL to 1 mL of urea solution per 1 mg of protein). Fresh 0.5 M dithiothreitol solution was added to a final concentration of 10 mM and incubated at 55 °C for 30 min. Then 0.5 M iodoacetamide solution was added to a final concentration of 30 mM and incubated at 37 °C for another 30 min in the dark. Thereafter, excess iodoacetamide was neutralized with additional dithiothreitol (final concentration of 20 mM) at 55 °C for 15 min. The urea was then diluted by adding 7 volumes of 50 mM Tris-HCl (pH ~8.5). CaCl 2 solution (final concentration of 1 mM) was added to facilitate digestion. Proteins were digested with trypsin (Sigma) at a 1:20 ratio at 37 °C for more than 16 h. Digested peptides were fractionated using the high-pH reversed-phase peptide fractionation kit as described above.
LC-MS/MS analysis
Peptides were separated using a loading column (100 µm × 2 cm) and a C18 separating capillary column (75 µm × 15 cm) packed in-house with Luna 3 μm C18(2) bulk packing material (Phenomenex, USA). The mobile phases (A: water with 0.1% formic acid; B: 80% acetonitrile with 0.1% formic acid) were driven and controlled by a Dionex Ultimate 3000 RPLC nano system (Thermo Fisher Scientific). The LC gradient was held at 2% B for the first 8 min of the analysis, followed by an increase from 2% to 10% B from 8 to 9 min, an increase from 10% to 44% B from 9 to 123 min, and an increase from 44% to 99% B from 123 to 128 min.
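For readability, the gradient above can be written out as breakpoints of (time in min, %B); the sketch below is a plain restatement of the text, not an instrument method file.

# LC gradient breakpoints: hold at 2% B to 8 min, then three linear ramps.
gradient = [(0.0, 2), (8.0, 2), (9.0, 10), (123.0, 44), (128.0, 99)]
for (t0, b0), (t1, b1) in zip(gradient, gradient[1:]):
    step = "hold at" if b0 == b1 else f"ramp {b0}% ->"
    print(f"{t0:>5.1f}-{t1:<5.1f} min: {step} {b1}% B")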
For the samples analyzed on an Orbitrap Fusion Lumos Tribrid mass spectrometer, the precursors were ionized using an EASY-Spray ionization source (Thermo Fisher Scientific) held at +2.0 kV relative to ground, and the inlet capillary temperature was held at 320 °C. Survey scans of peptide precursors were collected in the Orbitrap from 350-1600 Th with an AGC target of 400,000, a maximum injection time of 50 ms, an RF lens setting of 30%, and a resolution of 60,000 at 200 m/z. Monoisotopic precursor selection was enabled for peptide isotopic distributions; precursors of z = 2-7 were selected for data-dependent MS/MS scans for 3 s of cycle time, and dynamic exclusion was set to 15 s with a ±10 ppm window around the precursor monoisotope.
Protein identification and quantitation
Raw data files were searched against the Homo sapiens Uniprot database (downloaded on Nov 19th, 2020). Database searches were performed with MaxQuant software (version 1.6.2.3). The digestion mode was set to trypsin. The maximum number of modifications allowed per peptide was five, and the maximum number of missed cleavages allowed per peptide was two. A mass shift of +57.0214 Da (carbamidomethylation, C) was searched as a fixed modification; +15.9949 Da (oxidation, M) and +42.0105 Da (acetylation, protein N-terminal) were searched as variable modifications. Mass shifts of +8.0142 Da (Lys8, K) and +10.0083 Da (Arg10, R) were set as heavy labels. The FDR was set to <1%. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE 60 partner repository with the dataset identifier PXD037569.
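Given the label masses above, the expected SILAC mass shift of a tryptic peptide can be computed from its K/R content; the sketch below is illustrative, with made-up peptide sequences.

# Heavy-label mass shifts from the search settings: Lys8 and Arg10.
LABEL_SHIFT_DA = {"K": 8.0142, "R": 10.0083}

def heavy_mass_shift(peptide: str) -> float:
    # Tryptic peptides normally end in one K or R; missed cleavages add more.
    return sum(LABEL_SHIFT_DA.get(aa, 0.0) for aa in peptide)

print(heavy_mass_shift("SILACPEPTIDEK"))   # 8.0142 (one K)
print(heavy_mass_shift("MISSEDKCLEAVR"))   # 18.0225 (one K + one R)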
For each identified and quantified protein, its metabolic replacement fraction (MRF) was calculated from its SILAC H/L ratio as follows: MRF = (H/L) / (1 + H/L). Only the proteins identified in both replicates were taken for subsequent analysis; their metabolic replacement fractions were the arithmetic means of the two replicates.
The half-life of each protein was calculated by fitting an exponential equation of the form MRF(t) = 1 − e^(−kt), with t 1/2 = ln 2 / k, to the MRF values measured at 1 h, 2 h, 4 h, 8 h and 12 h.
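A sketch of this fit, assuming the first-order replacement model stated above; the time points follow the text, while the MRF values below are invented for illustration.

import numpy as np
from scipy.optimize import curve_fit

def mrf_model(t, k):
    # First-order replacement: MRF(t) = 1 - exp(-k*t); t_1/2 = ln2 / k.
    return 1.0 - np.exp(-k * t)

t_hours = np.array([1.0, 2.0, 4.0, 8.0, 12.0])
mrf_obs = np.array([0.05, 0.09, 0.18, 0.32, 0.44])  # illustrative values

(k_fit,), _ = curve_fit(mrf_model, t_hours, mrf_obs, p0=[0.05])
print(f"k = {k_fit:.3f} /h, t_1/2 = {np.log(2) / k_fit:.1f} h")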
Gene Ontology analysis
To calculate the mitochondrial specificity of our proteome, mitochondrial proteins were identified by searching for "mitochon" in the GO terms of the Uniprot-GO annotations. For secretory pathway specificity, proteins with "endoplasmic reticulum", "Golgi", "plasma membrane", "endosome", "lysosome", "nuclear envelope", "nuclear membrane", "perinuclear region of cytoplasm", "extracellular space" or "vesicle" descriptions in the Uniprot-GO annotation were classified as secretory pathway proteins. Furthermore, secretory proteins with only an "endoplasmic reticulum" annotation were regarded as "ER residents", and secretory proteins without an "endoplasmic reticulum" annotation were described as "post-ER proteins".
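The keyword-based classification described above maps directly onto a small matching routine; the sketch below uses the keyword list from the text, and the example annotations are illustrative stand-ins for Uniprot-GO cellular component terms.

SECRETORY_TERMS = [
    "endoplasmic reticulum", "Golgi", "plasma membrane", "endosome",
    "lysosome", "nuclear envelope", "nuclear membrane",
    "perinuclear region of cytoplasm", "extracellular space", "vesicle",
]

def classify(gocc_terms: list) -> str:
    # Which secretory keywords appear anywhere in the protein's GO terms?
    hits = {t for t in SECRETORY_TERMS for g in gocc_terms if t in g}
    if not hits:
        return "non-secretory"
    if hits == {"endoplasmic reticulum"}:
        return "ER resident"          # ER is the only secretory annotation
    if "endoplasmic reticulum" not in hits:
        return "post-ER protein"      # secretory, but no ER annotation
    return "secretory (ER + downstream)"

print(classify(["endoplasmic reticulum lumen"]))           # ER resident
print(classify(["plasma membrane", "endosome membrane"]))  # post-ER protein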
Statistics and reproducibility
Immunofluorescence images were acquired from at least three fields of view. All proteomic experiments included at least two biological replicates. Statistical analysis (e.g. Pearson's r) of the proteomic data was performed with GraphPad Prism 9 and Excel in a two-sided manner.
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Fig. 1 | Characterization of prox-SILAC method and its application to mitochondria. A Experimental scheme of prox-SILAC with APEX2 targeted to the mitochondrial matrix (mito-APEX2). Newly synthesized proteins in the time windows of 4 to 12 h are tagged with heavy isotope-encoded lysine and arginine. Following APEX2 labeling, biotinylated proteins are enriched via affinity purification, digested with trypsin, and quantified via LC-MS/MS analysis. B Representative confocal immunofluorescence images of HEK293T cells showing the localizations of APEX2 (anti-V5), mitochondria (anti-TOMM20), and biotinylated proteins (streptavidin-AlexaFluor 647). Scale bar = 10 μm. Images were acquired from at least three fields of view in three biological replicates. C Specificity analysis for the prox-SILAC mitochondrial proteome. Red bars and numbers indicate proteins with prior mitochondrial annotations in the GOCC database. Left: human proteome (from the Uniprot database); middle: proteins identified in replicated pulse-SILAC experiments; right: proteins identified in replicated prox-SILAC experiments. D Scatter plots showing the MRF 8h values of mitochondrial proteins identified in replicated prox-SILAC experiments at three time points. E Plot of MRF traces with protein examples highlighted in colors. F, G Mapping of MRF 8h to the structure of the mitochondrial ribosome (PDB ID: 3J9M) (F) and the respiratory complexes I (5XTD), II (6VAX), III (5XTE), IV (5Z62), and V (cartoon representation [26]) (G). The histogram distribution and color map of MRF 8h values are shown on the right.
Fig. 2 | Mapping ER protein turnover with prox-SILAC. A Representative confocal immunofluorescence images showing the localizations of biotinylated proteins (streptavidin-AlexaFluor 647), the ER marker (anti-Calnexin), and the nucleus (DAPI) in HEK293T cells expressing ss-HRP-KDEL. Scale bar = 10 μm. Images were acquired from at least three fields of view in three biological replicates. B Specificity analysis of the ER proteome. Proteins with prior secretory pathway annotations in the GOCC database are indicated in red [28]. Left: entire human proteome (from the Uniprot database); middle: proteins identified in replicated pulse-SILAC experiments; right: proteins identified in replicated prox-SILAC experiments. C Scatter plot showing MRF 4h values of labeled proteins in replicated experiments at three time points. D Heatmap depicting the MRF values of identified ER proteins in replicated experiments at three time points. E Plot of MRF traces with protein examples highlighted in colors. F Scatter plot of MRF 2h values of ER proteins measured by ER prox-SILAC against whole-cell pulse-SILAC. The ER proteins were identified in all prox-SILAC replicates at five time points and in the 2-h whole-cell pulse-SILAC replicates. Red dots indicate post-ER trafficking proteins (i.e. targeted to the cell membrane, Golgi apparatus or lysosome), while blue dots indicate ER resident proteins.
Fig. 3 | Analysis of protein turnover changes under ER stress. A Representative confocal immunofluorescence images showing the localizations of biotinylated proteins (streptavidin-AlexaFluor 647), the ER marker (anti-Calnexin), and the nucleus (DAPI) in HeLa cells expressing ss-HRP-KDEL proteins. Scale bar = 10 μm. Images were acquired from at least three fields of view. B Experimental scheme of ER stress induction and ER prox-SILAC labeling in HeLa cells. Cells are treated with 1 μM thapsigargin for 4 h prior to prox-SILAC. C Scatter plots showing MRF 2h values of labeled proteins under ER stress (left) and in the control (right). D Specificity analysis of proteomic data in the ER stress and control experiments. Proteins with prior secretory pathway annotations in the GOCC database are indicated in red. E Scatter plots comparing the MRF 2h values under ER stress versus the control sample. Red and blue dots represent proteins with 2-fold higher or lower MRF 2h values, respectively. From left to right: proteins in prox-SILAC experiments, a zoomed-in view of the prox-SILAC data, proteins in pulse-SILAC experiments, and proteins with secretome annotations in pulse-SILAC experiments.
Fig. 4 | Profiling the turnover changes of ER proteins during neurite growth. A Schematic time course (top) and immunofluorescence images (bottom) of SH-SY5Y differentiation. Neurites are visualized via staining with anti-MAP2 antibody. The proximal and distal ends of neurites are indicated with white triangles. Scale bar = 50 μm. Statistics are shown at the bottom left, with n = 10 cells per sample and p values calculated from two-sided Student's t-tests. *p value between D7/D10 is 0.0238; **p value between D0/D7 is 0.0038. Source data are provided as a Source Data file. B Venn diagrams showing the numbers of quantified proteins in the D0, D7 and D10 replicated experiments. C Cumulative distribution curves of MRF 2h values at D0, D7 and D10. D, E Scatter plots showing changes in MRF 2h values of ER proteins between D0 and D7 (D) and between D7 and D10 (E). Red and blue dots represent proteins with 2-fold higher and lower MRF 2h values, respectively. F Heatmap showing normalized z-scores of MRF 2h values along the differentiation timeline. ER proteins are divided into six categories according to their patterns of MRF 2h change.
The MS characterization of biotin-phenol was performed by UPLC-MS: calculated for C 18 H 26 N 3 O 3 S [M + H]+ 364.17, found 364.64. | 2023-11-10T06:17:36.242Z | 2023-11-08T00:00:00.000 | {
"year": 2023,
"sha1": "3de04e3b8874bf6f71d578197519e6d15353e87e",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-023-42861-8.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a973fff730b2c20848e5f5d71d897b356c5b956d",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1119697 | pes2o/s2orc | v3-fos-license | ERp29 Regulates ΔF508 and Wild-type Cystic Fibrosis Transmembrane Conductance Regulator (CFTR) Trafficking to the Plasma Membrane in Cystic Fibrosis (CF) and Non-CF Epithelial Cells*
Sodium 4-phenylbutyrate (4PBA) improves the intracellular trafficking of ΔF508-CFTR in cystic fibrosis (CF) epithelial cells. The underlying mechanism is uncertain, but 4PBA modulates the expression of some cytosolic molecular chaperones. To identify other 4PBA-regulated proteins that might regulate ΔF508-CFTR trafficking, we performed a differential display RT-PCR screen on IB3-1 CF bronchiolar epithelial cells exposed to 4PBA. One transcript up-regulated by 4PBA encoded ERp29, a luminal resident of the endoplasmic reticulum (ER) thought to be a novel molecular chaperone. We tested the hypothesis that ERp29 is a 4PBA-regulated ER chaperone that influences ΔF508-CFTR trafficking. ERp29 mRNA and protein expression was significantly increased (∼1.5-fold) in 4PBA-treated IB3-1 cells. In Xenopus oocytes, ERp29 overexpression increased the functional expression of both wild-type and ΔF508-CFTR over 3-fold and increased wild-type cystic fibrosis transmembrane conductance regulator (CFTR) plasma membrane expression. In CFBE41o− WT-CFTR cells, CFTR expression and CFTR-mediated short-circuit currents decreased upon depletion of ERp29, as did the maturation of newly synthesized CFTR. In IB3-1 cells, ΔF508-CFTR co-immunoprecipitated with endogenous ERp29, and overexpression of ERp29 led to increased ΔF508-CFTR expression at the plasma membrane. These data suggest that ERp29 is a 4PBA-regulated ER chaperone that regulates WT-CFTR biogenesis and can promote ΔF508-CFTR trafficking in CF epithelial cells.
Cystic fibrosis (CF) is the most common lethal autosomal recessive disease among Caucasians and results from a paucity of functional cystic fibrosis transmembrane conductance regulator (CFTR). CFTR is a cAMP-activated chloride (Cl−) channel that is localized in the apical plasma membrane of epithelial cells where it has an integral role in regulating the transport of electrolytes and water. The most common mutation of CFTR, ΔF508-CFTR (deletion of a phenylalanine at position 508), is a temperature-sensitive trafficking mutant (1). ΔF508-CFTR is retained in the endoplasmic reticulum (ER) where it has prolonged associations with several cytosolic chaperones belonging to the heat shock protein (Hsp) family (2)(3)(4)(5) and with the ER-resident chaperone calnexin (6-9). ΔF508-CFTR is targeted for rapid intracellular degradation (10) at least in part by the ubiquitin/proteasome system (11,12) and so mostly fails to reach its appropriate subcellular location at the apical membrane (13,14). Molecular chaperones are potential therapeutic targets to facilitate improvement of ΔF508-CFTR trafficking, but so far, little is known about the exact role of the chaperones in CFTR folding and trafficking. Of particular note, investigations of classical ER chaperones, including BiP/grp78, endoplasmin/grp94, and calreticulin, have failed to demonstrate a central role for these ER chaperones in promoting the proper folding, assembly, and trafficking of CFTR (2,3,6,15). Sodium 4-phenylbutyrate (4PBA) improves ΔF508-CFTR intracellular trafficking in CF epithelial cells such as the IB3-1 CF human bronchiolar epithelial cell line (genotype ΔF508/W1282X) as early as 4-8 h after exposure and restores CFTR function at the plasma membrane without altering CFTR mRNA expression (16). Because CFTR mRNA expression was not altered by 4PBA and because 4PBA and other butyrates are considered transcriptional regulators, we previously investigated whether 4PBA alters expression of molecular chaperones implicated in ΔF508-CFTR trafficking. We initially focused on 70-kDa heat shock cognate protein (Hsc70), a cytosolic chaperone involved in targeting a number of cellular proteins for ubiquitination and proteasome degradation (17). After 48 h of 4PBA treatment, both the expression of Hsc70 and complex formation between Hsc70 and ΔF508-CFTR decreased in IB3-1 cells. This Hsc70·ΔF508-CFTR complex may target ΔF508-CFTR for rapid intracellular degradation by the ubiquitin/proteasome pathway (4). Our group subsequently observed that 4PBA decreases the steady-state expression of Hsc70 by increasing the rate of Hsc70 mRNA turnover and that this effect requires new mRNA synthesis (18). These data suggested that IB3-1 cells undergo a complex response to 4PBA, a notion that has since been confirmed in genomics and proteomics profiling experiments (19,20). Moreover, it appears that multiple elements of this 4PBA response (i.e. 4PBA-regulated genes or targets) could influence ΔF508-CFTR trafficking.
To identify candidate 4PBA targets that might improve ΔF508 trafficking, we performed differential display RT-PCR on 4PBA-treated IB3-1 cells. One cDNA species exhibiting a time-dependent increase in PCR product abundance encoded ERp29, which was of particular interest given the dearth of knowledge about ER chaperones involved in CFTR folding. ERp29 is a resident of the ER lumen (21) that, although expressed ubiquitously in mammalian tissues, is generally abundant in epithelia (22,23). Several lines of evidence implicate ERp29 as a distinct type of molecular chaperone (24,25), but direct evidence for a chaperone activity and identification of physiological client proteins (i.e. folding substrates) remain lacking. Notably, ERp29 lacks classical chaperone and oxidoreductase activities (26) but exhibits chaperone-like properties at both the biophysical and cellular levels (27)(28)(29)(30)(31). It has also been inferred that ERp29 prefers hydrophobic substrates such as membrane proteins (29,32).
Recognizing parallels between the epithelial expression of ERp29 and CFTR, we tested the hypothesis that increased expression of ERp29 in response to 4PBA would contribute to improved ΔF508-CFTR trafficking. Our data establish that 4PBA leads to increased ERp29 expression and that ERp29 overexpression can improve trafficking of both wild-type (WT-) CFTR and ΔF508-CFTR. We also observed that reductions in ERp29 expression led to decreased WT-CFTR expression and maturation of newly synthesized channel as well as decreasing CFTR-mediated chloride transport in CFBE41o− cells that express WT-CFTR. These data therefore suggest that increased expression of ERp29 may contribute to improved ΔF508-CFTR trafficking in response to 4PBA and that ERp29 may indeed act as a chaperone of a medically important plasma membrane protein.
Human colon adenocarcinoma T84 cells (cell line CCL248, American Type Culture Collection, Manassas, VA) were cultured in a 1:1 mixture of Dulbecco's modified Eagle's medium and Ham's F-12 medium (Invitrogen) supplemented with 100 units/ml penicillin, 0.1 mg/ml streptomycin, and 5% fetal bovine serum at 37°C in 5% CO2. IB3-1 and T84 cells, which endogenously express ΔF508- or WT-CFTR, respectively, were studied alongside our other model systems that utilized heterologous overexpression to minimize potential artifacts confounding these data.
Antibodies-Rabbit antiserum to ERp29 (anti-ERp29) was as described previously (22) or obtained from GeneTex (San Antonio, TX). Rabbit anti-BiP/grp78 was purchased from Sigma-Aldrich. Mouse monoclonal anti-CFTR (clone CF3) was from Abcam (Cambridge, MA), and mouse monoclonal anti-CFTR-C terminus (clone 24-1) was from R&D Systems (Minneapolis, MN). Rabbit CFTR antiserum 169 directed against a human CFTR R-domain peptide was described previously (34). Mouse monoclonal anti-CFTR directed against NBD2 (clone 596) was obtained from Dr. John Riordan (University of North Carolina-Chapel Hill) via the CF Foundation Therapeutics CFTR antibody distribution program. Mouse monoclonal anti-CFTR directed against NBD1 (clone 3G11) was obtained from Dr. William Balch (Scripps Institute, San Diego, CA) via the CFTR Folding Consortium. Mouse monoclonal anti-GAPDH was from Millipore (Billerica, MA).
Differential Display RT-PCR-Differential display RT-PCR was performed using the RNAimage™ kit (GenHunter). Total RNA was isolated from IB3-1 cells treated with 1 mM 4PBA for 0, 4, 8, or 24 h using the RNAwiz reagent (Ambion, Austin, TX) according to the manufacturer's instructions. Reverse transcription used an oligo(dT)-anchored primer, and the subsequent PCR assay (40 cycles: 94°C, 15 s; 40°C, 2 min; 72°C, 30 s; final cycle: 72°C, 5 min) used the same anchored primer in combination with 16 different arbitrary primers (HAP-1 to HAP-16) in the presence of [α-33P]dATP (Amersham Biosciences). PCR products were resolved on an 8 M urea, 6% polyacrylamide gel and visualized by autoradiography. Differentially expressed cDNAs were extracted, reamplified by PCR, and cloned into pCRII (Invitrogen). The cloned inserts were sequenced by automated analysis (Nucleic Acid Research Core, Children's Hospital of Philadelphia) and identified by BLAST search.
Ribonuclease (RNase) Protection Assay-Quantification of ERp29 mRNA by ribonuclease protection was performed essentially as described previously (4,18) using the Direct Protect assay kit (Ambion). The ERp29-specific probe template comprised a 258-bp 3′-fragment obtained by digestion of pSK-ERp29 with SacI, and the probe was synthesized using a Maxiscript T3 kit (Ambion). Hybridization to 18 S rRNA using pTRI-18S (Ambion) as a template served as an internal control. The expression of ERp29 mRNA relative to that of 18 S rRNA was determined by fluorography and densitometry as described previously (4,18) with statistical analysis as described below.
Quantitative PCR for CFTR mRNA Expression-Total mRNA was extracted with TRI Reagent (Applied Biosystems, Ambion) from CFBE41o− WT cells. cDNA was synthesized from total RNA using oligo(dT) primers and reverse transcription reagents from Applied Biosystems. PCR for CFTR and GAPDH was performed using TaqMan Fast Master Mix and predesigned primer-probe sets for CFTR and GAPDH (Applied Biosystems) on a StepOne real-time PCR machine (Applied Biosystems).
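The paper reports only threshold cycles (Ct) for these TaqMan assays, but the standard downstream calculation for such data is relative quantification by the 2^(-ΔΔCt) method, normalizing the target gene to a reference gene such as GAPDH. The sketch below is a minimal illustration of that calculation, not the authors' own analysis; all Ct values are invented.

```python
# Hypothetical illustration of the 2^(-ddCt) relative-expression method for
# TaqMan qPCR data, normalizing CFTR Ct values to the GAPDH reference gene.
# The Ct values below are invented for illustration only.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Return fold change of target vs. control by the 2^(-ddCt) method."""
    d_ct_sample = ct_target - ct_ref            # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# CFTR appeared at ~18-19 cycles in both control and ERp29 siRNA samples,
# so the expected fold change is close to 1 (i.e. mRNA level unchanged).
fold = relative_expression(ct_target=18.5, ct_ref=16.0,
                           ct_target_ctrl=18.4, ct_ref_ctrl=16.0)
print(f"CFTR fold change (siRNA vs control): {fold:.2f}")
```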
Depletion of ERp29 by siRNA-ERp29 expression was depleted using an ERp29 siRNA (Dharmacon/Thermo Fisher Scientific) that had been demonstrated previously to specifically decrease ERp29 expression in primary cultures of alveolar type II cells.4 ERp29 siRNA was delivered to CFBE41o− WT cells by transfection with Lipofectamine RNAimax reagent (Invitrogen) or to T84 cells by electroporation according to a commercially available optimized protocol and reagents (Amaxa, Lonza, Cologne, Germany). Control siRNA (Dharmacon/Thermo Fisher Scientific) was delivered under conditions identical to those of the ERp29 siRNA. Cells were either used for pulse-chase experiments (described below) or lysed for immunoblot (described below) or quantitative PCR analysis (described above) 48 h after electroporation or transfection.
For experiments examining the presence of ERp29 in the cell culture media, IB3-1 cells were grown in T-25 tissue culture flasks. Cells were then incubated in fresh medium (5 ml) without (control) or with 1 mM 4PBA for 24 h or were placed in fresh medium (5 ml) without or with 1 mM 4PBA for 24 h after transfection with ERp29 cDNA (4 μg, 24 h). The conditioned media were then collected, and cell lysates (total volume, 0.5 ml) were prepared as above. Fresh, unconditioned medium served as a negative control. Proteins in the collected media were concentrated 10-fold (final volume, 0.5 ml) using an iCON Concentrator (molecular mass cutoff, 9 kDa; Thermo Scientific) according to the manufacturer's protocol. Equal amounts of cell lysate proteins were resolved by SDS-PAGE, and the corresponding volume of concentrated, conditioned media was similarly resolved by SDS-PAGE. ERp29 was then detected by immunoblot as described.
Immunoprecipitation-Lysates of IB3-1 cells were prepared as described above except that SDS was omitted from the radioimmune precipitation assay buffer. Antiserum directed against ERp29 (10 μl) (22) was incubated with cell lysates (200 μg of protein) at 4°C overnight. Immune complexes were captured with protein A/G-agarose beads (Santa Cruz Biotechnology, Santa Cruz, CA) and released by heating the samples at 65°C for 20 min in 2× SDS-PAGE sample buffer. CFTR was detected by immunoblotting as above.
Pulse-Chase Analysis of CFTR Maturation-ERp29 expression was specifically depleted in CFBE41o− WT cells as described above. Forty-eight hours after siRNA delivery, cells were starved in Met/Cys-deficient growth media for 1 h, pulse-labeled with [35S]Met/Cys (400 μCi/ml; PerkinElmer Life Sciences) for 30 min, and then chased with complete media containing an excess of unlabeled Met/Cys for 0–4 h prior to lysis in radioimmune precipitation assay buffer. CFTR was recovered from 500 μg of whole cell lysate protein by immunoprecipitation with protein A/G-Sepharose beads to which monoclonal anti-CFTR (clone 3G11) was covalently coupled (Crosslink IP kit, Thermo Scientific); covalent coupling was performed according to the manufacturer's instructions. Precipitated CFTR was resolved by SDS-PAGE and detected by fluorography. Depletion of ERp29 in these pulse-chase experiments was confirmed by immunoprecipitation with anti-ERp29, resolution of precipitated protein by SDS-PAGE, and fluorography (data not shown).
Transepithelial Ion Transport Measurements in Ussing Chambers-CFBE41o− WT cells were grown as polarized epithelial monolayers on SnapWells (Costar, Corning Life Sciences, Lowell, MA) and used when transepithelial resistance was >500 ohm·cm² as assessed by an EVOM ohmmeter (World Precision Instruments, Sarasota, FL). After achieving this resistance, cells were transfected with control siRNA or ERp29 siRNA with Lipofectamine Plus (Invitrogen) 48 h prior to assay.
For short circuit current measurements, cells were mounted in a modified, vertical Ussing chamber, and the monolayers were continuously voltage-clamped to 0 mV after fluid resistance compensation using automatic voltage clamps (VCC 600, Physiologic Instruments, San Diego, CA). Filters were mounted in Bath Solution (115 mM NaCl, 25 mM NaHCO3, 2.4 mM KH2PO4, 1.2 mM K2HPO4, 1.2 mM MgCl2, 1.2 mM CaCl2, 10 mM glucose) warmed to 37°C. The solution was pregassed and then continuously gas-lifted with a 95% O2, 5% CO2 mixture yielding a pH of 7.3-7.4. The short circuit current (Isc) was digitized at 0.05 Hz, and data were stored on a computer hard drive using Acquire and Analyze software build 2.2 (Physiologic Instruments). The data acquisition software measured transepithelial resistance automatically by passing a 5-mV, 0.2-s bipolar pulse across the monolayer and calculating resistance (R) by Ohm's law (V = IR); these resistance values remained stable throughout these experiments. A positive deflection in Isc is defined as the net movement of a cation toward the apical bath.
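As a rough illustration of the resistance measurement just described, the sketch below applies Ohm's law to the 5-mV bipolar pulse; the current deflection and membrane area are invented values, not data from the paper.

```python
# Minimal sketch of the transepithelial resistance calculation described
# above: the acquisition software passes a 5-mV bipolar pulse and applies
# Ohm's law (V = I*R). The current deflection below is an invented example.

def transepithelial_resistance(delta_v_mV, delta_i_uA, area_cm2=1.0):
    """Resistance in ohm*cm^2 from a voltage pulse and current deflection."""
    delta_v = delta_v_mV * 1e-3   # mV -> V
    delta_i = delta_i_uA * 1e-6   # uA -> A
    return (delta_v / delta_i) * area_cm2

r = transepithelial_resistance(delta_v_mV=5.0, delta_i_uA=8.0)
print(f"R = {r:.0f} ohm*cm^2")  # monolayers were used when R > 500 ohm*cm^2
```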
Amiloride (Sigma-Aldrich) was dissolved as 1000× stocks in Bath Solution. Forskolin and 3-isobutyl-1-methylxanthine (IBMX) (Sigma-Aldrich) were dissolved as 1000× stocks in DMSO. CFTRinh-172 (35) was obtained from the CFTR modulator library (Dr. Robert Bridges, Rosalind Franklin Chicago Medical School) and dissolved as a 1000× stock in DMSO. A basolateral to apical chloride gradient was imposed by replacing the apical bathing solution with a low chloride buffer containing 115 mM sodium gluconate, 25 mM NaHCO3, 2.4 mM KH2PO4, 1.2 mM K2HPO4, 1.2 mM MgCl2, 1.2 mM CaCl2, and 10 mM glucose. This buffer was also pH 7.3-7.4 when gassed with a 95% O2, 5% CO2 mixture. CFTR functional expression was determined in the presence of 10 μM amiloride and was defined as Isc that was inhibited by apical application of 10 μM CFTRinh-172 after treatment of the cells with 10 μM forskolin and 100 μM IBMX and imposition of the basolateral to apical chloride gradient.
Electrophysiological Analyses in Xenopus Oocytes-Whole-cell current measurements were performed 24–48 h after injection using the two-electrode voltage clamp (TEV) method as described previously (35,36). Oocytes were placed in a 1-ml chamber containing modified ND96 buffer (96 mM NaCl, 1 mM KCl, 0.2 mM CaCl2, 5.8 mM MgCl2, 10 mM Hepes, pH 7.4) and impaled with micropipettes of 0.5-5-megaohm resistance filled with 3 M KCl. The whole-cell currents were measured by voltage clamping the oocytes in 20-mV steps between −140 mV and +60 mV adjusted for resting transmembrane potential. Whole-cell currents were digitized at 200 Hz during the voltage steps, recorded directly onto a hard disk, and analyzed using pClamp software (version 8 or 8.1; Axon Instruments, Foster City, CA). To reduce the potential for error from series resistance, the voltage clamp (Axon Geneclamp 500B) was configured to clamp the bath potential to 0 mV. In this configuration, we independently monitored the oocyte membrane potential during our clamp protocol and routinely observed membrane potentials that were <5% depolarized from our target holding potentials.
CFTR was activated by perfusion of the oocyte with buffer containing 10 μM forskolin and 100 μM IBMX for 25 min (35,36). In all experiments, CFTR-mediated Cl− current (or functional expression) was defined as the difference between current measured before and 20 min after perfusion with forskolin/IBMX. Whole-cell currents were recorded at −100 mV for comparisons, and all measurements were performed at room temperature.
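The functional-expression readout defined above reduces to a simple subtraction; the sketch below makes that explicit, with illustrative (invented) current values.

```python
# Sketch of the readout defined above: CFTR-mediated Cl- current is the
# difference between whole-cell current at -100 mV before and 20 min after
# forskolin/IBMX perfusion. Current values below are invented examples.

def cftr_mediated_current(i_baseline_uA, i_stimulated_uA):
    """Forskolin/IBMX-stimulated (CFTR-mediated) current at -100 mV."""
    return i_stimulated_uA - i_baseline_uA

# e.g. a hypothetical oocyte co-injected with dF508-CFTR and ERp29 cRNA
delta_i = cftr_mediated_current(i_baseline_uA=-0.2, i_stimulated_uA=-1.1)
print(f"CFTR-mediated current: {delta_i:.1f} uA")  # prints -0.9 uA
```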
Densitometry and Statistical Analyses-Densitometry of fluorograms was performed using an Alpha-Innotech 2200 Image Analysis System (Alpha-Innotech, San Leandro, CA) with two-dimensional integration of the selected band. The density of the lane surrounding the band was similarly determined by two-dimensional integration and used as a base-line density for background subtraction. For comparisons within an experiment, the density of the zero time or control was arbitrarily set to 1.0, and data are expressed relative to this control (mean ± S.E.). Statistical significance was then determined by Student's t test or one-way ANOVA as appropriate. For experiments examining the influence of ERp29 overexpression on WT- or ΔF508-CFTR functional expression in oocytes, data at −100-mV holding potential are presented as mean ± S.E. with significance determined by one-way ANOVA in comparison with oocytes injected with WT- or ΔF508-CFTR alone.
For all other data, statistical significance was determined by a two-tailed analysis using either Student's t test or a one-way ANOVA as appropriate. All statistical analyses were performed with SigmaStat version 2.03. A p value of ≤0.05 was considered significant.
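For readers who want to reproduce this style of analysis, here is a minimal sketch of the normalization-and-testing workflow in Python with SciPy (the paper itself used SigmaStat); all density values are invented.

```python
# Hedged sketch of the densitometry workflow described above: background-
# subtracted band densities are normalized to the control (set to 1.0) and
# compared by Student's t test or one-way ANOVA. Values are invented.
from scipy import stats

control = [1.00, 0.95, 1.08, 0.97]   # normalized control densities
treated = [1.45, 1.52, 1.38, 1.61]   # e.g. hypothetical 4PBA-treated samples

t_stat, p_t = stats.ttest_ind(control, treated)  # two-tailed Student's t test
f_stat, p_f = stats.f_oneway(control, treated)   # one-way ANOVA (two groups)

print(f"t test p = {p_t:.4f}; ANOVA p = {p_f:.4f}")  # p <= 0.05 significant
```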
4PBA Increases ERp29 Expression in IB3-1 CF Epithelial Cells-
In IB3-1 cells, 4PBA decreases steady-state Hsc70 expression by increasing the rate of Hsc70 mRNA turnover (4,18). Interestingly, the increase in Hsc70 mRNA turnover requires new mRNA synthesis (18), suggesting that 4PBA causes a more global cellular adaptation. We performed differential display RT-PCR on RNA isolated from IB3-1 cells treated with 1 mM 4PBA to characterize alterations in gene expression associated with cellular adaptation to 4PBA. Among the mRNA species that were differentially expressed over the 24-h exposure, we identified ERp29 as exhibiting a time-dependent increase in abundance (data not shown). This finding was of particular interest because no ER chaperones had been clearly identified as 4PBA targets before. We confirmed and quantitated this 4PBA-induced increase in ERp29 mRNA abundance using a ribonuclease protection assay (Fig. 1). ERp29 mRNA expression increased significantly by ∼50% after 2 h and remained similarly elevated throughout the 24-h exposure to 4PBA.
A corresponding increase in whole-cell expression of ERp29 protein was observed by immunoblot analysis of lysates from 4PBA-treated IB3-1 cells (Fig. 2A). Densitometric analysis suggested that ERp29 protein expression increased by a maximum of ∼50% after 4–8 h of 4PBA treatment (Fig. 2B). In contrast, using similar techniques, we found that Hsc70 expression decreased ∼40% in IB3-1 cells after treatment with 1 mM 4PBA for 24–48 h (4,18).
Because ERp29 expression is reported to be induced by ER stress in some circumstances (25,28,38), we assessed whether 1 mM 4PBA was causing a generalized ER stress or an unfolded protein response. Increased expression of BiP/grp78, another ER-luminal chaperone, is a characteristic of ER stress or an unfolded protein response (39). As shown in Fig. 2C, BiP expression, normalized to the cytosolic marker GAPDH, did not change upon exposure to 1 mM 4PBA. Furthermore, IB3-1 cellular morphology was not affected by 1 mM 4PBA over the course of our experiments. These data as well as previous observations that 1 mM 4PBA does not alter expression of calnexin (16) and several other ER chaperones in IB3-1 cells (19) suggest that at such moderate exposures 4PBA causes neither a general alteration of ER chaperone expression nor a cellular stress response. However, BiP/grp78 and several other stress response proteins were reported to be up-regulated after harsher exposures to 4PBA (5 mM, 48 h; Ref. 20). We also observed changes in IB3-1 cell morphology (including modest rounding of cells) upon such harsher 4PBA exposures (data not shown). These observations further suggest that there are significant differences between the IB3-1 cellular responses to 1 versus 5 mM 4PBA, with 5 mM 4PBA inducing a cellular stress response that was not evident upon exposure to 1 mM 4PBA.
Depletion of ERp29 Is Accompanied by Decreased Expression of WT-CFTR-Induction of ERp29 in 4PBA-treated bronchiolar epithelial cells led us to hypothesize that ERp29 is a chaperone for CFTR. Notably, ERp29, although ubiquitously expressed, is enriched in lung and has been implicated in membrane protein biogenesis (22,23,29,32,40). We initially tested this hypothesis using an RNA interference approach in CFBE41o− WT bronchial epithelial cells, which overexpress WT-CFTR, and in T84 colonic epithelial cells, which endogenously express WT-CFTR. As shown in Fig. 3 (A and B), delivery of ERp29-directed siRNA to CFBE41o− WT cells caused an ∼50% decrease in ERp29 and an ∼40% decrease in CFTR whole-cell expression relative to cells transfected with control siRNA. Similarly, there was a 60% decrease in ERp29 and CFTR expression in T84 cells (Fig. 3, C and D). In neither cell line was expression of BiP/grp78 altered by delivery of ERp29 siRNA (Fig. 3), suggesting that the siRNA-mediated depletion of ERp29 caused neither a global alteration in ER chaperone expression nor an ER stress response. This diminution of CFTR expression upon ERp29 depletion was not due to decreased CFTR mRNA in the ERp29 siRNA-treated CFBE41o− WT cells; quantitative PCR showed that CFTR mRNA was apparent at 18–19 cycles in both control and ERp29 siRNA-transfected CFBE41o− WT cell lines (n = 3 experiments). These data provided initial support for the hypothesis that ERp29 is a chaperone for CFTR.
ERp29 Enhances Functional Expression of WT-CFTR and ΔF508-CFTR in Xenopus Oocytes-We initially used the Xenopus oocyte expression system and TEV technique to examine the influence of ERp29 overexpression on functional expression of WT- and ΔF508-CFTR. Whole-oocyte currents were measured 24–48 h after injection of cRNA encoding CFTR (WT or ΔF508) alone or co-injection of cRNAs encoding CFTR and ERp29. Fig. 4A illustrates the whole-oocyte current/voltage (I/V) relationship for oocytes injected with both WT-CFTR and ERp29 prior to (closed circles) and after (open circles) 20 min of incubation with 10 μM forskolin/100 μM IBMX (which activate endogenous protein kinase A and subsequently CFTR). These data suggest that ERp29 overexpression does not alter the characteristic linear whole-cell I/V relationship that is typically observed for forskolin/IBMX-activated CFTR in oocytes (36). Fig. 4B shows the forskolin/IBMX-stimulated whole-cell currents measured at a holding potential of −100 mV in oocytes injected with CFTR alone (10 ng) compared with currents obtained in oocytes co-injected with CFTR (10 ng) and increasing amounts of ERp29 cRNA. In the corresponding immunoblot (Fig. 4B), no ERp29 expression was observed in oocytes injected with CFTR alone, whereas overexpressed ERp29 was readily detected in those injected with ERp29 cRNA.
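The linearity claim can be checked by a straight-line fit of the I/V data, whose slope estimates whole-cell conductance. The sketch below is illustrative only; the synthetic currents stand in for recorded data.

```python
# Illustrative check of the "linear I/V relationship" noted above: fit
# current vs. voltage by least squares; the slope estimates whole-cell
# conductance. The currents below are synthetic, not the paper's data.
import numpy as np

v_mV = np.arange(-140, 61, 20)                  # TEV steps, -140..+60 mV
i_uA = 0.02 * (v_mV + 25) + np.random.normal(0, 0.05, v_mV.size)

slope, intercept = np.polyfit(v_mV, i_uA, 1)    # linear fit: I = g*V + b
# slope has units uA/mV, i.e. millisiemens; multiply by 1000 for microsiemens
print(f"whole-cell conductance ~ {slope * 1000:.1f} uS")
```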
To assess the mechanism by which ERp29 overexpression increased CFTR functional expression, we determined the amount of CFTR in the plasma membrane by surface biotinylation (Fig. 4C). Oocytes were co-injected with cRNAs encoding CFTR and increasing amounts of ERp29 (0, 1, 10, and 30 ng), and CFTR surface expression was determined 48 h after injection. Surface expression of WT-CFTR was readily detected in oocytes co-injected with CFTR and 1 ng of ERp29 cRNAs, less robust in oocytes co-injected with CFTR and 10 ng of ERp29, and not detected in oocytes injected with CFTR alone or in oocytes co-injected with CFTR and 30 ng of ERp29. We suspect that our inability to detect CFTR in the latter two cases (oocytes injected with CFTR alone or co-injected with CFTR and 30 ng of ERp29 cRNA) was due to surface expression levels being below the limit of detection; this has been a consistent finding in other experiments in our laboratory (not shown). Nevertheless, the observed surface expression pattern of CFTR parallels that of CFTR activity as measured by TEV (Fig. 4B) with co-injection of 1 ng of ERp29 cRNA causing the greatest CFTR surface and functional expression. These data are therefore consistent with ERp29 acting as a chaperone that promotes WT-CFTR expression at the plasma membrane.

FIGURE 2. 4PBA increases ERp29 protein expression in IB3-1 cells. IB3-1 cells were incubated at 37°C with 1 mM 4PBA for the indicated times. Equal amounts of whole-cell lysate protein were resolved on 10% SDS-polyacrylamide gels. A, representative immunoblot performed with an ERp29-specific rabbit polyclonal antibody (22). B, densitometric analysis (mean ± S.E.) of 10 independent experiments normalized to zero time. Statistical significance was determined by ANOVA in comparison with the t = 0 sample. C, expression of BiP/grp78 relative to that of GAPDH was determined by densitometry of immunoblots equivalent to those in B (n = 4 independent experiments, all normalized to zero time). Immunoblots were probed with anti-BiP and anti-GAPDH, and each data set was normalized by GAPDH content (mean ± S.E.). By ANOVA, there were no significant differences for GAPDH or BiP/grp78 as a function of 4PBA exposure in these experiments.
Next we examined whether ERp29 overexpression also influenced the functional expression of ΔF508-CFTR. Although ΔF508-CFTR is a "trafficking-defective" mutant, in oocytes, ΔF508-CFTR is delivered to the plasma membrane (41) because oocytes are maintained at 18°C, and temperatures ≤27°C are permissive for ΔF508-CFTR trafficking (1,42). As shown in Fig. 5A, ERp29 overexpression did not alter the characteristic linear I/V relationship of ΔF508-CFTR (36). Similarly, Fig. 5B shows whole-cell currents measured at a holding potential of −100 mV in oocytes co-injected with ΔF508-CFTR and increasing amounts of ERp29 cRNA. Oocytes co-injected with ΔF508-CFTR and 1 ng of ERp29 cRNA had ∼2.3-fold greater forskolin/IBMX-stimulated currents (−0.9 ± 0.3 μA, n = 22) than did controls injected with ΔF508-CFTR cRNA alone (−0.4 ± 0.0 μA, n = 27; p < 0.05 by ANOVA; Fig. 5B). Currents obtained with co-injection of 10 ng of ERp29 (−0.9 ± 0.2 μA, n = 19) were similarly ∼2.25-fold greater than controls (p < 0.05). In contrast, currents in oocytes co-injected with 30 ng of ERp29 were not significantly higher than controls (−0.6 ± 0.2 μA, n = 14 versus −0.4 ± 0.0 μA, n = 27; p not significant). Again, oocytes injected with ERp29 cRNA alone had forskolin/IBMX-stimulated currents that were not significantly different from uninjected oocytes, indicating that the forskolin/IBMX-stimulated currents in these experiments were mediated by ΔF508-CFTR. Together, these data suggest that ERp29 increases ΔF508-CFTR functional expression in oocytes in a manner similar to its effect on WT-CFTR. Consistent with the relatively low current values observed for ΔF508-CFTR (versus WT-CFTR; Fig. 4), we were unable to detect ΔF508-CFTR at the plasma membrane by surface biotinylation (not shown). These data further support the hypothesis that ERp29 is a chaperone for CFTR.

FIGURE 4. X. laevis oocytes were injected with cRNAs for WT-CFTR (10 ng) or ERp29 (10 ng) alone or co-injected with WT-CFTR (10 ng) and either 1, 10, or 30 ng of ERp29 cRNA. A, I/V relationship (adjusted for resting transmembrane potential) of whole-cell current was determined by TEV 24–48 h after co-injection with 10 ng of CFTR and 1 ng of ERp29 cRNAs before (closed circles) and after (open circles) stimulation with forskolin/IBMX as detailed under "Experimental Procedures." Data are presented as mean ± S.E. for n = 29 oocytes. B, forskolin/IBMX-stimulated currents at −100-mV holding potential (adjusted for resting transmembrane potential) were determined for oocytes injected with cRNA for WT-CFTR alone (10 ng; closed bar) or ERp29 alone (10 ng; open bar that is not visible due to small magnitude) or co-injected with CFTR and either 1, 10, or 30 ng of ERp29 cRNA (gray bars). Data are presented as mean ± S.E. for the indicated number of oocytes, and statistical significance was determined by ANOVA in comparison with oocytes injected with WT-CFTR alone. Immunoblot detection of ERp29 in similarly injected oocytes was performed as described under "Experimental Procedures." Groups of 10 oocytes were lysed, and 10% of this lysate (i.e. one oocyte equivalent) was loaded per lane. C, detection of CFTR at the oocyte surface by surface biotinylation was performed 48 h after injection as described under "Experimental Procedures." Biotinylated CFTR was revealed with a streptavidin-HRP conjugate after precipitation with CFTR antiserum 169. Equivalent results were obtained in three independent experiments.

FIGURE 5. ERp29 enhances functional expression of ΔF508-CFTR in Xenopus oocytes. X. laevis oocytes were injected with cRNA for ΔF508-CFTR (10 ng) or ERp29 alone (10 ng) or co-injected with cRNAs for ΔF508-CFTR (10 ng) and either 1, 10, or 30 ng of ERp29. A, I/V relationship (adjusted for resting transmembrane potential) of whole-cell current was determined by TEV for oocytes co-injected with 10 ng of ΔF508-CFTR cRNA and 1 ng of ERp29 cRNA before (closed circles) and after (open circles) stimulation with forskolin/IBMX. Data are presented as mean ± S.E. for n = 22 oocytes. B, forskolin/IBMX-stimulated current at −100-mV holding potential (adjusted for resting transmembrane potential) of oocytes injected with ΔF508-CFTR alone (10 ng; closed bar) or ERp29 alone (10 ng; open bar) or co-injected with ΔF508-CFTR and either 1, 10, or 30 ng of ERp29 cRNA (gray bars). Data are presented as mean ± S.E. for the indicated numbers of oocytes, and the significance of differences from oocytes injected with ΔF508-CFTR alone was determined by ANOVA.
Overexpression of ERp29 Increases ΔF508-CFTR Plasma Membrane Expression in IB3-1 Cells-We next addressed whether altered ERp29 expression modifies ΔF508-CFTR trafficking to the plasma membrane in CF epithelial cells. Surface biotinylation experiments were performed in IB3-1 cells that overexpressed ERp29 or had been treated with 4PBA as a positive control (Fig. 6). ΔF508-CFTR was not detected at the surface of IB3-1 cells grown under control conditions as expected but was clearly present at the surface of these cells after treatment with 1 mM 4PBA. When ERp29 was overexpressed by transient transfection, surface expression of ΔF508-CFTR was even more robust than in cells treated with 4PBA for 24 h, commensurate with greater ERp29 overexpression revealed by immunoblotting (Fig. 6A). Consistent with the data of Fig. 2, densitometric analysis indicated that cells treated with 4PBA had ∼40% increased whole-cell expression of ERp29 (Fig. 6B, gray bars; #, p = 0.01 versus control; n = 3), whereas ERp29-transfected cells had ∼4-fold increased expression. Densitometry also indicated that ERp29-transfected cells had 4-fold greater ΔF508-CFTR surface expression than 4PBA-treated cells, which in turn had ∼5-fold greater ΔF508-CFTR surface expression than control cells (Fig. 6B, open bars; *, p < 0.05 versus 1 mM 4PBA; n = 4). As a control for integrity of the plasma membrane during these biotinylation experiments, we assessed the recovery of biotinylated GAPDH in concurrent experiments. In contrast to CFTR, biotinylated GAPDH was undetectable in the neutravidin-precipitated proteins despite strong detection of GAPDH immunoreactivity in whole-cell lysate, confirming that our protocol specifically labeled proteins at the cell surface (Fig. 6A, middle panel).
These data suggest that, in human CF bronchiolar epithelial cells, overexpression of ERp29 promotes ΔF508-CFTR trafficking to and expression at the cell surface. Furthermore, these data imply that a mechanism of action for 4PBA on CFTR trafficking may be via the induction of ERp29 expression. Finally, these data imply that ERp29 increases ΔF508-CFTR functional expression in oocytes (Fig. 4) by increasing channel number at the oocyte plasma membrane, similar to its mechanism of action on WT-CFTR.
ERp29 Is Detectable at Surface of IB3-1 Cells and in IB3-1 Cell Growth Media-Previous data have suggested that ERp29 may move through the secretory pathway and be found extracellularly in some situations (milk and cultured thyrocytes; Refs. 27 and 43) but not others (tooth enamel proteins and plasma; Refs. 22 and 44). Therefore, we examined whether ERp29 could be detected either at the surface or in the culture media of IB3-1 cells (Fig. 7).
As expected, whole-cell expression of ERp29 was moderately increased after treatment with 1 mM 4PBA and more so upon transfection with ERp29 either in the absence or presence of 4PBA (Fig. 7A). By immunoblotting, ERp29 was readily detected in the culture medium of IB3-1 cells under control conditions and upon ERp29 transfection (Fig. 7B). In contrast, ERp29 was not found in the culture medium of 4PBA-treated IB3-1 cells, and 4PBA decreased ERp29 secretion from ERp29-transfected cells. In control experiments, ERp29 was not detected in fresh culture medium (Fig. 7B). Similarly, GAPDH was undetectable in the conditioned culture medium (data not shown), indicating that our detection of ERp29 there was not an artifact of cell lysis.
In surface biotinylation experiments (Fig. 7C), ERp29 was undetectable at the surface of IB3-1 cells under control conditions. However, treatment with 1 mM 4PBA, overexpression of ERp29 via transfection, or the combination of ERp29 transfection and 4PBA treatment all led to ERp29 being detected at the IB3-1 cell surface. GAPDH was not detected in the neutravidin-captured fraction (data not shown), again indicating that ERp29 surface expression was not an experimental artifact. These data suggest that, in IB3-1 cells, ERp29 can move beyond the ER and be externalized. Interestingly, 4PBA treatment increased the surface expression of ERp29 both in transfected and untransfected cells (Fig. 7C) while decreasing the amount of ERp29 secreted into the culture media (Fig. 7B). These data suggest that 4PBA promotes the association of ERp29 with the IB3-1 cell surface in addition to elevating ERp29 expression.
ERp29 Co-immunoprecipitates with ΔF508-CFTR in IB3-1 Cells-As the above data supported ERp29 being a chaperone for CFTR, we investigated whether ERp29 interacts with WT-CFTR and ΔF508-CFTR in epithelial cells using a co-immunoprecipitation approach. ERp29 appears generally unable to maintain stable interactions with clients following cell lysis (22,27,40,45). However, as shown in Fig. 8, immunoprecipitation of endogenous ERp29 resulted in modest recovery of ΔF508-CFTR from IB3-1 whole-cell lysate. Because these experiments were performed in the absence of 4PBA, this likely reflects intracellular ERp29 and ΔF508-CFTR because IB3-1 cells exhibit no surface expression of ΔF508-CFTR under these conditions (Fig. 7A). Control immunoprecipitations lacking anti-ERp29 were negative for ΔF508-CFTR. In contrast, immunoprecipitation of endogenous ERp29 from lysates of T84 cells did not result in recovery of endogenously expressed WT-CFTR under identical experimental conditions. These results are consistent with reports suggesting that, in comparison with WT-CFTR, immature ΔF508-CFTR has more robust interactions with a variety of chaperones at or within the ER (2, 4, 6).
The observed co-immunoprecipitation of ΔF508-CFTR and ERp29 is consistent with ERp29 influencing ΔF508-CFTR trafficking via a direct or close interaction. Although such a stable interaction was not evident for WT-CFTR, we feel a transient interaction could still occur given the above data demonstrating that depletion of ERp29 decreases WT-CFTR expression and that ERp29 overexpression can promote increased WT-CFTR functional and surface expression in oocytes. These data also provide provisional identification of CFTR as a putative disease-relevant membrane protein substrate for ERp29. Further work is required to establish whether the observed interaction between ERp29 and ΔF508-CFTR is direct or occurs via intermediaries.

FIGURE 7. A, whole-cell lysates were prepared, and ERp29 and GAPDH (as a loading control) were detected by immunoblot. Data are representative of n = 4 independent experiments. B, growth medium was collected and concentrated 10-fold as described under "Experimental Procedures." ERp29 was detected in the conditioned medium by immunoblot but not in a similarly concentrated fresh culture medium control. Smaller or larger immunoreactive species were not detected. Immunoblots representative of n = 4 independent experiments are shown. C, surface proteins were biotinylated, captured with neutravidin beads, and resolved by SDS-PAGE as described under "Experimental Procedures." ERp29 was detected in the biotinylated fraction. The "Input" lane represents the immunoreactivity of 10% of the total protein subjected to neutravidin capture in the Control lane. GAPDH immunoreactivity was absent from the biotinylated fraction (not shown), suggesting that intracellular proteins were not labeled by biotin in this experiment. These data are representative of n = 4 independent experiments.

FIGURE 8. IP, immunoprecipitation; WB, Western blot. Whole-cell lysates of IB3-1 and T84 cells were prepared as described under "Experimental Procedures." ERp29 and its stably interacting partners were then immunoprecipitated from equal sample amounts (200 μg of protein) using anti-ERp29 and resolved by SDS-PAGE. Input, 20 μg of whole-cell lysate; No ab, immunoprecipitation protocol performed with omission of anti-ERp29 as a control for nonspecific precipitation. CFTR (arrows) was detected with monoclonal anti-CFTR (clone CF3). These data are representative of three independent experiments. IB, immunoblot.
ERp29 Depletion Impedes CFTR Maturation and Decreases CFTR Functional Expression-Because of its predominant ER localization and its apparent traversal of the secretory pathway, we hypothesized that ERp29 would influence the trafficking and maturation of newly synthesized CFTR. Accordingly, we performed a pulse-chase experiment in CFBE41o− WT cells that had been treated with either ERp29-specific or non-targeting (control) siRNA (as in Fig. 3 above). As shown in Fig. 9, under control conditions, newly synthesized CFTR progressed over 1-2 h of chase from the lower molecular weight, ER-localized form (Fig. 9, arrow B) to the higher molecular weight, mature form that results from glycosyl processing in the Golgi (Fig. 9, arrow C). The mature form predominated after 4 h of chase. In contrast, after siRNA-mediated depletion of ERp29, less immature CFTR (Band B) was present after pulse-labeling even though, as described above, CFTR mRNA expression was unchanged as assessed by quantitative PCR. Also, the appearance of mature CFTR when ERp29 is depleted was at least delayed if not frankly prevented. The rate at which Band B disappeared (∼20% over the first hour of chase) when ERp29 was depleted was similar to that under control conditions (∼30% over the first hour). Together, these data suggest that ERp29 may facilitate or promote both assembly of nascent CFTR in the ER and its trafficking to and eventual maturation in the Golgi. Because ERp29 appears to traverse the secretory pathway, this may occur by ERp29 accompanying or "escorting" newly synthesized CFTR throughout its trafficking itinerary.
We then performed Ussing chamber experiments to assess CFTR-mediated chloride transport in control and ERp29 siRNA-treated CFBE41o− WT polarized monolayers (Fig. 10). As shown in Fig. 10A, control siRNA- or ERp29 siRNA-transfected CFBE41o− WT cells had similar base-line short circuit currents (−2.6 ± 0.7 μA/cm² for the control versus −5.9 ± 1.9 μA/cm²; p not significant) as well as negligible amiloride-sensitive Isc. In the presence of a basolateral to apical chloride gradient, there was a significant ∼40% reduction in CFTR-mediated Isc (i.e. Isc activated by forskolin/IBMX and inhibited by CFTRinh-172) in the ERp29 siRNA-treated cells (157.7 ± 20.1 μA/cm², n = 12) versus the control siRNA-treated cells (262.2 ± 40.8 μA/cm², n = 12; p = 0.03; Fig. 10B). Together, these data indicate that specific depletion of ERp29 decreases the maturation and function of CFTR in the CFBE41o− model of bronchial epithelia.
DISCUSSION
The development of novel, mechanism-based therapies for CF is founded on the hypothesis that repair of mutant CFTR dysfunction will result in improved clinical outcomes (46). For ΔF508-CFTR, repair of function requires "correction" of its aberrant intracellular trafficking. 4PBA is one of the prototype ΔF508-CFTR "correctors" with demonstrated positive effects both in vitro (16) and in pilot human studies (47,48). However, the mechanism by which 4PBA corrects ΔF508-CFTR remains unclear.
We (4,18) and others (5) have suggested that 4PBA may improve ΔF508-CFTR trafficking by altering expression of the cytoplasmic 70-kDa heat shock proteins Hsc70 and Hsp70. We have also shown that such alterations in Hsc70 and Hsp70 expression can modulate the intracellular trafficking of the epithelial sodium channel ENaC (49). However, it is also clear that the cellular response to 4PBA in IB3-1 CF epithelial cells is quite complex (19,20).
Here, we sought to identify additional 4PBA-regulated species in IB3-1 CF epithelial cells that could contribute to improved ΔF508-CFTR trafficking and identified ERp29 as a potential 4PBA-regulated chaperone of ΔF508-CFTR. We found that ERp29, a resident of the ER lumen, has increased mRNA and protein abundance in IB3-1 cells after treatment with 4PBA and that ERp29 interacts with ΔF508-CFTR in these cells. We also demonstrate that ERp29 overexpression promotes WT- and ΔF508-CFTR functional expression in Xenopus oocytes; for WT-CFTR, this effect correlates with increased CFTR abundance at the plasma membrane. Overexpression of ERp29 similarly led to increased ΔF508-CFTR surface expression in IB3-1 cells, and siRNA-mediated depletion of ERp29 led to decreased expression and maturation of WT-CFTR as well as decreased CFTR-mediated chloride current in CFBE41o− WT cells. Together, these data suggest that ERp29 is a target of 4PBA and that ERp29 can indeed function as a bona fide molecular chaperone or escort for an integral membrane protein.
The ER utilizes two execution pathways to control the biogenesis and trafficking of membrane or soluble secretory proteins. In one, correctly folded and assembled proteins are packaged into membrane carriers to be transported from the ER to the Golgi complex on the way to various cellular or extracellular destinations. These carriers are formed by the activity of cytosolic protein complexes, which for many proteins, including CFTR, involve COP II (coat complex II) machinery (50). In contrast, proteins that are recognized as inappropriately folded or assembled are diverted to proteasome-mediated degradation in the cytosol, a process known as ER-associated degradation (51).
Folding of newly synthesized integral membrane proteins like CFTR is likely facilitated by chaperone machines residing both in the cytosol and in the ER. Some central chaperone components of these machines are functionally modulated by cochaperones with the objective of balancing the kinetic and thermodynamic constraints of client proteins to create an environment that will optimize their correct folding (50). Many components of this chaperone machinery also recognize misfolded proteins like ΔF508-CFTR and facilitate their degradation via ER-associated degradation.
A number of cytoplasmic and ER-resident chaperones and co-chaperones appear to be involved in regulating the biogenesis of WT- and/or ΔF508-CFTR. For example, cytoplasmic proteins such as Hsp90, Hsp40s, and the Hsp70 nucleotide exchange factor HspBP1 facilitate the Hsp70-dependent folding of CFTR (3)(4)(5)(52)(53)(54). In contrast, the cytosolic Hsc70-interacting protein CHIP is an E3 ubiquitin ligase that promotes CFTR degradation (55). Furthermore, data in yeast suggest that ER-associated degradation of CFTR is predominantly controlled by cytosolic rather than ER-resident chaperones (56), although a recently characterized ER membrane protein, Derlin-1, does appear to be pivotally involved (57,58).
Relatively little is known about the influence of ER-resident chaperones on the biogenesis of WT- or ΔF508-CFTR. Calnexin, an ER membrane protein, interacts with immature forms of CFTR and ΔF508-CFTR by binding to a luminal loop (6,8,9,59). However, calnexin has a more prolonged interaction with ΔF508- than WT-CFTR (6), and calnexin overexpression leads to enhanced retention of ΔF508 in the ER (8). Intriguingly, it appears that the classical ER chaperones BiP/grp78 and endoplasmin/grp94 do not interact with CFTR stably (2,3,6), and neither BiP nor calnexin appear to have central roles in ER-associated degradation of CFTR (8,9,56,59). These findings raised the prospect that other more recently discovered ER residents could play a key role in the regulation of CFTR biogenesis.
Here, we provide evidence that WT- and ΔF508-CFTR trafficking can indeed be influenced by the novel ER-luminal resident ERp29 and that ERp29 likely acts upon newly synthesized CFTR. These data provide strong functional evidence that ERp29 is, in fact, a bona fide molecular chaperone or co-chaperone, extending recent reports by us and others (31,40). The presumed function of ERp29 as a chaperone was supported previously by cellular studies of thyroglobulin, a soluble secretory protein (27,28,30). Moreover, ERp29 was found to induce a conformational change in the polyoma virus VP1 protein, which in turn facilitated passage of the virus across the ER membrane and successful infection (60). Biophysical characterization and activity assays also suggested that ERp29 is functionally distinct from the classical ER chaperones (26,29).
It has also been suggested that ERp29, like its Drosophila paralogue Windbeutel, could act to escort secretory proteins through post-ER compartments in addition to or instead of the ER chaperone functionality (25,27,40). Windbeutel has a crucial role escorting a glycosaminoglycan-modifying enzyme (Pipe) to the Golgi. In the Golgi, Pipe directs the ventral activation of an extracellular serine proteolytic cascade, which leads to determination of the ventral side of the embryo (61,62). The findings of ERp29 in conditioned cell media (Fig. 7 and Ref. 27) and in milk (43) are consistent with an escort role from the ER to the cell surface. However, because ERp29 was not found extracellularly in other physiological contexts such as in tooth enamel matrix (22) and plasma (44), it remains unclear whether extracellular ERp29 reflects an active escort role or passive egress.
Our present data (Fig. 7) demonstrate that ERp29 is secreted into the growth medium under control conditions and more so when ERp29 is overexpressed. Under control conditions, ERp29 was not detected at the surface of IB3-1 cells, but interestingly, 4PBA treatment increased the amount of ERp29 at the cell surface while decreasing that found in the culture media. In ERp29-overexpressing IB3-1 cells, whereas some ERp29 was detectable at the surface, a similar shift in ERp29 from the culture medium to the cell surface was observed with 4PBA treatment. These data associate improved ΔF508-CFTR trafficking after 4PBA treatment or ERp29 overexpression with increased cell surface expression of ERp29 in IB3-1 cells.
Our data also suggest that depletion of ERp29 decreases the amount of newly synthesized immature CFTR (Fig. 9). Although we cannot rule out that ERp29 depletion slows the initiation of CFTR synthesis, we feel this is less likely as ERp29 depletion does not alter CFTR mRNA expression. Instead, we posit that ERp29 may promote completion of synthesis and/or stabilization of newly formed WT-CFTR as well as delivery of CFTR to the Golgi or later compartments (Fig. 9). Noting evidence for a similar role with connexin43 (40), it seems reasonable to speculate that ERp29 could be accompanying or escorting ΔF508- or WT-CFTR through the exocytic pathway and also that ERp29 may remain associated with ΔF508- or WT-CFTR at the cell surface. Whether ERp29 facilitates WT-CFTR and ΔF508-CFTR trafficking by acting as a chaperone in the ER or as a post-ER escort or by both means warrants further investigation.
Recent data suggest that the ΔF508 mutation in the cytosolic first nucleotide binding domain (NBD1) of CFTR can alter the interaction of NBD1 with the cytoplasmic face of the transmembrane domains of CFTR (63). Although it is reasonable to hypothesize that such ΔF508-related structural alterations at the cytoplasmic face of the transmembrane domains may portend altered structure and interactions of the luminal transmembrane domain face, this does not appear to be the case for the functional interaction of ERp29 with WT- versus ΔF508-CFTR. In oocytes, ERp29 overexpression induced by injection of moderate amounts of cRNA increased the functional expression of both WT- and ΔF508-CFTR by ∼3-fold. These data suggest that the luminal/extracellular loops of WT- and ΔF508-CFTR are similarly recognized by ERp29.
That ERp29 overexpression induced by injection of moderate amounts of cRNA was more effective at promoting ΔF508- or WT-CFTR functional expression than that with higher amounts was not surprising. We have seen and extensively discussed potential mechanisms underlying similar effects for chaperone regulation of ENaC trafficking in oocytes (49), and there are other such reports for ER chaperones (15,64).
Our data suggest that association of ΔF508-CFTR with ERp29 is detectable by co-immunoprecipitation in IB3-1 cell extracts but that such an association of WT-CFTR with ERp29 is not readily detected in T84 colonic epithelial cells. Similar findings were made for the interaction of ERp29 with misfolded viral protein (45) and for calnexin with ΔF508-CFTR (8). Interestingly, association of ERp29 with thyroglobulin was only demonstrable by co-immunoprecipitation after cross-linking, suggesting a weak or transient interaction (27). Here, we found that ERp29 co-immunoprecipitated with ΔF508-CFTR in the absence of cross-linking, suggesting a more stable or robust interaction; this is consistent with previous suggestions that ERp29 could favor hydrophobic substrates such as integral membrane proteins (29,32). That ERp29 appears critical for the appropriate processing and assembly of connexin43 hemichannels (40) as well as the expression of Surfactant Protein B, which has significant hydrophobic character, in alveolar type II cells 4 supports this notion. Similarly, we have observed that ERp29 overexpression increases functional expression of the epithelial sodium channel ENaC in the oocyte system. 5 Collectively, these data enrich the hypothesis that ERp29 may function as a general chaperone or escort for membrane and/or hydrophobic secretory proteins.
In summary, our findings from siRNA-mediated depletion, overexpression, and 4PBA treatment are consistent with increased expression of ERp29 contributing to improved ΔF508-CFTR intracellular trafficking in CF epithelial cells. These data establish ERp29 as a functional target of the prototype ΔF508-CFTR corrector 4PBA. These data may also implicate ERp29 as a critical element in the mechanism by which 4PBA improves the trafficking and secretion of other mutant proteins such as ABCA3 (65), Surfactant Protein C (66), and α1-antitrypsin (67). Furthermore, our results support ERp29 functioning as a molecular chaperone of integral membrane proteins and are the first data demonstrating that overexpression of an ER resident can facilitate trafficking of WT-CFTR and ΔF508-CFTR. ER-luminal chaperones such as ERp29 can now be regarded as candidate pharmaceutical targets for novel CF therapies. Further investigation of the role of ERp29 in the | 2018-04-03T03:10:41.393Z | 2011-04-27T00:00:00.000 | {
"year": 2011,
"sha1": "21a7dd0d7887fae7e90e42a3ce0db6049207e83f",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/286/24/21239.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "59890a24c29a826b140bacec012775d83089180a",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
187802083 | pes2o/s2orc | v3-fos-license | DIGITALIZATION OF BUSINESS REGISTER
The authors in this paper analyze the new system and the opportunities that will arise for companies and citizens from the digitalization of the market and the business register. Since June 2017, the business registers of all EU countries have been interconnected. This means that anybody can search for information on companies registered in any EU country, as well as on companies in Iceland, Liechtenstein or Norway. Importantly, the registers can also share information on foreign branches and cross-border mergers of companies. This system – the Business Registers Interconnection System (BRIS) – is a joint effort by EU governments and the European Commission. This paper introduces the cooperation of business registers at European level.
INTRODUCTION
Over 10 million citizens are involved in cross-border judicial procedures each year. 1 That means a greater need for cooperation between different national judicial systems, and improved access to information on the judicial process in different European countries.
Businesses expand beyond national borders, using the opportunities offered by the internal market. As a result, there is an increasing demand for access to information on companies in a cross-border context. However, official information on companies is often not readily available on a cross-border basis. 2 The European e-Justice Portal 3 facilitates this cooperation by providing information and access to justice services across the EU.
The European Access Point to the interconnection of the national business registers (Business Registers Interconnection System – BRIS) has been available on the European e-Justice Portal since 8 June 2017. 4 EU citizens, and all others, can now search from the Portal within the national registers of 10 EU/EEA member states, and this number will steadily increase as more countries' implementations become fully operational. One of the member states already participating in BRIS is Croatia.
DEVELOPMENT OF BRIS
The 28 EU member states have different legal systems, and companies consequently face as many different sets of rules. This is one of the main reasons for harmonizing and digitalizing business registers.
The need for cross-border cooperation of business registers was identified nearly two decades ago, which led to the launching of the so-called European Business Register (EBR) initiative. This was a voluntary project undertaken by the business registers with the support of the European Commission 5 . It was followed by Business Register Interoperability Throughout Europe (BRITE) and the Internal Market Information System (IMI). Finally came the e-Justice initiative, whose aim was to assist the work of judicial authorities and practitioners and to facilitate citizens' access to judicial and legal information.
The European e-Justice Portal was introduced to improve citizens' access to justice, to facilitate procedures within the EU, and to make the resolution of disputes and the punishment of criminal behavior more effective 6 . In 2012, the European Commission launched an Action Plan on company law and corporate governance which supported three key objectives: enhancing transparency, engaging shareholders, and supporting companies' growth and competitiveness. The plan also proposed a codification of company law in order to make the regulatory framework more user-friendly 7 .
In the past years, the cross-border dimension of business has grown from both a company and a consumer perspective. Proper decisions require reliable information. The business register contains official information on companies, covering all particularly important matters and reports defined by law 8 .
In Europe, business registers offer a range of services that differ from one Member State to another. However, the core services provided by all registers are to register, examine and store company information, such as information on a company's legal form, its seat, capital and legal representatives, and to make this information available to the public 9 .
The European Commission conducted a survey on business registers in 2013 and obtained the following data and information 10 :
A. concerning business registers covered by the Directive 11 :
− 18 (67%) member states have only one central business register
− 7 member states have one central business register plus other regional or local registers (the ones with regional registers have no local registers and vice-versa)
− 2 member states do not have a central register but have regional and/or local registers, and they are interconnected;
C. concerning the Company Unique Identifier:
− less than half of the member states use Unique Identifiers for registering companies, different from the Registration Number.
7 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, Action Plan: European company law and corporate governance – a modern legal framework for more engaged shareholders and sustainable companies, COM/2012/0740 final.
As a result, BRIS ensures the availability of information on companies registered in any EU member state and EEA country.
BRIS is a part of the European e-Justice Portal, the one-stop-shop for citizens, businesses and legal professionals across Europe.
The BRIS infrastructure is a joint effort by EU governments and the European Commission. It facilitates public access to information on European companies and ensures that all European business registers can communicate with each other electronically in a safe and secure way.
The aim of BRIS is to enhance confidence in the single market through transparency and up-to-date information, and to reduce unnecessary burdens on companies 13 .
LEGAL BASIS AND TECHNICAL DATA
The term »Digitalization« denotes the representation of communication in writing or sound by electronic means. The concept thus covers electronic communication, including the transmission of information, the electronic storage of such communication, and electronic access to and retrieval from such storage 15 .
The term "Business register" comprises the national commercial registers, companies' registers, and any other register storing company information and making it available to the public within the meaning of Directive 2009/101/EC 16 .
The goal of BRIS is to provide access to up-to-date and official information on companies. Business registers play an essential role in this regard. They register, examine and store company information, such as information on a company's legal form, its seat, capital and legal representatives, and they make this information available to the public. They may also offer additional services, which may vary from one country to another. The minimum standards of the core services are set by European legislation 17 ; in particular, Member States have had to maintain electronic business registers 18 since 1 January 2007. Business registers in Europe operate on a national or regional basis: they only store information on companies registered in the territory (country or region) where they are competent. The directive requires the establishment of an information system that interconnects the central, commercial and companies registers (also referred to as business registers) of all Member States. The directive stresses the need to improve transparency and access to company information at EU level and the need to provide updated, reliable information on companies and their foreign branches. The Regulation details the technical specifications for the system.
Citizens may request any of the documents available in business registers, and these will be provided.
Anybody can get information about companies registered in business registers in the EU, Iceland, Liechtenstein or Norway using the 'Find a company' service on e-Justice portal.
At the moment, citizens and others can only request information that the national registers provide free of charge. Links to the webpages of the national registries can also be found there 22 .
CONCLUSION
From the beginning (June 2017), the business register of Croatia (Court Register) has been part of BRIS 23 . Through the e-Justice Portal one can search for information on companies registered in Croatia and 9 other countries. Consumers, creditors and other business partners can access all basic information free of charge - company name, company register number, seat, country, EUID, etc.
The goal of BRIS has been accomplished for companies registered in Croatia: to increase the confidence of EU member states, their citizens and others in the Single Market by ensuring a safer business environment for consumers, creditors and other business partners, and to provide access to up-to-date and official information on companies.
BRIS provides a higher degree of legal certainty as to the information in the European business registers and helps improve the cooperation between business registers in Europe for procedures concerning cross-border mergers and the exchange of relevant information regarding companies and their branches.
Once all of the member states are connected, BRIS will decisively increase legal certainty and confidence in the internal market. To facilitate access to information on companies across borders, all member states need to participate in order to make the network of business registers whole.
The digitalization of business registers is expected to make the transfer of data easier and more transparent, especially across borders, and to open up possibilities for electronic correspondence in everyday work.
BRIS enhances transparency and makes it possible to obtain company information across the EU.
EU company law needs to take further account of technical developments and develop more technological possibilities to ensure that companies, citizens and authorities benefit from the digital age. Greater use should be made of the opportunities offered by digital technology, which has no (cross-)borders. | 2019-06-13T13:13:56.041Z | 2017-12-01T00:00:00.000 | {
"year": 2017,
"sha1": "238328ed7b0a97f95742cfcf5b7ec088f1a3b233",
"oa_license": null,
"oa_url": "https://hrcak.srce.hr/file/284436",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "4f235983dbbc92d2d503605caa28f4d0c40d6131",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
88878116 | pes2o/s2orc | v3-fos-license | DETERMINATION OF METABOLIC AND NUTRITIONAL STATUS IN DAIRY COWS DURING EARLY AND MID LACTATION
The objective of the present study was to investigate nutritional and metabolic status in Simmental cows during early and mid lactation. Fifteen early lactating cows and 15 mid lactating cows were chosen for the investigation. Blood samples were collected to measure beta-hydroxybutyrate (BHB), non-esterified fatty acids (NEFA), triglycerides (TG), glucose and the activity of aspartate transaminase (AST). Early lactation cows, as compared to mid lactating cows, were found to have significantly higher (P<0.05) blood serum concentrations of NEFA, BHB and AST and lower blood serum concentrations of glucose (P<0.05) and TG (P>0.05). Significantly negative correlations were observed between BHB and glucose (P<0.01), BHB and TG (P<0.05), and NEFA and glucose (P<0.05). Significantly positive correlations were observed between NEFA and BHB (P<0.05), NEFA and AST (P<0.05), and glucose and TG (P<0.01). The results suggest that these parameters can serve as useful indicators of the nutritional and metabolic status of dairy cows during lactation.
Introduction
Production diseases, those associated with improper nutrition or management, are common in dairy cows. Dairy cows suffer from negative energy balance (NEB) during the first weeks of lactation due to the energy expenditure associated with milk production and limited feed intake, resulting in high mobilization of lipids from body fat reserves and in hypoglycaemia (Veenhuizen et al., 1993; Drackley, 1999; Oetzel, 2004). Nutrition, age, heredity, body condition score (BCS), management and energy imbalance are risk factors which possibly play a role in NEB, periparturient fatty liver and ketosis (Morrow et al., 1990; Pechova et al., 1997; Duffield et al., 1997). Clinical ketosis in dairy cows usually occurs between the second and seventh week of lactation. Nevertheless, many cows at this stage of lactation may suffer a subclinical form of ketosis, defined as increased blood ketone bodies without any other symptoms but accompanied by a considerable decrease in milk yield and increased susceptibility to other diseases (Duffield et al., 1997). Consequently, stressors and poor nutritional management causing a reduction in dry matter intake will result in large increases in NEFA around calving (Drackley, 1999). NEFA are preferentially and greatly accumulated as TG in the liver, primarily because of a decrease in very low density lipoprotein (VLDL) synthesis by hepatocytes (Herdt et al., 1982; Sevinc et al., 2003). However, when steatosis occurs, endogenous liver synthesis decreases, leading to a reduction in blood glucose, total proteins, albumins and globulins, cholesterol, TG and urea (Veenhuizen et al., 1993; Drackley, 1999; Sevinc et al., 2003; Djokovic et al., 2007; Djokovic et al., 2011). Fatty liver infiltration and hepatocyte degeneration involve cell membrane damage and hepatocyte destruction coupled with the release of cytoplasmic enzymes (aspartate aminotransferase (AST), gamma-glutamyl transferase (GGT), lactate dehydrogenase (LDH)) and marked increases in their circulating activities (Lubojacka et al., 2005; Pechova et al., 1997).
The objective of the present study was to investigate nutritional and metabolic status in Simmental cows during early and mid lactation.
Animals, diets and milk production
This experiment was carried out in a dairy herd (166 Simmental cows) suffering from several metabolic and reproductive disorders (Farm: Miličić-Ćurćić, Mrsać, Kraljevo). Two groups (n=15 cows) of clinically healthy cows were chosen from the herd. Group 1 consisted of early lactation cows, in the first month of lactation (16.1±9 days), and Group 2 included mid lactation cows between 3 and 5 months of lactation (124.8±27 days). The cows were mid-yielding, with a preceding lactation of about 6500 l. The body condition scores (BCS) of the test cows were 3.42 ± 0.55 (early lactation) and 3.27 ± 0.74 (mid lactation) (Ferguson et al., 1994). The experimental cows were kept in tie-stall barns. Diet and the housing facilities were adapted to research purposes, with the diet suited to the energy requirements of early and mid lactation cows. Early lactating cows were fed a diet consisting of 7 kg lucerne hay, 20 kg maize silage (30% Dry Matter, DM) and 5 kg concentrate (18% crude protein, CP). Mid lactating cows received a diet consisting of 5 kg meadow hay, 7 kg lucerne hay, 30 kg maize silage (30% DM) and 8 kg concentrate (18% CP). Dietary nutrient contents for dairy cows in early and mid lactation are given in Table 1. The chemical analysis of the feed was performed by the Weende methodology (Givens et al., 2000).
Biochemical analysis of blood
Blood samples were collected at 10:00 h or 4 to 6 hours after milking and feeding, by puncture of the jugular vein into sterile disposable test tubes, without anticoagulant. After clotting for 3 hours at 4°C and centrifugation (1500g, 10 minutes, 4°C), sera were carefully harvested and stored at -20°C until analysis. Blood samples collected on fluoride were immediately centrifuged in the same manner and plasmas were assessed for glucose concentrations. The following biochemical blood components were measured by different colorimetric techniques using spectrophotometers (Cobas Mira, Roche, Belgium and Gilford Stasar III, Gilford, USA): BHB and NEFA levels were measured by Randox (United Kingdom) kit, AST and glucose by Human (Germany) kit, and TG by Elitech (France) kit.
Statistical analysis
Differences in metabolic adaptation between early and mid lactation were assessed by comparing the concentrations of metabolic parameters with a t-test. Pearson's test was performed to evaluate significant correlations between biochemical metabolites in the pooled sample including cows in early and mid lactation. The statistical software Statgraphics Centurion (Statpoint Technologies Inc., Warrenton, Virginia, USA) was used for this purpose.
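The paper names Statgraphics Centurion, but the same two analyses are easy to sketch in open-source tools. The following is a minimal illustration using Python's scipy; the arrays are placeholder values, not the study data, and are included only to make the calls concrete.

```python
# Minimal sketch of the statistical comparisons described above, using
# scipy in place of Statgraphics Centurion. All values are placeholders.
import numpy as np
from scipy import stats

early_bhb = np.array([1.4, 1.7, 1.6, 1.8, 1.5])  # hypothetical BHB, early lactation (mmol/l)
mid_bhb = np.array([0.7, 0.8, 0.6, 0.9, 0.7])    # hypothetical BHB, mid lactation (mmol/l)

# Independent-samples t-test between the two lactation stages
t_stat, p_value = stats.ttest_ind(early_bhb, mid_bhb)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")

# Pearson correlation between two metabolites in the pooled sample
glucose = np.array([2.1, 2.4, 2.2, 2.0, 2.3, 3.1, 3.0, 3.3, 2.9, 3.2])
bhb = np.array([1.6, 1.4, 1.5, 1.8, 1.5, 0.8, 0.9, 0.6, 1.0, 0.7])
r, p = stats.pearsonr(glucose, bhb)
print(f"Pearson r = {r:.2f}, P = {p:.4f}")
```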
Results and Discussion
Blood biochemical metabolites in early lactation and mid lactation cows were compared in this study. To maintain homeostasis, dairy cows undergo intense lipid mobilization and ketogenesis, and the liver adapts to these metabolic changes (Drackley, 1999). Intensive postpartum lipid mobilization and ketogenesis are sufficient for a series of compensatory metabolic processes, with changes in the blood metabolic profile during early lactation in healthy cows (Drackley, 1999; Cincovic et al., 2012). Results for blood biochemical metabolites in both groups of cows are shown in Table 2. The correlation coefficients among the biochemical parameters, calculated for all cows in this experiment, are summarized in Table 3. In early lactating cows, NEFA and BHB values were significantly higher (P<0.05) than in mid lactating cows. NEFA concentrations > 0.40 mmol/l indicate problems with energy balance and subsequent intensive lipomobilization (Oetzel, 2004). By this criterion, blood NEFA values in early lactating cows (0.38 ± 0.29 mmol/l) provide evidence of high lipomobilization in the present study. Given that serum NEFA concentrations > 0.70 mmol/l are associated with ketosis (Oetzel, 2004), some early lactating cows in the present study had NEFA concentrations above the values indicative of subclinical ketosis. Subclinical ketosis may also be diagnosed when serum BHB concentrations are above 1.2 mmol/l, while clinical ketosis is associated with BHB concentrations above 2.6 mmol/l (Oetzel, 2004; Duffield, 2000). The early lactating cows in the present study showed BHB concentrations above the value indicative of subclinical ketosis (1.59±0.25 mmol/l). The data presented show that serum NEFA may be used for detecting high lipomobilization, but not subclinical ketosis. This is in agreement with Duffield (2000), who stated that NEFA is a better indicator of energy imbalance in prepartum animals than BHB, whereas BHB is more useful postpartum. In the present study, a significant positive correlation was established between NEFA and BHB (P<0.05) in the sera, suggesting that both parameters are helpful indicators of energy balance during lactation.
Blood glucose values in mid lactation cows were within the physiological range of 2.5-4.2 mmol/l (Radostis et al., 2000), whereas hypoglycaemia (2.29 ± 0.48 mmol/l) was detected in early lactating cows. Taking this criterion into account, early lactating cows had indicative values but did not display any clinical signs, suggesting that they had a typical subclinical condition. In fact, a significant correlation was observed between NEFA and glucose (P<0.05) and between BHB and glucose (P<0.01). Similar correlations were observed by other authors (Bobe et al., 2004; Djokovic et al., 2011).
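To make the diagnostic cut-offs quoted in the two preceding paragraphs concrete, the sketch below classifies one cow's serum values against them. The thresholds are those cited from Oetzel (2004), Duffield (2000) and Radostis et al. (2000); the function and its names are illustrative, not part of the study.

```python
# Sketch of the diagnostic cut-offs quoted above (Oetzel, 2004;
# Duffield, 2000; Radostis et al., 2000). Names are illustrative.
def metabolic_flags(nefa, bhb, glucose):
    """Return indicative findings for one cow's serum values (mmol/l)."""
    flags = []
    if nefa > 0.70:
        flags.append("NEFA associated with ketosis (> 0.70 mmol/l)")
    elif nefa > 0.40:
        flags.append("NEFA indicates intensive lipomobilization (> 0.40 mmol/l)")
    if bhb > 2.6:
        flags.append("BHB consistent with clinical ketosis (> 2.6 mmol/l)")
    elif bhb > 1.2:
        flags.append("BHB consistent with subclinical ketosis (> 1.2 mmol/l)")
    if glucose < 2.5:
        flags.append("hypoglycaemia (below the 2.5-4.2 mmol/l physiological range)")
    return flags or ["within the quoted reference ranges"]

# Mean values reported for the early lactation group
print(metabolic_flags(nefa=0.38, bhb=1.59, glucose=2.29))
```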
Fat infiltration into the liver may also affect the concentration of some blood components (Morrow et al., 1990; Lubojacka et al., 2005). The serum level of TG is an indicator of hepatic functionality, and a decrease in its concentration may suggest fat infiltration of the liver (Lubojacka et al., 2005; Djokovic et al., 2007). The concentration of serum TG was significantly lower (P<0.05) in ketotic cows compared to healthy cows (Djokovic et al., 2007). These results may show that TG accumulates in the liver cells of ketotic cows and causes blood TG to decrease. In the present study, blood TG was lower in early lactating cows than in mid lactating cows (0.12 ± 0.02 mmol/l vs 0.15 ± 0.04 mmol/l), but without a significant difference. This study has shown the possible development of fat infiltration of the liver in early lactation cows, which was supported by significant correlations between TG and glucose (P<0.01) and between TG and BHB (P<0.05). When fat infiltrates the liver, hepatocyte degeneration involves cell membrane damage and hepatocyte destruction, and the levels of enzymes that indicate liver injury (AST, GGT and LDH) are generally elevated (Pechova et al., 1997; Lubojacka et al., 2005; Djokovic et al., 2011). AST values in the present study were statistically higher (P<0.05) in early lactation cows than in mid lactating cows. AST activity higher than 100 U/l is indicative of hepatic disorders (González et al., 2011). As a result, early lactation cows in our study showed changes in the morphological and functional state of liver cells, probably due to mild fat infiltration. Also, a positive correlation (P<0.05) was observed between AST activity and NEFA values. Mild fatty infiltration of the liver in dairy cows during transition and maximum lactation is considered to be almost physiological (Bobe et al., 2004). In the present study, all data concerning serum AST activities suggested that the process of lipomobilization was sufficient to cause mild fat infiltration of liver cells in the early lactating cows.
Conclusion
In conclusion, on the basis of changes in blood biochemical metabolites, this study suggests that early lactation cows showed physiological adaptive changes, which were associated with subclinical ketosis and mild fat infiltration of liver cells. These metabolites can serve as useful indicators of the nutritional and metabolic status of dairy cows during lactation. | 2019-04-01T13:16:27.862Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "5d0a5e04a00f80f09517b0ab71caa80adbfea843",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.2298/bah1601001d",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "15f9524c91f93caa5a917e28a8e114f3d78127a6",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
687201 | pes2o/s2orc | v3-fos-license | Integrating technology into complex intervention trial processes: a case study
Background Trials of complex interventions are associated with high costs and burdens in terms of paperwork, management, data collection, validation, and intervention fidelity assessment occurring across multiple sites. Traditional data collection methods rely on paper-based forms, where processing can be time-consuming and error rates high. Electronic source data collection can potentially address many of these inefficiencies, but has not routinely been used in complex intervention trials. Here we present the use of an on-line system for managing all aspects of data handling and for the monitoring of trial processes in a multicentre trial of a complex intervention. We custom built a web-accessible software application for the delivery of ENGAGE-HD, a multicentre trial of a complex physical therapy intervention. The software incorporated functionality for participant randomisation, data collection and assessment of intervention fidelity. It was accessible to multiple users with differing levels of access depending on required usage or to maintain blinding. Each site was supplied with a 4G-enabled iPad for accessing the system. The impact of this system was quantified through review of data quality and collation of feedback from site coordinators and assessors through structured process interviews. Results The custom-built system was an efficient tool for collecting data and managing trial processes. Although the set-up time required was significant, using the system resulted in an overall data completion rate of 98.5% with a data query rate of 0.1%, the majority of which were resolved in under a week. Feedback from research staff indicated that the system was highly acceptable for use in a research environment. This was a reflection of the portability and accessibility of the system when using the iPad and its usefulness in aiding accurate data collection, intervention fidelity and general administration. Conclusions A combination of commercially available hardware and a bespoke online database designed to support data collection, intervention fidelity and trial progress provides a viable option for streamlining trial processes in a multicentre complex intervention trial. There is scope to further extend the system to cater for larger trials and add further functionality such as automatic reporting facilities and participant management support. Trial registration ISRCTN65378754, registered on 13 March 2014.
Background
Running a multicentre complex intervention trial generates significant trial management demands both centrally and in local teams independent of the complexity of the trial design. Complex intervention trials are likely to require data collection via multiple formats in order to monitor and record multiple and differing aspects of the intervention [1]. These may include the more traditional paper Case Report Forms (CRFs), video observations and audiorecordings, the latter usually requiring multiple pieces of specialist equipment, all contributing to increased costs. Complex interventions also require some degree of assessment of how accurately the intervention is delivered to ensure consistency of delivery across multiple sites. Again this can be costly in terms of personnel hours required and can contribute to the general inefficiency of a trial.
Paper CRFs remain the most widely used data collection tool and are perceived to be quick to implement and relatively simple to manage [2]. Benefits of using paper CRFs include the ease of testing, distribution to sites and simplicity at the point of study closure and archiving. However, the need for transcription to electronic systems and/or photocopying of paper CRFs for sending to the trial centre is costly and introduces likely avenues for error and duplication of effort [3]. Further, lengthy monitoring visits may be required to ensure the quality and appropriateness of the data being collected, necessitating comparison of transcribed records against the original source material. A move to electronic source CRF (eSource) data collection provides a method for reducing this burden. Initial set-up can be time-consuming, however, and additional training is often required for correct usage [3]. Currently, eSource data collection has failed to replace traditional paper CRFs in most complex intervention trial environments, although the use of web-accessible data capture methods to improve trial efficiency is an area of emerging interest.
Our aim was to use modern and portable eSource technologies to reduce many of the management and data collection errors and inefficiencies that are often experienced in complex intervention trials, and to test this in a small-scale study. ENGAGE-HD is a multicentre, single-blinded trial of a complex physical activity intervention in Huntington's disease where the intervention was delivered at the participant's home. Trial assessments were performed over three time points at eight sites across the UK [4]. Although a relatively small trial, there were a number of specific complexities that made this a good case study candidate for establishing proof of principle for such a system: each site had a coordinator, one or more coaches for the physical and social (control) intervention and assessors blinded to the arm allocation of the participant. Briefly, site coordinators scheduled appointments and visits, aided data collection and acted as the main point of contact for the Central Study Team. The blinded assessors conducted the evaluations for the primary trial outcomes at three time points in a hospital-based setting, with each assessment requiring the completion of 9-15 individual CRFs. The intervention coaches visited participants at their home on six occasions between assessments 1 and 2 where they were required to deliver the intervention and collect weekly diary data from participants. Therefore, data collection needed to be as efficient as possible to maximise data capture across all settings and time points. In this paper we describe the system development, implementation and methods to capture the impact of a custom-designed web-based system developed with the intention of streamlining trial processes in ENGAGE-HD.
System development
The web-based system for ENGAGE-HD was designed to be a robust, intuitive and flexible system with the primary aim of aiding trial management and data collection. We incorporated a number of differing functional modules into the system which provided a facility for monitoring and assessing intervention fidelity and for performing the randomisation of participants. Development took a period of 3 months, encompassing alpha-testing, which focussed on the initial look and feel of the system, and beta-testing, which focussed on the user acceptability of the system. Development and testing were undertaken by an in-house software developer at an approximate cost of £7250, with data manager input at an approximate cost of £9134 for the initial development. We estimate that the developer provided maintenance and ongoing support for 3 h over the duration of the recruitment period, which was an additional cost of £3782. This culminated in primarily eSource data collection, with the option of using paper CRFs where desired or required. All system users were issued with their own username and password to access the facility, with differing levels of permissions conferred to individual users dependent on their role. For example, the blinding of assessors was maintained by denying them access to participant allocation information, and the intervention coaches were given specific access to upload audio-recordings to the database and to access a summary report of the subsequent fidelity evaluation.
The platform was created to be accessible through desktop computers and tablet devices to allow for maximum functionality and real-time data capture. However, where site staff felt more comfortable with completing paper CRFs, the software was designed so that the CRF could be generated as a PDF directly from the platform and printed for manual use. The software was also built to enable participant randomisation after the completion of the initial assessment without the need to contact the Central Study Team, and also allowed for real-time data entry during home visits and assessments. Randomisation was performed using minimisation [5] in order to balance between groups based on data obtained at the baseline assessment (age, sex and Unified Huntington's Disease Rating Scale - Total Motor Score (UHDRS-TMS)).
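The paper does not publish its randomisation code, so the sketch below shows one common form of minimisation (Pocock-Simon marginal totals) over the three balancing factors named above. All names, factor levels and the tie-breaking rule are illustrative assumptions, not the actual ENGAGE-HD implementation.

```python
# Illustrative Pocock-Simon minimisation over the balancing factors
# named in the text. A generic sketch, not the trial's actual code.
import random
from collections import defaultdict

ARMS = ["physical", "social"]
# counts[arm][(factor, level)] = participants already allocated
counts = {arm: defaultdict(int) for arm in ARMS}

def allocate(factors):
    """factors: e.g. {'sex': 'F', 'age_band': '50-59', 'tms_band': 'high'}"""
    # Marginal imbalance that would result from assigning each arm
    imbalance = {arm: sum(counts[arm][item] for item in factors.items())
                 for arm in ARMS}
    best = min(imbalance.values())
    arm = random.choice([a for a in ARMS if imbalance[a] == best])  # random tie-break
    for item in factors.items():
        counts[arm][item] += 1
    return arm

print(allocate({"sex": "F", "age_band": "50-59", "tms_band": "high"}))
```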
The software added extra functionality as a data collection tool, with in-built data validation rules to minimise errors at the point of data entry. Because the format of data entry fields was controlled, data could only be entered in the correct format, and values outside prespecified ranges generated a warning message to alert the researcher before the form was saved. The system was also engineered with the capability to upload audio-recordings for fidelity assessment. Intervention coaches could record the audio of their interaction with the participant using the iPad; the recording was then securely uploaded to the database before being deleted from the device in accordance with data protection strictures. The audio-recording was rated for fidelity of the delivery of the intervention in line with the theoretical framework on which it was developed [6]. Feedback from the intervention trainer was delivered to intervention coaches via Skype® on the tablet device.
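Point-of-entry validation of the kind described above typically combines a hard format check with a soft range warning. The sketch below illustrates this pattern; the field names and ranges are assumptions for illustration, not the trial's actual rules.

```python
# Sketch of point-of-entry validation: a hard type check plus a soft
# range warning before the form is saved. Fields and ranges are
# illustrative, not the trial's actual metadata.
FIELDS = {
    "age": {"type": int, "min": 18, "max": 90},
    "tms": {"type": float, "min": 0.0, "max": 124.0},  # UHDRS-TMS scale
}

def validate(field, raw_value):
    rule = FIELDS[field]
    try:
        value = rule["type"](raw_value)  # hard check: wrong format is rejected
    except ValueError:
        return None, f"'{raw_value}' is not a valid {rule['type'].__name__}"
    if not rule["min"] <= value <= rule["max"]:  # soft check: warn, allow override
        return value, (f"warning: {field}={value} outside expected range "
                       f"[{rule['min']}, {rule['max']}] - please confirm")
    return value, None

print(validate("age", "59"))   # (59, None)
print(validate("tms", "250"))  # out-of-range warning
```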
The software automatically stored entered data in a structured query language (SQL) database, from which interim extracts could be made available on demand. Further design aspects enabled the Central Study Team to receive e-mail alerts once assessment data had been submitted so that data queries could be raised and resolved in a timely fashion. Any changes made to the data forms after the original submission were automatically saved, logging all details required by Good Clinical Practice (GCP): the user who made the change, the date of the change and the values before and after the change.
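An audit trail of the kind required by GCP can be reduced to a single append-only table recording who changed what, when, and the before and after values. The sketch below shows this shape with sqlite3; the table and column names are our own, not those of the ENGAGE-HD database.

```python
# Sketch of a GCP-style audit trail: every post-submission edit records
# the user, timestamp and before/after values. Names are illustrative.
import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE audit_log (
    form_id TEXT, field TEXT, old_value TEXT, new_value TEXT,
    changed_by TEXT, changed_at TEXT)""")

def log_change(form_id, field, old_value, new_value, user):
    db.execute(
        "INSERT INTO audit_log VALUES (?, ?, ?, ?, ?, ?)",
        (form_id, field, str(old_value), str(new_value), user,
         datetime.now(timezone.utc).isoformat()),
    )
    db.commit()

log_change("CRF-014", "weight_kg", 71, 74, user="site3_coordinator")
print(db.execute("SELECT * FROM audit_log").fetchall())
```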
Device configuration
The web-based system was intended to be accessible to researchers in real time so we decided to invest in portable, web-enabled devices to make this possible. A number of factors were considered in the choice of device to be used in the trial. These ranged from availability, cost, portability and security to the ease and acceptability of use in the trial setting. We concluded that the device which would offer the best compromise over all was the iPad Air® as it is a widely used device, with uniform interface design across most applications and would require the least amount of additional training for users. The ability to use the videoconferencing software Skype® and integrated video-and audio-recording meant that a single device had all the required functionality to perform all trial-related tasks at site. All devices were configured to use a secure Wi-Fi connection where available or to fall back on 4G data services, which allowed reliable connectivity at all research sites.
Implementation
Prior to implementation at research sites, the system was fully tested and validated using all devices and interfaces that would be used as part of the trial. Each site was visited by the Central Study Team for staff training in using the iPad, uploading data through the website and the use of the supplementary software packages. Checks were also carried out on the availability of secure Wi-Fi access and cellular network coverage. Each site was required to provide their agreement to use the device and system as instructed and to maintain device updates where necessary. Support and additional training was also given through e-mail and telephone contact with the trial manager on a needs basis.
Methods to capture impact
Our evaluation of the impact of the system was based on two principal criteria: (1) trial data quality and (2) user feedback. To assess data quality, we made a comparison of the number of data queries raised to the number of data items entered and the time taken to resolve data queries. We also descriptively summarised the proportion of missing data items in the minimum dataset. We also noted how many sites were using paper CRFs by the third and final participant assessment.
Feedback was obtained through structured telephone interviews with members of research staff from across six of the eight sites involved in the trial. We aimed to get feedback from at least one person per site who had used the database. We were able to conduct interviews with six members of site staff which included: two intervention coaches, three site coordinators (one of whom also acted as a coach) and one blinded assessor. Interviews consisted of a predetermined set of ten questions covering topics such as the usability and accessibility of the system and support requirements. Interviews were conducted by the database developer and responses to questions were recorded verbatim in real time. Responses were summarised according to the questions asked using the method of qualitative description [7,8].
Trial participants
Target recruitment for ENGAGE-HD inflated for losses to follow-up was 62, requiring 46 for final analysis. Forty-six participants were recruited (n = 25 male and n = 21 female). Mean age was 59.4 years (standard deviation (SD) 10.1).
Training and on-site support
The development of our bespoke, purpose-built electronic trial management system required intensive resource in the initial set-up period. Significant time was needed to accurately write all the necessary metadata, which included the use of in-built validation rules to prevent the entry of erroneous data. Although not formally recorded, we estimate that the development of the database required 12 weeks of data manager time and approximately 10 weeks of developer time, running concurrently, to build and sufficiently test the system.
The initial site set-up process required intensive training during the initiation visit. In addition to standard site initiation tasks, an extra 1-2 h was set aside to familiarise key staff with the use of the iPad and the functions of the ENGAGE-HD database. The requirement for additional training differed between individual sites, based largely on staff familiarity with using iPads or other electronic tablet devices.
Step-by-step guides were produced to help staff with the use of additional applications required for audio- and video-recording. Further support (via e-mails and telephone calls) was also sometimes required during the initial months after recruitment began until staff were sufficiently confident with using the system.
Data quality
We evaluated the impact of the system on data collection and quality by looking at the completeness and quality of the data that was received through the database. Table 1 shows the number of completed electronic CRFs broken down by individual study sites. Two sites achieved a completion rate of 100%, with the site returning the fewest completed forms still achieving a completion rate of 96%. Across the study, 2639 of 2678 CRFs were successfully completed, giving an overall return rate of 98.5%. Individual sites were contacted for missing CRFs on multiple occasions and this was monitored monthly by the Trial Management Group. After a period of approximately 3 months, as none of the data contained in the missing CRFs were critical for the primary trial analysis, the Trial Management Committee decided to record this 1.5% of nonreturned CRFs as missing data.
We were also able to assess the number of data queries per site and the median time it took to resolve those data issues. Overall there were 141 data queries for the whole trial, which were resolved in a median time of 3 days (Table 1). All but one site routinely resolved data queries within 1 week of the query being raised. We calculated that each participant in the physical intervention arm would contribute a total of 3134 data points across the whole trial, and each participant in the social arm 1674. Assuming that all participants completed the trial as intended and that all CRFs were completed, this would give a total of over 100,000 individual data points. Therefore, the number of data queries raised across the trial constitutes approximately 0.1% of all the data entered for the trial.
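The headline figures quoted above can be reproduced from the reported counts; the only assumption in the sketch below is an equal split of the 46 participants across the two arms.

```python
# Reproducing the quoted figures from the reported counts. The equal
# per-arm split of participants is an assumption for this check.
completed, total_crfs = 2639, 2678
print(f"CRF completion rate: {completed / total_crfs:.1%}")  # 98.5%

n_per_arm = 46 // 2
max_data_points = n_per_arm * (3134 + 1674)  # per-participant totals by arm
print(f"maximum individual data points: {max_data_points}")  # over 100,000

queries = 141
print(f"approximate query rate: {queries / max_data_points:.1%}")  # ~0.1%
```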
In order to assess fidelity, coaches in the intervention arm were also asked to audio-record one of their home visits, using an in-built iPad application and then upload that file to the Central Study Team through the ENGAGE-HD database. Sixteen participants completed the intervention, and in all cases the coaches successfully uploaded an audio-recording via the ENGAGE-HD database. Audiorecordings captured by the iPad device were of sufficient sound quality to be accurately transcribed, and formed the basis of further fidelity analysis reported elsewhere [6].
Lastly, we reviewed the reported protocol deviations for the trial to assess the impact of the database on protocol adherence. Out of the 30 protocol deviations reported for the ENGAGE-HD trial, three were related to the use of the database. Two incidents could be attributed to user error: the wrong age category was selected on the randomisation form, and one audio-recording was not uploaded to the database within 48 h of being taken so that it could be deleted from the iPad. The other incident was a case where the blinded assessor had difficulty accessing the database and was unable to upload the baseline information immediately. Although the data was recorded on back-up paper copies, the process led to a delay in the randomisation of the participant, so that they were not informed of their group allocation until a number of days after the assessment. The other protocol deviations were largely attributed to scheduling issues or intrasite communications leading to the unblinding of assessors.
User impact
We considered the impact of the integrated system on both the trial staff and the site staff. For trial staff, one of the major advantages we found with using this system was that it allowed real-time monitoring of the progress of the trial and of individual participants. The database was programmed to send e-mail alerts to the trial and data managers as soon as participant assessment data had been entered. This enabled prompt review of the data collected, allowing swift generation and resolution of data queries, as described above. Further, this level of access allowed the Central Study Team to monitor the progress of intervention visits and flag up if there were any obvious delays. If such a problem was noted, communication with the site could be initiated immediately to offer further support and advice if required. We performed semi-structured interviews with a mix of site staff (n = 6) involved in ENGAGE-HD to investigate their views on the impact of the database system on the delivery of the study. A summary of the results from these interviews can be seen in Table 2.
One interviewee was new to the collection of trial data, but the remainder had performed similar work before and they all indicated that the majority of data collection was using traditional paper CRFs. Three people mentioned that data collected on paper CRFs was then entered on to a database via a desktop PC and a further person said that they had worked on studies previously where data was uploaded directly into a database.
Half of the staff interviewed indicated that their experience of using iPads was limited and although the remainder had experience through personal use, no one had previously used an iPad in a work context. All respondents were extremely positive about the training they received for using the iPad and database. All respondents stated that the training and support they received was adequate and no one had any suggestions for improvements. Specific responses highlighted that queries were responded to quickly, the standard operating procedures (SOPs) provided were clear and easy to follow and that the site initiation training was good. Half of the staff interviewed (n = 3) also singled out that efficient management of the trial was important in how well supported they felt.
To gain more specific perspectives of site staff, we asked them to describe the relative advantages and disadvantages of fully eSource data collection using the supplied iPad. The summary of responses can be found in Table 2. These included benefits such as ease of use and data collection, mobility of use and the ability to use the additional apps to facilitate the work. Some of the drawbacks mentioned included limited battery life, only having one device per site, and that on-line forms could be redesigned to allow more feedback.
All interviewees said that they took advantage of some of the additional functionality (Skype®, audio-recording, camera, e-mail) of the iPad. The specific apps and the amount they were used were dependent on the research role of the specific interviewee. Intervention coaches particularly liked the use of the audio-recording and Skype® functions to discuss and receive feedback on the delivery of the intervention with the intervention trainer. Site coordinators employed the use of the camera to take high-resolution images of documents which could then be e-mailed securely to the coordinating centre.
Lastly, we asked the site staff if their experience of using the iPad and database for ENGAGE-HD had influenced their views on working on trials in the future. The response to this was overwhelmingly positive, with the move to paperless systems being particularly popular, although it was noted that different studies would have specific requirements.
Overall the feedback of the research staff interviewed on the use of the iPad and database for delivering ENGAGE-HD demonstrated high levels of acceptability to site users.
Discussion
Through this case study, we have demonstrated that it is possible to produce reliable, web-accessible software for the purposes of data collection, data management and trial management in a multisite trial of a complex intervention. The return rate and quality of data collected were particularly high and the system had high user acceptability ratings. The use of the system also allowed for efficient trial management by enabling sites to perform randomisation of participants themselves and by reducing the need for extensive and repeated on-site data monitoring, which offset the initial intensive training required during site set-up.
The system we designed facilitated the immediate collection of source data during study visits and assessments via use of the iPad, negating the need for secondary data transcription once it had been collected. It seems likely that this portable collection facility, which incorporated prompts and validation checks in real time, ensured that the overall form completion rate for the trial was high. Importantly, the software enabled the facility to provide paper back-ups of all CRFs when access to the live system was not possible or desired and, as such, data collection was not compromised.
Data validation rules that were embedded into the software ensured that the accuracy of completed eSource data was high, resulting in a low administrative burden in terms of generating and resolving data queries. This specific database design feature allowed us to be confident that when assessment data had been entered and checked the data was sufficiently clean for analysis, thereby negating the need for lengthy data cleaning once the trial was finished. We believe that this time-saving efficiency more than compensated for the higher additional set-up time required for developing and programming the software. Additionally, the combination of high levels of data completion and accuracy along with the efficient site communication meant that intensive site-monitoring visits were not deemed necessary to ensure the efficient delivery of the trial.
The lack of multiple copies of paper CRFs in this trial also conferred other benefits. Firstly, archiving at sites can be problematic due to constraints on space in suitable facilities. The reduction in the volume of paperwork generated in ENGAGE-HD through the implementation of a largely paperless system is, therefore, an important consideration. Secondly, we were able to reduce the number of costly site-monitoring visits, due to a decreased need to carry out quality control inspections of paper records against the transcribed copies on digital systems.
The use of this web-based system in ENGAGE-HD had added benefits beyond data collection. The randomisation was embedded in the programme, ensuring that participants could be effectively randomised on the day of their screening assessment without the need for contacting the coordinating centre. This feature reduced the timelines of progressing any given participant through the trial by removing the need for staff at the sites and the coordinating centre to communicate directly. Remote and automated randomisation in itself is not novel and the approach has been published elsewhere [9][10][11] as an effective method for allocating participants, but to our knowledge it has not previously been embedded within the study data collection system alongside other trial-monitoring and management features. Perhaps the most important design feature of the ENGAGE-HD database was the integrated method for fidelity monitoring, a vital aspect of delivering complex interventions. This allowed rapid assessment of the fidelity of the intervention delivery by all coaches with all participants in the physical intervention arm, and for feedback to be delivered in a useful and timely fashion. For all participants in the intervention arm, an audio-recording was successfully uploaded for fidelity assessment, which indicates the utility of the database for this process. Additionally, the individualised permissions settings designed for each user enabled intervention coaches to access confidential feedback on their delivery of the intervention through the system, which they could refer back to at any time. We believe that this aspect of the system was key to the high levels of intervention fidelity in the trial, which we have reported elsewhere [6].
It was important to gain the perspectives of end users of the database who were not involved in its development to determine the acceptability of the technology in a research environment. If end users are not comfortable or satisfied with the data collection and management modules, they will either not engage with using the system or, if enforced, their use may be more inefficient and less accurate. In general, the feedback we received from site staff through the telephone interviews conducted was positive and the move from paper CRF data collection to eSource was widely welcomed. Intervention coaches were particularly receptive to the use of Skype® and audio-recordings for fidelity monitoring and found the feedback received useful for intervention delivery [6].
Some staff had reservations about using the iPad and navigating the database whilst assessing participants, but we found that additional support and training from the coordinating centre alleviated these fears. Further, the feedback that we received revealed that an additional benefit of using an iPad on a cellular network was that it was unaffected by many of the security features present on NHS networks, which prevent the use of some web-enabled applications such as Skype®.
This case study provides preliminary indications that our system was indeed beneficial to trial processes. We do, however, acknowledge that our evaluation of impact could have been strengthened by obtaining a wider range of perspectives. A better approach would have been to conduct face-to-face interviews with site staff rather than rely on a structured telephone interview.
We note that a major limitation in the evaluation of the system as a whole is the lack of a comparator (such as a purely paper-based system), which is necessary to draw firm conclusions about the efficiency of our system. ENGAGE-HD was a feasibility trial of a complex intervention and we did not plan to formally evaluate the efficacy of our system in reducing data queries or data cleaning. In spite of differential data query rates across our sites, the overall data return rate across all ENGAGE-HD sites was, however, similar. Furthermore, the real-time monitoring of data and the in-built data validations in our system meant that data cleaning occurred automatically. This suggests that the system described here provides the added advantage of site support during data collection, but we are not able to confirm this without a full-scale comparative evaluation. We await, with interest, the results of the TRANSFoRM study, which is currently evaluating the utility of an electronic platform in the recruitment and follow-up of participants in primary care research [10], as this is likely to provide more support for the approach we describe here.
Whilst we found that the use of the on-line database and portable technologies in the delivery of ENGAGE-HD was largely beneficial, we recognise that this system had limitations. In order for the system to be functional, internet access is required at all times; therefore, secure Wi-Fi or adequate cellular coverage must be available. We circumvented this issue by purchasing 4G-enabled tablet devices, but recognise that this adds a significant extra cost. Although lack of accessibility was an initial concern, we resolved this by ensuring that paper copies could be obtained without internet access. This ensured that accessibility did not become an issue during the trial.
One of the aims of the ENGAGE-HD trial was to design and implement an on-line system to streamline processes associated with fidelity monitoring, participant randomisation, site communication and data collection in a multicentre complex intervention trial. Electronic data capture is becoming increasingly popular as a method of data acquisition in clinical trials for reducing both the time and costs required to deliver a study [12,13], but the number of studies using such applications remains comparatively small [14]. Current methods of electronic data capture usually require the transcription of paper records to eSource via manual input or scanning technologies, both of which can introduce error and a significant time burden on the research team. With both of these methods, data still needs to be checked for accuracy and validity, further adding to trial running costs. Such processes contribute to general inefficiencies in trials where large amounts of data are being collected [15]. Additional time-saving benefits of automated trial processes and data collection compared to traditional paper CRF methods and management are being increasingly recognised in industry-sponsored trials [12,16,17].
For future versions of a similar trial database, we would like to investigate the addition of further functional modules that could further reduce time burdens in administration and recording of necessary documentation. For example, we found that although protocol deviations were reported in a timely fashion, there were significant delays in receiving the necessary signed documentation. We believe that by transferring this process to an on-line facility, documentation would be completed in a more timely fashion. Similarly, we would integrate screening logs and safety reporting into future versions of the database to improve the speed of documentation return and improve consistency between sites.
In this trial we continued to monitor outcome data as it was uploaded to check accuracy and completeness. As a result, future iterations should take further time during the set-up period to add in additional, stricter validation rules or 'flags' for specific outcomes. A flagging system could be incorporated so that data outside of accepted normative values would produce a prompt to check the validity of the data entered. Coupled with 'free-text' sections, where those entering data could add further detail and explanation, this would prevent grossly erroneous data from being entered and reduce the need for 'by eye' data monitoring. Further, an investment in developing robust data validations at the outset can be realised in subsequent trials, where the metadata can be simply and reliably replicated.
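The proposed flagging behaviour could take a shape like the following sketch, where out-of-range values are saved but flagged and paired with a free-text explanation. The outcome name and bounds are hypothetical.

```python
# Sketch of the proposed flagging: out-of-range values are saved but
# flagged, with an optional free-text explanation. Names and bounds
# are hypothetical.
NORMATIVE = {"resting_hr": (40, 120)}  # illustrative outcome and bounds

def record(outcome, value, note=""):
    lo, hi = NORMATIVE[outcome]
    entry = {"outcome": outcome, "value": value, "note": note,
             "flagged": not (lo <= value <= hi)}
    if entry["flagged"] and not entry["note"]:
        entry["note"] = "out-of-range value entered without explanation"
    return entry

print(record("resting_hr", 135, note="participant anxious during reading"))
print(record("resting_hr", 72))
```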
Conclusions
Here we have demonstrated that the use of portable and real-time technologies at the researcher-participant interface in a multicentre complex intervention trial is a viable and efficient method for improving trial management and data collection procedures. There is scope to extend the system further by adding other functional modules to aid in other aspects of trial management. This includes schedule planning, automatic compliance, safety reporting and dissemination of trial documentation updates. Although assessing the validity of such a system in a larger-scale trial is still required, the success of this system in ENGAGE-HD has provided a basis for an accessible, secure platform for the refinement and delivery of complex intervention trials. | 2016-11-30T08:43:22.317Z | 2016-11-17T00:00:00.000 | {
"year": 2016,
"sha1": "f2cadcca67cc95d60527f3af8ea124c3a08dddc5",
"oa_license": "CCBY",
"oa_url": "https://trialsjournal.biomedcentral.com/track/pdf/10.1186/s13063-016-1674-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f2cadcca67cc95d60527f3af8ea124c3a08dddc5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216465217 | pes2o/s2orc | v3-fos-license | The Impact of Organizational Behavior on Organizational Citizenship Behavior A Field study on BEMO Saudi Fransi Bank
The aim of this research is to study the three dimensions of organizational behavior (evaluation of the worker's personality, the worker's motives, and his interactivity within the work team) and the dimensions of organizational citizenship (altruism, general compliance, conscience awareness and civilized behavior) among Bemo Saudi Fransi Bank employees. A total of 163 questionnaires were distributed to employees who agreed to participate in the study, based on the approval and direction of the relevant departments in Bemo Saudi Fransi Bank, whose workforce numbers 690 employees. It was concluded that there is a statistically significant effect of the organizational behavior variables (evaluation of the worker's personality, the worker's motivations), but not of the variable (interactivity within the work team), on organizational citizenship behavior towards the bank's management (altruism), and a statistically significant effect of the organizational behavior variables (evaluation of the worker's personality, the worker's motivations, the worker's ability to interact within the work team) on organizational citizenship behavior towards the bank's management (general compliance, conscience awareness and civilized behavior). It was also found that there are fundamental differences attributable to the gender variable of the sample: males volunteer to carry out tasks that are not required of them but also take breaks they do not deserve and spend their work time without making an effort, while females help others who were absent from their work and seek accuracy in their work times. No fundamental differences were found in the variables of organizational behavior and organizational citizenship in relation to the academic level of the sample. It was recommended that employees be involved more in the management of the bank by setting up awareness sessions for them and delegating some administrative matters within the bank's management, opening means and channels of communication with management continuously and periodically, trying to support the sense of creativity and innovation among workers in Bemo Saudi Fransi Bank, and conducting training courses, training employees to improve the image of the bank's management and to perform the job with perfection and dedication.
Introduction
Organizational behavior has received a great deal of attention from researchers in human resources management across its various specializations, with a focus on the role of organizational behavior in the work environment. These researchers have shown that employees' awareness of organizational behavior greatly affects many organizational outcomes, including employees' attitudes toward the company and toward others, such as job satisfaction, intention to leave, organizational commitment, and behavior at work. Hence this study highlights the perception of organizational behavior and of organizational citizenship among employees of Bemo Saudi Fransi Bank. The descriptive analytical approach will be followed, collecting quantitative data and subjecting it to appropriate statistical analyses to answer the research hypotheses. Banks were chosen as the study community because of the ability to access the sample within the bank, taking into account the various departments it contains and its important and vital role in providing various services and dealing with many customer segments, especially during the crisis in the Syrian Arab Republic. Despite the abundance of studies and research in the fields of organizational behavior and organizational citizenship as independent variables, studies of the relationship between them are still limited, especially in the banking and administrative sector.
Previous Studies
Some studies have dealt with the concept of organizational behavior and its relationship to the awareness of organizational citizenship, so the researcher reviewed several of these studies, chose those that serve the research topic, summarized their most important results, and then identified what distinguishes the present study from them. Among the previous studies relied upon, AlA'wasa, S. (2018), "The Impact of Organizational Justice on the Counterproductive Work Behavior," International Journal of Business and Social Science, aimed to find the effect of organizational behavior on the realization of effective citizenship by examining both the employee's personality and his motives against the altruism and conscience awareness variables within a government service center in Jordan, by distributing a questionnaire to 340 employees. It was found that there was a statistically significant effect of the worker's personality on altruism and conscience awareness, and a significant effect of the worker's motives on altruism, while there was no effect of the worker's motives on conscience awareness.
A second study examined the effect of the worker's personality variable on altruism. The questionnaire was distributed to 210 non-academic employees within universities located in northern Jordan, and no such effect was found among non-academic employees in Jordanian universities, while an effect was found for the worker's inclinations and values on altruism, general compliance, and conscience awareness. A third study aimed to find the effect of the components of organizational behavior on job satisfaction and organizational citizenship. Questionnaires were distributed in 5 Islamic banks in Pakistan from the Gujranwala section, which includes 56 branches; 319 employees from 25 branches were surveyed, information was collected through a structured questionnaire containing 26 items, and a five-point Likert scale was used.
The study revealed an effect of the components of organizational behavior on job satisfaction and organizational citizenship, but the application of organizational citizenship and its components in Islamic banks in Pakistan is still lacking. Moreover, there is a large communication gap, as most managers were not aware of this concept.
Study Problem
Several previous studies have indicated that an employee's awareness of organizational behavior is one of the main criteria determining his inclinations toward the management of the bank in which he works. Organizational behavior focuses on employees' awareness of what they offer the organization and the benefits they receive; an imbalance in this awareness (such as a feeling of injustice, among other feelings) may therefore be reflected negatively in the level of organizational citizenship of workers toward the bank's management.
Thus, the study problem revolves around the following question: What is the effect of organizational behavior in its three dimensions (evaluation of the employee's personality, the worker's motivations, interactivity within the work team) on organizational citizenship toward the bank in its dimensions (altruism, general compliance, conscience awareness, and civilized behavior) among employees of Bemo Saudi Fransi Bank?
Study Importance
The importance of this study lies in its modern topic, which has become a central concern of modern organizations, and in the scarcity of studies that have dealt with its subject. This study can add a new vision of the importance of awareness of organizational citizenship and could be a starting point for further studies and research on organizational behavior and awareness of organizational citizenship. The researcher hopes that this study will enrich knowledge in this field and stand as a new addition to the Syrian and Arab library.
In practical terms, the importance of the study stems from the need to understand the reality of the practice of perceiving organizational citizenship in light of the availability of organizational behavior among the employees of Bemo Saudi Fransi Bank. It is also the first study of its kind in Bemo Saudi Fransi Bank.
This will help bank administrators understand the differences and behaviors in managing the bank in order to improve them and avoid their consequences in the future.
Study Objective
The study seeks to reveal the relationship between the three dimensions of organizational behavior (evaluation of the employee's personality, the worker's motivations, interactivity within the work team) and the dimensions of organizational citizenship (altruism, general compliance, conscience awareness, and civilized behavior) among Bemo Saudi Fransi employees.
Study Variables
The independent variable: organizational behavior, consisting of three dimensions: the worker's personality, the worker's motives, and the ability of workers to interact within the team. It was measured on a five-point Likert scale through a questionnaire consisting of 13 questions.
The dependent variable: organizational citizenship, measured on a five-point Likert scale through a questionnaire consisting of 20 questions.
Organizational citizenship was divided into four dimensions: altruism, general compliance, conscience awareness, and civilized behavior.
Study Sample
This study targets workers in Bemo Saudi Fransi Bank, which is considered one of the largest traditional private banks operating in the Syrian Arab Republic. With the approval and guidance of the relevant departments of Bemo Saudi Fransi Bank, 163 questionnaires were distributed to those who agreed to participate in the study, out of a total of 690 employees.
Study Tool
A questionnaire based on measures developed by other researchers for each individual dimension was used to test the study questions. The questionnaire was divided into two parts; the first section consists of demographic and functional questions.
Section Two:
The first axis is organizational behavior, and it consists of the following dimensions: 1. The first dimension assesses the personality of the worker and consists of 6 questions, adopted from the scale of (Akram, et al., 2016).
2. The second dimension covers the worker's motives and consists of 3 questions, adopted from the scale of (Özer, et al., 2017).
3. The third dimension is the worker's ability to interact within the team and consists of 4 questions, adopted from the scale of (Qureshi, et al., 2016).
The second axis is awareness of organizational citizenship, and it consists of the following dimensions: 4. The first dimension, altruism, consists of 5 questions adopted from the scale of (Ismail, et al., 2018). 5. The second dimension, general compliance, consists of 5 questions adopted from the scale of (Majeed, et al., 2018). 6. The third dimension, conscience awareness, consists of 5 questions adopted from the scale of (Podsakoff, et al., 1990). 7. The fourth dimension, civilized behavior, consists of 5 questions adopted from the scale of (Podsakoff, et al., 2000).
A five-point Likert scale was used to classify the sample responses as follows: 5 Strongly Agree, 4 Agree, 3 Neutral, 2 Disagree, 1 Strongly Disagree.
Study Hypotheses
Based on the above, the following hypotheses are adopted: H1, the first main hypothesis: there is a high level of organizational behavior among workers with respect to organizational citizenship in Bemo Saudi Fransi Bank. It is divided into the following sub-hypotheses: H1.1: There is no statistically significant effect of the three dimensions of organizational behavior (evaluation of the worker's personality, the worker's motivations, interactivity within the team) on the altruism dimension of organizational citizenship.
H1.2: There is no statistically significant effect of the three dimensions of organizational behavior (evaluation of the worker's personality, the worker's motivations, interactivity within the team) on the general compliance dimension of organizational citizenship. H1.3: There is no statistically significant effect of the three dimensions of organizational behavior (evaluation of the worker's personality, the worker's motivations, interactivity within the team) on the conscience awareness dimension of organizational citizenship. H1.4: There is no statistically significant effect of the three dimensions of organizational behavior (evaluation of the worker's personality, the worker's motivations, interactivity within the work team) on the civilized behavior dimension of organizational citizenship.
H2, the second main hypothesis: there are no fundamental differences in the awareness of organizational citizenship with regard to the demographic variables of the sample in Bemo Saudi Fransi Bank. It is divided into the following sub-hypotheses: H2.1: There are no significant differences in the perception of organizational citizenship with regard to the gender of the sample.
H2.2: There are no significant differences in the perception of organizational citizenship with regard to the academic level of the sample.
Organizational Behavior
We need an explanation of the behavior of the people with whom we work. Much effort may go into trying to understand others, and even ourselves: we may need to know the reasons leading to a behavior, and the reasons for continuing it or switching away from it. Moving to the field of business and the organizations in which we work, the need for superiors, colleagues, and subordinates to understand one another grows, because this understanding greatly affects the economic outcomes of work.
Organizational behavior is a field of knowledge interested in studying the impact of individuals, groups, and organizational structure on behavior within organizations, for the purpose of applying this knowledge to improve the interactivity of bank management.
Greenberg & Baron defined organizational behavior as an area interested in knowing all aspects of human behavior in organizations through the systematic study of individual, group, and organizational processes, the primary goal of this knowledge being to increase organizational effectiveness and the well-being of the individual.
Concept and Definition of Organizational Citizenship Behavior
In-depth studies of organizational citizenship behavior began within the broader understanding of social exchange theory. The concept can be viewed from a two-sided perspective: one side relates to the types of voluntary practices aimed at achieving the organizational dimension, while the other relates to the desire to help colleagues perform their roles and duties in order to achieve a better balance between the personal and organizational dimensions. Organizational citizenship behavior reflects advanced levels of professionalism and immersion in work, and the professional and ethical maturity that leads organizations and their employees to frame their practices with a highly ethical system: selflessness, altruism toward others, belonging, conscientiousness, active participation and initiative to develop organizational practices, and the provision of additional voluntary efforts away from formal and external control.
The concept of organizational citizenship is one of the administrative concepts recently produced by modern administrative thought, and it has attracted the interest of researchers and practitioners alike. The concept focuses on the human factor, one of the most important organizational resources of all and a basic pillar of development and progress in any society; it is well known that most developed countries reached their prosperity and development in various fields because of the attention they paid to their human resources.
Over the past decade, there has been increased interest in the concept of organizational citizenship behavior, which is discretionary and not officially linked to the incentive and performance appraisal systems of organizations. This is important for all organizations, since organizations that rely solely on formal behavior are fragile systems; organizations must leave room for discretionary individual behavior so that they can deal with unexpected situations that require innovative behavior from individuals.
Barnard's writings in 1938 on the real desires of individuals and their willingness to provide good services and actions were the real spark in analyzing the driving foundations of organizational behavior, upon which Katz later relied in 1964, when he identified three main types of driving foundations for organizational behavior (Katz, 1964). At the end of the 1970s, the term Organizational Citizenship Behavior (OCB) was introduced by Organ, describing automatic collaborative and innovative behaviors observed when he studied the relationships between job satisfaction and performance (Organ, 1977). His study showed that although there is no strong relationship between job satisfaction and productivity, job satisfaction is strongly associated with organizational citizenship behaviors, because the latter are less restrictive and less dependent on both the individual's ability and the technology employed at work compared to productivity. Since 1983, research and studies dealing with organizational citizenship behaviors have continued, both in theory and in practice.
Study Methodology
The study adopts a deductive approach: thinking proceeds from the general to the particular, converting the theory about the relationship between organizational behavior and organizational citizenship into specific, testable hypotheses. It also relies on the quantitative approach, using a questionnaire distributed in the field to the study sample. Within this framework, the literature on the research topic was first reviewed in order to define the concepts of organizational behavior and organizational citizenship to be studied; a questionnaire was then designed to collect data from the research sample, followed by analysis of that data and testing of the research hypotheses using appropriate statistical methods and programs. The study relies on two main sources of data: -Primary sources: the questionnaire is used to measure the independent and dependent variables, and it is distributed to the sample members.
-Secondary sources: relying on books and studies related to the subject through libraries and previous studies.
Statistical Analysis Methods
Descriptive statistics are used in this study, through means and standard deviations, in addition to the independent-samples t-test and simple and multiple linear regression analysis.
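To make the pipeline concrete, the following is a minimal sketch of how such an analysis is typically run; the column names (e.g. "personality_eval", "altruism") and the file name are hypothetical placeholders, not artifacts of this study:

```python
# Minimal sketch of the reported analysis pipeline (hypothetical column names).
import pandas as pd
import statsmodels.api as sm
from scipy import stats

df = pd.read_csv("survey_responses.csv")  # one row per respondent; Likert items averaged per dimension

# Multiple linear regression: a citizenship dimension on the three organizational behavior dimensions.
X = sm.add_constant(df[["personality_eval", "worker_motives", "team_interactivity"]])
model = sm.OLS(df["altruism"], X).fit()
print(model.summary())  # coefficients, R^2, adjusted R^2, p-values ("sig")

# Independent-samples t-test: altruism by gender.
male = df.loc[df["gender"] == "male", "altruism"]
female = df.loc[df["gender"] == "female", "altruism"]
t, p = stats.ttest_ind(male, female, equal_var=True)
print(f"t = {t:.2f}, p = {p:.3f}")
```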
Statistical Analysis of Study Data and Results
It is observed that, by gender, 75.5% of the sampled employees are female while 24.5% are male. By education, 68.7% hold university degrees, 23.3% hold postgraduate degrees, and 8% hold secondary education certificates or less, indicating that the great majority of the employees of Bemo Saudi Fransi Bank hold academic degrees.
The sample was also distributed by age: the majority of employees, 58% of the sample, are between 18 and 25 years old; about 24% are between 26 and 32 years old; and about 9% are above 32 years old.
Hypotheses Test
The first hypothesis: What is the effect of Bemo Saudi Fransi employees' awareness of organizational behavior (evaluation of the employee's personality, the worker's motives, the worker's interactivity within the work team) on organizational citizenship toward the bank's management (altruism)?
To test the first sub-hypothesis, a linear regression test was used to estimate the effect of awareness of the three dimensions of organizational behavior (evaluation of the employee's personality, the worker's motives, interactivity within the work team) on the altruism dimension toward the bank's management. The value of sig = 0.000, which is less than 0.05, indicating a significant effect of organizational behavior on the altruism dimension of organizational citizenship. The correlation coefficient is 78.4%, which is positive, i.e. there is a direct relationship between the variables; the coefficient of determination is 61.5%, indicating a strong relationship, and the adjusted coefficient of determination is 60.8%, meaning that 60.8% of the variation in the altruism dimension is explained by the organizational behavior variables.
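As a consistency check on the reported statistics here and in the following subsections, the adjusted coefficient of determination follows from R^2 via the standard small-sample correction; a minimal sketch, assuming n = 163 respondents and k = 3 predictors as stated in the study:

```python
# Verify the reported adjusted R^2 values from R^2, n respondents, k predictors.
def adjusted_r2(r2: float, n: int, k: int) -> float:
    """Standard correction: 1 - (1 - R^2) * (n - 1) / (n - k - 1)."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

n, k = 163, 3  # sample size and number of organizational behavior predictors
for label, r2 in [("altruism", 0.615), ("general compliance", 0.624), ("conscience awareness", 0.463)]:
    print(f"{label}: R^2 = {r2:.3f} -> adjusted R^2 = {adjusted_r2(r2, n, k):.3f}")
# Prints 0.608, 0.617 and 0.453, matching the reported 60.8%, 61.7% and ~45.2%.
```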
From this the linear regression equation can be derived, where X1 is the evaluation of the worker's personality and X2 the worker's motives: a one-unit increase in the evaluation of the worker's personality raises employees' altruism by 0.426, and a one-unit increase in the worker's motives raises it by 0.377, altruism being considered one of the dimensions of citizenship behaviors.
This effect can be seen in the work of Bemo Saudi Fransi Bank to establish the concept of organizational behavior among employees, which supports the citizenship behaviors they possess; the increase in personal relations between employees and management, however, has led to a decrease in the altruism variable among employees.
2. There is no statistically significant effect of the three dimensions of organizational behavior (evaluation of the employee's personality, the worker's motivations, interactivity within the team) on the organizational citizenship dimension of general compliance.
To test the second sub-hypothesis, a linear regression test was used to estimate the effect of Bemo Saudi Fransi employees' perception of organizational behavior (evaluation of the employee's personality, the worker's motives, the worker's interactivity within the work team) on the general compliance dimension. The value of sig = 0.000, which is less than 0.05, indicating a significant effect of organizational behavior on the general compliance dimension of organizational citizenship. The correlation coefficient is 79%, which is positive, i.e. there is a direct relationship between the variables; the coefficient of determination is 62.4%, indicating a strong relationship, and the adjusted coefficient of determination is 61.7%, meaning that 61.7% of the variation in the general compliance dimension is explained by the organizational behavior variables.
From this the linear regression equation can be calculated, which appears as follows: Y = 0.958 + 0.467 X1 - 0.249 X2 + 0.480 X3, where X1 is the evaluation of the worker's personality, X2 the worker's motivations, and X3 the worker's ability to interact within the team. Thus a one-unit increase in the evaluation of the worker's personality raises general compliance by 0.467, a one-unit increase in the worker's interactivity within the work team raises it by 0.480, and a one-unit increase in the worker's motives lowers it by 0.249; general compliance is considered one of the dimensions of citizenship behaviors. It can be clarified that exaggeration of the process of motivating workers led to a decrease in general compliance, given that there are many motives for which a worker works in the bank, regardless of his love for his work or for the bank itself.
3. There is no statistically significant effect of the three dimensions of organizational behavior (evaluation of the worker's personality, the worker's motives, interactivity within the team) on the organizational citizenship dimension of conscience awareness.
To test the third sub-hypothesis, a linear regression test was used to estimate the effect of Bemo Saudi Fransi employees' awareness of organizational behavior (evaluation of the employee's personality, the worker's motives, the worker's interactivity within the work team) on the conscience awareness dimension. The value of sig = 0.000, which is less than 0.05, indicating a significant effect of organizational behavior on the conscience awareness dimension of organizational citizenship. The correlation coefficient is 68%, which is positive, i.e. there is a direct relationship between the variables; the coefficient of determination is 46.3%, and the adjusted coefficient of determination is 45.2%, meaning that 45.2% of the variation in the conscience awareness dimension is explained by the organizational behavior variables.
From this the linear regression equation can be calculated, which appears as follows: Y = 1.239 + 0.562 X1 - 0.239 X2 + 0.347 X3, where X1 is the evaluation of the worker's personality, X2 the worker's motivations, and X3 the worker's ability to interact within the team. Thus a one-unit increase in the evaluation of the worker's personality raises conscience awareness by 0.562, a one-unit increase in the worker's ability to interact within the work team raises it by 0.347, and a one-unit increase in the worker's motives lowers it by 0.239; conscience awareness is considered one of the dimensions of citizenship behaviors.
It can be explained that exaggerated interactivity of the worker within the work team, and a lack of mutual respect between employees, led to a decrease in conscience awareness, owing to employees' conviction that they will not be held accountable because of the friendly relations between them. 4. There is no statistically significant effect of the three dimensions of organizational behavior (evaluation of the worker's personality, the worker's motivations, interactivity within the team) on the organizational citizenship dimension of civilized behavior.
To test the fourth sub-hypothesis, a linear regression test was used. The value of sig = 0.159, which is greater than 0.05, indicating that there is no significant effect of organizational behavior on the civilized behavior dimension of organizational citizenship.
The second hypothesis: 1. There are no significant differences in the perception of organizational citizenship with regard to the gender of the sample.
To test the first sub-hypothesis, an independent-samples t-test was used. The tables show no fundamental differences in the organizational behavior and organizational citizenship variables with regard to gender, the value of sig being greater than 0.05, except for the altruism variable, where sig was smaller than 0.05; this is evidence of fundamental differences attributable to the gender of the sample. The average of male answers to the altruism questions is 3.3, slightly greater than the average of female answers. This can be explained as follows: males volunteer to carry out tasks that are not required of them, yet take breaks they do not deserve, and do not spend their working time without making an effort, while females help colleagues who are absent from their work and are precise about their working hours; this is due to the different natures of males and females.
2. There are no significant differences in the awareness of organizational citizenship with regard to the academic level of the sample.
For the second sub-hypothesis, an independent-samples t-test was used. The tables show no fundamental differences in the organizational behavior and organizational citizenship variables with regard to the academic level of the sample.
Results
The following was concluded: 1. There is a statistically significant effect of the organizational behavior variables (evaluation of the employee's personality, the worker's motives), but no effect of the interactivity-within-the-work-team variable, on the altruism dimension of organizational citizenship toward the bank's management; this is similar to the majority of previous studies, such as (AlA'wasa, 2018), (Al-Ajlouni, 2018) and (Lawrence, 2018). There is also a statistically significant effect of the organizational behavior variables (evaluation of the worker's personality, the worker's motivations, the worker's interactivity within the work team) on the general compliance dimension of organizational citizenship toward the bank's management, which is similar to (Lawrence, 2018).
2. Establishing the concept of organizational behavior among employees supports the citizenship behaviors they possess, while the increase in personal relations between employees and management has led to a decrease in the altruism variable among employees. 3. There is a statistically significant effect of the organizational behavior variables (evaluation of the employee's personality, the worker's motives, the worker's interactivity within the work team) on the conscience awareness dimension of organizational citizenship toward the bank's management; this is similar to the majority of previous studies, such as (Al-Ajlouni, 2018) and (Lawrence, 2018), and it conflicts with (AlA'wasa, 2018). There is no statistically significant effect of the three dimensions of organizational behavior (evaluation of the worker's personality, the worker's motivations, interactivity within the work team) on the civilized behavior dimension of organizational citizenship toward the bank's management, which conflicts with (Lawrence, 2018). 4. Exaggeration of the process of motivating workers led to a decrease in general compliance, given the many motives for which a worker works in the bank, regardless of his love for his work or for the bank itself.
5. Exaggerated interactivity of the worker within the work team, and a lack of mutual respect between employees, led to a decrease in conscience awareness, owing to employees' conviction that they will not be held accountable because of the friendly relations between them.
6. There are fundamental differences attributable to the gender of the sample: males volunteer to carry out tasks that are not required of them, yet take breaks they do not deserve, and do not spend their working time without making an effort, while females help colleagues who are absent from their work and are careful about their work schedules.
7. There are no fundamental differences in the organizational behavior and organizational citizenship variables with regard to the academic level of the sample. There is no sense of creativity and innovation among employees in Bemo Saudi Fransi Bank, and the employees are not interested in improving the image of the bank's management or performing their work with perfection and dedication. 8. Males volunteer to do tasks that are not required of them, yet take breaks they do not deserve, and do not spend their working time without making an effort, while females help colleagues who are absent from their work and seek accuracy in their work hours; this is due to the different natures of males and females.
Recommendations
Based on the previous results, the following recommendations were reached: 1. Involve employees more in the management of the bank by setting up awareness sessions for them and assigning them some administrative matters within the bank's management, in addition to establishing and developing the concept of organizational citizenship among workers in particular, in order to improve work and contribute to improving the performance of the bank's management, by rewarding workers for voluntary behavior, holding honoring ceremonies for them, and considering citizenship behavior as one of the criteria for evaluating the performance of employees and workers.
2. Attempt to pay incentives to employees to increase the degree of organizational loyalty toward Bemo Saudi Fransi Bank; give workers the ability to challenge results issued by the administration in order to increase the degree of loyalty to the organization; open means and channels of communication with management continuously and periodically; try to support a sense of creativity and innovation among the employees of Bemo Saudi Fransi Bank; and conduct training courses to train workers to improve the image of the bank's management and perform their work with perfection and dedication. | 2020-04-02T09:28:46.202Z | 2020-03-30T00:00:00.000 | {
"year": 2020,
"sha1": "c36ae0ef59c4d963c4da85d266b2bd4632e9ce6e",
"oa_license": "CCBYNC",
"oa_url": "https://www.macrothink.org/journal/index.php/ijhrs/article/download/16776/13008",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "4135b3c976806349d0530866d6254a8a9a45fd7e",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Psychology"
]
} |
123540560 | pes2o/s2orc | v3-fos-license | Gamma ray bursts monitoring with the ARGO-YBJ experiment in scaler mode
We report on the search for Gamma Ray Bursts (GRBs) in the energy range 1-100 GeV in coincidence with the prompt emission detected by satellites, using the Astrophysical Radiation Ground-based Observatory at YangBaJing (ARGO-YBJ). With its large active surface (~6700 m2) and large field of view (≥2 sr), the ARGO-YBJ air shower detector is particularly suitable to detect unpredictable and short duration events such as GRBs. The search has been performed using the single particle technique, in time coincidence with satellite detections, both for single events and for the pile-up of all the GRBs in time and in phase. Between November 2004 and June 2010, 115 GRBs detected by different satellites (mainly Swift and Fermi) occurred within the field of view of ARGO-YBJ. For 94 of these we searched for a counterpart in the ARGO-YBJ data, finding no statistically significant emission. Search methods and results are discussed.
Introduction
The study of GRBs has been carried out mainly from space, detecting the primary photons. Due to the fast decrease of the spectrum, the operating energies are usually in the keV-MeV range; only EGRET in the past, and now Agile and the Fermi Gamma Ray Space Telescope, reached the GeV region, with maximum detectable energies of 30 and 300 GeV, respectively. From ground level, the search can be performed by means of large area extensive air shower detectors operating at high altitude, measuring the secondary particles generated by the interaction of the primary photons with the atmosphere nuclei. This search, which started many years ago (see for example O'Brian and Porter, 1976; Morello et al., 1984; Alexandreas et al., 1994; Castellina et al., 2001; Aglietta et al., 1996) as a particular way to use experiments designed for gamma ray astronomy, requires very stable and reliable detectors. Moreover, at lower energies the number of secondary particles reaching the ground, often only one, does not allow the measurement of the arrival direction, making an independent detection unfeasible.
Correspondence to: C. Vigorito (vigorito@to.infn.it)
Forty years after their discovery, and more than ten years after the detection of the first afterglow by BeppoSAX, the physical origin of the enigmatic GRBs is still under debate, allowing a great variety of very different models. In these conditions, and mainly in the >1 GeV energy region, any result could be of great importance to approach the solution of the GRB dilemma.
The sensitivity of ARGO-YBJ may reveal the spectral cutoff in an energy range only partially covered by the satelliteborn detectors and may put constraints on the emission models.
The detector
The ARGO-YBJ experiment is located at 4300 m a.s.l. (vertical atmospheric depth 606 g cm-2) at the YangBaJing Cosmic Ray Laboratory (30° 06' 38" N, 90° 31' 56" E, Tibet, P.R. of China). The detector is composed of a single layer of Resistive Plate Chambers (RPCs) operated in streamer mode (Aielli et al., 2006) and grouped in 153 units, called clusters, of area 5.7 × 7.6 m2 each. A cluster is made of 12 RPCs (1.225 × 2.850 m2), each read by 10 pads (55.6 × 61.8 cm2) representing the space and time pixels of the array. The clusters are organized in a central full coverage carpet (130 units, 5600 m2, 93% of active surface) enclosed by a guard ring (23 units), which allows the extension of the instrumented area up to 100 × 110 m2, the increase of the fiducial area and the improvement of the accuracy in the core position determination. The detector has two independent data acquisition systems corresponding to the shower and scaler operation modes. In shower mode the arrival time and location of each particle are recorded using the pads, allowing the detailed reconstruction of the shower lateral distribution and the arrival direction. In scaler mode the total counting rate of each cluster is integrated continuously in a Δt = 0.5 s window and recorded for 4 different multiplicity channels C≥i with i = 1, 2, 3, 4 (150 ns being the coincidence window). The corresponding measured rates are 40 kHz, 2 kHz, 300 Hz and 120 Hz, respectively.
Although this latter technique does not provide information about the energy and arrival direction of the primary cosmic ray, it allows a very low energy threshold of 1 GeV, overlapping the highest energy region directly investigated by satellite experiments. Moreover, the use of four different channels sensitive to different energies will provide, in case of positive detection, information on the high energy spectrum slope and possible cutoff (Aielli et al., 2009a).
Since for the GRB search in scaler mode the authentication is only given by the satellite detection, the stability of the detector has to be deeply investigated. Details of this study are widely discussed in Aielli et al. (2008), together with the determination of the effective area, upper limit calculation and expected sensitivity.
GRB monitoring in scaler mode
The present update collects data from November 2004 (corresponding to the Swift satellite launch) to June 2010, with a detector active area increasing from ~700 to ~6700 m2. During this period, a total of 115 GRBs, selected from the GCN Circulars Archive, was inside the ARGO-YBJ field of view (i.e. with zenith angle θ ≤ 45°, limited only by atmospheric absorption); for 94 of these gamma ray bursts ARGO-YBJ data were available, and they have been investigated by searching for a significant excess in the counting rates coincident with the satellite detection. In order to extract the maximum information from the data, two GRB analyses have been implemented: -search for a signal from every single GRB; -search for a signal from the pile-up of all GRBs (stacked analysis).
For both analyses, the first step is the data cleaning and check. For each event, the Poissonian behaviour of the counting rates for multiplicities ≥1, ≥2, ≥3, ≥4 for all the clusters is checked in a period of ±12 h around the GRB trigger time using the normalized fluctuation function:

f = (s - b) / σ    (1)

In this formula, s is the number of counts in a time interval of 10 s, b the number of counts in 10 s averaged over a time period of 100 s before and after the signal, and σ the standard deviation, with about 400 independent samples per distribution. The interval of 10 s has been chosen to avoid any systematic effect caused by environment and instrument (such as atmospheric pressure and detector temperature variations). The expected distribution of f is the standard normal function; all the clusters giving a distribution with measured σ > 1.2, or with anomalous excesses (i.e. number of entries > 2%) in the tail beyond 3σ, in at least one multiplicity channel are discarded. This guarantees that our data fulfill the requirements on stability and reliability of the detector. In the present running conditions the detector efficiency after the quality cuts is 92% and the dead time is ≤ 1%.
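A minimal sketch of this per-cluster quality check follows; it assumes a 1-D array of 10 s counts for one cluster and one multiplicity channel, and takes the Poisson expectation sqrt(b) for σ in Eq. (1), which is an assumption rather than a detail given in the text:

```python
# Sketch of the per-cluster quality check built on Eq. (1).
import numpy as np

def normalized_fluctuations(counts: np.ndarray, half_window: int = 5) -> np.ndarray:
    """f = (s - b)/sigma, with b the local average over 100 s (ten 10 s bins)."""
    f = []
    for i in range(half_window, len(counts) - half_window):
        s = counts[i]
        neighbours = np.concatenate([counts[i - half_window:i],
                                     counts[i + 1:i + 1 + half_window]])
        b = neighbours.mean()
        sigma = np.sqrt(b)  # Poisson expectation for the spread (assumption)
        f.append((s - b) / sigma)
    return np.asarray(f)

def cluster_passes(counts: np.ndarray) -> bool:
    """Reject clusters with sigma_f > 1.2 or > 2% of entries beyond 3 sigma."""
    f = normalized_fluctuations(counts)
    return f.std() <= 1.2 and np.mean(f > 3.0) <= 0.02
```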
Search for single GRBs
The counting rates of the clusters surviving our quality cuts are added up, and the normalized fluctuation function f = (s - b)/σ is again used to give the significance of the coincident on-source counts. In this formula, s is the total number of counts in the t90 time window given by the satellite detector and b the number of counts in a fixed time interval of 300 s before and after the signal, normalized to the t90 time.
Due to the correlation between the counting rates of different clusters (given by the air shower lateral distribution), the distributions of the sum of the counts are larger than Poissonian, and this must be taken into account in calculating the significance of a possible signal. The statistical significance of the on-source counts over the background is obtained again in an interval of ±12 h around the GRB trigger time, using Eq. (17) of Li and Ma (1983) (for more details see Aielli et al., 2008). All the results presented here are obtained using the single particle counting rate (C1 = C≥1 - C≥2), corresponding to the minimum primary energy in the ARGO-YBJ scaler mode. Figure 1 shows the distribution of the significances for the whole set of 94 GRBs.
No significant excess is found, 3.52 σ being the maximum significance obtained, with a chance probability of 2.1% taking into account the total number of GRBs analyzed.
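For reference, Eq. (17) of Li and Ma (1983) gives the significance of N_on counts observed on-source against N_off counts accumulated off-source; a minimal sketch, where alpha is the ratio of on- to off-source exposure and the example numbers are purely illustrative:

```python
# Li & Ma (1983), Eq. 17: significance of an on-source excess.
import numpy as np

def li_ma_significance(n_on: float, n_off: float, alpha: float) -> float:
    """alpha = t_on / t_off, the ratio of on- to off-source exposure."""
    term_on = n_on * np.log((1.0 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1.0 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))

# Example: a 10 s burst window against 600 s of background sampling.
print(li_ma_significance(n_on=406_000, n_off=24_300_000, alpha=10.0 / 600.0))
# ~1.6 sigma for this (hypothetical) mild excess over the scaled background.
```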
Fluence upper limits
With the lack of a positive signal, the fluence upper limits are obtained in the 1-100 GeV energy range adopting a power law spectrum and considering the maximum number of counts at 99% confidence level (c.l.), following Eq. (6) of Helene (1983). For this calculation, two different assumptions are used for the power law spectrum: (a) extrapolation from the keV-MeV energy region of the spectral index measured by the satellite experiments, when available; (b) a differential spectral index α = -2.5. Since the mean value of the spectral indexes measured by EGRET in the GeV energy region is α = -2.0 (Dingus et al., 1997), we expect the true upper limits to lie between these two values. For GRBs with known redshift, an exponential cutoff in the spectrum is considered to take into account the effects of extragalactic absorption, which is calculated using the values given in Kneiske et al. (2004). For the subset of 68 GRBs with known spectral index α, as measured by the satellite experiments, the fluence upper limits have been calculated according to hypothesis (a) and they are shown in Fig. 2 as a function of the zenith angle.
For the subset of 16 GRBs with known redshift, the fluence upper limits for the two assumed spectra are shown in Fig. 3. Since the measured low energy differential spectral indexes for these GRBs are always greater than -2.5, the higher upper limits refer to this extrapolation; for 3 GRBs the measured low energy spectrum is a Cutoff Power Law, and only the value obtained assuming α = -2.5 is shown. For the other GRBs, the rectangles indicate all the upper limits corresponding to differential spectral indexes ranging from the low energy measurement to the fixed value α = -2.5.
Energy cutoff
The cutoff energy of GRBs is actually unknown. The following procedure is developed in order to determine an upper limit to this energy, at least for some GRBs, exploiting the ARGO-YBJ scaler mode data. When using as the GRB spectrum the extrapolation of the index measured in the keV-MeV region by satellite experiments, the extrapolated fluence is plotted together with our fluence upper limit as a function of the cutoff energy E_cut. If the two curves cross within the quoted energy range, the energy at the intersection represents an upper limit to the cutoff energy. For these GRBs we can state that their spectra do not extend over the obtained E_cut upper limit, with a 99% c.l., if the spectral index measured by satellites keeps constant. Figure 4 shows the cutoff energy upper limits as a function of the spectral index for the 18 GRBs for which the intersection occurs in the quoted energy range. For 3 of these (red triangles in Fig. 4), the knowledge of the redshift allows the estimation of the extragalactic absorption. When the GRB redshift is unknown, a standard value z = 1 is adopted in the calculation.
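The crossing procedure can be illustrated numerically; the following is a minimal sketch assuming a power-law spectrum k E^alpha with an exponential cutoff, a hypothetical normalization and fluence upper limit, and a cutoff-independent limit (the real analysis folds in the energy-dependent effective area and the Helene counts limit):

```python
# Sketch of the E_cut upper-limit procedure: find where the extrapolated
# fluence (growing with E_cut) crosses the measured fluence upper limit.
import numpy as np
from scipy import integrate

ALPHA = -2.2  # differential spectral index from the keV-MeV fit (example)
K = 1e-5      # normalization at 1 GeV (hypothetical units)

def extrapolated_fluence(e_cut_gev: float) -> float:
    """Integral of E * K * E**ALPHA * exp(-E/E_cut) over 1-100 GeV."""
    integrand = lambda e: K * e**ALPHA * np.exp(-e / e_cut_gev) * e
    value, _ = integrate.quad(integrand, 1.0, 100.0)
    return value

fluence_ul = 2e-5  # 99% c.l. fluence upper limit (hypothetical)
grid = np.logspace(0, 2, 200)            # trial cutoff energies, 1-100 GeV
fluences = np.array([extrapolated_fluence(e) for e in grid])
crossing = grid[fluences >= fluence_ul]  # extrapolation exceeds the limit
if crossing.size:
    print(f"E_cut upper limit ~ {crossing[0]:.1f} GeV")
else:
    print("no crossing in 1-100 GeV; no E_cut constraint for this GRB")
```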
Pile-up of all GRBs - Stacked analysis
The search for cumulative effects by stacking all the GRBs, either in fixed time durations or in phases of t90, could enhance a possible signal, making it significant even if the emission of each GRB is below the sensitivity of the ARGO-YBJ detector. In this case, less information could be given with respect to the single GRB coincident detection, but we must consider that with the stacked analysis we increase our sensitivity by increasing the number of GRBs, while for the single GRB search we decrease our sensitivity because of the increasing number of trials. On this basis the analysis is performed supposing a common timing feature in all GRBs.
First, all the events during a time window Δt (with Δt = 0.5, 1, 2, 5, 10, 20, 50, 100, 200 s) after T0 (the low energy trigger time given by the satellites) are added up for all the GRBs. This is done in order to search for a possible cumulative high energy emission with a fixed duration after T0. A positive observation with a fixed Δt could be used as an alternative value to the standard t90 duration. Some indications of such a delayed high energy component have recently been shown for some GRBs by satellite measurements. The resulting overall significance of the GRBs stacked in time with respect to random fluctuations is -0.70σ.
A second search is done to test the hypothesis that the high energy emission occurs at a specific phase of the low energy burst, independently of the GRB duration. For this study, all the 79 GRBs with t90 ≥ 5 s (i.e. belonging to the subset of the "long GRB" population, commonly defined by t90 ≥ 2 s) have been added up in phase, scaling their durations. This choice has been made for both physical and technical reasons, adding up the counts for GRBs of the same class and long enough to allow a phase plot with 10 bins, given our time resolution of 0.5 s. There is no evidence of emission at a certain phase, and the overall significance of the GRBs stacked in phase (obtained adding up all the bins) with respect to background fluctuations is -0.89σ.
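A phase-stacking step of this kind can be sketched as follows; the sketch assumes, for each GRB, an array of background-subtracted 0.5 s counts covering its t90, and simply rebins each burst onto a common 10-bin phase axis before summing:

```python
# Sketch of stacking GRB light curves in phase (10 bins over each t90).
import numpy as np

N_PHASE_BINS = 10

def stack_in_phase(light_curves: list[np.ndarray]) -> np.ndarray:
    """Each input is a background-subtracted counts array spanning one t90."""
    stacked = np.zeros(N_PHASE_BINS)
    for counts in light_curves:
        phases = (np.arange(len(counts)) + 0.5) / len(counts)  # bin centres in [0, 1)
        bin_idx = np.minimum((phases * N_PHASE_BINS).astype(int), N_PHASE_BINS - 1)
        for i, c in zip(bin_idx, counts):
            stacked[i] += c
    return stacked
```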
Conclusions
In this paper we have reported a study concerning the search for GeV photons from 94 GRBs, carried out by the ARGO-YBJ air shower detector operated in scaler mode. In the search for GeV gamma rays in coincidence with the low energy GRBs detected by satellites, no evidence of emission was found for any event. The stacked search, both in time and phase, has shown no deviation from the statistical expectations, therefore excluding any integral effect.
The fluence upper limits obtained in the 1-100 GeV energy range depend on the zenith angle, time duration and spectral index, reaching values down to 10^-5 erg cm^-2. If we consider our sensitivity in terms of the expected number of positive detections, our estimate gives a rate between 0.2 and 1 per year, which is comparable with similar evaluations for other experiments working in different energy regions (e.g. Albert et al., 2007).
Finally, the capability of the detector shower mode (not discussed here) to measure the arrival direction and energy of individual showers above a few hundred GeV allows the ARGO-YBJ experiment to study GRBs in the whole 1 GeV - 1 TeV range. A detailed discussion of the methods and the first published results can be found in Aielli et al. (2009b).
Fig. 1. Distribution of the statistical significances of the set of 94 GRBs with respect to background fluctuations, compared with a Gaussian fit.
Fig. 2. Fluence upper limits for GRBs with known spectral index as a function of zenith angle. Red triangles for GRBs with known redshift; black dots when z = 1 is assumed.
Fig. 3. Fluence upper limits of GRBs as a function of redshift. The rectangles represent the values obtained with differential indexes ranging from the low energy measurement up to α = -2.5; the three arrows are the upper limits for this latter case only (GRBs with Cutoff Power Law spectrum; see text for details). The red point shows the integral fluence extrapolated in our sensitivity range for GRB090902B, observed by the LAT instrument on board the Fermi satellite.
Fig. 4. Cutoff energy upper limits of GRBs as a function of spectral index, obtained extrapolating the measured keV-MeV spectra. The values represented by the red triangles are obtained taking into account the extragalactic absorption for the 3 GRBs with known redshift; for the others z = 1 is adopted. | 2018-12-21T03:28:23.062Z | 2011-07-04T00:00:00.000 | {
"year": 2011,
"sha1": "1b80b35e0fa6ab92fe5230268f4b1c31637ee26a",
"oa_license": "CCBY",
"oa_url": "http://www.astrophys-space-sci-trans.net/7/239/2011/astra-7-239-2011.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "1b80b35e0fa6ab92fe5230268f4b1c31637ee26a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
7117096 | pes2o/s2orc | v3-fos-license | The enhancer of the immunoglobulin heavy chain locus is flanked by presumptive chromosomal loop anchorage elements.
We have located presumptive chromosomal loop anchorage elements within the mouse heavy chain immunoglobulin locus. Analysis of 31 kilobases spanning diversity, joining, enhancer, switch, and the mu and delta constant regions reveals that only a single 1-kilobase segment exhibits specific binding to nuclear matrices. It is of particular significance that the transcriptional enhancer element resides within this matrix association region (MAR). Fine structure mapping indicates that binding is mediated by A+T-rich approximately 350-base pair segments that reside on either side of the enhancer. The MAR sequences residing 5' of the enhancer contain topoisomerase II consensus sequences like the MAR located upstream of the kappa light chain gene enhancer. The heavy chain gene MARs, however, exhibit a lower affinity for matrix association compared to the kappa gene MAR. Significantly, the juxtaposition of enhancer elements with MARs appears to be evolutionarily conserved within the immunoglobulin genes, suggesting that MARs may act as positive and/or negative regulators of enhancer function.
DNA within interphase nuclei and mitotic chromosomes is organized into topologically constrained looped domains of about 10-100 kilobases in length (1-4). Given the chemical complexity of the mammalian genome, one can estimate that roughly 10^5 chromosomal loop anchorage sites would exist in a single diploid nucleus. With these considerations in mind, a number of interesting questions can be entertained. Do anchorage sites punctuate gene clusters into functionally distinct chromatin domains? Do such sites target genes in the nucleus to specific compartments? Are common anchorage elements shared by functionally diverse genes or only by related gene families? Finally, are different regions of a given gene anchored depending on transcription, replication, development, or tissue type?
To search for presumptive chromosomal loop anchorage elements, in an earlier study we developed an in vitro DNA binding assay that localizes matrix association regions (MARs)' within cloned genes (5). This approach can be complemented by the nuclear halo mapping procedure of Laemmli * This research was supported by Grants GM22201, GM29935, and GM31689 from the National Institutes of Health and by Grant 1-823 from the Robert A. Welch Foundation. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
$ To whom correspondence should be addressed.
The abbreviations used are: MAR or M, matrix association region; regions of immunoglobulin genes: D, diversity; V, variable; J, joining; S, switch; C, constant; H, heavy chain; E, enhancer; Top0 11, topoisomerase II; LIS, lithium diiodosalicylate; SDS, sodium dodecyl sulfate; CTAB, cetyltrimethylammonium bromide; kb or bp, kilobase or base pairs; SAR, scaffold attached region. and co-workers (6), which employs nuclear fractionation of endogeneous sequences to identify scaffold attached regions (SARs) (7). A limited comparison between these two procedures reveals that SARs appear to be strictly analogous to MARs ((5, 7 ) vide infra). Interestingly, SARs are located nonrandomly and at specific sites adjacent to a series of functionally distinct Drosophila class I1 genes (6-9). Available evidence suggests that the positions of these contact points appear not to vary with gene expression, but technical limitations preclude a definitive conclusion that anchorage is constitutive (7). Furthermore, studies on the K light chain immunoglobulin gene in mouse cells reveal a MAR adjacent to the transcriptional enhancer element that is in the same position both before and after recombination and gene expression (5). In addition, certain SARs of Drosophila genes also appear to be located near enhancer-like elements (7). Significantly, Drosophila gene SARs share a number of features in common with the mouse K gene MAR, including overall A+T richness, the presence of topoisomerase 11 consensus sequences, short clusters of certain characteristic A+T-rich sequences, and the ability to compete for abundant binding sites in mouse nuclear matrix preparations (5-9). The presence of topoisomerase I1 sites is interesting, considering that this protein is the major component of the mitotic chromosome scaffold (10-12) and is recovered in high yield in certain nuclear matrix preparations (13). Whether topoisomerase I1 is responsible for anchoring these sequences at the base of chromosomal loops in the interphase nucleus in uiuo, however, remains to be demonstrated.
The juxtaposition of the kappa gene enhancer with a MAR that contains topoisomerase II sites suggests that a functional relationship may exist between transcriptional enhancement and DNA swiveling at the base of chromosomal loops (5). Consistent with this view is the apparent in vivo localization of topoisomerase II adjacent to the SV40 enhancer (14) and the in vitro activation of dynamic supercoiling of the Xenopus 5 S gene by trans-acting factors (15). In the present study we address these issues further by an analysis of the mouse heavy chain immunoglobulin locus. Previous studies have identified a powerful tissue-specific enhancer (EH) in the intron between the joining (JH) and switch recombination sequences (Smu), just upstream of the mu constant region exons (Cmu) (16, 17) (for a review of immunoglobulin genes, see Ref. 18). We show here that this enhancer is flanked on both sides by presumptive chromosomal loop anchorage elements. The 5' MAR appears similar to the kappa gene MAR as it contains topoisomerase II sites. Therefore, the heavy and light chain mouse immunoglobulin genes have a strikingly similar DNA sequence organization with respect to MARs and enhancers.
RESULTS AND DISCUSSION
Portions of this paper (including "Experimental Procedures,"
Regions within the Heavy Chain Locus
"year": 1987,
"sha1": "66ae41e9e364e4165628ef04395e1b529eacae99",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/s0021-9258(18)61200-1",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "3c6ac50033296ca603416147194b1287b4401591",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
340224 | pes2o/s2orc | v3-fos-license | Elemental gesture dynamics are encoded by song premotor cortical neurons
Quantitative biomechanical models can identify control parameters used during movements, and movement parameters encoded by premotor neurons. We fit a mathematical dynamical systems model including subsyringeal pressure, syringeal biomechanics, and upper vocal tract filtering to the songs of zebra finches. This reduced the dimensionality of singing dynamics, described as trajectories in pressure-tension space (motor “gestures”). We assessed model performance by characterizing the auditory response "replay" of song premotor HVC neurons to presentation of song variants in sleeping birds, and by examining HVC activity in singing birds. HVC projection neurons were excited and interneurons were suppressed with near-zero time lag, at times of gesture trajectory extrema. Thus, HVC precisely encodes vocal motor output via the timing of extreme points of movement trajectories. We propose that the sequential activity of HVC neurons represents the sequence of gestures in song as a “forward” model making predictions on expected behavior to evaluate feedback.
We assessed predictions of the biomechanical model by taking advantage of a neuronal replay phenomenon [4][5][6] . Neurons in the nucleus HVC, a secondary motor/association cortical structure which is the most central structure known to be essential for singing, emit precise premotor activity when a bird sings [5][6][7] , and have responses that are very similar in timing and structure 6 that are highly selective for the bird's own song (BOS) when a bird listens to playback of song 8,9 . In zebra finches, there is a striking state-dependent neuronal replay phenomenon 4 associated with song learning 10 , so that the strongest and most selective auditory responses are recorded in sleeping birds. We used responses to song in sleeping adult zebra finches as a proxy for evaluating the structure of singing, and then tested emerging hypotheses in singing birds.
Validating a song model: static parameters
The avian vocal organ is a nonlinear device [11-13] capable of generating complex sounds even when driven by simple instructions [14,15]. We extended a low dimensional model of the avian syrinx and vocal tract that can capture a variety of acoustic features, like the precise relationship between fundamental frequency and spectral content of zebra finch song [16,17]. The model used here is summarized in Fig. 1. A two-dimensional set of equations describes the labial dynamics (see Methods) (Fig. 1, x(t), red trace). Flow fluctuations are fed into a vocal tract, generating an input sound Pi(t) (green trace). The tract filters the sound and is characterized as a trachea, modeled by a tube, which connects to the oro-esophageal cavity (OEC), here modeled as a Helmholtz resonator [18] (see Methods). The output of the model is a time trace representing the uttered sound (Pout(t)) (blue trace).
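To make the structure of such a source model concrete, the following is a minimal sketch of a two-dimensional labial oscillator of the family used in this line of work, driven by a pressure-like parameter alpha(t) and a tension-like parameter beta(t); the specific functional form, time-scale constant, and parameter values here are illustrative assumptions, not the fitted model of the paper:

```python
# Minimal sketch of a two-dimensional labial-dynamics source model driven by
# pressure-like (alpha) and tension-like (beta) control parameters.
import numpy as np
from scipy.integrate import solve_ivp

GAMMA = 24000.0  # time-scale constant (1/s); illustrative value

def rhs(t, state, alpha, beta):
    """One normal-form oscillator of the kind used for syringeal labia."""
    x, y = state
    dx = y
    dy = (alpha * GAMMA**2 + beta * GAMMA**2 * x - GAMMA**2 * x**3
          - GAMMA * x**2 * y + GAMMA**2 * x**2 - GAMMA * x * y)
    return [dx, dy]

# Example: hold the control parameters fixed for 50 ms and record the
# labial position x(t); it oscillates only if (alpha, beta) lies in the
# phonation region of the parameter space.
sol = solve_ivp(rhs, (0.0, 0.05), [0.01, 0.0], args=(0.05, -0.15),
                max_step=1e-5, dense_output=True)
t = np.linspace(0, 0.05, 2205)  # 44.1 kHz sampling
x_t = sol.sol(t)[0]
```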
Using this model, we created synthetic versions of the songs our test birds sang. Time dependent parameters of the model describing the labial dynamics were reconstructed to account for the time dependent acoustic properties of the sound (see Methods). Following [3,16,17], for each bird's song we used an algorithmic procedure to reconstruct unique functions for the air sac pressure (α(t)) and the tension of the syringeal labia (β(t)). The result of the procedure for one song is illustrated in Fig. 2, showing that many features observed in the spectrograph of the recorded song (Fig. 2a) were also apparent in the synthesized song (Fig. 2b). Relatively simple time traces of reconstructed pressure and tension arose from fitting the bird's song (Fig. 2c). These two functions drove the nonlinear equations for the labia to produce a wide range of diverse acoustic features. The parameter space of pressure vs. tension was organized by bifurcation curves (Fig. 2d, black lines), i.e. curves in the parameter space that separated regions where the model presented qualitatively different dynamics (sound patterns). Only one region (Fig. 2d, gray region) corresponded to oscillatory behavior, i.e. labial oscillations resulting in sound pressure fluctuations. Two features of the pressure-tension trajectories resulting in sound output were apparent (Fig. 2d). One, most of the control parameters were maintained close to bifurcation curves, facilitating rapid changes in the quality of sound output with small changes in parameter values. Two, there were many sounds that were characterized principally by movements in pressure or tension but not both. Motor "gestures" were defined as continuous trajectory segments in the reconstructed pressure and tension functions (Fig. 2c). Gestures include movements that do not result in phonation, such as pressure patterns associated with mini-breaths between syllables [19], but our recordings here were limited to airborne sounds. In a sample of 8 modeled songs, there were 13±4 gestures per motif (the largest basic unit of song, a repeated sequence of syllables). The distribution of gesture durations (mode = 22.5±2.5 ms, range 4-142 ms) was non-Gaussian, with 33% of the gestures ≤ 30 ms, and a long tail corresponding to slowly varying sounds such as constant frequency harmonic stacks (Fig. 2e).
This simple model captured essential features of sound production in a framework of labial tension and subsyringeal pressure over which birds have direct motor control [20][21][22]. Whereas the actual syrinx has considerable additional complexity, the model provided for substantial dimensionality reduction. This allowed us to capture a wide range of acoustic features in a small set of time dependent parameters. We tested the model by comparing responses of HVC neurons to the broadcast of the modeled song (mBOS) and BOS in sleeping birds (Fig. 3). Responses to a grid of mBOS stimuli with identical timing but different spectra from BOS identified optimal estimates for the two remaining free static parameters (Supplementary Fig. 1). In sleeping birds, song system neurons are exceptionally selective and it was far from trivial to induce a response: for example, mBOS generated without the OEC component failed to elicit a response. In a case where we mis-estimated the duration of a component of BOS by 5 ms, a neuron responded strongly to BOS but not at all to the synthetic song (Supplementary Fig. 2b). Over a population of 30 neurons, the best mBOS elicited 58%±8% of the response to BOS (Supplementary Note 1). Both phasic projection neurons (HVC (p)) (N=15) and tonic interneurons (HVC (i)) (N=15) responded selectively to mBOS over non-BOS stimuli (Supplementary Note 1). These results show that a low dimensional model representing an approximation of peripheral mechanics is sufficient to capture behaviorally relevant features of song.
Projection neurons burst at gesture extrema
We then evaluated the activity of HVC neurons relative to model dynamics, analyzing the timing of spike bursting relative to the pressure-tension trajectories used to synthesize mBOS. This identified a compelling relation between the timing of HVC (p) spikes and the pressure-tension trajectories. For example, in Fig. 4a the spiking of two neurons (coded with different colors) is shown relative to the BOS spectrograph, oscillograph and reconstructed pressure and tension time series. One neuron bursts once, at the transition between descending frequency modulations and a constant frequency "high note". The other neuron bursts twice, once when the pressure during a high note reached a maximum, the other time at the transition between a high frequency chevron and a broadband frequency modulated sound. Similar relations between spike burst timing and gestures were observed for 14 of the 15 HVC (p) (Supplementary Figs 2 and 3). In one case, a neuron emitted bursts in the interval between syllables. We hypothesize this pattern might arise if the bursts are associated with mini-breaths during singing 19. Only the 17 bursts occurring during phonation were considered for further analysis.
Examining the responses of the HVC (p) on pressure vs. tension plots demonstrated that neurons burst preferentially at gesture trajectory extrema (GTE) associated with gestures (Fig. 4b). A gesture has at least two GTE, at its beginning and end, and up to two additional GTE, if the absolute maxima of pressure and/or tension represent unique and distinct time points. No additional GTE result in cases where the absolute maximum is not distinct in time, e.g., multiple local maxima with the same magnitude. Of the 17 bursts (14 HVC (p)), 11 (65%) were aligned with onsets/offsets, and 6 (35%) were aligned with pressure or tension maxima. In a sample of 5 songs, there were 28±4 GTE per song (165 total GTE). From a total of 60 gestures, 20 (33.4%) had only onset and offset GTEs; 30 (50%) had in addition a unique peak in pressure (3 GTEs per gesture); 5 (8.3%) had in addition a unique peak in tension (3 GTEs per gesture); and 5 (8.3%) had in addition unique peaks in both pressure and tension. The distribution of time intervals between successive GTE (mode = 9 ± 1 ms, range 4-116 ms) was non-Gaussian, with 66% of the intervals ≤ 30 ms (Fig. 4c). This is graphically emphasized with tick marks showing all GTEs in Fig. 4a and Supplementary Figs 2 and 3. Most gestures corresponded to notes (the smallest unit of song organization recognized by ornithologists), yet motor activity at GTE maxima could subdivide notes, for example where a neuron burst and the pressure reached a maximum in the middle of a constant frequency harmonic stack (Supplementary Fig. 2). These examples highlight that for some HVC (p) the patterns of activity would not be interpretable with a purely spectrographic analysis of song 5. We also observed cases where HVC (p) burst at the onset of relatively pure pressure-only or tension-only trajectories, with a preponderance of pressure-only trajectories (Fig. 2d). If such neurons project to distinct regions of HVC's downstream targets, which are organized based on the syringeal muscles and interactions with the respiratory system, this observation could help resolve the long-standing riddle of HVC's topographic organization.
To quantify these observations, we calculated the time between each spike in each burst and the closest GTE, for all 17 bursts. The resulting distribution was approximately Gaussian, with bursts on average preceding the closest GTE (mean = -5.6 ± 0.3 ms, σ = 6.7 ± 0.3 ms; Fig. 4d). A bootstrap procedure (Supplementary Note 2) confirmed that the correspondence to the closest GTE was statistically significant (F test, P<0.045). This indicates that the timing of HVC (p) bursts is associated with the timing of GTE. Given a minimal delay between activity of HVC (p) and sound production estimated at 25-50 ms 23, the minimal 15 ms delay for auditory feedback to HVC 8, and the fact that the duration of intervals between GTE varied greatly (Fig. 4c), it is remarkable that the timing of HVC (p) bursting was synchronized with near-zero time lag to a model of actual behavioral output.
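The core of this analysis is easy to state computationally. The following Python sketch computes spike-to-closest-GTE offsets and a bootstrap null distribution; the uniform random repositioning of GTEs within the motif is our assumption about the procedure in Supplementary Note 2, which we could not consult, and the function names are illustrative.

    import numpy as np

    def spike_gte_offsets(spike_times, gte_times):
        """Signed interval from each burst spike to its closest GTE
        (negative values mean the spike precedes the GTE)."""
        gte = np.asarray(gte_times, float)
        spikes = np.asarray(spike_times, float)
        nearest = gte[np.argmin(np.abs(spikes[:, None] - gte[None, :]), axis=1)]
        return spikes - nearest

    def bootstrap_offset_sd(spike_times, n_gte, motif_span, n_boot=10000, seed=0):
        """Null distribution of offset spread, with GTEs repositioned
        uniformly at random within the motif on each iteration (assumed
        null model; requires at least two spikes)."""
        rng = np.random.default_rng(seed)
        t0, t1 = motif_span
        sds = np.empty(n_boot)
        for i in range(n_boot):
            fake_gte = np.sort(rng.uniform(t0, t1, size=n_gte))
            sds[i] = spike_gte_offsets(spike_times, fake_gte).std(ddof=1)
        return sds  # compare the observed s.d. against this null (e.g., F test)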
Interneurons are suppressed at GTE
We also noted a relation between the minima in the activity of HVC (i) and the timing of GTE. To characterize this, for each interneuron we binned the spikes in 10 ms windows for each acoustic presentation. The resultant average response traces were smoothed and the minima in the smoothed traces were identified (see Methods). For an example neuron, the average response is shown in green, the superimposed smoothed curve in black, and the minima as red dots (Fig. 5a, bottom panel). An individual HVC (i) did not have minima at all GTE, but across all neurons we observed a close alignment between the times of the minima and the times of GTE. (A non-significant relation was observed for maxima of HVC (i) activity; Supplementary Fig. 4.) Computing the differences between the time of each minimum that occurred during phonation and the closest GTE resulted in a distribution that was approximately Gaussian (mean = -0.82 ms ± 0.60 ms, σ = 7.3 ± 1.4 ms; Fig. 5b). We compared this distribution to the distribution of randomly positioned minima within each motif using the bootstrap procedure and found them to be significantly different (F test, P<0.016, Supplementary Note 2). Additional tests identified marginally significant locking to GTE for one of four birds (Supplementary Note 3). Thus, the precise activity of HVC (i) can help shape the timing of HVC (p). This suggests a simple model where bursts of activity of HVC (p) suppress activity in HVC (i), whose ongoing activity helps shape the next HVC (p) burst.
A representation of gestures during singing
Given that our results were obtained by broadcasting songs to sleeping birds, it is natural to inquire whether during singing the activity of HVC neurons is also locked to gesture transitions. Previous results have demonstrated tight temporal locking comparing daytime singing activity and auditory-driven responses during sleep of single RA neurons in zebra finches 4, and HVC neurons in awake swamp sparrows and Bengalese finches that respond to auditory stimulation 6, but similar observations have yet to be reported for zebra finch HVC neurons. We made recordings from HVC in singing birds (N = 3 birds), including 5 phasic neurons bursting during phonation (recorded in two of the three birds; Fig. 6, Supplementary Fig. 5), one of which had two bursts per motif, and 10 tonic neurons. We confirmed that during singing, all sparse bursts of HVC (p) occurred at gesture transitions (Fig. 4e). Following the same analysis as in sleeping birds (but here, since each motif of song could vary, each was independently modeled), we observed for singing birds even more precise timing of HVC (p) than was observed during sleeping (cf. Fig. 4d, e). The Gaussian fit for the population of phasic neurons recorded during singing (mean = -1.35 ms ± 0.10 ms, σ = 4.0 ± 0.1 ms; Fig. 4e) was significantly different from the bootstrapped random distribution (F test, P<0.025, see Supplementary Note 2 and Supplementary Fig. 6). The minima of tonic neuron activity recorded during singing also showed precise timing relative to GTEs (Gaussian fit for the minima: mean = -0.12 ms ± 0.4 ms, σ = 4.0 ± 0.4 ms; Fig. 5c), and this was significantly different from the bootstrapped random distribution (F test, P<0.002). Additional analyses demonstrated significant locking of minima to GTE in two of three singing birds (Supplementary Note 3). As for sleeping birds, the maxima of tonic neural activity showed no evidence of significant locking to the GTEs (Supplementary Fig. 4c). Finally, examining the data from a prior study of zebra finches 24, we observed that during singing the timing of HVC (RA) bursts was closely associated with the timing of HVC (X) bursts (Supplementary Fig. 7). In light of our results, this supports the hypothesis that all classes of HVC neurons are active in relation to the timing of gestures, although the multiple subtypes of HVC (RA), HVC (X), and HVC (i) have yet to be evaluated.
Previously it was concluded that the timing of song syllables was unrelated to the timing of HVC (p) discharge 5,24 in singing birds. Given the sparse bursting of these cells, this led to the idea that the output of HVC had a clock-like timing function with a nearly uniform "tick" size of approximately 10 ms 23, supported by a "syn-fire" chain of synaptic activity across HVC (p) 5. Instead we find that the bursting of HVC (p) and the modulation of HVC (i) activity are timed to significant instances of motor gestures. The sequential firing across the population of HVC (p) unfolds in an ordered fashion 5, but time is not explicitly represented in HVC. Instead, the statistics of HVC activity are closely tied to syringeal/vocal tract mechanics. Given the broad distribution of times between GTE, HVC activity that is synchronized with GTEs is inconsistent with a syn-fire network that is active at every moment. The distinction between these two models of HVC has additional broad implications for the functional organization of the song system, for song learning, and for motor coding.
Since gestures vary greatly in duration, and RA only has access to the times of GTE, downstream components of the motor pathway (RA and presumably brainstem) should generate independent dynamical information to sustain the detailed structure within each gesture (cf. 23,25). Previous experimental results, including the effects of electrical stimulation of HVC or RA during singing 26 and lesions of nuclei afferent to HVC 27, implicate information in HVC encoding larger units of song. This might arise if some gestures or transitions are over-emphasized in HVC relative to others. Finally, gestures are learned, which is consistent with the physiological properties of HVC neurons: integration over hundreds of milliseconds and multiple syllables, non-linear summation over syllables in a sequence preceding the excitatory response, and selective response to BOS 4,8,9,28,29,30. The information about groupings of gestures such as syllables can be carried in these integrated signals. This also re-emphasizes that synaptic modification in HVC, not just changes at HVC-RA synapses, is associated with feedback-mediated sensorimotor learning (cf. 23). HVC also projects to the cortico-basal ganglia pathway, which contributes to learning-mediated synaptic modification in RA by introducing variance into song output 31,32. This suggests the hypothesis that the variance is structured not in an auditory framework but around specific features of song motor gestures.
A forward model for vocomotor control
If activity in HVC is in synchrony, with little time lag, with motor gestures occurring at the periphery, this would tend to bring it into temporal register with fixed (circa 15 ms) delayed auditory 33, proprioceptive 20, or brainstem 34 feedback. This allows movements to be represented in HVC by gestures of greatly varying duration (with dynamics principally generated through internal HVC interactions) while each gesture is referenced to a common time framework for evaluating feedback (with feedback arriving through distinct, extrinsic inputs). This suggests that projection neurons represent a prediction about the actual behavioral output at that moment in time, constituting an unexpected form of a "forward" or predictive model to resolve the problem of delay in sensorimotor control 35. Assuming that behavior is subdivided into gestures, and only the transitions (GTE) are represented by HVC output (HVC (p)), then the intervals between the transitions could accumulate feedback information by modifying the tonic activity of HVC (i) and subsequently the spike bursting of HVC (p). Indeed, HVC receives multiple sources of feedback, including input from the primary motor cortex RA 36, thalamic input carrying brainstem respiratory, auditory, and proprioceptive information 21,34,37, and forebrain auditory input 38.
We have described song organization based on gestures, taking advantage of the dynamical systems modeling framework to go beyond spectrographs. These features of motor systems organization could obtain generally 39. Our data support Sherrington's longstanding hypothesis that the motor cortex is a synthetic organ, representing segments of whole movements 1,40. In humans, the production of speech and the performance of athletes and musicians are exceptional examples of highly precise learned skilled behavior that could share mechanisms with those described here. Developing corresponding models for human speech production should help inform speech and language pathologies where sequential behavior is disrupted.
Subjects, songs, and surgeries
All procedures were in accordance with a protocol approved by the University of Chicago Institutional Animal Care and Use Committee. Songs were recorded from 12 birds and electrophysiology was conducted on 9 adult male zebra finches (Taeniopygia guttata) bred in our colony. Birds were prepared for recordings with surgeries using standard techniques to implant a head pin (for auditory experiments) 10 or a motorized microdrive (for singing experiments) 5. For auditory experiments, adults were maintained on a 16/8 reversed light cycle in sound isolation boxes. Songs were recorded and filtered using custom software (SABER, A.S. Dave) then edited (Praat, P. Boersma and D. Weenink, www.praat.org). Edited songs included two or three repetitions of one motif, and were typically 2-4 s in duration. Birds were allowed to recover for 2 or 3 days before the first day of recordings, and rested for at least 2 days between recording sessions.
Electrophysiology, stimulus presentation, and spike analysis
HVC extracellular recordings were performed in head-fixed sleeping or singing tethered birds. Recordings were post-processed with a spike-sorting algorithm (Klusters, L. Hazan, klusters.sourceforge.net, and custom software written by C.D. Meliza) to separate the times of spike events for each unit. For experiments in singing birds, all well-isolated neurons are reported. For auditory experiments, only BOS responsive neurons were recorded. The auditory stimuli were presented randomly with an interstimulus interval of 7±1 s. The neural response to each song is quantified in terms of the Z score 25: Z = (μ_S − μ_BG) / σ_(S−BG), where μ_S is the mean response during the auditory stimulus (S) and μ_BG is the mean response during background activity (BG). The denominator of the equation is the standard deviation of (S − BG). The background was estimated by averaging the firing rate during a 2 s period. The Z scores of the mBOS, CON, and REV were normalized to the BOS Z score, and averages across neurons were reported as mean of normalized responses ± s.e.m. For interneurons, the strength of the response varied across the motifs 42. We picked the last (second or third) motif, which gave the strongest response, to analyze the timing of spikes relative to GTE. This minimized false peaks and troughs in the response profiles. In singing birds, interneurons fired reliably for each motif and all motifs were incorporated into the analysis. The average response of each interneuron (1 ms resolution) was smoothed using a Savitzky-Golay filter (polynomial local regression 41) and the minima were identified using a 21-point sliding window.
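As an illustration, the Z score and the interneuron minima detection translate into a few lines of Python (NumPy/SciPy). This is a sketch of the computations as described above; the Savitzky-Golay polynomial order is not stated in the text and is an assumption.

    import numpy as np
    from scipy.signal import savgol_filter, argrelmin

    def z_score(stim_rates, bg_rates):
        """Z = (mu_S - mu_BG) / sd(S - BG), computed over repeated
        presentations of the same stimulus."""
        s = np.asarray(stim_rates, float)
        bg = np.asarray(bg_rates, float)
        return (s.mean() - bg.mean()) / np.std(s - bg, ddof=1)

    def interneuron_minima(mean_response_1ms, window=21, polyorder=3):
        """Smooth the 1-ms-resolution average response with a Savitzky-Golay
        filter and locate local minima within a 21-point sliding window."""
        smoothed = savgol_filter(mean_response_1ms, window_length=window,
                                 polyorder=polyorder)
        minima = argrelmin(smoothed, order=window // 2)[0]
        return smoothed, minima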
Reconstruction of motor gestures
We assumed flow induced oscillations of opposing labia as a sound source model for birdsong production 14. This model assumes that for high enough airflow values, the labia start to oscillate with a wavelike motion. Assuming two basic modes are active (a flapping-like motion and a lateral displacement of the tissues, appropriately out of phase), a system of equations describes the dynamics of the medial position x(t) of one of the opposing labia, at one of the sound sources. These read

dx/dt = y,
m dy/dt = -k x - b y + a p_av,

where the first term in the second equation is the restitution in the labium, the second term accounts for the dissipation, and the last term for the force due to the interlabial pressure. The average pressure p_av can be written in terms of the displacement and its velocity 3. These equations describe a set of qualitatively different dynamical regimes. To gain independence from the details of any particular model presenting these regimes, we worked with a normal form that unfolds into a saddle-node in a limit cycle bifurcation and a Hopf bifurcation. The normal form, which is analytically derived 43, constitutes the simplest set of equations for any model in which oscillations arise in either of these two bifurcations. Once this reduction is performed, the selection of parameters that allow obtaining a sound with specific acoustic features gives rise to unique values. The normal form equations are shown in Fig. 1, and display the same set of dynamical regimes 3 as the physical model, with scaling through a time constant γ. Once x(t) is computed, the pressure at the input of the tract, P_i(t), is computed from the flow fluctuations produced by the labial motion together with the wave reflected at the far end of the tube, P_i(t − T); here T is the time for a sound wave to reach the end of the tube and return, r is the reflection coefficient at that end, and α(t) is proportional to the average mean velocity of the flow. The transmitted pressure fluctuation P_t(t) = (1 − r) P_i(t − 0.5T) forces the air in the glottis, which is approximated by the neck of a Helmholtz resonator (used to model the OEC 3,44), i.e., a large container with a hole, such that the air in its vicinity oscillates due to the springiness of the air in the cavity. A linear set of three ordinary differential equations accounts for the dynamics of the air flow and pressure in this linear acoustic device 3, resulting in the final output pressure P_out(t) (Fig. 1).
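To make the structure of the synthesis concrete, the following Python sketch integrates a driven two-variable normal form and combines its output with a delayed reflected wave to obtain P_i(t). The right-hand side of the normal form, the source term, and all constants here are illustrative assumptions (the published equations appear in Fig. 1 and ref. 3), and forward Euler with oversampling stands in for a proper stiff integrator.

    import numpy as np

    def synthesize_pi(alpha, beta, fs=44100, gamma=24000.0, r=-0.9,
                      tube_T=2 * 0.035 / 350.0, substeps=50):
        """Integrate a labial normal form driven by alpha(t), beta(t)
        (arrays sampled at fs) and combine the source with the wave
        reflected at the far end of a tube of round-trip time tube_T."""
        dt = 1.0 / fs
        h = dt / substeps                 # oversample for numerical stability
        delay = max(1, int(round(tube_T * fs)))
        x, y = 1e-3, 0.0
        p_i = np.zeros(len(alpha))
        for k in range(len(alpha)):
            a, b = alpha[k], beta[k]
            for _ in range(substeps):
                dx = y
                # assumed normal form unfolding SNILC and Hopf bifurcations
                dy = gamma**2 * (a + b * x + x**2 - x**3) \
                     - gamma * x * y - gamma * x**2 * y
                x, y = x + h * dx, y + h * dy
            back = p_i[k - delay] if k >= delay else 0.0
            p_i[k] = a * y + r * back     # assumed source term + reflected wave
        return p_i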
We reconstructed the parameters driving the equations of the normal form (α(t) and β(t)), as well as the parameters describing the tracheal length and the OEC cavity, in such a way that the synthesized sounds presented the same fundamental frequencies and spectral content as natural song. Reconstructions over sequential sound segments gave estimates of the time dependence of physiological parameters used during song production. A linear integrator (time constant τ = 2.5 ms) was used to compute the envelope of the sound signal. A threshold was used to identify phonating segments. For those longer than 20 ms, we decomposed the recorded songs into successive 20 ms segments (time between consecutive segments ∆t = 1/20000 s). These were short enough to avoid large variation of the physiological gestures, and long enough to compute spectral content. For each segment, we computed the spectral content index (SCI) 16 and the fundamental frequency. A search in the parameter space (α(t), β(t)) was performed over a grid so that the synthetic sounds produced would match the fundamental frequencies of the song segment being fitted. Over the set of (α(t), β(t)) values selected, a search was performed so that the SCI of the synthetic sounds matched the value of the song segment 3. For sound segments shorter than 20 ms, the fundamental frequency was computed as follows. First, we selected the relative maxima of the sound signal that reached the sound envelope. Then, the fundamental frequency was computed as the inverse of the time difference between two consecutive selected maxima. The SCI at that time was estimated as the average value among all the possible SCI values corresponding to that frequency in the framework of the model 16. With those estimations of fundamental frequency and SCI, (α(t), β(t)) were computed. Brief segments were typically fast trills. We modeled those as rapid oscillations of pressure and tension, with the amplitude of the pressure oscillations such that the maxima fall in the phonating region, and the amplitude of the tension oscillations such that the frequency range of the vocalization was reproduced. We found that most of the parameters could be well approximated by fractions of sine functions, exponential decays, constants, or combinations of those.
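The per-segment fit is a constrained grid search, sketched below in Python. The helper functions f0_of and sci_of, which would synthesize a short sound for a candidate (α, β) and return its fundamental frequency and SCI, and the frequency tolerance are our assumptions; they stand in for the model-driven synthesis described above.

    import numpy as np

    def fit_segment(f0_target, sci_target, alphas, betas,
                    f0_of, sci_of, f_tol=30.0):
        """Grid search in (alpha, beta): keep candidates whose synthetic
        fundamental frequency matches the segment's within f_tol Hz, then
        pick the one whose spectral content index (SCI) is closest."""
        best, best_err = None, np.inf
        for a in alphas:
            for b in betas:
                if abs(f0_of(a, b) - f0_target) > f_tol:
                    continue            # fails the fundamental-frequency match
                err = abs(sci_of(a, b) - sci_target)
                if err < best_err:
                    best, best_err = (a, b), err
        return best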
Using these analytic functions as parameters of the model to generate a synthetic copy of the recorded song resulted in a noiseless surrogate song (e.g., Supplementary Fig. 1, Noise=0). The addition of noise allowed the gradual recovery of realistic timbral features. In the text, the dimensionless variable Noise varied between 0 and 40, with Noise=5 corresponding to a fluctuation size equal to 2.5 percent of the maximum range of the β(t) parameter. Notice that the timbral effect will be more important for low frequency sounds, which explore a small range of β(t).
For each bird, the length of the trachea was chosen so that the frequencies close to 2.5 kHz and 7 kHz in the bird's song were the first and second resonances of a tube closed at one end. This corresponds to a length of 3.5 cm 45 . Typically, zebra finch songs present a third important resonance around 4 kHz. The parameters of the Helmholtz resonator were adjusted so that its resonant frequency would account for this resonance 3 . The synthetic songs for sleeping birds were generated before doing the electrophysiological experiments. For singing birds all song reconstructions were also performed blind to the spike data.
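As a consistency check on these choices: for a tube closed at one end, the resonances fall at f_n = (2n − 1) c / (4L). Taking the speed of sound as c ≈ 350 m/s (our assumed value), L = 3.5 cm gives f_1 = 350/(4 × 0.035) = 2.5 kHz and f_2 = 3 f_1 = 7.5 kHz, matching the first resonance exactly and approximating the second (7 kHz). The Helmholtz resonator frequency follows the standard relation f_H = (c/2π) √(A/(V L_neck)), with A and L_neck the area and effective length of the neck and V the cavity volume, which leaves enough freedom to place the remaining resonance near 4 kHz.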
Supplementary Material
Refer to Web version on PubMed Central for supplementary material.

Figure 2. Spectrographs of a bird's song (a) and model synthetic song (b). Song is described by fitted parameters α(t) and β(t), proportional to air sac pressure and labial tension, respectively (c). Each sound is generated by a continuous curve in the parameter space of the model, a "gesture" (d). Oscillations in the vicinity of a SN bifurcation present rich spectra, typical of zebra finch song. Note that the spectrally poor "high note" (green) is distant from the SN bifurcation. The gray area indicates the region of phonation. The distribution of gesture durations for five birds is displayed in (e).
"year": 2013,
"sha1": "0ce96701edb3cb526d6e6a95c677f74faf310ddc",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc3878432?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "c3091fd6a7afe5899509605c12759792b6a46a7b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
Effect of interfacial surface treatment on bond strength of particulate-filled composite to short fiber-reinforced composite
Abstract Objective The aim was to investigate the effect of different interfacial surface treatments on the shear bond strength (SBS) between a short fiber-reinforced flowable composite (SFRC) and a particulate-filled flowable composite (PFC). In addition, SBS between two successive layers of similar materials was evaluated. Materials and methods One hundred and forty-four specimens were prepared having either SFRC (everX Flow) as a substructure composite and PFC (G-aenial Flo X) as a surface composite, or having one of the two materials as both substructure and surface layer. Eight groups of specimens were created (n = 18 per group) according to the interfacial surface protocol used. Group 1: no treatment; Group 2: ethanol, one wipe; Group 3: ethanol, three wipes; Group 4: phosphoric acid etching + bonding agent; Group 5: hydrofluoric acid etching + bonding agent; Group 6: grinding + phosphoric acid etching; Group 7: only PFC layers; and Group 8 (control): only SFRC layers, without any surface treatment. After one-day storage (37 °C), SBS between surface and substructure composite layers was measured in a universal testing machine, and failure modes were visually analyzed. SEM was used to examine the bonding surface of the SFRC composite after surface treatment. SBS values were statistically analyzed with a one-way analysis of variance (ANOVA) followed by the Tukey HSD test (α = .05). Results The SBS between successive SFRC layers (Group 8) was statistically (p < .05) the highest (43.7 MPa) among the tested groups. Surface roughening by grinding followed by phosphoric acid etching (Group 6) resulted in a higher SBS (28.8 MPa) than the remaining surface treatments. Conclusion The flowable composite with glass fibers (everX Flow) showed higher interlayer SBS compared to the PFC flowable composite. Interfacial surface roughness increases the bonding of PFC to the SFRC substructure.
Introduction
Direct composite restoration, also known as particulate-filled composite (PFC) restoration, is a common restorative procedure for treating lost tooth structure. It has been reported that general dental practitioners in public dental facilities spend more than half their time applying direct composite restorations [1]. Aside from the capability to adhere to tooth structures via bonding systems, direct PFC composite restorations are less expensive than indirect ceramic/composite restorations [2]. The application of direct PFC composites has expanded to include not just posterior intra-coronal restorations, but also extra-coronal restorations [2]. Nevertheless, mechanical properties and polymerization shrinkage are still issues with contemporary PFCs. In small and medium-sized cavities, PFC restorations have shown satisfactory overall clinical performance, with annual failure rates ranging from 1 to 3 percent [3,4]. However, the clinical performance of PFC restorations is clearly associated with restoration size. Large PFC restorations have proven to be more likely to fail due to fractures, resulting in shorter lifespans [3,4].
The reinforcing phase of PFCs has been thoroughly studied with the purpose of improving their viability for application in high-stress areas. Efforts have been made to alter the type of filler used, as well as the size and silanization of the filler [5][6][7]. Among the strategies investigated, reinforcing the PFC with short glass fibers has proven to be one of the most successful [6,8,9]. Short fibers improved the material's ability to withstand crack propagation and reduced the stress intensity at the crack tip, where a crack spreads in an unstable way [10]. As a result, an enhancement in composite toughness was observed [10,11]. In 2019, a flowable version of short fiber-reinforced composite (SFRC) was introduced with the promise of easy handling and better adaptability in limited spaces [12,13]. Compared to PFC, this SFRC was found to have enhanced mechanical properties in terms of fracture toughness and fatigue resistance [12][13][14]. It should be taken into account that SFRC is recommended for use as a bulk base or core foundation and should not be used as a top surface layer. According to the manufacturers' recommendations, SFRC should be covered with a layer (1-2 mm) of flowable or packable PFC to ensure a sufficient esthetic appearance.
Many in vitro studies have looked at bi-layered composite structures using SFRC as the substructure and PFC as the top surface layer [14][15][16][17]. In these investigations, SFRC was used to reinforce extensive direct composite restorations as substructure foundations by supporting the PFC layer and acting as a crack prevention layer. However, there is little knowledge regarding the interlayer bond strength between SFRC and PFC. A previous investigation showed that ethanol application might cause some dissolution of the polymer matrix of fiber-reinforced composite, resulting in increasing surface roughness [18]. The question arises as to whether one may use ethanol wiping to expose the fibers from the surface of SFRC. This might improve the interlayer bonding by means of micromechanical interlocking.
Accordingly, this research aimed 1. to investigate the effect of different interfacial surface treatments on the shear bond strength (SBS) between SFRC and PFC and 2. to determine the SBS of two successive layers of similar materials.
Materials and methods
Two commercially available flowable composites, one PFC (G-aenial Flo X) and one SFRC (everX Flow) were used in this study (Table 1).
Specimen preparation
A total of 48 acrylic blocks were prepared in cold-cure auto-polymerized acrylic resin (Vertex-Dental B.V., Zeist, The Netherlands). Three standardized holes (diameter = 6 mm, depth = 4 mm) were prepared in each block using a bench drill press machine (DP2000A, Rexon Industrial Corporation, Ltd., Taichung, Taiwan). The holes, later to be filled with the substructure composite, were drilled at equal distances from one another. A total of 144 specimens were then fabricated having either SFRC as a substructure composite and PFC as a surface composite, or having the same material as both substructure and surface composite. Specimens were divided into 8 groups (n = 18 per group) according to the treatment protocol used for the substructure composite surface (Table 2).
SFRC composite was used as a substructure in Groups 1-6. SFRC was applied into the drilled holes in a bulk increment of 4 mm, flattened (plastic instrument) and light cured (Elipar S10, 3M ESPE, Germany) for 40 s from the top surface. The wavelength of the light was between 430 and 480 nm and the light intensity was 1200 mW/cm² (Marc Resin Calibrator, BlueLight Analytics Inc., Canada). After curing, the surface of the SFRC was manipulated with different surface treatment protocols before the application of the surface PFC (Table 2). In Group 1, no surface treatment was applied. In Group 2, the substructure composite surface was exposed to ethanol (concentration 99%) for 10 s (one wipe). In Group 3, the composite surface was exposed to ethanol for 30 s followed by air-drying for 10 s (three wipes). In Group 4, the composite surface was etched with 37% phosphoric acid (Scotchbond Universal Etchant, 3M ESPE, USA) for 10 s, then rinsed with water for 10 s and air dried for 5 s. Etching was followed by the application of a bonding agent (G-Premio Bond, GC Corp, Tokyo, Japan). The bonding agent was placed abundantly on the surface for 40 s. The excess was then removed by blowing with air for 5 s, followed by light curing (Elipar S10) for 10 s.
In Group 5, the composite surface was acid-etched by 4.5% hydrofluoric acid (Ivoclar Vivadent, Schaan, Liechtenstein) for 60 s followed by rinsing with water and air-drying. Subsequently, the composite surface was treated with the bonding agent as in Group 4. In Group 6, the composite surface was ground on 320 grit silicon carbide paper using an automatic grinding machine (Rotopol-1; Struers A/S, Copenhagen, Denmark) and then acid-etched as in Group 4. In groups 7 and 8 (control), the cured substructure composite (SFRC or PFC) was immediately covered with a surface layer of the same material and without any surface treatment.
To allow application of the surface PFC layer (the stub), a transparent polyethylene mold (inner diameter 3.6 mm and height 3 mm) was positioned centrally on the flat substructure SFRC surface. The PFC was applied (2 mm thick layer) and light-cured through the mold from the top and lateral curved surfaces for 40 s (Elipar S10). Then, the mold was carefully removed, and specimens (Figure 1) were stored for 24 h in water (37 °C) before testing.
Interlayer debonding test
The strength of the bond between the surface and substructure composite layers was measured using a shear bond strength test (Figure 1). The specimens were fixed in a mounting jig (Bencor Multi-T shear assembly, Danville Engineering Inc., San Ramon, CA, USA) and a shearing rod was placed parallel to and against the interface between the two composite layers. Then, at room temperature (23 ± 1 °C) and a crosshead speed of 1.0 mm/min, a universal testing machine (Model LRX, Lloyd Instruments Ltd., Fareham, England) was utilized to load the specimens until failure. Data were recorded by PC software (Nexygen, Lloyd Instruments Ltd., Fareham, England). The bond strength was calculated by dividing the maximum load at failure (N) by the bonding area (mm²). The results were recorded in megapascals (MPa).
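As a worked example of this calculation (the failure load below is hypothetical, chosen only to illustrate the arithmetic; with the 3.6 mm stub mold the bonded area is about 10.2 mm²):

    import math

    d_mm = 3.6                            # inner diameter of the PFC stub mold
    area_mm2 = math.pi * (d_mm / 2) ** 2  # bonded area ~= 10.18 mm^2
    load_n = 445.0                        # hypothetical failure load in newtons
    sbs_mpa = load_n / area_mm2           # N/mm^2 is numerically equal to MPa
    print(f"SBS = {sbs_mpa:.1f} MPa")     # -> SBS = 43.7 MPa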
Microscopic analysis
Failure modes of specimens were visually examined and analyzed using a stereomicroscope at ×15 magnification (Wild M3Z, Wild Heerbrugg, Switzerland). The failure modes were then classified either as adhesive failures between the two composite layers or as cohesive failures within either the substructure or the surface composite.
The effect of surface treatment on SFRC was evaluated using scanning electron microscopy (SEM) (JSM 5500, Jeol, Japan). Before examination, specimens were coated with a gold layer in a vacuum evaporator using a sputter coater (BAL-TEC SCD 050 Sputter Coater, Balzers, Liechtenstein).
Statistical analysis
Data were statistically analyzed using one-way analysis of variance (ANOVA) followed by the Tukey HSD test (α = .05) to test for differences in shear bond strength between the groups using SPSS version 23 (SPSS, IBM Corp., NY, USA).
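For illustration, the same analysis maps onto standard Python tooling (SciPy and statsmodels); the data layout and function name below are our assumptions, not the authors' SPSS workflow.

    import pandas as pd
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    def analyze_sbs(df: pd.DataFrame):
        """df has one row per specimen, with columns 'group' (1-8)
        and 'sbs' (MPa), 18 specimens per group."""
        samples = [g["sbs"].to_numpy() for _, g in df.groupby("group")]
        f_stat, p_value = stats.f_oneway(*samples)          # one-way ANOVA
        tukey = pairwise_tukeyhsd(df["sbs"], df["group"], alpha=0.05)
        return f_stat, p_value, tukey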
Results
The interlayer shear bond strength results are presented in Figure 2. One-way ANOVA demonstrated a significant difference between the groups (p < .05). Only grinding followed by phosphoric acid etching (Group 6) resulted in statistically higher shear bond strength than the other surface treatments. The distribution of failure modes is presented in Figure 3. Ethanol-treated surfaces (Groups 2 and 3) resulted in entirely cohesive failures, as did Group 1 (without surface treatment) except for one specimen, while roughening (Group 6) or treating the surface with acid etching and a bonding agent (Groups 4 and 5) increased the number of adhesive failures. In Groups 7 and 8, having two layers of similar material, all specimens showed cohesive failure in the substructure layers. Figure 4 shows SEM images of SFRC after ethanol surface treatment under different magnifications. Ethanol treatment resulted in irregular surfaces with some short fibers protruding from the matrix.
Discussion
Bi-layered composite restorations, where flowable SFRC is placed at the cavity bottom as a substructure and veneered with PFC (packable or flowable), have been the recommended technique for restoring stress-bearing posterior teeth, as they provided enhancement in load-bearing capacity when tested in vitro [9,16]. In this scenario, surface roughness, surface free energy, material reactivity, viscosity, the presence of an oxygen inhibition layer, and the increment material employed all have an influence on the bonding between two composite layers [19,20].
In the current study, the existence of an oxygen inhibition layer on the surface of the cured SFRC substructure layer (without any treatment) may explain why the bond strength to the PFC surface layer (Group 1) was within the same range as that observed after using different surface treatments (Figure 2). In general, this finding is in line with many studies in the literature, in which the existence of an oxygen inhibition layer between two successive dimethacrylate-based composites improves the interfacial bond strength [19][20][21][22][23]. In other words, the oxygen inhibition layer appears to act as an adhesive layer, chemically binding successive composite increments. Bijelic-Donova et al. showed that the existence of short fibers in SFRC has a beneficial effect on the thickness or depth of the oxygen inhibition layer and thus on the interfacial bonding strength [20].
Our results did not fully support the assumption that ethanol surface treatment might enhance the bond strength between SFRC and PFC layers by exposing more fibers from the surface. However, specimens in the ethanol-treated groups predominantly showed cohesive failures, which could be a sign of micro-mechanical interlocking between the monomer from PFC and the fibers in the SFRC substructure ( Figure 4). In the study by Basavarajappa et al., it was found that the surface roughness of fiber-reinforced composite was influenced by ethanol at varying concentrations and treatment time [18]. This was likely related to the swelling and resolidification of the polymer surface between the glass fibers which were not affected by ethanol [18]. It is also possible that some of the residual monomers may have leached from the polymer matrix [24] and had a minor effect on the dimensions of the polymer matrix between the fibers ( Figure 4). However, the orientation of the exposed fibers at the interface (Figures 4 and 5) affects the bonding and load transfer behavior. Nevertheless, this issue should be investigated further to confirm the effect in practice.
Another aspect in this study was the use of an adhesive. Groups in which an adhesive was applied between the layers (Groups 4 and 5) showed no improvement in the interfacial shear bond strength compared with Group 1 (without surface treatment), and the predominant mode of failure was adhesive ( Figure 3). This result could be attributed to the brittleness caused by the existence of a relatively thick adhesive layer at the interface. Roughening the SFRC surface by grinding followed by phosphoric acid etching (Group 6), resulted in a higher shear bond strength compared to the group without surface treatment ( Figure 2). This favorable finding may be explained by the resulting high surface irregularity, which increases the bonded surface area and offers higher micro-mechanical interlocking at the interface between SFRC and PFCs [25,26]. Moreover, this procedure of grinding and etching the surface with phosphoric acid could be beneficial in the case of composite repairs where there is no oxygen inhibition layer.
Our findings are in accordance with evidence from another investigation [27], which showed that treating composite substrates with hydrofluoric acid adversely affected the morphological features of PFC substrates, thereby resulting in poor repair bond strength when compared with the use of air-particle abrasion [27]. According to Özcan et al., when composite substrates are exposed to hydrofluoric acid, a water monolayer may penetrate via voids to the filler, which in turn may disorganize the silane layer that is responsible for stabilizing the filler-resin interface [27]. This may weaken the particle- or fiber-matrix interface and lead to filler dissolution.
There is no consensus as to a required minimum composite interlayer shear bond strength value. However, based on literature, values in the range 15 MPa to 35 MPa seem relevant [19][20][21][22][23]25,26,28,29]. In our study, the shear bond strength values obtained were within this range, except for the significantly highest value (43.7 MPa) found between the two SFRC layers (Group 8). This superior result could be explained by the presence of randomly orientated fibers in SFRC, which are shown to affect the oxygen inhibition depth [20,30] and by a micro-mechanical interlocking between the protruding short fibers on the interlayer surfaces ( Figure 5). This interlocking could have an impact on the bond strength values, particularly in the case of shear stress. In addition, the superior mechanical properties of the SFRC, especially the fracture toughness would enhance its ability to resist shearing stresses [31,32].
The results of this investigation must be interpreted in light of some limitations. The interlayer bond strength of composites was determined using a shear bond strength test, although a tensile bond strength test could be more accurate in detecting bond strength differences between materials [33]. However, the shear bond test set-up has been the most commonly employed laboratory technique for evaluating the bond strength of adhesives and composite restorations.
Furthermore, the shear bond strength was measured without any aging, and thus long-term water storage and/or thermocycling are warranted to evaluate the long-term durability of the interlayer bonds.
Within the limitations of this study, it can be concluded that the interlayer bond strength between SFRC and PFC when an oxygen-inhibited layer is preserved, was within the same range as that observed between successive PFC layers.
Disclosure statement
No potential conflict of interest was reported by the authors.
"year": 2022,
"sha1": "2572f06332d5f95dab75b09074f49dbd68a96231",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/26415275.2022.2070489?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "84349b179fa08441ded5554ba62b42d004aa68a8",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Long noncoding RNA UCA1 promotes osteosarcoma metastasis through CREB1-mediated epithelial-mesenchymal transition and activating PI3K/AKT/mTOR pathway
Highlights
• UCA1 is upregulated in both osteosarcoma tissues and cell lines.
• UCA1 promotes osteosarcoma metastasis both in vitro and in vivo.
• UCA1 increased CREB1 expression by functioning as a ceRNA of CREB1 against miR-582.
• The pro-metastasis role of UCA1 is achieved by promoting EMT.
• UCA1 enhances EMT through the CREB1-mediated PI3K/Akt/mTOR pathway.
Introduction
Osteosarcoma is the most frequent primary bone tumor which mainly occurs in adolescents and children [1]. In the past few decades, despite advancements in complete surgical resection and neoadjuvant chemotherapy for osteosarcoma patients, the prognosis is still unfavorable due to the high rate of metastasis. The 5-year survival rate of patients with osteosarcoma having distant metastases is approximately 30% [2]. Therefore, understanding of the molecular mechanisms of osteosarcoma metastasis is important to improve the prognosis of osteosarcoma patients.
Long noncoding RNAs (lncRNAs) are typically longer than 200 nucleotides with little or no protein coding capacity. Accumulating evidence indicates that lncRNAs are aberrantly expressed in many human tumors and play critically important roles in diverse biological processes, such as proliferation, apoptosis and metastasis [3]. LncRNAs regulate gene expression through multiple mechanisms, including modulating transcription, indirect degradation of RNA transcripts and regulating various post-transcriptional processes [4].
Urothelial carcinoma-associated 1 (UCA1), a newly identified lncRNA, is widely expressed in various tumors and plays key roles in multiple biological processes [5]. Previous studies have demonstrated that UCA1 was highly expressed in ovarian cancer [6], non-small cell lung cancer [7] and prostate cancer [8]. UCA1 was initially found in bladder cancer and could enhance the tumorigenicity of cancer cells both in vitro and in vivo [9]. Bioinformatic analyses showed that UCA1 was highly expressed in tumor tissues from 361 gastric cancer patients and acted as an oncogene to promote cell proliferation. UCA1 also functions as a tumor promoter in colorectal cancer [10]. It was reported that high expression of UCA1 was associated with tumor proliferation and indicated worse prognosis in colorectal cancer [5]. In osteosarcoma, a previous study found that UCA1 was overexpressed and promoted cell growth. However, no research has focused on the role of UCA1 in osteosarcoma metastasis. Therefore, we set out to explore how UCA1 participates in osteosarcoma progression, especially its effect on the invasion and migration of osteosarcoma cells, with the aim of discovering novel targets for treatment.
In this study, we validated that UCA1 was significantly upregulated in osteosarcoma and, for the first time, we confirmed that UCA1 promotes the metastasis of osteosarcoma both in vitro and in vivo. Also, we are the first to demonstrate that UCA1 interacts with and suppresses miR-582 as a potential ceRNA. The pro-metastatic effect of UCA1 is achieved by promoting the epithelial-mesenchymal transition (EMT) via the CREB1-mediated PI3K/AKT/mTOR pathway.
Cell culture and tissue samples
Normal osteoblast cell line hFOB1.19, osteosarcoma cell lines Saos-2, HOS, U2-OS and MG-63 were obtained from the Cell Bank of Chinese Academy Sciences. The hFOB1.19 cells were cultured in D-MEM/F-12, Saos-2 in McCoy's 5A, U2-OS cells in RPMI-1640 and HOS cells in MEM. All of the media and the supplemented 10% fetal bovine serum were from Gibco. All cells were incubated at 37°C in a 5% CO2 atmosphere.
Human tissue samples of paired normal and primary tumor tissues from 30 osteosarcoma patients were collected from No. 215 Hospital of Shannxi Nuclear Industry. The study was approved by the Medical Ethics Committee and written informed consents were obtained from all patients.
RNA extraction and quantitative real-time PCR (qPCR)
Total RNA from both tissue samples and cell lines were extracted using TRIzol reagent (Invitrogen). First-strand cDNA was synthesized from 500 ng of total RNA using the PrimeScript RT Reagent Kit (TaKaRa) for detecting mRNA, lncRNA and miRNA levels. Real-time PCR was performed in triplicate using SYBR Premix Ex Taq (TaKaRa) on CFX96 Real-Time PCR Detection System (BioRad). Expression levels of mRNAs, lncRNAs and miRNAs were normalized to GAPDH, 18S rRNA and U6, respectively.
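Relative expression from such qPCR data is commonly computed with the Livak 2^(−ΔΔCt) method; the paper does not state its exact quantification formula, so the Python sketch below is an assumption consistent with the stated normalization to GAPDH, 18S rRNA, or U6, and the Ct values in the comment are hypothetical.

    def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
        """Livak 2^(-ddCt): normalize the target Ct to the reference gene,
        then express it relative to a calibrator sample (e.g., normal
        tissue)."""
        d_ct = ct_target - ct_ref                # e.g., UCA1 Ct minus 18S Ct
        d_ct_cal = ct_target_cal - ct_ref_cal
        return 2.0 ** (-(d_ct - d_ct_cal))

    # Hypothetical values: tumor (24.1, 12.0) vs normal (26.0, 12.2)
    # -> 2^(13.8 - 12.1) ~= 3.2-fold higher UCA1 in the tumor sample.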
Oligonucleotides, plasmids construction and cell transfection
The Control/UCA1 shRNA and the full-length human UCA1 sequence were synthesized by Genscript (Nanjing, China) and then subcloned into pLKO.1 and pcDNA3.1 vectors, respectively. According to the manufacturer's instructions, cells were transfected with Control/UCA1 shRNA vectors or Control/UCA1 vectors using Lipofectamine® 2000 reagent (Invitrogen) at a final concentration of 2 µg/ml for 48 h.
For stable overexpression of UCA1, the synthesized full-length UCA1 cDNA was cloned into lentiviral expressing vector pLV-puro. After production of lentiviral particles according to the standard protocols, cells were transfected for 24 h with 2 µg/ml polybrene (Sigma) and selected by using 2 µg/ml puromycin for 7 days.
The synthetic miR-NC/miR-582 and inhibitor NC/miR-582 inhibitor were purchased from GenePharma (Shanghai, China). All of the oligonucleotides were transfected into cells using Lipofectamine® 2000 reagent (Invitrogen) at final concentrations of 50 nM.
CCK8 assay and apoptosis assay
Cell proliferation was detected by a Cell Counting Kit-8 (CCK-8) assay (Dojindo) according to the manufacturer's instructions. Cells were seeded into 96-well plates (2 × 10³ cells/well) and observed for 1, 2, 3 and 4 days. Cell apoptosis was detected by using an Apoptosis Detection Kit (KeyGEN) according to the manufacturer's instructions. Cells were stained with fluorescein isothiocyanate-conjugated Annexin V and PI and analyzed with a FACScan flow cytometer.
Wound-healing assay and matrigel invasion assay
For the wound-healing assays, wound closures were observed by taking photographs under a microscope 0 and 48 h after scratching. Matrigel invasion assays were performed with Matrigel (BD) and 8-µm pore, 24-well transwell chambers (Millipore) following the manufacturer's instructions. Cells (1 × 10⁵) in 200 µl of serum-free medium were added to the upper chamber and cultured for 48 h. Migrated cells were fixed with ethanol and stained with 0.1% crystal violet, and photographs were taken of 6 randomly selected fields for each well.
Western blot analysis
Western blotting was performed according to the standard protocol. In brief, cell lysates quantified with the BCA method were subjected to sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to a polyvinylidene fluoride (PVDF) membrane. The PVDF membrane was incubated with 1:1000 primary antibodies and then with 1:5000 goat anti-mouse or goat anti-rabbit IgG-HRP conjugate secondary antibodies (Akea). Blots were detected by an ECL system (Alpha Innotech) according to the manufacturer's instructions.
Animal models
To evaluate the lung metastatic potential of osteosarcoma cells in vivo, 5 × 10⁶ U2-OS/Lenti-Control or U2-OS/Lenti-UCA1 cells in 200 µl of serum-free medium were injected into six-week-old, male nude mice through the tail vein (n = 6 per group). Four weeks later, the mice were sacrificed. Individual organs from the mice were removed, and metastatic tissues were analyzed with H&E staining.
RNA immunoprecipitation (RIP) assay
Cells were co-transfected with pLV-MS2, pLV-UCA1-WT-MS2 or pLV-UCA1-MT-MS2 and pMS2-GFP (Addgene). After 48 h of transfection, cells were subjected to a RIP assay using 5 µg GFP antibody (Sigma-Aldrich) or negative control IgG with the RNA Immunoprecipitation Kit (Millipore) according to the manufacturer's instructions. For anti-AGO2 RIP, cells were transfected with miR-NC/miR-582. After 48 h of transfection, cells were used to perform the anti-AGO2 RIP assay using 5 µg anti-AGO2 antibody (Millipore) as described above.
Statistics analysis
Data were analyzed using the SPSS 19.0 software and expressed as the mean ± standard error of the mean (s.e.m.) from at least three separate experiments. Two-tailed Student's t-tests were used to evaluate statistical significance between two independent groups of samples. The significance of the correlation between miR-582 and UCA1 levels was judged via the Pearson product-moment correlation coefficient. Differences were considered significant at *p < 0.05 and ***p < 0.001.
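For illustration, both tests map directly onto standard SciPy calls; the function names and array layout below are placeholders for the measured expression levels, not the authors' SPSS workflow.

    from scipy import stats

    def two_group_test(group_a, group_b):
        """Two-tailed Student's t-test for two independent groups."""
        return stats.ttest_ind(group_a, group_b)           # (t, two-sided p)

    def expression_correlation(uca1_levels, mir582_levels):
        """Pearson product-moment correlation, e.g., UCA1 vs miR-582
        across the 30 tumor tissues."""
        return stats.pearsonr(uca1_levels, mir582_levels)  # (r, p)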
UCA1 is upregulated in osteosarcoma and promotes its growth
To detect the expression level of UCA1 in osteosarcoma, strictly paired primary tumor tissues and normal tissues were collected from 30 osteosarcoma patients. As displayed in qPCR assays, UCA1 expression was significantly upregulated in osteosarcoma tissues, with the average level more than twofold higher than that in normal tissues (Fig. 1A). Similarly, compared with the normal osteoblast cell line hFOB1.19, the osteosarcoma cell lines Saos-2, HOS, U2-OS and MG-63 all showed increased levels of UCA1 (Fig. 1B). The following clinical analysis showed that increased UCA1 expression was significantly associated with metastasis, larger tumor size and higher clinical stage of osteosarcoma (Table 1). Given the upregulated UCA1 in both the tumor tissues and cell lines, we deduced that UCA1 may promote the growth of osteosarcoma. To clarify this, U2-OS cells, which had the lowest level of UCA1, were transfected with pHBLV Control/UCA1 plasmid to overexpress UCA1 (Fig. 1C), and MG-63 cells, which had the highest level of UCA1, were transfected with Control/UCA1 shRNA to silence UCA1 expression (Fig. 1D). Results of CCK8 assays showed that UCA1 overexpression significantly promoted cell proliferation (Fig. 1E) and there was a significant repression of growth in the UCA1 knockdown group (Fig. 1F) compared with the respective control groups. Moreover, we found that overexpression of UCA1 markedly inhibited apoptosis, as shown by a decrease of Annexin V-PE positive cells (Fig. 1G), whereas knockdown of UCA1 increased cell apoptosis (Fig. 1H). These results suggested that UCA1 promotes the growth of osteosarcoma cells by increasing proliferation and decreasing apoptosis.
UCA1 promotes the invasion and migration of osteosarcoma both in vitro and in vivo
To test the effect of UCA1 on the invasion and migration of osteosarcoma cells, we performed wound-healing and Matrigel invasion assays, as well as detection of epithelial-mesenchymal transition (EMT) markers, in U2-OS and MG-63 cells treated in the same way as above. We observed that when the endogenous UCA1 was knocked down, MG-63 cells showed decreased capability of migrating into the wounded cell monolayer (Fig. 2A) and of invading through the Matrigel-coated transwell chambers (Fig. 2B). At the same time, the expression of the epithelial marker E-cadherin was upregulated at both the mRNA and protein levels, while the mesenchymal markers Vimentin, Snail1 and ZEB1 were downregulated (Fig. 3A). On the contrary, when ectopic UCA1 was expressed, U2-OS cells presented increased migration and invasion ability compared with control cells (Fig. 2C and 2D), and the expression pattern of the EMT markers was the opposite (Fig. 3A). To further explore the pro-metastatic effect of UCA1 in vivo, we established U2-OS cells stably expressing UCA1 using lentiviral particles and injected these cells into the tail vein of nude mice. Four weeks later, we observed that, compared with the control group, U2-OS cells overexpressing UCA1 had a stronger capability of invading and migrating to the lung, indicated by more numerous (the average number of metastatic nodules in the UCA1 overexpression group was three times higher than in the control group) and larger metastatic nodules in H&E-stained tissue sections observed under the microscope (Fig. 3B). In summary, these data suggested that UCA1 promotes the invasion and migration of osteosarcoma cells both in vitro and in vivo.
UCA1 directly interacts with miR-582 in osteosarcoma cells
Recent studies have indicated that lncRNAs exert regulatory functions through targeting miRNAs which bind to their 3′-untranslated regions (3′-UTR). After screening and comparison using the online software program miRanda (http://mircrorna.org), we found that miR-582 had complementary base pairing with UCA1 (Fig. 4A). We constructed reporter vectors (pmirGLO) containing the wild type (UCA1) and mutated (UCA1 Mut) sequence of the region of UCA1 where miR-582 binds. Results of the luciferase reporter assay showed that when we transfected 293T cells with miR-582 mimics together with the UCA1 vector, the luciferase activity significantly decreased, whereas in the empty vector or UCA1-Mut groups no changes were observed (Fig. 4B). Next we confirmed the direct interaction between UCA1 and miR-582 using a RIP assay to pull down the endogenous miRNAs that interact with UCA1 (Fig. 4C). The following qPCR analysis showed that compared to the MS2 empty vector and UCA1-Mut groups, the UCA1 RIP was significantly enriched for miR-582 in U2-OS cells (Fig. 4C). Since it is well defined that miRNAs bind to their targets and induce translational repression and/or RNA degradation through AGO2, a core component of the RNA-induced silencing complex (RISC), we performed an anti-AGO2 RIP assay. Results showed a high UCA1 level in miR-582-overexpressing MG-63 cells when we used AGO2 to pull down the endogenous lncRNAs, suggesting the direct binding of miR-582 to UCA1 (Fig. 4D). Furthermore, we found that ectopically expressed UCA1 significantly reduced the miR-582 level (Fig. 4E) but overexpression of miR-582 did not alter the UCA1 level (Fig. 4F), indicating that miR-582 could bind to UCA1 but did not induce its degradation. Then we found that the miR-582 level was significantly decreased, with the average expression 47.8% lower in tumor tissues than in the corresponding normal tissues from 30 osteosarcoma patients (Fig. 4G), and the level of miR-582 was negatively correlated with the level of UCA1 in the tumor tissues (Fig. 4H). All of the above data showed that UCA1 directly interacts with miR-582 in osteosarcoma cells.
UCA1 functions as a ceRNA to upregulate CREB1 and promotes the metastasis of osteosarcoma cells through the AKT pathway
To further explore which gene related to the EMT of osteosarcoma cells mediates the pro-metastatic effect of UCA1 through its interaction with miR-582, we scanned the possible target genes of miR-582 via publicly available algorithms and focused on the gene cAMP responsive element binding protein 1 (CREB1). We analyzed the 3′-UTR of CREB1, which includes the putative binding site of miR-582 (WT 3′-UTR), and synthesized a mutated binding site (MT 3′-UTR) (Fig. 5A). After we sub-cloned these two 3′-UTR sequences into the reporter vector, we found that the luciferase activity significantly decreased in miR-582-overexpressing MG-63 cells co-transfected with the CREB1 WT vector and notably increased in U2-OS cells co-treated with miR-582 inhibitor and the CREB1 WT vector, compared with the MT 3′-UTR groups (Fig. 5B), offering direct evidence of miR-582 targeting CREB1. Then we found that overexpression of UCA1 increased the level of CREB1 at both the protein and mRNA levels, while the co-transfection of miR-582 abrogated this upregulation (Fig. 5C). Meanwhile, we also observed increased proliferation, invasion and migration in UCA1-overexpressing U2-OS cells (Figs. 5E, 6A and C). Similarly, the ectopic expression of miR-582 neutralized the enhancement of these malignant behaviors (Figs. 5E and 6A, C). On the contrary, we found that the depletion of UCA1 decreased the expression of CREB1 and the inhibition of miR-582 overcame this suppression (Fig. 5D). Also, the proliferation, invasion and migration of MG-63 cells were decreased with the downregulation of UCA1 and restored with the subsequent transfection of miR-582 inhibitor (Figs. 5F, 6B and D). All of these data suggested that UCA1 functions as a ceRNA to upregulate CREB1 in osteosarcoma cells.
To further study how CREB1 mediates the pro-metastatic effect of its ceRNA UCA1, we tested the activity of the AKT/mTOR pathway, which is closely related to the EMT and metastasis of osteosarcoma cells. Western blot analyses showed that when we overexpressed CREB1 in U2-OS cells, phosphorylation of both AKT and mTOR increased compared to the empty vector group, indicating activation of this pathway (Fig. 6E, left). Meanwhile, the epithelial marker E-cadherin was downregulated and the mesenchymal marker Vimentin was upregulated (Fig. 6E, left). However, after we treated the CREB1-overexpressing cells with DMSO (negative control) or LY294002, a PI3K inhibitor that blocks the AKT/mTOR pathway, we found that, besides the decreased phosphorylation of AKT and mTOR, the EMT process of U2-OS cells was also reversed (Fig. 6E, right). All of the above data suggested that CREB1 could promote the EMT and metastasis of osteosarcoma cells through activating the AKT/mTOR pathway, thus mediating the pro-metastatic effect of its ceRNA UCA1.
Discussion
LncRNAs function as vital regulators in a broad range of tumor biological processes, such as proliferation, apoptosis and migration. lncRNA UCA1 is a recently identified noncoding RNA, and dysregulation of UCA1 is frequently observed in human cancers. Previous studies have shown that high lncRNA expression is significantly correlated with poor osteosarcoma prognosis. Increased expression of the lncRNAs XIST, SOX2-OT, MALAT1, HNF1A-AS1, PARTICLE, CCAL, 91H, HULC, BCAR4, TUG1, BANCR, MEG3 and HOTTIP indicates poorer prognoses for osteosarcoma patients, while decreased expression of the lncRNAs DANCR and TUSC7 indicates better prognoses. The lncRNAs UCA1, BCAR4, HULC, and MALAT1 are independent risk factors for poor overall survival in osteosarcoma patients, and the lncRNAs DANCR, PARTICLE, HNF1A-AS1, XIST, CCAL and MEG3 can predict metastasis in osteosarcoma patients [11]. However, the role of UCA1 in regulating osteosarcoma invasion and migration remains unclear. In the present study, for the first time, we revealed that UCA1 promotes the invasion and metastasis of osteosarcoma cells via suppression of miR-582 both in vitro and in vivo.
We first confirmed the increased expression of UCA1 in osteosarcoma tissues and cell lines and clarified the positive correlation between UCA1 level and the malignant features of clinical patients. The biological function tests showed that UCA1 promoted the proliferation and inhibited the apoptosis of osteosarcoma cells. Importantly, we found that UCA1 increased the invasive and metastatic capacities of osteosarcoma cells both in vitro and in vivo. Since tumor cells acquire migratory and invasive capabilities through EMT, which causes the cells to lose their intercellular adhesion [12] and generates an aggressive tumor phenotype that leads to proximal or distant spread via direct invasion or the lymphatic and circulatory systems [13], we detected the alterations of EMT markers following the increase or decrease of UCA1 expression. The results showed that overexpression of UCA1 significantly decreased the expression of the epithelial marker E-cadherin and increased the expression of the mesenchymal markers Vimentin, Snail and ZEB1, indicating that the process of EMT was promoted. Conversely, knockdown of UCA1 suppressed the process of EMT in osteosarcoma cells. For the first time, we demonstrated that UCA1 promotes the process of EMT and thereby induces the invasion and metastasis of osteosarcoma cells.
It has been reported that lncRNAs exert their functions by interacting with different molecules, such as miRNAs, mRNAs and proteins. Among them, miRNAs can be bound and sequestered by lncRNAs, making the lncRNAs act as miRNA "sponges" or ceRNAs [14]. Currently, the functional pattern of UCA1 in osteosarcoma cells remains unclear. Recent studies have found that miR-582 acts as a tumor suppressor and can regulate bladder cancer progression both in vitro and in vivo [15]. However, the function of miR-582 in osteosarcoma also remains unclear. In the present study, for the first time, we demonstrated that UCA1 acts as a ceRNA to sequester miR-582. Luciferase reporter and RIP assays demonstrated a direct interaction between UCA1 and miR-582. We also found a negative correlation between UCA1 and miR-582 expression in osteosarcoma tissues. It is well known that miRNAs are key players in gene regulation and exert their functions by targeting genes. Using bioinformatic analysis, we found that the 3′-UTR of CREB1 mRNA has a complementary site for miR-582 binding. Luciferase reporter assay, western blot analysis and rescue experiments were performed to further confirm that CREB1 is indeed a target of miR-582 in osteosarcoma cells. Hence, we are the first to demonstrate that UCA1 functions as a ceRNA of CREB1 by interacting with miR-582, thus suppressing its binding to the 3′-UTR of CREB1 mRNA and increasing CREB1 expression in osteosarcoma cells. The PI3K/AKT/mTOR pathway is frequently activated and regulates cell survival, growth and migration in various tumors [16]. Although a previous study found that EMT can be mediated by the PI3K/AKT/mTOR pathway [17], the effect of CREB1 on this pathway and the possible role of CREB1 in EMT remain unknown. For the first time, our results showed that CREB1 induces EMT of osteosarcoma cells by activating the PI3K/AKT/mTOR signaling pathway.
In conclusion, we confirmed that UCA1 is upregulated in osteosarcoma and is related to more advanced clinical features. Importantly, our present study is the first to demonstrate that UCA1 promotes the metastasis of osteosarcoma both in vitro and in vivo. Also, for the first time, we revealed that UCA1, as a ceRNA against miR-582, positively regulates the expression of its target CREB1, thus enhancing the EMT process via CREB1-mediated activation of the PI3K/AKT/mTOR pathway and leading to the invasion and metastasis of osteosarcoma cells. These findings suggest that UCA1 could be a potential biomarker and a promising therapeutic target for osteosarcoma treatment.
Not applicable
Funding Not applicable
Availability of data and materials
All data generated or analyzed during this study are available from the corresponding author on reasonable request. | 2019-04-10T13:13:34.930Z | 2019-03-01T00:00:00.000 | {
"year": 2019,
"sha1": "45f99e2027a44efbcb0ac95926915c951cc4f019",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jbo.2019.100228",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d1823d1c9a7c908172f8fb0b25c4ccb3ed57ff48",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245572177 | pes2o/s2orc | v3-fos-license | Membrane Applications in the Dairy Industry
The membrane separation process is a promising technique for the dairy industry. It can be used to treat effluents and to recover molecules of interest that add value to product formulations. This study aimed to present the potential industrial uses of membranes for dairy industries. A detailed search for scientific articles on the subject was conducted in the Scopus database, showing that work on wastewater treatment in the dairy industry has increased significantly over the years. The findings presented are expected to provide a valuable theoretical framework for producers and industries related to milk production that aim to optimize production performance and obtain substantial economic and environmental benefits. Moreover, membrane-based wastewater treatment has low energy demand, high volume recovery, significant water quality, multiple applications of purified water, eco-friendly performance, and favorable future perspectives.
Introduction
The dairy industry is one of the largest and most successful sectors of the agricultural economy. The industrial process for producing dairy milk and dairy milk products involves the manufacture of pure milk and dried milk, as well as cheese, ice cream, protein concentrates and isolates, yogurts, and butter [1]. According to the Food and Agriculture Organization of the United Nations (FAO), milk production in 2020 exceeded 906 million tons and indicates a promising scenario [2]. Nevertheless, statistical projections report significant demand in some countries in the coming years, mainly in emerging markets with strong population growth [3].
This panorama is extremely alarming since, by volume, the amount of wastewater is about 2.5 times that of the processed milk [4]. High concentrations of organic and inorganic elements, biological oxygen demand (BOD), and chemical oxygen demand (COD) are prevailing characteristics of this type of effluent [5][6][7][8]. Dairy effluents are also extremely harmful, as they contain high concentrations of compounds rich in organic matter and microorganisms [9]. This situation calls for processes that increase productivity to meet the growing demand without serious economic and environmental impacts. Accordingly, to overcome the severe environmental and human risks caused by the inadequate disposal of dairy effluents in the environment, the implementation of strategies to optimize the milk production process, advance the performance of effluent treatment, and obtain high-quality water for reuse is widely advocated [9,10].
Bio-based treatment using aerobic filters, biofilms, and anaerobic processes has shown promising results [4,9]. Nevertheless, this type of treatment requires many process steps, which limits its use and entails high costs and long process times [4]. Thus, other categories of treatment have been explored over the years. Chemical and physicochemical approaches, applying the electrocoagulation process (EC) and the addition of natural coagulants/flocculants, respectively, significantly facilitate the segregation and elimination of particles in the wastewater [11]. Nonetheless, the use of a range of inorganic coagulants presents high risks to the environment and human health [12]. This obstacle prompted the search for alternative treatments that improve the quality of the water present in industrial dairy effluents and are beneficial from an environmental and economic perspective. Correspondingly, prominent techniques such as membrane-based processes have been scientifically reported.
A considerable number of studies have applied membrane-based processes to industrial dairy effluents. Membrane filtration has shown encouraging results, delivering high-quality water for recycling [13,14]. The limitations of membranes are directly related to product particularities, such as material type, thickness, working pressure, and permeate flux capacity [15]. Polymeric membranes are the most suitable for wastewater treatment, and the most common polymeric membrane processes are microfiltration (MF), ultrafiltration (UF), nanofiltration (NF), and reverse osmosis (RO) [16]. The use of membranes has therefore intensified due to several advantages, such as the simplicity of the process, the high quality of the reused product, and economic benefits [12]. Against an increasingly restrictive scenario regarding the use of water originating from industrial effluents, the escalation in the application of membrane-based processes emerges as an ingenious alternative for the future [17].
A favorable background is identified in particular membranes applied in agro-industries and in the integration of membranes with additional treatments. The association of MF and NF with physicochemical methods decreased COD by 96% and turbidity and color by 99%, proving to be a treatment that yields water for reuse [12]. Furthermore, integrated MF-RO and MF-NF systems removed 100% of turbidity and up to 96% of color [17]. Consequently, the use of membranes can optimize the wastewater management of dairy industries and enhance the recovery of purified effluents.
Additionally, it is interesting to verify the intensification of scientific production regarding dairy industry wastewater treatment and the use of membranes to treat effluents. A transparent search for scientific articles published in the last 5 years was carried out in the Scopus® database using the following keywords: dairy/wastewater/treatment (Figure 1 (a)) and dairy/wastewater/treatment/membrane (Figure 1 (b)). The use of treatments applied to dairy industry wastewater has increased significantly over the years. The largest production of scientific articles occurred in 2020 (104 reports), more than double that of 5 years earlier (42 reports). For the application of membranes to dairy industry wastewater treatment, the largest number of manuscripts was also observed in 2020 (14 reports), substantially higher than 5 years earlier (4 reports). Considering this promising scenario, it is interesting to investigate scientific production worldwide, indicating the regions of substantial application of effluent treatment associated with the dairy industry (Figure 2). The spatial characterization of scientific production shows the countries that have explored and reported findings on the subject. India, China, the US, and Brazil reported several studies on dairy industry wastewater treatment. This scenario is closely related to the large populations of these countries and, consequently, to the high demand for dairy products over the years. In this context, countries like India, Brazil, and China lack adequate infrastructure to treat effluents on a large scale; as a result, the development of studies on this topic has intensified in recent years. Furthermore, Poland, Iran, Ireland, Spain, and Italy presented promising results regarding the exploitation of this type of effluent. Many countries in the Middle East, such as Iran, have serious water scarcity problems, mainly due to the region's predominantly desert climate. Thus, the production of studies aimed at alternatives for treating industrial effluents for various purposes has expanded greatly [18].
A similar scenario has been widely observed on the European continent over the years, since European countries have limited water availability due to high urban demographic growth and the large amount of water needed to supply cutting-edge industries and an extremely developed agricultural sector [13]. Overall, the prominence of many European countries among the largest scientific producers on wastewater treatment and membrane strategies (Figure 2) is pertinent and indicates the level of innovation that has been adopted in the region recently. Accordingly, the purpose and importance of this review are to provide pertinent information about the application of the membrane separation process in the treatment of wastewater from the dairy industry.
Use of Membranes in Wastewater Treatment
Membranes can be used at different stages of the production process in the food industry. They can be used to purify raw materials, separate products, or treat effluents generated by industrial activities. The effluent from the dairy industry can consist of milk, whey, detergents, cleaning products in general, production residues, dyes, and flavorings, among others. The main objective of using membranes is to separate molecules in suspension or solution by their molecular weight, using membranes produced with different materials. The membranes cover a wide range of pore sizes and can retain molecules of different sizes; it is therefore necessary to know the main objective of applying the membranes in order to choose the appropriate pore size for the target molecule [19,20].
If the target molecule is very small and is suspended in a solution containing larger molecules, an integrated membrane system is indicated; that is, the solution must be passed through a larger-pore membrane and successively through smaller-pore membranes until the target molecule is retained. The type and size of the membrane to be used vary according to the type and size of the contaminant/target molecule retained [21]. In this way, cake formation and clogging of the membrane by the larger molecules are avoided; such fouling complicates the process and makes frequent cleaning protocols necessary, delaying production. Severe clogging can shorten the life of the membrane and cause fiber breakage, in the case of hollow fiber membranes, for example. For this reason, primary coagulation treatments, electrocoagulation, advanced oxidative processes, centrifugation, and chemical and biological treatments are often used to remove coarse solids.
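As a rough illustration of this pore-size reasoning, the sketch below maps an approximate target size to a membrane class. The cutoffs are generic textbook ranges, not values from this review, and a real selection would also weigh molecular weight cutoff (MWCO), transmembrane pressure, and fouling behavior.

```python
# Illustrative sketch: picking a membrane class from the approximate size of the
# target molecule/particle. The cutoffs are typical order-of-magnitude ranges only.
def select_membrane(target_size_nm: float) -> str:
    if target_size_nm > 100:   # microfiltration: suspended solids, fat globules, bacteria
        return "MF"
    if target_size_nm > 2:     # ultrafiltration: proteins (e.g., whey proteins)
        return "UF"
    if target_size_nm > 0.5:   # nanofiltration: lactose, divalent salts
        return "NF"
    return "RO"                # reverse osmosis: monovalent ions, near-pure water

# Integrated system: cascade from coarse to fine so each stage protects the next
# from cake formation and clogging.
cascade = [select_membrane(s) for s in (500, 10, 1, 0.3)]
print(cascade)  # ['MF', 'UF', 'NF', 'RO']
```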
In addition to being selective, membranes can be functionalized to meet industry demands. Rahimi et al. [22] studied two antifouling ultrafiltration polyethersulfone membranes for treating synthetic milk processing wastewater, prepared with low-cream milk, in membrane bioreactors. The authors reported a COD removal efficiency greater than 95% for membranes prepared with 0.1 wt.% of NH2-functionalized multiwalled carbon nanotubes and O-carboxymethyl chitosan-modified Fe3O4 nanoparticles.
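For readers who want to reproduce such figures, removal efficiency is simply the relative drop in concentration across the treatment. The helper below sketches the calculation; the influent and permeate values are hypothetical examples, not data from Rahimi et al.

```python
# Removal efficiency = (C_in - C_out) / C_in * 100, with concentrations in mg O2/L.
def removal_efficiency(c_in: float, c_out: float) -> float:
    if c_in <= 0:
        raise ValueError("influent concentration must be positive")
    return (c_in - c_out) / c_in * 100.0

# Hypothetical dairy effluent: 4000 mg/L COD entering, 180 mg/L in the permeate.
print(f"{removal_efficiency(4000, 180):.1f}% COD removal")  # 95.5%
```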
Value-Added Products
Membranes are a physical method of separating two phases, in which the retained phase and/or the permeate phase may be of interest. In effluent treatment, the interest is to remove pollutants from the effluent that will later be discarded. However, when the solution to be passed through the membrane contains components of high added value, as in the case of whey, the part retained in the membrane is of interest. The molecules concentrated from whey can be used in new products to add value and nutrients, or be sold separately, generating value for the company. Combined with the use of all milk components in food products, this action reduces the organic load of the effluents to be discarded, generating environmental and social benefits [23].
Whey is the main pollutant in effluents from the dairy industry because of its organic composition (carbohydrates, mainly lactose) and volumetric load [4,24]. This underlines the nutritional and biological importance of giving whey an appropriate destination, integrating it into dairy products, such as yogurts, creams, ice creams, and pastes, or bakery products, such as cookies, bread, and cakes, among others. Menchik et al. [23] characterized acid whey from Greek-style yogurt and cottage cheese, and milk permeate. The authors reported that the dairy coproducts contained milk solids, such as lactose, protein, and minerals, that can be used in several applications with nutritional benefits.
Gómez et al. [25] performed an assessment via process simulation to produce whey syrups (glucose syrup, and glucose and fructose syrup) and whey protein concentrate (WPC) attached to a cheese-processing plant. As whey has a high concentration of lactose, lactose hydrolysis can increase the sweetness of the powder, which can then be used as a sweetener in different products or obtained as sweetener syrups. The authors showed a base case and three other alternatives for obtaining the products, evaluating technical, economic, and environmental aspects. For this purpose, ultra- and nanofiltration membranes were evaluated, in addition to the use of free or immobilized enzymes for hydrolysis. Moreover, the authors also suggested methods for treating the effluent generated after the process, requiring only coagulation with ferric sulfate and anionic polymer, followed by sedimentation, for subsequent disposal [25].
Regarding whey processing in developing countries, many factories cannot afford the equipment needed to produce the powder, such as a spray dryer. One way to overcome this obstacle is to implement a plant in a consortium with regional companies. To reduce the cost of transporting the liquid to be spray-dried, an effective strategy is to concentrate it with membranes, reducing its volume. In this way, the membrane retentate/concentrate can be reused and directed to generating value for the regional companies. In addition, as shown in Figure 3, the treated water can still be reused in other industrial processes, such as for cleaning equipment. Figure 4 schematically presents all the possibilities of membrane-based processes in the dairy industry. Membranes can also separate and isolate compounds of interest from other milk proteins, as in the case of the isolation of the immunoglobulin and lactoferrin fractions [19]. In addition to the interest in proteins, there is also interest in isolating dairy lipid fractions due to their phospholipid content [20]. The highest cost of implementing the membrane system concerns the operational cost; however, the company receives the payback in a short time [26]. This aligns with the objective of the present article, which is to show ways of using the effluent from the dairy manufacturing process to add value to products and promote favorable future perspectives.
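To make the payback claim concrete, a back-of-envelope calculation can relate capital expenditure to net annual savings. Every figure in the sketch below is hypothetical, chosen only to illustrate the arithmetic, and none is taken from reference [26].

```python
# Simple (undiscounted) payback: years until capital cost is recovered by the net
# annual benefit (recovered water plus valorized whey solids, minus operating cost).
def simple_payback_years(capex: float, annual_savings: float, annual_opex: float) -> float:
    net = annual_savings - annual_opex
    if net <= 0:
        return float("inf")  # never pays back under these assumptions
    return capex / net

print(f"{simple_payback_years(250_000, 140_000, 60_000):.1f} years")  # 3.1 years
```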
Conclusions
The dairy industry effluent contains molecules with economic and nutritional value that can be used in other processes. Moreover, these molecules can be easily recovered with the use of membranes. This article elucidated alternatives for using the effluent from the dairy industry to generate value in developing countries and summarized the main scientific publications on the subject. | 2021-12-31T16:19:41.924Z | 2021-10-17T00:00:00.000 | {
"year": 2021,
"sha1": "2adcbe4e426081b1609c61651496a23b82d7ba0e",
"oa_license": null,
"oa_url": "https://doi.org/10.33263/briac124.50125020",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "9dc07f51fd6b8f51ba9786642029622088c1e119",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": []
} |
266051423 | pes2o/s2orc | v3-fos-license | From mouse to human
A deep analysis of multiple genomic datasets reveals which genetic pathways associated with atherosclerosis and coronary artery disease are shared between mice and humans.
GENETICS
ARYA MANI
Over time, various substances that travel through blood, such as cholesterol, inflammatory cells, and cellular debris, can accumulate in the walls of arteries, resulting in their narrowing. This build-up of materials, known as atherosclerosis, can also occur in the blood vessels that supply nutrients and oxygen to the heart, leading to coronary artery disease. Understanding what causes atherosclerosis is crucial for developing effective preventive and therapeutic strategies for coronary artery disease.
Genome-wide association studies, which compare common DNA variations in populations with and without a specific trait or disease, have identified numerous genetic variants linked with an increased risk of atherosclerosis (Khera and Kathiresan, 2017; Tcheandjieu et al., 2022). These variants are either causal or associated with various aspects of atherosclerosis, such as lipid metabolism, inflammation, and endothelial function. Despite significant advances in genetic research, it remains unclear which of these variants drive the condition, and in which genes and/or tissues these variants exert their effects. How other factors that are known to influence atherosclerosis, such as environment, sex and lifestyle, impact gene expression also cannot be inferred from these types of investigations.
To overcome these limitations, researchers use animal models that have been manipulated to develop a certain disease. Mice are the most commonly studied species, and have been used to observe how altering specific genes and controlling various environmental factors affect the way atherosclerosis and coronary artery disease develop. However, mice and humans differ significantly in terms of their physiology and genetics. For instance, their lipid metabolism and immune responses vary, and certain genes implicated in mice might not have directly equivalent functions or effects in humans, making it difficult to translate findings from studies in mice to clinical applications. Now, in eLife, Montgomery Blencowe and Xia Yang from the University of California, Los Angeles (UCLA) and colleagues, including Zeyneb Kurt and Jenny Cheng as joint first authors, report how the genetic pathways and mechanisms associated with atherosclerosis and coronary artery disease compare between these two species (Kurt et al., 2023).
Kurt et al. meticulously analyzed various sources of data, including mouse genomic data from the Hybrid Mouse Diversity Panel, and human genomic data from the CARDIoGRAMplusC4D consortium, GTEx database, and STARNET. In addition to results from genome-wide association studies (GWAS), these datasets include information on which genes are active and which variants alter the expression level of these genes (known as expression quantitative trait loci, or eQTL for short) in specific tissues of interest: the liver and vasculature tissues of humans, and the aorta (which is part of the vasculature) and liver tissues of mice.
Related research article: Kurt Z, Cheng J, McQuillen CN, Saleem Z, Hsu N, Jiang N, Barrere-Cain R, Pan C, Franzen O, Koplev S, Wang S, Bjorkegren J, Lusis AJ, Blencowe M, Yang X. 2023. Shared and distinct pathways and networks genetically linked to coronary artery disease between human and mouse. eLife 12:RP88266. doi: 10.7554/eLife.88266
First, the team (who are based at institutes in the United States, United Kingdom and Sweden) used the GWAS, gene expression, and eQTL data from mice and humans to determine which genes have similar expression profiles and are therefore likely to be connected, and which genes have a major role in the two conditions. Using these co-expression gene networks, together with another tool known as gene set enrichment analysis, they were able to identify the signaling pathways associated with coronary artery disease and atherosclerosis in humans and mice. Remarkably, this revealed a significant overlap in the pathways linked to coronary artery disease and atherosclerosis, with approximately 75% and 80% of identified pathways being associated with both diseases in the vasculature and liver tissues, respectively. These shared pathways encompass well-known processes, such as lipid metabolism, and introduce novel aspects like the mechanism that breaks down branched chain amino acids.
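Two of the steps summarized above lend themselves to a compact illustration: correlation-thresholded co-expression edges and the fraction of pathways shared between species. The sketch below is only a toy analogue; the module-detection methods actually used in such studies (e.g., WGCNA-style approaches) are more involved, and the 0.7 cutoff and the pathway sets are illustrative assumptions.

```python
# Toy sketch: (1) co-expression edges from correlation thresholding,
# (2) fraction of pathways shared between two species-specific pathway sets.
import numpy as np

def coexpression_edges(expr: np.ndarray, genes: list[str], r_min: float = 0.7):
    """expr: samples x genes matrix; returns gene pairs with |Pearson r| >= r_min."""
    r = np.corrcoef(expr, rowvar=False)            # gene-by-gene correlation matrix
    edges = []
    for i in range(len(genes)):
        for j in range(i + 1, len(genes)):
            if abs(r[i, j]) >= r_min:
                edges.append((genes[i], genes[j], round(float(r[i, j]), 2)))
    return edges

rng = np.random.default_rng(1)
expr = rng.normal(size=(40, 4))                     # 40 samples x 4 toy genes
expr[:, 1] = expr[:, 0] + rng.normal(0, 0.3, 40)    # make genes 0 and 1 co-expressed
print(coexpression_edges(expr, ["g0", "g1", "g2", "g3"]))

human_pw = {"lipid metabolism", "BCAA degradation", "interferon signaling"}
mouse_pw = {"lipid metabolism", "BCAA degradation", "insulin signaling"}
print(f"shared fraction: {len(human_pw & mouse_pw) / len(human_pw):.0%}")  # 67% here
```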
The analysis also uncovered pathways that were specific to each species, such as the insulin signaling pathway in the aorta of mice, and interferon signaling in the human liver. Kurt et al. then used a probabilistic model known as a Bayesian network to pinpoint which genes were predominantly driving these species-specific pathways, and identified the subnetwork of genes immediately downstream of or neighboring these drivers. The genes that drive the mouse-specific pathways were validated using single-cell RNA sequencing data, which revealed that the subnetwork of genes changed expression in the aortas and livers of mice with coronary artery disease and/or atherosclerosis.
Further analysis revealed that some of these previously unknown key driver genes were also hits in a recent GWAS of coronary artery disease, suggesting they have a crucial role in the disease. This included a key driver of coronary artery disease in both humans and mice, the ARNTL gene (also known as BMAL1), which is a transcriptional activator that forms a core component of the circadian clock and negatively regulates adipogenesis (Guo et al., 2012).
Interestingly, a common variant in the ARNTL gene has been associated with coronary artery disease and other factors linked to this condition and atherosclerosis, such as body mass index, diastolic blood pressure, triglyceride levels, and type 2 diabetes (van der Harst and Verweij, 2018; Pulit et al., 2019; Sakaue et al., 2021; de Vries et al., 2019; Vujkovic et al., 2020).
Furthermore, values derived from the GTEx dataset suggest that the alternative variant reduces the expression of the gene in whole blood. Deletion of ARNTL in certain blood cells has also been shown to predispose mice to acute and chronic inflammation (Nguyen et al., 2013). Use of functional genomics, particularly in the context of sex differences, will likely establish the causality of ARNTL and other predicted key driver genes (Gunawardhana et al., 2023).
The findings of Kurt et al. are a pivotal contribution to our understanding of coronary artery disease and atherosclerosis in mice and humans. The integrative genomic study also creates avenues for further research, such as applying the same approach to larger GWAS datasets and incorporating variants that impact the splicing or quantity of protein produced into the analysis. Employing additional mouse models of atherosclerosis and coronary artery disease, and analyzing other relevant tissues, could also help identify additional cross-species similarities and differences. These future studies, together with the work by Kurt et al., will help researchers to determine how well findings in mice relate to human coronary artery disease and atherosclerosis, and whether these results could translate to clinical applications. | 2023-12-08T06:17:09.074Z | 2023-12-07T00:00:00.000 | {
"year": 2023,
"sha1": "7a9cacda9471f5d0fc7f104df5013908eef80220",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "807c418a8a4601517037e7b8d8ad44b0941c0f47",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1730295 | pes2o/s2orc | v3-fos-license | Metabolic approaches to breast cancer treatment and prevention
Standard systemic therapies for breast cancer treatment and prevention are generally directed at reducing cell growth and increasing apoptosis by direct inhibition of DNA synthesis or of signalling pathways that control cell proliferation and cell death. Data are accumulating that calorie restriction (CR) and/or exercise, or agents that target metabolic pathways, may have antitumour effects either alone or in combination with standard therapies for cancer prevention and treatment, and merit further investigation. Here, we outline
The first observation that energy restriction could retard tumour growth was reported nearly 100 years ago [1]. More recently, Dirx and colleagues [2] conducted a meta-analysis of 14 studies of the effect of calorie restriction on spontaneous mammary tumour development in mice and found that the pooled risk difference between mice given restricted access and those given free access to food was 0.55 (95% confidence interval 0.41-0.69), irrespective of the type of restricted nutrient. Whether CR affects human tumour development is not clear. Epidemiological studies indicate lower risk for breast cancer in women who have a period of weight loss as compared with those who maintain or continue to gain weight in mid-life [3,4]. Excess weight at diagnosis and weight gain during breast cancer treatment increase mortality from cancer and co-morbid conditions, whereas diet modification reduces recurrence [5,6].
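The pooled risk difference quoted above is the kind of estimate a fixed-effect, inverse-variance meta-analysis produces. The sketch below shows the calculation on three hypothetical (risk difference, standard error) pairs, not the 14 studies analysed by Dirx and colleagues.

```python
# Fixed-effect, inverse-variance pooling of per-study risk differences.
import math

def pool_risk_differences(studies):  # studies: list of (risk_difference, standard_error)
    weights = [1.0 / se**2 for _, se in studies]
    rd = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return rd, (rd - 1.96 * se, rd + 1.96 * se)  # pooled estimate and 95% CI

rd, ci = pool_risk_differences([(0.60, 0.10), (0.50, 0.12), (0.55, 0.08)])
print(f"pooled RD = {rd:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```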
Although the data outlined above suggest that CR is effective at preventing breast cancer and possibly inhibiting the growth of overt tumours, it is currently unrealistic to expect CR to be implemented on a population basis. This has led to studies on whether intermittent CR is potentially more acceptable [7] and to the investigation of pharmacological agents that might mimic the effect of CR (calorie restriction mimetics [CRMs]) [8]. In order to establish targets for CRMs, it is important to gain an understanding of the cellular and biochemical basis of CR.
Several animal studies have shown that CR inhibits cell proliferation and stimulates apoptosis (for review, see [8]). The experimental data are consistent with cell cycle arrest at the G1/S transition and a shift into G0, with appropriate reductions in cyclin kinases and increases in p21 and p27. Caspase activity assays showed that caspases 9 and 3 are elevated in the mammary tumours of CR rats, as compared with carcinomas from animals given free access to food, which implies that the mitochondrial pathway of apoptosis is activated by CR [9].
Mechanistic studies on the reduction in cell proliferation and the activation of apoptosis are highly important, but they do not indicate how a reduction in calories is translated into these effects on mammary tumour cells. Reduction in the major components of the diet (carbohydrate, lipids and proteins) must affect tumour cell metabolism in some way, with secondary alterations in cell cycle and cell death pathways. It is of interest that Dirx and coworkers [2] reported in their overview that CR was effective whatever component of the diet was restricted. Thus, there may be more than one metabolic pathway that mediates CR.
In general, breast tumour metabolic pathways appear to be different from those of the normal cells from which they arise. Warburg [10], in the 1920s, was the first to show that glycolysis is markedly increased in tumours, and later Weinhouse and colleagues [11] demonstrated that fatty acid synthesis is also markedly increased. Warburg suggested that high glycolytic rates in tumours were related to defects in tumour mitochondria, and considerable support for this hypothesis has recently accumulated [12].
An important advance in our understanding of how cells sense their own energy status was the discovery of AMP-activated protein kinase (AMPK), which is activated when cellular energy levels are reduced (high AMP/ATP ratio) [13]. Activation of AMPK results in a reduction in cell synthetic (anabolic) pathways and an increase in catabolic pathways and oxidative phosphorylation (to generate ATP), with an associated reduction in cell proliferation. Some metabolic pathways in tumours are shown in Figure 1. Increased glycolysis and lipid synthesis may be thought of as oncogenic, and enzymes in the AMPK/tricarboxylic acid (TCA) cycle pathway as tumour suppressors. Indeed, transfection of glycolytic enzymes into normal cells results in immortalization [14], and mutations in components of the LKB1/AMPK pathway result in tumour formation, consistent with their tumour suppressor function (Figure 1).
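As a purely illustrative toy model of this energy sensing, AMPK activity can be sketched as a sigmoidal (Hill-type) function of the AMP/ATP ratio; the parameters below are arbitrary, not measured values, and the only point is that activity rises as cellular energy charge falls.

```python
# Toy Hill-function model of AMPK activation by a rising AMP/ATP ratio.
# K (half-maximal ratio) and n (steepness) are arbitrary illustrative parameters.
def ampk_activity(amp_atp_ratio: float, k: float = 0.05, n: float = 2.0) -> float:
    return amp_atp_ratio**n / (k**n + amp_atp_ratio**n)  # fraction of maximal activity

for ratio in (0.01, 0.05, 0.2):  # well-fed, intermediate, energy-stressed
    print(f"AMP/ATP = {ratio:>4}: activity = {ampk_activity(ratio):.2f}")
```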
The mechanism of action of some known CRMs is consistent with the 'oncogene'/tumour suppressor gene concept outlined above. For example, inhibition of glycolysis in MCF7 cells with 2-deoxyglucose [15], inhibition of fatty acid synthesis with the fatty acid synthase inhibitor C93 [16], and stimulation of AMPK with the antidiabetic drug metformin [17] all inhibit proliferation of human MCF7 cells in vitro. Also, enhancement of TCA cycle activity is associated with inhibition of cell growth. In tumours, entry of pyruvate into the TCA cycle may be reduced either by metabolism to lactate or by inhibition of pyruvate dehydrogenase by pyruvate dehydrogenase kinase (PDK). Inhibition of lactate dehydrogenase activity by small interfering RNA [18], or of PDK by dichloroacetate, stimulates mitochondrial activity [19] and is associated with inhibition of tumour cell growth, as is direct stimulation of TCA cycle activity using cell-permeating α-ketoglutarate [20].
The examples given above indicate that alteration in tumour cell metabolism is a potential approach, either alone or in addition to standard therapies, for the prevention and treatment of breast cancer.
Figure 1. Simplified view of metabolic pathways in tumours. Enhanced pathways are shown as thick lines. Potential inhibitors of glycolysis and lipid synthesis and stimulators of the tricarboxylic acid (TCA) cycle are shown. Calorie restriction would be expected to enhance AMP-activated protein kinase (AMPK) activity, which inhibits lipid synthesis and generally stimulates TCA cycle activity. | 2014-10-01T00:00:00.000Z | 2007-12-20T00:00:00.000 | {
"year": 2007,
"sha1": "9291d0534e509ecda19df793345f7c3e62f8611e",
"oa_license": "CCBY",
"oa_url": "https://breast-cancer-research.biomedcentral.com/track/pdf/10.1186/bcr1825",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "132579fe7b98a380e005f66bcebbc7915e294daa",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267582022 | pes2o/s2orc | v3-fos-license | Sentinel-2 Versus PlanetScope Images for Goldenrod Invasive Plant Species Mapping
A proliferation of invasive species is displacing native species, occupying their habitats and degrading biodiversity. One of these is the invasive goldenrod (Solidago spp.), characterized by aggressive growth that results in habitat disruption as it outcompetes native plants. This invasiveness also leads to altered soil composition through the release of allelopathic chemicals, complicating control efforts and making it challenging to maintain ecological balance in affected areas. The research goal was to develop methods that allow the analysis of changes in heterogeneous habitats with high accuracy and repeatability. For this reason, we used the open source classifiers Support Vector Machine (SVM) and Random Forest (RF) and satellite images from Sentinel-2 (free) and PlanetScope (commercial) to assess their potential in goldenrod classification. Because invasions begin with invasion footholds, created by small patches of invasive and autochthonous plants and different land cover patterns (asphalt, concrete, buildings) forming heterogeneous areas, we based our studies on field-verified polygons, which allowed the selection of randomized pixels for the training and validation of iterative classifications. The results confirmed that the optimal solution is the use of multitemporal Sentinel-2 images and the RF classifier, as this combination gave F1-score accuracies of 0.92–0.95 for polygons dominated by goldenrod and 0.85–0.89 for heterogeneous areas where goldenrod was in the minority (mix class; a smaller share of goldenrod in the canopy than autochthonous plants). The mean decrease in accuracy (MDA) analysis, indicating the informativeness of individual spectral bands, showed that the Sentinel-2 coastal aerosol, NIR, green, SWIR, and red bands were comparably important, while in the case of PlanetScope data, NIR and red were definitely the most important and the remaining bands were less informative; yellow (B5) did not contribute significant information even during the flowering period, when the plant was covered with an intensely yellow perianth, and red-edge, coastal aerosol, and green II were much more important. The maximum RF classification values of Sentinel-2 and PlanetScope images for goldenrod are similar (F1-score > 0.9), but the medians are lower for PlanetScope data, especially with the SVM algorithm.
Introduction
Plants and animals, by occupying new habitats, develop natural population strategies; this process is naturally limited by the availability of suitable habitats and by species interactions. Moreover, natural barriers and ecological corridors are disrupted by human activities, thereby increasing the scale and extent of the spread of invasive species, leading to the displacement of representatives of native fauna and flora and changing the stability of the ecosystem [1]. The extinction of indigenous species and the destruction of biodiversity are so significant that they are regulated by international and domestic law. In 2014, the European Union (EU) issued Regulation No. 1143/2014 (with subsequent updates) on the prevention and management of the introduction and spread of invasive alien species. Goldenrod (Solidago spp.) is not currently on lists of the species most threatening to native plants, but its lush and long flowering period significantly distracts some native pollinators (wasps, bees, bumblebees, butterflies, and beetles) adapted to native flowering plant species, the disappearance of which causes a decline in the populations of the insect representatives concerned. The goldenrods spread large numbers of seeds and reproduce vegetatively; moreover, their allelopathic properties inhibit germination, nutrient uptake, and growth of neighboring plants [2]. The presence of honey-giving goldenrod flowers in devastated areas allows the presence of some insect species, while at the same time their essential oils (EOs) negatively affect plant growth and development [3]; e.g., substances from Solidago canadensis leaves were the most poisonous to flies (Musca domestica) and mosquitoes (Culex quinquefasciatus), while oils from Solidago gigantea leaves were poisonous to Spodoptera littoralis larvae. The above elements mean that goldenrod is considered a moderately invasive species, but it is widely distributed [4], which leads to the systematic transformation of habitats from which indigenous plants, insects, and the rhizosphere are eliminated [5]. Invasions start from small, local areas, usually entering under the canopies of trees and bushes, which makes it difficult to identify emerging individuals, but they regularly occupy new areas. Interest in using remote sensing to identify invasive and expansive species has increased significantly in recent years due to open source tools and data that are characterized by high temporal, spatial, and spectral resolution, allowing the identification of plant species and communities at local and regional scales (without interpolation) [6,7]. This is important for tracking the dynamics of invasion and assessing threats to biodiversity. Most works have used aerial data or high-resolution satellite imagery; Sabat-Tomala et al. [8], based on airborne hyperspectral HySpex images and the Random Forest (RF) and Support Vector Machine (SVM) algorithms, classified three species of expansive and invasive plants (including Solidago spp.) with accuracies of around 0.90 (F1-scores), emphasizing that the results obtained for spring, summer, and autumn images were similar, and that the fusion of all the data allowed goldenrod to be mapped with an F1-score of 0.99. However, it is still a challenge to develop a method for identifying the early stages and new outposts of an invasion, consisting of spectrally heterogeneous objects [8]. This problem was analyzed by Rizaludin Mahmud et al. [9], who identified goldenrod in the heterogeneous urban landscape of Japan. Using hyperspectral signatures, the authors developed the Solidago altissima flower index (SAFI), which, combined with high-resolution WorldView-2 (WV-2) satellite images, allowed the identified spectral characteristics to be applied on a landscape scale. This method was based on the Maximum Likelihood algorithm, scoring an overall accuracy (OA) of 72% [9]. Akandil et al. [10] identified Solidago gigantea using high-resolution multispectral images acquired with a drone. The images were captured during the peak of goldenrod bloom in August, distinguishing homogeneous patches on the basis of visual interpretation. The Maximum Likelihood algorithm classified goldenrod with 95% producer accuracy (PA) and user accuracy (UA). A different approach in terms of the date of image acquisition, falling in spring before the development of goldenrod flowers, but also using AISA aerial hyperspectral images, was proposed by Ishii and Washitani [11]; the General Linear Model algorithm classified the first three MNF bands (after Minimum Noise Fraction transformation), achieving 94% UA and 80% PA. The results obtained for Solidago altissima suggest that hyperspectral data recorded in early spring may be a valuable asset in detecting invasion footholds before other plant species develop in the study area. The same observation was confirmed by Sabat-Tomala et al. [8] based on airborne HySpex images.
Previous work on goldenrod mapping has focused mainly on high-resolution aerial images, e.g., the above-cited UAV [10] or hyperspectral [8] studies, which offer good results but, due to the range of the acquired images, concern only small case studies. Therefore, a much more important element is the verification of medium-resolution satellite images and the early detection of invasion footholds, i.e., the first stages, which will allow the identification of small and heterogeneous patches. Another problem is the compactness of patches of invasive species, which often reproduce through underground roots or produce large numbers of seeds that are dispersed far by wind, allowing them to quickly colonize new areas, e.g., along expressways and railway routes. These areas are difficult to identify because of the heterogeneity of technical infrastructure and plants, and some plants do not dominate the canopy but constitute dangerous sources of invasion, e.g., hiding in bushes, woodlots, under plant canopies, and in other places that mowing machines cannot access.
As part of the European Space Agency (ESA) Copernicus program, the Sentinel-2 mission is invaluable for monitoring. The Sentinel-2A and Sentinel-2B satellites offer a shortened revisit time of up to 5 days (10 days at the equator with one satellite) and a wide imaging swath reaching 290 km using the MultiSpectral Instrument (MSI), offering 13 spectral bands with spatial resolutions of 10 m, 20 m, and 60 m (Table 1) [12]. The images are valuable for a wide range of applications, especially since the data are free, corrected, and well provided via the Copernicus Data Access Service platform [13]. PlanetScope, a project by Planet Labs, has revolutionized the availability of satellite data with its constellation of CubeSats [14]. It provides daily acquisition and high spatial resolution; e.g., the currently operated SuperDove sensor offers a 3.7 m pixel size (32.5 km × 19.6 km scenes), covering 20,000 km² as a maximum image strip per orbit, which enables the capture of up to 200 million km² of Earth's surface every day via the 130-satellite PlanetScope constellation, located in 450-580 km orbits [15,16]. The SuperDove satellites, part of the third generation of the fleet, are equipped with the PSB.SD instrument, which captures eight spectral bands with 16-bit radiometric resolution (Table 1). PlanetScope images are available at various processing levels, with Level 3B offering images after radiometric correction, geometric correction, and orthorectification [16]. Although PlanetScope data are commercial, the company provides access for educational and research purposes through Planet's Education and Research Program [17].
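As a quick sanity check on these coverage figures (an illustrative calculation, not taken from the cited sources):

```python
# With 130 SuperDove satellites and a stated daily capacity of ~200 million km2,
# each satellite must image roughly 1.5 million km2 per day, i.e., the equivalent
# of about 77 maximum-length 20,000 km2 strips.
daily_total_km2 = 200e6
n_satellites = 130
strip_km2 = 20_000

per_satellite = daily_total_km2 / n_satellites
print(f"{per_satellite:,.0f} km2/satellite/day ~ {per_satellite / strip_km2:.0f} strips")
```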
Both systems, Sentinel-2 and PlanetScope, have unique features and applications. Sentinel-2 is irreplaceable in projects requiring a broad spectral range (especially NIR and SWIR), frequent revisit times, and multiscale environmental monitoring. Conversely, PlanetScope provides frequent data updates with high spatial resolution, which is key for quick responses to changes, where data recency and frequency are a priority.
The identification of other expansive and invasive plant species in Sentinel-2 data is described in [18,19]. Sentinel-2 images offer high temporal resolution, allowing the capture of the different stages of phenological development of invasive plants in relation to native species, which is one of the key elements of the strategic advantage of alien species over native plants. Sentinel-2 thus fills the time gaps caused by the lack of airborne hyperspectral data [20], providing valuable data for monitoring at regional and continental scales and significantly reducing costs [21].
In our studies, we focus on the RF and SVM classifiers because they allow the analysis of a significant amount of input data, which is important in the identification of plant species that change during the growing season. Additionally, the process of optimizing classification parameters is relatively simple and fast, and the results achieve high accuracy. RF, especially, offers high efficiency in identifying changing vegetation and requires a relatively small number of training patterns (300-700 pixels). In the literature there is a belief that deep learning methods offer better classification results, so we would like to point out that many studies do not confirm this opinion, especially in the case of heterogeneous vegetation. In the case of deep learning networks, if the amount of input data is large, the optimization and classification time is significantly extended compared to machine learning algorithms [22][23][24]. Due to the independent, automatic selection of the training set in neural networks, there is a risk of over-fitting the model, as well as of selecting irrelevant training samples [23,25]. Often, deep learning requires much more data to train the network than other artificial intelligence algorithms [25,26]. Subsequently, the selection of matching spectral bands is a challenge in preparing a well-functioning model, because the network is most often trained on a limited number of bands, and expanding it to a larger number is extremely time-consuming and requires increased memory for relearning, problems that traditional machine learning algorithms do not have to deal with [27]. The authors of [23,28] pointed to the lower performance of neural networks compared with algorithms such as SVM or RF in the case of noisy data; heterogeneous classes are also a problem in deep learning, as the boundaries of individual classes are distorted and an inability to distinguish different pixels within a class treated as a large patch becomes apparent [24,29]. Lake et al. [30] identified leafy spurge (or wolf's milk; Euphorbia virgata, which is also an invasive yellow-flowering plant in the US) based on a deep learning network (CNN) and high-resolution images from WorldView-2 (eight bands) and PlanetScope (four spectral bands, 3 m spatial resolution). The obtained overall accuracy was 96.3% based on CNN + Long Short-Term Memory (LSTM), while for the variant considering only CNN, the accuracy of PlanetScope data classification was 89.9% and, for WorldView-2, it was 96.1%.
The aim of this paper is to assess the usefulness of widely available Sentinel-2 data and high-resolution commercial PlanetScope multispectral data, as well as the Random Forest (RF) and Support Vector Machine (SVM) algorithms, for the identification of goldenrod (Solidago spp.). Previous methods of monitoring goldenrod were based on traditional botanical research, which carries high costs and is time-consuming; hence, in many countries, orthophotomaps, airborne imaging, or high-resolution satellite images are used to identify goldenrod at local scales. Due to the nature of the plants investigated, which form heterogeneous polygons with native species during the first stages of an invasion, the research challenge was to assess the accuracy of the classified patches. Various scenarios were tested, including the random selection of different numbers of pixels (in 100 repetitions of iterative classification), using entire polygons with a dominant share of goldenrod, but also polygons where goldenrod was beginning to appear, creating different mixed patterns.
Materials and Methods
Goldenrod mapping was based on multitemporal Sentinel-2 and PlanetScope satellite images, which were acquired on the same dates, representing the flowering period and early and late autumn (faded inflorescences are big, dry, and durable, and reflect a significant amount of radiation), allowing goldenrod to be distinguished from surrounding plants. As training and verification patterns, field-verified polygons were used; two types of classes were created: (a) goldenrod, which represented homogeneous core areas of goldenrod, and (b) the mix class, in which goldenrod is present but mixed with a significant dominance (much above 50%) of bushes, single trees, or other grassland species (Figure 1), to capture the initial stages of goldenrod invasion. Based on the goldenrod occurrence polygons, randomized training and verification pixels allowed the classification, in 100 subsequent iterations, of the spatial distribution of the plant; this was intended to capture the spectral variability of individual stands representing different stages of invasion. Random Forest and Support Vector Machine classifiers were used as open source algorithms in the R programming language (Figure 2).
Figure 1. Goldenrod is well identified in the field, because tall and intensely yellow flowering plants form compact patches, displacing other plants from the habitat being taken over; in the foreground on the right, there is an expansive native species, wood small-reed (bushgrass; Calamagrostis epigejos) (upper photo). Goldenrod also does very well in the immediate neighborhood of shrubs and individual trees, first eliminating other grasses and perennials and then spreading its own seeds to neighboring areas. The seeds' light weight and adaptations to anemochory allow them to colonize relatively remote areas (lower photo). Photo credit: Bogdan Zagajewski.
Research Target and Area
The giant (tall) goldenrod (Solidago gigantea Aiton) and the Canadian goldenrod (Solidago canadensis L.) belong to the Asteraceae family (formerly Compositae), which is one of the biggest groups among plants, with about 24,000-32,000 species. Both species are perennials reaching considerable sizes (giant goldenrod reaches up to 250 cm in height, while Canadian goldenrod reaches up to 150 cm). Their yellow inflorescences bloom from July to October; however, the period of the most intense bloom falls between the second half of August and the end of September (Figure 3). It is worth noting that the allelopathic properties of chemical substances secreted by goldenrod inhibit the development of other plant species [2], allowing it to gain a competitive advantage and facilitate new invasions [31]. In Europe, these species grow in both natural and anthropogenic environments, commonly occurring on the edges of forests, in roadside belts, and on meadow wastelands, creating characteristic compact, single-species patches (Figure 2). Both Solidago species show similar adaptations enabling the rapid expansion of their range, thanks to which they currently occupy almost all of Europe [3]. Nagy et al. [32] confirmed that mowing, grazing, and periodic flooding had negative consequences for the development of S. gigantea, but short-term mowing does not improve the diversity of the Solidago canopy, while long-term agricultural treatments improved the diversity of natural plants.
The research area is a fragment of the Silesian-Cracow Upland (near Częstochowa; Figure 4), where there are large areas of wasteland and industrial areas, in which a dynamic increase in invasive and expansive species is observed. Field studies were conducted in early September (3-4 September 2021) to capture the period when goldenrod was in full bloom, which allowed the identification of the full range of the species occurrence, especially in heterogeneous patches, in this area (Figure 4), and the identification of large patches of (a) homogeneous goldenrod and (b) heterogeneous polygons with the constant presence of various species. We were aware that such heterogeneous classes would be difficult to identify and might reduce the scored classification results, but the presence of even single specimens of goldenrod causes rapid colonization. For this reason, the mix class is a valuable indicator of the direction of changes. Goldenrod occurs mainly on post-agricultural and post-industrial wastelands, where the high dynamics of changes resulting from secondary succession lead to difficulties in identifying sufficiently large and numerous homogeneous reference targets, so we focused on field-verified goldenrod patterns, identifying homogeneous and heterogeneous polygons, which makes it possible to distinguish goldenrod from other objects of a heterogeneous habitat [9].
Input Data
The reference material for classification consisted of polygons identified during field reconnaissance of the study area. Efforts were made to ensure that the size of the patterns exceeded an area of 20 × 20 m = 400 m² to fit a 10 m pixel of the Sentinel-2 image; the same patterns were used to classify PlanetScope images, whose pixel size is smaller (3 × 3 m); hence, the number of used pixels for the Sentinel-2 and PlanetScope classifications is different (Table 2). The location of the polygons was confirmed using Locus Map ver. 4.3.3 software (Asamm Software, Prague, Czech Republic), which allowed the documentation of the patterns with two geotagged photos, e.g., to verify post-classification maps. As background classes, forests, agricultural crops, built-up areas, and surface water were added (Table 2), but these classes had no effect on the presented accuracies.
From all the obtained patterns (Table 2), an equal number of randomly selected and balanced pixels (50-700 for Sentinel-2 data, and 50-1000 pixels for PlanetScope) of all classes was applied; a group of 50 training pixels was selected from the whole set (Table 2) for the first classification, and then verification was carried out on independent pixels (verified in the field, but not used for training). This procedure was repeated 100 times in order to select various patterns while maintaining comparability of samples (an identical research procedure was carried out for all objects and images, repeating it for sets of 100, 200, ..., 1000 pixels); a sketch of this sampling loop is given below. The results indicated that an optimal training set should consist of 500-700 pixels (increasing the number of training pixels further does not increase accuracy, but the classification time increases significantly). Two sets of data were prepared for these two scenarios: (a) the goldenrod patterns (with a significant majority of the species; almost homogeneous goldenrod polygons), and (b) the same goldenrod patterns, but with an additional mix class, represented by a mix of goldenrod with a dominance of other objects, e.g., bushes and single trees. Such differentiation was intended to optimize the field verification process.
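For illustration, the balanced sampling loop described above could look like the following minimal sketch (our reconstruction, not the authors' code; the pixel arrays and class names are hypothetical stand-ins for spectra extracted from the field-verified polygons):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for pixel spectra extracted from field polygons;
# in the real workflow these would come from the satellite rasters.
pixels_by_class = {
    "goldenrod": rng.normal(0.30, 0.05, size=(5000, 36)),  # 36-band S-2 cube
    "mix":       rng.normal(0.25, 0.05, size=(5000, 36)),
    "forest":    rng.normal(0.20, 0.05, size=(5000, 36)),
}

def sample_balanced(pixels_by_class, n_per_class, rng):
    """Draw an equal number of random pixels from every class."""
    X, y = [], []
    for label, pixels in pixels_by_class.items():
        idx = rng.choice(len(pixels), size=n_per_class, replace=False)
        X.append(pixels[idx])
        y.extend([label] * n_per_class)
    return np.vstack(X), np.array(y)

# Repeat the draw 100 times for each tested training-set size,
# mirroring the iterative procedure described in the text.
for n_train in (50, 100, 200, 500, 700):
    for _ in range(100):
        X_train, y_train = sample_balanced(pixels_by_class, n_train, rng)
        # ... train both classifiers and validate on independent pixels ...
```

The point of the repetition is that each of the 100 draws selects different pixels, so the spread of the resulting accuracies (rather than a single value) characterizes how stable the classification is for a given training-set size.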
Goldenrod classification was carried out on the open-access satellite Sentinel-2 images representing surface reflectance (level-2A), which were obtained from ESA's Copernicus Open Access Hub [33]. Commercial satellite PlanetScope data were obtained from Planet Explorer [34] based on the Planet Science Education and Research (E&R) Program. We acquired 3 images from each sensor from almost identical periods (Sentinel-2: 9 September 2021, 29 October 2021, 3 November 2021; PlanetScope: 11 September 2021, 30 October 2021, 3 November 2021) to capture the spectral diversity of goldenrod (the early September scenes represent goldenrod covered with intense yellow perianth, while the November scenes present faded and dry inflorescences that reflect a large amount of radiation, which allowed the distinction of this species from other plants, already frozen). The images were acquired from the third generation of Super Dove satellites with the installed PSB.SD instrument: 8 spectral bands (6 of which have a spectrum interoperable with Sentinel-2 bands), 16-bit radiometric resolution, and 3 m pixel size. We used the orthorectified OrthoScene product (level 3B), which was geometrically corrected and radiometrically calibrated [34]. We deliberately did not change the pixel size of the PlanetScope images, e.g., by resampling it to 9 m to be more comparable with the 10 m Sentinel-2 pixels, because the higher spatial resolution allowed a more accurate identification of goldenrod patches (which are very heterogeneous in the initial stages of the invasion). On the other hand, the advantage of higher spatial resolution compensated for the lower spectral resolution, in particular the absence of SWIR channels and the presence of only one NIR band (Table 1). The second important question was whether it makes sense to resample the 60 m Sentinel-2 bands (B1, B9) to 10 m; the effects of using them are positive, because B1 and B9 showed high informativeness [35]. This may be due to the fact that the absorption maxima of the photosynthetically active radiation of the two most important chlorophylls are 430 and 662 nm for chlorophyll a and 453 and 642 nm for chlorophyll b, while the central wavelength of B1 is 442.7 nm (Sentinel-2A) and 442.2 nm (Sentinel-2B), with a bandwidth for both sensors of 21 nm, which covers key ranges for plants. B9, on the other hand, covers the range sensitive to the turgor of plant cells and tissues, i.e., the state of hydration, including the tension of the cell wall resulting from the hydrostatic pressure inside the cell.
Spring and early summer scenes offered worse informativeness, and early flowering meadows or low crops with soil clearances did not contribute significantly more important spectral ranges than the autumn images, in which overblown and dry inflorescences reflected a significant amount of the characteristic spectrum. Zdunek [36] showed that the most informative Sentinel-2 image for Solidago identification in 2021 was the scene acquired at the beginning of November 2021. This may be due to the fact that the dry inflorescences reflect the species' characteristic electromagnetic spectrum, allowing the identification of this plant species against other plants, which are in the final stage of vegetative development and are often already frozen by the first autumn frosts. The images were verified using the QA band for the detection of any potential distortions, and then the twelve-band Sentinel-2 data were resampled to a 10 m pixel size using the nearest neighbor method (using ESA SNAP 9.0 software). Then, the single scenes were stacked into a 36-band Sentinel-2 data cube and a 24-band PlanetScope multitemporal composition using GDAL [37].
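As an illustration, the stacking step can be reproduced with the standard GDAL Python bindings (a sketch under our assumptions; the file names are hypothetical, and the authors only state that GDAL was used):

```python
from osgeo import gdal

# Hypothetical file names for the three resampled 12-band Sentinel-2 scenes.
scenes = ["S2_20210909.tif", "S2_20211029.tif", "S2_20211103.tif"]

# separate=True places input bands into separate output bands; with recent
# GDAL versions all bands of each multiband input are kept, yielding a
# 36-band cube for three 12-band scenes (older GDAL releases used only
# band 1, in which case single-band inputs should be supplied instead).
vrt = gdal.BuildVRT("cube_2021.vrt", scenes, separate=True)
gdal.Translate("S2_multitemporal_36band.tif", vrt)
vrt = None  # close the dataset and flush it to disk
```

The same pattern applied to the three 8-band PlanetScope scenes would produce the 24-band composition mentioned above.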
Classification Procedure
Pixel values were extracted from the satellite input data. An iterative accuracy assessment was then carried out, where the set was randomly split into training and validation parts in a 50:50 ratio (split polygon-wise to ensure as much objectivity as possible). Random Forest and Support Vector Machine classifiers were trained simultaneously on the same data sets. The procedure was carried out 100 times. On the best data set (the one which achieved the highest mean F1-score for all classes), a variable importance analysis of the individual channels and dates was also carried out for Random Forest to measure their significance. The training pixels were also randomly selected (50, 100, 150, 200, 300, 500, 700, 1000 pixels) in order to maintain balanced data sets and not favor any one class during the learning process of the algorithm. In the second scenario, polygons were randomly selected to cover about 700 pixels for training. Hyperparameter tuning was also carried out, yielding ntree = 500 and mtry = 5 for Random Forest, and cost = 100 and gamma = 0.01 for the Support Vector Machine with a radial kernel; a scikit-learn translation of this setup is sketched below.
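The hyperparameters above follow R-style naming (ntree/mtry, cost/gamma); a roughly equivalent scikit-learn sketch, mapping them onto sklearn's parameter names (our translation, not the original code; X_train and y_train are as drawn by the sampling sketch earlier):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# ntree = 500 -> n_estimators; mtry = 5 -> max_features
# (the number of variables tried at each split).
rf = RandomForestClassifier(n_estimators=500, max_features=5,
                            n_jobs=-1, random_state=0)

# Radial (RBF) kernel with cost = 100 and gamma = 0.01, as reported above.
svm = SVC(kernel="rbf", C=100.0, gamma=0.01)

# Train both classifiers on the same balanced sample, as in the text.
rf.fit(X_train, y_train)
svm.fit(X_train, y_train)
```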
The following measures were used to assess accuracy: the producer accuracy, user accuracy, and F1-score [38]. In order to assess the impact of the individual channels and dates of image acquisition, a variable importance analysis based on the Random Forest classifier's mean decrease in accuracy (MDA) measure was used [22,39].
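For the per-class measures, user accuracy corresponds to precision, producer accuracy to recall, and the F1-score is their harmonic mean, F1 = 2·UA·PA / (UA + PA). A small sketch of the validation step (our code, with X_val, y_val, and the class list assumed from the polygon-wise split; permutation importance is used here as a stand-in for Random Forest MDA, which scikit-learn does not expose directly):

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support
from sklearn.inspection import permutation_importance

classes = ["goldenrod", "mix", "forest"]  # hypothetical class list

# Validate on the independent, field-verified pixels.
y_pred = rf.predict(X_val)
ua, pa, f1, _ = precision_recall_fscore_support(y_val, y_pred, labels=classes)
# ua = user accuracy (precision), pa = producer accuracy (recall), per class.

# MDA-style band ranking: permute one feature at a time and record the
# resulting drop in accuracy (an approximation of the Random Forest
# mean-decrease-in-accuracy measure).
imp = permutation_importance(rf, X_val, y_val, n_repeats=10, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]  # most informative first
```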
Results
The classification results confirmed that goldenrod creates spectrally characteristic patterns, which, regardless of the algorithm used, are correctly identified on both the Sentinel-2 and PlanetScope images; the highest F1-score accuracies of the individual iterations oscillated around 0.92-0.95 (Tables 3 and 4). Nevertheless, spatial differentiation of the share of individual objects (goldenrod, grasses, herbaceous plants, bushes, single trees) generated varied signals for different patches, and this was reflected in the results of individual iterations of different scenarios (Figure 5). The Random Forest classifier produced better results than SVM, in terms of both the number of classification iterations obtaining F1-score values above 0.90 and the median and mean values for all 100 iterations (Tables 3 and 4, Figure 5), and the increase in the number of training pixels reduced the worst classification results, allowing an increase in the median and mean F1-score values of the Sentinel-2 data (Figure 6). SVM in individual iterations obtained high classification values, but it was surprising that, in the case of the PlanetScope images, the median of 100 iterations reached an F1-score value of only 0.43 (Figure 7, Table 4).
The median value for patterns consisting of at least 400 training pixels indicated an optimal range of around 400-700 pixels for the recognition of goldenrod (Tables 3 and 4, Figures 6 and 7). In the case of the PlanetScope data, there is clearly no improvement in the lowest classification values (Figure 7). These low classification values occur repeatedly regardless of the number of pixels used in the training patterns, and this phenomenon was not observed in the case of the Sentinel-2 data (Table 3); this is due to the small pixel size of the PlanetScope images, which allows the identification of small areas in the initial stages of invasion that do not contain goldenrod (Table 4).
Analyzing the impact of the number of training pixels on the obtained goldenrod classification results (Figure 6), it is clearly visible that, regardless of the classifier (RF or SVM), the maximum classification accuracy of polygons dominated by goldenrod (marked as yellow and orange in Figure 6) reaches similar values, which means that the field-verified training and verification patches are sufficiently homogeneous to identify goldenrod even with 50 to 700 pixels of the Sentinel-2 data (Table 3) and 50-1000 pixels of PlanetScope (Table 4). Nevertheless, apart from the more homogeneous surfaces (identified as goldenrod), there are also heterogeneous patches (marked as the "mix" class; covered by other elements of land cover, where goldenrod occupies a minority of the area), which are an important indicator of invasion directions. To identify such polygons, the number of training pixels should reach 500-700. In the case of the mix class, the median F1-score exceeds 0.70, with SVM producing significantly better results (by about 10 percentage points) than RF, but increasing the number of training pixels significantly improves the lowest classification accuracy (Figures 6 and 7). This allowed the identification of the mix class in the best iterations with an accuracy of almost 0.90 for the SVM classifier, while RF achieved its best results at the level of 0.85 (F1-score). For this reason, the results presented below come from the RF classifier and are based on classifications obtained from patterns consisting of 700 training pixels, randomly selected in 100 subsequent iterations.
Sentinel-2
It is worth noting that the Sentinel-2 images, which contain several NIR and SWIR bands, offer much higher accuracy (by around 20 percentage points) than the PlanetScope data (Figure 5). This difference may be due to the fact that PlanetScope has only one NIR band (B8), which turned out to be crucial for both sensors (the most informative band in the case of PlanetScope, while in the case of Sentinel-2 the band B9 ranked second; Table 5); additionally, the SWIR band (B12), which is not present in the PlanetScope scanner, as well as the coastal aerosol, green, red, and red-edge bands, were crucial. Given that for both Sentinel-2 and PlanetScope the optimal period for the identification of goldenrod was the period of intense yellow flowering, and that the PlanetScope scanner offers a yellow band (B5), the MDA analysis confirmed the usefulness of B5 in summer during goldenrod flowering, while the late autumn images were much less informative (Table 5).
Table 5. Mean decrease in accuracy (MDA) presenting the impact of individual image acquisition periods and spectral bands on the result of goldenrod identification. Explanation of the colors used in the table: MDA value < 10, the cell is marked yellow; 10-20, neutral ecru; 20-30, light green; > 30, green.
Based on the Sentinel-2 images and the Random Forest classifier, 898.54 km² of mixed patterns (initial stages of goldenrod infestation, where goldenrod remains in the minority) were identified, which is 7.45% of the Sentinel-2 image area, while the area where goldenrod dominates other plants is 90.84 km² (0.75%; Figures 8-10).
This area was quite well verified during field research in 2021 and 2022, confirming, on the one hand, a large and fast expansion of goldenrod, while, on the other hand, many preventive activities were observed, e.g., the mowing and plowing of fields where goldenrod appeared. These areas are mainly located in the eastern part of the research area, in the vicinity of big cities (Częstochowa, Tarnowskie Góry, and the northern part of the Silesian agglomeration), where part of the area is intended for the development of new technical infrastructure, e.g., logistics centers and commercial, service, and production facilities; this is due to the optimal location in relation to motorways. The second reason is the presence of many post-industrial and post-mining residues, as well as post-agricultural areas, which are spontaneously taken over through secondary succession.
Approximation to selected areas is confirmed by the common occurrence of goldenrod, and the difference in detection is mainly due to the resolution of the images; on the one hand, the three-meter PlanetScope images are characterized by greater spatial accuracy, but they lack the NIR and SWIR ranges, which turn out to be valuable, and PlanetScope's yellow band (B5) was relatively important only in the flowering phase (Table 5).
Based on tens of thousands of pixels verified in the field (Table 2), the classification results confirm the high usefulness of the proposed methodology, because the three highest results of the 100 individual classification iterations of both RF and SVM for the Sentinel-2 and PlanetScope data, regardless of whether these classifications were based on 50-700 (Sentinel-2) or 50-1000 (PlanetScope) training pixels, obtained F1-scores ranging from 0.90 to 0.95 (Tables 3 and 4). However, differences were observed for the median results, where, for 700 pixels, the highest classification value was obtained for RF and Sentinel-2 (F1-score: 0.87), followed by SVM (Sentinel-2): 0.81, RF (PlanetScope): 0.64, and SVM (PlanetScope): 0.42 (Tables 3 and 4). The obtained error matrices (Table 6) confirm the distinguishability of the individual spectral features of the analyzed objects, including goldenrod and the heterogeneous patches created by it during invasion. Slight mixing of classification results is beyond doubt, as goldenrod encroaches on all surrounding surfaces, including the immediate vicinity of houses, yards, and gardens.
Discussion
Both Sentinel-2 and PlanetScope images produced very high accuracies of goldenrod classification; in each case, homogeneous patches of goldenrod produced the highest RF and SVM classification results (> 0.9 measured using the F1-score). Looking in detail at RF and the scenarios involving training patterns consisting of 400 pixels, F1-scores > 0.9 were produced 22 times (out of 100 classification iterations); for patterns consisting of 500 pixels it was 31 times, for 600 pixels 28 times, and for 700 pixels 35 times. For the SVM algorithm, it was 10 times for patterns consisting of 400 pixels, 6 times for 500 pixels, 13 for 600, and 13 for 700; regarding the median values, for RF and a pattern consisting of 700 pixels the F1-score was 0.87, and for SVM it was 0.81. In the case of PlanetScope, the regularities are similar, i.e., RF turns out to be better, because for the classification results based on 400, 500, 600, 700, and 1000 training pixels, an F1-score > 0.9 was obtained 16, 16, 16, and 20 times, respectively, while for the SVM algorithm it was 8, 8, 9, 12, and 7 times. What is noteworthy is the maximum F1-score value, which is similar to the Sentinel-2 results, while the medians are significantly lower, because for RF and classification based on 700 pixels it was 0.64 (F1-score), and for SVM it was 0.42. Therefore, the analysis of the influence of the number of pixels on the classification result allowed the definition of the required number of patterns, which are obligatory for field verification.
Attention should be drawn to the reasons why PlanetScope, although characterized by nine times higher spatial accuracy than the Sentinel-2 images (it is worth remembering that some of the bands were resampled from their original resolutions of 20 m and 60 m to 10 m), nevertheless achieves a lower median classification accuracy. The mean decrease in accuracy (MDA) confirmed that the most informative spectral range was the NIR (for PlanetScope it ranked first as the most informative band, and for Sentinel-2 it ranked second); however, comparing the different periods of image acquisition (reflecting different vegetative stages of goldenrod), the most useful Sentinel-2 bands were the green, red-edge, and NIR (B9) bands, while the corresponding PlanetScope bands did not show significant input. Dao et al. [40], analyzing invasive grassland species, confirmed that the statistically important spectral ranges were especially the blue (below 500 nm), red-edge (680-730 nm), and near-infrared (NIR, beyond 730 nm). This was not confirmed by Mouta et al. [41], who identified the invasive species Acacia longifolia using Sentinel-2 images; their ecological models demonstrated the usefulness of the blue (458-523 nm) and green (543-578 nm) channels, in which the yellow flowers of Acacia, as well as other yellow plants, can be better distinguished due to the high presence of carotenoids and other pigments. This observation was not confirmed in our research, because the yellow band of PlanetScope (B5) obtained an MDA value of 11.39 during flowering, which was below the average (17.51; Table 5); in the remaining periods, the usefulness of the yellow band was even lower, taking the last, eighth place in the ranking of all spectral channels offered by the PlanetScope detector. Gholizadeh et al. [42] used a partial least squares linear discriminant analysis (PLS-LDA) [43] model to distinguish Lespedeza cuneata ('sericea') from native species based on airborne imaging spectroscopy and vegetation functional traits, with an overall accuracy of about 94%. Their results proved that the specific phenology and structure of invasive plants are important factors for their identification; for sericea, this was the distribution of photosynthetic and photoprotective pigments such as chlorophyll a + b and carotenoids, and for this reason they indicated that bands from the PAR region are the most important. In addition, they highlighted the usefulness of the red-edge, NIR, and SWIR to distinguish other invasive species, which also confirms our assumptions. Similar observations on eight species of natural high-mountain grasslands of the Tatra Mountains were made by Kycko et al. [44], who, based on biophysical variables and spectral features obtained with an ASD FieldSpec 4 spectrometer, showed statistically significant differences in different ranges of the electromagnetic spectrum between individual plant species exposed to trampling and the same species located away from tourist trails. This was also confirmed for High-Arctic plant species [45].
An interesting element was the comparison of the Random Forest and Support Vector Machine classifiers. RF is recommended due to the speed of the classification process, a simpler procedure for optimizing the classification, and better classification results in the case of fewer training samples. On the other hand, SVM offered higher classification results, but the classification procedure was longer and required more training pixels. As other research has shown [46], the use of neural network algorithms (including deep learning) offered comparable results, but on a smaller amount of input data; however, in the identification of species, it was more important to use multitemporal data to obtain the spectra in different periods of phenological development, which allows a more accurate identification of the studied species than even more advanced algorithms based on less spectral data.
In a small part of our research area, Sabat-Tomala et al. [8], using airborne HySpex hyperspectral images (430 spectral bands with a resolution of one meter) acquired in three periods of the 2016 growing season (the beginning of flowering in June, the flowering optimum at the end of August, and the beginning of autumn at the end of September), and based on SVM and RF classifiers, identified goldenrod and three expansive plant species with slightly better accuracies (five percentage points higher). In our case, the best iterations of the multitemporal Sentinel-2 offered F1-scores of 0.92-0.95, while the median F1-score was 0.88; in the three-meter PlanetScope images (3 × 8 spectral bands), the best results achieved similar values (0.92), while the median was about 20 percentage points lower, thus indicating less stability in the performance of the classification algorithms for PlanetScope than for Sentinel-2, as also shown by Jensen et al. [29]. Smerdu et al. [47] used a single aerial orthophoto (RGB + NIR bands), Sentinel-2 multitemporal compositions, and an SVM classifier in their attempt to identify the invasive Japanese knotweed in Ljubljana's heterogeneous urban landscape; the results for multitemporal composite scenes were seven percentage points higher than for single scenes (90% for Sentinel-2; 83% for aerial images). Thus, multitemporal satellite data offered similar results to single-scene airborne hyperspectral images, and were a few percentage points higher than few-spectral-band airborne orthophotomaps using the same classifiers (RF, SVM) [48,49]; this was also confirmed by Kluczek et al. [35]. Similar results were achieved for leafy spurge (Euphorbia virgata), which is also an invasive yellow-flowering plant in the US, with an overall accuracy of 96.3% (based on a CNN + Long Short-Term Memory), while for the variant considering only a CNN, the accuracy of PlanetScope data classification was 89.9%, and for WorldView-2 it was 96.1% [30]. These results are identical to our all-goldenrod outcomes from the multitemporal Sentinel-2 and PlanetScope data based on the Random Forest classifier as well as SVM (but only for sets of 700 training pixels; Figures 6 and 7), and higher than those achieved for another goldenrod species, Solidago altissima, on high-resolution WorldView-2 images (overall accuracy of 72%) [9]. This result may suggest that the number of spectral bands is important, as WorldView-2 also has eight spectral bands, the same as PlanetScope, and thus achieves lower results than Sentinel-2, which additionally offers SWIR bands such as B11 (1610 nm) and B12 (2190 nm). Wang et al. [50] concentrated on the classification of invasive Pedicularis spp. using multitemporal Sentinel-2 data, reporting F1-scores ranging from 0.92 to 0.97, while PlanetScope achieved F1-scores in the range of 0.71 to 0.82, which is an observation similar to ours, in that PlanetScope scores several percentage points lower than Sentinel-2. The researchers also stressed that the number of features is more important than the spatial resolution. Further attempts to compare Sentinel-2 data with PlanetScope were made by Marzialetti et al. [51], who calibrated Random Forest models for both satellite platforms, showcasing high predictive performances with R² > 0.6 and RMSE < 0.008. Although Sentinel-2 exhibited slightly superior performance, the PlanetScope-based model effectively delineated invaded areas.
In the case of extensive comparisons of Sentinel-2 and PlanetScope data, Kluczek et al. [35] investigated the effectiveness of both satellite image types (and additionally airborne HySpex hyperspectral images) for the classification of montane woody species; the multitemporal Sentinel-2 data cube, comprising 21 scenes, delivered results comparable with HySpex (F1-scores of 0.93 for RF and 0.89 for SVM); in contrast, the three high-resolution PlanetScope images yielded less precise results, with F1-scores of 0.89 for RF and 0.87 for SVM. Very important in this case was the use of multitemporal data, which allowed the identification of plant species' spectral properties along the vegetation period. The observation was confirmed by Zagajewski et al. [39], who, using multitemporal Sentinel-2 data for woody species, achieved the best overall accuracies for SVM (86%; median), RF (84%), and ANN (84%), while for Landsat data the best results were obtained for the RF classifier (84%), and the worst for ANN (76%).
The date of acquisition of the imagery is also important, as shown by Rakotoarivony et al. [52], who focused on Lespedeza cuneata and identified the optimal period for detection to be the mid-to-late growing season, achieving overall classification accuracies of 81% (mid-August) and 82% (end of September) using PlanetScope data. The observation was also confirmed by Han et al. [53] and Sabat-Tomala et al. [8] in the case of goldenrod, bushgrasses, and blackberries.
Nevertheless, the key element is to provide appropriate training patterns reflecting the heterogeneity of the patches the species form in the studied ecosystems; taking into account the phase of vegetation development, these patterns allow the determination of the spectral characteristics of the tested plants, which in turn allows the trained algorithm to assign individual pixels of the image to the required class. Sabat-Tomala et al. [8] and Rizaludin Mahmud et al. [9] have highlighted the need for field identification, but the achieved results allowed the identification of goldenrod with high accuracy, and high-resolution aerial data allow constant monitoring of the directions of goldenrod invasion. In the case of Sentinel-2 data, due to the 10 m pixel, a good solution is to use the mix class, which allows the mapping of surfaces with goldenrod in its initial stages.
Conclusions
Our research confirmed that the remote sensing identification of goldenrod is possible using both Sentinel-2 and PlanetScope data. Sentinel-2 images offer lower spatial resolution but higher spectral resolution, including in the NIR and SWIR ranges, which produced better research results. Although goldenrod forms compact surfaces covered with yellow perianth, and the PlanetScope sensor offers a yellow spectral range, this does not play a key role in the recognition of goldenrod, although the optimal period for identifying this plant is its flowering period. We have identified the following observations:
• the recommended classification method for goldenrod is the use of two classes: (a) homogeneous goldenrod (canopy almost homogeneously covered by goldenrod) and (b) a mix class (confirming the presence of goldenrod, but with other plants dominating). The classification results may be lower by a few percentage points, but they more accurately reflect the cores of the invasion and also indicate the beginnings and directions of the invasion (mixes are visible around the homogeneous patches of goldenrod).
• multitemporal Sentinel-2 images offer more information than high-resolution PlanetScope images. In the case of compact and dense patches covered with goldenrod, the differences in accuracy (median F1-score) are comparable, but in the case of new invasions with a heterogeneous canopy, the classification scores differ by up to 50% in favor of the Sentinel-2 images.
• the RF algorithm offers better identification results for pure goldenrod patches than SVM for both Sentinel-2 and PlanetScope images, but SVM offers higher median results for heterogeneous patches of goldenrod and native plants in the early stages of the invasion.
• the RF algorithm obtains the best identification of goldenrod from the Sentinel-2 images, with F1-score results exceeding 0.9 in every scenario involving any number of homogeneous goldenrod training pixels.
• the best period to identify goldenrod is the time of flowering, and the highest MDA values were obtained using the September Sentinel-2 and PlanetScope scenes, but the autumn images are not much worse. This may be due to the large, dry, faded inflorescence panicles, which reflect a large amount of the spectrum characteristic of this species.
Figure 1. Goldenrod is well identified in the field, because tall and intensely yellow flowering plants form compact patches, displacing other plants from the habitat being taken over; in the foreground on the right, there is an expansive native species, wood small-reed (bushgrass; Calamagrostis epigejos; upper photo). Goldenrod also does very well in the immediate neighborhood of shrubs and individual trees, first eliminating other grasses and perennials and then spreading its own seeds to neighboring areas. Their light weight and adaptations to anemochory allow the seeds to colonize relatively remote areas (lower photo). Photo credit: Bogdan Zagajewski.
Figure 2. Research method based on multitemporal Sentinel-2 and PlanetScope images; the training and verification patterns were randomly selected pixels from field-verified goldenrod occurrence polygons; 100 iterations of Random Forest and Support Vector Machine classifications were carried out, allowing the selection of optimal sets of patterns that were used for the final classifications.
Figure 3. A close look at goldenrod in different vegetation stages: at the peak of the flowering period (left photo) and after the flowering period (right photo). Photo credit: Karolina Barbara Zdunek.
Figure 4. Research area. Field research was conducted in the southeastern quadrant of the Sentinel-2 scene (bordered by a green line), which overlapped with the PlanetScope data (white line; the same polygons were used to classify both multitemporal satellite images). Due to the size of the polygons and the scale of the Sentinel-2 image, the research polygons are presented in the form of a yellow signature, not reflecting the real size of the patterns acquired in the field. Poland's border is outlined with a red line in the left figure.
Figure 5. Median value of the F1-scores as a measure of the classification accuracy of goldenrod according to the scenarios tested: RF, Random Forest (yellow box); SVM, Support Vector Machine (orange box); S-2, Sentinel-2 image classification results; PS, PlanetScope (green box); 50-1000, number of training pixels used for classification. SVM results for PlanetScope data achieved F1-score values below 0.55.
Figure 6. The impact of the number of pixels used in training patterns on the accuracy of post-classification maps based on Sentinel-2 multitemporal images. Explanations: max/min, maximum/minimum value of the F1-score during 100 iterations of the classification process; Q1/Q3, first/third quartile; IQR, interquartile range; RF, Random Forest classification; SVM, Support Vector Machine classification; mix, other plants (shrubs, single trees, grasses, and perennials) dominate on given plots, but goldenrod is present; goldenrod, polygons dominated by goldenrod, but other plants may also occur.
Figure 7. The impact of the number of pixels used in training patterns on the accuracy of post-classification maps based on PlanetScope multitemporal images. Explanations: max/min, maximum/minimum value of the F1-score scored during 100 iterations of the classification process; Q1/Q3, first/third quartile; IQR, interquartile range; RF, Random Forest classification; SVM, Support Vector Machine classification; mix, other plants (shrubs, single trees, grasses, and perennials) dominate on given plots, but goldenrod is present; goldenrod, polygons dominated by goldenrod, but other plants may also occur.
Figure 8. Goldenrod distribution map with the dominance of goldenrod (polygons marked in pink to highlight small surfaces with intense color), as well as mixes with other species (polygons marked in yellow), based on multitemporal Sentinel-2 images and the Random Forest classifier. A Sentinel-2 image is presented in the background.
Figure 9. Goldenrod distribution map with the dominance of goldenrod (marked in pink), based on multitemporal Sentinel-2 images and the Random Forest classifier. A Sentinel-2 image is presented in the background.
Figure 10. Referencing the results of the Sentinel-2 and PlanetScope iterative image classification method (700 randomly selected training pixels for each classification; the whole procedure was repeated 100 times) to the field situation. An orthophotomap was used in the background (pixel size of 0.05 m, prepared by the Polish Head Office of Geodesy and Cartography). Photo credit: Karolina B. Zdunek.
Table 2. Unbalanced set of patterns used to classify goldenrod; from this set, 50-700 randomized pixels for Sentinel-2 and 50-1000 pixels for PlanetScope were selected for iterative classifications based on 50:50 stratified random sampling.
Table 3. Achieved F1-scores based on the iterative classification of Sentinel-2 images and randomized pixel selection, depending on the number of training pixels used for classification (the three highest results of the iterative classification, the number of iterations whose results achieved F1-score values higher than 0.90, and the median and mean for 100 iterations; 1st, the highest F1-score; 2nd, the 2nd highest F1-score; 3rd, the 3rd highest F1-score; F1-score > 0.90, number of iterations with F1-score > 0.90).
Table 4. Obtained F1-scores based on the PlanetScope multitemporal data, depending on the number of training pixels used for classification (the three highest results of the iterative classification, the number of iterations whose results achieved F1-score values > 0.90, and the median and mean for 100 iterations; 1st, the best F1-score; 2nd, the 2nd best F1-score; 3rd, the 3rd best F1-score; > 0.90, number of iterations with F1-score > 0.90).
Table 6. The error matrices of multitemporal Sentinel-2 images with the Random Forest (RF) and Support Vector Machine (SVM) classifiers based on 700 training pixels. Explanations: OA, overall accuracy; PA, producer accuracy; UA, user accuracy; F1, F1-score. | 2024-02-11T16:37:05.731Z | 2024-02-08T00:00:00.000 | {
"year": 2024,
"sha1": "0523afca3aeb6f280d2b9af6e574a741581b7461",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/16/4/636/pdf?version=1707400548",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "2b8ec044ad916947fdc8103993f89001d30b8d7b",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
237625644 | pes2o/s2orc | v3-fos-license | A Case Report of ROHHAD Syndrome in an 8-year-old Iranian Boy
Introduction Rapid-onset obesity with hypoventilation, hypothalamic dysfunction, and autonomic dysregulation (ROHHAD) is an uncommon disease that presents with multiorgan disorders during early childhood, with fewer than 100 cases reported around the world. We aim to present a case of ROHHAD syndrome admitted with rare neurologic symptoms. We also present our treatment regimen. Case Presentation An 8-year-old boy was admitted to our department with ataxia and gait disturbance that led us to the final diagnosis after a thorough investigation. He had had multiple admissions and had been treated for other diagnoses. His first symptoms started at age 5 with obstructive apnea. He underwent adenectomy surgery at that time, but the symptoms continued. A year after the surgery, he was admitted again due to somnolence but was diagnosed only with hypothyroidism and anemia. At the age of 7 years and 8 months, he was admitted to our department with ataxia and an abnormal gait over the past year, with instability and numerous falls. He had also shown hyperphagia, which had resulted in a 10-kilogram weight gain over six months. He was experiencing gradual behavioral symptoms, including episodes of self- and hetero-aggression and impulsivity. His other symptoms included fatigue, somnolence, gastrointestinal dysmotility, hyperhidrosis, central hypothyroidism, polyuria, precocious puberty, and rapid obesity. His laboratory investigation revealed hyperprolactinemia. Conclusions Our case indicates that ROHHAD is a complex disease with divergent signs and symptoms that needs to be kept in mind for diagnosis and should be treated with a high level of collaboration among various medical specialties. Since late diagnosis of this syndrome leads to a significant increase in morbidity and mortality rates, it is vital to pay extreme attention to this syndrome. The diagnosis should be considered even more in children over two years old with rapid-onset obesity accompanied by other symptoms. Here, our patient's complaint was ataxia, which revealed the underlying cause after investigation.
Introduction
Rapid-onset obesity with hypoventilation, hypothalamic dysfunction, and autonomic dysregulation (ROHHAD) is an uncommon disease that presents with multiorgan disorders during early childhood. The ROHHAD term and the criteria for the diagnosis of this syndrome were first defined by Ize-Ludlow et al. (1). This syndrome is a distinct entity from congenital central hypoventilation syndrome (CCHS), which presents with a paired-like homeobox 2B (PHOX2B) mutation (2). ROHHAD can present with a variety of signs and symptoms. Since there is no single confirmatory diagnostic test, and since late diagnosis of this syndrome leads to late intervention and eventually high morbidity and mortality, it is essential to have a high index of suspicion for this syndrome, especially in children over two years old with rapid-onset obesity (1, 3).
Case Presentation
Our patient is an 8-year-old boy, first admitted to our department at the age of 7 years and 8 months with ataxia and an abnormal gait over the past year. The patient was born at term to a 25-year-old mother, who also has a healthy 4-year-old girl. His mother had no complications during her pregnancy, and the infant was delivered via cesarean section for failure to progress. There is no consanguinity between his parents. He was born with a normal-range weight of 3,800 g and a birth height of 49 cm. The neonatal period was normal, and the patient had normal growth and psychomotor development. At the age of five, he was admitted to another hospital because of obstruction, apnea, and snoring during sleep, and underwent adenectomy surgery. A year after the surgery, he was admitted again due to somnolence during the day, but he was diagnosed only with hypothyroidism and anemia. He was later discharged with oral treatment with levothyroxine, folic acid, and iron. At the age of 7 years and 8 months, he was admitted to our department with ataxia and an abnormal gait, with instability and numerous falls while walking. He had also shown hyperphagia, which had resulted in a 10 kg weight gain (from 31 to 41.5 kg) in the 6 months prior to his admission. With his 125 cm height, he had a BMI of 26.6. Taking a thorough medical history revealed other signs and symptoms, leading us to the final diagnosis. His BMI progression is shown in Figure 1, based on a chart from the National Center for Health Statistics (4).
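As a quick arithmetic check of the values above (our calculation, not taken from the report): BMI = weight / height² = 41.5 kg / (1.25 m)² = 41.5 / 1.5625 ≈ 26.6 kg/m², which matches the reported figure; the same formula applied to the follow-up values given below (54 kg, 130 cm) gives 54 / 1.69 ≈ 32.0 kg/m², consistent with the BMI of 31.95 reported there.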
Six months prior to the diagnosis, he had been experiencing general weakness, fatigue, and lethargy during the day while doing his routine activities, forcing him to sit down even while he was playing. He still suffered from somnolence. It was reported that he had punched his head multiple times and had hit his head against the wall unprovoked and without any reason. He also had gastrointestinal dysmotility and constipation over the past year. The patient had had enuresis, polyuria, and urinary incontinence over the past month. In addition, the patient had begun to display signs of dysautonomia: a tendency to bradycardia and intestinal motility disorders (severe constipation).
His physical examination revealed tachycardia with a heart rate of 135 bpm, a normal blood pressure of 110/65 mmHg, and tachypnea with a respiratory rate of 35. He was an overweight boy with a BMI of 26.6. He had central obesity with adipomastia and a supraclavicular fat pad, and, alongside his buffalo hump, his face was moon-like. He also suffered from excessive sweating. His physical exam also showed central precocious puberty, with Tanner stage 4 for pubic and armpit hair (Table 1). No purple striae, plethora, acne, acanthosis nigricans, or hyperpigmentation were found. He had no alteration in pain perception and no signs of pulmonary hypertension (elevated JVP, hepatomegaly, or pedal edema). His extremities were not cold (Table 1).
On arrival, an arterial blood gas on room air revealed a pH of 7.35, PaCO2 = 50.0 mmHg, PaO2 = 55.0 mmHg, HCO3- = 24 mmol/L, and an SpO2 of 83%. His laboratory investigation revealed hyperprolactinemia, while MRI of the brain, hypothalamus, and pituitary gland with contrast enhancement revealed no abnormalities. Primary hypercortisolism was ruled out. Our complete laboratory data at the time of diagnosis are shown in Table 2.
Because of the association of ROHHAD syndrome with neural crest tumors, we asked for positron emission tomography (PET) alongside chest and abdominal computed tomography (CT) scans. The patient was further evaluated using a metaiodobenzylguanidine (MIBG) scan and a contrast-enhanced MRI scan of the brain, hypothalamus, and pituitary gland. His 68Ga-DOTATATE PET/CT revealed several small intraperitoneal and retroperitoneal lymph nodes without significant uptake, with only metabolically inactive bilateral cervical, left axillary, and intraperitoneal lymph nodes. His abdominal CT scan also revealed lymph nodes in the celiac chain. The chest CT scan showed that he had prominent lymph nodes. His heart echo showed mild LVH and mild diastolic dysfunction. An eye examination for Fuchs' corneal dystrophy and retinitis pigmentosa was normal, and he did not have strabismus.
He had repetitive apnea on a polysomnography test, so we started non-invasive ventilation for the patient. Our treatment included oral hydrocortisone tablets, a fluticasone and salmeterol spray, a salbutamol spray, and montelukast tablets. For his behavioral changes, we started fluoxetine 10 mg and aripiprazole 20 mg daily. On follow-up visits, our patient did not exhibit any sodium imbalance, but he had high blood pressure, so we used captopril.
He came back to our department for the configuration of his CPAP device. At this time, he was 8 years old; he had gained another 13 kg and now weighed 54 kg with a height of 130 cm. He had a BMI of 31.95, which is +7.31 SD for his age, so we started a diet with an exercise regime. His behavioral changes improved, and he showed less aggressive behavior at home. He did not have any ataxia. Further, his sleep quality improved with the CPAP device, and he was less somnolent during the day. However, he still suffered from autonomic dysregulation symptoms such as constipation and excessive sweating. His follow-up laboratory data are shown in Table 2.
Discussion
ROHHAD is a complex disease with divergent signs and symptoms that needs to be kept in mind for diagnosis and should be treated with a high level of collaboration among various medical specialties, including endocrinologists, psychiatrists, surgeons, pneumologists, oncologists, neuropediatricians, and cardiologists, among other specialists. Early management is essential to improve prognosis and prevent serious complications, even in the case of conservative treatment.
years after the initial symptom (5). The underlying pathogenesis of ROHHAD is unclear, but a multitude of other predisposing factors, including immunological or paraneoplastic ones, have been mentioned. Nevertheless, no epigenetic factors or genetic changes related to this tumor have been recognized, and there is no evidence of a correlation with an autoimmune process. The theory that there may be an association between ROHHAD and paraneoplastic or autoimmune factors was first mentioned in 1995 (6). The association with neural crest tumors highlights an immune-mediated process, as in opsoclonus-myoclonus-ataxia syndrome (7). This theory should be considered because of the satisfactory outcomes with immunomodulatory or immunosuppressive treatments (7, 8). A case report in which only one of a pair of monozygotic twins had ROHHAD has raised doubts about the genetic predisposition of this disease (9). ROHHAD patients display normal development and growth until the first symptoms arise, which happens between the ages of 1.5 and 9 years. Weight gain is the most common first sign, which is usually followed by hypothalamic dysfunction (1). The following symptoms have been reported to occur in ROHHAD patients in the months and years after the initial symptoms: alveolar hypoventilation; hypothalamic dysfunction consisting of hypernatremia or hyponatremia manifested by thirst and antidiuretic hormone secretion abnormalities, diabetes insipidus, polyuria/polydipsia, hyperprolactinemia, central hypothyroidism, central precocious or delayed puberty, growth hormone (GH) deficiency, and adrenocorticotropic hormone (ACTH) deficiency; and, in parallel, autonomic dysfunction, including light-nonresponsive pupils, impaired gastrointestinal motility (constipation), body temperature disorders (hypothermia, hyperthermia), sweating disorders, reduced pain sensation, behavioral disorders (mostly irritability and aggression, fatigue, social withdrawal, and poor school performance), and neurological abnormalities, including seizures and blurring of consciousness (10). Since there is no confirmatory diagnostic test, the diagnosis of ROHHAD is exceptionally demanding. As late diagnosis of this syndrome leads to late intervention and eventually high morbidity and mortality, it is essential to have a high index of suspicion for this syndrome, especially with rapid-onset obesity in children over two years old (1, 3). ROHHAD has clinical diagnostic criteria, which must include rapid weight gain and hypoventilation initiated after the age of one and a half years. Patients must also have hypothalamic dysfunction with at least one of the following disorders: hyperprolactinemia, rapid-onset obesity, a failed growth hormone stimulation test, central hypothyroidism, electrolyte imbalances, corticotrophin deficiency, or altered onset of puberty (11). Genetic investigations should be considered to rule out other disorders with overlapping features, including CCHS and Prader-Willi syndrome. Basic cardiopulmonary, central nervous system, and neuromuscular evaluations should be performed to eliminate any chance of other diagnoses or secondary complications. Even though malfunction of the respiratory centres and their chemoreceptors is the most obvious explanation for the respiratory problems, scarce evidence has been published on this theory. A deficit in chemosensory function is thought to be the reason for persistent hypoventilation; however, Carroll et al. (12) explained that the responses of ROHHAD patients to hypoxia and hypercarbia are similar to those of healthy young adults. Yet decreased inspiratory drive and tidal volumes during some stimuli, together with a lacking behavioral perception of asphyxia, make these patients prone to hypoxemia and hypercarbia (12).
Conclusions
Our case indicates that ROHHAD is a complex disease with divergent signs and symptoms that needs to be kept in mind during diagnosis and should be managed with a high level of collaboration among various medical specialties. Since late diagnosis of this syndrome leads to a high chance of morbidity and mortality, it is vital to pay close attention to it, all the more so for children over 2 years old with rapid-onset obesity accompanied by other symptoms. Here, the patient's presenting complaint was ataxia, which revealed the underlying cause after investigation. | 2021-08-19T19:52:38.440Z | 2021-05-22T00:00:00.000 | {
"year": 2021,
"sha1": "d0416af7b8f1d75c8a9044c35eb91ba6651ae06d",
"oa_license": "CCBYNC",
"oa_url": "https://sites.kowsarpub.com/ijem/cdn/dl/85126b32-ee3f-11eb-a469-53ad2a217cd3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eeb5c7cc0b44d59a07ee23ca4b01b01c8e4ad56b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
128379325 | pes2o/s2orc | v3-fos-license | Ecological Water Quality and Management at a River Basin Level: A Case Study from River Basin Kosynthos in June 2011
In October 2000, the European Parliament and Council adopted a policy on the protection, appropriate treatment, and management of water, leading to the Water Framework Directive 2000/60/EC (WFD, European Commission, 2000). The WFD obliges Member States to achieve the objective of at least a good ecological quality status before 2015 and requires them to assess it by using biological elements, supported by hydromorphological and physico-chemical ones. The assessment must be done at a basin level, and authorities are obliged to follow efficient monitoring programs in order to design integrated basin management plans. Efforts are being made to adapt national programmes to the WFD requirements (Birk & Hering, 2006). In most European countries, river monitoring programmes are based on benthic macroinvertebrate communities (Sánchez-Montoya et al., 2010).
Introduction
In October 2000, the European Parliament and Council adopted a policy on the protection, appropriate treatment, and management of water, leading to the Water Framework Directive 2000/60/EC (WFD, European Commission, 2000). The WFD obliges Member States to achieve the objective of at least a good ecological quality status before 2015 and requires them to assess it by using biological elements, supported by hydromorphological and physico-chemical ones. The assessment must be done at a basin level, and authorities are obliged to follow efficient monitoring programs in order to design integrated basin management plans. Efforts are being made to adapt national programmes to the WFD requirements (Birk & Hering, 2006). In most European countries, river monitoring programmes are based on benthic macroinvertebrate communities (Sánchez-Montoya et al., 2010).
The WFD (EC, 2000) suggests a hierarchical approach to the identification of surface water bodies (Vincent et al., 2002) and the characterization of water body types is based on regionalization (Cohen et al., 1998). The directive proposes two systems, A and B, for characterizing water bodies according to the different variables considered (EC, 2000). The WFD allows the use of both systems, but considers system A as the reference system. If system B is used by Member States, it must achieve at least the same degree of differentiation. System A considers the following obligatory ranged descriptors: eco-region, altitude, geology and size, whereas system B considers five obligatory descriptors (altitude, latitude, longitude, geology and size) and fifteen optional ones.
Study area
The Kosynthos River is located in the north-eastern part of Greece, flows through the prefectures of Xanthi and Rhodopi, and discharges into the Vistonis lagoon (Figure 1) as a result of the diversion of its lowland part in 1958. Kosynthos' length is approximately 52 km (Pisinaras et al., 2007). In the present study, 8 sites were selected in the Kosynthos river basin (Figure 1) during June 2011, depending on the different pressures present in the area. Four sites belonged to the mountainous area and the remaining sites to the lowland one. The Kosynthos river basin belongs to the water district of Thrace (12th water district), covering an area of 460 km². The region consists of forest and semi-natural areas (69.6%), rural areas (27.7%), artificial surfaces (2.5%), and wetlands (0.3%) (Corine Land Cover 2000). It is considered to be a mountainous basin (Gikas et al., 2006) of steep slopes, and its average elevation is about 702 m. In total, 7.3% of the basin is protected by the Ramsar Convention or belongs to the EU Natura 2000 sites.
Geologically speaking, the study area belongs entirely to the Rhodope massif (Figure 2), consisting of old metamorphic rocks (gneisses, marbles, schists) observed mainly in the northern part of the basin. Moreover, igneous rocks (granites, granodiorites) intruded the Rhodope massif through magmatic events during the Tertiary and outcrop in the central part of the basin. Because of the granite intrusion into the calcareous rocks and the contact metamorphism, a sulfur deposit was formed, consisting mainly of pyrite. Quaternary and Pleistocene mixed sediments cover the south-eastern part of the catchment. The boundary between the highland area and the lowland is characterized by a sharp change of slope. From a hydrogeological point of view, two main aquifers are developed within the aforementioned geological formations: 1) an unconfined aquifer in the Quaternary deposits of the lowlands and 2) a karst aquifer in the marbles of the northern part of the basin (Diamantis, 1985). The karst aquifer system often discharges groundwater through springs in the hilly part of the basin, where permeable marbles are in contact with impermeable basement rocks. Previous studies (Hrissanthou et al., 2010; Gikas et al., 2006) show significant sediment transport to the Vistonis lagoon from the Kosynthos river because of intense erosion. However, no deltaic deposits are observed at the outfall of the Kosynthos, while an inner delta is created right before the stream's diversion (Figure 3). The steep topography, combined with the inclination of the diverted section, prevents the transport of coarse sediments, allowing only fine-grained fractions to reach the Vistonis lagoon.
Typology
In this study, system B was selected because the basin of the Axios River (a transboundary Greek-FYROM river) belongs to two different ecoregions according to System A. In order to distinguish the water bodies of the Kosynthos river basin, apart from the obligatory descriptors, slope was selected from the optional ones, and a new category (0-10 km²) was added to the basin size descriptor. The rivers were characterized according to the MED-GIG intercalibration exercise (Van de Bund et al., 2009).
Approximate water balance
The estimation of the approximate water balance of the Kosynthos catchment is based on monthly rainfall and temperature data from 7 weather stations (Genisea, Iasmos, Xanthi, Semeli, Gerakas, Thermes, Dimario), distributed equally across and beyond the basin, for the period 1964-1999, combined with GIS techniques (Voudouris, 2007). As part of the estimation process, the components of the hydrological cycle (precipitation P, actual evapotranspiration E, infiltration I, and surface runoff R), the instream flow, the available water capacity, and the water needs (demand for urban, farming, irrigation, and industrial water) of the river basin are calculated.
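The evapotranspiration coefficient reported in the Results below was obtained with the Turc method. As a minimal sketch of that step, assuming the classic annual Turc (1954) formula and an illustrative mean temperature (the basin mean is not given in the text):

```python
import math

def turc_aet(P, T):
    """Annual actual evapotranspiration (mm/yr) by the Turc (1954) formula.

    P: mean annual precipitation (mm/yr); T: mean annual air temperature (C).
    """
    L = 300 + 25 * T + 0.05 * T ** 3          # temperature-dependent parameter
    return P / math.sqrt(0.9 + (P / L) ** 2)

# P from this study; T = 18 C is an assumed value that reproduces the
# ~71% evapotranspiration coefficient reported in the Results.
P, T = 1085.6, 18.0
E = turc_aet(P, T)
print(f"AET = {E:.0f} mm/yr ({100 * E / P:.0f}% of precipitation)")
```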
Quality elements
Dissolved oxygen (DO, mg/l), water temperature (WTemp, °C), pH, and conductivity (μS/cm) were measured in situ with probes (EOT 200 W.T.W./Oxygen Electrode, pH-220, and CD-4302, respectively). TSS (mg/l), nutrients (N-NH₄ and P-PO₄, mg/l), and oxygen demand (BOD₅, mg/l) were estimated following A.P.H.A. (1985). Flow was quantified with a flow meter (type FP101), and stream discharge (m³/s) was calculated for each site. The percentage composition of the substrate was visually estimated according to the Wentworth (1922) scale. The Habitat Modification Score (HMS) was calculated to assess the extent of human alterations at each site (Raven et al., 1998).
Benthic macroinvertebrates were collected using a standard pond net (ISO 7828:1985, EN 27828:1994) with the semi-quantitative 3-minute kick-and-sweep method, according to Armitage et al. (1983) and Wright (2000), proportionally to the approximate coverage of the occurring habitats (Chatzinikolaou et al., 2006). The animals were preserved in 4% formaldehyde.
In the laboratory, they were sorted and identified to family level. To assess the ecological quality of each site, the Hellenic Evaluation System (HES) (Artemiadou & Lazaridou, 2005) and the European polymetric index STAR ICMi (European Commission 2008/915/EC) were applied to the benthic macroinvertebrate samples.
Statistical analysis
For the statistical analyses, all data were log(x+1) transformed except for pH and temperature, which were standardized. Parameters expressed as percentages (substrate) were arcsine transformed (Zar, 1996). Hierarchical clustering analysis based on the Bray-Curtis index (Clarke & Warwick, 1994) was applied to the benthic macroinvertebrate samples in order to group them.
Similarity percentages analysis (SIMPER analysis) (Clarke & Warwick, 1994) was used to distinguish the macroinvertebrate taxa contributing to similarity and dissimilarity between the groups. Redundancy Analysis (RDA) was performed in order to detect covariance between environmental variables and abundances of taxa (Ter Braak, 1988). Correlated variables were excluded with the use of the inflation factor (<20) and the Monte Carlo permutations test (p<0.05).
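To illustrate the clustering step, the sketch below chains the log(x+1) transformation, Bray-Curtis dissimilarities, and hierarchical clustering with SciPy. The abundance matrix is a random placeholder for the real family-level counts, and average linkage is an assumption, as the linkage method is not stated above:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Placeholder abundance matrix: rows = the 9 sites, columns = 48 taxa.
sites = ["Oraio", "Byz. Gefyri", "Tsai", "Sminthi", "Glauki",
         "Drosero", "Kimmeria", "Chalikorema", "Ekvoles"]
rng = np.random.default_rng(0)
abundances = rng.poisson(20, size=(len(sites), 48)).astype(float)

transformed = np.log10(abundances + 1)              # log(x+1) transformation

dist = pdist(transformed, metric="braycurtis")      # Bray-Curtis dissimilarity
tree = linkage(dist, method="average")              # hierarchical clustering
groups = fcluster(tree, t=3, criterion="maxclust")  # cut into three groups
print(dict(zip(sites, groups)))
```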
Impress analysis/DPSIR and SWOT analysis
Impress analysis estimates the impacts, taking into account the morphological alterations and the pollution pressures. The morphological alterations were estimated through the calculation of the Habitat Modification Score (HMS) (Raven et al., 1998), which is based on artificial modifications. The pollution pressures are treated differently for point and non-point sources. Urban wastewater and septic tanks are considered point sources of pollution, producing BOD, N, and P loads, which are calculated according to emission factors (Fribourg-Blanc & Courbet, 2004), whereas the pollutants from livestock are calculated according to Ioannou et al. (2009) and Andreadakis et al. (2007).
The human population and the numbers of breeding animals per species were derived from the Greek National Statistical Service. Data on industries, which are point sources of pollution, are not available from national services. Non-point sources of pollution, namely the land uses, are determined using the Corine Land Cover 2000, and their pollutants are calculated according to the immission factors of WL-Delft et al. (2005). Morphological pressures were considered significant if agricultural land cover exceeded 40% (LAWA, 2002) or urban land cover exceeded 2.5% (Environment Agency, 2005) of the total extent of the river basin.
The pressures from pollution sources would be significant if the total immissions exceeded the proposed limits for irrigation (Decision 4813/98) and for fish life (European Commission 2006/44/EC). All limit scores were adjusted to the river basin by multiplying them by the river flow, estimated as 5.8 m³/s (Gikas et al., 2006).
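For illustration, adjusting a concentration-based standard to an annual basin load with the estimated flow can be sketched as follows; the limit values are placeholders, not the actual figures of Decision 4813/98 or Directive 2006/44/EC:

```python
FLOW = 5.8                    # estimated river flow, m^3/s (Gikas et al., 2006)
SECONDS_PER_YEAR = 365 * 24 * 3600

def permissible_load_t_per_yr(limit_mg_per_l, flow=FLOW):
    # mg/l x m^3/s = g/s; converted to tonnes per year.
    return limit_mg_per_l * flow * SECONDS_PER_YEAR / 1e6

limits_mg_l = {"BOD": 3.0, "Total N": 10.0}   # hypothetical limit values
for name, c in limits_mg_l.items():
    print(f"{name}: maximum permitted immission = "
          f"{permissible_load_t_per_yr(c):.0f} t/yr")
```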
The impact assessment, the evaluation of the likelihood of failing to meet the environmental objectives, and the risk management followed the methodology proposed by Castro et al. (2005).
Finally, the conceptual model DPSIR (at a river basin level) and SWOT analysis (at the level of Municipalities Mykis and Dimokritos) were applied.
Typology
In accordance with the hierarchical approach, the river flowing in the basin is separated into two main water bodies, due to the canalization of the lowland part of the Kosynthos in 1958. Therefore, the diverted part is characterized as a heavily modified water body (HMWB), while the rest of the river is characterized as a natural water body (NWB). The classification of the river by types leads to 17 types in the catchment area, of which 15 occur in the drainage network (Figure 4). Finally, the subdivision of a water body of one type into smaller water bodies according to the existing pressures results in 44 water bodies, in 9 of which sampling of biological, hydromorphological, and physico-chemical parameters was carried out in June 2011. Based on the European common intercalibration river types (Van de Bund et al., 2009), two types (RM1 and RM2) appear in the river basin.
Approximate water balance
The climate is semi-humid, with water excess during winter and water deficiency during summer (Angelopoulos & Moutsiakis, 2011). The annual rainfall (P) is influenced by the elevation (H) of the region (P = 0.92H + 625, R² = 0.95). The mean annual precipitation in the basin for the period 1964-1999 is 1085.6 mm (Figure 5). Based on the Turc method, the coefficient of actual evapotranspiration was estimated at 71% of the mean annual precipitation. The remaining amount is allocated to surface runoff (15.3%) and infiltration (13.4%). A great amount of water infiltrates into the marbles and alluvial deposits, and part of this amount is then discharged through springs. Instream flow ('environmental flow') is a term that refers to the water required to protect the structure and function of aquatic ecosystems at some agreed level (Zhang et al., 2006). In accordance with the legislation (M.D. 49828/2008), the instream flow equals 30% of the surface runoff, and the remaining 70% is estimated as available water potential. Assuming that 50% of the infiltration is also involved in the available water, the total water potential of the Kosynthos basin is calculated to be 86.9×10⁶ m³/yr for the period 1964-1999.
The relevant agents considered for the calculation of water demand are the municipalities that make up the Kosynthos river basin (Municipalities of Myki, Xanthi, Dimokritos, and Iasmos). The needs for urban and farming water are calculated at 1.7×10⁶ m³ each, for irrigation water at 62×10⁶ m³, and for industrial water at 6.3×10⁶ m³ (demographic and population data 1991-2001, N.S.A.G.). It should, however, be mentioned that any analysis of water resource management suffers the same handicap with regard to the availability of complete and homogeneous information, particularly at the municipality level (Torregrosa et al., 2010). Comparing the water potential of the Kosynthos catchment for the period 1964-1999 with the water demands, the approximate water balance for the same period is characterized as positive: the water potential is greater than the water demands.
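These figures can be verified with a short back-of-the-envelope computation; the percentages and demand values are those reported above, and the 50% infiltration share is the stated assumption:

```python
AREA_KM2 = 460.0                         # basin area
P_MM = 1085.6                            # mean annual precipitation, 1964-1999

precip = AREA_KM2 * 1e6 * P_MM / 1000    # total precipitation volume, m^3/yr
runoff = 0.153 * precip                  # surface runoff (15.3%)
infiltration = 0.134 * precip            # infiltration (13.4%)

# Instream flow = 30% of runoff (M.D. 49828/2008); 50% of infiltration
# is assumed to contribute to the available water potential.
potential = 0.7 * runoff + 0.5 * infiltration

demand = (1.7 + 1.7 + 62.0 + 6.3) * 1e6  # urban, farming, irrigation, industry
print(f"Water potential: {potential / 1e6:.1f} x 10^6 m^3/yr")   # ~86.9
print(f"Water demand:    {demand / 1e6:.1f} x 10^6 m^3/yr")      # 71.7
print("Balance:", "positive" if potential > demand else "negative")
```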
Quality elements
The results of the physico-chemical parameters of the river water are presented in Table 2. The ammonium concentration was found to exceed the boundaries for cyprinid life in all sites, except the site Oraio, which exceeded the boundary for potable water, and the site Tsai, which exceeded the boundary for salmonid life. Also, the T.S.S. concentration exceeded the boundary for potable water in the sites Kimmeria, Chalikorema, and Ekvoles. The substrate composition is represented in Figure 2. The sites Oraio, Byz. Gefyri, and Kimmeria consist mostly of fine substrate. According to the HMS index, most of the sites are characterized as "Predominantly unmodified" (HMS score 3-8, Figure 6). In this study, 22,005 benthic macroinvertebrates were identified, belonging to 48 different taxa. Abundances were highest at the site Oraio and lowest at the site Chalikorema. According to the Hellenic Evaluation Score (HES), the ecological quality of the sites Tsai, Sminthi, and Kimmeria was characterized as good, that of the sites Oraio, Byz. Gefyri, Chalikorema, and Ekvoles as moderate, and that of Glauki and Drosero as poor (Figure 7). The European polymetric index STAR ICMi indicated the same quality, except for the site Kimmeria, which was characterized as less than good (Table 3). This difference is related to the fact that the HES index takes into account more sensitive taxa.
Statistical analysis
The hierarchical clustering analysis, based on the Bray-Curtis index, clustered the benthic macroinvertebrates of the different sites into three clusters (Figure 8). The groups clustered the modified sites with an excess of human activities (Ekvoles & Chalikorema) (Group A), the inland delta sites (Drosero & Kimmeria) (Group B), and the high-altitude sites (the rest of the stations) (Group C). SIMPER analysis showed that the dissimilarity between the groups was around 51%. The families Gammaridae and Simuliidae were the key taxa for the differences between the clusters (Figure 9). According to the CCA, the eigenvalues of the first two axes accounted for 73.8% of the variance. P-PO₄ was the variable best correlated with the first axis, whereas the second axis was best correlated with discharge (Figure 10).
Pressures from pollution sources and morphological alteration pressures
The total emission and immission loads produced within the Kosynthos river basin and the environmental quality standards for irrigation and fish life are presented in Table 4. Only BOD exceeded the limit of the salmonid life standard. It is evident that livestock breeding is the most polluting activity (Figure 11). Agriculture is the second diffuse pollution source of total nitrogen (30%) and of nitrogen immissions. The morphological pressures were not significant, since urban land cover (1.5%) and agricultural land cover (27.7%) were lower than the proposed levels.
Table 4. Comparison of emission and immission loads with maximum permitted immission loads.
Fig. 11. BOD, total P, and total N immissions that each activity produces.
Impact assessment
The impacts from the morphological alterations are probable, because the mean HMS score is 5.5. The impacts from the pollution pressures are also probable, because the mean biological quality is inferior to good quality and because N-NH₄ exceeds the limit for potable water at the site Oraio. Hence, the impacts from both the morphological alterations and the pollution pressures are probable. The likelihood of failing to meet the environmental objectives for the morphological alterations is medium, because the impacts are probable and there are no significant pressures (urban land cover 1.5% and agricultural land cover 27.7%). Additionally, for the pollution pressures, the likelihood of failing to meet the environmental objectives is medium, because the impacts are probable and there are no data for significant pressures (the inputs of industrial pollutants are lacking). Therefore, operational monitoring is proposed for the risk management of both the morphological and the pollution pressures in the Kosynthos river basin.
DPSIR and SWOT analysis
According to the DPSIR framework, there is a chain of causal links starting with 'driving forces' (D) (human and economic activities), through 'pressures' (P) (emissions, waste), to 'state' (S) (physical, chemical, and biological) and 'impacts' (I) on ecosystems, human health, and functions, eventually leading to political 'responses' (R) (prioritization, target setting, indicators). Consequently, all of the above were examined in the Kosynthos river basin. For the sustainable development of the study area, SWOT analysis was applied in the Municipality of Mykis, which is in the mountainous part of the basin, and in the Municipality of Dimokritos, which is in the lowland part of the basin, in order to estimate the Strengths, Weaknesses, Opportunities, and Threats. Based on the SWOT analysis, which is a useful tool for local authorities and decision makers (Diamantopoulou & Voudouris, 2008), some recommendations are proposed to maximize the existing opportunities.
Discussion
In this study, System B was selected because of its flexibility in the choice of abiotic parameters and its better discrimination with respect to the biota compared with System A (Dodkins et al., 2005). According to Kanli (2009), the descriptor "Altitude" significantly affects the structure of benthic macroinvertebrate communities in relation to the other descriptors used in the typology. Also, according to Rundle et al. (1993) and Brewin et al. (1995), "Basin size" is the second most important descriptor affecting the structure of biocommunities after altitude. In this case, the hierarchical clustering analysis based on the Bray-Curtis index showed that "Altitude" was the most important descriptor for the separation of the benthic macroinvertebrates. For the Mediterranean RM types, there was no apparent difference between the stations in the distribution of benthic macroinvertebrates (most of them were RM2).
The approximate water balance for the period 1964-1999 is characterized as positive, since the water potential in the basin is sufficient to meet the needs arising from the activities. The intense infiltration due to the karstic marbles of the Rhodope massif and the hydraulic conditions developed in the mountainous area by the presence of impermeable formations do not allow high surface runoff. Moreover, the largest city in the basin (Xanthi) is not supplied with water from this basin.
The concentration of total suspended solids is affected by the dissolution of mineral matter and the intense evaporation (Voudouris, 2009). In this study, the highest TSS concentration was measured at the lowland sites (47.6 mg/l), owing to the large sediment transport, mainly of fine-grained material derived from intense erosion, weathering, and dissolution of the lithological formations on steep slopes.
The physico-chemical and biological characteristics are modified by the discharge and are related to the capacity for dilution of pollutants (Prat et al., 2002). According to Hubbard et al. (2011), the importance of intense flooding in rivers demonstrates the inverse relationship between supply and nutrient concentration. In this study, the site Kimmeria had the lowest discharge (0.6 l/s) and low concentrations of P-PO₄ and N-NH₄. This occurs because the actual band width is greater than that measured during the sampling period. In contrast, its highest concentration of N-NO₃ may be due to the influx of water from underground sources upstream of the site. Finally, the site Oraio had the second smallest discharge (0.8 l/s) and the highest nutrient values, because its active band width is small, leading to the accumulation of nutrients.
The ecological water quality of the site Tsai is connected to the absence of pressures. In the sites Oraio, Glauki, and Byz. Gefyri, the ecological water quality is characterized as poor due to the present livestock feeding and the septic tanks. Also, the sites Chalikorema and Ekvoles were characterized as moderate because of the intensive agricultural land use, livestock feeding, and septic tanks. Finally, the ecological quality at the sites Sminthi and Chalikorema is good, because of the self-purification of the system and the presence of water sources, respectively. The Impress analysis showed that the immission loads in the Kosynthos basin are lower than the proposed irrigation limits (Decision 4813/98) issued for another region. The adoption of a similar decision for Xanthi is suggested as important, since it is a rural and agricultural basin with intense activity in the lowland section. Also, the immission loads did not exceed the limits of the cyprinid life standard, although the total organic load exceeded the limit of the salmonid life standard. As livestock breeding appears to be the most polluting activity, there is a certain amount of uncertainty involved, due to the lack of data concerning the location of the breeding farms, their grazing fields, their anti-pollution technologies, and the disposal processes of pollutants into the environment (Ioannou, 2009). The latter is mainly due to the intense livestock activity observed in the municipality of Dimokritos (40% of the total organic load of the entire river basin). Consequently, risk-management operational monitoring is proposed for both the morphological and the pollution pressures in the Kosynthos river basin, in order to achieve good quality status by 2015.
Conclusions
In conclusion, 15 river types are present in the hydrographic network of the Kosynthos river basin according to System B of the WFD. When the existing pressures in the basin are taken into account, 44 water bodies are detected. The approximate water balance for the period 1964-1999 is characterized as positive. Among the nine stations selected for sampling benthic macroinvertebrates, and according to the Hellenic evaluation system of ecological quality, the water quality was estimated as good at three stations (Tsai, Sminthi, Kimmeria), moderate at four stations (Oraio, Gefyri, Chalikorema, Ekvoles), and poor at two (Glauki, Drosero). Finally, by applying the Impress analysis, operational monitoring was recommended. | 2017-09-17T04:32:53.572Z | 2012-05-16T00:00:00.000 | {
"year": 2012,
"sha1": "1a8b03d021e4444c338faaa1f4b021c6d8273339",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/36796",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1a8b03d021e4444c338faaa1f4b021c6d8273339",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
225560083 | pes2o/s2orc | v3-fos-license | The Construction of College English Network Technique Platform Based on Computer Aided System
With the progress of computer technology and the prosperity of the information age, society places higher and higher requirements on English learners. Due to limited English proficiency, many university students face more serious employment problems. This situation poses a great challenge to university English teaching. How to improve the efficiency of university students' English instruction has become a major concern in academia. At present, the establishment of college English online teaching platforms has increased university students' enthusiasm for learning English. Researchers have found that online teaching platforms can help students improve their English level. This paper carefully studies the establishment of a college English online teaching platform in the context of computer-aided systems and finally draws a conclusion.
Introduction
The purpose of English teaching in colleges and universities is to improve students' comprehensive English application ability. It can enhance students' communicative ability and autonomous learning ability. With the renewal and progress of science and technology, computer-aided systems play an essential role in the construction of college English online teaching platforms. They have been quickly recognized and accepted.
Generally speaking, computer technology is a comprehensive product of network technology and communication technology [1]. Its progress has a profound impact on the quality of English instruction in China. The construction of college English online teaching platforms based on computer-aided systems is an important trend in the reform of college English instruction.
Advantages of the construction of a college English online teaching platform based on a computer-aided system
The study of each subject and the construction of the corresponding technology are intrinsically linked, and their emergence and use bring many advantages.
Assisting the traditional English teaching mode
The traditional English teaching method is boring, and students generally do not like the English classroom environment. This leads to a lack of enthusiasm for English learning. The establishment of an online teaching platform can help teachers find the corresponding teaching resources on the network [2]. Teachers can also conduct live teaching activities through the online platform under special circumstances.
Create a daily learning environment for students
The process of learning English is a process of accumulation. Vocabulary and grammar are not the kind of knowledge that can be learned quickly. The establishment of an online teaching platform can help students find corresponding learning resources and references in their daily lives. In this way, students can be exposed to English learning skills and practical techniques every day, creating a daily English learning environment for them.
The transformation of the concept of college English network teaching based on computer-aided systems
The computer-aided systems applied in English teaching have had a great impact on the traditional concepts of English teaching. We can therefore discuss the main aspects of this change.
Student-centered curriculum design
The traditional teaching idea is teacher-centered: teachers impart knowledge to students according to a fixed pattern of thinking [3]. The college English online teaching platform places students at the center of the whole teaching process [4]. The traditional teaching method is replaced by an independent, open, and interactive one. This approach also places higher requirements on teachers' own abilities.
The mode of interaction between students and machines replaces the mode of interaction between students and teachers
In the process of online teaching and learning, students and teachers are both faced with machines and can only interact with them. Video and audio become students' teachers, and the traditional classroom becomes a live streaming room on the Internet.
The reduction of teacher guidance between classes
In traditional English teaching, teachers can address students' learning problems during recess. In online teaching, students can hardly get the teacher's guidance between classes. Therefore, online learning will improve students' ability for autonomous learning. This is also a change in learning philosophy.
The construction of a college English online teaching platform based on a computer-aided system
The construction of an online teaching platform requires not only excellent computer technology but also stable teaching resources and teacher resources (see Table 1).
Resource construction of the platform
There are two kinds of resource construction for the college English online teaching platform: the construction of teaching resources and the construction of teacher resources. The construction of teacher resources is relatively simple, while the construction of teaching resources includes the construction of various courseware and of the teaching environment [5][6].
Main purpose of platform construction
In the process of building the college English online teaching platform, we should make clear the main idea of online English teaching. Its main purpose is to enable students to learn English independently. Moreover, network resources are the main source of students' learning resources: pictures, audio, and video give students access to all kinds of knowledge. Therefore, to some extent, the stability of the network is essential.
The establishment of a feedback system for the college English online teaching platform based on a computer-assisted system
According to the theory of computer network platform construction, each network platform should have an independent feedback system. A feedback system means that users of the platform can report their experience and the progress they have made on the platform at any time.
At present, the construction cost of a feedback system is relatively high, and many people therefore do not recommend building one for the online platform. We believe, however, that the construction of a feedback system is essential: beyond the knowledge students learn, we should also understand how students feel about their learning. This is a matter of responsibility to the students.
Conclusion
In fact, we should see not only the advantages of a technology but also its disadvantages. The college English online teaching platform based on a computer-assisted system has many advantages; however, it is likely to cause a lack of emotional communication between teachers and students. Therefore, we should learn to use its advantages and discard its disadvantages. | 2020-10-30T08:11:11.731Z | 2020-07-01T00:00:00.000 | {
"year": 2020,
"sha1": "b0149dd18181252ad588a329164eaad6f2de6b80",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1578/1/012066",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "538b48cc98906a86fbb8fe82244a5beb911d36cd",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
266488079 | pes2o/s2orc | v3-fos-license | Clinical Utility of Breast Ultrasound Images Synthesized by a Generative Adversarial Network
Background and Objectives: This study compares the clinical properties of original breast ultrasound images and those synthesized by a generative adversarial network (GAN) to assess the clinical usefulness of GAN-synthesized images. Materials and Methods: We retrospectively collected approximately 200 breast ultrasound images for each of five representative histological tissue types (cyst, fibroadenoma, scirrhous, solid, and tubule-forming invasive ductal carcinomas) as training images. A deep convolutional GAN (DCGAN) image-generation model synthesized images of the five histological types. Two diagnostic radiologists (reader 1 with 13 years of experience and reader 2 with 7 years of experience) were given a reading test consisting of 50 synthesized and 50 original images (≥1-month interval between sets) to assign the perceived histological tissue type. The percentages of correct diagnoses were calculated, and the reader agreement was assessed using the kappa coefficient. Results: The synthetic and original images were indistinguishable. The correct diagnostic rates from the synthetic images for readers 1 and 2 were 86.0% and 78.0% and from the original images were 88.0% and 78.0%, respectively. The kappa values were 0.625 and 0.650 for the synthetic and original images, respectively. The diagnoses made from the DCGAN synthetic images and original images were similar. Conclusion: The DCGAN-synthesized images closely resemble the original ultrasound images in clinical characteristics, suggesting their potential utility in clinical education and training, particularly for enhancing diagnostic skills in breast ultrasound imaging.
Introduction
Breast cancer remains a significant global health challenge and is the most commonly diagnosed cancer in women worldwide. Historically, the global incidence of breast cancer has been rising, partly due to increased life expectancy and lifestyle changes. In the early 20th century, breast cancer was relatively less common, but by the late 20th and early 21st centuries, it emerged as one of the most frequently diagnosed cancers. This trend is particularly evident in developed countries, where widespread screening and awareness campaigns have contributed to earlier detection. In 2020 alone, approximately 2.26 million cases were recorded globally, and about 685,000 women worldwide died from breast cancer, making it the leading cause of cancer death among women [1]. However, despite the increase in incidence, mortality rates have gradually declined in many developed countries since the 1990s, largely due to improved treatment modalities and effective screening programs. The 5-year survival rate for breast cancer in these countries is now over 80%, underscoring the importance of early detection and diagnosis in providing effective treatment [1].
One breast cancer screening modality is ultrasonography, which is widely used because it is minimally invasive and low-cost [2]. Meanwhile, diagnostic accuracy varies because it depends on the experience and individual competence of the examiner. Although beginners need to learn by assessing many images, medical images may contain personal information, which limits experience with cases from outside the institution and opportunities to encounter images of infrequent diseases.
Artificial Intelligence (AI) in medical fields, particularly in breast ultrasound imaging, plays a critical role in enhancing diagnostic accuracy. Lesion detection involves AI algorithms identifying abnormal areas or tissues, thereby aiding in early disease recognition. Segmentation, another crucial function, refers to the process where AI delineates the contours of lesions, separating them from normal tissue for precise analysis. High diagnostic rates signify AI's effectiveness in correctly diagnosing conditions based on imaging data, reducing errors and improving patient outcomes. AI's integration into medical imaging thus represents a significant advancement in diagnostic techniques, offering more accurate, efficient, and reliable evaluations [3][4][5][6][7][8]. Additionally, AI has been found to be useful for diagnosing axillary lymph node metastasis and predicting lymph node metastasis in breast cancer [9,10]. Ozaki et al. reported on the use of a deep learning model, employing convolutional neural networks for differentiating between normal and metastatic axillary lymph nodes in breast ultrasound images. The model, trained on over 600 images, achieved a sensitivity of 94%, a specificity of 88%, and an area under the curve of 0.966. Its diagnostic performance was comparable to an experienced radiologist and superior to less experienced readers. With assistance, the diagnostic accuracy of the residents improved significantly, demonstrating the model's potential as an effective diagnostic aid in breast ultrasound imaging [9]. Zhou et al. reported on the feasibility of using deep learning to predict axillary lymph node metastasis in primary breast cancer patients from ultrasound images. The study used images from two hospitals, training three convolutional neural networks on one dataset and testing on both. The best-performing model, Inception V3, achieved an AUC of 0.89, with 85% sensitivity and 73% specificity, outperforming radiologists in diagnosis. This suggests that deep learning models can effectively predict clinically negative axillary lymph node metastasis, offering a potential early diagnostic tool for breast cancer patients [10].
In recent years, significant progress has been made in AI techniques for medical image synthesis [11][12][13][14]. Among them, an image-generation model called the generative adversarial network (GAN) was published in 2014 and has attracted much attention [15]. GAN involves two networks that are trained simultaneously as neural network models, with one performing image generation and the other performing discrimination of the generated image to produce a realistic virtual image [16]. In recent studies, GANs produced high-quality medical images, and they have been the subject of several active studies, including optic-nerve papillary optical coherence tomography image synthesis for glaucoma detection [17], 3D magnetic resonance angiography cerebro-vascular image synthesis [18], skin-lesion segmentation [19], and chest X-ray image synthesis for pneumonia diagnosis [20]. Since the announcement of Chat GPT (Chat Generative Pre-trained Transformer, developed by OpenAI, San Francisco, CA, USA) in November 2022, generative AI has rapidly gained popularity as a form of social media. Chat GPT is a state-of-the-art large-scale language model that evaluates user input to simulate human-like conversations [21]. Large-scale language models are revolutionizing various fields, including healthcare. In medicine, they assist in analyzing patient data, generating medical reports, and providing diagnostic suggestions. Their ability to understand and process complex medical literature and patient information can support doctors in decision-making. However, challenges like data sensitivity, the need for highly accurate and unbiased outputs, and ethical considerations are crucial. The integration of these models in healthcare promises to enhance patient care and medical research, but it requires careful implementation to address potential risks. The Microsoft Corporation (Washington, USA) and OpenAI are studying the use of ChatGPT for clinical support and medical education [22].
In previous studies, we have used a deep convolutional GAN (DCGAN) [23], an adversarial generative network using deep learning, to generate breast mass synthetic images. These are the first successfully generated breast ultrasound images so realistic that even radiologists could not distinguish between the synthetic and original images [24]. The second study demonstrated DCGAN's capability to produce high-quality synthetic ultrasound images of normal, benign, and malignant breast tissues, highlighting its potential in creating realistic virtual representations of tumor development, including growth and malignant transformation processes [25]. However, in these previous studies, the synthetic breast ultrasound images were generated without separating the images by tissue type. Meanwhile, in the present study, the DCGAN was used to generate breast ultrasound synthetic images by pathological diagnosis of tissue type. The generated images were evaluated by a radiologist along with the actual images and compared in detail. The evaluation focused on whether the generated images looked realistic and adequately represented each tissue's characteristics. Furthermore, the possibility of creating a generated image for each pathological tissue was discussed to see if it could be used to improve physicians' diagnostic skills and for educational purposes. Two radiologists meticulously examined both the original and generated images, subsequently scrutinizing correct response rates pertaining to histological categorization and the benign or malignant assessment of tumor images. Further, an assessment of concordance between the original and the synthetic images was conducted utilizing the kappa coefficient.
This investigation underscores the efficacy of our DCGAN model in generating images closely resembling the originals. These generated images hold potential utility for medical students and fledgling physicians in honing their skills in interpreting breast ultrasound images.
Patients
The subjects were patients who underwent breast ultrasound examinations at Tokyo Medical and Dental University Hospital from September 2014 to August 2022. The inclusion criteria were female patients whose masses were diagnosed as benign or malignant by pathological analysis or by >1-year follow-up examinations at our hospital. The exclusion criteria were (a) patients who were treated with hormonal therapy, chemotherapy, or radiation therapy, (b) patients who were <20 years old, and (c) patients who were unable to express their consent due to illness or advanced age.
Our medical ethics committee approved this retrospective study and waived the requirement for written informed consent from patients (approval number: M2019-232; approval date: 13 December 2019). All methods were carried out in accordance with relevant guidelines and regulations (Declaration of Helsinki).
Ultrasound Examinations
Five radiologists with 4-20 years of experience performed the ultrasound examinations on an Aplio XG scanner with an 8.0 MHz linear probe PLT-805AT (Toshiba Medical Systems, Tochigi, Japan), an Aplio 500 scanner with an 8.0 MHz linear probe PLT-805AT (Toshiba Medical Systems, Tochigi, Japan), or a LOGIC E10s scanner with a linear matrix probe ML6-15-D (GE Healthcare, Chicago, IL, USA). The radiologists acquired static images in the vertical and horizontal planes and measured the maximum diameter of the masses. The investigator was aware of the clinical and mammographic findings at the time of the ultrasound examination. The patients underwent the examination in a supine position with their arms down.
Data Set
One medical student and one radiologist specializing in breast cancer imaging extracted typical breast ultrasound images of five histological types associated with the relevant clinical course and pathology results.
To achieve uniform image quality, which is critical as varying lesion sizes can degrade the quality of synthesized images, this study specifically targeted lesions smaller than 15 mm.
In the study, while pathology results were the gold standard for determining the histological type, some benign masses, such as typical cysts and fibroadenomas, were diagnosed as benign based on imaging findings and clinical course, without the need for pathological examination. A total of 1008 breast ultrasound images were extracted. Approximately 200 ultrasound images of breast masses were collected for each histological type. The histological types included cysts and fibroadenomas for benign masses and invasive ductal carcinoma of the breast (scirrhous, solid, and tubule-forming types) for malignant masses. These three histological types are known to comprise a large portion of invasive breast cancers [26].
Details of the data for each histological type are shown in Table 1. The viewing software TFS-01 (Toshiba Medical Systems, Tochigi, Japan) was used to convert the ultrasound Digital Imaging and Communications in Medicine (DICOM) images to Joint Photographic Experts Group (JPEG) figures, which were trimmed to 40 × 40 mm squares that included the chest wall in Microsoft Paint (Microsoft, Redmond, WA, USA) for analysis.
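A minimal sketch of this preprocessing step is given below, assuming grayscale frames and a known pixel pitch. The mm-per-pixel value, file names, and lesion center are illustrative; ultrasound DICOMs often store calibration in SequenceOfUltrasoundRegions rather than PixelSpacing, so the calibration step must be adapted to the actual data:

```python
import numpy as np
import pydicom
from PIL import Image

MM_PER_PIXEL = 0.10   # assumed calibration (mm per pixel)
CROP_MM = 40          # 40 x 40 mm square including the chest wall

def dicom_to_cropped_jpeg(src, dst, center_xy):
    ds = pydicom.dcmread(src)
    arr = ds.pixel_array.astype(np.float32)
    span = float(arr.max() - arr.min()) or 1.0          # avoid divide-by-zero
    img = Image.fromarray((255 * (arr - arr.min()) / span).astype(np.uint8))

    half = int(CROP_MM / MM_PER_PIXEL / 2)
    cx, cy = center_xy                                  # lesion center, pixels
    img.crop((cx - half, cy - half, cx + half, cy + half)).save(dst, "JPEG")

dicom_to_cropped_jpeg("mass.dcm", "mass.jpg", center_xy=(320, 260))
```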
Image Synthesis
Image synthesis was performed on a DEEPstation DK-1000 (UEI, Tokyo, Japan) containing a GeForce GTX 1080 graphics-processing unit (NVIDIA, Santa Clara, CA, USA), a Core i7-8700 central processing unit (Intel, Santa Clara, CA, USA), and the Deep Analyzer graphical user interface-based deep learning tool (GHELIA, Tokyo, Japan). DCGAN was used to construct the images [23]. DCGAN represents a significant advancement in the field of generative models, particularly in image generation. In a DCGAN, the discriminator, which distinguishes between real and generated images, comprises multiple layers: strided convolution layers that reduce the spatial dimensions of the input, batch normalization layers for stabilizing learning by normalizing the input to each layer, and LeakyReLU activations, which allow a small gradient when the unit is inactive, preventing the dying ReLU problem. The generator, responsible for creating images, uses convolutional-transpose layers that perform the reverse of convolution, effectively upscaling the input to a larger spatial dimension. It also utilizes batch normalization layers and ReLU activations, known for their efficiency in deep networks. The unique aspect of DCGAN is the use of these strided transpose-convolution layers, enabling it to upscale a latent vector into a volume of the same shape as an image, bridging the gap between the latent space and the image space effectively [23].
The parameters for the generator and discriminator were the same as those reported in previous studies [23][24][25]: optimizer algorithm = Adam (lr = 0.0002, β1 = 0.5, β2 = 0.999, eps = 8 × 10³). The image data were input and output at 256 × 256 pixels. After building the models, we generated 10 images with 200 epochs. In this study, a radiologist evaluated the quality of images synthesized at various epochs, and as a result, 200 epochs were selected for use. Moreover, we randomly selected 10 original images. Figure 1 shows five examples of the synthetic and original breast ultrasound images.
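A condensed PyTorch sketch of the building blocks described above (strided convolutions, batch normalization, LeakyReLU/ReLU, transpose convolutions, and the stated Adam settings) is given below. It is illustrative rather than the Deep Analyzer implementation, and for brevity it produces 64 × 64 images; the 256 × 256 resolution used in this study follows the same pattern with two additional stride-2 stages on each side:

```python
import torch
import torch.nn as nn

Z = 100  # latent vector size

def g_block(cin, cout):  # upsampling block of the generator
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1, bias=False),
                         nn.BatchNorm2d(cout), nn.ReLU(True))

def d_block(cin, cout):  # strided downsampling block of the discriminator
    return nn.Sequential(nn.Conv2d(cin, cout, 4, 2, 1, bias=False),
                         nn.BatchNorm2d(cout), nn.LeakyReLU(0.2, True))

generator = nn.Sequential(
    nn.ConvTranspose2d(Z, 512, 4, 1, 0, bias=False),            # 1x1 -> 4x4
    nn.BatchNorm2d(512), nn.ReLU(True),
    g_block(512, 256), g_block(256, 128), g_block(128, 64),     # -> 32x32
    nn.ConvTranspose2d(64, 1, 4, 2, 1, bias=False), nn.Tanh())  # -> 64x64

discriminator = nn.Sequential(
    nn.Conv2d(1, 64, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, True),
    d_block(64, 128), d_block(128, 256), d_block(256, 512),     # -> 4x4
    nn.Conv2d(512, 1, 4, 1, 0, bias=False), nn.Sigmoid())       # real/fake score

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))

fake = generator(torch.randn(8, Z, 1, 1))      # a batch of 8 synthetic images
print(fake.shape, discriminator(fake).shape)   # [8, 1, 64, 64], [8, 1, 1, 1]
```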
Image-Evaluation Method
Two radiologists (with 13 and 7 years of experience) were given individual reading tests using the viewer software EV Insite R (PSP Corporation, Tokyo, Japan). The readers conducted their image interpretations without knowledge of the patients' clinical information or other imaging results, such as mammography.
The reading test consisted of a total of 50 randomly arranged synthetic images of 5 histological types (10 each): 2 benign masses (cysts and fibroadenomas) and 3 malignant masses (scirrhous, solid, and tubule-forming invasive ductal carcinoma types). The readers rated each image for (1) benign or malignant status and (2) histological type.
To minimize the impact of the first reading test, a subsequent reading test using original ultrasound images was conducted more than one month after the initial test with synthetic images. This delay was intended to reduce recall bias and ensure an unbiased evaluation of the original images.
The percentage of correct responses for the 50 questions as a whole, the percentage of benign correct responses, the percentage of malignant correct responses, and the percentage of correct responses for each histological type were calculated. Furthermore, the agreement between the two readers' answers was assessed by determining the kappa statistic, which was calculated by comparing the two readers' choices from five possibilities and was interpreted as follows: <0.20, slight; 0.21-0.40, fair; 0.41-0.60, moderate; 0.61-0.80, fairly high; 0.81-1.00, almost perfect.
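As a minimal sketch of the agreement calculation (Cohen's kappa, computed here with scikit-learn; the label sequences are placeholders, not the study data):

```python
from sklearn.metrics import cohen_kappa_score

TYPES = ["cyst", "fibroadenoma", "scirrhous", "solid", "tubule-forming"]
# Hypothetical per-image choices of the two readers.
reader1 = ["cyst", "scirrhous", "solid", "fibroadenoma", "tubule-forming"]
reader2 = ["cyst", "scirrhous", "solid", "cyst", "tubule-forming"]

kappa = cohen_kappa_score(reader1, reader2, labels=TYPES)
print(f"kappa = {kappa:.3f}")  # 0.61-0.80 would indicate fairly high agreement
```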
Results
Table 1 shows that the cyst group comprised 202 patients with a mean age of 50.2 years, the fibroadenoma group consisted of 201 patients with a mean age of 50.4 years, the scirrhous-type group included 201 patients with a mean age of 63.3 years, the solid-type group involved 200 patients with a mean age of 63.9 years, and the tubule-forming-type group encompassed 202 patients with a mean age of 58.4 years. Notably, the mean age of the patients with malignant tumors tended to surpass that of the patients with benign tumors. Moreover, the mean maximum diameter of the three invasive ductal carcinoma types also surpassed that of the benign tumors, exceeding 8.57 mm.
The percentages of correct diagnoses are shown in Table 2. For distinguishing benign from malignant cases, both readers achieved 100% accuracy with the synthetic images, while their performance varied slightly with the original images (94% and 96%). In identifying the histological types, reader accuracy with the synthetic images (86% and 78%) was comparable to that with the original images (88% and 78%).
A comparison of the benign and malignant types showed that the diagnostic rates for cysts and fibroadenomas, which are benign, tended to be higher than those for the malignant types. Among the malignant types, the scirrhous-type IDC tended to have a higher percentage of correct diagnoses. The kappa coefficient, which evaluates the agreement between the two readers' responses, was 0.650 in the reading test using the original images and 0.625 in the reading test using the synthetic images, indicating good agreement.
Discussion
In this study, the DCGAN was used to generate breast ultrasound synthetic images showing the characteristics of each pathological type. Experienced radiologists were able to identify the histological type of the synthetic images, with a positive diagnostic rate as high as that of the original ultrasound images. Furthermore, the kappa coefficient indicated good agreement between the two radiologists' ratings based on the synthetic and original images. These results indicated that the DCGAN-generated synthetic ultrasound images had clinical properties similar to those of the original ultrasound images.
In recent years, breast imaging diagnostics have advanced significantly, with improvements in imaging technology and diagnostic accuracy [27][28][29][30][31][32]. This progress is not only due to enhancements in the quality of mammography and ultrasound images and the establishment of more refined reading techniques but also to the emergence of new modalities such as contrast-enhanced Magnetic Resonance Imaging (MRI), Positron Emission Tomography/Computed Tomography (PET/CT), and PET mammography. These advancements have made substantial contributions to the field of breast cancer detection and diagnosis. Contrast-enhanced MRI has become increasingly important in detecting breast cancer, particularly in women with dense breast tissue where traditional mammography may be less effective. This modality offers superior sensitivity in identifying malignancies, making it a valuable tool in comprehensive breast cancer screening and diagnosis. PET/CT, which combines the anatomical detail provided by Computed Tomography (CT) with the metabolic insight of Positron Emission Tomography (PET) imaging, has emerged as a powerful modality for detecting metastatic breast cancer and assessing treatment response. Similarly, PET mammography, a novel approach that integrates the high-resolution anatomical imaging of mammography with the functional imaging capabilities of PET, offers enhanced diagnostic accuracy, especially in complex cases. These technological advancements in breast imaging have not only improved the ability to detect breast cancer at earlier stages but also enhanced the precision in characterizing tumors, thereby facilitating more personalized and effective treatment planning. The effective utilization of AI by radiologists is expected to not only improve diagnostic performance in the future but also to significantly reduce radiologists' workload and contribute to healthcare cost savings. AI's potential to streamline diagnostic processes and enhance accuracy promises both operational efficiency and reduced strain on healthcare systems, making it a valuable tool in modern medical practice [9,[33][34][35]. GAN is one of the most remarkable methods and has been applied to medical imaging and proven useful in various areas, such as image enhancement, registration, generation, reconstruction, and transformation between images. In our previous study, the DCGAN was used to generate breast ultrasound synthetic images with a virtual complementary image of a tumor [23,24]. In this study, after adjusting the quality of the generated images, synthetic images of two benign and three malignant masses were generated, and their clinical characteristics were evaluated. We succeeded not only in generating high-quality breast ultrasound synthetic images but also in generating synthetic images based on the features of each histological type.
Breast ultrasonography is a diagnostic modality that relies on the skill and experience of the operator. Therefore, it is necessary to learn from a large number of cases to perform accurate examinations and diagnoses, and acquiring a sufficient number of cases requires much time and money, which is problematic. In addition, to improve the diagnostic accuracy for rare diseases, it is necessary to share these cases among multiple medical institutions to maintain learning opportunities and training in diagnostic images of diseases rarely encountered. However, there are limitations in data handling, as the use of test results and medical data for medical research requires the patient's explicit consent from the standpoint of privacy protection [36]. The ethical issue of protecting patient privacy often restricts the sharing of real patient images for purposes of medical research.
Previous studies have used medical synthetic images created by GANs to train convolutional neural networks [37][38][39], so the use of GANs may reduce the cost and time of data collection and help solve the problem of insufficient datasets in case learning. Furthermore, based on this study's finding that breast ultrasound images generated by GAN have the same clinical characteristics as the original examination images, the learning effect of training with secondary images produced by GAN may be as effective as that of examining original images. In summary, using GAN-generated images for medical learning overcomes privacy issues and can greatly facilitate the sharing of research data, enhancing both research and the training of physicians; this will help maintain the expertise needed for highly accurate diagnosis, including in rare cases, and improve familiarity with these research and training techniques. This methodology can enhance trainees' exposure to a diverse range of cases, including rare pathologies, thereby broadening their diagnostic expertise and adaptability in clinical settings. It is probable that the use of GAN-synthesized breast ultrasound images will increase at an ever-accelerating pace.
There were several limitations in this study that should be considered. First, this retrospective study was conducted at a single institution. The fidelity of image quality may vary with the age of the ultrasound equipment and differences among ultrasound equipment manufacturers. While this study excluded images with artifacts from the collected dataset, it remains conceivable that their presence could influence the quality of the generated images. Therefore, a larger multicenter study is needed to assess the validity of our study. Similarly, although the original images collected in this study came from a relatively small number of cases, we believe that training on a larger volume of clear, noise-free images of the lesion area, with ultrasound image artifacts removed, would yield more sophisticated generated images. Second, not all of the patients whose original images were used in this study were pathologically diagnosed. Therefore, the final histopathological findings may differ from the diagnosis given by the physician at the time of the ultrasound examination, which was used to select the patients in this study. Third, this study was limited to five representative masses observed in breast tissue. Therefore, exploring other histological types of breast masses in future research could potentially expand the clinical application of synthetic images generated by GANs to a broader range of scenarios. Subsequent research will conduct supplementary experiments to enhance precision in image synthesis, employing well-balanced datasets.
Conclusions
The synthetic images generated by the DCGAN accurately represented the characteristics of each of the five histological types studied, demonstrating that the physician readers could make diagnoses similar to those made from the original images. The use of GAN-synthesized breast ultrasound images in medical education offers an innovative approach to training, circumventing limitations imposed by privacy concerns and data scarcity.
Figure 1. Representative generated and original images for each histological type. (A) cyst; (B) fibroadenoma; (C) invasive ductal carcinoma (IDC)-scirrhous type; (D) IDC-solid type; and (E) IDC-tubule forming type. The images on the right are originals, and those framed in yellow on the left are synthesized (generated) images.
Table 1. Ultrasound image data by histological type.
Table 2. Percentage of correct diagnoses by readers (reading tests using either synthetic or original images; each set interpreted 1 month apart). | 2023-12-23T16:05:57.544Z | 2023-12-21T00:00:00.000 | {
"year": 2023,
"sha1": "346571081037beebb51e7b0b84b7e73f308be3cb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1648-9144/60/1/14/pdf?version=1703153839",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ecae7b6e12c286f12296aea7c221000016fd9a43",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250179918 | pes2o/s2orc | v3-fos-license | Investigating associative, switchable and negatable Winograd items on renewed French data sets
The Winograd Schema Challenge (WSC) consists of a set of anaphora resolution problems resolvable only by reasoning about world knowledge. This article describes the update of the existing French data set and the creation of three subsets allowing for a more robust, fine-grained evaluation protocol of WSC in French (FWSC): an associative subset (items easily resolvable with lexical co-occurrence), a switchable subset (items where the inversion of two keywords reverses the answer) and a negatable subset (items where applying negation to the verb reverses the answer). Experiments on these data sets with CamemBERT reach SOTA performance. Our evaluation protocol additionally showed that the higher performance could be explained by the existence of associative items in FWSC. Besides, increasing the size of the training corpus improves the model's performance on switchable items, while the impact of a larger training corpus remains small on negatable items.
Introduction
A Winograd schema (Levesque, 2011) consists of two anaphora resolution problems (items) differing by two keywords (successful/available in (1)) which change the answer to a question targeting the referent (Paul and George) of an ambiguous anaphor (he). A Winograd item is supposed to be Google-proof or non-associative, meaning that it should be insensitive to simple statistics such as lexical co-occurrence 1 . The idea behind this challenge is that if a system is capable of solving these schemas, it should be capable of commonsense reasoning and hence, could be called "intelligent".
(1) Paul tried to call George on the phone, but he wasn't successful/available. Question: Who wasn't successful/available? Response: Paul/George
Since the launch of the first Winograd Schema Challenge (Morgenstern et al., 2016), the reference data set in English has evolved from 273 (WSC273) to 285 items (WSC285). 2 A large variety of methods have been explored to resolve the WSC, including logical formalisms (e.g. Bailey et al., 2015), information retrieval approaches (e.g. Emami et al., 2018), neural networks (e.g. Liu et al., 2017) and neural language models (e.g. Trinh & Le, 2018). The recent literature is dominated by the use of large pretrained language models such as GPT (Radford et al., 2019). SOTA performance has been reached by Sakaguchi et al. (2020), who fine-tuned RoBERTa (Liu et al., 2019) on a large data set they crowd-sourced (WinoGrande) and achieved 90.1% accuracy on WSC273, vs. 92.1% for humans (Bender, 2015) and 50% at random.
Despite the accuracy of models, which is now close to human performance, it can be questioned whether systems have become truly capable of commonsense reasoning. Trichelair et al. (2019) found that not all items are equally robust and categorized items into two subsets: associative and switchable. An associative item is an item where the correct answer can be deduced by solely looking at the clause containing the pronoun/possessive adjective (but he wasn't successful/available in (1)). In a switchable item, the referents can be switched (Paul and George in (1)), causing the correct answer to shift accordingly. It was demonstrated that the then-state-of-the-art performance of Trinh & Le (2018) was mainly due to the simpler associative subset and was, on the other hand, insensitive to the switching operation. The evaluation on separate subsets makes it possible to investigate the performance of a system on difficult items and, moreover, tests the robustness of a model's decisions when items are slightly modified.
When the French version of WSC (214 items, henceforth FWSC214) was developed (Amsili & Seminck, 2017a), it was found that some items were associative. However, it was considered that this would not be of much influence, as a system trying to exploit this feature could obtain at most 55% accuracy, vs. a human baseline of 93.6% (Amsili & Seminck, 2017b). A more recent approach (Seminck et al., 2019) using small pretrained language models (without fine-tuning on WSC-problems) also pointed at some associativity in the data set, but failed on a large number of other items, performing only at 52% accuracy.
The aim of the present work is to get a better understanding of Winograd items by dividing FWSC into three subsets. After transforming FWSC214 to FWSC285 based on the most recent version of WSC 3 , we identified an associative and a switchable subset inspired by Trichelair et al. (2019). Moreover, we proposed a new negatable subset which can help test a model's sensitivity to negation. Then, we fine-tuned the CamemBERT model (Martin et al., 2020) on a machine translation of WinoGrande and compared its performance on our three different subsets. 4
1. Even though nowadays Google-proofness and associativity are often taken to refer to the same property, there is still a difference in the methods used to ensure Google-proofness vs. non-associativity. In the first case, the idea is that counting co-occurrences in a corpus shouldn't suffice to choose the appropriate answer: for instance, in the schema A tree fell on the roof, we'll have to remove/fix it, any search in a corpus will give a higher co-occurrence count to the pair roof/fix vs. roof/remove; in the second case, the idea is that a speaker hearing only the question and the possible answers will be biased towards one of the answers. The bias may come from lexical co-occurrence or from world knowledge.
2. https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html, consulted February 23, 2022.
3. We extended FWSC214 based on English items in the spirit of ensuring a certain comparability with the English data set; French-only items would be added in the future.
Update of the current French data set
In this section, we will explain how we first adapted the existing FWSC214 to FWSC285 and how we created the associative, switchable and negatable subsets.
From FWSC214 to FWSC285
12 items were first removed from FWSC214 because they were neither in WSC273 nor in WSC285. Minor modifications were then made so that the items were closer to the English version. For example, "Nicolas" was replaced by "L'homme" in (2) to better match the English version. We also modified certain items to improve their naturalness. For instance, we changed "était" (was) to "avait l'air" (looked) in (3), since it is odd to make direct claims about a fish's feelings.
Then, a total of 83 new items were translated from WSC285, with significant adaptations of 13 items.
Since an item is only valid if the candidate answers have the same number and gender, we had to replace some answers with nouns of the opposite gender, as in (4), where "le plateau de théâtre" (the stage, masculine) was chosen instead of "la scène" (feminine). Besides, some oppositions in English do not have an equivalent in French. In these cases, we established the opposition on other elements, as in (5).
(4) WSC285 : There is a pillar between me and the stage, and I can't see/see around it. FWSC285 : Il y a un pilier entre moi et le plateau de théâtre, et je n'arrive pas à le voir/contourner.
(5) WSC285 : They broadcast an announcement, but a subway came into the station and I couldn't hear/hear over it. FWSC285 : Ils ont diffusé une annonce quand une voiture est arrivée dans le parking souterrain. La voiture/l'annonce était trop bruyante et je n'ai pas pu l'entendre.
Nuances related to some English verbs are also hard to translate. (6) gives an example where Shakespeare can refer either to the author or to his writings, depending on the birthdate of the other author. We adapted the item by establishing a new opposition based on word order. The last type of problem relates to naturalness.
(6) This book introduced Shakespeare to Ovid/Goethe ; it was a major influence on his writing. Adaptation : Ce livre a fait découvrir (Ovide à Shakespeare)/(Shakespeare à Ovide) ; il a eu une influence majeure sur son écriture.
All the new items were first translated with DeepL 5 . The main author of this work produced a first adapted version, which was in turn improved and validated by a native French speaker who also speaks English. Finally, a monolingual native French speaker was consulted to improve the naturalness of the translated items without changing their meaning.
Associative, switchable and negatable subsets
We designed a psycholinguistic questionnaire to determine which items are associative. Human participants were presented with only the question and the possible answers and were asked whether, with no context, one answer seemed more likely than the other. Participants were explicitly instructed to look for biases, with the possibility of answering "no bias" (7). While Trichelair et al. (2019) considered only the association between a keyword and the right answer, our design differentiates positive and negative associativity, demonstrated respectively by (8-a) and (8-b).
(7) Qu'est-ce qui est trop grand ? (What is too large?) 1. la coupe (the trophy) 2. la valise (the suitcase) 3. pas de biais (no bias)
(8) a. Positively associative: Qu'est-ce que je dois réparer ? (What should I repair?) Correct answer: le toit (the roof) Wrong answer: l'arbre (the tree)
b. Negatively associative: Qu'est-ce qui avait l'air délicieux ? (What looked delicious?) Correct answer: le ver (the worm) Wrong answer: le poisson (the fish)
Among our 42 participants, two were excluded because their average response time was too short (< 1 s). The experiments lasted 23 to 33 minutes. Figure 1 shows the distribution of items according to the percentage of subjects answering correctly (which we consider correlated with associativity), and the isolated right cluster clearly indicates the existence of positively associative items, for which more than 76% of the subjects chose the right answer without looking at the context. Using the same method, we identified 3 negatively associative items, where more than 81% of the participants chose the wrong answer. We also established a switchable subset of 141 items and a negatable subset of 38 items. An item is negatable if a verb in the item can be negated without inducing semantic awkwardness. Naturally, both switching and negation should also induce an answer switch. We consulted two native French speakers, and an item is only classified as switchable or negatable if both speakers agree. (9) shows these two operations on one item.
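The subset assignment described above can be expressed as a simple thresholding rule over the questionnaire responses. The sketch below is a hypothetical illustration: only the 76% and 81% thresholds come from this study, while the per-item response counts and the classify helper are invented for the example.

```python
# A hedged sketch of the associativity assignment; response counts are
# hypothetical. An item is treated as positively associative when at least
# 76% of subjects pick the correct answer without context, and negatively
# associative when at least 81% pick the wrong one.
def classify(correct, wrong, no_bias):
    n = correct + wrong + no_bias
    if correct / n >= 0.76:
        return "positively associative"
    if wrong / n >= 0.81:
        return "negatively associative"
    return "non-associative"

print(classify(34, 3, 3))    # positively associative (85% correct)
print(classify(4, 33, 3))    # negatively associative (~83% wrong)
print(classify(15, 12, 13))  # non-associative
```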
Experiments with CamemBERT on FWSC285 and its subsets
We fine-tuned CamemBERT large on the machine-translated version of WinoGrande. Concretely, we reframed the WSC task as a sentence classification problem by replacing the pronoun with the two candidate answers. The correct sentence is marked 1 and the wrong one 0 (10):
(10) Original item: Claire a frappé à la porte de Sylvie, mais elle n'a pas eu de réponse.
1 → Claire a frappé à la porte de Sylvie, mais Claire n'a pas eu de réponse.
0 → Claire a frappé à la porte de Sylvie, mais Sylvie n'a pas eu de réponse.
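The following sketch illustrates this scoring scheme at inference time, assuming the Hugging Face transformers library and the public camembert/camembert-large checkpoint. The naive single-pronoun substitution and the freshly initialized classification head are simplifying assumptions; the actual system is fine-tuned on the translated WinoGrande before being scored this way.

```python
# A minimal sketch of the sentence-classification framing described above,
# assuming the Hugging Face transformers API; without fine-tuning, the
# classification head is randomly initialized, so this only illustrates
# the mechanics, not the trained system.
import torch
from transformers import CamembertTokenizer, CamembertForSequenceClassification

tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-large")
model = CamembertForSequenceClassification.from_pretrained(
    "camembert/camembert-large", num_labels=2)
model.eval()

item = "Claire a frappé à la porte de Sylvie, mais elle n'a pas eu de réponse."
candidates = ["Claire", "Sylvie"]

# Naive substitution of the ambiguous pronoun by each candidate; a real
# pipeline would locate the target pronoun precisely, not string-replace.
variants = [item.replace("elle", c, 1) for c in candidates]

enc = tokenizer(variants, padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits        # shape: (2 variants, 2 labels)
probs = logits.softmax(dim=-1)[:, 1]    # probability of label 1 ("correct")
print(candidates[int(probs.argmax())])
```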
We used DeepL to translate WinoGrande into French. The first issue with this approach is that the second occurrence of the answer candidate (Claire or Sylvie in (10)) is sometimes translated back to a pronoun. It is also important to note that the translated corpus does not comply strictly with the definition of WSC items, because DeepL may not translate the same word consistently. For instance, because can be translated as parce que in one sentence and car in another. Since WinoGrande training corpora come in five sizes, from xs (160 items) to xl (40 938 items), each included in the next, we used all these partitions in various experiments on our data sets (Table 1).
As shown by Table 1, we achieved 68% accuracy on FWSC285 and 66% on FWSC214 with the xl training set. Our model, using a simple classification protocol, thus achieved a new SOTA performance. After testing on the entire FWSC285, we ran the best model (trained on xl) on the positively associative and non-associative subsets of FWSC285. The best model scored 90% accuracy on the positively associative subset, while a test on 10 random samples of the same size as the positively associative subset, all drawn from the non-associative items, yielded only 59% accuracy on average. These experiments highlight the significant contribution of the associative subset to the overall performance, also observed in studies regarding WSC in English (Trichelair et al., 2019). It is also worth noting that the best model fails on all 3 negatively associative items. Although the sample is too small to conclude that the model has simply used co-occurrence as its main cue to tackle FWSC, it would be interesting to build more negatively associative items in the future to test the validity of this hypothesis.
TABLE 1 - Accuracy on FWSC285 depending on the data set and the size of the training set
We further evaluated the model's robustness against perturbations on the switchable and negatable subsets. It can be seen from Table 1 that the accuracy on the switched subset improves consistently when the training set size is enlarged, while the score on the negated subset remains unchanged beyond the medium-size training set. Besides, the accuracy on the negated subset is significantly lower than on the unnegated subset, even when more and more training data are used. This highlights the interest of our negatable subset. Enlarging the size of our training corpus does improve the robustness of our model to the switching operation, probably because a large amount of data facilitates a more abstract and general representation of the candidate answers. The sensitivity to negation, however, doesn't increase even when the largest corpus is used. This insensitivity is reminiscent of the study of Ettinger (2020), where BERT, in a zero-shot setting, fails to understand negation in a cloze task (fill-in-the-blank sentences). Since a robust commonsense reasoning system must include the understanding of negation, it would be interesting to build more negatable items in the future to test models on their ability to understand negation.
Discussion
Although we achieved SOTA performances on FWSC214 (66%) and FWSC285 (68%), these performances are expected, since we used a SOTA pretrained language model fine-tuned on a large corpus, while previous studies on FWSC used models trained on much smaller corpora and could not leverage the power of transfer learning. Besides, our performances were partly due to the existence of associative items. The main contribution of the present work is thus the update of current Winograd items in French and the creation of three subsets allowing for a more robust evaluation protocol of the Winograd Schema Challenge. More advanced metrics could be derived using our subsets, such as the group-scoring method used by Elazar et al. (2021), which assigns a point only if both items of one schema get solved. In our case, an item could be considered as solved only if its switched and negated versions (if they exist) get solved as well. The negatable subset of Winograd items seems particularly challenging; it would be interesting to see if models enhanced with information about the syntactic structure of items (e.g., Xu et al., 2021) can perform better. It is also worth noting that both operations (switching and negation) proposed in this study lead to an answer switch. Abdou et al. (2020) proposed seven perturbations (tense switch, number switch, etc.) which didn't alter the answer and showed that language models were more likely than humans to switch the answer in case of some perturbations (e.g., number or gender alternations). Similar data sets could be created in French to allow further investigations into humans' and language models' sensitivity to linguistic perturbations.
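The group-scoring idea mentioned above is easy to state precisely: a schema earns a point only if every one of its variants is answered correctly. A hedged sketch follows; the dictionary layout mapping schema ids to per-variant booleans is a hypothetical choice, not a format from the cited work.

```python
# A minimal sketch of group scoring: an item counts as solved only if all
# of its variants (original, switched, negated when they exist) are solved.
def group_score(results_by_schema):
    """results_by_schema: {schema_id: [bool, ...]}, one bool per variant."""
    solved = sum(all(variants) for variants in results_by_schema.values())
    return solved / len(results_by_schema)

example = {"s1": [True, True], "s2": [True, False], "s3": [True, True, True]}
print(group_score(example))  # 2/3: s2 fails because one variant is wrong
```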
One major limitation related to our training process is the quality of the fine-tuning corpus. It is difficult to know whether the insensitivity to negation observed in Table 1 is due to our training strategy or whether the general quality of our corpus hinders the model from learning. It is thus necessary to build a fine-tuning corpus of higher quality, either by improving the current machine-translated items or by designing a crowdsourcing procedure as in Sakaguchi et al. (2020).
A final note concerns the WSC task itself. With the quasi-human performance achieved on WSC, a debate has been raised on whether the challenge has been defeated or not (Kocijan et al., 2022). However, the same performance is far from being reached on FWSC (the best performance is 68% in our work, vs. 93.6% achieved by humans (Amsili & Seminck, 2017b)). Also, Elazar et al. (2021) raised the question of whether commonsense reasoning ability is inherent to the language model or learned during the fine-tuning process, and called for more studies using a zero-shot setting. We would like to point out that, with or without fine-tuning, the same evaluation protocol is always necessary to test the robustness of a model's decisions. | 2022-07-02T13:04:17.498Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "d47a7df0e731d67644eeb1ab9d5ea6b8fe1d5b25",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "d47a7df0e731d67644eeb1ab9d5ea6b8fe1d5b25",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
268986969 | pes2o/s2orc | v3-fos-license | Reversing valproic acid-induced autism-like behaviors through a combination of low-frequency repeated transcranial magnetic stimulation and superparamagnetic iron oxide nanoparticles
Transcranial magnetic stimulation (TMS) is a neurostimulation device used to modulate brain cortex activity. Our objective was to enhance the therapeutic effectiveness of low-frequency repeated TMS (LF-rTMS) in a rat model of autism spectrum disorder (ASD) induced by prenatal valproic acid (VPA) exposure through the injection of superparamagnetic iron oxide nanoparticles (SPIONs). For the induction of ASD, we administered prenatal VPA (600 mg/kg, I.P.) on the 12.5th day of pregnancy. At postnatal day 30, SPIONs were injected directly into the lateral ventricle of the brain. Subsequently, LF-rTMS treatment was applied for 14 consecutive days. Following the treatment period, behavioral analyses were conducted. At postnatal day 60, brain tissue was extracted, and both biochemical and histological analyses were performed. Our data revealed that prenatal VPA exposure led to behavioral alterations, including changes in social interactions, increased anxiety, and repetitive behavior, along with dysfunction in stress coping strategies. Additionally, we observed reduced levels of SYN, MAP2, and BDNF. These changes were accompanied by a decrease in dendritic spine density in the hippocampal CA1 area. However, LF-rTMS treatment combined with SPIONs successfully reversed these dysfunctions at the behavioral, biochemical, and histological levels, introducing a successful approach for the treatment of ASD.
SPIONs characteristics
According to TEM, the nanoparticles displayed an appropriate shape (Fig. 1a) and had a diameter of 28.6 nm after the coating process, calculated using the ImageJ program based on 100 random particles (Fig. 1b). The DLS test revealed that the SPIONs possessed a negative charge (−11.7 mV) and exhibited a hydrodynamic diameter of 71.2 nm (Fig. 1c).
Furthermore, the MTT assay data demonstrated no significant change in cell viability across all concentrations (0-100 µg/ml) compared to the control group (Fig. 1d). The graph (Fig. 1e) illustrates that the coated nanoparticles exhibit superparamagnetic properties (zero remanence and coercivity) with magnetization saturation at 43 emu/g. Moreover, the FTIR results confirmed the successful binding of Fe 3 O 4 to chitosan (Fig. 1f). The total SPION content in the hippocampus 30 days after lateral ventricle injection was 21.45 ± 0.69 µg/ml, whereas it was 9.96 ± 0.14 µg/ml in rats that did not receive SPIONs. Figure 1g displays the structure of the coated SPIONs.
Three-chamber test
The effects of LF-rTMS on social behavior, investigated in this study, are presented in Fig. 2. The sociability index was calculated by dividing the time spent exploring the social chamber by the total time spent exploring both the social and nonsocial chambers. One-way ANOVA revealed significant main effects of autism and rTMS treatment on the sociability index (F (5, 41) = 25.89, p < 0.001; Fig. 2a). The Tukey post hoc test indicated a significant decrease in the sociability index for the ASD group compared to the sham group (p < 0.001). Additionally, rTMS treatment significantly increased the sociability index in both the LF + ASD and LF + NP + ASD groups (p < 0.001).
The direct social interaction index was defined as the time rats spent sniffing social stimuli minus the time spent sniffing nonsocial stimuli, divided by the sum of these times. Data analysis revealed significant differences among the groups (F (5, 42) = 24.57, p < 0.001; Fig. 2b). Post hoc analysis demonstrated significant distinctions between the ASD group and the sham group (p < 0.001). Furthermore, rTMS treatment in both the LF + ASD and LF + NP + ASD groups showed that rats displayed significant social responses compared to untreated autistic rats (p < 0.001). In the LF + NP + ASD group, a significant increase in direct social interaction was observed compared to the LF + ASD group (p < 0.05).
The social novelty index was defined as the time spent investigating novel social stimuli minus the time spent investigating familiar stimuli, divided by the sum of the abovementioned times. The data revealed significant differences between the experimental groups (F (5, 42) = 6.494, p < 0.001; Fig. 2c). Autistic rats exhibited a significant reduction in social novelty preference (p < 0.01). The combination of LF-rTMS treatment with SPIONs demonstrated a significant treatment effect (p < 0.001). The three-chamber test is presented in Fig. 2d.
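The three indices defined in this section are simple ratios of measured durations. The sketch below restates them as functions; the example timings (in seconds) are hypothetical, not values from the study.

```python
# A minimal sketch of the three-chamber indices as defined above;
# all timing values are hypothetical placeholders, in seconds.
def sociability_index(t_social, t_nonsocial):
    # time in the social chamber over total chamber-exploration time
    return t_social / (t_social + t_nonsocial)

def interaction_index(t_sniff_social, t_sniff_nonsocial):
    # signed preference: +1 = only social sniffing, -1 = only nonsocial
    return (t_sniff_social - t_sniff_nonsocial) / (t_sniff_social + t_sniff_nonsocial)

def novelty_index(t_novel, t_familiar):
    # signed preference for the novel over the familiar conspecific
    return (t_novel - t_familiar) / (t_novel + t_familiar)

print(sociability_index(320.0, 140.0))  # ~0.70
print(interaction_index(95.0, 40.0))    # ~0.41
print(novelty_index(80.0, 55.0))        # ~0.19
```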
Marble burying test
The study's findings indicated significant differences in the marble burying test (F (5, 42) = 5.482, p < 0.001; Fig. 3). Multiple comparisons revealed that autistic rats buried significantly more marbles compared to the sham group (p < 0.01). However, the LF + ASD group showed significant differences compared to the autistic group (p < 0.01). Additionally, treatment with SPIONs led to significant improvements in the LF + NP + ASD group (p < 0.001).
Open field test
The experiment aimed to measure anxiety-like behavior and locomotion in rats using an apparatus where spending more time in the center indicated less anxiety. One-way ANOVA showed significant differences in anxiety-like behavior between the groups (F (5, 42) = 3.805, p = 0.006; Fig. 4a). The data revealed that autistic rats spent significantly less time in the center of the apparatus compared to the sham group (p < 0.05). However, compared to the autistic group, the LF + NP + ASD group demonstrated reduced anxiety-like behavior (p < 0.05).
Self-grooming time was also recorded while the rats were in the open field. Significant differences were observed between groups (F (5, 42) = 7.314, p < 0.001; Fig. 4b). ASD rats exhibited significantly more self-grooming behavior compared to the sham group (p < 0.001). LF-rTMS significantly reduced this behavior (p < 0.01), and when this treatment was combined with SPION injection, self-grooming behavior decreased even further (p < 0.001). Additionally, the rats' locomotion was recorded as the distance traveled in the apparatus. However, none of the groups showed significant differences in locomotion compared to the sham or ASD groups (Fig. 4c).
Forced swim test (FST)
The results of the present study indicated a statistically significant difference in the FST among the groups regarding the time of immobility (F (5, 42) = 6.085, p < 0.001; Fig. 5a). Post hoc analysis revealed that the time of immobility in the ASD group was significantly higher compared to the sham group (p < 0.01). However, treatment with LF-rTMS with SPIONs significantly decreased the time of immobility in rats (p < 0.01).
Furthermore, concerning climbing time, there were also significant changes among the experimental groups (F (5, 42) = 9.877, p < 0.001; Fig. 5b). The autistic group exhibited a significant reduction in climbing time compared to the sham group (p < 0.001). Treatment in the LF + NP + ASD group showed a significant increase in climbing time (p < 0.001).
Biochemical parameters
Statistical analysis of MAP2 levels in the hippocampus showed significant differences between groups in our study (F (5, 24) = 7.479, p < 0.001; Fig. 6a). The data demonstrated that the ASD group had significantly lower levels of MAP2 in hippocampal tissue compared to the sham group (p < 0.01). However, autistic rats treated with LF-rTMS with SPIONs showed a significant increase in MAP2 levels in the hippocampus (p < 0.05). Furthermore, one-way ANOVA of SYN levels in the hippocampus also showed significant differences among groups (F (5, 24) = 6.175, p < 0.001, Fig. 6b). SYN levels in the hippocampus were reduced in the ASD group (p < 0.01), and treatment with LF-rTMS, with or without SPIONs, significantly increased SYN levels (p < 0.01).
Additionally, BDNF levels exhibited significant differences in this study (F (5, 24) = 4.946, p = 0.003, Fig. 6c). VPA exposure in rats significantly reduced BDNF levels compared to the sham group (p < 0.01). When LF-rTMS treatment was combined with SPION injection, BDNF levels increased significantly in this region (p < 0.01).
Dendritic spine density
Figure 7 displays the spine density in the CA1 area of the hippocampus. Spine density exhibited significant differences in this area between groups (F (5, 30) = 4.593, p = 0.003). The ASD group displayed significantly reduced density compared to the sham group (p < 0.01), and LF-rTMS treatment with SPIONs significantly increased spine density in this area (p < 0.05).
Discussion
In this study, our aim was to use LF-rTMS and SPIONs for the treatment of the VPA-induced ASD model and to investigate the potential mechanisms in the hippocampus, given that the hippocampus plays a significant role in the pathophysiology of the VPA model of ASD. For this purpose, we demonstrated the dysfunctions caused by prenatal VPA in this model at the behavioral, biochemical, and histological levels. At the behavioral level, our data showed major dysfunction in social behavior, anxiety, and repetitive behavior. Furthermore, we observed a reduction in stress coping strategies in this model, which is noteworthy because the hippocampus plays a key role in stress management 30 . These behavioral dysfunctions are associated with reduced MAP2, SYN, and BDNF levels in the hippocampus, as well as a decrease in CA1 hippocampal dendritic spine density. After treatment with LF-rTMS, we observed some improvement. However, when this treatment was combined with SPION injection, the effectiveness of the treatment increased.
Previous studies suggest that rTMS can affect brain excitability, behavior, and physiological processes. Repeating rTMS for several sessions has cumulative effects, and all the mentioned outcomes become more pronounced. Magnetic pulses generated with a coil induce an electrical field when they reach the neurons, causing neural stimulation. The final outcome may be seen in a change in synaptic efficacy related to the excitability of the neuron and other chemical factors like BDNF 31 . This may eventually lead to changes in the behavioral symptoms of ASD.
Studies suggest that LF-rTMS may lower the risk of seizures in patients, primarily due to its inhibitory effects on the cortex when compared to HF-rTMS 18,32 . Moreover, multiple studies have explored the potential therapeutic benefits of LF-rTMS for both individuals with autism and animal models 18,33,34 .
While there have been reports suggesting that a human coil can stimulate the entire brain of small rodents 35 , it is worth noting that evidence indicates a reduction in magnetic field intensity with increased distance from the surface of the coil 36 . To address this limitation, some research has shifted focus toward the utilization of SPIONs and their magnetoelectric properties to effectively stimulate specific brain regions 37 . SPIONs, when exposed to a magnetic field such as that produced by TMS, can generate local magnetic pulses and stimulate surrounding areas 29,37 . These particles exhibit remarkable stability both outside and inside the body.
Our findings demonstrated that chitosan-coated SPIONs remain in the hippocampus for a period of one month. This aligns with a prior study in which Fe 3 O 4 particles persisted in the brain for over a month 25 . Additionally, our TEM images and FTIR spectra indicate the presence of well-formed chitosan-coated SPIONs and the successful bonding of chitosan with Fe 3 O 4 . These findings are also consistent with previous research on these particles 38 .
Studies have reported that the consumption of VPA by pregnant women during early pregnancy may increase the risk of ASD in their children 39 . Consequently, many studies suggest using VPA to induce ASD in small rodents 3 . There are several protocols for ASD induction in rodents with VPA. One study aimed to compare prenatal VPA-induced ASD to a postnatal VPA model, and the results suggested that ASD symptoms are more pronounced in offspring in the prenatal model, with a much lower mortality rate in offspring 40 . Although the most commonly used model for ASD induction is 600 mg/kg VPA at 12.5 days of pregnancy, the timing of exposure is crucial 41 . Studies have shown that day 12 of pregnancy in rats is the most critical day for the induction of ASD and social impairment 42 . Furthermore, several clinical and preclinical studies have explored the use of TMS as a therapeutic intervention for ASD 18,43,44 .
Our data showed that LF-rTMS intervention has the potential to reverse core autism behavioral deficits in the VPA model of ASD, similar to findings described in another study involving a maternal separation-induced ASD model 18,22 . Interestingly, when this treatment is combined with SPION injection, it appears to have a stronger impact on behavioral characteristics. This effect is in line with findings from another study involving a depression model 28 .
Atypical hippocampal development in ASD has recently received increased attention in research. This dysfunction is associated with deficits in memory processing, social interaction, and spatial reasoning 5 . Studies have suggested that the CA1 area of the hippocampus plays a crucial role in social memory, which is essential for social interactions in animals 45 . In our studies, we observed reduced dendritic spine density in the CA1 area of the hippocampus in autistic rats. Another study also demonstrated a reduction in dendritic spine density in this region when animals were exposed to prenatal VPA 46 .
Our results further indicate that treatment with LF-rTMS + SPIONs can increase spine density in this area. From another perspective, studies have reported that MAP2 is associated with spine density and structures 47 . Our findings from MAP2 also support the spine density results obtained through Golgi-Cox staining. Additionally, an important factor influencing dendritic spines in the CA1 area is BDNF. Research has shown that BDNF can impact dendritic spine density and dendritic length in this region 48 .
SYN is well known for its involvement in synaptic plasticity mechanisms, and it is one of the most crucial proteins within synaptic vesicles. The synaptic vesicle is a site within the presynaptic neuron where neurotransmitters are stored 11 . An increase in SYN levels can enhance neurotransmitter secretion into the synaptic space, thereby facilitating synaptic transmission between neurons 49 . Our results indicated a reduction in SYN levels in the hippocampus of autistic rats, and LF-rTMS therapy has been shown to increase SYN levels, consequently enhancing neural transmission.
One limitation relates to the size of the rat brain, posing a challenge in focusing TMS coil stimulation specifically on the hippocampus. Unintended stimulation of other brain areas may influence our results. Using a smaller coil with a new design could potentially improve the precision of stimulation in the desired area. Additionally, the injection of SPIONs into the lateral ventricle may result in their dispersion with cerebrospinal fluid to other brain regions rather than concentrating in the hippocampus. For future studies, exploring alternative methods, such as direct injection of SPIONs into the hippocampus or developing less invasive ways to target SPION delivery to the hippocampus, would be beneficial. Another limitation of our study is related to the timing of behavioral analysis to assess the long-term effect of TMS. It may be worthwhile to conduct behavioral and molecular analyses several months after treatment. Furthermore, assessing gender diversity to identify sex-dependent effects of this treatment on the ASD model is worthwhile. Due to the wide-ranging effects of VPA on the body, future studies may benefit from specific pharmacological methods or genetic models. This is another limitation of our study.
Our study suggests that ASD involves dysfunction in hippocampal dendritic spines and neural transmission, which can also have an impact on several core ASD behaviors. LF-rTMS treatment, when combined with SPION injection, has the potential to improve behavioral symptoms associated with ASD. The increased spine density in the CA1 area of the hippocampus is associated with this treatment and is mediated by an increase in BDNF, MAP2, and SYN levels. Furthermore, our research highlights the success of deep brain stimulation with SPIONs, offering a promising avenue for further investigation into the effects of TMS on deep brain structures and the development of novel treatments for neurological disorders.
Animal study and experimental procedure
This study adhered to the ARRIVE guidelines and complied with the rules and guidelines of Shahid Beheshti University. The study was approved by the Research Ethics Committees of Shahid Beheshti University (approval ID: IR.SBU.REC.1401.108). Adult Wistar rats were housed in standard animal facilities, maintaining controlled conditions (22 ± 1 °C, humidity: 45 ± 3%, and a 12-h light/dark cycle). The rats had free access to standard food and water. Male and female rats were co-housed in cages with nesting materials, and successful mating was confirmed by the presence of a white plug in the vagina or cage.
Following successful mating (Fig. 8a), female rats were randomly divided into two groups: one group received vehicle, while the other group received an intraperitoneal injection of 600 mg/kg of VPA (Darou Pakhsh Co., Tehran, Iran) at 12.5 days after successful mating 50 . One effect of prenatal VPA, observed in 100% of rat offspring, is a twisted tail at one or several locations, as shown in Fig. 8b. Later, on postnatal day 21, male rats were separated from the others and randomly assigned to one of six groups, each consisting of 8 rats: Sham, Control + SPION (NPC), Control + LF-rTMS (LFC), VPA (ASD), ASD + LF-rTMS (LF + ASD), and ASD + SPION + LF-rTMS (LF + NP + ASD).
On day 30, stereotaxic surgery was performed under ketamine/xylazine anesthesia to inject SPION solution (15 mg/ml, 2.5 µl) into the lateral ventricle of the brain.
Repetitive transcranial magnetic stimulation (rTMS)
According to a previous study, rats underwent LF-rTMS (1 Hz, 20 trains, 30 pulses in each train, with 2-s inter-train intervals, totaling 600 pulses) 18 from postnatal day 31 to 44, conducted between 8 a.m. and 12 p.m. Throughout the experiments, the stimulation intensity for LF-rTMS sessions remained consistently set at 100% of the average resting motor threshold (50% of the maximum output). The determination of this threshold was established through an initial experiment conducted on conscious animals. In this experiment, the average stimulation intensity required for bilateral forelimb movement was evaluated through visual observation in a group of 4-5 rats 18 .
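As a sanity check on the protocol arithmetic, the sketch below recomputes the pulse count and an approximate session length. The exact duration depends on how the stimulator times the final pulse of each train, so the session figure is an estimate rather than a value reported in the paper.

```python
# Protocol arithmetic for 1 Hz LF-rTMS: 20 trains x 30 pulses, 2-s
# inter-train intervals. The ~30 s per-train duration is an approximation.
freq_hz = 1
trains = 20
pulses_per_train = 30
inter_train_s = 2

total_pulses = trains * pulses_per_train            # 600, as stated above
train_s = pulses_per_train / freq_hz                # ~30 s per train at 1 Hz
session_s = trains * train_s + (trains - 1) * inter_train_s
print(total_pulses, f"{session_s / 60:.1f} min")    # 600, ~10.6 min
```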
To ensure the rats' immobility during the experiment, a cloth restraint based on a previous study 51 was utilized. The TMS stimulator employed in this research was the Super Rapid 2 device (MagStim, UK), equipped with a 70 mm human 8-shaped coil (D70 air film coil). To acclimate the rats and minimize stress, they were restrained and exposed to TMS device noise for one week prior to the experiment. All treatments were administered in a quiet, low-light environment to minimize potential stressors. During the experiment, the coil was positioned over the area between the eyes and ears of the rats. For the sham group, the coil was oriented in the opposite direction to ensure that no electromagnetic stimulation was delivered, while the auditory conditions remained unchanged.
SPIONs coating
To prepare coated nanoparticles (Fig. 1g), a mixture was created by combining 11.60 ml of acetic acid (MERCK, Germany) with 40 mg of chitosan (Iran Chitosan, Iran), and 0.14 g of iron-oxide nanoparticles (Fe 3 O 4 ) (Arminano, Iran) was added to 200 ml of distilled water and stirred for 15-20 h. This resulted in a color change from black to brown. The mixture was then subjected to centrifugation and washed twice with distilled water. Finally, the nanoparticles were dried at 65 °C to obtain a powdered form 52 . The morphology of the coated SPIONs was examined using a transmission electron microscope (TEM, Philips EM 208). The hydrodynamic diameter and zeta potential of the nanoparticles were measured using a HORIBA S-Z100 instrument. Fourier transform infrared spectroscopy (FTIR) was employed to identify the chemical groups and interactions in the chitosan-coated SPIONs (Bomem, Japan). Additionally, the magnetic hysteresis curve was measured using a vibrating sample magnetometer (VSM) from Magnetic Kavir Kashan Co., Iran.
Assessment of cytotoxicity
Following the previously described method 53 , the MTT assay was employed to assess the cytotoxic effects of SPION on human neuroblastoma cells (SH-SY5Y). Various SPION concentrations (ranging from 0 to 100 μg/ml) were introduced to the cells in 96-well plates and incubated for 24 h. Cell viability was subsequently determined using a microplate reader at 570 nm and compared to the untreated control group.
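Percent viability in an MTT assay is conventionally the treated-to-control absorbance ratio at 570 nm. The sketch below illustrates the calculation; the absorbance readings are hypothetical, and any blank-subtraction step is omitted.

```python
# A minimal sketch of MTT percent-viability arithmetic; the A570 readings
# below are hypothetical placeholders, not data from this study.
import numpy as np

a570_control = np.array([0.82, 0.79, 0.85])   # untreated wells
a570_treated = np.array([0.80, 0.77, 0.83])   # e.g. SPION-treated wells

viability_pct = a570_treated.mean() / a570_control.mean() * 100
print(f"{viability_pct:.1f}% of control")      # ~97.6%
```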
Three chamber test
To assess social behavior and the ability to adapt to new social stimuli, the researchers employed the three-chamber task (Fig. 2d). The experimental setup consisted of an 80 × 80 × 40 cm Plexiglass box divided into three equal chambers. The task involved three stages: familiarization, social interaction, and novelty response 2 .
In the first stage, the rat was introduced into the center chamber of the empty apparatus and allowed to explore for a duration of 10 min. Subsequently, in the second stage (10 min), one of the side chambers housed an unfamiliar rat, while the other remained unoccupied but contained a metal cage. The test rat was then returned to the center chamber to assess its level of social interaction and sniffing behavior. In the third stage (10 min), the metal cage in one of the chambers was filled with a new, unfamiliar rat, while the metal cage in the other chamber still contained the same rat (familiar rat) from the previous stage. This step aimed to evaluate social novelty.
Marble burying test
The marble burying test was conducted in a 40 × 40 × 40 cm box containing 20 glass marbles arranged in 5 rows. The test spanned 30 min, during which the rats were permitted to interact with the marbles. Upon completion of the test, the researchers counted the number of marbles that had been buried under the bedding material as an indicator of repetitive digging behavior 2 .
Open field test
In the open-field test, each rat was positioned in a 40 × 40 × 40 cm box that was partitioned into a central circular area and squares surrounding it. The rat was allotted 15 min to explore the box, while various parameters were observed. These parameters included the time the rat spent in the central circular area, the duration of self-grooming behavior, and the distance traveled by the rat 2 .
Forced swim test
Rats were subjected to a swimming test in a cylindrical plastic container filled with water (30 cm in diameter and 50 cm in height) maintained at a temperature of 25 ± 1 °C 54 . The rats were given two minutes to acclimate to the cylinder. This test measures the response to acute stress, which may be influenced by certain neurological disorders, such as ASD 55 . The duration of immobility and climbing time was subsequently monitored for 5 min. Following the completion of each swimming test, the rats were gently dried with a towel, and the water in the tank was replaced.
Biochemical parameters
At post-natal day 60, following the behavioral tests, rats were anesthetized with a combination of ketamine/xylazine (80 mg/kg and 10 mg/kg). Subsequently, the brains were removed, and the hippocampus was homogenized with a lysis buffer solution for further processing.
Microtubule-associated protein 2 (MAP2)
The samples were assessed using a conventional MAP2 sandwich ELISA procedure, as detailed in a prior research publication 56 .
Synaptophysin (SYN)
The ELISA procedure for quantifying synaptophysin concentration was conducted following the methodology outlined in a previously referenced study 57 with an ELISA kit (CSB-E13827r). Briefly, the synaptophysin ELISA involves preparing samples, coating the microtiter plate with synaptophysin antibody, adding samples or standards, incubation, washing, adding detection antibody conjugated to an enzyme, incubation, washing again, adding TMB Development Solution for color development, adding Stop Solution to halt the reaction, and measuring signal intensity at 450 nm for quantitative analysis of synaptophysin levels.
Brain-derived neurotrophic factor (BDNF)
The assessment of supernatant BDNF levels was conducted by employing a BDNF ELISA kit (RAB 1138, Sigma, USA), adhering strictly to the manufacturer's guidelines. The rate of the reaction was determined using a microplate reader (Bio Tek, USA) set to a wavelength of 450 nm.
Protein assessment
To quantify the protein content in the samples, we utilized the BCA (bicinchoninic acid) method. The absorbance at 562 nm was measured to quantify the amount of protein in accordance with a previous study 58 .
SPIONs content measurement
In accordance with a previously published article 59 , the SPIONs content in the hippocampus was measured in the supernatant. Briefly, to develop color in a 96-well plate, hippocampus supernatant was mixed with an iron assay solution, and the absorbance was read at 562 nm. The data were expressed as µg/ml.
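Converting A562 readings to µg/ml implies a standard-curve step: fit absorbance against known iron concentrations, then invert the fit for the samples. The sketch below illustrates this with hypothetical standards and sample readings, not values from the study.

```python
# A hedged sketch of the standard-curve step implied by the iron assay;
# all numbers below are hypothetical illustrations.
import numpy as np

std_conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])   # ug/ml iron standards
std_abs = np.array([0.02, 0.11, 0.20, 0.38, 0.74])  # A562 readings

# Linear fit of absorbance vs. concentration, then inversion for samples.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

sample_abs = np.array([0.41, 0.20])
sample_conc = (sample_abs - intercept) / slope
print(sample_conc)  # ~[21.7, 10.0] ug/ml
```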
Golgi-Cox staining
The Golgi impregnation method was employed as described previously 60 . Brain blocks were fixed and immersed in a solution for two weeks (1% mercury chloride, 0.8% potassium chromate, 1% potassium dichromate, 0.5% potassium tungstate), followed by transfer to another solution (1% lithium hydroxide and 15% potassium nitrate). Subsequently, the brains were placed in a sucrose-buffer solution, and 100 μm slices were prepared using a cryotome. Spine density in the CA1 hippocampal area was calculated using ImageJ software.
Statistical analysis
For data analysis, we utilized GraphPad Prism (ver. 9.5.1). The statistical analysis comprised a one-way ANOVA, followed by a Tukey test. The Shapiro-Wilk test was employed to assess normality, and ROUT (Q = 0.5%) analysis was used to identify outliers. The homogeneity of variance was calculated using the Brown-Forsythe test. The results are presented as the mean ± SEM, and p values less than 0.05 were considered statistically significant.
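A minimal sketch of the described pipeline (normality check, homogeneity of variance, one-way ANOVA, Tukey post hoc) is shown below, assuming scipy and statsmodels; the per-group measurements are hypothetical placeholders, and GraphPad-specific steps such as ROUT outlier detection are omitted.

```python
# A minimal sketch of the analysis pipeline described above; group
# measurements are hypothetical placeholders, not data from the study.
import numpy as np
from scipy.stats import f_oneway, shapiro, levene
from statsmodels.stats.multicomp import pairwise_tukeyhsd

sham = np.array([41.0, 38.5, 44.2, 40.1, 39.8])
asd = np.array([25.3, 28.1, 24.7, 27.5, 26.2])
treated = np.array([35.4, 33.9, 37.2, 36.0, 34.5])

print(shapiro(sham))                        # Shapiro-Wilk normality check
# Levene's test with median centering is the Brown-Forsythe variant.
print(levene(sham, asd, treated, center="median"))
print(f_oneway(sham, asd, treated))         # one-way ANOVA across groups

values = np.concatenate([sham, asd, treated])
groups = ["sham"] * 5 + ["ASD"] * 5 + ["LF+NP+ASD"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```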
Figure 1. Characteristics of chitosan-coated superparamagnetic iron oxide (Fe 3 O 4 ) nanoparticles (SPIONs). (a) Transmission electron microscopy (TEM) shows that these particles have a spherical shape, with chitosan well coated around the iron core. (b) The mean diameter of these particles was calculated based on the measurement of 100 random particles in TEM images (28.6 nm). (c) Dynamic light scattering (DLS) testing revealed the mean hydrodynamic diameter of these particles (71.2 nm). (d) The MTT assay of SPION showed no significant toxicity at different concentrations. (e) Vibrating sample magnetometer (VSM) provided information on the magnetic hysteresis of chitosan-coated SPIONs. (f) Fourier transform infrared spectroscopy (FTIR) analysis showed several distinct bands indicating the presence of iron and chitosan in the final product. These bands include 3445 cm −1 (O-H and N-H), 2922 cm −1 and 2852 cm −1 (C-H), 1449 cm −1 (C-N), 1034 cm −1 (C-O-C), and 583 cm −1 (Fe-O). (g) The chemical structure of chitosan-coated SPION (KingDraw).
Figure 4. The effect of low-frequency repeated transcranial magnetic stimulation (LF-rTMS) and superparamagnetic iron oxide nanoparticles (SPIONs) on the open field test. (a) The time spent in the center of the apparatus as an indicator of anxiety behavior. (b) Self-grooming behavior representing repetitive and compulsive behavior. (c) The distance traveled through the apparatus during the experiment. (N = 8, mean ± SEM). *p < 0.05, **p < 0.01, ***p < 0.001.
Figure 8. The timeline of the study and anatomical malformation caused by prenatal valproic acid (VPA). (a) After mating, on the 12.5th day of pregnancy, valproic acid (VPA) or saline was injected intraperitoneally (600 mg/kg) to induce autism spectrum disorder (ASD). On postnatal day 30, superparamagnetic iron oxide nanoparticles (SPIONs) were injected into the lateral ventricle of the brain (15 mg/ml, 2.5 µl). From days 31 to 44, low-frequency repeated transcranial magnetic stimulation (LF-rTMS) treatment was administered for 14 consecutive days. Subsequently, rats were subjected to behavioral testing, and on day 60, rats were sacrificed for brain sample collection for further analysis. (b) Anatomical malformation of the tail of a rat compared to a healthy rat after prenatal VPA exposure. | 2024-04-08T06:17:01.395Z | 2024-04-06T00:00:00.000 | {
"year": 2024,
"sha1": "dac87533f9880249831e223e93391867d88ff858",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "af6670edbd0dd9a8a0f6dd7a9b5cdd8117301ed8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
155281693 | pes2o/s2orc | v3-fos-license | Immobilization studies of Candida Antarctica lipase B on gallic acid resin-grafted magnetic iron oxide nanoparticles
Purpose: Here, we present the successful preparation of highly efficient gallic acid resin-grafted magnetic nanoparticles (MNPs) bearing a branched polymeric brush shell. Methods: Using a convenient co-precipitation method, we prepared Fe3O4 nanoparticles stabilized by citric acid. These nanoparticles underwent further silica modification and amino functionalization, followed by gallic acid functionalization on their surface. Under alkaline conditions, we used a condensation reaction between formaldehyde and gallic acid to graft the gallic acid−formaldehyde resin on the surface. We then evaluated the polymer-grafted MNPs as a support for Candida Antarctica lipase B (Cal-B) immobilization via physical adsorption. Conclusion: During optimization of the immobilization conditions, we found that optimum immobilization was achieved in 15 mins. The optimal immobilization temperature and pH were 38°C and 7.5, respectively. In addition, a reusability study of the lipase immobilized on the polymer-grafted MNPs, performed by isolating the MNPs from the reaction medium using magnetic separation, showed that the grafted MNPs retained 91% of their activity after 5 cycles.
Introduction
Candida Antarctica lipase B (Cal-B) is an enzyme with numerous applications in a broad range of catalytic reactions and is often used in industrial chemical processes [1][2][3][4] such as kinetic resolutions, aminolysis, esterification, transesterification, hydrolysis, and stereoisomeric transformations 3,5-8 and even the synthesis of glucolipids. 9,10 Cal-B is highly selective, offering stability in acidic pH environments, high-quality end-products, and fewer side-products from chemical reactions. It is also highly effective at high temperatures. 11 Uppenberg et al 12 were the first to elucidate the structure of Cal-B. It is comprised of 317 amino acids and weighs ~33 kDa. The structure reveals a conserved core consisting of an eight-stranded, twisted β-sheet sandwiched by α-helices on both sides. 12,13 The active site contains a serine. However, the sequences around the active serine site are different in Cal-B as compared to other lipases. Cal-B contains the foundation for the Ser-His-Asp/Glu residues that form the catalytic triad. The surface that surrounds the tip of the active site is hydrophobic, and this allows for interaction with lipid surfaces in hydrolysis. During hydrolysis, the surface has aliphatic residues that form side chains which are oriented toward the solvent. 12 Despite the unique and excellent features of Cal-B in biotransformation, these properties are lost once the enzyme is dissolved in organic solvents. It is also easily denatured when subjected to high temperatures, mechanical shear, and various solvent effects. 14 It is also difficult to recover the enzyme from reaction solutions or separate it from both substrates and products. Altogether, these factors underlie the major reasons for the infrequent use of biocatalysts in the pharmaceutical industry. Proteins, especially enzymes, are amphipathic, which makes them intrinsically active at their surface and highly adsorptive. 15 Adsorption-driven immobilization is due to enzyme binding to a solid support. 16 One way to improve enzyme performance is to change their bound state through ionic forces, physical adsorption, hydrophobic bonds, Van der Waals forces, or a combination of these forces. 17 Theoretically, it is possible to develop catalysts that display significant advantages relative to the free enzyme using the technique of enzyme immobilization. [18][19][20] Previous work using this method has demonstrated remarkable improvements in performance including increased enzyme activity (up to a factor of 100) in organic solvents, increased enantioselectivity, remarkable long-term stability, increased temperature stability, and more efficient recovery by filtration or centrifugation. [17][18][19] Consistent with this, many studies report the effective use of immobilized Cal-B in transformation reactions for compounds with low molecular mass 21,22 and reactions involving polymerization. 23,24 However, adsorption induces conformational changes that affect catalyst rate and specificity. 25 As a result, research on immobilization has focused on matrix selection and the optimization of conditions. [26][27][28][29] Enzyme immobilization offers the advantage of efficiently increasing enzyme yield and recovery. This method offers a wider range of possibilities, such as enzyme reuse in other types of reactions and increased enzyme stability and activity. Furthermore, post-reaction, these enzymes also demonstrate a higher temperature resistance and are more tolerant to organic solvents. 30,31
Gallotannins are a subclass of plant tannins that are hydrolyzable. These compounds have recently received a growing amount of attention due to the excellent characteristics that make them suitable for use as adhesives in place of phenolic compounds derived from petroleum. The hydrolysis of these compounds produces gallic acid, an aromatic ring bearing a carboxyl group and three adjacent hydroxyl groups. 32 Previous work has demonstrated that polymer-coated nanoparticles have higher magnetic susceptibility and a larger polymer surface area for enzyme immobilization. 33 Furthermore, lipases that are immobilized using a magnetic support can be removed from a reaction system and stabilized using a bed reactor that has been fluidized with an external magnetic field. 34,35 Last, using magnets as a support reduces the cost of operations and total capital. 36,37 In this study, we demonstrate a facile, bio-inspired approach to immobilizing the Cal-B enzyme onto poly(gallic acid−formaldehyde)-modified magnetic nanoparticles (MNPs). Polyphenolics demonstrate better adhesive performance due to the phenolic hydroxyl groups they possess. 38 The phenolic moieties present in gallic acid are reactive to nucleophiles such as primary amines, making them capable of immobilizing the Cal-B enzyme. To test this idea, we functionalized magnetic silica nanoparticles with amino groups and modified them with gallic acid. Under alkaline conditions, we carried out a condensation reaction between gallic acid and formaldehyde to graft a gallic acid−formaldehyde resin onto the magnetic silica nanoparticles. We then used the obtained nanoparticles as an adsorbent for the Cal-B enzyme.
Our results demonstrate that poly(gallic acid−formaldehyde)-modified MNPs exhibit high efficiency for Cal-B enzyme immobilization. Even after many cycles of magnetic separation and reuse, we found that the enzyme retained a high degree of activity. We achieved maximal Cal-B lipase binding and activity by optimizing the immobilization parameters, namely the immobilization time and the enzyme loading. Finally, we evaluated the storage, pH, and temperature stability of the lipase in its free and immobilized states.
Synthesis of amino-functionalized MNPs (Fe3O4@SiO2@APTMS)
We prepared MNPs via the co-precipitation of iron(II) and iron(III) chlorides under basic conditions. This process was stabilized by the chemical adsorption of citric acid on the surface. We modified the Stöber method 39 and coated the magnetic nanoparticles with silica using a basic mixture of ethanol and water set at 30°C and magnetic fluids for seeding. We then ensured that the amino groups were anchored to the magnetic silica particles using the reaction of magnetic SiO2 with APTMS to obtain Fe3O4@SiO2@NH2. 40
Gallic acid-modified MNPs (Fe3O4@SiO2@GA)
We dispersed 3.0 g of the amino-functionalized MNPs (Fe3O4@SiO2@APTMS) in 40 mL of 1,4-dioxane using sonication. A solution was made by dissolving 3.0 g of gallic acid in 20 mL of 1,4-dioxane, and 0.2 g of boric acid was added to this solution. The gallic acid−boric acid solution was added to the nanoparticle dispersion while stirring at 800 rpm in a 250 mL glass reactor. The reaction mixture was kept stirring at 80°C for 10 hrs under nitrogen. Subsequently, the flask was cooled and the products (Fe3O4@SiO2@GA) were separated using magnets and washed with water and acetone under intense sonication. The product was then dried under vacuum at room temperature.
Gallic acid−formaldehyde resin-grafted MNPs (Fe3O4@SiO2@PGA)
3.0 g of the gallic acid-modified MNPs (Fe3O4@SiO2@GA) was dispersed in 80 mL of deionized water and then ultrasonicated for 20 min. This suspension was kept stirring at 800 rpm while the reaction was heated to and maintained at 65°C. 12.0 g of gallic acid was added to the nanoparticle suspension with the successive addition of 57 mL of formaldehyde solution. Afterward, 40 mL of ammonia solution was added while maintaining the temperature at 85°C. This reaction mixture was kept stirring for 3 hrs. After cooling, the reaction mixture was acidified with 1 N HNO3. The Fe3O4@SiO2@PGA nanoparticles were separated and rinsed with H2O and DMF, followed by ethanol. Last, the final product was vacuum dried at room temperature.
Characterization of MNPs
(a) Transmission electron microscopy (TEM): samples were prepared by first mounting the MNPs on a copper grid with a 200-mesh carbon coating. We then placed 20 µL of the liquid sample containing nanoparticles on the copper grid and left it to dry overnight at room temperature. Micrographs were then taken on a JEOL 1200-EX instrument with an accelerating voltage of 100 kV. (b) X-ray diffraction (XRD) analysis: we used a Rigaku X-ray diffractometer to examine the crystalline phase of the polymer-grafted MNPs. (c) FTIR spectroscopy: to examine the extent of infrared absorption, a Shimadzu 8300 Fourier transform infrared spectrophotometer with a resolution of 1 cm−1 in the transmission mode was used. We milled 3 mg of the polymer and mixed it with 100 mg of potassium bromide. We pressed this into a 12 mm diameter solid disk prior to the FTIR measurement. (d) Thermogravimetric analysis (TGA): we examined the thermal stability and grafting amount of the nanoparticles using TGA (STA 6000).
Enzymatic assay
The immobilized enzyme was added to a beaker that contained 13 mL of distilled water, 2.5 mL of sodium phosphate buffer (100 mM, pH 7.5), and 1 mL of ethyl acetate, incubated for 5 min at room temperature, and then constantly stirred. The reaction was halted using 3 mL of acetone solution. The activity of the immobilized enzyme was then determined by the titrimetric method, using 0.1 N NaOH to quantify the acid released from ethyl acetate and 1% phenolphthalein as the indicator. The reaction was titrated to the end-point, using the pink color change of phenolphthalein as the indicator.
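As a rough illustration of how such a titration converts into activity units, the following Python sketch computes enzyme activity units (EAU) from the end-point NaOH volume. The volume, reaction time, and carrier mass below are hypothetical, and the definition of one EAU as 1 µmol of acid released per minute is an assumption, not a value stated in the text:

# Hypothetical titration result; 1 EAU assumed = 1 umol of acid per minute.
NAOH_NORMALITY = 0.1       # mol/L, as used in the assay above
V_NAOH_ML = 2.4            # hypothetical NaOH volume at the end-point (mL)
REACTION_TIME_MIN = 5.0    # incubation time (min)
CARRIER_MASS_G = 0.05      # hypothetical mass of enzyme-loaded MNPs (g)

umol_acid = V_NAOH_ML * NAOH_NORMALITY * 1000.0  # mL x mol/L = mmol; x1000 -> umol
activity_eau = umol_acid / REACTION_TIME_MIN     # umol released per minute
specific_activity = activity_eau / CARRIER_MASS_G
print(f"{activity_eau:.0f} EAU total, {specific_activity:.0f} EAU/g of carrier")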
Results and discussion
To synthesize gallic acid-functionalized magnetic silica nanoparticles (Fe3O4@SiO2@GA), we first prepared iron oxide nanoparticles by the co-precipitation of Fe(II) and Fe(III). Next, we sought to improve the affinity of the Fe3O4 nanoparticles for target species, which was accomplished by introducing reactive silanol groups on the surface of Fe3O4 using tetraethyl orthosilicate (TEOS). The TEOS-modified Fe3O4 nanoparticles were subsequently treated with APTMS. This treatment allowed the silane to covalently bind to the free -OH groups at the surface of the particles, while the amino group at the end could couple with the carboxylic acid group in gallic acid. This final step of the reaction allowed for the formation of the gallic acid-functionalized magnetic silica nanoparticles (Fe3O4@SiO2@GA). The resultant MNPs were then treated with gallic acid and formaldehyde solution under alkaline conditions to yield gallic acid−formaldehyde resin-grafted MNPs (Fe3O4@SiO2@PGA). The polymer-grafted MNPs were then exposed to a lipase-containing solution to achieve enzyme immobilization (Figure 1). To observe and analyze the degree of surface modification and polymer grafting of the nanoparticles, we recorded FTIR spectra of the MNPs (Figure 2). In the FTIR spectrum of the bare MNPs, we observed a broad stretching vibrational peak at 3,474 cm−1, corresponding to O-H bonds attached to iron atoms. We also observed an absorption at 587 cm−1 that corresponds to Fe-O bonds. The silica coating of the iron oxide nanoparticles showed several Si-O vibration bands around 1,082 cm−1. We also observed alkyl C-H stretches at 2,975 and 2,930 cm−1 and N-H bending of primary amines at 1,555 cm−1 in the APTMS-functionalized MNPs, as compared to the silica-coated MNPs. Furthermore, we observed additional aromatic C-H stretching vibrations at 3,100 cm−1, which confirmed the covalent bonding of gallic acid on the MNPs. Using the FTIR peak analysis, the polymerization of gallic acid−formaldehyde was confirmed on the surface of Fe3O4@SiO2@GA.
We also obtained TEM images of uncoated magnetic iron oxide and polymer-grafted silica core/shell MNPs, as shown in Figure 3A and B. We observed that the size of the Fe3O4 nanoparticles ranged from 15 to 20 nm. In Figure 3B, we demonstrate that the core MNPs were covered with a silica coating and polymer grafting. During the process, we observed aggregation of some MNPs. In sum, a silica shell followed by polymer grafting appeared to form around the nanoparticles.
We used XRD to examine the crystalline structure of the polymer-grafted MNPs. This method revealed that the diffraction pattern of the MNPs is consistent with a previously reported structure of crystalline silica-coated MNPs (Figure 4). 41,42 As shown in the figure, peaks characteristic of Fe3O4, indicated by their indices (220), (311), (400), (511), and (440), were examined for the polymer-grafted nanoparticles and showed that the resulting nanoparticles were Fe3O4 alone, with an inverse spinel structure. 41 The amorphous nature of the grafted polymer accounts for the initial broad peak.
To quantify the amount of modification and polymer grafting on the magnetic Fe3O4 nanoparticles, we carried out TGA, as demonstrated in Figure 5. We observed a minor weight loss in the MNPs in the temperature range up to 100°C, which we attributed to residual H2O. Our data demonstrate that the weight percentage of polymer grafted on the MNPs is 40%.
We measured the magnetic moment of the GA-MNPs by magnetometry, with a magnetic field strength in the range of −15,000 Oe to 15,000 Oe. A plot of M vs H at room temperature is shown in Figure 6. Our data showed that the saturation magnetization (Ms) value of the enzyme-bound polymer-grafted MNPs was 21 emu/g. We found this to be lower than the Ms value (65 emu/g) exhibited by the unmodified iron oxide MNPs. The reduction in the MNPs' magnetic moment is due to the mass of nonmagnetic coating and polymer grafting, which we validated using the TGA. Regardless, we found the magnetization value to be adequate for removal of the enzyme-loaded MNPs after the reaction was completed, owing to the high sensitivity of the MNPs to an external magnet. In Figure 7, we demonstrate the successful separation of the enzyme-loaded polymer-grafted nanoparticles from the reaction mixture using an external magnetic field.
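A quick consistency check of this interpretation can be made by assuming that the saturation magnetization scales with the Fe3O4 mass fraction; the Python sketch below uses only the two Ms values quoted above, and the scaling assumption is ours:

MS_BARE = 65.0    # emu/g, unmodified iron oxide MNPs (value from the text)
MS_COATED = 21.0  # emu/g, enzyme-bound polymer-grafted MNPs (value from the text)

fe3o4_fraction = MS_COATED / MS_BARE         # ~0.32 if Ms scales with Fe3O4 mass
nonmagnetic_fraction = 1.0 - fe3o4_fraction  # ~0.68: silica + polymer + enzyme
print(f"estimated nonmagnetic mass fraction: {nonmagnetic_fraction:.0%}")
# Order-of-magnitude consistent with the ~40 wt% polymer grafting seen by TGA
# once the silica shell and the bound enzyme are also counted as nonmagnetic.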
Optimization of immobilization time
In Figure 8, we demonstrate the results of experiments to determine the optimal time needed for the immobilization of the lipase on the (gallic acid−formaldehyde)-grafted MNPs. Our results show that the EAU activity of the Cal-B-immobilized MNPs reached equilibrium in 15 h; beyond that point, the enzymatic activity of the immobilized MNPs remained constant.
Optimization of enzyme loading
To determine optimal enzyme loading conditions, we examined the effect of the quantity of enzyme added on the fraction of enzyme that was immobilized. Our results demonstrated that the amount of immobilized enzyme initially increased with the initial enzyme concentration but quickly reached a maximum value. To further evaluate the loading capacity, the polymer-grafted MNPs were loaded with different concentrations of the Cal-B enzyme and examined for their levels of protein binding and lipase activity. Results are listed in Table 1. We observed a direct correlation between the activity of the immobilized enzyme and an increase in enzyme loading concentration, up to 15,000 EAU/g. However, the opposite was the case, as the activity decreased when concentrations >15,000 EAU/g were used. Our results therefore suggest that at high loadings the Cal-B molecules are immobilized proximal to each other; this close packing can prevent deactivation caused by unfolding of the enzyme, but it also crowds the support surface. 16,43 Also, the lack of increase in activity with increasing nominal amount in contact with the support could be assigned to the aggregation of Cal-B in polar solvent. 44
Figure 1 Schematic diagram for the preparation route of Cal-B enzyme immobilized on polymer-grafted magnetic silica nanoparticles.
Effect of pH value on immobilized lipase activity
We also carried out a comparative study of the pH dependence of the free and immobilized Cal-B enzyme on polymer-grafted nanoparticles. The variation in the comparative activity of both the free and immobilized Cal-B enzyme at varied pH values is shown in Figure 9A. Our results demonstrate that the optimum pH values were 7.5 and 7.0 for the immobilized and free enzyme, respectively; that is, the optimum pH of the free lipase was shifted by 0.5 units toward alkaline values after immobilization. Our results show that the enzyme immobilized on the polymer-grafted MNPs is adaptable over a wider region of pH values compared to the free lipase. The method of immobilization and the structure and charge of the matrix determine the observed difference and shift. We contend that our results are due to the stabilization of the lipase molecules by multipoint immobilization on the polymer-grafted MNPs.
Effect of temperature on immobilized lipase activity
Immobilization often restricts the conformational mobility of lipase, which can protect it against deactivating forces. As such, we investigated the effect of temperature on lipase molecules that were either immobilized or free, using ethyl acetate as the substrate (Figure 9B). The optimum temperature of the non-immobilized (free) lipase appeared at 27°C, whereas that of the immobilized lipase was 38°C, which is much higher than for the free lipase. Interestingly, the immobilized lipase still retained activity at temperatures above 38°C. As the temperature increased, the relative activity of the immobilized lipase decreased to a lesser degree than that of the free lipase. Thus the immobilized lipase demonstrated an optimum temperature of 38°C, higher than that of its soluble counterpart, and we observed resistance to high temperatures in the immobilized lipase.
Repeated enzyme use in experiments and industrial applications is perhaps the most important benefit of enzyme immobilization. We carried out a set of experiments on catalyst reusability to determine the stability of the immobilized enzyme. As shown in Figure 10, we observed a baseline level of stability, as activity remained at 91% even after the first five cycles. After the completion of five cycles, the activity of the immobilized Cal-B decreased drastically, and by seven cycles it was significantly reduced. As such, it can be concluded that during the initial rounds of adsorption, the enzyme is neither deactivated nor denatured. The observed decreases in activity may result from extrinsic factors such as leakage of enzymes, desorption of weakly adsorbed residual lipase, and configurational changes.
Conclusion
We demonstrate that a gallic acid resin grafted onto magnetic iron oxide nanoparticles provides an effective support for Cal-B enzyme immobilization. The synthesized polymer-grafted MNPs were thoroughly characterized to understand their structure. The gallic acid resin-grafted MNPs displayed a high lipase loading capacity due to their core-shell nanostructure and strong adhesive interactions between the enzyme and the grafted polymer. Indeed, the Cal-B enzyme immobilized on the gallic acid resin-grafted MNPs showed improved enzymatic activity and favorable thermal and pH stability compared to the free Cal-B enzyme. We also demonstrated that the enzyme-immobilized MNPs showed good reusability and could be efficiently recovered magnetically. Overall, our work demonstrates that the immobilization of enzyme onto gallic acid resin-grafted magnetic iron oxide nanoparticles is efficient and cost-effective.
Figure 10 Reusability of Cal-B enzyme-loaded polymer-grafted magnetic silica nanoparticles for hydrolytic activity. | 2019-05-17T13:46:36.054Z | 2019-05-01T00:00:00.000 | {
"year": 2019,
"sha1": "03d9b3035425bf7027797ca9b645c3f42842ab4a",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=49578",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "03d9b3035425bf7027797ca9b645c3f42842ab4a",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
250669804 | pes2o/s2orc | v3-fos-license | Correlation between the nucleation of a Griffiths-like Phase and Colossal Magnetoresistance across the compositional metal-insulator boundary in La1-xCaxMnO3
Detailed measurements of the magnetic and transport properties of single crystals of La1−xCaxMnO3 (0.18 ≤ x ≤ 0.27) are summarized; comparisons between these (i) not only confirm that Griffiths Phase-like (GP) features are not a prerequisite for CMR, but also demonstrate that the presence of GP-like characteristics does not guarantee the appearance of CMR; and (ii) indicate that whereas continuous magnetic transitions occur for 0.18 ≤ x ≤ 0.25, the universality class of these transitions belongs to that of the nearest-neighbour 3D Heisenberg model only for x ≤ 0.20, beyond which complications due to GP-like behaviour occur.
Transition metal oxides exhibit a range of fascinating properties that include multiferroicity, superconductivity, and colossal magnetoresistance (CMR). Many of these appear to be critically sensitive to differing types/levels of ion substitutions, particularly striking in the case of the CMR perovskite manganites [1][2][3], in which dramatic changes in resistivity result from a metal-insulator (M-I) transition, the temperature of which exhibits a marked field-dependence (the latter often occurring in close proximity to a paramagnetic-ferromagnetic (PM-FM) transition).
Below we summarize a study of single-crystal La1−xCaxMnO3 with 0.18 ≤ x ≤ 0.27, spanning the compositionally driven M-I boundary region, 0.18 ≤ x_c ≤ 0.22 [2][3]. The appearance of CMR in manganites was initially considered within the framework of a spin-dependent Double Exchange (DE) model [4] in which FM interactions between the localized Mn t2g spins are mediated by the hopping of itinerant eg spins. The onset of metallicity is thus linked with the establishment of an infinite (percolation) pathway of DE metallic bonds, the same bonds that establish an infinite FM "backbone", so that the occurrence of metallicity and ferromagnetism are coincident. Subsequent discussions/studies have emphasized the necessity of including the Jahn-Teller effect and the electron-phonon interaction [5]. Currently, the role of spontaneous electronic phase separation [6] and the relationship between the occurrence of a Griffiths-like Phase (GP) and CMR [7][8][9], amongst others, are regarded as pivotal ingredients in this context. The physical mechanism underlying CMR nevertheless remains controversial. That FM and metallicity are governed by percolative mechanisms appears generally agreed; however, questions have arisen recently as to whether they emerge coincidentally. The present study focuses on the doping range where both ferromagnetism and metallicity first emerge, although not, as detailed measurements confirm, coincidentally.
Measurements of the magnetic and transport behaviour of high quality La1−xCaxMnO3 single crystals with the nominal compositions x = 0.18, 0.19, 0.20, 0.21, 0.23, 0.25, and 0.27, grown using the floating zone technique [10], were carried out using standard techniques [8]. Fig. 1 reproduces the temperature dependent resistivities ρ(T, H) measured in static magnetic fields of 0, 30 kOe, and 90 kOe; the insets display the associated magnetoresistivities (Δρ = [ρ(0) − ρ(H)]/ρ(H)). These data confirm that the M-I boundary lies between 0.19 ≤ x_c ≤ 0.20 in the series studied. In Fig. 2 the inverse ac susceptibilities (1/χ) of these same samples (measured both in zero-field and in various static biasing fields up to 1 kOe) are plotted as a function of temperature. The characteristic depression of some of these data below the higher temperature Curie-Weiss line is clearly evident, a symptom of so-called GP behaviour [7][8][9][11][12][13][14][15][16], viz., an inverse susceptibility of the form χ^{-1}(T) ∝ (T − T_C^{rand})^{1−λ}, with 0 ≤ λ < 1. (1) The occurrence of GP-like features has been reported in a range of doped perovskites based on various physical measurements [7][8][9][11][12][13][14][15][16]; their presence has been attributed to the influence of disorder on the phase complexity in the manganites and related systems [7][8][9][11][12][13][14][15][16]. Whereas GP-like features have been shown to correlate closely with CMR in this system near optimal doping [7], a comparison of the data in Figs 1 and 2 (i) provides further confirmation of the earlier conclusion [8] that GP-like features are not a prerequisite for CMR - the x = 0.20 and 0.21 samples both exhibit CMR, whereas only the latter displays GP-like features - and (ii) allows the important additional caveat that the appearance of GP-like features does not guarantee the emergence of CMR - the x = 0.19 specimen exhibits GP-like features but has an insulating ground state and hence no appreciable magnetoresistance. These data raise further questions regarding our current understanding of the fundamental mechanism(s) underlying CMR [8][9]. A more detailed examination of these data is provided by conventional analysis of critical behaviour in the vicinity of a continuous PM-FM transition; this is based on the scaling law equation of state relating the reduced magnetisation, m(h, t), and susceptibility χ(h, t), to the usual (reduced) linear scaling fields t = (T − T_C)/T_C and h ∼ H_i/T_C (where the internal field H_i = H_a − N_D M, in the usual notation) [17][18], viz., m(h, t) = |t|^β F_±(h/|t|^{βδ}) and χ(h, t) = |t|^{−γ} G_±(h/|t|^{βδ}), G_±(x) being the derivative of F_±(x) wrt its argument. Fig. 3a reproduces scaling plots of the magnetization and Fig. 3b of the susceptibility for all samples; only the 0.18, 0.19 and 0.20 samples are fit using isotropic 3-D Heisenberg model exponent values (γ = 1.387, β = 0.365, and δ = 4.783) [19]. As the static applied field is the conjugate field for uniform ferromagnetism, rather than for its disordered GP counterpart, the GP-like features appearing in the x = 0.19 sample are rapidly suppressed by small external fields - as in La1−xBaxMnO3 (x = 0.27) [20] - and "conventional" critical behaviour ensues [20]. In contrast, in samples with x > 0.20, complications from GP-like behaviour yield non-universal exponent values (5 < δ < 19, for example).
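To make the scaling procedure concrete, the following Python sketch replots synthetic magnetization isotherms on the collapse axes m/|t|^β versus h/|t|^{βδ}, using the 3D Heisenberg exponents quoted above; the toy equation of state and the Curie temperature used here are illustrative stand-ins, not the authors' data:

import numpy as np
import matplotlib.pyplot as plt

beta, gamma = 0.365, 1.387
delta = 1.0 + gamma / beta              # Widom relation; ~4.80

Tc = 180.0                              # hypothetical Curie temperature (K)
temps = np.concatenate([np.linspace(165, 179, 8), np.linspace(181, 195, 8)])
H = np.logspace(2, 4, 40)               # field values (arbitrary units)

for T in temps:
    t = (T - Tc) / Tc
    x = (H / Tc) / np.abs(t) ** (beta * delta)   # scaled field h/|t|^(beta*delta)
    m = np.abs(t) ** beta * x / (1.0 + x)        # toy stand-in for F±(x)
    plt.loglog(x, m / np.abs(t) ** beta, ".", ms=3)

plt.xlabel("h / |t|^(beta*delta)")
plt.ylabel("m / |t|^beta")
plt.show()  # with the correct exponents all isotherms collapse onto the branches F±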
The present data provide a careful delineation of the compositionally driven M-I boundary as lying between 19 and 20% Ca substitution in this series of single crystals, thereby supplying incontrovertible evidence supporting the conclusion that the emergence of metallicity and ferromagnetism is not coincidental in La1−xCaxMnO3. An immediate corollary to this conclusion is the question of what the principal mechanisms underlying ferromagnetism in this composition range are. The answer to this latter question, as argued recently [21], is that whereas ferromagnetic DE, stabilized by hole delocalization, dominates in the metallic regime immediately above the compositionally controlled M-I boundary, the relevant interaction below this boundary is ferromagnetic superexchange (SE).
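As a concrete illustration of the operational T_G definition adopted in the next paragraph, the following Python sketch fits the high-temperature Curie-Weiss line to 1/χ data and locates the onset of the downturn on cooling; the synthetic data, units, fit window, and tolerance are all hypothetical:

import numpy as np

def estimate_Tg(T, inv_chi, fit_window=(250.0, 320.0), tol=0.02):
    """T, inv_chi: arrays sorted by increasing temperature (hypothetical units)."""
    hi = (T >= fit_window[0]) & (T <= fit_window[1])
    slope, intercept = np.polyfit(T[hi], inv_chi[hi], 1)  # Curie-Weiss line
    cw_line = slope * T + intercept
    below = inv_chi < (1.0 - tol) * cw_line               # relative downturn
    return T[below].max() if below.any() else None        # onset on cooling

# usage with synthetic data exhibiting a GP-like downturn below ~230 K:
T = np.linspace(150, 320, 200)
inv_chi = 0.5 * (T - 140.0)
inv_chi[T < 230] -= 0.8 * (230 - T[T < 230]) ** 0.5
print(estimate_Tg(T, inv_chi))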
Returning to GP-like behaviour, with the adoption of a working definition of the Griffiths temperature, T_G, as the temperature at which a marked depression in the inverse zero-field ac susceptibility first occurs [11][12][13] (marked by vertical arrows in Fig. 2), a phase diagram for La1−xCaxMnO3 (x < 0.33) can be constructed, Fig. 4. This affords comparisons with those reported previously for La1−xSrxMnO3 (0.075 ≤ x ≤ 0.175) [11] and La1−xBaxMnO3 (0.10 ≤ x ≤ 0.33) [12]. In the present system, the GP-like regime terminates in close proximity to the M-I boundary, but the emergence of such features near this boundary may be particularly sensitive to various aspects of the underlying "disorder", possibly including the oxygen stoichiometry (precise measurements of which are not currently available to us). The latter may play a role in both the lack of GP-like features at x = 0.2 and their reappearance at x = 0.19, as well as in the variation evident in T_G and T_C estimates for x > 0.21. The termination of this region is consequently marked as hatched, the latter also delineating - likely non-coincidentally - the M-I boundary at 0.19 ≤ x_c ≤ 0.20 in the series studied. The remaining lines - drawn as guides for the eye - joining the T_G (upper) and T_C (lower) estimates appear somewhat different from the essentially triangular structure predicted by both Griffiths' original diluted FM Ising model and the ±J random bond approach [22]. The present data clearly exhibit scatter around such model-predicted boundaries, although what is consistent with such predictions is the narrowing gap between T_C and T_G as the Ca doping is increased toward "optimal" levels, x = 0.33. Additional evidence supporting the narrowing gap between T_C and T_G around x = 0.25 can be seen in data reported by Belevtsev et al. [14]. Elements of such scatter are evident in data from other systems. In La1−xBaxMnO3 [14], for example, T_C estimates also display some structure near x = 0.27, while in Sm1−xCaxMnO3 [15] the reported T_G values are neither constant - they decline by some 6% between x = 0.85 and 0.92 - nor is the corresponding phase diagram reminiscent of the model-predicted forms mentioned above. All of the latter attest to the as yet unresolved subtleties displayed by GP-like behaviour and CMR in the manganites. Support for this work by the Natural Sciences and Engineering Research Council (NSERC) of Canada, the University of Manitoba (in the form of a Fellowship to WJ), and MISIS is gratefully acknowledged. | 2022-06-28T02:18:22.226Z | 2010-01-01T00:00:00.000 | {
"year": 2010,
"sha1": "2a1d3ea631c6cc4eacee00108503abc35615d92a",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/200/1/012072",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "2a1d3ea631c6cc4eacee00108503abc35615d92a",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
69356704 | pes2o/s2orc | v3-fos-license | Performance Analysis of Denoising Algorithms for Human Brain Image
Digital images have an important role in areas like X-ray imaging, face recognition, and so on. Different image capturing devices introduce different types of noise. Sometimes low image quality is an obstacle for analysis and measurement, so the denoising of an image is an important task. An ideal denoising technique should be able to remove the noise while preserving the quality of the image. In this paper, we examine four noise removal algorithms and present a performance analysis, in terms of PSNR and RMSE, of several filters for various noises, simulating their performance in MATLAB on a human brain image.
Introduction
Noise is a fundamental attribute present in all images. Hence, digital images and the analysis needed to make them noise free play a significant role in fields such as traffic detection, face detection, remote sensing, astronomy, reading medical images, etc. Excessive noise reduces the visibility of structures and objects; medical images (magnetic resonance imaging [MRI] and X-ray) are especially severely affected. Therefore, the denoising of medical images is important not only to eliminate the noise but also to limit it to a clinically acceptable level. Optimization of denoising procedures requires the study and performance analysis of denoising algorithms with respect to the noise present in the image. Hence, while selecting an appropriate denoising algorithm, one should also focus on maintaining the edges and originality of the image.
Digital images have an important role in areas like television, ultrasound, X-ray, face recognition, and so on. Different image capturing devices introduce different types of noise [1]. Noise is generally defined as random pixel values in the recorded image that differ from the true intensities of the real image [2]. Sometimes low image quality is an obstacle for analysis and measurement, so the denoising of an image is an important task. An ideal denoising technique should be able to remove the noise while preserving the quality of the image [2,3]. In the medical field, medical images have an important role in pathological diagnosis and surgical intervention. However, MRI and ultrasound (US) images are normally affected by noise produced by electrical fluctuations or radiation, which makes pathological diagnosis difficult.
Denoising and enhancement of medical images can be useful in feature extraction, image restoration and reducing distortion of complex images like MRI of the human brain [4][5][6][7]. The noise present may be Gaussian noise, speckle noise, Poisson noise, etc. The main objective of this paper is to compare the performance of various known noise filters (average filter, median filter, Gaussian filter, and Wiener filter) for medical imaging in the MATLAB environment [8]. A test image of a human brain MRI was chosen for the experiment [9]. Two measurement parameters, root mean square error (RMSE) and peak signal-to-noise ratio (PSNR), were used to compare the results of the four noise filters.
Several Noises
Noise is an unwanted and undesirable effect on images. It obscures the true content of an image and produces effects like blurred corners, hidden lines, spurious dots, etc. To apply denoising techniques, it is very important to understand the noise models and their characteristics. Table 1 shows several types of noise along with their sources and effects [10,11].
The detailed noise models and their characteristics are described as follows.
Gaussian noise
Gaussian noise is also called amplifier noise and is independent of the signal intensity. It is distributed evenly over the whole image because its probability density function (PDF) is equal to the normal distribution [1]. It can be expressed by Eq. (1):
P(g) = (1/(σ√(2π))) exp(−(g − μ)²/(2σ²)) (1)
where g is the gray level, μ is the mean, and σ is the standard deviation.
Salt and Pepper noise
Salt and Pepper noise is also referred to as impulse noise or data drop noise because some pixels in the image drop their original values. In an 8-bit transmission system, some pixels become white as salt (maximum gray level 255) and some become black as pepper (minimum gray level 0). We can see those pixels in the image as black and white dots. The sources of Salt and Pepper noise are disturbances in the image signal, overheated/faulty components, or dust particles in the image acquisition source [1,3,12]. Its PDF is given by Eq. (2):
P(g) = Pa for g = a, P(g) = Pb for g = b, and P(g) = 0 otherwise, (2)
where a and b correspond to the pepper (dark) and salt (bright) gray levels.
Poisson noise
Poisson noise is also called shot noise. It occurs when the number of photons sensed by a sensor is too small or insufficient to produce the true identity of the source data [12]. X-ray and gamma ray sources emit many photons per unit time, with random fluctuations. In medical images, this noise is the result of accumulated spatial and temporal randomness. The affected pixels suffer independent noise values. This noise has a root mean square value proportional to the square root of the intensity of the image [10,12]. The PDF for Poisson noise is given as Eq. (3):
P(k) = (λ^k e^(−λ))/k! (3)
where k is the number of photon counts and λ is the expected count.
Speckle noise
Speckle noise mostly occurs in coherent imaging systems (CIS) and synthetic aperture radars (SAR). The true pixel values are multiplied by random values which make it multiplicative in nature which results increasing the mean gray level of image. The distribution of Speckle noise can be represented by Eq. (4): Table 2. Functions used in the denoising algorithms
Filter Function
Average where H * (u, v) = complex conjugate of degradation function, P n (u, v) = power spectral density of noise, P s (u, v) = power spectral density of non-degraded image, H(u, v) = degradation function
Several Noise Filters
Study of noise models is very important in digital image processing, as noise can appear at different stages such as image acquisition and transmission. Similarly, denoising of the image is also a very important step in image processing.
So, prior knowledge of the noise models can help the user to choose the appropriate denoising technique.
Denoising algorithms can be linear as well as non-linear, where the trade-off lies between fast processing and preserving the details of the subject images [10]. Linear algorithms are sufficiently fast, whereas non-linear algorithms are better at preserving image detail. Some image denoising filters are shown in Table 2.
Average filter
The average filter is a smoothing filter. It replaces the value of every pixel in an image with the average of the gray levels defined by the filter mask; that is, the mask computes the average of the neighboring pixels and replaces the center pixel value with it. The undesirable effect of the average filter is a blurred image. An average filtering example using a 3 × 3 filter mask can be understood from Figure 1.
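A minimal Python sketch of the 3 × 3 averaging operation (the image here is a random placeholder):

import numpy as np
from scipy.ndimage import uniform_filter

img = np.random.rand(128, 128)          # placeholder grayscale image
smoothed = uniform_filter(img, size=3)  # each pixel -> mean of its 3x3 window

# equivalent explicit computation for one interior pixel (i, j):
i, j = 10, 10
assert np.isclose(img[i-1:i+2, j-1:j+2].mean(), smoothed[i, j])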
Median filter
In median filtering, the value of a pixel is replaced by the median of the gray levels in the neighborhood of the pixel. The main advantage of this filter is that it preserves sharp edges and avoids blurring the image. A median filtering example using a 3 × 3 filter mask is presented in Figure 2.
Gaussian filter
The Gaussian filter removes the high-frequency components from the image (it is a low-pass filter). It is a smoothing filter applied as a 2D convolution operation that is used to remove noise and blur the image. The degree of smoothing is controlled by the standard deviation. For example, for a standard deviation of 5.5 and a 3 × 3 Gaussian kernel, the pixel values can be calculated as shown in Figure 3.
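The kernel construction can be sketched as follows in Python; here the "standard variance 5.5" of the original is interpreted as the standard deviation σ, which is an assumption:

import numpy as np

def gaussian_kernel(size=3, sigma=5.5):
    ax = np.arange(size) - size // 2           # offsets, e.g. [-1, 0, 1]
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()                         # normalize weights to sum to 1

print(gaussian_kernel())  # for sigma = 5.5 a 3x3 kernel is nearly uniform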
Wiener filter
The Wiener filter is a good linear deblurring filter. It provides a solution for stationary signals and signal estimation problems. It estimates the desired signal using a statistical approach and reduces the amount of noise present in a signal by comparison with an estimate of the desired noiseless signal. This filter performs well for noises like Poisson noise and Speckle noise [11].
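As a sketch of how the four filters can be applied outside MATLAB, SciPy offers direct counterparts (scipy.signal.wiener implements an adaptive local Wiener filter); the input image and parameters below are placeholders:

import numpy as np
from scipy.ndimage import uniform_filter, median_filter, gaussian_filter
from scipy.signal import wiener

noisy = np.random.rand(128, 128)  # placeholder noisy grayscale image

denoised = {
    "average":  uniform_filter(noisy, size=3),
    "median":   median_filter(noisy, size=3),
    "gaussian": gaussian_filter(noisy, sigma=1.0),
    "wiener":   wiener(noisy, mysize=3),
}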
Methods
Comparing restoration results requires a measure of image quality. The commonly used measures are MSE and PSNR [13]. The MSE between two images I and K is defined by Eq. (5):
MSE = (1/mn) Σᵢ Σⱼ [I(i, j) − K(i, j)]² (5)
where I and K are the original and denoised images of size m × n, respectively.
The PSNR in dB is defined as Eq. (6):
PSNR = 10 log₁₀(MAX_I² / MSE) (6)
where MAX_I is the maximum possible pixel value of the image (255 for 8-bit images) and I and K are the original and denoised images, respectively.
RMSE has the same units as the image intensities, since it is estimated as the square root of MSE [13]. It gives a relatively high weight to large errors. It is calculated as the square root of the MSE of Eq. (5) and can be expressed by Eq. (7):
RMSE = √MSE (7)
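Eqs. (5)-(7) translate directly into code; the following Python sketch mirrors what MATLAB's immse and psnr compute for 8-bit images:

import numpy as np

def mse(I, K):
    return float(np.mean((I.astype(np.float64) - K.astype(np.float64)) ** 2))

def psnr(I, K, max_i=255.0):           # Eq. (6); max_i = 255 for 8-bit images
    m = mse(I, K)                      # Eq. (5)
    return float("inf") if m == 0 else 10.0 * np.log10(max_i ** 2 / m)

def rmse(I, K):                        # Eq. (7)
    return float(np.sqrt(mse(I, K)))

I = np.random.randint(0, 256, (64, 64)).astype(float)     # toy original image
K = np.clip(I + np.random.normal(0, 5, I.shape), 0, 255)  # toy denoised image
print(psnr(I, K), rmse(I, K))          # higher PSNR / lower RMSE = better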
Dynamics
The performance analysis is done by the following steps (Figure 4).
A sample image for the performance analysis is a human brain MRI image, as shown in Figure 5. All four types of noise are first added to the human brain image.
The reference image is converted to a gray scale image. Four types of noise, Gaussian noise, Salt & Pepper noise, Poisson noise and Speckle noise, are added using MATLAB (Figures 6-9).
The Gaussian noise is distributed over the whole image, whereas the Salt & Pepper noise is independent of and uncorrelated to the image pixels, so it can be seen as black and white particles. The reason behind the occurrence of this noise is sudden changes in the image signal. Poisson noise is also termed shot noise, and it follows the Poisson distribution.
In this noise, different pixels are affected by independent noise intensities, whereas in Speckle noise the gray level of the image is affected.
Performance Analysis by Four Filters
Denoising was performed using four filters: average filter, median filter, Gaussian filter, and Wiener filter.
The performance of the average filter is satisfactory in these results, as it takes the average value of the neighborhood pixels in the mask and replaces the current value with it. It has some blurring effect on the image. It is mostly used to remove irrelevant details from an image (Figure 6).
The quality of the image filtered by the median filter is better than the other filtered images. The functionality of the median filter is based on ranking the pixel values in the filter region; the center value is replaced by the median value (Figure 7).
The Gaussian filter removes Gaussian noise, but it is not good for images having Poisson noise. The reason behind this is that the Gaussian distribution is continuous whereas the Poisson distribution is discrete. The median and Wiener filters have better performance (Figure 8). For Poisson noise and Speckle noise, the Wiener filter eliminates a good amount of noise because it minimizes the MSE between the estimated and the true signal (Figure 9). By minimizing the MSE, the Wiener filter reduces noise arising from random fluctuations, as in electrical conduction (Poisson noise). Speckle noise is caused by coherent processing of backscattered signals from multiple distributed and rough targets; the noise occurs in both multiplicative and additive forms. The main advantage of the Wiener filter is that it computes a statistical estimate of an unknown signal from a related signal consisting of the signal of interest corrupted by noise. This estimate of the unknown signal of interest is used to denoise the image.
The performance of the average filter in these results is satisfactory in comparison to the median filter and Wiener filter. The Gaussian filter is faster than the median filter because multiplying and adding are probably faster than sorting. The Wiener filter is good at removing Poisson noise and Speckle noise, but for Salt and Pepper noise it is not as effective as the other filters.
Results and Discussion
A gray scale image of the human brain is taken, and noising and denoising are carried out using MATLAB.
Calculation of the PSNR and MSE values has been done using MATLAB functions (peaksnr = psnr(A, X) and err = immse(A, X), where A and X are the noisy image and denoised image, respectively). Table 3 shows the PSNR values. The higher the PSNR, the higher the filtering quality. The PSNR shows the performance of the filter, and the RMSE describes the noisiness.
As shown in Table 3, the Gaussian filter has the highest PSNR for Gaussian noise. Similarly, the median filter removes Salt and Pepper noise at a good rate. The Wiener filter works well on Poisson and Speckle noise.
From Table 4, it is easy to understand the performance of the filters. The Gaussian filter has the lowest RMSE value for Gaussian noise. This demonstrates that if the noise follows the Gaussian (normal) distribution, the Gaussian filter is significantly useful. Based on the performance and simulation results, we can say that the Gaussian filter performs well on Gaussian noise, along with the median filter, but is lacking in the cases of Salt & Pepper noise and Speckle and Poisson noise. The median filter and Wiener filter have consistently good performance overall, but the Wiener filter is less effective for Salt and Pepper noise than the median filter. The average filter has good performance on Gaussian noise (5 × 5 mask) and Speckle noise. The performance of the median filter is consistently good, and the Wiener filter is suitable for removing Poisson noise.
Concluding Remarks
Denoising medical images is generally quite difficult. Here we attempt to denoise four types of noise (Gaussian, Salt & Pepper, Poisson, and Speckle noise) on a human brain image using four filters (average, median, Gaussian, Wiener) in the MATLAB environment. We have analyzed the performance of the various filters for the different noises. The PSNR shows the performance of the filter. Based on the performance and simulation results, we could conclude that the Gaussian filter and median filter were good on Gaussian and Salt & Pepper noise, respectively.
The performance of the Wiener filter was good on Poisson noise. This analysis can be further extended by including more noise types like anisotropic noise, Rayleigh noise, gamma noise, etc., and more denoising filters like the adaptive filter, order-statistic filter, geometric mean filter, etc. One can also use a hybrid filtering approach employing two or more filters. More study can be done in this field to provide more optimal solutions, resulting in better and more effective methodologies. Medical images like X-ray, MRI, and ultrasound images can be used for denoising using various filtering algorithms.
Conflict of Interest
No potential conflict of interest relevant to this article was reported. | 2019-02-19T14:07:18.433Z | 2018-09-25T00:00:00.000 | {
"year": 2018,
"sha1": "697bb2182db2f5dae080a54d33378ccbdee2d965",
"oa_license": "CCBYNC",
"oa_url": "http://www.ijfis.org/journal/download_pdf.php?doi=10.5391/IJFIS.2018.18.3.175",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "c08b737f40dca80c0cf5a8b4eba84439f269634e",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
34577940 | pes2o/s2orc | v3-fos-license | Tissue Engineering of Craniofacial Tissues – A Review
Engineering of craniofacial tissues would have a profound impact on various treatment strategies in dentistry. The role of mesenchymal stem cells in the formation of all craniofacial and dental structures has been well demonstrated. Stem cells from various craniofacial structures like dental epithelium, dental follicle, apical papilla, exfoliated deciduous teeth, periodontal ligament and dental pulp are readily accessible and have immense potential in craniofacial research for regenerative and tissue engineering applications. Various signalling molecules that pattern the morphogenesis of craniofacial tissues have been explored. This review article epitomises how dentistry should evolve and highlights the need for close partnerships between basic and clinical scientists.
Introduction
A cadre of craniofacial tissue engineers with interdisciplinary skills in stem cell biology, genetics and molecular biology, materials science and mechanical engineering, as well as a clinical knowledge of dental, oral and craniofacial disorders is needed to advance the field of craniofacial tissue engineering. Biologically, mesenchymal cells are primarily responsible for the formation of virtually all dental, oral and craniofacial tissues and have been demonstrated to generate these key structures. Many dental and craniofacial structures are readily accessible, thus presenting a convenient platform for biologists, bioengineers and clinicians to test tissue-engineered prototypes [1,2]. Strategically, tissue engineering technologies pioneered outside the dental community have profound implications for dental practice.
When shifting rehabilitation strategies from prosthetic to regenerative, such as with a tissue engineering approach, one must deal with the uniqueness of the craniofacial structures in their development and function. In the fields of maxillofacial surgery and periodontics, tissue engineering has been used to generate alveolar bone [3] and to regenerate oral tissues lost to cancer, decay, and periodontitis [4]. Cultured human dental pulp and gingival fibroblasts adhere to biodegradable scaffolds, and proliferate and differentiate in vitro [5,6] and in vivo [7]. While aiming high at tadpole-style regeneration [8], a cell-based therapy could bring about the regeneration of a complete, fully functional tooth composed of enamel instead of ceramic, dentin instead of composite, pulp instead of gutta-percha and root/periodontium instead of a titanium fixture. Successful bioengineering of whole tooth crowns composed of accurately formed enamel, dentin, and pulp tissues has been reported [9]. A tissue engineering approach could treat devastating craniofacial congenital malformations, such as clefts and craniosynostosis, freeing children from the morbidity of the actual treatment [10].
Tissue engineering is generally considered to consist of three key components: (i) stem/progenitor cells, (ii) signaling molecules and (iii) a scaffold or extracellular matrix (ECM), as represented in Figure 1.
Stem cells
Mesenchymal Stem Cells (MSCs)
MSCs are self-renewable and have been experimentally differentiated into all mesenchymal or connective tissue lineages [11,12] and, in many cases, have been used to engineer the craniofacial structures that their prenatal predecessors, mesenchymal cells, are capable of generating during development.
The capacity of MSCs for the de novo formation and/or regeneration of craniofacial structures is so natural an endeavour that one wonders why this potential was not substantially exploited until the past decade. MSCs have been reported to differentiate into hepatic [13], renal [14], cardiac [15], and neural cells [16,17].
Whether highly purified or cloned MSC populations are necessary for the engineering of craniofacial structures is not clear. First, stem cell populations that generate native craniofacial structures, such as the mandibular joint, are heterogeneous and likely include both mesenchymal and hematopoietic stem cells; the morphogenesis of the articular condyle requires stem cells, chondrocytes and osteoblasts in addition to angiogenesis [18]. Second, host cell invasion and stem cell homing are likely inevitable in porous biomimetic scaffolds that are used as carriers for delivering stem cells and/or stem-cell-derived tissue-forming cells [19,20].
Dental Epithelial Stem Cells (EpSC)
The concept that teeth can be formed from primordia derived in vitro from non-dental cultured cells is an important step in the tissue engineering of teeth for replacement. However, this approach relies on the ability of such embryonic primordia to develop into the complete organ when transferred to the adult. The development of an embryonic primordium into a tooth in the adult jaw thus represents a unique approach [21].
The embryonic oral epithelium is a simple, two-cell-thick ectoderm. This epithelium can be engineered to express the appropriate signals to initiate odontogenesis, allowing a complete tooth primordium to be engineered entirely from cultured cells. The identification of stem cells in dental pulp and from exfoliated deciduous teeth also raises the possibility that a patient's own tooth cells could be used to generate new tooth primordia [22,23]. The ability to tissue-engineer an organ rudiment such as a tooth primordium constitutes a major component of a regenerative medicine procedure [24]. However, such organ primordia must be capable of developing into the complete organ in situ, in the appropriate site in the adult body. The renal capsule and anterior chamber of the eye are two adult sites that have been routinely used to support ectopic organ and tissue development, because they are immune-compromised and can provide an adequate blood supply to the transplanted tissue. To date, there have been no demonstrations of development of a complete organ at its normal location in the adult body following transplantation of an embryonic primordium. However, seeding cultured tooth germ cells on biodegradable scaffolds and then implanting them has successfully resulted in bioengineered tooth tissues [25]. Non-dental mesenchymal cells, when placed in contact with embryonic oral epithelium and transplanted to an ectopic site, generated tooth structure [26]. This report is pivotal in that it demonstrates that uncommitted MSC, in association with oral epithelium, can be instructed to mimic developmental events leading to growth of a tooth structure composed of enamel, dentin, and pulp with a morphology resembling that of a natural tooth. These observations offer exciting opportunities for replacement of natural teeth damaged through disease or trauma and for those missing in hypodontia. There are obvious practical obstacles still to be overcome before this might be available as a routine clinical treatment, but it provides an elegant example of the translation of basic science research to the clinical arena.
The rodent incisor is a unique model for studying dental EpSC since, in contrast to human incisors or those of other vertebrates, this tooth grows throughout life. An EpSC niche, which is located in the apical part of the rodent incisor epithelium (cervical loop area), is responsible for continuous enamel matrix production [27][28][29][30][31]. In this highly proliferative area, undifferentiated epithelial cells migrate toward the anterior part of the incisor and give rise to ameloblasts [32].
Dental EpSC can be isolated from post-natal teeth but present complex problems that strongly limit their clinical application in humans. Ideally, alternative sources should be easily accessible and available from adult individuals, and the derived cells must have potential for enamel matrix production. The use of non-dental EpSC will only be possible with the transfer of genes conferring odontogenic potential on non-dental epithelia prior to any association with mesenchymal cells. This is certainly one of the most exciting goals of the next decade in tooth engineering [32].
Making entire teeth with enamel and dentin structures in vivo is a reality and not a utopia. However, these bioengineered teeth have been produced in ectopic sites and are still missing some essential elements, such as the complete root and periodontal tissues that allow correct anchoring into the alveolar bone. In a unique study of growing teeth in the mouse mandible, epithelial and mesenchymal cells were sequentially seeded into a collagen gel drop and then implanted into the tooth cavity of adult mice. With this technique, the presence of all dental structures, such as ameloblasts, odontoblasts, dental pulp, blood vessels, crown, PDL, root and alveolar bone, could be observed [33]. Thus, the implantation of these tooth germs in the mandible allowed their development, maturation and eruption, indicating that stem cells could be used in the future for the replacement of missing teeth in humans.
Dental Follicle Precursor Cells (DFPCs)
Dental follicle (DF) contains progenitor cells that form the periodontium, i.e., cementum, periodontal ligament (PDL) and alveolar bone. Precursor cells have been isolated from human DFs of impacted third molars. These cells form low numbers of adherent clonogenic colonies when released from the tissue following enzymatic digestion [34].
After transplantation into immunocompromised mice, DFPC transplants (i) expressed human-specific transcripts for BSP, OCN, and collagen type I; (ii) showed genetic expression of BSP and OCN increased more than 100-fold; (iii) showed decreased expression of collagen type I transcripts; and (iv) did not form dentin, cementum or bone in the transplant in vivo [36].
DFPCs can also differentiate, form robust connective tissues and produce clusters of mineralized tissue [37,38]. These connective tissues form biological diaphragms. Here, typical markers for PDL and cementum were expressed, such as collagen type XII and CAP [37].
Dental Follicle Stem Cells (DFSC)
Most people have an impacted third molar that does not occlude and subsequently have the impacted tooth extracted to avoid inflammation or for orthodontic therapy. Such extracted teeth usually contain DF and are commonly discarded as medical waste. Hence, the DF is a candidate source for isolating MSC. The detailed differentiation pathways from stem cells to PDL cells, cementoblasts, and osteoblasts are expected to be elucidated in the future.
The existence of progenitor cells in bovine DF after transplantation has been reported [42]. Cultured bovine DFSCs at the root-forming stage, in combination with hydroxyapatite, successfully generated cementum tissues and fibroblasts in vivo. Interestingly, cementum formation was observed only at the interface of the hydroxyapatite beads in the implants. Unlike alveolar osteoblasts, DFSCs do not form bone tissue by this method. Although it is unclear whether DFSCs have the characteristics of MSC, they are clearly progenitor cells for cementoblasts [42]. Human DF progenitor cells were also reported in the DF of wisdom teeth at the root-formation stage [34]. Subsequently, mouse DFSCs at the crown-formation stage were shown to differentiate into osteogenic, adipogenic and chondrogenic lineages [46], while rat DFSCs have the capacity for osteogenesis, adipogenesis and neurogenesis but not chondrogenesis [39]. Thereafter, postnatal MSCs from human dental tissue such as DF at the apex of tooth root, PDL and dental pulp were isolated and compared to various other dental stem cells. The results showed that DFSCs have excellent proliferation rates and a capacity for osteogenesis and adipogenesis [43]; however, their capacity for chondrogenesis remains unknown.
DFSCs have been isolated from the follicle of human third molars and express the stem cell markers Notch1, STRO-1 and Nestin [34]. STRO-1 positive DFSC can differentiate into cementoblasts in vitro [35] and are able to form cementum in vivo [42]. In addition, immortalized dental follicle cells were transplanted into immune-deficient mice and were able to recreate a new PDL-like tissue after 4 weeks [40]. It has been shown that enamel matrix derivatives (EMD) activate human dental follicle stem cells (hDFSCs) toward the cementoblastic phenotype; hDFSCs acquired cementoblast features under stimulation with BMP-2/-7 and EMD in vitro [35]. The three main lineages identified were a highly undifferentiated PDL-type lineage and cementoblastic or alveolar bone osteoblastic lineages. The profound cellular heterogeneity of DFSCs suggests that heterogeneous cellular constituents might play a role in tissue regeneration as much as the individual lineages might do [46].
Stem Cells from Human Exfoliated Deciduous Teeth (SHED)
The isolation of post-natal stem cells from an easily accessible source is indispensable for tissue engineering and clinical applications. Mesenchymal progenitors isolated from the pulp of human deciduous incisors were named SHED (Stem cells from Human Exfoliated Deciduous teeth) and exhibited high plasticity, since they could differentiate into neurons, adipocytes, osteoblasts and odontoblasts [23]. The discovery of SHED sheds light on the intriguing possibility of using DPSC (Dental Pulp Stem Cells) for tissue engineering [47,48].
The obvious advantages of these cells are that they (i) have a higher proliferation rate compared with stem cells from permanent teeth [23], (ii) are easily expanded in vitro, (iii) have high plasticity, since they can differentiate into neurons, adipocytes, osteoblasts and odontoblasts, and (iv) are readily accessible in young patients [23], making them especially suitable for young patients with mixed dentition [49]. SHEDs induced bone formation and produced dentin under in vivo conditions, and they were able to survive and migrate in the murine brain after transplantation into immune-compromised animals [23].
SHEDs are distinguished by their osteoinductive ability and high plasticity. When SHEDs were seeded in porous poly L-lactic acid (PLLA) prepared within human tooth slice scaffolds and transplanted into the subcutaneous tissue of immune-deficient mice, they differentiated into odontoblast-like cells with morphologic characteristics resembling those of odontoblasts. Moreover, an increase in micro-vessel density was found in the implants. It was also verified that the transplanted SHEDs were capable of differentiating into blood vessels that anastomosed with the host vasculature [50]. Thus, SHEDs might be an ideal resource of stem cells to repair damaged tooth structures and induce bone regeneration.
The striking feature of these cells is that they are capable of inducing recipient murine cells to differentiate into bone-forming cells, which is not a property attributed to DPSCs following transplantation in vivo. When single-colony-derived SHED clones were transplanted into immune-compromised mice, only one-fourth of the clones had the potential to generate ectopic dentin-like tissue equivalent to that generated by multicolony-derived SHED [23]. However, all single-colony-derived SHED clones tested are capable of inducing bone formation in immune-compromised mice. While SHED could not differentiate directly into osteoblasts, they appeared to induce new bone formation by forming an osteoinductive template to recruit murine host osteogenic cells [23].
SHED proliferate faster, with greater population doublings, than DPSCs and BMMSCs (bone marrow-derived mesenchymal stem cells) (SHED > DPSCs > BMMSCs). SHED form sphere-like clusters when cultured in neurogenic medium that either adhere to the culture dish or float freely in the culture medium; these can be dissociated by passage through needles and subsequently grown on 0.1% gelatin-coated dishes as individual fibroblastic cells. This phenomenon suggests a high proliferative capacity analogous to that of neural stem cells [23]. In a tooth slice model (horizontal section, 1 mm thick), it was shown that SHED seeded onto synthetic scaffolds seated into the pulp chamber space formed odontoblast-like cells that localized against the existing dentin surface [50]. Investigators have also isolated SHED and termed the cells 'immature DPSCs' (IDPSCs). IDPSCs express the embryonic stem cell markers Oct 4, Nanog, stage-specific embryonic antigens (SSEA-3, SSEA-4), and tumor recognition antigens (TRA-1-60 and TRA-1-81) [52].
Stem Cells from Apical Papilla (SCAP)
Apical papilla refers to the soft tissue at the apices of developing permanent teeth [53,54]. Apical papilla is more apical to the epithelial diaphragm, and there is an apical cell-rich zone lying between the apical papilla and the pulp [55]. Human SCAP have been isolated and their potential to differentiate into odontoblasts was compared with that of PDL stem cells (PDLSCs) [53]. SCAPs exhibit a higher proliferative rate and appear more effective than PDLSCs for tooth formation. Importantly, SCAP are easily accessible, since they can be isolated from human third molars.
SCAP show characteristics similar to, but at the same time different from, those of DPSCs. SCAP appear to be the source of primary odontoblasts that are responsible for the formation of root dentin, whereas DPSCs are likely the source of the replacement odontoblasts that form reparative dentin. In a study with minipigs as a model, the surgical removal of the root apical papilla at an early developing stage halted root development, despite the pulp tissue being intact, whereas other roots of the tooth, containing apical papilla, maintained normal growth and development. Interestingly, only a combination of SCAPs and PDL stem cells induced the formation of a dental connective tissue, namely, the attachment of an artificial tooth crown in the alveolar bone [53].
SCAP along with PDLSCs are capable of generating a bio-root with PDL tissues. A mini-swine model was used: autologous SCAP and PDLSCs were loaded onto HA/TCP and gelfoam scaffolds, respectively, and implanted into sockets of the lower jaw. Three months later, a bio-root had formed [53]. The bio-root structure was composed of dentin randomly deposited by the SCAP. The bio-root was encircled with PDL tissue and appeared to have a natural relationship with the surrounding bone.
Periodontal Ligament Stem Cells
PDL itself contains progenitor cells; human stem cells have been isolated from PDL. They express stemness markers such as STRO-1 and CD146, and in vitro they are able to form alizarin red-positive nodules and express cementoblastic/osteoblastic markers (ALP, OCN, BSP). These cells have been called PDLSCs. They were the first candidate stem cells for periodontal tissue engineering [57] and have the potential to form collagen fibres and generate cementum/PDL-like structures in vivo, serving as reliable sources for periodontal tissue reconstruction [57].
In vitro expanded PDLSCs were transplanted subcutaneously into the dorsal surfaces of immunocompromised mice using HA/TCP as a carrier and showed the capacity to differentiate into cementoblasts and to form cementum/PDL-like structures [58]. PDLSCs show the capacity to form collagen fibres connecting to the cementum-like tissue; these fibres are similar to Sharpey's fibres and they suggest the potential to regenerate PDL attachment.
An alternative line of research being actively pursued in periodontal regenerative research is the possibility of utilizing stem cell implantation to enhance regenerative outcomes. The principle of this approach is the isolation of MSC from bone marrow stroma or dental tissues, expansion of cell numbers ex vivo, and re-implantation of cells into the wound seeded into a suitable porous scaffold material or other matrix material, including collagen matrices, β-TCP and combined β-TCP-ECM scaffolds. The suggestion that cell re-implantation could be used in this way was first raised many years ago, although only recent advances have suggested that this could be a feasible approach. A number of animal studies have now reported the isolation, ex vivo expansion, and reimplantation of such cells derived from bone marrow or dental sources into periodontal wounds, where they appear to have been able to contribute to the regenerative processes [57][58][59]. Further studies are required to determine the full potential of this therapeutic approach. Studies may need to address the potential differences between differently derived stem cell sources, such as marrow stroma, PDL, and dental pulp cells. In addition, recent studies suggest that cells may retain their embryonic positional programming, and this may have profound effects on the ability of implanted cells to contribute appropriately to tissue regeneration at heterotopic sites [60,61]. This observation may have important implications for regenerative stem cell therapies in general, but its significance remains to be investigated further.
PDLSCs implanted into nude mice generated cementum/PDL-like structures that resemble the native PDL as a thin layer of cementum that interfaced with dense collagen fibres, similar to Sharpey's fibers. After a three-week culture with an adipogenic-inductive cocktail, PDLSCs differentiated into Oil-red-O-positive, lipid-laden adipocytes [58]. Upon four-week osteo/odontogenic induction, alizarin-red-positive nodules formed in the PDLSC cultures, similar to MSCs and DPSCs [58,62]. Thus, the PDLSCs have the potential for forming periodontal structures, including the cementum and PDL.
A study in miniature pigs [57] showed that autologous PDLSCs are capable of forming bone, cementum and PDL when transplanted on an HA/TCP carrier into surgically created periodontal defects. Precursor cells from human DFs of wisdom teeth have been isolated and characterized for their periodontal regeneration potential. In vitro, these cells formed a membrane-like structure consisting of a connective-like matrix and a mesothelium-like cellular structure with nuclei of granular calcification. It has been demonstrated that immortalized mouse dental follicle cells are able to regenerate a PDL-like tissue in vivo [40].
Nondental stem cells such as embryonic stem cells [63], BMMSC [64], induced pluripotent stem cells (iPS) [65], adipose-derived stem cells and cultured periosteum [66] have also been investigated for periodontal regeneration. These cells all have the potential for differentiating into osteoblasts, cementoblasts, and fibroblasts and to form cementum/PDL-like structures under appropriate conditions.
PDLSCs are also capable of differentiation to the cementoblastic lineage. The conditioned medium from developing apical tooth germ cells (APTG-CM) was shown to be able to provide a cementogenic micro-environment and induce differentiation of PDLSCs along the cementoblastic lineage. When transplanted into immunocompromised mice, the induced PDLSCs showed tissue-regenerative capacity to produce cementum/PDL-like structures, characterized by a layer of cementum-like mineralized tissues connected with PDL-like collagen fibres. This has important implications for periodontal engineering [67].
Dental Pulp Stem Cells
Although the regenerative capacity of the human dentin/pulp complex is not well-understood, it is known that, upon injury, reparative dentin is formed as a protective barrier for the pulp [68]. Accordingly, one might anticipate that dental pulp contains the dentin progenitor cells that are responsible for dentin repair. Dental pulp contains proliferating cells that are analogous to bone cells, because they express osteogenic markers and respond to many growth factors for osteo/odontogenic differentiation [69][70][71]. In addition, dental pulp cells are capable of forming mineral deposits with distinctive dentin-like crystalline structures [22,72,73]. DPSCs have also been isolated from extracted human third molars [22,74].
The dental pulp consists of a population of cells that contains lineage-specific progenitor cells, as well as stem cells with multi-lineage differentiation capabilities [75,76]. Dental pulp tissue is expected to be a cellular source for bone tissue repair and engineering [77] because, when cultured in osteogenic-inducing media, it develops phenotypes similar to those of osteoblasts, as shown by the production of collagen type I, osteopontin, and BSP, as well as by mineralizing capability [74,78].
Following physiological stimulation or injury, such as caries and operative procedures, stem cells in pulp may be mobilized to proliferate and differentiate into odontoblasts by morphogens released from the surrounding dentin matrix [79]. Tissue engineering with the triad of dental pulp progenitor/stem cells, morphogens and scaffolds may provide a useful alternative method for pulp-capping and root canal treatment [80].
Dental pulp contains highly proliferative cells that can be activated upon injury and undergo proliferation and differentiation toward osteoblastic phenotypes to provide for dentin repair. It has been demonstrated that DPSCs can generate a dentin/pulp-like structure in vivo. When DPSCs are transplanted in conjunction with HA/TCP powder, an "odontoconductive" scaffold, into immune-compromised mice, a collagenous matrix deposit is observed after 6 weeks [22]. The dentin matrix contains a reservoir of bioactive growth factors which can mediate these processes, and the functional roles of members of the TGF-β superfamily in reparative dentinogenesis suggest a possible role as novel therapeutic mediators or in tissue engineering solutions to dental regeneration. During dental disease or trauma, tissue damage and inflammation at sites of injury may compromise the ability of the pulpal ECM to mediate reparative events, and there may be advantages to providing a suitable matrix to encourage cell migration and differentiation at such sites. Recent studies using the tooth slice ex vivo culture system have demonstrated the ability of the bioactive growth factor TGF-β1, contained within alginate hydrogels, to induce odontoblast-like cell differentiation within the dentin-pulp complex, with subsequent up-regulation of dentin matrix secretion [81].
An osteo-dentin-like matrix was observed two weeks after subcutaneous implantation in rabbits of poly(lactic-co-glycolic acid) polymeric porous scaffolds grafted with DPSCs. The cells formed mineralized-like structures even without the addition of any differentiation chemicals; these cells may undergo differentiation into hard tissue-forming cells when provided with an appropriate substrate [82]. One important feature of pulp cells is their odontoblastic differentiation potential. Human pulp cells can be induced in vitro to differentiate into cells of odontoblastic phenotype, characterized by polarized cell bodies and accumulation of mineralized nodules [72,73,83]. DPSCs isolated with enzyme treatment of pulp tissues form CFU-Fs with various characteristics [22,84]. There are different cell densities among the colonies, suggesting that each cell clone may have a different growth rate, as reported for BMMSCs [75]. Within the same colony, different cell morphologies and sizes may be observed. If seeded onto dentin, some DPSCs convert into odontoblast-like cells with a polarized cell body and a cell process extending into the existing dentinal tubules [84,85].
To elucidate the self-renewal ability, investigators isolated cells from DPSC implants at two months post-subcutaneous implantation, by enzymatic digestion and subsequent expansion in vitro. The isolated cells from the DPSC implants were sorted by fluorescent-activated cell-sorting (FACS) with human beta1-integrin monoclonal antibody. The isolated human cells were re-implanted into immunodeficient mice for two months. The recovered secondary implants yielded the same dentin/pulp-like structures as the primary implants.
Human DSPP was expressed in dentin-like structures, confirming the human origin of the odontoblast/pulp cells in the secondary DPSC implants [86].
Signalling molecules
Various growth factors and cytokines are conjugated to the ECM, such as BMP, fibroblast growth factor-2, interleukin-6, insulin-like growth factor (IGF), platelet-derived growth factor (PDGF), transforming growth factor-β1, etc. [87][88][89][90]. This co-localization acts as a storage pool of growth factors and may reduce growth factor degradation [91], protecting them from the local microenvironment, while facilitating the presentation of the growth factors to cell surface receptors [92]. Several studies have reported on roles for the ECM in cell proliferation, differentiation, migration, and apoptosis through interaction with cells and cell receptors [93][94][95][96][97]. During development, families of protein molecules govern the signaling that patterns the morphogenesis of tissues where cells become enmeshed into an ECM. Among the identified families, fibroblast growth factors (FGFs) (potent angiogenic factors and key players in cellular proliferation and differentiation) [98][99][100], hedgehog proteins (fundamental regulators of animal development) [101], members of the BMP superfamily (potent molecules for bone and cartilage induction) [102], Wingless- and Int-related proteins (Wnts) (believed to promote the maintenance and proliferation of stem cells) [103], platelet-derived growth factors (PDGF) (regulators of growth, division, and angiogenesis) [104], and vascular endothelial growth factors (VEGF) (signaling proteins for the de novo formation of blood vessels and the growth of blood vessels from pre-existing vasculature) [105] were evaluated for their contributions to craniofacial development, and avenues are currently being explored for their value when partnered with the stem-cell-based engineering of craniofacial defects [106,107].
Among the various growth factors (GFs), BMPs (mainly BMP-2 [108], BMP-4 [109], and BMP-7 [110]) have been most frequently applied in orofacial reconstruction. Although BMPs possess a well-established bone-inducing effect in vivo, it is only recently that their effects have been exploited at the cellular level, where members of the family have been found to induce osteogenic differentiation of adipose-tissue-derived stem cells [111,112]. Major bone defects have been repaired by a combination of bone marrow stromal cells and BMP-7 [113], or by exclusive reliance on the growth factors [114,115].
Platelet-rich plasma (PRP), used as a growth-enhancing substance, has been widely applied [116]. When PRP was used alone, the assorted mixture of wound-healing GFs demonstrated a solid foundation for soft-tissue healing [117], but not for hard-tissue healing, where its contribution to either alveolar bone or maxillary sinus augmentation continues to generate controversy [118]. While the GFs contained in the PRP are mitogenic, osteogenic, and angiogenic [119], the blend could lack the osteoinductive ingredient that, in the form of osteogenic cells, improved bone regeneration [120]. A controlled study in a canine model reported that the PRP/cell construct regenerated alveolar defects comparably with a cancellous autograft [121]. BMPs have been implicated in tooth development, and the expression of BMP2 is increased during the terminal differentiation of odontoblasts [122,80]. BMP2 also induces a large amount of reparative dentin on the amputated pulp in vivo [123]. It has been suggested that BMP2 may regulate the differentiation of pulp cells into the odontoblastic lineage and stimulate reparative dentin formation [80]. After treatment with BMP2, when human pulp progenitor/stem cells with HA/TCP as a scaffold were implanted into immunocompromised mice, tubular dentin was formed [22,75]. Beads soaked in human recombinant BMP2 induce the mRNA expression of DSPP, the differentiation marker of odontoblasts, after implantation onto dental papilla in organ culture. The expression of DSPP was increased later, but not that of enamelysin/MMP20 (matrix metalloproteinase) and PHEX (phosphate-regulating gene with homologies to endopeptidases on the X-chromosome). The enhanced expression of enamelysin/MMP20 and PHEX on day 21 in pellet cultures, compared with monolayer cultures, suggested that the differentiation of pulp cells into the odontoblastic lineage was more advanced. BMP-2 and BMP-7 have been shown to stimulate regeneration of periodontal tissues, particularly bone and cementum, in a range of animal models including rodents, dogs, and non-human primates [124][125][126][127]. Larger amounts of reparative dentin formed on the amputated pulp with a BMP2-supplemented pellet than with a control pellet. The ECM of the pellet functions as a natural scaffold, which retains and releases BMPs [128].
Treatment of monolayer cultures of bovine pulp cells with rhBMP2 significantly increased the expression of α1(I) collagen and ALPase activity [122]. The increase in ALPase activity by rhBMP2 was higher in pellet cultures than in monolayer cultures on day 14. The expression of DMP1 (dentin matrix protein), DSPP, enamelysin, and PHEX was increased more by rhBMP2 in pellet cultures than in monolayer cultures. The expression of the Cbfa1 and Cbfa3 transcription factors has been reported in the dental papillae of mouse tooth germ at cap and bell stages, and later is limited to the odontoblastic layer [129].
Indeed, animal studies have demonstrated that the addition of a whole range of different factors, including BMP-2, BMP-7, GDF-5 (BMP-14), PDGF, FGFs, IGF-1 and TGF-β may be able to stimulate regeneration of the periodontal tissues [124,126,[130][131][132][133]. The possible application of GDF-5 (BMP-14) for periodontal regeneration has been tested. Although GDF-5 is a member of the BMP family and shows osteoinductive potential, it has also been shown to be able to stimulate soft tissue repair in tendons and ligaments [134,135]. Initial animal studies of GDF-5-stimulated periodontal regeneration suggest that it may be able to increase regeneration without associated ankylosis [135,136].
Pre-clinical investigations have revealed encouraging results with respect to the repair of periodontal defects with growth factors such as PDGF [137][138][139][140], BMPs [110,127], and fibroblast growth factor-2 [132,141,142]. PDGF has demonstrated the promise to regenerate bone in periodontal osseous defects, as shown in clinical trials when used alone [130,143,144] or in combination with IGF-1 [145,146]. BMPs have been shown to induce bone regeneration in peri-implant and maxillary sinus-floor augmentation procedures [147,148], extraction socket defects, [149][150][151] and mandibular discontinuity defects [152,153]. The use of BMPs for oral and craniofacial reconstruction has recently been reviewed [80,154].
The local regulation of cellular function during wound healing is carried out by the interplay of cytokines and growth factors. Initial granulation tissue formation and neovascularization are regulated by release of factors including the platelet-derived growth factor (PDGF), together with expression of other factors that may include vascular endothelial growth factor (VEGF) and transforming growth factor beta (TGF-β) [155,156]. The osteoblastic differentiation cascade may be of particular importance when considering regeneration of alveolar bone and the other periodontal tissues.
The key factor in the initiation of osteoblast differentiation is the commitment of osteogenic stem cells to an osteoblastic lineage. Osteoblastic commitment is known to be regulated by the action of BMPs, and more recently it has become clear that the BMP signalling pathway is intimately associated with the Wnt signalling pathway. Differentiation of these committed cells is driven by growth factors, including fibroblast growth factors (FGFs), PDGF, TGF-β, and the insulin-like growth factors, with the cells undergoing a series of amplifying divisions followed by increasing expression of genes regulating osteoblast differentiation, including the tissue-specific transcription factors Runx2 and Osterix [157]. Thus, physiologically, the balance between signalling molecules and their inhibitors is a key factor for the regulation of tissue homeostasis, repair, and regeneration. It is postulated that expression of growth factor inhibitors may be a limiting factor in the regenerative potential of the periodontium, particularly as the different tissue compartments (bone, PDL, gingivae) encroach on each other.
Scaffolds
During tissue engineering, cells and growth factors are combined with a porous biodegradable scaffold to repair and regenerate tissue. The scaffold acts as a temporary matrix while the cells secrete the ECM that is required for tissue regeneration. Scaffolds can be used to induce the formation of desired tissue following the in-growth of cells from surrounding areas and as carriers for seeded autogenous cells that are cultured in bioreactors and subsequently reimplanted in the patient [158]. One of the greatest challenges faced in tissue-engineered devices, regardless of tissue type, is promoting healing in three dimensions. Allowing angiogenesis throughout the scaffold is one of the most critical factors for the success of the scaffold.
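A rough estimate illustrates why vascularisation of the scaffold interior is so critical. If oxygen reaches cells by diffusion alone, a Krogh-type balance between diffusive supply and cellular consumption bounds the viable depth. The numerical values below are generic, textbook-scale assumptions for illustration, not measurements from this article.

```latex
% Maximum depth L supplied by diffusion before oxygen is consumed:
\[
  L \;\approx\; \sqrt{\frac{2\,D\,C_{0}}{q}}
  \;\approx\; \sqrt{\frac{2 \times (2\times10^{-9}\,\mathrm{m^2/s})
        \times (0.2\,\mathrm{mol/m^3})}{2\times10^{-2}\,\mathrm{mol/(m^3\,s)}}}
  \;\approx\; 200\,\mu\mathrm{m}
\]
% D: oxygen diffusivity in tissue; C0: oxygen concentration at the
% nearest vessel; q: volumetric consumption rate (all assumed values).
% The result matches the commonly cited 100-200 um limit beyond which
% cells in an avascular scaffold core become hypoxic.
```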
Scaffolds have been fabricated for dental applications with a variety of natural and synthetic biomaterials, such as proteins [159], ceramics [160], metals [161], and polymers [162]. An appropriate scaffold for tissue engineering will be one that is created with biology in mind. The goal is the integration of the new tissue grown in the scaffold with the host tissue. Ideally, the scaffold provides a temporary pathway for regeneration and will degrade either during or after healing, thereby obviating the need to remove the material later and eliminating possible side effects associated with foreign materials in the body.
For tissue engineering, the ECM is required to act as a scaffold for cell attachment and to modulate cell proliferation and differentiation. The ECM is a mixture of proteins including collagens, proteoglycans, and laminins forming an elastic network surrounding most cells and tissue structures [163,164]. The composition, distribution and organization of the ECM is diverse, depending on tissue types and developmental stage [165,166]. The ECM binds to cell surface receptors, triggering changes in gene expression in the nucleus resulting in feedback mechanisms, which in turn modulate the ECM. The functions of the ECM and those of scaffolds in engineered tissues are tabulated in Table 1.
The ideal scaffold for generating new tissue should (i) be biocompatible, biodegradable and non-toxic, (ii) support the attachment, migration, proliferation and differentiation of cells and the synthesis of new matrix, (iii) have adequate mechanical strength to withstand physiological stresses and minimise stress shielding in the surrounding host bone [167], (iv) have an integral, interconnected porous network, and (v) permit neovascularisation for tissue growth. Various other properties of scaffolds, such as pore size, pore shape, pore-wall thickness, pore interconnectivity, pore-wall surface area, porosity, surface morphology, rate of degradation, surface chemistry, and mechanical stability, affect bone healing. A sketch of how such design parameters might be encoded and screened is given below.
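As an illustration only, the hypothetical Python sketch below encodes the scaffold properties listed above as a simple data structure and screens a candidate against placeholder thresholds. The field names and cut-off values are assumptions for demonstration, not requirements stated in this article.

```python
from dataclasses import dataclass

@dataclass
class ScaffoldSpec:
    """Container for scaffold design parameters discussed in the text."""
    material: str
    pore_size_um: float             # mean pore diameter, micrometres
    porosity: float                 # void fraction, 0..1
    interconnected: bool            # integral, interconnected porous network
    degradation_months: float       # time to substantial mass loss
    compressive_modulus_mpa: float  # surrogate for mechanical strength

def meets_basic_criteria(s: ScaffoldSpec,
                         min_pore_um: float = 100.0,
                         min_porosity: float = 0.70,
                         min_modulus_mpa: float = 1.0) -> bool:
    """Screen a candidate against illustrative (placeholder) thresholds."""
    return (s.interconnected
            and s.pore_size_um >= min_pore_um
            and s.porosity >= min_porosity
            and s.compressive_modulus_mpa >= min_modulus_mpa)

# Example: a hypothetical PLGA sponge.
plga = ScaffoldSpec("PLGA", pore_size_um=250, porosity=0.85,
                    interconnected=True, degradation_months=6,
                    compressive_modulus_mpa=2.5)
print(meets_basic_criteria(plga))  # True for these assumed values
```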
Biomaterials Applied for Tissue Engineering
Over the last century, biocompatible materials such as metals, ceramics and polymers have been extensively used for surgical implantation. However, metals and ceramics are not biodegradable and their capability to be processed is very limited; thus they cannot be used as effective scaffolds. Polymer materials have been widely used for tissue engineering because of easy control over biodegradability and processability. There are two kinds of polymer materials: synthetic polymers and naturally derived polymers [168][169][170][171][172][173]. The main biodegradable synthetic polymers include polyesters, polyanhydride, polycaprolactone, polycarbonate, polyfumarate and polyorthoester [174][175][176][177][178][179]. The polyesters such as poly(glycolic acid) (PGA), poly(lactic acid) (PLA), and their copolymer poly(lactic-co-glycolic acid) (PLGA) are most commonly used for tissue engineering. The naturally derived polymers include proteins of natural extracellular matrices such as collagen and glycosaminoglycan, alginic acid, chitosan, Matrigel and polypeptides, etc. [180][181][182][183][184].
The naturally derived polymer materials are biodegradable and possess known cell-binding sites. Potential disadvantages include the level of immunogenicity, the speed of degradation, and a perceived limited ability to tailor the native matrix for specific properties. Matrigel, a well-known extract from the Engelbreth-Holm-Swarm sarcoma that is rich in basement membrane components [185], is widely used in basic science for the study of cell-matrix interactions and increasingly in tissue engineering, particularly in areas such as nerves [186], adipose tissue [187] and skeletal muscle [188]. Collagens, synthesized by several cell types [189], are used in various formats such as gels, meshes, and sponges or in combination with slow-release gelatine microspheres as cell scaffolds for the engineering of tissues such as adipose tissue [190], blood vessels [191], and skin [192]. The mechanical strength of collagen scaffolds and the rate of absorption have been of concern [189,193], resulting in the use of cross-linking agents to alter the biomedical, thermal, and mechanical properties of collagen [193]. Other extracellular matrices used as scaffolds include fibrin and fibrinogen [187,194,195]. A recent study focusing on the development of fibrin-based matrices reported that a modified fibrin/L1Ig6 matrix could induce angiogenesis activity in human umbilical vein endothelial cells in vitro [196]. Studies looking at the addition of fibronectin to tissue engineering scaffolds suggest that fibronectin may be critical for scaffold vascularisation [197,198].
Bone tissue engineering is the most important aspect of craniofacial tissue engineering. Porous scaffolds are designed to support the migration, proliferation, and differentiation of osteo-progenitor cells and aid in the organization of these cells in three dimensions. These scaffolds may be made from a wide variety of both natural and synthetic materials. Aside from autografts and allografts of cancellous and cortical bone [226][227][228], naturally derived materials include cornstarch-based polymers [229], chitosan (a polysaccharide derived from chitin, found in crab shells) [230,231], collagen [232], and coral [233,234]. Of these materials, coral has proven to be an effective clinical alternative to autogenic and allogenic bone grafts for certain applications [235,236]. Scaffolds created from marine coral exoskeletons that are hydrothermally converted to hydroxyapatite, the mineral component of bone, are used for the repair of cyst and tumour defects occurring at the metaphysis of long bones. Synthetic materials include inorganic materials such as calcium phosphates [237][238][239] and organic materials such as poly(phosphazenes) [240], poly(tyrosine carbonates) [241], poly(caprolactones) [242], poly(propylene fumarates) [243], and poly(α-hydroxy acids) [244][245][246][247]. Composites of inorganic and organic materials have also been successfully used to create scaffolds for bone grafts [248][249][250].
Poly(α-hydroxy acids) are the most commonly used polymeric materials for the creation of tissue-engineering scaffolds for bone. The most common of the poly(α-hydroxy acids) are poly(glycolic acid), poly(lactic acid) (PLA), and copolymers of poly(lactic-co-glycolic acid) (PLGA). The degradation products of these materials are easily metabolized and excreted.
Hybridization of Scaffold Materials
The synthetic biodegradable polymers are easily formed into desired shapes with relatively good mechanical strength. Their periods of degradation can also be manipulated by controlling the crystallinity, molecular weight, and copolymer ratio. The scaffolds derived from synthetic polymers, however, lack cell-recognition signals, and their hydrophobic property hinders smooth cell seeding. In contrast, naturally derived collagen has the potential advantages of specific cell interactions and hydrophilicity, but scaffolds constructed entirely of collagen have poor mechanical strength. Therefore, these two kinds of biodegradable polymers have been hybridized to combine the advantageous properties of both constituents [251][252][253][254].
The wettability of a polymer scaffold is considered very important for homogeneous and sufficient cell seeding in three dimensions. Because biodegradable synthetic polymers are relatively hydrophobic, it is difficult to deliver a cell suspension in a manner that uniformly distributes transplanted cells throughout their porous scaffolds. Numerous techniques have been used to evenly seed cells in PLA- and PLGA-based porous scaffolds, such as prewetting the scaffold with ethanol solution and then replacing it with distilled water and the cell suspension, introducing hydrophilic poly(ethylene glycol) into the hydrophobic PLA and PLGA network, or hydrolysis [255,256]. Hybridization with collagen improved the wettability of synthetic sponges with water, which facilitated cell seeding.
Three-dimensional scaffolds for bone tissue engineering should be osteoconductive so that osteoblasts and osteoprogenitor cells can adhere, migrate, differentiate, and synthesize new bone matrix. Hydroxyapatite has frequently been used for implants, coatings, and scaffolds in bone tissue engineering [257,258]. Hydroxyapatite particulates have been incorporated into poly(α-hydroxy acids) to construct hybrid three-dimensional porous scaffolds of poly(α-hydroxy acids) and hydroxyapatite. The incorporated hydroxyapatite shows a good degree of biocompatibility and osteoconductivity. The osteoconductivity of hydroxyapatite can be further improved by associating a protein matrix with the mineral structure [259]. To mimic the unique structure of collagen and hydroxyapatite in natural bone, several kinds of hybrid biomaterials of collagen and hydroxyapatite have been developed [260][261][262].
Nanofibrous Scaffolds
Native ECM is a hierarchically organized nanocomposite. Understanding tissue organization from the molecular to the macroscopic level will likely guide the rational design of synthetic ECM substitutes [263]. Tissue engineering scaffolds should recapitulate this hierarchical organization. The ECM is the nanofibrous protein network that surrounds cells in all tissues to support their many functions [264] and can be emulated with a nanofibrous polymer scaffold. The nanofibers can be designed to promote cell functions such as migration, adhesion, proliferation, differentiation, and tissue neogenesis [265]. The scaffold should have an internal interconnected porous network to allow for cellular integration into the scaffold among other functions. It can even be designed to release growth factors to tailor tissue development [266].
With recent advances in nanotechnology, insight into nanoscale organization is accumulating, and nanofibrous and nanocomposite scaffolds that attempt to mimic the nanoscale morphological features of natural ECM are being developed [267][268][269]. A recent development in calcium phosphate-based scaffolds has been the fabrication of nanofibres [270].
Nanometre-sized fibres have a large surface-area-to-volume ratio and can be processed so that they have high porosity. These features are advantageous because they permit the delivery of drugs or growth factors and allow cell migration and nutrient diffusion. Nanofibrous scaffolds have been fabricated by three techniques: electrospinning, self-assembly, and phase separation. Nanofibrous scaffolds provide a cellular platform for bone formation. However, although calvarial defect models exist [271], bone regeneration research is often geared toward the engineering of long bones. Because of differing developmental origins and ossification processes of the cranial skeleton and appendicular skeleton [272], current approaches to the engineering of long bones may require some optimization to restore craniofacial bones.
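The surface-area-to-volume advantage of nanofibres follows directly from geometry; the worked relation below is a standard derivation added here for illustration.

```latex
% Surface-area-to-volume ratio of a long cylindrical fibre of radius r
% (end faces neglected):
\[
  \frac{S}{V} \;=\; \frac{2\pi r L}{\pi r^{2} L} \;=\; \frac{2}{r}
\]
% The ratio scales as 1/r, so reducing the fibre diameter from 10 um
% to 100 nm increases the surface area available for protein adsorption
% and cell attachment one-hundred-fold at constant material volume.
```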
Huang et al. used a nanofibrous self-assembling peptide amphiphile to support enamel formation [273]. Following injection of the peptide into a mouse incisor, the incisor was cultured in a kidney capsule for 8 weeks. Ordered crystalline hydroxyapatite was found in "pearls" near the incisor. These results suggest that self-assembling peptides could support the initial stages of tooth development.
Ongoing research has begun to address the implantation of an engineered mandibular construct into a functional load-bearing model. Since both mass transport and mechanical properties depend upon 3D scaffold architecture, computational design techniques are needed to predict and ultimately optimize a microstructure to achieve the desired balance [274]. Architectural 3D scaffold design with the desired anatomical shape can be produced from image-based [275,276] or computer-aided design (CAD)-based approaches [277]. Scaffolds from these design approaches can then be built directly or indirectly by Solid Free-Form Fabrication (SFF) [277,278,274], and have been applied in craniofacial reconstruction [279,280].
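To make the trade-off between mass transport (porosity) and mechanical properties concrete, the sketch below sweeps porosity through the generic Gibson-Ashby scaling law for open-cell foams. The solid-polymer modulus, prefactor, and stiffness target are assumed illustrative values; this is not the design method used in the cited work.

```python
# Illustrative porosity-stiffness screen using the Gibson-Ashby scaling
# law for open-cell foams: E = C * E_s * (1 - porosity)**2, with C ~ 1.

E_SOLID_GPA = 3.0   # assumed modulus of the solid polymer (PLLA-like)
C = 1.0             # Gibson-Ashby prefactor (order unity)

def effective_modulus_gpa(porosity: float) -> float:
    """Effective modulus of an open-cell scaffold at a given void fraction."""
    relative_density = 1.0 - porosity
    return C * E_SOLID_GPA * relative_density ** 2

if __name__ == "__main__":
    target_gpa = 0.1  # hypothetical stiffness floor for the application
    for porosity in (0.70, 0.80, 0.90, 0.95):
        e = effective_modulus_gpa(porosity)
        ok = "meets" if e >= target_gpa else "misses"
        print(f"porosity {porosity:.0%}: E ~ {e:.3f} GPa ({ok} target)")
```

For these assumed numbers, 70-80% porosity still meets the 0.1 GPa floor while 90-95% does not, which is exactly the kind of balance image-based or CAD-based design tools are used to optimize.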
The ultimate goal is to use integrated design/fabricated scaffolds for functional mandibular condyle and other craniofacial reconstruction. Integrated design/fabrication methods not only make functional reconstruction possible, but also permit the testing of design hypotheses concerning scaffold/cell carrier combinations, eventually leading to optimal reconstruction methods.
When human DPSCs were seeded onto a phase-separated nanofibrous PLLA scaffold, the nanofibers promoted their attachment and proliferation in vitro [281]. Electrospun nanofibrous polymer scaffolds have also supported adhesion of DPSCs [282,283]. Odontogenic differentiation of DPSCs was enhanced on the phase-separated nanofibrous scaffold compared with the solid-walled scaffold in vitro, shown by increased alkaline phosphatase (ALP) activity, dentin marker gene expression, and calcium deposition [281]. These nanofibers promoted collagen matrix deposition, dentin sialoprotein secretion, blood vessel in-growth, and mineralization in a subcutaneous implantation study of cultured DPSC-scaffold constructs in nude mice [281]. Similar to osteogenic differentiation, odontogenic differentiation was enhanced by nano-hydroxyapatite-containing electrospun nanofibers [284]. A mineral phase such as hydroxyapatite (HA) can also be incorporated into the nanofibrous scaffolds to form a composite bone matrix-mimicking scaffold [285]. Nano-HA has been incorporated into a 3D phase-separated nanofibrous gelatine scaffold [286].
Studies on PDLSCs have shown good attachment and proliferation of periodontal ligament cells on electrospun PLGA [287] and electrospun gelatin [288] scaffolds, as well as the osteogenic differentiation potential of periodontal ligament cells [287]. Furthermore, human periodontal ligament cells cultured on self-assembled peptide nanofibrous scaffolds promoted deposition of the main periodontal ligament ECM components, collagen type I and type III [159].
Strategies that involve using the scaffold itself to deliver cell growth factors such as vascular endothelial growth factor [289], platelet-derived growth factor [290], and recombinant human bone morphogenic protein [291,292] have been investigated to enhance and accelerate the wound healing process. A biomimetic approach to improving cell attachment has been the modification of biomaterials with the arginine-glycine-aspartic acid (RGD) peptide sequence [293]. Proteins with these sequences bind to cell surface receptors, facilitating the attachment of cells to substrata.
A range of materials have been suggested and employed as sustained delivery materials for growth factors [294]. These particularly include use of collagen matrices [124,295], gelatin pellets [296], alloplastic calcium phosphate materials such as β-TCP and HA [130], and resorbable polymers such as PLA [297]. All of these materials tend to result in rapid initial loss of growth factor together with a slower prolonged release of factor over several days. Furthermore, it is clear that a delivery system can also serve as a scaffold or guided tissue regeneration membrane, and that manipulation of the material's properties can alter drug release kinetics, which may in turn affect biological responses. In addition to the testing and development of rhPDGF for use in periodontal regeneration, much of the research into growth factor application for periodontal regeneration has focused on the use of BMPs.
Many biodegradable biomaterials have been used and fabricated into various shapes for tissue engineering. These scaffolds showed promising results, guiding tissue development. The processing of the chosen materials into appropriate three-dimensional scaffolds with desired shapes and pore structures will be critical. Thus, scaffolds play a pivotal role in tissue engineering and are the focus of research in regenerative medicine.
Conclusion
The last few decades have seen rapid advances in our understanding of the genetic control of cellular behaviour during embryonic tooth development, leading to the programming and deprogramming of genes responsible for development of the dental tissues. This has provided a foundation of knowledge, supported by robust mechanistic data, upon which we can build strategies for the tissue engineering and regeneration of teeth.
Most of the dental and craniofacial tissues are easily and readily accessible, thus presenting a convenient platform for biologists, bioengineers, scientists and clinicians to test and apply tissue-engineered prototypes. The impact of craniofacial tissue engineering extends beyond dental practice.
The search for the optimal cell source, scaffold design, and an appropriate multipotent or pluripotent stem cell for tissue engineering is an emerging concept. Certainly, many areas of stem cell research and their potential clinical applications are associated with controversies; therefore, it is important to address the ethical, legal, and social issues early. As more scientific knowledge is gained from stem cell research, some of the current ethical and technical concerns will hopefully be answered or resolved. Conversely, craniofacial tissue engineering could not have advanced to the current stage without the incorporation of interdisciplinary skill sets from stem cell biology, bioengineering, polymer chemistry, mechanical engineering, robotics, etc., with the ultimate goal of functional tissue restoration. Thus, craniofacial tissue engineering and regenerative dental medicine are integral components of regenerative medicine.
The scope of regenerative and tissue-engineering applications in dental science is immense, capable of bringing quantum advances in treatment strategies for patients. The need for high-quality research in the basic sciences is imperative to ensure development of novel clinical treatment modalities that will revolutionise patient health care in the future.
The field of regenerative medicine is here to stay, as exemplified by the nascent but exponential growth of examples of translation from bench to bedside. The current hurdles are by no means insurmountable, and it is therefore reasonable to look forward to a more mature and highly rewarding field.
Competing interests
The authors declare that they have no competing interests.
Publication history
Editor: Rolf Zehbe, Berlin Institute of Technology Berlin. | 2019-03-22T16:06:56.289Z | 2013-09-18T00:00:00.000 | {
"year": 2013,
"sha1": "21868d530687d9e3f3f7d57ce3d123634f2c6466",
"oa_license": "CCBY",
"oa_url": "http://www.hoajonline.com/journals/pdf/2050-1218-2-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0adf96ff7cc0052773cad7be21437c1a53687811",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
36021135 | pes2o/s2orc | v3-fos-license | Oral manifestations of HIV
The infection of the root canal system is considered to be a polymicrobial infection, consisting of both aerobic and anaerobic bacteria. Because of the complexity of the root canal infection, it is unlikely that any single antibiotic could result in effective sterilization of the canal. A combination of antibiotic drugs (metronidazole, ciprofloxacin, and minocycline) is used to eliminate target bacteria, which are possible sources of endodontic lesions. Three case reports describe the nonsurgical endodontic treatment of teeth with large periradicular lesions. A triple antibiotic paste was used for 3 months. After 3 months, the teeth were asymptomatic and were obturated. The follow-up radiographs of all three cases showed progressive healing of the periradicular lesions. The results of these cases show that, when the most commonly used medicaments fail to eliminate the symptoms, a triple antibiotic paste can be used clinically in the treatment of teeth with large periradicular lesions.
There is no particular oral lesion associated exclusively with HIV/AIDS, but certain manifestations, such as oral candidiasis and oral hairy leukoplakia (OHL), occur very frequently, are considered AIDS-defining diseases, and have been included in the CDC clinical classification of HIV under category B. [3]
Fungal Infections
Candidiasis: Oral or pharyngeal candidiasis is the commonest fungal infection observed as the initial manifestation of symptomatic HIV infection. [4][5][6][7] Many patients can have esophageal candidiasis as well. It is usually observed at CD4 counts of less than 300/µl. The commonest species of candida involved is Candida albicans, although non-albicans species have also been reported. There are four frequently observed forms of oral candidiasis: erythematous candidiasis, pseudomembranous candidiasis, angular cheilitis, and hyperplastic or chronic candidiasis: [8]
1. Erythematous candidiasis presents as a red, flat, subtle lesion either on the dorsal surface of the tongue and/or the hard/soft palates. Patients complain of a burning sensation in the mouth, more so while eating salty and spicy food. Diagnosis is made on the basis of clinical examination; a potassium hydroxide preparation which demonstrates the fungal hyphae can be used for confirmation.
2. Pseudomembranous candidiasis appears as creamy white curd-like plaques on the buccal mucosa, tongue, and other oral mucosal surfaces that can be wiped away, leaving a red or bleeding surface [Figure 1].
3. Angular cheilitis is erythema and/or fissuring and cracks of the corners of the mouth.
Topical amphotericin B can also be used in the treatment of resistant candidiasis and can be prepared by dissolving 50 mg in 500 ml of sterile saline (0.1 mg/ml). Clotrimazole 1% cream, miconazole or ketoconazole 2% cream, and nystatin ointment are useful medications for angular cheilitis and for application to a removable denture base when there is candidal infection involving the underlying mucosa.
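The 0.1 mg/ml figure follows directly from the stated preparation (50 mg dissolved in 500 ml). The tiny helper below merely makes that dilution arithmetic explicit; the function name is hypothetical and this is an illustration, not clinical software.

```python
def rinse_concentration_mg_per_ml(drug_mg: float, final_volume_ml: float) -> float:
    """Concentration of a simple dilution: drug mass over final volume."""
    return drug_mg / final_volume_ml

# Topical amphotericin B rinse as described in the text:
# 50 mg dissolved in 500 ml sterile saline.
print(rinse_concentration_mg_per_ml(50, 500))  # 0.1 (mg/ml)
```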
Systemic treatment of oral candidiasis involves the use of imidazole (ketoconazole) and triazole (fluconazole and itraconazole) antifungal medications. Fluconazole is given at a dose of 100-200 mg/day. The duration of treatment with oral imidazoles is usually around 7-10 days, but in cases of suspected esophageal involvement the duration can be extended to 21 days. As per the recent guidelines, there is no role for candidiasis prophylaxis in HIV patients.
Histoplasmosis: Histoplasmosis is a granulomatous fungal disease caused by Histoplasma capsulatum. The clinical presentation ranges from an asymptomatic or mild lung infection to an acute or chronic disseminated form. Oral histoplasmosis appears as chronic ulcerated areas located on the dorsum of the tongue, palate, floor of the mouth, and vestibular mucosa. Focal or multiple sites can be involved. In AIDS patients, histoplasmosis is rarely curable, but it can be controlled with long-term suppressive therapy consisting of the administration of amphotericin B and ketoconazole.
Cryptococcosis: Oral manifestations are quite unusual and only two cases have been reported in the literature. [9,10] The lesions consist of ulcerations of the oral mucosa, but the
Viral Infections
Oral hairy leukoplakia: These lesions are usually seen on the lateral surface of the tongue, but may extend to the dorsal and ventral surfaces [Figure 2]. Lesions may be variably sized and may appear as vertical white striations, corrugations, or as flat plaques, or raised, shaggy plaques with hair-like keratin projections. In most cases, OHL is bilateral and asymptomatic. When it leads to discomfort, it is usually associated with superimposed candidal infection. OHL has been shown to be associated with a localized Epstein-Barr virus (EBV) infection and occurs most commonly in individuals whose CD4 lymphocyte count is less than 200/µl. Histological investigations reveal typical epithelial hyperplasia suggestive of EBV infection. This condition usually does not require treatment, but use of oral acyclovir, topical podophyllum resin, retinoids, and surgical removal have all been reported as successful treatments. [11]
Herpes simplex and varicella zoster virus infections: Herpes simplex virus (HSV) is responsible for both primary and recurrent infections of the oral mucosa. These infections are acquired in childhood; after the initial pustular lesions resolve, the virus remains dormant, but in later stages of immunosuppression it can be reactivated and can lead to various manifestations. Oral manifestations, represented by diffuse mucosal ulcerations, are accompanied by fever, malaise, and cervical lymphadenopathy. Ulcerations that follow the rupture of vesicles are painful and may persist for several weeks. Recurrent HSV usually appears in keratinizing oral mucosa (i.e., palate, dorsum of tongue, and gingiva) as ulcerations, but in most HIV-seropositive patients this rule is not followed. In these patients, the lesions may show unusual clinical aspects and persist for many weeks. Contact with the varicella zoster virus (VZV) may result in varicella (chicken pox) as a primary infection and herpes zoster (shingles) as a reactivated infection. In HIV infection, herpes zoster frequently presents with early cranial nerve involvement and carries a poor prognostic significance. There may be involvement of multiple dermatomes, and these lesions might get secondarily infected as well. The lesions are usually associated with severe postherpetic neuralgias.
Human papilloma virus: In some patients with HIV infection, human papilloma virus (HPV) causes a focal epithelial and connective tissue hyperplasia, forming an oral wart.
In HIV-infected patients, oral HPV-related lesions have a papillomatous appearance, either pedunculated or sessile, and are mainly located on the palate, buccal mucosa, and labial commissure. The most common genotypes found in the mouth of patients with HIV infection are 2, 6, 11, 13, 16, and 32. Surgical removal, with or without intraoperative irrigation with podophyllum resin, is the treatment of choice. [12]
Molluscum contagiosum: Molluscum contagiosum is caused by an unclassified DNA virus of the poxvirus family. Lesions appear as single or multiple papules on the skin of the buttocks, back, face, and arms. Molluscum contagiosum usually affects children and young adults and is spread by direct and indirect contact. The typical lesion is an umbilicated papule that may itch, leading to autoinoculation. Lesions may persist for years and eventually regress spontaneously. The occurrence of disseminated molluscum contagiosum has been reported in HIV-infected patients. These lesions usually subside with immune reconstitution when patients are started on HAART.
Bacterial Infections
The most common oral lesions associated with bacterial infection are linear erythematous gingivitis, necrotizing ulcerative periodontitis, and, much less commonly, bacillary epithelioid angiomatosis and syphilis. In the case of the periodontal infections, the bacterial flora is no different from that of a healthy individual with periodontal disease. Thus, the clinical lesion is a manifestation of the altered immune response to the pathogens.
Linear erythematous gingivitis: This entity appears as a band of marginal gingival erythema, often with petechiae. It is typically associated with no symptoms or only mild gingival bleeding and mild pain. Histological examination fails to reveal any significant inflammatory response, suggesting that the lesions represent an incomplete inflammatory response, principally with only hyperemia present. Oral rinsing with chlorhexidine gluconate often reduces or eliminates the erythema and may require prophylactic use to avoid recurrence.
Necrotizing ulcerative periodontitis (NUP):
This periodontal lesion is characterized by generalized deep osseous pain, significant erythema that is often associated with spontaneous bleeding, and rapidly progressive destruction of the periodontal attachment and bone. The destruction is progressive and can result in loss of the entire alveolar process in the involved area. It is a very painful lesion and can adversely affect the oral intake of food, resulting in significant and rapid weight loss. Patients also have severe halitosis. Because the periodontal microflora is no different from that seen in healthy patients, the lesion probably results from the altered immune response in HIV infection. More than 95% of patients with NUP have a CD4 lymphocyte count of less than 200/mm3. Treatment consists of rinsing twice daily with chlorhexidine gluconate 0.12%, metronidazole (250 mg orally four times daily for 10 days), and periodontal debridement, which is done after antibiotic therapy has been initiated. [13]
Bacillary
Neoplasms
Kaposi's sarcoma: It is the most common intraoral malignancy associated with HIV infection. The lesion may appear as a red-purple macule, an ulcer, or as a nodule or mass. Intraoral KS occurs on the heavily keratinized mucosa, the palate being the site in more than 90% of reported cases. KS is especially common among homosexual and bisexual males and is rarely found in HIV-infected women. Human herpes virus 8 (HHV8) has been demonstrated to be an important cofactor in the development of KS. A histological examination is warranted for the definitive diagnosis of KS. There is no cure for KS. Therapy for intraoral KS should be instituted at the earliest sign of the lesion, the goal being local control of the size and number of lesions. When only a few lesions exist and the lesions are small (<1 cm), intralesional chemotherapy with vinblastine sulfate or sclerotherapy with 3% sodium tetradecyl sulfate is usually effective. Radiation therapy (800-2,000 cGy) is required for larger or multiple lesions; stomatitis and glossitis are common side effects of radiation. Although this entity has been reported in the western literature, its incidence in Indian patients is quite low, with only nine cases reported to date. [14]
Non-Hodgkin's lymphoma: NHL is the most common lymphoma associated with HIV infection and is usually seen in late stages with CD4 lymphocyte counts of less than 100/mm3. It appears as a rapidly enlarging mass, less commonly as an ulcer or plaque, and most commonly on the palate or gingivae. A histological examination is essential for diagnosis and staging. Prognosis is poor, with a mean survival time of less than 1 year, despite treatment with multidrug chemotherapy.
Immune-Mediated Oral Lesions
In HIV there is immune suppression of cell-mediated immunity as the disease progresses but at the same time there is abnormal activation of B-cell immunity. These disorders of the immune system also lead to various oral manifestations.
Aphthous ulcers: They are the most common immune-mediated HIV-related oral disorder, with a prevalence of approximately 2-3%. These ulcers are either large solitary or multiple, chronic, deep, and painful, often lasting much longer than in the seronegative population, and are less responsive to therapy. Treatment requires the use of a potent topical steroid such as clobetasol when the lesions are accessible, or dexamethasone oral rinse when they are not. Systemic glucocorticosteroid therapy may be required (prednisone 1 mg/kg) in the case of large multiple ulcers and those not responding to topical preparations. Alternative therapies such as dapsone 50-100 mg daily and thalidomide 200 mg daily for 4 weeks should be considered in recalcitrant cases. When immunosuppressant drugs are used, in order to prevent superadded fungal or bacterial infection, concurrent antifungal medications such as fluconazole and itraconazole and antibacterial medications such as chlorhexidine gluconate oral rinse may be required.
Necrotizing stomatitis: It is an acute, painful ulceration which often exposes the underlying bone and leads to considerable tissue destruction. This lesion may be a variant of major aphthous ulceration, but occurs in areas overlying the bone and is associated with severe immune deterioration. These lesions can also occur in edentulous areas. Like in major aphthous ulceration, systemic corticosteroid medication or topical steroid rinse is the treatment of choice.
Xerostomia: Xerostomia is common in HIV disease, most often as a side effect of antiviral medications or other medications commonly prescribed for patients with HIV infection, such as anxiolytics, antifungals, etc. The oral dryness is a significant risk factor for caries and can lead to rapid dental deterioration. Xerostomia also can lead to oral candidiasis, mucosal injury, and dysphagia, and is often associated with pain and reduced oral intake of food. In patients who have residual salivary gland function, as determined by gustatory challenge, oral pilocarpine often provides improved salivary flow and consistency. Oral hygiene should be scrupulously maintained, along with the use of dental floss.
Parotid Gland Disease
HIV infection is associated with parotid gland disease. There can be gland enlargement and diminished flow of secretions. Histologically, there may be lymphoepithelial infiltration and benign cyst formation. The enlargement typically involves the tail of the parotid gland or, less commonly, the submandibular gland, and it may present uni- or bilaterally with periods of increased or decreased size. The enlargement can be mistaken for a malignancy, but in such cases needle aspiration yielding yellow secretions helps in making the diagnosis, and further biopsy is unnecessary. Occasional swelling can be managed simply by repeated aspiration, and rarely is radical removal of the gland necessary. The pathophysiological mechanism is not known, though cytomegalovirus has been suggested to play a role.
Oral Manifestations As Adverse Effects of Antiretroviral Therapy
With the widespread availability and usage of antiretroviral therapy for the management of HIV, the clinical picture has now shown a paradigm shift. Manifestations due to adverse effects of HAART are also observed along with the above-mentioned features of immunosuppression; thus one should be aware of them as well. Oral hyperpigmentation can be observed if a patient is on zidovudine.
Erythema multiforme is a known side effect of NNRTIs. Xerostomia is also observed in patients on lamivudine, didanosine, indinavir and ritonavir. Lipodystrophy with loss of subcutaneous fat has been reported extensively in patients on stavudine. Other oral effects such as paresthesias, lip edema, cheilitis, and taste disturbances have been observed in patients on protease inhibitors. [15] The above-mentioned list is not the complete panorama of manifestations which can be observed in an HIV patient but only an illustration of important lesions. It is thus essential that oral healthcare professionals recognize the hallmarks of the illness and provide timely management for better survival of these patients. | 2018-04-03T05:29:06.148Z | 2010-01-01T00:00:00.000 | {
"year": 2010,
"sha1": "f0e9c488baa2411c82e4285104518cbe6513e58c",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/0976-237x.62510",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "11af41b4366b11bcd800831551cc83d021164205",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244715683 | pes2o/s2orc | v3-fos-license | Mural Cell SRF Controls Pericyte Migration, Vessel Patterning and Blood Flow
Background: Pericytes and vascular smooth muscle cells, collectively known as mural cells (MCs), are recruited through PDGFB (platelet-derived growth factor B)-PDGFRB (platelet-derived growth factor receptor beta) signaling. MCs are essential for vascular integrity, and their loss has been associated with numerous diseases. Most of this knowledge is based on studies in which MCs are insufficiently recruited or fully absent upon inducible ablation. In contrast, little is known about the physiological consequences that result from impairment of specific MC functions. Here, we characterize the role of the transcription factor SRF (serum response factor) in MCs and study its function in developmental and pathological contexts. Methods: We generated a mouse model of MC-specific inducible Srf gene deletion and studied its consequences during retinal angiogenesis using RNA-sequencing, immunohistology, in vivo live imaging, and in vitro techniques. Results: By postnatal day 6, pericytes lacking SRF were morphologically abnormal and failed to properly comigrate with angiogenic sprouts. As a consequence, pericyte-deficient vessels at the retinal sprouting front became dilated and leaky. By postnatal day 12, the vascular smooth muscle cells had also lost SRF, which coincided with the formation of pathological arteriovenous shunts. Mechanistically, we show that PDGFB-dependent SRF activation is mediated via MRTF (myocardin-related transcription factor) cofactors. We further show that MRTF-SRF signaling promotes pathological pericyte activation during ischemic retinopathy. RNA-sequencing, immunohistology, in vivo live imaging, and in vitro experiments demonstrated that SRF regulates expression of contractile SMC proteins essential to maintain the vascular tone. Conclusions: SRF is crucial for distinct functions in pericytes and vascular smooth muscle cells. SRF directs pericyte migration downstream of PDGFRB signaling and mediates pathological pericyte activation during ischemic retinopathy. In vascular smooth muscle cells, SRF is essential for expression of the contractile machinery, and its deletion triggers formation of arteriovenous shunts. These essential roles in physiological and pathological contexts provide a rationale for novel therapeutic approaches through targeting SRF activity in MCs.
temporarily adopt a motile phenotype and extend numerous filopodia. Tip cells spearhead the sprouting vessels and secrete PDGFB (platelet-derived growth factor B), thereby attracting pericytes, which express the corresponding PDGFRB (PDGF receptor beta), to comigrate along the nascent vessel sprout. Pericyte recruitment is generally considered to aid vessel stabilization, and numerous studies have demonstrated the importance of pericytes for vascular function. 4 In the brain, pericytes have been shown to express high levels of transporters and are considered to play a crucial role in maintaining homeostasis at the neurovascular unit. 3,5 In accordance, failure to recruit pericytes results in impaired formation of the blood-brain barrier and the blood-retina barrier. 3 Loss of pericyte coverage has been identified as an early event in diabetic retinopathy, which is associated with breakdown of the blood-retina barrier. 6 Most of our knowledge about pericyte function comes from studies in which pericyte recruitment has been compromised, or in which pericytes have been ablated. Only a few studies have investigated the physiological consequences that arise if specific aspects of pericyte biology or function are inactivated. 7
Novelty and Significance
What Is Known?
• Pericytes and vascular smooth muscle cells (vSMCs), collectively known as mural cells, cover endothelial cells, which form the inner lining of blood vessels.
• Pericytes are essential to maintain endothelial barrier function, and their loss is associated with numerous diseases.
• vSMCs regulate blood flow, but it is not known to what extent changes in blood flow influence blood vessel patterning.
What New Information Does This Article Contribute?
• PDGFB (platelet-derived growth factor B)-PDGFRB (platelet-derived growth factor receptor beta) signaling activates the SRF (serum response factor) transcription factor via its cofactor MRTF (myocardin-related transcription factor) to promote pericyte migration.
• Srf deletion in mural cells results in altered pericyte and vSMC morphology, defects in the actin cytoskeleton, and reduced pericyte migration.
• Blockade of SRF signaling in pericytes under ischemic conditions mitigates pathological angiogenesis, making SRF a potential drug target in ischemic retinopathies.
• In vSMCs, SRF controls the expression of contractile genes, and its deletion leads to severe blood vessel patterning defects and the formation of arteriovenous shunts, which, in turn, cause a redirection of retinal blood flow that leaves part of the capillary network poorly perfused.
Pericytes and vSMCs, collectively called mural cells, are found in close contact with endothelial cells in all blood vessels. Pericytes line capillaries and venules, whereas vSMCs encircle arteries, arterioles, and veins. During angiogenesis, endothelial cells actively recruit pericytes by secretion of the growth factor PDGFB. Recruited pericytes provide stabilization to the immature vasculature. vSMCs express a battery of contractile proteins that maintain and dynamically regulate vascular tone and blood flow. By knocking out the transcription factor SRF specifically in mural cells, we found that pericytes develop a compromised capacity to migrate during angiogenesis. Under ischemic conditions, mural SRF is overactive, causing pathological behavior of pericytes. Inhibition of SRF signaling might therefore be a potential treatment for ischemic retinopathies. vSMCs lacking SRF show a different phenotype: they lose expression of contractile proteins, causing failure of vascular tone regulation. Lack of vascular tone leads to severe patterning defects in the developing retinal vasculature, ultimately forcing the formation of arteriovenous shunts. Those shunts redirect blood flow, leaving large parts of the capillary network undersupplied. Our work unravels distinct functions for SRF in pericytes and vSMCs and illustrates the critical importance of both pericytes and vSMCs for vascular patterning.
The study of pericyte biology is further complicated by the fact that identification and characterization of pericytes is challenging due to the lack of unambiguous markers. Commonly used markers to identify pericytes, such as PDGFRB, NG2 (neural/glial antigen 2), DES (desmin), or CD13, are also expressed by other cell types, and therefore accurate identification of pericytes requires, in addition, attention to their localization and morphology. 11 In contrast to pericytes, vSMCs are highly contractile and express a specific set of smooth muscle genes (SMG). Through a basal level of constriction, vSMCs create the vascular tone, which is essential to direct blood flow into capillaries. 5 Vascular tone differs between organs and is dependent on the balance of competing vasoconstrictors and vasodilators. 12 Dysregulation of vSMC contractility is associated with hypertension, aortic stiffness, and chronic venous disease. [13][14][15] SRF (serum response factor) is a conserved, ubiquitously expressed transcription factor that belongs to the MADS-box (Minichromosome Maintenance Factor 1, Agamous, Deficiens, Serum Response Factor) protein family and is known to regulate motile functions in a variety of cell types. [16][17][18] SRF is activated either by Rho/actin or Ras/MAPK (rat sarcoma/mitogen-activated protein kinase) signaling. 17,18 These pathways involve different cofactor proteins, either MRTF (myocardin-related transcription factor) or TCFs (ternary complex factors), and result in the expression of distinct sets of target genes. SRF has been reported to drive, among others, the expression of the SMGs alpha-smooth muscle actin (Acta2, αSMA) and transgelin (Tagln, SM22α [smooth muscle 22 alpha]) in visceral SMCs, 16,19 but its role in pericytes and SMCs of the vasculature has not been thoroughly investigated. To address this question, we used Pdgfrb-CreER T2 mice, which allow deletion of SRF in pericytes and SMCs of the vasculature 7,20,21 and, importantly, prevent the lethality associated with SRF deletion in visceral SMCs. 19,22 We demonstrate that SRF is crucial for distinct functional roles in pericytes and vSMCs, respectively. In pericytes, SRF is essential for cell migration downstream of PDGFRB signaling and mediates the pathological activation of pericytes during ischemic retinopathy. In vSMCs, SRF is essential for the expression of SMC genes, and its deletion triggers the formation of arteriovenous malformations (AVMs).
Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request. The bulk raw sequencing data and the processed counts table of our study are available at the NCBI Gene Expression Omnibus (GSE205491). A detailed description of the methods is provided as Supplemental Material.
Conditional MC-Specific Deletion of Srf
To address the role of SRF in MCs in vivo, we crossed floxed Srf flex1/flex1 mice (meaning floxed exon 1; also referred to as Srf flox/flox mice) 20 with Pdgfrb-CreER T2 mice, which have been shown to efficiently target MCs. 7,21 We induced MC-specific deletion of Srf (hereafter referred to as Srf iMCKO ) in newborn mice via tamoxifen administration at postnatal days (P) 1 to 3 and focused our analysis on the retinal vasculature. We chose 3 distinct time points (P6, P12, and after 4-8 weeks) for analysis to investigate the role of SRF during vascular sprouting, remodeling, and maturity of the retinal vasculature ( Figure 1A and 1B). To monitor recombination specificity and efficiency, we further introduced the Rosa26 mTmG reporter 23 into the Pdgfrb-CreER T2 ::Srf flox/flox mouse line. The Rosa26 mTmG reporter expresses membrane-targeted GFP in cells in which Cre-mediated recombination has occurred and revealed that, at P6, around 80% of MCs had recombined the Rosa26 mTmG reporter (Figure S1A through S1C).
Polymerase chain reaction (PCR) analysis of whole retinal lysates further confirmed the presence of a 380 bp long Srf-exon 1 PCR product (Srf-lx) in Srf iMCKO mice, indicating successful recombination of the Srf-flex1 allele. In contrast, in control mice, only a 1.34 kbp long PCR product, corresponding to the nonrecombined LoxP (Locus of Crossover in P1)-flanked gene sequence of the Srf-flex1 allele, was amplified ( Figure S1D).
Additional gene expression analysis of retinal MC populations sorted via fluorescence-activated cell sorting (FACS) showed efficient downregulation (over 99%) of Srf expression ( Figure S1E) in Srf iMCKO mice. Taken together, our conditional gene deletion approach allowed for successful and reliable deletion of Srf in MCs.
SRF Function in Pericytes Is Crucial for Normal Development of the Retinal Vasculature
To investigate the effects of mural Srf deletion on retinal angiogenesis, we analyzed Srf iMCKO and control retinas at P6. At this stage, the retinal vasculature is still actively sprouting and capillaries at the sprouting front become invested by pericytes. At the same time, the more proximal vascular plexus is remodeling into a hierarchical network with arteries, arterioles, venules, and veins. In Srf iMCKO retinas, radial vessel outgrowth was delayed and the vascularized area and number of capillary branches were decreased ( Figure 1C through 1E). Increased levels of extravasated erythrocytes also suggested reduced barrier properties of the vasculature in Srf iMCKO retinas ( Figure 1F and 1G). We further observed a decrease in the number of tip cell filopodia and a severe dilation of blood vessels at the sprouting front ( Figure 1H; Figure S1H). Besides the abovementioned effects on the angiogenic sprouting front, the loss of Srf in MCs also affected remodeling of the proximal vascular plexus. We observed crossing of arteries and veins, which represents a microvascular abnormality termed nicking and is also observed in the human retina 24 ( Figure S1F and S1G). We also noted abundant deposits of the basement membrane protein collagen IV without attached ECs, so-called empty matrix sleeves ( Figure 1F), which indicate excessive vascular pruning. 25 These observations argue for defective remodeling and instability of nascent vessels in the proximal vascular plexus ( Figure 1F and 1G).
SRF Mediates Pericyte Migration Downstream of PDGFB via Cytoskeletal Rearrangements
Since the vascular defects that we observed shared similarities with mouse models of postnatal pericyte depletion, 7,8,26 we addressed pericyte recruitment and coverage in Srf iMCKO retinas. Immunostainings using antibodies against the pericyte markers NG2, DES, and PDGFRB 2 revealed a 50.6% reduction of pericyte coverage at the angiogenic sprouting front (Figure 2A and 2B; Figure S2A and S2B). Interestingly, vessels at the central plexus only showed a 25.5% reduction of pericyte coverage, suggesting that pericyte migration along newly formed blood vessels was especially affected. In addition, we observed a 25% reduction in pericyte proliferation at the angiogenic front but not at the capillary plexus in Srf iMCKO retinas ( Figure S2C and S2D). Immunostainings for cleaved-caspase3 did not reveal any differences in cell survival ( Figure S2E and S2F).
To specifically address if pericyte migration is compromised upon Srf deletion, we isolated primary pericytes from the brain (pBPCs) of Srf-flex1 and control mice and performed migration assays. To delete Srf in pBPC cultures, we exposed these cells to Tat-Cre, a membrane-permeable version of the Cre recombinase. Subsequent quantitative PCR analysis showed that this approach led to an over 99% reduction of Srf mRNA, and Western blot analysis confirmed the loss of the full-length SRF protein ( Figure S2G and S2H). In Srf-KO pBPCs, only a truncated and nonfunctional version of the SRF protein is detected at ≈36 kDa. 19 In scratch wound assays, Srf-deleted pBPC cultures (hereafter referred to as Srf-KO) showed a significant reduction in collective cell migration, as well as in the speed of individual migrating cells ( Figure 2C and 2D, Video S1). Likewise, we also observed reduced migration of Srf-KO cells in a transwell assay ( Figure S2I and S2J).
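For orientation, per-cell migration speed in time-lapse assays of this kind is typically computed from tracked centroid positions. The following is a minimal sketch of such a computation, assuming (x, y) tracks in micrometers; it is illustrative only and not the authors' actual tracking pipeline.

```python
import numpy as np

def mean_track_speed(track_xy_um: np.ndarray, dt_min: float) -> float:
    """Mean speed (um/min) of one cell from time-lapse centroid positions.

    track_xy_um: array of shape (T, 2) with x/y positions in micrometers.
    dt_min: time interval between consecutive frames in minutes.
    """
    steps = np.diff(track_xy_um, axis=0)          # per-frame displacement vectors
    step_lengths = np.linalg.norm(steps, axis=1)  # micrometers moved per frame
    return step_lengths.sum() / (dt_min * len(step_lengths))

# Toy track: a cell taking random ~1 um steps every 10 minutes (hypothetical data)
rng = np.random.default_rng(0)
track = np.cumsum(rng.normal(0, 1.0, size=(37, 2)), axis=0)
print(f"{mean_track_speed(track, dt_min=10.0):.3f} um/min")
```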
Since SRF is known to regulate cellular motility through transcriptional control of genes encoding regulators of actin dynamics in other cell types (reviewed in Olson & Nordheim, 2010), 18 we used the membrane-permeable F-actin staining dye silicon-rhodamine (SiR)-Actin 27 to visualize actin dynamics in living cells and performed a series of time-lapse experiments. These experiments revealed a substantial reduction of F-actin in Srf-KO cells ( Figure 2E and 2F). In accordance, we also observed a significant reduction of beta-actin gene (Actb) expression in Srf-KO cells ( Figure 2F). We further tested to what degree SRF-mediated cytoskeletal rearrangements are required downstream of PDGFB for pericyte migration. We therefore treated starved pBPCs with PDGFB and live-imaged the changes in actin dynamics using SiR-Actin. Upon PDGFB stimulation, control pBPCs showed intensified actin dynamics and increased cell motility. In contrast, Srf-KO cells displayed almost no reaction to PDGFB stimulation ( Figure S2K; Video S2). Taken together, these results indicate that the observed migration defects of Srf iMCKO pericytes are caused by the inability of the actin cytoskeleton to respond to the natural PDGFB gradient originating from endothelial tip cells.
To investigate pericyte morphology in vivo, we crossed the Rosa26 mTmG reporter into the Srf iMCKO background and subsequently analyzed labeled pericytes in Pdgfrb-CreER T2 ::Rosa26 mTmG ::Srf-flex1/Srf-flex1 (Srf iMCKO ) and Pdgfrb-CreER T2 ::Rosa26 mTmG ::Srf-flex1/wt littermate control retinas. In the absence of tamoxifen, the Rosa26 mTmG reporter ubiquitously expresses tdTomato. However, upon tamoxifen induction, Cre-mediated recombination leads to expression of membrane tagged eGFP (enhanced green fluorescent protein), which reliably outlines cell morphology ( Figure 2G). We found that control pericytes at the capillary plexus attached tightly to the endothelium and extended numerous thin protrusions that connected pericytes with each other. In contrast, Srf iMCKO pericytes displayed an overall less ramified morphology and only formed short and stubby protrusions. At the angiogenic front, morphological changes were even more pronounced. We noticed that control pericytes often extended filopodia, which were oriented towards the angiogenic sprouting front, suggesting that pericytes might use filopodia, similarly to ECs, for migration and to sense the PDGFB gradient ( Figure 2G). In contrast, SRF-deficient pericytes had entirely lost the ability to form filopodia, appeared partially detached from ECs, and had adopted an abnormal cell morphology ( Figure 2G). The inability of Srf iMCKO pericytes to form filopodia is in line with the actin remodeling defects that we observed in our in vitro experiments and consistent with SRF function in ECs where it also regulates filopodia formation. 28,29 Taken together, our results suggest a crucial role for SRF in pericyte migration via regulation of actin dynamics.
PDGFB Signaling Activates SRF via MRTF Cofactors
The migration defects of SRF-depleted pericytes implied a direct role for SRF downstream of PDGFB signaling in pericyte migration. Most SRF-mediated motility responses have been reported to be regulated via RhoGTPase signaling. RhoGTPase activity stimulates F-actin polymerization, which depletes the cellular G-actin pool. Cytosolic G-actin can bind to MRTFA and MRTFB (myocardin-related transcription factors A and B), thereby inhibiting nuclear translocation of MRTFs. Increased F-actin polymerization diminishes cytosolic G-actin levels, thereby enabling nuclear translocation of MRTFA and MRTFB and subsequent activation of SRF-directed transcription. 17 To test if PDGFB also leads to SRF activation via MRTF cofactors, we took advantage of a 3T3 cell line expressing GFP-tagged MRTFA protein. 30 To observe potential MRTFA-GFP nuclear translocation upon PDGFB stimulation, we starved these cells overnight to cause MRTFA-GFP to predominantly localize in the cytoplasm ( Figure 3A). We subsequently imaged the starved cells using time-lapse microscopy and, after 15 minutes, stimulated with PDGFB (Video S3). Strikingly, PDGFB stimulation led to a 3.5-fold increase in MRTFA-GFP nuclear translocation already within 5 minutes (T=20 minutes, Figure 3A and 3B), which remained at high levels for another 5 minutes (T=25 minutes) before gradually shifting back to the cytoplasm. Thirty-five minutes after stimulation (T=50 minutes, Figure 3A and 3B), the nuclear MRTFA-GFP signal was reduced to 1.5-fold compared to prestimulation conditions and remained at this level for the rest of the experiment.
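Nuclear translocation of a GFP-tagged factor such as MRTFA-GFP is commonly quantified per cell as a nuclear-to-cytoplasmic intensity ratio, with the fold change over the prestimulation baseline reported (as in the 3.5-fold increase above). The sketch below shows that metric in its simplest form; the image, masks, and values are hypothetical, and the study's exact image-analysis workflow may differ.

```python
import numpy as np

def nuclear_to_cytoplasmic_ratio(gfp: np.ndarray,
                                 nuc_mask: np.ndarray,
                                 cyto_mask: np.ndarray) -> float:
    """Mean nuclear GFP intensity divided by mean cytoplasmic intensity
    for one segmented cell. Fold change of this ratio over the
    prestimulation baseline then quantifies translocation."""
    return float(gfp[nuc_mask].mean() / gfp[cyto_mask].mean())

# Toy 2D image and masks, purely illustrative
gfp = np.ones((64, 64)); gfp[20:40, 20:40] = 3.0   # brighter "nucleus"
nuc = np.zeros_like(gfp, dtype=bool); nuc[20:40, 20:40] = True
cyto = ~nuc
print(nuclear_to_cytoplasmic_ratio(gfp, nuc, cyto))  # 3.0
```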
To investigate if PDGFB-induced nuclear MRTF translocation results in activation of SRF target genes, we performed luciferase-based reporter assays with promoter sequences containing functional (TSM) 2 or mutated (TMM) 2 SRF binding sites ( Figure 3C). 28 PDGFB stimulation of 3T3 cells transiently transfected with the (TSM) 2 reporter significantly increased basal luciferase activity ( Figure 3D). Addition of the MRTF inhibitor CCG-203971 abrogated the PDGFB-induced luciferase activation, demonstrating that PDGFB activates SRF-mediated transcription primarily via MRTF cofactors. The (TMM) 2 construct served as a negative control in those experiments, as SRF cannot bind to the mutated promoter sequence and thus is unable to activate the transcription of target genes. In accordance, PDGFB stimulation did not increase luciferase activity in 3T3 cells transiently transfected with the (TMM) 2 construct. In addition, quantitative PCR analysis of PDGFB-stimulated pBPCs further indicated a strong induction of the MRTF-SRF target genes Actb, Acta2, and Tagln ( Figure 3E). Interestingly, we did not observe induction of the immediate early response genes Egr1 and c-Fos, which are dependent on TCF-mediated SRF activation. Taken together, our results strongly suggest that PDGFB-dependent SRF activation and transcription of target genes are mediated via MRTF cofactors.
SRF Is a Key Determinant of Pathological Pericyte Activation
Recent work by Dubrac et al 31 has shown that, under certain pathological conditions, pericytes can acquire disease-promoting properties. Using the oxygen-induced retinopathy (OIR) mouse model, it was shown that excessive PDGFB-PDGFRB signaling leads to pathological activation of pericytes, which promotes the formation of neovascular tufts (NVTs). NVTs are clustered capillary loops which show excessive EC proliferation and extravasation of red blood cells (RBCs). During OIR, pericytes undergo a pathological switch accompanied by upregulation of SMGs, which is characterized by strong expression of αSMA. Because our results suggested that SRF regulates both pericyte migration and expression of αSMA, we hypothesized that SRF might be a driving force of pathological pericyte activation under ischemic conditions. To address this hypothesis, we performed OIR experiments and kept P7 Srf-flex1::Pdgfrb-CreER T2 and control pups for 5 days under hyperoxic conditions, which led to vaso-obliteration in the primary vascular plexus ( Figure 4A). We then returned the mice to normal oxygen conditions (21% O 2 ), which provoked a strong hypoxic response and resulting pathological revascularization and NVT formation. During the revascularization phase, we applied tamoxifen to the pups (P12-P14) to induce Srf iMCKO and analyzed the impact on NVT development at P17.
Stainings with the endothelial marker CD31 revealed a significant reduction of NVT development by 38.5% (±9.5%; P=0.0025; Figure 4B and 4C) and an improved revascularization, evident as a reduction of the avascular area by 35.2% (±21.2%; P=0.00053), in Srf iMCKO OIR retinas. Costainings for NG2 and desmin confirmed the presence of pericytes on NVTs both in control and Srf iMCKO OIR retinas ( Figure 4D; Figure S3A and S3B). Although in control retinas pericytes on NVTs displayed the characteristic upregulation of αSMA (Figure 4D), the αSMA staining in Srf iMCKO OIR retinas was reduced by 92% ( Figure 4D and 4E). We further confirmed by quantitative PCR analysis that mRNA expression of Acta2, the gene encoding αSMA, and of other SMC genes was almost completely lost in Srf iMCKO OIR retinal lysates ( Figure 4F; Figure S3D), which strongly suggests that pathological activation of pericytes was diminished. We did not observe a compensatory upregulation of the muscle genes Actc1 or Actg1 ( Figure S3C). In contrast, mRNA levels of Vegfa, the main angiogenic driver, were significantly reduced ( Figure 4F). This, in turn, could explain the reduction in NVT formation and decrease in avascular area.
Taken together, our results show that SRF is a necessary player in the pathogenic activation of pericytes during OIR and that αSMA upregulation, a common finding in several vascular conditions, relies on the activation of this transcription factor. The relevance of SRF for the phenotypic switch of pericytes during OIR makes it a potential therapeutic target to prevent pathological activation of pericytes, although its role during physiological angiogenesis ( Figure 4G) should not be overlooked.
SRF Deletion in MCs Triggers the Formation of Arteriovenous Malformations
Since in Srf iMCKO mice Srf is not only deleted in pericytes but also in vSMCs, we wanted to study its requirement for vSMC development and function in further detail. To visualize vSMCs during early retinal development, we performed immunohistochemical staining for αSMA and NG2 at P6. In control retinas, αSMA strongly highlighted part of the contractile machinery located in the cytoplasm of vSMCs, whereas membrane-bound NG2 staining outlined the cell shape. Remarkably, in vSMCs of Srf iMCKO retinas, the αSMA signal was lost ( Figure S4A). Quantitative PCR analysis of Srf-KO pBPCs confirmed a dramatic drop in Acta2 gene expression ( Figure S4B). Despite the complete absence of the αSMA signal, NG2 staining still marked the vSMC population on arteries ( Figure S4A and S4B), indicating that vSMCs were present but had lost Acta2 expression. An in-depth analysis of NG2-positive cell coverage on arteries indicated only a slight reduction of vSMCs in Srf iMCKO retinas ( Figure S4B). These results strongly argued that SRF is strictly required for Acta2 expression in arterial vSMCs.
To investigate the impact of Srf deletion during vascular remodeling, we analyzed retinas at P12. At this stage, blood vessels sprout perpendicularly from the primary retinal plexus to form the deep plexus. 32 The tissue undergoes extensive remodeling, and both arteries and veins progressively mature. vSMCs become increasingly important as the blood pressure in arteries increases, and consequently, contractile functions are crucial to regulate the vascular tone. At this stage, control retinas displayed a stereotypical vascular pattern, with hierarchically organized blood vessels and a clear arterial-venous zonation, implying that blood flow is channeled from arteries into arterioles, capillaries, venules, and subsequently into veins. In contrast, we observed severe patterning defects in Srf iMCKO retinas ( Figure 5A), in which both arteries and veins were significantly dilated (Figure 5B and 5C). Moreover, arteries were often directly connected to veins, a phenomenon called arteriovenous shunting ( Figure 5A). The shunting phenotype was highly penetrant, since we observed at least one arteriovenous shunt in every analyzed P12 retina ( Figure S5B). In addition, we noticed that the deep vascular plexus was considerably underdeveloped, as illustrated by a reduced vascular area and a decreased number of vessel branch points ( Figure 5D and 5E). Remarkably, deep plexus capillaries were completely absent directly underneath the arteriovenous shunts ( Figure 5A).
To clarify if arteriovenous identity is lost in association with shunt formation, we performed immunohistological staining with known markers of arteriovenous zonation on control and Srf iMCKO retinas at P12 (Figure S4C and S4D). For this purpose, we used SOX17 (SRY-box transcription factor 17), an arterial marker necessary for acquisition and maintenance of arterial identity, 33 and Endomucin, a transmembrane sialomucin expressed solely on the surface of capillary and venous, but not arterial, endothelium. [34][35][36] As expected, control retinas displayed a clear arteriovenous zonation, with arteries strongly positive for Sox17 and negative for Endomucin, whereas veins were strongly positive for Endomucin but showed only a weak signal for Sox17 ( Figure S4D). This pattern of arteriovenous zonation was maintained in Srf iMCKO retinas, arguing that the vascular malformations are indeed arteriovenous shunts and not dilated, dedifferentiated vessels. We further noticed that, in Srf iMCKO retinas, AVMs became more pronounced over time due to a constant increase in artery and vein diameter and a concomitant loss of the intermediate capillary plexus ( Figure S5A). Although at P12 some malformations still contained capillary remnants between arteries and veins, the number of AVMs in which arteries and veins directly connected more than doubled by P25, whereas the total number of AVMs remained unchanged ( Figure S5B and S5C).
The arteriovenous shunts in Srf iMCKO retinas morphologically resembled vascular malformations characteristic of mouse models of hereditary hemorrhagic telangiectasia (HHT), a disease caused by mutations in the TGF (transforming growth factor)-β pathway. 37 In HHT mouse models, it has been reported that ECs that contribute to shunt formation proliferate at higher rates than neighboring ECs. 38,39 To clarify if a similar mechanism could explain arteriovenous shunt formation in Srf iMCKO retinas, we performed in vivo proliferation assays using EdU in combination with the nuclear EC marker ERG1. Whereas ERG1 staining revealed that EC density was locally increased in the malformed areas, EC proliferation (ERG1+EdU+/ERG1+ counts) was markedly reduced in Srf iMCKO retinas ( Figure 5F and 5G). Immunostainings with the junctional marker VE-cadherin (vascular endothelial cadherin) further demonstrated that the shape of ECs in the malformed regions was severely affected. Although ECs on veins in control retinas were elongated and showed a spindle-like morphology with straight adherens junctions, ECs at arteriovenous shunts had a round morphology, were less elongated, and their adherens junctions appeared irregular, with partially overlapping areas and a zig-zag morphology ( Figure 5H). Furthermore, transmission electron microscopy of arteriovenous shunt regions revealed enlarged basal laminae and intraluminal membrane invaginations originating from ECs ( Figure 5I). Interestingly, intraluminal membrane invaginations have also been described in Pdgfb mutant mice, in which pericyte recruitment is defective. 40
SRF Is Critical for the Expression of Contractile Genes in vSMCs
Although, overall, we only observed a marginal difference in MC coverage between control and Srf iMCKO retinas ( Figure 6A; Figure S6A through S6D), we noticed in arteriovenous shunt areas that the venous shunt portions showed an increased vSMC coverage, whereas arterial shunt portions were often deprived of vSMCs ( Figure 6A; Figure S6C and S6D). A 3-dimensional segmentation analysis of arteriovenous shunt regions showed striking morphological alterations of vSMCs ( Figure 6B), and high-resolution imaging further revealed changes in the intracellular organization of the cytoskeletal protein Desmin ( Figure S6E). Taken together with the complete absence of αSMA ( Figure 6C and 6D), these results suggested that the contractile ability of vSMCs is compromised in Srf iMCKO retinas. To further investigate a potential reduction of vSMC contractility, we determined the expression levels of genes encoding typical contractile proteins in SMCs 29,30 and found that Acta2, Myh11, and Tagln were strongly downregulated in whole retinal lysates of Srf iMCKO mice ( Figure S7A). We further used pBPC cultures to study the expression of contractile genes. In control pBPC cultures, the addition of serum led to a strong induction of SMGs, such as Acta2, Myh11, and Tagln, whereas in Srf-KO cells the expression of those genes was almost completely lost under all tested conditions ( Figure S7B). Altogether, these results show that SRF is indispensable for the expression of genes responsible for the contractile abilities of vSMCs.
To characterize the transcriptional changes that result from SRF deletion in MCs in further detail, we utilized an RNA-sequencing approach. We induced control and Srf iMCKO mice with tamoxifen from P1 to P3 and FACS-sorted PDGFRB + MCs from retinas at P12. We subsequently isolated RNA from those cells and sequenced in triplicate. Expression analysis confirmed a high enrichment of mural-specific genes, such as Pdgfrb, Rgs5, and Notch3, whereas genes typically expressed in other cell types, such as ECs (Pecam1, Cdh5) or astrocytes (Gfap), were underrepresented, which suggested that sufficiently pure MC fractions had been isolated and sequenced (Figure S7C). Principal component analysis of all sequenced datasets showed good reproducibility between samples within each group and a strong difference between the control and the Srf iMCKO group ( Figure S7D). Differential gene expression analysis using a false discovery rate-adjusted P <0.05 and an absolute log 2 fold change >0.5 identified 2265 upregulated and 2537 downregulated genes ( Figure 6E). Notably, we identified 517 differentially expressed genes (350 up/167 down) which are potentially under control of SRF. 41,42 The expression of the majority of these differentially expressed genes (375) is controlled by the MRTF-SRF signaling axis (Figure S7E). Further, gene set enrichment analysis using the Kyoto Encyclopedia of Genes and Genomes and the Gene Ontology databases especially identified pathways linked to contractile functions of myocytes and SMCs ( Figure 6F). Importantly, the gene set enrichment analysis identified numerous differentially expressed genes crucial for smooth muscle contraction ( Figure 6G), and the downregulation of Acta2 and Tagln in Srf iMCKO MCs could be confirmed. Strikingly, we also observed a strong downregulation of numerous structural elements of the contractile apparatus, such as myosin light chain 9 (Myl9; log 2 fc: −2.59±0.36), tropomyosin beta chain (Tpm2; log 2 fc: −2.96±0.42), and several members of the actin family, such as Acta1 (log 2 fc: −3.68±1.14), Actc1 (log 2 fc: −2.69±1.03), and Actg1 (log 2 fc: −0.97±0.31). Of note, multiple ion channel proteins important for Ca 2+ release into the cytoplasm, which triggers vSMC contraction, were markedly downregulated (among them Cacna1d). Conversely, several genes were upregulated, most notably the SRF cofactor myocardin (Myocd, log 2 fc: 3.08±0.43), which binds to and activates SRF and thereby leads to the induction of muscle-specific gene expression. 43 The upregulation of those genes might reflect a compensatory mechanism aimed to limit the consequences that the loss of SRF has for MC function.
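For readers who want to reproduce such a cut, the differential-expression criterion above (FDR-adjusted P < 0.05 and absolute log2 fold change > 0.5) amounts to a simple filter over a results table. Below is a minimal pandas sketch assuming a DESeq2-style table; the file and column names are hypothetical, not the authors' actual outputs.

```python
import pandas as pd

# Hypothetical DESeq2-style results table (file and column names assumed)
res = pd.read_csv("srf_imcko_vs_control_results.csv")

# Thresholds as stated in the text: FDR-adjusted P < 0.05, |log2FC| > 0.5
sig = res[(res["padj"] < 0.05) & (res["log2FoldChange"].abs() > 0.5)]

up = sig[sig["log2FoldChange"] > 0]    # higher in Srf-iMCKO mural cells
down = sig[sig["log2FoldChange"] < 0]  # lower in Srf-iMCKO mural cells
print(f"{len(up)} upregulated, {len(down)} downregulated")
```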
Previous studies have shown that, under pathological conditions, KLF4 (krueppel-like factor 4) promotes a phenotypic switch of SMCs from a contractile phenotype towards a mesenchymal phenotype (reviewed by Yap et al, 2021 44 ). To investigate if MCs in Srf iMCKO mice shift towards a mesenchymal phenotype, we analyzed our RNA-seq data for the expression of relevant marker genes. We found that the expression of the mesenchymal markers Ly6a and Cd34, commonly used to identify such a transition, 45,46 was not increased ( Figure S7F). Immunostaining for the corresponding proteins SCA1 (stem cell antigen-1) and CD34 only showed detectable labeling in ECs ( Figure S7G and S7H), indicating that the expression of these markers in retinal vSMCs is generally very low. In addition, changes in KLF4 expression were not significant, and expression levels of Atf4, a transcription factor known to prevent proteasomal degradation of KLF4 and to promote the phenotypic switch of SMCs towards a fibroblast or macrophage-like phenotype, 45 were reduced ( Figure S7F). The expression levels of Lum and Dcn, markers that would indicate a switch towards a fibroblast-like phenotype, 47 were very low in both control and Srf iMCKO conditions and nonsignificantly changed ( Figure S7F). Taken together, our data do not indicate a switch of Srf iMCKO SMCs towards a mesenchymal or fibroblast-like phenotype.
Loss of Vascular Tone in the Absence of Mural SRF
Our transcriptomic analyses of MCs isolated from P12 Srf iMCKO and control retinas revealed a conspicuous misregulation of genes involved in the regulation of the vascular tone. To explore the biological significance of these results, we investigated the retinal vascular morphology of control and Srf iMCKO mice at 3, 4, and 8 weeks of age in vivo via scanning laser ophthalmoscopy and optical coherence tomography. Scanning laser ophthalmoscopy confirmed a substantial enlargement of arterial and venous vessels in all examined Srf iMCKO retinas ( Figure 7A). Moreover, we were able to classify the individual vascular alterations into three degrees of severity: mild, intermediate, and severe ( Figure S8A and S8B). Strikingly, there were also major abnormalities in the dynamic movement of vessel walls associated with blood pulsations in Srf iMCKO retinas. Although the vessel walls of arteries in control retinas showed relatively scant movement, we observed exceptionally strong pulsating motion in the arterial walls of Srf iMCKO animals ( Figure 7A through 7C; Video S4). This finding strongly supports a hemodynamically relevant loss of the vascular tone. Moreover, in our longitudinal study, we further observed that arterial and venous vessels gradually enlarged during the course of the experiment. This effect was quantified by repeated size assessment of identical vessels at 3, 4, and 8 weeks, respectively ( Figure 7D). In the most severe cases, we observed that venous vessels ruptured ( Figure 7D), which further supports the hypothesis that vSMCs had lost their ability to modulate blood pressure.
To investigate potential perfusion problems that could result from a loss of vascular tone, we analyzed Srf iMCKO and control retinas with scanning laser ophthalmoscopy angiography, for which we used indocyanine green as a contrast agent. These results revealed profound alterations of the capillaries in Srf iMCKO retinas, which are suggestive of associated perfusion defects ( Figure 7A; Figure S8A and S8B).
Redirected Blood Flow via Arteriovenous Shunts Leads to Reduced Capillary Perfusion
We next performed intravital imaging of the retinal vasculature in Srf iMCKO and control animals to analyze if changes in capillary perfusion are a consequence of a pathological blood flow redirection via the observed arteriovenous shunts. To do so, fluorescent microspheres (beads) were injected into the circulation and imaged to calculate flow velocity and distribution over time. Bead velocity and distribution have previously been established to be comparable to those of labeled RBCs and allow determination of RBC velocity and approximate blood flow rates (nl/s) 48 (Figure 8A through 8F; Figure S8E; Videos S5 and S6). Overall, we observed a substantially reduced mean velocity of RBCs in the Srf iMCKO vasculature. This is likely a direct consequence of the lack of a myogenic response ( Figure 8A through 8C; Figure S8E; Video S5). In contrast, the mean arteriovenous blood flow rate was not significantly changed in Srf iMCKO retinas ( Figure 8D and 8E). However, when comparing flow rates of individual vessels, it became apparent that Srf iMCKO veins experience large variations in flow rates and that the blood flow is predominantly redirected through shunting veins, whereas nonshunting veins experience lower flow rates ( Figure 8A and 8E). As a result, capillary perfusion was also significantly reduced in Srf iMCKO retinas. This was apparent from a reduced number of beads crossing through capillaries between adjacent arteries and veins ( Figure 8A and 8F; Video S6). A postmortem analysis by immunolabeling of RBCs with Ter119 in the retinal vasculature of P12 animals confirmed these observations and demonstrated that RBCs accumulate in AVMs, whereas their presence is reduced in the capillary network ( Figure 8I through 8K).
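As a rough illustration of how a volumetric flow rate in nl/s can be derived from a bead velocity and a vessel diameter (Q = v·πr², with 1 nl = 10^6 µm³), consider the sketch below. It assumes the bead velocity approximates the mean RBC velocity across the lumen, and the input numbers are illustrative rather than measured values from this study.

```python
import math

def flow_rate_nl_per_s(mean_velocity_um_s: float, diameter_um: float) -> float:
    """Approximate volumetric flow Q = v * A for a cylindrical vessel.

    Assumes the tracked bead velocity approximates the mean RBC velocity
    across the lumen; 1 nl = 1e6 um^3.
    """
    radius = diameter_um / 2.0
    area_um2 = math.pi * radius ** 2             # cross-sectional area in um^2
    q_um3_per_s = mean_velocity_um_s * area_um2  # volume moved per second
    return q_um3_per_s / 1e6                     # convert um^3/s to nl/s

# Illustrative numbers only, not measured values from the study:
print(f"{flow_rate_nl_per_s(mean_velocity_um_s=3000, diameter_um=60):.1f} nl/s")
```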
The reduced capillary perfusion likely has detrimental effects on the retinal tissue, as oxygenation and nutrient supply are expected to be severely decreased. Consequently, we assessed retinal function by electroretinography, which probes the function of the outer retina. 49 A typical finding in dark-adapted (scotopic) electroretinography in cases of retinal hypoxia is a discrepancy between the initial negative wave (the a-wave) and the subsequent positive wave (the b-wave), leading to a waveform shape called negative electroretinography. Indeed, we clearly observed negative scotopic electroretinography in eyes of Srf iMCKO animals as well as reduced b/a-wave ratios ( Figure 8G and 8H), indicating reduced retinal oxygenation, most likely as a result of defective capillary perfusion. The grade of severity of vascular alterations correlated well with the scotopic electroretinography measurements (Figure 8G and 8H).
To analyze if SRF is also needed for MC function in the mature vasculature, we induced its deletion in adult mice. We injected 8-week-old Srf-flex1::Pdgfrb-CreER T2 and littermate control mice with tamoxifen on 5 consecutive days ( Figure S9A) and analyzed retinas after 2 and 12 months, respectively. After 2 months, the retinas showed no obvious vascular phenotype (data not shown). However, after 1 year, we observed a significant dilation of arteries and veins ( Figure S9B and S9C). Coimmunostainings for Desmin and αSMA did not indicate a substantial loss of vSMC coverage on arteries or veins but revealed a marked reduction of αSMA expression, especially on veins ( Figure S9D and S9E). This suggests that SRF deletion in adult mice leads to a reduction in contractile protein expression rather than a loss of vSMCs. In support of this hypothesis, we also observed a reduced expression of Acta2 and Tagln in whole brain lysates of Srf iMCKO mice ( Figure S9F). In contrast, we did not observe changes in the capillary network of those mice and conclude that SRF is dispensable in adult retinal pericytes.
DISCUSSION
Our understanding of MC function has increased considerably in the past decade. It is now well established that pericytes play a key role in maintaining the integrity of the blood-brain barrier and that their dysfunction contributes to the progression of numerous diseases (reviewed in Geranmayeh et al, 50 Hirunpattarasilp et al, 51 and Brown et al 52 ). Yet, despite recent advances, many aspects of pericyte biology still remain poorly understood.
During angiogenesis, pericytes are recruited to sprouting blood vessels via PDGFB/PDGFRB signaling. 11,53 PDGFB, which is produced and secreted by tip cells, is retained in the ECM (extracellular matrix) of new vessel sprouts via its retention motif. 54 Pericytes, which in turn express PDGFRB, sense the PDGFB tissue gradient and comigrate along the nascent vessel sprouts. 55 NCK1 (non-catalytic region of tyrosine kinase 1) and NCK2 adaptor proteins have been proposed to mediate PDGFB-dependent PDGFRB phosphorylation. 31 However, which signaling molecules are activated downstream of PDGFRB during pericyte recruitment, and how those molecules regulate the cytoskeleton to mediate cell motility, has not been well characterized. Here, we demonstrate that PDGFB/PDGFRB signaling triggers translocation of MRTF coactivators to the nucleus, where they associate with the SRF transcription factor and activate expression of a specific gene set that subsequently regulates pericyte migration. Interestingly, MRTF-driven activation of SRF has previously been reported in response to PDGFRα signaling in mesenchymal cells during craniofacial development, 56 suggesting that MRTF/SRF activation might be a conserved feature downstream of PDGFRs. Our data suggest that SRF is a key regulator of cytoskeletal functions in pericytes, as its deletion (Srf iMCKO ) led to severe cytoskeletal and migratory defects in vivo. As a consequence, SRF-depleted pericytes were unable to fully populate the retinal vasculature, which resulted in a reduced pericyte coverage, especially at the sprouting front, and caused vessel dilation as well as reduced barrier properties.
Recent studies have highlighted that pericytes are not only crucial for normal vascular development and for maintaining blood-brain barrier properties in the adult vasculature but can also acquire disease-promoting properties. Examples are the formation of vascular malformations as a consequence of RBP-J (recombination signal binding protein for immunoglobulin kappa J region) deletion in pericytes and their role as promoters of NVT formation in ischemic retinopathy. 31,57 Here, we demonstrate that the pathological features of pericyte activation in ischemic retinopathy are mediated by SRF, which regulates pericyte migration downstream of PDGFRB signaling and activates the expression of SMGs. Accordingly, OIR experiments in Srf iMCKO mice showed reduced NVT formation and improved revascularization. Remarkably, the pathological αSMA expression in pericytes was completely prevented.
In this regard, it is interesting to note that pathological activation of pericytes shares certain similarities with fibrotic reactions, in which SMGs are expressed at high levels and excessive amounts of ECM proteins are deposited. 58 The fibrotic reaction is regulated, at least in part, by the actin-MRTF-SRF axis, 59 and recently developed small-molecule inhibitors that target MRTF function are promising candidates for the treatment of fibrosis. 60 Besides its crucial role in pericytes, SRF also plays an essential role in vSMCs, where it regulates the expression of SMGs. These genes typically contain a CArG cis-element that serves as an SRF binding site 41,61 and, in part, encode proteins that enable vSMCs to constrict and thereby increase vascular resistance. 62,63 Through the modulation of vascular resistance, vSMCs can regulate blood flow to satisfy the local demands for oxygen and nutrients. 64 This implies that a vessel branch that experiences a stochastic increase in flow compared to its neighboring branch must be able to counterbalance that increased flow rate to ensure an equal blood distribution. This is attained by an increase in resistance in the affected branch due to vasoconstriction, which naturally leads to an increased flow in the neighboring branches, where resistance is lower. 65,66 This property of vSMCs has been termed the myogenic response. 67 In Srf iMCKO mice, vSMCs fail to express typical SMGs and are no longer able to mediate the myogenic response. Consequently, flow redistribution cannot be achieved, and initial stochastic changes in local blood flow cannot be adequately redirected. We propose that, as a consequence, some branches develop into arteriovenous shunts that funnel a proportion of the blood directly to the venous circulation ( Figure 7E). This relieves the pressure from surrounding vessels and ensures a certain functionality of the retinal vasculature. These shunts form where one would expect them: in the retinal periphery, where the distance between arteries and veins is the shortest. Intravital imaging of those AVMs revealed a pathological blood flow redirection primarily via shunting veins. As a consequence, capillary perfusion was significantly reduced in Srf iMCKO retinas. Electrophysiological measurements confirmed that retinal function, and thus vision, was severely impaired. In this context, it is interesting to note that similar roles of SRF have been reported in visceral SMCs, where Srf deletion led to impaired contraction and thus to severe dilation of the intestinal tract. 19,22 The fully developed retinal vasculature seems to be more robust to changes. Mural Srf deletion in adult mice did not lead to arteriovenous shunt formation, which is likely attributable to the low plasticity of fully matured blood vessels. In this context, it is worth noting that adult deletion of SRF did not result in a complete loss of αSMA in arterial SMCs. It is thus possible that the remaining αSMA protein levels maintain a sufficient degree of contractile function and that, because of this, arteriovenous shunts did not form. However, the diameter of arteries and veins significantly increased in aged Srf iMCKO mice, and we observed a significant reduction of contractile proteins. In contrast, pericytes around capillaries seemed unaffected, suggesting that SRF is dispensable for pericyte homeostasis.
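To make the myogenic-response argument concrete: by Poiseuille's law, hydraulic resistance scales with 1/r⁴, so even a modest constriction of one branch sharply raises its resistance and diverts flow into its neighbors. The sketch below works through this in arbitrary units; it is a textbook illustration, not a model fitted to the data in this study.

```python
import math

def relative_resistance(radius: float, length: float = 1.0, mu: float = 1.0) -> float:
    """Poiseuille resistance R = 8*mu*L / (pi*r^4), in arbitrary units."""
    return 8 * mu * length / (math.pi * radius ** 4)

# Two parallel branches fed by the same pressure drop dP, so Q_i = dP / R_i.
dP, r2 = 1.0, 1.0
for label, r1 in [("equal radii", 1.0), ("branch 1 constricted by 20%", 0.8)]:
    q1 = dP / relative_resistance(r1)
    q2 = dP / relative_resistance(r2)
    print(f"{label}: Q1/Q2 = {q1 / q2:.2f}")  # 1.00 before, ~0.41 after
```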
The finding that defective MC function can trigger the formation of AVMs might be of broader medical relevance. AVMs are hallmarks of HHT, a human disease caused by autosomal dominant mutations in genes of the TGF-β signaling pathway, in particular endoglin or ACVRL1 (activin receptor-like kinase 1). 37 In HHT, AVMs commonly form in the nose, lungs, brain, or liver, and affected individuals often suffer from nasal and gastrointestinal bleeding. Rarely occurring AVMs in the central nervous system can even be life-threatening. 68 Thus far, MCs have not been directly implicated in triggering HHT-like malformations, although they have been found to be immature on AVMs and are thought to contribute to the instability of vessels. 69 In addition, recent reports indicate the potential importance of MC coverage in the treatment of HHT. 69,70 Furthermore, Sugden et al 71 recently also highlighted the importance of hemodynamic forces in this context and demonstrated that endoglin function is necessary to mediate blood flow-induced EC shape changes, which limit vessel diameter and prevent the formation of arteriovenous shunts. This is in line with our hypothesis, which suggests that blood vessel dilation and arteriovenous shunt formation can be triggered if hemodynamic changes are not counteracted. We propose that, in Srf iMCKO mice, this is likely caused by the loss of the myogenic response. Our study suggests that vSMCs can play a fundamental role in the development of AVMs and might make vSMCs a future focus of AVM research. | 2021-11-30T14:16:09.632Z | 2021-11-27T00:00:00.000 | {
"year": 2022,
"sha1": "cfc6aec59c9d83735ebb40333939691a502b06f2",
"oa_license": "CCBY",
"oa_url": "https://www.ahajournals.org/doi/pdf/10.1161/CIRCRESAHA.122.321109",
"oa_status": "HYBRID",
"pdf_src": "WoltersKluwer",
"pdf_hash": "e0026d8caea63e8849acf03a6ad34e5f06841b66",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
260180803 | pes2o/s2orc | v3-fos-license | An Empirical Study of Industrial Engineering Curriculum
The purpose of this paper is to identify current and future Industrial Engineering (IE) trends in graduate-level course curricula. The top ten US industrial engineering schools, as ranked by US News and World Report, were benchmarked. The different IE topics were categorized into several categories: Engineering Economics, Human Factors, Information Technology, Manufacturing Processes, Manufacturing Systems, Operations Research, Quality Control, Simulation, Statistics, and Integration of subject areas. The study of the top ten graduate programs in Industrial Engineering, according to US News and World Report, showed that these schools are focusing on Manufacturing Systems and on how the other tools of Industrial Engineering relate to manufacturing. The implication of this study is that teaching methods should reflect the emphasis on Manufacturing Systems and incorporate real-world problems.
INTRODUCTION
In recent years, industry has witnessed a major change in the roles of Industrial Engineers (IEs). Industrial Engineering has been defined by the Institute of Industrial & Systems Engineers (IISE) as a field of study that is "concerned with the design, improvement and installation of integrated systems of people, materials, information, equipment and energy" [1]. Industrial Engineers were once thought of solely as shop-floor efficiency experts. However, they are now being asked to perform a diverse set of roles within organizations, from production planning and scheduling to the development of information systems. Industrial Engineers are being hired for jobs in hospitals, distribution centers, and other service organizations. The diversity of roles that Industrial Engineers are capable of playing has led to the diversification of curricula within Industrial Engineering departments across the world [2,3]. One of the challenges for these departments is to determine what to offer and what to exclude. The Faculty of Engineering at Sana'a University has been looking toward the future and trying to decide where it wants to be in ten years' time. One of the objectives of the faculty is to establish an IE graduate program based on quality and accreditation standards.
More emphasis is being placed on quality education in the region as well as internationally [4]. Therefore, for the Faculty of Engineering to be positioned similarly to other IE graduate programs, there was a need to understand what other schools teach in their graduate IE programs to ensure the development of innovative ideas and appropriate goals for the future. The purpose of the study was to identify current and future trends in IE graduate-level course curricula. Understanding the need to have well-prepared students who go out to industry and perform effectively, school curricula have begun to mirror industry. Therefore, the benchmarking process in this paper was implicitly built from the vantage point of the student as the customer. As witnessed by Lohmann [5], when a department changes its curriculum or focus, the students in such a department are affected the most; this is why the students were treated as the customer.
LITERATURE REVIEW ON IE TRENDS
The literature review on IE education has revealed a limited number of studies on the topic of IE trends. In a study by Elsayed [6], the author stressed the fact that courses taught in traditional IE programs included manufacturing processes, manufacturing systems engineering, and design of production as core courses, in addition to natural science courses. Therefore, Mummolo [3] suggested the need to combine innovation and knowledge-based sectors with traditional IE disciplines. The author emphasized the necessity of bridging the gap between innovation and IE programs, to allow IE to compete in new emerging areas such as nano-bio technologies, new materials, digital factories, energy and environment, safety, security, and health. The diversity of roles that Industrial Engineers are playing and the rapidly changing industrial environment drove the need to change the IE curricula. One of the factors reported in the literature affecting the need to change IE curricula is the ongoing technology evolution, as confirmed by recent studies by Paté-Cornell and by Chen et al. [7,8]. According to the authors, this should help industrial engineers keep abreast of recent developments. Another factor reported in the literature was the limited emphasis being placed on engineering problem solving in the IE curricula, as confirmed by Elsayed, Grimson, Acar, Baillie et al., Mitchell, and Chen et al. [6,8,9,10,11,12]. Industrial engineers also need to develop the skills of engineering problem solving, which are critical for industrial settings. This is supported in the literature by Kuo et al. [13], where the authors stressed the importance of reforming the IE curricula and of engineering problem solving. Moreover, a number of studies in the specialized literature discussed that industrial engineers need to develop their communication abilities by being introduced to other business disciplines, such as management, finance, economics, organizational psychology, and communication [7,10,12,14]. As an example, according to Paté-Cornell [7], Stanford moved toward the integration of other business disciplines in the IE curriculum to prepare students to face the increasing challenge imposed by globalization and the complex business world. Chen et al. [8] also argued that creativity should be introduced in IE programs in the form of industrial communication, creative problem solving, and scientific research methodology. Another study, conducted by Eskandari et al. [2], focused on the emerging topics that should be integrated in the IE program. The study concluded that emerging important topics, such as management and leadership, quality engineering, and supply chain management, needed to be incorporated in the IE curriculum. One can conclude from the literature review that IE is a diversified discipline that needs to be balanced by technical and soft skills. These skills are required of IEs in today's world to keep up with the rapidly changing industrial and business environment.
METHODOLOGY AND DATA COLLECTION
The researchers set out to find information about other industrial engineering schools. Supported by Fisher et al. [15], a conclusion was made that the best source for determining the top ten industrial engineering schools was US News and World Report, which listed the top ten schools in alphabetical order [16]. In this research, three industrial engineers were assigned to perform the task of benchmarking against other Industrial Engineering programs. These individuals were highly motivated because the benchmarking effort allowed them to explore other curricula and investigate how other schools teach their disciplines. The team was also qualified for the task because of its knowledge of most areas related to industrial engineering. A database framework was developed to collect data. The data in the framework were obtained from course catalogs from many universities to understand what subjects are taught in Industrial Engineering programs. From reviewing course descriptions, it was determined that the fundamental unit of analysis should be a topic within a course (for example, a course in linear programming would have the Simplex Method as a topic). This determination was made because otherwise there would have been too much variation between the subject areas and course titles of the different schools. For instance, a course titled "Inventory Control" may be taught from a management perspective, while other schools teach the shop-floor fundamentals of inventory. Each topic listed in the course description and/or syllabus was then classified into one of ten IE main categories, as described in Table 1. Some IE programs were teaching traditional course subjects in the context of another subject area. For this reason, it was decided to classify a topic into a Main Category and a Secondary Category when necessary. In these cases, the Main Category would represent the overall context of the course and the Secondary Category would represent the traditional classification for the tool or technique. For instance, several schools taught a planning and scheduling course which utilized operations research methodology to solve problems. In this example, the Main Category is Manufacturing Systems and the Secondary Category is Operations Research, as shown in Table 2. (The Integration category in Table 1 denotes the use of two or more subject areas in a course, most often occurring in a capstone project course.)
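One way to picture the database framework is one record per topic, carrying both category labels. The sketch below shows a possible schema; the field names and example values are illustrative, as the authors' actual database layout is not reproduced here.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TopicRecord:
    school: str
    course_title: str
    topic: str
    main_category: str                        # one of the ten IE categories
    secondary_category: Optional[str] = None  # traditional classification, if any

# Mirrors the planning-and-scheduling example from the text; the school
# name is a placeholder, not one of the surveyed programs.
rec = TopicRecord(
    school="Example University",
    course_title="Production Planning and Scheduling",
    topic="Job-shop scheduling",
    main_category="Manufacturing Systems",
    secondary_category="Operations Research",
)
```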
RESULTS AND ANALYSIS
Upon completion of the database, the team reviewed the information for quantitative and qualitative trends. First, the team analyzed the number of topics for each Main Category and Secondary Category.
After this analysis, however, the team realized that the data could be skewed as a result of many topics being taught in a single class. For example, a Manufacturing Systems class might discuss Just-in-Time, Lean Production Systems, Kanban, 5S, One-Piece Flow, and Standard Work, while an Operations Research class might only discuss the Simplex Method. The frequency of Manufacturing Systems would therefore be inflated. For that reason, the team decided to compare how many classes were taught within each Main Category and Secondary Category. The resulting trends are discussed below.
Observed Trends
Several results came from this study. The first trend detected was that the three main topic areas stressed the most were Manufacturing Systems, Human Factors, and Operations Research (see Table 4). These three subject areas represent 64% to 65% of the topics taught in the curricula at these schools. It is also interesting to note that only Human Factors would be unnecessarily inflated, and even then, the number of topics is almost double that of Engineering Economics, the next most prolific subject area. Therefore, we can say with some degree of certainty that these three subject areas are the major emphasis of the IE programs in this study. This point is emphasized in Tables 5 and 6. Table 5 presents the number of classes per Main Category and Secondary Category.

The second trend observed in the study is that most of these 10 schools would integrate some of the subject areas within course offerings. This is manifested in the number of courses that warranted a Secondary Category to go with the Main Category. The subject used most often as a Secondary Category is Manufacturing Systems (in 50% of the situations), as shown in Table 7. Again, this is emphasized by Tables 5 and 8. Table 8 compares the percentage of topics versus classes offered per Secondary Category. The results from these tables confirm that Manufacturing Systems is the Secondary Category used most often. This probably indicates an effort to use manufacturing to set the context for the other subject areas. It also reflects the real world more closely and is a direct response to present industry expectations that students understand how academic subjects are integrated in a practical environment. Another interesting observation is that Manufacturing Systems most often serves as a secondary subject to Information Technology and Engineering Economics, as described in Table 9.

The third trend is the development of Supply Chain Management (SCM) within the Industrial Engineering curriculum. Seven out of the ten top Industrial Engineering schools place significant emphasis on teaching the concepts of SCM. For instance, Northwestern University and Stanford offer courses in Transportation and Supply Chain Systems. These courses focus on the design of industrial logistic systems, as well as state-of-the-art planning models and practical tools for inventory control, distribution management, and multi-plant coordination.

A fourth trend observed in the study is that the teaching methods of these 10 schools tended to emphasize the application of the subject areas. Case studies were often used to bring a concept into a real-world context. Design projects and team demonstrations were also a major part of the curricula. These methods are dramatically different from the traditional lecture methods of most university courses.

Finally, another trend observed in the study is the use of capstone design projects. These differ from in-class design projects because the entire class is a design project aimed at integrating some or all of the subject areas within the curriculum. In fact, nine of the ten schools in this study have capstone design projects. Once again, the attempt is to create a realistic context for the learning process, which helps solidify the notions in the students' minds and better prepares them to work in various industries.
CONCLUSIONS
In conclusion, this study benchmarked the top ten industrial engineering schools, as indicated by US News and World Report. The purpose of this research was to identify trends in graduate-level course curricula, as well as course delivery methods. Topics were categorized into ten Main Categories (Table 1), including:
• Engineering Economics
• Human Factors
• Information Technology

The overall content of each course was categorized under a Main Category, with a Secondary Category representing the traditional category for the tool or technique used in that course. Overall, the study of the Top Ten Graduate Programs in Industrial Engineering, according to US News and World Report, showed that these schools are focusing on Manufacturing Systems and how the other tools of Industrial Engineering relate to manufacturing. The implication of this study is that teaching methods should reflect the emphasis on Manufacturing Systems and incorporate real-world problems. These real-world problems may be brought forth through projects, labs, integrated curricula, or experts in the field presenting to the students. As with many organizations, there is plenty of room for improvement in academia, and a good benchmarking strategy will help develop strategies for this needed improvement.
Theoretical and practical implications
This study offers several important implications for the IE education literature, as well as for graduate programs. It extends previous research in the area of IE education and adds to the body of IE education knowledge. In addition, the research contributes to the understanding of IE trends and how they can be utilized to establish new IE graduate programs. The findings of this paper suggest placing more emphasis on case studies to bring a concept into a real-world context. Design projects and team demonstrations were also a major part of the curricula. The implication is that professors and instructors must begin to learn how to teach in a new manner. These methods are more effective for students' learning and retention, although they can be less efficient for the instructor. Similar to the previous trends is the use of industry experts to serve as guest lecturers. These classes are often taught in a seminar format, with a new expert in each class period. There are several issues to be solved when using this method. How do you grade the students? Can you find 18-20 experts in a subject area with time schedules that match your own? How do you integrate the expert knowledge into the rest of your curriculum? That said, this could be a very valuable method of providing context for the rest of the curricula. As a practical implication, this study helps schools forecast what IE topics to offer in their programs. Since companies typically prompt universities to teach specific new initiatives to their employees, these topics should be taught to students in the universities to make them more marketable when they are applying for jobs. Among the current Executive Education courses identified in Table 10, Manufacturing Systems is taught almost exclusively. We also noticed that it is rare for several categories to be taught together in these courses. This is probably because the courses are more specific and the students already have experience in the marketplace, so they do not need the experience gained from project-based classes.
LIMITATIONS OF THE STUDY
As with most benchmark studies, there were some difficulties in collecting and analyzing the data in the manner that was chosen. The three areas that contributed most to the error in the data are the following (in order of significance):
• Interpretation of different terminology used by different schools.
• Differences in the level of detail of course descriptions.
• Current relevance of course descriptions.
Interpretation of Different Terminology
Each topic within a course description was compared to the other topics in the database to determine whether the topic was new or similar to another one from a course at a different school. Often, different schools would use different terminology to describe a similar concept. In these cases, the researcher had to make a subjective decision as to how to categorize the topic. The researcher's expertise could affect the decision of how to categorize the topics. In this benchmark, the researchers' main area of expertise was Manufacturing Systems. This being the case, many topics were probably repeated using different terminology, since the researchers were not aware of the redundant definitions.
Differences in the Level of Detail
Using these research methods, it was difficult to verify what exactly was covered in a specified course. Some course descriptions used more general terms to describe the concepts covered, while others used more specific terms. For example, in a Nonlinear Programming course, a general description might read, 'Discussion of unconstrained optimization techniques', whereas a more specific description would enumerate the techniques used. For the most part, course descriptions were more specific than general; however, the only true way to know what is covered in a course is to obtain the syllabus.
Current Relevance of Course Descriptions
Another limitation of the study stems from the fact that course catalogues might not be up to date. The research assumed that the courses listed in the catalogues were still being offered. The resulting error in the study is that some courses and topics entered into the benchmark database are not currently being taught on a regular basis.

Several Main Categories and Secondary Categories per Class

During the data collection, the topics were separated and logged in individual records. The appropriate Main Category and Secondary Category were selected based upon the analysts' knowledge of the topic. Therefore, two or more Main Categories may be represented within the same course. For instance, one course taught about systems for production scheduling (Information Technology), heuristics for scheduling (Operations Research), and management of inventory (Manufacturing Systems). This is not very significant until the observed trends are noted. A comparison is made with regard to the number of classes taught for each Main Category and Secondary Category. Some classes may have two or more Main Categories and/or Secondary Categories. However, this limitation could not be avoided due to the assumption that the smallest unit of analysis would be the topic. | 2023-07-27T15:18:10.889Z | 2017-03-30T00:00:00.000 | {
"year": 2017,
"sha1": "971cac9d6f9b029ec84f74d198a316f4ef3438f8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.24867/ijiem-2017-1-105",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "769ea946176e3348c6a3637612ebd120066fcb3c",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
7090676 | pes2o/s2orc | v3-fos-license | Mutual Exclusivity of Hyaluronan and Hyaluronidase in Invasive Group A Streptococcus
Background: Serotype M4 group A Streptococcus lack hyaluronic acid (HA) capsule, but are capable of causing human disease. Results: Encapsulation was achieved by introducing the hasABC capsule synthesis operon in the absence of HA-degrading enzyme hyaluronate lyase (HylA). Conclusion: Capsule expression does not enhance M4 GAS virulence. Significance: We demonstrate a mutually exclusive interaction between GAS capsule and HylA expression.
A recent analysis of group A Streptococcus (GAS) invasive infections in Australia has shown a predominance of M4 GAS, a serotype recently reported to lack the antiphagocytic hyaluronic acid (HA) capsule. Here, we use molecular genetics and bioinformatics techniques to characterize 17 clinical M4 isolates associated with invasive disease in children during this recent epidemiology. All M4 isolates lacked HA capsule, and whole genome sequence analysis of two isolates revealed the complete absence of the hasABC capsule biosynthesis operon. Conversely, M4 isolates possess a functional HA-degrading hyaluronate lyase (HylA) enzyme that is rendered nonfunctional in other GAS through a point mutation. Transformation with a plasmid expressing hasABC restored partial encapsulation in wild-type (WT) M4 GAS, and full encapsulation in an isogenic M4 mutant lacking HylA. However, partial encapsulation reduced binding to human complement regulatory protein C4BP, did not enhance survival in whole human blood, and did not increase virulence of WT M4 GAS in a mouse model of systemic infection. Bioinformatics analysis found no hasABC homologs in closely related species, suggesting that this operon was a recent acquisition. These data showcase a mutually exclusive interaction of HA capsule and active HylA among strains of this leading human pathogen.
The Gram-positive bacterium Streptococcus pyogenes, commonly known as group A Streptococcus (GAS), is a human-specific pathogen ranked among the top 10 etiological agents of infection-related deaths worldwide (1). Annually, GAS is responsible for ~700 million cases of superficial throat (pharyngitis) and skin (impetigo) infections and ~650,000 cases of potentially fatal severe invasive infections (e.g. bacteremia/sepsis, necrotizing fasciitis, and streptococcal toxic shock syndrome), with an attendant mortality rate of ~25% (1). GAS strains are distinguished serologically on the basis of the immunovariable M protein (2), a major surface-anchored virulence factor that promotes resistance to opsonophagocytosis (3). Throughout much of the world, M1 is the most frequently isolated serotype from GAS infections, followed by serotypes M12, M28, M3, and M4 (4). A key factor in the resurgence of severe invasive GAS infections over the past 30 years has been the global dissemination of a hypervirulent clone belonging to the M1T1 serotype (5).
The surface capsule of GAS is composed solely of hyaluronan, or hyaluronic acid (HA), a high molecular mass polymer of alternating glucuronic acid and N-acetylglucosamine residues. The GAS capsule is structurally identical to the HA widely distributed throughout human tissues, allowing GAS to mimic host structures and thwart detection by the host immune system (5). The capsule promotes GAS survival by obstructing antibody binding to epitopes on the bacterial surface, complement deposition (6), and opsonophagocytosis (6,7). Capsular HA contributes to mouse pharyngeal colonization (8), and interacts with CD44 on human keratinocytes to enhance adherence to pharyngeal epithelial cells (9). Nonencapsulated GAS mutants have significantly reduced survival in human blood and are less virulent than encapsulated WT strains in mouse models of invasive GAS infection (10-13) and a nonhuman primate model of pharyngeal colonization (14).
HA capsule biosynthesis is coordinated by the highly conserved hasABC synthase operon (15). The hasA gene is essential for HA biosynthesis and encodes hyaluronate synthase, a membrane-bound enzyme that forms the linear HA polymer by the alternate addition of glucuronic acid and 1,3-linked N-acetylglucosamine residues (16,17). Capsule expression is strongly up-regulated upon exposure of GAS to whole human blood (18), and mucoid or highly encapsulated GAS isolates are often associated with pharyngeal persistence, acute rheumatic fever, and severe invasive human diseases (19). Spontaneously arising and irreversible mutations in the control of virulence regulatory system (covRS), a two-component regulator that coordinates the expression of ~10-15% of genes in the GAS genome (20), have been implicated in the initiation and progression of GAS invasive disease (5,21,22). Mutations in covRS up-regulate HA capsule biosynthesis and a multitude of virulence factors important for neutrophil resistance (21). Consequently, covRS mutants display enhanced virulence in mouse models of systemic GAS infection (22). In addition, covRS mutation abrogates expression of the broad-spectrum cysteine protease streptococcal pyrogenic exotoxin B (SpeB) (21), allowing the accumulation of human plasmin activity on the GAS surface (23). Plasminogen, a glycoprotein circulating in human blood, is the inactive form of plasmin, a broad-spectrum serine protease capable of dissolving blood clots and promoting tissue remodeling (24). Streptokinase is a plasminogen-activating protein secreted by most GAS isolates that is highly specific for human plasminogen (25). GAS bind plasmin(ogen) directly through cell surface receptors, including 1) streptococcal surface enolase (α-enolase/SEN) (26); 2) streptococcal surface dehydrogenase (SDH), also known as glyceraldehyde-3-phosphate dehydrogenase (GAPDH) and plasmin receptor (27); 3) plasminogen-binding M-like protein (28); and 4) plasminogen-binding M-like protein-related protein (29). Indirect plasminogen binding may occur through the formation of a streptokinase-plasminogen-fibrinogen (Fg) trimolecular complex that attaches to Fg or plasminogen receptors on the GAS cell surface (30). Sequestered plasmin activity on the GAS cell surface cannot be inhibited by the host regulators α2-antiplasmin and α2-macroglobulin (31), allowing GAS to degrade tissue barriers and spread systemically to normally sterile sites (5,22,23).
Although it was long assumed that the HA capsule was an essential virulence factor of the pathogen, genomic analysis recently revealed that disease-associated M4 serotype GAS lack the hasABC operon (32) and are nonencapsulated, yet can nevertheless replicate in human blood ex vivo (33). During recent epidemiology of severe invasive GAS infections in Australian children, M4 GAS surpassed M1 as the serotype most frequently isolated from normally sterile sites (34). Here, we utilize molecular genetics and bioinformatics to investigate the pathogenicity of 17 M4 clinical isolates from this emerging epidemiological trend. Three pulsed-field gel electrophoresis (PFGE) patterns and 2 multilocus sequence types (MLST) were identified, with more than 50% of isolates harboring mutations within covRS, a characteristic of hyperinvasive GAS. All M4 isolates were nonencapsulated, and whole genome sequencing of 2 M4 isolates revealed the complete absence of the hasABC capsule biosynthesis operon. We identify and functionally demonstrate a mutually exclusive interaction between GAS HA capsule expression (most serotypes) and expression of a secreted hyaluronate lyase (HylA) (35), which is functional in M4 GAS but harbors an inactivating mutation in encapsulated strains. The implications of this dynamic for GAS invasive disease pathogenesis and evolution are considered in light of these new observations.
EXPERIMENTAL PROCEDURES
Bacterial Strains and Growth Conditions-M4 GAS strains were isolated from children aged 1-14 years hospitalized with severe invasive infections in Queensland, Australia, between February 2001 and May 2009 (Table 1) (34). M1T1 GAS strain 5448 was isolated from a patient with toxic shock syndrome and necrotizing fasciitis (36). The highly invasive animal-passaged variant, 5448AP, is a hyperencapsulated covS mutant (22). Isogenic nonencapsulated mutant 5448ΔhasA was described previously (37). GAS strain 4063-05 (emm4, T-type 4) was isolated in 2005 from the blood of a patient in Georgia, USA. GAS was propagated at 37°C on Todd-Hewitt agar or in static liquid cultures of Todd-Hewitt broth (THB; Hardy Diagnostics). When necessary, the growth medium was supplemented with 5 μg/ml of erythromycin or 2 μg/ml of chloramphenicol.
Sequence Typing and PFGE-emm sequence typing was undertaken using established criteria from the Centers for Disease Control and Prevention. T-typing was performed essentially as described elsewhere (38). MLST was undertaken using the primers listed at the Centers for Disease Control and Prevention and the PCR conditions described at the S. pyogenes MLST database. Genomic DNA digests were compared by PFGE using the CHEF-DR II System (Bio-Rad) as described previously (39).
HA Capsule Assays-Capsular HA was extracted according to the method of Hollands et al. (37). Bacterial cultures were grown to mid-log phase (A600 = 0.4) in THB and serially diluted for colony-forming unit (cfu) enumeration. 5 ml of culture was centrifuged and resuspended in 500 μl of sterile Milli-Q water. 400 μl of bacterial suspension was added to 1 ml of chloroform, shaken for 5 min in a Mini-BeadBeater-8 (Biospec Products), and clarified by centrifugation at 13,000 × g for 10 min. HA in the aqueous phase was quantified using the ELISA HA Test Kit (Corgenix), as per the manufacturer's directions.
Glycan Analysis-HA was purified from the aqueous phase by DEAE-Sephacel chromatography and analyzed by high-performance anion-exchange chromatography with pulsed amperometric detection (HPAEC-PAD) for monosaccharides representing HA. Briefly, HA present in the aqueous phase was loaded on the DEAE column and washed with 5 ml of 50 mM NaOAc, 150 mM NaCl solution (pH 6.0) to remove contaminating protein. DEAE-bound HA was eluted with 1 ml of 50 mM NaOAc, 1 M NaCl (pH 6.0) solution. High salt was removed by desalting the sample over a PD10 cartridge (GE Healthcare). Finally, the sample was lyophilized and used for monosaccharide analysis. HA was hydrolyzed to monosaccharide constituents using 2 N trifluoroacetic acid (TFA) at 100°C for 6 h. TFA was removed by a dry nitrogen flush followed by two coevaporations with 50% isopropyl alcohol to ensure complete removal of acid. Finally, the sample was dissolved in water, and monosaccharide profiling was done on a Dionex ICS-3000 using the CarboPac PA1 column (4 × 250 mm; Dionex). A NaOH/NaOAc buffer gradient was used as eluent, and the monosaccharides were compared and quantified using known amounts of authentic standards as external calibrants.
Whole Genome Sequence Analysis-Genomic fragment libraries were prepared at the Australian Genome Research Facility with the Illumina TruSeq DNA library preparation protocol (40). Random subsets of 1 million read pairs were selected to perform read mapping and de novo assembly for comparative analysis against the published M4 GAS strain MGAS10750 (RefSeq accession number NC_008024) (32).
SpeB Assays and Western Blots-SpeB protease activity in cell-free stationary phase GAS supernatants was determined using the azocaseinolytic assay (41). Western blot analysis of stationary phase supernatants was performed as previously described using rabbit anti-SpeB IgG (Toxin Technology, Sarasota, FL) (42).
Enzymatic Assays-Glycosidase activity assays were performed essentially as previously described (44). HA sodium salt from rooster comb, chondroitin sulfate sodium salt from shark cartilage, heparan sulfate sodium salt from bovine kidney, and chondroitin sulfate B (also known as dermatan sulfate) sodium salt were purchased from Sigma. The substrates were dissolved in 50 mM ammonium acetate buffer (pH 6.5) with 10 mM calcium chloride (45). Recombinant HylA (500 or 5,000 pM) and 0.05-0.30 or 1.0 mg/ml substrates were incubated in the ammonium acetate buffer at 37°C. The rate of substrate degradation was measured by monitoring the increase of A232 over time. The kinetic parameters of M4 HylA, with HA concentrations ranging from 0.05 to 0.30 mg/ml at 37°C, were calculated using the Michaelis-Menten equation, V0 = Vmax[S]/(Km + [S]), and Lineweaver-Burk double-reciprocal plots, where V0 is the initial reaction rate (A232/min), Km is the Michaelis-Menten constant, Vmax is the maximum reaction velocity, and [S] is the substrate concentration.
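As an illustration of the double-reciprocal fit, the following sketch estimates Km and Vmax by linear regression on 1/V0 versus 1/[S]; the rate values below are synthetic numbers generated to roughly match the reported kinetics, not the experimental data.

```python
import numpy as np

# Illustrative substrate concentrations (mg/ml of HA) and initial rates
# (A232/min); synthetic values, chosen only to demonstrate the calculation.
S = np.array([0.05, 0.10, 0.15, 0.20, 0.30])
V0 = np.array([0.0093, 0.0169, 0.0231, 0.0284, 0.0369])

# Lineweaver-Burk: 1/V0 = (Km/Vmax) * (1/S) + 1/Vmax, a straight line in 1/S.
slope, intercept = np.polyfit(1.0 / S, 1.0 / V0, 1)

Vmax = 1.0 / intercept   # maximum reaction velocity
Km = slope * Vmax        # Michaelis-Menten constant

print(f"Km ~ {Km:.3f} mg/ml, Vmax ~ {Vmax:.3f} A232/min")
```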
Bioinformatic Analysis-The distribution of the HylA-encoding gene, hylA, was examined essentially as previously described (16). Briefly, the SEED and NCBI RefSeq genomic databases were searched for HylA protein homologs using BLASTP and subsystem analysis for protein similarity. When no annotated protein homologs were found in a genome, the absence of hylA was confirmed by the lack of tBLASTN matches.
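A homolog search of this kind could, for example, be scripted with Biopython's remote BLAST interface; the sketch below is a generic illustration under an assumed input file name and an arbitrary E-value cutoff, not the pipeline actually used (which relied on the SEED and NCBI RefSeq databases with subsystem analysis).

```python
from Bio.Blast import NCBIWWW, NCBIXML

# Hypothetical query: a HylA protein sequence saved locally in FASTA format.
with open("hylA_M4.fasta") as fh:
    hyla_fasta = fh.read()

# Submit a BLASTP search against the NCBI nr protein database
# (network call; can take minutes).
handle = NCBIWWW.qblast("blastp", "nr", hyla_fasta)
record = NCBIXML.read(handle)

# Report alignments below an illustrative E-value cutoff as putative homologs.
for alignment in record.alignments:
    for hsp in alignment.hsps:
        if hsp.expect < 1e-10:
            print(alignment.title, hsp.expect)
```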
Whole Blood Survival-Bacterial survival after a 2-h incubation in whole human blood was analyzed as described previously (47).
C4BP Pull-down and Adherence Assays-Recombinant His6-tagged M proteins and the C4BP fragment (C4BPα1-2) (48) were expressed and purified as previously described (37,38).

Fibrinogen Binding-Mid-log phase GAS (A600 = 0.4) in 10-ml culture volumes was centrifuged, resuspended in 5 ml of PBS, and 100-μl aliquots were added to a round-bottom 96-well plate (Costar). Human Fg conjugated with Alexa Fluor 488 (Molecular Probes) was added to a final concentration of 100 μg/ml, and the plate was incubated at 37°C for 1 h with shaking. The plate was centrifuged at 500 × g for 10 min, and wells were washed 3 times with 200 μl of PBS. Bacteria were resuspended in 150 μl of PBS and analyzed by flow cytometry. The average geometric mean of samples without fibrinogen was subtracted from each strain to adjust for background.
Cell Surface Plasmin Activity-GAS cultures were grown overnight to stationary phase. The next day, 300 μl of culture was added to 3 ml of THB supplemented with 1 unit/ml of human plasminogen (Calbiochem) and 7 μM human fibrinogen (Calbiochem) to facilitate cell surface plasmin acquisition, or THB only as the negative control. Cultures were grown to A600 = 0.4, divided into 3 × 1-ml aliquots in siliconized tubes, centrifuged for 5 min at 6,000 × g, and bacterial pellets were washed once with 1 ml of sterile PBS. Following resuspension in 200 μl of PBS, 10-μl aliquots were collected for cfu enumeration, prior to transferring 180 μl into V-bottom 96-well plates (Costar) and adding 20 μl of substrate S-2251 (Chromogenix). The plate was incubated in the dark for 1 h at 37°C, centrifuged for 10 min at 500 × g, and 100 μl from each well was transferred into a flat-bottom 96-well plate (Costar). The A405 was measured using a SpectraMax 250 (Molecular Devices). Cell surface plasmin activity was calculated as absorbance units/cfu. The following control wells were used: positive control, 1 unit/ml of human plasminogen + 1 μg of streptokinase from group C Streptococcus (Sigma); negative control, 1 unit/ml of human plasminogen only; substrate negative control, PBS only.
Neutrophil Killing Assays-Human neutrophils were isolated from venous blood using the PolymorphPrep system (Axis-Shield) and resuspended to 2 × 10^5 cells/100 μl in RPMI 1640 + 2% FBS heat-inactivated for 30 min at 56°C. Survival assays were performed as previously described (47). Briefly, 100 μl of neutrophil suspension was seeded into 96-well plates, and 100 μl of mid-log phase bacteria in RPMI + 2% heat-inactivated FBS was added for a multiplicity of infection of 1 (M4 GAS) or 0.1 (M1 GAS). The assay plate was centrifuged at 500 × g for 10 min and incubated for 15 min at 37°C with 5% CO2. Aliquots were serially diluted and plated onto Todd-Hewitt agar for enumeration. Percent survival was calculated using bacterial control wells grown under the same conditions without neutrophils.
Systemic Infection Model-Cohorts of 10-week-old female CD-1 mice (Charles River Laboratories) were inoculated intraperitoneally with ~10^8 cfu in 200 μl of PBS with 5% porcine gastric mucin (Sigma), and survival was monitored twice daily for 14 days.
Statistical Analysis-Capsular expression levels, SpeB protease activity, whole blood survival, C4BP binding, fibrinogen binding, and plasmin activity assays were compared by one-way analysis of variance. Neutrophil survival was analyzed using Student's t test. Kaplan-Meier survival curves were compared using the log-rank test. Differences were considered significant at p < 0.05. All statistical analyses were performed with GraphPad Prism version 5.0b (GraphPad Inc.).
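The analyses were run in GraphPad Prism, but the same tests can be reproduced in any statistics package; below is a minimal sketch using SciPy with fabricated readings (a Kaplan-Meier/log-rank comparison would additionally require a survival library such as lifelines).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Fabricated capsule-expression readings for three strains, six values per
# strain (e.g., triplicates from two experiments), for illustration only.
groups = [rng.normal(mu, 1.0, size=6) for mu in (10.0, 12.0, 15.0)]

# One-way ANOVA across strains, as used for the capsule, SpeB, blood survival,
# C4BP, fibrinogen, and plasmin comparisons.
f_stat, p_anova = stats.f_oneway(*groups)

# Student's t test, as used for the two-group neutrophil survival comparison.
t_stat, p_t = stats.ttest_ind(groups[0], groups[1])

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}; t test p = {p_t:.4g}")
```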
Ethics Approval-Permission to collect human blood under informed consent was approved by the University of California San Diego (UCSD) Human Research Protection Program. Procedures used for all animal experiments were approved by the UCSD Institutional Animal Care and Use Committee.
RESULTS
Typing and Genotypic Analysis Suggest That the M4 GAS Isolates Are Not Clonal-Over the past few decades, M1 GAS has been the most frequently isolated serotype from human infections worldwide (4) and the leading cause of life-threatening invasive syndromes (5). However, serotype M4 was the principal serotype associated with a recent report of severe invasive infections in Queensland, Australia, accounting for 16% of isolates compared with 8% for M1 (34). 17 such M4 isolates from this region, designated SP435-SP451, were obtained from children aged 1 to 14 years with invasive GAS infections between 2001 and 2009 (Table 1). SP435 and SP436 were highly virulent strains isolated from brothers hospitalized for 2-3 weeks (supplemental Table S1). The worldwide resurgence of severe invasive GAS infections over the past three decades has been attributed to the emergence of a single globally disseminated serotype M1T1 GAS clone (20). To determine whether the M4 isolates were clonal in origin, genomic DNA extracts were analyzed by PFGE. Three distinct PFGE patterns were identified, with the majority (65%) of M4 isolates sharing the same pattern (Fig. 1A). MLST classified the strains into 2 groups, with 15 of 17 (88%) identified as ST 39 (Table 1). SP449 and SP451 share a unique and hitherto unidentified mutS allele (supplemental Table S2), and have yet to be assigned a ST by the S. pyogenes MLST database.
M4 GAS Are Nonencapsulated and Lack the hasA Gene-A recent study identified capsule-deficient M4 GAS (33). To ascertain whether our geographically distinct M4 GAS isolates were similarly nonencapsulated, mid-logarithmic phase cultures were screened for HA capsule expression levels using a commercial ELISA-based kit. All M4 isolates were negative for capsule expression, compared with the M1 GAS positive control strain 5448 (Fig. 1B). To corroborate the ELISA data, we undertook monosaccharide composition analysis of hydrolyzed glycosaminoglycan-enriched fractions from WT M4 GAS and the encapsulated M1 GAS control strain 5448AP (22). Glucosamine (GlcNH2) and glucuronic acid (GlcA), the constituents of HA, were detected for 5448AP (Fig. 1C), verifying capsule expression in M1 GAS. The double peak near GlcA is characteristic of HA. In contrast, M4 GAS was completely deficient in GlcA and had very small amounts of GlcNH2 compared with 5448AP (Fig. 1C). These data confirm that M4 GAS lack HA capsule. Multiplex PCR screening of purified genomic DNA revealed that none of the M4 isolates contained the essential capsule synthesis gene hasA (Fig. 2A) (17), consistent with the previous report (33). In contrast, all M4 isolates were positive for the control gene, speB, encoding the ubiquitous cysteine protease SpeB (Fig. 2A).
M4 GAS Lack the hasABC Capsule Biosynthesis Operon-To validate the absence of hasA and further investigate the enhanced virulence potential of the newly emerged M4 GAS, two isolates with different PFGE patterns, SP436 and SP447, were subjected to whole genome sequence analysis (Fig. 2B). Comparison of SP436 and SP447 genomic content to the sequenced M4 genome MGAS10750 (RefSeq accession number NC_008024) (32) reveals >99% identity at the nucleotide level, with most of the sequence divergence between the strains confined to mobile genetic elements. Similar to MGAS10750 (33), SP436 and SP447 lack the hasABC capsule biosynthesis operon, strongly suggesting that this operon is absent in ancestral M4. The genomic region flanking hasABC is highly conserved between M4 GAS strains (SP436, SP447, and MGAS10750) and the M1 reference strain SF370 (RefSeq accession number NC_002737) (49) (Fig. 2C).
A total of 7,761,941 and 8,310,842 read pairs were obtained for SP436 and SP447, respectively, corresponding to an estimated average coverage of 862× and 923×. The draft genomes of SP436 and SP447 consist of 75 and 40 scaffolds, respectively, each concatenated into a single circular chromosome of an estimated size of 1.89 and 1.78 Mbp, with G+C contents of 38.25% (SP436) and 38.33% (SP447).

(Figure 2B legend: The innermost circles represent the GC content (black) and GC skew (purple/green) of the central reference strain MGAS10750. The BRIG representation shows, for each strain, SP436 (blue) and SP447 (green), from innermost to outermost, the sequence similarity and the distribution of the number of reads mapped onto the central reference using a window size of 500. The outermost circle represents previously reported regions of difference in MGAS10750, including prophage elements 10750.1 to 10750.4 (red) and integrative conjugative elements 10750.RD-1 and 10750.RD-2 (black). M4 GAS lack the hasABC capsule biosynthesis operon, the location of which is depicted on the outermost ring (orange triangle). Figure 2C legend: schematic alignment of the hasABC operon and flanking regions for serotype M4 (SP436, SP447, MGAS10750) and M1 (SF370). M4 GAS are deficient in hasABC and have conserved flanking regions with M1 GAS (99% sequence identity).)

When compared with MGAS10750 (32), the genomes of SP436 and SP447 harbor the same integrative conjugative element, 10750.RD-1, encoding sortase SrtA, but are missing integrative conjugative element 10750.RD-2, which confers resistance to erythromycin (Fig. 2B). SP436 contains the same four prophages previously identified for MGAS10750, as well as one additional putative prophage carrying the streptodornase-encoding gene sdn. In contrast, all four prophages in SP447 have undergone substantial deletion events and become remnants, while maintaining all their respective cargo genes intact. Functional annotation of the predicted coding DNA sequences also revealed that many previously identified virulence factors were present in the SP436 and SP447 genomes, including streptolysin O, IgG-degrading enzyme IdeS, streptococcal mitogenic exotoxin Z, SpeB, and C5a peptidase, but neither streptococcal inhibitor of complement nor serum opacity factor.
The Majority of M4 GAS Isolates Are SpeB-negative covRS Mutants-Mutations within the covRS two-component regulatory system have been implicated in the initiation of GAS invasive disease (5,21). To investigate whether the M4 isolates in this study underwent selection for covRS mutation in the human host, we first screened the M4 panel for loss of SpeB protease activity. A significant proportion of M4 isolates, 9 of 17 (53%), were negative for SpeB activity (Fig. 3A), suggesting that some may harbor covRS mutations. Western blot analysis of stationary phase culture supernatants confirmed that isolates lacking SpeB activity did not secrete an active 28-kDa SpeB protease into the extracellular milieu (Fig. 3B). Sequence analysis of SpeB-negative M4 isolates confirmed that 8 of 9 (89%) were covRS mutants (Fig. 3C), with 3 isolates (SP436, SP449, and SP450) harboring the same covS deletion mutation at nucleotide (nt) 77, resulting in a truncated CovS protein (Table 2). SP438 was the only covR mutant, containing a C-to-T substitution mutation at nt 575 of the covR gene. SP451 contained 2 mutations in ropB, also known as rgg, a transcriptional regulator associated with the loss of SpeB expression in some invasive disease isolates (Table 2) (50). Taken together, these data confirm that several M4 GAS isolates associated with human invasive disease have either covRS or ropB mutations eliminating SpeB protease activity (5).
M4 HylA Specifically Degrades HA-Some Gram-positive bacteria, including Streptococcus pneumoniae (pneumococcus), Streptococcus suis, and Staphylococcus aureus, secrete an active HA-degrading HylA enzyme. Yet, in most clinically relevant GAS serotypes, such as M1, this enzyme is inactivated by a single nucleotide substitution resulting in an amino acid change from Asp to Val at position 199 of the lyase (51). The only reported GAS serotypes with a lyase possessing Asp-199 are M4 and M22 (51). To evaluate the enzymatic activity of HylA from M4 and M1 GAS, recombinant His6-tagged HylA protein from each serotype was expressed in Escherichia coli and purified by TALON affinity chromatography. Recombinant M4 HylA was enzymatically active and degraded HA in a substrate concentration-dependent manner (Fig. 4A). The kinetic parameters Km and Vmax for M4 HylA were 0.440 mg/ml and 0.091, respectively, as estimated from the Lineweaver-Burk double-reciprocal plot (Fig. 4B). In contrast, recombinant M1 HylA was enzymatically inactive and unable to digest HA (Fig. 4C). M4 HylA was highly specific for HA and did not degrade other glycosaminoglycans, including heparan sulfate (Fig. 4D), dermatan sulfate (Fig. 4E), and chondroitin sulfate (Fig. 4F).
The hylA Gene Is Ancestral and hasABC Was Recently Acquired by Some GAS Serotypes-HylA is well conserved in closely related genomes, including Streptococcus agalactiae, S. pneumoniae, S. suis, and S. aureus, suggesting that hylA is unlikely to have been independently acquired by these genomes but may rather be ancestral among streptococcal species (Fig. 5A; supplemental Table S3). We hypothesize that M1 GAS and other encapsulated serotypes acquired hasABC more recently than hylA, resulting in concurrent HA synthesis and degradation. Preservation of capsule bestows upon GAS resistance to phagocytosis and enhanced survival in vivo, which may have provided selection pressure for inactivating mutations in hylA. Although current data do not exclude that hylA might be horizontally acquired, the high degree of sequence conservation of HylA proteins among streptococci and other bacterial species (Fig. 5B; supplemental Table S3) suggests that hylA acquisition may have been ancestral to the branching of streptococci. It is possible that hylA is not metabolically essential and that it might be detrimental to certain bacterial products, because a few species have lost this gene (e.g. Streptococcus mutans, Streptococcus uberis, and Streptococcus thermophilus) (supplemental Table S3).
High Levels of Capsule Can Be Induced in M4 GAS Isolates Lacking hylA-To assess whether an active HylA would have the capacity to digest the capsule of the bacterium, we used precise allelic exchange mutagenesis to delete the hylA gene in M1 GAS strain 5448 (encoding an inactive HylA). Complementation of M1 ΔhylA with a plasmid expressing active HylA from M4 GAS (pHylA), but not the inactive HylA from M1 GAS (pHylA*), completely abolished capsule expression (Fig. 6A). Conversely, to determine whether M4 GAS is capable of synthesizing capsule in the absence of HylA, we constructed a hylA allelic exchange mutant in M4 GAS strain 4063-05, a human blood isolate. Transformation of M4 ΔhylA with pHasABC, a plasmid expressing the hasABC operon from M1 GAS, resulted in capsule expression (Fig. 6B). However, the amount of capsule detected for WT M4 GAS transformed with pHasABC (M4 pHasABC) was significantly less compared with M4 ΔhylA pHasABC (Fig. 6B). As a corollary, these findings suggest that HylA inactivation prevents capsule degradation in GAS serotypes containing the hasABC operon.
Capsule Expression in M4 GAS Does Not Enhance Whole Blood Survival, Reduces C4BP Binding, and Has No Effect on Fibrinogen Binding-Encapsulated M4 GAS (M4 pHasABC) did not display enhanced survival in whole human blood ex vivo compared with the nonencapsulated WT M4 strain (Fig. 6C). In contrast, whole blood survival for WT M1 GAS was superior to that of the acapsular M1 ΔhasA mutant (Fig. 6C), consistent with previous reports (41). Several human pathogens, including S. aureus (52), S. pneumoniae (53), S. agalactiae (group B Streptococcus) (54), Neisseria gonorrhoeae (55), and certain GAS serotypes, including M4 (56), bind human complement regulatory protein C4BP to prevent complement deposition and activation on the bacterial cell surface (57). GAS C4BP binding can be mediated by certain M proteins, including M4 protein (Fig. 6D and Ref. 56), but not M1 protein (Fig. 6D and Ref. 58). Next, we assessed whether capsule expression in M4 GAS affects the binding of purified human C4BP to the bacterial surface. In comparison to nonencapsulated WT M4 GAS, ectopic capsule expression in M4 pHasABC significantly reduced C4BP binding (Fig. 6E). M4 ΔhylA bound less C4BP than M4 WT (Fig. 6E), suggesting a role for HylA in M4 GAS C4BP binding. Capsule synthesis in M4 ΔhylA pHasABC exhibited a trend toward reduced C4BP binding compared with M4 ΔhylA; however, this difference did not reach statistical significance (Fig. 6E).
Human Fg is a plasma glycoprotein involved in the blood coagulation cascade and wound healing processes (59). Fg binding by GAS enhances resistance to phagocytosis by preventing complement C3 convertase deposition on the bacterial surface (60,61), and forms a proinflammatory supramolecular network with M protein that activates neutrophils and contributes to the pathophysiology of streptococcal toxic shock syndrome (62). Capsule deficiency in M4 GAS may enhance Fg binding by fully exposing Fg adhesins on the bacterial surface. To test this hypothesis, the binding of Alexa Fluor 488-labeled human Fg to whole bacteria was assessed by flow cytometry. Nonencapsulated M4 WT, M4 ΔhylA, and M1 ΔhasA bound equivalent quantities of Fg (Fig. 6F). Capsule biosynthesis in M4 ΔhylA, but neither M4 nor M1 WT strains, enhanced Fg binding (Fig. 6F).
(Figure 5 legend: A, maximum likelihood phylogenetic tree of HylA proteins in different streptococcal species. Approximate likelihood ratios are shown for branch support. B, section of a multiple sequence alignment (using ClustalW) of the HylA protein in sequenced GAS strains, with S. pneumoniae used as an outgroup homolog, showing an Asp to Val substitution that reportedly abolishes the hyaluronidase activity of HylA (51). Only M4 and M22 GAS serotypes are known to possess an active HylA enzyme (51).)

Capsule Enhances M4 GAS Plasmin Activity and Neutrophil Survival, but Has No Effect on in Vivo Virulence-The accumulation of plasmin activity on the cell surface is correlated with invasive disease propensity, enabling GAS to degrade host tissue barriers and spread systemically from the site of localized infection (5). M4 GAS is frequently associated with severe invasive human infections (33,63), so we assessed the capacity of M4 GAS to acquire plasmin activity. M4 WT and ΔhylA accumulated significantly less plasmin than WT M1 GAS (Fig. 7A), the serotype most often associated with severe invasive GAS infections (4). Capsule expression in WT M4 and M4 ΔhylA improved plasmin activity (Fig. 7A) and bacterial survival following a 15-min exposure to freshly isolated human neutrophils ex vivo (Fig. 7B). However, capsule expression did not enhance the virulence of WT M4 or M4 ΔhylA in a mouse model of systemic infection (Fig. 7C). The HylA-deficient mutant M4 ΔhylA did not display a significant reduction in virulence compared with M4 WT (Fig. 7C). Together, these data suggest that capsule expression may not provide a survival advantage for M4 GAS.
DISCUSSION
After more than a century of research, it is generally accepted that the HA capsule is a major virulence factor, endowing GAS with a protective physical barrier, molecular mimicry, resistance to opsonophagocytosis, and the ability to interact with epithelial cells (6,7). HA capsule is required for colonization of the upper respiratory tract and production of invasive infections in animal models (10-13), and contributes to human pharyngeal and invasive infections (13,14,19). In this investigation, we report that nonencapsulated serotype M4 GAS was a frequent etiologic agent of severe invasive diseases in children.
Molecular genetic interrogation of a panel of 17 invasive disease isolates identified 3 distinct PFGE patterns and 2 MLSTs. The majority of isolates were SpeB-negative covRS mutants, a distinguishing feature of hypervirulent GAS. All M4 isolates lacked the hasABC capsule biosynthesis operon and did not produce detectable HA capsule. Induction of capsule expression in M4 GAS abrogated C4BP binding, an important immune evasion mechanism to subvert complement attack (64), and failed to enhance survival in human blood and virulence in a mouse model of systemic infection. These data demonstrate that the HA capsule is not essential for GAS to cause life-threatening invasive infections in humans.
M4 and M22 GAS serotypes secrete active HylA, an enzyme that degrades the HA present in the GAS capsule and mammalian connective tissues. Other serotypes contain a single nucleotide mutation in hylA, resulting in an Asp to Val substitution at amino acid position 199 in the putative substrate-binding site that completely abolishes HylA enzymatic activity (51). In this study, we demonstrate that capsule production is abolished in M1 strain 5448 expressing the hylA gene from M4 GAS. Significantly, transformation of M4 ΔhylA with a plasmid expressing the hasABC operon from M1 GAS induced capsule expression. These findings demonstrate that M4 GAS has the capacity to synthesize capsule, and that its capsule is stable in the absence of a functional HylA enzyme. However, capsule expression in M4 GAS reduced C4BP binding and enhanced neither whole blood survival nor virulence in vivo, suggesting that encapsulation may not provide a survival advantage for this serotype. Mouse and other vertebrate models of GAS infection have significant limitations and drawbacks because GAS is a human-adapted pathogen. Therefore, we cannot exclude the possibility that encapsulated M4 GAS would be more virulent in the human host. The absence of capsule in HylA-expressing serotype M4 and M22 GAS strains (33,51) suggests a competitive co-evolution between HylA and capsule; however, hyaluronidase expression by encapsulated GAS was reported more than 50 years ago (65-67). Furthermore, some strains of the closely related group C Streptococcus, a bacterial pathogen capable of causing human disease (although less frequently than GAS), naturally co-express capsule and a functional hyaluronidase (65,67).

(Figure 6 legend: A, capsule expression levels of WT clinical M1 GAS isolate 5448 and the isogenic ΔhylA mutant. The M1 ΔhylA mutant was complemented with a plasmid expressing the inactive hyaluronidase (HylA) from M1 GAS (pHylA*), or the active HylA from M4 GAS (pHylA). B, capsule expression of WT clinical M4 GAS isolate 4063-05 and the isogenic ΔhylA mutant. M4 WT and ΔhylA were transformed with a plasmid expressing the hasABC capsule synthesis operon (pHasABC) or empty vector (pDCerm). C, whole blood survival of nonencapsulated WT M4, encapsulated M4 (M4 pHasABC), the nonencapsulated hylA mutant (M4 ΔhylA), the encapsulated hylA mutant (M4 ΔhylA pHasABC), encapsulated M1 GAS, and the M1 GAS acapsular control (M1 ΔhasA) following a 2-h incubation in whole human blood ex vivo. D, association of His6-tagged C4BPα1-2 with M4 protein in co-precipitation (pull-down) assays. C4BPα1-2 was mixed with M protein in binding buffer for 30 min at 37°C. Ni2+-nitrilotriacetic acid-agarose beads were added and incubated for 30 min at 37°C. The beads were washed with binding buffer to remove unbound protein. Bound protein was eluted by boiling in non-reducing sample buffer. Fractions corresponding to unbound and bound protein were resolved by non-reducing SDS-PAGE and visualized with Coomassie stain. E, C4BP binding of nonencapsulated M4 GAS (WT and ΔhylA), encapsulated M4 GAS (M4 pHasABC and ΔhylA pHasABC), and encapsulated WT M1 GAS. F, fibrinogen binding of nonencapsulated M4 GAS (WT and ΔhylA), encapsulated M4 GAS (M4 pHasABC and M4 ΔhylA pHasABC), encapsulated WT M1 GAS, and nonencapsulated M1 GAS (M1 ΔhasA). All values denote arithmetic mean ± S.E. Data were pooled and normalized from 2 independent experiments, each performed in triplicate. *, p < 0.05; **, p < 0.01; ***, p < 0.001; ****, p < 0.0001; ns, not significantly different.)
Highly virulent nonencapsulated strains have been reported for several human bacterial pathogens, including S. agalactiae (68), Haemophilus influenzae (69), and Neisseria meningitidis (70). In the majority of GAS serotypes containing intact covRS loci, encapsulation provides significant advantages over nonencapsulation, such as molecular mimicry, resistance to phagocytosis, and enhanced adherence to host epithelial cells. The reason for the unsuccessful acquisition of hasABC, or the loss thereof, remains unclear; however, M4 GAS may possess additional antiphagocytic factors and adhesins to thwart the host immune response and promote the disease process. Understanding the underlying molecular pathogenesis of nonencapsulated GAS invasive disease may augment the development of a new generation of therapeutics and provide better health outcomes in the fight against this globally important human pathogen. | 2015-03-06T19:42:58.000Z | 2014-09-29T00:00:00.000 | {
"year": 2014,
"sha1": "5044d3ddaba17d816b633a5af92861089ab2923f",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc4231703?pdf=render",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "cf7cbbe0c86de7e80a717c15dbdbf0ba9d2542a2",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
162043 | pes2o/s2orc | v3-fos-license | Toxicity-related antiretroviral drug treatment modifications in individuals starting therapy: A cohort analysis of time patterns, sex, and other risk factors
Background Modifications to combination antiretroviral drug therapy (CART) regimens can occur for a number of reasons, including adverse drug effects. We investigated the frequency of and reasons for antiretroviral drug modifications (ADM) during the first 3 years after initiation of CART, in a closed cohort of CART-naïve adult patients who started treatment in the period 1998–2007 in Croatia. Material/Methods We calculated differential toxicity rates by the Poisson method. In multivariable analysis, we used a discrete-time regression model for repeated events for the outcome of modification due to drug toxicity. Results Of 321 patients who started CART, median age was 40 years, 19% were women, baseline CD4 was <200 cells/mm3 in 71%, and viral load was ≥100 000 copies/mL in 69%. Overall, 220 (68.5%) patients had an ADM; 124 (56%) of these had ≥1 ADM for toxicity reasons. Only 12.7% of individuals starting CART in the period 1998–2002 and 39.4% in the period 2003–2007 remained on the same regimen after 3 years. The following toxicities caused ADM most often: lipoatrophy (22%), gastrointestinal symptoms (20%), and neuropathy (18%). Only 5% of drug changes were due to virologic failure. Female sex (hazard ratio [HR], 2.42; 95% confidence interval [CI], 1.39–4.24) and older age (HR, 1.42 per 10 years) were associated with toxicity-related ADM in the first 3 months of a particular CART regimen, but after 3 months of CART they were not. Conclusions Less toxic and better-tolerated HIV treatment options should be available and used more frequently in Croatia.
Croatia is a southeastern European country (population 4.3 million) with a centralized system of care for HIV since the beginning of the epidemic in 1986. This allows us to analyze ADM on a country level. Croatia has an individualized approach to prescribing CART; however, the availability of drugs and diagnostics is limited, and drugs have become unavailable at times. New drugs are also introduced slowly; for example, there is currently no single-tablet antiretroviral drug combination available, and the combination of fixed-dose tenofovir and emtricitabine only became available in June 2010. The majority of southeastern European countries also have an individualized approach to CART with limited options, and currently there are no data concerning either the number of antiretroviral drug modifications or the reasons for these modifications.
The aim of this study was to investigate the frequency of and reasons for ADM, as well as how sex, age, and other risk factors affect ADM during the first 3 years after initiation of CART. We hypothesized that factors associated with drug toxicities are different in the early follow-up period (first 3 months of CART) compared to the period after 3 months of follow-up on a particular CART regimen. We examined a closed cohort of antiretroviral-naïve HIV-infected individuals who started CART in Croatia in the period 1998-2007 and analyzed all ADM occurring in the first 3 years of CART.
Material and Methods
Setting

Although a recent increase in cases of HIV infection has been observed among men who have sex with men (MSM) [19-21], Croatia is still considered a country with a low-level epidemic [19,20,22]. Health insurance is universal in Croatia, and antiretroviral drug therapy became available free of charge in April 1998. An electronic database has been used at the University Hospital for Infectious Diseases (UHID) in Zagreb since 1997.
This database includes all HIV-infected patients under care in Croatia; data on their basic sociodemographic characteristics, clinical course, and antiretroviral therapy regimens, as well as laboratory findings on CD4 cell count and viral load measurements, are available.
Study population
Antiretroviral-naïve patients who were not under care outside Croatia, who started CART between January 1, 1998, and December 31, 2007, and who had at least 1 month of follow-up were eligible for the study. We included patients older than 18 years and excluded pregnant women. We also excluded patients in whom CART was initiated during primary infection, because treatment in primary infection at the time of the study was given mainly on a theoretical basis and only for a limited time. The study was approved by the Ethics Committee of UHID.
Variables
We defined CART as an antiretroviral drug combination that was likely to suppress HIV-1 RNA to undetectable levels. These initial combinations included 2 nucleoside reverse transcriptase inhibitors (NRTI) in combination with 1 non-nucleoside reverse transcriptase inhibitor (NNRTI) or a protease inhibitor (PI), or 3 NRTIs. We also included patients who started CART with a PI plus NNRTI combination or with 2 PIs, with NRTIs. The type of CART in our analysis was categorized into 2NRTI plus 1PI, 2NRTI plus 1NNRTI, and other combinations. The NRTI backbone was categorized into zidovudine plus lamivudine (ZDV/3TC), stavudine plus lamivudine (D4T/3TC), abacavir plus lamivudine (ABC/3TC), and other. Treatment modifications included a drug switch or an interruption. A switch was defined as a change of at least 1 drug where the time between cessation of one drug combination and the initiation of another was ≤1 month; an interruption was defined as all drugs being stopped for >1 month. A switch from individual drugs such as ABC and 3TC to the single-pill co-formulation of ABC-3TC was not considered a treatment modification.
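The switch-versus-interruption rule is a simple gap threshold on regimen dates. Below is a minimal sketch, under the assumption that "1 month" is approximated as 31 days and that the restart date is either known or absent:

```python
from datetime import date, timedelta
from typing import Optional

GAP_LIMIT = timedelta(days=31)  # "1 month" threshold, approximated as 31 days

def classify_modification(stop_date: date,
                          restart_date: Optional[date]) -> str:
    """Classify a modification: a switch when a new combination starts within
    1 month of stopping the old one; an interruption when all drugs are
    stopped for more than 1 month (or never restarted)."""
    if restart_date is None or restart_date - stop_date > GAP_LIMIT:
        return "interruption"
    return "switch"

# Example: old regimen stopped June 1, new regimen started June 20 -> switch.
print(classify_modification(date(2005, 6, 1), date(2005, 6, 20)))
```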
We reviewed all records of patients with treatment modifications and classified causes of treatment modification as toxic effects or intolerance, physician's choice, patient's choice, treatment failure, availability of drugs, and other reasons. Toxic effects or intolerance were categorized as gastrointestinal, hepatic, hypersensitivity, CNS, neuropathy, lipoatrophy, lipohypertrophy, and other. Lipoatrophy and lipohypertrophy were assessed subjectively if noticed by the patient and confirmed by the physician. Virologic suppression was defined as achieving a viral load of <400 copies/mL after 12, 24, and 36 months of CART.
Statistical methods
We described the baseline characteristics of our patients by the median and interquartile range for continuous variables and as frequencies for categorical variables. The baseline data on patients are presented according to the type of ADM: ADM for toxicity reasons, ADM for other reasons, and no ADM. We compared data on basic sociodemographic characteristics (age, sex, distance from HIV center, urban or rural residence, and HIV transmission risks), HIV disease factors (CD4 cell count, HIV-1 RNA viral load, and prior or concomitant AIDS), type of CART, calendar period of CART initiation, and coinfection with hepatitis B or C virus, using the chi-square or Fisher's exact test for categorical variables and the Kruskal-Wallis test for continuous variables.
We calculated the follow-up time in several ways. The follow-up time ended 3 years after CART initiation for patients who were on CART at that date. If a patient who stopped all medications did not restart CART before 3 years after CART initiation, the period of follow-up ended with the date of interruption. Patients who died or were lost to follow-up were followed from the date of starting CART until the date of death or the date of becoming lost to follow-up.
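These censoring rules can be expressed as a single function; the sketch below assumes a 3-year window of 3 × 365 days and that any interruption date passed in is one that was never followed by a restart within the window:

```python
from datetime import date, timedelta
from typing import Optional

THREE_YEARS = timedelta(days=3 * 365)

def follow_up_end(cart_start: date,
                  permanent_interruption: Optional[date] = None,
                  death_or_ltfu: Optional[date] = None) -> date:
    """Follow-up ends 3 years after CART initiation, at a permanent
    interruption (no restart before the 3-year mark), or at death or
    loss to follow-up, whichever applies first."""
    end = cart_start + THREE_YEARS
    if death_or_ltfu is not None and death_or_ltfu < end:
        return death_or_ltfu
    if permanent_interruption is not None and permanent_interruption < end:
        return permanent_interruption
    return end

# Example: CART started January 2003, permanently interrupted June 2004.
print(follow_up_end(date(2003, 1, 15), permanent_interruption=date(2004, 6, 1)))
```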
We used Poisson analysis to compute the rate of toxicity, and rate ratios and associated 95% confidence intervals (CI) for different individual drugs, and compared those according to sex. The total number of days on each individual antiretroviral drug was used as a denominator.
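A minimal sketch of the rate and rate-ratio calculation with drug-days as the denominator, using the standard large-sample Poisson approximation for the 95% CI on the log scale; the event counts and person-time below are invented for illustration only.

```python
import math

def rate_ratio_ci(events_a: int, days_a: float,
                  events_b: int, days_b: float, z: float = 1.96):
    """Toxicity rate ratio between two groups (e.g., women vs. men on one
    drug), with a Wald 95% CI computed on the log scale."""
    rate_a = events_a / days_a
    rate_b = events_b / days_b
    rr = rate_a / rate_b
    se_log_rr = math.sqrt(1 / events_a + 1 / events_b)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, (lo, hi)

# Illustrative numbers: 12 toxicity events over 8,000 drug-days in women
# vs. 20 events over 30,000 drug-days in men.
print(rate_ratio_ci(12, 8_000, 20, 30_000))
```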
We compared treatment success in patients who had no treatment modification to those who had only a switch-type or an interruption-type drug modification. It was assessed by the frequency of viral loads less than 400 copies/mL and the median absolute CD4 cell count at 12, 24, and 36 months, and by the median increase in the CD4 cell count between baseline and 12, 24, and 36 months of CART. The nearest CD4 counts and VL measurements within 3 months of 12, 24, and 36 months after starting CART were identified and used in this analysis. We conducted a generalized estimating equation model analysis to compare treatment success in the 3 above-mentioned groups.
The main outcome in our study was a drug treatment modification because of toxicity or intolerance. The outcome was initially divided into 3 categories (no drug modification, modification due to toxicity or intolerance, and other modification). We used discrete-time regression analysis to model competing risks and repeated events with a month as the interval of time. Thus, when an event occurred, a new episode (sequence) began, and time was reset according to the outcome of the new drug combination. If the new drug combination was not modified further, time (in months) was reset to 1; otherwise, time was reset according to the number of months on treatment with the drug or drug combination that was subsequently changed. The expanded file for discrete-time analysis had 10 615 observations. We compared modifications due to drug toxicity to no treatment modifications in a binary model in which we censored the event of drug modification due to non-toxic reasons. Crude analysis was done including the sequence and time of event (in months) and 1 fixed or time-varying explanatory variable. Fixed explanatory variables were sex, baseline age, HIV transmission group, distance from HIV center (dichotomized at 160 km), place of residence (rural vs. urban, based on a population level of below or above 40 000 inhabitants), clinical AIDS before or concomitant with CART, calendar year of CART initiation (1998-2002 vs. 2003-2007), hepatitis C seropositivity, and hepatitis B antigen positivity. Time-varying covariates were type of CART regimen and CD4 cell counts. The CD4 cell count in the discrete-time model was matched with the months of measurements, and missing data were extrapolated by carrying the last observation forward. Covariates with a p < 0.25 in crude analysis were considered as candidates for inclusion in the multivariable model. Sex, age, and antiretroviral drug combinations were included in all models. We constructed separate models for different follow-up times (0-3 and 3-36 months) on a particular drug regimen. We estimated a complementary log-log model, which is a proportional hazards model allowing interpretation of coefficients in terms of hazard ratios (HR). To assess the proportionality assumption, we examined the interaction of time with each independent variable in all our models and, if significant (p < 0.05), the interaction was kept in the model. The dependency among repeated observations was accounted for by robust standard errors. The analysis was done using the statistical software package SAS version 9.3.1 (SAS Institute Inc, Cary, North Carolina, USA); the level of significance was set at 0.05.
Baseline characteristics
A total of 321 treatment-naïve patients who started CART in the period 1998 to 2007 were included in our study. The median age at baseline was 40 years, 19% were female, baseline CD4 was <200 cells/mm³ in 71%, and viral load was ≥100 000 copies/mL in 69% of patients. Baseline characteristics according to type of ADM are shown in Table 1. At 3 years after initiation of CART, 274 (85%) individuals were still taking CART, 19 (6%) were known to be alive and not taking CART, 24 (7%) were dead, 2 (1%) had moved, and 2 (1%) were considered lost to follow-up.
Drug modifications
A total of 387 ADM were observed. The reasons for ADM were toxicity (including intolerance) in 176 (45.5%), physician's decision in 117 (30.2%), patient's choice in 37 (9.6%), treatment failure in 19 (4.9%), availability in 36 (9.3%), and miscellaneous in 2 (0.5%). Three hundred sixteen (81.7%) of the ADM episodes were switches, and 71 (18.3%) were interruptions. The total follow-up time of persons on CART was 852.4 years, and the median follow-up time per patient was 3 (IQR, 2.9-3.0) years. During the 3-year follow-up, there were 50 instances of restarting an interrupted antiretroviral regimen. We calculated the rate of ADM for toxicity reasons per 100 patient-years for each individual drug. As expected, d4T was associated with the highest rate of toxicity-related ADM. There were also differences in toxicity-related drug modifications between males and females (Table 2). Of individual drugs, ZDV and NVP were more frequently switched or interrupted for toxicity reasons in women than in men (Figure 1).
Virologic and immunological outcomes
Virologic efficacy (defined as achieving an HIV-1 RNA load of less than 400 copies/mL) was highest among patients with no CART changes and lowest among patients with interruption of CART (Table 5). Immunological responses (defined as the increase in the CD4 cell count from baseline and the absolute CD4 cell count) significantly improved between 12 and 36 months in patients with no CART change and those who had only switches, but there was no improvement in patients interrupting CART. The increase in CD4 cell count was greater in patients with no CART change compared to patients with switch-type only drug modifications.
Discussion
We found that CART modifications were frequent in Croatia; more than two-thirds of patients changed CART during the first 3 years of therapy. In the EuroSIDA study, which included patients starting CART predominantly between 1999 and 2002, 70% of patients remained on their original regimen at 1 year after starting CART [3], whereas in our study only 50% (42% for the study period 1998-2002 and 54% in years 2003-2007) were on the initial CART regimen after 1 year of therapy. The 1-year probability of drug change in the Swiss cohort was 37% in CART-naïve patients starting therapy during 1995-1998 and between 43.8% and 48.8% in the period 2000-2005 [1]. Higher short-term CART modification rates have also been reported in vulnerable populations such as younger individuals, African-Americans, injection drug users, and patients lacking private health insurance [14]. In our study only 5% of drug changes were due to virologic failure; hence, we confirmed previous findings from developed countries that the major cause for drug modifications is toxicity and not virologic failure [1-6,8,23]. In resource-limited settings, rates of drug modifications are lower and more frequently driven by virologic failure [24,25].
Thirty-nine percent of our patients had toxicity-related ADM during the first 3 years of CART; 25% had a change during the first year of follow-up. This is higher than in individuals from the Swiss cohort starting therapy between 2005 and 2008, in which 15.8% of individuals modified their treatment because of drug intolerance/toxicity during the first year of CART [6], and is also higher than the 16% reported in the EuroSIDA cohort [3]. Data from the ICONA cohort from Italy, including patients starting therapy between 1997 and 1999, found treatment discontinuation caused by toxicity in 21% of individuals [4]. The follow-up study from the same cohort included patients from 1997 to 2007 and reported a probability of discontinuation because of intolerance/toxicity of 23.2% at 1 year of follow-up [5]. In resource-limited settings, ADM because of toxicity are performed less frequently. For example, the cumulative probability of changes at 2 years due to toxicity in first-line regimens in Switzerland was 23.8%, compared to 11.7% in the townships of Khayelitsha and Gugulethu in Cape Town, South Africa [25].
Several observational studies have found that females have a higher rate of toxicity-related CART modifications than males [3-6,14]. Our findings were similar, but we also showed that there is a time-dependent effect and that the average hazard rate of treatment changes due to toxicity after 3 months on a particular CART regimen is similar in males and females. Regimens that contained ZDV or NVP were associated with a higher risk of modification for toxicity reasons in women than in men, which is also in accordance with previous findings [26-29]. The effect of age on toxicity-related CART modifications has been examined in a number of studies, which have reported various findings, possibly because of the diverse populations analyzed. A single-center study from London in 2001 showed that older patients were less likely to modify CART [30]. A study in vulnerable populations from Birmingham, Alabama, USA also found that younger age was a risk factor for treatment discontinuation due to non-gastrointestinal toxicity [14]. Similar to our findings, in the Swiss cohort older age was associated with higher risks of toxicity-related ADM during the first year of CART [6]. There have also been studies that did not find that age was a factor for toxicity-driven CART modifications [5]. In our study, older patients had a higher risk of treatment changes due to toxicity in the first 3 months on a particular CART regimen; however, afterwards the average hazard ratio of treatment changes due to toxicity was not associated with age (Tables 3 and 4).
Results of our study in the crude analysis suggest that patients who started CART with an NNRTI-based regimen, compared to a PI-based regimen, had a higher risk of treatment modification within the first 3 months after CART initiation. This is in concordance with the adverse effects of EFV (central nervous system toxicity) and NVP (rash and hepatotoxicity), which typically occur more frequently at initiation of CART. After about 12 months of therapy, patients on a PI-based regimen tended to have a higher risk of treatment change because of toxicity (Tables 3 and 4). PI-based regimens in our study population mainly involved IND with or without ritonavir and LPV, and to a lesser extent nelfinavir. We were not able to find a difference between different nucleoside backbones during the first 3 months of therapy. However, afterwards, as expected, d4T use was associated with a higher toxicity rate compared to ABC and ZDV use.
The toxicity rates causing treatment modifications of individual antiretrovirals have not been widely reported. We found toxicity rates in men for d4T and LPV modifications (32 events per 100 years of follow-up and 4.7 events per 100 years of follow-up, respectively) similar to cohorts from Chelsea and Westminster in London (30.5 events per 100 years of follow-up and 4.7 events per 100 years of follow-up, respectively) [23]. The toxicity rate of drug modifications for other antiretrovirals (ABC, ddI, NVP, EFV) was somewhat higher in our study population. The largest difference was in the toxicity rate of drug modifications due to ZDV, which was much more frequently changed in men from London (rate 22.7 per 100 years of follow-up) compared to men from Croatia (5.1 per 100 years of follow-up). However, in the study from London, it could not be assessed whether patients switched ZDV because they actually had lipoatrophy or switched to prevent lipoatrophy. Also, patients in Croatia are usually not switched if mild anemia is present, which might not have been the case in London.
Since the majority of patients in our study had a low baseline CD4 cell count and had generally advanced HIV infection, it is not surprising that patients who interrupted CART had worse virologic and immunologic outcomes compared to patients with no treatment modifications or switch-type modifications. When patients with no treatment modification were compared to patients with only switch-type modifications, we observed slightly better outcomes in patients with no treatment changes at 36 months of follow-up. However, because of the relatively small number of patients under study, we should be cautious in making a firm conclusion. Many switches and simplification strategies are available and are generally considered safe in virologically-suppressed individuals [31].
Our study has limitations. For example, the reason for ADM may be multifactorial, but we selected the one considered predominant. We investigated the toxic effects of CART by the occurrence of treatment modification, and some patients chose to stay on treatment despite adverse effects, so the true incidence of toxicities of a particular drug was not estimated. Whether patients stop or change their CART may be influenced by the number and availability of drugs. Our study included only a few patients with a nucleoside backbone of TDF plus FTC, a backbone that is currently widely used in many countries. However, in southeastern European countries such as Croatia and Serbia, TDF and TDF plus FTC were introduced only after a long delay and are still not widely used. We used a repeated time-to-event analysis, which may provide more power and more efficient coefficients. However, setting the follow-up time is challenging because, out of the 3 antiretroviral drugs used, usually only 1 is changed. The strength of this single-center clinic-based study is that all patients in Croatia were included and that the collected data were complete, with almost no patients lost to follow-up.
"year": 2013,
"sha1": "115c8e6ac715cc8a6d4e471fbfed94306be7927e",
"oa_license": "CCBYNCND",
"oa_url": "https://europepmc.org/articles/pmc3692382?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "115c8e6ac715cc8a6d4e471fbfed94306be7927e",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256105263 | pes2o/s2orc | v3-fos-license | On efficient quantum block encoding of pseudo-differential operators
Block encoding lies at the core of many existing quantum algorithms. Meanwhile, efficient and explicit block encodings of dense operators are commonly acknowledged as a challenging problem. This paper presents a comprehensive study of the block encoding of a rich family of dense operators: the pseudo-differential operators (PDOs). First, a block encoding scheme for generic PDOs is developed. Then we propose a more efficient scheme for PDOs with a separable structure. Finally, we demonstrate an explicit and efficient block encoding algorithm for PDOs with a dimension-wise fully separable structure. Complexity analysis is provided for all block encoding algorithms presented. The application of theoretical results is illustrated with worked examples, including the representation of variable coefficient elliptic operators and the computation of the inverse of elliptic operators without invoking quantum linear system algorithms (QLSAs).
Introduction
Block encoding [25] is a widely used technique in quantum computing and a crucial component of many quantum algorithms with a potentially exponential advantage over classical algorithms, such as quantum phase estimation (QPE) [23,31], the HHL algorithm [21], quantum singular value transformation (QSVT) [18] and various quantum linear system solvers [1,13,24], to name a few. The idea of block encoding is to embed a linear operator A into a unitary operator U A with larger dimensions after appropriate scaling. The unitary U A is then converted into a quantum circuit, allowing a quantum computer to access U A for actual computations.
The potential advantage of quantum algorithms depends critically on efficient and practical quantum circuits for block-encoding of the operators involved, and the construction of such circuits can be non-trivial in general. Researchers have constructed block encoding schemes leveraging different structures of the operators studied. For example, a block encoding scheme is provided in [5,18] for sparse matrices, and a recipe is presented for hierarchical matrices in [30]. In this paper, we consider the problem of block encoding a large family of dense operators: the pseudo-differential operators (PDOs). PDO is a rich family of linear operators that includes many commonly used examples in scientific problems, typically given in the following form: (Af)(x) = ∫_{ℝ^d} a(x, ξ) f̂(ξ) e^{2πi x·ξ} dξ, (1) where a(x, ξ) ∈ C^∞(ℝ^d × ℝ^d) is called the symbol of A and f̂ is the Fourier transform of f. A major motivation for studying operators with the form (1) is that differential operators often enjoy a simple representation in the Fourier domain. For example, the elliptic operator A = I − ∇·(ω(x)∇) (2) with ω(x) > 0 can be represented in the form of (1) with the symbol a(x, ξ) = 1 + 4π² ω(x)|ξ|² − 2πi ∇ω(x)·ξ. (3) More generally, an m-th order linear partial differential operator P(x, D) = Σ_{|α|≤m} a_α(x) D^α with D = −(i/2π) ∇_x can be represented by the symbol p(x, ξ) = Σ_{|α|≤m} a_α(x) ξ^α, where α = (α_1, . . . , α_d) is the d-dimensional multi-index and |α| = Σ_{j=1}^d α_j. Another popular example is the translation-invariant operator. Let φ_ξ(x) = e^{2πix·ξ} be a function of x. If an operator A is translation-invariant, i.e., (Aφ_ξ)(x) = a(ξ) φ_ξ(x), then (Af)(x) = ∫_{ℝ^d} a(ξ) f̂(ξ) e^{2πi x·ξ} dξ, (4) in which case we say that the symbol a(ξ) is a multiplier. Apart from the examples mentioned above, the PDO family also contains other operators such as convolution operators, singular integral operators, etc. Moreover, a space of PDOs is often closed with respect to many elementary operations under certain conditions. For example, for the operator A in (4) with symbol a(ξ) ≠ 0, the inverse of A can be simply represented by (A^{−1}f)(x) = ∫_{ℝ^d} a(ξ)^{−1} f̂(ξ) e^{2πi x·ξ} dξ. In general, the operator defined by a C^∞ function a(x, ξ) as in (1) is called a pseudo-differential operator only if a(x, ξ) satisfies some additional requirements such as |∂_ξ^α ∂_x^β a(x, ξ)| ≤ C_{α,β} ⟨ξ⟩^{m−|α|} for all multi-indices α, β, where ⟨ξ⟩ := (1 + |ξ|²)^{1/2}, and the space of the corresponding PDOs is denoted by S^m. There are multiple monographs on PDOs that interested readers can refer to, such as [34,37]. The PDOs considered in this paper are equipped with a periodic boundary condition on the space domain Ω = [0, 1]^d. The frequency variable ξ thus takes values on the integer grid, and the operator A becomes (Af)(x) = Σ_{ξ∈ℤ^d} a(x, ξ) f̂(ξ) e^{2πi x·ξ}, (5) where f̂ is the coefficient of the Fourier series of f. In this paper, we derive block encoding schemes for the PDO (5) based on different additional structures of the symbol a(x, ξ). First, we present a block encoding scheme for generic symbols a(x, ξ) without additional structures. We then point out that the success probability of the quantum circuit can be significantly improved if the symbol a(x, ξ) can be expanded into the series a(x, ξ) = Σ_j α_j(x) β_j(ξ) (6) with only O(1) terms. Furthermore, the circuit can be constructed in a much more explicit way with the help of quantum signal processing (QSP) and quantum eigenvalue transformation (QET) if the symbol is a sum of fully separable terms, i.e., it can be expanded as a(x, ξ) = Σ_j α_{j1}(x_1) · · · α_{jd}(x_d) β_{j1}(ξ_1) · · · β_{jd}(ξ_d) (7) with O(1) terms, where x = (x_1, . . . , x_d) and ξ = (ξ_1, . . . , ξ_d). See Definition 2 and Section 5 for details.
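As a concrete illustration of the periodic form (5) in the constant-coefficient (multiplier) case, the following numpy sketch (ours, not from the paper) applies a 1-D periodic PDO with symbol a(ξ) = 1 + |ξ|², i.e., the operator I − (1/4π²) d²/dx², and checks the direct summation against FFT-based multiplication.

```python
import numpy as np

# Minimal sketch: apply (Af)(x) = sum_xi a(xi) fhat(xi) e^{2 pi i x xi}
# on a grid of P points, with the multiplier symbol a(xi) = 1 + |xi|^2.
P = 64
x = np.arange(P) / P                       # grid on [0, 1)
xi = np.fft.fftfreq(P, d=1.0 / P)          # integer frequencies, DFT ordering
f = np.exp(np.sin(2 * np.pi * x))          # a smooth periodic test function
fhat = np.fft.fft(f) / P                   # Fourier series coefficients
a = 1.0 + xi**2                            # multiplier symbol a(xi)
Af = np.sum(a * fhat * np.exp(2j * np.pi * np.outer(x, xi)), axis=1)
# cross-check against FFT-based multiplication in the frequency domain
assert np.allclose(Af, np.fft.ifft(a * fhat) * P)
```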
Complexity analysis is included for all block encoding schemes, and their applications are showcased with specific examples. The contributions of this paper can be summarized as follows: • We provide practical block encoding schemes for pseudo-differential operators, including algorithms applicable to generic PDOs (see Figure 3), efficient block encoding for separable PDOs (see Figure 6) and explicit circuits for fully separable PDOs (see Figure 7). Novel ideas of circuit design, such as the phase multiplication circuit (see Figure 4) and the prototype for diagonal multiplication (see Figure 8), are included in the block encoding schemes.
• We conduct comprehensive complexity analysis for the block encoding schemes proposed. The complexity analysis covers the success probability, the number of ancilla qubits needed, and the number of gates used. In addition to theorems applicable to general cases, we also demonstrate possibilities of improving the complexity results by leveraging particular structures of the problem (see Section 6.1 for example).
• We demonstrate the usage of our results with explicit examples. One can use our block-encoding scheme not only as an integrated part of established quantum algorithms but also as an option for conducting operations directly on certain operators. For the example shown in Section 6.2, we use the idea of symbol calculus to directly block-encode the inverse of an elliptic operator and the dependence of the complexity on P (the number of discretization points used for each dimension) is at least quadratically improved compared to previous results for block-encoding the inverse matrix (see Remark 3).
Contents
The paper is organized as follows. In Section 2, we specify the notation used in this paper and provide preliminary results needed in subsequent sections, such as the quantum Fourier transform (QFT), the linear combination of unitaries (LCU), quantum signal processing (QSP) and quantum eigenvalue transformation (QET). In Section 3, an algorithm is given for block encoding of generic symbols. For a separable symbol a(x, ξ) = α(x)β(ξ), a more efficient block encoding scheme is provided in Section 4. Then a more explicit and practically feasible block encoding is developed in Section 5 for fully separable symbols of the form displayed in (7). Finally, Section 6 presents the application of the proposed block encoding method to two types of widely used PDOs, including a variable coefficient second-order elliptic operator and the inverse of a constant coefficient elliptic operator. The paper ends with a conclusion and a discussion of future directions in Section 7.
Block encoding
Most of the previous work [2,9,25] assumes that we have access to a matrix by querying two oracles that encode the locations and values of the non-zero elements of the matrix in question. Among them, [18, Lemma 48] provides a general framework to explicitly construct the block encoding of sparse matrices if we are given these two oracles. Following this routine, [5] constructs the block encoding of banded circulant matrices, extended binary tree matrices, and quantum walk operators.
For general non-sparse matrices, it is clearly impossible to block-encode them in logarithmic time, and [4] proposes a near-optimal scheme for block encoding general unstructured matrices. Many methods are also proposed to implement the block encoding for full-rank dense matrices with certain structures, such as Toeplitz and Hankel systems [26], and linear group convolutions [7] based on quantum Fourier transforms. The authors of [30] introduce a new method for kernel matrices with a hierarchical structure, which can be applied to non-uniform grids where the Fourier transform cannot be used.
Quantum PDE solvers
Along with the development of quantum linear system solvers [9,13,18,21,24], many quantum PDE solvers have been proposed to take advantage of exponential acceleration. Quantum counterparts of the finite element method (FEM) [28] and the finite difference method (FDM) [6,12] emerged for solving Poisson's equation and the wave equation. In [10], adaptive finite difference and spectral methods are proposed to improve the dependence of the complexity on the error ϵ from O(poly(1/ϵ)) to O(polylog(1/ϵ)). It is worth noting that the process of block encoding the discretized differential operator is often not provided in these works, and constructing the block encoding for generic partial differential operators is highly non-trivial.
Numerical algorithms for PDOs
There are also various classical numerical algorithms that compute PDOs efficiently. For example, [15] exploits the following expansion of symbols: a(x, ξ) = Σ_j α_j(x) β_j(ξ). The paper presents efficient numerical approximations of β_j(ξ) with Chebyshev polynomials and hierarchical splines and further reduces the number of terms in the expansion by SVD or QR decomposition. However, a naive extension to high-dimensional PDOs leads to exponential overhead, as is the case for most classical methods. This is also one of the reasons why a quantum implementation of PDOs can be potentially useful.
Preliminaries and notations
Notations
We adopt the commonly used notation for binary numbers: for an integer power of two P = 2^p and any y ∈ {0, . . . , P − 1}, if y = y_0 + 2y_1 + · · · + 2^{p−1} y_{p−1} = (y_{p−1} y_{p−2} · · · y_0) in the binary system, the corresponding quantum state is |y⟩ ≡ |y_{p−1} . . . y_0⟩. This extends to an m-tuple x = (x_1, . . . , x_m) with x_j ∈ {0, . . . , P − 1}. The corresponding quantum state is given by |x_m⟩ · · · |x_1⟩, where |x_j⟩ = |x_{j,p−1} · · · x_{j,0}⟩ for each j. For a multivariate function g : {0, . . . , P − 1}^m → ℝ, we denote by D_g the diagonal multiplication operator on the Hilbert space of mp qubits: D_g |x_m⟩ · · · |x_1⟩ := g(x_1, . . . , x_m) |x_m⟩ · · · |x_1⟩. (8) The notation |v| for a d-dimensional vector v stands for the Euclidean norm |v| = (Σ_{j=1}^d v_j²)^{1/2}. We also use the standard single-qubit rotations R_x(θ) = e^{−iθX/2}, R_y(θ) = e^{−iθY/2}, and R_z(θ) = e^{−iθZ/2}. To simplify the discussion, we assume access to all single qubit rotations, the Hadamard gate, the CNOT gate, the 2-qubit SWAP gate, and the Toffoli gate when counting the number of elementary gates used. If one wants to use certain commonly used universal gate sets such as Hadamard and Toffoli, there will be some overhead linear in the number of gates involved and polylogarithmic in the precision ϵ, as bounded by the famous Solovay–Kitaev theorem [22]. There are also many established results on decomposing commonly seen quantum gates with a certain universal gate set, such as [14,32,36], to name a few.
For an n-qubit operator A, an (m + n)-qubit unitary U_A is called a (γ, m, ϵ)-block-encoding of A if ‖A − γ (⟨0^m| ⊗ I_n) U_A (|0^m⟩ ⊗ I_n)‖ ≤ ϵ, where I_n denotes the n-qubit identity operator. In the matrix form, a (γ, m, ϵ)-block-encoding is a 2^{m+n}-dimensional unitary matrix U_A = [[Ã/γ, ∗], [∗, ∗]], where ∗ can be any block matrices of the correct size and ‖A − Ã‖ ≤ ϵ. In addition, when A is a Hermitian matrix, it is possible to construct U_A such that it is also Hermitian, in which case it is called a (γ, m, ϵ)-Hermitian-block-encoding of A. The error ϵ is omitted in the notation of block encodings if ϵ = 0.
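The definition can be checked numerically: any matrix A with ‖A‖ ≤ γ admits a (γ, 1, 0)-block-encoding through the standard unitary dilation of the contraction A/γ. The numpy sketch below is an illustration of the definition, not one of the paper's circuit constructions.

```python
import numpy as np

# Our numerical illustration of the block-encoding definition: embed A/gamma
# as the top-left block of a larger unitary via the standard dilation of a
# contraction; psd_sqrt takes the square root of a Hermitian PSD matrix.
def psd_sqrt(M):
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
gamma = 2 * np.linalg.norm(A, 2)       # any gamma >= ||A|| works
B = A / gamma
I = np.eye(4)
U = np.block([[B, psd_sqrt(I - B @ B.conj().T)],
              [psd_sqrt(I - B.conj().T @ B), -B.conj().T]])
assert np.allclose(U @ U.conj().T, np.eye(8))   # U is unitary
assert np.allclose(gamma * U[:4, :4], A)        # gamma * top-left block = A
```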
For an n-qubit system, the quantum Fourier transform (QFT) is an implementation of U_FT : |j⟩ ↦ (1/√N) Σ_{k=0}^{N−1} e^{2πi jk/N} |k⟩, where N = 2^n, using a circuit U_FT with O(n²) elementary gates and no ancilla qubit. The elementary gates involved include 2-qubit swap gates and 2-qubit controlled rotation gates. We refer the readers to [11,31] for more details on QFT. If only an approximation of U_FT is needed, one can use the approximate QFT [29], which has gate complexity O(n log(n/ϵ)), where ϵ is the spectral norm error of the approximation.
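A quick numerical sanity check of the QFT convention above (our illustration): the matrix with entries e^{2πi jk/N}/√N is unitary and agrees with numpy's inverse FFT up to a √N scale.

```python
import numpy as np

# Build the n-qubit QFT matrix directly and compare with numpy's convention.
n = 3
N = 2**n
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
U_FT = np.exp(2j * np.pi * j * k / N) / np.sqrt(N)
assert np.allclose(U_FT.conj().T @ U_FT, np.eye(N))        # unitary
v = np.random.default_rng(1).standard_normal(N)
assert np.allclose(U_FT @ v, np.fft.ifft(v) * np.sqrt(N))  # ifft up to scale
```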
Linear combination of unitaries (LCU)
Given a few block-encoded matrices, a block encoding of a certain linear combination of them is often needed in practice. To this end, the linear combination of unitaries (LCU) technique has been developed ( [2,9,18]). For example, for two matrices A and B, a block encoding of A + B can be given by the circuit in Figure 1, where U A and U B are block encodings of A and B, respectively.
For general linear combinations, we recall the following result from [18].
Lemma 1 (linear combination of block-encoded matrices [18]). Let y ∈ ℂ^m with ‖y‖₁ ≤ δ, let (U_L, U_R) be a pair of b-qubit unitaries that prepares the coefficients y (a state preparation pair), and, for each 0 ≤ j ≤ m − 1, let U_j be a (1, a, ϵ)-block-encoding of the s-qubit operator A_j. Define the select oracle W = Σ_{j=0}^{m−1} |j⟩⟨j| ⊗ U_j + ((I_b − Σ_{j=0}^{m−1} |j⟩⟨j|) ⊗ I_{a+s}). Then (U_L† ⊗ I_{a+s}) W (U_R ⊗ I_{a+s}) is a (δ, a + b, (1 + δ)ϵ)-block-encoding of Σ_{j=0}^{m−1} y_j A_j, where I_{a+s} denotes the identity operator with size 2^{a+s} × 2^{a+s}.
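For the two-term case of Figure 1, the LCU identity can be verified directly in numpy; the sketch below is our illustration, with U_A and U_B taken to be unitaries themselves (i.e., (1, 0)-block-encodings), so the resulting top-left block is (A + B)/2.

```python
import numpy as np

# Sketch of the LCU circuit of Figure 1 for two unitaries A and B:
# (H (x) I) SELECT (H (x) I) block-encodes (A + B)/2 in its top-left block.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
A = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
B = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z
I2 = np.eye(2)
select = np.block([[A, np.zeros((2, 2))], [np.zeros((2, 2)), B]])
W = np.kron(H, I2) @ select @ np.kron(H, I2)
assert np.allclose(W[:2, :2], (A + B) / 2)      # (A+B)/2 in the |0> block
```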
Quantum eigenvalue transformation and quantum signal processing
Given a Hermitian block encoding of a Hermitian matrix A, one can construct a block encoding of f(A) for a certain function f using the quantum eigenvalue transformation (QET) technique [18,25]. Let f_e(x) = (f(x) + f(−x))/2 and f_o(x) = (f(x) − f(−x))/2 be the even and odd parts of f(x), respectively. The standard procedure consists of the following steps, where we assume that f(x) is properly scaled such that ‖f_e‖ < 1, ‖f_o‖ < 1, and ‖·‖ denotes the L^∞ norm on [−1, 1].
1. Approximate f_e and f_o with an even polynomial f̃_e of degree deg_{f_e}(ϵ) and an odd polynomial f̃_o of degree deg_{f_o}(ϵ), respectively.
2. Find two sequences of phase factors Φ^e and Φ^o such that the corresponding quantum signal processing (QSP) unitaries represent p_e and p_o,
where p_e and p_o are complex polynomials with degree deg_{f_e}(ϵ) and deg_{f_o}(ϵ), respectively, given by (10). Here, the superscripts e and o are omitted for simplicity. The phase factors are then used in the QET circuit shown in Figure 2(b) to construct block encodings U_{f_e(A)} and U_{f_o(A)}, where the controlled rotation gate CR_ϕ is described in Figure 2(a). There are several methods to find the phase factors in a stable and efficient way. For example, we refer to [8,16,17,20,38] for more details. We summarize the procedure given above in Lemma 2 below, where we assume that f is either even or odd for simplicity.
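A standard sanity check for any QSP implementation (our illustration, not from the paper): with all phase factors set to zero, the product of d signal unitaries W(x) = e^{i arccos(x) σ_x} has upper-left entry equal to the Chebyshev polynomial T_d(x).

```python
import numpy as np

# With zero phases, d applications of the QSP signal operator W(x) give
# the degree-d Chebyshev polynomial T_d(x) as the upper-left entry.
def W(x):
    s = np.sqrt(1 - x**2)
    return np.array([[x, 1j * s], [1j * s, x]])

d, x = 5, 0.3
prod = np.linalg.matrix_power(W(x), d)
assert np.isclose(prod[0, 0].real, np.cos(d * np.arccos(x)))  # T_5(0.3)
```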
Discretization of pseudo-differential operators
As mentioned in Section 1, the PDO considered in this paper is defined for periodic functions on Ω = [0, 1]^d: (Af)(x) = Σ_{ξ∈ℤ^d} a(x, ξ) f̂(ξ) e^{2πi x·ξ}. In most numerical treatments, the function f is given on a discrete grid X = {0, 1/P, . . . , (P − 1)/P}^d, where P = 2^p is the number of discrete points used for each dimension. Notice that here we slightly abuse the notation by reusing x for the integer index of the grid points. Since the space variable takes value on the Cartesian grid X, the frequency domain is discretized correspondingly on {−P/2, . . . , P/2 − 1}^d, which leads to the discretized PDO (Af)(x) = Σ_{ξ∈{−P/2,...,P/2−1}^d} a(x/P, ξ) f̂(ξ) e^{2πi x·ξ/P}, x ∈ Ξ, (11) where Ξ = {0, 1, . . . , P − 1}^d. We adopt an abuse of notation and denote the discretized PDO by A too. Though the frequency variable ξ is discretized on {−P/2, . . . , P/2 − 1}^d in (11), by the convention of the discrete Fourier transform (DFT) and the fast Fourier transform (FFT), the frequency (P/2, . . . , P − 1) is often identified with (−P/2, . . . , −1), respectively, since P is a period for the frequency variable after DFT. In other words, the discretized PDO can be written as (Af)(x) = Σ_{ξ∈Ξ} a(x/P, ξ − Σ_{j: ξ_j ≥ P/2} P e_j) f̂(ξ) e^{2πi x·ξ/P}, (12) where the shift in the symbol's frequency argument folds each component ξ_j ≥ P/2 back to ξ_j − P and e_j is the j-th standard basis vector in ℂ^d. As an example, when d = 1, we have ã(x, ξ) := a(x/P, ξ) for 0 ≤ ξ < P/2 and ã(x, ξ) := a(x/P, ξ − P) for P/2 ≤ ξ ≤ P − 1. To simplify the notation and avoid repetitive use of P, we further define ă(x, ξ) := a(x/P, ξ − Σ_{j: ξ_j ≥ P/2} P e_j), and the discretized PDO becomes (Af)(x) = Σ_{ξ∈Ξ} ă(x, ξ) f̂(ξ) e^{2πi x·ξ/P}. (13) It is clear that sup |ă| = sup |a|. In what follows, we also refer to ă as the symbol of the PDO to be computed.
Block encoding for generic symbols
This section is concerned with the block encoding of the PDO (11) (or rather (13)) with a generic symbol a(x, ξ), without assuming any additional structure. In order to compute the PDO in (13), a simple strategy is to first lift the state to the phase space Ξ × Ξ. Then the multiplication of ă(x, ξ) in (13) can be performed by diagonal matrix block encodings. Combining the QFT circuit and the block encoding of diagonal matrices, one can construct the entire circuit as illustrated in Figure 3, where (1/|f|) Σ_x f(x/P) |x⟩ is the normalized input data, U_FT is the QFT circuit, U_ph and U_ă are the circuits that perform the multiplication of e^{2πix·ξ/P} and ă(x, ξ) in (13), respectively, and the desired output is obtained with normalizing factor 1/|Af| upon measuring |0^{pd+b}⟩ on the bottom pd + b qubits. Now we explain the circuit displayed in Figure 3 in more detail. First, we assume that the information of the function f is prepared as the normalized state (1/|f|) Σ_x f(x/P) |x⟩. For functions f with certain properties such as integrability, this state can be constructed efficiently (see [19] for more details). For the rest of the paper, we assume the accessibility of the state (1/|f|) Σ_x f(x/P) |x⟩ as an input.
Step 1. Apply QFT and lift the input state to the phase space. We first obtain the representation of f in the frequency domain by QFT. After applying the (inverse) QFT to the state (1/|f|) Σ_x f(x/P) |x⟩, we obtain (1/|f|) Σ_ξ f̂(ξ) |ξ⟩. Then the state (1/|f|) Σ_{x,ξ} f̂(ξ) |x⟩ |ξ⟩ is obtained, up to an overall normalization, by applying the Hadamard gates H^{⊗pd} to the x-register and putting both registers together.
Step 2. Multiply the phase e^{2πix·ξ/P} with U_ph. A naive way of multiplying the phase e^{2πix·ξ/P} is to use Proposition 19, which involves many ancilla qubits and reduces the success probability. Here, we develop an efficient implementation for the multiplication without involving any extra error or ancilla qubits in the following lemma.
Lemma 3. The circuit shown in Figure 4 implements the unitary operator U_ph : |x⟩ |ξ⟩ ↦ e^{2πixξ/P} |x⟩ |ξ⟩ (15) with O(p²) gate complexity, exactly and without ancilla qubits.
Proof. The idea of the construction is similar to the implementation of QFT, which is based on bit-wise controlled rotation. We first write the binary representation of integers x = Σ_{j=0}^{p−1} 2^j x_j and ξ = Σ_{k=0}^{p−1} 2^k ξ_k and do the following calculation: e^{2πixξ/P} = Π_{j,k=0}^{p−1} e^{2πi x_j ξ_k 2^{j+k−p}} = Π_{j+k<p} e^{2πi x_j ξ_k 2^{j+k−p}}, (16) where the second equality is true because e^{2πi x_j ξ_k 2^{j+k−p}} = 1 when j + k ≥ p. The circuit corresponding to the unitary in (16) can be implemented by a series of controlled rotations, one for each pair (j, k) with j + k < p. Finally, the circuit shown in Figure 4 is obtained after arranging the controlled rotations in the corresponding places.
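The bit-wise factorization (16) is easy to verify numerically; the sketch below (our illustration) checks that dropping all factors with j + k ≥ p leaves the phase unchanged.

```python
import numpy as np

# Verify e^{2 pi i x xi / P} = prod over j+k<p of e^{2 pi i x_j xi_k 2^{j+k-p}};
# the omitted factors with j+k >= p are integer powers of e^{2 pi i} = 1.
p = 4
P = 2**p
for x in range(P):
    for xi in range(P):
        xb = [(x >> j) & 1 for j in range(p)]
        xib = [(xi >> k) & 1 for k in range(p)]
        phase = np.prod([np.exp(2j * np.pi * xb[j] * xib[k] * 2**(j + k - p))
                         for j in range(p) for k in range(p) if j + k < p])
        assert np.isclose(phase, np.exp(2j * np.pi * x * xi / P))
```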
Step 3. Multiply the symbol ă(x, ξ). The next component in Figure 3 is the diagonal multiplication U_ă, which is designed to approximate the map |x⟩ |ξ⟩ |0^b⟩ ↦ (1/C_a) ă(x, ξ) |x⟩ |ξ⟩ |0^b⟩ + ⊥, (17) where C_a > 0 is a constant that depends on ă(x, ξ), b is the number of ancilla qubits used for U_ă, and ⊥ is an unnormalized state that is orthogonal to any state of the form |x⟩ |ξ⟩ |0^b⟩.
As mentioned earlier, the idea is to treat (x, ξ) as a 2d-dimensional variable and utilize the result from arithmetic circuit construction. Leveraging the reversible computational model and the uncomputation technique, any classical arithmetic operation can be implemented by a quantum circuit efficiently. More specifically, one can construct a corresponding quantum circuit using O(polylog(1/ϵ)) ancilla qubits and O(polylog(1/ϵ)) gates, where ϵ is the precision one wants to achieve (cf. [31,32]). We state a general result for an efficient block encoding of diagonal matrices D_g, as defined in (8), which is summarized in Proposition 4. A similar idea has been used in [19] to create a given state, in [21] to construct the reciprocals of the eigenvalues and in [35] to implement the diagonal preconditioner.
Proposition 4. Let g : {0, . . . , P − 1}^m → ℝ and let C ≥ sup |g| be a constant. Then one can construct a (C, O(poly(mp) + polylog(1/ϵ)), ϵ)-block-encoding U_g of the diagonal matrix D_g with O(poly(mp) + polylog(1/ϵ)) elementary gates.
Proof. The circuit U_g is constructed as follows. Let t = ⌈log₂(Cπ/ϵ)⌉, and let θ(x_1, . . . , x_m) be a map that gives |θ_sgn θ_{t−1} θ_{t−2} . . . θ_0⟩, where (.θ_{t−1} θ_{t−2} . . . θ_0) is the closest t-bit fixed-point representation of (1/π) arcsin(|g(x_1, . . . , x_m)|/C), and θ_sgn is assigned the value 0 if g ≥ 0 and the value 1 otherwise. For an arbitrary basis state |x_m⟩ · · · |x_1⟩, we consider the system with t + 2 ancilla qubits |0⟩ |x_m⟩ · · · |x_1⟩ |0^{t+1}⟩. Here |x_j⟩ = |x_{j,p−1} · · · x_{j,0}⟩ for each j. Using the reversible computational model and uncomputation ([31,32]), the classical circuit |x_m⟩ · · · |x_1⟩ |0^{t+1}⟩ ↦ |x_m⟩ · · · |x_1⟩ |θ(x_1, . . . , x_m)⟩ can be constructed with O(poly(t) + poly(mp)) gates and O(poly(mp)) ancilla qubits. Then we apply the circuit in Figure 5 on the t + 2 ancilla qubits, which performs the rotation by the angle encoded in |θ(x_1, . . . , x_m)⟩, and uncompute the ancilla register. Since |sin(π θ(x_1, . . . , x_m)) − |g(x_1, . . . , x_m)|/C| < ϵ/C, the desired state (1/C) g(x_1, . . . , x_m) |x_m⟩ · · · |x_1⟩ is obtained with error at most ϵ/C upon measuring the ancilla qubits and getting |0^{t+2}⟩.
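The accuracy requirement in the proof can be checked numerically: rounding (1/π) arcsin(|g|/C) to t = ⌈log₂(Cπ/ϵ)⌉ bits perturbs the ancilla amplitude sin(πθ) by less than ϵ/C. The sketch below (our illustration) verifies this bound.

```python
import numpy as np

# The value g(x)/C is written onto an ancilla amplitude as sin(pi * theta),
# where theta is a t-bit fixed-point approximation of arcsin(|g(x)|/C)/pi.
C, eps = 2.0, 1e-3
t = int(np.ceil(np.log2(C * np.pi / eps)))
g = np.linspace(-C, C, 1001)                      # sample values of g(x)
theta = np.arcsin(np.abs(g) / C) / np.pi
theta_t = np.round(theta * 2**t) / 2**t           # t-bit fixed-point rounding
amp_err = np.abs(np.sin(np.pi * theta_t) - np.abs(g) / C)
assert amp_err.max() < eps / C                    # error bound of the proof
```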
Step 4. Sum over the frequency variable. Finally, after applying the Hadamard gate H^{⊗pd} to the ξ registers, we obtain a state of the form (1/(√(P^d) C_a |f|)) Σ_x (Af)(x) |x⟩ |0^{pd}⟩ |0^b⟩ + ⊥̃, (19) where ⊥̃ is an unnormalized state that is orthogonal to all states of the form |x⟩ |0^{pd+b}⟩ and (Af)(x) = Σ_{ξ∈Ξ} ă(x, ξ) f̂(ξ) e^{2πi x·ξ/P}, which can be seen from (18). Therefore, one obtains the desired state on the x registers upon measuring |0^{pd}⟩ for the ξ registers and |0^b⟩ for the ancilla qubits. Notice that there is an extra scaling factor 1/√(P^d) due to the application of Hadamard gates, so the success probability is O(2^{−pd}). (20) The complete circuit we use is exactly the one shown in Figure 3.
The block encoding scheme of the PDO (13) constructed in this section can be summarized in the following theorem.
Theorem 5. For a generic symbol a(x, ξ), the corresponding discretized PDO defined by (13) can be (2^{pd/2} C_a, O(poly(pd) + polylog(1/ϵ)), ϵ)-block-encoded using the circuit displayed in Figure 3 with gate complexity O(poly(pd) + polylog(1/ϵ)), where C_a ≥ sup |a| is a constant.
Challenge. Despite being applicable to generic symbols, one can observe from (20) that the success probability of the circuit in Figure 3 can be low when pd is large. In the following sections, we show that this challenge can be overcome when the symbol a(x, ξ) has additional structures.
Efficient block encoding for separable symbols
As explained in Section 3, the circuit designed as in Figure 3 suffers from exponentially small success probability, despite being simple and applicable to generic PDOs. In this section, we are concerned with symbols that have particular structures, and we construct an efficient block encoding of the corresponding PDOs with O(1) success probability.
Separable symbols
We first give the following definition for separable symbols.
Definition 1. A symbol a(x, ξ) is separable if a(x, ξ) = α(x)β(ξ).
As explained in Section 2.4, we identify the frequency (P/2, . . . , P − 1) with (−P/2, . . . , −1), respectively, since P is a period for the frequency variable after DFT. We also define ᾰ(x) := α(x/P) and β̆(ξ) := β(ξ − Σ_{j: ξ_j ≥ P/2} P e_j), (21) where e_j is the j-th standard basis vector in ℂ^d. With the help of the notations (21), the PDO (11) becomes (Af)(x) = ᾰ(x) Σ_{ξ∈Ξ} β̆(ξ) f̂(ξ) e^{2πi x·ξ/P}. (22) It is clear from the definition that ᾰ is P-periodic since α is 1-periodic, and we also have sup |ᾰ| = sup |α|, sup |β̆| = sup |β|. Figure 6: Circuit for an efficient block encoding of separable PDOs (22). Here b_1, b_2 are the numbers of ancilla qubits needed for U_β̆ and U_ᾰ, respectively, (1/|f|) Σ_x f(x/P) |x⟩ is the normalized input data, U_FT is the QFT circuit, U_β̆ and U_ᾰ are the block encodings of D_β̆ and D_ᾰ, respectively, and the desired output is obtained with normalizing factor 1/|Af| when getting |0^{b_1+b_2}⟩ for the b_1 + b_2 ancilla qubits. D_β̆ and D_ᾰ are diagonal matrices defined in (8).
For the PDO (22), we propose an efficient block encoding illustrated by the following circuit.
For the rest of this section, we explain the circuit in Figure 6 in more detail and show that it significantly improves the success probability compared with Figure 3. The circuit begins with a QFT step similar to that in Figure 3.
Step 2. Apply QFT and multiply the factor ᾰ(x). In order to multiply the factor ᾰ(x) in the symbol, we apply QFT and convert the state to the space domain, where we have used (24) in the last line of the corresponding calculation. By using Proposition 4 again with m = d, g = ᾰ and ϵ replaced by ϵ/(2C_β), the state |ϕ_1⟩ |0^{b_2}⟩ is mapped to |ϕ_2⟩ |0^{b_2}⟩ + ⊥_2, where ⊥_2 is an unnormalized state orthogonal to all states of the form |x⟩ |0^{b_2}⟩ and |ϕ_2⟩ satisfies ‖ |ϕ_2⟩ − (1/C_α) D_ᾰ U_FT^{⊗d} |ϕ_1⟩ ‖ < ϵ/(2 C_α C_β). In this step, O(poly(pd) + polylog(1/ϵ)) gates and b_2 = O(poly(pd) + polylog(1/ϵ)) ancilla qubits are used. The image of ⊥_1 is still orthogonal to all states of the form |ξ⟩ |0^{b_1}⟩ since the b_1 ancilla qubits used in the previous step are unchanged. Therefore, the final state is |ϕ_2⟩ |0^{b_1+b_2}⟩ + ⊥, where ⊥ is an unnormalized state orthogonal to all states of the form |x⟩ |0^{b_1+b_2}⟩ and |ϕ_2⟩ approximates the target state with error at most ϵ/(C_α C_β), where we have used the inequality (25) and the fact that C_α ≥ sup |ᾰ| = sup |α| in the last line. By checking the definition of block encoding and adding up the gates and ancilla qubits used, we obtain the following theorem. Theorem 6. If a(x, ξ) = α(x)β(ξ) is a separable symbol as defined in Definition 1, then the discretized PDO (22) can be (C_α C_β, O(poly(pd) + polylog(1/ϵ)), ϵ)-block-encoded using the circuit displayed in Figure 6 with gate complexity O(poly(pd) + polylog(1/ϵ)), where C_α, C_β > 0 are constants such that C_α ≥ sup |α| and C_β ≥ sup |β|. Remark 1. In contrast to the result in Theorem 5, one can observe that the exponential factor 2^{pd/2} is removed, and thus the success probability for the circuit in Figure 6 is improved exponentially compared to the one in Figure 3.
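Classically, a separable symbol factors the discretized PDO into diagonal multiplications sandwiching Fourier transforms, which is exactly the structure exploited by Figure 6. The 1-D numpy sketch below (our illustration; numpy's FFT normalization conventions are used) checks this factorization against a dense matrix.

```python
import numpy as np

# For a separable symbol a(x, xi) = alpha(x) beta(xi), the discretized PDO is
# A = D_alpha F^{-1} D_beta F, with F the forward DFT.
P = 32
xgrid = np.arange(P) / P
xi = np.fft.fftfreq(P, d=1.0 / P)          # folded frequencies, as in (21)
alpha = lambda x: 2.0 + np.cos(2 * np.pi * x)
beta = lambda k: 1.0 / (1.0 + k**2)
f = np.random.default_rng(2).standard_normal(P)
Af = alpha(xgrid) * np.fft.ifft(beta(xi) * np.fft.fft(f))
# dense reference matrix
F = np.fft.fft(np.eye(P))
A = np.diag(alpha(xgrid)) @ np.linalg.inv(F) @ np.diag(beta(xi)) @ F
assert np.allclose(Af, A @ f)
```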
Linear combination of separable terms
With the block encoding for PDOs with separable symbols ready, the PDO for a linear combination of separable terms, i.e., a(x, ξ) = Σ_{j=0}^{m−1} y_j a_j(x, ξ) with each a_j separable, can also be block-encoded, thanks to LCU (see Section 2.2). More precisely, we have the following corollary.
Corollary 7. Let y ∈ ℂ^m with ‖y‖₁ ≤ δ, let (U_L, U_R) be a b-qubit state preparation pair for y as in Lemma 1, and, for each 0 ≤ j ≤ m − 1, let U_j be a (1, a, ϵ)-block-encoding of the discretized PDO A_j associated with symbol a_j(x, ξ) (see (22)) constructed in Theorem 6 with a = O(poly(pd) + polylog(1/ϵ)), with W the corresponding select oracle as in Lemma 1. Then (U_L† ⊗ I_{a+s}) W (U_R ⊗ I_{a+s}) is a (δ, a + b, (1 + δ)ϵ)-block-encoding of Σ_{j=0}^{m−1} y_j A_j, where I_{a+s} denotes the identity operator with size 2^{a+s} × 2^{a+s}. The gate complexity of the corresponding circuit is O(m(poly(pd) + polylog(1/ϵ))).
Efficient block encoding for fully separable symbols with explicit circuits
For separable symbols, the circuit presented in Figure 6 significantly increases the success probability compared to the one in Figure 3. However, this relies on circuits for arithmetic functions (see Proposition 4), which can still be challenging to construct in practice. In this section, we develop a more explicit circuit with the help of QSP and QET (see Section 2.3).
Dimension-wise fully separable symbols
To begin with, we consider the fully separable symbols defined as follows.
Definition 2. A symbol a(x, ξ) is (dimension-wise) fully separable if a(x, ξ) = α_1(x_1) · · · α_d(x_d) β_1(ξ_1) · · · β_d(ξ_d), where each factor α_k or β_k is a real even function, a real odd function, or an exponential function of the form f(y) = exp(iθy) for some real parameter θ.
Similar to (21), we introduce the following notations: ᾰ_k(x_k) := α_k(x_k/P) and β̆_k(ξ_k) := β_k(ξ_k − P·1_{ξ_k ≥ P/2}), k = 1, . . . , d. (28) Then the discretized PDO (11) becomes (Af)(x) = Π_{k=1}^d ᾰ_k(x_k) Σ_{ξ∈Ξ} Π_{k=1}^d β̆_k(ξ_k) f̂(ξ) e^{2πi x·ξ/P}. (29) In order to block-encode the PDO (29), we adopt the circuit shown in Figure 7, which is similar to the one in Figure 6: (1/|f|) Σ_x f(x/P) |x⟩ is the normalized input data, U_FT is the QFT circuit, U_β̆_k and U_ᾰ_k are the block encodings of D_β̆_k and D_ᾰ_k, respectively, and the desired output is obtained with normalizing factor 1/|Af| when getting |0^{b_1+b_2}⟩ for the b_1 + b_2 ancilla qubits. Here D_β̆_k and D_ᾰ_k are diagonal matrices defined in (8).
Exploiting the fully separable structure of the symbol, one can construct explicit circuits for the diagonal multiplications shown in Figure 7 by leveraging QSP and QET (see Section 2.3). To this end, we first introduce a lemma that gives Hermitian block encodings for two diagonal multiplication prototypes that allow us to build the QET circuit afterward. More specifically, by combining a series of single-qubit rotations, one can construct a diagonal matrix with diagonal elements {exp(ijθ)}_{j=0}^{P−1}. Then one can build a diagonal matrix with diagonal elements {sin(ijθ)}_{j=0}^{P−1} using a simple LCU circuit, from which a diagonal matrix with diagonal elements {g(ijθ)}_{j=0}^{P−1} can be built with QET for some smooth function g. Since the grid points of the frequency variables are taken as {−P/2, −P/2 + 1, . . . , P/2 − 1}, one also needs the corresponding diagonal matrices where the index j takes values in {−P/2, −P/2 + 1, . . . , P/2 − 1} rather than from 0 to P − 1. We denote the corresponding matrices by subscript − as opposed to + if the index j goes from 0 to P − 1. During the preparation of this paper, we notice that a similar result is built in [27] independently.
Lemma 8. Let D_+ := diag(0, 1, . . . , P − 1) and D_− := diag(−P/2, −P/2 + 1, . . . , P/2 − 1). The diagonal unitaries R_+ = exp(iθD_+) and R_− = exp(iθD_−) can each be implemented with p single-qubit rotations, where R_z is the single qubit rotation defined in Section 2.1. Let U_σ be the circuit displayed in Figure 8, where σ ∈ {−, +}; then U_σ is a (1, 1)-Hermitian-block-encoding of sin(θD_σ). In fact, the matrix corresponding to U_σ is i [[sin(θD_σ), cos(θD_σ)], [cos(θD_σ), −sin(θD_σ)]], and this is a Hermitian matrix with the first diagonal block being sin(θD_σ) after ignoring the global phase factor i, since [[sin(θD_σ), cos(θD_σ)], [cos(θD_σ), −sin(θD_σ)]] is Hermitian. It can be seen from Figure 8 that the number of elementary gates used is 2p + 5.
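The first ingredient of Lemma 8, the diagonal unitary exp(iθD_+), decomposes bit-wise into p single-qubit phase rotations (each an R_z up to a global phase), using no ancilla qubits; the numpy sketch below (our illustration) verifies the tensor-product identity.

```python
import numpy as np

# Since D_+ = diag(0, ..., P-1) decomposes bit-wise, exp(i theta D_+) is a
# tensor product of p single-qubit phase rotations.
p, theta = 3, 0.17
P = 2**p
R = np.eye(1, dtype=complex)
for k in reversed(range(p)):               # most significant qubit first
    R = np.kron(R, np.diag([1.0, np.exp(1j * theta * 2**k)]))
assert np.allclose(R, np.diag(np.exp(1j * theta * np.arange(P))))
```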
Now, we aim to construct the block encoding of the diagonal matrices D_ᾰ_j = ᾰ_j(D_+) and D_β̆_j = β̆_j(D_+) = β_j(D_−), namely U_ᾰ_k and U_β̆_k in Figure 7. For the case where α_j(x_j) = exp(iθx_j) or β_j(ξ_j) = exp(iθξ_j), the matrices R_− = exp(iθD_−) and R_+ = exp(iθD_+) constructed in Lemma 8 are exactly the diagonal matrices D_β̆_j = β_j(D_−) and D_ᾰ_j = ᾰ_j(D_+), respectively. Therefore, we devote the rest of this section to the case where β_j and α_j are even or odd real functions. Thanks to the block encodings U_+ and U_− introduced in Lemma 8, what remains to do is to find polynomial approximations of α and β so as to complete the QET procedure described in Section 2.3. Specifically, we restrict the parameter θ to be 0 < θ < π/(2P) in order to recover θD_σ from sin(θD_σ) with the arcsin function. Going through the QET procedure, one obtains the following result.
Proposition 9.
Let U_− and U_+ be the (1, 1)-Hermitian-block-encodings of sin(θD_−) and sin(θD_+) constructed in Lemma 8 for 0 < θ < π/(2P) with P = 2^p. Assume that g is an even (resp. odd) continuous real function on [−P, P] and that deg_g(ϵ) is the smallest positive integer such that there exists an even (resp. odd) polynomial g̃ with the degree bounded by deg_g(ϵ) satisfying ‖g̃‖ < 1 and sup_{−sin(Pθ) ≤ x ≤ sin(Pθ)} |C_g g̃(x) − g((1/θ) arcsin(x))| < ϵ, (30) where ‖·‖ is the L^∞ norm on [−1, 1] and C_g is a constant such that C_g ≥ sup |g|. Then there is a (C_g, 2, ϵ)-block-encoding for both g(D_−) and g(D_+) with O(p deg_g(ϵ)) gates, where D_− and D_+ are defined in Lemma 8.
Proof. The circuit in Figure 2(b) with U_A replaced by U_σ gives a (C_g, 2)-block-encoding for g̃(sin(θD_σ)), and by (30), ‖C_g g̃(sin(θD_σ)) − g((1/θ) arcsin(sin(θD_σ)))‖ < ϵ, where the operator 2-norm is used. Thus the circuit in Figure 2(b) with U_A replaced by U_σ gives a (C_g, 2, ϵ)-block-encoding for g((1/θ) arcsin(sin(θD_σ))) = g(D_σ), where σ ∈ {−, +} and we have used the fact that 0 < θ < π/(2P) in the first equality. Since O(p) gates are used in U_σ, the gate complexity of the circuit described above is O(p deg_g(ϵ)), which closes the proof.
The gate complexity of the circuit built with QET in Proposition 9 depends on the smoothness of g. For instance, we have the following corollary.
Corollary 10. Assume that g is an even (resp. odd) differentiable real function on [−P, P]. Let η_g(ϵ, θ) be the smallest integer such that there exists a polynomial u with degree η_g(ϵ, θ) satisfying sup_{|y| < π/(2θ)} |u(y) − g(y)| < ϵ/3; then the gate complexity of the circuit used in Proposition 9 is O(p log(C′_g/(θϵ)) η_g(ϵ, θ)), where C′_g := sup_{|y| ≤ π/(2θ)} |g′(y)|. In particular, if g is a polynomial, the gate complexity reduces to O(p log(C′_g/(θϵ)) deg g).
Proof. Without loss of generality, assume 1 > ϵ/(2C_g); otherwise we can just let g̃ = 0 in (30). Since (1/θ) arcsin(x) is an analytic function whose power series centered at x = 0 has convergence radius 1 > sin(Pθ), there is a truncation v of the Taylor series of (1/θ) arcsin(x) with degree O(log(C′_g/(θϵ))) that approximates (1/θ) arcsin(x) sufficiently well on [−sin(Pθ), sin(Pθ)]. Since the coefficients of the Taylor series of (1/θ) arcsin(x) at x = 0 are all non-negative, it holds that |v(x)| ≤ π/(2θ) for x ∈ [−sin(Pθ), sin(Pθ)], so the composition u(v(x)) is well defined there and |u(v(x)) − g((1/θ) arcsin(x))| < ϵ for x ∈ [−sin(Pθ), sin(Pθ)]. According to Proposition 9, we have deg_g(ϵ) = O(log(C′_g/(θϵ)) η_g(ϵ, θ)), and the gate complexity of the circuit used in Proposition 9 is O(p log(C′_g/(θϵ)) η_g(ϵ, θ)), where the factor p comes from preparing sin(θD_σ) as mentioned in Lemma 8. In the case that g is a polynomial, we can simply let u = g and thus η_g(ϵ, θ) ≤ deg g.
For the final step, we first introduce the notation deg_a(ϵ) := max_{1≤k≤d} max(deg_{ᾰ_k}(ϵ), deg_{β̆_k}(ϵ)), (31) where deg_{ᾰ_k}(ϵ) and deg_{β̆_k}(ϵ) are defined in Proposition 9. Denote by U_ᾰ_k and U_β̆_k the block encodings of ᾰ_k(D_+) and β̆_k(D_+) = β_k(D_−) obtained in Proposition 9 with ϵ replaced by ϵ/(2dC), respectively, where C := Π_{k=1}^d C_ᾰ_k C_β̆_k, (32) and C ≥ Π_{k=1}^d (sup |α_k| sup |β_k|) since C_ᾰ_k ≥ sup |α_k| and C_β̆_k ≥ sup |β_k|. Now we are ready to prove the following theorem that relies on the block encodings U_ᾰ_1, . . . , U_ᾰ_d and U_β̆_1, . . . , U_β̆_d in Figure 7.
Theorem 11. If a(x, ξ) = α_1(x_1) · · · α_d(x_d) β_1(ξ_1) · · · β_d(ξ_d) is a fully separable symbol as defined in Definition 2, then the corresponding PDO defined by (29) can be (C, O(d), ϵ)-block-encoded with gate complexity O(dp deg_a(ϵ/(2dC)) + dp²) using the circuit displayed in Figure 7, where C > 0 is a constant defined in (32), and deg_a(ϵ/(2dC)) is defined in (31).
Proof. Since each U_ᾰ_k is a (C_ᾰ_k, 2, ϵ/(2dC))-block-encoding for ᾰ_k(D_+) according to Proposition 9, and similarly each U_β̆_k is a (C_β̆_k, 2, ϵ/(2dC))-block-encoding for β̆_k(D_+), the normalization constants and errors accumulate multiplicatively over the 2d diagonal multiplications. Hence by the same argument as the proof of Theorem 6 (especially (23), (24), (25), (26) and (27)), the circuit in Figure 7 gives a (C, 4d, ϵ) block encoding of the PDO (29).
Remark 2. The approximate QFT (see [29] for example) can be used to replace the QFT blocks in Figure 7. With similar arguments as in the proof of Theorem 11, one can show that the gate complexity can be reduced to O(dp deg_a(ϵ/(2dC))).
Linear combination of fully separable terms
Similar to Corollary 7, we can block-encode the PDO for a linear combination of fully separable terms, i.e., with LCU (see Section 2.2) and Theorem 11, which is stated in the following corollary. 1, a, ϵ)block-encoding of the discretized PDO A j associated with symbol a j (x, ξ) (see (29)) constructed in Theorem 11 with a = O(d). Then (U † L ⊗ I a+s )W (U R ⊗ I a+s ) is an (δ, a + b, (1 + δ)ϵ)-block-encoding of m−1 j=0 y j A j , where I a+s denotes the identity operator with size 2 a+s × 2 a+s . The gate complexity of the corresponding circuit is O(dp m−1 j=0 deg a ϵ 2d + dp 2 m).
Applications
In this section, we present worked examples for particular symbols using the circuit shown in Figure 7, together with complexity analysis, beginning with a variable coefficient second-order elliptic operator.
Second-order elliptic operator with variable coefficients
Recall that the elliptic operator introduced in (2) leads to the following equation: u(x) − ∇·(ω(x)∇u(x)) = b(x). (33) In this section, we assume that ω(x) > 0 has a low-rank Fourier expansion ω(x) = Σ_{j=1}^r c_j e^{2πi q_j·x}, q_j ∈ ℤ^d. (34) Many commonly seen functions have low-rank expansions or approximations. For instance, ω(x) = 2 + sin(2π Σ_{l=1}^d x_l) > 0 can be written in the rank-3 form ω(x) = 2 + (1/(2i)) e^{2πi Σ_{l=1}^d x_l} − (1/(2i)) e^{−2πi Σ_{l=1}^d x_l}. Plugging the form (34) of ω into (3), one obtains the symbol associated with the PDO above: a(x, ξ) = 1 + 4π² Σ_{j=1}^r Σ_{l=1}^d c_j q_{jl} P e^{2πi q_j·x} (ξ_l/P) + 4π² Σ_{j=1}^r Σ_{l=1}^d c_j P² e^{2πi q_j·x} (ξ_l/P)², (35) where P = 2^p is the number of discrete points used for each dimension (see Section 2.4).
Notice that the terms e^{2πi q_j·x} ξ_l/P and e^{2πi q_j·x} ξ_l²/P² above are fully separable by Definition 2; thus by Corollary 12, we know that the corresponding PDO can be block-encoded. As explained in Section 5, the multiplication of e^{2πi q_j·x} = Π_{l=1}^d e^{2πi q_{jl} x_l} can be implemented directly using R_− and R_+ constructed in Lemma 8. Consequently, the number of gates needed for multiplying each e^{2πi q_{jl} x_l} factor without error is O(p), and no ancilla qubits are used. Since ξ_l/P and ξ_l²/P² are polynomials, by Corollary 10, the multiplication of each ξ_l/P and ξ_l²/P² factor can be implemented with O(p log(1/ϵ)) gates to O(ϵ) precision, and O(d) ancilla qubits are used. Going through the proof of Theorem 11, one can see that the PDO associated with e^{2πi q_j·x} ξ_l/P and e^{2πi q_j·x} ξ_l²/P² can be (1, O(d), ϵ)-block-encoded with gate complexity O(p log(1/ϵ) + p² + dp), where the three terms account for implementing the polynomials of ξ_l, the QFT of the l-th component, and the multiplication of e^{2πi q_j·x}, respectively. Finally, going through the LCU step as in Corollary 12 with O(dr) terms, one obtains a (γ, O(d + log(dr)), (1 + γ)ϵ)-block-encoding of the PDO (2) with total gate complexity O(dr(p log(1/ϵ) + p² + dp)) = O(dpr(log(1/ϵ) + d + p)), where γ = 1 + 4π²(P Σ_{j=1}^r |c_j| ‖q_j‖₁ + P² d Σ_{j=1}^r |c_j|). This result is summarized in the following theorem, where we have used O(d + log(dr)) = O(d + log(r)).
Theorem 13. The discretized PDO associated with (33) can be (γ, O(d + log(r)), (1 + γ)ϵ)-block-encoded with gate complexity O(dpr(log(1/ϵ) + d + p)), where γ = 1 + 4π²(P Σ_{j=1}^r |c_j| ‖q_j‖₁ + P² d Σ_{j=1}^r |c_j|).
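For concreteness, the normalization γ of Theorem 13 can be evaluated for the rank-3 example above; the sketch below is our illustration, under the assumption that the expansion coefficients are c = (2, 1/(2i), −1/(2i)) with wave vectors 0, (1, …, 1), and −(1, …, 1).

```python
import numpy as np

# Evaluate gamma = 1 + 4 pi^2 (P sum|c_j| ||q_j||_1 + P^2 d sum|c_j|) for the
# rank-3 expansion of omega(x) = 2 + sin(2 pi sum_l x_l).
d, p = 2, 6
P = 2**p
c_abs = np.array([2.0, 0.5, 0.5])          # |c_j| for the three terms
q_l1 = np.array([0.0, d, d])               # ||q_j||_1 for each term
gamma = 1 + 4 * np.pi**2 * (P * np.sum(c_abs * q_l1)
                            + P**2 * d * np.sum(c_abs))
print(gamma)  # grows like O(P^2 d sum|c_j|), governing the LCU normalization
```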
Similar to previous sections and by a slight abuse of notation, we denote the discretization of the operator defined in (33) also by A. Now we can use the QLSA in [13] to get the following corollary:
Corollary 14. Let (A, b) be the discretization of the operator and the right-hand side of (33), respectively. There is a quantum algorithm finding the normalized state |A^{−1}b⟩ = A^{−1}b / |A^{−1}b| up to error O(ϵ).
Proof. Let U_A be the block encoding of A constructed in Theorem 13. In other words, U_A is a (1, O(d + log(r)), 0)-block-encoding of some matrix Ã/γ such that ‖A − Ã‖ < ϵ/γ. Therefore, we have ‖Ã‖ = O(γ), ‖Ã^{−1}‖ = O(1), and thus κ(Ã) = O(γ).
The main theorem of [13] gives a quantum algorithm that can output a state |y⟩ that is O(ϵ) close to |(Ã/γ)^{−1} b⟩ = |Ã^{−1} b⟩ = Ã^{−1}b / |Ã^{−1}b|. So |y⟩ is also an O(ϵ) approximation of |A^{−1}b⟩, and the overall gate complexity is O(γ dr log(1/ϵ) log P (log(γP/ϵ) + d)).
Conclusion and Discussion
This paper systematically investigates block encodings for pseudo-differential operators (PDOs) under different structural assumptions. A block encoding scheme for PDOs with generic symbols is developed in Section 3, and the quantum circuit is illustrated in Figure 3. For PDOs with linear combinations of separable symbols, we improve the success probability exponentially and present an efficient block encoding algorithm in Section 4. Then a more explicit and practical block encoding scheme is derived in Section 5 with the help of QSP and QET, along with which the complexity analysis is provided. Plenty of worked examples are given in Section 6, including the block encoding of elliptic operators with a variable coefficient that is difficult to deal with for quantum solvers that use finite difference schemes and the block encoding of the inverse of constant-coefficient elliptic operators without using quantum linear system algorithms. The block encoding schemes presented in this paper enrich the study of the block encoding of dense operators and shed new light on designing practical quantum circuits for scientific computing.
For future directions, one can apply the established results in this paper to other PDOs besides the ones presented in Section 6. One can also use the idea of symbol calculus to implement different operations on the PDO, such as taking the square root or exponential, which can help solve certain PDEs in practice.
A Multiplication of two states
Given two states Σ_j a_j |j⟩ and Σ_k b_k |k⟩, the joint state is Σ_{j,k} a_j b_k |j_0 · · · j_{n−1}⟩ |k_0 · · · k_{n−1}⟩, and after applying the CNOT gates, the k register is |0⟩ only when j_l = k_l for all l = 0, . . . , n − 1, which means the state becomes Σ_{j_0,j_1,...,j_{n−1}} a_j b_j |j_0 · · · j_{n−1}⟩ |0⟩ + |⊥⟩, where |⊥⟩ is a term that is orthogonal to |j⟩ |0⟩ for any j. Thus the probability of obtaining |0⟩ after the measurement is Σ_{j=0}^{N−1} |a_j b_j|² = c², and the outcome of the system register is (1/c) Σ_j a_j b_j |j⟩. | 2023-01-24T06:42:08.607Z | 2023-01-21T00:00:00.000 | {
"year": 2023,
"sha1": "cd8abe2204c377b036c483c303d2f25009924a93",
"oa_license": "CCBY",
"oa_url": "https://quantum-journal.org/papers/q-2023-06-02-1031/pdf/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f2baf4d520acc0440ef8abf975d76dd873a883a6",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Physics",
"Mathematics"
]
} |
246491669 | pes2o/s2orc | v3-fos-license | Blind Face Restoration via Multi-Prior Collaboration and Adaptive Feature Fusion
Blind face restoration (BFR) from severely degraded face images is important in face image processing and has attracted increasing attention due to its wide applications. However, due to the complex unknown degradations in real-world scenarios, existing prior-based methods tend to restore faces with unstable quality. In this article, we propose a multi-prior collaboration network (MPCNet) to seamlessly integrate the advantages of generative priors and face-specific geometry priors. Specifically, we pretrain a high-quality (HQ) face synthesis generative adversarial network (GAN) and a parsing mask prediction network, and then embed them into a U-shaped deep neural network (DNN) as decoder priors to guide face restoration, during which the generative priors can provide adequate details and the parsing map priors provide geometry and semantic information. Furthermore, we design adaptive priors feature fusion (APFF) blocks to incorporate the prior features from the pretrained face synthesis GAN and face parsing network in an adaptive and progressive manner, making our MPCNet exhibit good generalization in real-world applications. Experiments demonstrate the superiority of our MPCNet in comparison to state-of-the-art methods and also show its potential in handling real-world low-quality (LQ) images from several practical applications.
INTRODUCTION
Face images are always one of the most popular types of images in our daily life, recording long-lasting precious memories and providing crucial information for identity analysis. Unfortunately, due to the limited conditions of acquisition, storage, and transmission devices, degradation of face images is still ubiquitous in most real-world applications. The degraded face images not only impede human visual perception but also degrade face-related applications such as video surveillance and face recognition. This challenge motivates the restoration of high-quality (HQ) face images from low-quality (LQ) face inputs that contain unknown degradations (e.g., blur, noise, compression), known as blind face restoration (BFR) (Chen et al., 2021; Wang et al., 2021; Yang et al., 2021). It has attracted increasing attention due to its wide applications.
Face images have face-specific geometry priors, which include facial landmarks (Chen et al., 2018), facial parsing maps (Chen et al., 2018, 2021), and facial heatmaps. Therefore, many recent studies (Shocher et al., 2018; Zhang et al., 2018a, 2020; Soh et al., 2020) exploit extra face prior knowledge as inputs or supervision to recover accurate face shape and details. Benefiting from the incorporation of facial priors in deep neural networks (DNNs), these methods exhibit plausible and acceptable results on bicubically degraded faces. However, when applied to real-world scenarios, they often fail because the degradation is far more complicated. Additionally, the geometry priors estimated from LQ inputs contain very limited texture information for restoring facial details.
Other methods (Li et al., 2018, 2020b) investigate reference priors to generate realistic results. Reference priors can be a single face image, multiple face images, or facial component dictionaries, which can provide many identity-aware face details to the network. Nevertheless, when a reference with the same identity as the LQ input is unavailable, the practical application of reference-based methods is limited. Additionally, the limited diversity and richness of facial component dictionaries also result in unrealistic restoration results.
Another category of approaches performs degradation estimation (Michaeli and Irani, 2013; Bell-Kligler et al., 2019) to provide degradation information for the conditional restoration of LQ face images with unknown degradations. Although this design incorporates human knowledge about the degradation process and implies a certain degree of interpretability, the real-world degradation process is too complex to be estimated accurately, which prevents degradation estimation from being fully exploited.
In this article, we investigate the problem of BFR and aim at restoring HQ faces from LQ inputs with complicated degradation. To achieve a better trade-off between realness and fidelity, we propose a multi-prior collaboration network (MPCNet) that seamlessly integrates the advantages of generative priors and face-specific geometry priors. To be specific, we first pretrain an HQ face synthesis GAN and a parsing mask prediction network, and then embed them into a U-shaped DNN as decoder priors to guide face restoration. On the one hand, the encoder part of the U-shaped DNN learns to map the LQ input to an intermediate latent space for global face reproduction, which then controls the generator of the face synthesis GAN to provide the desired generative priors for HQ face restoration. On the other hand, the decoder part of the U-shaped DNN leverages the encoded intermediate spatial features and diverse facial priors to restore the HQ face in a progressive manner, during which the generative priors provide adequate details and the parsing map priors provide geometry and semantic information. Instead of direct concatenation, we propose multi-scale adaptive priors feature fusion (APFF) blocks to incorporate the prior features from the pretrained face synthesis GAN and face parsing network in an adaptive and progressive manner. In each APFF block, we integrate generative priors and parsing map priors with decoded facial features to generate fusion feature maps for guiding face restoration. In this way, when applied to complicated degradation scenarios, the fusion feature maps can correctly determine where to incorporate guidance prior features in an adaptive manner, enabling our MPCNet to generalize well to real-world applications. The main contributions of this study include:
• We propose MPCNet to seamlessly integrate the advantages of generative priors and face-specific geometry priors. We pretrain an HQ face synthesis GAN and a parsing mask prediction network, and then embed them into a U-shaped DNN as decoder priors to guide face restoration, during which the generative priors provide adequate details and the parsing map priors provide geometry and semantic information.
• We propose the APFF block to incorporate the prior features from the pretrained face synthesis GAN and face parsing network in an adaptive and progressive manner, enabling our MPCNet to generalize well to real-world applications.
• Experiments demonstrate the superiority of our MPCNet over state-of-the-art methods and show its potential in handling real-world LQ images from several practical applications.
RELATED WORK
Facial geometry prior knowledge: Face images have face-specific geometry prior information, which includes 3D facial priors, facial landmarks, face depth maps, facial parsing maps, and facial heatmaps. To recover facial images with a much clearer facial structure, researchers began to utilize facial prior knowledge to design effective face restoration networks. Song et al. (2017) proposed utilizing a pretrained network to extract facial landmarks, divide the face into five components, and feed the components into different branches to recover each of them. Jiang et al. (2018) developed a DNN denoiser and multi-layer neighbor component embedding for face restoration, which first recovered the global face image and then compensated for missing details in every component. Wang et al. (2020) proposed a parsing-map-guided multiscale attention network that extracts the parsing map from the LQ input and then feeds the concatenation of the parsing map and the LQ image into subnetworks to produce HQ results. Assuming that the depth map could provide geometric information, Fan et al. (2020) built a subnetwork to learn the depth map from the LQ input and then imported the depth into the HQ network to facilitate facial reconstruction.
Benefiting from the incorporation of facial priors in DNNs, these methods exhibit plausible and acceptable results on bicubically degraded faces. However, when applied to real-world scenarios with more complicated degradation, they are no longer applicable. Additionally, the geometry priors estimated from LQ inputs contain very limited texture information for restoring facial details. Since face synthesis GANs can generate visually realistic faces with rich and diverse details, it is reasonable to incorporate such generative priors into the face restoration process.
Facial generative prior knowledge: Recently, with the rapid development of GAN techniques (Goodfellow et al., 2014), generative priors of pretrained face generative adversarial network (GAN) models, such as StyleGAN (Karras et al., 2019, 2020), have been exploited for real-world face restoration (Gu et al., 2020; Menon et al., 2020; Pan et al., 2021). Generative priors of pretrained GANs (Karras et al., 2017, 2019, 2020; Brock et al., 2018) were previously exploited by GAN inversion (Abdal et al., 2019; Gu et al., 2020; Zhu et al., 2020; Pan et al., 2021), whose primary aim is to map the LQ input image to an intermediate latent code, which then controls the pretrained GAN at each convolution layer to provide generative priors such as facial textures and colors. Yang et al. (2021) proposed embedding a GAN prior learned for face generation into a DNN for face restoration and then jointly fine-tuning the GAN prior network with the DNN, so that the latent code and noise input can be well generated from the degraded face image at different network layers. Wang et al. (2021) proposed utilizing rich and diverse generative facial priors that contain sufficient facial texture and color information to restore LQ face images. However, extensive experiments have shown that, due to the low dimensionality of latent codes, such a decoupled control method is insufficient to guide the precise restoration process and leads to unstable quality of restored faces when dealing with LQ face images. To achieve a better trade-off between realness and fidelity, we rethink the characteristics of the BFR task and turn to incorporating various types of facial priors for recovering HQ faces. To that end, we propose a novel multi-prior collaboration framework that seamlessly integrates the advantages of generative priors and face-specific geometry priors, which shows its potential in handling real-world LQ images from several practical applications (see Figure 1). To preserve high fidelity, we reform the GAN blocks in StyleGANv2 by removing the noise inputs to avoid the generation of extra stochastic facial details. Then, we design an APFF block to incorporate the prior features from the pretrained face synthesis GAN and face parsing network in an adaptive and progressive manner. In general, our main contribution is to explore the solution of the BFR task from a different perspective and provide an effective method that achieves promising performance on both synthetic and real degraded images.
METHODOLOGY
In this section, we first describe the degradation model and our framework in detail, then introduce the adaptive prior feature fusion, and finally give the learning objectives used to train the whole network.
Problem Formulation
To tackle severely degraded faces in real-world scenarios, the training data is synthesized by a complicated degradation that can be formulated as follows:

x = JPEG_q((y ⊛ k_σ) ↓_r + n_δ),    (1)

where x is the LQ face, y is the HQ face image, k_σ is a blur kernel, ⊛ denotes the convolution operation, ↓_r represents the standard r-fold downsampler, n_δ refers to Gaussian noise with SD δ, and JPEG_q denotes the JPEG compression operator with quality factor q. In our implementation, for each training pair, we randomly select the blur kernel k_σ from the following four kernel types: Gaussian blur (3 ≤ σ ≤ 15), average blur (3 ≤ σ ≤ 15), median blur (3 ≤ σ ≤ 15), and motion blur (5 ≤ σ ≤ 25). The scale factor r is randomly sampled from [4 : 16]. The additive white Gaussian noise (AWGN) n_δ is sampled channel-wise from a normal distribution with 0 ≤ δ ≤ 0.1 × 255. The compression quality factor q is randomly sampled from [10 : 65], where a lower value means stronger compression and lower image quality.
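For concreteness, the following is a minimal Python sketch of this degradation pipeline using OpenCV and NumPy. It is an illustration rather than the authors' code: only the Gaussian-blur branch is shown, the σ range from the text is treated as a kernel size for simplicity, and the random sampling of the kernel type and of σ, r, δ, and q is left to the caller.

```python
import cv2
import numpy as np

def synthesize_lq(hq, ksize=7, r=8, noise_sd=0.05 * 255, q=40):
    """Apply x = JPEG_q((y blurred) downsampled r-fold + noise) to one HQ image.

    hq: uint8 BGR array (e.g., 512 x 512 x 3). ksize, r, noise_sd, and q
    would be sampled randomly per training pair as described above.
    """
    # Blur (Gaussian branch only; kernel size must be odd).
    blurred = cv2.GaussianBlur(hq, (ksize | 1, ksize | 1), 0)
    # Standard r-fold downsampler.
    h, w = hq.shape[:2]
    down = cv2.resize(blurred, (w // r, h // r), interpolation=cv2.INTER_CUBIC)
    # Channel-wise additive white Gaussian noise with SD noise_sd.
    noisy = down.astype(np.float32) + np.random.normal(0.0, noise_sd, down.shape)
    noisy = np.clip(noisy, 0, 255).astype(np.uint8)
    # JPEG compression with quality factor q (lower q = stronger compression).
    _, buf = cv2.imencode(".jpg", noisy, [int(cv2.IMWRITE_JPEG_QUALITY), q])
    lq = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    # Resize back to the 512 x 512 network input size.
    return cv2.resize(lq, (w, h), interpolation=cv2.INTER_CUBIC)
```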
Overview of MPCNet
To begin with, BFR is defined as the task of reconstructing the HQ face image y from an LQ input facial image x suffering from unknown degradation. Figure 2 illustrates the overall framework of the proposed MPCNet, consisting of a spatial features encoder network, an adaptive prior fusion network, a pretrained face synthesis GAN, and a pretrained parsing mask prediction network.
U-Shape Backbone Network
The backbone of our MPCNet is composed of the spatial features encoder network and the adaptive prior fusion decoder network. It starts with a degraded face image I_LQ of size 512 × 512 × 3. When the input is of a different size, we simply resize it to 512 × 512 with bicubic sampling. Then, I_LQ goes through several downsampling residual groups to generate an intermediate latent space W, which is shared by the adaptive prior fusion decoder network and the pretrained face synthesis GAN (such as StyleGANv2; Karras et al., 2020). To progressively fuse the decoded spatial features and multiple priors, we use the APFF blocks to construct the decoder part of the U-shaped backbone network. The feature F^7_decode from the last APFF block is passed to a single ToRGB convolution layer, which predicts the final output I_HQ. More details about the APFF block are given in the next section.
Pretrained Face Synthesis GAN
Due to the high capability of GANs in generating HQ face images, we leverage a pretrained StyleGANv2 prior to provide diverse and rich facial details for our BFR task. To utilize the generative priors, previous methods typically map the input image to its closest latent codes Z and then generate the corresponding output directly. However, due to the low dimensionality of latent codes, such a decoupled control method is insufficient to guide the precise restoration process and leads to unpredictable failures. Instead of generating the final HQ face image directly, we propose to exploit the intermediate convolutional features of the pretrained GAN as priors and further combine them with other types of priors for better fidelity.
Specifically, given the encoded intermediate spatial features F_spatial of the input image (produced by the encoder part of the U-shaped backbone network, Equation 2), we first map them to the latent codes F_latent with a global pooling operation and several multi-layer perceptron (MLP) layers. The latent codes F_latent then pass through each convolution layer in the pretrained GAN and generate GAN features at each resolution scale.
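A minimal PyTorch sketch of this pooling-plus-MLP mapping is given below; the channel count, latent dimension, and number of MLP layers are assumptions, since the text does not specify them.

```python
import torch.nn as nn

class LatentMapper(nn.Module):
    """Map encoded spatial features F_spatial to latent codes F_latent."""

    def __init__(self, in_channels=512, latent_dim=512, n_layers=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # global pooling
        layers, dim = [], in_channels
        for _ in range(n_layers):             # several MLP layers
            layers += [nn.Linear(dim, latent_dim), nn.LeakyReLU(0.2)]
            dim = latent_dim
        self.mlp = nn.Sequential(*layers)

    def forward(self, f_spatial):             # f_spatial: (B, C, H, W)
        z = self.pool(f_spatial).flatten(1)   # (B, C)
        return self.mlp(z)                    # latent codes for the GAN blocks
```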
The structure of the GAN block is shown in Figure 3 and is consistent with the architecture in StyleGANv2. Additionally, the number of GAN blocks equals the number of APFF blocks in the U-shaped backbone network, which is related to the resolution of the input face image. For the realness of the synthetic face, the original StyleGANv2 generates stochastic detail by introducing explicit noise inputs. However, the reconstructed HQ face image is required to faithfully approximate the ground-truth face image. To achieve a better trade-off between realness and fidelity, we abandon the noise inputs for all GAN blocks (see Figure 4).
Pretrained Parsing Mask Prediction Network
To further improve the fidelity of the restored face image, we pretrain a parsing mask prediction network to provide the geometry and semantic information that covers the deficiencies of GAN priors. As illustrated in Figure 2D, since learning the mapping from LQ images to parsing maps is much simpler than face restoration, the parsing mask prediction network employs only an encoder-decoder framework. It begins with 7 downsampling residual blocks, followed by 10 residual blocks and 7 upsampling residual blocks. The last feature F^7_parse is passed to a single ToRGB convolution layer, which predicts the final output I_parse.
Besides, we conduct extensive experiments to demonstrate the robustness of the parsing mask prediction network on LQ face images with unknown degradations.
Adaptive Feature Fusion
It is extremely complex to recover HQ faces from their LQ counterparts in real-world scenarios because of the complicated degradation, diverse poses, and expressions. Therefore, it is natural to consider combining the different facial priors and letting them collaborate to improve the reconstruction quality.
Since each facial prior has its own shortcomings, especially for a specific application, we propose a novel collaboration module that combines multiple facial priors, in which feature translation, transformation, and fusion are considered to improve the restoration performance and generalization ability of our MPCNet. The APFF block is designed to integrate the generative priors F^j_GAN and parsing map priors F^j_parse with the decoded facial features F^j_spatial to generate the fusion feature maps F^{j+1}_output for guiding face restoration. The rich and diverse details provided by F^j_GAN can greatly alleviate the difficulty of degradation estimation and image restoration. However, due to the deficiency of the decoupled control method in StyleGANv2, the style condition of F^j_GAN is unstable and inconsistent with F^j_spatial, which must be addressed before feature fusion.
AdaIN. AdaIN (Huang and Belongie, 2017) was first proposed to translate content features to a desired style. Due to its efficiency and compact representation (Karras et al., 2020), AdaIN is adopted to adjust F^j_GAN to have a style condition similar to that of the restored feature of the degraded image. The AdaIN operation can be formulated as

AdaIN(F^j_GAN, F^j_spatial) = σ(F^j_spatial) · (F^j_GAN − µ(F^j_GAN)) / σ(F^j_GAN) + µ(F^j_spatial),

where µ(·) denotes the mean operation and σ(·) denotes the SD operation. With the AdaIN operation, F^j_GAN can thus be aligned with F^j_spatial in style conditions such as color, contrast, and illumination. Intermediate generative features F^j_g1 and F^j_g2 are generated by f_conv1(·) and f_conv2(·), which denote 3 × 3 convolutions exploited to reduce the channel numbers and refine features, respectively. Besides, the intermediate spatial features F^j_s1 and F^j_s2 are generated from F^j_spatial by the same process.
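Below is a minimal PyTorch sketch of this alignment step, following the standard AdaIN formulation of Huang and Belongie (2017); it assumes (B, C, H, W) feature maps and is not the authors' exact implementation.

```python
import torch

def adain(f_gan, f_spatial, eps=1e-5):
    """Give f_gan the per-channel mean/SD statistics of f_spatial."""
    mu_g = f_gan.mean(dim=(2, 3), keepdim=True)
    sd_g = f_gan.std(dim=(2, 3), keepdim=True) + eps
    mu_s = f_spatial.mean(dim=(2, 3), keepdim=True)
    sd_s = f_spatial.std(dim=(2, 3), keepdim=True)
    # Normalize f_gan, then re-style it with f_spatial's statistics.
    return sd_s * (f_gan - mu_g) / sd_g + mu_s
```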
Spatial feature transform. Motivated by the observation that GAN priors are incapable of capturing the geometry information of the overall face structure because of the decoupled control method, we exploit the parsing map prior to provide the geometry and semantic information that covers the shortage of GAN priors. Specifically, we introduce guidance features F^j_guide to direct the fusion of F^j_GAN and F^j_spatial; the generation of F^j_guide considers F^j_GAN, F^j_spatial, and F^j_parse. For spatial-wise feature modulation, we employ the Spatial Feature Transform of Wang et al. (2018b), denoted SFT(·), to generate affine transformation parameters from F^j_parse. At each resolution scale, SFT(·) learns a mapping function f(·) that provides a modulation parameter pair (α, β) according to the parsing map features F^j_parse, and then utilizes α and β to provide spatially fine-grained control over the concatenation of F^j_GAN and F^j_spatial.
The concatenation of F^j_GAN and F^j_spatial is modified by scaling and shifting the feature maps according to the transformation parameters:

SFT(F^j_cat | α, β) = α ⊗ F^j_cat + β,

where F^j_cat = [F^j_GAN, F^j_spatial] denotes the concatenated feature maps, which have the same dimensions as α and β, and ⊗ indicates element-wise multiplication.
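The sketch below illustrates the SFT modulation; the small convolutional head that predicts α and β from the parsing features is an assumption, since the text does not detail its architecture.

```python
import torch
import torch.nn as nn

class SFTLayer(nn.Module):
    """Spatial Feature Transform conditioned on parsing-map features.

    cat_channels must equal the channel count of [f_gan, f_spatial].
    """

    def __init__(self, parse_channels, cat_channels):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(parse_channels, cat_channels, 3, padding=1),
            nn.LeakyReLU(0.2),
        )
        self.to_alpha = nn.Conv2d(cat_channels, cat_channels, 3, padding=1)
        self.to_beta = nn.Conv2d(cat_channels, cat_channels, 3, padding=1)

    def forward(self, f_gan, f_spatial, f_parse):
        f_cat = torch.cat([f_gan, f_spatial], dim=1)  # concatenated features
        h = self.shared(f_parse)
        alpha, beta = self.to_alpha(h), self.to_beta(h)
        return alpha * f_cat + beta                   # element-wise scale/shift
```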
On the one hand, the facial generative priors generally contain HQ facial texture details. On the other hand, the facial parsing priors carry more shape and semantic information and are thus more reliable for the global facial region. Considering that F^j_GAN and F^j_parse can mutually convey complementary information, we combine them for better reconstruction of the HQ face image. We first calculate the errors between the generative features and the spatial features to highlight the inconsistent facial components that need correction. Then we exploit a gating module softmax(·) to generate a semantic-guided map from the parse features. Finally, we combine the semantic-guided maps and the features of the inconsistent facial components to refine the initial spatial features of early layers and obtain better results. The output of each APFF block can be written as

F^{j+1}_output = F^j_s2 + softmax(F^j_parse) ⊗ (F^j_g2 − F^j_s2).

As a result, this helps make full use of the rich and diverse texture information from F^j_GAN as well as the shape and semantic guidance from F^j_parse in an adaptive manner, thereby achieving a good balance between realness and faithfulness. Besides, we apply an APFF block at each resolution scale to facilitate progressive fusion and finally generate the restored face. In this way, when applied to complicated degradation scenarios, the fusion feature maps can correctly determine where to incorporate guidance prior features in an adaptive manner, enabling our MPCNet to generalize well to real-world applications.
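As a rough illustration of this gated fusion, here is a sketch in PyTorch; because the published equation is not fully recoverable from the extracted text, the exact form (in particular using F^j_g2 − F^j_s2 as the error term and a channel-wise softmax as the gate) is an inference from the description above.

```python
import torch

def apff_fuse(f_g2, f_s2, f_parse):
    """Gated fusion: refine spatial features with GAN-feature corrections.

    f_g2, f_s2, f_parse: (B, C, H, W) intermediate generative, spatial,
    and parsing features at one resolution scale.
    """
    error = f_g2 - f_s2                     # inconsistent facial components
    gate = torch.softmax(f_parse, dim=1)    # semantic-guided map
    return f_s2 + gate * error              # refined output features
```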
Learning Objective
To achieve a better trade-off between realness and fidelity, following previous BFR methods (Chen et al., 2018; Wang et al., 2018a,c; Li et al., 2020a,b), we apply 1) a reconstruction loss that constrains the outputs to faithfully approximate the ground-truth face image, 2) an adversarial loss that generates visually realistic details for photo-realistic face restoration, and 3) a gram matrix loss that helps better synthesize texture details.
Reconstruction loss. We combine the pixel-space and feature-space mean square error (MSE) to constrain the network output Î_HQ to be close to the ground truth I_HQ. The loss can be written as

L_rec = λ_MSE ||Î_HQ − I_HQ||^2 + λ_perc Σ_i ||ϕ_i(Î_HQ) − ϕ_i(I_HQ)||^2,

where the second term is the perceptual loss (Yu and Porikli, 2017; Wang et al., 2018b), ϕ_i(·) represents the features from the i-th layer of the pretrained VGGFace model (Cao et al., 2018), and λ_MSE and λ_perc denote the trade-off loss weights. In this study, we set i ∈ [1, 2, 3, 4].
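A sketch of this combined loss is shown below; `vgg_features` is a hypothetical helper that returns the four VGGFace feature maps for an image, and the loss weights are placeholders.

```python
import torch.nn.functional as F

def reconstruction_loss(pred, gt, vgg_features, w_mse=1.0, w_perc=1.0):
    """Pixel-space MSE plus feature-space (perceptual) MSE."""
    loss = w_mse * F.mse_loss(pred, gt)
    # Accumulate the perceptual term over the selected VGGFace layers.
    for fp, fg in zip(vgg_features(pred), vgg_features(gt)):
        loss = loss + w_perc * F.mse_loss(fp, fg)
    return loss
```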
Adversarial loss. Adversarial loss has been proven to be an effective and critical tool for improving visual quality. In both the generator and the discriminator, we incorporate spectral normalization (Miyato et al., 2018) on the weights of each convolution layer to stabilize learning. Furthermore, we adopt the hinge version of the adversarial loss as the objective function (Brock et al., 2018; Zhang et al., 2019), defined as

L_adv,D = E[max(0, 1 − D(I_HQ))] + E[max(0, 1 + D(Î_HQ))],
L_adv,G = −E[D(Î_HQ)].

In this study, L_adv,D is used to update the discriminator, while L_adv,G is adopted to update the MPCNet for blind face restoration.
Gram matrix loss. Gram matrix loss (Gatys et al., 2016) has demonstrated that style transfer helps greatly in synthesizing visually plausible textures. We use pretrained VGGFace (Cao et al., 2018) features of layers relu2_1, relu3_1, relu4_1, and relu5_1 to calculate the gram matrix loss, which is formulated as

L_gram = Σ_i ||G(ϕ_i(Î_HQ)) − G(ϕ_i(I_HQ))||^2, with G(F) = F F^T,

where ϕ_i(·) represents the features from the i-th layer of the pretrained VGGFace model.
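The following sketch spells out the hinge adversarial losses and a common form of the gram matrix loss; the gram normalization factor is an assumption.

```python
import torch.nn.functional as F

def d_hinge_loss(d_real, d_fake):
    """Hinge loss for the discriminator (Brock et al., 2018)."""
    return F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()

def g_hinge_loss(d_fake):
    """Hinge loss for the generator (here, MPCNet)."""
    return -d_fake.mean()

def gram_loss(feats_pred, feats_gt):
    """Gram-matrix (style) loss over VGGFace features (Gatys et al., 2016)."""
    def gram(f):
        b, c, h, w = f.shape
        f = f.reshape(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)   # normalization assumed
    return sum(F.mse_loss(gram(p), gram(g))
               for p, g in zip(feats_pred, feats_gt))
```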
Dataset and Experimental Settings
Training datasets. We first adopt the CelebA-Mask-HQ (Lee et al., 2020) to pre-train the face parsing mask prediction network, which contains 30,000 HQ face images with a size of 1,024 × 1,024 pixels. As shown in Figure 5, each image of CelebA-Mask-HQ has a segmentation mask of facial attributes.
To build the training set, we randomly choose 24,000 HQ images and resize all images to 512 × 512 pixels as ground truth. Similar to Li et al. (2020a), we adopt the degradation model of section Problem Formulation with randomly sampled parameters to synthesize the corresponding LQ images. Then we adopt the FFHQ dataset (Karras et al., 2019) to train the GAN prior network and the final MPCNet. The FFHQ dataset contains 70,000 HQ face images with a size of 1,024 × 1,024 pixels. In the same way as for CelebA-Mask-HQ, we synthesize the LQ inputs with Equation (1) during training. Testing datasets. We construct one synthetic test dataset and one real-world LQ test dataset to validate the ability of the proposed method to handle BFR. All test datasets have no overlap with the training datasets. For the synthetic test dataset, we first randomly choose 3,000 HQ images from the CelebA-HQ dataset (Karras et al., 2017); the testing pairs are then generated in the same way as the training dataset (CelebA-Test). For the real LQ test dataset, we collect 1,000 LQ faces from CelebA (Liu et al., 2015) and 500 old photos from the web. We coarsely crop square regions in each image according to their face regions and resize them to 512 × 512 pixels using bicubic upsampling. In the end, we put all these images together to form the real LQ test dataset containing 1,500 real LQ faces (Real-Test). Implementation. We adopt the Adam optimizer (Kingma and Ba, 2014) with β1 = 0.9, β2 = 0.99, and ε = 10^−8 to train our MPCNet with a batch size of 8. During training, we augment the training images by random horizontal flipping. The learning rate is initialized to 2 × 10^−4 and then halved whenever the reconstruction loss stops dropping on the validation set. Our model is implemented in the PyTorch framework using two NVIDIA RTX 2080Ti GPUs.
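The optimizer setup described above can be sketched as follows; the plateau-based halving is approximated with PyTorch's ReduceLROnPlateau scheduler, which may differ from the authors' manual schedule, and the placeholder module stands in for the full MPCNet.

```python
import torch
import torch.nn as nn

mpcnet = nn.Conv2d(3, 3, 3, padding=1)   # placeholder for the real MPCNet
optimizer = torch.optim.Adam(mpcnet.parameters(), lr=2e-4,
                             betas=(0.9, 0.99), eps=1e-8)
# Halve the learning rate when the validation reconstruction loss stalls;
# call scheduler.step(val_rec_loss) once per validation pass.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5)
```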
Evaluation Index
For synthetic test datasets with ground truth, two widely used image quality assessment indexes, peak signal-to-noise ratio (PSNR) (Hore and Ziou, 2010) and structural similarity (SSIM) (Wang et al., 2004), are used as the criteria for evaluating the performance of models. The mean squared error is defined as

MSE(x, y) = (1/n) Σ_{i=1}^{n} (x_i − y_i)^2,

where x is the target image; y is the HQ image generated from the LQ image; x_i and y_i represent the values of the i-th pixel in x and y, respectively; and n denotes the number of pixels in the image. Then we calculate the PSNR as follows:

PSNR(x, y) = 10 · log_10(MAX^2 / MSE(x, y)),

where MAX denotes the maximum possible pixel value of the image. It is set to 255 in our experiments since the pixels of the images are represented using 8 bits per sample. PSNR is used to evaluate the performance of the proposed method in reconstructing HQ images. Instead of measuring the pixel-wise error between the ground-truth HQ image and the reconstructed HQ image, Wang et al. (2004) proposed the SSIM metric, and the SSIM value of the reconstructed HQ image y is computed as follows:

SSIM(x, y) = ((2µ_x µ_y + C_1)(2σ_xy + C_2)) / ((µ_x^2 + µ_y^2 + C_1)(σ_x^2 + σ_y^2 + C_2)),

where µ_x, µ_y, σ_x, σ_y, and σ_xy represent the local means, SDs, and cross-covariance for images x and y, respectively. C_1 = (k_1 L)^2 and C_2 = (k_2 L)^2 are variables to stabilize the division with a weak denominator, where L is the dynamic range of the pixel values (set to 255) and k_1 and k_2 are set to 0.01 and 0.03 in our experiments. Besides, since pixel-space metrics are based only on local distortion measurement and are inconsistent with human perception, the Learned Perceptual Image Patch Similarity (LPIPS) score (Zhang et al., 2018b) is adopted to evaluate the perceptual realism of generated faces. For the real LQ test dataset without ground truth, the widely used non-reference perceptual metric Fréchet Inception Distance (FID) (Heusel et al., 2017) is used as the criterion for evaluating the performance of the models. We choose 3,000 HQ images from the CelebA-HQ dataset as the reference dataset to evaluate the results on the real LQ test dataset.
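A small NumPy implementation of the PSNR computation (with MAX = 255 for 8-bit images) is given below for reference; SSIM and LPIPS are typically computed with existing library implementations.

```python
import numpy as np

def psnr(x, y, max_val=255.0):
    """PSNR between the target x and the restored image y (uint8 arrays)."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```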
Ablation Study
We further conduct an ablation study to verify the superiority of our multi-prior collaboration framework (see Figure 6). To demonstrate the superiority of our prior-integration method, we remove the modules separately and visualize comparison results of the different variants. The characteristics of the model variants used in the ablation study are summarized in Table 1. Pretrained GAN prior: w/o GAN prior denotes the basic model consisting of the decoder part of the U-shaped DNN, which leverages the encoded intermediate spatial features and parsing map priors to restore the HQ face while the generative priors are abandoned. This model is in essence equivalent to a face restoration network guided only by parsing map priors and is included here to demonstrate the importance of generative priors. As the comparison between MPCNet and w/o GAN prior in Figure 7 and Table 2 shows, it is evident that the GAN priors provide diverse and rich facial details for our BFR task.
Pretrained parsing map prior: w/o Parsing map prior denotes the model consisting of the decoder part of the U-shaped DNN, which leverages the encoded intermediate spatial features and generative priors to restore the HQ face while the parsing map priors are abandoned. This model is in essence equivalent to a face restoration network guided only by generative priors and is included here to demonstrate the importance of parsing map priors. As the comparison between MPCNet and w/o Parsing map prior in Figure 7 and Table 2 shows, it is evident that the parsing map priors provide the geometry and semantic information that covers the shortage of GAN priors and further improves the fidelity of the restored face image.
AdaIN: w/o AdaIN denotes the model consisting of the decoder part of the U-shaped DNN, which leverages the encoded intermediate spatial features with both types of facial priors to restore the HQ face while AdaIN is abandoned. This model is included here to demonstrate the importance of AdaIN. As the comparison between MPCNet and w/o AdaIN in Figure 7 and Table 2 shows, it is evident that the AdaIN module effectively translates the content features to the desired style and thus makes the illumination condition of the restored face consistent with the original input. Spatial feature transform: w/o SFT denotes the model consisting of the decoder part of the U-shaped DNN, which leverages the encoded intermediate spatial features with both types of facial priors to restore the HQ face while SFT is abandoned. This model is included here to demonstrate the importance of SFT. As the comparison between MPCNet and w/o SFT in Figure 7 and Table 2 shows, it is evident that the SFT module makes full use of the parsing map priors to guide the face restoration branch to pay more attention to reconstructing the essential facial parts.
Comparison of Synthetic Dataset for BFR
To quantitatively compare MPCNet with other state-of-the-art methods: WaveletSRNet, Super-FAN (Bulat and Tzimiropoulos, 2018), DFDNet (Li et al., 2020a), HiFaceGAN (Yang et al., 2020), PSFRGAN (Chen et al., 2021), and GPEN, we first perform experiments on synthetic images. Following the comparison experiment settings of Yang et al. (2021), we directly compare with these state-of-the-art models trained by the original authors. Except for Super-FAN, we adopt their official codes and fine-tune them on our face training set for fair comparison. Table 3 lists the perceptual metrics (FID and LPIPS) and pixel-wise metrics (PSNR and SSIM) on the CelebA-Test set. Our MPCNet achieves PSNR and SSIM indices comparable to those of the competing methods, but it achieves significant performance gains over all competing methods on the FID and LPIPS indices, which are better measures of face image perceptual quality than PSNR. Figure 8 compares the BFR results on several degraded face images. The competing methods fail to produce reasonable face reconstructions; they tend to generate over-smoothed face images with distorted facial structures. Owing to the powerful generative facial prior, our MPCNet produces visibly more faithful reconstructions. Figures 9, 10 illustrate the qualitative SR results for two non-integral scale factors. As shown in the zoomed-in regions, our MPCNet produces better visual results than the other methods, with fewer artifacts. For example, GPEN and PSFRGAN cannot reliably recover the eye and mouth regions and suffer from obvious distortion artifacts. In contrast, our MPCNet produces finer details.
Experiments on Different Types of Blur Kernel Degradations
We adopt 4 Gaussian blur kernels of different sizes and 4 motion blur kernels in four different directions to test the BFR performance of the competing methods. As can be observed from Table 5, HiFaceGAN produces relatively low performance on complex degradations; since HiFaceGAN is sensitive to degradation estimation errors, its performance on complex degradations is limited. By incorporating the prior features from the pretrained face synthesis GAN and face parsing network in an adaptive and progressive manner, our MPCNet exhibits good generalization on complex degradations. Figure 11 further illustrates the visualization results produced by the different methods. Our MPCNet achieves much better visual quality, while the other methods suffer from obvious blurring artifacts.
Experiments on Different Levels of Noise Degradation
We set 6 noise levels to evaluate the restoration performance of the competing methods. In Table 6, we present the PSNR numbers for all noise levels. Since each APFF block integrates generative priors and parsing map priors to generate the fusion feature maps for guiding face restoration, when applied to complicated degradation scenarios the fusion feature maps can correctly determine where to incorporate guidance prior features in an adaptive manner, making our MPCNet outperform all the competing algorithms at all noise levels. Figures 12, 13 present the visual comparisons: our network outperforms all the other techniques listed in Table 6 and produces images with the best perceptual quality. Closer inspection of the eye, nose, and mouth regions reveals that our network generates textures closest to the ground truth, with fewer artifacts and more details, at all noise levels.
FIGURE 11 | Visual comparison achieved on noise-free degradations with different blur kernels. The blur kernels are illustrated with green boxes; the kernel widths are set to 10.
Comparison of Real-World LQ Images
To test the generalization ability, we evaluate our model on the real-world dataset. The quantitative results are shown in Table 7. Our MPCNet achieves superior performance and shows remarkable generalization capability. Although GPEN also obtains comparable perceptual quality, it still fails to recover faithful face details, as shown in Figures 14, 15.
The qualitative comparisons are shown in Figures 14, 15. The cropped LR face images from the real-world images in Figures 14, 15 are 24 × 24 pixels and 36 × 36 pixels, and we rescale them to the fixed MPCNet input size of 512 × 512 pixels; the scale factors of the visual comparisons are thus 21.4× and 14.2×, respectively. MPCNet seamlessly integrates the advantages of generative priors and face-specific geometry priors to restore real-life photos with faithful facial details. Since the generative priors provide adequate details and the parsing map priors provide geometry and semantic information, our method produces plausible and realistic faces under complicated real-world degradation, while other methods fail to recover faithful facial details or produce artifacts. Not only does our method perform well on common facial components like the mouth and nose, but it also performs better on hair and ears, as the parsing map priors take the whole face into consideration rather than separate parts.
CONCLUSION
We have proposed MPCNet to seamlessly integrate the advantages of generative priors and face-specific geometry priors. Specifically, we pretrained an HQ face synthesis GAN and a parsing mask prediction network and then embedded them into a U-shaped DNN as decoder priors to guide face restoration, during which the generative priors provide adequate details and the parsing map priors provide geometry and semantic information. By designing an adaptive priors feature fusion (APFF) block to incorporate the prior features from the pretrained face synthesis GAN and face parsing network in an adaptive and progressive manner, our MPCNet exhibited good generalization in real-world applications. Experiments demonstrated the superiority of our MPCNet over state-of-the-art methods and also showed its potential in handling real-world LQ images from several practical applications. | 2022-02-04T14:19:14.484Z | 2022-02-04T00:00:00.000 | {
"year": 2022,
"sha1": "f4575ec917f32c29afedf7d967e8d589a5652f5a",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnbot.2022.797231/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "f4575ec917f32c29afedf7d967e8d589a5652f5a",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
16812182 | pes2o/s2orc | v3-fos-license | Pixelized Gas Micro-well Detectors for Advanced Gamma-ray Telescopes
We describe possible applications of pixelized micro-well detectors (PMWDs) as three-dimensional charged particle trackers in advanced gamma-ray telescope concepts. A micro-well detector consists of an array of individual micro-patterned gas proportional counters opposite a planar drift electrode. When combined with pixelized thin film transistor (TFT) array readouts, large gas volumes may be imaged with very good spatial and energy resolution at reasonable cost. The third dimension is determined by timing the drift of the ionization electrons. The primary advantage of this technique is the very low scattering that the charged particles experience in a gas tracking volume, and the very accurate determination of the initial particle momenta that is thus achieved. We consider two applications of PMWDs to gamma-ray astronomy: 1) A tracker for an Advanced Compton Telescope (ACT) in which the recoil electron from the initial Compton scatter may be accurately tracked, greatly reducing the telescope's point spread function and increasing its polarization sensitivity; and 2) an Advanced Pair Telescope (APT) whose angular resolution is limited primarily by the nuclear recoil and which achieves useful polarization sensitivity near 100 MeV. We have performed Geant4 simulations of both these concepts to estimate their angular resolution and sensitivity for reasonable mission designs.
INTRODUCTION
The next generation of medium-energy (0.5 - 50 MeV) and high-energy (30 MeV - 100 GeV) gamma-ray telescopes (Compton scatter and pair production telescopes, respectively) will require a substantial improvement in angular resolution in order to greatly improve on the sensitivity of previous and currently-planned missions. In both cases, accurate imaging, which decreases the relative influence of background, relies on a good knowledge of the momenta of secondary particles produced in the primary gamma-ray interaction. These secondary particles are the scattered gamma-ray and recoil electron in the case of Compton scattering, and the electron-positron pairs in the case of pair production. Precisely recording these momenta also enables various background-rejection techniques and greatly increases the sensitivity of the telescope to the polarization of the incident radiation.
The initial secondary particle momenta are masked by poor spatial resolution and by multiple Coulomb scattering of charged particles within the detector materials. These factors have contributed to an enlarged point spread function (PSF) in current gamma-ray instruments and, in the case of pair production telescopes, have totally suppressed the polarization sensitivity. Improving this picture will require a low-density tracking medium with high spatial readout resolution. We therefore propose basing future gamma-ray instruments on micropattern gas detectors. Here we outline possible designs for Compton and pair telescopes using pixelized gas micro-well detectors under development at NASA/GSFC.
PIXELIZED MICRO-WELL DETECTORS
The micro-well detector (MWD) is a type of gas proportional counter based on micro-patterned electrodes (Deines-Jones et al., 2002a,b). Each sensing element consists of a charge-amplifying well (Fig. 1). The cathode and anode electrodes are deposited on opposite sides of an insulating substrate. The well is formed as a cylindrical hole through the cathode and substrate, exposing the anode. An array of such wells forms a detector, with the active tracking volume bounded by a drift electrode. Ionization electrons produced by the passage of a fast charged particle drift toward the anodes and into the wells. An ionization avalanche occurs in each well, where an intense electric field is set up by the voltage applied between the anode and cathode. Micro-well technology is very robust, and allows large areas to be read out with good spatial (∼ 100 µm) and energy (18% FWHM at 6 keV) resolution at low cost.
We are working with our collaborators Thomas Jackson, Bo Bai, and Soyoun Jung at Penn State University to develop pixelized micro-well detectors (PMWDs) to enable true imaging of charged particle tracks. In this approach, each anode pad is connected to an element of a thin-film transistor (TFT) array. The individual transistor gates are connected in columns, and the outputs are connected in rows. The gate drivers for each column are then activated sequentially, allowing the charge collected on the anode pads to be read out by charge-integrating amplifiers at the end of each row. Thus a two-dimensional projected image of the charged particle track is recorded. The third dimension may be determined by measuring the drift time of the ionization electrons using the signals from the cathodes. Ideally, the MWD and TFT array would be fabricated together as a single unit on a robust, flexible substrate such as polyamide (e.g., Kapton™).
ADVANCED PARTICLE TRACKER
These combined PMWD-TFT arrays will be assembled into modular detector units called three-dimensional track imaging detectors (TTIDs), as shown in Fig. 2. Each TTID comprises two 30 cm × 30 cm, back-to-back PMWD-TFT arrays bounded by drift electrodes (5 cm drift distance on each side) and field-shaping electrodes on the four walls. The front end electronics, gate drivers, and timing electronics, together with their high-density interconnects, are distributed around the periphery of the module and then folded up along the walls. We have developed a concept for a large-volume charged particle tracker based on TTID modules, shown schematically in Fig. 3. We assume the PMWD-TFT arrays have a pitch of 200 µm and that a Xe/CO2 gas mixture (98%/2%) is used. We also assume 10 ns timing resolution, which gives a drift distance resolution of 140 µm for the maximum electron drift velocity in this gas mixture. The walls of the TTIDs are made of polyamide 300 µm thick. The TTID modules are grouped into 30 cm × 30 cm cubes, and arranged such that the drift direction in each cube is rotated 90° relative to adjacent cubes; this gives a "stereo" view of extended tracks. A 3 m diameter pressure vessel would allow 61 cubes to be arranged inside per layer, giving 54900 cm^2 of geometric area while fitting within the payload fairing of a Delta III or IV launcher.
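The quoted 140 µm figure follows directly from Δz = v_drift × Δt; the short Python check below makes the arithmetic explicit, where the maximum drift velocity of about 1.4 cm/µs is inferred from the stated numbers rather than quoted from the text.

```python
# Drift-distance resolution from timing: dz = v_drift * dt.
v_drift_cm_per_us = 1.4        # inferred maximum drift velocity in Xe/CO2
dt_us = 0.010                  # 10 ns timing resolution
dz_um = v_drift_cm_per_us * dt_us * 1e4   # convert cm to micrometers
print(f"drift resolution = {dz_um:.0f} um")  # -> 140 um
```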
We have used Geant4.5.1 to simulate the performance of such a particle tracker. Due to known errors in the multiple Coulomb scattering process in later versions of Geant4, we used a slightly modified version of the multiple scattering class from version 3.2. In the following we describe our preliminary results for two possible uses of the charged particle tracker: an advanced Compton telescope and an advanced pair telescope.
ADVANCED COMPTON TELESCOPE
The Advanced Compton Telescope (ACT) is envisioned as a ∼ 100-fold increase in sensitivity over that of CGRO/COMPTEL, the only Compton telescope that had enough sensitivity to make astronomical observations (Schönfelder et al., 1993). Part of this increase can be achieved by accepting larger Compton scatter angles, increasing the effective area. The rest will have to come from a dramatic decrease in the telescope PSF, which reduces the area of the sky from which a given source's photons could have originated. This will reduce contamination both from internal background and from nearby sources.
There are two components to the PSF of a Compton telescope (Bloser et al., 2004a). The first is the error in the computed scatter angle ∆φ. (This is often referred to as the angular resolution measure, or ARM.) This width is determined by the spatial and energy resolution of the detectors that make up the telescope. The second component, ∆θ, is roughly given by the error in the measurement of the recoil electron's initial direction, projected onto the plane perpendicular to the scattered photon direction. COMPTEL was not able to track the recoil electron at all, and so ∆θ = 2π. The PSF for a single photon was thus an annulus on the sky (the "event circle") with the diameter given by the scatter angle φ and the width by ∆φ. The total angular area of the PSF, A = sin φ ∆φ ∆θ, was therefore quite large for all but the smallest scatter angles. The ACT must accept scatter angles up to ∼120° or greater, and so good electron tracking may well be critical to keep the PSF, and therefore background, within reasonable limits. This is particularly true for good polarization sensitivity, since the maximum polarization signal will be recorded for events with φ ∼ 90° (Bloser et al., 2004a).
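To illustrate how electron tracking shrinks the PSF, the snippet below evaluates A = sin φ ∆φ ∆θ for an illustrative 90° scatter with a 2° ARM width, comparing no electron tracking (∆θ = 2π, as for COMPTEL) against a hypothetical 10° tracking accuracy; all numerical values here are examples, not taken from the text.

```python
import numpy as np

phi = np.radians(90.0)     # Compton scatter angle (illustrative)
dphi = np.radians(2.0)     # ARM width (illustrative)
for dtheta in (2.0 * np.pi, np.radians(10.0)):   # untracked vs tracked
    area_sr = np.sin(phi) * dphi * dtheta        # PSF solid angle (sr)
    print(f"dtheta = {np.degrees(dtheta):6.1f} deg -> A = {area_sr:.4f} sr")
```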
We have performed Geant4 simulations of an ACT concept using the gas particle tracker described in Sec. 3 with a depth of 2.7 m, filled with Xe/CO2 gas with a pressure of 3 atm. The tracker is surrounded by a calorimeter made up of CsI pixels to absorb the scattered photon. We assumed CsI pixel dimensions of 5 mm × 5 mm, with a depth of 5 cm on the sides of the tracker and 10 cm on the bottom. We optimistically assumed an energy resolution in the CsI of 5% (FWHM) at 662 keV, which is one of the main contributors to ∆φ. In addition, we used the G4LECS package (Kippen, 2004) to calculate the Doppler broadening, a slight increase in ∆φ due to scattering off bound electrons with unknown momenta in their atomic shells. We simulated 2 MeV photons entering the telescope on-axis, and applied a simple detector response (diffusion of drift electrons in the gas, energy resolution, and binning into pixels) and event reconstruction to the output of the simulation. Our results revealed a flaw in this ACT tracker design, which will be corrected in the next iteration: the recoil electrons lose energy in the polyamide walls of the TTID modules, leading to large errors in the measured electron energy. This can be seen in Fig. 4, which shows the total energy spectrum recorded by the telescope for two cases: 300 µm thick polyamide TTID walls (dotted curve), and TTID walls artificially set to vacuum (solid curve). Such large energy losses in the tracker due to the TTID walls lead both to incorrect energy measurements and to incorrect image reconstruction. In future simulations, the TTID modules will be made larger (50 cm on a side or more), and each will be surrounded by its own mini-calorimeter, so that electrons leaving the tracking volume will immediately have the remainder of their energy measured. For the present scheme, the total energy resolution at 2 MeV is about 5% FWHM.
To demonstrate the imaging capability of our ACT concept, we present in Fig. 5 the image derived at 2 MeV by simple back projection of individual events (without polyamide walls). A fit with a 2-dimensional Gaussian gives a 1σ width of 5.6°. The ability to perform "true" imaging by back projection in this manner is a great simplification over COMPTEL and other instruments with poor electron tracking, which require complicated reconstruction methods to produce an image from overlapping event circles. The effective area of this ACT concept at 2 MeV is ∼3000 cm^2. Calculation of the sensitivity will require a realistic estimate of the in-flight background.
ADVANCED PAIR TELESCOPE
The angular resolution of a pair production telescope is limited by the multiple scattering of the electron and positron in the detector material and by the unknown recoil of the particle (nucleus or electron) in whose field the pair conversion took place. Hunter et al. (2001) have shown that a pair telescope can nearly achieve recoil-limited resolution, approaching 1 arcmin above a few GeV, if the density of the tracking medium can be made less than ∼2 × 10^−5 radiation lengths (RL) per track measurement interval. Bloser et al. (2004b) showed that a pair telescope that achieved this density should in principle also be moderately sensitive to polarization at ∼100 MeV, since the azimuthal plane of the pair is weakly correlated with the polarization vector.
We have simulated a concept for an Advanced Pair Telescope (APT) using the particle tracker of Sec. 3. The desired density of 2 × 10^−5 RL per measurement is met with our 200 µm pitch if we use a Xe/CO2 gas mixture at a pressure of 1.5 atm. A depth of 5.1 m then provides ∼0.5 RL of total interaction depth for pair conversion, similar to previous instruments. The telescope does not include a massive calorimeter; rather, the energy of the pair particles may be estimated by their average degree of scattering. (For this reason the small amount of energy lost by electrons with several tens of MeV in the polyamide walls is of no concern for our APT concept.) The same detector response is applied as for the ACT simulations, and the incident photon direction is found by adding the momenta of the electron and positron. We made use of a pair conversion class for Geant4 that includes the effects of polarization on the cross section (Bloser et al., 2004b) to estimate the polarization sensitivity at 100 MeV.
The derived angular resolution, defined as the angular radius Θ68 containing 68% of the total events, is shown in Fig. 6. The resolution is nearly an order of magnitude worse than that predicted by Hunter et al. (2001). This is due to the diffusion of ionization electrons as they drift to the wells; for our Xe/CO2 gas mixture the initial ionization cloud has spread by σd ∼ 1.1 mm after drifting for 5 cm (Peisert & Sauli, 1984).
Figure 6. Angular resolution of the APT concept as a function of energy. The resolution is limited by diffusion of ionization electrons in the gas.
This diffusion blurs the particle tracks and makes them hard to measure precisely, especially near the pair vertex where the tracks are close together. The current APT design does not have useful polarization sensitivity for the same reason. We are currently investigating means of reducing the diffusion in order to achieve the desired angular resolution and polarization sensitivity. The most promising method appears to be the addition of an electronegative gas (e.g., CS2), which causes the ionization electrons to attach themselves to the ions, which then drift with much smaller diffusion (Martoff et al., 2000). | 2014-10-01T00:00:00.000Z | 2004-05-14T00:00:00.000 | {
"year": 2004,
"sha1": "d6d53a4f6e7ae9f916ce56af4ae2b35a006985ae",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c03ccf3cde7d8887914d2d0e0a6537984e4ef8b6",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
6871475 | pes2o/s2orc | v3-fos-license | Oral and Conjunctival Exposure of Nonhuman Primates to Low Doses of Ebola Makona Virus
Nonhuman primate (NHP) models of Ebola virus (EBOV) infection primarily use parenteral or aerosol routes of exposure. Uniform lethality can be achieved in these models at low doses of EBOV (≤100 plaque-forming units [PFU]). Here, we exposed NHPs to low doses of EBOV (Makona strain) by the oral or conjunctival routes. Surprisingly, animals exposed to 10 PFU by either route showed no signs of disease. Exposure to 100 PFU resulted in illness and/or lethal infection. These results suggest that these more natural routes require higher doses of EBOV to produce disease or that there may be differences between Makona and historical strains.
The filoviruses, Ebola virus and Marburg virus, are among the most lethal human pathogens, with case-fatality rates of up to 90% [1]. The recent outbreak of Zaire ebolavirus (ZEBOV) infection in West Africa, with 28 638 cases and 11 316 fatalities, is the largest ever recorded [2]. This outbreak differs substantially from previous outbreaks not only in the number of people affected but also in the duration and geographic range. Natural questions arise as to why this outbreak is so different from past episodes. Poor public health infrastructure of the affected countries has been attributed as one factor. It is also possible that factors related to differences in virulence or transmissibility of the Makona strain of ZEBOV that caused the outbreak could have also played a role [3]. Filoviruses are transmitted by close contact with infected patients or contact with infectious body fluids. In natural settings, it is presumed that filoviruses gain entry to the host either through small abrasions in the skin or by contact with mucous membranes.
Previous studies in nonhuman primates (NHPs) with filoviruses have shown that very low doses of virus can be lethal. Specifically, for the Angola strain of Marburg virus, intramuscular injection of either 1 plaque-forming unit (PFU) or 10 PFU is uniformly lethal in cynomolgus monkeys [4], whereas small-particle aerosol delivery of doses ranging from 2 PFU to 59 PFU is also uniformly lethal [5]. For Sudan ebolavirus, aerosol exposure of cynomolgus monkeys to 50 PFU is uniformly lethal [6]. In regard to ZEBOV, intraperitoneal exposure of cynomolgus monkeys to 6 PFU of the Mayinga strain [7] or intramuscular exposure to 18 PFU of the Kikwit strain [8] caused uniform lethality. Additionally, Reed et al concluded that the median lethal dose of the Kikwit strain for small-particle aerosol exposure in cynomolgus monkeys is <10 PFU [9]. While numerous studies have used parenteral or aerosol routes of exposure in macaques, only 1 study has assessed transmission of a filovirus by oral or conjunctival exposure. Specifically, Jaax et al showed that oral or conjunctival exposure of rhesus monkeys to high doses (158 000 PFU) of the Mayinga strain of ZEBOV caused a lethal infection [10]. However, no study has assessed low-dose exposure of any filovirus by the oral or conjunctival routes. Here, we performed a narrowly focused study to examine whether mucosal exposure of cynomolgus monkeys to low doses of the Makona strain of ZEBOV can produce a lethal infection.
Animal Challenge
Six healthy adult cynomolgus macaques (Macaca fascicularis) of Chinese origin (weight range, 4.3-7.0 kg; age range, 4-8 years) were used for these studies. In an initial study, 2 animals were exposed to a target dose of 10 PFU of ZEBOV Makona by droplet administration into the medial canthus of each eye, whereas 2 animals were exposed to a target dose of 10 PFU of ZEBOV Makona by droplet administration to the oropharynx. In a second study, 1 animal was exposed to a target dose of 100 PFU of ZEBOV Makona by droplet administration into the medial canthus of each eye, while 1 animal was exposed to a target dose of 100 PFU of ZEBOV Makona by droplet administration to the oropharynx. All 6 animals underwent physical examinations, had swab specimens collected from mucosal surfaces, and had blood specimens collected at the time of challenge and on days 2, 3, 4, 5, 6, 7, 8, 10, 15, 21, and 28 (for the 10 PFU study) or 3, 5, 7, 9, 14, 21, and 28 (for the 100 PFU study) after virus challenge. Any surviving animals were euthanized on day 28.
Animal studies were completed under biosafety level 4 biocontainment at the Galveston National Laboratory (GNL) and were approved by the University of Texas Medical Branch (UTMB) Institutional Laboratory Animal Care and Use Committee, in accordance with state and federal statutes and regulations relating to experiments involving animals, and the UTMB Institutional Biosafety Committee.
Detection of Viremia and Viral RNA
RNA was isolated from samples using the Viral RNA Mini Kit or RNeasy Kit (Qiagen), using 100 µL of blood/swab in 600 µL of buffer AVL or 100 mg of tissue, respectively, per the manufacturer's instructions. Primers/probe targeting the VP30 gene of all known ZEBOV strain sequences were used for quantitative reverse transcription polymerase chain reaction (qRT-PCR), with the probe used here being 6-carboxyfluorescein (6FAM)-5′-CCGTCAATCAAGGAGCGCCTC-3′-6-carboxytetramethylrhodamine (TAMRA) (Life Technologies). ZEBOV RNA was detected using the CFX96 detection system (BioRad) with one-step probe qRT-PCR kits (Qiagen), with cycle conditions and genomic RNA standard determination as previously described [11,12]. Virus titration was performed by plaque assay on Vero E6 cells using plasma samples as previously described [11,12]. In brief, increasing 10-fold dilutions of the samples were adsorbed to Vero E6 monolayers in duplicate wells (200 µL); the limit of detection was 5 PFU/mL.
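For readers unfamiliar with the quantification step, the sketch below shows how Ct values are typically converted to genome copies via a standard curve fit to the genomic RNA standards run alongside the samples; all numbers here are illustrative and not taken from this study.

```python
import numpy as np

# Fit a standard curve (log10 copies vs. Ct) from RNA standards.
std_log10_copies = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0])   # illustrative
std_ct = np.array([33.1, 29.8, 26.4, 23.0, 19.7, 16.3])       # illustrative
slope, intercept = np.polyfit(std_ct, std_log10_copies, 1)

def genome_copies_per_ml(ct, volume_factor=10.0):
    """Interpolate a sample Ct on the standard curve; volume_factor
    (an assumed value) scales from extracted input volume to per mL."""
    return volume_factor * 10.0 ** (slope * ct + intercept)
```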
Hematologic and Serum Biochemical Analysis
Total white blood cell counts, white blood cell differentials, red blood cell counts, platelet counts, hematocrit values, total hemoglobin concentrations, mean cell volumes, mean corpuscular volumes, and mean corpuscular hemoglobin concentrations were analyzed in blood specimens collected in tubes containing ethylenediaminetetraacetic acid, using a laser based hematologic analyzer (Beckman Coulter). Serum samples were tested for concentrations of albumin, amylase, alanine aminotransferase, aspartate aminotransferase, alkaline phosphatase, γ glutamyl transferase, glucose, cholesterol, total protein, total bilirubin, blood urea nitrogen, creatinine, and C-reactive protein by using a Piccolo point-of-care analyzer and Biochemistry Panel Plus analyzer discs (Abaxis).
Histopathologic and Immunohistochemical (IHC) Analyses
Necropsy was performed on all subjects. Tissue samples from major organs were collected for histopathological and IHC examination, immersion fixed in 10% neutral buffered formalin, and processed for histopathologic analysis as previously described [11,12]. For IHC analysis, specific anti-ZEBOV immunoreactivity was detected using an anti-ZEBOV VP40 protein rabbit primary antibody (Integrated BioTherapeutics) at a 1:4000 dilution. Tissue sections were processed for IHC analysis, using the Dako Autostainer (Dako). Secondary antibody used was biotinylated goat anti-rabbit immunoglobulin G (IgG; Vector Laboratories) at 1:200 followed by Dako LSAB2 streptavidin-horseradish peroxidase (Dako). Slides were developed with Dako DAB chromagen (Dako) and counterstained with hematoxylin. Nonimmune rabbit IgG was used as a negative control.
RESULTS AND DISCUSSION
No study has examined the pathogenicity of the Makona strain of ZEBOV in NHPs when given at low doses and using routes of exposure more likely to be associated with a natural outbreak. In an initial study, we exposed 4 cynomolgus macaques to a target dose of 10 PFU by either the oral route (2 animals) or the conjunctival route (2 animals; the actual dose of 10 PFU received was determined by back titration of each inoculum). Surprisingly, none of the 4 animals showed any evidence of clinical illness, and there were no changes in any hematologic or serum biochemical parameters during the course of the study (Table 1). No infectious virus was detected by plaque assay in plasma from any of the 4 animals, and viral RNA was below the limit of detection as assessed by qRT-PCR in whole-blood specimens from these animals. With the exception of the nasal swab obtained on day 10 from subject O-2 (1.62 × 10^5 genome copies/mL), all swab samples were negative for all 4 animals at all time points. We were unable to isolate infectious virus from the day 10 nasal swab sample from this animal. Analysis of sera from all 4 animals at days 21 and 28 after exposure showed only low anti-ZEBOV GP ELISA IgG titers in subject O-1 (1:100 at day 21 and 1:50 at day 28). Histopathologic and IHC staining for ZEBOV antigen revealed no significant lesions and no viral antigen present in tissues of any of the animals.
As it appeared that exposure of cynomolgus macaques to a very low dose of ZEBOV Makona by either the oral or conjunctival route did not result in a productive infection causing clinical disease, we performed a second study to determine whether exposure to a slightly higher dose would cause disease in macaques. Two animals were exposed to a target dose of 100 PFU by either the oral route (1 animal) or the conjunctival route (1 animal; actual doses of 120 PFU and 150 PFU were determined by back titration of the oral and conjunctival inocula, respectively). The animal exposed by the oral route (subject O-3) initially showed signs of clinical illness on day 7, which was characterized by fever, anorexia, and the presence of a macular rash (Table 1). This animal rapidly deteriorated and was euthanized on day 8 after infection. A high circulating viremia level was noted in subject O-3 beginning at day 5 and increasing at day 8. Lesions observed by histopathologic staining were consistent with ZEBOV infection and included histiocytosis of axillary and inguinal lymph nodes, lymphocytolysis of white pulp and deposition of fibrin in red pulp of spleen, sinusoidal leukocytosis, rare multifocal necrotizing hepatitis, and numerous pleomorphic intracellular eosinophilic viral inclusion bodies in hepatocytes. These lesions had correlative strong positive IHC staining for anti-ZEBOV VP40 antigen of mononuclear cells (macrophages, monocytes, and dendritic cells) in red and white pulp of the spleen, sinusoidal spaces of the liver and rare hepatocytes, and mononuclear cells within sinuses of the inguinal and axillary lymph nodes (Figure 1A, 1C, 1E, and 1G). Conversely, the animal exposed by the conjunctival route (subject C-3) did not show any evidence of clinical illness, with the exception of lymphopenia on day 7 and anorexia on day 10; this animal survived to the study end point (Table 1). Low-level viremia was detected by plaque assay at day 14 in plasma from subject C-3 (Table 1), while viral RNA genomes were detected by qRT-PCR in nasal swabs obtained on days 7 and 9 and the oral swab sample obtained on day 9. A low anti-ZEBOV GP enzyme-linked immunosorbent assay IgG titer (1:50) was noted in subject C-3 at day 21. Histopathologic examination of tissues from this animal failed to show lesions that were consistent with ZEBOV infection. However, weak positive IHC staining for anti-ZEBOV VP40 antigen of mononuclear cells in red pulp of the spleen and in subcapsular and medullary sinuses of the inguinal and axillary lymph nodes (Figure 1B, 1F, and 1H) was noted at the study end point, although the liver was free of any antigen-positive cells (Figure 1D).
Previous studies have clearly demonstrated that low doses of the Kikwit or Mayinga strains of ZEBOV administered to macaques by parenteral injection or small-particle aerosol result in uniformly lethal disease [7][8][9]. Additionally, oral or conjunctival exposure of macaques to a high dose of ZEBOV Mayinga resulted in severe disease and near uniform lethality [10]. Our results show that low doses (10 PFU) of ZEBOV Makona administered by the oral or conjunctival routes resulted in minimal replication of virus and failed to cause a lethal infection. There are a number of possible explanations for this result. For example, the dose required to produce a lethal infection may be higher for exposure of conjunctival or pharyngeal mucosal membranes than for parenterally injected virus or small-particle aerosols delivered deep into the lungs. Each route represents different risks. Parenteral exposure simulates an accidental needle stick; aerosol exposure mimics deliberate release; and mucosal surface exposure simulates the most likely mode of contact during a natural outbreak. Current knowledge suggests that low-dose exposure from a laboratory accident or intentional release represents a significant risk of lethal ZEBOV infection to any exposed individual, whereas it appears that the threshold of the lethal dose for oral or conjunctival exposure may be at least 10-fold higher than for parenteral or aerosol exposure. While this was a narrowly focused study, our findings are a first step to understanding how much infectious virus is needed for transmission to occur in a natural outbreak. Alternatively, as has been suggested in other studies, there may be phenotypic differences between the Makona strain and the historical Mayinga and Kikwit strains [3]. As in our current study, the numbers are still too low with the Makona strain to have a completely sound comparison with the Mayinga and Kikwit strains, but initial steps must be taken to further our understanding.
Table 1 note: Days after ZEBOV challenge are in parentheses. Fever is defined as a temperature >1.4°C higher than the baseline level, or ≥0.7°C higher than baseline and ≥39.7°C, or ≥0.6°C higher than baseline and ≥40°C. Mild rash is characterized by focal areas of petechiae covering <10% of the skin. Lymphopenia and thrombocytopenia are defined as a ≥35% drop in numbers of lymphocytes and platelets, respectively. Leukocytosis is defined by a ≥2-fold increase in the white blood cell count, compared with baseline. Hypoalbuminemia is defined by a ≥35% decrease in levels of albumin.
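To make the compound thresholds in the Table 1 note concrete, the following is a minimal sketch encoding those case definitions; the function names and example values are illustrative, not part of the study.

```python
# Illustrative encoding of the Table 1 case definitions (names and example
# values are hypothetical; thresholds are taken verbatim from the note above).
def is_fever(temp_c: float, baseline_c: float) -> bool:
    delta = temp_c - baseline_c
    return (delta > 1.4
            or (delta >= 0.7 and temp_c >= 39.7)
            or (delta >= 0.6 and temp_c >= 40.0))

def is_cytopenia(count: float, baseline: float) -> bool:
    # Lymphopenia/thrombocytopenia: >=35% drop from baseline.
    return count <= 0.65 * baseline

def is_leukocytosis(wbc: float, baseline_wbc: float) -> bool:
    # Leukocytosis: >=2-fold increase in white blood cell count vs baseline.
    return wbc >= 2.0 * baseline_wbc

print(is_fever(39.8, 39.0), is_cytopenia(900, 1500))  # True True
```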
Future studies should focus on determining whether low doses of ZEBOV Makona are lethal by the more conventional intramuscular route and by the small-particle aerosol route, and on determining whether low doses of other ZEBOV strains are lethal if administered by the oral or conjunctival routes. | 2018-04-03T03:34:13.067Z | 2016-10-04T00:00:00.000 | {
"year": 2016,
"sha1": "eb5e76849921d4dd822ad7c329692a3180ba59a4",
"oa_license": "CCBYNCND",
"oa_url": "https://academic.oup.com/jid/article-pdf/214/suppl_3/S263/7935681/jiw149.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "eb5e76849921d4dd822ad7c329692a3180ba59a4",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
15954618 | pes2o/s2orc | v3-fos-license | Stress and Fear of Exposure to Sharps in Nurses
Background: Injuries caused by sharp objects, which involve biological hazards, are considered one of the most important factors leading to stress among nursing staff. Contact with sharp objects is a major concern among healthcare workers, especially nurses. Objectives: This study was done to determine the amount of stress caused by exposure to sharp medical instruments among nurses. Materials and Methods: This was a cross-sectional study of 527 nurses working at different medical centers across Iran, selected with a cluster-sampling method. The relevant data were collected with a valid and reliable questionnaire. The Cronbach's alpha coefficient of internal consistency of this instrument was 0.92 and the intraclass correlation coefficient was 0.94. Results: The results showed that ward satisfaction, holding a Master of Science degree, age, and number of contacts were significantly able to predict variance in stress scores. The adjusted linear regression model explained 36% of the overall variance in stress score (R = 0.60, R² = 0.36). Conclusions: The results of this study showed that exposure to sharp objects may cause high stress in the nursing staff. Considering the higher levels of stress in the area of contact care, provisions on how to deal with patients and deliver safe care can help reduce stress.
Background
Nowadays, with technological advances in medicine, the use of invasive procedures and injections in patients has expanded. As a result, health care staff are exposed to a high risk of blood-borne pathogens (1). Needle stick injury (NSI) is one of the most dangerous occupational hazards (2). Among 20 types of blood-borne pathogens transferred through NSI to health care workers, viral infections such as hepatitis B, hepatitis C and human immunodeficiency virus (HIV) are the most dangerous and common (3)(4)(5). Studies have shown that exposure to infectious diseases and NSI can cause stress in nurses (6). This particular type of stress has a negative impact on the individuals and influences their families and colleagues (7).
Stress represents a major occupational hazard in the modern age and has been implicated in reduced productivity, absence from work, displacement of personnel, work conflicts and higher health care costs for employees (8). In a study on the psychological consequences of NSI, Greene and Griffiths found that the severity of the illness caused by needle stick is as significant as other psychological traumas. Moreover, the depression resulting from this situation is similar to that of other psychological traumas, and the duration of the condition is associated with the time until the result of the second test is obtained. This has a far-reaching impact on family relationships, sexual health and presence at the workplace (9). Leigh et al. reported that, between 1992 and 2003, injury stress led to work absence among 903 nurses, physicians and workers; moreover, 7% of the injuries resulted in a loss of more than 31 working days (10). Naghavi et al. measured the incidence of post-traumatic stress disorder (PTSD) after injury among doctors. Results showed that 12% suffered from PTSD reactions, suggesting that psychological reactions to NSI deserve particular attention (11).
Despite numerous studies on the prevalence and incidence of injuries from sharp objects in Iran, no research using a dedicated instrument has examined the stress caused by exposure to these objects.
Materials and Methods
The present analysis, performed in 2014, had a cross-sectional design. The study subjects were nursing staff of several cities selected by cluster sampling in three stages. At first, four provinces were randomly chosen from different geographic areas of Iran. In the second stage, 11 hospitals of these cities were selected by the same form of simple randomization, and finally, the nurses were enrolled through census sampling from every shift. Hospitals were located in Tehran (four hospitals), Isfahan (three hospitals), Kerman (two hospitals) and Hamedan (two hospitals). Data were collected with a two-part questionnaire covering demographic characteristics and the stress of exposure to sharps injuries. The section related to the stress of exposure to sharps injuries contained 20 closed questions, with responses scored on a five-point Likert scale: very low (score of 1), low (score of 2), average (score of 3), high (score of 4), and very high (score of 5). The minimum total score that could be obtained was 20 and the maximum score was 100. The stress questionnaire had four dimensions: safety policy (five questions), occupational safety (five questions), contact nursing (four questions) and mental-environmental conditions (six questions). This questionnaire was developed by Moayed et al.; the Cronbach's alpha coefficient of internal consistency of this instrument was 0.92 and the intraclass correlation coefficient was 0.94 (12,13). The questionnaire assessed stress levels caused by exposure to sharp objects in different situations or circumstances, as well as the amount of stress for nursing staff. In this study, content validity was used to validate the demographic section of the instrument. The inclusion criterion was having at least one year of experience at the current workplace. Based on the sample-size formula for estimating a proportion, n = z²P(1 − P)/d², with P = 0.3 and d = 0.04, the sample size was calculated as 502; considering a 10% loss, 550 questionnaires were distributed, and 527 fully completed questionnaires were returned and included in the study.
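As a cross-check, here is a minimal sketch of the sample-size arithmetic described above; the conventional z = 1.96 for 95% confidence is an assumption, and small rounding differences against the reported 502 and 550 are expected.

```python
from math import ceil

# Sample size for estimating a proportion: n = z^2 * P * (1 - P) / d^2.
z, P, d = 1.96, 0.30, 0.04          # z = 1.96 assumed (95% confidence)
n = z**2 * P * (1 - P) / d**2       # ~504; the paper reports 502
n_distributed = ceil(n * 1.10)      # ~10% anticipated loss (550 distributed)
print(round(n), n_distributed)      # 504 555
```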
The research objectives and the study methods were approved by the ethical committee of Baqiyatallah University of Medical Sciences, Tehran, Iran, and the study subjects gave their written consent.
The analyses were performed using SPSS 16.0 (released 2007; SPSS for Windows, SPSS Inc., Chicago, IL, USA). Basic descriptive statistics were presented as mean (SD) for quantitative variables and n (%) for qualitative variables. Normality was examined with a Kolmogorov-Smirnov test. An adjusted linear regression model with 95% confidence intervals (CI) was used to predict the degree of stress in nurses exposed to sharp objects. Statistical significance was set at P < 0.05.
Results
Across the 527 collected questionnaires, the mean age of the study subjects was 35.7 (SD = 8.7) years; 41.2% were male and 58.8% were female. The average work experience was 12 (SD = 8.3) years, with a minimum of one year and a maximum of 40 years.
Of the subjects, 83.1% had a bachelor's degree and 58.3% reported a history of injury with a sharp object. Stress from the risk of injury by exposure to sharp objects was reported at 58.37%. The average stress score in nursing staff was 58.37 (SD = 15.08; 95% CI, 57.08 to 59.66). Table 1 confirms that changes in the independent variables were significantly able to predict variance in stress scores. The adjusted linear regression model explained 36% of the overall variance in stress score (R = 0.60, R² = 0.36) and significantly predicted the outcome, F(4, 521) = 8.26, P < 0.001. According to Table 1, the adjusted linear regression model identified ward satisfaction, holding a Master of Science degree, age, and number of contacts as risk predictors for the stress score in nurses exposed to sharp objects.
The stress of nurses was measured based on different domains of the questionnaire after standardization (Figure 1).
Discussion
This study used a proprietary questionnaire to investigate the stress of dealing with sharp objects among nursing staff. Mean overall stress of exposure to sharp objects was 58.4 ± 15.08%. Regarding the occupational safety domain, one study showed a significant relationship between carelessness of nurses and injuries from sharp objects, especially in the setting of an insufficient number of nurses (1.92 times), weariness and excitement (2.16 times), lack of resource support (1.88 times) and inexperience of nurses (1.74 times). Considering the actual conditions of a shortage of nurses, there is an increase in the degree and amount of damage secondary to injury (14). The amount of stress in the contact-care sector was 38.64%. The study of Torshizi and Ahmadi measured the domain of patient care stress in nurses using two items, exposure to body fluids and risk of infectious disease transmission, which were insufficient for evaluating the stress produced by sharp objects (15). Illustrating the severity and extent of contact-care stress, a study of two nurses injured during the care of an HIV patient found that, beyond emotional problems during a 22-month follow-up, the injuries led to depression, anxiety, insomnia and nightmares, even after the participants received negative laboratory results for any possible infection (16). Stress in the area of mental and environmental conditions was calculated as 46.02%.
Each of the previous studies focused on only one of the items in this area. O'Connor demonstrated that injury by sharp objects caused leave, absence, and prolonged work interruption due to anxiety or stress disorder in one out of 20 individuals, resulting in a reduced quality of work (17). The study of Seng et al. showed that only 146 out of 242 injured health workers reported their injuries to the authorities (4). Results of other studies showed that improving performance attitudes and increasing awareness and education are essential to control and reduce damage (18). Causes of damage are related to the workplace, including improper conditions for disposal of needles and sharp instruments, overcrowding, noise, heat, chaos, and lack of or inadequate protective equipment (gloves, goggles and gowns), which together account for 27.2% of cases, and to patient-related factors, such as sudden movements, improper use of equipment designed for the patient, and disease-related damage by sharp objects, which are responsible for 7.6% of the total causes (19).
In the area of safety policy, stress accounted for 43.40%. Serinken et al. showed that personnel-related factors, including failure to use protective equipment, carelessness, lethargy, and lack of proper training, caused 64.9% of incidents (19). It is noteworthy that only 36% of nurses injured by needle stick within the past year reported this to their supervisor, hospital emergency ward or infection control committee (20). Rampal et al., in Malaysia, showed a high level of awareness and knowledge of universal precautions; however, there is still a gap between knowledge and practice, which sustains the high rate of injury (21). Other results showed that health workers required more training (22). In another study, 38.3% of the subjects had a history of injury from needles and sharp objects within the last six months (23). We calculated the incidence of injuries with sharp objects at 58.3%, which differs from the study of Sharifian et al. because their investigation covered only a three-month period, whereas ours was performed over a longer time period (24).
However, the results of a study in Germany showed a reduced incidence of NSI in nurses inclined to use a safety box. The use of safety equipment reduced the rate of such injuries from 69% to 52% (25). In our study, despite the fact that more than 75% of the participants worked more than the required time, there was no significant difference in the amount of stress.
However, a study in Baltimore confirmed that working for 13 hours or more during a day marked a significant contribution to the incidence of NSI (26).
The results of this study showed that stress levels in nurses are highly increased by working with sharp objects, an issue that requires special attention from authorities, who should implement educational programs and raise the awareness and ability of staff to prevent injury and to improve skills of self-control in stressful situations. Mastery over the mind, the environment and the health service is the best course; in addition, ensuring risk-free environments, providing facilities to reduce injuries, and making managers understand their duty to follow up and to provide medical service and prophylaxis in the event of injury are necessary. | 2018-04-03T06:21:12.661Z | 2014-08-01T00:00:00.000 | {
"year": 2016,
"sha1": "aea8a9d529825b89b5dd8ce1ccbae3d089a212d4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5812/traumamon.17082",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "aea8a9d529825b89b5dd8ce1ccbae3d089a212d4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252546805 | pes2o/s2orc | v3-fos-license | Objectively measured adherence to physical activity among patients with coronary artery disease: Comparison of the 2010 and 2020 World Health Organization guidelines and daily steps
Background Tailored recommendations for patients after percutaneous coronary interventions (PCI) need physical activity (PA) to be objectively measured and assessed for adherence to guidelines. The recent WHO guidelines removed the daily recommended bout duration, while the potential impact of this change on patients after PCI remains unclear. Aim We evaluated prevalence estimates of adherence to PA recommendations among patients after PCI across the 2010 [≥30 min moderate- to vigorous-intensity PA (MVPA) at ≥ 10-min bout duration] and 2020 WHO guidelines (≥30 min of MVPA of any bout duration), as well as 7,500 and 10,000 steps. Methods We conducted an observational longitudinal single-center study with patients after PCI for chronic or acute coronary syndrome (ACS); maximal age 80 years. Wrist-worn accelerometers recorded participants’ PA data from the evening of hospital discharge over the next 18 days. Results We analyzed data from 282 participants with sufficient minimum wear time (7 days of ≥12 h), including 45 (16%) women; and 249 (88%) with ACS. Median wear time was 18 (17, 18) days. Median participant age was 62 (55, 69) years. Fifty-two participants (18.4%) fulfilled 2010 WHO guidelines and 226 (80.1%) fulfilled the 2020 WHO guidelines. Further, 209 (74.1%) participants achieved ≥7,500 steps/day and 155 (55.0%) performed ≥10,000 steps/day. Conclusion Among participants after PCI, most MVPA was accumulated in bouts <10 min, leading to a fourfold discrepancy between participants fulfilling the 2010 and 2020 WHO PA recommendations. The number of steps/day may be a valid proxy to recent WHO PA recommendations as it is not dependent on the bout-length definition. Clinical trial registration [ClinicalTrials.gov], identifier [NCT04663373].
Introduction
Cardiovascular disease (CVD) remains the leading cause of death globally (1). Recent studies have found that lower levels of objectively measured physical activity (PA) were associated with higher rates of hospital readmission and adverse outcomes among patients after acute myocardial infarction, cardiac surgery, or decompensated heart failure (2)(3)(4). Similarly, daily steps have been associated with CVD risk factors and cardiometabolic outcomes (5,6). In addition, a curvilinear relationship between PA volume and health benefits has been demonstrated, suggesting that the most significant reductions in morbidity and premature death were achieved with increases in PA among patients with coronary heart disease (CHD) (7) and healthy people at the lowest level of the PA spectrum (8). A recent meta-analysis on PA trajectories among patients with CHD provided evidence supporting the benefits of maintaining or adopting an active lifestyle to improve survival and the possible harms of decreasing PA (9). For instance, compared to always-inactive patients, the pooled risk of all-cause mortality was 50% lower in those who remained active [HR (95% CI) = 0.50 (0.39-0.63)], 45% lower in those who were inactive but became active [0.55 (0.44-0.7)], and 20% lower in those who were active but became inactive [0.80 (0.64-0.99)] (9). PA is a foundational therapy for patients with CHD. Therefore, it is crucial to identify patients with low levels of PA, increase their PA, and facilitate a tailored cardiac care approach (10).
According to WHO's 2020 "Guidelines on Physical Activity and Sedentary Behaviour," adults should be physically active for 150-300 min per week with moderate intensity, 75-150 min per week with vigorous intensity, or an equivalent combination of the two to achieve substantial health benefits (11,12). Moderate- to vigorous-intensity PA (MVPA) has been defined as a metabolic demand of greater than three times resting (3 METs) (12). PA is most commonly assessed by commercial accelerometers calibrated against measurements by metabolic carts, so that accelerations during activities with >3 METs are classified as MVPA (13,14). Some calibration studies of accelerometers used steady-state activities, such as walking, running, and cycling, which require >3 METs when performed continuously for longer than 1-2 min, once metabolism has reached a steady state (13,14). When such accelerations occur for only a few seconds, they do not lead to an energy consumption of >3 METs. Therefore, the 2010 WHO guidelines recommended performing MVPA in bouts of at least 10 min, during which the MVPA threshold had to be reached 80% of the time. However, this bout requirement was dropped in the 2020 WHO guidelines because new evidence suggested that MVPA bouts <10 min also have beneficial effects on health and were associated with reduced all-cause mortality (15). The consequence of not requiring a minimal bout duration is that accelerations from single movements may be counted toward MVPA or a step count goal even if a person never exceeds 3 METs during an entire day. Therefore, the same volume and intensity of activities may result in varying minutes of MVPA when measured and analyzed by different commercial accelerometers whose algorithms are not available to the user.
Since walking is often the chosen exercise for people with heart disease, an alternative criterion to quantify PA is the number of steps (6); steps per day is a practical PA measure because it is an easy-to-understand recommendation (16,17). The commonly used artificial recommendation of 10,000 steps per day, promoted by a Japanese pedometer company in the 1960s (18), was not based on scientific evidence, yet it has been used as the threshold value for providing health benefits in several studies (6,(19)(20)(21)(22). Although achieving 10,000 steps/day was associated with meeting PA guidelines (20), there is no conclusive evidence about how many steps per day are required for better health outcomes (16). For instance, Lee et al. found that hazard ratios associated with mortality continuously decreased with an increasing mean of daily steps among older women, leveling off at around 7,500 steps/day (16). Other studies supported a threshold of 7,500 steps per day for patients with cardiac conditions to reduce CVD risk factors, CVD morbidity and mortality, as well as all-cause mortality (5,6,23).
The newest ESC guidelines recommend the use of activity trackers for physically inactive patients with CVD. However, using different evidence-based PA criteria may influence prevalence estimates, therapy recommendations, and tools to promote PA among these patients. Therefore, comparing prevalence across guidelines may help determine actionable recommendations for patient benefit. Thus, we evaluated prevalence estimates of adherence to PA recommendations across different guidelines among participants with coronary artery disease who recently underwent percutaneous coronary interventions (PCI) and wore a wrist accelerometer over 18 days after hospital discharge.
Study population
Our study is a substudy of the Prognostic Impact of Physical Activity Patterns After Percutaneous Coronary Intervention (PIPAP) study (ClinicalTrials.gov identifier: NCT04663373), a prospective observational single-center study that monitors patients' PA and assesses the potential of acceleration and step parameters for risk quantification. The PIPAP study was approved by the Ethics committee of the Canton of Bern, Switzerland.
We recruited consecutive patients hospitalized for PCI after acute or chronic coronary syndrome (ACS, CCS) on their day of discharge or one day before discharge, from December 2020 to March 2022. Substudy participants were provided with a wrist-worn accelerometer; a study information sheet, including an informed consent form; and a pre-addressed, prepaid envelope to return the signed consent form and accelerometer after the study period. Participants were asked to wear the accelerometer for 18 successive days starting from the evening of the day of their discharge from the hospital. We included patients who were aged <80 years and eligible for ambulatory cardiac rehabilitation, which de facto excluded patients who were frail or cognitively impaired. We also excluded study participants who did not record PA data for ≥7 days with ≥12 h of wear time.
Physical activity monitoring
Participants wore tri-axial accelerometers (Axivity AX-3, Axivity Ltd., Newcastle, UK) on their non-dominant wrist for 18 days. We programmed the devices using AX3 GUI V43 (24), an open-source software tool, to record tri-axial accelerations of ±8 g at 50 Hz for 18 days starting on the evening of the day of the participant's hospital discharge. We chose 18 days to capture at least 14 days of PA data from participants who were transferred to another hospital before returning home. Transfer to another hospital usually delayed hospital discharge by 1-3 days.
Physical activity data processing
Using AX3 GUI V43, we downloaded PA data as continuous wave accelerometer (.cwa) files and then processed the PA data with the research-driven open-source R package GGIR (version 2.4.0) (25,26). We derived participants' demographic (age and sex) and PCI data from the participating clinic's patient information system.
We calculated the movement component from the raw acceleration data using the default acceleration metric of the package, the Euclidean norm (vector magnitude) minus one (ENMO). This metric converts the raw tri-axial acceleration data into an omnidirectional measure of body acceleration (27). The resulting ENMO values were expressed in gravity-based acceleration units [milligravity units (mg)] averaged over 5-s epochs.
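For illustration, a minimal re-implementation of the ENMO metric over 5-s epochs is sketched below; the actual analysis used GGIR's implementation, and the function shown here is only an assumed, simplified equivalent (negative values truncated to zero).

```python
import numpy as np

def enmo_per_epoch(xyz_g: np.ndarray, fs: int = 50, epoch_s: int = 5) -> np.ndarray:
    """Mean ENMO per epoch, in mg, from raw tri-axial accelerations in g (N x 3)."""
    enmo = np.sqrt((xyz_g ** 2).sum(axis=1)) - 1.0   # Euclidean norm minus one g
    enmo = np.maximum(enmo, 0.0)                     # truncate negatives to zero
    n = fs * epoch_s                                 # samples per 5-s epoch
    usable = (len(enmo) // n) * n
    return enmo[:usable].reshape(-1, n).mean(axis=1) * 1000.0  # g -> mg
```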
We defined the following activity domains: <25 mg for inactivity; 25-99 mg for light PA; and ≥100 mg for MVPA, according to O'Donnell et al. (28). Sleep was also identified by the GGIR algorithm as documented and validated by van Hees et al. (29). Time spent in different PA domains was accrued in 1min bouts. During analysis, we conducted autocalibration using local gravity as the reference, and we determined non-wear time over a window size of 60 min with a 15-min sliding window (30, 31).
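A simplified sketch of the intensity classification using the cut points above follows; the 1-min accrual shown here (majority vote over twelve 5-s epochs) is an assumption for illustration and does not reproduce GGIR's exact bout or non-wear logic.

```python
import numpy as np

def minutes_by_intensity(enmo_mg: np.ndarray) -> dict:
    """Daily minutes per intensity from per-epoch ENMO values (5-s epochs, mg)."""
    labels = np.digitize(enmo_mg, bins=[25.0, 100.0])   # 0 inactive, 1 light, 2 MVPA
    usable = (len(labels) // 12) * 12                   # twelve 5-s epochs per minute
    per_min = labels[:usable].reshape(-1, 12)
    minute_class = np.array([np.bincount(m, minlength=3).argmax() for m in per_min])
    names = ["inactive", "light", "mvpa"]
    return {names[k]: int((minute_class == k).sum()) for k in range(3)}
```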
While we derived activity parameters directly from GGIR, we determined steps with a Windowed Peak Detection open-source algorithm (Verisense_step_algorithm, last updated: 14.04.2021) based on Gu et al.'s (32) design and implemented for use in combination with the GGIR R package, available on GitHub (33). We used validated input parameters for the step algorithm from a previous study of 22 participants during an outdoor physiotherapy session from the PIPAP study population (34).
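To convey the idea of windowed peak detection, the following is a deliberately simplified step counter; the threshold and minimum peak spacing are placeholder values, not the validated Verisense parameters from ref. (34).

```python
import numpy as np

def count_steps(mag_g: np.ndarray, fs: int = 50,
                peak_thresh: float = 1.2, min_gap_s: float = 0.3) -> int:
    """Count local maxima of the acceleration magnitude (in g) that exceed a
    threshold and are separated by at least min_gap_s seconds."""
    min_gap = int(min_gap_s * fs)
    steps, last_peak = 0, -min_gap
    for i in range(1, len(mag_g) - 1):
        is_peak = mag_g[i - 1] < mag_g[i] >= mag_g[i + 1]
        if is_peak and mag_g[i] > peak_thresh and i - last_peak >= min_gap:
            steps, last_peak = steps + 1, i
    return steps
```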
Calculating parameters
We derived the following activity parameters from the GGIR package. First, the algorithm was set to calculate data from midnight to midnight. Next, we calculated the daily minutes of MVPA, inactivity, and sleep time. Further, we computed mean acceleration values in mg over each 24-h cycle. As Rowlands et al. recently suggested (35), we determined the minimal acceleration (in mg) during the most active 2, 30, and 60 min to allow comparison with studies using different activity thresholds. We also calculated minutes of MVPA as bouts of at least 10 min in which 80% of the 5-s epochs had accelerations over the MVPA threshold.
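A sketch of the 10-min bout criterion follows: a 10-min window of 120 five-second epochs qualifies when at least 80% of its epochs reach the MVPA threshold. The sliding-window marking used here is a simplification and may differ from GGIR's bout-resolution details.

```python
import numpy as np

def bouted_mvpa_minutes(enmo_mg: np.ndarray, thresh: float = 100.0,
                        win: int = 120, frac: float = 0.8) -> float:
    """Minutes of MVPA accumulated in >=10-min bouts (5-s epochs as input)."""
    above = (enmo_mg >= thresh).astype(float)
    in_bout = np.zeros(len(above), dtype=bool)
    for start in range(len(above) - win + 1):
        if above[start:start + win].mean() >= frac:   # >=80% of epochs above
            in_bout[start:start + win] = True
    return in_bout.sum() * 5 / 60.0                   # epochs -> minutes
```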
The Verisense step counting algorithm returned the number of daily steps for each valid day (i.e., wear time ≥12 h). Additionally, we calculated cadences for each minute from the metadata derived by Verisense, which included the number of steps for each 5-s epoch. From these values, we calculated the mean cadence over the whole 24-h cycle. Moreover, we calculated daily minutes with ≥100 steps/min and with 0 steps/min (5). We also determined mean cadences for the most active 1, 30, and 60 min, as proposed by Tudor-Locke et al. (36). We summarized all parameters as the mean over each participant's valid days and report the median across all participants.
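The cadence summaries can be illustrated with the short sketch below, which assumes per-minute step counts for one 24-h cycle and takes the most active 1/30/60 min as the highest (not necessarily consecutive) minutes, our reading of the Tudor-Locke metrics.

```python
import numpy as np

def cadence_summary(steps_per_min: np.ndarray) -> dict:
    """Cadence metrics from 1440 per-minute step counts of one day."""
    ranked = np.sort(steps_per_min)[::-1]              # most active minutes first
    return {
        "min_at_100plus": int((steps_per_min >= 100).sum()),
        "min_at_zero": int((steps_per_min == 0).sum()),
        "mean_cadence_24h": float(steps_per_min.mean()),
        "M1": float(ranked[:1].mean()),                # most active single minute
        "M30": float(ranked[:30].mean()),              # most active 30 min
        "M60": float(ranked[:60].mean()),              # most active 60 min
    }
```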
Statistical analysis
We performed all analyses with RStudio (version 1.4.1106-5). We calculated descriptive statistics, reporting numbers and percentages of participants for categorical variables and medians with first and third quartiles for continuous activity parameters, given their primarily non-parametric distributions. We performed linear regressions of MVPA based on 1-min bouts and MVPA based on 10-min bouts against daily steps using the lm function. We calculated the proportions of adherence to the 2010 and 2020 WHO guidelines and to daily step targets for the total sample and for subgroups according to sex, median age of the sample (<62 versus ≥62 years old), and clinical presentation of the disease (ACS versus CCS).
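To make the regression step concrete, here is a small numpy sketch with synthetic stand-in data (the real analysis used R's lm); it fits MVPA minutes on daily steps and locates the step count at which the fitted line crosses 30 min of MVPA.

```python
import numpy as np

rng = np.random.default_rng(0)                      # synthetic stand-in data
steps = rng.normal(9000, 3000, 282).clip(500)       # mean daily steps per participant
mvpa = 0.005 * steps - 12 + rng.normal(0, 10, 282)  # mean daily MVPA minutes

slope, intercept = np.polyfit(steps, mvpa, 1)       # least-squares line
r = np.corrcoef(steps, mvpa)[0, 1]
steps_at_30min = (30 - intercept) / slope           # where the fit crosses 30 min
print(f"r = {r:.2f}, R^2 = {r * r:.2f}, crossing at {steps_at_30min:.0f} steps/day")
```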
Study participants
Of the 916 patients who met inclusion criteria within our 16-month recruitment period, 369 patients (40.3%) agreed to participate in the study (Figure 1). We excluded 87 of those 369 participants. During the observational period, two participants (0.5%) died before completing 7 days of wear time; ten participants (2.7%) never returned the accelerometer, and nine participants (2.4%) never wore the accelerometers. Twenty-seven additional participants met exclusion criteria: 10 participants had <7 days of ≥12-h wear time, four participants were aged >80 years, and 13 participants (3.5%) did not send the informed consent forms. Seven (1.9%) participants' accelerometers had insufficient battery power. Devices from 25 participants had not yet been returned to us. This resulted in 44 patients (11.9%) who were non-compliant with the study protocol. Consequently, we performed our data analysis with 282 valid recordings (76.4%).
Of the 282 participants with valid recordings, the median age was 61.5 (first quartile 55, third quartile 69) years, and 46 (16.1%) were women (Table 1). Thirty-three participants had CCS and 249 participants (88.3%) had ACS. A third of all participants started recording on day 1 after PCI (PCI was on day 0), the majority (75.9%) had started recording by the second day, and by day 3, 88.7% had started their recording. Therefore, we included all recorded days from day 2 onward. The median number of days with device wear time ≥12 h was 18 (17, 18), and most participants (79.4%) still recorded on day 18, while only 48.9% of patients still recorded
Daily activity measurements
When expressed as mean daily activity over the 18 days, 226 participants (80.1%) had ≥30 min of MVPA on an average day (Table 1). However, only 52 (18.4%) study participants spent at least 30 min in MVPA with bouts of ≥10 min, thus fulfilling the 2010 WHO PA guideline recommendations. The median duration of all participants' mean MVPA time was 57 (33, 82) minutes, and the median of each participant's mean time in ≥10-min MVPA bouts was 7 (1, 22) minutes. Median sleep time was 6.9 (5.8, 7.8) hours, and median inactive time was 12.0 (10.5, 13.5) hours. One hundred fifty-five participants (55.0%) reached ≥10,000 steps/day, and 209 (74.1%) performed ≥7,500 steps/day. Two hundred four participants (72.3%) reached a cadence of ≥100 steps/min during the most active minute of the average day. Over the most active 30 min, this cadence was reached by 38 participants (13.5%), and over the most active 60 min, 12 participants (4.3%) reached this threshold.
On day 2 after PCI, 43.5% of participants with available data on that day fulfilled the MVPA criterion of at least 30 min according to 2020 WHO PA guidelines. This percentage increased steadily until day 7, after which it decreased again slightly (Figure 2). A similar percentage of participants fulfilled the criterion of a minimum of 7,500 steps/day. The minimum recommendation of 30 min of MVPA in ≥10-min bouts according to the 2010 WHO guideline was fulfilled by 8.9% on the second day and increased steadily until day 17 when 23.4% fulfilled this criterion. On day two, 28.5% reached 10,000 steps/day and by day 17, 61.3% had reached 10,000 steps/day.
Linear regressions for moderate-to vigorous-intensity physical activity and daily steps
Figure 2. Percentage of participants reaching various criteria for PA after hospital discharge for PCI; PCI was performed on day 0. Fulfillment of PA criteria was calculated for each participant and each day individually.
The linear regression of daily mean steps with daily mean MVPA according to the 2020 PA guidelines explained 47.5% of the total variability (r = 0.69, p < 0.0001), while the linear regression of daily mean steps with daily mean MVPA according to the 2010 PA guidelines explained only 13.6% (r = 0.37, p < 0.0001, Figure 3A). Approximately 2,500 steps corresponded to 0-min MVPA per 2020 WHO guidelines, and 5,000 steps corresponded to 0-min MVPA per 2010 WHO guidelines. The intersection of the regression line with 30 min of daily MVPA according to 2020 WHO guidelines corresponded to 6,250 daily steps, while more than 15,000 steps on average were necessary to reach the 30-min threshold according to 2010 WHO guidelines. Similar observations could be made
for the linear regression models of the daily mean cadence of the most active 30 min with daily mean MVPA according to the 2010 and 2020 WHO guidelines (Figure 3B). The linear regression models for mean cadence with MVPA according to the 2010 guidelines explained 29.1% of the total variance (r = 0.54, p < 0.0001) and only 16.8% (r = 0.41, p < 0.0001) with MVPA according to the 2020 guidelines. The intersection of the regression line with 30 min of daily MVPA per 2020 WHO guidelines corresponded to 60 steps/min, while a cadence of 100 steps/min was observed for reaching the 30-min threshold according to 2010 WHO guidelines.
Adherence to guidelines and steps per day according to age, sex, and coronary artery disease presentation
Overall, we found that most of the proportions of adherence did not statistically differ across categories of age, sex, and disease presentation at the PCI (Table 2). However, the lowest adherence to the 2010 WHO guidelines was observed among women (8.9%), and patients older than 62 years had a lower proportion (74.5%) of adherence to the 2020 guidelines, compared to the patients in the younger group (86%).
Discussion
After recent PCI, PA assessment with wrist-worn accelerometers among our participants was found to be highly feasible, with a participation rate of 40% and 87% compliance. A higher median number of daily steps and more daily minutes at a cadence ≥100 steps/min were found among participants who reached an average of 30 min of MVPA in ≥10-min bouts, compared to participants who only met the recommendations of the 2020 WHO guidelines (Figure 4).
To our knowledge, this study is the first to quantify the discrepancy between the achievement of PA recommendations with and without the 10-min bout requirement in patients after PCI. Our findings are consistent with other studies conducted on different populations. For instance, a cross-sectional study investigating data from the 2003-2004 National Health and Nutrition Examination Survey (NHANES) (37) and a study on data from the Framingham Heart Study (38) also reported a fourfold discrepancy, whereas a study on a subsample of the NHANES population found a sixfold discrepancy (39). The 2020 WHO guideline was based on studies claiming that bouts shorter than 10 min of MVPA were also associated with reduced all-cause mortality in the general population (15). However, the majority of those studies supporting the health benefits of PA accumulated in bouts of <10 min in duration used a cross-sectional design, with none of the randomized studies reporting on the effects of PA accumulated in bouts of <10 min. Other studies established associations of MVPA acquired sporadically or in bouts ≥10 min with some cardiovascular risk factors. For instance, a study of >1,000 Canadian adults wearing hip-worn tri-axial accelerometers reported that the time of MVPA with bouts ≥1 min was nearly double the time of MVPA with bouts ≥10 min (40). The presence or absence of metabolic syndrome was equally well discriminated by bouted (≥10-min) or sporadic (1-9 min) MVPA (40). Similar associations of CVD factors with MVPA bout duration were found in a sample of >2,000 participants from the Framingham Heart Study (38). In another study of over 6,000 adults from the NHANES study, MVPA in bouts and non-bouts was similarly associated with cardiovascular risk factors (37); however, a study among the subpopulation of adults younger than 65 years from the Canadian Health Measures Survey found a four times greater inverse association of obesity with bouted compared to sporadic MVPA (39). The Coronary Artery Risk Development in Young Adults (CARDIA) study of approximately 2,000 healthy adults found that accumulating sporadic MVPA, independently of bouts, was a protective factor against the development of hypertension but not against obesity (41).
Bouts of 10 min or longer are likely to represent planned and structured exercise, while shorter bouts more likely reflect activities of daily living. Likewise, the median time with a cadence ≥100 steps/min among our study participants was 8 min/day, indicating that many of our participants barely reached this cadence. Hence, most of our participants' steps were performed at low cadences or in bouts shorter than 1 min, which again suggests activities of daily living rather than physical exercise that increases heart rate and cardiac output. In our study, the proportion of participants fulfilling the 2020 WHO PA guidelines was slightly higher than the proportion of participants walking ≥7,500 steps/day, a threshold found to discriminate between cardiovascular risk factors (6). The percentage of participants walking ≥10,000 steps lay between the proportions of participants fulfilling the 2010 and 2020 WHO PA guidelines. Unlike MVPA, people can easily verify the number of steps calculated by an accelerometer device by walking a predefined number of steps or by walking at a certain cadence for a defined time. Not only can the number of steps be verified, but it is also an easily followed recommendation, such as walking 3,000 steps or walking at a brisk 100 steps/min for 30 min.
Figure 4. Illustration of the fraction of patients reaching the 2020 versus 2010 World Health Organization guidelines on physical activity, as well as those reaching 7,500 and 10,000 steps per day.
It is questionable whether PA of very short duration has the same beneficial effects on patients with CVD as structured exercise. Several mechanisms may explain the known benefits associated with PA in patients with CVD, including endothelial function improvement (42,43) and antiatherosclerotic (43,44) and anti-inflammatory (45) effects. Traditional risk factors for CHD such as diabetes, hypertension, smoking, and hypercholesterolemia are associated with endothelial dysfunction, which in turn results in impaired nitric oxide production, abnormal vasoconstriction, chronic inflammation, and increased oxidative stress (46). Endothelial dysfunction, inflammation (47), and oxidative stress (48) play an important role in both the pathogenesis and prognosis of CVD. Against this background, PA increases beneficial shear stress at the vessel wall, down-regulates the expression of the angiotensin II type 1 receptor (49), and decreases NADPH oxidase activity and superoxide anion production, which in turn decreases the generation of reactive oxygen species and inflammation while preserving endothelial nitric oxide bioavailability and its protective anti-atherosclerotic effects (50). Conversely, physical inactivity increases vascular NADPH oxidase activity and vascular reactive oxygen species generation, which in turn contributes to endothelial dysfunction and atherosclerosis (51). Exercise training of defined volume and intensity has proven beneficial effects on endothelial function and arterial stiffness (52,53). At least for weight loss and prevention of obesity, bouts ≥10 min have been suggested as necessary (39,41,54). Future studies need to clarify how recommendations are actionable to patient benefit and whether daily step targets for patients after PCI gauge prognostic importance.
Limitations
Some limitations may affect our study. First, inactive and uninterested patients may have been lost during recruitment, since participants' consent required their willingness to wear an accelerometer. Consequently, our study participants may be more active and compliant than typical patients after PCI in clinical settings. With a 40% inclusion rate, it is possible that our study included a higher percentage of physically active patients, whereas inactive patients could have refused participation. However, after the recommendation for objective PA monitoring recently endorsed by the ESC (55), the inclusion process for this and any other future studies is expected to improve. Specifically, in our setting, the use of accelerometers is now standard of care. All patients are recommended to wear the accelerometer for 18 days after hospital discharge from PCI and, together with their general practitioners, receive their analyzed data and PA recommendations upon returning the device.
Our recruitment team did not enlist patients who did not qualify for ambulatory cardiac rehabilitation because they were too frail or cognitively impaired. Therefore, our results may have been affected by selection bias. However, selection bias did not affect the large discrepancy between the number of participants satisfying 2010 versus 2020 WHO PA guideline criteria, which was our main aim. Second, the median MVPA of 1-min bouts among our study participants was 57 min/day or 399 min/week, fulfilling or even exceeding the recommended range of 150-300 min/week. It is possible that PA measured in our study overestimated PA levels due to the Hawthorne effect since pedometer use has been shown to increase patients' PA (19,22). Wrist-worn accelerometers might also underestimate activities, such as cycling (56). In contrast, activities involving arm movements may overestimate PA levels since the metabolic cost of arm movements is smaller than that of leg movements due to the smaller muscle mass involved in the effort (57). However, since walking is one of the most frequently reported leisure time activities worldwide, this limitation may be negligible (58), especially among patients with cardiac conditions (6).
Third, since most PA data are averaged over 1-min windows, dropping the criterion of 10-min bouts means that bouts as short as 1 min are sufficient to qualify as MVPA under the 2020 WHO guidelines. However, with many proprietary devices, the minimal bout length is not obvious to the user, and some devices use 15-s or even 5-s epochs (59). The choice of epoch length also affects the calculated daily time spent in MVPA. MVPA time was doubled when epoch length was increased from 4 to 20 or 60 s in a study using hip-worn uni-axial accelerometers (60). Unless a device with a defined wearing location, data sampling rate, epoch duration, and algorithm settings for the calculation of MVPA is validated against energy consumption measured by a metabolic cart, it is impossible to know whether time classified as MVPA is actually time with an energy consumption ≥3 METs.
Our data imply that tracking the global target set by WHO to reduce inactivity by 2025 should take into consideration the discrepancy of values that are consistently reported in the literature. Using the new guidelines to evaluate policies supporting PA in settings where baseline PA levels were measured with different criteria may be biased and may not reflect the expected change. Finally, whether CV risk can be equally reduced by MVPA with and without the 10-min bout requirement in patients after PCI needs to be investigated in future studies, such as the PIPAP study. Since the identification of MVPA is highly dependent on the duration of analyzed bouts and consequently varies between accelerometer devices and algorithm settings, a target number of steps may be more manageable, understandable, and feasible for people.
Conclusion
This study found a fourfold discrepancy in the frequency of participants fulfilling 2010 and 2020 WHO guidelines for PA among patients following hospital discharge after PCI. In this setting, the recommendations from the 2020 WHO PA guidelines for MVPA were fulfilled easily by activities of daily living, without any planned or structured exercise. Future studies need to clarify how recommendations are actionable to patient benefit and whether daily step targets for patients after PCI gauge prognostic importance.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by the Ethikkommission des Kantons Bern. The patients/participants provided their written informed consent to participate in this study.
Author contributions
NG-J, AB, OF, and MW designed the study. PE, SW, RF, and JF were involved in data collection, processed the data, and performed data analyses. PE, SW, and NG-J drafted the manuscript. All authors approved the final version of the manuscript.
Funding
This study was partially funded by the Swiss Heart Foundation. Open access funding was provided by the University of Bern. | 2022-09-28T13:18:21.850Z | 2022-09-28T00:00:00.000 | {
"year": 2022,
"sha1": "093851faf167d08688641f3541555790029c0281",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fcvm.2022.951042/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5b5e8185bcd20884396fbcd00db0782adbbd4956",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
178534 | pes2o/s2orc | v3-fos-license | Intravitreal Injection of Splice-switching Oligonucleotides to Manipulate Splicing in Retinal Cells
Leber congenital amaurosis is a severe hereditary retinal dystrophy responsible for neonatal blindness. The most common disease-causing mutation (c.2991+1655A>G; 10-15%) creates a strong splice donor site that leads to the insertion of a cryptic exon encoding a premature stop codon. Recently, we reported that splice-switching oligonucleotides (SSO) allow skipping of the mutant cryptic exon and the restoration of ciliation in fibroblasts of affected patients, supporting the feasibility of an SSO-mediated exon skipping strategy to correct the aberrant splicing. Here, we present data in the wild-type mouse which demonstrate that intravitreal administration of 2'-OMePS-SSO allows selective alteration of Cep290 splicing in retinal cells, including photoreceptors, as shown by successful alteration of Abca4 splicing using the same approach. We show that both SSOs and Cep290 skipped mRNA were detectable for at least 1 month and that intravitreal administration of oligonucleotides did not provoke any serious adverse event. These data suggest that intravitreal injections of SSO should be considered to bypass protein truncation resulting from the c.2991+1655A>G mutation as well as other truncating mutations in genes which, like CEP290 or ABCA4, have an mRNA size that exceeds the cargo capacities of US Food and Drug Administration (FDA)-approved adeno-associated virus (AAV) vectors, thus hampering gene augmentation therapy.
Introduction
Retinal diseases are a leading cause of incurable severe visual dysfunction worldwide and hence a major public health problem. Leber congenital amaurosis (LCA, MIM204000) is the earliest and most severe of these diseases and a leading cause of blindness in childhood. It is characterized by major clinical, genetic, and physiopathological heterogeneity, with nonsyndromic and syndromic forms, variable visual outcomes, and more than 45 disease genes with variable inheritance, pattern of expression, and retinal function. 1 The safety and efficacy of AAV-based gene replacement therapy in LCA patients harboring RPE65 mutations have paved the way for treating retinal diseases. [2][3][4] However, FDA-approved AAV vector genomes are limited in size, and the 7.9 kb CEP290 cDNA is currently not amenable to AAV-based gene therapy.
CEP290 encodes a 290 kDa centrosomal protein, which has an essential role in the development and maintenance of primary and motile cilia. [5][6][7] CEP290 mutations cause both nonsyndromic LCA and syndromic forms with renal, kidney, neural tube, central nervous system, and/or bone involvement. 8 Over 100 unique CEP290 mutations have been reported, including a recurrent deep intronic mutation underlying 10-15% of nonsyndromic LCA cases (c.2991+1655A>G). [8][9][10][11] This mutation is located in intron 26, where it activates a cryptic splice donor site downstream of a strong acceptor splice site. The transcription of the mutant allele gives rise to an mRNA retaining a 128 bp intronic sequence encoding a premature termination codon, along with low levels of the wild-type transcript. Recently, we reported 2'-O-methyl-phosphorothioate (2'-OMePS) splice-switching oligonucleotide (SSO) sequences which allowed correcting the aberrant splicing and ciliation in fibroblasts from patients harboring the mutation. 12 Delivery of 2'-OMePS SSOs to the retina is challenging. Systemic and topical delivery of oligonucleotides has so far failed to reach intraocular tissues, probably due to the blood-retina barrier, 13 and the impermeable nature of the cornea, 14 respectively. Intraocular administration of SSO to target retinal cells has not been reported to our knowledge. A transgenic mouse harboring the human CEP290 mutant intron has been produced which does not recapitulate the human molecular and clinical phenotypes. 15 Here, studying the wild-type mouse, we report selective skipping of Cep290 pre-messenger RNA sequences using a single intravitreal (iv) injection of SSO. We show that both the SSO and skipped mRNA were detectable for at least 1 month and that iv administration of oligonucleotides did not provoke any serious adverse event.
Results
We designed SSOs to skip exon 22 (disrupting the reading frame) and exon 35 (preserving the reading frame) of the mouse Cep290 wild-type pre-mRNA, respectively. SSO sequences were designed using the m-fold and ESEfinder programs as described previously. 12 For each of the two exons, we produced a set of three 2′-OMePS oligonucleotides. Each set included one SSO targeting the donor splice site (m22D and m35D), one SSO recognizing an exonic splice enhancer (m22ESE and m35ESE), and one control oligonucleotide (m22ESEsense and m35ESEsense, i.e., the sense versions of the m22ESE and m35ESE SSOs, respectively; Figure 1).
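As a toy illustration of the antisense/sense distinction above, the sketch below derives an antisense oligonucleotide sequence as the reverse complement of a target pre-mRNA region; the target sequence is hypothetical, not the actual m22D or m35ESE site, and real design additionally relied on m-fold and ESEfinder as stated.

```python
# Reverse-complement derivation of an antisense sequence (RNA alphabet).
COMP = str.maketrans("AUGC", "UACG")

def antisense(target_rna: str) -> str:
    return target_rna.translate(COMP)[::-1]

target = "GUAAGUACCUGAUC"        # hypothetical donor-site region, 5'->3'
print(antisense(target))         # -> GAUCAGGUACUUAC (the SSO, 5'->3')
```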
We assessed SSO-mediated skipping in mouse NIH3T3 fibroblasts as described previously (Supplementary Figure S1). 12 Transfection of the cells with the SSOs but not the control oligonucleotides resulted in the production of a mRNA lacking the targeted exon, and a significant reduction in wild-type mRNA and protein abundance, as determined by Sanger sequencing of reverse transcription polymerase chain reaction (RT-PCR) products, RT-qPCR, and Western blot analysis of immune-precipitated cep290, respectively (Supplementary Figure S2).
These data supported the efficiency and specificity of our SSOs in mediating Cep290 exon skipping in vitro. We thus assessed the retinal distribution of SSOs and splicing alteration using the iv delivery route. To this aim, we examined retinal sections and Cep290 mRNA from 8-week-old C57BL/6J mouse eyes at day 2 following a single unilateral iv injection of variable doses (1, 5, 10 nmoles) of a fluorescently labeled (6-FAM)-m22D SSO in saline solution (NaCl 9 g/l; pH = 8.7; 290 mOsm/kg); fellow untreated eyes were used as controls. Confocal microscopy analysis of retinal sections from injected eyes detected panretinal fluorescent signals (Figure 2a; Supplementary Figure S3). Signals in the photoreceptor cell layer were maximal at the highest SSO dose (10 nmoles; Figure 2a). RT-qPCR analysis of retinal mRNA identified a transcript lacking exon 22 whose abundance increased with the SSO dose (Figure 2b). No skipped Cep290 mRNA and no fluorescent signals were observed in uninjected fellow eyes (Figure 2b; Supplementary Figure S3), supporting the view that iv administration of 2'-OMePS SSO allows manipulating the splicing of the Cep290 pre-mRNA in retinal cells in a dose-dependent manner.
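For readers unfamiliar with relative transcript quantification, a minimal sketch of a standard 2^-ΔΔCt calculation follows; this is an assumed textbook scheme shown for illustration only, as the paper does not spell out its qPCR normalization, and the Ct values below are invented.

```python
# Standard 2^-ddCt relative quantification (illustrative values only).
def rel_expression(ct_target: float, ct_ref: float,
                   ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# e.g., skipped transcript vs a reference gene, injected vs fellow control eye
print(rel_expression(24.0, 18.0, 28.0, 18.0))   # 16.0 -> 16-fold enrichment
```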
To estimate the lifetime of SSOs and of mutant mRNAs after a single iv injection, 10 nmoles of (6-FAM)-m35ESE SSO or (6-FAM)-m35ESEsense control oligonucleotides were injected. The distribution of the fluorescence and the abundance of mutant mRNA in the retina were analyzed at days 2, 4, 8, 12, 18, and 30 postinjection (dpi), as described previously. We chose to use an SSO specific to exon 35 to preserve the reading frame and avoid degradation of the mutant mRNA by nonsense-mediated mRNA decay mechanisms. Confocal imaging of retinal sections from eyes injected with the (6-FAM)-m35ESE SSO detected fluorescent signals throughout the observation period, up to 30 dpi (Figure 3a). In contrast, fluorescent signals arising from the (6-FAM)-m35ESEsense control oligonucleotide had disappeared at 12 dpi (Figure 3a), suggesting rapid clearance of oligonucleotides that do not find a target. RT-PCR and RT-qPCR analysis of retinal mRNA detected the presence of Cep290 mRNA lacking exon 35, as confirmed by Sanger sequencing (not shown), after the injection of the SSO but not the control oligonucleotide (Figure 3b,c). The abundance of mutant mRNA was maximal at 2 dpi and decreased gradually over time, but it was still measurable at 30 dpi. Surprisingly, the abundance of the wild-type Cep290 mRNA was significantly reduced at all analysis time points including 30 dpi (Figure 3c), suggesting that the m35ESE SSO, which targets an exonic splice enhancer, might recognize the mature mRNA in the cytoplasm of retinal cells and cause double-stranded RNA-mediated interference.
To assess this hypothesis, we measured the wild-type and mutant mRNA abundance at 2, 6, and 10 dpi after injection of 10 nmoles of the m22D SSO, which targets part of an intronic sequence absent in the cytoplasm. We observed a moderate decrease in wild-type mRNA abundance, consistent with the production of a mutant mRNA at the expense of the Cep290 pre-mRNA (Supplementary Figure S4). Together, these data are consistent with the hypothesis according to which SSOs recognizing exon-only sequences can be sequestrated by cytosolic mRNA molecules which are not subjected to nonsense-mediated mRNA decay. 16 The high expression of Cep290 in the retina 17 further supports this hypothesis. The subcellular expression of Cep290 mRNA in the mouse retina has not been documented to our knowledge. In the zebrafish, in situ hybridization analysis has shown that the gene is expressed in all retinal cell layers, including the ganglion cell layer, the inner nuclear layer and the photoreceptor cell layer. 18 To assess whether photoreceptor cells contribute to the pool of switched mRNA, we designed an SSO specific to the photoreceptor-specific Abca4 gene. 19 Intravitreal injection of 10 nmoles of the Abca4-specific SSO (m10ESE), but not of its sense version (m10ESEsense), resulted in maintained (≥10 dpi) Abca4-specific splicing modification, as evidenced by agarose gel electrophoresis and Sanger sequencing of RT-PCR products (Figure 4). This demonstrates that splicing can be manipulated in photoreceptor cells following a single iv injection of 2'-OMePS SSO.
Finally, to evaluate the ocular toxicity of 2'-OMePS oligonucleotides, we injected 10 nmoles of the m35ESEsense oligonucleotide and examined retinal structure integrity, photoreceptor survival, and the inflammatory response at 2, 12, and 30 dpi. Retinas from injected eyes were compared to retinas from uninjected contralateral eyes and retinas from 22-day-old rd10 mouse eyes (Supplementary Figure S5). The rd10 mouse carries a spontaneous missense point mutation in Pde6b responsible for progressive retinal outer nuclear layer degeneration beginning at postnatal day 16. 20 Hematoxylin-eosin staining, terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling (TUNEL), and glial fibrillary acidic protein labeling on histological sections of 22-day-old rd10 mice evidenced decreased thickness of the outer nuclear layer, apoptotic cells, and positive glial fibrillary acidic protein astrocytes, respectively (Supplementary Figure S5).
In contrast, the retinal structure, TUNEL labeling, and glial fibrillary acidic protein staining of uninjected and injected eyes supported the absence of retinal cell death and of inflammatory cell infiltration, respectively (Supplementary Figure S5). The transitory TUNEL labeling observed at 2 dpi in treated eyes likely arose from dUTP nick end labeling of the free 3' end of the oligonucleotide (Supplementary Figure S6). The absence of deterioration of the retinal structure and the correlation between the clearance of the TUNEL labeling and that of the m35ESEsense oligonucleotide (Figure 3) strongly support this hypothesis.
Discussion
AAV-based gene therapy developments are ongoing worldwide to treat a wide range of inherited retinal diseases since the report of safety and efficacy in phase 1/2 clinical trials of RPE65 gene augmentation therapy by subretinal delivery of AAVs carrying the wild-type RPE65 cDNA. [2][3][4] A severe limitation of this approach, however, is the limited packaging capacity (~4.9 kb) of AAV vectors. In addition to CEP290, several other frequently mutated inherited retinal dystrophy genes, e.g., ABCA4, EYS, and USH2A, have a cDNA size far exceeding this limit and hence are not amenable to AAV-based gene therapy. Other vectors (e.g., lentiviruses, adenoviruses) with a larger cargo capacity have a limited tropism for photoreceptor cells and may produce insertional mutagenesis by integration into the host genome. Unintended gene overexpression and/or imbalanced expression of splicing isoforms are other severe limitations of gene augmentation therapy, especially when the gene, like CEP290, is transcribed into multiple isoforms and/or contributes to large protein complexes whose stoichiometry has to be respected. 21,22 Rescuing aberrant pre-mRNA processing, rather than supplementing a healthy cDNA copy of the mutated gene, is an interesting therapeutic alternative. It is ideally suited to overcome both size limitations and unintended gene overexpression or imbalanced expression of splicing isoforms by maintaining endogenous transcriptional regulation of the target gene; acting at the pre-mRNA level allows correcting the genetic defect where and when the gene is expressed.
Strategies have been developed that allow the modification of splice patterns of genes to bypass mutations that silence consensus splice sites and/or activate cryptic splice sites and hence promote exon exclusion and/or intron retention. Splicing can be altered by masking splice sites or by targeting regulatory sequences to promote or block splicing. 23 Drugs, isoform-specific antibodies, trans-splicing approaches, RNA interference, and antisense oligonucleotides have been investigated for their ability to modify splicing in various diseases. [23][24][25][26][27] Compared to most other molecules, nuclease-protected phosphorothioate antisense oligonucleotides are easy to design 28 and inexpensive. In addition, they can be widely used to restore the wild-type genetic configuration by allowing skipping of stray pseudoexons such as the one that arises from the splicing of pre-mRNAs harboring the recurrent LCA-causing CEP290 c.2991+1655A>G mutation. 12,29
Ongoing trials in patients affected with Duchenne muscular dystrophy have shown that systemic delivery of 2′-OMePS SSOs on a short-term basis is well tolerated. [30][31][32] Systemic administration of 2′-OMePS SSOs to correct the abnormal splicing resulting from the CEP290 c.2991+1655A>G mutation is hampered by the very efficient blood-retina barrier. Likewise, the impermeable nature of the cornea hampers noninvasive topical delivery. 14 Here, we present data demonstrating that iv administration of 2'-OMePS oligonucleotides allows alteration of Cep290 splicing in retinal cells, including photoreceptors, as shown by the successful alteration of Abca4 splicing using the same approach. We were able to detect both fluorescent signals arising from 6-FAM-labeled antisense oligonucleotides and skipped mRNA 1 month after a single iv injection. Interestingly, it has been previously reported that fluorescent signals in rat eyes injected with 6-FAM completely disappeared 24 hours after iv injection. 33 The persistence over several weeks of fluorescent signals in eyes that received the labeled 2'-OMePS SSO indicates that the fluorescence is associated with the oligonucleotide itself rather than with the free fluorescent molecule used for labeling. In addition, we observed that the persistence of the fluorescence arising from 6-FAM sense oligonucleotides was strikingly reduced (lost between 2 and 12 dpi) compared to that arising from 6-FAM antisense oligonucleotides (still present at 30 dpi). This indicates that oligonucleotides that do not find a target are rapidly cleared. This observation, along with the progressive decrease over time of skipped mRNA abundance in eyes injected with antisense oligonucleotides, suggests that the detection of skipped mRNA at 1 month reflects active splice switching rather than mRNA stability.
In this study, we analyzed the retina no longer than 1 month after injection. Other studies have, however, shown that RNA-interfering phosphorothioate antisense oligonucleotides (PS-AONs) could be detected in the retina of rodents and nonhuman primates as full-length oligonucleotides for at least 6 weeks and as N-1 species for up to 12 weeks after injection. 33,34 Antisense oligonucleotides with modified phosphorothioate backbones (2'-OMePS SSOs) have increased stability compared to PS-AONs. 35 Hence, it is likely that the effect of a single iv injection would exceed 4 weeks.
CEP290 is a dynamic protein that can incorporate into the preassembled transition zone of mutant cilia. Providing functional CEP290 should allow restoration of visual function as long as there are spared photoreceptors. LCA patients with CEP290 mutations have a neonatal-onset, severe, panretinal loss of photoreceptors. 10,36 While patients retain a central retinal zone of photoreceptors and an intact visual cortex for up to three decades, gene augmentation therapeutic trials have suggested that the earlier the treatment, the better the effect. [36][37][38] In contrast to subretinal administration, which is limited to the central retina to avoid severe retinal detachment, intravitreal injection of 2'-OMePS seems suited for panretinal treatment. Indeed, following an iv injection of the 6-FAM 2'-OMePS SSO, we were able to detect fluorescent signals in all retinal cell layers from the centre to the far periphery, supporting the view that the oligonucleotides were able to distribute widely in the target tissue.
An important concern before any efficacy study of iv injection-based SSO-mediated therapy is intraocular safety. Here, we report no adverse event at 1 month following a single iv injection of 10 nmoles of oligonucleotide in the mouse. Although this study is limited and 2'-OMePS SSO chemistry differs slightly from that of PS-AONs, it is worth remembering that repeated iv injections of Vitravene, a 22-mer RNA-interfering PS-AON, have been approved by the FDA to treat CMV-induced retinitis in immunocompromised individuals. 39 In addition, it is worth noting that here the duration of the SSO effect over at least 1 month is consistent with iv injection-based treatment, even in children. Indeed, although iv injection is invasive, it is a very well-established mode of delivery of therapeutic molecules both in infants with acute retinal diseases and in adults with acute or chronic retinal diseases. 40,41 The advantages of SSO-based therapy over viral-based gene augmentation protocols give strong justification to the development of oligotherapy as a means to bypass the CEP290 c.2991+1655A>G mutation. SSO therapy is not limited to this important therapeutic target or to other deep intronic mutations. The vast majority of CEP290 mutations introduce a premature termination codon in the mRNA. In addition to isolated early-onset severe retinal degenerations, these mutations cause a wide spectrum of multisystemic ciliopathies. 8 Very recently, basal exon skipping (BES), a genetic mechanism through which deleterious mutations may be partially compensated for, has been reported to occur spontaneously at low levels in all CEP290 patient fibroblasts. The gradation of CEP290 dysfunctions likely correlates with the amount of near-full-length protein, retaining all or some of the full-length protein functionality, that can be supplied to the cell through this mechanism. 42,43 The report of selective exclusion of an in-frame CEP290 exon carrying a premature termination codon through nonsense-associated altered splicing [44][45][46] in an individual with unexpectedly mild retinal disease 47 is quite consistent with this hypothesis. Interestingly, SSOs are ideally suited to enhance BES of mutant exons that begin and end in the same reading frame and do not code for a crucial protein domain, to increase the abundance of minimally shortened mRNA and hence of near-full-length functional CEP290 protein. What holds true for CEP290 could be true for other retinal proteins. Evidence of splicing alteration using intravitreal administration of SSOs could open avenues to oligotherapy for a wide range of dysfunctions affecting retinal cells, wherever they are located in the tissue.
The study we report here was limited by the use of wild-type animals. The availability of a spontaneous animal model of severe retinal dystrophy due to an intronic CEP290 mutation, the rdAc cat, will hopefully help in establishing the proof of concept of the therapeutic efficacy of iv injection-based SSO-mediated therapy.
Materials and methods
Design of SSOs and control oligonucleotides (ONs). SSOs were designed from the wild-type Mus musculus Cep290 (UCSC accession number uc011xmr.1) and Abca4 (UCSC accession number uc008rel.1) sequences using the m-fold and ESEfinder 48 programs available online at http://mfold.rna.albany.edu/ and http://rulai.cshl.edu/cgi-bin/tools/ESE3/esefinder.cgi, respectively. SSOs were designed to target ESE and donor splice site sequences of Cep290 exons 22 and 35 and an ESE sequence of Abca4 exon 10, respectively. Sense versions of the SSOs (control ONs) targeting the Cep290 and Abca4 ESE sequences were designed to control the specificity of the SSOs. SSOs and control ONs were synthesized as fluorescently labeled (6-FAM) and unlabeled 2'-O-methyl phosphorothioate oligonucleotides by Sigma-Aldrich (St Quentin Fallavier, France).
Intravitreal injection of SSOs and control ONs. All animal experiments adhered to the French regulation statement for the use of animals in ophthalmic and vision research. Eight-week-old C57BL/6J mice were used for these experiments. Prior to intravitreal injection, the animals were anesthetized by intramuscular injection of a mixture of ketamine (100 mg/kg) and xylazine (10 mg/kg). The left pupil was dilated by applying one drop of a mydriatic mix containing 10% phenylephrine and 0.5% tropicamide. A 30-gauge needle was used to make an initial puncture of the sclera. Then, 1 μl of saline solution (NaCl 9 g/l, pH = 8.7) containing 1, 5, or 10 nmoles of (6-FAM)-m22D, (6-FAM)-m35ESE SSOs, or (6-FAM)-m35ESEsense oligonucleotide, respectively, was injected through this hole using a 33-gauge needle attached to a 5 μl Hamilton syringe, under a binocular magnifier. The needle was kept in the vitreous cavity for about 20 seconds, then withdrawn gently, and an antibiotic ointment was applied to prevent infection. The right eyes were left uninjected and used as controls. To assess splice alteration in photoreceptor cells, 10 nmoles of the m10ESE SSO or the m10ESEsense oligonucleotide were injected using the same procedure. Injected and contralateral untreated eyes were enucleated at variable time points from 2 to 30 dpi. Sampled eyes were either fixed in 4% PFA, washed in PBS, and embedded in paraffin for histological analyses, or dissected to recover retinal RNA, as described below. Between two and five animals were used for each experimental setup.
RNA extraction and cDNA synthesis. Total RNA from treated and untreated NIH3T3 cells and mouse retinas was extracted using the RNeasy Mini Kit (Qiagen, Courtaboeuf, France) according to the manufacturer's protocol. All samples were DNase treated with the RNase-Free DNase Set (Qiagen). The concentration and purity of total RNA were determined using a Nanodrop-1000 spectrophotometer (Thermo Scientific, Illkirch, France) before storage at −80 °C. First-strand cDNA synthesis was performed from 500 ng of extracted total RNA using the Verso cDNA kit (Thermo Scientific) with random hexamer:anchored oligo(dT) primers at a 3:1 (vol:vol) ratio according to the manufacturer's instructions. A non-RT reaction (without enzyme) was prepared for one sample to serve as a control in RT-qPCR experiments.
Retinal distribution of fluorescence following a single intravitreal injection of SSOs or control oligonucleotide. Seven-micron serial sections were prepared from paraffin-embedded treated and untreated eyes. To evaluate the dissemination of the (6-FAM)-m22D and (6-FAM)-m35ESE SSOs and the (6-FAM)-m35ESEsense oligonucleotide through the retina, the tissue sections were mounted onto glass slides with Prolong Gold antifade mounting medium containing 4′,6-diamidino-2-phenylindole (DAPI) (Invitrogen) to stain nuclei and examined by confocal microscopy (ZEISS LSM700). Images were generated using ImageJ software.
Statistical analysis. The statistical significance of the difference between three or more means was determined using a Mann-Whitney test. Statistical analysis was performed using the Biostatgv program available online at http://biostatgv.sentiweb.fr. P values < 0.05 were considered significant.

Supplementary material. Figure S1. Optimization of NIH3T3 transfections. Figure S2. SSO-mediated skipping of Cep290 exons 22 and 35 in NIH3T3 cells. Figure S3. Panretinal distribution of SSO following a single intravitreal injection. Figure S4. Kinetic analysis of SSO-mediated Cep290 exon skipping following a single intravitreal injection in the C57BL/6J wild-type mouse. Figure S5. Toxicological evaluation of the retina. Figure S6. TUNEL staining of oligonucleotides.

Author contributions: … experiments; X.G., J.M.R.: data analysis and interpretation; X.G. and J.M.R. wrote the manuscript. There are no competing financial interests.
"year": 2015,
"sha1": "a7ee5bb49f8c16a16da4d3ba931d1832f4458c17",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1038/mtna.2015.24",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a7ee5bb49f8c16a16da4d3ba931d1832f4458c17",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Manual for Experiments in Engineering Physics
PREFACE
Experiments performed in the Physics Laboratory play a significant role in understanding the concepts taught in theory. A good accompanying laboratory manual serves as a concise guideline which students can use to complete the experiments without having to refer to several reference books on the subject. A thorough study of the manual prior to the experiment helps the student start the work in the laboratory immediately.
The general practice in several universities for the conduct of the experimental laboratory class has been to let students take observations and allow submission of the report within one week. However, the observations alone do not complete the experiment and serve as only one part of learning the measurement of physical quantities in the laboratory. The calculations and the submission of the journal before the end of the experimental turn should be an integral part of the laboratory class. With this motivation, a scheme is suggested for the conduct of the laboratory class.
Several initiatives are proposed to achieve the above objectives in this carefully prepared manual for the experiments in the Engineering Physics course. The manual focuses on the consistency of the approach towards experimentation and scientific reporting rather than the accuracy of the individual experiment and its technical details. The author hopes that the students will appreciate the simplicity of the manual and find it useful.
The manual comprises views and suggestions of several expert teachers. In particular, the author acknowledges Prof. S. S. Major, IIT Bombay, Mumbai and Prof. S. Jain, PIET Nagpur. The author would also like to acknowledge more than 400 students at St. Francis de Sales College Nagpur, VRCE Nagpur and the Physics Department, IIT Bombay, Mumbai. The appendix figure of the CRO has been taken from the public domain.

(ii) Ensure that the potentiometer on the voltage source is in the zero position before switching on the power supply.
(iii) Note the accuracy of the digital voltmeter.

(b) Adjusting the reflection from the mirror of the ballistic galvanometer

(i) Keep the photocell in front of the source window so that the sodium light illuminates the photocell and enters the wooden case.
(ii) Move the photocell away from the source to increase the distance between them and to ensure that the intensity of the light reaching the photocell is at a minimum. This distance would be about 30 cm from the source window.
(iii) Switch ON the regulated power supply and confirm the near-zero reading in the digital voltmeter for the minimum position of the potentiometer.
(iv) Note the reflected spot on the graduated scale. Adjust the scale height or lateral position to bring the spot to the middle of the scale.
(v) Gradually increase the forward voltage across the photocell by rotating the potentiometer (coarse) on the regulated voltage source clockwise till the digital voltmeter reads 7 volts. Use the coarse knob on the voltage source.

(ii) Decrease the forward voltage of the photocell by rotating the potentiometer (coarse) anticlockwise from 7 volts down to the zero position.
(iii) Note 12 readings of the voltage (V FB ) and the corresponding deflection (φ) of the spot on the scale. Follow table 1 to choose the voltage intervals. Calculate the current (I FB ) through the photocell using the calibration.
(iv) Ensure the minimum position of the potentiometer at the end of the measurements.
(v) Plot the forward bias V-I characteristics of the photocell on a graph paper. Choose the origin at the center of the graph paper. Use only the upper half of the graph paper.
Results:
The forward and reverse V-I characteristics of the photocell were studied. The work function of the cathode material of the photocell was found to be …..eV. The accuracy of the digital panel meter which measures the stopping voltage was…….
Conclusions:
The photocell, when illuminated with the sodium light, was found to produce an electric current of the order of nanoamperes. This demonstrates the photoelectric effect using the photocell. The work function of the cathode material of the photocell was found using the photoelectric effect.
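The work-function arithmetic above can be cross-checked with a short script. This is only an illustrative sketch, not part of the prescribed procedure: the stopping voltage is a hypothetical reading, and the sodium D-line wavelength is an assumed value for the illumination.

```python
# Sketch: work function from Einstein's photoelectric equation,
# W = h*f - e*V_stop. The stopping voltage is a hypothetical reading;
# replace it with the value measured at zero deflection in reverse bias.

H = 6.626e-34         # Planck constant (J s)
E_CHARGE = 1.602e-19  # elementary charge (C)
C_LIGHT = 3.0e8       # speed of light (m/s)

wavelength = 589.3e-9   # sodium D line (m), assumed source wavelength
v_stop = 0.45           # stopping voltage (V), hypothetical reading

photon_energy_ev = H * C_LIGHT / (wavelength * E_CHARGE)  # photon energy (eV)
work_function_ev = photon_energy_ev - v_stop

print(f"Photon energy : {photon_energy_ev:.3f} eV")
print(f"Work function : {work_function_ev:.3f} eV")
```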
Precautions:
1) Ensure the zero position of the potentiometer before switching on the power supply to the kit.
2) Ensure the connections before the start of the measurements as well as throughout the experiment. Ensure the reading of the zero voltage in each set of measurements.
3) The illumination on the photocell should be carefully adjusted in order to produce deflection of the galvanometer within the range of the given scale.
4) If the initial reading in the voltmeter is not zero for the minimum position of the potentiometer, note down the readings as displayed. The calculations are to be performed without applying the zero error correction. The reverse bias curve on the graph paper can be extrapolated to the V = 0 axis as a guide to the eye.
5) Ensure that the BG is tuned and the reflection from the mirror on the graduated scale is already set. In case the reflected spot is not seen on the scale, take the help of the instructor.
6) In the reverse bias mode the last observation should be taken for the zero deflection (i.e., zero current) and the corresponding value of the stopping voltage should be noted down.
7) The distances and the suggested values of the voltage in the observation tables are for the reference. Use any nearby values during the experiment and note down the readings as seen in the measuring instruments.
8) Connecting wires should be removed carefully and tied after the experiment is finished.

(d) Increase the source current in steps of 0.5 A and note the corresponding magnetic field shown on the Gauss meter. Use table 1 for the steps to be followed.
(e) Reduce the source current to zero by using the potentiometer on the current source.
(f) Remove the Hall probe of the Gauss meter from the space between the electromagnet.
(g) Plot a graph of the source current versus the magnetic field. Choose the origin at the bottom left corner of the graph paper. Draw a straight line which is the best fit and passes through the maximum number of data points, instead of joining the data points one by one.
Hall voltage versus magnetic field
(a) Insert the sample in the space between the electromagnet and adjust it in the center. Connect the circuit as shown in the figure.
(b) Increase the current through the sample using the potentiometer on the sample current source to 2 mA and keep it constant.
(c) Increase the source current to the electromagnet and measure the voltage on the millivoltmeter. Follow table 2 for the suggested readings.
(d) Reduce the source current to zero by using the potentiometer.
(e) Increase the constant current through the sample to 4 mA and repeat the step (c).
(f) Note down the magnetic field using the calibration curve in part 1 and write it in the observation table.
(g) Plot a graph of the Hall voltage versus the magnetic field for constant sample current.
Choose the origin at the bottom left corner. Find the slope of each curve in the graph and calculate the Hall coefficient using the formula.
(h) Find the mean value of the Hall coefficient and calculate the carrier concentration using the formula (3).
Hall voltage versus sample current
(a) Reduce the potentiometers on the sample current source and the electromagnet current sources to zero.
(b) Increase the current through the electromagnet using the potentiometer on the constant current source to 1 Amps and keep it constant.
(c) Increase the sample current and measure the Hall voltage on the millivoltmeter. Note 5 readings. Follow table 3 for the steps.
(d) Reduce the sample current to zero by using the potentiometer.
(e) Increase the constant current through the electromagnet to 2 Amps and repeat the step (c).
(f) Note down the magnetic field using the calibration curve in part 1 and write it in the observation table.
(g) Plot a graph of the Hall voltage versus the sample current for constant magnetic field. Choose the origin at the bottom left corner. Find the slope of each curve in the graph and calculate the Hall coefficient using the formula.
(h) Find the mean value of the Hall coefficient and calculate the carrier concentration using the formula (3).
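The slope-to-coefficient conversion in steps (g)-(h) can be checked numerically. The sketch below assumes the standard Hall relation V H = R H × I × B / t, so that R H = slope × t / I for the V H versus B graph; the slope and the sample thickness are hypothetical placeholders, since formula (3) and the sample dimensions are supplied with the apparatus.

```python
# Sketch: Hall coefficient from the slope of V_H versus B at constant sample
# current, assuming V_H = (R_H * I * B) / t  =>  R_H = slope * t / I.
# All numbers below are hypothetical placeholders.

E_CHARGE = 1.602e-19   # elementary charge (C)

slope = 2.0e-3         # slope of V_H vs B (V per tesla), hypothetical
i_sample = 2.0e-3      # constant sample current (A), as in table 2
thickness = 0.5e-3     # sample thickness (m), hypothetical; read off the kit

r_hall = slope * thickness / i_sample     # Hall coefficient (m^3/C)
n_carriers = 1.0 / (r_hall * E_CHARGE)    # carrier concentration (m^-3)

print(f"Hall coefficient      : {r_hall:.3e} m^3/C")
print(f"Carrier concentration : {n_carriers:.3e} m^-3")
```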
Conclusions:
The electromagnet was calibrated using the Gauss meter. The Hall Effect was studied and it was found that the Hall voltage depends on the sample current as well as the magnetic field. The Hall coefficient was found from both V H vs B and V H vs I and the carrier concentration at room temperature was calculated for the given semiconducting material.
Precautions:
1) Ensure the zero position of the potentiometer before switching on the power supply to the electromagnets.
2) Ensure the connections, and the appropriate voltage and current ranges in the meters before the start of the measurements.
3) Ensure the reading of the zero voltage in each set of measurements.
4) Occupy more than 50% of the graph paper to ensure good presentation of the results.
5) Ensure that the position of the sample remains the same between the pole pieces throughout the experiment.
6) Readings should be taken while increasing the magnetic field. If the suggested reading is exceeded appreciably while increasing the field then the next closest reading may be recorded instead of reducing the field.
7) In the V H vs B measurement, the field must be reduced to zero before increasing the sample current I s for each set of readings.
8) Connecting wires should be removed carefully and tied after the finish of the experiment.
Determination of Electric field
(a) Remove the deflection magnetometer and the wooden frame from the bench.

(c) Observe the spot on the fluorescent screen of the CRT and, if required, adjust the intensity and focus of the electron beam. In the absence of electric and magnetic fields, the electron beam is not deflected and the spot should remain at the zero reading of the CRT graphical screen.
(d) Increase the DIF INT potentiometer in order to apply voltage to the vertical deflecting plates. This produces an electric field that deflects the electron beam vertically upwards. Note the least count of the voltmeter given on the CRT power supply.
(e) Note down the voltage corresponding to a deflection of 1 cm of the spot on the CRT screen. Note both values in the observation table in the journal.
Balancing configuration with the magnetic field
(a) In order to balance the electric field with the magnetic field, place the two bar magnets on the wooden bench on the left and the right side of the CRT screen. Refer to figure (b) for placing the bar magnets on the wooden bench in a symmetric manner in order to produce a magnetic field perpendicular to the electric field and to the direction of the electron beam (along the length of the cathode ray tube).

(b) Arrange the distance between the magnets so that the bright spot on the screen returns to the zero position on the CRT graphical screen. This balances the electric field on the electron beam with the magnetic field of the bar magnets, and the given formula is applicable to determine the e/m value. The position of the bar magnets should remain the same during the experiment.

(e) Shift the deflection magnetometer to the next x position of 2 cm along the wooden frame and repeat steps (c) and (d).
Measurement of time period in the Earth's magnetic field
(a) Remove the bar magnets from the wooden bench.
Results:
The e/m of an electron using Thomson's method was ……. C/kg.
Conclusions:
Thomson's method to determine the e/m of electrons was studied. The ratio e/m was found to be….., which compares well with the theoretically estimated value of e/m. The e/m was found in Thomson's experiment without requiring the value of e or m of the charged particles. If the mass of the charged particle increases as the velocity approaches relativistic values, the e/m will decrease.
Precautions:
1) The magnetic meridian should be found by keeping the magnets away from the magnetic needle.
2) The wooden bench should remain in the magnetic meridian throughout the experiment.
3) Stopwatch readings should be taken accurately.
4) t 0 in the Earth's magnetic field should be noted for two sets of oscillations.
5) The number of oscillations is a suggested reading. If the deflection magnetometer is not sensitive for 5 oscillations, only 2 or 3 oscillations may be taken to calculate the time period (T).
Experiment No 4
Study of the cathode ray oscilloscope

Aim: 1. To use the cathode ray oscilloscope (CRO) to determine the amplitude and the frequency of a sinusoidal voltage.
2. To determine the phase difference between two sinusoidal electrical signals using the cathode ray oscilloscope

Apparatus: Cathode ray oscilloscope, AC signal generator, transformer, AC voltmeter, RC circuit, connecting wires

Formula: 1. sin α p = y 1 / y 2 , where α p = experimental value of the phase difference, y 1 = intercept of the ellipse on the y-axis, and y 2 = reading of the horizontal tangent of the ellipse (see the Appendix). 2. α t = theoretical value of the phase difference of the RC network, calculated from f = frequency, R = resistance, and C = capacitance.

(c) Adjust the voltage control knob to get the waveform in the measurable range on the screen of the CRO.
(d) Use the time base (t/div) knob to get a stable sinusoidal waveform on the CRO screen.
(e) Note the vertical divisions on the CRO screen for the peak to peak distance of the waveform.
(f) Note the corresponding volt/div from the knob in the observation table 1.
(g) Calculate the amplitude of the input signal using div × (volt/div).
(h) Repeat the steps (c) to (g) for three different values of the input signal.

(m) Calculate the frequency of the signal on CH1 using formula 2.
(n) Repeat the steps (j) to (m) for three different values of the signal frequencies.
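The arithmetic of steps (g) and (m) is simple enough to script as a cross-check. A minimal sketch; the division counts and knob settings below are hypothetical readings.

```python
# Sketch: amplitude and frequency from CRO screen readings.
# a = (volt/div) * (peak-to-peak divisions / 2); f = 1 / T with
# T = (time/div) * (divisions per cycle). All readings are hypothetical.

import math

n_divs = 4.0          # peak-to-peak vertical divisions on the screen
volt_per_div = 0.5    # volt/div knob setting (V)

amplitude = volt_per_div * n_divs / 2.0   # peak amplitude (V)
v_rms = amplitude / math.sqrt(2.0)        # rms value of a sine wave

t_divs = 5.0          # horizontal divisions spanning one full cycle
time_per_div = 2e-3   # time/div knob setting (s)

period = time_per_div * t_divs   # signal period (s)
frequency = 1.0 / period         # signal frequency (Hz)

print(f"Amplitude : {amplitude:.2f} V, V rms : {v_rms:.2f} V")
print(f"Frequency : {frequency:.1f} Hz")
```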
Measurement of the phase difference in the dual mode of the CRO
(a) Connect the circuit as shown in figure (b). In this case the signal from the function generator is connected directly to CH1 as in step 3, and the same signal is connected at the input of the RC circuit. The signal at the output of the RC circuit (with a phase difference) is connected to CH2 (also called the X-input of the CRO).
(b) Press the mono/dual press button to switch the CRO in the dual mode. In this mode both the signals, on CH1 and CH2, will be displayed on the CRO screen.
(c) Adjust the volt/div knob for both the signals and use the x-position and time/base knob to display the signals in the suitable voltage and time scales. One of the signals should cross the zero position on the graphical screen of the CRO.
(d) Note down the time period and the time difference between the signals in the observation table 3.
(e) Using the formula, calculate the phase difference between the signals. Compare the phase difference with the theoretical value by noting down the resistance and capacitance of the RC circuit and the frequency of the signal estimated from the time period.
Measurement of phase difference in x-y mode of the CRO
(a) Release the dual mode button and press the x-y push button on the CRO front panel. (b) Adjust the x- and y-position knobs and place the ellipse symmetrically around the voltage and time axes. Adjust the volts/div knob if the size of the ellipse needs to be changed to cover the maximum portion of the CRO graphical screen.
(c) Note the readings of the vertical intercept, y 1 , and the horizontal tangent, y 2 , for the ellipse on the screen (refer to the Appendix). Use observation table 3.
(d) Calculate the phase difference using the formula and compare it with the theoretical value.
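Both phase-difference measurements, and the theoretical comparison, reduce to one-line calculations. The sketch below uses hypothetical readings and assumes the CH2 signal is taken across the resistor of a series RC network, for which tan α t = 1/(2πfRC); check the manual's formula sheet for the convention of the given circuit.

```python
# Sketch: phase difference three ways -- dual-mode time shift, x-y mode
# ellipse intercepts, and the theoretical value for a series RC network.
# All readings and component values are hypothetical.

import math

# dual mode: alpha = 360 * (time difference / period), in degrees
period = 10e-3   # signal period read from the screen (s)
dt = 1.2e-3      # time shift between the two traces (s)
alpha_dual = 360.0 * dt / period

# x-y mode: sin(alpha) = y1 / y2 (vertical intercept / horizontal tangent)
y1 = 2.1         # vertical intercept of the ellipse (divisions)
y2 = 3.0         # horizontal-tangent reading, i.e. maximum height (divisions)
alpha_xy = math.degrees(math.asin(y1 / y2))

# theoretical value, output across R: tan(alpha) = 1 / (2*pi*f*R*C)
f = 1.0 / period   # frequency estimated from the period (Hz)
R = 15e3           # resistance of the RC circuit (ohm), hypothetical
C = 0.1e-6         # capacitance of the RC circuit (F), hypothetical
alpha_theory = math.degrees(math.atan(1.0 / (2.0 * math.pi * f * R * C)))

print(f"Dual mode   : {alpha_dual:.1f} deg")
print(f"x-y mode    : {alpha_xy:.1f} deg")
print(f"Theoretical : {alpha_theory:.1f} deg")
```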
Measurement of unknown frequency of the signal using CRO
(a) Connect the circuit as shown in figure (c) and switch ON the given ac transformer. In this case the signal from the function generator is applied at CH1 of the CRO and the unknown signal is applied at CH2.

(e) Using the given formula, calculate the unknown frequency of the input signal on CH2.
(f) Repeat the steps (b)-(e) for three additional frequencies in multiples of 50 Hz and in each case determine the unknown frequency (of the ac mains) by obtaining a near-stable Lissajous pattern on the screen of the CRO.
1) Measurement of the amplitude of the input signal
Sr. No. | n (peak to peak divisions) | p (volt/div) | a = p (n/2) (volts) | V rms = a/√2 (volts)

Result: The frequency of the unknown source was found to be ……Hz. The phase difference between the two electrical signals was found to be….. and ……degrees. The theoretically estimated value was….degrees.
Conclusions:
The CRO was used to find the amplitude of the input signal and the unknown frequency of the input signal using the formation of Lissajous patterns. The CRO was also used to find the phase difference between two electrical signals generated using an R-C circuit. The theoretical value was estimated for the R-C circuit and compared with the values found experimentally using the x-y and dual modes of the CRO operation.
Precautions:
1) The intensity of the screen is kept at the minimum, and reduced to zero if not in use.
2) The Lissajous patterns should cover the majority portion of the CRO screen for improving the precision of the measurements.
3) The waveforms seen on the screen should be placed symmetrically about the time axis.

(k) Choose a set of (V FB2 , I FB2 ) and (V FB1 , I FB1 ) in the linear region of the V-I characteristics.
Estimate the dynamic resistance (R d ) of the Germanium diode from the forward slope of the V-I characteristics (see the sketch after this step).

(f) Plot the reverse bias V-I characteristics of the Germanium diode along with its forward bias characteristics on the graph paper. Write the scale of the x-axis and y-axis at the bottom left corner of the graph paper.
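The dynamic-resistance estimate of step (k) is just a ratio of differences; a minimal sketch with hypothetical points taken from the linear region of the curve:

```python
# Sketch: dynamic resistance from two points in the linear region of the
# forward V-I curve, R_d = dV/dI. The readings are hypothetical.

v1, i1 = 0.30, 2.0e-3   # (V_FB1, I_FB1): voltage (V), current (A)
v2, i2 = 0.35, 6.0e-3   # (V_FB2, I_FB2)

r_dynamic = (v2 - v1) / (i2 - i1)   # ohms
print(f"Dynamic resistance R_d = {r_dynamic:.1f} ohm")
```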
Reverse Bias characteristics of the Germanium Diode
(g) Plot the tangent to the reverse characteristics in the saturation region of the curve. Note the reverse saturation current (I s ). Refer to the supplementary material in the manual for the reverse characteristics and determination of the reverse saturation current.
Energy gap determination
(a) Carefully pour the water into the container and dip the glass tube (with the diode and thermometer assembly) into the container.

(e) Switch OFF the heating mantle from the on-board supply; the temperature will continue to increase above 90 ºC.
(f) Without removing the glass tube from the water bath, wait for the stable temperature reading in the thermometer as the water starts cooling down.
(g) Note down the current in the digital microammeter corresponding to 90ºC as your first reading.
(h) Read the temperature at every 5 degree fall as the water cools down to 40 ºC, and note the corresponding reverse saturation current (I s ) through the PN junction diode. Follow table 3 to choose the temperature intervals.

(k) Using the slope, calculate the energy gap of the semiconducting material of the PN junction diode with the given formulae.
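Step (k) converts the slope of log 10 I s versus 1/T into the energy gap. The sketch below assumes the standard dependence I s ∝ exp(−E g /(2kT)), which gives E g = 2k·ln(10)·|slope|; verify this against the formula printed in the manual, since some kits fit log 10 I s against 10³/T instead. The slope is a hypothetical value.

```python
# Sketch: energy gap from the slope of log10(I_s) versus 1/T, assuming
# I_s ~ exp(-Eg / (2*k*T))  =>  Eg = 2 * k * ln(10) * |slope|.
# The slope below is a hypothetical value read from the best-fit line.

import math

K_B = 1.381e-23        # Boltzmann constant (J/K)
E_CHARGE = 1.602e-19   # elementary charge (C)

slope = -1800.0        # slope of log10(I_s) vs 1/T (kelvin), hypothetical

eg_joule = 2.0 * K_B * math.log(10.0) * abs(slope)   # energy gap (J)
eg_ev = eg_joule / E_CHARGE                          # energy gap (eV)

print(f"Energy gap : {eg_ev:.2f} eV")
```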
Observation Table: Least Counts (L.C.) of the measuring instruments
Conclusions:
The forward and reverse bias V-I characteristics of the Germanium diode were studied. The cut-in voltage of the Germanium diode was found from the forward bias characteristics. The reverse saturation current (I s ) was found, and it was observed that I s does not depend on the applied reverse voltage. The energy gap of the semiconductor can be found by operating the PN junction diode in the reverse bias mode and measuring the reverse saturation current as a function of the temperature. The negative slope of log 10 I s versus 1/T for a fixed reverse bias voltage suggests that the resistance of the semiconductor increases with decreasing temperature.
Precautions:
1) Ensure the zero position of the potentiometer of the voltage source before switching ON the power supply.
2) Ensure the connections, and the appropriate voltage and current ranges in the meters before the start of the measurements as well throughout the experiment.
3) Ensure the reading of the zero voltage in each set of measurements.

4) Note the cut-in voltages and appropriate scales with their units on the graph papers. Occupy more than 50% of the graph paper to ensure good presentation of the results.

5) Care must be taken in handling the heating assembly.
6) The temperature must be recorded closest to the diode without the thermometer touching it.
7) The temperature overshoot must be handled by switching off the heating mantle 5 degrees before it reaches 90 ºC.
8) The suggested readings of the voltages and the temperature in the observation table may be suitably adjusted if required for the experiment. Any 6 temperature readings may be taken with a minimum difference of 2 degrees.

9) Connecting wires should be removed carefully and tied after the finish of the experiment.
Experiment No 6

Input and output characteristics of the NPN transistor in common base configuration
Aim: 1. To study the input and output V-I characteristics of the given NPN transistor in the common base (CB) configuration. 2. To calculate the input resistance and the current transfer gain in the CB configuration of the NPN transistor.

where x 1 , x 2 are the independent variables, and y is the dependent variable.

(e) Plot the input V-I characteristics on the graph paper for V CB = 6 V. Choose the origin on the left edge of the graph paper and note the scale (refer to the supplementary material). Determine the cut-in voltage of the base-emitter junction and estimate the input resistance R i from the curve with V CB = 6 V.
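Both quantities named in the aim reduce to ratios of differences read off the graphs. A minimal sketch with hypothetical readings:

```python
# Sketch: input resistance and current transfer gain of the NPN transistor
# in the common-base configuration. R_i = dV_BE/dI_E from the input curve;
# alpha = dI_C/dI_E from the output data. All readings are hypothetical.

# two points from the linear region of the input curve at V_CB = 6 V
v_be1, i_e1 = 0.65, 2.0e-3   # (V, A)
v_be2, i_e2 = 0.70, 6.0e-3

r_input = (v_be2 - v_be1) / (i_e2 - i_e1)   # input resistance (ohm)

# change in collector current for a change in emitter current at V_CB = 2 V
d_ic = 3.92e-3   # change in collector current (A)
d_ie = 4.00e-3   # change in emitter current (A)
alpha = d_ic / d_ie   # close to unity for a good transistor

print(f"Input resistance R_i : {r_input:.1f} ohm")
print(f"Current gain alpha   : {alpha:.3f}")
```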
Results:
From the input characteristics, i.e., from the graph of V BE versus I E at V CB = 0 V, the value of R i of the given NPN transistor was found to be …. Ω. The current gain of the NPN transistor in the CB configuration was found to be α ≈ …… at V CB = 2 volts. The error in the calculation of the input resistance was estimated to be around …..%.
Conclusions:
The input V-I characteristics of the given NPN transistor were studied. The current gain in the common base configuration was found to be close to unity. The current transfers from the lower-resistance input side to the higher-resistance output side through the transistor action, which is not feasible by purely electrical techniques. Since the current transfer gain is unity, effectively the resistance is transferred from the input side to the output side of the transistor.
Precautions:
1) Ensure the zero position of the potentiometer on the voltage sources before switching ON the power supply to the kit.
2) Ensure the connections, and the appropriate voltage and current ranges in the meters before the start of the measurements.
3) Ensure the reading of the zero voltage in each set of measurements.
4) Note the cut-in voltages and appropriate scales with their units on the graph papers. Occupy more than 50% of the graph paper to ensure good presentation of the results.
5) The suggested voltage readings in the observation table may vary slightly, so choose the voltage intervals appropriate for the experimental set-up.
6) Connecting wires should be removed carefully and tied after the finish of the experiment.
Determination of the radius of curvature of a plano-convex lens using Newton's rings method
Aim: To determine the radius of curvature of a plano-convex lens using Newton's rings

Apparatus: Sodium vapor lamp, plano-convex lens, glass plate, travelling microscope, reading lamp

(c) Gradually move the crosswire to the 14 th , 10 th , 6 th , and 2 nd rings and tabulate the MSR and CSD in each case. Move the crosswire on the right side to the first dark ring, and successively note the MSR and CSD for the 2 nd , 6 th , 10 th , 14 th , and 18 th rings. Calculate the total reading of each dark ring position using the formula MSR + CSD × LC.
(d) Subtract the right side readings from the left side readings to get the absolute difference, which gives the diameter of the dark rings. Plot a graph of (D n ) 2 versus n and obtain the slope. Calculate the radius of curvature of the plano-convex lens in centimeters using the formula.
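Step (d) extracts the radius from the slope of the (D n )² versus n graph. The sketch below assumes the standard Newton's-rings relation for dark rings, (D n )² = 4nλR, so that R = slope/(4λ); the slope is a hypothetical value and λ is the sodium wavelength.

```python
# Sketch: radius of curvature from the slope of the (D_n)^2 versus n graph,
# assuming (D_n)^2 = 4 * n * lambda * R  =>  R = slope / (4 * lambda).
# The slope is a hypothetical value read from the best-fit line.

wavelength = 589.3e-7   # sodium light in cm, matching the manual's cm units
slope = 2.4e-2          # slope of (D_n)^2 vs n (cm^2 per ring), hypothetical

radius = slope / (4.0 * wavelength)   # radius of curvature (cm)
print(f"Radius of curvature R = {radius:.1f} cm")
```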
Observation Table: 1

Results: The radius of curvature of the given plano-convex lens was found to be ….cm.
Conclusions:
Newton's rings were obtained using a plano-convex lens placed on a glass plate, effectively producing a wedge-shaped film of varying thickness. The radius of curvature of the plano-convex lens was found by experimentally measuring the fringe diameters, and it was found to be…….cm in the given experiment.
Precautions:
1) The crosswire must be tangential to the dark fringes, and must move along the diameter.
2) The readings should be noted only in one direction of movement of the crosswire in order to avoid the backlash error. If more than 3 readings are missed, then the measurements should be repeated entirely.
3) The numbers of dark rings in the observation table 2 are suggested readings. In the experiment any 5 rings can be used for measuring the diameter, with a minimum difference of 2.
Determination of the wavelength of sodium light using plane diffraction grating
Aim: To determine the wavelength of sodium light using a plane transmission grating

Apparatus: Spectrometer, grating, sodium vapor lamp, reading lens
Setting up of the experiment
(a) Adjust the width of the slit so that the light from the sodium source enters the collimator.
(b) Bring the telescope in line with the collimator and focus the collimator as well as telescope on the slit to obtain a thin and sharp image. Use the slit width control knob, collimator focus knob and the telescope focus knob.
(c) Position the grating on the table so that the free surface of the grating faces the collimator and the windows are positioned nearly parallel to the grating surface. Use the movement of the grating table plate and the grating table for this adjustment.
Setting up the grating for the normal incidence of incoming rays
(a) Note the reading of the telescope in the direct image position. This corresponds to an angular separation of 180 degrees between the collimator and the telescope and can be used as a reference. Tabulate the MSR from either window W1 or W2 (whichever is opposite to the telescope position) in the journal and calculate the difference of 90 degrees. (b) Rotate the telescope by 90 degrees to the right (or to the left) side. Note the MSR from the same window when the telescope is at the right angle (90 degree) position with respect to the collimator.
(c) Slowly rotate the grating table plate (without moving the telescope plate) till you see the reflected image of the slit from the grating surface in the telescope. This position of the grating ensures a 45 degree angle between the grating surface and the incoming rays. If a single reflected image is not seen, then adjust the collimator or the telescope focusing knobs as required.

(b) Rotate the telescope to the left (or the right) to find the 1 st order images of the two spectral lines D1 and D2. Care should be taken so that the telescope rotates without moving the grating table plate.
(c) The diffracted image of the slit, at successive positions, is labeled as D1 and D2. Use the telescope and collimator focusing knobs to obtain fine diffracted images of the slit.
(d) Continue to rotate the telescope to find the 2 nd order diffraction spectra. Use the focus knob to clearly observe the two fine images of the slit.
(e) Keep the vertical cross-wire on the image D1 first and note the position using the MSR and VSD of the spectrometer windows, W1 and W2.
(f) Use the fine knob below the telescope to move the cross-wire from D1 to D2 and note the readings of MSR and VSD from the windows.
(g) Calculate the angle of deviation for both spectral lines with respect to the direct image readings. Follow the observation table to find the differences from both the windows. Calculate the wavelengths of the sodium spectral lines, D1 and D2, using the formula.
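Step (g) ends with the wavelength calculation. The sketch below assumes the grating equation for normal incidence, d·sin θ = nλ; the line density and the measured angles of deviation are hypothetical values.

```python
# Sketch: wavelength from the grating equation at normal incidence,
# d * sin(theta) = n * lambda. Line density and angles are hypothetical.

import math

lines_per_inch = 15000.0       # grating line density, hypothetical
d = 2.54e-2 / lines_per_inch   # grating element (m)
order = 2                      # diffraction order used (n = 2)

theta_d1 = math.radians(44.08)   # deviation angle for D1, hypothetical
theta_d2 = math.radians(44.13)   # deviation angle for D2, hypothetical

lam_d1 = d * math.sin(theta_d1) / order
lam_d2 = d * math.sin(theta_d2) / order

print(f"D1 : {lam_d1 * 1e9:.1f} nm")
print(f"D2 : {lam_d2 * 1e9:.1f} nm")
```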
Observation Table: 1 For spectral line D2
Conclusions:
The diffraction grating was used to obtain the wavelengths of the spectral lines of the sodium light source, D1 and D2, as…...nm and…..nm, respectively.
Precautions:
1) The crosswire must be on the center position of the spectral line.
2) The grating table should be leveled so that the slit image is at the center position of the view of the telescope.
3) The spectral lines should be fine and bright. Backlash error should be avoided.
Method of Linear least square fitting
Aim: To determine the parameters of the best fit and to plot the line of best fit for the data of the energy gap experiment

Usables: Experimental data of the energy gap of semiconductor experiment, calculator, graph paper

Formula: m = (N Σx i y i − Σx i Σy i ) / (N Σx i 2 − (Σx i ) 2 ) and c = (Σy i − m Σx i ) / N

(i) Plot the best fit line using the x i and y ls calculated in step (d).
Least Square Fitting
(j) Use the value of m from step (c) to calculate the energy gap of the semiconducting material of the PN junction diode.
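The sums in the least-square formulae are easy to mistype on a calculator. The sketch below implements the closed-form solution on hypothetical (x, y) pairs standing in for the (1/T, log 10 I s ) data of the energy gap experiment.

```python
# Sketch: linear least-squares fit y = m*x + c via the normal equations,
# m = (N*Sxy - Sx*Sy) / (N*Sxx - Sx^2) and c = (Sy - m*Sx) / N.
# The (x, y) pairs are hypothetical stand-ins for (1/T, log10(I_s)) data.

xs = [2.75e-3, 2.85e-3, 2.96e-3, 3.10e-3, 3.19e-3, 3.30e-3]   # 1/T (1/K)
ys = [-5.02, -5.21, -5.40, -5.66, -5.82, -6.02]               # log10(I_s)

n = len(xs)
sx = sum(xs)
sy = sum(ys)
sxy = sum(x * y for x, y in zip(xs, ys))
sxx = sum(x * x for x in xs)

m = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope of the best-fit line
c = (sy - m * sx) / n                           # intercept of the best-fit line

# sum of squared errors -- should increase if m or c is perturbed by 1%
sse = sum((y - (m * x + c)) ** 2 for x, y in zip(xs, ys))

print(f"slope m = {m:.4g}, intercept c = {c:.4g}, SSE = {sse:.3g}")
```

Recomputing the sum of squared errors after perturbing m or c by 1% reproduces the minimum-error verification described in the conclusions below.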
Observation Table: 1. Calculation of the best fit parameters and least square error
Results:
The parameters of the best fit were found to be m =…..and c =……. The line of best fit was plotted using these parameters. The energy gap of the semiconductor material of the PN Junction diode was calculated using the slope.
Conclusions:
The least square method was studied to find the best fit line which describes the experimental data points on the graph. The slope and the intercept of the best fit line were found from the least square method formulae. The energy gap of the semiconducting material of the given PN junction diode was calculated using the slope of the best fit line. The minimum error was verified by arbitrarily varying the slope and the intercept values by 1%, and it was ascertained that the sum of the squares of the errors increases for parameters other than those found by the least square formulae.
Precautions:
1) Care must be taken to correctly calculate the sums for the least square formulae.
2) The sums and the values of the parameters should be determined only up to 4 significant digits, and this has to be maintained consistently throughout the calculations.
3) Note that the method of least squares is a statistical procedure, and it does not give any meaningful information on the actual value of the result obtained via the experiment.

2. Why is the BG used in this experiment? What is the least count of the BG?
3. How do you control the intensity of light incident on the photocell? What is the direction of current if the cathode is reverse biased?
4. If the same sodium source is used for an interference experiment on one side and the photoelectric effect on the other side, which experiment will work? Why?
5. On what factors does the LSD given above depend?

Use n = 2, i.e., the 2 nd order diffraction spectra on both sides.
2. Calculate the angular separation between the direct reading and the grating surface at 53.5˚ with respect to the collimator.
"year": 2015,
"sha1": "81618862cfb9ca7f906fd298ba77daf04b617bd9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7f689d50125f463c3d9bb11e18301768d4a4d73f",
"s2fieldsofstudy": [
"Physics",
"Education"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
Inner ear exosomes and their potential use as biomarkers
Exosomes are nanovesicles involved in intercellular communication. They are released by a variety of cell types; however, their presence in the inner ear has not been described in the literature. The aims of this study were to determine whether exosomes are present in the inner ear and, if present, to characterize the changes in their protein content in response to ototoxic stress. In this laboratory investigation, inner ear explants of 5-day-old Wistar rats were cultured and treated with either cisplatin or gentamicin. Hair cell damage was assessed by confocal microscopy. Exosomes were isolated using ExoQuick, serial centrifugation, and mini-column methods. Confirmation and characterization of the exosomes were carried out using transmission electron microscopy (TEM), ZetaView, BCA protein analysis, and proteomics. Vesicles with a size distribution typical of exosomes were observed using TEM and ZetaView. Proteomic analysis detected typical exosome markers and markers for the organ of Corti. There was a statistically significant reduction in the exosome protein level and the number of particles per cubic centimeter when the samples were exposed to ototoxic stress. Proteomic analysis also detected clear differences in protein expression when ototoxic medications were introduced. Several of the significantly changed exosomal proteins have previously been described in the context of hearing loss and ototoxic treatment. This is the first report describing exosomes derived from the inner ear. These findings may present an opportunity to conduct further studies with the hope of using exosomes as biomarkers to monitor inner ear function in the future.
Introduction
Exosomes are membrane-bound nanovesicles, [1] with diameters ranging from 30 to 150 nm, [2] that are released from both normal and diseased cells into interstitial and bodily fluids. [1] The important roles of exosomes in physiological and pathological processes frequently went unrecognized due to their small size. [1] However, they are now accepted as components of intercellular communication systems that can modulate the function of their target cells. [1] Research on the involvement of exosomes in disease has expanded over the past years, with studies focusing on their use as biomarkers in the diagnosis, prognosis, and management of various diseases. [3] However, to our knowledge, the presence of exosomes in the inner ear has not been described in the literature. One author described the presence of extracellular vesicles resembling exosomes being released from a human vestibular schwannoma cell line, [4] while another found heat shock protein (HSP), a protein known to be present in exosomes, being released by the glial supporting cells of the inner ear. [5] Exosome levels in the plasma or other bodily fluids increase during disease processes and decrease during recovery or after treatment. [6,7] Whether the inner ear also produces exosomes under stressful conditions would be interesting and important to determine, as well as whether they play a role in sensorineural hearing loss (SNHL) induced by insults such as ototoxic medications and inflammation. Since exosomes provide molecular information about the parental cells that produce them, they could also potentially be used as biomarkers to help with the diagnosis, monitoring, and prognosis of inner ear diseases.
The aims of the study were to show the presence of exosomes in the inner ear and characterize the changes in their protein content in response to ototoxic medications.
Animal care
The animals used in this study were five-day-old Wistar rat pups (Harlan, Indianapolis, IN, USA) that were housed under pathogen-free conditions at the animal facility of the Department of Biomedicine of the University Hospital of Basel.
All procedures were conducted with the approval of the Animal Care Committee of Canton Basel City, Switzerland (Kantonales Veterinäramt Basel, Permit Number: 2263) in accordance with the European Communities Council Directive of 24 November 1986 (86/609/EEC).

… after centrifugation. A second centrifugation was performed at 1500×g for 5 minutes. The residual pellets were resuspended in 200 μL of PBS for analysis.
Once results confirmed the possible presence of exosomes in the samples, serial centrifugation and mini-column methods were used as described previously, [2] a validated technique known to produce highly purified exosomes. Using this method, the culture medium was centrifuged at 1000×g and 3000×g for 10 minutes each, followed by 10 000×g for 30 minutes, each done at 4˚C. After each spin, sediments at the bottom of each tube were removed. The resultant supernatant was then passed through a 1.5 cm × 12 cm mini-column (Bio-Rad, Hercules, CA, USA; Econo-Pac columns) packed with Sepharose 2B (Sigma-Aldrich, St. Louis, MO, USA). The column bed volume was 10 ml. The column was washed with 20 ml of PBS, and a porous frit was placed at the top of the gel to prevent any disturbance during subsequent elution with PBS. Fractions 1-3 (first 3 ml) were discarded, while fraction #4 (1 ml) was collected and ultracentrifuged at 105,000×g for 2 hours. The supernatant was then discarded and the resultant pellets were suspended in 20-100 μl of PBS, depending on the size of the pellets.
Confirmation of exosomes. Transmission electron microscopy (TEM) (FEI/Philips CM200 FEG, Amsterdam, Netherlands) was performed on isolated exosomes after fixation on a 400-mesh square copper grid with 2% uranyl acetate using a negative staining method.
ZetaView (Particle Metrix GmbH, Meerbusch, Germany) was used to detect nanoparticles of the correct size and the distribution or concentration of exosomes, as recommended by the company.
Exosome protein levels were determined using the BCA Protein Assay Reagent kit (Pierce, Rockford, USA) according to the manufacturer's instructions.
The exosome samples were also analyzed using a label-free quantitative mass spectrometry-based proteomics approach at the Proteomics Core Facility of the University of Basel, Switzerland, as recently described [8]. In brief, samples were dissolved in lysis buffer (1% sodium deoxycholate, 0.1 M ammonium bicarbonate), reduced with 5 mM TCEP for 15 min at 95˚C, and alkylated with 10 mM iodoacetamide for 30 min in the dark at room temperature. Samples were diluted, digested with trypsin (Promega) at 37˚C overnight (protein to trypsin ratio: 50:1), and desalted on C18 reversed-phase spin columns according to the manufacturer's instructions (Microspin, Harvard Apparatus).
One microgram of peptides from each sample was subjected to LC-MS analysis using a dual-pressure LTQ-Orbitrap Elite mass spectrometer connected to an electrospray ion source (both Thermo Fisher Scientific). Peptide separation was carried out using an EASY nLC-1000 system (Thermo Fisher Scientific) equipped with an RP-HPLC column (75 μm × 30 cm) packed in-house with C18 resin (ReproSil-Pur C18-AQ, 1.9 μm resin; Dr. Maisch GmbH, Ammerbuch-Entringen, Germany), using a linear gradient from 95% solvent A (0.15% formic acid, 2% acetonitrile) and 5% solvent B (98% acetonitrile, 0.15% formic acid) to 28% solvent B over 75 min at a flow rate of 0.2 μl/min. The data acquisition mode was set to obtain one high-resolution MS scan in the FT part of the mass spectrometer at a resolution of 240,000 full width at half-maximum (at m/z 400), followed by MS/MS scans in the linear ion trap of the 20 most intense ions. The charge state screening mode was enabled to exclude unassigned and singly charged ions, and the dynamic exclusion duration was set to 20 s. The ion accumulation time was set to 300 ms (MS) and 50 ms (MS/MS). The collision energy was set to 35%, and one microscan was acquired for each spectrum. For all LC-MS measurements, singly charged ions and ions with unassigned charge state were excluded from triggering MS2 events.
To determine changes in protein expression across samples, MS1-based label-free quantification was carried out. The generated raw files were imported into the Progenesis QI software (Nonlinear Dynamics, Version 2.0) and analyzed using the default parameter settings. MS/MS data were exported directly from Progenesis QI in mgf format and searched against a decoy database of the forward and reverse sequences of the predicted proteome from Rattus norvegicus (Uniprot, download date: 24/03/2017, total of 72,508 entries) using MASCOT. The search criteria were set as follows: full tryptic specificity was required (cleavage after lysine or arginine residues); 3 missed cleavages were allowed; carbamidomethylation (C) was set as a fixed modification; oxidation (M) as a variable modification. The mass tolerance was set to 10 ppm for precursor ions and 0.6 Da for fragment ions. Results from the database search were imported into Progenesis, and the protein false discovery rate (FDR) was set to 1% using the number of reverse hits in the dataset. The final protein lists of all quantified peptides for each protein were exported and further statistically analyzed using an in-house developed R script (SafeQuant, https://github.com/eahrne/SafeQuant). Two independent experiments were performed: (I) comparison of two control, two cisplatin-, and one gentamicin-treated samples, and (II) comparison of control and cisplatin-treated samples in biological triplicates. The detailed results of the two proteomics experiments, including identification scores, number of peptides quantified, normalized (by sum of all peak intensities) peak intensities, log 2 ratios, coefficients of variation, and p-values for each quantified protein and sample, are displayed in S1 File. All raw data and results associated with the manuscript have been deposited in the ProteomeXchange Consortium via the PRIDE [9] partner repository with the dataset identifier PXD009483 and 10.6019/PXD009483 (Reviewer account details: Username: reviewer11926@ebi.ac.uk, Password: of9Cfsth).
Organ of Corti (OC) dissection and tissue culture
All animal experiments were carried out with the approval of the Animal Care Committee of the Canton of Basel, Switzerland. OCs from 5-day-old Wistar rat pups (Janvier Labs, Le Genest-Saint-Isle, France) were dissected from the skull and then placed in Dulbecco's modified Eagle medium supplemented with 10% fetal bovine serum, 25 mM HEPES, and 30 U/mL penicillin (all from Sigma Aldrich Chemie GmbH, Steinheim, Germany). Explants were incubated at 37˚C in 5% CO2. After 24 h of recovery, hair cell damage was induced by exposure to 50 μM gentamicin or 160 μM cisplatin for 48 h.
Hair cell (HC) count
After treatment, the OCs were fixed, permeabilized, and stained with Alexa Fluor 568 phalloidin (Invitrogen AG, Basel, Switzerland). The images were acquired using a Nikon A1R laser confocal microscope with a ×20 lens (Nikon AG Instruments, Egg, Switzerland). The surviving HCs were counted in a section corresponding to 20 IHCs at different sites of the apical, basal, and middle turns of each OC in three randomly selected fields. The inner hair cells (IHCs) and outer hair cells (OHCs) were counted to determine HC survival. If there was a gap in the normally ordered array of HCs, cells were considered missing because they had undergone apoptosis. The results are presented as the number of surviving hair cells per cochlear turn.
Statistical analysis
The Statistical Package for the Social Sciences version 23 (IBM SPSS Statistics 23) and GraphPad Prism version 7.0 for Macintosh (La Jolla, California, USA) were utilized for statistical analysis. The comparisons between the exosome protein levels (mg/ml) in the control and treatment (cisplatin or gentamicin) groups were analyzed with the Wilcoxon matched-pairs signed-rank test. The comparisons of particle concentration (×10⁷/cm³) between the control and treatment groups were analyzed using the paired one-tailed Student's t test. A value of p < 0.05 was considered significant. Paired tests were used since, for each rat pup, we generated 2 matched pairs: 1 explant used as a control and 1 treated with ototoxic drugs. For the ototoxicity experiments, a 2-way ANOVA with Bonferroni's multiple comparisons test was used to analyze HC damage.
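For illustration, the paired tests described above can be reproduced as follows on toy numbers (not the study data); in modern scipy, the alternative keyword selects the one-tailed variant.

from scipy.stats import wilcoxon, ttest_rel

control = [28.0, 21.0, 25.0, 24.0, 22.0, 26.0]  # e.g., protein level, ug/ml
treated = [20.0, 18.0, 22.0, 19.0, 17.0, 21.0]

# Wilcoxon matched-pairs signed-rank test for exosome protein levels.
w_stat, w_p = wilcoxon(control, treated)

# Paired one-tailed t test for particle concentrations (control > treated).
t_stat, t_p = ttest_rel(control, treated, alternative="greater")

print(f"Wilcoxon p = {w_p:.3f}; paired t (one-tailed) p = {t_p:.4f}")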
Detection of inner ear exosomes
As a proof of concept for our experimental setup, we confirmed that the viability of the cells in the control group was good for up to 4 days. After ototoxic stress, the behavior of the isolated inner ear cells of the rat pups in our study was tested and found to be comparable to previous experiments done in our laboratory; profound hair cell damage was caused by cisplatin and less so by gentamicin (Fig 1). [10][11][12] We then collected the supernatant from the control and intervention groups. The concentrations of cisplatin and gentamicin were selected according to established ototoxic concentrations from previous experiments. [11,13] Exosomes were isolated from the supernatant at different time points and analyzed by different methods as indicated in S1 Fig. When isolated exosome preparations were visualized under TEM, we observed multiple extracellular vesicles with sizes ranging from 30 to 150 nm, typical of exosomes. When magnified, these vesicles had a uniform, cup-shaped appearance, which is the typical morphology of exosomes (Fig 2a).
Particles of the size distribution typical of exosomes were also observed when isolated vesicle samples were viewed with ZetaView (Fig 2b). The mean diameter of the vesicles in our sample was 109 ± 7.14 nm.
Furthermore, we also performed BCA assays to measure the mean exosome protein level. The average concentration of all the control samples was 23 μg/ml, which was clearly less than the concentration of exosomes typically found in cancer patients (mean of 50 μg/ml) [14].
Detection of typical exosome markers using Western blot, flow cytometry, and ELISA was not possible due to the low concentration of the protein yield, even after pooling samples in repeated attempts to increase the end sample concentration. Therefore, we instead performed proteomic analyses using highly sensitive mass spectrometry, by which 454 proteins of rat origin were detected. Typical exosome markers found included HSP 70/71 (Hspa5, Hspa1a, Hspa8) and HSP 90 (Hsp90b1, Hsp90aa1, Hsp90ab1). We also detected markers for the organ of Corti and hair cells, such as alpha-actinin 1 (Actn1) and Actn4, myosin heavy chain (Myh9), and ATP5A1. Additionally, we detected markers that are usually associated with SNHL, such as myosin heavy chain (Myh14), and markers associated with congenital SNHL and abnormal ear cartilage, like V-type proton ATPase subunit B (V-ATPase B). We did not detect markers for other parts of the inner ear such as the vestibular organ, stria vascularis, or otoconial matrix.
Inner ear exosomes as biomarkers of ototoxic stress
Overall, when comparing between the control and all the samples (n = 76) treated with ototoxic medications (regardless of type), there was a statistically significant reduction in exosome protein levels (p < 0.0001) (Fig 3). When analyzed separately, there was still a statistically significant reduction in exosome protein levels between the control and treated samples. There was also a statistically significant reduction in the number of particles per cubic centimeter when ototoxic stressors were added to the samples (p = 0.0012) (Fig 4).
From the proteomics analysis, there were also differences in the protein expression patterns observed in samples treated with either cisplatin or gentamicin compared to the control group. Fig 5 is a heat map showing an overview of the protein expression patterns in the different groups of samples. There were clear differences in protein expression between the control group and the cisplatin and gentamicin groups. Fig 6 illustrates the top hits that were significantly different between the control and intervention groups. For example, there were significant reductions in the abundances of ribosomal protein S13 (Rps13), L10 (Rpl10) and Acan. Conversely, significant increases in the expression of Tmem33, Pgm1, Eif3f, Rps24, Cct8, Hsd17b4, Aldh3a1, Ddost, Eif3c, Luc7l2 and Acadvl were observed for the intervention group.
Discussion
Extracellular vesicles were isolated from cultured primary rat inner ear cells using an established technique that gives highly purified and functional exosomes. [2,14] These isolated vesicles had typical characteristics (size range and appearance) of exosomes, as seen with electron microscopy. When examined using ZetaView, these vesicles were around 100 nm.
Additionally, the potential use of exosomes as biomarkers was also tested in this ex vivo system. The ototoxic effect of gentamicin and cisplatin on inner ear cells was tested using the same method as previously published by our group. [11,13] As shown in Fig 1, profound hair cell damage was caused mostly by cisplatin and less so by gentamicin. We compared the exosome protein levels and particle concentrations in samples with and without gentamicin or cisplatin and noticed a reduction in both parameters in the group treated with ototoxic medications. The isolated exosomes were then subjected to proteomic analysis, where proteins typically expressed in exosomes, as well as inner ear markers, were detected. Interestingly, the protein profile was strikingly different in the treated samples than in the control group, and proteins associated with sensorineural hearing loss were also seen. As such, among our most significant hits, Tmem33 [15] has been described in noise-traumatized rat cochleae, Pgm1 [16] in cisplatin-induced cytotoxicity, and Cct8 [17] in aminoglycoside-induced cytotoxicity. Taken together, exosome levels not only decrease and their protein signature changes, but the exosomes in the treated group appear to be enriched in certain proteins that may reflect the cell's state.
(Fig 5 caption: The dendrogram illustrates the similarity of protein abundance patterns. Control and cisplatin-treated samples were generated from two different animals (animals 1 and 2, as indicated) and two independent samples (one from each ear) were analyzed per animal. For gentamicin treatment, only two independent samples from one animal were analyzed. Black represents centered median data, with blue representing upregulation and yellow representing downregulation of protein expression. Protein names and quantitative results are shown in S1 File. https://doi.org/10.1371/journal.pone.0198029.g005)
Tmem33 (Transmembrane Protein 33) has been localized to the endoplasmic reticulum (ER) and nuclear envelope. Its exact function remains to be elucidated. It is thought that Tmem33 is involved in the regulation of the tubular structure of the ER because of its ability to interfere with the reticulons [18]. It could be an interesting target for future research.
To our knowledge this is the first report of the presence of exosomes in the inner ear. The presence of exosomes in the inner ear was to be expected, since they are known to be released by various cell types in different body fluids and under several stressful conditions such as hypoxia, [19] heat shock, [20] oxidative stress, [21] acidic pH, [22] and cancer. [23] They are also involved in normal physiological processes such as exporting waste products and eliminating drugs. [1] The exact role of exosomes is still unknown, although they are known to play important roles in intercellular communication, [24] genetic material exchange, angiogenesis, immune modulation, tumor metastasis, and oncogene distribution. [25] A number of limitations were identified in our study. For example, we used inner ears from 5-day-old rats for all our studies. Exosome release may differ in rats of different ages or in humans. We also used fixed concentrations of cisplatin and gentamicin, based on our established laboratory data, that we have shown to have a negative impact on inner ear cells. Different drug concentrations may also have different effects. Furthermore, we cultured the organ of Corti as a whole and did not separate it into its various parts. Another obstacle faced in this study was the low concentration of exosomes yielded from each culture, where we typically obtained only 20 μg of protein from 1 ml of culture medium. This low concentration of protein made it impossible to further confirm the presence of exosomes using classic methods such as Western blot, flow cytometry, or ELISA, all of which were attempted many times, even by using pooled samples to increase the end sample concentration.
These findings are very important, especially as a starting point for further research. If their presence is proven in human ears, the exact role of exosomes in the inner ear needs to be deciphered. The detection of exosomes and changes in their protein profile may be of interest in the treatment of otologic pathologies, especially of the inner ear, where they could be used to monitor inner ear damage secondary to ototoxic medications and help guide clinical therapy accordingly, or in patients with sudden SNHL. Further research in this field is required, although the potential use of exosomes as biomarkers seems evident.
Conclusion
We found exosomes present in the inner ear, which are most likely produced by the organ of Corti. We also found a statistically significant reduction in exosome protein levels and in the number of particles per cubic centimeter when ototoxic stress was introduced. This may be explained by the reduced cell number, since heavy hair cell damage occurs especially in the cisplatin group. Differences in protein expression patterns were also detected in the group treated with ototoxic drugs when compared to the control group. The interesting finding is that the significant hits in the proteomics analyses of the exosomes have previously been described in the context of hearing loss (especially Tmem33); as such, exosomes not only change in number and protein composition, but also seem to reflect the status of the inner ear hair cells. This qualifies exosomes as ideal candidates to be used as biomarkers, although more work needs to be done to further characterize the inner ear exosomes as well as to determine their exact role in the inner ear, especially when ototoxic medications are administered. | 2018-07-03T23:06:13.227Z | 2018-06-22T00:00:00.000 | {
"year": 2018,
"sha1": "b00ade49f2bc6188554dfd9bcec8f7db7785f052",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0198029&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b00ade49f2bc6188554dfd9bcec8f7db7785f052",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
207869951 | pes2o/s2orc | v3-fos-license | Hot Jupiter Atmospheric Flows at High Resolution
Global Circulation Models (GCMs) of atmospheric flows are now routinely used to interpret observational data on Hot Jupiters. Localized "equatorial β-plane" simulations by Fromang et al. (2016) have revealed that a barotropic (horizontal shear) instability of the equatorial jet appears at horizontal resolutions beyond those typically achieved in global models; this instability could limit wind speeds and lead to increased atmospheric variability. To address this possibility, we adapt the computationally efficient, pseudo-spectral PlaSim GCM, originally designed for Earth studies, to model Hot Jupiter atmospheric flows and validate it on the Heng et al. (2011) reference benchmark. We then present high resolution global models of HD209458b, with horizontal resolutions of T85 (128×256) and T127 (192×384). The barotropic instability phenomenology found in β-plane simulations is not reproduced in these global models, despite comparably high resolutions. Nevertheless, high resolution models do exhibit additional flow variability on long timescales (of order 100 planet days or more), which is absent from the lower resolution models. It manifests as a breakdown of north-south symmetry of the equatorial wind. From post-processing the atmospheric flows at various resolutions (assuming a cloud-free situation), we show that the stronger flow variability achieved at high resolution does not translate into noticeably stronger dayside infrared flux variability. More generally, our results suggest that high horizontal resolutions are not required to capture the key features of hot Jupiter atmospheric flows.
INTRODUCTION
Hot Jupiter atmospheres have been extensively characterized by observational campaigns and, as a result, they provide some of the best laboratories available to us for studying the physics of exoplanet atmospheres (Charbonneau & Deming 2007; Seager & Deming 2010; Baraffe et al. 2010; Madhusudhan et al. 2016; Parmentier & Crossfield 2018). The atmospheric flows that develop on these planets, with permanent day and night sides, are the subject of ongoing modelling efforts. While we know and have an understanding of what drives winds in these atmospheres (Showman & Polvani 2011; Hammond & Pierrehumbert 2018), the dominant form of dissipation for wind kinetic energy is still the subject of intense debate. Indeed, new physics is at play in these atmospheres: shocks (Li & Goodman 2010; Dobbs-Dixon & Agol 2013; Fromang et al. 2016), MHD effects (Batygin & Stevenson 2010; Perna et al. 2010; Menou 2012; Thorngren & Fortney 2018) and vertical transport (Menou 2019) could all play a role in limiting the wind speeds on these planets.
In their study of shocks in hot Jupiter atmospheres, Fromang et al. (2016) found that a barotropic horizontal shear instability develops in a deep model of the specific hot Jupiter HD209458b. The phenomenology of this instability, and the time variable flow that results from it, is reminiscent of the barotropic instability discussed by Menou & Rauscher (2009) in their shallow hot Jupiter model (see also Heng et al. 2011). To this day, however, a similar shear instability has not manifested in any of the deep global hot Jupiter flow models published in the literature. The results of Fromang et al. (2016) suggest that this could be due to insufficient resolution in typical deep global models, since the instability does not occur in their simulations until a demanding latitudinal resolution threshold is met (see their Figure 4). The existence of such an instability is important in (i) offering a novel avenue to limit wind speeds in hot Jupiter atmospheres and (ii) raising the possibility that it could manifest observationally in the form of atmospheric photometric and/or spectroscopic variability. Recently, Komacek & Showman (2019) have reviewed the observational evidence on hot Jupiter variability and presented a detailed exploration of the level of variability expected from global circulation models.
Here we reconsider the horizontal shear instability problem with PlaSim-Gen, an adaptation of the fast PlaSim Earth system simulator to the case of deep atmospheres such as those of hot Jupiters. Our work complements the recent study by Komacek & Showman (2019). The plan of this article is as follows. In §2, we describe our validation of the PlaSim-Gen model. In §3, we present high horizontal resolution models of HD209458b, which allow us to address the barotropic shear phenomenology of Fromang et al. (2016). We also present explicit post-processed dayside variability diagnostics of our atmospheric flow models. We find no corroborating evidence for the Fromang et al. (2016) phenomenology and establish that the flow variability exhibited in our high-resolution models does not translate into any significant additional observational variability in thermal emission. We conclude in §4.
PLASIM-GEN MODEL VALIDATION
As described in further detail in Appendix A, PlaSim-Gen is an adaptation of PlaSim to model deep atmospheres such as those of hot Jupiters. In PlaSim-Gen, the atmosphere is decoupled from the surface and assumed dry, and various Earth-specific modules have been turned off. In the present version, the radiative forcing has been simplified by implementing Newtonian relaxation to precomputed radiative equilibrium profiles (see Appendix A for details).
We validate our implementation by reproducing the deep HD209458b benchmark of Heng et al. (2011). Our radiative relaxation profiles follow the polynomial fits of Heng et al. (2011). We perform the validation at a moderate resolution of T31L30, which is commensurate with other spectral-core models for hot Jupiters published in the literature (e.g. Heng et al. 2011; Rauscher & Menou 2012). A model time step MPSTEP = 180 s and dissipation timescale TDISS = 5 × 10⁻³ were adopted. We run models up to 1200 planet days, which is longer than the typical runtime of models in the literature. Other model parameters are listed in Table 1 (see also Table A1 in Appendix A).
The top panels in Figure 1 and Figure 2 show, respectively, the zonal mean zonal wind profile and a representative horizontal temperature slice at the 260 mb level for our T31L30 validation model. We find that the outcomes from our validation model compare well to the reference results presented by Heng et al. (2011).
PLASIM-GEN HIGH RESOLUTION MODELS
Next, we move to high resolution versions of this HD209458b model. We focus on resolutions T85L30 and T127L30, which means that we increase the horizontal resolution while keeping the vertical resolution unchanged. This choice is dictated by our attempt to compare directly to the increased latitudinal resolution results discussed by Fromang et al. (2016). We run our models with MPSTEP = 60 s (T85) and MPSTEP = 45 s (T127), and values of the hyperdissipation timescale applied to all dynamical variables TDISS = 5 × 10⁻⁴ (T85) and TDISS = 1 × 10⁻⁴ (T127). We find that PlaSim-Gen is fairly efficiently parallelized, up to 32 threads, allowing us to run a T127 model for 1200 planet days in less than a week of wall time on a modern workstation.
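For reference, the run configurations quoted above can be summarized in code; the dictionary representation below is purely illustrative, and only MPSTEP and TDISS are actual PlaSim parameter names quoted in the text.

# Illustrative summary of the runs described in Sections 2 and 3; how
# PlaSim-Gen actually ingests these values (Fortran namelists) is not shown.
validation_run = {
    "resolution": "T31L30",  # spectral truncation T31, 30 vertical levels
    "MPSTEP": 180,           # model time step, seconds
    "TDISS": 5e-3,           # hyperdissipation timescale
    "runtime_days": 1200,    # planet days integrated
}
high_res_runs = [
    {**validation_run, "resolution": "T85L30", "MPSTEP": 60, "TDISS": 5e-4},
    {**validation_run, "resolution": "T127L30", "MPSTEP": 45, "TDISS": 1e-4},
]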
The middle panels in Figures 1 and 2 show our T85L30 results, while the lower panels show the T127L30 results. By and large, higher-resolution models reproduce the results of the lower resolution version (T31L30), with smaller-scale flow features that do not seem to impact the global flow properties much. This has already been noticed by Heng et al. (2011) and Liu & Showman (2013).
While the zonal wind structure, wind speeds and temperature maps are largely consistent across resolutions, careful examination reveals a deeper penetration depth of the equatorial jet and a somewhat larger north-south asymmetry (in both zonal wind and temperature maps), which seems absent from the lower T31L30 resolution runs.
These results are worth discussing in the context of the study of Fromang et al. (2016), who find a transition to a markedly different regime of circulation at latitudinal resolutions in excess of 64-128 cell elements (see their Figure 4). Our T85 and T127 runs do reach the latitudinal regime at which one might expect the dynamical transition witnessed by Fromang et al. (2016) to manifest. However, as exemplified in the lower two panels of Figure 2, even at high resolution, we do not find any evidence for the oscillatory/cyclic behavior of the equatorial regions reported by Fromang et al. (2016).
It is conceivable that our results on equatorial flow stability and lack of variability are sensitive to the level of hyperdissipation adopted in our models, as has already been discussed by Heng et al. (2011), for example. To address this possibility, we have run additional T31 and T85 models with different levels of hyperdissipation. We label T31 v1 and T85 v1 our previously described models with TDISS = 5 × 10⁻³ and 5 × 10⁻⁴, respectively. We also ran models with higher levels of hyperdissipation: T31 v2 with TDISS = 1 × 10⁻³ and T85 v2 with TDISS = 1 × 10⁻⁴ (i.e., each with five times faster hyperdissipation than in v1 models). Overall, we find little difference in atmospheric flow variability between our v1 and v2 models.
To better quantify our various claims, we inspect the time variability of the equatorial wind speed at the 50 mb level, which is one of the variability diagnostics studied by Fromang et al. (2016). Figure 3 shows planet-daily time-series of the equatorial zonal wind speed from planet day 100 to 1200 (avoiding transients from atmospheric spin-up in the first 100 planet days). There is little difference in the nature of the equatorial wind variability between the models with more or less hyperdissipation (at least at T31 or T85 resolutions).
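As an illustration of how such a diagnostic could be extracted from gridded model output, the sketch below uses xarray; the file name and variable names (ua, lev, lat, lon, time) are assumptions, not PlaSim-Gen's actual output conventions.

import xarray as xr

# Open daily model output; all names below are assumptions for illustration.
ds = xr.open_dataset("hd209458b_T127_daily.nc")

# Zonal-mean zonal wind at the equator, nearest model level to 50 mb,
# skipping the first 100 planet days of spin-up transients.
u_eq = (ds["ua"]
        .sel(lev=50.0, lat=0.0, method="nearest")
        .mean(dim="lon")
        .isel(time=slice(100, 1200)))

u_eq.plot()  # planet-daily time series, as in Figure 3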
There is however a noticeable difference in the nature of the variability in the high-resolution models (T85 and T127), relative to the moderate resolution models (T31). In particular, past 400 to 600 planet days, there are larger variations, ∼10 to 20% in fractional amplitude, on timescales of ∼100 to 200 planet days, which are absent from the low resolution runs. While the emergence of this extra variability is intriguing, in that it matches expectations in terms of the resolution requirements discussed by Fromang et al. (2016), we find that the phenomenology of this variability differs from that described by Fromang et al. (2016). The significant shifts in equatorial wind speeds, at the ∼10 to 20% level, occur on much longer timescales in our high-resolution runs than the ones reported by Fromang et al. (2016) (∼days). There is also no clear indication of wiggliness of the equatorial wind in our models (see Fig. 2). Furthermore, careful examination of our models reveals that the shifts in equatorial wind speed occur as a result of a minor, but noticeable, north-south breaking of symmetry in our high-resolution runs, which positions the peak wind speed somewhat above or below the equator. Some evidence for this behaviour is revealed in the lower two panels of Figures 1 and 2, although the effect is small.
One also notices in Figure 1 that the equatorial zonal flow reaches deeper in our T85, and even more so T127, models than in the T31 model. One possible interpretation for the overall phenomenology exhibited in our high-resolution models is that the atmospheric flow, driven from the top, eventually makes contact with the model lower boundary. This would happen at late times, past ∼500 planet days, and more easily so in high resolution models, because these models better resolve the meridional circulation associated with the top-forced zonal flow and its gradual penetration with depth (see, e.g. Showman et al. 2006). We have not pursued this interpretation further since we believe that additional physical ingredients are needed first to adequately model the deep layers at pressures of 10-100 bars and their interaction with the deeper planetary interior (see, e.g., the bottom drag included by Liu & Showman 2013; Komacek & Showman 2019).
ATMOSPHERIC VARIABILITY
Variability in the atmospheric flow does not necessarily translate into observable photometric and/or spectroscopic variability. For example, small-scale variability may average out once the emission properties over an entire planetary hemisphere are considered.
To quantify the degree of photometric variability one might expect from the equatorial flow variability shown in Figure 3, we have post-processed our models T31 v1, T85 v1 and T127 with the open-source petitRADTRANS tool (Mollière et al. 2019). As a diagnostic, we compute dayside emission in two separate narrow bands. We individually process each model column, weighting its emission by the cosine of the angle away from the substellar point, and then integrate all column contributions over the dayside hemisphere.
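The cosine-weighted dayside integration described above can be sketched as follows; column_flux is a hypothetical stand-in for a per-column radiative-transfer call (e.g., to petitRADTRANS), whose interface is not reproduced here.

import numpy as np

def dayside_flux(lats_deg, lons_deg, column_flux):
    """Schematic cosine-weighted dayside integration.

    lats_deg, lons_deg : 1-D grids of column latitudes/longitudes (degrees),
        with the substellar point at (0, 0).
    column_flux(i, j)  : hypothetical per-column emission in the chosen band.
    """
    total, weight_sum = 0.0, 0.0
    for i, lat in enumerate(np.radians(lats_deg)):
        for j, lon in enumerate(np.radians(lons_deg)):
            # Cosine of the angle away from the substellar point.
            mu = np.cos(lat) * np.cos(lon)
            if mu <= 0.0:          # nightside column: no contribution
                continue
            # Area element on the sphere carries an extra cos(lat) factor.
            w = mu * np.cos(lat)
            total += w * column_flux(i, j)
            weight_sum += w
    return total / weight_sum      # dayside-averaged emission

# Toy usage: uniform emitters return the uniform value.
lats = np.linspace(-87, 87, 30)
lons = np.linspace(-177, 177, 60)
print(dayside_flux(lats, lons, lambda i, j: 1.0))  # -> 1.0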
For concreteness, we adopt the same atmospheric composition parameters as the petitRADTRANS default code example (0.74 H₂, 0.24 He, 10⁻³ H₂O, 10⁻² CO, 10⁻⁵ CO₂, 10⁻⁶ CH₄, 10⁻⁵ Na and 10⁻⁶ K, by mass). All other model parameters are chosen to match those of our HD209458b model. The atmosphere is assumed to be cloud-free. To reduce the computational cost of processing from 16,000 to 36,000 dayside columns (for T85 and T127 resolutions, respectively), we focus on the emission in two representative narrow bands: 2.5-2.6 and 3.8-3.9 microns. Figure 4 shows the dayside-integrated narrow-band photometric variability in these two bands (blue and orange solid lines), with the equatorial wind from Figure 3 superimposed as a green dashed line. All quantities are shown in the time interval 800 to 1200 planet days and have been normalized to emphasize relative variations. Comparing the top panel (T31 v1) to the lower two panels (T85 v1 and T127), it is clear that the larger equatorial wind variability (at the 10 to 20% level) in the high-resolution models does not translate into any significant additional variability in the two narrow infrared bands of interest, relative to the modest resolution model (top panel).
The dayside flux variations remain within the 2% variability level for all three models. This is consistent with our earlier observation that the larger equatorial wind variability at late times in the high-resolution models (Fig. 3) is largely caused by limited north-south shifts in the off-equatorial latitudinal location of the equatorial wind, which have limited impact on the dayside-averaged emission pattern of these atmospheres.
While not disproving the possibility of greater variability, e.g. when clouds are accounted for, our results illustrate well the point that atmospheric flow variability does not necessarily translate into a more readily observable photometric/spectroscopic variability.
DISCUSSION AND CONCLUSIONS
Superrotating equatorial jets like those realized in most simulations of hot Jupiter atmospheric flows are potentially subject to a barotropic (horizontal shear) instability (Kuo 1949; Vallis 2006). While the barotropic instability was triggered in the shallow hot Jupiter model of Menou & Rauscher (2009), with P_bottom = 1 bar, none of the many deep hot Jupiter models (P_bottom ≥ 100 bar) published to date have shown the barotropic instability phenomenology. In this context, the findings by Fromang et al. (2016) that a barotropically unstable hot Jupiter flow can indeed be achieved in their deep model (P_bottom = 220 bar), provided that the latitudinal resolution is high enough, are intriguing. Indeed, the reason why the barotropic instability did not manifest itself in other deep hot Jupiter models could simply be that these models did not have sufficient latitudinal resolution to resolve the instability onset.
To address this possibility explicitly, we have built models at horizontal resolutions well beyond those typically achieved in global models of hot Jupiter atmospheric flows. We have found that the barotropic instability does not manifest in our deep global models, even at high horizontal resolution. While the atmospheric flow exhibits increased variability at high resolution, and at late times, we associate this feature to the deeper penetration of the equatorial jet observed at higher resolution (see Figure 1) and not the barotropic instability.
Our findings raise a number of questions. First, while Mayne et al. (2017) and Mendonça (2019) have argued that very long integration times are needed to reach a balanced state in deep model layers under hot Jupiter conditions, our results suggest that the exact requirement may be resolution-dependent (see Figure 1) and perhaps model-dependent as well (e.g. spectral versus grid-based algorithms).
Second, the evidence so far suggests that shallow enough global models can be susceptible to the barotropic instability, while deep enough models are not (or at least less so, within current computational limitations). This could simply be the result of the greater challenge there is in maintaining a barotropic (vertically-aligned) flow across a larger number of pressure scale heights. In this respect, we note that the low-latitude flow is clearly barotropic (vertically aligned) over a greater vertical extent in our T127 model (lower panel of Figure 1), relative to the other two panels, but this is not enough to trigger a horizontal shear instability.
Third, the reason for the emergence of the barotropic instability in the deep models of Fromang et al. (2016) remains unclear at this point. It could be related to the β-plane approximation (which is only valid in the close proximity of the equator) or perhaps to the compressible nature of the equations solved by Fromang et al. (2016), since compressible modes are filtered out from the hydrostatic equations solved in many published hot Jupiter models.
To conclude, it would seem that the use of only moderate horizontal resolutions in global hot Jupiter models is supported by our results, in the sense that high resolutions are not strictly necessary to capture the key features of the atmospheric flow. On the other hand, our results also highlight how a suitable approach to model the deep atmospheric layers and their coupling to the planetary interior is still missing and in need of physical clarification. Computational resources were provided by the Canadian Institute for Theoretical Astrophysics at the University of Toronto. KM is supported by the Natural Sciences and Engineering Research Council of Canada. This work has made extensive use of the following software packages: matplotlib, petitRADTRANS.
We would furthermore like to acknowledge that our work was performed on land traditionally inhabited by the Wendat, the Anishnaabeg, Haudenosaunee, Metis, and the Mississaugas of the New Credit First Nation.
APPENDIX A: PLASIM-GEN MODEL DESCRIPTION
PlaSim is an intermediate complexity Earth system simulator that is fast, parallelized, modular and extensively documented (Fraedrich et al. 2005). It has been used and extended before to study the climate of Earth-like exoplanets, with varying degrees of deviation from strict Earth conditions (e.g. Paradise & Menou 2017;Checlair et al. 2017;Paradise et al. 2018). Here we describe how we turned PlaSim into PlaSim-Gen, a generic simulator for deep atmospheres, by removing atmosphere-surface interactions and implementing a simple Newtonian relaxation scheme to drive atmospheric motions.
Our first code-level modification is to redistribute model pressure levels on a logarithmic grid, starting at the bottom pressure level PSURF (a standard PlaSim parameter). We followed the exact same procedure as Rauscher & Menou (2012) in distributing sigma levels logarithmically over OOM orders of magnitude. Note that PlaSim's dynamical core (PUMA) is a modern, parallelized version of the IGCM spectral dynamical core used by Menou & Rauscher (2009) and Rauscher & Menou (2012).
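For illustration, a minimal sketch of logarithmically spaced sigma levels over OOM orders of magnitude is given below; the exact placement of layer midpoints versus interfaces in PlaSim-Gen may differ from this illustration.

import numpy as np

def log_sigma_levels(n_levels: int = 30, oom: float = 5.0) -> np.ndarray:
    """Return sigma = P / PSURF from the top (10**-oom) down to 1."""
    return np.logspace(-oom, 0.0, n_levels)

PSURF = 220e5  # e.g., 220 bar in Pa, as in the deep model discussed in the text
pressures = PSURF * log_sigma_levels()
print(pressures[[0, -1]])  # top and bottom level pressures, Pa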
Our second code-level modification is to bypass PlaSim's Earth-centric radiation scheme and directly implement, in the radiation module, temperature tendencies that obey a 'Newtonian' (linear) relaxation to a prescribed temperature profile, T_eq, on a prescribed radiative relaxation time, τ_rad:

∂T/∂t = −(T − T_eq)/τ_rad. (A1)

We adopt the fits of Heng et al. (2011) for the dayside and nightside profiles of T_eq and τ_rad, which makes our model specifically tailored to the hot Jupiter HD209458b. Our third code-level modification is to change the timestep units in the code so that the parameter MPSTEP refers to the (integer) number of seconds per timestep (rather than minutes, as originally implemented). This is to accommodate the possibility of short timestep requirements when modeling hot atmospheres.
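A minimal sketch of the tendency of Equation A1, applied as a simple explicit update, is given below; PlaSim-Gen's actual implementation inside the radiation module (spectral representation, time stepping) is not shown.

import numpy as np

def newtonian_relaxation(T, T_eq, tau_rad, dt):
    """Return T after one time step dt of dT/dt = -(T - T_eq)/tau_rad."""
    return T + dt * (T_eq - T) / tau_rad

T = np.full(30, 1400.0)                 # column temperatures, K
T_eq = np.linspace(900.0, 1600.0, 30)   # prescribed equilibrium profile, K
tau_rad = 1.0e5                         # radiative relaxation time, s
print(newtonian_relaxation(T, T_eq, tau_rad, dt=180.0)[:3])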
We have also removed a few Earth-specific prescriptions that were hardcoded in PlaSim (concerning the magnitude of horizontal hyperdissipation at specific resolutions). Finally, we use the highly modular character of PlaSim to isolate relevant physics parametrizations while turning off all Earth-specific, and otherwise unnecessary, modules. Table A1 lists the PlaSim parameter choices that allow us to model the deep atmosphere of the hot Jupiter HD209458b and to successfully benchmark against the similar implementation described in Heng et al. (2011).
In short, we select 30 vertical levels (spaced logarithmically), keep dry convective adjustment, choose an appropriate reference temperature and select a short enough timestep for numerical stability. We also adjusted the value of the Robert-Asselin time filter and the type of horizontal hyperdissipation implemented to be more in line with what has been used in other spectral dynamical cores when modeling hot Jupiters (e.g., Heng et al. 2011; Rauscher & Menou 2012). In terms of deselecting PlaSim modules, we turn off the sea ice and ocean modules, and we remove surface evaporation, surface fluxes and surface stresses. All precipitation is also turned off, so that our model is effectively dry with zero water content. We turned off PlaSim's existing Newtonian relaxation scheme since it is superseded by the radiation scheme directly applied to temperature tendencies (Equation A1 above).
Note that some of our parameters are also chosen so that a suitable radiative transfer scheme would reproduce a tidally-locked hot Jupiter configuration (e.g. with the diurnal cycle on). Those choices are not strictly necessary given our hard-coding of temperature tendencies with a Newtonian relaxation scheme, but they illustrate how PlaSim can be extended to account for permanent dayside insolation with a full radiation scheme (as used in, e.g., Checlair et al. 2017).
This paper has been typeset from a TeX/LaTeX file prepared by the author. | 2019-10-31T20:01:09.000Z | 2019-11-01T00:00:00.000 | {
"year": 2019,
"sha1": "75c883b2a3f8f285b90d9586f721379adf3826d4",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1911.00084",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f9b7b4032536adbfd1b5e523ca914be258423d03",
"s2fieldsofstudy": [
"Physics",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
16987886 | pes2o/s2orc | v3-fos-license | Modified urethrovesical anastomosis during robot-assisted simple prostatectomy: Technique and results
Background Despite significant developments in transurethral surgery for benign prostatic hyperplasia, simple prostatectomy remains an excellent option for patients with severely enlarged glands. The objective is to describe our results of robot-assisted simple prostatectomy (RASP) with a modified urethrovesical anastomosis (UVA). Methods From May 2011 to February 2014, RASP with UVA was performed in 34 patients by a single surgeon (O.C.) using the da Vinci S-HD surgical system. The UVA was performed between the bladder neck and urethral margin using the Van Velthoven technique. Demographic, perioperative, and outcome data were recorded. Complications were graded with the Clavien–Dindo system. Results The mean age was 68 years (range, 62–74 years). The median preoperative prostate volume (interquartile range) was 117 cc (99–146 cc). Operative time was 96 minutes (78–126 minutes), estimated blood loss was 200 mL (100–300 mL), and two (5.8%) patients required a blood transfusion. No conversion to open surgery was needed. The median specimen weight on pathological examination was 76 g (58–100 g). The average hospital stay was 2.2 days (1–4 days) and the average Foley catheter time was 4.6 days (4–6 days). No intraoperative complications were recorded. There were seven (20.5%) postoperative complications, most of them Clavien Grade II or lower. Conclusion The results of our study show that RASP with UVA is a feasible, secure, and reproducible procedure with low morbidity. Additional series with larger patient cohorts are needed to validate this approach.
Introduction
With the development of new surgical techniques and energy sources, options for the endoscopic management of men with moderately enlarged prostates have widened over the past years. However, despite these advances, open simple prostatectomy (OSP) remains particularly well suited for patients with large glands (> 100 cc). 1 Newer options for minimally-invasive treatment of large glands include laparoscopic simple prostatectomy and holmium laser enucleation. While both of them have shown outcomes comparable to the open approach, 2-4 they require a steep learning curve, thus preventing wider acceptance among urologists. The robotic platform is an attractive alternative as it potentially overcomes these constraints by providing stereoscopic three-dimensional vision and exceptional dexterity to facilitate the more technically demanding steps of the simple prostatectomy procedure. 5 Robot-assisted simple prostatectomy (RASP) is a novel procedure not yet widely performed, even in high volume robotic centers, with no more than a couple of hundred cases reported worldwide. With this paucity of information, the experience and results of high volume centers are still valuable.
The objective of this report is to describe our results with RASP in a contemporary cohort of men with lower urinary tract symptoms (LUTS) secondary to benign prostatic hyperplasia (BPH).
Materials and methods
Between May 2011 and February 2014, 34 patients with BPH-related LUTS underwent RASP by a single surgeon (O.C.). Peri- and intraoperative data were prospectively collected and retrospectively analyzed. Indications for surgery included LUTS refractory to medical treatment, urinary retention, and BPH-related consequences to the upper tract. Besides regular preoperative testing, all patients were specifically evaluated with digital rectal examination, prostate-specific antigen, renal and pelvic ultrasound, International Prostate Symptom Score (IPSS), and maximum urine flow (Q max ). Prostate cancer was ruled out with transrectal ultrasound-guided biopsies in patients with elevated prostate-specific antigen and/or abnormal digital rectal examination. Complications were classified with the Dindo–Clavien classification. 6
Surgical technique
All of the procedures were performed by a single surgeon (O.C.) using the Da Vinci S-HD Surgical System (Intuitive Surgical, Sunnyvale, CA, USA).
Patient position and port placement
After induction, the patient was placed in a lithotomy position and a steep Trendelenburg position, over an antisliding foam with padding for all pressure points. Pneumatic compressors were used on the lower extremities to prevent postoperative deep venous thrombosis. We placed four trocars as follows: 12-mm camera port supraumbilically, two 8-mm robotic ports bilaterally on a line between the camera port and the iliac crest at least 9 cm from the camera port, and a lateral 12-mm port cranial to the right iliac crest for the assistant. Three robotic instruments were used: hot shears monopolar curved scissors, fenestrated bipolar forceps, and a large needle driver. A 0-degree camera was used throughout the whole procedure.
Bladder dissection and opening
Firstly, the median and medial umbilical ligaments were taken down, giving full access to the preperitoneal space and prostate. The periprostatic fat was then completely removed to gain full access to the prostatic capsule and vesicoprostatic junction. An anterior opening was made in the bladder proximal to the junction and continued distally along the prostatic capsule. Both edges of the bladder were then sutured to Cooper's ligament on each side to achieve optimal visualization of the adenoma (Fig. 1).
Dissection of the adenoma
The ureteral orifices were identified first. The correct plane of dissection between the prostatic capsule and the adenoma was then identified circumferentially on both sides of the prostate. Dissection started at the lower half of the contour, with countertraction provided by the assistant using the suction cannula (Fig. 2). Dissection then continued towards the anterior half. Finally, the catheter was identified at the apex and the urethra was sectioned under direct vision, with care taken not to risk the sphincteric complex. The adenoma was collected in an endoscopic bag.
Modified urethrovesical anastomosis
After careful revision of hemostasis, a double-needle barbed suture was used to create a posterior urethrovesical anastomosis using the Van Velthoven technique. Being careful not to include the ureteral orifices, the posterior bladder neck and urethra were sewn between Hour 3 and Hour 9 to create a halfway urethrovesical anastomosis (Fig. 3).
Bladder closure and postoperative care
A 22-Fr three-way Foley catheter was placed, and the prostatic capsule and anterior bladder were sutured in a running fashion using a Vicryl 2-0 suture. The bladder was then filled with 200 cc of saline to verify watertight closure. A percutaneous drain was left in place and bladder irrigation was started and maintained for 24 hours. The specimen was sent for pathological analysis.
Statistical analysis
Normally distributed quantitative data were summarized as means, with measures of variability reported as standard deviations, whereas non-normally distributed data were summarized as medians with variability reported as interquartile range (IQR). Qualitative data were reported as percentages. A Kaplan–Meier curve was designed to present changes in IPSS and Q max after surgery.
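For illustration, the summary statistics described above can be computed as follows on toy values (not the study data).

import numpy as np

# Mean (SD) for normally distributed data, median (IQR) otherwise.
op_time = np.array([78, 85, 92, 96, 104, 118, 126])  # minutes, toy sample

mean, sd = op_time.mean(), op_time.std(ddof=1)
median = np.median(op_time)
q1, q3 = np.percentile(op_time, [25, 75])

print(f"mean (SD): {mean:.0f} ({sd:.0f}) min")
print(f"median (IQR): {median:.0f} ({q1:.0f}-{q3:.0f}) min")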
Results
Patient characteristics are presented in Table 1. All patients failed previous medical therapy with α-blockers and/or 5-α-reductase inhibitors. Median (IQR) preoperative prostate volume by transrectal ultrasound was 117 (99–146) cc, while 12 patients (35%) had an indwelling urethral catheter. Table 2 summarizes the intraoperative results. Notably, all procedures were successfully completed robotically. Median (IQR) operative time was 96 minutes (78–126 minutes), estimated blood loss 200 cc (100–300 cc), and two patients (5.8%) had a blood transfusion. Table 3 describes postoperative complications: seven complications (20.5%) were reported, most of them (6/7) being low grade (Clavien I or II). One patient had a bladder neck contracture and was treated with endoscopic incision (Clavien IIIa). Median (IQR) specimen weight was 76 g (58–100 g). A small focus of prostate cancer Gleason 3+3 was identified in one case and the patient is currently under active surveillance with no evidence of disease progression.
Urinary symptoms and Q max improved significantly at 3 months, and the effectiveness was maintained during follow-up at 12 months (Figs. 4 and 5). None of the patients had de novo urinary incontinence or erectile dysfunction resulting from the procedure.
Discussion
Open simple prostatectomy is still the standard for patients with LUTS caused by large prostatic adenomas. 7 Nevertheless, this procedure is also associated with significant perioperative morbidity and a long convalescence. Laparoscopic simple prostatectomy has emerged as an alternative to OSP, offering lower blood loss, less pain, a shorter postoperative catheterization period, and a shorter hospital stay. 8 However, it is technically a highly demanding procedure requiring a steep learning curve and advanced laparoscopic skills; thus it has remained limited to a selected population of highly expert laparoscopic urologists. The robotic platform provides increased magnification, better visualization, and wristed instrumentation, and has been shown to alleviate the steep learning curve associated with complex minimally invasive reconstructive procedures. 9 The largest multi-institutional analysis of minimally invasive simple prostatectomy was recently published. 10 Overall, 1,330 consecutive cases were analyzed, including 487 RASPs (36.6%) and 843 laparoscopic simple prostatectomies (63.4%). The median overall prostate volume was 100 mL (range, 89–128 mL) and estimated blood loss was 200 mL (range, 150–300 mL). Intraoperative transfusion was required in 3.5% of cases, intraoperative complications were reported in 2.2% of cases, and the conversion rate to open surgery was 3%. The median length of stay was 4 days (range, 3–5 days) and the overall postoperative complication rate was 10.6%, mostly of low grade (i.e., Clavien I or II). At a median follow-up of 12 months, significant improvement was observed for Q max and IPSS (P < 0.001). Interestingly, a time trend comparing laparoscopic and robotic simple prostatectomy showed that while in 2006–2008 only 11% of the cases were done robotically, this changed to 74% during 2012–2014.
(Fig. 3 caption: A double-needle barbed suture was used to create a posterior urethrovesical anastomosis using the Van Velthoven technique. Being careful not to include the ureteral orifices, the posterior bladder neck and urethra were sewn between Hour 3 and Hour 9 to create a halfway urethrovesical anastomosis (broken line).)
Several technical modifications of the standard open prostatectomy technique have been described for RSP, probably reflecting a novel technique that is still under development. Sotelo et al's 11 original report on RSP consisted of a horizontal cystotomy proximal to the vesicoprostatic junction. Coelho et al 12 reported a technique in which a continuous vesicourethral anastomosis was performed as during a radical prostatectomy, with optimal intraoperative and postoperative outcomes, but with the drawback that this results in complete exclusion of the prostatic bed from further transurethral access.
However, others perform the operation through a capsular incision mimicking the classic technique reported by Sutherland et al. 13 Our technique is a combination of the above. We perform a longitudinal capsular and vesical incision, as it gives complete access to the adenoma and facilitates enucleation. After excision of the adenoma and hemostasis, we performed plication of the posterior capsule as described by Coelho et al 12 followed by a modification of their original vesicourethral anastomosis technique, which we perform only at the posterior lip. The advantage of our technique is that it provides hemostasis to the prostatic bed while allowing endoscopic access to the prostatic lodge if needed.
Our cohort represents one of the largest single-center experiences with RASP reported to date. There was a significant improvement in IPSS and maximum urinary flow relative to baseline (Figs. 4 and 5).
Two patients (5.9%) required a blood transfusion and the overall complication rate was 20.5%, with only one patient requiring a secondary procedure. Median length of stay was 2 days (1–4 days). These results are comparable to two recent reports on RASP from centers of excellence in Europe and the United States, reporting overall complication rates of 30% and 20%, and transfusion rates of 1.5% and 4%, respectively. 14,15 Conversely, a recent analysis of the United States Nationwide Inpatient Sample demonstrated a clinically significant transfusion rate of 21% among 34,418 OSPs performed between 1998 and 2010. 16 In an attempt to amplify the benefits seen with minimally invasive surgery, Fareed et al 17 reported the first series of single-port RSP in nine patients. Despite improvements in postoperative urine flow, perioperative complications were significant, with two patients requiring blood transfusions and two others developing significant hematuria requiring endoscopic evacuation and coagulation. Although single-port RSP is feasible, the high complication rates indicate that further refinements are necessary before it can be more widely endorsed.
The main limitations of this study are its retrospective nature and the lack of a control group. However, we wanted to share our experience with a procedure that is still under development and not widely performed, even among robotic urologists. Single-center reports are still necessary to increase knowledge about the feasibility and technical aspects of this novel technique. Only randomized trials comparing different treatments for large prostates will tell us where RSP really stands.
In conclusion, RASP with UVA is safe and feasible with low morbidity and excellent short-term functional results. | 2018-04-03T02:31:33.096Z | 2016-04-07T00:00:00.000 | {
"year": 2016,
"sha1": "7800a110f3ec9c769bfd036d34030b9ac043084a",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.prnil.2016.04.001",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7800a110f3ec9c769bfd036d34030b9ac043084a",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254695470 | pes2o/s2orc | v3-fos-license | Voice Cues Influence Children’s Assessment of Adults’ Occupational Competence
The adult voice is a strong bio-social marker for masculinity and femininity. In this study we investigated whether children make gender stereotypical judgments about adults’ occupational competence on the basis of their voice. Forty-eight 8- to 10- year olds were asked to rate the competence of adult voices that varied in vocal masculinity (by artificially manipulating voice pitch) and were randomly paired with 9 occupations (3 stereotypically male, 3 female, 3 gender-neutral). In line with gender stereotypes, children rated men as more competent for the male occupations and women as more competent for the female occupations. Moreover, children rated speakers of both sexes with feminine (high-pitched) voices as more competent for the female occupations. Finally, children rated men (but not women) with masculine (low-pitched) voices as more competent for stereotypically male occupations. Our results thus indicate that stereotypical voice-based judgments of occupational competence previously identified in adults are already present in children, and likely to affect how they consider adults and interact with them in their social environment.
Introduction
The human voice is one of the main sources providing first impressions of a speaker's identity, including biological sex. The perceived biological sex of an adult speaker from their voice is primarily defined by mean fundamental frequency (F0, perceived as voice pitch) and, to a lesser extent, by vocal tract resonances (or formants), which in men are on average 50% and 20% lower, respectively, than women's (Titze 1989; Gelfer and Mikos 2005). In addition to signaling sex, these voice patterns (e.g., relatively lower pitch and resonance in men's voices and relatively higher pitch and resonance in women's voices) influence listeners' attributions of gender, that is, the "roles, behaviors, activities, and attributes that any society considers appropriate for girls and boys, and women and men" (World Health Organisation 2020). For example, listeners judge men and women with low-frequency voices as physically bigger, stronger, more masculine, and more physically and socially dominant than those with relatively high-frequency voices (for a review, see Pisanski and Bryant 2019). These associations can be partly explained in evolutionary terms, as voice pitch, at least in males, is inversely related to testosterone (Cartei et al. 2020b; O'Connor et al. 2011), which in turn is positively associated with a host of physiological masculine characteristics, including physical strength and body size (Bhasin et al. 1996), as well as self-reported dominance (Puts et al. 2006). At the same time, listeners have a tendency to overgeneralize the sex dimorphism that characterizes the voice of adult speakers, resulting in sex-stereotype biases in judgement patterns. For instance, the perceived association between pitch and body size may lead to misattributions of physical strength in adults (Feinberg et al. 2005; Sell et al. 2010), and of sex in babies (e.g., low-pitched cries are more likely to be attributed to boys and high-pitched cries to girls, despite the absence of sex differences in the pitch of babies: Reby et al. 2016).
Although most extant research focuses on the impact of vocal masculinity and femininity on listeners' perceptions of speakers within intrasexual competition or mate choice contexts, a few studies have helped uncover the wider socio-economic implications of speaker attributions. Like masculine-looking men and women (Little 2014; Re and Rule 2017; Rule and Ambady 2009; Sczesny et al. 2006; Todorov et al. 2005), speakers with masculine (e.g., lower-pitched) voices are often considered to have positive personality attributes, including competence and leadership abilities. For instance, when asked to select political leaders, both men and women tend to select male and female leaders with more masculine (lower-pitched) voices and rate them as more competent than their higher-pitched counterparts (Klofstad et al. 2012, 2015). In addition, Tigue et al. (2012) showed that voices from political candidates with artificially lowered pitch were associated with perceptions of ability and skill more often than were their higher-pitched versions, independent of whether the content spoken was political or neutral. Similarly, research on the impact of voice pitch within the business context found that artificially lower-pitched voices of job candidates are associated with greater competence, regardless of applicant gender or résumé information (depicting either a stereotypically masculine or a stereotypically feminine applicant; Ko et al. 2009). Moreover, a lowered voice pitch from organizational spokespersons results in greater perceptions of competence and ability to restore organizational reputation compared to a raised voice pitch, particularly in times of crisis (Claeys and Cauberghe 2014).
While this research demonstrates that sex-related voice variation is sufficient to trigger stereotyping in adult listeners, an important theoretical question concerns whether auditory-based stereotyping of adults is already present in childhood, paralleling evidence on children's gender stereotyped judgments of adults based on body shape and facial appearance (Montepare and Zebrowitz-McArthur 1989;Pine 2001). Our study aims to bridge this gap by directly examining how voice variation in masculinity and femininity impacts children's occupational stereotyping of adults. An investigation of this nature will provide valuable insights into the role of vocal cues in the early origins of stereotyping, paving the way for developmental investigations of stereotyping from multiple angles. Moreover, given that children's prior expectancies of other people bias their interactions with them (Harris et al. 1992;Gurland and Grolnick 2003), voice-based judgments may also have an impact on how children would engage with adults, with practical implications for understanding and improving such interactions.
Our study focuses on occupational competence, given that perceived competence is a key dimension (alongside warmth) underlying person and group perception (for a review: Fiske et al. 2007). Although no research to date has directly examined how the voice impacts competence judgments of children, recent evidence suggests that children may be sensitive to sex-related variation in voice frequency, and that this variation influences their assessment of speakers' traits in gender-stereotypical ways. For instance, children are sensitive to vocal masculinity and femininity in the voices of their peers, as they match stereotypically masculine and feminine descriptors of a child character with corresponding masculinized or feminized voices (Cartei et al. 2019a). Moreover, a recent study using a voice imitation paradigm has shown that children conform to gender-stereotyped expectations by masculinizing and feminizing their voices for traditionally male and female occupations (Cartei et al. 2020a). The present work aims to extend this literature by investigating for the first time whether child listeners use variation in voice masculinity and femininity (by artificially lowering/raising voice pitch) to make gender stereotypical predictions about the occupational competence of adult speakers.
We chose to study 8- to 10-year-olds, as previous research has shown that from about age 8, children's range of stereotypes expands and the nature of the gender associations becomes more abstract and multi-dimensional. For instance, they are able to use gender-related variation in behavior and appearance in a stereotypical manner when making predictions of peers' future occupational career choices (Martin et al. 1990). Specifically, we hypothesize that children will assign higher competence to lower-pitched (more masculine) voices for stereotypically male occupations. Conversely, we expect that children will assign higher competence to higher-pitched (more feminine) voices for stereotypically female occupations. Finally, voices re-synthesized to a midline pitch should receive the highest ratings when paired with gender-neutral occupations.
Participants
Forty-eight children (20 female; mean age = 9.46 years, SD = 0.47, range: 8.6-10.4) took part in the study. The total sample size was based on a previous study of voice perception in child and adult listeners (Cartei et al. 2019a), which reported significant effects on gender-role stereotype ratings of variation in vocal masculinity and femininity in children's voices.
Children from UK Years 4 and 5 with no history of hearing impairments were prospectively recruited via school newsletters in two village primary schools, with the informed consent of the headteachers. Parents were given a written study information sheet explaining the purpose and protocol of the study (that children would be asked to guess how good a person was at their job after listening through headphones to some men and women in specific occupations as they said a series of sentences). Parents were encouraged to ask any questions by contacting the researchers and were provided with an opt-out form should they not want their child to participate in the study, but no objection was received. After parental consent, children were approached about the study on the day of the experiment. Researchers explained the main points of the consent/assent form verbally, adjusting the explanation to the child's age and comprehension level. Ethical approval was obtained from the University of Sussex Science and Technology Cross-Schools Research Ethics Committee (reference: ER/VC44/17).
Speaker Selection
Eight adult speakers of British English (4 women; mean age = 24, SD = 0.32, range: 21-27) were selected from a database of 26 adults (13 women) reading out loud the following three sentences: "hello, it is nice to meet you", "thank you for your help", "no, I do not want to go" (see "Appendix 1" for details on acoustic analysis). For each speaker, the three sentences were concatenated as a single voice stimulus with 50 ms silence between sentences, creating 5-s "thin slices" (Ambady and Rosenthal 1992) to minimize task fatigue while eliciting listeners' judgments (see Hughes and Harrison 2017; Tigue et al. 2012 for examples of "thin slices" in voice research). These speakers were selected to maximize the variance in apparent vocal tract lengths (aVTL) in our original sample, which was estimated from formants 1-4 (aVTL is inversely correlated with the averaged distance between adjacent formants as well as with absolute formant values: longer vocal tracts result in lower, more closely spaced formant frequencies, translating into a more resonant, or sonorous, voice; see "Appendix 1"). The selected male speakers had aVTLs of 15.4 cm, 16.2 cm, 16.7 cm and 17.5 cm; the selected female speakers had aVTLs of 14.2 cm, 14.7 cm, 15.0 cm and 15.5 cm.
Pitch Re-synthesis
From each original recording, we used the PSOLA algorithm in PRAAT 6.0.28 ("Change gender" command) to create three stimuli varying in pitch without altering other aspects of the sound. In one stimulus, mean F0 was altered to fit the mean F0s for the men and women in our original speaker database (mid F0), while in the other two stimuli, F0 was manipulated to be, respectively, 1 standard deviation (SD) lower (lowered F0) or higher (raised F0) than the mean values for men (mid F0: 115.2 ± 12.8 Hz) and women (mid F0: 204.4 ± 29.4 Hz) in our sample, following a similar procedure to Reby et al. (2016). Thus, the resulting F0 values for each of the selected male speakers were 102.4 Hz, 115.2 Hz, and 128.0 Hz, and for female speakers 175.0 Hz, 204.4 Hz, and 233.8 Hz.
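The same manipulation can also be scripted. Below is a minimal sketch using the Parselmouth Python bindings to Praat (an assumption on our part; the original stimuli were created with Praat's own "Change gender" command), which sets the new pitch median to the target F0 as an approximation of matching the mean:

```python
import parselmouth
from parselmouth.praat import call

# Sample means and SDs from the speaker database (Hz)
MALE_MEAN, MALE_SD = 115.2, 12.8
FEMALE_MEAN, FEMALE_SD = 204.4, 29.4

def resynthesize(wav_path, sex):
    """Create mid, lowered (-1 SD) and raised (+1 SD) F0 variants."""
    mean, sd = (MALE_MEAN, MALE_SD) if sex == "M" else (FEMALE_MEAN, FEMALE_SD)
    sound = parselmouth.Sound(wav_path)
    variants = {}
    for label, f0 in [("mid", mean), ("lowered", mean - sd), ("raised", mean + sd)]:
        # PSOLA-based "Change gender": pitch floor/ceiling, formant shift
        # ratio (1.0 = formants unchanged), new pitch median,
        # pitch range factor, duration factor
        variants[label] = call(sound, "Change gender", 75, 600, 1.0, f0, 1.0, 1.0)
        variants[label].save(f"{label}.wav", "WAV")
    return variants
```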
To confirm the perceived naturalness of the voice stimuli, we asked 10 listeners (5 men, 5 women) to rate the speakers' voices from the database and the 24 resynthesized versions (3 × 8 speakers) on a 7-point scale (1 = very unnatural, 2 = unnatural, 3 = somewhat unnatural, 4 = neither, 5 = somewhat natural, 6 = natural, 7 = very natural). One-way ANOVAs were run separately on the ratings of male and female speakers, treating the ratings from 1 to 7 as continuous. The within-subjects factor was stimulus type (four levels: original, raised, lowered, and mid resynthesized variants). Listeners' average scores for the original and resynthesized stimuli were above 6 ("natural"), and there was no significant difference between unmanipulated and resynthesized voices, female: F(3, 24) = 0.663, p > 0.05; male: F(3, 24) = 0.277, p > 0.05.
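A sketch of such a within-subjects ANOVA with statsmodels is shown below (column names are hypothetical; ratings are first averaged over the four speakers so that each listener contributes one value per stimulus type, as AnovaRM requires):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One row per listener x stimulus: 'listener', 'stimulus_type'
# (original/raised/lowered/mid) and 'rating' (1-7)
ratings = pd.read_csv("naturalness_ratings_female.csv")

cell_means = (ratings.groupby(["listener", "stimulus_type"], as_index=False)
                     ["rating"].mean())

res = AnovaRM(cell_means, depvar="rating", subject="listener",
              within=["stimulus_type"]).fit()
print(res)  # exact degrees of freedom depend on the design
```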
Procedure
Children sat individually in a quiet room at their school with the researcher. Voice stimuli were played back one at a time from a laptop through high-quality child-safe headphones (PURO Labs BT2200). For each voice, the experimenter read out loud the speaker's occupation, followed by a brief description of the occupation. Next, children listened to the speaker's voice and were asked to rate how good or bad they thought that person (children were told whether it was a man or a woman) was at their job on the basis of their voice. Children marked their answer by putting a cross on a paper-based, picture-aided Likert scale (1 = very bad, 2 = bad, 3 = not bad nor good, 4 = good, 5 = very good), with corresponding smiley faces ranging from "unhappy" to "happy" (see "Appendix 2"). We selected nine occupations: three stereotypically female (babysitter, beautician, nurse), three gender-neutral (doctor, student, writer), and three stereotypically male (builder, lorry driver, mechanic). Our choice of occupations for each of the three categories was guided by the Office of National Statistics (2019) and by findings from a questionnaire with UK children aged 6-10 on perceived occupational gender ratio and competence (Cartei et al. 2020a).
Each child rated all the voice stimuli in two successive blocks, one with all 12 male voice stimuli from the 4 male speakers, and one with all 12 female voice stimuli from the 4 female speakers (8 speakers × 3 pitch conditions × 1 out of 9 occupations randomized within each child, and counter-balanced between children). Children were told the speakers' sex for each stimulus, and the order in which the blocks were presented was alternated between participants to control for order effects. Before each block, children practiced the task twice by listening to a man's and woman's voice from the original database of 26 speakers, but not from the 8 selected speakers. This pre-test allowed the experimenter to make sure the child understood the task, as well as to adjust the playback volume to a comfortable level.
Statistical Analyses and Results
To investigate the effects of occupation type and F0 variant on children's ratings of men and women speakers, we ran two Linear Mixed Models (LMM) separately for the male and female speakers, with occupation type (male-typed, female-typed, gender-neutral), F0 variant (lowered F0, mid F0, raised F0), listener sex and their 2-way interactions as fixed factors. Apparent Vocal Tract Length (aVTL) and occupation (nested within occupation type) were random factors. Both LMMs also included listener identity as a random factor, with a separate intercept for each listener (Table 1). Pairwise comparisons (Bonferroni corrected) were used to detect significant differences between group means for significant main and interaction effects. Standard estimates of effect sizes (Cohen's d) are reported, with values of 0.2, 0.5, and 0.8 representing small, medium, and large effects (Cohen 1988).
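A simplified sketch of this model structure in Python with statsmodels is shown below. Note that statsmodels' MixedLM supports a single grouping factor, so the crossed random factors for aVTL and nested occupation used in the published analysis are omitted here, and all column names are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per rating: rating (1-5), occ_type, f0_variant,
# listener_sex, and listener identity
df = pd.read_csv("competence_ratings_male_speakers.csv")

model = smf.mixedlm(
    "rating ~ occ_type * f0_variant + occ_type * listener_sex"
    " + f0_variant * listener_sex",
    data=df,
    groups=df["listener"],  # random intercept per listener only
)
print(model.fit().summary())

def cohens_d(x, y):
    """Cohen's d with a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                      (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled
```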
Occupational Competence Ratings of Women Speakers
There was a significant main effect of occupation type on ratings of women speakers: across F0 variants, women were rated as slightly, but significantly, more competent for the gender-neutral occupations than for the female (d = 0.28, p < 0.05) or male occupations (d = 0.59, p < 0.05). Women were also rated significantly more competent for the stereotypically female occupations than for the male occupations, d = 0.31, p = 0.025 (see Fig. 1a). (Fig. 2 shows occupation type (female, neutral, male) by F0 variant (raised, mid, lowered) for women speakers.)
Occupational Competence Ratings of Men Speakers
There was a significant main effect of occupation type on ratings of men speakers: pairwise comparisons revealed that, across F0 variants, men were rated less competent for the female occupations than for the gender-neutral (d = 0.48, p < 0.05) and male occupations. There was also a significant interaction of occupation type and F0 variant (Fig. 3). When paired with the stereotypically female occupations, children rated men's lowered-pitch voices as significantly less competent (M = 2.2, SE = 0.15) than mid F0 (M = 2.9, SE = 0.15), d = 0.70, p < 0.05, or raised-pitch versions (M = 3.6, SE = 0.14), d = 1.3, p < 0.05, while the latter received higher competence ratings than mid-pitch voices, d = 0.77, p < 0.05. For the stereotypically male occupations, children rated men's lowered-pitch voices as significantly more competent (M = 4.2, SE = 0.14) than the mid-pitch (M = 3.4, SE = 0.14), d = 0.80, p < 0.05, and raised-pitch (M = 3.2, SE = 0.15) versions, d = 0.97, p < 0.05. For the gender-neutral occupations, no significant differences were found amongst F0 variants, all ps > 0.05.
Discussion
This is the first study to show that children make gender-stereotypical judgments of adult speakers on the basis of speakers' variation in vocal masculinity and femininity, complementing prior research that focused exclusively on adults. Specifically, in line with our predictions, we found that feminized voices received the highest ratings when paired with stereotypically female occupations, and the lowest ratings when paired with stereotypically male occupations. Also consistent with our predictions, masculinized voices received the lowest ratings when paired with stereotypically female occupations, and male (but not female) masculinized voices received the highest ratings when paired with stereotypically male occupations. Overall, our results show that variation in adults' vocal masculinity and femininity (manipulated by artificially lowering or raising mean voice pitch) affects children's ratings of speakers' occupational competence in gender-stereotypical ways, though ratings for stereotypically male occupations were also influenced by speakers' sex.
In terms of the overall pattern of results, the observed ratings are largely consistent with psychoacoustic studies with adult listeners, showing that (re-synthesized and natural) male voices with lower pitch are preferentially attributed stereotypically male characteristics, such as masculinity (Pisanski et al. 2012), physical and social dominance (Puts et al. 2007; Vukovic et al. 2011), authority (Sorokowski et al. 2019), and leadership (Klofstad et al. 2012; Tigue et al. 2012), though perceivers associated higher pitch more strongly with high- than with low-rank behaviors in at least one study (Ko et al. 2015). On the other hand, women with higher-pitched voices are known to be preferentially attributed stereotypically female characteristics, such as femininity (Röder et al. 2013), friendliness (Tsuji 2004; Ohara 1999), and submissiveness (Borkowska and Pawlowski 2011).
Although, as expected, our results show that feminized voices from speakers of both sexes received the highest competence ratings for stereotypically female jobs, psychoacoustic studies report that adult listeners rate lower-pitched individuals as more competent than higher-pitched individuals, both from recordings of speakers reading out loud vowels and sentences of gender-neutral content (Krahé and Papakonstantinou 2020; Oleszkiewicz et al. 2016) and from politically relevant recordings (e.g., ratings of hypothetical political candidates: Klofstad et al. 2012). However, none of these studies asked listeners to make judgments in the context of female-typed occupations, whereas our study did. Because professions that are dominated by women tend to be stereotyped as more feminine and as requiring more "female-like" traits (e.g., warmth: Eagly and Carli 2003; friendliness: Wharton 1999; helpfulness and cooperation: Cejka and Eagly 1999), competence in these jobs is likely to be judged on these traits, which may drive the higher competence ratings for the higher-pitched voices observed in the present study. While the present study did not directly assess whether high-pitched voices triggered these types of inferences, in partial support of this hypothesis, Oleszkiewicz and colleagues (2016) report that adult listeners make positive associations between high pitch and warmth in women's voices (though not in men's). Also, Halper and Stopeck (2019) report that perceptions of warmth primarily drive the relationship between job candidate gender and both likeability and job hireability for female-dominated domains such as the caregiving professions.

Both speakers' biological characteristics and listeners' socialization processes may contribute to the observed overall pattern of results. Lower-pitched male voices positively correlate with salivary testosterone levels in childhood and adulthood (Cartei et al. 2014, 2020b), and testosterone is a primary driver of physiological masculine features, such as increased muscle size and strength (Bhasin et al. 1996) and physical fitness (Fink et al. 2006; Manning and Taylor 2001), which are valued traits in physically demanding jobs that are male-dominated (Colker 1985). As well as negatively correlating with testosterone, higher-pitched voices in men are preferred by women seeking greater perceived parental and relationship investment (Apicella and Feinberg 2009). Moreover, higher-pitched voices in women positively correlate with levels of estrogen, which is positively linked to maternal behavior in numerous species, including rats, mice, sheep, and possibly non-human primates (Bridges 2015). Thus, a high voice pitch may advertise greater actual or perceived propensity for nurturing and care-taking roles, which are stereotypically seen as women's jobs (Guy and Newman 2004). While the observed ratings may partially reflect children's sensitivity to voice cues underlying qualities of speakers, many such attributions are nowadays irrelevant to job competence. For instance, there is considerable overlap in men's and women's physical strength, and many heavy manual jobs are now machine-operated, which means that many women are physically capable of doing such work (Ness 2012).
Moreover, the idea that voice pitch is a reliable cue to biosocial dimensions fails to account for the fact that children and adults typically develop stereotypic views and prejudices concerning groups that are unjustified (and thus uncorrelated with any observable traits or behaviors; e.g., Bereczkei and Mesko 2006; Bigler and Liben 2007; Zebrowitz 1996). Specifically, socialization research has shown that, consistent with the general principle of correspondence bias (Gilbert and Malone 1995), individuals tend to ascribe gender-stereotypic attributes to job holders that are in line with occupational sex ratios, even if those attributes are irrelevant to those jobs (Cejka and Eagly 1999). Given that sex segregation is still a predominant feature of many jobs (Office of National Statistics 2019), the observed ratings could emerge from children's observations of the vocal characteristics of the sex that is numerically dominant in the occupation (males' voices being, on average, lower-pitched than females'), even if those correspondences are irrelevant to competence.
An additional possibility for children's higher ratings of feminized voices in female-typed roles is based on children's prior experience. From infancy, children learn to associate higher-pitched voices with relational and affective skills, which are important in many stereotypically female occupations, including the ones in the present study (Guy and Newman 2004). Indeed, raised pitch appears to communicate caregivers' affect and intentions nonverbally, and caregivers routinely increase their pitch when speaking to children as opposed to adults (Broesch and Bryant 2015; Grieser and Kuhl 1988). For instance, when mothers speak with a heightened pitch (and expanded melodic contours), they are better able to elicit and maintain infant attention, independent of what they are saying (Papoušek et al. 1990). High pitch is also common in caregivers' speech when conveying emotional information to children compared to speaking to adults (Kitamura and Burnham 2003).
Contrary to our hypothesis, we also found that women's masculinized voices were not rated as more competent than the mid F0 variant for the masculine occupations. Specifically, to the extent that F0 cues physiological masculinity in women (e.g., decreased estrogen and lower fertility: Bryant and Haselton 2009; Prelevic 2013; but not testosterone: Dabbs and Mallinger 1999), more masculine female voices were expected to be rated as more competent in male jobs, but this is not what we observed. An alternative explanation for our findings is that children's competence ratings of low-pitched women's voices resulted from a (conscious or unconscious) compromise between perceived masculinity and an overall preference for high-pitched voices in females. Previous research with adult listeners indicates that, while low-pitched voices in both men and women are perceived as more masculine (Krahé and Papakonstantinou 2020), and are preferred over high-pitched voices in male speakers, they are not preferred over high-pitched voices in female speakers (Tsantani et al. 2016). In fact, women speaking with lower-pitched voices are rated as less vocally attractive (Feinberg et al. 2008) and as having fewer favorable personality traits than higher-pitched women (e.g., Scherer 1974, 1978). Lending support to this argument, a recent study looking at job hiring preferences (Phelan et al. 2008) found that fictitious female job applicants with masculine traits were judged by adult raters as more competent, but lacking in social skills, compared to applicants with feminine traits, while no such bias was found for male applicants.
Although variation in voice pitch within the two sexes influenced children's ratings stereotypically, children rated men as significantly more competent than women in male jobs and less competent than women in female jobs, regardless of our pitch manipulations. These results suggest that speaker gender may be a stronger contributor to stereotyping than vocal variation in masculinity and femininity. It is also possible that this effect was heightened by our paradigm, given that children knew in advance the sex of the speaker and rated all speakers of the same sex in one block. Indeed, hiring bias research demonstrates that when occupational assessors are told the sex of hypothetical job candidates, stereotype-congruent associations (e.g., female/male applicants being considered for stereotypically female/male jobs) are given more favorable evaluations than stereotype-incongruent associations (e.g., female/male applicants being considered for stereotypically male/female jobs), even when applicants are equally qualified (Rice and Barth 2016).
In summary, our study shows that children use within-sex variation in vocal masculinity and femininity when making gender-stereotypical judgments of adults, as previously found in judgments of other children (Cartei et al. 2019a). Our findings also complement those of a recent voice imitation study, which showed that children link vocal masculinity/femininity to stereotypically male/female occupations (Cartei et al. 2020a), by showing that gender-linked variation influences beliefs about competence. Together these observations highlight the fact that the voice is an important aspect of children's gender stereotyping and indicate that it can easily be used as a versatile, implicit measure of such stereotyping, through voice perception or production tasks.
To further trace the developmental trajectory of children's occupational stereotyping (stereotype flexibility and stereotype knowledge), the present paradigm could be used with a wider range of occupations and ratings of relevant traits other than competence (e.g., dominance, friendliness). It could also be extended to younger children and adolescents to assess the degree to which voice stereotypes correlate with a child's classification skills, knowledge about job requirements, and gender stereotype flexibility, all of which develop with age (Liben et al. 2002). Moreover, cross-cultural comparisons with our study should establish the extent to which our findings can be generalized to diverse cultural contexts, outside that of Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies (Henrich et al. 2010). Our paradigm could also be used in conjunction with interindividual measures, to investigate how individual differences in children's occupational stereotyping may emerge. For instance, differences in exposure to the division of labor in the family (Serbin et al. 1993; Fulcher et al. 2008) and on television (O'Bryant et al. 1978) both affect children's occupational stereotyping. It would be interesting to know if and how the patterns observed in the present work would be subject to this kind of environmental influence.
Finally, given that children use gender-related voice variation to make judgments about adults in occupations, an important next step would be to explore the relative contributions of these judgments to child-adult interpersonal processes. Specifically, future studies could explore whether voice masculinity and femininity affect children's interactions with men and women in these roles, by using confederates and recording children's behavioral responses during and after the interactions (e.g., asking children if they felt more comfortable being treated by a nurse with a feminine rather than a masculine voice).
Appendix 1
The original database included 26 adult speakers (aged 18-35, 13 women). Each speaker was recorded while reading out the sentences: "hello, it is nice to meet you", "thank you for your help", and "no, I do not want to go". These sentences were chosen because they were gender-neutral in content, familiar, relatively short, and grammatically simple for adults to say and for children to understand, and because they included the main vowels of British English. For each speaker, the three sentences were scaled at 60 dB and then concatenated in the order presented above, with 50 ms silence in between. Acoustic measurements for each speaker were taken from the entire sequence using PRAAT software (version 6.0.28 on Mac, Boersma and Weenink 2019). Pitch values were obtained using PRAAT's pitch-tracking function with a range setting of 75-300 Hz for males and 100-500 Hz for females (Boersma and Weenink 2019). Formant values were obtained using PRAAT's formant-tracking function, setting the maximum formant to 5000 Hz for males and 5500 Hz for females, and the number of formants to 5. Averaged across male speakers, mean F0 was 115.2 Hz (SD = 12.8 Hz, range: 98-144 Hz) and mean ΔF was 1061.4 Hz (SD = 51.9 Hz, range: 999-1138 Hz), corresponding to an apparent Vocal Tract Length of 16.42 cm (SD = 0.8 cm, range: 15.4-17.5 cm). Averaged across female speakers, mean F0 was 204.4 Hz (SD = 29.4 Hz, range: 171-274 Hz) and mean ΔF was 1192.1 Hz (SD = 30.3 Hz, range: 1131-1229 Hz), corresponding to an apparent Vocal Tract Length of 14.46 cm (SD = 0.4 cm, range: 14.2-15.5 cm). These values are in line with those of previous samples of speakers of British and American English (e.g., Bachorowski and Owren 1999; Rendall et al. 2005; Cartei et al. 2012). Mean formant spacing (ΔF) and apparent Vocal Tract Length (aVTL), its inverse acoustic correlate measured in cm, were computed from the mean centre frequencies of F1-F4, using the method described by Reby and McComb (2003). Given that the vocal tract can be approximated as a straight uniform tube that is closed at one end and open at the other (Titze 1989), formant frequencies are related to vocal tract length (VTL) by the following equation:

F_i = (2i − 1)c / (4 · VTL)    (1)

where i is the formant number, c is the speed of sound in a mammal vocal tract (350 m/s), VTL is the vocal tract length, and F_i is the frequency of the ith formant. Formant spacing can be defined as the spacing between any two successive formants, ΔF = F_{i+1} − F_i. Thus, from (1), it follows that:

ΔF = c / (2 · VTL)    (2)

And thus F_i can also be expressed as:

F_i = ((2i − 1) / 2) · ΔF    (3)

We can therefore estimate ΔF and aVTL by seeking the best fit for Eq. (3).
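Because Eq. (3) is a no-intercept linear regression of F_i on (2i − 1)/2, the best fit has a simple closed form. A minimal sketch (with illustrative, hypothetical formant values) is:

```python
import numpy as np

C = 35000.0  # speed of sound in the vocal tract, cm/s (350 m/s)

def estimate_dF_and_aVTL(formants_hz):
    """Least-squares fit of Eq. (3): F_i = ((2i - 1) / 2) * dF.

    formants_hz: mean centre frequencies of F1-F4 for one speaker.
    Returns (dF in Hz, apparent vocal tract length in cm).
    """
    F = np.asarray(formants_hz, dtype=float)
    i = np.arange(1, len(F) + 1)
    x = (2 * i - 1) / 2.0                 # regressor, no intercept
    dF = np.sum(x * F) / np.sum(x * x)    # closed-form least squares
    aVTL = C / (2.0 * dF)                 # from Eq. (2)
    return dF, aVTL

# Hypothetical male formant values (Hz):
dF, aVTL = estimate_dF_and_aVTL([530, 1600, 2650, 3700])
# dF ~ 1059 Hz, aVTL ~ 16.5 cm, in the range reported for the male speakers
```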
Author Contributions All authors contributed to the study design. Testing, data collection, analysis, and interpretation were performed by V. Cartei. V. Cartei drafted the manuscript, and the other authors provided critical revisions. All authors approved the final version of the manuscript for submission.
Funding This study was funded by the Leverhulme Trust (Grant Number: RPG-2016-396).
Availability of data and material
The original dataset can be found at: 10.25377/sussex.12136686. | 2022-12-16T14:57:10.340Z | 2021-02-02T00:00:00.000 | {
"year": 2021,
"sha1": "15e95eafa9f55f349150afd8ef941a052d254a6d",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10919-020-00354-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "15e95eafa9f55f349150afd8ef941a052d254a6d",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
53089124 | pes2o/s2orc | v3-fos-license | MYC-Induced miR-203b-3p and miR-203a-3p Control Bcl-xL Expression and Paclitaxel Sensitivity in Tumor Cells
Taxanes are chemotherapeutic agents used in the treatment of solid tumors, particularly of breast, ovarian, and lung origin. However, patients show divergent therapy responses, and the molecular determinants of taxane sensitivity have remained elusive. Especially the signaling pathways that promote death of the taxane-treated cells are poorly characterized. Here we describe a novel part of a signaling route in which c-Myc enhances paclitaxel sensitivity through upregulation of miR-203b-3p and miR-203a-3p, two clustered microRNAs that control the antiapoptotic protein Bcl-xL. In vitro, miR-203b-3p decreases the expression of Bcl-xL by direct targeting of the gene's mRNA 3'UTR. Notably, overexpression of miR-203b-3p changed the fate of paclitaxel-treated breast and ovarian cancer cells from mitotic slippage to cell death. In breast tumors, high expression of miR-203b-3p and MYC was associated with better therapy response and patient survival. Interestingly, in the breast tumors, MYC expression correlated negatively with BCL2L1 expression but positively with miR-203b-3p and miR-203a-3p. Finally, silencing of MYC suppressed the transcription of both miRNAs in breast tumor cells. Pending further validation, these results may assist in patient stratification for taxane therapy.
Introduction
Taxanes are chemotherapeutic agents that disturb microtubule-dependent processes, such as cell division, by altering microtubule dynamics. These drugs are widely used in the treatment of ovarian, breast, and lung cancer [1][2][3]. Taxanes, such as paclitaxel, have proved to be effective in the clinical setting, but like many other chemotherapy compounds, they are nonspecific cytotoxins that affect all cells in the body. Also, the molecular determinants of paclitaxel sensitivity in tumor cells have remained elusive [4]. Together, these characteristics result in adverse effects and variable treatment outcomes for the patients. For example, up to 70% of patients with high-grade ovarian tumors treated with a platinum-taxane combination relapse in a median of 15 months despite their initial treatment response [3]. Thus, there is a need for biomarkers that could help to predict the sensitivity of tumors to paclitaxel therapy.
When high concentrations of paclitaxel are applied to cultured cancer cells, the mitotic spindle assembly is disrupted, which activates the spindle assembly checkpoint, causing a mitotic arrest [5,6]. The cells either die at the mitotic block or return to interphase without cell division, an event referred to as "exit" or "slippage". Cells that abnormally exit mitosis can undergo post-mitotic death (PMD), arrest in interphase or G0, or continue cycling [7,8]. A competition between the cyclin B-dependent mitotic exit network and increasing proapoptosis signaling determines a cancer cell's response to paclitaxel treatment. The mitotic exit network is a well-established cascade, but much less is known about the regulation of cell death during mitotic arrest and after slippage [8]. The PMD has potential clinical relevance since intratumoral paclitaxel concentrations may not be high enough to adequately activate the spindle assembly checkpoint in tumor cells but can instead allow slippage from mitosis accompanied by chromosome mis-segregation [9].
The intrinsic mitochondrial apoptosis pathway, consisting of effector proteins as well as pro- and antiapoptotic regulator proteins [10], has been suggested to be the main mediator of paclitaxel-induced death [11,12]. In addition to the master regulator of this cell death pathway, c-Myc (MYC) [12,13], the antiapoptotic Bcl-2 family member Bcl-xL (BCL2L1) is conceivably one of the key determinants of taxane sensitivity in cancer cell and xenograft models [14][15][16]. High Bcl-xL expression has indeed been associated with paclitaxel resistance in solid tumors [17,18]. However, not much is known about the regulation of Bcl-xL expression and activity. Topham and coworkers [12], Eischen et al. [19], and Maclean et al. [20] have shown that c-Myc suppresses Bcl-xL expression, but the molecular mechanism remains unclear.
MicroRNAs (miRNA) belong to the family of small regulatory RNAs, and they control the expression of most human genes at the posttranscriptional level [21]. miRNAs may also possess diagnostic value as tumor biomarkers [22]. In this study, we aimed to identify novel BCL2L1-regulating miRNA(s) whose altered expression would modulate cancer cells' survival after paclitaxel treatment in vitro and in vivo. We present the first evidence that miR-203b-3p and miR-203a-3p are among the c-Myc-regulated elements that control the expression of Bcl-xL and thereby influence tumor cells' sensitivity to paclitaxel therapy.
Transient Transfection of miRNAs and siRNAs
miRIDIAN miRNA mimics and the MYC siRNA were purchased from GE Dharmacon (Lafayette, CO) and used at a 50-nM concentration. HiPerFect transfection reagent (Qiagen, Valencia, CA) was used to transiently transfect cells with miRNA mimics and siRNAs, and Lipofectamine 3000 (Invitrogen, Thermo Fisher Scientific, Waltham, MA) was used for co-transfecting oligonucleotides and plasmids.
Live-Cell Imaging
To study the effect of miRNAs on paclitaxel sensitivity, we transfected the cells with miRNA mimics and, 28 to 29 hours later, added 10 nM paclitaxel (Sigma-Aldrich) to the culture medium. Imaging with the Incucyte live-cell imaging device (Essen Instruments Ltd., Hertfordshire, UK) was started immediately after the drug was supplemented, and the filming continued for 48 to 72 hours at a 30-minute image capture interval. The cell fate profiles were determined by visual inspection of the phase-contrast image sequences [8,12,24]. Briefly, death in mitosis (DiM) was determined as death during the drug-induced mitotic arrest based on morphological changes; rounded mitotic cells started surface blebbing, shrank, and disaggregated. PMD was determined as death of a post-mitotic interphase (flat) cell in G1, S, or G2 phase; cells that had exited prolonged mitotic arrest (changed from a round cell morphology to a flat morphology) started intense surface blebbing, shrank, and often disaggregated into a number of membrane-bound particles.
CellTiter-Glo and Luciferase Reporter Assays
Cell viability was measured with the CellTiter-Glo Luminescent Assay (Promega, Madison, WI) and EnSight Multimode Plate Reader (PerkinElmer, Waltham, MA) according to the manufacturer's instructions. The luciferase reporter assays were performed in MDA-MB-231 SA or 293T cells according to the previously described procedure [25]. The MIR203A-pGL3 construct contained bp 13 to 1183 upstream of MIR203A, as described before [26].
RNA Isolation and miRNA qPCR
For quantitative miRNA PCR, samples were collected 72 hours after transfection and the paclitaxel-treated samples after an overnight drug treatment. The relative expression of mature miRNAs was measured as described previously [25].
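The exact quantification procedure is given in reference [25]; assuming the widely used 2^-ΔΔCt (Livak) method (an assumption on our part), relative miRNA expression would be computed as:

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method.

    ct_*: mean threshold-cycle values for the miRNA of interest
    (target) and a reference small RNA, in treated vs. control samples.
    """
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# e.g. fold_change_ddct(24.1, 18.0, 22.5, 18.1) -> ~0.31 (downregulation)
```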
Clinical Data Analysis
TCGA Cohorts. The TCGA repository was used to study patient survival and miRNA expression (level 3), obtained with Illumina sequencing [28], in 1172 breast cancer cases. The information regarding The Cancer Genome Atlas (TCGA) ovarian cancer cohort data has been previously described [29].
Bergen Cohort. The cohort has been described in detail previously [30,31]. For a description of the sample collection, miRNA profiling, and data analysis, please refer to [29], with the following refinements: demultiplexing was performed with the Illumina CASAVA software, sequence quality was analyzed with FastQC, and samples with less than 300,000 reads were excluded, as well as miRNAs with less than 20 sequencing reads in 25% or more of the samples. In total, the miRNA profiling included 466 mature human miRNAs in 200 patients. miRNA read counts were transformed and normalized using the rlog function of the DESeq2 R package [32]. From the same tumor samples, the mRNA levels were assayed using the Illumina HT-12 cDNA microarray platform. Raw intensities were processed by quantile normalization based on a set of high-quality probes, and batch correction was performed to adjust for differences between runs [33,34]. Data from cDNA microarray and miRNA sequencing were available for 190 patients (epirubicin: n = 85; paclitaxel: n = 105).
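The stated read-count filters can be expressed compactly in pandas; in the sketch below (file name hypothetical), a simple log-CPM transform stands in for DESeq2's rlog, which is available only in R:

```python
import numpy as np
import pandas as pd

# Raw read-count matrix: rows = miRNAs, columns = samples
counts = pd.read_csv("mirna_counts.csv", index_col=0)

# Exclude samples with fewer than 300,000 total reads
counts = counts.loc[:, counts.sum(axis=0) >= 300_000]

# Exclude miRNAs with < 20 reads in 25% or more of the remaining samples
low_fraction = (counts < 20).mean(axis=1)
counts = counts.loc[low_fraction < 0.25]

# Rough stand-in for DESeq2's rlog: library-size-normalised log2 CPM
cpm = counts / counts.sum(axis=0) * 1e6
log_expr = np.log2(cpm + 1)
```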
Statistical Analysis
Paired, two-tailed Student's t test was used to perform statistical analyses for the in vitro assays. For comparing differences in gene and miRNA expression between two or more groups in the cohort datasets, t test and ANOVA were applied, respectively. For assessment of associations between survival times and single categorical variables, log-rank tests were performed. Statistical significance was defined as P ≤ .05 (*), P ≤ .01 (**), and P ≤ .001 (***). Values are presented as the average ± standard deviation (S.D.).
miR-203b-3p-induced Modulation of Cell Fate upon Paclitaxel Treatment
To identify novel miRNAs that cause a change in the cell fate upon paclitaxel treatment, we searched for potential candidates among the top 50 miRNAs that are predicted to target BCL2L1 (Table S1); BCL2L1 encodes a component of the mitochondrial apoptosis pathway [10] and a key regulator of taxane-induced cancer cell death [12,[14][15][16]24]. Among these candidates, we found four miRNAs (miR-342-3p, miR-203b-3p, miR-505-5p, miR-361-3p) that potentially regulate Bcl-xL and paclitaxel sensitivity in cancer cells. These miRNAs were noted to negatively correlate with BCL2L1 levels and positively associate with taxane response in the NCI cell line database (Table S2). Moreover, low expression of these four miRNAs correlated with reduced patient survival in at least one of the three studied breast and ovarian cancer cohorts (Figure S1). The steps of the candidate miRNA filtering are presented in more detail in Supplementary Figure 1.
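As an illustration of this shortlisting logic, a sketch of the correlation-based filtering across a cell-line panel is shown below (file and column names are hypothetical; the actual criteria are detailed in Supplementary Figure 1):

```python
import pandas as pd
from scipy.stats import spearmanr

# Per-cell-line tables assumed to share the same cell-line columns/order
mirna = pd.read_csv("nci_mirna.csv", index_col=0)             # miRNAs x lines
bcl2l1 = pd.read_csv("nci_mrna.csv", index_col=0).loc["BCL2L1"]
taxane = pd.read_csv("nci_taxane_response.csv", index_col=0)["response"]

candidates = ["miR-342-3p", "miR-203b-3p", "miR-505-5p", "miR-361-3p"]
# (in practice, the top 50 predicted BCL2L1-targeting miRNAs)

shortlist = []
for mir in candidates:
    expr = mirna.loc[mir]
    r_bcl, _ = spearmanr(expr, bcl2l1)
    r_tax, _ = spearmanr(expr, taxane)
    if r_bcl < 0 and r_tax > 0:  # negative with BCL2L1, positive with response
        shortlist.append((mir, r_bcl, r_tax))
```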
Next, the four candidate miRNAs were applied in live-cell imaging analysis, where the fate of miRNA mimic-transfected MDA-MB-231 SA breast cancer cells and OVCAR-8 and CaOV-3 ovarian cancer cells was determined upon treatment with a clinically relevant paclitaxel dose (10 nM) [9]. Out of the tested four miRNAs, only miR-203b-3p considerably elevated the rate of DiM in all cell lines (Figure 1, A-B, Figure S2): on average by 17.3% (+/−3.5%, P = .01), 14.0% (+/−6.9%), and 15.3% (+/−5.3%, P = .05) in MDA-MB-231 SA, OVCAR-8, and CaOV-3, respectively. The miR-203b-3p was also confirmed to elevate DiM to a similar extent in the hormone receptor-positive breast cancer cell line MCF-7, on average by 8.7% (+/−4.2%, P = .04; Figure 1, A-B). The intrinsic rate of DiM varied between the cell lines, with the highest frequency in OVCAR-8. However, this did not correlate with the cell death-promoting potency of the miR-203b-3p. Death after slippage (PMD) was significantly increased by excess miR-203b-3p only in the breast cancer cell lines, on average by 14.0% (+/−5.0%, P = .03) in MDA-MB-231 and by 12.0% (+/−9.2%, P = .01) in MCF-7 (Figure 1, A-B). The inherent PMD frequency was lower in the breast cancer cell lines compared to the ovarian cancer cell lines, which may partially explain the observed difference in the miR-203b-3p effect. In MDA-MB-231 SA cells, miR-203b-3p also advanced the timing of DiM and PMD, on average by 3.3 hours (+/−0.7 hour) and 2.6 hours (+/−1.8 hours, P = .03), respectively (Figure 1, C-D). The onset of DiM was intrinsically slowest in the MDA-MB-231 SA cells, but on the other hand, miR-203b-3p did not accelerate PMD in CaOV-3 cells, although the timing of PMD was longest in these ovarian cancer cells (Figure 1, C-D).
In conclusion, excess of miR-203b-3p alters the cell fate upon paclitaxel treatment; the breast and ovarian cancer cells die in M-phase at a higher rate rather than slipping out of mitosis, and a higher fraction of the slipped cells undergo PMD.
Improved Cancer Cell Death and Patient Treatment Response by High Levels of miR-203b-3p
In the further phenotypic studies, we focused on the representative breast and ovarian cancer cell lines MDA-MB-231 SA and CaOV-3, as the miR-203b-3p effect on paclitaxel-induced cell death, especially DiM, was most notable in these cells. First, to strengthen our findings from live-cell imaging assays, we assessed cell viability and the expression levels of cell death markers at cell population scale in miR-203b-3p-overexpressing MDA-MB-231 SA and CaOV-3 cell lines after 48-hour paclitaxel treatment. Indeed, the expression of cleaved PARP and cleaved caspase 3 proteins was increased 1.3- to 3.3-fold in the miR-203b-3p-overexpressing cell populations compared to the controls (Figure 2A).

Since high expression of miR-203b-3p promotes DiM instead of mitotic slippage in vitro in breast and ovarian carcinoma cells treated with a clinically relevant paclitaxel concentration [9], we speculated that the same may also occur in tumor cells. To get insights into this notion, we retrospectively analyzed patient survival in relation to miR-203b-3p expression in a TCGA breast cancer dataset (n = 1172). The results show a small but significant difference linking improved disease-specific patient survival with high miR-203b-3p levels (P = .008, Figure 2C). We did not find a similar association between the miR-203b-3p expression and patient survival in a smaller breast cancer cohort (Bergen, Figure 2D; for the PTX and EPI arms, Figure S4, A-B). However, in the Bergen cohort, where the patients were randomized to paclitaxel or epirubicin therapy, we did observe a slightly higher miR-203b-3p expression in patient groups that had responded well to paclitaxel therapy (Figure 2E and Figure S4C). In the epirubicin group, miR-203b-3p expression did not show a similar pattern. On the contrary, the mean miR-203b-3p level was highest in the patients with the poorest response to epirubicin and, thus, progressive disease (P = .01, Figure 2F and S4D).
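A sketch of this type of survival comparison with the lifelines package is shown below (column names are hypothetical, and the median split is an assumption, as the cutoff used for "high" miR-203b-3p expression is not stated here):

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical columns: follow-up 'time', 'event' indicator,
# and normalised 'mir203b' expression per tumor
df = pd.read_csv("tcga_brca_survival_mirna.csv")
high = df["mir203b"] >= df["mir203b"].median()

kmf = KaplanMeierFitter()
for label, mask in [("high miR-203b-3p", high), ("low miR-203b-3p", ~high)]:
    kmf.fit(df.loc[mask, "time"], df.loc[mask, "event"], label=label)

res = logrank_test(df.loc[high, "time"], df.loc[~high, "time"],
                   df.loc[high, "event"], df.loc[~high, "event"])
print(res.p_value)
```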
Direct Suppression of the Antiapoptotic BCL2L1
To test whether miR-203b-3p binds to its predicted target gene BCL2L1, we utilized the luciferase reporter assay. Excess of miR-203b-3p significantly suppressed the luciferase activity produced by a BCL2L1 3'UTR-luciferase construct when compared to miR-control (0.78 +/−0.13, P = .04, Figure 3B). Importantly, the binding of miR-203b-3p to BCL2L1 led to a slight but consistent decrease in Bcl-xL protein expression (Figure 3C). Supporting the results from cultured cancer cells, miR-203b-3p and BCL2L1 exhibited a modest but significant negative correlation in the breast tumors from the Bergen cohort (−0.15, P = .04, Figure 3D). The data support our notion that miR-203b-3p may enhance cell death upon paclitaxel treatment by contributing to the suppression of the antiapoptotic Bcl-xL via direct association with the BCL2L1 3'UTR region.
Association of MYC with Bcl-xL, miR-203b-3p, and miR-203a-3p in Breast Tumors
Several groups have demonstrated a negative regulation of Bcl-xL by the oncogene c-Myc, both with in vitro and in vivo models [12,19,20]. Here, we retrospectively studied the association of these two genes and miR-203b-3p in the Bergen breast cancer cohort. In line with the previous studies, patients with high MYC expression had significantly lower BCL2L1 levels (P = .02, Figure S5A); MYC and BCL2L1 also exhibited a direct negative correlation in the breast tumors (−0.26, P < .001, Table S3) that arose almost solely from the paclitaxel arm samples (−0.46, P < .001; Figure 4A).
Another interesting finding from the breast tumors was the positive correlation of MYC and miR-203b-3p (Table S3), especially in the paclitaxel therapy group (0.15, Figure 4B). c-Myc is known to control the expression of several miRNAs [35], suggesting a possible transcriptional induction of miR-203b-3p by c-Myc in the breast tumors. Moreover, we observed a similar positive correlation (0.17, Figure 4C) in the paclitaxel therapy group between MYC and miR-203a-3p, which resides in a genomic cluster with miR-203b. The distinct intercorrelation of miR-203b-3p and miR-203a-3p levels in the breast tumors (0.69, P < .001, Figure 4D) also strongly implies that these clustered miRNAs share a transcriptional regulator(s). Since miR-203a-3p has also been reported to suppress Bcl-xL expression [36], albeit indirectly, these findings raise the possibility that both miR-203a and miR-203b act as links in the c-Myc/Bcl-xL regulatory axis, the activity of which is associated with the efficacy of paclitaxel treatment.
Regulation of Paclitaxel Sensitivity by c-Myc, Potentially via miR-203s and Bcl-xL
To have molecular-level evidence in support of our findings in breast tumors, we explored the c-Myc/miR-203/Bcl-xL axis in cultured cancer cell models. First, the positive correlation of MYC with miR-203a-3p and miR-203b-3p in vivo prompted us to measure the expression of the mature miRNAs in c-Myc-depleted CaOV-3 cells (Figure S5B), which exhibit the highest endogenous levels of these miRNAs among the cell lines used in this study. The results indicated remarkable decreases of miR-203a-3p and miR-203b-3p levels upon MYC silencing and culture in the presence of 10 nM paclitaxel (39.2% +/−29.7% and 65.3% +/−20.2%, P = .03, respectively) (Figure 5A and Figure S5C). This result is in line with the positive MYC-miRNA correlations observed in vivo. Supporting the hypothesis that c-Myc regulates MIR203 expression via transcription, c-Myc ChIP data from the ENCODE project [37] demonstrated an enrichment of c-Myc immediately upstream of the MIR203 locus (Figure 5B). This genomic site was also positive for the H3K4me histone modification, a marker of active promoters. As experimental proof that c-Myc regulates MIR203 transcription, the activity of the MIR203 promoter (Figure 5B) was diminished by 55.1% (+/−2.9%, P = .001) in c-Myc-silenced cells in comparison to controls (Figure 5C). Finally, we confirmed the finding of Topham et al. [12] that high MYC expression is a favorable prognostic factor for the survival of breast cancer patients, in a larger dataset (Figure S5D). Moreover, the patient treatment annotations in the analyzed cohort revealed that MYC levels are associated with patient survival only upon paclitaxel treatment but not in the epirubicin-treated patients (Figure 5, D-E).
Discussion
The molecular determinants of tumor cells' response to taxanes have remained poorly understood [4]. Here, we show for the first time that excess miR-203b-3p promotes cell death instead of mitotic slippage in breast and ovarian cancer cells cultured in the presence of a low, clinically relevant paclitaxel dose. Moreover, high levels of miR-203b-3p associate with better survival of breast cancer patients. The sensitization to paclitaxel by miR-203b-3p is, at least partially, achieved through direct suppression of the antiapoptotic Bcl-xL protein. Thus, these novel findings reinforce previous perceptions about the importance of Bcl-xL in the regulation of paclitaxel sensitivity [14][15][16][17][18] and demonstrate the potential of specific miRNAs as predictors of drug response in cancer patients.
c-Myc is often deregulated in tumors and has a peculiar dual role in cancer: it induces cell proliferation but, on the other hand, also promotes apoptosis by controlling the mitochondrial apoptosis pathway [12,13,38]. Consistent with its proapoptotic function, high c-Myc levels have been shown to sensitize cultured cancer cells to anti-mitotic cancer therapeutics and to associate with better treatment response in breast cancer patients [12]. We also observed a significant correlation between high MYC expression and superior survival of paclitaxel-treated breast cancer patients. The lack of this correlation in the epirubicin therapy arm indirectly implies that c-Myc may serve as a prognostic factor specifically for anti-mitotic cancer therapeutics. Moreover, our findings propose a novel mechanism (Figure 6) for the c-Myc-mediated sensitization to paclitaxel, which may, at least partially, depend on the transcriptional induction of two Bcl-xL-regulating miRNAs, miR-203b-3p and miR-203a-3p [36].
Specific miRNAs, such as miR-203b-3p and miR-203a-3p, can be contributing factors in the previously unidentified link between c-Myc and Bcl-xL, as is supported by the following notions. First, the reported suppression of both mRNA and protein levels of Bcl-xL by c-Myc suggests regulation at the transcriptional level, while the requirement for active protein synthesis proposes that the mechanism is indirect [19,20]. Indeed, miRNAs can regulate the expression of their target genes by both degrading the target gene mRNA and/or inhibiting its translation into protein [21]. Here, we demonstrate for the first time that miR-203b-3p suppresses Bcl-xL protein expression via direct binding to the gene's mRNA 3'UTR and correlates negatively with BCL2L1 mRNA expression in breast tumors. Secondly, several miRNAs are transcriptionally regulated by c-Myc [35]. In the breast cancer cohort, the levels of two adjacent miRNAs, miR-203b-3p and miR-203a-3p (the latter reported to indirectly repress Bcl-xL protein levels [36]), correlated positively with MYC expression. Moreover, the significant declines in the activity of the MIR203A/B promoter and in the levels of both miRNAs in c-Myc-depleted cells, along with the ENCODE ChIP data showing c-Myc enrichment at the MIR203A/B promoter, support the concept of c-Myc-induced transcription of these miRNAs. The induction of several Bcl-xL-suppressing miRNAs, including miR-203b-3p and miR-203a-3p, provides one feasible solution for the yet unidentified mechanism of c-Myc-mediated Bcl-xL suppression.
Interestingly, the let-7 miRNA family was recently reported to be upregulated in a c-Myc-driven manner in breast, lung, and hematopoietic cancers upon treatment with histone deacetylase inhibitors (HDACi) [39,40]. This led to the suppression of Bcl-xL, a let-7 target gene, which was essential for the drug-induced cell death in this context [39,40]. How widespread the mechanism of c-Myc-mediated suppression of Bcl-xL by specific miRNAs is and which drug treatments trigger the signaling event merit further studies. Our results from in vivo and in vitro studies suggest that paclitaxel treatment may modulate the activity of the c-Myc/miR-203/Bcl-xL axis. Paclitaxel- and HDACi-induced changes in chromatin condensation can be a contributing factor in the drug-induced switch of c-Myc transcriptional activity. Intriguingly, at therapeutic concentrations, both drugs can induce a similar mitotic phenotype: a transient mitotic arrest followed by slippage [9,41,42], which as an event may cause changes in c-Myc transcription factor function.

This study sheds new light on the molecular mechanism of c-Myc-mediated sensitization to taxane therapy by providing preliminary evidence for the existence of a c-Myc/miR-203/Bcl-xL pathway that contributes to the modulation of cancer cells' response to paclitaxel treatment in vitro and in vivo. Moreover, our results imply that especially MYC, but potentially also specific Bcl-xL-regulating miRNAs such as miR-203b-3p, may be harnessed as predictors of tumor cells' drug sensitivity in the future. Finally, the data presented here also support the promising concept of combining inhibitors of the Bcl-2 family proteins with taxanes to improve the treatment response, which has already yielded promising results in preclinical models [14,18,43,44].
"year": 2018,
"sha1": "5c1ae7b61d7c522f3a2740973aa9ba81c683cc97",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.tranon.2018.10.001",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5c1ae7b61d7c522f3a2740973aa9ba81c683cc97",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
16059492 | pes2o/s2orc | v3-fos-license | Design, Synthesis and Biological Evaluation of Benzohydrazide Derivatives Containing Dihydropyrazoles as Potential EGFR Kinase Inhibitors
A series of novel benzohydrazide derivatives containing dihydropyrazoles have been synthesized as potential epidermal growth factor receptor (EGFR) kinase inhibitors, and their biological activities as potential antiproliferative agents have been evaluated. Among these compounds, compound H20 exhibited the most potent antiproliferative activity against four cancer cell lines (A549, MCF-7, HeLa, HepG2), with IC50 values of 0.46, 0.29, 0.15 and 0.21 μM respectively, and showed the most potent EGFR inhibitory activity (IC50 = 0.08 μM). Molecular modeling simulations were performed in order to rationalize the biological activity and structure-activity relationship (SAR) of these benzohydrazide derivatives. These results suggested that compound H20 may be a promising anticancer agent.
Introduction
Cancer has become one of the most serious diseases and represents a great threat to human health around the world [1]. Although there are currently a large number of anticancer drugs, more and more people continue to die of cancer [2]. Epidermal growth factor receptor (EGFR) plays important roles in human cancer. As a member of the HER family, EGFR is a tyrosine kinase receptor which plays an essential role in normal cell growth and differentiation and is involved in tumor proliferation and survival [3,4]. The HER family comprises four members: EGFR (HER1/ErbB-1), ErbB-2 (HER2/neu), ErbB-3 (HER3), and ErbB-4 (HER4) [5]. EGFR is one of the most important targets in current cancer research, and its over-expression or abnormal activation often causes malignant cell transformation [6]. The over-expression of EGFR has been observed in many solid tumors, such as colon [7], ovarian [8], breast and non-small cell lung cancer (NSCLC) [9]. Therefore, EGFR inhibition has been developed as one of the most efficient strategies for cancer therapy.
Some naphthalene derivatives have been reported as potent microtubule inhibitors, apoptosis inducers, glutamamide analogues or P-glycoprotein inhibitors [22,23]. On the other hand, the naphthalene ring can be recognized in various biologically active compounds with clinical applications. Naftifine and terbinafine, allylamine antifungal agents, are widely used for the treatment of fungal infections [24][25][26]. Propranolol is an anti-angina drug, duloxetine is used for the treatment of depression, and nafimidone and its oxime ester derivative can be further developed for the treatment of epilepsy [27]. Therefore, preparation and extensive biological evaluations of naphthalene derivatives have continuously attracted our attention [28]. Besides, benzohydrazides are reported to possess a wide variety of biological activities like antiglycation [29], antioxidant [30], antileishmanial [22], antibacterial [31], antifungal [23], antitumor [32] and anticonvulsant [24]. Benzothiazole Schiff bases with diverse biological activities have also been reported [26]. The three combined substructures, the dihydropyrazole along with the naphthalene ring and benzohydrazides, might exhibit synergistic anticancer effects. All of these facts encouraged us to integrate these three moieties and screen new benzohydrazide derivatives containing dihydropyrazoles as potential EGFR inhibitory agents. Herein we disclose the design and synthesis of some novel benzohydrazide derivatives containing dihydropyrazoles as potential EGFR kinase inhibitors, and their antitumor activity against the A549 (human lung cancer), MCF-7 (human breast cancer), HeLa (human cervical cancer) and HepG2 (human hepatocellular cancer) cancer cell lines.
Chemistry
A series of novel benzohydrazide derivatives containing dihydropyrazole moieties were synthesized by the routes outlined in Scheme 1. The structures of the compounds are listed in Table 1. Methyl 4-hydrazinylbenzoate (B) was synthesized from 4-hydrazinylbenzoic acid in methanol at room temperature. The diverse substituted chalcones E1-E16 were obtained by stirring substituted acetophenones and naphthaldehyde in ethanol at room temperature for 6-8 h. The cyclization of the different chalcones with methyl 4-hydrazinylbenzoate in refluxing ethanol gave compounds F1-F16. Compounds F1-F16 further reacted with excess hydrazine hydrate (80%) to give the requisite intermediates G1-G16. The desired compounds H1-H32 were generated by the reaction of compounds G1-G16 with substituted benzaldehydes in ethanol for 6-8 h. Purified compounds H1-H32 were finally obtained by chromatography. All synthesized compounds H1-H32 gave satisfactory analytical and spectroscopic data in full accordance with their depicted structures, and other data were published in a Chinese patent [33].
Antiproliferative Assay
The antiproliferative activities of the newly synthesized derivatives H1-H32 were evaluated under identical conditions by the MTT assay against four cultured cell lines (A549, MCF-7, HeLa and HepG2), with erlotinib as control. The IC50 values of the compounds against these human cancer cells are summarized in Table 2. As shown, the results revealed that all of the target compounds exhibited significant antiproliferative activities, with IC50 values ranging from 0.15 to 100 μM. Among them, compound H20, which had IC50 values of 0.46, 0.29, 0.15 and 0.21 μM against A549, MCF-7, HeLa and HepG2, respectively, displayed the most potent antiproliferative activity, comparable to that of the positive control drug erlotinib (with corresponding IC50 values of 0.20, 0.12, 0.17 and 0.10 μM). These results suggested that compound H20 is more potent than the other compounds overall.

Modification of substituents such as methyl, halogen, methoxyl, ethyl and ethoxy groups was performed to explore the structure-activity relationships of these compounds. Compounds bearing one methyl group at the para-position of the A ring showed stronger antiproliferative activity (the IC50 values of H4 and H20: 0.45 and 0.21 μM against HeLa cells) than those with hydrogen (H1 and H17), fluorine (H11 and H27), chlorine (H14 and H30), bromine (H16 and H32), ethyl (H5 and H21), ethoxy (H8 and H24) and methoxyl (H6 and H22) substituents, in the order of Me > H > Br > Cl > F, Me > OMe and Et > OEt. This means that compounds with electron-donating groups at the para-position of the A ring had better antiproliferative activity than those with electron-withdrawing groups. A comparison of substituent positions on the A ring was as follows: for compounds H2-H4, H9-H11, H13-H14, H18-H20, H25-H27 and H29-H30, the potency order of substituent positions on the A ring was found to be para > ortho > meta. Compounds H9-H15 and H25-H31 with halogen groups on the A ring exhibited antiproliferative activities in the order of disubstituted > monosubstituted. Interestingly, compounds H6 and H22 with one methoxyl group on the A ring showed stronger antiproliferative activities than compounds H7 and H23 with three methoxyl groups, so we concluded that more electron-donating groups could reduce the electron density of the A ring and make it difficult to form π-π bonds with amino acids containing aromatic groups.
When the A ring substituents were kept constant, changes of the substituents on the B ring also affected the activities of these compounds. In contrast to the A ring, derivatives H17-H32 with a methoxyl group at the para-position of the B ring displayed better activities than those with it at the meta-position (compounds H1-H16). This indicated that the position of the substituent influenced the antitumor activity.
Kinase Inhibitory Activity
All the newly synthesized compounds were evaluated for their inhibitory activities against EGFR kinase using a solid-phase ELISA assay. The approved EGFR inhibitor drug erlotinib was used as a positive control. The inhibition constants (IC50) of the compounds are summarized in Table 3. The EGFR kinase inhibitory activities of the tested compounds correlated with the structure-activity relationships (SAR) observed in the cell proliferation assay. Among the tested compounds, compound H20 showed potent inhibitory activity with an IC50 of 0.08 µM, comparable to the positive control erlotinib (IC50 = 0.03 µM). The other tested compounds displayed moderate inhibitory activities, with IC50 values ranging from 0.35 µM to 24.58 µM. This suggests that the potent effects of the synthetic compounds in the cell proliferation assay were essentially related to their kinase inhibitory activities.
Cytotoxicity Test
All the target compounds H1-H32 were evaluated for their toxicity against 293T human kidney epithelial cells, with the median cytotoxic concentration (CC50) of each tested compound determined by the MTT assay. As displayed in Table 4, the compounds were tested at multiple doses to assess the viability of 293T cells. Judging from the CC50 data, all of the tested compounds demonstrated cytotoxic activities in vitro against 293T cells similar to that of erlotinib.
Analysis of Apoptosis
To verify whether the inhibition of HeLa cell growth was related to apoptosis, we used flow cytometry with an Annexin V-FITC/PI Apoptosis Detection Kit to measure the apoptosis induced in HeLa cells by compound H20. The uptake of Annexin V-FITC/PI markedly increased, while the fraction of normal cells decreased significantly, in a dose-dependent manner. The results are shown in Figures 2 and 3. As can be seen, the percentages of apoptotic cells were 9.01%, 28.7%, 34.9%, 47.7% and 58.12% in response to 0, 2.0, 4.0, 6.0 and 8.0 µM compound H20, respectively. This leads to the conclusion that the percentage of apoptotic cells increased significantly after treatment with high doses of H20.
Molecular Docking
To gain a better understanding of the potency of the 32 compounds and to guide further SAR studies, we examined the interaction of these compounds with EGFR (PDB code: 1M17) by molecular docking. A simulation of binding between the compounds and the ATP binding site in EGFR was performed. All docking runs used the DS 4.5 software (Discovery Studio 4.5, Accelrys, Co. Ltd., Beijing, China). The results, presented in Figures 4 and 5, show the optimal binding mode of compound H20 interacting with the 1M17 protein. The amino acid residues of EGFR that interact with compound H20 are labeled. In the proposed binding mode, compound H20 was nicely bound to the ATP binding pocket of EGFR through hydrogen bond and hydrophobic interactions.
In the binding model, compound H20 was well bound to the EGFR protein, with seven amino acids located in the binding pocket (Gln767, Thr766, Arg752, Val702, Leu694, Leu820 and Ala719) playing an important role in the interaction. As can be seen from Figures 4 and 5, Gln767 formed a conventional hydrogen bond, and Thr766 a van der Waals contact, with the nitrogen atom of the Schiff base group, which enhanced the binding of compound H20. Moreover, a carbon-hydrogen bond and π-sulfur interactions were found between Arg828 and Cys751 and a benzene ring. Meanwhile, Leu694 formed three alkyl interactions with the methyl group, a benzene ring and the dihydropyrazole ring. Alkyl and π-alkyl interactions were also present between Leu820 and the dihydropyrazole ring and a benzene ring, respectively, and Ala719 formed π-alkyl interactions with a benzene ring. The naphthalene moiety was bonded to Val702 by a π-alkyl interaction. In the 3D docking model, compound H20 was nicely bound to the ATP binding site through the hydrogen bond with the backbone of Gln767 (distance = 2.4 Å), which, in theory, increased the binding affinity dramatically, as displayed in Figure 5.
In addition, the predicted binding interaction energy was used as the criterion for ranking; the estimated interaction energies of the synthesized compounds ranged from −64.95 to −52.99 kcal/mol, as displayed in the histogram in Figure 6. Compound H20 had the best estimated binding free energy of −64.95 kcal/mol for EGFR. These molecular docking results, along with the biological assay data, suggested that compound H20 is a potential inhibitor of EGFR.
General Information
All chemicals and reagents used in the current study were commercially available analytical or chemically pure grade reagents, unless otherwise indicated. All reactions were monitored by thin-layer chromatography (TLC) on glass-backed silica gel sheets (silica GF254) and visualized under UV light (254 or 365 nm). Column chromatography separations were performed on silica gel (200-300 mesh) using EtOAc/petroleum ether as eluent. Melting points (uncorrected) were determined using an X-4 MP apparatus (Taike Corp., Beijing, China). All 1H-NMR and 13C-NMR spectra were recorded on a DPX400 spectrometer (Bruker, Billerica, MA, USA) in DMSO-d6, and chemical shifts are reported in δ (ppm) relative to the internal standard tetramethylsilane (TMS). Mass spectra (MS) were recorded using a Mariner System 5304 mass spectrometer (Isoprime, Manchester, UK).
Synthesis of Methyl 4-Hydrazinylbenzoate (B)
4-Hydrazinylbenzoic acid (3.0 g, 0.020 mol) was suspended in anhydrous methanol (30 mL) and thionyl chloride (5 mL) was added dropwise at 0 °C over 30 min. The reaction mixture was then stirred at room temperature for 20-24 h; the precipitate that formed was filtered, washed three times with petroleum ether/ethanol (5:1) to remove residual 4-hydrazinylbenzoic acid, and dried to give the title compound, methyl 4-hydrazinylbenzoate.
General Synthetic Procedure for Diverse Substituted Chalcones E1-E16
The diverse substituted chalcones E1-E16 were synthesized by reacting the appropriate substituted acetophenone (3.0 mmol) with one equivalent of naphthaldehyde (0.5 g, 3.0 mmol) using 40% potassium hydroxide (2 mL) as catalyst in ethanol (20 mL). The reaction mixture was stirred at room temperature for 6-8 h; the yellow precipitate that formed was filtered, washed three times with petroleum ether/ethanol (5:1) to remove impurities, and dried to give the diverse substituted chalcones.
General Synthetic Procedure for Compounds G1-G16
Compounds F1-F16 (3.0 mmol) were dissolved in DMF (10 mL) and excess hydrazine hydrate (80%, 5 mL) in methanol (20 mL) was added dropwise. The solution was refluxed at 70 °C for 20-24 h, then poured into water (50 mL) and extracted with ethyl acetate (3 × 30 mL). The organic layer was collected, washed with brine, dried over anhydrous sodium sulfate and concentrated in vacuo to give the crude products G1-G16 in 60%-80% yield.
Antiproliferation Activity
IC50 values of the test compounds against A549, MCF-7, HeLa and HepG2 cells were obtained from the State Key Laboratory of Pharmaceutical Biotechnology, Nanjing University, and were determined by a standard MTT-based colorimetric assay (Beyotime Inst. Biotech., Nanjing, China). Tumor cell lines were grown to mid-log phase, diluted to 5 × 10³-1 × 10⁴ cells/mL and seeded in 96-well plates (Philas, Nanjing, China) at a density of 5 × 10³-1 × 10⁴ cells per well. The outer wells of the plate, containing 1× PBS solution, were not used for the assay to avoid possible edge effects. The plate was incubated at 37 °C in a 5% CO2 incubator in DMEM containing 10% fetal bovine serum (FBS, Gibco, Waltham, MA, USA) for 24 h. Then, 100 µL of drug-containing medium was added to the wells in a concentration series to give final drug concentrations of 100, 10, 1 and 0.1 µmol/L. After 12 h, cell survival was determined by the addition of MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) solution (10 µL of 5 mg/mL MTT in 1× PBS). After 4 h of incubation, the medium was discarded and 150 µL DMSO was quickly added per well; the plates were shaken for 10 min to ensure complete dissolution of the formazan. Optical absorbance was measured at 570 nm on a microplate reader (Bio-Rad, Hercules, CA, USA). Survival ratios are expressed as percentages with respect to untreated cells. The experiments were replicated at least three times to verify the reproducibility of the methodology under the above-mentioned conditions. The results are summarized in Table 2.

Kinase Assay

EGFR tyrosine kinase activity was determined by an enzyme-linked immunosorbent assay (ELISA) in precoated 96-well plates in a 10 µL reaction volume including 4 µL diluted test compound, 4 µL substrate and 2 µL ATP. After incubation for 1 h at 37 °C, the reactions were stopped with 10 µL 2% (v/v) H3PO4. The plates were then aspirated and washed twice with 200 µL 0.9% (w/v) NaCl, and the incorporation of Pi was determined with an LX300 Epson Diagnostic microplate reader. The residual kinase activity at each compound concentration was expressed as a percentage with respect to untreated controls, from which the compound IC50 values were derived. The experiments were replicated at least three times to verify the reproducibility of the methodology under the above-mentioned conditions. The results are summarized in Table 3.
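Both assays above reduce to fitting an IC50 from percent response versus concentration. The following minimal Python sketch shows one common way to do this with a four-parameter logistic curve; it is not the authors' analysis code, and the concentrations and absorbance values are hypothetical.

```python
# A minimal sketch (not the authors' code) of deriving percent viability and
# an IC50 from MTT absorbance readings; all numbers below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(c, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

conc = np.array([0.1, 1.0, 10.0, 100.0])           # µM, as in the assay
a570_treated = np.array([0.82, 0.66, 0.31, 0.12])  # hypothetical OD570 values
a570_control = 0.90                                # untreated-cell mean OD570

viability = 100.0 * a570_treated / a570_control    # % survival vs. untreated

params, _ = curve_fit(four_param_logistic, conc, viability,
                      p0=[0.0, 100.0, 5.0, 1.0], maxfev=10000)
print(f"Estimated IC50 = {params[2]:.2f} µM")
```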
Cytotoxicity Assay
The cytotoxic activity in vitro was measured by the colorimetric MTT assay. Cells were incubated in a 96-well plate at a density of 10⁵ cells per well with various concentrations of compounds (160, 40, 10, 2.5, 0.25 µM) in 10% DMSO (10 µL) for 24 h. For the cytotoxicity assay, 20 µL of MTT (5 mg/mL) was added per well 4 h before the end of the incubation. After removing the supernatant, 200 µL DMSO was added to dissolve the formazan crystals. The absorbance at 570 nm was read on an LX300 Epson Diagnostic microplate reader; untreated cells were used as negative controls. The results are summarized in Table 4.
Cell Apoptosis Assay by Flow Cytometry
For Annexin V/PI assays, HeLa cells were stained with Annexin V-FITC and PI and then monitored for apoptosis by flow cytometry. Briefly, 5 × 10³ cells were seeded in 6-well plates for 24 h and then treated with H20 (2.0, 4.0, 6.0, 8.0 µM) for 24 h. Cells were then collected, washed twice with PBS and stained with 5 µL Annexin V-FITC and 5 µL PI (5 µg/mL) in 1× binding buffer (10 mM HEPES, pH 7.4, 140 mM NaCl, 2.5 mM CaCl2) for 15 min at room temperature in the dark. Apoptotic cells were quantified using a BD Accuri C6 Flow Cytometer (Becton, Dickinson and Company, Clifton, NJ, USA). Statistical analysis was done using FlowJo 7.6.1 software. Both early apoptotic (Annexin V-positive, PI-negative) and late apoptotic (Annexin V and PI double-positive) cells were detected.
Docking Simulations
The crystal structure of the protein complex was retrieved from the RCSB Protein Data Bank (PDB code: 1M17). Three-dimensional structures of the compounds were constructed using Chem3D Ultra 14.0 software (CambridgeSoft Corporation, Boston, MA, USA) and energetically minimized using MMFF94 with 5000 iterations and a minimum RMS gradient of 0.10. Molecular docking of compound H20 into the three-dimensional EGFR complex structure was carried out using the Discovery Studio software (Discovery Studio 4.5, Accelrys, Co. Ltd., Beijing, China) through the graphical user interface of the DS-CDOCKER protocol. All bound waters and ligands were removed from the protein and polar hydrogens were added. Briefly, we defined the EGFR complex as the receptor and then simulated binding of compound H20 into the ATP binding site of EGFR by replacing the ATP molecule with our compound.
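For readers without access to Discovery Studio, the ligand-preparation step described above (3D structure generation followed by MMFF94 minimization with 5000 iterations) can be approximated with the open-source RDKit, as in the sketch below. The SMILES string is a placeholder, not the structure of compound H20, and the docking step itself is not reproduced.

```python
# A minimal sketch, not the authors' Discovery Studio workflow: it reproduces
# only the ligand preparation (3D embedding + MMFF94 minimization) in RDKit.
from rdkit import Chem
from rdkit.Chem import AllChem

smiles = "c1ccc2ccccc2c1"  # placeholder ligand (naphthalene core only)
mol = Chem.AddHs(Chem.MolFromSmiles(smiles))

AllChem.EmbedMolecule(mol, randomSeed=42)          # generate 3D coordinates
# MMFF94 minimization; maxIters mirrors the 5000 iterations used above
status = AllChem.MMFFOptimizeMolecule(mol, mmffVariant="MMFF94",
                                      maxIters=5000)
print("converged" if status == 0 else "not converged")
Chem.MolToMolFile(mol, "ligand_minimized.mol")     # input for a docking tool
```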
Conclusions
A series of benzohydrazide derivatives H1-H32 containing dihydropyrazoles was synthesized and evaluated in vitro for anticancer activity against the A549, MCF-7, HeLa and HepG2 cell lines, for EGFR inhibitory activity, and for cytotoxicity by MTT assay. These compounds exhibited potent EGFR inhibitory and antiproliferative activities against the four cell lines. Among them, compound H20 showed the most potent EGFR inhibitory activity (IC50 = 0.08 µM) and anticancer activity against A549, MCF-7, HeLa and HepG2 (IC50 = 0.46, 0.29, 0.15 and 0.21 µM, respectively). Docking simulation of the binding model of compound H20 with EGFR indicated that conventional hydrogen bonds and van der Waals interactions with the protein residues in the ATP binding site might play a crucial role in its EGFR inhibition and antiproliferative activities. Therefore, compound H20 may be developed as a potential antitumor agent.

Author Contributions: Haichao Wang and Hailiang Zhu conceived, designed and performed the synthetic experiments; Hongxia Li analyzed the data; Xiaoqiang Yan conceived, designed and performed the docking simulation study; Tianlong Yan conceived, designed and performed the pharmacological tests; Haichao Wang wrote the paper. Xiaoqiang Yan, Zhongchang Wang and Hailiang Zhu assisted with the paper revision.
"year": 2016,
"sha1": "e1f5a3d975d47eb92c0aea8ea137568a56ecc728",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/21/8/1012/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e1f5a3d975d47eb92c0aea8ea137568a56ecc728",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Physicochemical characterization and sensory profile of 7 principal Tunisian date cultivars
The physicochemical characterization and sensory evaluation of six Tunisian date cultivars, preselected on the basis of a D-optimal design, were carried out to compare them with the principal Tunisian date cultivar, Deglet Nour. The morphological (fresh fruit weight and pulp content) and physicochemical (quality index) studies showed a great diversity among the tested cultivars. In fact, the percentage of pulp indicated the existence of cultivars as interesting as Deglet Nour (89.3 ± 0.0), such as Horra (91.9 ± 0.1) and Alig (92.3 ± 0.1). Chemical analysis showed that Mnekher had a high level of total sugars (59.2 ± 0.0% of FM) and that Angou presented the highest ash content (3.6 ± 0.0%). The sensory profiling also revealed that each cultivar has its own distinctive characteristics (colour, texture and taste) and that Deglet Nour, Mnekher and Alig presented a tender and soft texture unlike the others, especially the cultivar Kintichi. In addition, the results of the hedonic study showed that Deglet Nour, known as the "finger of light", was the most appreciated (best preference score), followed by Alig and Mnekher, whereas the other studied cultivars were rather rejected by the consumers, especially Horra, Kintichi, Angou and Hamra. These two sensory evaluations revealed that the Tunisian consumer is more attracted by sweet and soft cultivars.
Introduction
In Tunisia, the date palm (Phoenix dactylifera L.) has great nutritional and economic importance (Besbes et al., 2009; El Arem et al., 2011). In fact, date fruits are an integral part of the Tunisian diet. The information accrued in the past four decades suggests that dates possess diverse medicinal uses, including anti-hyperlipidemic, anticancer, gastroprotective, hepatoprotective and nephroprotective activities, thereby serving as an important healthy food in the human diet (Baliga et al., 2011). The observed pharmacological properties may be attributed to the presence of a high concentration of minerals (e.g. selenium, copper, potassium and magnesium, and moderate amounts of manganese, iron, phosphorus and calcium) and various other phytochemicals of complex chemical structure (e.g. phenolics, vitamins A, B1, B2, B3, B6, B9 and C, and insoluble fibers) (Al Farsi and Lee, 2008).

There are more than 600 varieties of dates worldwide, differing in the shape and organoleptic properties of the fruits (Ahmed et al., 1995). The Tunisian palm grove is represented by more than 250 cultivars (Rhouma, 1994), which are threatened primarily by the expansion of Deglet Nour (Ferry et al., 1998). The preservation of this heritage requires better knowledge of these cultivars, including morphological, chemical, biochemical and especially sensory characterization. This would help improve their characteristics, particularly in terms of taste. Sensory analysis is an indispensable prerequisite to chemical analysis in the definition of the characteristics and value of food products (Gerbi et al., 1997). However, sensory analysis of dates is particularly arduous because of the sweet taste of the product; that is why the relationships between Tunisian consumer preference and the sensory properties of fresh dates have not been established in the scientific literature (Ismail et al., 2008). Tunisian studies on this sector are few. Some have focused on the morphological characterization of dates or date products (Rhouma, 1994) and others on physicochemical and biochemical characterization (Reynes et al., 1994; Bouabidi et al., 1996), but few have considered the sensory evaluation of dates (Al Hooti et al., 1997; Ismail et al., 2001). Since no sensory study has been performed on Tunisian cultivars, this work develops a comprehensive physicochemical and sensory characterization of seven Tunisian date varieties, the majority of which have so far attracted no commercial or industrial interest compared with Deglet Nour.
Morphological characterization
Twenty-five fruits were selected randomly and used for all morphological and physicochemical analyses. Each individual fruit, representing one replicate, was subjected to determination of length and diameter using a micrometer caliper (± 0.01 mm) (Ismail et al., 2008). Pulp and pit weights were determined with a precision balance (model EG-220-3NM, ± 0.002 g).
Physicochemical analysis
The amounts of total and reducing sugars in the fruit were evaluated by the dinitrosalicylic acid (DNS) method after defecation (clarification) of the sample (Miller, 1959). To 1 mL of the date solution, 4 mL of the DNS reagent was added. Tubes were placed in a boiling water bath for 5 min, transferred to ice to cool rapidly, and then brought to room temperature. The absorbance was measured at 540 nm.
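The absorbance reading is converted to a sugar concentration via a glucose standard curve. The short Python sketch below shows the assumed workflow; the standard-curve and sample absorbances are hypothetical, not values from this study.

```python
# A minimal sketch (assumed workflow, not taken from the paper) of converting
# DNS absorbance at 540 nm into a reducing sugar concentration.
import numpy as np

std_conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])        # g/L glucose standards
std_a540 = np.array([0.02, 0.18, 0.37, 0.55, 0.71, 0.90])  # hypothetical OD540

slope, intercept = np.polyfit(std_conc, std_a540, 1)       # linear standard curve

sample_a540 = 0.48
sugar_g_per_L = (sample_a540 - intercept) / slope
print(f"Reducing sugars: {sugar_g_per_L:.2f} g/L in the diluted extract")
```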
The water content was measured by drying 2 g of pulp in a drying oven at 80 °C until constant weight was reached. Results are expressed as percent of fresh weight (El Arem et al., 2011).
The pH was measured using a pH meter. Four grams of date pulp were dispersed in a flask with 200 mL of boiling water. After cooling, this solution was used for the determination of the pH (El Arem et al., 2011).
The ash content was determined by mineralization of the samples, incinerating 1 g of powdered flesh in a porcelain container at 450 °C for 5 h (El Arem et al., 2011). Ash contents are expressed as percent of dry weight.
Sensory profiling
Eleven trained panellists were selected according to ISO 8586-2 (1994) to describe and rate the sensory properties of dates. The selection of descriptors for establishing a sensory profile was made according to the method described by Stone et al. (1974). Ten attributes defined the sensory profile, covering external aspect, colour, tactile texture, mouth texture, stickiness, astringency, sweetness, bitterness, sourness and sandiness. The evaluation of these attributes was structured on a numerical scale from 1 to 10. Because it is difficult for a judge to compare more than 7 products simultaneously (Claustriaux, 2001), a reduction of the number of cultivars (17) used in the hedonic and sensory analyses was necessary; the selection method was based on a D-optimal design implemented in MATLAB® (1999) software. The final number of selected cultivars was 6 (Alig, Hamra, Horra, Angou, Mnekher and Kintichi), to which the cultivar Deglet Nour was added.
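As a rough illustration of D-optimal selection, the sketch below greedily chooses the subset of cultivars whose descriptor matrix X maximizes det(XᵀX). It is a generic analogue of the MATLAB routine used here, not the authors' code, and the random matrix stands in for the real 17-cultivar descriptor data.

```python
# A minimal sketch of greedy D-optimal subset selection; a small ridge term
# keeps the determinant informative while there are fewer rows than columns.
import numpy as np

rng = np.random.default_rng(0)
X_all = rng.normal(size=(17, 4))   # 17 cultivars x 4 standardized descriptors

def greedy_d_optimal(X, k, ridge=1e-9):
    chosen, remaining = [], list(range(X.shape[0]))
    I = np.eye(X.shape[1])
    for _ in range(k):
        best_i, best_det = None, -np.inf
        for i in remaining:
            Xi = X[chosen + [i]]
            d = np.linalg.det(Xi.T @ Xi + ridge * I)
            if d > best_det:
                best_i, best_det = i, d
        chosen.append(best_i)
        remaining.remove(best_i)
    return chosen

print("Selected cultivar indices:", greedy_d_optimal(X_all, 6))
```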
Consumer test
The hedonic evaluation was conducted with the participation of 100 untrained Tunisian volunteers (Watts et al., 1991; Lespinasse et al., 2002). The group comprised 50.1% females and 49.9% males (INS, 2012); 50% of them were between 20 and 40 years old and 50% between 40 and 60 years old. Evaluations took place in individual sensory booths, and daylight lighting was used (Lespinasse et al., 2002). The products were presented anonymously, identified by 3-digit codes and arranged on the table at random (Lespinasse et al., 2002). Consumers were asked to rate each sample on a 9-point descriptive scale where 1 = dislike extremely, 5 = neither like nor dislike, and 9 = like extremely (Ahenkora et al., 1998).
Statistical analysis
Data analysis was performed using descriptive univariate analyses based on ANOVA for each descriptor. The statistical analysis was conducted using MATLAB® (version 5.3, 1999), and non-random associations between categorical variables (at the level of p < 0.01) were determined using the Fisher test (Foucart et al., 1984). The multivariate analysis was done using principal component analysis (PCA) (Husson and Pagès, 2007). The analysis of the consumer data, also performed in MATLAB®, allowed the classification of cultivars by order of preference and identified the accepted and rejected ones.
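A minimal Python sketch of the two analyses described above, one-way ANOVA per descriptor followed by PCA on the cultivar profiles, is given below; the data layout is assumed and the scores are randomly generated placeholders, not the panel data.

```python
# A minimal sketch (assumed data layout, not the authors' MATLAB script).
import numpy as np
from scipy.stats import f_oneway
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# 11 panellists x 7 cultivars for one descriptor (e.g. sweetness), scale 1-10
scores = rng.uniform(1, 10, size=(11, 7))

# One-way ANOVA: does the descriptor differ among cultivars?
F, p = f_oneway(*[scores[:, j] for j in range(7)])
print(f"sweetness: F = {F:.2f}, p = {p:.3f}")

# PCA on the 7 cultivars x 10 descriptors matrix of panel means
profile = rng.uniform(1, 10, size=(7, 10))
pca = PCA(n_components=2)
coords = pca.fit_transform(profile)     # cultivar coordinates on FD1/FD2
print("variance explained:", pca.explained_variance_ratio_)
```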
Morphological characterization
The length of date palm fruits varies from 46.3 ± 0.5 mm to 27.7 ± 0.1 mm (Table 1). As for date width, it ranges from the single (13.3 ± 0.1 mm for Khalt Emmwachim) to the double (26.3 ± 0.1 mm for Tezerzit Safra). These results are similar to those obtained by Rhouma (1994). The average weight of the fruit also varies from one to three depending on the cultivar; it is between 5.5 ± 0.2 g for Khlat Saad and 17.4 ± 0.4 g for Mnekher.
Chemical characterization
The total sugar content, expressed as a percentage of fresh material (FM), varies between 44.0 ± 0.0% and 62.7 ± 0.1% of FM (Table 2). It is interesting to note that the cultivars Zehdi and Mnekher, which have high levels of total sugars (62.7 ± 0.1% and 59.2 ± 0.0% of FM, respectively), are subject to rapid fermentation and cannot be kept long. However, they can be used to obtain fermented products such as date vinegar or wine.
The pH values extend over an interval between 5.6 ± 0.1 (Deglet Nour) and 6.7 ± 0.0 (Besser Helou). These values are close to those found by Reynes et al. (1994) (between 5.0 and 6.3). The levels of malate did not vary much among cultivars. According to Centeno et al. (2001), and despite the fact that the organic acid content of a fruit is regarded as one of its most commercially important quality traits when assessed by the consumer, relatively little is known concerning the physiological importance of organic acid metabolism for the fruit itself.
The ash content of the dates varied from 0.9 ± 0.1 to 3.6 ± 0.0% of dry matter. The highest ash content was noted for Angou. It is interesting to note that Deglet Nour (2.4 ± 0.1%) was within the average range, so several other cultivars can be considered more interesting in terms of ash content.
Colour is defined by three parameters (L, a and b) that vary widely among cultivars. The parameter L lies between 24.5 ± 0.1 (Tezerzit Safra) and 73.8 ± 0.7 (Deglet Hassen) (Table 3). The cultivar Deglet Nour presented a good brightness (36.6 ± 0.4), but less than Deglet Hassen, Kintichi and Angou. These results indicate that the colour of these 3 cultivars is as good as that of Deglet Nour, which is known as "fingers of light". Concerning the parameter a, positive values indicate reddish colours, while negative ones indicate greenish colours. The results showed values between 17.6 ± 0.1 for Angou and 2.4 ± 0.1 for Tezerzit Safra.
Finally, the parameter b indicates a range of colour between yellow (for positive values) and blue (for negative values). The results showed a large difference between cultivars, from 36.6 ± 0.2 for Kintichi to 1.7 ± 0.6 for Tezerzit Safra. The value for the cultivar Deglet Nour was intermediate, at about 17.4 ± 0.4.
Colour is an important distinctive characteristic in plant breeding and is therefore often used in official Plant Breeders' Rights (PBR) tests (van Eck and Franken, 1994). The chromameter is generally used to describe a colour objectively and to communicate about colour. It has demonstrated a high potential to improve the objectivity of the description of any coloured varieties (van Eck and Franken, 1994). However, when the assessment of colour is done visually, it may be subjective and depends on individual and personal interpretation. The lack of suitable charts makes an accurate description of samples hardly possible (van Eck and Franken, 1994). In addition, it is difficult to express the differences between the observed colour and a comparable colour chart.
Many substances are responsible for fruit colour, such as carotenoids (a class of natural fat-soluble pigments) and anthocyanins. Studies (Boudries et al., 2007) have shown that dates contain the carotenoids lutein, β-carotene and neoxanthin. A significant decline in the carotenoid level occurs during the transition from the Khalal to the Tamar stage, that is, during the ripening process (Boudries et al., 2007). Anthocyanins are water-soluble vacuolar pigments and may appear red, purple, or blue. They are widely distributed in many fruits, including dates, and have potential health benefits (Wang et al., 1997; Al Farsi et al., 2005).
Colour is an important parameter in date consumer acceptance because it represents the first visual contact with the product. The differences in texture between cultivars (Table 3), measured with and without pit, showed that Kintichi (a dry date) can carry a load of 5.9 ± 0.2 kg with pit and 5.2 ± 0.4 kg without pit, while Tezerzit Safra, which is a soft date, cannot carry more than 1.3 ± 0.1 kg with pit and 1.1 ± 0.0 kg without pit. Moisture content and texture are often used to classify date palm varieties into 3 main types: soft, semi-dry and dry cultivars. Local date palm varieties are numerous and well adapted to agro-ecological conditions. Their denominations are strictly local, originating most often from the place of cultivation or from the colour or shape of the fruits (Mohamed Vall et al., 2011).
Sensory Evaluation

Sensory characterization
Sensory analysis is considered an important technique for determining product quality. It comprises a set of techniques for the accurate measurement of human responses to foods (Pérez Elortondo et al., 2007). It has been applied to determine appearance, odour, flavour and texture properties, which are important characteristics of food product quality. Sensory evaluation requires panels of human assessors on whom the products are tested and whose responses are recorded. This is the case in our study, in which different date cultivars were characterized. The results of the sensory characterization showed a statistically significant difference between cultivars for the descriptors studied, at a confidence level of 95%. Indeed, for the criteria colour, appearance, mouth texture (soft, sticky and astringent) and taste (sweet and bitter), significant differences between the studied cultivars were recorded (Table 4), whereas this was not the case for the acid flavour and the sandy criterion (presence of sugar crystals). A graphic presentation of the individuals (subjects) and variables (cultivars) is given in Figure 1. The first two axes account for 90% of the total information. The first axis (FD1) is essentially an axis of tactile or mouth texture, while the second axis (FD2) is an axis of colour. It is clear that Deglet Nour, placed at the top right, stands out from the whole group. It is also interesting to note that the group Deglet Nour, Mnekher and Alig lies in the right part of the plane, in contrast to the assessments for the cultivars Kintichi, Hamra, Horra and Angou. In fact, Deglet Nour, Mnekher and Alig present a tender and soft texture, unlike the others, especially the cultivar Kintichi, which is generally qualified as dry and hard. Also, cultivars with a dark colour, like Mnekher and Alig, are located in the upper part of the plane, in contrast to Kintichi, which is lighter. This graph confirms the clear distinction between cultivars and the typicity of each of them, and it highlights the subject effect. Indeed, we can clearly see the discriminating power and degree of agreement of the tasting panel concerning Deglet Nour, Alig, Mnekher, Hamra and Kintichi. However, we noticed a great dispersal of points concerning Horra and Angou. Moreover, bitterness, astringency and sandiness were negatively correlated with axis 1 and opposed to appearance, mouth texture and feel, sweetness, stickiness and colour. Tenderness in the mouth is negatively correlated with astringency and sandiness, which are themselves highly correlated (correlation coefficient close to 1). Overall, the results showed a division of the cultivars into three groups (the first containing Angou, Kintichi, Hamra and Horra, the second Deglet Nour and Alig, and the last only Mnekher), determined primarily by bitterness, astringency and sandiness, which are opposed to appearance, mouth and tactile texture, colour and stickiness. Deglet Nour and Alig presented the best results for appearance, texture, and mouth and tactile stickiness.
Hedonic evaluation
The results of the panel evaluation (100 subjects) showed significant variation between the different assessed cultivars (Figure 3). The scores of Deglet Nour and Alig were significantly higher than those of the other cultivars; in fact, they attracted 90% and 70% of the consumers, respectively (left part of the presentation) (Figure 3). The segmentation of Tunisian consumers appeared to follow whether a sweet date with a good mouth texture and a better appearance was preferred, whereas bitter, astringent and sandy dates were rejected. In addition, the histogram of the average preference scores of men and women for the cultivars (Figure 4) showed no significant difference in the order of preference: both preferred Deglet Nour and Alig. The study of the effect of age on consumer preferences showed that consumers aged between 20 and 30 years and those above 30 years preferred, in order, the cultivars Deglet Nour, Alig, Mnekher, Horra, Kintichi, Hamra and Angou (Figure 5). However, subjects aged between 8 and 20 years gave a different order of preference: Deglet Nour, Alig, Horra, Mnekher, Hamra, Kintichi and Angou. Alig and Deglet Nour nevertheless remained the two cultivars most appreciated by all subjects. The study of the difference in preference by region for the 7 cultivars studied (Figure 6) revealed that subjects from southern and northern Tunisia have nearly the same order of preference: Deglet Nour, Alig, Mnekher, Horra, Hamra, Kintichi and Angou. However, subjects from the centre of Tunisia preferred, in order, Deglet Nour, Alig, Horra, Mnekher, Kintichi, Hamra and Angou. From this segmentation by region we can say that Deglet Nour, Alig and Angou kept the same position in the consumers' preferences, whereas there was a permutation between Horra-Mnekher and Hamra-Kintichi for subjects from the centre of the country compared with those from southern and northern Tunisia.
Comparison between sensory evaluation and consumer ranking
In both the panel evaluation and the consumer ranking, Deglet Nour was judged to have by far the best quality among the tested varieties. Also, Alig and Mnekher did not differ significantly in either the panel evaluation or the consumer ranking; they came second in the order of preference. However, as stated by Hollingsworth (1996), it is not easy to translate consumer language. For example, when consumers say that a product is too sweet, they might not want it to be less sweet; they probably want something to balance the sweetness. On the other hand, within each variety there are sometimes qualitative differences; thus, results from a taste panel working on samples from a known background might differ from those obtained with other samples of the same variety coming from a different source (Ismail et al., 2001). This work should therefore be pursued using a larger number of varieties from different sources. According to Colomb and Stocker (2007), the brain collects two types of information from taste stimuli: the hedonic aspect (Is it good or bad?) and the sensory aspect (What kind of molecule is it?). While the hedonic information commands ingestion or rejection of food, molecular information is thought to be essential for modifying responses to food through learning. In mammals, the hedonic aspects (good versus bad) and sensory aspects (i.e., the molecular quality) of taste are associated with different brain regions.
In fact, food quality is a multivariate notion ('it tastes good - it looks traditional, safe, healthy, etc.'). Traditional foods like dates are sometimes used to carry an image of foods that taste good but could at the same time be perceived as good for health (as related to natural products, with no chemical modification and no additives). These aspects, taste and health, have to be improved in parallel and clarified for the consumers (Cayot, 2007).
Conclusion
The sensory characterization of 7 Tunisian date cultivars, the majority of which, unfortunately, have so far attracted no commercial or industrial interest, proved that each cultivar is distinguished by several criteria (appearance, mouth texture and feel, sweetness, bitterness, acidity, astringency, stickiness and sandiness). Significant differences among cultivars were observed in the analysis results. For example, each cultivar presents its own colour: Kintichi is qualified as the lightest and Alig as the darkest. Also, no significant difference was seen for acidity and sandiness across all cultivars. This sensory characterization has already allowed the date sector to validate the organoleptic interest of new cultivars such as Alig and Mnekher, which have characteristics almost similar to those of Deglet Nour. The hedonic characterization of these cultivars showed that the cultivar Deglet Nour held the top spot for preference, followed by Alig and Mnekher, and that the other cultivars were not appreciated by consumers. Indeed, the cultivar Angou was the most depreciated, followed by Hamra, Kintichi and Horra. On the other hand, this sensory analysis, in both its analytical and hedonic aspects, revealed that the Tunisian consumer is attracted by soft and sweet cultivars with the best mouth texture and best appearance. This study allowed us to conclude that the cultivar Mnekher, which is a rare and declassified date, featured prominently on the preference map because it covered 50% of potential preferences. Such a result suggests that local extension work should help farmers to grow these cultivars and that more awareness campaigns should help the development of their consumption or valorization (Al-Farsi, 2007; Besbes et al., 2009). Owing to its nutritional importance, abundance and low cost, the date fruit remains a species with tremendous potential and countless possibilities for further investigation.
Figure 4. Average notes of the preferences expressed by men and women according to Tunisian cultivars.

Figure 5. Average notes of the preferences expressed by age according to Tunisian cultivars.

Figure 6. Average notes of the preferences expressed by area according to Tunisian cultivars.

Table 1. Morphological characteristics of 17 Tunisian date cultivars based on fruit measurements.

Table 2. Chemical characteristics of 17 Tunisian date cultivars based on fruit measurements.

Table 3. Textural characteristics and colour of 17 Tunisian date cultivars based on fruit measurements. TexN: charge applied to date with pit; Tex: charge applied to date without pit; L* corresponds to levels of darkness/lightness between black and white, a* signifies the balance between red/green, and b* between yellow/blue.

Table 4. Descriptor effect analysis calculated by Fisher ratio.
"year": 2013,
"sha1": "ccdcd0031875621d747f571cee98f0bea1b86840",
"oa_license": "CCBY",
"oa_url": "https://www.ejfa.me/index.php/journal/article/download/917/659",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "f296b34ae1e0fc5265db5bf2e5a0c1ca4bf751ea",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
Magnetic Particle Imaging for Magnetic Hyperthermia Treatment: Visualization and Quantification of the Intratumoral Distribution and Temporal Change of Magnetic Nanoparticles in Vivo
Purpose: Magnetic hyperthermia treatment (MHT) is a strategy for cancer therapy using the temperature rise of magnetic nanoparticles (MNPs) under an alternating magnetic field (AMF). Recently, a new imaging method called magnetic particle imaging (MPI) has been introduced. MPI allows imaging of the spatial distribution of MNPs. The purpose of this study was to investigate the feasibility of visualizing and quantifying the intratumoral distribution and temporal change of MNPs and predicting the therapeutic effect of MHT using MPI. Materials and Methods: Colon-26 cells (1 × 10⁶ cells) were implanted into the backs of eight-week-old male BALB/c mice. When the tumor volume reached approximately 100 mm³, mice were divided into untreated (n = 10) and treated groups (n = 27). The tumors in the treated group were directly injected with MNPs (Resovist®) with iron concentrations of 500 mM (A, n = 9), 400 mM (B, n = 8), and 250 mM (C, n = 10), respectively, and MHT was performed using an AMF with a frequency of 600 kHz and a peak amplitude of 3.5 kA/m. The mice in the treated group were scanned using our MPI scanner immediately before, immediately after, 7 days after, and 14 days after MHT. We drew a region of interest (ROI) on the tumor in the MPI image and calculated the average, maximum, and total MPI values and the number of pixels, taking the threshold value for extracting the contour as 40% of the maximum MPI value (pixel value) within the ROI. These parameters in the untreated group were taken as zero. We also measured the relative tumor volume growth (RTVG) defined by (V−V0)/V0, where V0 and V are the tumor volumes immediately before and after MHT, respectively. Results: The average, maximum, and total MPI values decreased up to 7 days after MHT and remained almost constant thereafter in all groups, whereas the number of pixels tended to increase with time. The RTVG values in Groups A and B were significantly lower than those in the control group 3 days or more and 5 days or more after MHT, respectively. The above four parameters were significantly inversely correlated with the RTVG values 5, 7, and 14 days after MHT. Conclusion: MPI can visualize and quantify the intratumoral distribution and temporal change of MNPs before and after MHT. Our results suggest that MPI will be useful for predicting the therapeutic …
Introduction
Hyperthermia is one approach to cancer therapy, based on the fact that cancer cells are more sensitive to heat than normal tissues. The most commonly used heating method in the clinical setting is capacitive heating using a radiofrequency (RF) electric field [1]. The therapeutic effect of hyperthermia depends on the temperature of the targeted region and the heating duration [2]. The cell-killing mechanism of hyperthermia is related to the activation of the immune system, and its efficacy increases dramatically at temperatures above 42 °C to 43 °C [3]. The therapeutic effect of hyperthermia can also be enhanced by combining it with radiotherapy, chemotherapy, and immunotherapy [4]-[6]. Conventional hyperthermia treatments, however, cause damage not only to cancer cells but also to normal tissues. Therefore, it is important to heat the targeted region selectively for safe treatment.
Magnetic hyperthermia treatment (MHT) is one of the methods for hyperthermia treatment and employs the temperature rise of magnetic nanoparticles (MNPs) under an alternating magnetic field (AMF). MNPs generate heat through hysteresis loss and/or relaxational loss when exposed to an AMF [7]. MHT can selectively heat tumor cells without damaging normal tissues [8]. In order to enhance the therapeutic effect of MHT, it is necessary to deliver and accumulate as many MNPs as possible into the tumor tissues [9]. Therefore, the development of functionalized MNPs to improve their delivery, accumulation, and anti-tumor effect has attracted recent attention. In addition, various nanocarriers such as magnetic liposomes loaded with anti-tumor agents, magnetic cationic liposomes, and antibody-conjugated magnetic liposomes have been developed for more effective cancer treatment in animal models [10]-[12].
Magnetic particle imaging (MPI) is an imaging method that has been introduced recently [13]. MPI uses a nonlinear response of MNPs to an external oscillating magnetic field and is capable of imaging the spatial distribution of MNPs such as superparamagnetic iron oxide (SPIO) with high sensitivity and high spatial resolution [13].
The purpose of this study was to investigate the feasibility of visualizing and quantifying the intratumoral distribution and temporal change of MNPs using MPI and to evaluate the usefulness of MPI for predicting the therapeutic effect of MHT.
System for Magnetic Particle Imaging
The details of our MPI system are described in our previous papers [14]-[19]. In brief, a drive magnetic field was generated using an excitation coil (solenoid coil 100 mm in length, 80 mm in inner diameter, and 110 mm in outer diameter). AC power was supplied to the excitation coil by a programmable power supply (EC1000S, NF Co., Yokohama, Japan) and was controlled using a sinusoidal wave generated by a digital function generator (DF1906, NF Co., Yokohama, Japan). The frequency and peak-to-peak strength of the drive magnetic field were taken as 400 Hz and 20 mT, respectively. The signal generated by the MNPs was received by a gradiometer coil (50 mm in length, 35 mm in inner diameter, and 40 mm in outer diameter), and the third-harmonic signal was extracted using a preamplifier (T-AMP03HC, Turtle Industry Co., Ibaraki, Japan) and a lock-in amplifier (LI5640, NF Co., Yokohama, Japan). The output of the lock-in amplifier was converted to digital data by a personal computer connected to a multifunction data acquisition device with a universal serial bus port (USB-6212, National Instruments Co., TX, USA). The sampling time was taken as 10 ms. When measuring signals using the gradiometer coil, a sample was placed 12.5 mm (i.e., one quarter of the coil length) from the center of the gradiometer coil, and the coil, including the sample, was moved such that the center of the sample coincided with the position of a field-free line. The selection magnetic field was generated by two opposing neodymium magnets (Neomax Engineering Co., Gunma, Japan). The field-free line can be generated at the center of the two neodymium magnets.
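The principle behind detecting the third harmonic can be illustrated with an idealized simulation: a Langevin-type magnetization driven by a sinusoidal field is nonlinear and therefore produces odd harmonics, and the third-harmonic amplitude can be read from its spectrum. The sketch below uses illustrative parameters only and is not the scanner's acquisition code.

```python
# A minimal sketch (idealized physics, not the scanner's software) of the
# third-harmonic signal that this MPI system images.
import numpy as np

f_drive = 400.0                  # Hz, drive-field frequency used above
fs = 200_000.0                   # Hz, simulation sampling rate
t = np.arange(0, 0.1, 1 / fs)    # 100 ms window

H = 10e-3 * np.sin(2 * np.pi * f_drive * t)   # drive field, ~20 mT peak-to-peak

def langevin(x):
    """L(x) = coth(x) - 1/x, with a Taylor fallback near x = 0."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-6
    safe = np.where(small, 1.0, x)            # avoid division by zero
    return np.where(small, x / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

M = langevin(H / 3e-3)           # magnetization; 3 mT sets the nonlinearity

spectrum = np.abs(np.fft.rfft(M)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
third = spectrum[np.argmin(np.abs(freqs - 3 * f_drive))]
print(f"third-harmonic amplitude: {third:.4f} (arbitrary units)")
```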
To acquire projection data for image reconstruction, a sample in the receiving coil was automatically rotated around the z-axis over 180° in steps of 5° and translated in the x-direction from −16 mm to 16 mm in steps of 1 mm, using an XYZ-axes rotary stage (HPS80-50X-M5, Sigma Koki Co., Tokyo, Japan) controlled by LabVIEW (National Instruments Co., TX, USA). Data acquisition took about 12 min. Each projection data set was then transformed into 64 bins by linear interpolation. Both the inhomogeneous sensitivity of the receiving coil and feed-through interference were corrected using the method described in [16]. Transverse images were reconstructed from the projection data using the maximum likelihood-expectation maximization (ML-EM) algorithm over 15 iterations, in which the initial concentration of MNPs was assumed to be uniform [14] [17].
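The ML-EM update used for reconstruction can be sketched as follows; the tiny random system matrix stands in for the real projection geometry, and only the 15-iteration multiplicative update with a uniform initial estimate follows the description above.

```python
# A minimal sketch of the ML-EM image reconstruction update (15 iterations,
# uniform initial concentration); A is a toy system matrix, not the real one.
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_proj = 64, 36 * 32        # e.g. 36 angles x 32 detector bins
A = rng.uniform(0, 1, size=(n_proj, n_pix))   # toy system matrix
x_true = rng.uniform(0, 1, size=n_pix)
y = A @ x_true                                 # noise-free projections

x = np.ones(n_pix)                 # uniform initial MNP concentration
sens = A.T @ np.ones(n_proj)       # sensitivity image (column sums of A)
for _ in range(15):                # 15 iterations, as in the paper
    ratio = y / (A @ x + 1e-12)    # measured / forward-projected
    x *= (A.T @ ratio) / sens      # multiplicative ML-EM update

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```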
System for Magnetic Hyperthermia Treatment
The details of our apparatus for MHT are described in our previous papers [7] [15]. In brief, the coil for generating the AMF consists of 19 turns (6.5 cm in diameter and 10 cm in length) of copper pipe (5 mm in diameter) cooled by water to ensure constant temperature and impedance. The coil was connected to a high-frequency power supply (T162-5723BHE, Thamway Co., Ltd., Shizuoka, Japan) and a manual matching unit (T020-5723AHE, Thamway Co., Ltd., Shizuoka, Japan). This system can induce an AMF with a maximum peak amplitude of 3.7 kA/m at an output power of 500 W. The peak amplitude of the AMF generated in the coil can be controlled by changing the output of the power supply.
In this study, Resovist® (FUJIFILM RI Pharma Co., Ltd., Tokyo, Japan) was used as the source of MNPs. Resovist® is an organ-specific contrast agent for magnetic resonance imaging, used especially for the detection and characterization of small focal liver lesions [7] [15]. It consists of MNPs (maghemite, γ-Fe2O3) coated with carboxydextran.
Study Protocol
When the tumor volume had grown to approximately 100 mm³, mice were divided into a control group (n = 10) and three treatment groups (A, B, and C). The mice in the control group were not treated with MHT. The tumors in the mice in Groups A (n = 9), B (n = 8), and C (n = 10) were directly injected under anesthesia with Resovist® (0.2 mL of stock solution or stock solution diluted in PBS) at iron concentrations of 500 mM, 400 mM, and 250 mM, respectively. After injection of Resovist®, each mouse was placed in a plastic holder to undergo MPI and MHT.
Figure 1 illustrates the protocol for data acquisition. As illustrated in Figure 1, MHT was started 20 min after the injection of Resovist®. MHT was performed by applying an AMF at a frequency of 600 kHz and a peak amplitude of 3.5 kA/m [16] for 20 min. During MHT, the temperatures of the tumor and rectum were recorded using two fluorescence-type optical fiber thermometers (AMOTH FL-2000, Anritsu Meter Co., Tokyo, Japan) and two optical fiber temperature probes. One probe was placed in the tumor and the other was inserted 1 cm inside the rectum. Both temperatures were recorded every second until the end of MHT.
The MPI studies were performed four times for each mouse: immediately before MHT (2 min after the injection of Resovist®), immediately after MHT (42 min after the injection of Resovist®), and 7 days and 14 days after MHT (Figure 1). After the MPI studies, X-ray CT images were obtained using a 4-row multi-slice CT scanner (Asteion, Toshiba Medical Systems Co., Tochigi, Japan) with a tube voltage of 120 kV, a tube current of 210 mA, and a slice thickness of 0.5 mm. The MPI image was co-registered to the X-ray CT image using the method described in [15]. It should be noted that the X-ray CT image after the second MPI study was substituted by that obtained after the first MPI study.
All animal experiments described above were approved by the animal ethics committee at Osaka University School of Medicine.
Data and Statistical Analyses
The dimensions of the tumor were measured with a caliper every day, and the tumor volume (V) was calculated from V = (π/6) × Lx × Ly × Lz, where Lx, Ly, and Lz represent the vertical diameter, horizontal diameter, and height in mm, respectively. The relative tumor volume growth (RTVG) was also calculated from (V−V0)/V0, where V0 represents the tumor volume immediately before MHT. In this study, the RTVG value was used as an indicator of the therapeutic effect of MHT.
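A worked example of the two formulas above, using hypothetical caliper readings:

```python
# Worked example of the tumor volume and RTVG formulas; the caliper readings
# are hypothetical, not study data.
import math

def tumor_volume(lx, ly, lz):
    """V = (pi/6) * Lx * Ly * Lz, dimensions in mm."""
    return (math.pi / 6.0) * lx * ly * lz

v0 = tumor_volume(6.0, 5.5, 5.8)     # immediately before MHT
v7 = tumor_volume(7.2, 6.4, 6.1)     # 7 days after MHT

rtvg = (v7 - v0) / v0                # relative tumor volume growth
print(f"V0 = {v0:.1f} mm^3, V = {v7:.1f} mm^3, RTVG = {rtvg:.2f}")
```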
We drew a region of interest (ROI) on the tumor in the MPI image and calculated the average, maximum, and total MPI values, taking the threshold value for extracting the contour of the tumor as 40% of the maximum MPI value within the ROI. In this study, the MPI value was defined as the pixel value of the transverse MPI image reconstructed from the third-harmonic signals. We also calculated the number of pixels within the ROI. It should be noted that the total MPI value is equal to the product of the average MPI value and the number of pixels. The above parameters in the control group were taken as zero.
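The ROI quantification can be sketched as follows; the random array stands in for a reconstructed transverse MPI slice.

```python
# A minimal sketch of the ROI quantification: pixels above 40% of the ROI
# maximum define the tumor contour, from which the four parameters follow.
import numpy as np

rng = np.random.default_rng(3)
roi = rng.uniform(0, 100, size=(32, 32))   # pixel values inside the drawn ROI

mask = roi >= 0.4 * roi.max()              # 40%-of-maximum threshold
avg_mpi = roi[mask].mean()
max_mpi = roi[mask].max()
total_mpi = roi[mask].sum()                # = average MPI value x pixel count
n_pixels = int(mask.sum())

print(avg_mpi, max_mpi, total_mpi, n_pixels)
```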
The temperature, RTVG, average MPI value, maximum MPI value, total MPI value, and number of pixels are expressed as mean ± standard error (SE). Differences in these parameters among groups were analyzed by one-way analysis of variance (ANOVA). Statistical significance was determined by Tukey's multiple comparison test. A P value less than 0.05 was considered statistically significant.
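A minimal sketch of this statistical workflow (one-way ANOVA followed by Tukey's multiple comparison test), using randomly generated placeholder RTVG values rather than the study data:

```python
# A minimal sketch of the ANOVA + Tukey workflow; the RTVG values below are
# hypothetical placeholders, not the measured data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(4)
rtvg = {"A": rng.normal(0.5, 0.2, 9), "B": rng.normal(0.9, 0.2, 8),
        "C": rng.normal(1.4, 0.3, 10), "ctrl": rng.normal(2.0, 0.3, 10)}

F, p = f_oneway(*rtvg.values())            # one-way ANOVA across the 4 groups
print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")

values = np.concatenate(list(rtvg.values()))
groups = np.concatenate([[k] * len(v) for k, v in rtvg.items()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # Tukey's test, P < 0.05
```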
Results
Figure 2 shows the time courses of the temperature in the tumor and rectum during MHT in Groups A (red circles), B (blue circles), and C (green circles). The temperature in the rectum was almost constant at approximately 35 °C five minutes or more after the start of MHT. The temperature in the tumor before MHT was 31 °C to 34 °C. The temperature in the tumor increased after the start of MHT and plateaued at approximately 44 °C, 42 °C, and 41 °C in Groups A, B, and C, respectively. The maximum temperature in the tumor became higher with increasing iron concentration.

Figure 3 shows the RTVG values as a function of days after MHT in Groups A (red circles), B (blue circles), and C (green circles), and the control group (black circles). The RTVG value decreased with increasing iron concentration. The RTVG value in Group A was significantly lower than that in the control group 3 days or more after MHT. The RTVG value in Group B was significantly lower than that in the control group 5 days or more after MHT. The RTVG value in Group C was significantly lower than that in the control group 5 days after MHT.
Figure 4 shows the MPI images in Groups A, B, and C immediately before (upper row), immediately after (second row), 7 days after (third row), and 14 days after MHT (bottom row). Note that the MPI images were superimposed on the X-ray CT images. As shown in Figure 4, the MPI value decreased and the spatial distribution of MNPs changed with time.
Figure 5(a) shows the average MPI values immediately before (red bar), immediately after (blue bar), 7 days after (green bar), and 14 days after MHT (black bar) in Groups A, B, and C. Figures 5(b)-(d) show the cases for the maximum MPI value, total MPI value, and number of pixels, respectively. The asterisks in the graphs indicate statistical significance (P < 0.05). The average, maximum, and total MPI values decreased greatly between immediately before and immediately after MHT and between immediately after and 7 days after MHT, and remained almost constant thereafter in all groups (Figures 5(a)-(c)). In contrast, the number of pixels tended to increase with time in Groups A and C (Figure 5(d)). Although the same tendency was also observed in Group B, there were no combinations with significant differences due to the large scattering of the data.
Figure 6 shows the correlations between the average MPI value immediately before MHT and the RTVG value (left column) and between the average MPI value immediately after MHT and the RTVG value (right column). The upper, middle, and lower rows show the cases when the RTVG values at 5, 7, and 14 days after MHT were used, respectively. The correlation coefficients between the average MPI value immediately before MHT and the RTVG value were −0.696, −0.666, and −0.642 at 5, 7, and 14 days after MHT, respectively, whereas those between the average MPI value immediately after MHT and the RTVG value were −0.658, −0.667, and −0.650 at 5, 7, and 14 days after MHT, respectively.
Figure 7 shows the correlations between the maximum MPI value immediately before MHT and the RTVG value (left column) and between the maximum MPI value immediately after MHT and the RTVG value (right column). As in Figure 6, the upper, middle, and lower rows show the cases when the RTVG values at 5, 7, and 14 days after MHT were used, respectively. The maximum MPI value immediately before MHT was significantly inversely correlated with the RTVG value (r = −0.656, −0.630, and −0.623 at 5, 7, and 14 days after MHT, respectively). The maximum MPI value immediately after MHT was also significantly inversely correlated with the RTVG value (r = −0.662, −0.671, and −0.651 at 5, 7, and 14 days after MHT, respectively).

Figure 3. RTVG values as a function of days after MHT in Groups A (red circles), B (blue circles), and C (green circles) and the control group (black circles). The RTVG was calculated from (V−V0)/V0, where V0 and V represent the tumor volumes immediately before and after MHT, respectively. Note that the mice in the control group were not treated with MHT. Data are represented as mean ± SE. *P < 0.05 between Group A and the control group, #P < 0.05 between Group B and the control group, and +P < 0.05 between Group C and the control group.

Figure 8 shows cases for the total MPI value. The total MPI value immediately before MHT was significantly inversely correlated with the RTVG value (r = −0.702, −0.669, and −0.640 at 5, 7, and 14 days after MHT, respectively). The total MPI value immediately after MHT was also significantly inversely correlated with the RTVG value (r = −0.593, −0.608, and −0.553 at 5, 7, and 14 days after MHT, respectively).
Figure 9 shows the cases for the number of pixels. The number of pixels immediately before MHT was significantly inversely correlated with the RTVG value (r = −0.629, −0.571, and −0.507 at 5, 7, and 14 days after MHT, respectively). The number of pixels immediately after MHT was also significantly inversely correlated with the RTVG value (r = −0.581, −0.517, and −0.392 at 5, 7, and 14 days after MHT, respectively).
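The two quantities behind Figures 6-9 are straightforward to reproduce: the RTVG comes from the tumor volumes (see the Figure 3 caption), and each reported r is a Pearson correlation coefficient between an MPI-derived parameter and the RTVG. The following minimal Python sketch illustrates the computation; the numeric arrays are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

def rtvg(v0, v):
    """Relative tumor volume growth, (V - V0) / V0 (see Figure 3 caption)."""
    return (np.asarray(v) - np.asarray(v0)) / np.asarray(v0)

# Hypothetical per-mouse values: tumor volume immediately before MHT,
# tumor volume on day 7, and the average MPI value immediately before MHT.
v0      = np.array([110.0, 95.0, 130.0, 120.0, 105.0])   # mm^3
v_day7  = np.array([150.0, 210.0, 160.0, 300.0, 170.0])  # mm^3
mpi_avg = np.array([0.82, 0.35, 0.74, 0.12, 0.55])       # arbitrary MPI units

r, p = stats.pearsonr(mpi_avg, rtvg(v0, v_day7))
print(f"r = {r:.3f}, p = {p:.3f}")  # an inverse relation yields r < 0
```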
Discussion
We previously demonstrated that the average MPI value has excellent correlations with the iron concentration of MNPs and the temperature rise of Resovist® solution induced by AMF [15]. We also reported the preliminary results of in vivo studies on the application of MPI to MHT [15]. In this study, we evaluated the feasibility of visualizing and quantifying the intratumoral distribution and temporal change of MNPs in vivo using MPI and investigated the usefulness of MPI for predicting the therapeutic effect of MHT.
In this study, we investigated the therapeutic effect of MHT using tumor-bearing mice injected intratumorally with Resovist® with iron concentrations of 500 mM, 400 mM, and 250 mM. According to Yonezawa et al. [20], cancer cells undergo apoptosis when exposed to heat shock at 42°C to 43°C, whereas hyperthermia at temperatures above 44°C causes necrosis. Mild hyperthermia at approximately 40°C, however, decreases the induction of cytotoxicity and endoplasmic reticulum (ER) stress and rather protects cells against ER stress-induced apoptosis [21]. As shown in Figure 2, the temperature in the rectum was almost constant at approximately 35°C 5 min or more after the start of MHT, whereas the temperature in the tumor rose to approximately 41°C to 44°C during MHT depending on the iron concentration of Resovist®. These results suggest that the tumor was selectively heated by AMF. As shown in Figure 3, significant differences were observed in the RTVG value between Group A and the control group and between Group B and the control group. Although there was a tendency for the RTVG value in Group C to be lower than that in the control group, it did not reach statistical significance except at 5 days after MHT (Figure 3). Thus, it appears that the temperature of the tumor in Group C did not exceed 42°C to 43°C, i.e., the threshold temperature for inducing cell death (Figure 2). These results suggest that the therapeutic effect of MHT depends on the temperature of the tumor and therefore on the iron concentration of MNPs accumulated in the tumor. To enhance the therapeutic effect of MHT, it will be necessary to deliver and accumulate sufficient MNPs to tumors and/or to optimize the parameters of AMF such that the tumors are heated above 42°C to 43°C. As previously described, the frequency and peak amplitude of AMF and the duration of MHT were set at 600 kHz, 3.5 kA/m, and 20 min, respectively, in this study. It might be possible to improve the therapeutic effect in Group C by changing these parameters [22]. The frequency (f) and peak amplitude (H) of AMF, however, should be chosen under the criterion f·H < 5 × 10⁹ A·m⁻¹·s⁻¹ to prevent unwanted damage to the surrounding healthy tissue via eddy currents [23].
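The safety criterion just quoted can be checked directly for the AMF parameters used in this study: f·H = (600 × 10³ s⁻¹) × (3.5 × 10³ A/m) = 2.1 × 10⁹ A·m⁻¹·s⁻¹, which is below the 5 × 10⁹ A·m⁻¹·s⁻¹ limit of [23]. A one-line verification in Python:

```python
f, H = 600e3, 3.5e3          # AMF frequency (Hz) and peak amplitude (A/m) used in this study
limit = 5e9                  # eddy-current safety criterion from [23], in A m^-1 s^-1
print(f * H, f * H < limit)  # 2.1e9 True
```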
Since there is a significant correlation between the average MPI value and the temperature rise generated by MNPs, as we previously reported [15], the temperature in regions with a higher MPI value is considered to become higher than that in regions with a lower MPI value during MHT. If we could determine the MPI value at which the temperature of the tumor rises to 42°C to 43°C, it would be possible to use this MPI value as an indicator for estimating the effectiveness of MHT. In this study, we directly injected MNPs into the tumor as previously described. It is known that the spatial distribution and amount of the MNPs accumulated in the tumor largely depend on the injection speed and/or the actual injected dose of MNPs [24]. Thus, it would be necessary to quantify the amount of MNPs in the tumor accurately after the injection of MNPs in order to estimate the temperature rise in the tumor and thus to predict the therapeutic effect of MHT. Since there is an excellent linear correlation between the average MPI value and the iron concentration of Resovist®, as we previously demonstrated in phantom studies [15], MPI will be useful for accurately quantifying the amount of MNPs in the tumor after MNP injection.
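Because the relationship between the average MPI value and the iron concentration of Resovist® is linear [15], converting an in vivo MPI value into an estimated iron concentration reduces to fitting and inverting a line obtained from phantom measurements. A minimal sketch under that assumption, with hypothetical phantom values rather than the published calibration data:

```python
import numpy as np

# Hypothetical phantom calibration: known iron concentrations and the
# average MPI values they produced (placeholders, not the data of [15]).
conc = np.array([50.0, 100.0, 250.0, 400.0, 500.0])  # mM
mpi  = np.array([0.11, 0.21, 0.52, 0.83, 1.04])      # arbitrary MPI units

slope, intercept = np.polyfit(conc, mpi, 1)          # fit MPI = a * conc + b

def estimate_concentration(mpi_value: float) -> float:
    """Invert the linear calibration to estimate iron concentration (mM)."""
    return (mpi_value - intercept) / slope

print(f"{estimate_concentration(0.65):.0f} mM")      # ~312 mM for these placeholders
```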
In this study, we investigated the temporal change of the MNPs accumulated in the tumor by calculating the average, maximum, and total MPI values and the number of pixels within the ROI drawn on the tumor in the MPI image immediately before, immediately after, 7 days after, and 14 days after MHT (Figure 5). It appears that the change in the average MPI value corresponds to that in the average amount of MNPs per voxel, i.e., the average concentration of MNPs, and the change in the total MPI value corresponds to that in the total amount of MNPs in the selected slice of the tumor, whereas the change in the number of pixels corresponds to that in the distributed area of MNPs. As shown in Figure 5, the average, maximum, and total MPI values decreased greatly between immediately before and immediately after MHT and between immediately after and 7 days after MHT, and did not change largely thereafter. In contrast, the number of pixels tended to increase with time. These findings suggest that the intratumorally injected MNPs were distributed locally near the injection site immediately after the injection of MNPs and dispersed within the tumor during MHT, and that the MNPs remained in the tumor thereafter. Once MNPs are injected, MPI can be performed repeatedly until the MNPs disappear. This knowledge of the temporal change of the concentration and spatial distribution of MNPs in the tumor obtained by repeated MPI studies will be useful for the treatment planning of MHT. In particular, the fact that MNPs remain in the tumor even 7 and 14 days after MHT will be useful when considering the repeated application of MHT to enhance its therapeutic efficacy [25].
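The four per-slice parameters analyzed here are simple reductions over the pixels inside the tumor ROI. The sketch below shows how they could be computed from a 2-D MPI slice, assuming the ROI is defined by the 40%-of-maximum contour threshold described later in this Discussion; it is an illustration, not the software actually used.

```python
import numpy as np

def roi_parameters(mpi_slice, threshold_fraction=0.40):
    """Average, maximum, and total MPI values and pixel count inside an ROI
    defined as the pixels at or above threshold_fraction * max (see text)."""
    mpi_slice = np.asarray(mpi_slice, dtype=float)
    roi = mpi_slice >= threshold_fraction * mpi_slice.max()
    values = mpi_slice[roi]
    return {
        "average": values.mean(),
        "maximum": values.max(),
        "total": values.sum(),
        "n_pixels": int(roi.sum()),
    }
```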
Kettering et al. [25] investigated the biodistribution of intratumorally injected MNPs in mice, monitoring the MNPs before and after therapy over 7 days using multi-channel magnetorelaxometry (MRX). Furthermore, they investigated the biodistribution of MNPs in the internal organs and tumors of sacrificed animals using single-channel MRX, and reported that there was no distinct change in the total MNP amount during long-term monitoring and that most of the MNPs remained in the tumors; only a few MNPs were detected in the liver and spleen, and less than 1% of the injected MNPs were excreted. Their findings, however, are limited to the whole-body or whole-organ amount of MNPs, because the local concentration and spatial distribution of MNPs in the tumor cannot be measured in vivo using multi-channel or single-channel MRX [25]. The temperature rise generated by MNPs in the steady state is proportional to the concentration of MNPs [23]. When we design an optimal MHT treatment plan to prevent insufficient heating of the targeted region and overheating of the healthy tissue, accurate knowledge of the local concentration of MNPs accumulated in the targeted region appears to be more important than knowledge of the total amount of MNPs when the spatial distribution of MNPs is inhomogeneous.
As previously described, we started MHT 20 min after the injection of MNPs, whereas Kettering et al. [25] did so 24 hours after the injection. Giustini et al. [26] reported that the intracellular uptake and aggregation of MNPs varied depending on the duration after the injection of MNPs, which may exert an influence on the heat deposition in the tumor and tumor cytotoxicity when exposed to AMF. Thus, the start time of MHT might affect the spatial distribution and temporal change of MNPs in the tumor and the resulting therapeutic effect of MHT. Further studies on these aspects are currently in progress.
As shown in Figures 6-9, the average, maximum, and total MPI values and the number of pixels immediately before and immediately after MHT were significantly inversely correlated with the RTVG values 5, 7, and 14 days after MHT, suggesting that these parameters are useful for predicting the RTVG 5 to 14 days after MHT. However, there was a tendency for the correlation coefficients between the parameters immediately before MHT and the RTVG values to be higher than those obtained when using the parameters immediately after MHT, except for the cases of the maximum MPI value, suggesting that the parameters immediately before MHT may be more effective for predicting the therapeutic effect of MHT than those immediately after MHT. Furthermore, the correlation coefficients between the number of pixels immediately before or after MHT and the RTVG value tended to be lower than those obtained when using the average, maximum, and total MPI values, suggesting that the number of pixels immediately before or after MHT is less effective for predicting the therapeutic effect of MHT than the other parameters.
When we analyzed the statistical differences in the number of pixels among groups (Figure 5(d)), the number of combinations that reached statistical significance was lower than for other parameters such as the average and maximum MPI values (Figure 5(a) and Figure 5(b)). This was also the case when we analyzed the statistical significance of the correlation between the number of pixels and the RTVG value (Figure 9), as described above. These results appear to be mainly due to the fact that the scatter of the data for this parameter is larger than that for the other parameters. As previously described, when we drew an ROI on the tumor in the MPI image, we took the threshold value for extracting the contour of the tumor as 40% of the maximum MPI value within the ROI. The threshold value adopted in this study was determined visually from inspection of the MPI images with various threshold values, which were superimposed on the X-ray CT images. Although this threshold value appears to be appropriate in our experience, the number of pixels might be more susceptible to the selection of the threshold value or the method for determining an ROI than other parameters. Thus, further studies on the optimization of the threshold value or the method for determining an ROI may be necessary.
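One way to examine the concern raised above is to sweep the contour threshold and observe how strongly the pixel count responds. A self-contained sketch on a synthetic slice (a smooth random blob standing in for an MPI image):

```python
import numpy as np

rng = np.random.default_rng(0)
window = np.hanning(64)
mpi_slice = rng.random((64, 64)) * window[:, None] * window[None, :]

# Sweep the contour threshold and report the resulting pixel count.
for frac in (0.30, 0.35, 0.40, 0.45, 0.50):
    roi = mpi_slice >= frac * mpi_slice.max()
    print(f"threshold = {frac:.2f} * max -> {int(roi.sum())} pixels")
```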
A limitation of this study is that the MPI value was obtained from a single slice of the MPI image with the maximum signal intensity. Analysis using a single slice of the MPI image limits the accurate evaluation of the spatial distribution of MNPs in the whole tumor. For more detailed analysis, it will be necessary to acquire three-dimensional (multi-slice) data and to evaluate the three-dimensional distribution and accumulation of MNPs from these data. If this can be realized in the future, we expect that our MPI system can be used for more precise diagnosis and prediction of the therapeutic effect of MHT and can be applied to theranostics, in which diagnosis and therapy are integrated in a single platform. These studies are also currently in progress.
Other methods for imaging MNPs are magnetic resonance imaging (MRI) and micro-CT imaging. When we attempted to image MNPs using MRI with a conventional transverse relaxation time (T2*)-weighted imaging sequence, it was almost impossible due to large susceptibility-induced MR signal loss and image distortions in the regions near the MNPs, especially for the higher concentrations of MNPs, as is the case in MHT. Recently, Dähring et al. [27] proposed the use of micro-CT for determining the MNP distributions within tumors and reported that the information about the MNP distribution obtained by micro-CT permitted individualized MHT and improved the overall therapeutic efficacy. Although the use of micro-CT also appears to be promising and useful for establishing effective MHT, further studies, especially on the accuracy and reproducibility of quantifying the amount of MNPs, might be necessary for establishing the usefulness of the method.
Conclusion
We could visualize intratumorally injected MNPs in mice using our MPI system and could quantitatively evaluate the temporal change of the spatial distribution of MNPs in tumors. We found that the intratumorally injected MNPs dispersed within the tumor during MHT and then remained in the tumor even 7 and 14 days after MHT. In addition, we found that the MPI values immediately before or immediately after MHT were significantly inversely correlated with the RTVG values. These results suggest that MPI allows us to quantitatively evaluate the temporal change of the spatial distribution of MNPs in vivo in a visible manner and will be useful for predicting the therapeutic effect of MHT and for MHT treatment planning.
Figure 2. Time courses of the temperature in the tumor and rectum of the mice in Groups A (red circles), B (blue circles), and C (green circles) during MHT for 20 min. The tumors in Groups A, B, and C were directly injected with Resovist® with iron concentrations of 500 mM, 400 mM, and 250 mM, respectively. Data are represented by mean ± standard error (SE).
Figure 3. Relative tumor volume growth (RTVG) as a function of days after MHT in Groups A (red circles), B (blue circles), and C (green circles) and the control group (black circles). The RTVG was calculated from (V − V0)/V0, where V0 and V represent the tumor volumes immediately before and after MHT, respectively. Note that the mice in the control group were not treated with MHT. Data are represented by mean ± SE. *P < 0.05 between Group A and the control group, #P < 0.05 between Group B and the control group, and +P < 0.05 between Group C and the control group.
Figure 4. Examples of the MPI images in Groups A, B, and C immediately before (upper row), immediately after (second row), 7 days after (third row), and 14 days after MHT (bottom row). Note that the MPI images were superimposed on the X-ray CT images. Scale bar = 10 mm.
Figure 5. Average MPI values (a), maximum MPI values (b), total MPI values (c), and the numbers of pixels within the region of interest (ROI) drawn on the tumor in the MPI image (d) immediately before, immediately after, 7 days after, and 14 days after MHT in Groups A, B, and C. Bars and error bars represent mean and SE, respectively. *P < 0.05.
Figure 6. Correlation between the average MPI and RTVG values. The left and right columns show cases when the average MPI values immediately before and immediately after MHT were used, respectively. The upper, middle, and lower rows show cases when the RTVG values 5, 7, and 14 days after MHT were used, respectively. Note that the average MPI values in the control group were taken as zero. The red, blue, green, and black circles represent data in Groups A, B, and C and the control group, respectively.
Figure 7. Correlation between the maximum MPI and RTVG values. The left and right columns show cases when the maximum MPI values immediately before and immediately after MHT were used, respectively. The upper, middle, and lower rows show cases when the RTVG values 5, 7, and 14 days after MHT were used, respectively. Note that the maximum MPI values in the control group were taken as zero. The red, blue, green, and black circles represent data in Groups A, B, and C and the control group, respectively.
Figure 8. Correlation between the total MPI and RTVG values. The left and right columns show cases when the total MPI values immediately before and immediately after MHT were used, respectively. The upper, middle, and lower rows show cases when the RTVG values 5, 7, and 14 days after MHT were used, respectively. Note that the total MPI values in the control group were taken as zero. The red, blue, green, and black circles represent data in Groups A, B, and C and the control group, respectively.
Figure 9. Correlation between the number of pixels within the ROI drawn on the tumor and the RTVG value. The left and right columns show cases when the numbers of pixels immediately before and immediately after MHT were used, respectively. The upper, middle, and lower rows show cases when the RTVG values 5, 7, and 14 days after MHT were used, respectively. Note that the number of pixels in the control group was taken as zero. The red, blue, green, and black circles represent data in Groups A, B, and C and the control group, respectively. | 2016-09-28T00:20:06.222Z | 2016-03-17T00:00:00.000 | {
"year": 2016,
"sha1": "5c4fa01f1cd2f8047d2b0a3023a5c418bcdedd9b",
"oa_license": "CCBY",
"oa_url": "https://www.scirp.org/journal/PaperDownload.aspx?paperID=64658",
"oa_status": "GOLD",
"pdf_src": "Grobid",
"pdf_hash": "5c4fa01f1cd2f8047d2b0a3023a5c418bcdedd9b",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
269632666 | pes2o/s2orc | v3-fos-license | The role of sociocultural factors in rare medical conditions: The first case report of pseudocyesis in an Ethiopian woman with major depressive disorder
Key Clinical Message Our case report highlights pseudocyesis, a rare medical condition in a 40‐year‐old woman with comorbid major depressive disorder. Cultural influences on experiences, and the need for understanding sociocultural factors in mental health, are emphasized in low‐resource settings.
which led to her father cursing her. Her first two children died shortly after birth, which she believes was a result of the curse. Due to this experience, she used to deny and pretend not to be pregnant when she actually was. However, she has since given birth to three more children who are alive and healthy. While she still feels sadness about the loss of her first two children, she has a supportive and healthy relationship with her husband.
She is a known patient with a diagnosis of major depressive disorder (MDD) made 5 months prior to her current presentation. She was diagnosed after she went to a clinic in Adama with complaints of tearfulness, depressed mood, loss of interest in almost all activities, difficulty focusing, and a passive death wish, which had lasted for a few months. She was given sertraline 50 mg every morning, which was progressively escalated to 100 mg every morning, and she reported some improvement in her depressed mood, sleep difficulty, focus, and death wish.
Currently, she presented with a main complaint of abdominal distension for 4 years. She reported absence of menses and occasional nausea and vomiting, mostly occurring in the morning, during the first 3-5 months after the onset of symptoms 4 years back. These two symptoms were accompanied by abdominal swelling that was initially small in size and then progressively increased to attain its current size (see Figures 1 and 2). She also reported significant weight gain. Five months after the onset of symptoms, she began to feel a "fetal kick" on a daily basis and started to have breast engorgement.
She visited more than 18 healthcare facilities (mostly covering her medical expenses with the support of her neighbors and relatives) wondering whether she was really pregnant or not. Although all of her tests were negative and she was consistently informed that she was not pregnant, she was not convinced. She then visited a traditional birth attendant who described the condition as "የአጥንት ልጅ" (in the local language, literally "the baby of a bone"). "The baby of a bone" is a cultural explanation for a pregnant woman having vaginal bleeding on multiple occasions while being believed to be able subsequently to give birth after a duration longer than expected for other pregnancies. Hence, the society explains that the baby has no blood and is made only of bones.
On mental state exam, she had a dysphoric, constricted affect. She was preoccupied with the idea of being pregnant and had no perceptual disturbance. Pertinent physical findings included that she looked older than her stated age. She had symmetrically enlarged breasts, and her abdomen was grossly distended and moved with respiration. She had a lordotic posture. There was moderate abdominal tenderness over the right lower quadrant and no palpable mass. There were normoactive bowel sounds with no appreciable fetal heartbeat, soft to firm consistency, and no shifting dullness or fluid thrill.
| METHODS
Investigation results of complete blood count, renal, liver, and thyroid function, fasting serum glucose, and lipid profile tests were in the normal range. Urine and serum HCG were negative, but serum follicle stimulating hormone (FSH) was elevated to three times the normal lab range. In addition, abdominal ultrasound was unremarkable and did not show the presence of a fetus.
Based on the above history, physical and mental state exams, and her investigation results, she was diagnosed with other specified somatic symptom and related disorder (pseudocyesis) with comorbid MDD. Delusional disorder (somatic type), i.e., delusion of pregnancy with comorbid depression, was also considered, for which olanzapine (5 mg/day, then 10 mg/day) was added. Sertraline was continued at the same dose. She took the olanzapine for 4 months. Supportive psychotherapy was initiated at SPHMMC by her psychiatrist, who is one of the authors of this report.
| OUTCOME AND FOLLOW-UP
She was followed for a total of 6 months and showed slight improvement, mostly in her depressive symptoms and her coping with the situation, but she was then lost to follow-up for unexplained reasons. The aim of the present case report was to demonstrate the impact of sociocultural factors, economic status, and poor family support on the development of pseudocyesis.
The case highlighted the intricate sociocultural factors that led to the development of pseudocyesis in an adult female with MDD. Studies indicate that most females with pseudocyesis might have mild-to-moderate affective disorders, including MDD [3,5,14]. It has been suggested that depression plays an important role in the etiology of pseudocyesis, as both conditions share some similar pathophysiological mechanisms [4], which might explain the present case.
To the best of our knowledge, this is the first recorded case report of pseudocyesis in Ethiopia.
The current patient appears to be grappling with psychological and cultural factors that have led her to a strong conviction in her father's "curse," which has persistently stressed her. Her low level of education and unemployment could also be contributing factors. Moreover, low socioeconomic status and poor family support are important factors in the development of pseudocyesis [15]. Furthermore, pseudocyesis is commonly seen in women who are facing the threat of menopause or who want to preserve their esteem in front of their husbands or families [6]. Women with a history of abortion, those who have lost one or more children to death, or those who excessively fear losing their children in the future are also at higher risk for pseudocyesis [5,16,17]. These factors provide some insight into the psychological state of our patient.
Consistent with previous reports, the current patient exhibited symptoms such as amenorrhea, nausea, vomiting, abdominal distention (see Figures 1 and 2), enlargement of the breasts (see Figure 1), areolar pigmentation, lordotic posture on walking, and weight gain [2,4]. However, laboratory tests (i.e., urine and serum HCG) were negative, abdominal ultrasound did not support the presence of pregnancy, and the diagnosis of pseudocyesis was made. As all or most patients have difficulty accepting the objective laboratory results/reality [7], our patient visited dozens of clinics wishing to hear the news that "she is pregnant!" We provided psycho-education and support to help the patient recognize and accept her condition, but unfortunately, she was lost to follow-up after a few months.
Our patient was a 40-year-old woman presenting with galactorrhea. It is noteworthy that a diagnosis of pseudocyesis without galactorrhea may be more common in older than younger women [18]. Pseudocyesis commonly occurs in females in the reproductive age group of 20-44 years [17], mainly in those in their late 30s. However, there are some reports that the problem has been observed in female teenagers [18][19][20][21][22][23] and, surprisingly, in preteen [24] and 6-year-old girls [14,25]. The condition has also been observed in males [26][27][28][29], although most of the cases in males would not be categorized under pseudocyesis based on the latest standards [5,7,17,19]. Psychological and/or social factors may impact the hypothalamic-pituitary-ovarian axis, and it has been suggested that endocrine traits may serve as a linking pathway between pseudocyesis pathophysiology and MDD [3]. Stress or an excessive desire for pregnancy could stimulate the release of pituitary hormones, such as gonadotropin-releasing hormone (GnRH) [16]. The increased level of FSH in the present patient or other patients might be because of reduced steroid-dependent negative feedback mechanisms on GnRH [3,4]. Importantly, dysregulation of the monoaminergic pathway in the central nervous system causing a deficit in dopamine and noradrenaline, and increased autonomic activity because of noradrenaline turnover, may play a key role (see [3] for review).
Our patient was not correctly diagnosed despite visiting several healthcare facilities; creating awareness among clinicians might contribute to early detection and better outcomes. We believe that our report sheds light on the role of socioeconomic factors in the development of pseudocyesis and highlights the need for interventions to address these factors. We suggest a collaborative approach between obstetricians/gynecologists and psychiatrists/mental health professionals in the diagnosis and management of pseudocyesis with or without other comorbid disorders. Furthermore, the present case has important implications for subsequent quantitative and qualitative studies regarding the development of pseudocyesis and for the development of interventions and policies aimed at reducing the incidence of pseudocyesis, particularly in low-income and marginalized populations.
Overall, this case study underscores the importance of considering the sociocultural context in the diagnosis and treatment of rare medical conditions and highlights the potential benefits of a culturally sensitive approach to mental health care. Considering our observation, we suggest that future studies should focus on the impact of sociocultural factors, as well as other comorbid psychiatric conditions, and on the development of effective preventive and intervention strategies for pseudocyesis.
FIGURE 1. The abdominal and breast picture of the patient, standing position.
FIGURE 2. The abdominal picture of the patient, supine position. | 2024-05-10T05:08:36.833Z | 2024-05-01T00:00:00.000 | {
"year": 2024,
"sha1": "ef0e51589ecbeeea6889e34f74f899ef02e79120",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ccr3.8888",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ef0e51589ecbeeea6889e34f74f899ef02e79120",
"s2fieldsofstudy": [
"Medicine",
"Psychology",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234622223 | pes2o/s2orc | v3-fos-license | Feto Maternal Outcome of Teenage Pregnancy
Original Research Article Objectives: In this study, our main aim is to evaluate the feto-maternal outcome of teenage pregnancy. Methods: This cross-sectional study was done at a tertiary medical college and hospital, Dhaka, over 1 year from August 2018 to July 2019. 100 teenage pregnant women between 18 and 20 years were taken as the case group for the study. The control group comprised 108 pregnant women of 21 to 35 years of age, without any preexisting co-morbidities or history of previous caesarean section. Results: In the study, 23% had vaginal deliveries, whereas 77% of cases had caesarean deliveries. Failed induction was common in the case and control groups (14.5% and 9.25%, respectively). Also, in the case group, 12.8% of neonates suffered from fetal distress, whereas in the control group it was 9.2%. Conclusion: From our study we can say that, in order to improve teenage health, periodic information, education, community activities, and ANC camps should be held at primary health care centers. Further study is needed for better outcomes.
INTRODUCTION
In recent decades, adolescent pregnancy has become an important health issue in a great number of countries, both developed and developing [1]. WHO defines teenage pregnancy as any pregnancy in a girl who is 10 to 19 years of age, age being defined as her age at the time of delivery [2]. The adolescent pregnancy rate is on the rise, emerging as a serious problem all over the world, and more so in developing countries like India. It constitutes 11 percent of all births worldwide and 23 percent of the overall disease burden due to pregnancy and childbirth, owing to the improper prenatal care needed for monitoring maternal and fetal development.
The incidence of teenage pregnancy varies dramatically between different countries, with 90 percent contributed by developing countries [3]. Nevertheless, teenage pregnancy and delivery rates are significantly lower in developed countries compared to developing countries [4].
The incidence of teenage pregnancy in India is 2 women out of every 1000 pregnancies [5]. Teenage pregnancy is associated with a series of maternal and fetal complications. Anaemia, pre-eclampsia, eclampsia, preterm delivery, instrumental delivery, and an increased LSCS rate due to cephalopelvic disproportion and fetal distress are maternal complications strongly associated with teenage pregnancy.
In this study, our main objective is to evaluate the feto-maternal outcome of teenage pregnancy in Bangladesh.
General objective
In this study, our main goal is to assess the feto-maternal outcome of teenage pregnancy in a Peripheral Military Hospital.
RESULTS
Table 1 shows the total number of deliveries, where the total number of babies delivered was 892, of which teenage pregnancies were 100 (11.21%). The following table is given below in detail: Figure 1 shows the educational status of the study group, where in the case group most of the patients completed only their SSC and no one completed their masters, whereas in the control group 7 people completed their masters. The following figure is given below in detail:
Fig-1: Educational status of the study group.
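As a quick arithmetic check on the proportion reported in Table 1, the teenage-pregnancy share is simply the ratio of teenage deliveries to total deliveries:

```python
print(f"{100 / 892:.2%}")  # 11.21%, matching Table 1
```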
Figure 2 shows the mode of delivery in the case group, where 23% had vaginal deliveries whereas 77% of cases had caesarean deliveries. The following figure is given below in detail: Table 2 shows the complications associated with teenage pregnancy, where failed induction was common in the case and control groups (14.5% and 9.25%, respectively). The following table is given below in detail: Figure 3 shows the indications for caesarean delivery in teenage pregnancy, where failed induction led with 31 cases, and eclampsia and fetal distress contributed 24 and 28 cases, respectively. The following figure is given below in detail:
DISCUSSION
In our study, we found that the adverse outcomes of teenage pregnancy arise not only from the associated physical and medical causes but also depend on individual, family, social, cultural, and economic factors, besides lack of access to health care resources, contraception, and education.
The increased incidence of LSCS in teenage pregnancies, the medical complications associated with it such as anemia and PIH, and the fetal complications of prematurity, IUGR, and low birth weight are preventable factors [6].
One study reported that teenage pregnancy exposes mothers to many health-related complications and newborns to poor birth outcomes. Adverse outcomes of teenage pregnancy arise not only from the associated physical and medical causes but also depend on individual, family, social, cultural, and economic factors, besides lack of access to health care, contraception, resources, and education [7]. In our study, we noted that in the case group most of the patients completed only their SSC and no one completed their masters, whereas in the control group 7 people completed their masters. This implies that teenage mothers are less careful about their pregnancy, probably secondary to lack of awareness, maturity, and other social factors.
One study reported that the increased incidence of LSCS in teenage pregnancies, the medical complications associated with it such as anemia and PIH, and the fetal complications of prematurity, IUGR, and low birth weight are preventable factors [8]. In our study, we found that 23% had vaginal deliveries whereas 77% had caesarean deliveries, which is similar to other studies.
Teenage pregnancy remains a major health issue in our country due to prevailing social dogmas, age-old traditions, and poor access to health care in remote rural areas; illiteracy leads to lack of knowledge about family planning and puts adolescents at risk for early pregnancy. Education plays a major role in decreasing the incidence of teenage pregnancy and its attendant health risks and psychological issues.
The rate of caesarean delivery was high, the predominant indications being cephalopelvic disproportion, fetal distress, and associated medical disorders such as pre-eclampsia and eclampsia. Vaginal delivery was seen in cases with low-birth-weight babies secondary to growth restriction or prematurity [9]. In our study, we found that 12.8% of neonates in the case group suffered from fetal distress, whereas in the control group it was 9.2%.
Among teenage pregnancies, failed induction was the leading indication for caesarean delivery with 31 cases, with eclampsia and fetal distress contributing 24 and 28 cases, respectively. A significant number of neonates born to teenage mothers had low birth weight, probably due to malnutrition and medical diseases associated with pregnancy leading to intrauterine growth restriction and prematurity [9].
CONCLUSION
From our study, we can say that, in order to improve teenage health, periodic information, education, community activities, and ANC camps should be held at primary health care centers. Further study is needed for better outcomes.
"year": 2020,
"sha1": "80297fb8e38fdfd5f9e8a4296512350dcb1cc518",
"oa_license": null,
"oa_url": "https://doi.org/10.36347/sjams.2020.v08i10.007",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "80297fb8e38fdfd5f9e8a4296512350dcb1cc518",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235464239 | pes2o/s2orc | v3-fos-license | Developing resilience and promoting positive mental health strategies in university students
Aims Suicide is one of the leading causes of death in young people living in Australia, accounting for 7.3% of all deaths among individuals aged 15–19 years. Historically, high levels of suicide have been recorded in Australian university students. This project aims to develop and test a massive open online course (MOOC) for university students, underpinned by literature and strength-based suicide prevention principles, building resilience and awareness of mental health promoting activities. Method A scoping review of the literature was undertaken to explore the effectiveness of current suicide prevention programs for undergraduate university students, and the effective elements contributing to the success of these programs. Six electronic databases were searched to identify relevant literature. Further, mental health consumers and university students were involved in co-producing the content of the six modules of the 'Talk-to-me' MOOC. Result Nine articles were included in the review, discussing four types of programs: gatekeeping, education, promotional messaging, and online consultation. It was apparent from this review that there is a significant dearth of interventions and programs currently available to reduce the risk of suicide among undergraduate students, with many of the programs having limited efficacy. Despite this, a number of program elements were identified as beneficial to preventing suicide among post-secondary students, including upskilling of students and improving resilience and self-management. These findings and further consultation with mental health consumers and undergraduate university students underpinned the development of the content of the 'Talk-to-me' MOOC, which is tailored to meet the needs of university students. The MOOC contains six modules: mental fitness; strategies to increase mental fitness; self-harm; suicidal behaviour in young adults; interventions for suicidal behaviour; and gatekeeper interventions. Two case study scenarios depicting mental health challenges commonly experienced by young adults and portraying appropriate crisis communication skills were developed and filmed to complement the six 'Talk-to-me' modules. Conclusion Overall, studies included in the review provide evidence to suggest that preventative programs incorporating an educational component may be effective for use in the MOOC to improve help-seeking behaviours among post-secondary education students. Findings from this review have underpinned the development of the 'Talk-to-Me' MOOC, which was launched in March 2020. To date this MOOC has enrolled over 45,000 participants from over 150 countries, with the average age of users being 24 years. Collectively, this line of work highlights that MOOCs are an effective means of mental health promotion to young adults.
Aims. The aim of this medical education case report was to outline the development and outcomes of a reverse-mentorship project that enabled cross-generational collaborative learning. The project took the shape of a philosophy of psychiatry journal club facilitated by a psychiatry core trainee in west London, UK. Background. Reverse-mentorship reverses the traditional roles of mentor and mentee. It is an increasingly fashionable concept in medical education. The junior mentors the senior clinician. The implicit learning outcomes include provision of a two-way learning process, development of mentoring skills for the more junior clinician, and collaboration that builds social capital within the workplace. Reverse-mentorship is effective when the junior mentor is recognised for their expertise in a particular area. In this instance, the junior mentor has a special interest in the philosophy of psychiatry. Method. Junior mentor and senior mentees formed a monthly journal club. The club tracked arguments from anti- and biological psychiatry on the meaning of mental illness. The debate offered insight into a semantic analysis of mental illness and a deeper conceptual understanding of medicine. The learning material derived from the core concepts of philosophy and mental health (Fulford et al.). The role of the mentor was to facilitate group discussion around arguments from relevant papers. A survey, adapted from a recent reverse-mentorship review article, measured the quality of educational experience for mentor and mentees. Result. Overall, mentees (senior clinicians) agreed that the mentor (junior clinician) displayed attributes and behaviours for effective mentoring across most domains, including enthusiasm, effective communication, respect for mentee expertise and active listening to the needs of the mentee. The mentor was particularly impressed with the mentees' openness to learn new concepts and respect shown. General reflections on the experience of reverse-mentorship were positive overall. A thematic review highlighted particular aspects, including: a good way to learn a new skill and a great opportunity to develop professional skills of mentoring. Conclusion. The importance of mentoring in medical education is well established. Reverse-mentorship is a new concept that looks to harness the unique qualities of millennials, including their aptitudes for empowerment, innovation and collaboration. This medical education case report shows that an enthusiastic junior clinician can successfully pilot an educational-mentoring scheme aimed at senior clinicians. To make more explicit the intuitive benefits of reverse-mentorship, longitudinal reviews are needed. However, this case report contributes important insights into this burgeoning field of medical education.
Burnout and Interventions for Healthcare workers to cope during COVID-19
Harnek Kailey* Maidstone and Tunbridge Wells NHS Trust *Corresponding author. Aims. Healthcare workers are exposed to both physical and mental demands in the hospital environment, demands that have recently been intensified by the overstretching of staff and resources during the current COVID-19 pandemic. Despite healthcare workers banding together, physician burnout is more prevalent than ever before due to emotional, physical and mental exhaustion. Firstly, this poster aims to expose the prevalence of burnout among healthcare workers during the COVID-19 pandemic. Secondly, it aims to highlight the interventions and strategies that help minimise burnout among healthcare workers.
Method. I will focus on reviewing clinical trials with a particular focus on healthcare workers affected by burnout within the COVID-19 pandemic timeframe, narrowing my search to 20 trials within the past 12 months using the following restricted search criteria: 'burnout', 'covid-19' and 'healthcare'. Furthermore, I comment on strategies and interventions to minimise burnout by stretching my criteria to interventions trialled within the last 24 months, owing to the limited data and trial evidence for burnout strategies within the last 12 months of the COVID-19 pandemic. Result. Burnout is on the rise among healthcare workers across the globe; 47% of healthcare staff worldwide express an element of burnout, with growing concerns that healthcare staff will develop long-term mental health implications as a result of work-related stress. At present, one third of frontline staff are experiencing depression and distress, which must be addressed. Reviewing recent trials has highlighted a number of successful strategies for approaching burnout, including app technology, talking therapy, staff support and internet-based resources. App-related technology and web resources have shown to be particularly beneficial among recent trials, with limited participation and engagement for support groups/talking therapy. Conclusion. A significant rise in physician burnout and distress during COVID-19 has been noted in various trials. Interventions specifically related to burnout within COVID-19 are limited due to a low yield of completed trials; however, a couple of trials have found an improvement in COVID-19-related stress among healthcare workers using app-related technology. Internet-based intervention is cheap, widely accessible and a non-judgemental method for seeking help, especially within a profession where burnout is heavily stigmatized.
Motivators and deterrents in choosing a career in psychiatry; making the most of psychiatry school events Aims. In response to the Royal College of Psychiatrists' recruitment strategy, a bi-annual Psychiatry School event was set up in the North West of England. The Psychiatry School aims to inspire medical students and foundation doctors to choose a career in Psychiatry with two days of workshops on different subspecialties and various aspects of the career pathway. A previous service evaluation has shown attending the event improves attitudes towards psychiatry.
The aim is to assess whether the improvement in attitudes to psychiatry has been sustained and to gain a clearer understanding of the motivators and deterrents in choosing a career in psychiatry, to better inform future events.
Method. An online questionnaire about positive and negative aspects of psychiatry was sent to attendees of the Autumn North West Psychiatry School 2020 before and after the event.
Result. The total number of completed questionnaires was 62.
53.6% of respondents were considering applying for core psychiatry training prior to the event, and this rose to 85.3% in the post-event questionnaire.
Motivators for a career in psychiatry prior to the event included gaining a better holistic understanding of patients and the wide range of sub-specialities. There was a common theme of interest in research opportunities. The dynamic patient-doctor relationship, exploring issues in depth, and treating diverse populations were key motivators.
It is encouraging to note that 100% of responders felt their positive views on psychiatry were validated.
The majority of deterrents were disregarded, and attendees felt positive about choosing a career in psychiatry. Conclusion. Following the event, the only negative view on a career in psychiatry was concern about the potential impact on one's own mental health. This is an important issue (highlighted in the Royal College of Psychiatrists' Position Statement) that deserves consideration at future events, to highlight potential effects on psychiatrists' wellbeing and how these can be avoided or mitigated.
The wide variety of sub-specialities and opportunities for research were key areas that motivated attendees and we will continue to deliver engaging workshops around these themes.
Digital frontiers in international psychiatric recruitment: the lessons of the Northwest School of Psychiatry careers event November 2020 | 2021-06-18T13:18:07.371Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "9d4268a661b35b80819b513e74ead5e0333ce4bc",
"oa_license": "CCBYNCND",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/640633DD89B92EA226C929F0A8EEBEBA/S2056472421004038a.pdf/div-class-title-developing-resilience-and-promoting-positive-mental-health-strategies-in-university-students-div.pdf",
"oa_status": "GOLD",
"pdf_src": "Cambridge",
"pdf_hash": "9d4268a661b35b80819b513e74ead5e0333ce4bc",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": []
} |
259269267 | pes2o/s2orc | v3-fos-license | High Frequency Ultrasound of Basal Cell Carcinomas: Ultrasonographic Features and Histological Subtypes, a Retrospective Study of 100 Tumors
(1) Background: 22 MHz high frequency ultrasound (HFUS) is a non-invasive imaging technique that gives information on the depth, length, volume, and shape of skin tumors. (2) Methods: We reviewed the clinical, ultrasound, and histological records of 54 patients with 100 histologically confirmed basal cell carcinoma (BCC) tumors with the use of HFUS. (3) Results: Most infiltrative tumors (n = 16/21, 76.2%) were irregular shaped, followed by five (23.8%) being round shaped; most superficial tumors (n = 25/29, 86.2%) were ribbon shaped, followed by four (13.8%) being round shaped; most nodular tumors (n = 26/33, 78.8%) were round shaped, followed by seven (21.2%) that were irregular shaped; and, lastly, all micronodular tumors (n = 2/2, 100%) were round shaped. Strong evidence of association (p = 0.000) was observed between the histological subtype and tumor shape as seen using the HFUS. No evidence of association was found between the histological subtype and tumor margin (p > 0.005). Cohen's kappa statistic assessing the agreement between BCC subtypes evaluated by histological examination and by U/S appearance was calculated to be 0.8251 (almost perfect agreement). (4) Conclusions: HFUS appears to be a reliable technique for the pre-operative evaluation of BCCs, assisting physicians to decide on the optimal therapeutic approach.
Introduction
Basal cell carcinoma (BCC) is the most common type of skin cancer, comprising 75% to 80% of all types, and it is the most common malignant tumor in white populations [1,2]. Due to changes in sun exposure habits, as well as the increased life span of people in western societies, the incidence of these tumors is continuously on the rise. Furthermore, BCCs in general have low mortality, but as they mainly occur on the head and neck area, their morbidity is significant and they can cause severe impairment, especially when they are not treated in timely fashion and have acquired a significant size [3]. Diagnosis is commonly achieved by a combination of clinical and dermatoscopic findings, which tend to be useful in the preoperative prediction of the BCC subtype. Determining whether a tumor lacks invasive potential helps us to predict the response to topical or minimally invasive treatments. The sensitivity and specificity of dermoscopy are higher for pigmented than non-pigmented BCCs and when it is performed by experts [2]. Furthermore, tumors located in difficult-to-treat locations (eyes, nose, lips, ears) and those with poorly defined non-pigmented margins, which are often associated with the morphoeic subtype or with recurrent tumors, pose a diagnostic challenge because an accurate appreciation of the margins is often impossible. We need other non-invasive imaging options that will allow us to predict the exact nature of the tumor.
Early, accurate detection of skin cancer is essential to guide appropriate management and to improve morbidity and quality of life [4]. Additional tools should be used to enhance diagnostic accuracy. Diagnostic techniques, such as optical coherence tomography (OCT), reflectance confocal microscopy (RCM), and ultrasound, have been proposed and studied for basal cell carcinoma management [2]. Cutaneous high-frequency ultrasound examinations can accurately and rapidly differentiate between epidermal, subdermal, and subcutaneous tissues in real time. This procedure may help to identify lesions invisible to the spatially-restricted human eye [5].
High frequency ultrasound (HFUS) is a non-invasive, office-based technique that can be used to detect the optimal biopsy site and the depth, length, and morphology (volume/shape) of the tumor, and also to differentiate BCCs from other skin tumors and, possibly, to differentiate the various BCC histological subtypes [6][7][8]. Different tumor subtypes behave in a different manner, so the need to recognize these subtypes becomes all the more relevant. In addition, not all tumors need to be treated with surgery: there are many minimally invasive and non-invasive techniques that can be used on low-risk tumors, and the therapeutic choice also depends on the tumor's characteristics (tumor size, subtype, etc.).
In dermatology, high resolution devices with high frequency transducers are used [9,10]. Devices of 20 to 25 MHz are most frequently used, and have the best resolution for the observation of surface structures. Frequencies between 50 and 100 MHz present little penetration, limited to the epidermis [7,10]. Apart from HFUS application to skin tumors, HFUS can also be used to examine other skin conditions, such as inflammatory and infectious cutaneous diseases, skin aging, and cosmiatry [10].
In our study, we aim to explore the value of HFUS in identifying high-risk BCC tumors and in differentiating the latter from low-risk ones that require treatment with minimally invasive techniques, and we also aim to correlate tumor characteristics as they appear in ultrasound (tumor depth, shape, morphology of margins) with the various histological subtypes. HFUS can prove to be an addition to dermoscopy as a non-invasive diagnostic tool that will enhance our diagnostic accuracy in skin oncology, and that will also guide us through therapeutic decision making.
Materials and Methods
For the aims of this study, we retrospectively reviewed clinical, ultrasound, and histological records of patients with 100 basal cell carcinomas, diagnosed from 26 May 2016 to 11 November 2016 at the Dermatology Department of the Pius Hospital of Valls in Spain. A total of 54 patients (35 men, 19 women) with 100 basal cell carcinomas diagnosed clinically and dermoscopically [11] were included in the study. Every tumor was initially recorded using a dermoscope Fotofinder leviacam ® (FotoFinder systems GmbH, Bad Birnbach, Germany) and ultrasound imaging was afterwards performed with Dub ® Skin Scanner (Taberna ProMedicum GmbH, Lueneburg, Germany) using a 22 MHz transducer. All tumors were then removed surgically, immediately fixed with 10% formaldehyde solution, and sent for histologic evaluation. Hematoxylin and eosin dyes were used for specimen staining. The samples were evaluated by the Pathology Department at the Pius Hospital of Valls, Spain. The same dermatologist performed the clinical, dermoscopic, and ultrasonographic examination, as well as the surgical excision of the tumor (PP). HFUS tumor measurements were taken during the evaluation.
Evaluation of Ultrasound Images
In HFUS images, the epidermis and dermis appear as a hyperechoic layer (bright) with the dermis showing up less bright than the epidermis. The subcutaneous tissue appears hypoechoic with hyperechoic fibrous septa in between. Basal cell carcinomas appear hypoechoic in contrast to the adjacent healthy tissue, while the margins can be delimited based on the difference in refraction between the area and the hyperechoic perilesional region [12]. In addition, with ultrasound imaging, it is possible to assess the tumor volume, length, and width, as well as the layers it infiltrates.
In our study, one dermatologist (PP) trained in skin imaging and blinded to the patients' histopathological results evaluated the ultrasonographic features of the lesions, including the tumour shape, margin, hyperechoic spots, width, and depth (measured in mm). Tumor depth measurements were obtained from the epidermal level and also from the surface level after subtracting the exophytic part of the tumour. Figure 1 illustrates how the measurements were taken: The tumour ultrasound characteristics were described as in Wang SQ et al. [13]. The selected tumour shape categories included were round/oval, ribbon-like (rosary-beads like), and irregular shapes. Tumour margins were described as either well or ill defined. Figure 2 shows some examples of the most common types of BCC shapes found in the HFUS images. Ribbon/Rosary-bead tumors can be seen as an elongated thin hypoechoic strip or sometimes as a tiny round tumor with a barely visible elongation. Irregular tumors can have very diverse shapes.
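Under one plausible reading of the measurement scheme in Figure 1, the depth below the epidermal plane is obtained by subtracting the height of the exophytic component from the depth measured from the tumour surface. A trivial helper makes this bookkeeping explicit; the values are hypothetical:

```python
def depth_below_epidermis(depth_from_surface_mm: float,
                          exophytic_height_mm: float) -> float:
    """Tumor depth below the epidermal plane (mm), given the depth measured
    from the tumor surface and the height of the exophytic component."""
    return depth_from_surface_mm - exophytic_height_mm

print(depth_below_epidermis(2.4, 0.6))  # 1.8 mm for hypothetical measurements
```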
Statistics
The statistical package STATA version 14.2 (StataCorp, College Station, TX, USA) was used for data analysis. The analysis comprised a preliminary descriptive analysis assessing the main characteristics of our sample population. Subsequently, we stratified our dataset according to BCC subtype, and HFUS measurements were calculated for each category. Absolute and relative frequencies of tumor shape and margins were obtained for the various subtypes. Cross-tabulations and hypothesis testing using the p-value approach were used to examine possible associations. One-way analysis of variance (ANOVA) was used to determine whether there were any statistically significant differences between the means of tumor depth for the various subtypes. Cohen's Kappa coefficient (κ) was used to measure inter-rater reliability.
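For readers who prefer an open-source workflow, the analyses named above could be reproduced along the following lines in Python; this is a sketch only, and the file name and column names are assumptions rather than part of the study.

import pandas as pd
from scipy.stats import chi2_contingency, f_oneway
from sklearn.metrics import cohen_kappa_score

df = pd.read_csv("bcc_lesions.csv")  # hypothetical per-lesion dataset

# Cross-tabulation of HFUS shape vs histological subtype, with a chi-square test
table = pd.crosstab(df["hfus_shape"], df["histology"])
chi2, p, dof, _ = chi2_contingency(table)

# One-way ANOVA on tumor depth across histological subtypes
groups = [g["depth_um"].values for _, g in df.groupby("histology")]
f_stat, p_anova = f_oneway(*groups)

# Unweighted Cohen's kappa between US-predicted and histological subtype
kappa = cohen_kappa_score(df["us_predicted_subtype"], df["histology"])
print(chi2, p, f_stat, p_anova, kappa)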
Dataset Characteristics
From the 100 tumors included in the study, the majority referred to male patients (n = 66, 66.0%) and 34 to female patients (34.0%). The mean age of the study population was 73.1 years. Most BCCs (n = 33, 33.0%) included in the dataset were nodular, followed by superficial (n = 29, 29.0%), infiltrative (n = 21, 21.0%), and micronodular (n = 2, 2.0%). A total of 14 tumors (14.0%) had no indication of the histologic subtype in the histological report, and 1 was reported as superficial plus infiltrative. The majority (n = 66, 66.0%) of the tumors were located on the head and neck area (Table 1).
HFUS Tumor Features (Shape and Margins) and Correlation with BCC Histological Subtype
When assessing the tumor shape, the majority (n = 46, 46.0%) were round shaped, followed by 28 tumors (28.0%) that were irregular shaped and 26 (26.0%) that were ribbon shaped. The vast majority (n = 74, 74.0%) were well-defined tumors, whereas 26 (26.0%) were ill defined (Table 1). Some evidence of an association between tumor shape and tumor margin was found (p = 0.013), with ribbon- and round-shaped tumors most commonly having well-defined margins. In detail, 80% of ribbon-shaped and 81% of round-shaped tumors had well-defined margins. Irregular-shaped tumors were found to have either ill- (n = 12, 52.2%) or well-defined (n = 11, 47.8%) margins. After excluding the tumors that had no indication of histology reported, as well as the one that was evaluated as the superficial plus infiltrative tumor type, we stratified our dataset by histological subtype and found that most infiltrative tumors (n = 16/21, 76.2%) were irregular shaped, followed by 5 (23.8%) being round shaped; that most superficial tumors (n = 25/29, 86.2%) were ribbon shaped, followed by 4 (13.8%) being round shaped; that most nodular tumors (n = 26/33, 78.8%) were round shaped, followed by 7 (21.2%) that were irregular shaped; and, lastly, that all micronodular tumors (n = 2/2, 100.0%) were round shaped (Table 2A). Strong evidence of an association (p < 0.001) was observed between histological subtype and tumor shape as seen on HFUS (Table 2A). No evidence of an association was found between histological subtype and tumor margin (p > 0.05) (Table 2B).
As far as the positive predictive value of HFUS is concerned (recomputed in the sketch below): (1) 76.2% of infiltrative BCCs were irregular shaped and 69.6% of all irregular-shaped tumors were found to be infiltrative BCCs, so in our dataset, when a tumor is irregular shaped, there is a 69.6% probability of it being an infiltrative BCC. (2) A total of 86.2% of superficial BCCs were ribbon shaped, and 100.0% of all ribbon-shaped tumors were found to be superficial BCCs, so in our dataset, ribbon-shaped tumors have a 100.0% probability of being superficial BCCs. (3) A total of 78.8% of nodular BCCs were round shaped and 70.3% of round-shaped BCCs were found to be nodular, so round-shaped tumors have a 70.3% probability of being nodular BCCs in our study.
Table 2. Cross-tabulations of 85 cases after excluding those cases where the histological subtype was not available (n = 14) and the one case of superficial plus infiltrative histological subtype (n = 1).
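The predictive values quoted above follow directly from the Table 2A counts given in the text; the short sketch below recomputes them.

def ppv(true_positives, all_positives):
    # Positive predictive value as a percentage.
    return 100.0 * true_positives / all_positives

print(ppv(16, 23))  # irregular shape -> infiltrative BCC: ~69.6% (16 of 16+7)
print(ppv(25, 25))  # ribbon shape -> superficial BCC: 100.0%
print(ppv(26, 37))  # round shape -> nodular BCC: ~70.3% (26 of 5+4+26+2)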
HFUS Measurements (Tumor Depth and Length)
In regards to HFUS measurements, the mean HFUS depth was calculated as 1702 μm (SD = 992.6 μm), whereas the mean depth from the surface of the skin after tumor debulk was 1180.8 μm (SD = 572.3 μm). After stratifying the dataset according to BCC subtype, we calculated the tumor depth before and after tangential excision. The subject defined as the superficial plus infiltrative tumor type and the ones without a histology report were all excluded from the analysis.
In both measurements (before and after tumor "debulking"), micronodular BCCs were found to be deeper than all other subtypes, followed by nodular, infiltrative, and, lastly, superficial ones. HFUS measurement results are presented in Table 3 and have also been graphically displayed in the two box plots (Figures 3 and 4).
Discussion
Basal cell carcinomas are the most frequent skin malignancies. Non-invasive imaging information about a tumor is important because it allows a therapeutic decision to be made that results in clinical cure with minimal treatment. Ideally, we should be able to identify high-risk tumors that require surgical treatments [14] and treat low-risk tumors with minimally invasive techniques. Clinical information alone is not sufficient for this decision. Dermoscopy is a non-invasive tool used in BCC diagnosis. However, HFUS provides additional important information: through ultrasound, we can visualize the shape, the length, and, most importantly, the depth of the tumors, as well as identify other tumor traits (i.e., hyperechoic granules) correlated with specific BCC subtypes.
BCC is generally confirmed by histopathological examination. However, a skin biopsy provides information solely for the site where the biopsy has been taken. If a tumor comprises two different subtypes (nodular and superficial) and an incisional biopsy is performed in the area of the superficial BCC, the physician might incorrectly decide to use a non-invasive technique to treat the whole tumor, with the likely result that the nodular part is only partially treated. HFUS can be used as a quick, non-invasive technique to achieve three things. (1) To choose the biopsy site(s): visualizing the whole tumor (especially important in large tumors) with HFUS enables us to detect deeper areas and, as a result, to biopsy higher-risk areas. (2) To choose the correct treatment modality: if the entire tumor is superficial, it can be effectively treated with PDT, imiquimod, or cryosurgery. (3) To monitor the tumor area after treatment. In our study, high-frequency ultrasound imaging using a 22 MHz probe was performed on 100 BCC lesions in 54 patients, and the depth measured from the ultrasonographic image was analyzed and later related to the histological type.
The HFUS measurements correlated with the histology, and, indeed, micronodular tumors were found to be deeper than all other subtypes, followed by nodular, infiltrative, and, lastly, superficial ones. This correlation and order in the thickness of the tumor did not change, even when the protruding part of the tumor was removed from the measurement. Crisan et al. [6] have shown a significant correlation between U/S and histological findings regarding tumor thickness, pointing at the small differences caused by US overestimation, which may be explained by the presence of the perilesional tumoral infiltrate and/or the retraction of tissue induced during the excision of the tumor.
In our study, all 100 BCCs had shapes that could be easily described as irregular, ribbon, or round. The tumor shape, as visualized by HFUS, seems to be predictive of the subtype: irregular-shaped tumors were more likely to be infiltrative BCCs, ribbon-shaped tumors were more likely to be superficial BCCs, and round-shaped tumors were likely to be nodular BCCs. When superficial BCCs appeared round shaped on HFUS, they were thinner (less than 400 microns). In all cases, we were able to visualize the morphology, exact localization, and thickness in a reliable way before surgery, and we could thus also perform a correlation with the histological type described after excision. The sonographic appearance of the tumors studied (hypoechoic and oval shaped) was similar to previous reports in the literature [15].
Furthermore, the depth of the lesions displayed by ultrasound has been helpful for the differential diagnosis of lesions at different risk levels. Wang et al. [13] showed that all high-risk BCC lesions examined (defined as micronodular, infiltrative, basosquamous, and mixed) involved the subcutaneous tissue, while 78% of low-risk lesions (defined as superficial and nodular) were located in the dermis, resulting in a significant difference between the two groups. The authors concluded that pre-operative ultrasound can be employed to reveal subclinical characteristics of the tumor, which can be crucial to providing important information for therapeutic decision making, and which can also predict the risk of recurrence.
As high-frequency ultrasound has been reported to explicitly present the deep structure of lesions and, in our experience, also to measure the tumor depth that correlates with the histological type, important information can be obtained from this method for therapeutic decision making [4-6,15]. Similarly, OCT also provides information on both the morphology and the depth of a tumor. Although its resolution is more precise than that of ultrasound, the cost of the machine is much higher. On the other hand, RCM has a high resolution, allowing for the examination of lesions at a cellular level, but it cannot penetrate deeply into the skin and thus cannot provide information on the depth of a tumor, and, at the same time, it is also very expensive to purchase. All of these diagnostic techniques may play an important role in basal cell carcinoma management, but they all require training to use, and, moreover, the high cost of both OCT and RCM makes them unrealistic for daily office practice. These two techniques may thus be better reserved for use in academic centers and hospitals [16,17].
Knowing the shape and depth of the tumor allows the surgeon to decide whether to treat the lesion with minimally invasive techniques, such as shaving, curettage and electrocoagulation, photodynamic therapy, or cryosurgery. Once a tumor has been identified as small, it is possible to simply shave it and visualize it ex vivo in order to guarantee that it has been completely removed (Figure 5) [18,19]. Furthermore, correct evaluation of surgical margins is of paramount importance, especially in regards to infiltrative BCCs, which pose a challenge when assessing their margins pre-operatively, resulting in higher rates of incomplete surgical excision [20,21]. HFUS can improve the assessment of basal cell carcinoma margins preoperatively [18]. In our study, all infiltrative BCCs showed clear margins in the pathology report after excision.
In addition, we found numerous intra-tumoral hyperechoic granules in two tumors that were histologically described as micronodular. This finding corresponds with previous descriptions in the literature by Wortsman et al., who stated that hyperechoic spots may predict the high-risk BCC subtypes [22].
Bobadilla et al. [15] studied the capacity of ultrasound in defining a sonographic morphology for BCC, and they tried to ascertain the accuracy level of the measurement of the unknown axis (depth) of the lesion. They argued that ultrasound is indeed helpful when setting up BCC surgery because it can discern, in a non-invasive way, lesions and their depth as well as the patterns of the vessels. Additionally, ultrasound has a fine thickness correlation with histology, and it can also uncover subclinical satellite lesions. Thus, it appears that ultrasound is an interesting approach not only for pre-surgical evaluation but even possibly for follow-up of lesions that are treated with non-invasive treatments.
In conclusion, high-frequency ultrasound appears as a reliable technique to provide important information for pre-operative evaluation of BCC, helping physicians to decide on the optimal therapeutic approach and tailor the treatment to the characteristics of the lesion.
Conclusions
The 22 MHz high frequency ultrasound (HFUS) is a non-invasive imaging technique that gives valuable subclinical tumor information, such as depth, length, volume, and shape. It is a relatively simple procedure that gives valuable preoperative information, which helps personalize the treatment and reduce over-or under-treatment. The level of invasion of a NMSC is usually known only after the tumor has been removed and after the histopathological report has been received. HFUS provides this information preoperatively and can help to guide therapeutic decisions. For instance, the physician may decide to treat thin tumors with minimally invasive techniques and reserve more aggressive treatments for deeper tumors. We have been able to show an association between BCC histological subtype and tumor depth and shape. We calculated an unweighted Cohen's Kappa statistic coefficient equal to 0.8251 (se 0.0735, z [for k = 0] 11.23, p < 0.0001), which indicates an almost perfect agreement beyond chance between BCC subtypes evaluated by histological examination and U/S appearance [23]. The latter highlights the significance and diagnostic accuracy of HFUS as a non-invasive pre-operative diagnostic tool. Our study's limitations derive mainly from its retrospective, non-controlled, and non-randomized design. Larger well-designed series are still needed to explore the role of HFUS in shaping BCC management strategies. Data Availability Statement: Data are available upon reasonable request from the corresponding author. All data relevant to this study are included in the article. | 2023-06-29T05:11:40.668Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "fb840e544f08a63b41e2a67dae5c5e041b6a0be6",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "fb840e544f08a63b41e2a67dae5c5e041b6a0be6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118836468 | pes2o/s2orc | v3-fos-license | Jordan Derivations of some extension algebras
In this paper, we mainly study Jordan derivations of dual extension algebras and those of generalized one-point extension algebras. It is shown that every Jordan derivation of dual extension algebras is a derivation. As applications, we obtain that every Jordan generalized derivation and every generalized Jordan derivation on dual extension algebras are both generalized derivations. For generalized one-point extension algebras, it is proved that under certain conditions, each Jordan derivation of them is the sum of a derivation and an anti-derivation.
Introduction
Let us begin with some definitions. Let R be a commutative ring with identity, A be a unital algebra over R and Z(A) be the center of A. We denote the Jordan product by a • b = ab + ba for all a, b ∈ A. Recall that an R-linear mapping Θ from A into itself is called a Jordan derivation if Θ(a • b) = Θ(a) • b + a • Θ(b) for all a, b ∈ A. Every derivation is obviously a Jordan derivation. The converse statement is in general not true. Moreover, in the 2-torsion free case the definition of a Jordan derivation is equivalent to Θ(x²) = Θ(x)x + xΘ(x) for all x ∈ A. Those Jordan derivations which are not derivations are said to be proper.
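As a quick sanity check of these definitions, the following Python snippet verifies numerically that an inner derivation x ↦ ax − xa respects the Jordan product on 2 × 2 real matrices; it is an illustration only, not part of the proofs below.

import numpy as np

rng = np.random.default_rng(0)
a, x, y = (rng.standard_normal((2, 2)) for _ in range(3))

def D(m):
    # inner derivation induced by a: D(m) = am - ma
    return a @ m - m @ a

def jordan(u, v):
    # Jordan product u o v = uv + vu
    return u @ v + v @ u

# D(x o y) = D(x) o y + x o D(y) holds for all x, y
assert np.allclose(D(jordan(x, y)), jordan(D(x), y) + jordan(x, D(y)))
print("Jordan derivation identity verified")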
There has been an increasing interest in the study of Jordan derivations of various algebras since last decades. The standard problem is to find out whether a Jordan derivation degenerate to a derivation. Jacobson and Rickart [14] proved that every Jordan derivation of a full matrix algebra over a 2-torsion free unital ring is a derivation by relating the problem to the decomposition of Jordan homomorphisms. In [12], Herstein showed that every Jordan derivation from a 2-torsion free prime ring into itself is also a derivation. Zhang and Yu [30] obtained that every Jordan derivation on a triangular algebra with faithful assumption is a derivation. This result was extended to the higher case by Xiao and Wei [27]. They obtained that any Jordan higher derivation on a triangular algebra is a higher derivation. The aforementioned results have been extended to different rings and algebras in various directions, see [5,6,22,23,30] and the references therein.
Path algebras of quivers come up naturally in the study of tensor algebras of bimodules over semisimple algebras. It is well known that any finite dimensional basic K-algebra is given by a quiver with relations when K is an algebraically closed field. In [10], Guo and Li studied the Lie algebra of differential operators on a path algebra KΓ and related this Lie algebra to the algebraic and combinatorial properties of the path algebra KΓ. In [18], the current authors studied Lie derivations and Jordan derivations of a class of path algebras of quivers without oriented cycles, which can be viewed as one-point extensions. It is proved in this case that each Lie derivation is of the standard form and each Jordan derivation is a derivation. Moreover, the standard decomposition of a Lie derivation is unique.
For path algebras of finite quivers without oriented cycles, Xi [24] constructed their dual extension algebras to study quasi-hereditary algebras. This construction was further refined in detail in [7,9,25] by Deng and Xi. A more general construction, the twisted doubles, was studied in [8,15,26]. It turns out that dual extension algebras inherit some nice representation-theoretic properties from the given algebras. On the other hand, in [19], the current authors proved that all Lie derivations of the dual extension algebra are of the standard form. Then it is natural to ask whether all Jordan derivations of dual extension algebras are derivations. We will give a positive answer in this paper. More precisely, one of the main results of this paper is as follows: Theorem. Let K be a field with char K ≠ 2. Let (Γ, ρ) be a finite quiver without oriented cycles. Then each Jordan derivation on the dual extension algebra of the path algebra K(Γ, ρ) is a derivation.
It should be remarked that each associative algebra with nontrivial idempotents is isomorphic to a generalized matrix algebra. The form of Jordan derivations on generalized matrix algebras has been characterized by the current authors in [20]. We proved that under certain conditions, each Jordan derivation is the sum of a derivation and an anti-derivation. An example of a proper Jordan derivation was also given there. To find a proper Jordan derivation is not an easy task in general. Recently, Benkovič [4] introduced the so-called singular Jordan derivations, which are usually anti-derivations. He gave a sufficient condition for a Jordan derivation on a unital algebra with a nontrivial idempotent to be the sum of a derivation and a singular Jordan derivation. Our result on Jordan derivations of dual extension algebras implies that neither the conditions in [20] nor those in [4] are necessary. Of course, we want to give other examples to illustrate this fact. The so-called generalized one-point extension algebras introduced in [19] provide just such another class of examples. We prove that under certain conditions, each Jordan derivation on a generalized one-point extension algebra is the sum of a derivation and an anti-derivation.
The paper is organized as follows. After a quick review of some needed preliminaries on path algebras and generalized matrix algebras in Section 2, we investigate Jordan derivations of dual extension algebras in Section 3. Jordan generalized derivations and generalized Jordan derivations are also considered. Then in Section 4, we study Jordan derivations of generalized one-point extension algebras. An interesting example will also be given there.
Path algebras and generalized matrix algebras
In this section, we give a quick review of path algebras of quivers and generalized matrix algebras. For more details, we refer the reader to [1] and [27].
Path algebras.
Recall that a finite quiver Γ = (Γ_0, Γ_1) is an oriented graph with the set of vertices Γ_0 and the set of arrows between vertices Γ_1 both being finite. For an arrow α, we write s(α) = i and e(α) = j if it goes from the vertex i to the vertex j. A sink is a vertex without arrows beginning at it, and a source is a vertex without arrows ending at it. A nontrivial path in Γ is an ordered sequence of arrows p = α_n ··· α_1 such that e(α_m) = s(α_(m+1)) for each 1 ≤ m < n. Define s(p) = s(α_1) and e(p) = e(α_n). The length of p is defined to be n. A trivial path is the symbol e_i for each i ∈ Γ_0. In this case, we set s(e_i) = e(e_i) = i. The length of a trivial path is defined to be zero. A nontrivial path p is called an oriented cycle if s(p) = e(p). Let us denote the set of all paths by P.
Let K be a field and Γ a quiver. Then the path algebra KΓ is the K-algebra generated by the paths in Γ, where the product of two paths x = α_n ··· α_1 and y = β_t ··· β_1 is defined by xy = α_n ··· α_1 β_t ··· β_1 if e(y) = s(x), and xy = 0 otherwise.
Clearly, KΓ is an associative algebra with the identity 1 = ∑_{i∈Γ_0} e_i, where the e_i (i ∈ Γ_0) are pairwise orthogonal primitive idempotents of KΓ.
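The multiplication rule above amounts to concatenating arrow sequences when the endpoints match; a minimal Python sketch (with hypothetical arrow names) makes this explicit.

def compose(x, y):
    # Paths are triples (source, target, tuple_of_arrow_names), with the
    # arrows listed in order of application.  The product xy is the
    # concatenation when e(y) = s(x), and None (the zero element) otherwise.
    sx, ex, arrows_x = x
    sy, ey, arrows_y = y
    if ey != sx:
        return None
    return (sy, ex, arrows_y + arrows_x)

alpha = (1, 2, ("alpha",))    # alpha : 1 -> 2
beta = (2, 3, ("beta",))      # beta  : 2 -> 3
print(compose(beta, alpha))   # the path beta.alpha : 1 -> 3
print(compose(alpha, beta))   # None, since e(beta) = 3 differs from s(alpha) = 1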
A relation σ on a quiver Γ over a field K is a K-linear combination of paths σ = k_1 p_1 + ··· + k_n p_n, where k_i ∈ K and e(p_1) = ··· = e(p_n), s(p_1) = ··· = s(p_n). Moreover, the number of arrows in each path is assumed to be at least 2. Let ρ be a set of relations on Γ over K. The pair (Γ, ρ) is called a quiver with relations over K. Denote by <ρ> the ideal of KΓ generated by the set of relations ρ. The K-algebra K(Γ, ρ) = KΓ/<ρ> is always associated with (Γ, ρ). For an arbitrary element x ∈ KΓ, write x̄ for the corresponding element in K(Γ, ρ). We often write x̄ as x if no confusion is caused.
Generalized matrix algebras. Let A and B be unital R-algebras, M a unital (A, B)-bimodule and N a unital (B, A)-bimodule, together with bimodule homomorphisms Φ_MN : M ⊗_B N → A and Ψ_NM : N ⊗_A M → B satisfying the usual associativity conditions. Then the set of all formal matrices [a m; n b] with a ∈ A, m ∈ M, n ∈ N and b ∈ B forms an R-algebra under matrix-like addition and matrix-like multiplication. There is no constraint condition concerning the bimodules M and N; of course, they may be zero. Such an R-algebra is called a generalized matrix algebra of order 2 and is usually denoted by G = [A M; N B]. The structure and properties of linear mappings on generalized matrix algebras have been investigated in our systematic works [17,18,20,21,27].
Any unital R-algebra A with a nontrivial idempotent e is isomorphic to a generalized matrix algebra as follows: A ≅ [eAe eA(1−e); (1−e)Ae (1−e)A(1−e)], where e is a nontrivial idempotent in A.
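To make the matrix-like multiplication concrete, the following Python sketch models an order-2 generalized matrix algebra whose entries are 2 × 2 real matrices and whose pairings are supplied as functions; taking both pairings to be zero mirrors the dual extension situation studied below. The encoding is an illustrative assumption, not a construction from the paper.

import numpy as np

def gma_mul(u, v, phi, psi):
    # (a, m, n, b) models the formal matrix [a m; n b]; the bimodule
    # actions are ordinary matrix products here, and the pairings phi, psi
    # contribute the M x N and N x M terms of the product.
    a1, m1, n1, b1 = u
    a2, m2, n2, b2 = v
    return (a1 @ a2 + phi(m1, n2),
            a1 @ m2 + m1 @ b2,
            n1 @ a2 + b1 @ n2,
            psi(n1, m2) + b1 @ b2)

zero = lambda s, t: np.zeros((2, 2))  # zero pairings, as in dual extensions

e = np.eye(2)
u = (e, e, np.zeros((2, 2)), e)
v = (e, e, np.zeros((2, 2)), e)
print(gma_mul(u, v, zero, zero))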
Jordan derivations of dual extension algebras
Let us first recall the definition of dual extension algebras introduced in [24]. Let Λ = K(Γ, ρ), where Γ is a finite quiver. Let Γ* be the quiver whose vertex set is Γ_0 and whose arrow set is Γ*_1 = {α* : i → j | α : j → i is an arrow in Γ_1}. If Γ has no oriented cycles, then D(Λ) is called the dual extension of Λ. A more general definition of dual extension algebras was given in [25] to study global dimensions of dual extension algebras. We omit the details here because they will not be involved in this paper. Clearly, if |Γ_0| = 1, then the algebra is trivial. Let us assume that |Γ_0| ≥ 2 from now on. Then D(Λ) is isomorphic to a generalized matrix algebra G = [A M; N B]. According to the construction of the dual extension, it is easy to verify that the pairings Φ_MN = 0 and Ψ_NM = 0. If M = 0, then N = 0.
Moreover, it is helpful to point out that M need not be faithful as a left A-module or as a right B-module. Some examples were given in [19].
We always assume, unless otherwise mentioned, that every algebra and every bimodule considered is 2-torsion free. Let us recall some indispensable descriptions of derivations and Jordan derivations of generalized matrix algebras. For details, we refer the reader to [17] and [20].
where m_0 ∈ M, n_0 ∈ N, and the remaining component maps are R-linear mappings satisfying the following conditions:
Proof. Let Θ be a Jordan derivation of D(Λ) with the form (⋆2). We first prove that τ_3 = 0 and ν_2 = 0. Let α ∈ N be an arbitrary arrow. Then e(α) = i. Assume that s(α) = j, where j ∈ Γ_0. In view of condition (5) of Lemma 3.2, if τ_3(α) ≠ 0, then ατ_3(α) ≠ 0. However, ατ_3(α) ≠ 0 is impossible by condition (5), and hence τ_3(α) = 0.
Proof. If the algebra D(Λ) is trivial, then the theorem clearly holds. Suppose that |Γ_0| ≥ 2 and that i ∈ Γ_0 is a source. Let Θ be a Jordan derivation on D(Λ). Let us denote by (Γ′, ρ′) the quiver obtained by removing the vertex i and the relations starting at i, and write Λ′ = K(Γ′, ρ′). It follows from Lemma 3.3 that each Jordan derivation on D(Λ) is a derivation if each Jordan derivation on D(Λ′) is a derivation. Thus it is sufficient to determine whether every Jordan derivation on D(Λ′) is a derivation. We repeat this process and ultimately arrive at the algebra K after finitely many steps, since Γ_0 is a finite set. Clearly, every Jordan derivation on K is a derivation. This completes the proof. For generalized Jordan derivations on dual extension algebras, the following result is a simple corollary of [3, Lemma 4.1] and Theorem 3.4.
Corollary 3.6. Every generalized Jordan derivation on a dual extension algebra D(Λ) is a generalized derivation.
In order to deal with Jordan generalized derivations, we need the following two lemmas obtained in [16] by the first author and Benkovič. Let f be a Jordan generalized derivation on a dual extension algebra D(Λ). It follows from Lemma 3.7, Lemma 3.8 and Theorem 3.4 that if f(1) ∈ Z(D(Λ)), then every Jordan generalized derivation is a generalized derivation. In order to prove that f(1) ∈ Z(D(Λ)), the following lemma obtained in [16] will play an important role. Furthermore, we claim that e_i f(1) e_j = 0 for arbitrary i, j ∈ Γ_0 with i ≠ j. In fact, Lemma 3.9 (2) implies that
Jordan derivations of generalized one-point extensions
We introduced the notion of generalized one-point extension algebras in [19] and studied Lie derivations on them. In this section, we will investigate Jordan derivations of generalized one-point extension algebras. Let us first recall the definition.
Let Γ = (Γ_0, Γ_1) be a finite quiver without oriented cycles with |Γ_0| ≥ 2. Let Γ* be a quiver whose vertex set is Γ_0 and Γ*_1 = {α* : i → j | α : j → i is an arrow in Γ_1}. For a path p = α_n ··· α_1 in Γ, write p* for the path α*_1 ··· α*_n in Γ*. Given a set ρ of relations, write Λ = K(Γ, ρ). Define the generalized one-point extension algebra E(Λ) to be the path algebra of the quiver (Γ_0, …). It is helpful to point out that if we choose a suitable idempotent, then neither M nor N need be faithful. Let us illustrate an example here. Combining (4.1) with (4.2) gives that k_r = 0. If there exists j ∈ Γ_0 with i ≠ j such that k_j ≠ 0, then the coefficient of e_j in the expansion of Θ(e_r)e_j is k_j. On the other hand, since e_j does not appear in the expansion of Θ(e_j), we conclude that e_j does not appear in the expansion of e_rΘ(e_j) either. This implies that Θ(e_j e_r) ≠ 0, which is impossible.
Since Γ is a quiver without oriented cycles, we can take a source i in Γ. Let e_i be the corresponding idempotent in E(Λ). Then E(Λ) is isomorphic to a generalized matrix algebra G = [A M; N B] with A ≅ E(Λ′), where the quiver Γ′ of Λ′ is obtained by removing the vertex i and the relations starting at i. Moreover, we see from the construction of E(Λ) that the bilinear pairings are both zero. In this case, the form (⋆2) of any Jordan derivation of E(Λ) specializes as follows: (1) δ_1 is a Jordan derivation on A.
In [20], the form of an arbitrary anti-derivation on a generalized matrix algebra G = [A M; N B] was characterized under the condition that M is faithful as a left A-module and as a right B-module. If we remove the faithfulness assumption on M, the form of an anti-derivation on G is as follows: We are now in a position to state the main result of this section. Theorem 4.6. Let Γ be a finite quiver without oriented cycles and Λ = K(Γ, ρ). If there is no path p of length more than one, then every Jordan derivation on the generalized one-point extension algebra E(Λ) is the sum of a derivation and an anti-derivation.
Proof. Let Θ be a Jordan derivation on E(Λ). Then by Lemma 4.2 it is of the form (1). We claim that if each Jordan derivation on A is the sum of a derivation and an anti-derivation, then so is each Jordan derivation on E(Λ). In fact, assume that δ_1 = d + f, where d is a derivation of A and f is an anti-derivation of A. By Lemma 4.2 we know that no e_i appears in f(a) for a ∈ A. Note that the length of each path is at most one. This implies that f(a)m = 0 for all a ∈ A and m ∈ M. Similarly, we can show that nf(a) = 0 for all a ∈ A and n ∈ N. Define a linear mapping f′ on E(Λ) by extending f by zero on the remaining components; then Θ − f′ is a derivation of E(Λ). This completes the proof of our claim. Repeating this process, we arrive at the algebra K, on which every Jordan derivation is zero. This completes the proof.
Finally, we illustrate an example which satisfies the condition of Theorem 4.6.
Example 4.7. Let Γ be a quiver as follows. Then a direct computation shows that Θ is a proper Jordan derivation on E(Λ). | 2013-03-02T10:51:21.000Z | 2013-03-02T00:00:00.000 | {
"year": 2013,
"sha1": "0369dfe8a7e74dd2c95b7b244cfcd77bae91b085",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0369dfe8a7e74dd2c95b7b244cfcd77bae91b085",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
1019962 | pes2o/s2orc | v3-fos-license | Pharmacokinetic and nephroprotective benefits of using Schisandra chinensis extracts in a cyclosporine A-based immune-suppressive regime
Cyclosporine A (CsA) is a powerful immunosuppressive drug. However, the nephrotoxicity resulting from its long-term use has hampered its prolonged therapeutic application. Schisandra chinensis extracts (SCE) have previously been used in traditional Chinese medicine and more recently coadministered with Western medicine for the treatment of CsA-induced side effects in the People's Republic of China. This study aimed to investigate the possible effects of SCE on the pharmacokinetics of CsA in rats and to elucidate the potential mechanisms by which it hinders the development of CsA-induced nephrotoxicity. A liquid chromatography/tandem mass spectrometry method was developed and validated for determining the effect of SCE on the pharmacokinetics of CsA. Male Sprague Dawley rats, which were administered CsA (25 mg/kg/d) alone or in combination with SCE (54 mg/kg/d and 108 mg/kg/d) for 28 days, were used to evaluate the nephroprotective effects of SCE. Our study showed that SCE increased the mean blood concentration of CsA. Furthermore, we found that the concomitant administration of SCE alongside CsA prevented the disruption of catalase activity and the reduction in creatinine, urea, renal malondialdehyde, and glutathione peroxidase levels that would have otherwise occurred in the absence of SCE administration. SCE treatment markedly suppressed the expression of 4-hydroxynonenal, Bcl-2-associated X protein, cleaved caspase 3, and the autophagy-related protein LC3A/B. On the other hand, the expression of heme oxygenase-1, nuclear factor erythroid 2-related factor 2 (Nrf2), and P-glycoprotein was enhanced by the very same addition of SCE. SCE was also able to increase the systemic exposure of CsA in rats. The renoprotective effects of SCE were thought to be mediated by its antiapoptotic and antioxidant abilities, which caused the attenuation of CsA-induced autophagic cell death. All in all, these findings suggest the prospective use of SCE as an effective adjunct in a CsA-based immunosuppressive regimen.
Introduction
Cyclosporine A (CsA), a potent immunosuppressant, is the most widely used drug in organ transplantation and in the management of several autoimmune disorders. 1 Despite being so potent, its use for therapeutic purposes remains challenging due to the occurrence of serious side effects, namely acute and chronic nephrotoxicity, severe hypertension, and neurotoxicity. 2,3 Experimental studies have shown that CsA-induced nephrotoxicity is characterized by progressive irreversible renal structural damage, arteriopathy of the afferent arterioles, tubulointerstitial inflammation, striped fibrosis, and decreased glomerular filtration rate. 4 However, the mechanism underlying CsA-induced renal damage is not fully understood. Recent studies have demonstrated that CsA-induced oxidative stress, apoptotic cell death, and excessive autophagosome formation play a crucial role in causing structural and functional kidney impairment. [5][6][7] The CsA-induced autophagy has been shown to be triggered by endoplasmic reticulum stress. 8 Alternatively, other reports have revealed that CsA can induce apoptosis in human proximal cells in vitro via reactive oxygen species (ROS)-mediated cell damage. 9,10 Moreover, CsA has been shown to induce renal dysfunction and altered renal morphology through additional mechanisms, including inflammation, apoptosis, autophagy, and the generation of oxidative radicals. 8,[11][12][13] Schisandra chinensis (Turcz.) Baill., a member of the Magnoliaceae family, has been one of the most widely used herbal medicines in traditional Chinese medicine for thousands of years in the People's Republic of China. 14,15 The use of S. chinensis as a superior herbal medicine was first recorded in the ancient pharmaceutical book The Divine Husbandman's Herbal Foundation Canon (Shen Nong Ben Tsao Ching). 16 Schisandra lignans, which have a dibenzocyclooctadiene skeleton, are the major constituents of S. chinensis, and more than 40 such ingredients have been isolated to date, including deoxyschizandrin, schizandrin, schizantherin A, and schizanhenol. 17 Additionally, S. chinensis extracts (SCE) have been revealed to possess multiple pharmaceutical bioactivities, including antioxidant, anti-inflammatory, antimicrobial, cardioprotective, hepatoprotective, antiseptic, and antitumor properties, which are extremely useful for the treatment of hepatitis, renal insufficiency, menstrual disorders, and neurasthenia. 18 Schizandrin and schizandrin B, both components of the SCE, have been shown to exhibit renoprotective effects by preventing cell death. It has been found that SCE enhances tubular regeneration in HK-2 cell models affected by cisplatin-induced nephrotoxicity and thereby prevents potential damage from occurring. 19 It has further been reported that SCE might be beneficial for diabetic nephropathy, where SCE, by maintaining podocyte integrity through suppression of the epithelial-to-mesenchymal transition, leads to the attenuation of albuminuria and glomerulosclerosis. 20 Recently, schizandrin B was shown to suppress hypoxia/reoxygenation-induced apoptosis of H9c2 cells in vitro. 21 In another study, SCE was found to inhibit fiber formation and cell migration in vascular smooth muscle cells by reducing transforming growth factor beta 1-mediated phosphorylation of myosin light chain. This effect was shown to be independent of the RhoA/Rho-associated kinase pathway. 22 Though many studies have been conducted on SCE functions, the potential renoprotective effects and the molecular mechanism underlying the benefits of SCE on CsA-induced nephrotoxicity remain to be explored.
This study was designed to evaluate the effects of SCE on the pharmacokinetics of CsA and to elucidate the protective effects of SCE against CsA-induced renal injury using a salt-depleted rat model of CsA nephropathy. We hypothesized that SCE can elevate the blood concentration of CsA and, at the same time, ameliorate renal dysfunction and structural damage through its antioxidant, antiapoptotic, and autophagy-modulating properties. In this study, CsA levels in the blood samples were determined using liquid chromatography/tandem mass spectrometry methods. To test for oxidative renal damage, we searched for the relevant biochemical and histopathological changes in renal tissue. Finally, Western blot and immunohistochemical techniques were used to demonstrate the expression of heme oxygenase-1 (HO-1), nuclear factor erythroid 2-related factor 2 (Nrf2), P-glycoprotein (P-gp), cleaved caspase 3, Bcl-2-associated X protein (Bax), and LC3A/B protein in the renal tissue.
…University of Chinese Medicine. The animals were housed in a room with a temperature- and light-controlled environment and had free access to a low salt diet (0.04% sodium, provided by the Medical Laboratory Animal Center, Guangdong, People's Republic of China) and tap water. The Institutional Animal Care and Use Committee of Guangzhou University of Chinese Medicine approved this experimental protocol, and all procedures performed in this study followed the regulations of experimental animal administration issued by the Ministry of Science and Technology of the People's Republic of China.
SCE preparation
SCE was made as described in our previously published paper. 23 Briefly, 95% ethanol was added to the S. chinensis fruit. Circumfluent extraction was then performed three times, lasting 3 hours each. This was followed by macroporous resin (AB-8) purification using 75% ethanol as the eluting solvent. To examine the total content of lignanoids within the SCE, an aliquot of SCE (60 μL) was examined at 570 nm on a UV-6100S UV-visible scanning spectrophotometer (Shanghai Precision Instrument Co. Ltd., Shanghai, People's Republic of China). The total lignan content in the prepared SCE was calculated to be 37.30 mg/g. HPLC was performed to determine the components of SCE using an Agilent 1100 liquid chromatography system (Agilent Technologies, Santa Clara, CA, USA). In this study, the retention times of standard control samples of schizandrin, schizandrin A, and schizandrin B were 7.640 minutes, 22.587 minutes, and 33.368 minutes, respectively (Figure 1A). Accordingly, the contents of schizandrin, schizandrin A, and schizandrin B, the three major bioactive compounds, in three batches of SCE were calculated as 5.26-5.41 mg/g, 1.68-1.77 mg/g, and 0.721-0.797 mg/g, respectively (Figure 1B).
Pharmacokinetic study in rats
All rats were deprived of food for 12 hours before blood sample collection. Subsequently, the rats were randomly divided into four groups of six rats each (n=6): 1) CsA-only group: rats were administered tap water, followed 10 minutes later by CsA (25 mg/kg). 2) SCE administered at a dose of 54 mg/kg. 3) SCE administered at a dose of 108 mg/kg. 4) SCE administered at a dose of 216 mg/kg. In the latter three groups, SCE was administered by gavage at the relevant doses, and 10 minutes post-SCE administration, all rats were given CsA at a dose of 25 mg/kg. Both SCE and CsA were given in an administration volume of 10 mL/kg body weight.
Blood samples (400 μL) were collected into heparinized tubes through the jugular vein at 0 hour, 0.083 hour, 0. … Thereafter, the upper phase was transferred to a clean centrifuge tube and evaporated to dryness. The residues were dissolved in 200 μL of methanol, vortexed for 3 minutes, and centrifuged at 16,000 rpm for 10 minutes.
Quantification of CsA by liquid chromatography/tandem mass spectrometry
Chromatographic analysis was performed using the Agilent 1260 series HPLC system, and the separation was performed using a C18 column (2.1×50 mm, 3.5 μm particle size, internal diameter; Waters, USA) at 60°C. The mobile phase consisted of methanol-water (80:20, v/v, containing 10 mM ammonium acetate), and it was pumped at a flow rate of 0.2 mL/min. The total running time was 5 minutes for each sample. The chromatographic peak was confirmed by liquid chromatography/mass spectrometry using the Agilent 6460 triple-quadrupole mass spectrometer under positive electrospray ionization mode, with the spray voltage set at 3,500 V. The desolvation gas (nitrogen) was heated to 300°C and delivered at a flow rate of 5 L/min. The nitrogen nebulizing gas was set at 45 psi. Tandem mass spectrometry detection was conducted by monitoring the fragmentation of m/z 1,219.9→1,203.7 for CsA (Figure 2A) and m/z 809.5→792.5 for FK520 (Figure 2B). The method was validated for selectivity, calibration curve, recovery, precision, the lower limit of quantification, and stability according to the US Food and Drug Administration guideline for validation of bioanalytical methods.
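Assuming FK520 serves as the internal standard (it is co-spiked with CsA throughout the validation), concentrations would be back-calculated from the analyte-to-internal-standard peak-area ratio via the linear calibration curve; the following Python sketch with invented numbers shows the arithmetic.

import numpy as np

# Nominal calibration standards (ng/mL) and measured peak-area ratios
# area(CsA)/area(FK520); all numbers are invented for illustration.
cal_conc = np.array([20, 50, 200, 1000, 2500, 5000], dtype=float)
cal_ratio = np.array([0.011, 0.027, 0.108, 0.55, 1.36, 2.74])

slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)

def quantify(area_csa, area_is):
    # Back-calculate the CsA concentration (ng/mL) from one sample.
    return (area_csa / area_is - intercept) / slope

print(quantify(1.1e6, 2.0e6))  # hypothetical unknown sample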
SCE protection against CsA-induced nephrotoxicity: study in rats
Forty-two rats were randomized into six subgroups of seven rats each and treated daily for 28 days as follows: 1) Control group: gavaged with tap water (10 mL/kg) and fed a normal salt diet for 28 days. 2) Vehicle control group: gavaged with olive oil (10 mL/kg) and fed a low salt diet for 28 days. 3) CsA group: gavaged with 25 mg/kg CsA and fed a low salt diet for 28 days in order to establish a nephrotoxic effect. 4) SCE group: gavaged with 108 mg/kg SCE and fed a low salt diet for 28 days. 5) 54 mg/kg SCE + CsA group: gavaged with 54 mg/kg SCE + 25 mg/kg CsA and fed a low salt diet for 28 days. 6) 108 mg/kg SCE + CsA group: gavaged with 108 mg/kg SCE + 25 mg/kg CsA and fed a low salt diet for 28 days.
Figure 2: Full-scan product ion spectra of the precursors of (A) CsA and (B) FK520.
After the aforementioned 28 days, the rats, which had fasted overnight, were anesthetized with pentobarbital sodium (40 mg/kg, intraperitoneal). Blood samples were collected from the abdominal aorta and centrifuged at 4,000 rpm for 10 minutes. The serum was aspirated and stored at −80°C until needed for biochemical analyses. Following blood collection, the rats were sacrificed by an overdose of pentobarbital sodium. Thereafter, their abdomens were opened and a longitudinal section of the right kidney was taken and fixed in 10% formaldehyde at 4°C for 48 hours. These longitudinal sections from each animal were then embedded in paraffin blocks for histopathological and immunohistochemical analyses. The renal cortex of the left kidney was frozen in liquid nitrogen and stored at −80°C for Western blot and various biochemical analyses.
Biochemical analysis
Using commercially available kits, the levels of serum creatinine (Cr), serum urea, and malondialdehyde (MDA) (JianCheng Bioengineering Institute, Nanjing, People's Republic of China), as well as catalase (CAT) activity and glutathione peroxidase (GSH-Px) activity (Beyotime, Jiangsu, People's Republic of China), were quantified according to the manufacturers' guidelines. The protein concentration was determined using the bicinchoninic acid Protein Assay Kit (Beyotime), which is based on an absorbance shift of bicinchoninic acid.
Histopathological analysis
The longitudinal sections of the right kidney were fixed using 10% neutral phosphate-buffered formalin solution. For light microscopy, they were first dehydrated using a series of ethanol solutions (70%, 80%, 95%, and 100%), then processed in an autotechnicon, and finally embedded in paraffin. Sections 4-5 μm thick were cut by a rotary microtome (Leica Microsystems, Wetzlar, Germany) and stained by hematoxylin-eosin (H&E) and Masson's trichrome to evaluate for tubular necrosis and interstitial fibrosis. The stained specimens were examined by a blinded pathologist using an Olympus CX31 (Olympus Corporation, Tokyo, Japan) light microscope.
Tubular necrosis (dilation and vacuolation), inflammatory cell infiltrate, brush border loss, and the presence of cells or cellular debris in tubular lumens were assessed, and histopathological scoring of H&E-stained sections was performed. Twelve random sections (×200) from each kidney were analyzed, and histopathological damage was scored in the range 0-3: 1) score 0: damage affecting no more than 5% of the field; 2) score 1: damage affecting 5%-25% of the field; 3) score 2: damage affecting 25%-75% of the field; and 4) score 3: damage exceeding 75% of the field. All 12 scores were added to give the total necrosis score for each kidney.
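A minimal sketch of this scoring scheme in Python is given below; the twelve field fractions are hypothetical, and the handling of the interval boundaries is a choice made for the sketch.

def field_score(damaged_fraction):
    # Map the fraction of the field affected to the 0-3 damage score.
    if damaged_fraction <= 0.05:
        return 0
    if damaged_fraction <= 0.25:
        return 1
    if damaged_fraction <= 0.75:
        return 2
    return 3

fields = [0.02, 0.10, 0.30, 0.80, 0.05, 0.20, 0.40, 0.60, 0.03, 0.15, 0.50, 0.90]
total = sum(field_score(f) for f in fields)  # total necrosis score for one kidney
print(total)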
Collagen deposits were determined by the development of deep green color after Masson's trichrome staining of tissue sections. Briefly, 4-5 μm paraffin sections were deparaffinized and stained with Weigert's hematoxylin for 10 minutes. Thereafter, these sections were further stained first with a solution containing chromotropic acid, phosphotungstic acid, and glacial acetic acid for 10 minutes, followed by 0.5% light green solution for another 5 minutes. Collagen deposition in the cortex was quantified in 20 fields at ×200 magnification using the Image-Pro Plus software (Media Cybernetics, Rockville, MD, USA).
Immunohistochemical analysis
Paraffin-embedded kidney sections of 5 μm thickness were placed on adhesion microscope slides (CITOGLAS, Citotest Labware Manufacturing Co., Ltd, Jiangsu, People's Republic of China), deparaffinized in xylene, and then rehydrated in graded decreasing ethanol concentrations (100%, 95%, 80%, 70%, and 50%). Slides were rinsed gently with phosphate-buffered saline (PBS, pH 7.2) solution for 5 minutes and drained. The antigenic determinants in the kidney cells were unblocked by incubating the sections at 98°C for 20 minutes in 0.01 M citrate buffer (pH 6.0) to ensure heat-induced epitope retrieval. These sections were then rinsed with PBS (pH 7.2) for a further 3 minutes. Additionally, endogenous peroxidases were blocked by adding 3% hydrogen peroxide for 10 minutes at room temperature, followed by washing with PBS (pH 7.2). The sections were then immunoassayed with primary antibodies (4-HNE, 1:200; cleaved caspase 3, 1:300; Bax, 1:250; and LC3A/B, 1:1,000, diluted with 5% BSA in PBS) overnight in a humidity chamber at 4°C. Negative controls were processed without applying the primary antibodies. After washing three times, the sections were incubated for a further 20 minutes with the Polink-1 one-step polymer detection system kit (ZSGB-Bio Co. Ltd., Beijing, People's Republic of China). The chromogen 3,3′-diaminobenzidine system kit (ZSGB-Bio Co. Ltd.) was prepared and applied as instructed by the manufacturer for the color reaction. The slides were then counterstained with hematoxylin, dehydrated in ascending concentrations of ethanol and then xylene, and mounted. Every section was examined in a blinded manner using light microscopy (Olympus CX31; Olympus Corporation). For the quantification of the integral optical density of positively stained cells, the cells were counted in 20 consecutive high-power fields at ×200 magnification using the Image-Pro Plus software (Media Cybernetics).
Western blot
Renal tissues were homogenized in radioimmunoprecipitation assay buffer containing a protease inhibitor cocktail and centrifuged at 11,000 rpm for 10 minutes at 4°C. The protein concentration in the supernatants was measured using the bicinchoninic acid assay kit (Beyotime). An aliquot of the supernatant (30 μg protein) was then suspended in 4× Laemmli loading buffer containing dithiothreitol. It was then boiled for 5 minutes at 100°C, electrophoresed on 8% sodium dodecyl sulfate polyacrylamide gels at 150 V for 80 minutes, and subsequently transferred onto polyvinylidene difluoride (PVDF) membranes using a Trans-Blot semidry transfer cell (Bio-Rad Laboratories Inc., Hercules, CA, USA). After the transfer, the PVDF membrane was blocked using 5% skim milk at room temperature for 2 hours, followed by incubation with the specific primary antibodies (4-HNE, Nrf2, HO-1, P-gp, Bax, cleaved caspase 3, and LC3A/B, each diluted 1:1,000) at 4°C overnight. The samples were treated with anti-rabbit horseradish peroxidase-conjugated secondary antibodies at room temperature for 2 hours and visualized using the enhanced chemiluminescence detection system (Amersham Pharmacia Biotech, Piscataway, NJ, USA). Finally, the images were analyzed and quantified using ImageJ software (National Institutes of Health, Bethesda, MD, USA).
Pharmacokinetic calculation and statistical analysis
Pharmacokinetic parameters were calculated by noncompartmental analysis using a pharmacokinetic program (Data Access Service, Version 2.1; Wannan Medical College, Anhui, People's Republic of China). The grouped data were statistically analyzed by one-way analysis of variance (ANOVA) followed by the Newman-Keuls post hoc test using Prism 5.0 (GraphPad Software, Inc., La Jolla, CA, USA). P<0.05 was considered to indicate statistically significant results. Data are presented as the mean ± standard deviation from at least six independent experiments.
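The parameters reported below (AUC, t1/2, Tmax, clearance) come from standard noncompartmental formulas; the following Python sketch, with invented sampling data, shows how such estimates are typically obtained (linear trapezoidal AUC, terminal log-linear slope).

import numpy as np

t = np.array([0, 0.5, 1, 2, 4, 6, 8, 12, 24], dtype=float)      # hours
c = np.array([0, 310, 620, 980, 1450, 1600, 1350, 820, 190.0])  # ng/mL

cmax, tmax = c.max(), t[np.argmax(c)]
auc_0_t = np.trapz(c, t)                            # linear trapezoidal, ng*h/mL
lam_z = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]   # terminal slope (1/h)
t_half = np.log(2) / lam_z
auc_0_inf = auc_0_t + c[-1] / lam_z
cl_f = 25e6 / auc_0_inf   # dose 25 mg/kg = 25e6 ng/kg -> CL/F in mL/h/kg
print(cmax, tmax, auc_0_t, t_half, cl_f)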
Methodological validation of CsA
Representative chromatograms of the blank whole blood sample, the blank whole blood sample spiked with CsA and FK520, and a whole blood sample from a rat after a single oral administration of CsA (25 mg/kg) are shown in Figure 3. No interfering peaks were found at or near the retention times of CsA and FK520. In addition, the calibration curve of CsA in the whole blood of rats was established and exhibited a good linear response within the concentration range of 20 ng/mL to 5,000 ng/mL (the R² of the calibration curves was 0.995). The lower limit of quantification of the analytical method was 20 ng/mL for CsA. The extraction recovery and matrix effect of CsA are shown in Table 1. These results indicated that the extraction efficiency was excellent, and there was no matrix effect from the endogenous blood components on CsA. Table 2 summarizes the intraday and interday precision and accuracy of CsA measurements at three concentration levels. Stability samples (at three levels) subjected to short-term storage, posttreatment storage, three freeze-thaw cycles, and long-term storage were examined for their stabilities, and the results are summarized in Table 3. These results were within the acceptance criteria and indicated that the method was accurate, reliable, and reproducible.
Effect of different doses of SCE on the pharmacokinetics of CsA
The mean whole blood concentration versus time curves are presented in Figure 4, and the major pharmacokinetic parameters calculated by the noncompartmental model are listed in Table 4. After the oral administration of different doses of SCE to rats, the whole blood concentrations of CsA were increased. The apparent elimination half-life (t1/2) of CsA was significantly longer when it was administered with 216 mg/kg SCE than when it was administered alone. The time to reach the maximum concentration (Tmax) was delayed from 6.33±1.50 hours to 6.8±1.10 hours, 8.67±1.63 hours, and 13.5±7.55 hours when CsA was administered with 54 mg/kg, 108 mg/kg, and 216 mg/kg of SCE, respectively. The increase in AUC0−t and AUC0−∞ was approximately twofold when CsA was administered in combination with 216 mg/kg SCE. Meanwhile, the clearance was reduced by approximately twofold.
Table 1: The extraction recovery and matrix effect of CsA in rat whole blood samples at 50 ng/mL, 200 ng/mL, and 2,000 ng/mL concentration levels (n=6).
Effect of SCE on CsA-induced changes in the parameters of renal function within serum
Serum Cr and BUN are biomarkers of kidney function. As shown in Figure 5, the serum Cr and BUN levels in rats of the vehicle group were not statistically different from those of the control group. Rats on a low salt diet were treated with 25 mg/kg CsA for 28 days continuously, and lipid peroxidation was assessed by means of the thiobarbituric acid test. 24 As shown in Figure 6A, lipid peroxidation was increased significantly more in the kidneys of rats administered CsA (25 mg/kg) for 28 days than in the kidneys of rats from the vehicle group (P<0.05). Concomitant treatment with SCE at doses of 54 mg/kg and 108 mg/kg (P<0.05 and P<0.001, respectively) significantly prevented this increase. When compared with the CsA-only treatment group, the levels of thiobarbituric acid-reactive substance generation decreased by 22.34% and 31.63% when SCE was administered at doses of 54 mg/kg and 108 mg/kg, respectively. As shown in Figure 6B, there was also a significant reduction in CAT activity after CsA treatment when compared with the vehicle group (P<0.001). Treatment with SCE, on the other hand, showed significant improvement in CAT activity, which increased by 15.61% and 23.42% when SCE was administered at doses of 54 mg/kg and 108 mg/kg, respectively. Results obtained on the levels of GSH-Px activity in rats treated with CsA
showed that there was a significant reduction in the activity of GSH-Px within the kidney post-CsA treatment (P<0.05 versus the vehicle group), but when SCE was used in combination with CsA, the GSH-Px activity increased (P<0.05 versus the CsA group; Figure 6C).
Effect of SCE on CsA-induced histopathological changes in the kidney
Histological evaluation was performed using H&E and Masson's trichrome staining in order to demonstrate tubular necrosis and interstitial fibrosis in the renal tubules (Figure 7). To assess the degree of total tubular necrosis in H&E-stained sections (Figure 7A-F), the scores of tubular damage, inflammatory cell infiltrate, and cell debris in tubular lumens were semiquantitatively analyzed. The total tubular necrosis score (Figure 7G) was obtained by summing the six individual scores. Kidney sections from the control group (Figure 7A) and the vehicle-treated group (Figure 7B) showed unremarkable, normal histology. Those treated with CsA, however, showed histopathological changes in the renal cortex in the form of dilation of the tubules, severe vacuolation of the tubular cells, and an inflammatory cell infiltrate (13.20±4.11 versus vehicle control, P<0.001). It was interesting to note that kidneys isolated from rats subjected to the concomitant administration of SCE at doses of 54 mg/kg and 108 mg/kg alongside CsA revealed attenuation of the tubular structural damage and of the massive inflammatory cell infiltrate that could otherwise be seen in the absence of SCE (8.80±4.54 and 7.40±4.20 versus 13.20±4.11; P<0.05 and P<0.01, respectively; Figure 7E and F). Furthermore, kidney sections treated with SCE alone showed normal architecture (Figure 7D), which indicated that SCE by itself does not cause renal damage. Masson's trichrome staining (Figure 8) demonstrated that renal fibrosis was significantly increased in rats given CsA compared with the vehicle group (0.252±0.07 versus 0.137±0.04, P<0.001). Treatment with 54 mg/kg and 108 mg/kg SCE significantly decreased the renal fibrosis area in CsA-treated rats (0.175±0.05 and 0.122±0.05 versus 0.252±0.07, respectively, P<0.001).
SCE protects against the CsA-associated rise in 4-HNE
It has been appreciated that the pathogenic effect of toxic responses in the kidney can be attributed mainly to the generation of oxidative radicals. 4-HNE, a product of lipid peroxidation, 25 was detected using Western blot and immunohistochemical techniques (Figure 9). Western blot was used to analyze the presence of 4-HNE adducts to protein following a 28-day exposure to CsA. The data showed that 4-HNE adduction was increased 1.6-fold with CsA treatment when compared to tissue from the vehicle group (P<0.001). SCE has been reported to be a potent antioxidant that is primarily used to treat kidney and liver disease. Consequently, 54 mg/kg and 108 mg/kg SCE were coadministered with CsA.
Figure 7: Photomicrographs (×200, H&E) of renal tubules showing progressive stages of tubular necrosis and the respective necrosis scores. Notes: Twelve random sections (×200) from each kidney were examined and scored from 0 to 3 as described in the Methods; all 12 scores were added to give the total necrosis score for each kidney.
SCE promotes P-glycoprotein-mediated efflux in CsA nephropathy
P-gp is an adenosine triphosphate-binding cassette protein, which protects organs from the toxic effects of drugs and xenobiotics via an efflux mechanism. 26 Located in the apical membrane of several barrier epithelia, including the intestine, renal epithelial cells, bile canaliculi, and the brain capillary endothelial cells, P-gp may be responsible for drug interactions with phytomedicines through its inhibition or induction. 27 Changes in the levels of P-gp in renal tissue were measured by Western blotting. The results showed that the P-gp level was markedly increased when rats were exposed to CsA compared with vehicle control rats (Figure 10, P<0.001). Interestingly, the administration of CsA in combination with either 54 mg/kg or 108 mg/kg SCE caused a remarkable upregulation of P-gp expression. The expression was found to be increased by 9% when 108 mg/kg SCE was administered, indicating significant promotion of P-gp-mediated efflux (P<0.001).
Effect of SCE on HO-1 and Nrf2 expressions in renal tissues
Figure 10 (caption fragment): [...] and HO-1. GAPDH was used as the internal control. Data were analyzed by one-way ANOVA and presented as mean ± SD (n≥3). ***P<0.001. Abbreviations: SCE, Schisandra chinensis extract; P-gp, P-glycoprotein; Nrf2, nuclear factor erythroid 2-related factor 2; HO-1, heme oxygenase 1; CsA, cyclosporine A; GAPDH, glyceraldehyde-3-phosphate dehydrogenase; ANOVA, analysis of variance; SD, standard deviation; ND, normal diet.
There is sufficient evidence supporting the protective role of the Nrf2-mediated pathway against oxidative stress and inflammation. 28 Recent research has shown that Nrf2 has multiple functions, including acute and transient stress responses to oxidative insults. 29 HO-1 gene expression is mainly regulated by the Nrf2 antioxidant response element (ARE) pathway, and induction of this enzyme protects cells from injury and death caused by oxidative stress. 30 Nrf2 expression was found to have increased post-CsA administration, paralleling the enhanced antioxidant capacity, but surprisingly not the inflammatory changes in renal tissues. Oxidative stress, ischemia-reperfusion injury, cytokines, nitric oxide, and bacterial lipopolysaccharides increase the expression of HO-1. 31,32 As shown in Figure 10, low levels of Nrf2 and HO-1 were expressed in the control group and vehicle group. The expression of Nrf2 was significantly higher in the CsA and SCE groups when compared with the vehicle group (P<0.001). Furthermore, expression of HO-1 was also significantly increased following CsA treatment (P<0.001). When SCE was concomitantly administered along with CsA, the increase in Nrf2 and HO-1 expression levels was higher than that found in the CsA-only group (P<0.001). In particular, it can be deduced
that the increase in the expression of HO-1 during CsA treatment may be a defense mechanism counteracting CsA toxicity. In this study, we found that SCE attenuated renal injury induced by CsA, supporting the possibility that HO-1 induction plays an important protective role against CsA-mediated renal damage in an Nrf2-dependent manner.
SCE decreases the expression of the apoptosis-related proteins cleaved caspase 3 and Bax in CsA nephropathy
CsA is known to cause increased apoptosis of renal cells. Interestingly, apoptosis plays a central role not only in the physiological processes of kidney growth and remodeling but also in various human renal diseases and in drug-induced nephrotoxicity, including that induced by CsA. 33 Here, we evaluated whether SCE treatment suppressed the expression of the apoptotic markers Bax and cleaved caspase 3 in CsA-induced nephropathy. As is shown in Figure 11A-
P<0.001, respectively). At the same time, no significant differences were observed in renal cleaved caspase 3 between the control and vehicle groups. In contrast, however, Bax expression was found to be slightly upregulated in the vehicle group when compared with the control group. These results were attributed to the low-salt diet, which may have accelerated the renal cell apoptosis rate in these rats. Furthermore, as can be seen in Figure 11D-G, a significant elevation in cleaved caspase 3 and Bax expression was observed in the CsA-treated group when compared with the vehicle group. This elevation was suppressed by concomitant administration of SCE along with CsA.
SCE prevents the upregulation of LC3A/B protein expression in CsA nephropathy
The hallmark of autophagy is the presence of autophagosomes. These autophagosomes are double-membrane-bound compartments that contain cytoplasmic material and/or organelles. 34 Autophagy plays a complex role in promoting cell life and death; this role depends essentially on the cell type, the kind of stress experienced, and its duration. We tested, using Western blotting of the ratio of LC3-II (the lipidated form of LC3) to LC3-I (the cytosolic form), whether CsA-induced autophagy is inhibited by SCE cotreatment. As can be seen in Figure 12, when compared with the vehicle control group, there was considerable upregulation of LC3-II/I after CsA treatment (2.9-fold, P<0.001).
When compared with the CsA group, cotreatment of rats with 54 mg/kg and 108 mg/kg SCE decreased the expression level of LC3-II/I by 16.92% and 23.53%, respectively (P<0.001). Furthermore, immunohistochemical techniques were used to demonstrate that SCE inhibits autophagy during CsA treatment. As shown in Figure 12C and D, when compared with vehicle-treated animals, CsA treatment significantly increased positive staining of tubules at 28 days (P<0.001). When 54 mg/kg and 108 mg/kg SCE were coadministered with CsA, positive staining of tubules was found in significantly lower quantity (P<0.01 and P<0.001). These data suggest that SCE reduces CsA-induced autophagy.
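The LC3 readout above is simple band-densitometry arithmetic: a II/I ratio per lane, a fold-change versus vehicle, and a percent drop versus CsA. The band intensities below are hypothetical, chosen only to land near the reported 2.9-fold induction; they are not the study's data.

```python
# Sketch of the LC3-II/LC3-I densitometry arithmetic behind the reported
# fold-change and percent decreases. (LC3-II, LC3-I) densities are invented,
# nominally normalized to GAPDH as in the blots.
dens = {
    "vehicle": (0.40, 1.00),
    "CsA": (1.16, 1.00),
    "CsA+SCE54": (0.96, 1.00),
    "CsA+SCE108": (0.89, 1.00),
}
ratio = {group: lc3_ii / lc3_i for group, (lc3_ii, lc3_i) in dens.items()}
print(f"CsA vs vehicle: {ratio['CsA'] / ratio['vehicle']:.1f}-fold")
for group in ("CsA+SCE54", "CsA+SCE108"):
    drop = 100 * (ratio["CsA"] - ratio[group]) / ratio["CsA"]
    print(f"{group}: LC3-II/I down {drop:.2f}% versus CsA")
```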
Discussion
S. chinensis, one of the most important traditional herbal medicines, has been used extensively in coadministration with several drugs, more so than it has been used singly. The simultaneous use of herbal medicines with CsA has recently gained considerable attention as a useful adjunct, particularly in reducing the adverse effects of CsA.
Numerous studies have demonstrated that CsA-induced nephrotoxicity, which is characterized by decreased glomerular filtration rate and fibrosis, is a major cause of chronic renal disease in rat models, and this can be applied to clinical practice as well. 35 The molecular mechanisms underlying CsA-induced nephrotoxicity remain poorly understood. To further elucidate the in vivo mechanisms, we analyzed the effect of different doses of SCE on the pharmacokinetics of CsA in rats. We found a higher blood concentration of CsA, with a longer half-life, when a series of single doses of SCE was administered to rats. These results are consistent with a previous study, which demonstrated that the blood concentration of CsA in rats markedly increases after administration of the Wuzhi tablet (Schisandra sphenanthera extract). The inhibitory effect of the Wuzhi tablet ingredients on the activity of P-gp and CYP3A4 was deemed responsible for these effects. 36 Similarly, another study revealed that several constituents of S. chinensis can inhibit the activity of P-gp and CYP3A. 37,38 In previous reports, Schisandra lignan extract was found to cause irreversible inhibition of CYP3A until the enzyme could be newly synthesized. 39 Furthermore, CsA has also been shown to act as both a substrate and an inhibitor of CYP3A and P-gp. 40 Previous research has also demonstrated the effect of Wuzhi tablets on the transport, metabolism, and pharmacokinetics of digoxin and midazolam (typical P-gp and CYP3A substrates), providing evidence that the Wuzhi tablet has a direct inhibitory effect on the activity of P-gp and CYP3A. 41 Therefore, the strong inhibition by SCE of P-gp-mediated efflux and of CYP3A-mediated metabolism of CsA could result in an increase in systemic exposure to CsA. These results may provide more reliable experimental data and a scientific explanation for rational clinical application and related herb-drug interactions.
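As a hedged illustration of the half-life comparison discussed here, the terminal elimination rate constant is usually estimated by log-linear regression of late concentration-time points, with t1/2 = ln 2 / k. The whole-blood concentrations below are invented for the sketch and are not taken from this study.

```python
# Minimal one-compartment terminal-phase fit: regress ln(C) on t, take the
# negative slope as k, then t1/2 = ln(2)/k. All values are hypothetical.
import math

times = [4, 8, 12, 24]                 # h post-dose, terminal phase
conc = [820.0, 640.0, 500.0, 240.0]    # ng/mL whole blood (made up)

n = len(times)
t_mean = sum(times) / n
log_c = [math.log(c) for c in conc]
y_mean = sum(log_c) / n
k = -sum((t - t_mean) * (y - y_mean) for t, y in zip(times, log_c)) / sum(
    (t - t_mean) ** 2 for t in times
)
print(f"k = {k:.4f} 1/h, t1/2 = {math.log(2) / k:.1f} h")
```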
Additionally, we also demonstrated the ability of SCE to ameliorate CsA-induced nephrotoxicity in a rat model experiment. The majority of published studies assessing CsA-induced nephrotoxicity, which we reviewed for this work, are based on the salt-depleted rat model. Salt-depleted rats undergo renal injury faster than rats on a normal salt diet, and their kidneys show morphological and pathological changes similar to those occurring in renal transplant patients. 42 In agreement with previous studies, 43
the development of CsA-induced renal dysfunction in the current investigation was accompanied by a rise in renal MDA levels along with a significant reduction in renal CAT activity and GSH-Px levels. Treatment with SCE reduced CsA-induced renal dysfunction, as demonstrated by the improvement in the markers of renal injury mentioned previously. These findings are in line with other studies suggesting that oxidative stress plays a crucial part in the pathogenesis of CsA-induced nephrotoxicity. [46][47][48] As described previously, the renal functional status was in close agreement with the histopathological data obtained using H&E and Masson's trichrome staining. These data demonstrated tubular necrosis, vacuolation, and renal fibrosis in tubular cells of rats treated with CsA. The group coadministered SCE showed less renal damage, with fewer tubular cells affected, similar in appearance to the almost normal architecture observed in the vehicle group. The renoprotective mechanism of SCE is associated with attenuation of interstitial inflammation, fibrosis, apoptotic cell death, and oxidative stress.
CsA is known to increase ROS levels via two mechanisms: 1) CsA treatment itself can reduce levels of components of the antioxidant defense system, thereby enhancing ROS levels in the kidney, and 2) CsA-mediated production of transforming growth factor beta and angiotensin II can further enhance ROS levels by activating nicotinamide adenine dinucleotide phosphate oxidase. 49,50 Previous studies established that ROS production and oxidative stress are involved in CsA nephrotoxicity. 35,46,51 This study supported our hypothesis that SCE has renal protective effects in the CsA nephrotoxicity model, a well-established renal injury model. In this model, CsA-induced generation of ROS led to an increase in the expression of 4-HNE (a product of lipid peroxidation) and HO-1 (an inducible isoform responsive to oxidative stress). HO-1 is recognized as a protective gene in the kidney and is involved in the degradation of pro-oxidant heme, resulting in the production of anti-inflammatory, antioxidant, and antiapoptotic metabolites. 52 The finding that CsA treatment increased HO-1 expression in an Nrf2-dependent manner was consistent with and confirmed the findings of a previous report on rat kidney tubular epithelial cells and murine fibroblasts. 28,31 Several studies revealed that some lignans isolated from the fruits of S. chinensis, including schizandrin B, alpha-iso-cubebenol, and tigloylgomisin H, activate the Nrf2 pathway and consequently exhibit anti-inflammatory properties and potential liver cancer prevention abilities. 53,54 It has not been determined whether SCE-induced Nrf2 signaling is required to counteract CsA-induced nephrotoxicity.
Indeed, SCE can increase its expression through enhanced nuclear translocation of Nrf2 and subsequent activation of downstream antioxidant enzyme targets. Both basal and inducible expression of many of these antioxidant enzymes are regulated by Nrf2, a member of the cap 'n' collar family of basic leucine zipper transcription factors, through the antioxidant response element. 55,56 Numerous comparative studies of the phenotypes of wild-type and Nrf2-disrupted mice have revealed the pivotal role of Nrf2 in protection against oxidant injuries: Nrf2-disrupted mice are much more susceptible to toxicity mediated by environmental chemicals and stresses than wild-type mice. 28,57 The mechanism of renoprotection induced by SCE was further investigated. Apoptosis is an essential process in the development and tissue homeostasis of most multicellular organisms, and deregulation of apoptosis has been implicated in the pathogenesis of CsA-induced nephropathy. 58 Apoptosis is initiated by two distinct pathways: an intrinsic pathway involving mitochondria and an extrinsic pathway leading to rapid recruitment of Fas-associated protein with death domain and caspase 8. 59 Using immunoblotting, the expression of cleaved caspase 3 and Bax was found to be increased in the kidneys of rats that received CsA in this study, while SCE coadministration attenuated the increase in these apoptotic factors. Hence, the conclusion drawn from this result was that CsA can induce apoptosis in the kidney by increasing cleaved caspase 3 and Bax expression. 51 On the other hand, immunostaining for cleaved caspase 3 and Bax, markers of apoptosis, was significantly attenuated in rats treated with SCE. Therefore, it was inferred that SCE treatment protects tubular cells against CsA-induced apoptosis by attenuating the activation of apoptotic factors. The apoptosis induced by CsA may develop through the intrinsic pathway, because CsA not only promotes Bax aggregation and translocation to the mitochondria but also causes a caspase-dependent loss of mitochondrial membrane potential and activation of caspase 3. 60,61 Apoptosis has been clearly witnessed in tubular and interstitial cells of transplanted patients with chronic cyclosporine nephrotoxicity. 62 Tubular cell apoptosis has also been observed in animal and cell culture models. 12 Apoptosis of different types of cells can lead to different outcomes; for instance, excessive apoptosis in tubular epithelial cells results in tubular atrophy and loss of functional mass, whereas clearance of inflammatory cells by apoptosis facilitates renal structure remodeling and functional recovery. 63 More importantly, the antiapoptotic effects of several lignan compounds isolated from S. chinensis have been studied previously. Pretreatment with gomisin A from S. chinensis was also found to inhibit renal apoptosis through suppression of caspase 3 activity and elevation of serum tumor necrosis factor alpha, leading to a decrease in the number of apoptotic cells. 64,65 Taken together, these findings demonstrate that SCE reduces excessive apoptosis and that it may play an important role in preventing the development of CsA nephropathy.
Autophagy is a process for degrading proteins and organelles and recycling materials in response to cellular stress, and it is accompanied by the progressive development of vesicle structures from autophagosomes. 66 It has been recognized to play a critical role in removing both protein aggregates and damaged or excess organelles to maintain intracellular homeostasis and keep the cell healthy. However, excessive or inappropriate autophagy can also lead to cell death (autophagic cell death). Recent reports have demonstrated that CsA induces autophagy in human tubular cells and in vivo in rat kidneys, and it has been suggested that autophagy serves as a protective mechanism against CsA toxicity. 67 In this study, we evaluated whether CsA-induced autophagy caused renal injury and whether this could be attenuated by concomitant treatment with SCE. We found that the expression of LC3A/B was significantly increased after CsA treatment, and concomitant administration of SCE reduced it. Overall, the induction of protein aggregates by CsA treatment was decreased by SCE cotreatment, and this effect may contribute to a decrease in subsequent autophagosome formation in CsA nephropathy.
Conclusion
To summarize, whole-blood concentrations of CsA in rats were significantly increased after cotreatment with SCE. SCE could increase the systemic exposure of CsA in rats due to its strong inhibitory effect on P-gp and CYP3A activity. At the same time, concomitant administration of SCE was found to be protective against CsA-induced nephrotoxicity. SCE administration led to reductions in the lipid peroxidation, oxidative stress, apoptosis, and autophagy normally associated with CsA. SCE cotreatment prevented CsA-mediated inhibition of the CAT and GSH-Px enzymes. Maintaining CAT and GSH-Px activity by administering SCE with CsA reduced ROS levels and oxidative stress. Subsequently, we demonstrated that the Nrf2 system plays a protective role in CsA-mediated renal fibrosis, and that HO-1 might make a major contribution among Nrf2 target genes to preventing renal dysfunction. Finally, it can be deduced from our study that renal damage caused by CsA involves tubular cell apoptosis and autophagic cell death, and that SCE plays a protective role against CsA-induced renal injury by decreasing the induction of apoptotic factors and excessive autophagosome formation. In conclusion, SCE may serve as an effective adjunct to a CsA-based immunosuppressive regimen to ameliorate CsA-induced nephrotoxicity. Most importantly, when they are used in combination, adjustment of the CsA dose schedule should be considered to account for the effects of SCE on the pharmacokinetics of CsA. Based on the current findings, the therapeutic prospect of SCE for this purpose is recommended. | 2016-08-09T08:50:54.084Z | 2015-08-28T00:00:00.000 | {
"year": 2015,
"sha1": "d0743630b4b18df29ad7e266d5e36bba4c93d738",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=26800",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8953294accdc9fe044830b2b9ea36c1d0b56c2f8",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
258402952 | pes2o/s2orc | v3-fos-license | Comparative gender analysis of the seroprevalence of varicella zoster virus among HIV-infected individuals receiving care at Offa, north-central Nigeria
Introduction:
The herpes group of viruses constitutes the major viral opportunistic infections (OIs) among HIV-1 infected individuals. Opportunistic infections occur as a result of immune deficiency and have been recognised as the main reason for hospitalization and substantial morbidity in HIV-infected patients (1). Currently, eight human herpes viruses belonging to the herpesviridae family are known: herpes simplex virus 1, herpes simplex virus 2, varicella-zoster virus (VZV), Epstein-Barr virus (EBV), cytomegalovirus (CMV), human herpesvirus 6, human herpesvirus 7, and human herpesvirus 8 (HHV-8). These viruses share a characteristic ability to remain latent within the body over a long period of time.
Varicella zoster virus, a member of the α-herpesvirinae subfamily, is an exclusively human pathogen. It is a highly infectious virus and is endemic worldwide. Primary infection with VZV leads to acute varicella or "chicken pox". Infection is usually through direct contact with skin lesions or through airborne spread from respiratory droplets (2). Initial infection is followed by establishment of lifelong latency in the cranial nerve and dorsal root ganglia, from where reactivation can occur years to decades later as herpes zoster or "shingles", which is characterized by a painful, localized, vesicular rash in one or adjacent dermatomes.
In HIV-infected individuals, reactivation of VZV causes prolonged and severe manifestations of herpes zoster (3). Although zoster is not viewed as an AIDS-defining illness, it can indicate immunodeficiency and tends to occur more often in patients with HIV (4). In fact, herpes zoster occurs at all stages of HIV infection (5), with reactivation occurring as a result of HIV-induced immunodepression. Other risk factors for herpes zoster include cancers and chronic medical conditions (6). Zoster is said to afflict about 20% of the general population during their lifetime, especially the elderly, and about 8% to 11% of patients with AIDS (7). While recurrent episodes of zoster in non-immunocompromised patients are rare, occurring in about 1%-4% of cases (8), recurrence increases in AIDS patients to about 10%-27% of zoster cases (9).
Reactivation in the trigeminal ganglion also results in herpes zoster ophthalmicus (HZO), which may be the initial manifestation of HIV infection. The rate of reactivation is higher in immunocompromised patients and older people (10). The incidence of HZO is believed to be six times greater in HIV/AIDS patients than in healthy people, and it occurs in 5%-15% of HIV-positive patients (11). Studies have increasingly shown a gender bias in the incidence of zoster, with females more likely to acquire zoster than males. This has been highlighted by a review of several epidemiological studies (12). Hitherto attributed to the greater longevity of females than males, research by Fleming et al. (13) however showed greater female incidence in almost all age groups, indicating the involvement of factors other than age which are yet to be fully understood.
Although reports have it that over 60% of patients with HZO in Nigeria are HIV-positive (14,15), there is a paucity of data showing the true burden of the infection among HIV-infected persons in the country. In this study, we determined the seroprevalence of VZV in a cross-section of HIV-1 infected patients receiving treatment at the General Hospital Offa, Kwara State, north-central Nigeria. We also compared male and female distribution patterns as well as other associated risk factors.
Materials and method: Study setting, design and population:
This comparative cross-sectional hospital-based study was conducted among HIV-infected individuals accessing antiretroviral treatment at the General Hospital Offa, Kwara State, north-central Nigeria.
Sample size and method of sampling:
A total of 273 HIV-infected patients were selected by simple random sampling, comprising 93 (34.1%) males and 180 (65.9%) females. The sample size of 273 was determined using Fisher's formula (16).
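The text cites the formula without showing it; the usual single-proportion version is n = Z²p(1−p)/d². The p and d below are assumptions chosen so that n comes out near the 273 actually enrolled, not parameters stated by the authors.

```python
# Sketch of the sample-size arithmetic usually meant by "Fisher's formula":
# n = Z^2 * p * (1 - p) / d^2, with Z = 1.96 for 95% confidence.
import math

def sample_size(p: float, d: float, z: float = 1.96) -> int:
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

# Assumed anticipated prevalence p ~ 0.77 and 5% precision give n = 273.
print(sample_size(p=0.77, d=0.05))
```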
Data and sample collection:
Relevant socio-demographic and clinical information, including age, marital status, occupation, CD4+ cell count, history of chicken pox, and history of vaccination against VZV, was recorded using well-structured questionnaires. This information was obtained both directly from the patients and from their clinical record books. About 5 ml of venous blood was obtained from each patient into a sterile bottle and allowed to clot at room temperature. The serum was aspirated into a new Eppendorf tube, appropriately labelled, and stored at -20°C until tested.
Serological assay:
Serum samples were tested for the presence of IgG antibodies using a commercially available Enzyme-Linked Immunosorbent Assay (ELISA) kit (Diagnostic Automation, Inc., Calabasas, CA, USA) for the detection of VZV-specific IgG antibodies. The tests were performed and interpreted according to the manufacturer's instructions.
Ethical clearance:
Ethical clearance for the study was obtained from the Ethical Review Committee (ERC) of the Kwara State Ministry of Health, Nigeria (MOH/KS/EU/777/174). The ethical standards of the committee and the Helsinki Declaration of 1975 (revised 2000) were strictly adhered to in carrying out the study. Prior to the study, informed consent was obtained from each participant, and from the parents where participants were below 18 years of age.
Statistical analysis:
Data were analysed using the Statistical Package for Social Sciences version 22 (SPSS Inc., Chicago, USA). Pearson's chi-square or Fisher's exact test was used, where appropriate, to test associations at a 95% confidence interval. A p value <0.05 was considered statistically significant.
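As a sketch of the tests named here, the gender-by-serostatus counts reported in the Results (78/15 positive/negative males, 132/48 females) can be run through scipy; with Yates' continuity correction, which scipy applies by default for a 2×2 table, this reproduces a chi-square close to the reported 3.265.

```python
# Chi-square and Fisher's exact test on the 2x2 gender-by-serostatus table.
from scipy.stats import chi2_contingency, fisher_exact

table = [[78, 15],    # males: anti-VZV IgG positive, negative
         [132, 48]]   # females: positive, negative

chi2, p, dof, expected = chi2_contingency(table)  # Yates-corrected for 2x2
odds, p_exact = fisher_exact(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}; Fisher exact p = {p_exact:.3f}")
```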
Results:
Comparative seroprevalence of VZV in male & female patients with respect to other sociodemographic characteristics: Over the period of September 2019 to March 2020, a total of 273 HIV-infected patients were randomly selected for the study, comprising 93 males (mean age 47.4 years) and 180 females (mean age 43.3 years). Of these, 210 (76.9%) were seropositive for VZV infection: 78 (83.9%) of the 93 male and 132 (73.3%) of the 180 female patients tested positive for anti-VZV IgG antibodies (χ2=3.265, p=0.071).
The seroprevalence of VZV was not significantly associated with age group in either male (χ2=8.014, p=0.155) or female (χ2=4.689, p=0.455) patients. However, the highest seroprevalence (100%) was recorded among patients aged ≤20 years in both sexes, while the lowest seroprevalence among males (70.0%) was recorded in the 21-30 and >60 years age groups. Among the females, the lowest seroprevalence (67.7%) was recorded in the 31-40 years age group (Table 1).
VZV seroprevalence was strongly associated with marital status in both male (χ2=12.46, p=0.006) and female (χ2=12.139, p=0.007) patients. While the highest seroprevalence among males (89.8%) was seen in married participants, the highest seroprevalence among females (100%) was recorded in widowed and single (unmarried) patients. The lowest seroprevalence among males (40%) was recorded in widowers, while among females the lowest (50%) was recorded in divorced women (Table 1).
Analysis also showed a strong association of VZV seroprevalence with occupation in both male (χ2=21.515, p=0.0007) and female (χ2=11.173, p=0.025) patients. Among males, artisans, civil servants, and participants whose occupation was not determined all had 100% seroprevalence, while the lowest seroprevalence was recorded among students. Among females, the highest seroprevalence (86.4%) was recorded in patients with undetermined occupation, while the lowest (57.1%) was recorded among artisans (Table 1).
Comparative seroprevalence of VZV in male & female patients with respect to CD4+ count:
The seroprevalence of VZV infection was statistically associated with CD4+ cell count in both male (χ2=5.648, p=0.017) and female (χ2=6.448, p=0.011) patients. The highest seroprevalence (88.2% for males and 78.7% for females) was recorded among patients with CD4+ cell counts ≤350/µL, while the lowest rates (64.7% for males and 60.4% for females) were recorded among patients with CD4+ counts >350/µL (Table 2).
Comparative seroprevalence of VZV in male & female patients with respect to past history of chicken pox:
Records showed that males seropositive for VZV were about twice as likely to have suffered from chicken pox as their female counterparts. Of the 78 males seropositive for VZV, 25 (32%) had a history of chicken pox either as a child or as an adult, while of the 132 seropositive females, 23 (17.4%) had a previous history of chicken pox (χ2=5.148, OR=2.235, 95% CI=1.162-4.302, p=0.023) (Fig 1).
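The quoted odds ratio and confidence interval follow directly from the 2×2 table implied by this paragraph (25/53 males and 23/109 females with/without a chicken pox history among the seropositives); the sketch below recomputes them with a standard Wald interval and matches the reported OR = 2.235 and CI = 1.162-4.302.

```python
# Odds ratio with Wald 95% CI from the chicken-pox-history 2x2 table.
import math

a, b = 25, 53    # seropositive males: chicken pox history yes / no
c, d = 23, 109   # seropositive females: history yes / no

odds_ratio = (a * d) / (b * c)
se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
low = math.exp(math.log(odds_ratio) - 1.96 * se_log)
high = math.exp(math.log(odds_ratio) + 1.96 * se_log)
print(f"OR = {odds_ratio:.3f}, 95% CI = {low:.3f}-{high:.3f}")
```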
Discussion:
Varicella zoster virus infection is common and contributes significantly to morbidity and mortality, especially among HIV-infected individuals. Despite this, it has not been given adequate recognition, especially in Nigeria. In this study, prevalences of 83.9% and 73.3% were recorded for male and female patients, respectively. As far as can be ascertained, there has been no previous epidemiological study of VZV infection among HIV-positive patients in Nigeria, so comparing our results with local findings is difficult. This reflects how neglected the infection is in Nigeria. Since vaccination against VZV is not yet routine in Nigeria, and since none of the study participants reported having received VZV vaccination, the seroprevalence obtained in this study indicates the level of previous natural exposure to the virus.
The 83.9% and 73.3% VZV seroprevalence rates recorded for male and female participants in this study indicate similar rates of exposure of both genders to the infection and are comparable to a report from a similar study in Italy (17). Although a lower prevalence has been reported in India (18), the rates obtained in this study are appreciably lower than the reported 90%-100% seroprevalence in more developed regions of the world, such as North America, Western Europe, New Zealand, and Japan (19,20). It has earlier been noted that about 50% of young adults in tropical regions have not been exposed to primary VZV infection (20). High ambient temperature and humidity (22) as well as high ultraviolet radiation (23), such as seen in tropical regions like Nigeria, have all been reported to have a deleterious effect on the virus. This may partly explain the relatively lower seroprevalence in this study compared with more developed regions of the world.
The age group of the patients was not significantly associated with the seroprevalence of VZV infection, and this was observed for both male and female participants in this study. Other similar studies also did not find any association between age and the prevalence of VZV antibodies (24,25). The high seroprevalence in the younger age groups in this study is not out of place, since the virus is known to be acquired at a very young age. In temperate countries, most infections are known to occur before adolescence (26), while in tropical regions primary infection is usually delayed until adolescence (27). A seroprevalence of 66.3% has previously been reported among children ≤15 years in Kaduna, northern Nigeria (28).
On the other hand, marital and occupational status were significantly associated with the seroprevalence of VZV infection in both male and female patients in our study. While being a student or a professional correlated with low VZV seroprevalence among the male patients, this was not so among the female patients. Few studies have looked into the association between occupation and acquisition of VZV. In one such study, conducted in a healthcare setting, a positive association was observed between VZV seronegativity and the job of the patients (29). Although no immediate reason can be advanced for the observed association of prevalence with the marital and occupational status of the patients in our study, it may not be unconnected with an increased risk of contact with infected individuals in some groups than in others. Some other studies, however, did not find any association with these variables (25). Immunologically, a significant association was observed between VZV seroprevalence and CD4+ count in both the male (p=0.017) and female (p=0.011) patients in our study. Cell-mediated immunity is believed to play a significant role in maintaining the latent state of VZV infection (30); the lower seroprevalence observed among patients with higher CD4+ counts may therefore be a result of their relatively strong cellular immunity. On the other hand, individuals with more advanced HIV and decreased CD4+ cell counts (≤350 cells/µL) had a higher seroprevalence of VZV due to their low cellular immunity and therefore stand a higher chance of developing zoster in the near future (31).
We also looked into the medical history of the participants and discovered that males were about twice as likely to suffer from chicken pox following VZV infection as their female counterparts (ratio 1.8:1). Thirty-two percent of the males seropositive for the virus had a history of chicken pox, while 17.4% of the females did, indicating that more females had subclinical infection (without chicken pox) than their male counterparts following exposure. Whether the tendency to suffer more acute infection (leading to chicken pox), as observed among males in this study, translates into reduced chances of suffering from zoster in the future remains to be investigated.
Conclusion:
The seroprevalence rate of VZV in HIV-infected individuals in Offa, Kwara State, Nigeria, is high, and the rate is similar in male and female patients. However, a greater percentage of seropositive males than females reported a history of chicken pox.
Table 1: Comparative seroprevalence of VZV among male and female HIV patients in relation to demographic variables
Table 2: Comparative seroprevalence of VZV infection among male and female HIV-infected patients in relation to CD4+ count
Fig 1: Gender-specific seroprevalence of varicella zoster virus infection with respect to history of chicken pox | 2023-04-30T15:05:32.643Z | 2023-04-18T00:00:00.000 | {
"year": 2023,
"sha1": "e7c8182c18eb3d63291eeef5e71dd088f0309a98",
"oa_license": null,
"oa_url": "https://www.ajol.info/index.php/ajcem/article/download/246020/232743",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3efd23620848b6c1239ed70812d161b916855bff",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
49545442 | pes2o/s2orc | v3-fos-license | EMISSION MEASUREMENTS OF GEOGENIC GREENHOUSE GASES IN THE AREA OF "PUSTY LAS" ABANDONED OILFIELD (POLISH OUTER CARPATHIANS)
The Carpathians may play a significant role as a supplier of greenhouse gases to the atmosphere. Unfortunately, most of the discovered oil and gas deposits are now only historical objects. An example is the Sękowa-Ropica Górna-Siary oil deposit located in the marginal part of the Magura Nappe, where oil had been extracted in dug wells until the mid XX century. One of such extraction sites is the "Pusty Las" oilfield. In that area, 10 methane and carbon dioxide emission measurement sites were located, among which 4 in dried dug wells and 6 in dug wells still filled with oil and/or water. Dynamics of methane and carbon dioxide concentration changes were measured with the modified static chamber method. Gas samples were collected immediately after the installation of the chamber and again after 5 and 10 minutes. In the case of reclaimed or dry dug wells, the static chamber was installed directly at the ground surface. In wells still filled with oil and/or water, the chamber was equipped with an "apron" mounted on special sticks. The dynamics of concentration changes varied from -0.871 to 119.924 ppm∙min-1 for methane and from -0.005 to 0.053 vol.%∙min-1 for carbon dioxide. Average methane emission was 1.9 g∙m-2∙d-1 and that of carbon dioxide was 26.95 g∙m-2∙d-1. The measurements revealed that an abandoned oil field supplies significant amounts of greenhouse gases to the atmosphere, although the emission of methane is lower than that measured e.g. in mud volcanoes located in various parts of the world.
INTRODUCTION
The emission of geogenic methane and carbon dioxide is regarded as one of the causes of global climate change. The results of studies conducted worldwide indicate that gas emissions from such sources contribute strongly to the increasing concentrations of greenhouse gases in the atmosphere, among which methane and carbon dioxide play crucial roles [Mazzini et al., 2009; Hong et al., 2009].
The emission of methane sourced by hydrocarbon generation and expulsion in petroleum basins is quoted as one of the two types of methane release to the atmosphere controlled by geological factors. It currently accounts for about 7-14% of global annual methane emission (from ~40 to 60 Tg CH4/year). In Europe, total methane emission from seepages reaches about 3 Tg/year [Etiope, 2008]. The soil-to-atmosphere emission of carbon dioxide may result from bacterial oxidation of CH4, from migration from endogenic sources, or from CO2 generation under anoxic conditions, simultaneously with the methane [Waleńczak, 1987]. Other examples of geogenic sources of gas emissions are mud volcanoes and/or geothermal systems.
Piotr Guzy, Dawid Pietrzycki, Anna Świerczewska, Henryk Sechman, Anna Twaróg, Adrianna Góra
Figure 1. Geological sketch of the research area (Żytko et al., 1989; Świerczewska, 2005; modified)
Hydrocarbon seepages are the most typical evidence of the hydrocarbon potential of petroleum basins. The Outer Carpathians are an area of significant petroleum potential, as revealed by both macro- and microseepages and by gas exhalations [Kuśmierek et al., 2007; Kuśmierek and Machowski, 2008; Lipińska 2010]. Hydrocarbon macroseeps are typical of petroleum basins with complicated tectonics and are observed mostly in the elevated, axial zones of fold structures and in dislocation zones. Hydrocarbon migration from deep accumulations to the near-surface zone is predominantly controlled by diffusion and effusion [e.g.: Jones and Drozd, 1983; Matthews, 1996]. Hence, the Carpathians may be a regional source of supply of greenhouse gases to the atmosphere.
In the early years of modern petroleum exploration, hydrocarbon seeps were the first exploration leads in many world-famous basins, including the Carpathians. Even before World War II, most of the petroleum wells from which commercial inflows of oil and gas were obtained were located within zones of natural hydrocarbon seeps [Link, 1952].
In the Carpathians, hydrocarbons have been commercially exploited since the mid XIX century. Peak oil production from the Carpathian fields was reached in 1909 (over 2 Mt) [Karnkowski, 1999]. An example of a field where oil was worked from shallow dug wells is the Sękowa-Ropica Górna-Siary deposit, which operated until the mid XX century. The deposit is located in the marginal zone of the Magura Unit. The name of the exploitation site is "Pusty Las" ("Empty Forest") [Pietrzycki, 2013].
The present paper aims to estimate the quantity of methane and carbon dioxide emission to the atmosphere from old, inactive dug wells located in the area of the "Pusty Las" oil site.
STUDY AREA
The study area is located in the Sękowa Commune, in the Gorlice County of the Małopolska District. From a geological point of view, this area belongs to the Outer Carpathians, precisely to the marginal zone of the Magura Unit (the Siary Sub-unit) (Fig. 1).
Geological setting
The Magura Unit is the structurally highest and southernmost tectonic unit of the Outer Carpathians. From the south, it borders the Pieniny Klippen Belt, and from the north it is thrust over the Fore-Magura and Silesian units [Oszczypko, 2004].
Generally, the Magura Unit is the least hydrocarbon-prone unit in the whole Outer Carpathians [Karnkowski, 1999].
Field works
The studies were carried out in the area of the Pusty Las "oil mine". In this area, some oil dug wells have already been closed, i.e., entirely filled up with rubble, while others are still empty (or "open") and thus recognizable as hollows of various sizes.
In the field studies, the modified static chamber method was applied [Leventhal, 1992; Dzieniewicz et al., 2002; Korus et al., 2005; Sechman et al., 2006], in which gas is collected at predefined time intervals from the inner space of a closed chamber installed at the terrain surface. For sampling of dug wells filled with oil and/or water, the chamber rested over the open space on two transversal levers. The chamber itself is a vessel with a volume of 48 dm3, made of stainless steel, and is provided with a sealed opening that allows a gas sample to be drawn from the interior volume of the chamber. The area directly covered by the chamber is 28.27 dm2.
For gas sampling, 10 sites were selected (Table 1), of which 4 were filled-up (closed) dug wells (sites Nos. 1, 2, 3 and 4) and 6 were open dug wells (sites Nos. 5, 6, 7, 8, 9 and 10). Earlier experiments [Sechman and Dzieniewicz, 2009] demonstrated that the highest dynamics of changes in gas concentrations occur in the interval from 0 to 10 minutes after chamber installation. Hence, at each site, gas was sampled immediately after chamber setting, then after 5 and after 10 minutes. During the sampling, the atmospheric pressure was measured together with the soil temperature at 10 cm depth. Gas samples were pumped into special glass vessels filled with brine [Dzieniewicz and Sechman, 2002]. In total, 30 gas samples were collected from the 10 sampling sites. Additionally, an atmospheric air sample was taken from the area of the "oil mine". The field works were carried out in the area of the Pusty Las "oil mine" in April 2013. In the case of closed dug wells, the rubbish and the outer, 10-cm-thick soil layer were removed, and the chamber was pressed into the ground. The chamber walls contacting the ground were sealed (Fig. 3A). As the chamber could not be installed directly in the dug wells filled with oil and/or water, it was equipped with a special "apron" and placed over the well space on two transversal bars (Fig. 3B).
Laboratory tests
The molecular composition of the gas samples was analyzed at the Laboratory of Gas Chromatography of the Department of Fossil Fuels. We used Fisons Instruments GC 8160 and Carlo Erba Instruments GC 6300 gas chromatographs equipped with FID and TCD detectors. In each sample, methane, ethane, propane, i-butane, n-butane, neo-pentane, i-pentane, n-pentane, ethylene, propylene, 1-butene and carbon dioxide were determined. The detection limit of the FID is 0.01 ppm for hydrocarbons. Analytical precision is 2% of the measured value and 10% at the detection limit. The TCD detection limit for carbon dioxide is 100 ppm, with an estimated precision of 2% of the measured value and 10% at the detection limit.
METHODOLOGY OF STATISTICAL ANALYSIS AND RESULTS PRESENTATION
The populations of measured concentrations of gaseous hydrocarbons were initially characterized by determination of basic statistical parameters, calculated for all the obtained results and separately, for samples collected over the closed and the open dug wells.The relationships between measured methane and carbon dioxide concentrations were evaluated using the correlation plots based upon the cartesian coordinates XY system.Such plots were constructed for the whole results and separately, for the results for closed and open dug wells.In order to evaluate the dynamics of changes in concentrations of methane and carbon dioxide, linear plots were drawn, which illustrated * -percentage of samples with concentration of given component over detection limit; ** -minimum, maximum, mean, median and standard deviation in vol.%, b.d.l.-below detection limit (detection limit for hydrocarbons is 0.01 ppm and 100 ppm for carbon dioxide).the changes of gas concentrations in time.Then, computer-generated, linear trend of concentration changes was overprinted on the plots.For the trend lines, the coefficient of determination R 2 was calculated.For both the methane and carbon dioxide, results were noted in ppm and in volume percents per minute.The emission was expressed in milligrams per square meter per day for methane emission and in grams per square meter per day for carbon dioxide emisssion.
METHODOLOGY OF EMISSION CALCULATION
The quantities of methane and carbon dioxide emissions were calculated using the formula established during previous geochemical surveys [Korus et al., 2002; Dzieniewicz et al., 2006; Sechman and Dzieniewicz, 2009]. If the collection chamber was installed at the ground surface, the area covered by the chamber was constant. If the collector was mounted over an open dug well, this parameter depended on the changing surface covered by the "apron".
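The emission formula itself is only cited, so the following is a hedged reconstruction of the usual static-chamber conversion from a concentration slope (ppm·min-1) to a mass flux (g·m-2·d-1), using the chamber volume (48 dm³) and footprint (28.27 dm²) given earlier together with the ideal gas law; the exact formula in the cited papers may differ in detail, and the temperature and pressure defaults are assumptions.

```python
# Static-chamber flux sketch: moles of air in the chamber from PV = nRT,
# then slope (ppm/min) -> mol gas/min -> g per m^2 per day.
R = 8.314  # J mol^-1 K^-1

def flux_g_m2_d(slope_ppm_min, molar_mass_g_mol, temp_c=20.0,
                pressure_pa=101325.0, volume_m3=0.048, area_m2=0.2827):
    mol_air = pressure_pa * volume_m3 / (R * (temp_c + 273.15))
    mol_gas_per_min = slope_ppm_min * 1e-6 * mol_air
    return mol_gas_per_min * molar_mass_g_mol / area_m2 * 1440.0

# e.g. the maximum methane slope reported in the abstract:
print(f"CH4 flux at max slope: {flux_g_m2_d(119.924, 16.04):.2f} g m^-2 d^-1")
```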
Statistical characterization of population of all measured gas concentrations.
In the analyzed gas samples, saturated hydrocarbons C1-C5 and carbon dioxide were detected. Methane concentrations varied from 0.81 to 1220 ppm. The maximum ethane concentration was 3.62 ppm, and this gas was found in almost 40% of the analyzed samples. Propane occurred in 19.4% of the analyzed samples and its values reached 3.5 ppm, with a mean value of 0.16 ppm. The same percentage of samples contained i-butane, whereas i-pentane and n-pentane were detected in over 16% of the analyzed samples. Maximum butane and pentane concentrations did not exceed 1 ppm. Trace amounts of ethylene were measured in only two samples, whereas the remaining gaseous alkenes were absent from the analyzed material. The concentrations of carbon dioxide varied from 0.07 to 0.22 vol.%, with a mean value of 0.12 vol.% (Table 1).
Concentrations of methane and carbon dioxide measured in the atmospheric air from the area of the Pusty Las site were 3.68 ppm and 0.09 vol.%, respectively. For comparison, the average concentration of atmospheric methane is about 2 ppm and that of carbon dioxide is about 0.03 vol.%. Moreover, concentrations of methane and carbon dioxide measured over the closed bituminous coal mines in the Wałbrzych mining district (SW Poland) were 5 ppm and 0.05 vol.%, respectively [Korus et al. 2002], and the methane concentration measured in air sampled immediately over a mud volcano in Azerbaijan was 99.65 vol.% [Mazzini et al., 2009].
Statistical characterization of gas concentrations in samples collected from over the closed oil dug wells
In gas samples collected from the closed oil dug wells, methane, ethane, propane, i-butane, i-pentane, n-pentane and carbon dioxide were detected. Methane was found in all of the analyzed samples, in concentrations from 0.8 to 1220 ppm, with an arithmetic mean of 125.4 ppm and a median value of 3 ppm. Ethane was detected in almost 42% of the analyzed samples in concentrations up to 3.62 ppm (arithmetic mean 0.4 ppm). Propane was observed in almost 17% of the analyzed samples; its concentrations reached up to 3.5 ppm, with an arithmetic mean of 0.32 ppm. Maximum concentrations of i-butane and n-butane were 0.77 and 0.65 ppm, respectively, and were observed in almost 17% of the analyzed samples. Neo-pentane was not detected, whereas i-pentane and n-pentane were observed in about 17% of the analyzed samples, but in amounts below 0.5 ppm. The concentrations of carbon dioxide varied from 0.07 to 0.22 vol.%, with an arithmetic mean of 0.14 vol.% (Table 2).
Statistical characterization of gas concentrations in samples collected from over the open oil dug wells
In gas samples collected from the open oil dug wells, methane, ethane, propane, i-butane, neo-pentane, i-pentane, n-pentane and carbon dioxide were detected. Methane was observed in all of the analyzed samples, in concentrations from 1.41 to 288.5 ppm, with an arithmetic mean of 28.7 ppm and a median value of 2.9 ppm. Ethane was found in almost 40% of the analyzed samples, in concentrations up to 1.23 ppm (arithmetic mean 0.06 ppm). Propane was detected in over 22% of the analyzed samples, in concentrations up to 0.44 ppm (arithmetic mean 0.06 ppm). Neo-pentane was detected in a single sample (0.008 ppm). Maximum concentrations of i-butane and n-butane were 0.12 and 0.06 ppm, respectively. Both i-pentane and n-pentane were found in about 16% of the analyzed samples, in concentrations below 0.2 ppm. The concentrations of carbon dioxide varied from 0.07 to 0.15 vol.%, with an arithmetic mean of 0.11 vol.% (Table 3).
Evaluation of geochemical indicators
Generally, concentrations of alkanes in the analyzed samples decreased with increasing number of carbon atoms in the molecules, which suggests a deep origin of these hydrocarbons (Table 1). Concentrations of methane and carbon dioxide were higher in samples taken from above the closed dug wells (Tables 2, 3). The lower concentrations of both gases detected over the open dug wells can also be related to the fact that most of the wells were filled with water, which provides a barrier for both gases. Additionally, carbon dioxide, which is heavier than air, may accumulate in local depressions. The concentrations of methane and carbon dioxide did not reveal an overall correlation (Fig. 4A). However, the coefficient of determination R2 calculated for samples from over the closed dug wells was 0.48, whereas that determined for open wells was only 0.03 (Fig. 4B, C). These values suggest that methane and carbon dioxide concentrations from the closed sites are much more interrelated. This can be explained as an effect of microbial generation of these gases under anoxic conditions. Such a process may operate when dispersed organic matter is contained in the rubble filling the dug wells. Hence, the newly generated microbial gases contribute to the flux of gases ascending from deep sources. In the open dug wells, carbon dioxide, which is heavier than the atmospheric air, presumably accumulates at the bottoms of the wells, whereas methane, which is lighter than air, readily rises towards the sampling chamber. Moreover, deep-sourced carbon dioxide dissolves more intensively in the water filling the wells than deep-sourced methane does, which affects the relationships between both gases.
• the measurements were taken within a short period of time (one week), so seasonal changes could not have had any influence on the established emission values.
In the area of the Pusty Las "oil mine", significant methane emission occurs from abandoned oil dug wells, although the average emission values were only one order of magnitude higher than those measured in petroleum-prone areas of the Carpathians. Moreover, the maximum methane emission measured over one of the sampled dug wells in the Pusty Las area was several orders of magnitude lower than the emissions from mud volcanoes in various localities around the world.
The results demonstrate that old oil dug wells still present in the areas of historical hydrocarbon exploitation in the Carpathians may be a significant source of greenhouse gases released to the atmosphere.
Figure 5. Dynamics of changes of concentrations measured at sample stations Nos. 1-10 (A-J)
Table 1. Principal statistical parameters of alkane and carbon dioxide concentrations calculated for all 30 measurements
Table 2. Principal statistical parameters of alkane and carbon dioxide concentrations calculated for closed dug wells (12 results). * - percentage of samples with concentration of a given component over the detection limit; ** - minimum, maximum, mean, median and standard deviation in vol.%; b.d.l. - below detection limit (0.01 ppm for hydrocarbons and 100 ppm for carbon dioxide).
Table 3. Principal statistical parameters of alkane and carbon dioxide concentrations calculated for open dug wells (18 results)
Table 4. Values of changes of methane and carbon dioxide concentrations and emissions calculated on the basis of measurements with the static chamber method | 2018-07-01T04:49:47.777Z | 2017-07-01T00:00:00.000 | {
"year": 2017,
"sha1": "07d382e4ec63aa2adbf95fe4bc76e2d9fa85b1a9",
"oa_license": "CCBY",
"oa_url": "http://www.jeeng.net/pdf-74360-12273?filename=EMISSION%20MEASUREMENTS%20OF.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "07d382e4ec63aa2adbf95fe4bc76e2d9fa85b1a9",
"s2fieldsofstudy": [
"Environmental Science",
"Geology"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
195563634 | pes2o/s2orc | v3-fos-license | Resource Management Domains of Kharif and Rabi Season Fallows in Central Plateau Region of India: A Strategy for Accelerated Agricultural Development
Over the last few decades, the acreage of total fallow lands (Kharif and Rabi seasons) in India has remained almost unchanged at around 25 Mha. The acreages of Kharif (summer) and Rabi (winter) fallows in Madhya Pradesh (MP) are 1.98 Mha and 5.51 Mha, respectively. In the semi-arid agroclimatic zones of the state, Fallow-Wheat/Gram/Indian-Mustard cropping systems are practiced. After harvest of Kharif rice, kodo-kutki, maize or sorghum, farmers generally practice post-rainy season Rabi fallows in the sub-humid regions south of the Narmada River. Kharif fallowing is largely the result of the inability of the farmers to make planting dates independent of monsoon forecasts and to make efficient use of rain water. It appears that the factors responsible for Kharif and Rabi fallows are distinctly different and a general consequence of the distinctly different soil moisture regimes prevailing in the two crop seasons. Kharif and Rabi fallows have two distinct resource management domains. Whereas Kharif fallows can be tackled with "PMP-dry seeding" agronomy, the production constraints of Rabi fallows can be substantively tackled by shifting from tilled to zero-till agriculture with residue management to make efficient use of the conserved rain water. Some irrigation support will prove useful to tackle mid-season droughts in both situations. Conservation agricultural practices can significantly improve and stabilize crop yields in black soils and other associated soils in the semi-arid tropics of Central India. DOI: 10.14302/issn.2639-3166.jar-19-2590 Corresponding author: Raj Gupta, INSA Scientist, CASA, New Delhi, Email: rajbisa2013@gmail.com
Introduction
Since the 1960s, India has pursued an agricultural policy aimed largely at enhancing productivity through input-based approaches. This strategy resulted in substantial gains in crop production and productivity, overcoming recurring food shortages in the country. The strategy, however, ignored the impact of input interventions on soil health and the associated ecosystem services, resulting in declining factor productivity and widespread problems of natural resource degradation [1]. Such gains were generally limited to well-endowed regions where it was possible to alter the production environment through input use. This has raised some serious questions about the suitability of past approaches to achieving food security goals.
Sustainable agriculture requires land-based solutions for the management of natural resources, which integrate biophysical and socioeconomic parameters for characterizing land management units, known as resource management domains. A resource management domain is a homogenous land unit having similar constraints and requiring a similar management approach for a specific land use [2][3][4]. The concept of homogenous resource management domains/zones has progressed through many stages, such as agroclimatic zones and agro-ecological sub-regions, and has now found its way into precision farming.
The agro-ecoregional concept [5] adopted by NBSS&LUP, however, ignored the fact that the introduction of irrigation water alleviates a major production constraint besides providing opportunities for diversification of agriculture.
Hence, an approach that integrates productivity concerns with conservation of natural resources is a pre-requisite. In the SAT region, rainfall is generally more than evapotranspiration and soil moisture storage. High-intensity rains very often result in runoff, soil erosion and some degradation of black soils. This description suggests that any effort aimed at improving the productivity of black soils must aim at 'closing the rainy season window' with a crop cover before the actual onset of the monsoon so as to reduce runoff and soil erosion. Thus, it would appear that reversal of degradation processes on bare soils during the rainy season is a pre-requisite for addressing the concerns of Rabi fallows and for enhancing total system productivity in vertisols and other associated soils.
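The runoff argument can be made concrete with a toy water balance: rain in excess of evapotranspiration and of the unfilled profile storage runs off. All numbers below are hypothetical and do not come from the paper.

```python
# Simple bucket-model runoff: rainfall minus ET demand minus whatever the
# soil profile can still store; anything left over is treated as runoff.
def runoff_mm(rain, et, storage_capacity, stored):
    surplus = rain - et                          # water left after ET demand
    room = max(storage_capacity - stored, 0.0)   # unfilled profile storage
    return max(0.0, surplus - room)

# e.g. a 120 mm storm on a nearly full profile (180/200 mm) with 20 mm ET
print(f"runoff: {runoff_mm(rain=120, et=20, storage_capacity=200, stored=180):.0f} mm")
```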
Tillage effects in red and black soils are short-lived due to structural instability [14][15][16]. Tillage creates a cycle of decline in which tillage increases the need for another tillage operation to maintain infiltration capacity [17][18]. Residue incorporation improved the productivity of crops in red soils [19] and of rice-wheat in black soils [16,20]. In Central India, where erosion through runoff of rain water is the major agent of soil degradation and fertility decline, the maximum benefit of residues is likely when residues are retained on the soil surface [17]. It must also be mentioned here that aquifer formation in the plateau region is by and large restricted to rocky formations wherein the natural rate of replenishment of ground water is quite low. Unfortunately, these are also areas (e.g. the Malwa Plateau) where ground water development has already reached unsustainable levels [11,21]. Rabi crops are often perceived as more secure than Kharif crops.
Extent of Kharif and Rabi Fallows and the Prominent Fallow-Cropping Systems
Results of several studies [39] bring out that land kept fallow during the rainy season has more stored soil moisture at planting than continuously cropped land. Rabi fallowing is generally practiced in areas having (i) shallow soils not able to store sufficient moisture to support a full-season winter crop, (ii) agronomic fatigue resulting in significant losses of stored soil moisture due to excessive tillage, and (iii) non-availability of water for supplemental irrigation.
Virmani et al. [33] have indicated that Indian Vertisols can store and supply 41% of the total rainfall to a post-rainy season crop. Sahoo et al. [40] reported that
It must be mentioned here that many farmers consider dry seeding before the onset of monsoons a risky proposition in conventionally tilled areas with erratic early-season rainfall [32]. SAT farmers are reluctant practitioners of dry seeding for fear of:
• seed loss through termites and picking by birds,
• loss of seed viability due to extended exposure to high summer temperatures ranging between 38 and 47°C for about a month or so, and
• mortality of young seedlings or loss of seed viability in alternate wetting-drying seed cycles during the pre-monsoon period.
BBF: Dry seeding in BBF is practiced on wide raised beds (90 cm or more). Beds are dismantled every season.
No-till dry seeding with mulching can be practiced in any land configuration that facilitates controlled traffic.
BBF is practiced on sloping black soils to provide drainage for established crops. If farmers are not able to establish crops before the monsoons, tilled bare fields leave black soils highly prone to soil erosion during the rains.
PMP-DS technology is practiced in no-till fields with residues on the surface. Mulching reduces crusting and soil erosion risks to a minimum before the establishment of a crop cover and thereafter.
Under BBF, there is no natural priming of seed with rain water.
PMP-DS allows seed to experience several natural cycles of hydration-dehydration. In the SAT region, surface cover and tillage methods seem to play a very vital role in the establishment of crops under monsoonal climates. Tillage is known to speed up the loss of soil moisture from surface soils and also to shift the receding soil moisture zone into deeper soil layers below 15 cm [52]. Surface mulching can slow down the loss of soil water [53] and hence the recession of the soil moisture front. This has an important bearing on the depth of seed placement and the soil overburden on seed. In central India, Rabi crops must be sown immediately after the harvest of Kharif crops, and seeding depth should allow maximum use of moisture from receding soil moisture profiles. Otherwise, a new design for openers and seeding boots in planters is needed to reduce the soil load over the seed for good germination and speedy seedling emergence.
Conclusions
Kharif and Rabi Fallows are found in distinctly | 2019-06-26T14:46:08.924Z | 2019-06-07T00:00:00.000 | {
"year": 2019,
"sha1": "24229509c85524aeb6b2f1182658f51559f32c2b",
"oa_license": "CCBY",
"oa_url": "http://openaccesspub.org/article/1103/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "7474b36abd0c2eb799e686aecc0d46937bbea618",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
5375908 | pes2o/s2orc | v3-fos-license | Chikungunya virus iridocyclitis in Fuchs’ heterochromic iridocyclitis
A 37-year-old male with a six-month history of recurrent non-granulomatous anterior uveitis with secondary glaucoma was referred for a second opinion. He had a history of chikungunya fever one month earlier. His best corrected visual acuity (BCVA) was 20/60 and 20/30 in the right and left eye, respectively. Slit-lamp examination of the right eye showed fine, pigmented keratic precipitates inferiorly [Fig. 1] and an anterior chamber (AC) reaction of 1+.[2] Fundus examination showed a cup-disc ratio of 0.7 with a pale neuroretinal rim. Intraocular pressure (IOP) was 38 mm Hg in the right eye. Left eye examination was normal. Gonioscopy showed open angles with increased pigmentation inferiorly in the right eye. Laboratory investigations including erythrocyte sedimentation rate (ESR), Mantoux test, serum angiotensin converting enzyme, venereal disease research laboratory test, computed tomography (CT) scan of the thorax, rheumatoid factor, human leukocyte antigen B27 and antinuclear antibody tests were normal. Polymerase chain reaction (PCR) on the aqueous tap was negative for Herpes simplex, Varicella zoster, cytomegalovirus, rubella, toxoplasma and Mycobacterium tuberculosis, and positive for chikungunya virus. Topical steroids were added along with anti-glaucoma medications. At one-month follow-up, his IOP was 16 mm Hg. He came back to us after three months with a reactivation of anterior uveitis and an IOP of 34 mm Hg. He underwent trabeculectomy with mitomycin C in the right eye. PCR on the aqueous was negative for all organisms tested above, including the chikungunya virus. He was once again seen after six months with an IOP of 48 mm Hg. He had a failed bleb and AC inflammation (1+). He underwent an Ahmed valve implant in the right eye [Fig. 2] and since then his IOP has remained 10 mm Hg. PCR on the aqueous was again negative for all organisms tested above, including the chikungunya virus.
In the above case, the patient had a well-documented history of recurrent anterior uveitis with increased IOP prior to the development of chikungunya fever. Although the clinical history favored a viral etiology, the finding of the chikungunya virus in the aqueous tap raised several questions. The aqueous tap was done within one month of the chikungunya fever, while this patient had a history of recurrent anterior uveitis six months prior to the development of the chikungunya fever. Also of interest, the aqueous taps repeated three and six months after the viral fever were negative for the chikungunya virus by PCR. We speculate that the finding of chikungunya virus in the aqueous tap is incidental rather than causal and could probably be due to the high viraemia which usually occurs at the time of the chikungunya fever. This may result in a spillover of the viral antigen into body fluids such as the cerebrospinal fluid (CSF), aqueous, pericardial fluid, etc., and hence the demonstration of the viral antigen. In the case described by Mahendradas et al., [1] the same mechanism may be speculated, as the interval between the chikungunya fever and the aqueous tap was only two weeks. As we do not know how long the viral antigen persists in the aqueous, [3] it is difficult to ascribe a causal role to the virus in the uveitis in the two cases.
Tips in ophthalmic photography
Dear Editor, We congratulate Mahesh et al. for their article on "video indirect ophthalmoscopy using handheld video camera." [1] Documentation of clinical findings is a crucial part of an ophthalmologist's work and is important for communication with colleagues and patients. The quality of an image depends as much on the photographer's knowledge of the anatomy and physiology of the eye as it does on photographic techniques and technology. In this regard, we would like to describe certain techniques in anterior segment photography that can greatly enhance the quality of the images obtained using an inexpensive personal digital camera.
The settings we recommend for external photography (for lids and adnexal structures) are:
1. Most cameras have an "auto focus" mode in which the system automatically adjusts to select the best focus. For clinical photography it is better to switch off the auto focus mode and select the "central focus" mode.
2. Flash is optional, depending on ambient lighting. It is better to capture the image with and without flash and choose the better one later.
3. ISO settings should be set to "auto" unless the photographer is experienced with the different ISO speeds of the camera.
4. An important setting is the "macro mode" function, denoted by a "flower symbol" [Fig. 1]. It allows the camera to focus on objects as close as 2-4 inches. [2] With this, finer details of lid and adnexal lesions such as surface irregularity, vascularity, and pigmentation can be documented, preserving clarity even on zooming. With the same technique, gross details of the anterior segment can also be photographed.
5. Proper focus is obtained by pressing the shutter button halfway, then adjusting the camera to obtain clear focus as seen on the LCD viewfinder. On obtaining clear focus, the yellow-colored rectangle in the LCD viewfinder changes to green.
For slit-lamp photography of the anterior segment, the digital camera should be placed over one of the slit-lamp oculars after adjusting focus. Good-quality photographs make scientific publications more convincing. They can also be used as evidence in medicolegal cases. Techniques of obtaining clinical photographs should be part of resident training. This method may help ophthalmologists working in resource-constrained settings to document ophthalmic findings without expensive photography equipment. | 2018-04-03T04:57:44.514Z | 2012-01-01T00:00:00.000 | {
"year": 2012,
"sha1": "733f7f440c7e8fe91ad6c6b89eecc1a583a3239e",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/0301-4738.90495",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a38ae28e5d1a248673f1023bd18460f6b2413ddf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3806871 | pes2o/s2orc | v3-fos-license | Systematic substitutions at BLIP position 50 result in changes in binding specificity for class A β-lactamases
Background The production of β-lactamases by bacteria is the most common mechanism of resistance to the widely prescribed β-lactam antibiotics. β-lactamase inhibitory protein (BLIP) competitively inhibits class A β-lactamases via two binding loops that occlude the active site. It has been shown that BLIP Tyr50 is a specificity determinant in that substitutions at this position result in large differential changes in the relative affinity of BLIP for class A β-lactamases. Results In this study, the effect of systematic substitutions at BLIP position 50 on binding to class A β-lactamases was examined to further explore the role of BLIP Tyr50 in modulating specificity. The results indicate the sequence requirements at position 50 are widely different depending on the target β-lactamase. Stringent sequence requirements were observed at Tyr50 for binding Bacillus anthracis Bla1 while moderate requirements for binding TEM-1 and relaxed requirements for binding KPC-2 β-lactamase were seen. These findings cannot be easily rationalized based on the β-lactamase residues in direct contact with BLIP Tyr50 since they are identical for Bla1 and KPC-2 suggesting that differences in the BLIP-β-lactamase interface outside the local environment of Tyr50 influence the effect of substitutions. Conclusions Results from this study and previous studies suggest that substitutions at BLIP Tyr50 may induce changes at the interface outside its local environment and point to the complexity of predicting the impact of substitutions at a protein-protein interaction interface.
Background
Interactions between proteins play an essential role in nearly every cellular process. Each protein in a cell is estimated to interact with approximately five other proteins, forming a complex interaction network [1]. A better understanding of protein-protein interactions, in particular what regulates rates of formation and dissociation and the molecular basis of specificity, would have applications ranging across fields from protein engineering to drug design [2]. Numerous protein-protein interactions have been studied and provide details about the roles of shape complementarity, long- and short-range interactions and solvent in binding [3][4][5][6][7][8][9][10]. However, even with this large accumulation of data, prediction programs often have limited success, largely because of challenges posed by cooperativity between residues, flexibility, and rearrangement at the large, multifaceted interface upon binding [5,11]. Some success has been shown for predicting changes upon mutation to alanine; however, predicting the effects of mutations to the other 19 amino acids often falls short because residues other than alanine lose interactions and also have the ability to form new ones. It is important to understand how mutations to all possible amino acids modify protein-protein interactions, both for protein engineering and because mutations other than alanine are frequently seen in nature.
Various protein-protein complexes such as BLIP and β-lactamases have developed into model systems to examine the basic principles underlying protein-protein interactions [6,7,[12][13][14][15]. Studies of a variety of protein complexes indicate that only a subset of residues at the interface contributes substantially to binding affinity; these residues are termed "hot spots" [12,13,[16][17][18][19]. Specificity determinants, i.e., residues whose energetic contributions vary depending on the binding partners, were found among the hot spot residues in the BLIP-β-lactamase interaction [12]. Specificity determinants are of particular interest because of their ability to exhibit a significantly different effect on binding affinity to different protein binding partners when mutated. For example, when BLIP Tyr50 was mutated to alanine it exhibited a 50-fold increase in binding affinity for one binding partner (TEM-1) and a 65-fold decrease in binding affinity for another (Sme-1) (Table 1) [12].
The current study focuses on further examination of the specificity determinant BLIP Tyr50 in the interaction of BLIP with various class A β-lactamases. BLIP is a 17.5 kDa protein produced from the soil bacterium Streptomyces clavuligerus that inhibits class A β-lactamases with varying affinities (subnanomolar to micromolar) ( Table 1) [20][21][22]. Class A β-lactamases hydrolyze the commonly prescribed β-lactam antibiotics rendering them inactive [23]. The production of β-lactamases is the most common mechanism of bacterial resistance in Gram-negative bacteria [23]. BLIP inhibits class A β-lactamases by docking its predominantly polar, concave surface onto the enzyme, burying approximately 2,600 Å 2 of surface area [22]. BLIP competitively inhibits class A β-lactamases via two binding loops that occlude the active site of the enzymes (Fig. 1a & b) [22]. The tertiary structures of class A β-lactamases are homologous but the sequences vary in identity from 30-70% ( Fig. 2) (Table 2) [23].
BLIP positions Tyr50, Glu73, Lys74 and Tyr143 were previously identified as specificity determinants in that substitutions at these positions result in large changes in the relative affinity of BLIP for various class A β-lactamases [13]. BLIP Tyr50 resides on the 46-53 loop that contains two hot spots for binding, Asp49 and Tyr53 [13]. A concerted rearrangement occurs at both interfaces upon binding of BLIP and TEM-1 β-lactamase; of particular interest, TEM-1 Tyr105 rearranges upon complex formation to relieve a steric clash with BLIP Tyr50 (Fig. 1c) [16,22,24]. In addition, residue 105, which is tryptophan in KPC-2 β-lactamase, is in a similar position as Tyr105 of TEM-1 and also undergoes a rearrangement in the BLIP-KPC-2 complex (Fig. 1d) [25]. This rearrangement of β-lactamase position 105 may be a contributing factor to changes in binding affinity upon mutation of BLIP Tyr50. BLIP Tyr50 forms van der Waals contacts with the β3 strand of TEM-1 and KPC-2, and also interacts directly with positions 107, 129 and 216 on the β-lactamase interface (Fig. 2) [16,22]. A structural alignment of the positions on the β-lactamase interface is shown in Fig. 2, with TEM-1 and KPC-2 from the apo form and Bla1 from a BLIP-II bound form, as no apo structure of Bla1 is available [26]. β-lactamase positions 107, 129 and 216 have the same sequence and similar structure for Bla1 and KPC-2, while TEM-1 differs at positions 129 and 216 (Fig. 2). As discussed above and seen in Fig. 2, the Tyr105 and Trp105 residues of TEM-1 and KPC-2 are in a similar position in apo forms of the enzyme. The Tyr105 residue of Bla1 is in an altered position in the structural alignment; however, this is likely due to the structure originating from the BLIP-II-Bla1 complex (Fig. 2) [26].
In this study, the effect of systematic substitutions at BLIP Tyr50 is examined using kinetic analysis to determine how specificity can be modulated for binding TEM-1, KPC-2 and Bla1 β-lactamases. These experiments were also performed computationally to assess the current success rate of an available protein binding prediction program. A deeper understanding of the interactions of BLIP with β-lactamases offers an opportunity to explore how specificity can be introduced into proteins rationally, by design.
Determination of inhibition constants for β-lactamases
BLIP Y50 was substituted with each of the other 19 amino acids to investigate the role of this residue in modulating specificity. The mutant proteins were purified (with the exception of BLIP Y50I, which could not be purified due to low expression and yield) and assayed with TEM-1, KPC-2 and Bla1 β-lactamases to determine the inhibition constants (Table 3, Fig. 3). It was previously reported that the BLIP Y50A substitution alters the binding specificity for β-lactamases; however, the extent to which other substitutions at position 50 alter binding specificity was unknown. Overall, the results of this study support the hypothesis that BLIP Y50 makes important contributions to the binding specificity of BLIP for class A β-lactamases.
BLIP is a potent inhibitor of each of the β-lactamases studied, with K_i values of 0.5 nM for TEM-1, 1.5 nM for KPC-2 and 2.5 nM for Bla1 (Table 3). The effect of substitutions at BLIP Y50 on the binding affinity for the enzymes, however, is widely different. Most BLIP Y50 substitutions retain tight binding for KPC-2, while many substitutions reduce binding to TEM-1 and the majority of substitutions are detrimental for binding Bla1 (Table 3). This is apparent from the finding that only 3 substitutions result in a greater than 10-fold loss in affinity for KPC-2, while 10 substitutions reduce binding by >10-fold for TEM-1 and 15 result in a >10-fold loss in affinity for Bla1 (Table 3). The changes in binding constants for the substitutions were normalized by calculating the changes in free energy of the complex using the following equation: ΔΔG = -RT ln (K_i^WT / K_i^MUT) (Table 4). A negative ΔΔG value is indicative of an increase in binding affinity as compared to wild type, while a positive value corresponds to a decrease in binding affinity. Using these values, the wide difference in tolerance to BLIP Y50 substitutions for binding β-lactamases is clear in that the average effect of substitutions (ΔΔG) on binding KPC-2 was 0.7 kcal/mol, while that for binding TEM-1 was 1.3 kcal/mol and that for binding Bla1 was 2.3 kcal/mol (Table 4 and Fig. 4). These results indicate that the sequence requirements at BLIP position 50 are significantly less stringent for binding KPC-2 compared to the requirements for binding TEM-1 and Bla1. Examination of the substitution results in Tables 3 and 4 reveals some common sequence requirements at BLIP position 50 for binding all three β-lactamases. For example, cysteine, phenylalanine and lysine substitutions showed a greater than 10-fold decrease in binding affinity for all three enzymes (Table 3). Cysteine may decrease binding affinity because of its potential to be oxidized, which would disrupt binding. Phenylalanine has a similar van der Waals volume to the wild-type tyrosine residue but lacks its hydrogen bonding capacity and could potentially interrupt the organization of structural waters at the interface because of its strong hydrophobic properties. The decrease in binding affinity when lysine is substituted at BLIP position 50 is likely due to the introduction of an unpaired charge in the interface. Although lysine was the only charged residue to globally decrease binding affinity by 10-fold, the general finding is that charged residues at position 50 result in a decrease in binding affinity for all β-lactamases tested, although the effect is less pronounced for binding KPC-2 (Table 3).
Fig. 1 Structural representation of the interaction between BLIP and β-lactamases. BLIP is shown as a purple ribbon with Tyr50 BLIP shown as stick. TEM-1 (a) and KPC-2 (b) β-lactamases are shown as white spheres with the catalytic Ser70 in yellow; positions 107, 129 and 216 (which make contact with Tyr50 BLIP) are shown in red. PDB codes: 1JTG and 3E2K. Alignment of apo (gray) and bound (white) TEM-1 (c) and KPC-2 (d) structures shown in ribbon with position 105 of the β-lactamase shown as stick. BLIP Y50 is shown as a purple stick in the bound form. The measurement provides the distance residue 105 of the β-lactamase moves upon binding to BLIP. PDB codes: 1BTL and 2OV5 (apo) and 1JTG and 3E2K (bound). Images generated with Chimera.
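As a numerical illustration of this normalization (not part of the original analysis), the short Python sketch below converts K_i values into ΔΔG at an assumed 25 °C; the Y50A K_i used is an assumed value chosen to match the roughly 50-fold tighter TEM-1 binding described for that mutant.

```python
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15     # assumed assay temperature (25 C), K

def ddg_kcal(ki_wt_nM: float, ki_mut_nM: float) -> float:
    """ΔΔG = -RT ln(Ki_WT / Ki_MUT); negative means tighter binding than WT."""
    return -R * T * math.log(ki_wt_nM / ki_mut_nM)

# Wild-type BLIP vs TEM-1: Ki = 0.5 nM (Table 3).
# Hypothetical Y50A Ki of 0.01 nM, i.e. ~50-fold tighter than wild type.
print(round(ddg_kcal(0.5, 0.01), 2))  # ~ -2.32 kcal/mol
```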
This is supported by the fact that BLIP containing arginine, glutamate or aspartate at position 50 exhibited decreased affinity for TEM-1 and Bla1 by greater than 100-fold and also exhibited decreased affinity for KPC-2 (Table 3). A proline at position 50 is also generally disruptive in that it decreased the binding affinity of BLIP for all three β-lactamases, possibly by altering the conformation or flexibility of the Y50 loop, which includes two hot spot residues for binding [27].
Fig. 2 Alignment of class A β-lactamase residues at the BLIP interface. a The alignment is based on the structure of class A β-lactamase residues found at the BLIP interface as defined by the TEM-1/BLIP complex X-ray structure. The positions that contact BLIP position 50 are boxed in red. PDB codes used for the structural alignment are as follows: 2OV5 for KPC-2, 3QHY for Bla1, 1BTL for TEM-1, 1SHV for SHV-1 and 1DY6 for SME-1. Structural alignment performed in Chimera [41]. b β-lactamase structures are shown as grey ribbon and were aligned using MacPyMOL. Interface residues are shown in navy blue with β-lactamase position 105 shown as blue sticks. Residues that make direct contact with Tyr50 BLIP (107, 129 and 216) are shown as red sticks. A global structural alignment of TEM-1, Bla1 and KPC-2 β-lactamases is shown in two orientations. c A close-up view of an alignment of the β-lactamase residues that make contact with Tyr50 BLIP. β-lactamase position 105 is also shown as a stick model as it has been shown to make structural rearrangements upon binding to BLIP [14,16,35]. d A close-up alignment of β-lactamase positions 107, 129, 216 and 105, with changes in orientation made for ease of viewing the structural alignment. Residues are labeled with their corresponding β-lactamase. PDB codes used for generation of images were as follows: 1BTL for TEM-1, 3QHY for Bla1 and 2OV5 for KPC-2. Images generated in MacPyMOL.
Another common trend for binding all three β-lactamases is that substitutions of BLIP Y50 by the small amino acids alanine and glycine either do not affect or improve affinity (Table 3). For example, BLIP Y50A retains affinity for KPC-2 and Bla1 and exhibits 50-fold tighter binding of TEM-1, while Y50G shows a small increase in affinity for all three enzymes. As noted above and shown in Figs. 1 and 2, Tyr105 in TEM-1 and the equivalent Trp105 in KPC-2 change position in the BLIP-β-lactamase complexes compared to the apo-enzymes in order to avoid a steric clash with BLIP Y50. It is possible that substitution of Y50 with alanine or glycine avoids the clash and allows β-lactamase residue 105 to retain its apo-position in the complex, which may result in improved affinity.
Finally, polar residue substitutions at BLIP Y50 have quite disparate effects on binding the β-lactamases. For example, serine and threonine substitutions have relatively small effects on binding KPC-2 and TEM-1 but result in greatly decreased binding to Bla1 (Table 3). In addition, glutamine at BLIP position 50 does not affect binding to any of the β-lactamases while an asparagine substitution results in decreased affinity for all three βlactamases, including a greater than 10-fold decrease for binding TEM-1 and Bla1 (Table 3). This result cannot easily be explained because asparagine has similar properties to glutamine, which had little to no effect on binding any of the β-lactamases.
Impact of BLIP Y50 on binding specificity
Because the purpose of this study was to examine the role of BLIP position 50 as a specificity determinant, substitutions that have differential effects on binding are of interest. As indicated above and as is apparent in Tables 3 and 4, many substitutions at BLIP Y50 have differential effects on β-lactamase binding, the clearest example being the numerous substitutions that retain or modestly impact binding to KPC-2 while greatly decreasing binding to Bla1 (Y50-L,M,W,N,S,T,H), and the subset of substitutions that retain binding to KPC-2 and TEM-1 while losing affinity for Bla1 (Y50-L,M,S,T). In contrast, there are no BLIP Y50 substitutions that retain affinity for Bla1 while losing affinity for KPC-2 or TEM-1 (Table 3). Thus, the large differences in stringency of sequence requirements at position 50 result in BLIP variants that bind KPC-2 but not TEM-1 and Bla1, as well as variants that bind KPC-2 and TEM-1 but not Bla1. However, substitutions at BLIP Y50 do not produce a variant that binds Bla1 but not TEM-1 or KPC-2.
It was next of interest to examine a possible structural basis for the observed differences in sequence requirements for BLIP Y50 substitutions for binding Bla1 versus KPC-2 and TEM-1. The side chain of BLIP Y50 is in direct contact with β-lactamase residues 107, 129 and 216 in the crystal structures of the BLIP-TEM-1 and BLIP-KPC-2 complexes (Fig. 2). These β-lactamase contact residues are identical between KPC-2 and Bla1 (P107-Y129-T216), while TEM-1 differs at 2 of the 3 positions (P107-M129-V216) (Fig. 2). Based on these sequences, it would be expected that substitutions at BLIP Y50 would have similar effects on binding KPC-2 and Bla1. The results indicate this is clearly not the case. Therefore, a simple comparison of the β-lactamase contact residues for BLIP Y50 does not explain the observed differences in effects of substitutions on binding the β-lactamases. Although KPC-2 and Bla1 have the same amino acids at positions 107, 129 and 216, the overall sequence identity of all β-lactamase residues at the interface is higher between Bla1 and TEM-1 compared to KPC-2 (Table 2). There are a total of 10 positions (71, 102, 106, 109, 112, 133, 172, 246, 248 and 249) on the β-lactamase interface where TEM-1 and Bla1 have the same sequence and the sequence of KPC-2 differs (Fig. 2). KPC-2 is much better at accommodating changes at BLIP Y50 than both Bla1 and TEM-1 and has the most sequence differences at the interface. Therefore, more widespread differences in the entire interface may influence the effect of substitutions at BLIP Y50. This may be due to changes at the interface induced by mutation of BLIP Y50 that propagate outside of its local environment. In fact, previous studies have shown that BLIP Tyr50 is energetically coupled to both positions Tyr143 and Glu73, which are not in direct contact with Tyr50 [13]. The hypothesis that changes at position 50 are influenced by other sites in the interface and vice versa is consistent with previous observations of structural plasticity and cooperativity of the BLIP interface upon mutation [12,13,15,16,27]. For example, it has been shown that the BLIP W150A mutation induces a greater than 4 Å shift in residue Asp49, demonstrating both structural flexibility of the loop containing Tyr50 and long-distance coupling at the BLIP interface [28]. An interesting question is whether there are also more stringent sequence requirements at other BLIP positions in the interface for binding Bla1 versus TEM-1 and KPC-2. A previous alanine-scanning mutagenesis study of 23 BLIP residues that contact β-lactamase in the bound complex evaluated binding to TEM-1 and Bla1 (KPC-2 was not evaluated) [27]. The results of that study suggest that the sequence requirements are not generally more stringent for BLIP binding Bla1, in that the average ΔΔG effect for the 23 alanine substitutions was less detrimental for binding Bla1 (avg ΔΔG = 0.4 kcal/mol) than for TEM-1 (avg ΔΔG = 1.0 kcal/mol) [27]. Therefore, the stringent sequence requirements observed here for Bla1 binding are unique to position 50 and not a general property of all interface positions for binding Bla1.
Fig. 3 Determination of inhibition constants of BLIP mutants for binding TEM-1, KPC-2 and Bla1. The concentration of BLIP is shown on a log scale on the x-axes and fractional initial velocity on the y-axes. Inhibition curves for wild-type BLIP are shown in black; curves for BLIP mutants that showed tighter binding than wild type are shown in blue, while curves for mutants that showed weaker binding are shown in red.
Comparison of experimental and predicted ΔΔG values
It is appealing to use computational methods to guide the engineering of binding specificity in protein-protein interactions. Therefore, we were interested in examining how computational methods compared with our experimental results. The same mutagenesis experiment was performed computationally on BLIP position 50 with the BeAtMuSiC server, which computes theoretical ΔΔG values based on a set of statistical potentials derived from known protein structures [11]. The TEM-1-BLIP (PDB code: 1JTG) and KPC-2-BLIP (PDB code: 3E2K) complexes were submitted for analysis. The BLIP-Bla1 interaction was not analyzed because no crystal structure of this complex is currently available.
The ΔΔG values generated by the BeAtMuSiC server and the experimentally determined ΔΔG values are plotted in Fig. 5. The average predicted ΔΔG value for the TEM-1/BLIP interaction was 2.0 kcal/mol while the experimentally determined average ΔΔG value was 1.3 kcal/mol. For the KPC-2-BLIP interaction, the average predicted ΔΔG was 2.2 kcal/mol and the experimentally determined average ΔΔG was 0.7 kcal/mol. Therefore, the BeAtMuSiC server was more accurate at predicting ΔΔG values for the BLIP-TEM-1 interaction than the BLIP-KPC-2 interaction.
Discussion
Specificity determinants are often identified through alanine scanning of interface residues [16-18, 29, 30]. Whether mutations to other amino acids would also identify these residues as specificity determinants is unknown. Here, we present data supporting the role of BLIP Y50 as a specificity determinant and, furthermore, provide evidence that this position can be targeted to engineer the binding specificity of BLIP for a range of class A β-lactamases. The BeAtMuSiC server did not predict any negative changes in free energy, meaning that the wild-type complex is predicted to be more stable than any of the mutant complexes. However, the BLIP Y50A substitution was shown experimentally to bind TEM-1 β-lactamase 50-fold tighter than wild type BLIP. It is known that some residues rearrange upon binding of BLIP and class A β-lactamases, and this could influence the accuracy of the prediction programs [27]. These rearrangements could contribute to the differences seen (about 1.4 kcal/mol) between our experimental ΔΔG values and the predicted values. Furthermore, BLIP Y50 is located on a loop that has two hot spots for binding; therefore, even small alterations in the placement of this loop induced by Y50 substitutions could result in large changes in binding affinity. In addition, as described above, the effect of substitutions at BLIP Y50 may be influenced by positions outside of the direct contact residues through coupled interactions, and therefore predicting the effects of substitutions poses a significant challenge for computational prediction programs. However, it may be these same properties that provide BLIP with its unique ability to bind structurally homologous proteins with a wide range of affinities.
Because BLIP binds homologous β-lactamase structures that only differ by small changes in sequence at the interface, the BLIP-β-lactamase system is useful for examining how sequence dictates binding affinity. However, an alignment of the β-lactamase sequences in direct contact with the BLIP Y50 residue (TEM-1 residues 107, 129 and 216) suggests that KPC-2 and Bla1 would exhibit the same changes in binding affinity upon mutation of BLIP Y50 because they have the same amino acids in similar conformations at these positions; however, this was not the case (Fig. 2). Although KPC-2 and Bla1 have the same amino acids at positions 107, 129 and 216 that directly interact with Y50, the overall sequence identity of all β-lactamase residues at the interface is higher between Bla1 and TEM-1 compared to KPC-2 ( Table 2). KPC-2 was much better at accommodating changes at BLIP Y50 than both Bla1 and TEM-1 and had the most sequence differences at the interface. This suggests that simply comparing sequence identity of positions that make direct interactions with BLIP Y50 (or any BLIP residue) is not sufficient to predict changes in binding affinity upon mutation. Therefore, more widespread differences in the entire interface may influence the effect of substitutions at BLIP Y50. This may be due to changes at the interface induced by mutation of BLIP Y50 that propagate outside of its local environment due to structural plasticity and coupled interactions.
Conclusions
Properties such as structural plasticity and cooperativity between residues are important for mediating protein interactions and critical for allosteric regulation in various cell processes [31][32][33][34]. Understanding how these properties contribute to binding specificity would greatly improve current protein binding prediction programs. This is an active area of investigation in G-protein coupled receptors, the human growth hormone receptor and other proteins [31][32][33][34]. Numerous studies such as these have established that the dynamic nature of proteins is critical to binding and proper functioning; however, this dynamic nature is challenging to predict and structurally understand, as flexible proteins are inherently difficult to model and crystallize. Here, we demonstrate the complexity of predicting the impact of substitutions using the well-studied BLIP-β-lactamase protein-protein interaction model. Furthermore, we have shown that surveying sequence homology and the structural interface of a complex are not sufficient in predicting the impact of mutations.
Currently, protein prediction programs are unable to reliably predict changes in binding affinity upon mutation at the protein interface. Systematic studies such as these could improve the current state by providing experimental data to be incorporated into these programs. Lastly, there is a pressing need for new detection methods for β-lactamases, which are a widespread source of resistance to β-lactam antibiotics. Identification of specificity determinants in BLIP could be useful in the development of BLIP-based diagnostic reagents that can discriminate between class A β-lactamases and inform treatment options for clinicians.
Construction of BLIP Y50 mutants
BLIP position 50 was mutated to each of the other 19 amino acids using the QuikChange method (Stratagene) and Pfu polymerase (Stratagene) on the pGR32 plasmid with an N-terminal His-tag, as previously described [35]. DNA sequencing was used to confirm each mutation and to verify that no extraneous mutations occurred elsewhere in the BLIP gene (Lonestar Labs).
Protein purification
N-terminal His-tagged BLIP mutants were purified using the TALON Metal Affinity Resin (Clontech) [35]. Despite multiple attempts, the BLIP Y50I mutant could not be purified due to poor expression and yield. The TEM-1 and Bla1 proteins were purified as previously described using a zinc chelating column and elution with a pH gradient [36]. KPC-2 was purified as previously described using a HiTrap SP column and elution with an NaCl gradient [37]. The BLIP mutants and the various β-lactamases were each concentrated and injected onto a Superdex 75 gel filtration size exclusion column as a final purification step. Fractions with greater than 90% purity as determined by SDS-PAGE were combined, concentrated and used in the inhibition assay. The protein concentrations for the β-lactamases and BLIP mutants were determined by a Bradford assay where they were compared with a curve that was calibrated by quantitative amino acid analysis specific to each protein.
The concentrations for all proteins were confirmed by measuring absorbance at 280 nm and using the extinction coefficient as determined by the ExPASy ProtParam tool [38]. Kinetic parameters (k cat and K m ) were determined to confirm activity for each βlactamase using the chromogenic β-lactam substrate, nitrocefin (data not shown).
β-lactamase inhibition assay
Inhibition constants for BLIP mutants binding to the β-lactamases were determined as previously described [35]. Increasing concentrations of BLIP were incubated with a constant concentration of β-lactamase (1 nM) for 1 h at room temperature in 50 mM sodium phosphate buffer, pH 7.0. The chromogenic substrate, nitrocefin, was then added at the K_m concentration for each β-lactamase and the initial velocity was measured at 482 nm at 20 s intervals. The experiments were performed at least in duplicate. The K_i^app for each BLIP mutant was determined by fitting the initial velocities to the Morrison tight-binding equation [39]:

[E_free]/[E_0] = 1 - {([E_0] + [I_0] + K_i^app) - sqrt(([E_0] + [I_0] + K_i^app)^2 - 4[E_0][I_0])} / (2[E_0])

where [E_free] is the concentration of free enzyme determined from the residual activity of the β-lactamase by comparison with the initial velocity of nitrocefin hydrolysis by the uninhibited β-lactamase, [E_0] is the total enzyme concentration and [I_0] is the total BLIP concentration. The errors reported were calculated based on the fit of the curve. The K_i values were calculated from the K_i^app values as previously described using eq. 2 [40]:

K_i = K_i^app / (1 + [S]/K_m)

which, with nitrocefin added at [S] = K_m, reduces to K_i = K_i^app / 2.

ΔΔG calculations

ΔΔG was calculated using the following equation:

ΔΔG = -RT ln (K_i^WT / K_i^MUT)

Using this equation, a decrease in K_i upon mutation results in a negative ΔΔG value, while an increase in K_i is reported as a positive change in free energy. The error for ΔΔG values was calculated using the following equation:

Error(ΔΔG) = RT sqrt((SEM_WT / K_i^WT)^2 + (SEM_MUT / K_i^MUT)^2)

where 'SEM' represents the standard error of the mean, 'WT' represents wild-type BLIP and 'MUT' represents the mutant protein.
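As an illustration of this fitting procedure, the following minimal Python sketch (not the authors' actual analysis script) fits hypothetical fractional-velocity data to the Morrison equation with SciPy and applies the K_i^app-to-K_i correction for substrate added at [S] = K_m; all titration values are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

E0 = 1.0  # total beta-lactamase concentration, nM (as in the assay)

def morrison(I0, ki_app):
    """Fractional activity v_i/v_0 for a tight-binding inhibitor."""
    a = E0 + I0 + ki_app
    return 1 - (a - np.sqrt(a**2 - 4 * E0 * I0)) / (2 * E0)

# Hypothetical BLIP titration (nM) and fractional initial velocities.
I0 = np.array([0.1, 0.5, 1, 2, 5, 10, 50])
v_frac = np.array([0.95, 0.80, 0.63, 0.45, 0.24, 0.13, 0.03])

(ki_app,), pcov = curve_fit(morrison, I0, v_frac, p0=[1.0])
ki = ki_app / 2  # Ki = Ki_app / (1 + [S]/Km) with [S] = Km
print(f"Ki_app = {ki_app:.2f} nM, Ki = {ki:.2f} nM "
      f"(SE from fit: {np.sqrt(pcov[0, 0]):.2f} nM)")
```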
Computational prediction of the effect of BLIP mutations on binding affinity
The BeAtMuSiC online server was used to predict changes in binding affinity of the BLIP mutants on complex formation with TEM-1 and KPC-2 β-lactamases [11]. The BeAtMuSiC server relies on a set of statistical potentials derived from known protein structures and predicts the changes in binding affinity by the combined effect of the mutation on the overall stability of the complex and the interface [11]. PDB codes 1JTG (BLIP/TEM-1) and 3E2K (BLIP/KPC-2) (chains A and B) were submitted for analysis. | 2017-08-03T01:02:56.295Z | 2017-03-06T00:00:00.000 | {
"year": 2017,
"sha1": "a2c64fb0667f242587c2ea144174dabc4681afae",
"oa_license": "CCBY",
"oa_url": "https://bmcbiochem.biomedcentral.com/track/pdf/10.1186/s12858-017-0077-1",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "a2c64fb0667f242587c2ea144174dabc4681afae",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
239057424 | pes2o/s2orc | v3-fos-license | Nutritional Compositions and In-vivo Antioxidant Effect of Corchorus olitorius Ethanol Leaf Extract in CCl4-induced Oxidative Stress in Wistar Rats
Background: Oxidative stress has been implicated in the pathophysiology of various disease conditions with concomitant toll on the body's defense mechanism against free radicals. To continuously sustain and support the efficiency of the body's antioxidant defense system, natural plant sources are required. Thus, the need for alternative options especially of plants that are neglected and under-utilized. Hence, this study aimed at investigating the proximate and phytochemical compositions and in-vivo antioxidant effect of ethanol leaf extract of C. olitorius on antioxidant enzyme activities in CCl4-induced oxidative stress in Wistar rats. Methods: Thirty albino rats of Wistar strain (120-150g) were divided into six groups (A – F) of five rats each. Groups A, B and C served as test groups and were administered 200 mg/kg, 400 mg/kg and 600 mg/kg doses of C. olitorius leaf extract respectively while Group D served as normal control. Groups E and F served as the positive and negative controls and were administered 50 mg/kg Silymarin and distilled water respectively. The administration lasted for 15 days after which blood was collected via cardiac puncture. Results: Findings showed that the leaf was rich in total phenol (21.47 ± 0.00 mgGAE/g) and tannin (23.34 ± 0.75 mgTAE/g) with little quantity of oxalate (0.48 ± 0.09 mg/g), cardiac glycosides (0.30 ± 0.07 %) and phytate (0.25 ± 0.01 %). The result of the proximate composition revealed that the leaf was rich in carbohydrate (44.16 ± 1.21 %), ash (20.31 ± 0.51 %) and protein (11.29 ± 2.06 %) with negligible quantity of lipid (0.46 ± 0.11 %). More so, the activities of superoxide dismutase, catalase and glutathione peroxidase were all increased in the extract treated group when compared to the controls. Conclusion: From the above findings, it can be concluded that the ethanol leaf extract of C. olitorius may possess exploitable nutritional components and potential antioxidant activity against the debilitating effects of free radicals.
INTRODUCTION
Oxidative stress has been implicated as a leading factor in the pathogenesis of a variety of debilitating diseases such as cancer, autoimmune disorders, arthritis, cardiovascular conditions and aging [1][2][3]. This is due to the over-production of reactive oxygen species (ROS), including the superoxide anion, the hydroxyl radical and peroxides, that significantly increase the rate of oxidation of several biological membrane components, organelles, nucleic acids, proteins, and polyunsaturated fatty acids. When the body's antioxidant defense system is overwhelmed by these free radicals, the balance tilts in favor of pro-oxidation, affecting essential biomolecular cell components through lipid peroxidation. One compound of immense biochemical relevance for free radical generation is carbon tetrachloride (CCl4). It is used as a chemical feedstock for the industrial manufacture of various products ranging from aerosols to resins, and as an extraction solvent. Intracellularly, CCl4 is converted to the trichloromethyl radical (•CCl3) by the action of a specific cytochrome P450, which in turn reacts with oxygen, initiating a cascade of deleterious effects [2,4]. These reactions interfere with normal metabolic processes [3], causing a host of clinical manifestations associated with liver, kidney, lung and heart diseases.
To continuously sustain and support the body's natural enzymatic antioxidant system, fruits and vegetables offer viable natural options, hence the need for their scientific evaluation for potential therapeutic efficacy. This is because plants are believed to contain phytochemicals that can actively mop up and neutralize free radicals, thereby protecting against CCl4-induced disease disorders [3].
C. olitorius, also known as Ahihara in Igbo, is a leafy vegetable that belongs to the family Tiliaceae; it is commonly called jute mallow in English and "Ewedu" in southwestern Nigeria [5]. It is an annual herb with a slender stem and an important green leafy vegetable in many tropical areas such as Nigeria [6]. In West African countries including Ghana, Nigeria and Sierra Leone, the plant is cultivated for its stem bark, which is used in the production of fiber (jute), and for its mucilaginous leaves, which are used as a food vegetable [7]. The leaves (either fresh or dried) are cooked into a thick viscous soup or added to stew or soup and are rich sources of vitamins and minerals [7]. Traditionally, in some parts of Nigeria, a decoction of the leaves is used for treating iron deficiency, folic acid deficiency, ascites, pains, piles, tumors, gonorrhea, and fever [8,9].
Against this background, the study aimed to investigate the proximate and phytochemical compositions of C. olitorius leaf, together with an in-vivo assessment of the effect of the ethanol leaf extract on antioxidant enzymes in Wistar rats.
Sample Collection and Preparation
Fresh green leaves of C. olitorius were purchased from Eke-Awka market in Awka South Local Government Area of Anambra State, Nigeria. The plant sample was identified and authenticated by a taxonomist at the Department of Botany, Nnamdi Azikiwe University, Awka, and a specimen was deposited at the herbarium of the Department. The leaves were washed under running tap water and then in doubly distilled water to remove dirt, and thereafter air-dried at room temperature for 14 days. The dried samples were ground with a Corona manual grinder into a fine powder and stored in an airtight container until further use.
Extraction of Plant Materials
Exactly twenty grams (20g) of the ground sample was soaked in 200 ml of 70 % ethanol and was allowed to stand for 48 hours at room temperature with intermittent stirring. The mixture was filtered through Whatman paper No. 4 with the aid of a vacuum filter and the filtrate was evaporated at 60˚C using a water bath (Techmel and Techmel, 420, USA). The dried residue was weighed and reconstituted in 70 % ethanol at a concentration of 10 mg/ml and stored at 4˚C in a refrigerator until further use.
Experimental Animals
A total of thirty adult Wistar rats weighing between 150 and 200 g were purchased from Onyewuchi Farms, Ifite, Awka and allowed to acclimatize for a period of seven days in the Animal house of the Department of Applied Biochemistry, Nnamdi Azikiwe University, Awka. The rats were kept in standard cages with saw dust as bedding and fed with commercial rat chow and water ad libitum. They were handled ethically according to the standards provided by the National Institute of Health on Animal Handling and Safety.
Animal grouping and dose administration
The animals were grouped into six cohorts (A - F) of five rats each. Test groups A, B and C were administered 200 mg/kg, 400 mg/kg and 600 mg/kg body weight of the plant leaf extract, respectively. Group D was the normal control and was administered distilled water. Group E served as the positive control and was administered 50 mg/kg of the standard reference drug, Silymarin, through oral gavage, while Group F was designated the negative control and was administered distilled water.
Induction of oxidative stress in animals
Carbon tetrachloride (CCl4) was administered intraperitoneally using olive oil as a vehicle in the ratio of 1:1 (0.5 ml/kg body weight) to induce oxidative stress, in accordance with previously established methods [10]. Administration of the plant extract as well as the standard drug was done orally for 14 days. A previous study revealed that C. olitorius at high doses of 2500, 5000 and 7500 mg/kg body weight had no apparent toxic or lethal effects on the animals, which probably indicates that the extract has a high safety index [11].
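As a simple worked example of the dosing arithmetic (not part of the original protocol description), the Python sketch below computes the injection volume for a rat of assumed weight under the 0.5 ml/kg regimen with the 1:1 CCl4:olive-oil vehicle.

```python
def injection_volume_ml(weight_g: float, dose_ml_per_kg: float = 0.5) -> float:
    """Total CCl4:olive-oil (1:1) volume for a rat of the given weight."""
    return weight_g / 1000.0 * dose_ml_per_kg

vol = injection_volume_ml(150)  # assumed 150 g rat
print(f"total {vol:.4f} ml -> {vol/2:.4f} ml CCl4 + {vol/2:.4f} ml olive oil")
```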
Blood sample collection
On the fifteenth day, the rats were anaesthetized with chloroform after overnight fasting. Blood was drawn slowly through cardiac puncture, collected into plain bottles, and allowed to clot. Thereafter, the blood samples were centrifuged at 3000 rpm for 10 minutes. The sera obtained were transferred into another vial for enzyme assay.
Proximate analysis of C. olitorius leaf extract
The moisture, ash, crude fibre and crude fat were determined according to the methods of the Association of Official Analytical Chemists [12]. Crude protein was determined by the micro-Kjeldahl method as proposed by AOAC [12]. The total percent carbohydrate content was estimated by subtracting the sum of the other proximate components from 100, as reported by Yerima and Adam [13], using the following formula: Total Carbohydrate (%) = 100 - (% Moisture + % Ash + % Crude fibre + % Crude protein + % Fat)
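As a worked check of this calculation (not part of the original study), the Python sketch below reproduces the reported carbohydrate value; the crude fibre figure used here is inferred by difference from the reported composition and is therefore an assumption.

```python
def carbohydrate_pct(moisture, ash, fibre, protein, fat):
    """Total carbohydrate (%) estimated by difference."""
    return 100.0 - (moisture + ash + fibre + protein + fat)

# Reported values (%): moisture 19.49, ash 20.31, protein 11.29, fat 0.46.
# A crude fibre of 4.29 % is assumed so the total closes at 44.16 %.
print(round(carbohydrate_pct(19.49, 20.31, 4.29, 11.29, 0.46), 2))  # 44.16
```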
Qualitative and quantitative phytochemical analysis of C. olitorius leaf extract
The qualitative phytochemical screening of the plant leaf extract of C. olitorius was carried out according to the method of Usunobun et al. [14], while the quantitative phytochemical contents were carried out as reported below: Total phenolic content and flavonoids were determined by modified colorimetric tests as described by Barros et al [15]. Phytate content was determined using the method of Young and Greaves [16]. The saponin content of the extract solution was determined by the method of Obadoni and Ochuko [17] while oxalate was determined according to Osagie [18]. Tannin, cardiac glycoside and terpenoids were determined according to method of AOAC [12]. The alkaloid content was determined by the method of Harborne [19]. Sterol was determined by the Libermann-Burchard's test.
Assay of superoxide dismutase activity
The in-vivo superoxide dismutase (SOD) activity was assayed by the method of Sun and Zigman [20], based on the enzyme's ability to inhibit the auto-oxidation of epinephrine as determined by the increase in absorbance at 480 nm. The enzyme activity was calculated by measuring the change in absorbance with a UV-VIS spectrophotometer (Axiom 752) at 480 nm for 3 minutes.
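To illustrate the calculation underlying this assay, a minimal Python sketch follows (not from the original study); the rate values are hypothetical, and the convention that one unit of SOD gives 50% inhibition is the common usage for this assay rather than a detail stated in the text.

```python
def sod_inhibition_pct(rate_control: float, rate_sample: float) -> float:
    """Percent inhibition of epinephrine auto-oxidation (dA480/min rates)."""
    return (rate_control - rate_sample) / rate_control * 100.0

pct = sod_inhibition_pct(0.025, 0.011)  # hypothetical dA480/min values
units = pct / 50.0                      # 1 U = 50% inhibition (convention)
print(f"{pct:.1f}% inhibition -> {units:.2f} U")
```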
Assay of catalase activity
In-vivo catalase activity was determined according to the method described by Sinha [21]. It was assayed spectrophotometrically (Axiom 752) at 620 nm and expressed as micromoles of H2O2 consumed/min/mg protein at 25 °C.
Assay of glutathione peroxidase (GPx) activity
The method proposed by Rotruck et al. [22] was used to assay for in-vivo GPx activity. The change in absorbance was recorded every 30 seconds up to 3 minutes in a spectrophotometer (Axiom 752 UV-VIS). One unit of peroxidase is defined as the change in absorbance/minute at 430 nm.
Determination of Malondialdehyde Levels
Malondialdehyde (MDA), an index of lipid peroxidation, reacts with thiobarbituric acid (TBA) to give a pink-coloured complex. This reaction was used to assess lipid peroxidation according to the method of Buege and Aust [23]. Absorbance was measured in an Axiom 752 UV-VIS spectrophotometer at 532 nm against the blank. The malondialdehyde level (in µM) was calculated using the molar extinction coefficient of the MDA-TBA complex, 1.56 x 10^5 M^-1 cm^-1.
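As a worked example of this Beer-Lambert conversion (not part of the original study), the sketch below turns a hypothetical blank-corrected absorbance into an MDA concentration, assuming a 1 cm cuvette path length.

```python
EPSILON = 1.56e5  # MDA-TBA molar extinction coefficient, M^-1 cm^-1
PATH_CM = 1.0     # assumed cuvette path length, cm

def mda_uM(a532: float) -> float:
    """MDA concentration (uM) from blank-corrected absorbance at 532 nm."""
    return a532 / (EPSILON * PATH_CM) * 1e6

print(round(mda_uM(0.312), 2))  # hypothetical A532 -> 2.0 uM
```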
Data Analysis
Data were analyzed using the GraphPad Prism 5 program (GraphPad Software, San Diego, CA, USA). Descriptive statistics and one-way analysis of variance (ANOVA) were used to test the data statistically. Tukey's HSD post-hoc test was used to indicate the exact points of significance between group means. The results were expressed as Mean ± Standard Deviation, and values were considered significant at p < 0.05 (95% confidence interval).
RESULTS
Fig. 1 shows the effect of the leaf extract of C. olitorius on superoxide dismutase, catalase, glutathione peroxidase and malondialdehyde levels in Wistar rats. The results indicated a significant increase (p < 0.05) in SOD, catalase and glutathione peroxidase activities in-vivo, while malondialdehyde levels were significantly decreased (p < 0.05) in the extract-treated groups when compared with the standard control.
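For readers without GraphPad Prism, here is a minimal Python sketch of the same one-way ANOVA plus Tukey HSD workflow described under Data Analysis; the group labels follow the study design, but all data values are hypothetical.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical SOD activities for three of the six groups (n = 5 each).
df = pd.DataFrame({
    "group": ["A (200 mg/kg)"] * 5 + ["D (normal)"] * 5 + ["F (CCl4 only)"] * 5,
    "sod":   [4.1, 4.3, 4.0, 4.4, 4.2,
              3.1, 3.0, 3.2, 2.9, 3.1,
              1.9, 2.1, 2.0, 1.8, 2.2],
})

groups = [g["sod"].values for _, g in df.groupby("group")]
f_stat, p = stats.f_oneway(*groups)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p:.4f}")
print(pairwise_tukeyhsd(df["sod"], df["group"], alpha=0.05))
```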
DISCUSSION
Plants produce different chemical compounds or phytochemicals, which have been used in a wide range of commercial, medicinal, and industrial applications. The moisture content obtained in this study was found to be 19.49 ± 0.82 %, far lower than the 86.35 % reported by Adeniyi et al. [24] for C. olitorius leaf and also lower than the 30.90 % reported by Onwordi et al. [25] for C. olitorius leaf. This result is an indication that the plant can be stored under good conditions without absorbing moisture from the atmosphere, which would lead to microbial contamination [26]. The moisture content of any food is an index of its water activity, and it is used as a measure of storage stability and susceptibility to microbial contamination. Hence, the lower the moisture content of a food, the higher its storage stability. The ash value observed in this study is consistent with the findings of Onwordi et al. [25], who reported an ash content of 21.20 ± 0.80 % for C. olitorius leaf. Ash content reflects the inorganic mineral matter present in a food sample. The protein content of the present study agrees with that reported for C. olitorius by Onwordi et al. [25]. It has long been established that the performance of foods in biological systems depends on the quantity and quality of their proteins [27]. Proteins are major constituents of most structural and cellular components in any living organism, as they are composed of amino acids, and hence help in cellular growth. C. olitorius leaves are a poor source of lipid, as indicated by the present study. The total lipid content of 0.46 ± 0.11 % is consistent with the 0.32 % reported for C. olitorius leaves by Onwordi et al. [25] but considerably lower than the 6.10 % reported for the ethanol leaf extract of C. olitorius [28]. Vegetables are poor sources of fat and hence good for obese people. The carbohydrate content of C. olitorius (44.16 ± 1.21 %) is higher than the 31.34 % reported by Onwordi et al. [25] for C. olitorius leaves but consistent with the 42.99 % carbohydrate content of the ethanol leaf extract of C. olitorius reported by Adeniyi et al. [24]. The carbohydrate content obtained for this plant is sufficient to classify C. olitorius as a carbohydrate-rich food, and it could hence supply most of the body's energy requirements.
C. olitorius leaves have been shown to contain numerous phytochemical constituents. These phytochemicals are known to possess therapeutic efficacies, which justifies their use in traditional medicine [29]. Sharmila et al. [30] documented that these phytochemicals may be responsible for several pharmacological activities such as wound healing, cholesterol lowering and antidiabetic activity. It has long been documented that plant steroids, flavonoids and phenols are antioxidants [31]. The phenol content obtained in this study was similar to the findings of Roy et al. [32]. Phenols are a class of aromatic organic compounds bearing one or more hydroxyl groups attached directly to an aromatic ring; the parent compound, phenol (C6H5OH), is a white, volatile crystalline solid. Similarly, the tannin content of C. olitorius obtained in this study was consistent with the findings of Roy et al. [32]. Tannins are a class of astringent, polyphenolic biomolecules that bind to and precipitate proteins and various other organic compounds, including amino acids and alkaloids [33]. Other phytochemicals present in the leaf extract of C. olitorius included cardiac glycosides, oxalate and phytate, as well as flavonoids. Flavonoids are a group of plant metabolites thought to provide health benefits. They are polyphenolic molecules containing 15 carbon atoms and are soluble in water. They are known generally to be responsible for colour and taste, the prevention of fat oxidation, and the protection of vitamins and enzymes [34].
In this study, acute CCl4 exposure significantly elevated malondialdehyde (MDA) levels, indicating enhanced peroxidation and a breakdown of the antioxidant defense mechanisms. Decomposition products of lipid hydroperoxides such as malondialdehyde can cause chaotic cross-linking with proteins and nucleic acids, which plays an important role in the process of carcinogenesis.
In this investigation, administration of the ethanol leaf extract of C. olitorius at doses of 200, 400 and 600 mg/kg body weight significantly decreased MDA levels, suggesting a protective effect against CCl4-induced oxidative damage. These findings agree with the studies of Airaodion et al. [35], who reported the ameliorative efficacy of the phytochemical contents of C. olitorius leaves against acute ethanol-induced oxidative stress in Wistar rats as well as the hepatoprotective effect of Parkia biglobosa under the same conditions.
All organisms have their own cellular antioxidant defense system composed of both enzymatic and non-enzymatic components. The enzymatic antioxidant pathway consists of SOD, CAT and GPx. The superoxide anion is dismutated by SOD to H₂O₂, which is reduced to water and molecular oxygen by CAT or is neutralized by GPx, which catalyzes the reduction of H₂O₂ to water and of organic peroxides to alcohols using GSH as a source of reducing equivalents. Superoxide dismutase (SOD) plays an important role in reducing the effect of free radical attack; SOD is the only enzymatic system converting the superoxide anion (O₂•⁻) to H₂O₂ and plays a significant role against oxidative stress [36]. These radicals have been reported to be deleterious to polyunsaturated fatty acids and proteins [37]. In this study, a significant difference was observed in the activity of SOD in the C. olitorius treated group compared with the control groups. This might be because CCl₄-induced oxidative stress elevated ROS in the liver, which SOD tends to combat, thereby increasing its activity. C. olitorius was able to reduce the ROS generation with a subsequent increase in SOD activity due to its high phytochemical content, as reported by Orieke et al. [36].
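The enzymatic cascade just described corresponds to the following standard textbook reactions (general biochemistry, not data from this study):
2 O₂•⁻ + 2 H⁺ → H₂O₂ + O₂   (SOD)
2 H₂O₂ → 2 H₂O + O₂   (CAT)
H₂O₂ + 2 GSH → 2 H₂O + GSSG   (GPx)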
Catalase contributes to ethanol oxidation by oxidizing a small amount of ethanol in the presence of an H₂O₂-generating system to form acetaldehyde [38]. In this study, a significant increase was observed in the activity of catalase in C. olitorius treated animals when compared with the control. This contradicts the findings of Airaodion et al. [35], who reported a nonsignificant difference when animals were treated with Parkia biglobosa. The activity of catalase in animals treated with C. olitorius after the induction of oxidative stress with CCl₄ was significantly increased when compared with those without C. olitorius treatment. This might be because CCl₄-induced toxicity generated elevated ROS in the liver, which catalase tends to combat, thereby increasing its activity. C. olitorius was able to reduce the ROS generation with a subsequent decrease in catalase activity due to its high phytochemical content, as reported by Orieke et al. [36]. The increased catalase activity in this study following exposure to CCl₄ suggests elevated oxidation. This agrees with the studies of Airaodion et al. [35] and Oyenihi et al. [39], who reported a significantly higher CAT activity after ethanol-induced oxidative stress.
Glutathione peroxidase (GPx) is another enzymatic antioxidant that acts as a defense against oxidative stress. It directly quenches ROS such as lipid peroxides, and also plays a major role in xenobiotic metabolism. It is known to detoxify hydrogen peroxide and lipid peroxides by donating electrons to hydrogen peroxide to reduce it to water and oxygen, thereby protecting macromolecules such as lipids from oxidation. The findings correlate with the works of Ganie et al. [2] and Shah et al. [3], who reported a significant decrease in the activity of GPx under CCl₄-induced oxidative stress, which was restored on administration of P. hexandrum and N. biserrata ethanol leaf extracts, respectively, suggesting the role of bioactive antioxidant compounds in the plants.
CONCLUSION
The plant leaf extract showed considerable proximate and phytochemical contents and exhibited appreciable in vivo antioxidant activities. From the above findings, C. olitorius appears to have excellent health and medicinal benefits which deserve to be further explored. Going forward, it is advocated that the phytochemical compounds be characterized for further antioxidant studies.
DISCLAIMER
The products used for this research are commonly and predominantly used products in our area of research and country. There is absolutely no conflict of interest between the authors and the producers of the products, because we do not intend to use these products as an avenue for any litigation but for the advancement of knowledge. Also, the research was not funded by the producing company; rather, it was funded by the personal efforts of the authors.
ETHICAL APPROVAL
As per international or university standards, written ethical approval has been collected and preserved by the authors. | 2021-10-21T15:30:46.600Z | 2021-09-06T00:00:00.000 | {
"year": 2021,
"sha1": "50cf3ffe26cbb0702716803e450931f20d1820e1",
"oa_license": null,
"oa_url": "https://www.journalejnfs.com/index.php/EJNFS/article/download/30421/57066",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cee2bb5d3b0435794111113f657de680b6b95a47",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
55173283 | pes2o/s2orc | v3-fos-license | Threshold Perspectives on Meson Production
Studies of meson production in nucleon-nucleon collisions at threshold are characterised by few degrees of freedom in a configuration of well defined initial and final states with a transition governed by short range dynamics. Effects from low-energy scattering in the exit channel are inherent to the data and probe the interaction in baryon-meson and meson-meson systems otherwise difficult to access. From dedicated experiments at the present generation of cooler rings precise data are becoming available on differential and eventually spin observables allowing detailed comparisons between complementary final states. To discuss physics implications of generic and specific properties, recent experimental results on meson production in proton-proton scattering obtained at CELSIUS and COSY serve as a guideline.
INTRODUCTION
High precision data from the present generation of cooler rings, IUCF, CELSIUS, and COSY, have contributed significantly over the last decade to our present knowledge and understanding of threshold meson production (for a recent review see [1]).
Due to the high momentum transfers required to create a meson or mesonic system in production experiments close to threshold, the short range part of the interaction is probed. In nucleon-nucleon scattering, for mesons in the mass range up to 1 GeV/c², distances from 0.53 fm (π⁰) down to less than 0.2 fm (φ) are involved. At such short distances it is a priori not clear whether the relevant degrees of freedom are still baryons and mesons, or rather quarks and gluons. As there is no well defined boundary, one goal of the threshold production approach is to explore the limits in momentum transfer for a consistent description using hadronic meson exchange models. Within this framework, questions concerning both the underlying meson exchange contributions and especially the role of intermediate baryon resonances have to be answered.
Another aspect which enriches the field of study arises from the low relative centre-of-mass velocities of the ejectiles: effects of low energy scattering are inherent to the observables due to strong final state interactions (FSI) within the baryon-baryon, baryon-meson, and meson-meson subsystems. In the case of short-lived particles, low energy scattering potentials are otherwise difficult or impossible to study directly.
DYNAMICS OF THE TWO PION SYSTEM
In γ- and π-induced double pion production on the nucleon, the excitation of the N*(1440) P₁₁ resonance followed by its decay to the Nσ channel, i.e. N*(1440) → p(ππ)_{I=l=0}, is found to contribute non-negligibly close to threshold [2,3,4]. Nucleon-nucleon scattering should provide complementary information, eventually on the ππ decay mode of the N*(1440), which plays an important part in understanding the basic structure of the second excited state of the nucleon [5,6,7].
Exclusive CELSIUS data from the PROMICE/WASA setup on the reactions pp → ppπ⁺π⁻, pp → ppπ⁰π⁰ and pp → pnπ⁺π⁰ [8,9,10] are well described by model calculations [11]: for the π⁺π⁻ and π⁰π⁰ channels, the reaction preferentially proceeds close to threshold via heavy meson exchange and excitation of the N*(1440) Roper resonance, with a subsequent pure s-wave decay to the Nσ channel. While nonresonant contributions are expected to be small, resonant processes with Roper excitation and decay via an intermediate ∆ (pp → pN* → p∆π → ppππ) and ∆∆ excitation (pp → ∆∆ → pπ pπ) are strongly momentum dependent and vanish directly at threshold. Double ∆ excitation, which is expected to dominate at higher excess energies beyond Q = 250 MeV [11], involves higher angular momenta and consequently strongly anisotropic proton and pion angular distributions. On the other hand, the Roper decay amplitude via an intermediate ∆ depends predominantly on a term symmetric in the pion momenta (eq. (1)), leading to the p(π⁺π⁻)_{I=l=0} channel and an interference with the direct Nσ decay.
Experimentally, for the reaction pp → ppπ⁺π⁻ at excess energies of Q = 64.4 MeV and Q = 75 MeV, angular distributions give evidence for only s-waves in the final state, in line with a dominating pp → pN* → pp(π⁺π⁻)_{I=l=0} process, with the initial inelastic pp collision governed by heavy meson (σ, ρ) exchange. Roper excitation is disclosed in the pπ⁺π⁻ invariant mass distribution (Fig. 1a), where the data are shifted towards higher invariant masses compared to phase space, in agreement with resonance excitation in the low energy tail of the N*(1440). Compared with Monte Carlo simulations including both heavy meson exchange for N* excitation and the pp S-wave final state interaction, but only the direct decay N* → p(π⁺π⁻)_{I=l=0} (dotted lines), the production process involves additional dynamics, which is apparent from discrepancies especially in observables depending on the pion momentum correlation k₁·k₂, i.e. the π⁺π⁻ invariant mass M_ππ (Fig. 1b) and the opening angle δ_ππ = ∠(k₁, k₂) (Fig. 1c). A good description of the experimental data is achieved by including the N*(1440) decay via an intermediate ∆ and its destructive interference with the direct decay branch to the Nσ channel (solid lines) in the ansatz (1) for the Roper decay amplitude [11], in which the first term describes the direct decay, the parameter c adjusts the relative strengths of the two decay routes, and D_∆ denote the ∆ propagators. A fit to the data fixes the relative strength of the two decay routes; moreover, the data indicate a strong energy dependence of the ratio of the two routes, arising from the momentum dependence of the decay branch via an intermediate ∆, which will surpass the direct decay at higher energies [10]. A model dependent extrapolation based on the validity of ansatz (1) leads to R(1440) = 3.9 ± 0.3 at the nominal resonance pole, in good agreement with the PDG value of 4 ± 2 [13]. Within the experimental programme to determine the energy dependence of the N* → Nππ decay, exclusive data (for details see [14]) have been taken simultaneously at the CELSIUS/WASA facility on both the ppπ⁺π⁻ and ppπ⁰π⁰ final states. In the case of the π⁺π⁻ system, the preliminary results at an excess energy of Q = 75 MeV are in good agreement with the relative strength of the decay routes adjusted to an extrapolated ratio R(1440) = 3. However, at slightly higher excess energy (Q = 127 MeV) the data might be equally well described by a value R(1440) = 1, which is noticeably favoured at both excess energies by the data on π⁰π⁰ production, indicating distinct underlying dynamics in π⁰π⁰ and π⁺π⁻ production. One difference becomes obvious from the isospin decomposition of the total cross section [9]: an isospin I = 1 amplitude in the ππ system, and accordingly a p-wave admixture, is forbidden by symmetry to contribute to the neutral pion system, in contrast to the charged complement. A p-wave component was neglected so far in the analysis, since the unpolarized angular distributions show no deviation from isotropy. However, there is evidence for small but non-negligible analysing powers from a first exclusive measurement of π⁺π⁻ production with a polarized beam at the COSY-TOF facility [14,15], suggesting higher partial waves especially in the ππ system.
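A schematic reconstruction of ansatz (1) from the verbal description above reads (the exact couplings, form factors and kinematic factors are those given in [11]; the form below is an assumption, not a quotation):
A(N* → N(ππ)) ∝ 1 + c (k₁ · k₂) [ D_∆(k₁) + D_∆(k₂) ],
with the direct N* → Nσ term normalized to unity, and the route via an intermediate ∆ entering symmetrically in the pion momenta k₁ and k₂ through the ∆ propagators D_∆.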
At higher energies, i.e. Q = 208 MeV and Q = 286 MeV with respect to the π⁺π⁻ threshold, preliminary data for both π⁺π⁻ and π⁰π⁰ from CELSIUS/WASA follow phase space rather than expectations based on a dominating pp → pN* → ppσ reaction mechanism [14]. At these energies, the ∆∆ excitation process should influence observables significantly, and, thus, a phase space behaviour becomes even more surprising, unless the ∆∆ system is excited in a correlated way.
THE PROTON-PROTON-ETA FINAL STATE
As a general trait in meson production in nucleon-nucleon scattering, the primary production amplitude, i.e. the underlying dynamics, can be regarded as energy independent in the vicinity of threshold [16,17,18]. Consequently, for s-wave production processes, the energy dependence of the total cross section is essentially given by a phase space behaviour modified by the influence of final state interactions. In Fig. 2, total cross section data obtained in proton-proton scattering are shown for the pseudoscalar isosinglet mesons η and η′ [19]. In both cases, the energy dependence of the total cross section deviates significantly from phase space expectations. Including the on-shell ¹S₀ proton-proton FSI enhances the cross section close to threshold by more than an order of magnitude, in good agreement with the data in the case of η′. As expected from kinematical considerations [1], the cross section for η production deviates from phase space including the pp FSI at excess energies Q ≥ 40 MeV, where the ¹S₀ final state is no longer dominant compared to higher partial waves. Deviations at low excess energies seem to be well accounted for by an attractive proton-η FSI (dashed line), treated phenomenologically as an incoherent pairwise interaction [1,17,33]. In comparison to the proton-η′ (Fig. 2) and proton-π⁰ systems, only the pη interaction is strong enough to become apparent in the energy dependence of the total cross section [34].
FIGURE 2. Total cross section data for η (squares [20,21,22,23,24,25]) and η′ (circles [24,26,27,28,29]) production in proton-proton scattering versus excess energy Q [19]. In comparison, the energy dependences from a pure phase space behaviour (dotted lines, normalized arbitrarily), from phase space modified by the ¹S₀ proton-proton FSI including the Coulomb interaction (solid lines), and from additionally including the proton-η interaction phenomenologically (dashed line) are shown. Meson exchange calculations for η production including a P-wave component in the proton-proton system [30] are depicted by the dashed-dotted line, while the dashed-double-dotted line corresponds to the arbitrarily normalized energy dependence from a full three-body treatment of the ppη final state [31] (see also [32]).
In differential observables, effects should be more pronounced in the phase space region of low proton-η invariant masses. However, discerning effects of proton-η scattering from the influence of the proton-proton FSI, which is stronger by two orders of magnitude, requires high statistics measurements, which have only become available recently [19,35,36]: close to threshold, the distribution of the invariant mass of the proton-proton subsystem is characteristically shifted towards low invariant masses compared to phase space (dotted line in Fig. 3). This low-energy enhancement is well reproduced by modifying phase space with the ¹S₀ pp on-shell interaction. A second enhancement at higher pp invariant masses, i.e. low energy in the pη system, is not accounted for even when additionally including the proton-η interaction incoherently (dashed line). However, by including a P-wave admixture in the pp system, considering a ¹S₀ → ³P₀s transition in addition to the ³P₀ → ¹S₀s threshold amplitude, excellent agreement with the experimental invariant mass distribution is obtained (dashed-dotted line [30]). In return, with the P-wave strength adjusted to fit the invariant mass data, the approach fails to reproduce the energy dependence of the total cross section (Fig. 2) below excess energies of Q = 40 MeV. Preliminary calculations considering only s-waves in the final state but using a rigorous three-body treatment of the ppη final state actually decrease the cross section at large values of the pp invariant mass (dashed-double-dotted lines [32]) compared to an incoherent two-body calculation within the same framework. However, close to threshold the energy dependence of the total cross section is enhanced compared to the phenomenological incoherent treatment and the data (Fig. 2). Although part of this enhancement has to be attributed to the neglect of Coulomb repulsion in the pp system, consequently overestimating the pp invariant mass at low values, qualitatively the full three-body treatment has opposite effects compared to a P-wave admixture in the proton-proton system in view of both the total cross section and the pp invariant mass distribution. In the approximate description of the total cross section by the phenomenological s-wave approach with an incoherent FSI treatment, these two effects seem to cancel accidentally.
FIGURE 3. Distribution of the invariant mass of the proton-proton subsystem close to threshold (data from [35]). The dotted and dashed lines follow a pure phase space behaviour and its modification by the phenomenological treatment of the three-body FSI as an incoherent pairwise interaction, respectively; the latter is normalized at small invariant mass values. Effects from including a P-wave admixture in the pp system are depicted by the dashed-dotted line [30], while the dashed-double-dotted line corresponds to a pure s-wave final state with a full three-body treatment [32].
Close to threshold, resonance excitation of the S₁₁(1535) and its subsequent decay to the pη final state is generally believed to be the dominant η production mechanism [17,30,38,39,40,41,42,43]. In this context, the issue of the actual excitation mechanism of the S₁₁(1535) remains to be addressed. The η angular distribution is sensitive to the underlying dynamics: a dominant ρ exchange, favoured in [41], results in an inverted curvature of the η angular distribution compared to π and η exchanges, which are inferred to give the largest contribution to resonance excitation in [42]. In the latter approach, the interference of the pseudoscalar exchanges in the resonance current with non-resonant nucleonic and mesonic exchange currents turns the curvature to the same angular dependence as expected for ρ exchange. Presently, due to the statistical errors of the available unpolarised data at an excess energy of Q ≈ 40 MeV [35,36], it is not possible to differentiate between a dominant ρ or π, η exchange, as discussed in [36]. Data recently taken at the CELSIUS/WASA facility with statistics increased by an order of magnitude compared to the available data might provide an answer in the near future [44].
Spin observables, like the η analysing power, should even disentangle a dominant ρ meson exchange from the interference of π and η exchanges in resonance excitation with small nucleonic and mesonic currents [42], which result in identical predictions for the unpolarised η angular distribution. First data [45] seem to favour the vector dominance model, but final conclusions both on the underlying reaction dynamics and on the admixture of higher partial waves [30] have to await the analysis of data taken with higher statistics for the energy dependence of the η analysing power [46].
ASSOCIATED STRANGENESS PRODUCTION
In elementary hadronic interactions with no strange valence quark in the initial state, the associated strangeness production provides a powerful tool to study reaction dynamics by introducing a "tracer" into hadronic matter. Thus, quark model concepts might eventually be related to mesonic or baryonic degrees of freedom, with the onset of quark degrees of freedom expected for kinematical situations with a large enough transverse momentum transfer.
First exclusive close-to-threshold data on Λ and Σ⁰ production [47,48] obtained at the COSY-11 facility showed, at equal excess energies below Q = 13 MeV, a cross section ratio of
R_{Λ/Σ⁰} = σ(pp → pK⁺Λ) / σ(pp → pK⁺Σ⁰) ≈ 28,    (2)
exceeding the high energy value (Q ≥ 300 MeV) of 2.5 [49] by an order of magnitude.
In the meson exchange framework, estimates for π and K exchange contributions based on the elementary scattering processes do not reproduce the experimental value (2) [48,50]. However, inclusive K⁺ production data in pp scattering at an excess energy of Q = 252 MeV with respect to the pK⁺Λ threshold show enhancements at the Λp and ΣN thresholds of similar magnitude [51]. Qualitatively, a strong Σ⁰N → Λp final state conversion might account for both the inclusive SATURNE results and the Σ⁰ depletion in the COSY-11 data. Evidence for such conversion effects is known, e.g., from fully constrained kaon absorption on deuterium via K⁻d → π⁻Λp [52].
In exploratory calculations performed within the framework of the Jülich meson exchange model [50], taking into account both π and K exchange diagrams in a coupled channel approach, a final state conversion is rather excluded as the origin of the experimentally observed ratio: while Λ production is found to be dominated by kaon exchange, both π and K exchange turn out to contribute to the Σ⁰ channel with similar strength. Qualitatively, this result is experimentally confirmed at higher excess energies between Q = 200 MeV and Q = 430 MeV by polarization transfer measurements from the DISTO experiment [53,54,55]. It is concluded in [50] that only a destructive interference of π and K exchange might explain the experimental value (2). Σ production in different isospin configurations should provide a crucial test for this interpretation, since for the reaction pp → nK⁺Σ⁺ the interference pattern is found to be opposite compared to the pK⁺Σ⁰ channel. Data close to threshold have recently been taken at the COSY-11 facility [56].
However, within an effective Lagrangian approach [57], both Λ and Σ⁰ production channels are concluded to be dominated by π exchange and excitation of the S₁₁(1650) close to threshold, while at excess energies above Q = 300 MeV the N*(1710) governs strangeness production. In this energy range the influence of resonances becomes evident from recent data on invariant mass distributions determined at COSY-TOF [59].
To study the transition region between the low-energy enhancement (2) and the high energy value, measurements have been extended up to excess energies of Q = 60 MeV [58,60]: in order to describe the energy dependence of the total cross section for Λ production, the pΛ final state interaction has to be taken into account in addition to phase space. In contrast, Σ⁰ production is satisfactorily well described by phase space behaviour only [58]. This qualitatively different behaviour might be explained by the Σ⁰p FSI being much weaker compared to the Λp system. However, the interpretation implies dominant S-wave production and reaction dynamics that can be regarded as energy independent. Within the present level of statistics, contributions from higher partial waves can be neither ruled out nor confirmed at higher excess energies for Σ⁰ production.
The energy dependence of the production ratio R_{Λ/Σ⁰} is shown in Fig. 4 in comparison with theoretical calculations obtained within the approach of [50], assuming a destructive interference of π and K exchange and employing different choices of the microscopic hyperon-nucleon model to describe the interaction in the final state [61]. The result crucially depends on the details, especially the off-shell properties, of the hyperon-nucleon interaction employed. At the present stage, both the good agreement found in [50] with the threshold enhancement (2) and that of the Nijmegen model (dashed line in Fig. 4) with the energy dependence of the cross section ratio should rather be regarded as accidental. Calculations using the new Jülich model (solid line in Fig. 4) do not reproduce the tendency of the experimental data. It is suggested in [61] that neglecting the energy dependence of the elementary amplitudes and higher partial waves might no longer be justified beyond excess energies of Q = 20 MeV. However, once the reaction mechanism for close-to-threshold hyperon production is understood, exclusive data should provide a strong constraint on the details of hyperon-nucleon interaction models.
FIGURE 4. Λ/Σ⁰ production ratio in proton-proton scattering as a function of the excess energy. Data are from [48] (shaded area) and [60]. Calculations [61] within the Jülich meson exchange model imply a destructive interference of K and π exchange, using the microscopic Nijmegen NSC89 model (dashed line [62]) and the new Jülich model (solid line [63]) for the YN final state interaction.
PRESENT AND FUTURE
Intermediate baryon resonances emerge as a common feature in the dynamics of the exemplary cases for threshold meson production in nucleon-nucleon scattering discussed in this article. However, this does not hold in general for meson production in the 1 GeV/c² mass range (for a discussion of η′ production see [64]). Moreover, the extent to which resonances are evident in the observables or actually govern the reaction mechanism depends on the specific channels, which differ in view of the level of present experimental and theoretical understanding.
The N*(1440) resonance dominates π⁺π⁻ production at threshold, and exclusive data allow the extraction of resonance decay properties in the low-energy tail of the Roper. Dynamical differences between the different isospin configurations of the ππ system and the behaviour at higher energies remain to be understood, with first experimental clues appearing.
With three strongly interacting particles in the final state, a consistent description of η production close to threshold requires an accurate three-body approach taking into account the possible influence of higher partial waves. Upcoming high statistics differential cross sections and polarization observables should straighten out both the excitation mechanism of the N*(1535) and the admixture of higher partial waves.
At present, the available experimental data on the elementary strangeness production channels give evidence both for an important role of resonances coupling to the hyperon-kaon channels and for a dominant non-resonant kaon exchange mechanism. Experiments on different isospin configurations, high statistics and spin transfer measurements close to threshold should disentangle the situation in the future.
From the cornerstone of total cross section measurements, it is apparent from the above examples to what extent our knowledge is presently being enlarged by differential observables, and what the impact of polarization experiments will be in opening new perspectives on threshold meson production. | 2014-10-01T00:00:00.000Z | 2003-11-12T00:00:00.000 | {
"year": 2003,
"sha1": "c9043aa3f63d72aa177872aaa3768ffcf973b83c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/nucl-ex/0311011",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c9043aa3f63d72aa177872aaa3768ffcf973b83c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
232041294 | pes2o/s2orc | v3-fos-license | First trimester employment, working conditions and preterm birth: a prospective population-based cohort study
Objectives To explore the association between working conditions during the first trimester and total preterm birth (PTB) and its subtypes, spontaneous PTB and iatrogenic PTB, and additionally to explore the role of hypertension. Methods Pregnant women from the Amsterdam Born Children and their Development study filled out a questionnaire between January 2003 and March 2004, two weeks after the first prenatal screening (singleton liveborn, n=7561). Working conditions were working hours/week, standing/walking hours/week, physical work load and job strain. Results Prolonged standing/walking during the first trimester was associated with an increased risk for total PTB (OR=1.5; 95% CI 1.0–2.3, after adjustments). Other working conditions were not related to total PTB. The separation into spontaneous and iatrogenic PTB revealed that standing/walking was associated with iatrogenic PTB only (OR=2.09; 95% CI 1.00–4.97). The highest risk was found for the combination of a long workweek with high physical work load (OR=3.42; 95% CI 1.04–8.21). Hypertension did not mediate these associations; however, stratified analysis revealed that high physical work load was only related to iatrogenic PTB when pregnancy-induced hypertension was present (OR=6.44; 95% CI 1.21–29.76). Conclusion This study provides evidence that highly physically demanding work is associated with an increased risk for iatrogenic PTB and not with spontaneous PTB. Pregnancy-induced hypertension may play a role: when present, high physical work load leads to a more severe outcome.
INTRODUCTION
Preterm birth (PTB) is a principal adverse outcome of perinatal care, associated with infant mortality and subsequent morbidity [1]. In the last decades, the prevalence of PTB slightly decreased [2, 3], while neonatal outcome in general has improved considerably. However, there is room for considerable improvement of PTB, even if we account for some iatrogenic increase [3]. Risk factors for spontaneous PTB are maternal factors (including pre-existent hypertension), obstetric factors (including placental dynamics) and social factors, which include work-related factors [4-6]. Although work in general is associated with better outcomes, most likely through indirect pathways [7] or by selection of women with better health into employed jobs, known as the 'healthy worker effect' [8], specific working conditions are potential risk factors for PTB through direct, biological pathways. An increased risk from long working hours [9-11], high physical work load [12-14], prolonged standing [10] and psychosocial job strain [10] has been suggested, but results are not unequivocal [12, 14-16]. So far, six reviews have been conducted, four focusing on high physical work load, long working hours and prolonged standing and one on lifting [9, 17-21]. These studies concluded that working conditions were associated with increased risk for preterm delivery, but the effects were small to moderate (pooled estimates RR < 1.3). One recent meta-analysis confirmed the result with respect to extended working hours (>40 hours/week) [9]. To the best of our knowledge, job strain has only been considered in one critical review on psychosocial characteristics of work, which showed a modest but inconclusive association between job strain and PTB [22]. Differences in research designs and in definitions and measurements of work-related factors may account for inconsistent observations. First, physical work load and working hours are frequently considered as independent exposures. However, interaction may be assumed, as heavy work load can be expected to be more detrimental under full-time rather than part-time working conditions. Second, most studies combine spontaneous and iatrogenic (medically indicated) PTB into one outcome measure, while the pathophysiological mechanisms only partially overlap [5, 23, 24]. Working conditions could thus be related differently to these types of PTB. Additionally, hypertension during pregnancy, a driver of iatrogenic PTB, could be a mediator or modifier in the relation between work and PTB. The positive association between job strain and blood pressure is consistently reported in the working population [25], and also in pregnant women [26]. Finally, so far, the focus has been on physical work load, while only a few studies have looked at self-rated job strain, defined as high job demands in combination with low job control, as a potential independent contributor to adverse outcomes like PTB [27]. The purpose of this empirical study was to explore the association between, on the one hand, the exposure of employment, acknowledging different pathways via working conditions like weekly working hours, hours standing or walking, physically demanding work and psychosocial job strain, and, on the other hand, the outcome PTB, both total and subdivided into spontaneous and iatrogenic PTB. In addition, the complex role of pregnancy-induced hypertension (PIH) was explored by testing its mediating as well as its moderating role. Evidence for mediation would mean that heavy working conditions lead to PIH and subsequently to PTB.
Moderation would mean that PIH acts as a risk enhancer: when present, heavy working conditions would lead to an increased risk for PTB. Shedding some light on this mechanism is important, as PIH is the most important single clinical entity responsible for poor neonatal outcome in modern society; in particular, in that condition iatrogenic PTB is often applied to prevent a worse outcome (fetal death) at a later stage, which still often involves spontaneous PTB.
An unselected urban cohort of pregnant women was studied in which work-related factors were measured prospectively at the end of the first trimester [28]. We hypothesised (1) that heavy working conditions are associated differently with the subtypes of PTB, (2) that heavy working conditions in combination with a long workweek have the most detrimental effects, and (3) that PIH mediates the above-stated associations.
METHODS
Study population
Prospective data from a community cohort of pregnant women in the city of Amsterdam, the Netherlands, were used (Amsterdam Born Children and their Development (ABCD) study). The ABCD study investigates the relationship between maternal lifestyle and psychosocial conditions during pregnancy and the child's health at birth as well as in later life (www.abcdstudy.nl) [28]. Details of the study design, including measurements, have been described previously [28, 29]. In short, between January 2003 and March 2004, all pregnant Amsterdam women were invited to participate at their first prenatal visit to the obstetric care provider (on average the 12th gestational week), and requested to complete a questionnaire covering socio-demographic data, obstetric history, lifestyle, dietary habits and psychosocial factors. The questionnaire was available in Dutch, English, Turkish and Arabic for immigrant women. Three to 6 months after delivery, the women received an infant questionnaire covering the health of the mother and her baby.
In total 12 373 women were invited and 8266 women returned the questionnaire (response rate: 67%). Of this group, 7731 gave birth to a viable singleton infant for whom information on birth weight and pregnancy duration was available. For this study, we excluded births before 24 weeks of gestation and maternal age younger than 20 years. In total, 7561 women had complete data on all relevant variables for the analysis. The study protocol was approved by the medical ethical committees of all Amsterdam hospitals and the Registration Committee of Amsterdam. All participating women gave written consent.
Measures
Exposure measurement: employment was defined as paid work for at least 8 hours/week during the first trimester (self-reported). All other situations were classified as being unemployed. The amount of weekly working hours was categorised into three categories (8-31 hours, 32-40 hours and >40 hours), based on conventional working schemes in the Netherlands. The self-administered, validated Dutch version of the Job Content Questionnaire (JCQ) measured job strain [30, 31]. The JCQ consists of two subscales, 'job demands' and 'job control', which together define 'job strain'. Job demand is covered by altogether 25 items, referring to work pace (11 items; concerning, e.g., time pressure and amount of work), mental work load (7 items; e.g., the requirement to perform several tasks simultaneously) and physical work load (7 items; concerning strenuous posture and load carrying). Job control is covered by 11 items, concerning, for example, perceived control of one's own work pace. All JCQ items use a 4-point response mode. In our study, scale reliability (Cronbach's α) for job demands and job control was 0.82 and 0.91, respectively. For the analysis, the sum score of job demand was trichotomised into low (<50th percentile), moderate (between the 50th and 90th percentile) and high (>90th percentile), and that of job control into high (>50th percentile), moderate (between the 10th and 50th percentile) and low (<10th percentile). Women with high job demands and low or moderate job control were scored as having high job strain, those with low job demands and moderate or high job control as having low job strain, and all other combinations as having moderate job strain [29]. In addition to the JCQ, physical work load was measured by (1) the reported number of weekly hours standing or walking, categorised into four categories (<10 hours, 10-19 hours, 20-30 hours and >30 hours), and (2) taking the subscale physical work load from the job demands scale as a separate variable. The score on this subscale was trichotomised into low (<50th percentile), moderate (between the 50th and 90th percentile) and high (>90th percentile).
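As an illustration only, the job strain classification just described can be sketched in Python (pandas/NumPy); the function and column names are hypothetical, and this is not the study's actual code (the published analyses were run in SPSS and Stata):
```python
import numpy as np
import pandas as pd

def classify_job_strain(demand: pd.Series, control: pd.Series) -> pd.Series:
    """Three-level job strain from JCQ sum scores, per the cut-offs in the text."""
    # Job demands: low < P50, moderate P50-P90, high > P90 (sample percentiles).
    d50, d90 = demand.quantile([0.50, 0.90])
    d_cat = np.where(demand > d90, "high",
                     np.where(demand < d50, "low", "moderate"))
    # Job control: high > P50, moderate P10-P50, low < P10.
    c10, c50 = control.quantile([0.10, 0.50])
    c_cat = np.where(control < c10, "low",
                     np.where(control > c50, "high", "moderate"))
    # High strain: high demands with low/moderate control;
    # low strain: low demands with moderate/high control; otherwise moderate.
    strain = np.full(len(demand), "moderate", dtype=object)
    strain[(d_cat == "high") & np.isin(c_cat, ["low", "moderate"])] = "high"
    strain[(d_cat == "low") & np.isin(c_cat, ["moderate", "high"])] = "low"
    return pd.Series(strain, index=demand.index)

# Example with hypothetical column names:
# df["job_strain"] = classify_job_strain(df["jcq_demand_sum"], df["jcq_control_sum"])
```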
Outcome measurement: pregnancy duration (ultrasound based or, if unavailable, based on the timing of the last menstrual period) was obtained from the youth healthcare registration of the Public Health Service in Amsterdam; every newborn (alive or dead) is registered at the civil registration and brought to the attention of the youth healthcare service to be included in preventive schemes. The Dutch Perinatal Registration (PRN) provided comprehensive data on pregnancy, obstetric history and pregnancy outcomes for 80% of our sample. These data were linked to the ABCD data by anonymous probabilistic linkage methods [32], which also accounted for small errors in the birth date (e.g., midnight births). If variables were available from two sources, for example maternal age, this allowed for additional quality checks.
The primary outcome variable was PTB (gestational age between 24 weeks and 37 weeks). The Dutch PRN registers the onset of delivery (e.g., spontaneous, induction or section) only when women delivered under the supervision of a gynaecologist. Based on these data, we divided total PTB into spontaneous PTB (delivery onset by spontaneous preterm labour or premature rupture of membranes) and iatrogenic PTB (delivery onset through induction or primary caesarean section). PTBs with an unknown type of delivery onset (11%) were classified as spontaneous PTB if a woman did not specifically report in the infant questionnaire having had an iatrogenic delivery or if she had not been under the supervision of a gynaecologist [23]. Explanatory variables: apart from the above clinical information, all other explanatory variables were self-reported: maternal age (years), parity (two categories: primiparae and multiparae), ethnicity (country of birth of the pregnant woman's mother, to include the second generation: the Netherlands, Surinam/Antillean, Turkey/Morocco, other non-Western and other Western), maternal education (years of education after primary school, continuous), smoking during pregnancy (dichotomised into yes or no), alcohol use (dichotomised into yes or no), marital status (married/cohabiting vs single), previous PTB (dichotomised into yes or no) and pre-gravid maternal body mass index (BMI, kg/m²). Chronic (pre-existent) hypertension and PIH were both defined by combining self-reported data and the PRN registration. Chronic hypertension was considered present if pre-existent hypertension was recorded in the PRN or if women reported high blood pressure and/or the use of medication against high blood pressure before pregnancy or in the first 20 weeks of pregnancy. PIH was assumed to be present if pregnancy-related hypertension, eclampsia or pre-eclampsia was recorded in the PRN, or if women without pre-existent hypertension reported high blood pressure and/or the use of antihypertensive medication during pregnancy [33, 34].
Statistical analysis
We estimated the hypothesised effects of working conditions on PTB by logistic regression models in employed women only. First, univariate analyses for each working condition separately provided unadjusted effects. Multivariate models controlled for the following factors: maternal age, parity, educational level, smoking habits during pregnancy, pre-gravid BMI and previous PTB, to reveal the statistically independent effect of working conditions. These covariates were chosen as they previously proved to be independent risk factors for PTB [23]. The correlations between the covariates were all below 0.23 (collinearity check). We tested whether there was significant mediation by PIH (binary mediator) of the association between working conditions and PTB with structural equation modelling. A 95% percentile bootstrap CI was calculated based on 1000 bootstrap resamples for the indirect effect in order to test for significance. In all models, employed women with the least heavy working condition (lowest exposure category) were designated as the reference category, implying that risk estimates show the added risk (if any) compared with low-burden workers.
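For orientation, a rough, self-contained Python sketch of the adjusted logistic regression and the percentile-bootstrap test of the indirect effect via PIH; the study itself used SPSS V.25 and a structural equation model in Stata, so everything below (variable names, the synthetic data, and the product-of-coefficients shortcut) is an illustrative assumption rather than the published analysis:
```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the employed subsample (binary 0/1 columns).
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "standing_gt30": rng.integers(0, 2, n),  # >30 h/week standing/walking
    "age": rng.normal(32, 5, n),
    "parity": rng.integers(0, 2, n),
    "education_years": rng.integers(4, 16, n),
    "smoking": rng.integers(0, 2, n),
    "bmi": rng.normal(23, 3, n),
    "previous_ptb": rng.integers(0, 2, n),
    "pih": rng.integers(0, 2, n),
    "ptb": rng.integers(0, 2, n),
})

# Adjusted model: odds of PTB by prolonged standing/walking, controlling
# for the covariates named in the text.
fit = smf.logit(
    "ptb ~ standing_gt30 + age + parity + education_years"
    " + smoking + bmi + previous_ptb",
    data=df,
).fit(disp=0)
print(np.exp(fit.params["standing_gt30"]))  # adjusted odds ratio

# 95% percentile bootstrap (1000 resamples) for the indirect effect of the
# binary mediator PIH: exposure -> PIH (path a) and PIH -> PTB (path b).
def indirect(d: pd.DataFrame) -> float:
    a = smf.logit("pih ~ standing_gt30 + age", data=d).fit(disp=0)
    b = smf.logit("ptb ~ pih + standing_gt30 + age", data=d).fit(disp=0)
    return a.params["standing_gt30"] * b.params["pih"]

boot = [indirect(df.sample(len(df), replace=True)) for _ in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])  # mediation if this CI excludes 0
```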
The combined effects of working hours and job strain and of working hours and physical work load were tested. Variables were redefined into six categories, and the least heavy working condition in combination with a workweek of less than 32 hours was taken as the reference category. To test the modifying effect of hypertension, stratified analyses were performed for those with and without PIH.
Data were analysed using SPSS V.25.0. Goodness of fit of the logistic regression models was assessed by the Hosmer-Lemeshow test. The mediation analyses were conducted using the capture programme in Stata V.15. Only missing values of pre-pregnancy BMI were imputed (5% missing); less than 1% of other data were missing.
RESULTS
Compared with the non-response group (N=4107), the response group (N=8266) was a little older (mean age: 31.7±5.2 vs 30.2±5.8), more often primiparae (% primiparae: 55.7 vs 40.1) and more often of Dutch origin (% Dutch: 62.6 vs 35.3). No differences were found with respect to the outcome variables birth weight and pregnancy duration. To test whether selective participation caused selection bias, extensive non-response analysis was performed by probabilistic medical record linkage with the Dutch PRN. Results showed similar associations in the response and the non-response group between risk factors and several adverse outcome indicators, suggesting no selection bias [32]. The socio-demographic background of the pregnant women, stratified by employment status, is shown in table 1. Differences between the two groups can largely be explained by differences in employment status between the ethnic groups. Most of the women (63%) worked at least 8 hours a week during the first trimester. Employed compared with unemployed women were older, higher educated, smoked less, drank more, had a lower pre-gravid BMI, less often had a previous PTB, more often had hypertensive disorders, were more often primiparae and were less often single. Socio-demographic background, stratified by working condition, is shown in online supplemental table 1. High physical work load, long hours of standing/walking a week and high job strain were more prevalent in women from lower educational or non-Dutch backgrounds. The rate of PTB in our sample (only singletons included) was 5.4%. About 80% of the PTBs were spontaneous (table 2). This proportion did not differ between the employed and the unemployed women.
Total PTB
More than 30 hours/week standing/walking was associated with an increased risk for total PTB (OR: 1.44; 95% CI: 1.01-2.24) in the adjusted analyses (table 3). A bias-corrected bootstrap CI for the indirect effect via PIH included zero, indicating no significant mediation (table 3).
Physical work load with weekly working hours as combined risk
The combination of high physical work load with ≥32 weekly working hours (4.7% of the working women) was not associated with total or spontaneous PTB, yet it resulted in the highest risk for iatrogenic PTB (table 4). Compared with women with low physical workload who worked <32 hours/week (reference group), they showed a more than three times increased risk (adjusted OR: 3.42; 95% CI: 1.04-8.21). The combination of high job strain with long working hours was not associated with an increased risk for PTB or any of its subtypes (data not shown).
Modifying role of PIH
The relation between physical work load and iatrogenic PTB was modified by PIH (p for interaction=0.07; table 5). The analysis shows that high physical work load in combination with PIH was related to iatrogenic PTB. This modifying effect was not present for total PTB and spontaneous PTB (data not shown).
DISCUSSION
In this large prospective community cohort of pregnant women, high physical work load and more than 30 hours/week standing or walking, measured during women's first trimester, were independently associated with a higher risk for iatrogenic PTB. The combination of high physical work load and a long workweek showed the highest impact, with (after adjustment) a more than three times increased risk for iatrogenic PTB. On the other hand, no effects of work were found for spontaneous PTB. In general, PTB effects were smaller than those observed for small-for-gestational-age (SGA) births [29]. Our results suggest that high physical work load does not lead to a more severe outcome via the development of PIH (no mediation). This supports previous findings that high physical work load was not associated with the risk of PIH, or its subcomponents pre-eclampsia or gestational hypertension [17, 35]. In fact, the results suggest that physical work load has a more severe impact on the pregnancy outcome when PIH is present ('risk-enhancer'). Indeed, in another paper of our group, we showed that high physical work load, combined with a long workweek, is associated with reduced fetal growth [29]. In that paper, we did not combine physical work load with hypertension, but the prevalence of an SGA baby in those with gestational hypertension was 22.5% when this was combined with high physical work load, while this was 13.6% in those with low physical work load. It is known that pre-eclampsia, fetal distress, SGA and placental abruption are indicators for an iatrogenic PTB, which suggests an association with ischaemic placental disease [36]. It could be that those women who develop hypertension during pregnancy continue to work in this adverse work situation, but also that the origin of a suboptimal placentation during the first weeks of pregnancy is caused by high physical work load in combination with other factors (e.g., genetic or environmental) that predispose to the development of high blood pressure. Regrettably, we only have the work exposure variables for the first trimester; whether women changed their working conditions is not known.
Our results confirm the case-control study of Escribá-Agüir and co-workers [37], who reported that the magnitude of the physical work load effect was greater for iatrogenic PTB (OR: 3.88; 95% CI: 2.04-7.39) than for spontaneous PTB (OR: 1.74; 95% CI: 0.99-3.01), and the study of Klebanoff, which compared the pregnancy outcomes of medical residents to those of the wives of male medical residents. They found no difference in the rate of preterm delivery; however, (pre-)eclampsia, a major risk factor for iatrogenic PTB, was more than twice as common among the residents, after adjustment for parity, age and ethnicity [16]. Escribá-Agüir and co-workers also combined the two subtypes (total PTB) and showed an increased risk with physical work load (OR: 2.35; 95% CI: 1.41-3.94), which has also been found previously but was not confirmed in our study [38, 39]. We did not find any effect of job strain (work stress) on preterm delivery. This is in agreement with a large prospective cohort study in the USA [12]. In another population-based case-control study [10], an effect was found for low job satisfaction. Our results did not show an effect of job strain on total preterm delivery or the subtypes. Also in combination with full-time working, job strain did not result in any increased risk, comparable to others [27]. An association might be present in subgroups, like those with low social support or in specific ethnic groups [22].
Potential limitations
Our study involved several limitations. First, as stated above, we measured working conditions only during the first trimester. Whether working conditions changed during pregnancy is unknown; it is, therefore, possible that the first trimester is an indicator of third trimester working conditions. Changes during pregnancy were most likely in the highest work exposure groups [40] (e.g., women with highly physical workloads may have moved to a desk job). Such attenuations in exposure would imply that our estimates are conservative. Some studies have included multiple measurements during pregnancy but have restricted analyses to women who worked throughout their pregnancy [41]. This approach leads to underestimates of early-pregnancy workload effects, and may even result in favourable rather than adverse work effects among those who work to term, if early quitting is associated with work-related pregnancy complications such as suspected intrauterine growth restriction.
Second, the percentage of unemployment was high in our cohort (36%). This can be explained in part by our definition of employment as working at least 8 hours/week during the first trimester. Given that most studies include only working women, comparisons between previous studies and our investigation are difficult. However, the unemployment rate in the Netherlands among women in the 25-34 year age group is 24.7%, which is high relative to other Western countries. In our cohort, the percentage was higher than the norm as a result of the comparatively large group of women of non-Dutch origin, among whom, according to national statistics, rates of unemployment are often high. We believe that our employment rate was representative of large cities in the Netherlands and that selective participation among women who were unemployed did not occur.
Third, we showed that adverse working conditions are indicative of lower socioeconomic status (SES) (online supplemental table 1), which in itself is associated with iatrogenic PTB [6]. Although education, profession and income are all components of SES, many studies focusing on community populations indicate that the main effects of SES act through employment (in addition to smoking) and, to a lesser extent, education. We adjusted for educational level, which can be considered overcorrection; the true estimates might, therefore, be larger.
Fourth, despite our large cohort, the numbers for iatrogenic PTB are small. Therefore, the results should be interpreted with caution. The postulated role of gestational hypertension should be confirmed in future studies.
Fifth, despite our efforts to include all pregnant women in Amsterdam, selective participation took place, and those from ethnic minorities and of lower socioeconomic status were less represented in our study [28, 29, 32]. However, we think that this did not lead to biased results, as the included groups were representative of the total groups [32]. This selective participation might, however, have influenced the prevalence of the working conditions. Recall bias is unlikely, as the information on working conditions was obtained before the outcome was assessed.
Study implications
In conclusion, we found that in general there is no reason to assume that working during pregnancy has a negative influence on preterm delivery, or its subtypes. However, the association observed between iatrogenic PTB and high physical work load in combination with a long workweek seems to be genuine. In addition, high physical work load should be avoided in those pregnant women with first indications of hypertensive disorders during pregnancy.
We believe that optimising the work environment during pregnancy is important, as the participation of women of reproductive age in the workforce continues to increase. Although only 4.7% of the working women in our cohort were in the highest physical work load and longest workweek categories, women facing such conditions should not be ignored, given that these percentages will be higher in other countries in which part-time employment is less common. Moreover, these adverse working conditions were more prevalent in women from lower socioeconomic and non-Dutch backgrounds. As these women also have other risk factors for PTB, like smoking, these groups might need specific attention in preventive strategies.
We are aware that our results must be confirmed in other large-scale prospective community cohort studies before firm conclusions can be drawn. These studies should include large numbers of pregnant women to validly study work-related risk factors for iatrogenic PTB and the role of hypertensive disorders. Multiple measurements of these work-related risk factors should be included in future studies to investigate whether the first trimester is a vulnerable window in which work-related risk factors can cause pregnancy complications that cannot be reversed. Although most pregnant women reduce their working loads at the end of their pregnancy, our results indicate that reducing physical workload in the initial stages of pregnancy may be beneficial among women with full-time physically demanding work and first signs of hypertensive disorders.
Patient consent for publication Not required.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Data are available upon request due to ethical restrictions related to protecting patient confidentiality. Researchers who are interested in using data for research purposes can apply for access to the Amsterdam Born Children and their Development data by contacting the research committee at abcd@amc.uva.nl.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/. | 2021-02-25T14:06:34.280Z | 2021-02-24T00:00:00.000 | {
"year": 2021,
"sha1": "ff12a4b262272ea625167b87ffc67f4b9a5db2bf",
"oa_license": "CCBYNC",
"oa_url": "https://oem.bmj.com/content/oemed/78/9/654.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "BMJ",
"pdf_hash": "ff12a4b262272ea625167b87ffc67f4b9a5db2bf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258081329 | pes2o/s2orc | v3-fos-license | Differential impact of COVID-19 on mental health and burnout
Abstract Background There may be differential impact of the COVID-19 pandemic on mental health and burnout rates of healthcare professionals (HCPs) performing different roles. Aims To examine mental health and burnout rates, and possible drivers for any disparities between professional roles. Methods In this cohort study, online surveys were distributed to HCPs in July–September 2020 (baseline) and re-sent 4 months later (follow-up; December 2020) assessing for probable major depressive disorder (MDD), generalized anxiety disorder (GAD), insomnia, mental well-being and burnout (emotional exhaustion and depersonalization). Separate logistic regression models (at both phases) compared the risk of outcomes between roles: healthcare assistants (HCAs), nurses and midwives (nurses), allied health professionals (AHPs) and doctors (reference group). Separate linear regression models were also developed relating the change in scores to professional role. Results At baseline (n = 1537), nurses had a 1.9-fold and 2.5-fold increased risk of MDD and insomnia, respectively. AHPs had a 1.7-fold and 1.4-fold increased risk of MDD and emotional exhaustion, respectively. At follow-up (n = 736), the disproportionate risk between doctors and others worsened: nurses and HCAs were at 3.7-fold and 3.6-fold increased risk of insomnia, respectively. Nurses also had a significantly increased risk of MDD, GAD, poor mental well-being and burnout. Nurses also had significantly worsened anxiety, mental well-being and burnout scores over time, relative to doctors. Conclusions Nurses and AHPs had excess risk of adverse mental health and burnout during the pandemic, and this difference worsened over time (in nurses especially). Our findings support adoption of targeted strategies accounting for different HCP roles.
INTRODUCTION
The increased demand on the healthcare workforce because of the coronavirus disease 2019 (COVID-19) pandemic has had a profound effect on the mental health of healthcare professionals (HCPs). Due to the potential for reduced quality of patient care and work absence [1][2][3], we must identify the risk factors for adverse mental health among HCPs and the mechanisms to support them under pandemic conditions [4].
Factors such as age, working in junior positions, being parents of dependent children and having an infected family member are associated with poorer mental health in HCPs during pandemics [5]. It is debatable whether all HCPs are similarly affected: a few studies have shown that physicians (doctors) were more affected than those in the nursing profession [6][7][8], whereas other studies have shown contrasting results [9][10][11][12][13][14][15]. The inconsistency in study findings can be due to small studies, single time-point assessment, narrow focus or comparing selective roles [6][7][8][9][10][11][15]. Additionally, existing studies generally lack reporting of the potential drivers of this differential impact, if any. Moreover, relatively underexamined roles such as allied health professionals (AHPs) and healthcare assistants (HCAs) must be investigated to inform role-tailored mental health interventions.
Addressing these shortcomings, the primary objective of this cohort study (part of the COVID-19 Disease and Physical and Emotional Wellbeing of Healthcare Professionals project; CoPE-HCP) [16] was to examine the relationship between different healthcare roles and various mental health outcomes at two distinct periods during the COVID-19 pandemic. Supplementing this, we examined the relationship between different healthcare roles and the change in mental health and burnout scores over the 4-month study period. Our secondary objective was to examine the relationship between healthcare roles and specific COVID-19-related worries, as a possible driver. We hypothesized a differential impact on mental health across healthcare roles and that these differences would increase over time.
METHODS
The project protocol has been published containing details of the project methodology [16] (also available on clinicaltrials.gov; NCT04433260). The study was approved by the Cambridge East Research Ethics Committee (20/EE/0166). Informed (digital) consent was obtained from all participants.
Participant recruitment was facilitated via open invitation using internally controlled e-mail lists of participating NHS institutions, and of Queen Mary University of London for the control sample (wider project).
This study involved a series of online surveys. The baseline survey was conducted between July and September 2020, which corresponded to the trough of the first wave of COVID-19 and the easing of the first UK lockdown.
The baseline survey gathered demographic information, current mental health and physical health diagnosis (multiple-choice closed-ended item or stating 'other') and domains including professional role, work environment and COVID-19-related worries: worry about their own health, risk and inadequate PPE, risk of family catching COVID-19, inadequate training to deal with COVID-19-related tasks, inadequate supervision and redeployment. The worry items were rated from 1 to 5: 'never', 'rarely', 'sometimes', 'often' and 'always'.
At the end of the baseline survey, participants were asked if they consented to receiving follow-up surveys. The follow-up survey was conducted 4 months later (December 2020) and corresponded with the second UK national lockdown due to the rise in COVID-19 cases during the winter of 2020. The follow-up survey contained the same mental health assessments.
Statistical analyses were conducted using STATA v17.0. Of 2100 available records, only self-identified HCPs were included and categorized into four groups: doctors (regardless of training level: senior doctor, higher specialist trainee, clinical fellow, core trainee or foundation doctor), HCAs (including phlebotomists, porters, cleaners), nurses (including midwives) and AHPs (pharmacists, occupational therapists, physiotherapists, radiographers). Descriptive statistics for sociodemographic characteristics were presented for each role as the number within each group and percentage. For follow-up, only participants who had provided a baseline survey were included.
Outcome-based analysis was performed on a complete case basis, excluding subjects from a particular analysis if they had not responded to items relevant to the primary outcomes. Missingness for primary outcomes was minimal (<10%).
We performed separate logistic regression analyses for each outcome (at baseline and follow-up) between the four HCP roles. Crude and adjusted odds ratios (ORs; with 95% confidence intervals [CIs] and P-values for the global trend across roles) were calculated with doctors as the reference group. Models were adjusted for a priori confounders: age, gender, time since COVID peak, highest level of education, relationship status, number living in household, current diagnosis of mental health condition, current diagnosis of physical health condition, and full-time/part-time working status.
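As an illustration, a minimal sketch of this kind of model in Python/statsmodels is shown below; the file and column names (role, mdd, age and so on) are hypothetical placeholders rather than the study's actual variable names, and the covariate list is abbreviated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cope_hcp_baseline.csv")  # hypothetical file; mdd coded 0/1

# Doctors as the reference category; covariates mirror the a priori confounders.
model = smf.logit(
    "mdd ~ C(role, Treatment(reference='doctor')) + age + C(gender)"
    " + C(education) + C(relationship_status) + household_n"
    " + C(mh_diagnosis) + C(ph_diagnosis) + C(fulltime)",
    data=df,
).fit()

ors = np.exp(model.params).rename("OR")  # odds ratios
ci = np.exp(model.conf_int())            # 95% CIs on the OR scale
ci.columns = ["2.5%", "97.5%"]
print(pd.concat([ors, ci], axis=1))
```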
Due to potential bias at follow-up, we conducted chi-square analyses on demographic characteristics and mental health in baseline-only participants compared to cohort participants.
Separate linear regression analyses were performed to relate professional role to the change in PHQ-9, GAD-7, ISI, SWEMWBS and combined burnout domain scores from baseline to follow-up. The change in scores was calculated by subtracting the baseline score from the follow-up score for each participant. Crude and adjusted coefficients (with 95% CIs and P-values for global trend) were calculated with doctors as the reference group. Models were adjusted as above.
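The change-score models can be sketched in the same way; here follow-up minus baseline GAD-7 scores are regressed on role, again with hypothetical file and column names and an abbreviated covariate set.

```python
import pandas as pd
import statsmodels.formula.api as smf

baseline = pd.read_csv("baseline.csv")  # hypothetical files with participant_id,
followup = pd.read_csv("followup.csv")  # role, age, gender and gad7 columns

cohort = baseline.merge(followup, on="participant_id", suffixes=("_t0", "_t1"))
cohort["gad7_change"] = cohort["gad7_t1"] - cohort["gad7_t0"]

fit = smf.ols(
    "gad7_change ~ C(role_t0, Treatment(reference='doctor')) + age_t0 + C(gender_t0)",
    data=cohort,
).fit()
print(fit.params)      # coefficients relative to doctors
print(fit.conf_int())  # 95% confidence intervals
```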
Separate logistic regression analyses were then performed to relate COVID-related worries (assessed at baseline) to each HCP role. Each response was dichotomized into 'worried' ('always', 'often' and 'sometimes') or 'not worried' ('rarely' and 'never'). Crude and adjusted ORs (with 95% CIs and P-values) were calculated with doctors as the reference group. Models were adjusted as above.
As a crude indicator of COVID-19-related worries as a potential driver, a Pearson's correlation analysis was conducted relating COVID-19-related worries to raw mental health scores at baseline and follow-up. Listwise deletion was used for the correlation analysis.
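The dichotomization and the listwise-deletion correlation can be illustrated as follows, assuming the worry items are stored as numeric 1-5 scores under hypothetical column names.

```python
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("cope_hcp_baseline.csv")  # hypothetical columns below

# Dichotomize a 1-5 worry item: 3 ('sometimes') and above counts as 'worried'.
df["worry_ppe_bin"] = (df["worry_ppe"] >= 3).astype(int)

# Pearson correlation with listwise deletion: drop rows missing either variable.
pair = df[["worry_ppe", "phq9"]].dropna()
r, p = pearsonr(pair["worry_ppe"], pair["phq9"])
print(f"r = {r:.2f}, p = {p:.3g}")
```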
RESULTS
The baseline surveys were received between 24 July 2020 and 2 October 2020 (51% were received by 1 September 2020). The follow-up surveys were received between 22 January 2021 and 13 March 2021 (55% were received by 28 January 2021).
At baseline (n = 1537; Table 2), the rates of probable mental health issues were considerable: 25% had probable MDD, 20% had GAD and 16% had clinical insomnia. Regarding well-being, 25% had possible depression/anxiety. Regarding burnout, emotional exhaustion and depersonalization were apparent in 42% and 13% of respondents, respectively.
Chi-square analysis indicated no significant differences in mental health of follow-up participants (n = 736) compared to baseline-only participants (n = 801). Demographic characteristics were balanced, except for ethnicity (P < 0.01), gender identity (P < 0.01) and number living in the household (P < 0.01): the follow-up sample had larger proportions of female and White participants, and fewer people living in the household compared to baseline-only participants.
Table 4 reports the change in mental health and burnout scores over the study period in the cohort (n = 736). Compared to doctors, nurses had a significantly increased change in GAD-7 (1.36 [0.46 to 2.26]; P < 0.01) and combined emotional exhaustion and depersonalization scores (1.15 [0.59 to 1.71]; P < 0.001) over the study period. Nurses and midwives also had a significantly reduced change in mental well-being scores (−1.17 [−1.94 to −0.40]; P < 0.01) over the study period. No significant associations were observed for other roles. Supplementary Table 4 (available as Supplementary data at Occupational Medicine Online) displays mean values for change in PHQ-9, GAD-7, ISI, SWEMWBS and combined emotional exhaustion and depersonalization scores, stratified by roles.
Significant associations were observed between COVID-19-related worry items and raw mental health scores at baseline (all coefficients P < 0.01) (n = 1386; Supplementary Table 5, available as Supplementary data at Occupational Medicine Online), and between each baseline worry item and follow-up raw mental health scores (n = 685; all coefficients P < 0.01 except for between depersonalization and worry about redeployment, P < 0.05).
DISCUSSION
Despite the COVID-19 pandemic having a huge impact on the mental health of both HCPs and non-HCPs (albeit with disproportionately higher burnout amongst HCPs) [22], our study demonstrates mental health and burnout disparities across different HCP roles. Compared to doctors, nurses were 1.9-fold and 2.5-fold more likely to have probable MDD and clinical insomnia, respectively, at baseline. Similarly, AHPs were 1.7 times and 1.4 times more likely to have probable MDD and emotional exhaustion (burnout), respectively. In contrast, compared to doctors, HCAs were 64% less likely to have depersonalization (burnout). These findings were consistent 4 months later. The follow-up analysis also showed an increase in the risk difference and strength of association: compared to doctors, nurses were 87% more likely to have probable MDD, 3.7 times more likely to have clinical insomnia, 2-fold more likely to have GAD, 68% more likely to have low mental well-being and 66% and 95% more likely to have emotional exhaustion and depersonalization, respectively. The significant increased risk of probable GAD, burnout and low mental well-being in nurses observed at follow-up (but not baseline) was supported by our linear regression models: nurses had significantly worsened GAD-7, SWEMWBS and burnout symptoms over time, relative to doctors. These findings highlight that the mental health impact of the COVID-19 pandemic disproportionately affects nurses on multiple domains, relative to doctors. Furthermore, nurses had greater risk of worry regarding several work-related and COVID-19-related aspects, relative to doctors, which may explain the disparities in mental health and indicates that support strategies must be modified for specific roles to mitigate the mental health and burnout impact on HCPs. For example, providing adequate training and supervision, and adequate staffing in all areas to mitigate redeployment worries, may help protect against adverse mental health in nurses and midwives during pandemics.
Note. There are differing levels of missing data for each outcome and for each model. Out of 736 subjects for whom we have been able to define a HCP role, and who had both a baseline and follow-up survey response:
• Major depressive disorder - 716 were included in all analyses: 254 were medical doctors, 105 were HCAs or other, 191 were nurses and midwives, 166 were AHPs.
• Generalised anxiety disorder - 715 were included in all analyses: 254 were medical doctors, 104 were HCAs or other, 191 were nurses and midwives, 166 were AHPs.
• Clinical insomnia - 714 were included in all analyses: 253 were medical doctors, 104 were HCAs or other, 191 were nurses and midwives, 166 were AHPs.
• Possible or probable depression or anxiety (SWEMWBS) - 712 were included in all analyses: 252 were medical doctors, 104 were HCAs or other, 190 were nurses and midwives, 166 were AHPs.
• Burnout (emotional exhaustion) - 709 were included in all analyses: 252 were medical doctors, 102 were HCAs or other, 189 were nurses and midwives, 166 were AHPs.
These findings could be explained by differential susceptibility (e.g. varying levels of resilience [23][24][25][26]) to adverse mental health between roles. However, a Japanese study found that, while nurses were more likely than doctors to have MDD, there were no significant differences between occupations on resilience measures, despite resilience scores predicting MDD rates across the total sample [27]. We speculate that specific workplace factors relevant to the role are a more likely explanation (i.e. differential exposures to health hazards: the time spent in patient-facing roles and exposure to COVID-19) [28]. Nurses often spend more time in direct patient-facing roles, with tasks involving greater proximity to patients and emergency care, relative to doctors. Indeed, we observed more nurses testing positive for COVID-19 and missing more days of work, relative to doctors, which could indicate increased exposure to COVID-19 in patient-facing settings.
While AHPs might not spend as much time in a patient-facing setting relative to HCAs and nurses, we observed that AHPs were at significantly higher risk of MDD than doctors at both phases. Similar observations have been made previously [9]. AHPs may have had increased time facing patients compared to before the pandemic, but may experience additive stressors such as medication shortages (in the case of pharmacists) and triage which are not normally encountered [29]. Therefore, tailored support measures should be implemented for AHPs where their responsibilities drastically change compared to before the pandemic.
Regarding strengths, this is amongst the first cohort studies to evaluate the risk of adverse mental health between HCP roles during the pandemic. To our knowledge, just one study compared the mental health impact between HCP roles at two separate points during the pandemic, but the assessment of mental health was relatively narrow [14]. Drawing on this, a second strength is our inclusion of relatively underexamined issues such as burnout and insomnia. Finally, a unique aspect is that we evaluated the potential underlying cause of the differential mental health impact and found important workplace- and worry-related differences between roles. However, there are limitations. First, while the validated mental health assessments are appropriate for large samples, these are less accurate than face-to-face psychiatric assessment. Secondly, there are comparisons within groups which were not analysed: within AHPs, we do not distinguish between occupational therapists and pharmacists, which might reduce the specificity of any interventions/policy changes made (similar assertions can be made when comparing nurses to midwives). One could argue that senior doctors (e.g. consultants) and other doctors should not be combined due to differences in functions and powers, but no significant differences between these two groups were observed regarding mental health (Supplementary Tables 6a and 6b, available as Supplementary data at Occupational Medicine Online). A useful avenue of inquiry would also be comparing between departments (e.g. oncology versus intensive care units). Third, we did not assess domains such as moral injury [30] which may have provided further insight into the disproportionate mental health impact.
Fourth, we cannot rule out that our participants, especially those who remained at follow-up, were more likely to exhibit mental health issues than non-participants, although we observed no major differences in profile between those who dropped out and those who remained at follow-up. That said, follow-up participants may be less likely to exhibit mental health issues due to the 'healthy worker' effect. Finally, the pandemic conditions changed rapidly during 2021 and 2022, and there are additional stressors such as the cost-of-living and health service budgeting crises, as well as new exposures (e.g. Mpox/Monkeypox), which might disproportionately impact different roles. An assessment of job attrition and a re-assessment of mental health between roles would be valuable in this context.
Overall, this study demonstrates that, on multiple domains, the mental health and burnout of nurses during the COVID-19 pandemic are more adversely impacted than in doctors. By follow-up, nurses were more likely to have probable MDD, GAD, clinical insomnia, low mental well-being and burnout. AHPs may also be at increased risk of probable MDD compared to doctors, which was sustained across the study period. These findings may help in the prioritization and tailoring of well-being interventions for specific healthcare roles to mitigate the differential mental health impact of the COVID-19 pandemic.
Table 1. Outcome variables and respective assessment tools
* Adjusted for age, gender, time since COVID peak, highest level of education, relationship status, number living in household, current diagnosis of mental health condition, current diagnosis of physical health condition, and full-time/part-time working status. P values are for global trend across all roles relative to medical doctors. There are differing levels of missing data for each outcome, and for each model. Out of 1537 subjects for whom we have been able to define a HCW role:
• Major depressive disorder - 1434 were included in all analyses: 601 were medical doctors, 204 were HCAs or other, 348 were nurses and midwives, 281 were AHPs.
• Generalised anxiety disorder - 1429 were included in all analyses: 597 were medical doctors, 204 were HCAs or other, 347 were nurses and midwives, 281 were AHPs.
• Clinical insomnia - 1418 were included in all analyses: 595 were medical doctors, 201 were HCAs or other, 245 were nurses and midwives, 277 were AHPs.
• Possible or probable depression or anxiety (SWEMWBS) - 1393 were included in all analyses: 584 were medical doctors, 198 were HCAs or other, 338 were nurses and midwives, 273 were AHPs.
* Adjusted for age, gender, time since COVID peak, highest level of education, relationship status, number living in household, current diagnosis of mental health condition, current diagnosis of physical health condition, and part-time/full-time working status. P values are for global trend across all roles relative to medical doctors.
Table 4. Separate linear regressions for the association between change in mental health, wellbeing, and burnout scores from baseline to follow-up and professional roles (medical doctors as reference group).
Note. Crude and adjusted coefficients provided. * Adjusted for age, gender, time since COVID peak, highest level of education, relationship status, number living in household, current diagnosis of mental health condition, current diagnosis of physical health condition, and part-time/full-time working status.
Table 5. COVID-related worries by HCP role with medical doctors as the reference group (n = 1403).
Management of Lowe syndrome: a case report
Lowe syndrome (the oculocerebrorenal syndrome of Lowe, OCRL) is a multisystem disorder characterized by anomalies affecting the eyes, nervous system and kidneys. [1][2][3] The disorder was first recognized by Lowe et al. in 1952, and described as a unique syndrome with organic aciduria, decreased renal ammonia production, hydrophthalmos, and mental retardation. In 1954, renal Fanconi syndrome was recognized as being associated with Lowe syndrome and in 1965, a recessive X-linked pattern of inheritance was determined. 2,4 Lowe syndrome is a very rare disease, with an estimated prevalence in the general population of 1 in 500,000. According to the Lowe Syndrome Association (LSA) in the USA, the estimated prevalence is between 1 and 10 affected males in 1,000,000 people, with 190 living in the year 2000. The Italian Association of Lowe Syndrome estimated that there were 34 Lowe syndrome patients (33 boys and one girl) living in Italy in the year 2005. 2,4,5 It almost exclusively affects males. 6 Physicians may not be familiar with Lowe syndrome due to its rarity. 4 The disease is caused by pathogenic DNA variations in the OCRL1 gene on chromosome Xq24-26, which encodes phosphatidyl inositol polyphosphate 5-phosphatase. 4,7 More than 100 pathogenic DNA variations leading to Lowe syndrome have been described, of which more than 90% are located in two hot spots (exons 10-18 and 19-23) in the OCRL1 gene. 4 The pathogenesis of Lowe syndrome due to deficiency of a phosphatidyl inositol 4,5-bisphosphate 5-phosphatase in the Golgi complex is unknown. 5,8,9 Clinical and laboratory studies eventually lead to the correct clinical diagnosis, which can be confirmed by molecular studies of the OCRL1 gene located on chromosome Xq24-26. 4,6 The diagnostic triad of Lowe syndrome includes eye anomalies (congenital cataracts and infantile glaucoma) resulting in impaired vision, neurological deficits (infantile hypotonia with subsequent mental impairment and the absence of deep tendon reflexes), and renal tubular dysfunction of the Fanconi type with glomerulosclerosis, resulting in progressive chronic renal failure and end-stage renal disease. 2,4,8 Cataracts should be removed early in order to avoid amblyopia. Early targeted rehabilitation therapy is necessary to treat hypotonia. Renal tubular acidosis must be recognized and treated promptly with alkali supplements. These supplements should include citrates (sodium and/or potassium citrate) and sodium bicarbonate in various doses and combinations, to maintain serum bicarbonate levels. 2 The longest reported survival was that of a 54-year-old patient. Quality of life depends on the extent of the mental and renal manifestations. Some patients may enjoy a discrete social life and assisted working capacity. 2 Early diagnosis and treatment of metabolic disturbances may delay morbidity and mortality in this syndrome. 4
The Case
An 8-year-old boy with aphakia was scheduled for surgical removal of posterior capsule opacification (PCO). In May 2013, he was referred from the Department of Ophthalmology to the Department of Child Health for the management of anemia. He was the only child of non-consanguineous parents. The mother had no history of any illness, and did not take drugs or herbal medications during pregnancy. He was delivered vaginally, cried spontaneously, and had full term gestational age with a birth weight of 3,900 grams. No icterus, dyspnea or cyanosis were noted. He was well until the age of 4 months when white patches in both eyes were noted. He was brought to a local ophthalmologist, then referred to our hospital. He was diagnosed with congenital cataracts, cheloids and nystagmus in both eyes and underwent surgery to remove the cataracts. However, he was not followed up routinely due to parents' financial difficulties.
The PCO recurred at the age of 5 years, but the parents were only able to bring him to the hospital 2 weeks prior to the referral to our department. The ophthalmologist scheduled another surgery. However, severe anemia was found during the pre-surgical work-up, hence, the patient was transferred to the Department of Child Health at the age of 8 years. At the Pediatric Hematology Ward, the anemia was established to be caused by chronic disease and he received a series of packed red blood cell transfusions. Persistent hypokalemia and hyponatremia were noted and further workup showed that he also had polyuria and polydipsia. His milk feeding was increased to more than 1,500 mL per day. He was subsequently referred to the Pediatric Nephrology Division for further management.
His basic immunizations were complete. His developmental milestones were delayed, as he was unable to sit down or talk at the time of presentation. The family history was unremarkable.
On admission to the Pediatric Nephrology Ward, he was alert, but weak. As his height was 79 cm (less than 3rd centile) and his weight was 9 kg (less than 3rd centile), he was considered to have growth failure (Figure 1). His head circumference was 44 cm (less than -2 SD) (Figure 2), so he was considered to have microcephaly. He looked pale, but was not tachypneic or tachycardic. His blood pressure (BP) was 110/70 mmHg (the 95th centile BP for his age and height was 111/75 mmHg), respiratory rate 26 breaths/minute, pulse 110 beats/minute and axillary temperature 36.5°C. He had frontal bossing, deep-set eyes and chubby cheeks. Caries and poor dentition were also seen. Neuro-ocular examination revealed nystagmus in both eyes with posterior capsule opacification and cheloids. Chest examination revealed rachitic rosary with Harrison's groove. His lungs were clear. The abdomen was flat and no organomegaly was palpated. Bilateral pedal edema was not noted. Neurological examination showed hypotonia, floppy limbs, drop foot, and areflexia, with a 360-degree range of movement in all extremities and athetoid involuntary movements seen occasionally. His urine output was 9-10 mL/kg/hour. Laboratory studies during his hospitalization revealed anemia with a hemoglobin level of 5.05 g/dL, hematocrit 17%, leukocyte count 6,800/µL, and platelet count 147,000/µL. Serum electrolyte measurements revealed hypokalemia, hyponatremia, and hypophosphatemia, with sodium 118 mmol/L, potassium 2.2 mmol/L, calcium 8.4 mg/dL, chloride 100 mmol/L and phosphate 2.3 mg/dL. Repeated hyponatremia and hypokalemia findings occurred during hospitalization. Serum alkaline phosphatase and aspartate aminotransferase were increased, at 189 IU/L and 87 IU/L, respectively. There was proteinuria, and blood gas analysis revealed metabolic acidosis with a bicarbonate level of 13 mmol/L. The glomerular filtration rate (GFR 58 mL/minute/1.73 m²) was decreased, with urea 25 mg/dL and creatinine 0.7 mg/dL. From all these data, we suspected that the patient suffered from Fanconi syndrome, which needed further work-up. There was no evidence of toxoplasma or rubella virus infections, as IgM and IgG levels for toxoplasma and rubella were negative. He also had normal thyroid function tests.
Chest x-ray revealed widened bilateral anterior costae with coarse bone trabeculation that could have been caused by anemia or rickets (Figure 3). His bone age was delayed, with features that seemed appropriate for a 6-month-old boy (Figure 4). The right kidney measured 9.26 cm and the left kidney 8.99 cm. Both kidneys appeared echogenic with loss of corticomedullary differentiation on renal ultrasound, assumed to represent parenchymal kidney disease (Figure 5). The 2D-echocardiogram revealed no left ventricular hypertrophy, with normal structure and good systolic function [left ventricular ejection fraction (LVEF) 75% and fractional shortening (FS) 42%]. Partial hypogenesis of the corpus callosum was seen on head CT-scan (Figure 6). A bone survey revealed brachycephaly and decreased skull bone density, bowing of the humerus and ulna bilaterally, widened distal metaphyses of the femur and proximal metaphyses of the tibia bilaterally, as well as generalized osteopenia (including pelvis, thoracolumbar spine, and lower extremities) (Figure 7).
Based on the history, clinical manifestations, laboratory and radiology data, the patient was diagnosed with Lowe syndrome with stage 3 chronic kidney disease (CKD). The planned surgery was cancelled due to his unstable serum electrolyte levels, following a discussion of the benefits and risks of surgery with the parents. He was then discharged with oral supplementation of bicarbonate and salt.
The patient did not return for routine follow-up due to financial constraints. On the last home visit on August 17, 2013, the patient was in stable condition without any medications (Figure 8).
Discussion
Based on the history, clinical manifestations and laboratory data, the patient was diagnosed as having Lowe syndrome, with oculocerebrorenal manifestations comprising congenital cataracts, cheloids and nystagmus (eye manifestations); areflexia, hypotonia of the extremities and developmental delay (nervous system manifestations); and kidney manifestations with stage 3 CKD due to Fanconi syndrome [polyuria, proximal renal tubular acidosis, proteinuria, hypokalemia, hypophosphatemia, anemia, growth failure, and rickets (dental caries, delayed bone age, generalized osteopenia, and rachitic rosary)].
Clinical and laboratory studies eventually lead to the correct clinical diagnosis, which can be confirmed by molecular studies of the OCRL1 gene located on chromosome Xq24-26. 4,6 Pre- and postnatal diagnoses are made by enzymatic and molecular analyses. Antenatal diagnosis is made by enzymatic activity in cultured chorionic villi at 9-11 weeks or in cultured amniotic fluid cells at 15-20 weeks. 10,11 An early diagnosis based only on clinical criteria can be difficult and may not be confirmed for several years, as the clinical features may be nonspecific or absent during the early stages. 11 Facial dysmorphism is often present and consists of frontal bossing, deep-set eyes, chubby cheeks and fair complexion. Dental findings include prolonged retention of primary teeth, a chronic subrachitic state, enlarged pulp chambers and mildly dysplastic dentin formation. 10 Eye, central nervous system and kidney involvement are required for the diagnosis of Lowe syndrome, including the following: 1) Cataracts are a hallmark of Lowe syndrome and are present at birth in all male patients. 4,12 Glaucoma (present in 50% of patients), with or without buphthalmos, is detected within the first year of life and sometimes even later. Sight sharpness is compromised by aphakia, and together with retinal dysfunction is responsible for nystagmus. Corneal and conjunctival cheloids (present in 25% of patients) further compromise the sight. 2,6,12 Female carriers of OCRL have no clinical symptoms but do show lens abnormalities in the form of tiny punctate opacities in the cortex visible by slit lamp examination; 5,12,13 2) Central nervous system: a serious or very severe hypotonia is present at birth, often with an absence of the deep tendon reflexes. Motor development is retarded and the autonomous gait generally becomes apparent after the third year. About 10% of patients show slight mental retardation. Mental retardation may be moderate or severe, with an intelligence quotient (IQ) of 50 or less. 2,5 Approximately 50% of the patients over 18 years of age have seizures, and up to 9% of patients present with febrile convulsions. Cranial magnetic resonance imaging (MRI) may show a light ventriculomegaly and multiple periventricular cystic lesions in a subset of the patients. No significant nerve and muscle pathologies are present and they are not useful for diagnosis. 2,12 Neuropathological examination of OCRL brains has been reported to range from completely normal to variously abnormal, with no specific pathologic findings; 5 3) Renal disease is primarily characterized by renal Fanconi syndrome. 5,14 The first symptoms generally develop during the first months of life, but the severity and age of onset vary and tend to worsen with age. Symptoms are generally related to renal bicarbonate, salt and water wasting, causing growth failure. Later, generally during the second decade of life, a significant number of patients develop chronic renal failure, which may lead to end-stage renal failure and require dialysis. 12,13,15
Symptoms related to the renal Fanconi syndrome include: 2,5,6,15-17 a) polyuria, polydipsia, dehydration, recurrent vomiting and growth failure; b) low molecular weight proteinuria, which appears to be present in all patients and may be helpful for perinatal diagnosis; c) proximal renal tubular acidosis; d) renal phosphate wasting, leading to the development of renal rickets, osteomalacia, and pathological fractures; e) hypercalciuria, leading to nephrocalcinosis and nephrolithiasis as a result of the Fanconi syndrome and vitamin D therapy; f) aminoaciduria and glycosuria; and g) hypokalemia.
Clinical diagnosis of Lowe syndrome should be confirmed by molecular studies of the OCRL1 gene, which, unfortunately, we were unable to do due to financial constraints. Lowe syndrome is an X-linked disorder that results from loss of function of the OCRL1 protein (mutations in the gene). 6,14,15 The Lowe syndrome gene was linked to markers in the Xq24-q26 region, and the locus DXS42 was the most closely linked marker. 6 OCRL1 is a lipid phosphatase that converts phosphatidylinositol 4,5-bisphosphate (PIP2) to phosphatidylinositol 4-phosphate. Mutations (deletions, frameshifts, and stop mutations, with a smaller fraction occurring as splicing and missense mutations) that cause Lowe syndrome result in the complete loss of OCRL1 catalytic activity. OCRL1 appears to be the major PIP2-hydrolyzing enzyme in human renal proximal tubule cells, and cells lacking OCRL1 have slightly elevated levels of PIP2. 15,18 This suggests that Lowe syndrome may represent an inborn error of inositol phosphate metabolism. 10 Phosphatidylinositol 4,5-bisphosphate is present in cultured skin fibroblasts, but not in peripheral blood cells. Phosphatidylinositol 4,5-bisphosphate activity is markedly reduced in fibroblasts from patients with Lowe syndrome. 12 Concentrations of the muscle enzymes creatine kinase, aspartate aminotransferase and lactate dehydrogenase, as well as of total serum protein, serum α2-globulin and high density lipoprotein cholesterol are elevated. Serum enzyme elevations suggest muscle involvement in the Lowe syndrome. 13 Phosphoinositides (PIs) are essential phospholipids that regulate a number of processes including intracellular signaling, transport of ions and metabolites across membranes, exo- and endocytosis, regulation of the actin cytoskeleton (cytoskeletal dynamics), transcriptional regulation and membrane traffic. 14 At birth, ocular involvement with bilateral cataracts and hypotonia may be found in congenital infection (rubella), peroxisomal disorders, mitochondropathies, myotonic dystrophies or congenital myopathies (muscle eye brain disease). The appearance of renal involvement excludes these alternative diagnoses within the first months of life. 10 Another possible diagnosis could be Dent's disease, a rare, X-linked, renal proximal tubulopathy, but without metabolic acidosis and ocular or brain involvement. 2,15,18 The management of our patient should include ophthalmic surgery for PCO, physical rehabilitation and supplements of bicarbonate, salt (potassium) and vitamin D. Unfortunately, we were unable to give vitamin D supplementation due to financial constraints. In addition, the surgery was cancelled due to the unstable serum electrolyte levels. Treatment of Lowe syndrome includes eye, nervous system and kidney management. Cataracts should be removed early in order to avoid amblyopia. The early use of eyeglasses or contact lenses improves visual function and, consequently, psychosocial skills. The ocular tone should be tested frequently in order to diagnose glaucoma early and to treat it either with anti-glaucoma medication, or goniotomy or trabeculotomy surgery. Conjunctival or corneal cheloids are difficult to treat. Surgical lens implantation is not recommended, and spectacles are preferable to contact lenses. 2,13 Early targeted rehabilitation therapy is necessary to treat hypotonia. 13 An adequate psychological, pedagogical and occupational program may increase learning capacity and prevent frequent and serious behavioral crises during adolescence.
Areflexia is a peculiar state, which does not require treatment. Seizures require treatment with drugs specific for the symptoms. The behavioral problems occurring during adolescence and the obsessive-compulsive disorder require specific competence on the part of the health staff. 2 Renal tubular acidosis must be recognized and treated promptly with alkali supplements. These include citrates (sodium and/or potassium citrate) and sodium bicarbonate in various doses and combinations, to maintain serum bicarbonate levels at around 20 mEq/L. Doses may vary between 1-8 mEq/kg/day, which should be divided into at least three separate doses. 12,13 Potassium citrate is particularly useful as it also helps to prevent nephrocalcinosis and tends to reduce renal calcium excretion. If polyuria is present, patients should receive supplementary fluid. Sodium intake should be adjusted according to the extent of renal salt loss. In infants and very young children, oral supplements should be promptly adjusted in case of diarrhea. Intravenous infusions may be needed. Rickets should be treated with oral phosphate supplements and vitamin D. Excessive amounts of vitamin D should be avoided as they may increase renal calcium excretion. Treatment should be targeted towards maintaining serum calcium and parathormone (PTH) levels within normal range and serum phosphate levels above 2-2.5 mg/dL. 2 Genetic counseling should be done for carrier detection and prenatal testing, as the mother has a 25% possibility of having an affected son and a 25% possibility of having a carrier daughter. Molecular analysis of the OCRL1 gene is a more specific way to diagnose female carriers if the mutation in the proband is known. Genetic mutations should be documented first in the proband. As many as 94% of female carriers may be detected during ophthalmologic examination by slit-lamp biomicroscopy. The typical findings are multiple punctate lenticular opacities or a single, dense posterior cataract. However, even if the mother has normal ocular findings, mutation analysis should still be done because some women are non-penetrant carriers and do not have the characteristic eye manifestations, albeit rarely. 4,11,12 The longest reported survival was that of a 54-year-old patient. Death usually occurs between the end of the second decade and the beginning of the fourth decade of life. The most common cause of death is renal tubulopathy, progressively evolving into renal insufficiency. Patients' quality of life depends on the extent of their mental and renal manifestations. Some patients may enjoy a discrete social life and assisted working activity. Early diagnosis and treatment of metabolic disturbances may delay the morbidity and mortality in this syndrome. 2,4,10,11
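Purely to make the dosing arithmetic above concrete (not as clinical guidance), the following toy calculation splits an illustrative weight-based alkali dose from the cited 1-8 mEq/kg/day range into three divided doses; the chosen rate is hypothetical.

```python
# Illustrative arithmetic only; dose selection is a clinical decision.
weight_kg = 9.0           # this patient's weight from the case
dose_per_kg = 3.0         # hypothetical value within the cited 1-8 mEq/kg/day
daily_meq = weight_kg * dose_per_kg
per_dose = daily_meq / 3  # divided into at least three doses, per the text
print(f"{daily_meq:.0f} mEq/day given as three doses of {per_dose:.0f} mEq")
```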
Factors Influencing Total Delay of Breast Cancer in Northeast of China
Objectives Delay in diagnosis and treatment, called total delay, can result in lower survival rates in breast cancer patients. This study aimed to investigate the factors associated with delay behaviors and to evaluate their effect on outcomes in patients with breast cancer in Dalian, a northeast city of China. Methods A retrospective chart review was conducted using a cancer registry dataset including 298 patients. Kaplan–Meier survival analysis was used to identify the threshold of total delay, dividing the patients into a group with significant delay and a group without substantial delay. Factors associated with significant total delay, such as income level and marital status, were investigated using the chi-squared test. The differences in clinicopathologic characteristics, such as tumor size and lymph node metastasis, between the patients grouped by significant total delay were also investigated to assess the effect of the total delay. Results A total of 238 charts were used for analysis. The mean age was 57.3 years. The median total delay was 3.75 months. Thirty days was identified as the threshold beyond which total delay can lead to worse survival. Patients' marital status (p = 0.010), income level (p = 0.003), smoking status (p = 0.031), initial visiting hospital level (p = 0.005), self-health care (p = 0.001), and self-concern about the initial symptom (p ≈ 0.000) were identified as independent predictors of the total delay. Metastasis (p ≈ 0.000) was identified as the clinicopathologic characteristic significantly related to significant total delay. Conclusions A total delay of more than 30 days predicts worse survival in breast cancer patients in Dalian. Several factors, such as patients' marital status and income level, can be considered relevant to significant total delay. We recommend that these factors be used in clinical practice to identify patients at risk of significant total delay.
INTRODUCTION
According to the National Central Cancer Registry of China, breast cancer is the dominant cause of cancer death in women younger than 45 years (1). Longer delays of diagnosis and initial therapy have been reported to result in cancer progression and poor survival (2,3). This research investigated the socioeconomic factors and the clinical consequences associated with the total delay of breast cancer in Chinese women. The total delay includes two parts: 1) patient delay denotes the interval between the patient's self-discovery of symptoms and the initial diagnosis; and 2) system delay denotes the interval between the initial diagnosis and the standard medical treatment. Patient delay is common in developing countries, and studies have reported that poor outcomes induced by patient delay are mainly due to low education and poor income status (4). For both patient delay and system delay, some questions remain open, such as the length of delay that is clinically meaningful and the associated factors that could be used to predict the delayed cohort. Several works have investigated the factors associated with delay in breast cancer patients, and these factors were diverse. Studies have shown that a lack of knowledge about breast cancer symptoms and screening methods is an essential factor in delaying diagnosis and seeking medical attention (5). Individual-environmental-social factors, like lower socioeconomic status and recent immigration, were likely to delay seeking medical help (6,7). A study in South Africa found that most patients who delayed seeking help blamed poor transportation or treatment that interfered with work, dating, or even marriage (8). A cross-sectional study from China linked patient delay to perceived health competence (9). As shown by Khakbazan (10), some of the factors associated with the delay identified previously could not be generalized for different regions and races. For this reason, we collected clinical data to infer the length of total delay associated with mortality, and we investigated the key factors influencing the total delay of breast cancer in women in the northeast of China.
MATERIALS AND METHODS
This cross-sectional study was conducted using data from patients diagnosed with breast cancer at the Second Hospital of Dalian Medical University between January 1, 2012, and December 31, 2012. Pathologists confirmed the diagnosis of breast cancer after surgery or core needle biopsy. Patients who did not complete standard adjuvant therapy were excluded. Patients with breast cancer were interviewed at the department of breast surgery after obtaining agreement from each patient. We collected information from the medical records of these patients. The follow-up questionnaire was conducted by phone call in the second year after diagnosis. All of the interviewers were previously trained residents not involved in the clinical management of the patients. We also excluded patients with incomplete follow-up information. The record collection included the following medical factors: initial symptom, family history of cancer, tumor molecular subtype, TNM stage, and metastasis. It also included sociodemographic factors: age at presentation, marital status, marriage bonds, education level, residence, attitude to help-seeking, smoking habit, alcohol drinking patterns, insurance types, level of first visiting hospitals, occupation status, and self-health care. The results were analyzed using SPSS (version 25.0) and R (version 3.5.1). We defined the significant total delay as the minimum delay leading to poor survival. Kaplan-Meier survival analysis was used to identify the threshold for the significant total delay. A hypothesis test was used to distinguish the factors associated with the delay. Then, the label "1" was assigned to the patients with significant total delay and "0" to the patients without significant total delay. A multivariate logistic regression model was constructed on these labels to verify the factors identified by the hypothesis test.
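A minimal sketch of the threshold search, using Python/lifelines: candidate cutoffs are scanned in 10-day steps and the two resulting survival curves are compared with a log-rank test. The file and column names are hypothetical placeholders.

```python
import pandas as pd
from lifelines.statistics import logrank_test

# Hypothetical columns: delay_days, surv_months, death (1 = died, 0 = censored).
df = pd.read_csv("breast_delay.csv")

for cutoff in range(10, 130, 10):
    late = df["delay_days"] > cutoff
    result = logrank_test(
        df.loc[late, "surv_months"], df.loc[~late, "surv_months"],
        event_observed_A=df.loc[late, "death"],
        event_observed_B=df.loc[~late, "death"],
    )
    print(f"cutoff {cutoff:>3} days: log-rank p = {result.p_value:.4f}")
```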
Study Population
A total of 296 charts were reviewed for this study. Fifty-seven charts did not meet the inclusion criteria, and 238 charts were used for analysis. The sociodemographic characteristics of the patients are illustrated in Table 1. The mean age of the patients was 57.3 ± 12.1 years. One hundred twenty-four (52.1%) patients resided in urban areas. One hundred two (42.4%) patients were single, where single included unmarried, divorced, and widowed. One hundred four (43.5%) patients were part-time employed, and 91 patients (38.1%) were full-time employed. One hundred sixty-four (62.8%) patients had an education level of high school or above. Regarding insurance status, 213 (89.1%) patients had Medicare or Medicaid. One hundred eighteen (49.4%) patients had no or low income. Fifty-five patients (22.2%) initially visited small local clinics. Ninety (37.7%) patients were self-concerned about their initial symptoms. Forty-six (19.2%) patients were smokers. Only 2 (0.8%) patients had an alcohol drinking habit. Forty-two (17.6%) patients conducted self-health care after discovering symptoms, like breast massage and taking traditional Chinese medicine. The medical history of the patients is listed in Table 2. The median total delay was 3 months (0.1-12). Thirty-one (13.0%) patients died. Two hundred seven (87.0%) patients' initial symptom was a lump, six (2.5%) patients' initial symptom was nipple changes, and fifteen (6.3%) patients' initial symptom was breast pain. Two hundred four (85.7%) patients had no family history of cancer. Metastasis occurred in 75 (31.5%) patients. Regarding the pathological types, triple-negative, HER2-enriched, Luminal A, and HR+HER2+ were diagnosed in 41 (17.2%), 12 (5.0%), 115 (48.1%), and 58 (24.3%) patients, respectively. The mean tumor size was 2.7 ± 1.8 cm. The detailed population of the TNM classification can also be found in Table 2.
Total Delay of 30 Days Affecting Survival
We conducted Kaplan-Meier survival analysis to investigate how long a total delay must be to affect the survival of patients. To test for a significant difference in survival, we divided the patients into two groups by a threshold value of total delay. The threshold value was initially set at 10 days and was sequentially increased in 10-day increments. We found that a total delay of 30 days led to significantly different survival curves (Figure 1). Therefore, we divided the patients into two groups: one group included the patients with total delays of less than 30 days, and the other group included the patients with total delays of more than 30 days.
Significance Test for Grouped Patients
We also conducted the chi-squared test on the nominal variables to identify the significantly different factors between the grouped patients. Marital status (p = 0.02), income level (p = 0.003), smoking status (p = 0.03), insurance (p = 0.03), initial visiting hospital level (p = 0.005), self-health care (p = 0.01), self-concern about the initial symptom (p ≈ 0), and metastasis (p ≈ 0) were significantly different.
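These group comparisons can be sketched with a standard contingency-table test; the column names below are hypothetical.

```python
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("breast_delay.csv")  # delayed_30d: 1 if total delay > 30 days

table = pd.crosstab(df["delayed_30d"], df["marital_status"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```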
We consequently constructed a binomial logistic regression model using the aforementioned significantly different variables as covariates. The model achieved a percentage of correct classification of 80.7%, which implies that the significantly different factors can be used as features for classifying total delay.
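A sketch of this binomial model, with the "percentage correct" reproduced as in-sample classification accuracy at a 0.5 probability threshold; the exact covariate coding of the original analysis is not specified in the text, so the names below are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("breast_delay.csv")  # hypothetical column names throughout

fit = smf.logit(
    "delayed_30d ~ C(marital_status) + C(income) + C(smoking) + C(insurance)"
    " + C(first_hospital) + C(self_care) + C(self_concern)",
    data=df,
).fit()

# In-sample 'percentage correct': predicted probability thresholded at 0.5.
pred = (fit.predict(df) >= 0.5).astype(int)
accuracy = (pred == df["delayed_30d"]).mean()
print(f"percentage correct: {accuracy:.1%}")
```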
DISCUSSION
Our study first found that a total delay of 30 days can lead to significantly different survival rates. Studies conducted in other countries and areas indicated that the diagnosis delay of cancers ranged from 2 to 15.2 weeks (11)(12)(13)(14)(15)(16). Notably, Ramirez evaluated 87 studies and suggested that a delay of 3 months could impact the long-term survival of breast cancer patients (11). Compared with this result, the delay in our investigated area was more pressing for the patients.
Various factors can prompt breast cancer patients to ignore their problems and delay medical treatment. Individual-environmental-social factors have been associated with the delay (17)(18)(19). With a view to the further application of the investigation results, we examined individual-environmental-social factors including education level, occupation, income level, place of residence, particular dietary habit, smoking habit, alcohol drinking habit, insurance, marital status, first visiting hospital, first consulted person, and self-health care.
According to our results, marital status was identified as an essential factor not reported by other works. In our investigated area, patients with no life partner tended to delay medical diagnosis and treatment. Smoking was also a relevant factor: 34 of 46 smoking patients delayed seeking help. The smoking rate in our investigated cohort was 19.5%, much higher than the average level in China (the current smoking rate is 0.6%, and the ever-smoked rate is 3.4% among Chinese women) (20). Thus, smoking status should be considered an important factor in our investigated area.
Our results also indicated that patients with a low income level were often associated with longer delays. This reflects the situation in other developing countries; e.g., Maghous and Fedewa confirmed that low socioeconomic status appears to have a negative impact (4,21).
We found that self-health care was a novel factor. Of 42 patients conducting self-health care, like having massages and acupuncture, 35 had a longer delay. We suggest that self-health care could strengthen patients' self-confidence in their health status and make them less mindful of their symptoms. Atypical presenting symptoms of breast cancer lead to longer diagnostic intervals. To test this theory, we included the nature of the breast masses and the symptoms of breast disease. Although the results were not statistically significant, we believe that the study's emphasis on a higher likelihood of delayed treatment for breast cancer patients without tumor symptoms is a useful reference (22). Understanding of and attention to initial symptoms, and monitoring and managing symptoms, have been the first and most crucial steps in the help-seeking process after symptom discovery (23,24). In our study, the initial visiting hospital level and the patient's self-concern about the initial symptom were identified as important factors, which are considered to affect initial symptom interpretation and monitoring. The initial visiting hospital level was also identified as an essential factor associated with the delays. Forty of 52 patients who initially visited a small local clinic had delays. Women's trust in physicians' professionalism was identified to affect patients' help-seeking behavior (19,25). This implied that the small clinics around Dalian city often could not offer proper initial symptom interpretation and monitoring. A total of 122 of 148 patients without self-concern about initial symptoms were identified as having delays. The reasons for negative attitudes toward the symptoms could be diverse. Economic status was the limiting factor for some patients. Some patients obtained a wrong symptom interpretation. The embarrassment of breast examination derived from traditional attitudes could be a barrier to receiving care in Chinese middle-aged women (10).
Khokher stated that some of the factors associated with the delay identified previously could not be generalized for different races and regions (2). For example, Bleicher and Polverini reported that African American race was associated with delays in diagnosis and treatment (26,27). However, the African American race often did not make up a higher percentage of Medicaid beneficiaries (28). Thus, rather than race itself, the economic difference between races can be considered a factor for the delay. However, such a difference was not significant in Dalian, and as a result, ethnicity was not considered a potential factor.
To investigate what the delay leads to, we also analyzed the difference between the delay and non-delay patients for the disease-associated factors, including the age of onset, initial symptom, family history of tumor, TNM stage, and molecular subtype. We found that 60 of 75 patients with metastasis had delays. This indicated that the identified delay of 30 days could lead to advanced breast cancer.
As with other data analysis studies, this one has limitations. The data used in this research were collected from one large comprehensive hospital, which covered a quarter of the patients in Dalian city. Our sample was not nationally representative due to the inclusion of patients seeking care at a single medical center. We did not group the negative attitudes toward the symptoms, which could be directly related to other factors, like income level or self-health care. However, a similar study in Guangzhou, China, supports our research and shows that premenopausal status, breast disease history, and delayed physical examination affect the timing of patients' visits (29).
Some studies show that the problem of delayed treatment is not always critical and that there is an ultimate delay time within which delayed treatment seems to be tolerated. Optimal times from diagnosis are <90 days for surgery, <120 days for chemotherapy, and, where chemotherapy is administered, <365 days for radiotherapy (30). The worldwide panic caused by COVID-19, the complication of medical procedures, and the difficulty of obtaining medical treatment for middle-aged and elderly patients have also greatly affected patients' willingness to seek medical treatment. However, as the influence of this period was not included in this study, no further details can be given (31). In the future, we will keep collecting data to test and justify the statistical inference. We will propose an efficient method for predicting patients' delay status based on the identified factors. With this prediction method, we ultimately want to optimize the help-seeking behavior of patients to shorten the delay.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding authors.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Second Affiliated Hospital of Dalian Medical University. The patients/participants provided their written informed consent to participate in this study.
Enhanced Data Security of Communication System Using Combined Encryption and Steganography
Data security has become a paramount necessity and obligation in daily life. Most of our systems can be hacked, which poses very high risks to the confidential files inside them. Therefore, for various security reasons, we use various methods to protect this data as much as possible, regardless of its form: texts, pictures, videos, etc. In this paper, we mainly rely on storing the basic image, which should be protected, in another image after converting it into coefficient components using the Discrete Wavelet Transform (DWT). A technique of zeroing locations and storing their contents is used to carry the components of the main image, which are then processed mathematically using the exponential function. The result of this process is a fully encrypted image. The image that must be protected from detection and discrimination is hidden behind the encrypted image. The proposed system contains two algorithms: the first is used for encoding and hiding, while the second is designed to return and decode the main image to its original state very efficiently.
Introduction
Image encryption strategies are broadly utilized to overcome the problem of safely transferring both images and messages via electronic transfer media using classical cryptographic processes [1][2][3]. However, the main problem with this approach is that its use remains limited for huge amounts of data or high-resolution images [4,5]. The process of hiding the main image is completed after removing the most important part of the data in the embedded image; this data is saved because it serves as a decryption key. The fundamental aim of this paper is to hide and encrypt a full image inside another one [6]. First, the images are analyzed using the wavelet transform, where each image passes through levels of the Discrete Wavelet Transform (DWT). This process produces four coefficient sub-bands: approximation (cA), horizontal (cH), vertical (cV), and diagonal (cD). Then comes the process of clearing enough space to embed the components of the target image into the components of the embedding image. To make the resulting images more secret, the exponential function is applied. Decryption is mainly based on returning the previously discarded values to their original positions in the images, then applying the Inverse Discrete Wavelet Transform (IDWT) to recover the original data. The main objective of this strategy is to hide 2-D and 3-D images inside other images to produce a single encrypted image with high efficiency.
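As a minimal illustration of this decomposition step, the sketch below (our own example, assuming the PyWavelets library; the 'haar' wavelet and the random test image are illustrative choices, not taken from the paper) shows how a one-level 2-D DWT splits an image into the four sub-bands and how the IDWT recovers it:

```python
import numpy as np
import pywt

# Illustrative test image; the paper's cover images would be loaded here.
img = np.random.rand(256, 256)

# One level of 2-D DWT: approximation (cA) plus horizontal (cH),
# vertical (cV) and diagonal (cD) detail coefficients.
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')

# ... the paper's method would zero selected coefficient locations here,
# save their contents as a key, and embed the secret image's components ...

# The inverse 2-D DWT reconstructs the image from the four sub-bands.
restored = pywt.idwt2((cA, (cH, cV, cD)), 'haar')
assert np.allclose(img, restored)
```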
Literature review
Some authors have presented a biometric verification system that uses two individual biometric features combined by watermark embedding with secret PIN encryption to obtain a non-unique ID for each person [7][8][9]. The converted features and models travel over the insecure Internet or intranet of the communication system in a client-server setting. In addition, researchers have suggested a method that combines encryption and information hiding using certain characteristics of Deoxyribonucleic Acid (DNA) sequences [10][11][12]. The suggested system contains two parts: in the first, the confidential information is encoded using a DNA- and amino-acid-based cipher; in the second, the encoded information is steganographically hidden at some location of a DNA sequence. Other authors have suggested an LSB- and DCT-based steganography process for protecting information [13][14][15][16][17]. All the information bits are embedded by modifying the least significant bits of the low-frequency Discrete Cosine Transform (DCT) coefficients of the image segments [11,18,19]. In [20][21][22], the authors suggest improved data protection using encryption and steganography: the information is encrypted, hidden behind an image, and then transferred to the cloud; the image can be downloaded whenever appropriate and the data decoded to recover the original file. In [23], the RSA encryption algorithm and image steganography were used for data concealment, together with the LSB approach; the Advanced Encryption Standard (AES) algorithm was adjusted and used to encode the secret message, thereby protecting it. In [24][25][26][27], a technique based on the advanced LSB (least significant bit) and RSA algorithms was discussed; there is less chance of an attacker being able to use steganalysis to recover data when matching the data to an image. In [28], a new form of steganography for true-color images was suggested, based on gray-level modulation using image transformation, a hidden key, and cryptography. Both the private key and the secret data are initially encoded using multiple encryption algorithms (bit-XOR processing, bit shuffling, and stego-key-based encoding) and then embedded in the pixels of the host image; in addition, the input image is transposed before data hiding. Objective analysis employing several image quality evaluation criteria was used to evaluate the proposed technique, which showed promising results in terms of secret data preservation. In [29], a new combination of cryptography and steganography using an HTML file was suggested; RJDA is a method that uses LSB (least significant bit) as an algorithm for steganography and encryption/decryption. Confidentiality is one of the most critical security criteria, ensuring that saved or transmitted data cannot be interpreted by any unauthenticated person. In [30], techniques combining cryptography and steganography were implemented to encode the data as well as to conceal the image details; this provides the transmitted data with two layers of protection and also highlights the power of mixing cryptography and steganography methods. In [31], LSB (Least Significant Bit) is used as the steganography method, and AES, RSA, DES, 3DES, and Blowfish are used as cryptographic algorithms to encrypt the information to be concealed in an image.
The work in this paper demonstrates an increase in the capacity that current steganography techniques can accommodate. A new method combining both techniques is proposed to obtain better results. First, the secret image is encoded using Rivest-Shamir-Adleman (RSA) asymmetric cryptography and transformed into American Standard Code for Information Interchange (ASCII) values; the resulting encrypted message in decimal format is converted to octal and finally to binary format. Second, the binary bits obtained are disguised inside a digital image using Least Significant Bit insertion to produce a new image with a special key, known as the stego image, which is delivered to the recipient, who executes the inverse operations to recover the encrypted image. The Mean Squared Error and Peak Signal to Noise Ratio values are calculated between the source and stego images, and the accuracy of this technique is shown to be much better than that of most other methods.
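A minimal sketch of the LSB-insertion step described above (illustrative only; the one-bit-per-pixel layout, bit ordering, and 8-bit grayscale assumption are ours, not the paper's):

```python
import numpy as np

def lsb_embed(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a stream of 0/1 bits in the least significant bits of an
    8-bit grayscale cover image (one bit per pixel, row-major order)."""
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("payload too large for cover image")
    stego = flat.copy()
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return stego.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the first n_bits hidden bits from the stego image."""
    return stego.flatten()[:n_bits] & 1

# Usage: hide 16 random bits in a random 8-bit cover image.
cover = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
payload = np.random.randint(0, 2, size=16, dtype=np.uint8)
stego = lsb_embed(cover, payload)
assert np.array_equal(lsb_extract(stego, 16), payload)
```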
The proposed system

The proposed system is divided into three main parts. The first part is used to analyze the images using the DWT. The second part is used to hide (embed) the analytical combinations, after the zeroing process, behind an image. The last part is used to encrypt the image using a mathematical exponential function. The proposed construction method deals with the discrete 2-D DWT, whose mathematical procedure is defined as follows:
1. Check the first image, which has high dimensions (the cover image to be modified; rectangular image, 365×438).
2. Check the second image, which has low dimensions (the image to be encoded and hidden; tree image, 258×350).
…
10. Obtain the encoded and hidden image … end.
1. Check the first image, which has high dimensions (the modulated cover image; geometric image, 819×1024×3).
2. Check the second image, which has low dimensions (the image to be encoded and hidden; pyramid image, 600×600×3).
…
7. Recover the images to the original … end.

Images (A to C2) of Figure 6 display the experimental results of Algorithm 4. Table 1 shows the feature measurements used to demonstrate the method's efficiency and to compare the status before and after the hiding and encryption operations. Correlation and entropy measurements for the two-dimensional and three-dimensional images were used, as follows:
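A possible implementation of the two quality measures just mentioned (a sketch under our own assumptions: Shannon entropy over an 8-bit histogram and the Pearson correlation between adjacent pixel pairs, which are the usual definitions in image-encryption work, though the paper does not spell out its formulas):

```python
import numpy as np
from scipy.stats import pearsonr

def entropy(img: np.ndarray) -> float:
    """Shannon entropy (bits/pixel) of an 8-bit image."""
    hist = np.bincount(img.flatten(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def adjacent_correlation(img: np.ndarray) -> float:
    """Pearson correlation between horizontally adjacent pixels;
    values near zero indicate good encryption."""
    left = img[:, :-1].flatten().astype(float)
    right = img[:, 1:].flatten().astype(float)
    return float(pearsonr(left, right)[0])

img = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)
print(entropy(img), adjacent_correlation(img))
```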
Conclusions
Modern data technology relies heavily on web services. This paper deals with security difficulties and how they can be prevented; cryptography and steganography methods are used to secure the data. Table 1 shows that zeroing the main characteristics of the images involved in the encryption process resulted in an encrypted image with lower entropy, and the correlation values are close to zero, indicating the quality of the method, i.e., the closer the result is to zero, the better the quality of the method. After removing the protection and decrypting, the resulting images are exactly the same as the main images. The approach used in this paper will help to create a trustworthy framework for data security.
"year": 2021,
"sha1": "99e7dd8dd0caf47b14e7fb052d669de7544e44d3",
"oa_license": "CCBY",
"oa_url": "https://online-journals.org/index.php/i-jim/article/download/24557/9765",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "f24d95da7fed2a067b2ac43e35d3c74d7a4b3a66",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Effect of concentrated light on morphology and vibrational properties of boron and tantalum mixtures
A mixture of boron (impurities: carbon, ∼B50C2, and boric acid, H3BO3) and tantalum (Ta) powders was heated in a nitrogen flow in a xenon high-flux optical furnace. The as-received powder was composed of h-BN, H3BO3, TaB2, B9H11, and a number of other phases, including β-rhombohedral boron, apparently heavily doped with Ta. FT-IR examination of every sample of the material reveals a complicated vibration spectrum containing, in particular, an absorption band near 2260 cm⁻¹. The shapes of these bands differ between samples because the powders were synthesized at different temperatures. It is known that, in the β-rhombohedral boron lattice, there are nano-sized voids of different types, which allow the accommodation of single atoms or small groups of atoms. Theoretical calculations performed by a quasi-classical method yield the same value, 2260 cm⁻¹, for the vibration frequency of Ta atoms in D-type crystallographic voids of the β-rhombohedral boron lattice. Since Ta atoms are known to prefer accommodation in D-type voids, the experimentally detected bands can be identified with localized vibrations of Ta atoms.
Introduction
The use of concentrated solar light and high-flux xenon optical simulators for heating offers a number of significant advantages. The main features of concentrated light are purity, the practical absence of heating inertia, relatively high operating temperatures (up to 2000 °C), the opportunity to exploit high temperature gradients, and the ability to operate in air, in vacuum, and in protective, oxidizing, or reactive atmospheres. It is local, one-sided heating with radial symmetry of the heating zone [1,2,3,4,5].
Many methods for the catalytic synthesis of BN structures [6] have been developed in recent years.
Graphite-like nanostructures, including nanotubes and encapsulated polyhedral particles, have been obtained by arcing hexagonal boron nitride (h-BN) and tantalum in a nitrogen atmosphere [7]. Direct synthesis of BN in a nitrogen flow can be considered a model that allows one to understand structural and phase transformations during heating in the presence of dopants. The local, one-sided heating and high temperature gradients of a xenon high-flux optical furnace define the morphology, complicated phase composition, and basic physicochemical properties of the powder.
Therefore, this furnace can also provide conditions for the transition of some boron phases, in the presence of tantalum and in a nitrogen flow, into β-rhombohedral boron (β-B).
β-B is a very promising high-temperature semiconducting material [8]. Its doping mechanism can be fundamentally different from that which usually occurs in conventional crystalline semiconductors. The complex crystalline structure of β-B [9] is characterized by a variety of crystallographic voids. These nano-sized interstices can accommodate single atoms or small groups of atoms and, in particular, large metal atoms with very slight structural distortions.
The effect of doping with a number of metals on the β-B electronic structure was investigated using a quasi-classical theoretical approach [10]. Previously, the same quasi-classical method was successfully used to calculate the frequencies of atomic vibrations in diboron B2 and some other boron-containing diatomic molecules [11], and in boron nitride (BN) in different structural modifications, such as the hexagonal sheet [12] and nanotubes [13,14]. The heavier dopants in β-B have to form deep donor levels inside the band gap or virtual levels inside the valence band. Therefore, at any level of doping with these metals, the material has to remain semiconducting.
The frequencies of atomic vibrations associated with the same metal impurities accommodated in crystallographic voids of the A-, D-, and E-types characteristic of the β-B lattice were calculated within the above-mentioned quasi-classical approach [15]. These vibration modes were predicted to be within the wavenumber range 1080–4380 cm⁻¹, i.e., above the intrinsic phonon bands of β-B, and consequently were attributed to localized vibrations. Therefore, metal doping of β-B, while modifying its electronic properties, can also affect the thermal properties of this material, because metal dopants localized in nano-sized voids serve as additional scattering centers for phonons. A detailed, but indirect, comparison of these theoretical results with experiment was carried out [16] based on thermal conductivity measurements performed for β-B samples doped with certain metals. It was demonstrated that the decrease in thermal conductivity in these samples can indeed be consistently explained by the mentioned scattering mechanism.
There are almost no direct experimental data on localized vibration modes in β-B crystals. The only exception is the optical properties of tantalum (Ta)-doped β-B studied within the range of 400–5000 cm⁻¹ in [17]. An absorption band was detected near 2200 cm⁻¹. The intensity of this absorption band correlated with the Ta content of the tested samples, which was quite high and varied over the wide range of 2×10¹⁶–3×10²⁰ cm⁻³. This location is in satisfactory agreement with the frequency of 2260 cm⁻¹ calculated for vibrations of Ta atoms localized in crystallographic voids of the D-type [10]. Moreover, Ta atoms incorporated in the β-B lattice are known to occupy preferentially the D-type voids [8]. This coincidence speaks in favor of identifying these bands with vibrations of Ta atoms localized in D-type voids of β-B crystals.
Why is there such a lack of optical information on impurity-related localized vibrations in β-B? It is understandable that IR absorption by impurity atoms is detectable only at a sufficiently high level of doping. However, according to the calculation
Experimental
Boron powder with a carbon impurity and a mean grain size of 0.20 μm was selected as the initial material (Fig. 1a). The phase composition of the powder was α-tetragonal boron stabilized by carbon impurities (∼B50C2), boric acid (H3BO3), commonly covering the fine boron particles, and α-rhombohedral boron (Fig. 1b).
Heating of the initial powders was carried out in a xenon high-flux optical furnace in a flow of nitrogen [18]. A quartz chamber was used for the formation of new phases. A compacted sample of the initial mixed boron and tantalum powders was placed on the surface of compacted h-BN powder to avoid contamination. The chamber was positioned in the center of the focal zone of three xenon emitters. Synthesis was carried out at a low energy density in the focal zone of the set-up, ∼0.7×10⁴ kW/m². The duration of the experiment was 30 min. The produced powdered material was collected from water-cooled copper screens and the surface of the quartz chamber. A detailed description of the experimental procedure can be found in [19,20,21,22,23].
The initial boron and produced powders were examined by scanning electron microscopy (SEM) using a "Superprobe 733" instrument (JEOL, Japan). The crystallinity of the initial and produced powders was determined by powder X-ray diffraction (XRD) using a DRON-3.0 diffractometer (Cu Kα radiation). It is clear that these experimental factors can significantly affect the morphology and phase composition of the product.
Fourier Transform Infrared (FTIR) absorption spectra were recorded with a Nicolet 6700 FT-IR Fourier spectrometer (Thermo Scientific). Absorption of the IR radiation is indicated by peaks in the spectrum, which are specific to the type of atomic vibrations in the tested material. According to the working principle, the measured intensity of the light attenuated in the sample is evaluated from the transmission spectra, %tm. We converted them into absorption spectra, %abs, via %abs = 100% − %tm.
The intensity of the absorption depends on several factors, but a particularly useful piece of information is the concentration of vibrating atoms.
Results and discussion
When the mixed boron and tantalum powders are heated in a xenon high-flux optical furnace in a flow of nitrogen, as-received equiaxed and plate-like particles form. The produced powder has a mean grain (particle) size of ∼1 μm, which is up to 5 times larger than that of the initial powders. The particle size distribution is not uniform because the powder is composed of two groups of particles: plate-like (mean particle size of 2 μm) and equiaxed (mean particle size of 0.2 μm), each with its own particle size distribution (Fig. 2a).
According to the XRD study, the synthesized multiphase product mainly consists of h-BN, β-B, and H3BO3 (Fig. 2b).
In addition, it is possible to identify traces of some other phases. Apparently, Ta reacts with the excess boron and forms boride phases, especially tantalum diboride TaB2. The presence of moisture in the nitrogen results in additional molecular hydrogen (H2) and the formation not only of boric acid but also of boranes (e.g., B9H11), boron hydrides, borazine B3N3H6, some other borane-nitrides, and also tantalum hydride Ta2H and nitride Ta3N5.
The detected crystalline phases of boron included those with chemical formulas B105 or B315, i.e., β-B described by rhombohedral or hexagonal unit cells, respectively. Since the powder was produced under high temperature gradients, defective structures were formed and, as a result, complicated XRD peaks were obtained, which makes their identification more ambiguous. Their broadening and slight shift should be related, respectively, to nanocrystallinity and Ta impurities.
Based on previous experiments [20], it can be concluded that the large plate-like particles are predominantly h-BN and H3BO3 and, therefore, the small equiaxed particles contain the rest of the above-listed phases.
The β-B phase is obviously present in the as-received powder because, in the present experiments, the reaction temperature was about 1200–1500 °C. The temperature at which α-rhombohedral boron starts its transition to the β-rhombohedral structure is ∼1200 °C [24].
The real content of β-B in the powdered product seems to be considerably higher than can be seen from the XRD pattern. This assumption is supported by the detected high content of boric acid. Surface structural defects activate oxidation and hydrogenation processes in fine boron particles [25,26] in the presence of water vapor, forming boron oxide (B2O3) and then boric acid shell layers (in Figs. 2b and 3, only the H3BO3 peaks are marked because, for such a multiphase mixture, they are practically indistinguishable from the B2O3 peaks), which mask the β-B crystalline core (Fig. 3).
The entire β-B component has to be heavily doped with Ta because tantalum diffuses very easily into powdered boron, especially in the presence of boric acid [27,28].
As is known, the interstitial occupancies of crystalline lattices by dopant atoms can be quantitatively estimated from XRD data if the material is characterized together with a specially prepared specimen of known composition. However, in our case this is impossible because the structure of the synthesized product is too far from that of crystalline β-B:Ta. Instead, it is a multi-phase mixture containing β-B in the form of a nanopowder: nanocrystallinity is evident from the specific broadening of the β-B XRD peaks. On the other hand, the initial charge contained up to 2 at% Ta, which is a high enough concentration to draw the qualitative conclusion that the β-B phase is interstitially doped with Ta. The slight shifts of the β-B peak positions can also be explained by the effect of impurities on the lattice constant. Therefore, we can trust the XRD analysis showing that the product of synthesis is a mix of phases that includes β-B:Ta. However, direct experimental verification of this conclusion seems to be almost impossible because it is too difficult (1) to separate the β-B component from the fine-powdered multi-phase material, and (2) to remove the surface shell layers of boric acid and boron oxide which mask the single-phase core of the β-B particles.
IR absorption spectra, and their close-up views near the peaks presumably related to absorption by localized vibrations of Ta impurities in β-B, were obtained for samples I and II, collected from the material synthesized near the reaction zone (Fig. 4a and b), and sample III, collected from the surface of the chamber wall (Fig. 4c).
FT-IR examination of each sample reveals a complicated vibration spectrum containing, in particular, an absorption band in the vicinity of 2260 cm⁻¹, theoretically predicted to be related to vibrations of Ta atoms localized in the D-type sub-nano-sized voids of the β-B crystalline lattice. These spectra, and the shapes of the mentioned absorption peak, differ because the samples were taken from surfaces at different distances from the focal zone, i.e., were produced under different experimental conditions, which yields differences in phase composition.
For samples I and II, taken from the reaction zone, the FT-IR spectra are quite similar to each other, and the peak under consideration is located at 2260 cm⁻¹, i.e., indistinguishable from the value theoretically predicted in [15]. The peak heights are also close: 9.82% and 8.73%, respectively.
As for sample III, collected from the chamber wall, its spectrum differs significantly in its general features from that of samples I and II. The absorption peak is broader and higher: 37.25%. The maximum is located at 2201 cm⁻¹. This value almost exactly coincides with the only experimental value of 2200 cm⁻¹ available until now [17].
Within the errors of the quasi-classical calculations, it is in agreement with the theoretical vibration frequency of 2260 cm⁻¹ as well.
Since two of the samples were taken from the material synthesized near the reaction zone and the third was collected from the chamber wall surface, their absorption spectra differ significantly (Fig. 4). We emphasize that the presented spectra refer to different parts of the multi-phase product of synthesis and, consequently, cannot be matched to any spectrum known for pure β-B.
According to a recent review on β-B [29], there is no direct information on the vibrations of metal dopants in this material. Raman spectra of numerous icosahedral boron-rich solids, including V-doped β-B, were presented in another review [30]. In addition to the above-mentioned theoretical study [10] of the electronic parameters of metal-doped material, the effect of some 3d metals (Cr, Cu, Fe, Ni, and V) [31], Li and Mg [32], etc., on the β-B electronic properties and structure has been studied experimentally and analyzed. Raman spectra are available for different metal borides: LuB12 and ZrB12 [33] and numerous other dodecaborides [34] (which agree excellently with ab initio calculations of the phonon spectra), single-crystalline lanthanum hexaboride LaB6 [35], and rare-earth tetraborides, whose lattice strengthens when doped with metals. As for the accuracy of quasi-classically calculated energy parameters, such as the frequencies of localized lattice vibrations, it is estimated to be quite acceptable: the error is less than 4%.
In general, the vibration frequency of an atom in a solid is known to be determined by its mass and the strength of its bonds to its neighbors. This explains why the localized phonon mode of Ta atoms doped into β-B is split off above the band of the intrinsic delocalized phonon modes of β-B.
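To make this mass/bond-strength dependence concrete, a simple harmonic-oscillator estimate can be written as ν̃ = (1/2πc)·√(k/m). The sketch below is our own illustration; the force constants are hypothetical values chosen to bracket the observed band, not values from the paper:

```python
import math

C_CM_S = 2.99792458e10          # speed of light, cm/s
AMU_KG = 1.66053906660e-27      # atomic mass unit, kg

def wavenumber_cm1(k_N_per_m: float, mass_amu: float) -> float:
    """Harmonic-oscillator vibrational wavenumber (cm^-1) for an atom of
    the given mass oscillating against an effectively rigid cage."""
    omega = math.sqrt(k_N_per_m / (mass_amu * AMU_KG))  # rad/s
    return omega / (2 * math.pi * C_CM_S)

# Illustrative: the force constant that would place a Ta atom (~181 amu)
# near ~2260 cm^-1; a heavier mass or weaker bonding lowers the frequency.
for k in (5.0e4, 5.45e4, 6.0e4):  # N/m, hypothetical values
    print(k, round(wavenumber_cm1(k, 180.95), 1))
```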
From the obtained results we can conclude that the content of the β-B:Ta component in the powdered product is higher at lower synthesis temperatures, when a larger fraction of the initial boron is structurally transformed into the Ta-doped β-B modification and a smaller fraction is consumed in the synthesis of BN. At higher synthesis temperatures, the synthesis of boron nitride becomes more intensive, reducing the content of the β-B phase.
The β-B structure formed at higher temperatures should be more perfect, which leads to the narrowing of the Ta-related absorption peak.
Conclusion
Heating the mixed boron and tantalum powders in a xenon high-flux optical furnace in a flow of nitrogen results in the formation of a powder composed of h-BN, β-B, H3BO3, and also TaB2, B9H11, and other phases. Since the heating is local and one-sided, in the presence of high temperature gradients the particle size distribution is not uniform. The as-received powder is composed of two groups of particles: plate-like (mainly h-BN and H3BO3; mean particle size of 2 μm) and equiaxed (β-B doped with Ta, TaB2, B9H11, and other phases; mean particle size of 0.2 μm), and each of them has its own particle size distribution. Tantalum diffuses very easily into powdered boron, especially in the presence of boric acid; therefore, the β-B component is heavily doped with Ta. The β-B can be partially hidden under the H3BO3 layer.
Based on the excellent agreement between the position of the absorption peak in the IR spectra of the material containing Ta-doped β-B, experimentally detected in the present work, and the vibration frequency of Ta atoms incorporated in the β-B crystalline lattice theoretically calculated previously, we can conclude that tantalum dopant atoms in the crystalline lattice of β-rhombohedral boron predominantly occupy crystallographic voids of the D-type and perform localized vibrations with a frequency of ∼2260 cm⁻¹.
Declarations
Author contribution statement
Competing interest statement
The authors declare no conflict of interest.
"year": 2018,
"sha1": "9fe958870555cef342cf7775993fedfcbdc63d0a",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2405844017327329/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9fe958870555cef342cf7775993fedfcbdc63d0a",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
A prospective study of the hearing gain achieved in relation to the site and size of tympanic membrane perforation after type 1 tympanoplasty with temporalis fascia graft
INTRODUCTION
Tympanic membrane perforation leads to recurrent ear infections and hearing loss. The target of middle ear surgery is to render the ear safe with good hearing. The mucosal type of chronic otitis media (COM) is characterised by a perforation in the pars tensa. The surface area of the tympanic membrane plays an important role in transmitting sound energy to the inner ear. It also serves a protective function against middle ear infections and shields the round window. This shielding is necessary to create a phase difference, so that the sound waves do not impact both the oval and round windows simultaneously. A perforation of the tympanic membrane results in a reduced surface area for sound pressure transmission and loss of the shielding effect on the round window. 1 Some studies have stated that hearing loss is greater when perforations are present in the posteroinferior quadrant and smaller in the anteroinferior quadrant. 2 Hearing loss is smaller for perforations away from the manubrium than for those touching the manubrium. 3 The present study is an effort to test the validity of the above concept. In a study by Kumar et al, a relation was found between the area of the perforation and the amount of hearing loss at 250 and 500 Hz but not at higher frequencies. 4 Gudepu et al showed that the hearing loss associated with perforation is greater at lower frequencies than at higher frequencies and increases with the size of the perforation. 5 Our study aims to evaluate the hearing improvement following type 1 tympanoplasty with temporalis fascia graft in relation to the site and size of the perforation.
METHODS
The present study is a prospective study of 120 patients, aged 15 to 60 years, with the mucosal type of COM who attended the outpatient department of ENT at Pushpagiri Institute of Medical Sciences and Research Centre, Tiruvalla, Kerala, India, from June 2015 to June 2018. Patients were grouped depending on the site and size of the tympanic membrane perforation, and a pure tone audiogram (PTA) was recorded for each patient. All patients underwent type 1 tympanoplasty with temporalis fascia graft by the underlay technique. All patients were followed up for one year, and hearing assessment was done at 3 months, 6 months, and 1 year. The PTA at the end of one year was taken for the assessment. The data obtained were analysed using the ANOVA test.
Inclusion criteria
All patients aged 15 to 60 years with chronic otitis media of the mucosal type, with perforations that had remained dry for a minimum of three months, and who underwent type 1 tympanoplasty using temporalis fascia graft were selected; both sexes were included.
Exclusion criteria
The exclusion criteria were: patients below 15 years and above 60 years of age; patients with wet ears; immunocompromised patients; patients with sensorineural hearing loss; patients with recurrent or residual perforation; patients in whom ossiculoplasty was done; and patients whose graft did not take up after 1 year of follow-up.
After written informed consent and ethical clearance were obtained, a detailed history was taken and an otological examination was done under the microscope. Tympanic membrane perforations were classified into small, medium, large, and subtotal based on the number of quadrants involved. The location of a central perforation was also denoted by its relationship to the handle of the malleus as anterior, posterior, or inferior. Audiological assessment was done by PTA. The PTA average was calculated as the average of the air conduction thresholds at 500 Hz, 1 kHz, 2 kHz, and 4 kHz. The air-bone gap (ABG) was assessed similarly. In all cases, surgical closure of the perforation was done by type 1 tympanoplasty using temporalis fascia graft by the underlay technique. Follow-up of the patients was done at 3, 6, and 12 months postoperatively by assessing graft uptake and serial PTA. The results were tabulated and statistically analysed with the ANOVA test.
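As a small worked example of the averaging just described (a sketch with made-up threshold values; the four-frequency average is as defined above):

```python
FREQS_HZ = (500, 1000, 2000, 4000)

def pta_average(thresholds_db: dict) -> float:
    """Four-frequency pure tone average (dB HL)."""
    return sum(thresholds_db[f] for f in FREQS_HZ) / len(FREQS_HZ)

# Hypothetical air- and bone-conduction thresholds for one patient.
air = {500: 45, 1000: 40, 2000: 35, 4000: 40}
bone = {500: 10, 1000: 10, 2000: 15, 4000: 20}

pta_ac = pta_average(air)          # 40.0 dB HL
abg = pta_ac - pta_average(bone)   # air-bone gap: 26.25 dB
print(pta_ac, abg)
```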
Age distribution
The age of the patients ranged from 15 to 60 years, with a mean age of 32 years. The maximum number of patients was in the age group below 30 years (47.5%), followed by patients aged 31 to 50 years (42.5%) and patients aged 51 to 60 years (10%) (Figure 2).
Site of perforation
63 cases (52.5%) had an anterior perforation, i.e., a perforation anterior to the handle of the malleus; 27 cases (22.5%) had a posterior perforation, i.e., a perforation posterior to the handle of the malleus; and 30 cases (25%) had a perforation on both sides of the handle of the malleus (Table 1).
Size of perforation
Out of the 120 cases of COM with perforation, 24 cases (20%) had a small perforation involving only one quadrant of the pars tensa, 66 cases (55%) had a medium-sized perforation involving two quadrants, 18 cases (15%) had a large perforation involving three quadrants, and 12 cases (10%) had a subtotal perforation involving all quadrants (Table 2). The pre- and postoperative hearing results are summarised in Table 3. The results were analysed statistically with the ANOVA test; the F value was found to be significantly higher than F-crit (Table 5).
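For reference, the F-test comparison reported above can be reproduced with standard tools; the following sketch (with invented dB-gain values, purely to illustrate the calculation, not the study's data) computes F and the critical value F-crit:

```python
from scipy import stats

# Hypothetical hearing-gain samples (dB) for three perforation-size groups.
small = [8, 10, 9, 11, 7]
medium = [12, 14, 13, 15, 12]
subtotal = [18, 20, 19, 22, 21]

f_stat, p_value = stats.f_oneway(small, medium, subtotal)

# Critical F at alpha = 0.05 with (k-1, N-k) degrees of freedom.
k, n_total = 3, 15
f_crit = stats.f.ppf(0.95, dfn=k - 1, dfd=n_total - k)

print(f_stat, f_crit, p_value)  # F > F-crit => group means differ
```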
DISCUSSION
The middle ear acts as an effective transformer, conducting acoustic energy from the tympanic membrane to the stapes footplate at the oval window. This is achieved by an intact tympanic membrane, the ear ossicles, and an air-filled middle ear that shields the round window from acoustic stimuli to maintain the phase difference between the two windows. The impedance difference between the middle and inner ear is matched mainly by the ratio of the surface area of the tympanic membrane to that of the stapes footplate, and by the lever action of the ossicles and tympanic membrane. In tympanic membrane perforation, the lever action is compromised. In a perforation with an intact rim of tympanic membrane, hearing is better preserved than when the margins are affected, as energy transfer from the canal to the membrane occurs via the rim. The degree of loss is proportional to the size of the perforation, with greater loss at lower frequencies. 6 The mucosal type of COM is one of the important causes of hearing loss and recurrent ear infections in developing countries like India, and type 1 tympanoplasty is the surgical treatment of choice for this condition. This is a prospective study of the clinical profile of 120 patients with COM of the mucosal type in a tertiary care centre in South India. The study quantifies the hearing loss in relation to the size and site of the perforation and the air-bone gap closure achieved following type 1 tympanoplasty with temporalis fascia. In our study, like the study by Wasson et al, we observed that the larger the perforation of the tympanic membrane, the greater the decibel loss in sound perception, which can be attributed to the loss of the "round window baffle" effect. In contrast, studies by Voss et al and Oluwole et al did not observe any significant differences in hearing loss between anterior and posterior quadrant perforations. 7,8 In the present study, patients below 15 and above 60 years of age were excluded. The maximum number of patients was in the age group below 30 years. Children are considered poor candidates for type 1 tympanoplasty as they are more prone to recurrent respiratory infections due to the immaturity of the immune system and Eustachian tube-related problems. Out of 120 cases, 54 were males and 66 were females.
Out of 120 cases, 24 patients had a small perforation, 66 had a medium perforation, 18 had a large perforation, and 12 had a subtotal perforation. In a similar study conducted by Voss to determine the hearing loss in perforations of the tympanic membrane in 62 cases, 48% had small perforations, 40% had medium-sized perforations, and 12% had subtotal perforations. 7 Voss stated that the dominant mechanism of hearing loss is the reduction in the sound pressure difference across the tympanic membrane.
The study stated that the reduction in the areal ratio between the tympanic membrane and the stapes makes little contribution to the total loss, and that direct stimulation of the oval and round windows may limit the loss, but only for perforations greater than 1 to 2 quadrants of the tympanic membrane.
Titus et al, in their study on the correlation of the site of tympanic membrane perforation with the degree of hearing loss using video-otoscopy, concluded that in acute tympanic membrane perforations the relationship between the site of the perforation and the magnitude of hearing loss was insignificant (p=0.244), versus that in chronic perforations (p=0.047). 9 Titus concluded that posterior perforations were most common among chronic perforations and speculated that the greater hearing loss was due to superimposed middle ear disease such as cholesteatoma. 9 In the present study, regarding the site of perforation, preoperative hearing loss was greater in perforations involving both the anterior and posterior parts of the tympanic membrane than in either site occurring singly. The preoperative ABG was also greater in this group. This is similar to the studies by Pannu et al and Nahata et al, which also showed that posterior quadrant perforations caused more hearing loss due to loss of the round window baffle effect. 10,11 Comparing the sizes of perforation, hearing loss and ABG were greatest in subtotal perforations, followed by large perforations. Nayak et al observed a similar result in their study. 12 The gain in hearing and air-bone closure were greatest in patients who had subtotal perforations and in patients with perforations involving both the anterior and posterior quadrants, followed by large perforations and perforations involving only the posterior quadrant. These results were statistically significant by the analysis of variance test (ANOVA), with a high F value.
CONCLUSION
Chronic otitis media of the mucosal type is more common in the young and middle-aged population. Unilateral involvement is more common. There is significant
"year": 2019,
"sha1": "73d23494335ef1e1cf1716d833a03f98b073111e",
"oa_license": null,
"oa_url": "https://www.ijorl.com/index.php/ijorl/article/download/1896/1027",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "90bf8f02864ae7f20c78b868b1513de763772e7b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Constraining Self-Interacting Dark Matter with the Milky Way's dwarf spheroidals
Self-Interacting Dark Matter is an attractive alternative to the Cold Dark Matter paradigm only if it is able to substantially reduce the central densities of dwarf-size haloes while keeping the densities and shapes of cluster-size haloes within current constraints. Given the seemingly stringent nature of the latter, it was thought for nearly a decade that SIDM would be viable only if the cross section for self-scattering was strongly velocity-dependent. However, it has recently been suggested that a constant cross section per unit mass of σ_T/m ∼ 0.1 cm²/g is sufficient to accomplish the desired effect. We explicitly investigate this claim using high-resolution cosmological simulations of a Milky-Way-size halo and find that, similarly to the Cold Dark Matter case, such a cross section produces a population of massive subhaloes that is inconsistent with the kinematics of the classical dwarf spheroidals, in particular with the inferred slopes of the mass profiles of Fornax and Sculptor. This problem is resolved if σ_T/m ∼ 1 cm²/g at the dwarf spheroidal scales. Since this value is likely inconsistent with the halo shapes of several clusters, our results leave only a small window open for a velocity-independent Self-Interacting Dark Matter model to work as a distinct alternative to Cold Dark Matter.
INTRODUCTION
It is now clear that observations of dark matter dominated systems such as low-mass (dwarf) and low surface brightness (LSB) galaxies favour the presence of dark matter cores of O(1 kpc) (e.g. Moore 1994; Kuzio de Naray et al. 2008; de Blok 2010; Walker & Peñarrubia 2011; Amorisco & Evans 2012). These observations are a challenge for the Cold Dark Matter (CDM) paradigm, where dark matter haloes are predicted to have density cusps, an imprint of the collisionless nature of CDM (the core-cusp problem). In a possibly related issue, it has been pointed out recently that the dark satellites of Milky-Way (MW) size halo simulations are too dense to be consistent with the kinematics of the MW dwarf spheroidals (dSphs) (the too big to fail problem; Boylan-Kolchin et al. 2011, 2012). Although these are significant challenges to the CDM model, their solution could naturally lie in our incomplete understanding of the complex process of galaxy formation. Even though the internal dynamics of dwarfs is dominated by dark matter today, it is conceivable that earlier episodes of star formation and subsequent gas removal by supernova feedback might have been violent enough to modify the initial cuspy dark matter distribution into a cored one (e.g. Navarro et al. 1996; Pontzen & Governato 2012). Current hydrodynamical simulations have shown that such a mechanism is able to create large cores in intermediate-mass galaxies (Governato et al. 2010) and, once tidal stripping is taken into account, it might solve the too big to fail problem as well (Brooks & Zolotov 2012). It is not clear, however, if such episodes of large gas blow-outs are consistent with the star formation histories and stellar properties of LSBs and dwarfs (e.g. Kuzio de Naray & Spekkens 2011; Boylan-Kolchin et al. 2012; Peñarrubia et al. 2012).
Given the large uncertainties regarding whether baryon physics can reconcile the CDM model with observations of dwarf galaxies, it is prudent to consider the alternative, which is to question the fundamental CDM hypotheses, namely, the collisionless and cold nature of CDM particles. This alternative is additionally encouraged by the null detection of several experiments that are pursuing the discovery of the favoured CDM particles, and whose sensitivity is reaching the natural values for the interaction cross sections of the particle physics models that predict them (e.g. Supersymmetry; Abazajian et al. 2012; Aprile et al. 2012).
An exciting possibility is that of self-interacting dark matter (SIDM), originally introduced over a decade ago by Spergel & Steinhardt (2000). Self-scattering between dark matter particles is a feature of present hidden-sector dark matter models that predict the existence of new gauge bosons. The presence of these bosons is invoked to enhance the annihilation and/or self-scattering of dark matter particles to explain a number of puzzling observations (the Sommerfeld enhancement; e.g. Arkani-Hamed et al. 2009; Buckley & Fox 2010). Collisional dark matter is constrained by the requirements of different astrophysical observations, such as the ellipsoidal shape of haloes, the avoidance of subhalo evaporation in galaxy clusters, and the avoidance of the gravothermal catastrophe (e.g. Miralda-Escudé 2002; Gnedin & Ostriker 2001; Firmani et al. 2001). The original excitement caused by SIDM died off because of the apparently strong constraints on the scattering cross section, set particularly by X-ray and lensing observations of clusters in the analysis by Miralda-Escudé (2002): σ_T/m ≲ 0.02 cm² g⁻¹; such a low cross section would have no relevant impact on the dynamics of galaxies at the O(1 kpc) scale. Peter et al. (2012) have revised this constraint and found that it was overestimated by over an order of magnitude, suggesting that a current constraint is of O(0.1 cm² g⁻¹).
SIDM is clearly a viable model if the cross section depends on the relative velocity in such a way that dark matter behaves as a collisional fluid in dwarfs, and is essentially collisionless at the scale of clusters. Although this idea was phenomenologically proposed (Yoshida et al. 2000) and explored with cosmological simulations (Colín et al. 2002) a decade ago, its theoretical support has come only recently (e.g. Ackerman et al. 2009; Feng et al. 2009, 2010; Buckley & Fox 2010; Loeb & Weiner 2011; van den Aarssen et al. 2012; Tulin et al. 2012). Moreover, earlier simulations lacked the resolution needed to reliably explore the sub-kpc region of dwarf-size haloes. Only recently was it explicitly shown that theoretically motivated velocity-dependent SIDM (vdSIDM) models produce core sizes consistent with those found in MW dSphs, and also solve the emergent too big to fail problem (Vogelsberger et al. 2012, hereafter VZL).
On the other hand, Rocha et al. (2012) have suggested that a velocity-dependent cross section is not essential, since a SIDM model with a constant σ_T/m = 0.1 cm² g⁻¹ (allowed by cluster constraints) is also consistent with the inner structure of dSphs. This seems to contradict earlier estimates made by Yoshida et al. (2000), who suggested that the average number of collisions per particle in the central core, n_core, scales as the cube root of the halo mass. Since for this cross section n_core ∼ 2 for a cluster-size halo (within ∼10% of the virial radius, see their Fig. 2), n_core would thus be suppressed by a factor of O(100) in dwarf-size haloes, resulting in cores that are too small. This scaling is however imprecise, since for a halo of virial mass M:

$$n_{\rm core} \sim \frac{\sigma_T}{m}\,\rho_{\rm core}\,\sigma_{\rm vel}\,t_{\rm age} \propto \frac{\sigma_T}{m}\,\Delta_{\rm core}^{7/6}\,M^{1/3}\,t_{\rm age},$$

where ρ_core and σ_vel are the average density and velocity dispersion of dark matter particles, t_age is the formation time, and Δ_core is proportional to the density contrast relative to the background density; all of these quantities are defined within the core of the halo. Thus, the higher concentrations and larger formation times of dwarf-size haloes reduce the M^(1/3) dependence. It is important to remark that both Yoshida et al. (2000) and Rocha et al. (2012) extrapolated the regimes they could directly simulate to the regime of dwarfs. In this Letter, we resolve this issue by proving explicitly that a constant scattering cross-section of σ_T/m = 0.1 cm² g⁻¹ is not able to create O(1 kpc) cores in the dark subhaloes where the MW dSphs are expected to live; it deviates only slightly from the CDM predictions. Unless baryonic processes are invoked, the range of interesting constant σ_T/m values is thus very narrow: 0.1 cm² g⁻¹ < σ_T/m < 1 cm² g⁻¹.

Figure 1. Dependence of the momentum-transfer weighted cross-section per unit mass on the relative velocity for the different SIDM models considered here. The constant cross section cases with σ_T/m ≳ 1 cm² g⁻¹ are likely ruled out by halo shapes based on X-ray and lensing observations of clusters. The models with a velocity-dependent cross-section are tuned to satisfy all current astrophysical constraints and have been shown to be consistent with the kinematics of MW dSphs (VZL).
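To illustrate the scaling argument above, a rough numerical sketch (our own, with illustrative order-of-magnitude parameter choices, not values from the paper) evaluates the collision rate per particle, Γ = (σ_T/m)·ρ_core·σ_vel, multiplied by the halo age:

```python
KM_S_TO_CM_S = 1.0e5
GYR_TO_S = 3.156e16
MSUN_PC3_TO_G_CM3 = 6.77e-23

def n_core(sigma_over_m_cm2_g, rho_msun_pc3, sigma_vel_km_s, t_age_gyr):
    """Approximate number of self-scatterings per particle in the core:
    n_core ~ (sigma_T/m) * rho_core * sigma_vel * t_age."""
    return (sigma_over_m_cm2_g
            * rho_msun_pc3 * MSUN_PC3_TO_G_CM3
            * sigma_vel_km_s * KM_S_TO_CM_S
            * t_age_gyr * GYR_TO_S)

# Illustrative numbers for a dwarf-size subhalo and a cluster-size halo:
print(n_core(1.0, 0.1, 10.0, 10.0))    # dwarf:   ~2 collisions per particle
print(n_core(1.0, 0.01, 1000.0, 5.0))  # cluster: ~10 collisions per particle
```

The dwarf's higher core density and longer formation time largely compensate for its much lower velocity dispersion, which is the point the equation makes against a pure M^(1/3) scaling.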
SIMULATIONS AND RESULTS
Our analysis is based on re-simulations of the Aq-A halo (level 3 resolution) of the Aquarius project (Springel et al. 2008), which is a set of representative MW-like haloes within the CDM WMAP-1yr cosmology. This halo has a virial mass of M_200 ∼ 1.8 × 10¹² M_⊙ within a radius of r_200 ∼ 246 kpc (enclosing an average density of 200 times the critical density). The particle mass in the simulations is m_p ∼ 4.9 × 10⁴ M_⊙ and the Plummer-equivalent gravitational softening length is ∼120 pc. We use an algorithm that adds dark matter self-scattering to the GADGET-3 N-body code for gravitational interactions (last described in Springel 2005). The algorithm uses an N-body/Monte Carlo approach to represent the microphysical scattering process in the macroscopic context of the simulation. The connection between this type of approach and the Boltzmann equation is nicely described in Appendix A of Rocha et al. (2012). The details of this algorithm can be found in VZL, as well as simple controlled tests that show the agreement between the outcome of the code and analytical expectations. All dark matter models were simulated starting from the same initial conditions, and their present-day self-bound subhalo population was identified using the SUBFIND algorithm (Springel et al. 2001).

In addition to CDM, we consider five SIDM cases (recently presented in Vogelsberger & Zavala 2012 to analyse the impact of self-scattering on direct detection experiments): three with a constant cross section and two with a velocity-dependent one given by a Yukawa-like interaction (e.g. Loeb & Weiner 2011). The transfer cross section scaling with the relative velocity can be seen in Fig. 1.
In addition to CDM, we consider five SIDM cases (recently presented in Vogelsberger & Zavala 2012 to analyse the impact of self-scattering in direct detection experiments): three with a constant cross section and two with a velocity-dependent one given by a Yukawa-like interaction (e.g. Loeb & Weiner 2011). The transfer cross section scaling with the relative velocity can be seen in Fig. 1 (Walker et al. 2009;Wolf et al. 2010). Clearly, the most massive CDM subhaloes are inconsistent with the kinematics of the MW dSphs. SIDM can alleviate this problem only for a constant scattering cross-section σ T /m 1 cm 2 g −1 (SIDM10 and SIDM1) or if it has a velocity dependence (vdSIDMa and vdSIDMb). Current constraints from clusters put an upper limit to the constant cross section case close to σ T /m ∼ 0.1 cm 2 g −1 (SIDM0.1). This value is too low to solve the too big to fail problem. The observational data in the bottom right can be fitted by lower mass subhaloes, not shown here since they are affected by the limited resolution of our simulations.
We note that the formula for σ_T/m for the velocity-dependent cases is only valid in the classical regime; once quantum effects become important, the finite interaction length of the Yukawa potential cuts off the zero-velocity divergence of the cross section (see e.g. Feng et al. 2010). For our purposes, the quantity of relevance is (σ_T/m)·v, which goes to zero at zero velocity. It is clear that for the vdSIDM models, σ_T/m ≫ 0.1 cm² g⁻¹ at the characteristic velocities in MW dSphs (the observed velocity dispersion of stars along the line of sight is ∼10 km s⁻¹; e.g. Walker et al. 2009). This fact alone already casts doubt on the possibility of SIDM0.1 (σ_T/m = 0.1 cm² g⁻¹) producing results similar to those of the vdSIDM cases, which were shown to be consistent with the kinematics of the MW dSphs in VZL. We note that there is a change in nomenclature relative to VZL: RefP0≡CDM, RefP1≡SIDM10, RefP2-3≡vdSIDMa-b. Fig. 2 shows the inter-quartile range (i.e., 25-75%) of the distribution of the present-day circular velocity profiles of the 15 subhaloes with the largest values of V_max(z = 0) (the maximum of the circular velocity) within 300 kpc halocentric distance. The symbols with error bars correspond to estimates of the circular velocity within the half-light radii of the sample of 9 MW dSphs used by Boylan-Kolchin et al. (2011, 2012). Since current data for the stars in the dSphs provide an incomplete description of the 6-dimensional phase-space distribution, the derived mass profiles are typically degenerate with the velocity anisotropy profile. However, the uncertainty in mass due to this degeneracy is minimised near the half-light radius, where Jeans models tend to give the same value of enclosed mass regardless of anisotropy (e.g. Strigari et al. 2007; Walker et al. 2009; Wolf et al. 2010). Observations can then be used to constrain the maximum dark matter density within this radius. CDM clearly predicts a population of massive subhaloes that is inconsistent with all 9 dSphs, whereas for SIDM this problem disappears as long as σ_T/m ≳ 1 cm² g⁻¹ on dSph scales. The currently allowed case with σ_T/m = 0.1 cm² g⁻¹ is very close to CDM, only slightly reducing the inner part of the subhalo velocity profiles. On the contrary, the vdSIDM models clearly solve the too big to fail problem. We note that the extent of the too big to fail problem in CDM depends on the mass of the MW halo; if it is at the low end of current estimates, ∼10¹² M_⊙, the problem may be resolved (e.g. Wang et al. 2012), although a low halo mass may generate other difficulties, such as explaining the presence of the Magellanic Clouds. In the context of SIDM, the lower the mass of the MW halo, the weaker the argument against σ_T/m = 0.1 cm² g⁻¹.
A simple statistical test of the agreement between the subhalo distributions of two models and the 9 dSphs is to compute the chi-square difference associated with the likelihood of having n₊ (n₋) data points above (below) the median of the distribution of each model. Assuming that the probability distribution of finding n± data points is Poissonian,

$$P(n_\pm) = \frac{\lambda^{n_\pm}\,e^{-\lambda}}{n_\pm!}, \qquad \chi^2 = -2\ln\left[P(n_+)\,P(n_-)\right],$$

with λ = 9/2 the expected number of points on each side of the median. Comparing SIDM1 and the vdSIDM models with SIDM0.1, the difference is driven solely by Draco, with the former preferred over the latter with Δχ² ∼ 4.4 (2.1σ). Using an interpolation of our three constant cross section cases, we estimate that σ_T/m ∼ 0.6 cm² g⁻¹ is the minimum value for which Δχ² = 0 relative to SIDM1. To show the typical core sizes and central densities predicted by the different SIDM models, we plot in Fig. 3 the density profiles of the 15 subhaloes with the largest V_max(z = 0) values. A value of σ_T/m ∼ 1 cm² g⁻¹ is needed for a constant cross section SIDM model to mimic the effect of the vdSIDM models and produce ∼1 kpc cores with central densities of O(0.1 M_⊙ pc⁻³). If the transfer cross section is reduced to 0.1 cm² g⁻¹, then the subhaloes are only slightly less dense than in CDM, having cores (central densities) that are at least twice smaller (higher) than those in the other SIDM cases.
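A compact way to evaluate this statistic (our own sketch of the Poisson-likelihood comparison described above; the counts in the example are hypothetical):

```python
import math

N_DSPH = 9
LAM = N_DSPH / 2  # expected points on each side of a model's median

def chi2(n_above: int, n_below: int) -> float:
    """chi^2 = -2 ln[P(n+) P(n-)] with Poisson P of mean LAM."""
    def ln_poisson(n):
        return n * math.log(LAM) - LAM - math.lgamma(n + 1)
    return -2.0 * (ln_poisson(n_above) + ln_poisson(n_below))

# Hypothetical counts: model A splits the data 4/5, model B splits it 1/8.
delta_chi2 = chi2(1, 8) - chi2(4, 5)
print(delta_chi2, math.sqrt(delta_chi2))  # Delta chi^2 and sigma equivalent
```

Note that a Δχ² of 4.4 corresponds to √4.4 ≈ 2.1σ, matching the significance quoted above.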
VZL showed that the SIDM10 and vdSIDM models have convergent density and circular velocity profiles within the central density core; we have found the same for SIDM1 and, to a lesser extent, for SIDM0.1. Convergence is harder to achieve for CDM since, at a fixed radius, the two-body relaxation time is shorter than for SIDM (due to the reduced densities in the latter case). Power et al. (2003) showed that the density profile converges at a given radius when the two-body relaxation time at this radius is larger than the Hubble time. At the resolution level of our simulations, the convergence radius for CDM is ∼600 pc, which implies that the CDM circular velocity and density profiles shown in Figs. 2 and 3 underestimate the true dark matter content within ∼600 pc (Springel et al. 2008), whereas for SIDM it is at most half of this value. In any case, the expectation is that if the density profile of SIDM0.1 has not yet converged, higher resolution would drive it towards higher densities, not lower, bringing it even closer to CDM (this is a trend confirmed for the cases analysed in VZL, see their Fig. 9).
Figure 3. Density profiles of the 15 subhaloes with the largest V_max(z = 0) values for the different dark matter models (see Fig. 1). We show the median and 1st and 3rd quartiles of the subhalo distribution for each case. The velocity-dependent SIDM cases produce cores of approximately 600 pc. Of the constant cross section SIDM models we explored, the one that is currently allowed by cluster constraints, SIDM0.1 (σ_T/m = 0.1 cm² g⁻¹), only deviates slightly from CDM; the associated core sizes are less than 300 pc.

By using the fact that some MW dSphs have chemodynamically distinct stellar subcomponents that independently trace the same gravitational potential, Walker & Peñarrubia (2011)
showed that it is possible to constrain the slopes of their inner mass profiles. They found that Fornax and Sculptor are consistent with cored density profiles, while cuspy profiles with ρ ∝ r⁻¹ are ruled out with significances of 96% and 99%, respectively. We use this method to test the consistency of the SIDM models explored here. We found that all SIDM models, except for SIDM0.1, are well fitted by the following three-parameter formula:

$$\rho(r) = \frac{\rho_0}{\left(1 + r/r_c\right)\left[1 + (r/r_s)^2\right]}, \qquad (4)$$

which is similar to the Burkert profile (Burkert 1995) but with two scale radii, r_s and r_c. The remaining case, SIDM0.1, is better fitted by:

Using these formulae, we found the best-fit parameters for the massive subhaloes in each of the SIDM models. These fits are restricted to a radial range between the softening length of our simulations, ∼120 pc, and the radius where tidal stripping has made the outer logarithmic slope of the density profile steeper than −3. The latter restriction is relevant only for four subhaloes that are significantly affected by tidal stripping within ∼5 kpc. Two of these are clearly affected within ∼1 kpc and should probably be removed in a more detailed analysis; they are the least consistent with the data.
To find the best fit parameters we minimise a figure of merit, Q, in which the sum goes over all radial bins; thus defined, Q gives an estimate of the goodness of the fit. In Table 1 we give the best fit parameters for the median of the subhalo population for each SIDM model (except for SIDM10, which has been ruled out).

[Table 1 caption: Best fit parameters for the median of the SIDM density profiles of the 15 subhaloes with the largest V_max(z = 0) values. The last two have a constant cross section while the others have a velocity-dependent cross section (see Fig. 1). SIDM1.0 is likely ruled out by cluster observations (see Rocha et al. 2012). The density profile used for the fits is given by Eq.]

We note that Peñarrubia et al. (2012) already used Eq. (4) to estimate r_c ≳ 1 kpc for Fornax and Sculptor. Cores of this size are too large to be consistent with most of the subhaloes in SIDM0.1.
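The figure of merit Q is likewise not reproduced; a standard choice for such density-profile fits (e.g., the fit-quality measure used by Springel et al. 2008), consistent with a sum over radial bins, would be:

```latex
% Assumed form of the goodness-of-fit measure (not verbatim from the source):
Q^2 \;=\; \frac{1}{N_{\mathrm{bins}}}\sum_{i=1}^{N_{\mathrm{bins}}}
          \left[\,\ln\rho_i - \ln\rho_{\mathrm{model}}(r_i)\,\right]^2
```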
To test the consistency of the different SIDM models, we use the parameters of the fits to compute the slope of the inner mass profile between the pair of half-light radii (the median likelihood values) of the two distinct stellar subcomponents in Fornax and Sculptor. We then test whether this slope is as steep as the lower limit set by the data. The confidence level at which a given slope is said to be excluded is determined by the fraction f_p of the posterior distribution of allowed slopes that are larger. For σ_T/m = 0.1 cm² g⁻¹, all but 2 subhaloes are excluded at > 95(90)% confidence for Fornax (Sculptor); the remaining two subhaloes have values of f_p ≈ 0.86 (0.81) for Fornax (Sculptor). On the contrary, the other SIDM models (except for SIDM10, which was not analysed) are clearly more consistent with the data, with only four subhaloes excluded at > 90% confidence for Fornax (five of the subhaloes actually have f_p < 0.8), while only three subhaloes are excluded at > 80% confidence for Sculptor. We found no clear preference between the vdSIDM models and the case with constant σ_T/m = 1 cm² g⁻¹. To consider the impact of the non-spherical morphologies of Fornax and Sculptor, we repeated the analysis using elliptical rather than circular radii for the stars used to estimate the slope of the mass profiles (see sect. 6.1 of Walker & Peñarrubia 2011). We find that for all models the Fornax constraint becomes slightly more exclusive (rejecting more subhaloes), while the Sculptor constraint becomes considerably less exclusive.
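As an illustration of the slope test just described, the sketch below computes Γ = Δlog M / Δlog r between two half-light radii for a cored profile. The profile follows the two-scale Burkert-like form sketched earlier (itself an assumption), and all parameter values are hypothetical, not the paper's fits:

```python
import numpy as np
from scipy.integrate import quad

def rho(r, rho0=0.1, rc=0.6, rs=2.0):
    """Cored, Burkert-like density profile (assumed two-scale form).
    Parameter values are illustrative only, not the paper's best fits."""
    return rho0 / ((1.0 + r / rc) * (1.0 + (r / rs) ** 2))

def enclosed_mass(r, **kw):
    """M(<r) by direct integration; the normalisation cancels in the slope."""
    integrand = lambda x: 4.0 * np.pi * x ** 2 * rho(x, **kw)
    return quad(integrand, 0.0, r)[0]

def mass_slope(r1, r2, **kw):
    """Gamma = dlogM/dlogr measured between the two half-light radii."""
    m1, m2 = enclosed_mass(r1, **kw), enclosed_mass(r2, **kw)
    return np.log(m2 / m1) / np.log(r2 / r1)

# Hypothetical half-light radii (kpc) of two stellar subcomponents.
# Gamma -> 3 for a constant-density core and Gamma -> 2 for a rho ~ 1/r cusp,
# so a measured lower limit on Gamma can exclude cuspy subhalo profiles.
print(mass_slope(0.6, 1.0))
```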
DISCUSSION AND CONCLUSIONS
Self-Interacting Dark Matter (SIDM) offers a promising solution to the dwarf-scale challenges faced by the otherwise remarkably successful Cold Dark Matter (CDM) model. The original idea of a velocity-independent, elastically scattering cross section died off quickly, mostly due to the apparently stringent constraint found by Miralda-Escudé (2002), which required the cross section per unit mass to be σ_T/m ≲ 0.02 cm² g⁻¹. This value is uninteresting, as earlier estimates required σ_T/m to be at least of O(1 cm² g⁻¹) to create ∼ 1 kpc cores in dwarf-size haloes (Yoshida et al. 2000; Davé et al. 2001). Peter et al. (2012) have recently revised earlier constraints on collisional dark matter and found them to be overestimated by over an order of magnitude; the current constraint is σ_T/m ≲ 0.1 cm² g⁻¹. Moreover, these authors have revived, in a companion paper (Rocha et al. 2012), the velocity-independent SIDM model by suggesting that a value of σ_T/m = 0.1 cm² g⁻¹ is seemingly consistent with the inner structure of the MW dSphs.

[Figure 4 caption: Subhalo mass function for a MW-size halo within CDM and different elastic SIDM models. The only model that leads to a difference relative to CDM has a constant cross section of σ_T/m = 10 cm² g⁻¹, which is clearly ruled out by cluster observations.]
Motivated by the prospect of a viable constant cross section SIDM model, we investigate the claims from Rocha et al. (2012) using high resolution cosmological SIDM simulations of a MW-size halo. Contrary to Rocha et al. (2012), we are able to resolve the sub-kpc structure of the massive subhalo population to sufficiently small radii for comparison with the MW dSphs. We find that a velocity-independent SIDM model is consistent with the kinematics of dSphs only if σ_T/m ≈ 1 cm² g⁻¹ (see Fig. 2), i.e., a value of this order is required to solve the too big to fail problem (Boylan-Kolchin et al. 2011). If the cross section is lower by an order of magnitude, the subhalo population is still too dense to be consistent with the MW dSphs. On the other hand, as shown already in VZL, velocity-dependent SIDM models with a Yukawa-like interaction (as proposed in Loeb & Weiner 2011; see Fig. 1) successfully solve the too big to fail problem.
We also use the inner slopes of the mass profiles of Fornax and Sculptor, from the analysis of Walker & Peñarrubia (2011), as examples to test the consistency of the different models we simulate here. For a velocity-independent SIDM model with σ_T/m ∼ 0.1 cm² g⁻¹, we find that 13 of the 15 subhaloes with the largest V_max(z = 0) values are inconsistent with the data from Fornax (Sculptor) at > 95(90)% confidence (the other two are inconsistent at 81% confidence). A constant cross section ten times larger is as consistent as the velocity-dependent SIDM models explored here, with only four (three) of the top 15 subhaloes excluded at > 90(80)% confidence in the case of Fornax (Sculptor); for all these cases, there are several subhaloes that are unambiguously consistent with the data.
According to the analysis of Peter et al. (2012), a constant cross section of σ_T/m = 1 cm² g⁻¹ is likely inconsistent with the observed halo shapes of several clusters. We have now shown that σ_T/m = 0.1 cm² g⁻¹ is too close to CDM to represent a distinct alternative. An interpolation of our simulations suggests that the central densities of the massive subhaloes would be consistent with the MW dSphs if σ_T/m ∼ 0.6 cm² g⁻¹. We conclude that the hypothesis of a constant scattering cross section as a solution to the core-cusp problem remains viable, but only within a very narrow range of σ_T/m values. The challenges in making a definitive test of this hypothesis are twofold: the cluster constraints need to be refined, and the impact of conservative baryonic processes needs to be estimated. Although adding gas physics is the next step for SIDM simulations, a further challenge in making SIDM an even more attractive alternative to CDM is the prospect of explaining the observed scarcity of MW satellites and field dwarfs (e.g. Klypin et al. 1999; Zavala et al. 2009) without invoking extreme baryonic processes. As we show in Fig. 4, all allowed elastic SIDM models produce essentially the same abundance of dwarf-size haloes as CDM. A promising possibility is that of exothermic interactions between excited and non-excited states of dark matter (e.g. Loeb & Weiner 2011). The velocity kick imparted during the collision might be large enough to cause the evaporation of low-mass haloes.
"year": 2012,
"sha1": "0a2334e04729d5f67fe747fdbff7d2941b188447",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnrasl/article-pdf/431/1/L20/4208161/sls053.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "9fe3857a2f41a2248e8fc94ee3b82a8e635f22ff",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
The Influential Factors of Internal Audit Effectiveness: A Conceptual Model
The purpose of this paper is to systematically review the literature on the influential factors of internal audit effectiveness and to articulate these factors in a conceptual model. A systematic literature review (SLR) is conducted to identify the influential factors of internal audit effectiveness; relevant studies published between January 1999 and March 2022 are reviewed through a lens focused on the key factors of internal audit effectiveness. In addition, our review takes into consideration The International Professional Practices Framework for Internal Auditing (IPPF). Five factors of internal audit effectiveness and their dimensions are identified and combined into a conceptual model; these factors are internal audit organizational characteristics, internal audit relationships, internal audit processes, internal audit resources, and internal audit coordination with other assurance providers. This paper provides internal audit practitioners, audit committees, and senior management in organizations with a broad understanding and comprehensive overview of the key factors that should be considered to make their internal audit functions more effective. The proposed conceptual model offers a holistic view of the influential factors of internal audit effectiveness and clearly identifies the dimensions of those factors. Additionally, it provides an opportunity for future research to test and build on the model.
Introduction
Internal audit (IA) functions play a crucial role in assisting organizations to achieve their objectives and safeguard their assets (Alqudah et al. 2019). Additionally, the IA has become a vital management tool for achieving effective control in organizations (Behrend and Eulerich 2019; Endaya and Hanefah 2016). Having an effective IA function is important for organizations; an effective role, as interpreted by The International Professional Practices Framework for Internal Auditing (IPPF), will ultimately make a major contribution to improving the effectiveness of an organization's risk management, internal control, and governance processes (The Institute of Internal Auditors 2017). Internal auditing is defined by The Institute of Internal Auditors (IIA) as "An independent, objective assurance and consulting activity designed to add value and improve an organization's operations. It helps an organization accomplish its objectives by bringing a systematic, disciplined approach to evaluate and improve the effectiveness of risk management, control, and governance processes." (The Institute of Internal Auditors 2017, p. 29) Moreover, an effective IA is important to an organization's audit committee, senior management, and external auditor. The IA provides the audit committee and senior management with an objective assessment of the whole organization's operations, processes, and performance (The Institute of Internal Auditors 2017). Senior management relies on IA to enhance controls, reduce risk, and improve operations, while the audit committee relies on IA to achieve robust internal controls, attain quality financial reporting, and maintain compliance with regulations. On the other hand, the Cambridge Dictionary (Cambridge University 2022) defines the verb "influence" as "to change the way that someone thinks or the way that something develops," whereas the noun "influence" is defined as "the power to change people or things." In this paper, we define "the influential factors" as "those factors that are substantial for the IA function as well as important to IA effectiveness and that may affect it." In addition, this paper argues that the IA function should look to a set of influential factors in order to understand IA effectiveness; without this consideration, its effectiveness may suffer. The proposed conceptual model for IA effectiveness integrates five factors and their dimensions. These factors are IA organizational characteristics, IA relationships, IA processes, IA resources, and IA coordination with other assurance providers.
The structure of this paper is organized as follows. The next section provides background on IA effectiveness. Section three describes the methodology used. Section four systematically reviews the literature on the influential factors. Section five presents the conceptual model based on the discussion of the influential factors. The last section concludes the paper and provides research implications and recommendations for future research.

Internal Audit Effectiveness

Ridley (2008) highlights that IA was built on the three Es of effectiveness, efficiency, and economy, where effectiveness is the most important "E": efficiency and economy are worthless if IA is ineffective (Dittenhofer 2001; Lenz and Hahn 2015). Researchers look at IA effectiveness from different points of view; however, they share the common view that effectiveness is achieved when the defined IA objectives and goals are achieved (Ahmad et al. 2009; Badara and Saidin 2013, 2014; Dittenhofer 2001; Mihret and Yismaw 2007). According to Badara and Saidin (2013), IA effectiveness is the ability to achieve the predefined IA objectives, while Dittenhofer (2001) indicates that these objectives should be stated in clear terms in order to be achieved. On the other hand, although the definition of internal auditing clearly states that IA is designed to add value and improve the organization's operations as well as evaluate and improve the effectiveness of risk management, control, and governance processes, IA's role in organizations is diverse and differs from one organization to another (Rupšys and Boguslauskas 2007). Moreover, the level of effectiveness varies amongst various organization operations (Al-Twaijry et al. 2003). IA is a complicated process; it is part of the organization's internal control system and depends on its effectiveness (Badara and Saidin 2014). This complicated process includes planning audits, conducting audit engagements, confirming audit results, following up on the results to ensure that proper actions are taken, and developing the staff to ensure they have sufficient knowledge and skills to conduct the audit engagements; however, IA effectiveness is not limited to evaluating the above aspects to ensure that IA is able to achieve its objectives (Dittenhofer 2001).
Furthermore, Lenz and Hahn (2015) looked at IA effectiveness through an institutional theory lens and indicated that different macro and micro factors influence IA effectiveness. Macro factors are represented by coercive, normative, and mimetic forces, where coercive forces are explained by compliance with the regulations that affect the role of IA in the organization, normative forces are explained by the degree of conformance with internal auditing standards, and mimetic forces are explained by benchmarking against successful IA functions in other organizations. Conversely, the micro factors are explained by factors related to the organization and factors related to internal resources, processes, and relationships. From another perspective, Azzali and Mazza (2018) examine IA effectiveness through an agency theory lens. They maintain that IA is an agent of the board of directors and the management and that it will be effective when performing its role for their benefit; this view is aligned with the definition of internal auditing, which focuses on helping the organization to achieve its objectives.
On the other hand, some researchers indicate that IA effectiveness rests on many factors: for example, the integration of management support for IA with IA work and internal auditors' competencies (Badara and Saidin 2013); compliance with internal auditing standards (Cohen and Sayag 2010); and the quality of IA procedures (Dittenhofer 2001). Because the IA provides recommendations to improve the organization's operations, some researchers look at the implementation rate of IA recommendations to understand IA effectiveness (Arena and Azzone 2009; Bednarek 2018; Erasmus and Coetzee 2018; Soh and Martinov-Bennie 2011). Moreover, some studies consider the demand and supply view of IA effectiveness, where the demand view is based on the satisfaction of the organization's management and auditee with the IA, while the supply view is based on the auditors' view of IA effectiveness (Alzeban and Gwilliam 2014; Cohen and Sayag 2010; Erasmus and Coetzee 2018; Yee et al. 2008). Alzeban and Gwilliam (2014) view IA effectiveness from the internal auditors' and auditees' standpoints and study IA effectiveness based on the ability of IA to plan; improve the productivity of the organization; evaluate and improve the organization's internal control and risk management; and secure the implementation of IA recommendations. The demand view of IA effectiveness helps to understand how audit work is perceived, whereas the supply view helps to understand the factors that influence effectiveness (Lenz and Hahn 2015).
Moreover, the scope of IA is wide and includes different aspects of the organization, and it carries out a wide range of independent evaluations (Alqudah et al. 2019; Al-Twaijry et al. 2003; Arena and Azzone 2009; Cohen and Sayag 2010; Rupšys and Boguslauskas 2007; Mihret and Yismaw 2007). To achieve effectiveness, the IA must perform a variety of tasks, yet researchers have different viewpoints on what an effective IA should do. For example, an effective IA should assist the organization in achieving its objectives and safeguarding its assets (Alqudah et al. 2019; Azzali and Mazza 2018); evaluate the organization's internal control system and improve its effectiveness (Lenz and Hahn 2015); evaluate the organization's risk management and improve its effectiveness (Chambers and Odar 2015; Cohen and Sayag 2010; Goodwin-Stewart and Kent 2006); evaluate the organization's compliance with laws and regulations; support management in preventing fraud (Alqudah et al. 2019); improve the organization's operations (Ahmad et al. 2009) and its performance (Alzeban 2020; Coetzee and Erasmus 2017); and provide recommendations to improve different aspects of the organization (Alqudah et al. 2019; Al-Twaijry et al. 2003; Arena and Azzone 2009; Cohen and Sayag 2010; Mihret and Yismaw 2007). Moreover, Onay (2021) stated that "IA effectiveness is one of the most prominent issues that internal auditors should consider in order to establish good governance both in terms of their functions and organizations" (p. 1). Therefore, knowing the factors that influence IA effectiveness is important for IA functions and their organizations.
Methodology
Based on the research objective and the related question defined in the introduction, the current paper employs a systematic literature review (SLR). This paper follows the guideline provided by Xiao and Watson (2019), who identified eight steps to conduct an SLR: (1) formulating the research problem; (2) developing and validating the review protocol; (3) searching the literature; (4) screening for inclusion; (5) assessing quality; (6) extracting data; (7) analyzing and synthesizing data; and (8) reporting the findings. The first step is covered in the introduction section of this study, where the research problem is formulated. Following the guideline, the second step is to develop and validate the review protocol, which includes the purpose of the study, the research question, inclusion criteria, search strategies, quality assessment criteria and screening procedures, and strategies for data extraction, synthesis, and reporting. The purpose of this study and its research question, as explained in the introduction section, are focused on understanding and identifying the influential factors of IA effectiveness and conceptualizing them in a model. Searching the literature is the third step; the search was based on reliable online databases to identify the relevant literature, including Scopus, Web of Science, Emerald Insight, Science Direct, SpringerLink, IEEE Xplore, and WorldCat Digital Library. The importance of using a range of online databases is to ensure wide coverage of the available literature and to maximize the coverage of the research (Saunders et al. 2019). A combination of keywords was used during the online search: we combined the phrase "internal audit" with each of the following words: effectiveness, quality, performance, efficiency, add value, factors, relationship, affect, influence, association, case study, empirical, examination, drivers, evaluation, measurement, assessment, and framework.
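As a minimal sketch of the keyword-combination strategy just described (the boolean and quoting syntax of each database differs, so this only enumerates the raw pairings used in the search):

```python
# Enumerate the search-string combinations described in the text.
# Database-specific syntax (Scopus, Web of Science, ...) is not modelled here.
BASE = "internal audit"
KEYWORDS = [
    "effectiveness", "quality", "performance", "efficiency", "add value",
    "factors", "relationship", "affect", "influence", "association",
    "case study", "empirical", "examination", "drivers", "evaluation",
    "measurement", "assessment", "framework",
]

queries = [f'"{BASE}" AND "{kw}"' for kw in KEYWORDS]
for q in queries:
    print(q)
```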
In step four, the literature was selected based on different disciplines aligned with the definition of internal auditing according to the IIA's IPPF; these disciplines included IA effectiveness, IA performance, IA efficiency, IA quality, and IA adding value, where the literature utilized these terms to refer to the extent to which the defined IA objectives are achieved. Moreover, the papers selected are in the English language, published between January 1999 and March 2022, and include theoretical reviews and empirical studies, both qualitative and quantitative. Greater attention was given to literature that addressed the factors that influence IA effectiveness and studied their relationship with IA effectiveness. The gray literature and the papers that do not address the factors that influence IA effectiveness are excluded. At this step, the initial search showed that over 5000 papers were potentially relevant to this study. Scanning of the title, the abstract, and the keywords of the first 150 papers was conducted for each type of search, as the rest of the papers in each search were not directly relevant to this study's topic. This scanning led to identifying 156 papers considered for further review. After the screening for inclusion step, step five is assessing the quality of the selected papers based on a full-text review; in this step, the selected papers are validated against the criteria developed in step four, and a decision is taken on whether to consider them for further analysis in the next step. As a result, 34 papers were selected; these papers directly addressed the influential factors of IA effectiveness (see Table A1 in Appendix A) and are the primary source for understanding the influential factors and building on them. Moreover, the assessment of the full text led to 25 additional papers that were considered as a source to support the analysis and the discussion of the results in the fourth section of this study and were included in the reference list. In step six, the selected papers are classified based on the authors, the years, the research method, and the factors used to study IA effectiveness; the papers are summarized, and each factor influencing IA effectiveness is synthesized. The papers selected for more analysis in step six are analyzed and synthesized in step seven, and the inclusion and exclusion criteria are considered again during this step. In step eight, the influential factors of IA effectiveness are reported in a synthesized way that shows the importance of each factor, as explained in the fourth section of this paper based on the relevant literature, and our model is conceptualized as shown in the fifth section of this paper. Moreover, we took into consideration what is mentioned in the IIA's IPPF and linked it to each factor to support the analysis, the discussion, and the reporting. Figure 1 below summarizes the process of the SLR.
The Influential Factors
Internal auditing is still an emerging profession. To understand this profession, more attention should be given to the factors that make the IA effective (Lenz et al. 2018), and more studies should be conducted on the drivers of IA effectiveness (Erasmus and Coetzee 2018). To provide a broad overview of the factors that influence IA effectiveness, we use institutional theory as an underpinning theory for understanding the influential factors. Institutional theory is a useful platform for understanding the aspects that determine IA effectiveness and the factors that influence it (Lenz and Hahn 2015). Many studies have used institutional theory as a theoretical framework to explain the factors shaped by each institutional force and how institutional theory views IA changes as a response to the three institutional forces: coercive forces, normative forces, and mimetic forces. Coercive forces relate to the influence of compliance with laws and regulations; normative forces relate to the influence of the degree of conformance with internal auditing standards; and mimetic forces relate to the tendency of organizations to model and benchmark themselves on similar types of organizations that are considered successful (Al-Twaijry et al. 2003; Christopher et al. 2009; Lenz and Hahn 2015; Lenz et al. 2018). However, Lenz and Hahn (2015) differentiate between macro factors and micro factors, where the macro factors are based on the institutional forces; they also highlight that there is a limitation to investigating macro factors due to the potential of undervaluing or missing important external drivers such as political, economic, societal, technological, legal, and environmental megatrends. Moreover, they highlight that micro factors are superior to macro factors because the IA function is an internal monitoring mechanism for internal stakeholders; from their perspective, micro factors include organizational characteristics, IA resources, IA processes, and IA relationships.
In addition, Lenz et al. (2014) suggested four key factors as "building blocks" that shape IA effectiveness. These factors are organizational characteristics, IA resources, IA processes, and IA relationships. Furthermore, in a recent qualitative study, Roussy et al. (2020) built models that attempt to understand the relationships between these four "building block" factors, treating any other factors as dimensions of them without clearly identifying those dimensions. However, these models overlooked a new insight into IA related to its role in leading combined assurance in the organization and its coordination with the other assurance providers, as this role is a new phenomenon (Kurnia and Yulian 2018). This paper builds on and extends the models of Lenz et al. (2014) and Roussy et al. (2020), with a narrow focus on the IA influential factors and their dimensions, in order to provide a holistic view of the key factors of IA effectiveness.
In this paper, our model is based on five key factors comprising ten dimensions. These key factors are IA organizational characteristics, IA relationships, IA processes, IA resources, and IA coordination with other assurance providers. The dimensions of these factors are IA size, IA independence, IA relationship with the audit committee, senior management support to IA, adopting risk-based audit, adopting a quality assurance and improvement program, IA competencies, IA outsourcing, leading the implementation of combined assurance, and cooperation with external audit. The following sections highlight what the literature says about these factors; the mapping of factors to dimensions is also restated in the sketch below.
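The factor-dimension structure just listed can be restated as a simple mapping (purely a restatement of the model above, not an extension of it):

```python
# The five factors and their ten dimensions, restated from the text.
IA_EFFECTIVENESS_MODEL = {
    "IA organizational characteristics": [
        "IA independence",
        "IA size",
    ],
    "IA relationships": [
        "IA relationship with the audit committee",
        "Senior management support to IA",
    ],
    "IA processes": [
        "Adopting risk-based audit",
        "Adopting a quality assurance and improvement program",
    ],
    "IA resources": [
        "IA competencies",
        "IA outsourcing",
    ],
    "IA coordination with other assurance providers": [
        "Leading the implementation of combined assurance",
        "Cooperation with external audit",
    ],
}
```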
Internal Audit Organizational Characteristics
The IA function is part of the organization, and the context in which the IA performs its duties and the organizational setting are represented by IA organizational characteristics (Roussy et al. 2020). Researchers argue that IA organizational characteristics include many factors influencing IA effectiveness; for example, Mihret and Yismaw (2007) indicate that the IA organizational setting includes its organizational status, its integrity, and the policies and procedures that enable it to achieve useful audit results. However, Turetken et al. (2019) mention that the IA organizational setting is not only linked to the policies and procedures that direct the audit process but also includes its status in the organization and its organizational profile, whereas Karagiorgos et al. (2011) point out that the organizational setting is represented by the IA position in the organizational structure and its independence, which is important to determine and maintain its segregation of duties. In this paper, we use IA independence and size as dimensions of IA organizational characteristics, since these dimensions subsume the other dimensions, such as policies and procedures and integrity. The IA function cannot maintain its integrity without its independence, and its policies and procedures are also affected by its independence and its size, since the International Standards for the Professional Practice of Internal Auditing (internal auditing standards) (Standard 2040-Policies and Procedures) explain that the form and content of policies and procedures depend upon the size and structure of the IA activity and the complexity of its work.
Internal Audit Independence
Although internal auditors are normally employees of the organizations they audit, professional bodies increasingly emphasize the need for IA independence (Alzeban and Gwilliam 2014). IA independence is "the freedom from conditions that threaten the ability of the IA activity to carry out IA responsibilities in an unbiased manner" (The Institute of Internal Auditors 2017, p. 23). In addition, the internal auditing standards (Standard 1100-Organizational Independence) emphasize the importance of maintaining the organizational status and the independence of the IA function, which can be gained through reporting to a level within the organization that allows the IA to perform its duties without interference in determining the scope of work, performing the audit, and communicating the audit results, as well as through a dual reporting line to the audit committee and the Chief Executive Officer (CEO) of the organization (The Institute of Internal Auditors 2017). Reporting to the audit committee helps make IA effective by preventing the organization's management from interfering in the scope of IA and controlling IA work, while reporting to the CEO helps the IA carry out its responsibilities without obstacles and address difficult issues with other senior leaders (The Institute of Internal Auditors 2019).
Furthermore, prior studies emphasized the importance of IA independence to its effectiveness. Independence enables better communication with senior management by underscoring the IA's independence from the auditee, and it creates an objective atmosphere that helps in communicating the audit results without influence from the auditee (Mihret and Yismaw 2007). In addition, independence creates a supportive environment that helps the IA perform its work without pressure, makes the internal auditors more objective, and signals to the employees in the organization that they can rely on the IA results (Cohen and Sayag 2010). Similarly, D'Onza et al. (2015) emphasized that IA independence is fundamental to ensuring the trustworthiness of IA services. Moreover, a lack of independence affects the IA's ability to provide assurance to the audit committee, which in turn affects the committee's ability to fulfill its corporate governance role effectively (Christopher et al. 2009; D'Onza et al. 2015). However, some threats affect this independence, such as using the IA function as a training ground for future managerial positions within the organization; having the IA budget approved by the CEO or the Chief Finance Officer (CFO), as this is considered a powerful tool for imposing budget constraints that may reduce the scope and affect IA effectiveness; heavy involvement of senior management in developing the IA plan; and situations where IA plays a consulting role and is perceived by senior management as a partner, performing a subservient management role (Christopher et al. 2009).
Internal Audit Size
The size of the IA function plays an essential role in its effectiveness. To properly carry out IA responsibilities, the IA function needs to be adequately resourced (Alzeban and Gwilliam 2014). The internal auditing standard (2030-Resource Management) emphasizes the importance of having sufficient resources to implement the IA plan and deploying resources effectively to optimize the achievement of the plan (The Institute of Internal Auditors 2017). Agency theory justifies a larger IA function: agents have more information than the principals, and this information asymmetry affects the principals' ability to monitor whether or not their interests are being properly served by the agents; a larger IA function can more closely monitor the agents' activities and safeguard the principals' interests (Sarens and Abdolmohammadi 2011). Furthermore, a large IA function allows the rotation of internal auditors, and this rotation leads to increased objectivity (Arena and Azzone 2009; Turetken et al. 2019).
Some studies argue that there is a relationship between IA size and its effectiveness. For example, Alzeban and Gwilliam (2014) find that IA size is positively and significantly correlated with IA effectiveness. They also report that the availability of IA resources affects the percentage of actions taken by auditees on audit results, whereas Al-Twaijry et al. (2003) show that a smaller IA function limits the scope of work and adversely affects the ability to achieve IA objectives and to fulfill duties and responsibilities successfully; they attribute the lack of sufficient resources to an insufficient budget. In another study, Ahmad et al. (2009) reveal that internal auditors rank the lack of staff in IA functions first among the ten main problems faced by IA functions and as a major setback that can restrain IA effectiveness. Therefore, the size of the IA should be considered by organizations' audit committees and senior management when they seek to improve the IA's effectiveness.
Internal Audit Relationships
Relationships are important to the effectiveness of any IA function. The IIA's IPPF emphasizes the importance of dual reporting relationships with the audit committee and senior management. The first reporting line, functional reporting to the audit committee, helps make IA effective by enabling IA access to sensitive matters, ensuring sufficient organizational status for IA, preventing organization management from interfering with or controlling IA work, and ensuring the highest level of governance over IA work (The Institute of Internal Auditors 2019). The second reporting line, administrative reporting to senior management (mainly to the Chief Executive Officer (CEO)), is important to IA effectiveness because it supports the IA function with the appropriate authority and budget and facilitates its work so that it can carry out its responsibilities without obstacles and deal with difficult issues with other senior leaders (The Institute of Internal Auditors 2019). Roussy et al. (2020) indicate that transparent and trustful relationships between the Chief Audit Executive (CAE) and the audit committee and the CEO are important for IA effectiveness. Moreover, they point out that the quality of relationships is linked to the frequency of meetings and the formality of communication, which are important to improving transparency and trust. In this paper, we use the IA's relationship with the audit committee and senior management support as dimensions of IA relationships, since the audit committee and senior management are the main stakeholders of the IA function. The IA function cannot be effective without a positive relationship with the audit committee and appropriate support from senior management (Soh and Martinov-Bennie 2011).
Relationship with Audit Committee
Although the IA function and the audit committee are separate control bodies, they have similar goals in terms of monitoring and evaluating the internal control system of their organization (Arena and Azzone 2009). The audit committee relies on the work performed by the IA function to fulfill its responsibilities (Brender et al. 2015) and looks at IA as a source of information that assists it in fulfilling its duties (D'Onza et al. 2015). Therefore, it is important for the audit committee to make the IA function effective. The IA function is supervised by the audit committee and is represented by the CAE, who reports to the audit committee. The relationship between the IA and the audit committee is critical to IA effectiveness: when the audit committee consists of independent directors with finance and accounting expertise, it plays an active role in overseeing the IA function, with the possibility of more frequent meetings and informal access from the CAE (Lenz and Hahn 2015). The frequency of meetings, the level of formality, and the level of confidence between the CAE and audit committee members drive the relationship and affect IA effectiveness (Roussy et al. 2020; Sarens et al. 2009; Soh and Martinov-Bennie 2011). A positive relationship between the IA function and the audit committee facilitates the role of IA and provides the IA with sufficient support to fulfill its responsibilities.
The audit committee's oversight of the IA function helps identify problems in the IA itself and offers opportunities for improvement (Arena and Azzone 2009). Moreover, audit committees act as a preserver of the organizational independence of the IA and strengthen the IA's ability to overcome undue pressure from senior management (Ahmad et al. 2009; D'Onza et al. 2015; Soh and Martinov-Bennie 2011). An intensive working relationship between the IA and the audit committee is also expected to strengthen the objectivity and independence of the IA (Lenz and Hahn 2015). Furthermore, regular access from the CAE to the audit committee provides an opportunity to address the concerns raised in IA reports and to receive support to address the reported weaknesses (D'Onza et al. 2015). Similarly, frequent interactions between the CAE and the audit committee strengthen the communication process and allow the IA to address its concerns and obtain advice and support to become more effective in improving the organization's risk management, internal control, and governance processes (Abbott et al. 2007; D'Onza et al. 2015; Goodwin-Stewart and Kent 2006). The audit committee empowers the IA to escalate outstanding issues with management and approves the plan and the required resources (Soh and Martinov-Bennie 2011), where a trustful and transparent relationship between the audit committee and the CAE is important to solving any actual concerns raised by the IA (Roussy et al. 2020). Arena and Azzone (2009) conclude that the involvement of the audit committee in IA activities sends a message that the organization is committed to increasing the credibility of the IA and encourages line managers to be more active in implementing IA recommendations.
Senior Management Support
The relationship between the IA function, represented by the CAE, and senior management is crucial for IA effectiveness. Senior management wants the IA function to take more responsibility for enhancing the organization's internal control and risk management, while the IA function expects senior management to support it in fulfilling its responsibilities (Sarens and De Beelde 2006b). The internal auditing standards emphasize the importance of this relationship for supporting the independence of the IA and the objectivity of the internal auditors (Standard 1100-Independence and Objectivity); for supporting the improvement of IA quality by communicating the results of the quality assurance and improvement program to senior management (Standard 1320-Reporting on the Quality Assurance and Improvement Program); and for supporting and facilitating the role of the IA through communicating the IA requirements (Standard 2060-Reporting to Senior Management and the Board). Moreover, the standards emphasize the importance of senior management support through their involvement in developing the IA plan (Standard 2010.A1-Planning) (The Institute of Internal Auditors 2017). Previous studies show that senior management support is important for IA effectiveness. This support enables the IA to maintain its independence (Alzeban and Gwilliam 2014); provides the IA with a sufficient budget and resources to fulfill its responsibilities effectively (Ahmad et al. 2009; Alqudah et al. 2019; Alzeban and Gwilliam 2014; Ta and Doan 2022); provides appropriate tools that assist in completing the audit engagements (Alzeban and Gwilliam 2014; Cohen and Sayag 2010); provides the IA with the right number of staff and attracts skilled and experienced staff (Ahmad et al. 2009; Alzeban and Gwilliam 2014; Cohen and Sayag 2010; Ta and Doan 2022); ensures sufficient and up-to-date training and development programs (Alzeban and Gwilliam 2014; Cohen and Sayag 2010); and provides resources and commitment to implementing IA recommendations, which is considered an indicator of IA effectiveness (Mihret and Yismaw 2007).
On the other hand, a lack of management support adversely affects the auditee's level of cooperation with the IA and creates an unfavorable attitude towards the IA among auditees, who come to perceive the IA as unimportant because it is treated as unimportant by senior management (Mihret and Yismaw 2007). It would be difficult for the IA to have complete access to all activities, records, and assets without full auditee cooperation (Ahmad et al. 2009). Employees act according to what their managers expect of them; accordingly, when employees recognize that top management regards IA as essential, they will appreciate and accept its work as well as cooperate with the IA and support it (Cohen and Sayag 2010; Sarens and De Beelde 2006b).
Internal Audit Processes
IA processes are important to IA effectiveness because it is through these processes that the IA achieves its objectives; IA processes are shaped by adopting a risk-based auditing approach and a quality assurance and improvement program. Researchers have linked IA processes with the adoption of a risk-based auditing approach (Castanheira et al. 2010). A risk-based audit approach affects the priorities for the audit and the areas that will be considered during the audit, as well as the resources needed to do the audit and the audit tools and techniques used to achieve the objectives of the audit engagements (The Institute of Internal Auditors 2019). Conversely, the quality assurance and improvement program is designed to enable the IA to conform with the internal auditing standards and code of ethics, ensure the IA's efficiency and effectiveness, and provide opportunities for improvement (The Institute of Internal Auditors 2019). In addition, a quality assurance and improvement program should be developed in a way that helps the IA add value to the organization and improve its operations (Marais 2004), shaping the processes through responses and feedback from internal auditors and audited entities (The Institute of Internal Auditors 2019). In this paper, adopting a risk-based audit and a quality assurance and improvement program are used as dimensions of IA processes, since these are the most important factors that shape IA processes.
Adopting Risk-Based Audit
Risk-based auditing is seen as a modern approach that assists organizations in recognizing the risks that limit their ability to meet their targets (Arena and Azzone 2009; Lenz and Hahn 2015). The internal auditing standards demand risk-based auditing; Standard 2010-Planning states that "the CAE must establish a risk-based plan to determine the priorities of the IA activity, consistent with the organization's goals" (The Institute of Internal Auditors 2017, p. 10). Typically, planning is seen as a key audit activity, and it includes preparing a strategic plan, an annual plan, and programs for individual audit assignments; proper planning also allows a large number of audits to be carried out in a given timeframe by improving efficiency (Mihret and Yismaw 2007). From the agency theory perspective, Zainal Abidin (2017) highlights that the concept of agency theory evolved from the separation of the ownership (the board) and the agent (management), with a well-designed control and oversight system aiming to maximize the benefit to all parties. From this perspective, the IA's role is to monitor the actions and decisions made to execute the strategies in order to achieve the targets; thus, the IA adopts a risk-based audit approach to ensure that risks associated with strategies are identified and mitigated properly and that management is acting in accordance with the expectations of the owner. The IA also adopts a risk-based audit approach to achieve its effectiveness, since this approach allows effective utilization of resources and a focus on important matters (Azzali and Mazza 2018). The risk-based audit approach goes beyond compliance and allows the IA to provide assurance on the effectiveness of risk management and internal controls (Lois et al. 2021). Additionally, when the IA is linked with risk management, the IA can assist the organization's managers in understanding the weaknesses of internal control and enhance the communication between the internal auditors and auditees (Arena and Azzone 2009; D'Onza et al. 2015). Organizations identify controls to mitigate their risks, which makes risk management essential for IA effectiveness (Turetken et al. 2019). The value provided by the IA increases when it contributes to improving the effectiveness of risk management, and this contribution increases when the IA uses a systematic approach to risk management assessment (D'Onza et al. 2015). An effective IA assesses the risks facing the organization and builds an audit plan to address them; however, the risk assessment process must be dynamic and link changes in the company's risk profile to changes in the audit plan (Feizizadeh 2012). IA involvement in risk management enables it to update the audit plan based on the updated risks; nevertheless, internal auditors are concerned about their abilities to play a key role in risk management (Sarens and De Beelde 2006a).
Adopting a Quality Assurance and Improvement Program
A quality assurance and improvement program plays a key role in IA effectiveness. To demonstrate that the IA is valuable to the organization and has a good reputation within it, the IA must continuously evaluate its performance and improve its service quality (Mihret and Yismaw 2007). The quality of audit work refers to the quality of IA activities (Endaya and Hanefah 2013; Mihret and Yismaw 2007). The quality of IA is determined by the internal capability to provide valuable and useful findings and recommendations; it is an indicator of the level of staff competencies, the scope of work provided, and the extent to which the audit is appropriately planned, executed, and communicated (Mihret and Yismaw 2007). Moreover, the quality of IA refers to a set of IA activities: these activities include planning, supervision, fieldwork, reporting results and recommendations, and following up on action plans for the recommendations (Endaya and Hanefah 2013). Furthermore, the quality of IA refers to adherence to internal auditing standards (Arena and Azzone 2009; Cohen and Sayag 2010; Rupšys and Boguslauskas 2007; Turetken et al. 2019). A higher level of quality of IA work improves IA effectiveness, where the quality of IA is understood in terms of adherence to internal auditing standards and a high level of planning and execution (Cohen and Sayag 2010). Therefore, performing auditing in accordance with internal auditing standards contributes to the IA's effectiveness (Turetken et al. 2019). Conformance with internal auditing standards affects the IA's effectiveness and its ability to add value, since the standards provide a framework for performance and a range of value-added activities (D'Onza et al. 2015). The internal auditing standards emphasize the importance of maintaining a quality assurance and improvement program that covers all aspects of IA activities and continuously monitors their effectiveness (Standard 1300-Quality Assurance and Improvement Program). The quality assurance and improvement program includes an internal assessment and an external assessment. These assessments are designed to enable the evaluation of IA conformance with internal auditing standards and the code of ethics, to evaluate IA efficiency and effectiveness, and to provide opportunities for improvement (The Institute of Internal Auditors 2017; Soh and Martinov-Bennie 2011).
The internal assessment provides an ongoing review of IA performance, and a periodic self-assessment is conducted by someone within the organization with sufficient knowledge of internal auditing standards and practices. The external assessment provides assurance that IA work conforms with the internal auditing standards, and this assessment is conducted by an independent assessor from outside the organization (The Institute of Internal Auditors 2017). Once the quality of IA improves to a degree that meets management's interest, management support and commitment to implementing the IA recommendations follow naturally, since management realizes the value and contribution of IA to the achievement of organizational goals; ultimately, this positively enhances IA effectiveness (Mihret and Yismaw 2007).
Internal Audit Resources
IA resources are an integral part of the IA's success: the CAE and the IA staff matter (Lenz and Hahn 2015). IA work requires experienced professional staff who can undertake a wide range of audits and who have the necessary education, professional qualifications, and proper training (Al-Twaijry et al. 2003; Cohen and Sayag 2010). Some studies have linked IA resources with internal auditors' competencies (Ahmad et al. 2009). Internal auditors' competencies include significant operational experience specific to the organization; IT competencies; and other specific competencies such as judgment, adaptability, listening skills and persuasiveness, and strength of character (Roussy et al. 2020). Conversely, IA outsourcing provides the IA with experienced resources with specialized skills and also fosters the objectivity of the internal auditors and the independence of the IA (Dellai and Omri 2016). In this paper, internal auditors' competencies and IA outsourcing are used as dimensions of IA resources, since these are the most important factors that shape IA resources.
Internal Auditors' Competencies
Internal auditors' competencies are critical for IA effectiveness (Al-Twaijry et al. 2003; Alzeban and Gwilliam 2014; Dellai and Omri 2016; George et al. 2015; Ta and Doan 2022). The competencies of internal auditors can enhance the effectiveness of the IA by improving the perception and recognition of their role within the organization (Arena and Azzone 2009). The internal auditing standards (Standard 1200-Proficiency and Due Professional Care) emphasize that internal auditors must possess the knowledge, skills, and other competencies needed to perform their individual responsibilities (The Institute of Internal Auditors 2017). Internal auditors must have the knowledge, skills, and other competencies necessary to perform their responsibilities with proficiency and due professional care (Endaya and Hanefah 2013). A skilled auditor is more capable of completing audits, providing advice on how to improve the internal control system, identifying appropriate solutions based on past experience, and dealing with conflict and complex situations (Arena and Azzone 2009).
Internal auditors utilize their knowledge to assess the objective and the scope of the audit engagements in order to determine how to complete the engagements effectively (The Institute of Internal Auditors 2019). However, despite the importance of internal auditor competence, most organizations tend to concentrate on establishing IA functions in order to meet regulations without looking at the available resources, training, education, and qualifications of auditors (Elmghaamez and Ntim 2016). Internal auditors perform a wide variety of audit engagements within the organization (Cohen and Sayag 2010). Therefore, it is essential to recruit internal auditors with experience, professional skills, and knowledge of a wide range of operations and systems, and it is important to improve their skills through continuous training and development (Mihret and Yismaw 2007). The development and training of internal auditors are very important to the IA's success (Al-Twaijry et al. 2003).
Internal Audit Outsourcing
IA can be performed by an in-house team from the organization or outsourced to third parties (Dellai and Omri 2016; Turetken et al. 2019). Prior studies argue that IA outsourcing has both advantages and disadvantages for IA effectiveness. Although IA outsourcing enhances the objectivity of the auditor and the independence of the IA function (Dellai and Omri 2016), outsourcing routine IA tasks threatens the independence and the quality of the IA (Abbott et al. 2007; Selim and Yiannakas 2000). Outsourcing the IA allows the formation of a team with specialized skills and decreases the cost of recruiting and training an internal team. However, outsourced internal auditors do not have the full picture of the organization's environment and culture, and they also face some resistance from auditees in providing them with access to the necessary information and in identifying critical issues (Dellai and Omri 2016). Although outsourced IA improves IA effectiveness and has a positive influence on organizational performance by reducing risks and operating costs (Sudsomboon 2011; Prawitt et al. 2012), an in-house team is more likely to detect fraud than an outsourced IA (Abbott et al. 2007; Coram et al. 2008; Selim and Yiannakas 2000). Moreover, considerations such as IA technological know-how, the quality of IA services provided, and communication and coordination issues may play a role in the managerial decision to outsource the IA; however, issues such as the desire to protect firm information and improve organizational performance are more important for the decision to keep the IA in-house than for the decision to outsource it (Sharma and Subramaniam 2005). From the external auditor's perspective, an outsourced IA is more reliable than an in-house IA team, since outsourced internal auditors are seen as more competent and more objective than the in-house team (Ahlawat and Lowe 2004; Davidson et al. 2013); however, this reliance decreases when the outsourced team provides additional services such as tax and consulting services (Desai et al. 2011).
Coordination with Other Assurance Providers
The increase in compliance requirements and business complexity drives organizations to establish many internal assurance providers and to rely on external assurance providers. These assurance providers are charged with measuring and reporting risks, identifying control gaps, tracking remediation, and concluding whether control processes are operating effectively in specific areas, as well as providing assurance on the areas they assess and providing recommendations to strengthen the related controls, which are often in areas within the scope of the IA's work (The Institute of Internal Auditors 2011). While internal assurance providers represent the oversight functions that are part of senior management or report to senior management, external assurance providers perform assurance activities from outside the organization and may report to senior management or to external stakeholders such as the external auditor and statutory auditor (The Institute of Internal Auditors 2011, 2019). Organizations use a variety of internal and external assurance providers to assist the board of directors in carrying out its oversight responsibility and implementing effective governance practices; these assurance providers include the functions responsible for compliance, legal, quality assurance, health and safety, corporate social responsibility, and IA and, outside the organization, external auditors (Decaux and Sarens 2015).
Furthermore, IA coordination with internal and external assurance providers plays an important role in IA effectiveness. The internal auditing standards emphasize the CAE's effective coordination with other assurance providers: sharing information among them and considering reliance on them to ensure proper coverage and minimize duplication of efforts (Standard 2050-Coordination and Reliance), and considering reliance on their results when communicating the IA overall opinion (Standard 2450-Overall Opinions). The internal auditing standards also emphasize the important role of IA in providing recommendations to improve the organization's governance processes for coordinating activities and communicating information among the board, external auditor, IA, other assurance providers, and management (Standard 2110-Governance) (The Institute of Internal Auditors 2017). However, the coordination process varies between organizations: in small organizations informal processes may be found, while coordination in large organizations can be complex and formal (The Institute of Internal Auditors 2019). Coordination with other assurance providers is achieved through combined assurance implementation, where IA plays an important role in leading this implementation (The Institute of Internal Auditors 2019). In addition, the external auditor is typically the main external assurance provider, and most previous studies focus only on it, since an external audit is mandatory for most organizations while other external assurance providers vary and depend on the nature of the organization (Alqudah et al. 2019; Alzeban and Gwilliam 2014; Badara and Saidin 2014). In this paper, the leading role of IA in implementing combined assurance and IA coordination with external auditors are used as the dimensions of coordination with other assurance providers, since these are the most important factors that shape the IA's role in coordinating with other assurance providers.
Leading the Implementation of Combined Assurance
The combined assurance concept was adopted by the IIA as a responsibility of the IA to effectively coordinate with other assurance providers to ensure proper coverage of organization risks (The Institute of Internal Auditors 2019). Combined assurance is a new concept implemented by organizations. It aims to satisfy audit committees by providing them with confidence that the combined efforts of all assurance providers are sufficient to provide assurance that all significant risk areas have been addressed adequately and that controls exist to mitigate these risks (Schreurs and Marais 2015). From the agency theory perspective, IA is seen as an agent of the board of directors and the audit committee (the principals), and an agency problem exists when the principals entrust the agent; therefore, IA acts on their behalf and leads the combined assurance implementation, which aims to provide holistic coverage of the organization's business risks (Rossouw 2015). Combined assurance is a way for IA to coordinate assurance efforts with other assurance providers, and this coordination improves the effectiveness of the IA by reducing the frequency and redundancy of IA work (The Institute of Internal Auditors 2019). Without such coordination, each assurance provider carries out its assurance role in isolation and reports its results separately (Schreurs and Marais 2015); this leads to a lack of consistency and transparency in assurance services and to inefficiencies in risk management (Sarens et al. 2012; Schreurs and Marais 2015), and puts the auditee and management under pressure from assurance fatigue and assurance gaps (Decaux and Sarens 2015).
On the other hand, the isolated work of each assurance provider gives the board and audit committee multiple views; the board is therefore not in a good position to perform its monitoring role, and this negatively affects governance (Sarens et al. 2012; Decaux and Sarens 2015; The Institute of Internal Auditors 2019; Kurnia and Yulian 2018). Consequently, coordination among these assurance providers is important (Decaux and Sarens 2015; Schreurs and Marais 2015). Some studies found that IA is in an ideal position to lead the implementation of combined assurance, as well as to report the combined results of other assurance providers, because IA has a holistic view of the organization's risk and control environment (Decaux and Sarens 2015; Kurnia and Yulian 2018; Schreurs and Marais 2015). Leading the combined assurance implementation improves the stature of the IA in the organization and also helps IA become more effective by addressing areas that have not been covered by other assurance providers, facilitating auditee cooperation with the IA, and assisting in resource planning to facilitate the assurance work (Kurnia and Yulian 2018). However, leading the implementation of combined assurance can be misunderstood as the IA leveraging it rather than collaborating with other assurance providers (Decaux and Sarens 2015; Rossouw 2015), and it may also cause a conflict of interest and affect IA independence (Schreurs and Marais 2015).
Internal Audit Coordination with External Auditor
Although the external auditor's scope of work is considered while implementing combined assurance, this consideration serves only to ensure that there is holistic coverage of the organization's business risks. Even if combined assurance is effectively implemented, IA coordination with the external auditor cannot be ignored if IA effectiveness is to be achieved. In addition, although internal and external audits perform different roles, they complement each other. While the external audit is concerned with inaccuracies and misstatements that affect financial information, IA is more concerned with nonfinancial information related to governance, risk management, and internal controls (Chartered Institute of Internal Auditors 2020). Higher external audit quality drives assurance value, which reduces bias in management reporting and adds credibility to a company's financial statements (Boubaker et al. 2018). Therefore, effective cooperation between internal and external audits will be beneficial for both of them and for the organization they serve (Endaya 2014). However, even where there is a good relationship between IA and external auditors, the levels of mutual reliance between them vary (Soh and Martinov-Bennie 2011). Some studies show that a closer relationship between internal and external audit is positively and significantly correlated with IA effectiveness; IA coordination with the external auditor is essential for IA effectiveness, as maintaining good coordination and cooperation leads to a good relationship, sharing of valuable information and opinions, joint planning and sharing of plans, prevention of unnecessary duplication of work, and the exchange of important materials to facilitate higher quality audits (Alqudah et al. 2019; Alzeban and Gwilliam 2014; Badara and Saidin 2014). Furthermore, cooperation between internal and external audits provides a means for faster fraud detection (Alqudah et al. 2019; Endaya 2014). Moreover, a constructive relationship based on regular communication and sharing of information benefits the organization they serve, and a close and constructive relationship also leads to the efficient use of resources (Chartered Institute of Internal Auditors 2020). In addition, a cooperative relationship creates strong accountability, and IA becomes more effective when the relationship is strong (Alqudah et al. 2019).
The Conceptual Model
This paper attempts to articulate a conceptual model based on the previous literature. The above literature analysis provides the theoretical foundation for the model development. Based on the literature review, this paper discussed the influential factors of IA effectiveness; drawing on this discussion, we propose a conceptual model that explains the relationship between the influential factors and IA effectiveness, as shown in Figure 2. The literature analysis shows that many factors affect IA effectiveness, and the conceptual model clearly shows the dimensions of IA organizational characteristics, IA relationships, IA processes, IA resources, and IA coordination with other assurance providers. The literature analysis revealed that IA organizational characteristics are part of the organization's context, where the IA's status is shaped; independence gained by reporting to an appropriate level within the organization allows IA to perform without obstacles and assists it in communicating IA results objectively, without influence from the auditee and management. Additionally, IA size is part of the IA organizational characteristics: the IA should be adequately resourced to optimize the achievement of the IA plan, to assist it in rotating auditors to increase their objectivity, and to achieve the objectives of the IA successfully.
Furthermore, the literature analysis revealed that IA relationships are represented by dual-reporting relationships: functionally reporting to the audit committee and administratively reporting to a level within the organization that allows IA to fulfill its responsibilities. A positive relationship with the audit committee facilitates the role of IA and supports it in addressing its concerns and becoming more effective in improving the organization. In addition, the relationship with senior management is very important in supporting the IA role. Senior management support is essential for IA effectiveness; it is crucial for maintaining IA independence, facilitating the communication of IA requirements, and providing IA with an appropriate budget and resources, and senior management commitment is important for implementing IA recommendations. Moreover, based on the literature, IA processes are crucial for IA effectiveness; they are shaped by adopting a risk-based audit approach and a quality assurance and improvement program. A risk-based audit affects audit priorities, the areas to be considered during the audit, the resources needed, and the audit tools and techniques used, whereas a quality assurance and improvement program is designed to ensure that IA follows the internal auditing standards, where performing audit work in accordance with those standards contributes to IA effectiveness.
On the other hand, IA resources are also analyzed in the literature: IA competencies are important for performing IA responsibilities effectively, and IA staff should have the required education, skills, and training to work effectively. The other dimension of IA resources is outsourcing, which enhances the independence of the IA function and the objectivity of internal auditors, and provides the IA with experienced resources with specialized skills. Finally, the literature analysis showed that IA should consider coordination with other assurance providers, which can occur through leading the implementation of combined assurance and coordinating with the external auditor. IA effectiveness is affected by combined assurance implementation, through which IA can reduce the frequency and redundancy of IA work. In addition, coordination with the external auditor facilitates sharing valuable information and opinions; joint planning and sharing of plans; preventing unnecessary duplication of work; and exchanging important materials to facilitate higher quality audits. Overall, all of the above factors are influential for IA effectiveness.
Conclusions
This paper summarized the influential factors of IA effectiveness based on an extensive literature review and argued that the body of knowledge needs a model that provides a holistic view and shows the relationship between the influential factors and IA effectiveness. Based on a systematic literature review (SLR) covering the period from January 1999 to March 2022, our research expands the internal auditing body of knowledge by attempting to capture the influential factors of IA effectiveness into one conceptual model. The existing literature on the factors that influence IA effectiveness is mostly focused on identifying the key factors that influence IA effectiveness, without major attention given to which conceptual model is appropriate in order to enrich the internal auditing theory; there is also no consensus among researchers about the optimal model for IA effectiveness.
Additionally, most researchers investigated different factors individually based on the objectives of their research, without justifying why they did not include the other factors in their studies. There was an evident need to develop an IA effectiveness model that integrates all the influential factors and their dimensions. This paper first discussed the concept of IA effectiveness based on the existing literature to reach an understanding of what IA effectiveness means. Then, we discussed the influential factors of IA effectiveness and their dimensions. After that, we proposed a conceptual model based on the literature; the model builds on and extends the models of Lenz et al. (2014) and Roussy et al. (2020). The contribution of this paper lies in the fact that the proposed model offers a holistic view of the influential factors, with their dimensions clearly identified, and that the model explicitly takes into consideration the role of IA in leading the implementation of combined assurance, since this role is a new phenomenon and should be considered a factor of IA effectiveness. The proposed model will drive future studies to test the model and build on it as well. Furthermore, the proposed model needs empirical validation to extract the most important determinants of IA effectiveness and the most significant factors. Moreover, the proposed model provides an opportunity to study the relationships between the influential factors; this could be done through empirical study, or these relationships could be understood through case studies taking into consideration agency theory and institutional theory. In addition, based on the proposed model, comparative studies between IA functions in different industries or countries will provide insights into the key factors associated with IA effectiveness.
Furthermore, IA effectiveness has traditionally been examined by researchers as a unidimensional variable across different contexts (Alqudah et al. 2019; Al-Shbail and Turki 2017; Al-Twaijry et al. 2003; Alzeban and Gwilliam 2014; Alzeban 2010; Badara and Saidin 2014; Bednarek 2018; Cohen and Sayag 2010; Dellai and Omri 2016; Endaya and Hanefah 2016; George et al. 2015; Onay 2021; Salehi 2016; Ta and Doan 2022); therefore, there is an opportunity for qualitative research to look more deeply into IA effectiveness and identify its dimensions, which will not be possible without considering the main objective of the internal auditing profession. Practically, the proposed model provides IA practitioners, audit committees, and senior management with a broad understanding and holistic view of the key factors that should be considered when they want to make their IA functions more effective and boost the role of IA in their organizations. Moreover, this paper provides insights and opportunities for policymakers and regulators to take the key factors of IA effectiveness into consideration while improving their corporate governance legislation. Finally, the current paper is not free from limitations. A potential limitation is that the literature reviewed by this study is limited to academic studies; therefore, future research may consider gray literature to expand this study and provide further insight on the influential factors of IA effectiveness. | 2022-08-24T15:18:04.704Z | 2022-08-19T00:00:00.000 | {
"year": 2022,
"sha1": "ec27a4cf036ebc632021e918d0f0c89f259fcf65",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-7072/10/3/71/pdf?version=1660913348",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "61f92b2672bce5de0b19972010c46ef064590d2c",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"extfieldsofstudy": []
} |
2505895 | pes2o/s2orc | v3-fos-license | CNN Image Retrieval Learns from BoW: Unsupervised Fine-Tuning with Hard Examples
Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes.
Introduction
Image retrieval has received a lot of attention since the advent of invariant local features, such as SIFT [1], and since the seminal work of Sivic and Zisserman [2] based on Bag-of-Words (BoW). Retrieval systems have reached a higher level of maturity by incorporating large visual codebooks [3,4], spatial verification [3,5] and query expansion [6,7,8]. These ingredients constitute the state of the art on particular object retrieval. Another line of research focuses on compact image representations in order to decrease memory requirements and increase the search efficiency. Representative approaches are Fisher vectors [9], VLAD [10] and alternatives [11,12,13]. Recent advances [14,15] show that Convolutional Neural Networks (CNN) offer an attractive alternative for image search representations with small memory footprint.
CNNs attracted a lot of attention after the work of Krizhevsky et al. [16]. Their success is mainly due to the computational power of GPUs and the use of very large annotated datasets [17]. Generation of the latter comes at the expense of costly manual annotation. Using CNN layer activations as off-theshelf image descriptors [18,19] appears very effective and is adopted in many tasks [20,21,22]. In particular for image retrieval, Babenko et al. [14] and Gong et al. [22] concurrently propose the use of Fully Connected (FC) layer activations as descriptors, while convolutional layer activations are later shown to have superior performance [15,23,24,25].
Generalization to other tasks [26] is attained by CNN activations, at least to some extent. However, initialization by a pre-trained network and re-training for another task, a process called fine-tuning, significantly improves the adaptation ability [27,28]. Fine-tuning by training with classes of particular objects, e.g. building classes in the work of Babenko et al. [14], is known to improve retrieval accuracy. However, this formulation is much closer to classification than to the desired properties of instance retrieval. Typical architectures for metric learning, such as siamese [29,30,31] or triplet networks [32,33,34], employ matching and non-matching pairs to perform the training and are better suited to this task. In a recent work, Arandjelovic et al. [35] follow such an architecture in order to perform fine-tuning based on geo-tagged databases. Our work bears resemblance to theirs because, in contrast to prior work, during fine-tuning we directly optimize the similarity measure to be used in the final task. The major difference is that we dispense with the need for annotated data or any assumptions on the training dataset, and that we enforce hard matching and hard non-matching examples through the SfM information.
A number of image clustering methods based on local features have been introduced [36,37,38]. Due to the spatial verification, the clusters discovered by these methods are reliable. In fact, the methods provide not only clusters, but also a matching graph or sub-graph on the cluster images. These graphs are further used as an input to a Structure-from-Motion (SfM) pipeline to build a 3D model [39]. The SfM filters out virtually all mismatched images, and also provides camera positions for all matched images in the cluster. The whole process from unordered collection of images to 3D reconstructions is fully automatic.
In this paper, we address unsupervised fine-tuning of CNNs for image retrieval. We propose to exploit 3D reconstructions to select the training data for the CNN. We show that compared to previous supervised approaches, the variability in the training data from 3D reconstructions delivers superior performance in the image retrieval task. During the training process the CNN is trained to learn what a state-of-the-art retrieval system based on local features and spatial verification would match. Such a system has large memory requirements and high query times, while our goal is to mimic it via a CNN-based representation. We derive a short image representation and achieve similar performance to such state-of-the-art systems.
In particular we make the following contributions. (1) We exploit SfM information and enforce not only hard non-matching (negative) but also hard matching (positive) examples to be learned by the CNN. This is shown to enhance the derived image representation. (2) We show that the whitening traditionally performed on short representations [40] is, in some cases, unstable, and we propose instead to learn the whitening from the same training data. Its effect is complementary to fine-tuning and it further boosts performance. (3) Finally, we set a new state of the art based on compact representations for the Oxford Buildings and Paris datasets by re-training well-known CNNs, such as AlexNet [16] and VGG [41]. Remarkably, we are on par with existing 256D compact representations even when using 32D image vectors.
Related work
A variety of previous methods apply CNN activations to the task of image retrieval [22,15,23,24,25]. The achieved accuracy on retrieval is evidence for the generalization properties of CNNs. The employed networks were trained for image classification using the ImageNet dataset, optimizing classification error. Babenko et al. [14] go one step further and re-train such networks with a dataset that is closer to the target task. They perform training with object classes that correspond to particular landmarks/buildings. Performance is improved on standard retrieval benchmarks. Despite this achievement, the final metric and the utilized layers are still different from the ones actually optimized during learning.
Constructing such training datasets requires manual effort. The same holds for attempts on different tasks [19,25] that perform fine-tuning and achieve performance increases. In a recent work, geo-tagged datasets with timestamps offer the ground for weakly supervised fine-tuning of a triplet network [35]. Two images taken far from each other can be easily considered as non-matching, while matching examples are picked from the most similar nearby images. In the latter case, similarity is defined by the current representation of the CNN. This is the first approach that performs end-to-end fine-tuning for image retrieval, and in particular for the task of geo-localization. The employed training data are now much closer to the final task. We differentiate by discovering matching and non-matching image pairs in an unsupervised way. Moreover, we derive matching examples based on 3D reconstruction, which allows for harder examples compared to the ones that the current network identifies. Even though hard negative mining is a standard process [20,35], this is not the case with hard positive examples. Large intra-class variation in classification tasks requires the positive pairs to be sampled carefully; forcing the model to learn extremely hard positives may result in over-fitting. Another exception is the work of Simo-Serra et al. [42], where they mine hard positive patches for descriptor learning. They are also guided by 3D reconstruction, but only at the patch level.
Although the triplet loss [32,33,34] is one of the recent advances, and Arandjelovic et al. [35] also use it, there are no extensive and direct comparisons to siamese networks and the contrastive loss. One exception is the work of Hoffer and Ailon [34], where the triplet loss is shown to be marginally better only on the MNIST dataset. We instead employ a siamese architecture with the contrastive loss and find it to generalize better and to converge at higher performance than the triplet loss.
Network architecture and image representation
In this section we describe the derived image representation that is based on CNN and we present the network architecture used to perform the end-to-end learning in a siamese fashion. Finally, we describe how, after fine-tuning, we use the same training data to learn projections that appear to be an effective post-processing step.
Image representation
We adopt a compact representation that is derived from activations of convolutional layers and is shown to be effective for particular object retrieval [26,25]. We assume that a network is fully convolutional [43] or that all fully connected layers are discarded. Now, given an input image, the output is a 3D tensor $\mathcal{X}$ of $W \times H \times K$ dimensions, where $K$ is the number of feature maps in the last layer. Let $\mathcal{X}_k$ be the set of all $W \times H$ activations for feature map $k \in \{1, \ldots, K\}$. The network output consists of $K$ such sets of activations. The image representation, called Maximum Activations of Convolutions (MAC) [25], is simply constructed by max-pooling over all dimensions per feature map and is given by

$$\mathbf{f} = [\mathrm{f}_1 \ldots \mathrm{f}_k \ldots \mathrm{f}_K]^\top, \quad \mathrm{f}_k = \max_{x \in \mathcal{X}_k} x \cdot \mathbb{1}(x > 0). \quad (1)$$

The indicator function $\mathbb{1}$ takes care that the feature vector $\mathbf{f}$ is non-negative, as if the last network layer was a Rectified Linear Unit (ReLU). The feature vector finally consists of the maximum activation per feature map and its dimensionality is equal to $K$. For many popular networks this is equal to 256 or 512, which makes it a compact image representation. MAC vectors are subsequently $\ell_2$-normalized and similarity between two images is evaluated with the inner product. The contribution of a feature map to the image similarity is measured by the product of the corresponding MAC vector components. In Figure 1 we show the image patches in correspondence that contribute most to the similarity. Such implicit correspondences are improved after fine-tuning. Moreover, the CNN fires less on ImageNet classes, e.g. cars and bicycles.
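A minimal PyTorch sketch of MAC pooling follows (our illustration, not the authors' code); the (N, K, H, W) tensor layout and the torchvision backbone mentioned in the comments are assumptions of this example.

```python
import torch
import torch.nn.functional as F

def mac_descriptor(activations: torch.Tensor) -> torch.Tensor:
    # activations: (N, K, H, W) output of the last convolutional layer.
    f = activations.amax(dim=(-2, -1))   # max-pool over all W x H positions -> (N, K)
    f = F.relu(f)                        # the indicator term: keep non-negative responses only
    return F.normalize(f, p=2, dim=-1)   # l2-normalize so similarity is the inner product

# Example pairing (an assumption, not specified above): feed the convolutional
# trunk of a torchvision VGG16, e.g. torchvision.models.vgg16().features(images).
```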
Network and siamese learning
The proposed approach is applicable to any CNN that consists of only convolutional layers. In this paper, we focus on re-training (i.e. fine-tuning) state-of-the-art CNNs for classification, in particular AlexNet and VGG. Fully connected layers are discarded and the pre-trained networks constitute the initialization for our convolutional layers. Now, the last convolutional layer is followed by a MAC layer that performs the MAC vector computation (1). The input of a MAC layer is a 3D tensor of activations and the output is a non-negative vector. Then, an $\ell_2$-normalization block takes care that output vectors are normalized. In the rest of the paper, MAC corresponds to the $\ell_2$-normalized vector $\bar{\mathbf{f}}$.
We adopt a siamese architecture and train a two-branch network. Each branch is a clone of the other, meaning that they share the same parameters. Training input consists of image pairs $(i, j)$ and labels $Y(i, j) \in \{0, 1\}$ declaring whether a pair is non-matching (label 0) or matching (label 1). We employ the contrastive loss [44] that acts on the (non-)matching pairs and is defined as

$$\mathcal{L}(i, j) = \frac{1}{2}\Big(Y(i,j)\,\|\bar{\mathbf{f}}(i) - \bar{\mathbf{f}}(j)\|^2 + \big(1 - Y(i,j)\big)\big(\max\{0,\ \tau - \|\bar{\mathbf{f}}(i) - \bar{\mathbf{f}}(j)\|\}\big)^2\Big), \quad (2)$$

where $\bar{\mathbf{f}}(i)$ is the $\ell_2$-normalized MAC vector of image $i$, and $\tau$ is a margin parameter defining when non-matching pairs have large enough distance in order not to be taken into account in the loss. We train the network using Stochastic Gradient Descent (SGD) and a large training set created automatically (see Section 4).
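The loss in Eq. (2) can be rendered compactly; the sketch below is a hedged PyTorch version in which the batch shapes and the mean reduction over the batch are our choices.

```python
import torch

def contrastive_loss(fi: torch.Tensor, fj: torch.Tensor,
                     y: torch.Tensor, tau: float = 0.7) -> torch.Tensor:
    # fi, fj: (B, K) l2-normalized MAC vectors; y: (B,), 1 = matching, 0 = non-matching.
    d = (fi - fj).norm(p=2, dim=1)                  # Euclidean distance per pair
    pos = y * d.pow(2)                              # pulls matching pairs together
    neg = (1 - y) * (tau - d).clamp(min=0).pow(2)   # pushes non-matching pairs beyond margin tau
    return 0.5 * (pos + neg).mean()
```

Since the vectors are $\ell_2$-normalized, the pairwise distance is bounded by 2, so a margin such as the value 0.7 quoted later in the training setup is meaningful.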
Whitening and dimensionality reduction
In this section, the post-processing of fine-tuned MAC vectors is considered. Previous methods [23,25] use PCA of an independent set for whitening and dimensionality reduction, that is, the covariance matrix of all descriptors is analyzed. We propose to take advantage of the labelled data provided by the 3D models and use linear discriminant projections originally proposed by Mikolajczyk and Matas [45]. The projection is decomposed into two parts: whitening and rotation. The whitening part is the inverse of the square root of the intraclass (matching pairs) covariance matrix, $C_S^{-1/2}$, where

$$C_S = \sum_{Y(i,j)=1} \big(\bar{\mathbf{f}}(i) - \bar{\mathbf{f}}(j)\big)\big(\bar{\mathbf{f}}(i) - \bar{\mathbf{f}}(j)\big)^\top.$$

The rotation part is the PCA of the interclass (non-matching pairs) covariance matrix in the whitened space, $\mathrm{eig}(C_S^{-1/2} C_D C_S^{-1/2})$, where

$$C_D = \sum_{Y(i,j)=0} \big(\bar{\mathbf{f}}(i) - \bar{\mathbf{f}}(j)\big)\big(\bar{\mathbf{f}}(i) - \bar{\mathbf{f}}(j)\big)^\top.$$

The projection $P = C_S^{-1/2}\,\mathrm{eig}(C_S^{-1/2} C_D C_S^{-1/2})$ is then applied as $P^\top(\bar{\mathbf{f}}(i) - \mu)$, where $\mu$ is the mean MAC vector used for centering. To reduce the descriptor dimensionality to $D$ dimensions, only the eigenvectors corresponding to the $D$ largest eigenvalues are used. Projected vectors are subsequently $\ell_2$-normalized.
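The projection can be estimated in a few lines; the NumPy sketch below is illustrative, and the eigenvalue floor (1e-12) and variable names are assumptions of this example.

```python
import numpy as np

def learn_whitening(X, pairs, labels, dim=None):
    # X: (n, K) l2-normalized descriptors; pairs: (m, 2) array of index pairs;
    # labels: (m,) with 1 for matching and 0 for non-matching pairs.
    D = X[pairs[:, 0]] - X[pairs[:, 1]]
    cs = D[labels == 1].T @ D[labels == 1]        # intraclass covariance C_S
    cd = D[labels == 0].T @ D[labels == 0]        # interclass covariance C_D
    w, V = np.linalg.eigh(cs)
    cs_isqrt = V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ V.T  # C_S^{-1/2}
    e, U = np.linalg.eigh(cs_isqrt @ cd @ cs_isqrt)                    # PCA in whitened space
    U = U[:, np.argsort(e)[::-1]]                 # eigenvectors, largest eigenvalues first
    if dim is not None:
        U = U[:, :dim]                            # keep only D dimensions
    P = cs_isqrt @ U
    mu = X.mean(axis=0)
    return P, mu                                  # apply as P.T @ (x - mu), then l2-normalize
```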
Training dataset
In this section we briefly summarize the tightly-coupled BoW and SfM reconstruction system [39] that is employed to automatically select our training data. Then, we describe how we exploit the 3D information to select harder matching pairs and hard non-matching pairs with larger variability.
BoW and 3D reconstruction
The retrieval engine used in the work of Schonberger et al. [39] builds upon BoW with fast spatial verification [3]. It uses Hessian affine local features [46], RootSIFT descriptors [47], and a fine vocabulary of 16M visual words [48]. Then, query images are chosen via min-hash and spatial verification, as in [36]. Image retrieval based on BoW is used to collect images of the objects/landmarks. These images serve as the initial matching graph for the succeeding SfM reconstruction, which is performed using state-of-the-art SfM [49,50]. Different mining techniques, e.g. zoom in, zoom out [51], and sideways crawl [39], help to build a larger and more complete model.
In this work, we exploit the outcome of such a system. Given a large unannotated image collection, images are clustered and a 3D model is constructed per cluster. We use the terms 3D model, model and cluster interchangeably. For each image, the estimated camera position is known, as well as the local features registered on the 3D model. We drop redundant (overlapping) 3D models that might have been constructed from different seeds. Models reconstructing the same landmark but from different and disjoint viewpoints are considered as non-overlapping.
Selection of training image pairs
A 3D model is described as a bipartite visibility graph $G = (\mathcal{I} \cup \mathcal{P}, E)$ [52], where images $\mathcal{I}$ and points $\mathcal{P}$ are the vertices of the graph. Edges of this graph are defined by visibility relations between cameras and points, i.e. if a point $p \in \mathcal{P}$ is visible in an image $i \in \mathcal{I}$, then there exists an edge $(i, p) \in E$. The set of points observed by an image $i$ is given by

$$\mathcal{P}(i) = \{p \in \mathcal{P} : (i, p) \in E\}.$$

We create a dataset of tuples $(q, m(q), \mathcal{N}(q))$, where $q$ represents a query image, $m(q)$ is a positive image that matches the query, and $\mathcal{N}(q)$ is a set of negative images that do not match the query. These tuples are used to form training image pairs, where each tuple corresponds to $|\mathcal{N}(q)| + 1$ pairs. For a query image $q$, a pool $\mathcal{M}(q)$ of candidate positive images is constructed based on the camera positions in the cluster of $q$. It consists of the $k$ images with the closest camera centers to the query. Due to the wide range of camera orientations, these do not necessarily depict the same object. We therefore propose three different ways to sample the positive image. The positive examples are fixed during the whole training process for all three strategies.
Positive images: MAC distance. The image that has the lowest MAC distance to the query is chosen as positive, formally

$$m_1(q) = \operatorname*{argmin}_{i \in \mathcal{M}(q)} \|\bar{\mathbf{f}}(i) - \bar{\mathbf{f}}(q)\|.$$

This strategy is similar to the one followed by Arandjelovic et al. [35]. They adopt this choice since only GPS coordinates are available and not camera orientations. The downside of this approach is that the chosen matching examples already have low distance, thus not forcing the network to learn much from the positive samples.
Positive images: maximum inliers. In this approach, the 3D information is exploited to choose the positive image, independently of the CNN descriptor. In particular, the image that has the highest number of co-observed 3D points with the query is chosen. That is,

$$m_2(q) = \operatorname*{argmax}_{i \in \mathcal{M}(q)} |\mathcal{P}(i) \cap \mathcal{P}(q)|.$$

This measure corresponds to the number of spatially verified features between two images, a measure commonly used for ranking in BoW-based retrieval. As this choice is independent of the CNN representation, it delivers more challenging positive examples.

Positive images: relaxed inliers. Even though both previous methods choose positive images depicting the same object as the query, the variance of viewpoints is limited. Instead of using a pool of images with similar camera positions, the positive example is selected at random from a set of images that co-observe enough points with the query but do not exhibit too extreme a scale change. The positive example in this case is

$$m_3(q) = \operatorname{rnd}\left\{ i \in \mathcal{M}(q) : \frac{|\mathcal{P}(i) \cap \mathcal{P}(q)|}{|\mathcal{P}(q)|} \geq t_i,\ \mathrm{scale}(i, q) \leq t_s \right\},$$

where $\mathrm{scale}(i, q)$ is the scale change between the two images. This method results in selecting harder matching examples that are still guaranteed to depict the same object. Method $m_3$ chooses a different image than $m_1$ on 86.5% of the queries. In Figure 2 we present examples of query images and the corresponding positives selected with the three different methods. The relaxed method increases the variability of viewpoints.
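To make the three strategies concrete, here is an illustrative Python sketch; `desc`, `pts`, and `scale` are assumed data structures of this example, not interfaces defined in the text.

```python
import numpy as np

def pick_positive(q, pool, desc, pts, scale, method="m3",
                  t_i=0.2, t_s=1.5, rng=None):
    # pool: M(q), the k images with closest cameras; desc[i]: l2-normalized MAC
    # vector; pts[i]: set of 3D point ids observed by image i; scale(i, q):
    # scale change between images i and q.
    if method == "m1":   # lowest MAC distance to the query
        return min(pool, key=lambda i: np.linalg.norm(desc[i] - desc[q]))
    if method == "m2":   # maximum number of co-observed 3D points
        return max(pool, key=lambda i: len(pts[i] & pts[q]))
    # m3: random choice among images with enough co-observed points
    # and a moderate scale change
    ok = [i for i in pool
          if len(pts[i] & pts[q]) / len(pts[q]) >= t_i and scale(i, q) <= t_s]
    rng = rng or np.random.default_rng()
    return int(rng.choice(ok)) if ok else None
```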
Negative images.
Negative examples are selected from clusters different from the cluster of the query image, as the clusters are non-overlapping. Following a well-known procedure, we choose hard negatives [42,20], that is, non-matching images with the most similar descriptor. Two different strategies are proposed. In the first, $N_1(q)$, the $k$ nearest neighbors from all non-matching images are selected. In the other, $N_2(q)$, the same criterion is used, but at most one image per cluster is allowed. While $N_1(q)$ often leads to multiple, and very similar, instances of the same object, $N_2(q)$ provides higher variability of the negative examples.
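A sketch of the mining step follows, under the assumption that descriptors and cluster ids are available as flat arrays; the `one_per_cluster` flag switches between the two strategies.

```python
import numpy as np

def hard_negatives(q, desc, cluster, n=5, one_per_cluster=True):
    # Rank all images from clusters other than the query's by descriptor
    # similarity (inner product of l2-normalized vectors) and keep the top n.
    cand = [i for i in range(len(desc)) if cluster[i] != cluster[q]]
    cand.sort(key=lambda i: float(desc[i] @ desc[q]), reverse=True)
    picked, used = [], set()
    for i in cand:
        if one_per_cluster and cluster[i] in used:   # N2: at most one per cluster
            continue
        picked.append(i)
        used.add(cluster[i])
        if len(picked) == n:
            break
    return picked                                    # one_per_cluster=False gives N1
```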
Experiments
In this section we discuss implementation details of our training, evaluate different components of our method, and compare to the state of the art.
Training setup and implementation details
Our training samples are derived from the dataset used in the work of Schonberger et al. [39], which consists of 7.4 million images downloaded from Flickr using keywords of popular landmarks, cities and countries across the world. The clustering procedure [36] gives 19,546 images to serve as query seeds. The extensive retrieval-SfM reconstruction [39] of the whole dataset results in 1,474 reconstructed 3D models. Removing overlapping models leaves us with 713 3D models containing 163,671 unique images from the initial dataset. The initial dataset intentionally contained all images of the Oxford5k and Paris6k datasets. In this way, we are able to exclude 98 clusters that contain any image (or their near duplicates) from these test datasets.
The largest model has 11,042 images, while the smallest has 25. We randomly select 551 models (133,659 images) for training and 162 (30,012 images) for validation. The number of training queries per cluster is 10% of the cluster size for clusters of 300 or fewer images, or 30 images for larger clusters. A total of 5,974 images is selected for training queries, and 1,691 for validation queries.
Each training and validation tuple contains 1 query, 1 positive and 5 negative images. The pool of candidate positives consists of $k = 100$ images with the closest camera centers to the query. In particular, for method $m_3$, the inliers overlap threshold is $t_i = 0.2$, and the scale change threshold is $t_s = 1.5$. Hard negatives are re-mined 3 times per epoch, i.e. roughly every 2,000 training queries. Given the chosen queries and the chosen positives, we further add 20 images per cluster to serve as candidate negatives during re-mining. This constitutes a training set of 22,156 images and corresponds to the case where all 3D models are included for training.
To perform the fine-tuning as described in Section 3, we initialize with the convolutional layers of AlexNet [16] or VGG [41]. We use a learning rate equal to 0.001, which is divided by 5 every 10 epochs, momentum 0.9, weight decay 0.0005, parameter $\tau$ for the contrastive loss 0.7, and a batch size of 5 training tuples. All training images are resized to a maximum size of 362 × 362 pixels, while keeping the original aspect ratio. Training is done for at most 30 epochs and the best network is selected based on performance, measured via mean Average Precision (mAP), on the validation tuples.
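Putting the quoted hyper-parameters together, a schematic training loop could look as follows; `embed_net` and `tuples` are placeholders, the loop structure is an assumption of this sketch, and it reuses the `contrastive_loss` function sketched earlier.

```python
import torch

def train(embed_net, tuples, epochs=30, tau=0.7):
    # embed_net: maps an image batch to l2-normalized MAC vectors (see the MAC
    # sketch above); tuples: iterable of (query, positive, [negatives]) batches.
    opt = torch.optim.SGD(embed_net.parameters(), lr=1e-3,
                          momentum=0.9, weight_decay=5e-4)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.2)  # lr / 5 every 10 epochs
    for _ in range(epochs):
        for q, pos, negs in tuples:          # 1 query, 1 positive, 5 negatives
            fq = embed_net(q)
            loss = contrastive_loss(fq, embed_net(pos), torch.ones(len(fq)), tau)
            for n in negs:
                loss = loss + contrastive_loss(fq, embed_net(n), torch.zeros(len(fq)), tau)
            opt.zero_grad()
            loss.backward()
            opt.step()
        sched.step()
```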
Test datasets and evaluation protocol
We evaluate our approach on the Oxford buildings [3], Paris [53] and Holidays [54] datasets. The first two are closer to our training data, while the last differs by containing similar scenes and not only man-made objects or buildings. These are also combined with 100k distractors from Oxford100k to allow for evaluation at larger scale. The performance is measured via mAP. We follow the standard evaluation protocol for Oxford and Paris and crop the query images with the provided bounding box. The cropped image is fed as input to the CNN. However, to deliver a direct comparison with other methods, we also evaluate queries generated by keeping all activations that fall into this bounding box [23,35] when the full query image is used as input to the network. We refer to the cropped-image approach as Crop_I and to the cropped-activation approach [23,35] as Crop_X. The maximum size of the images fed into the CNN is limited to 1024 × 1024 pixels.
In our experiments, no vector post-processing is applied unless otherwise stated.
Results on image retrieval
Learning. We evaluate the off-the-shelf CNN and our fine-tuned ones after different numbers of training epochs. Our different methods for positive and negative selection are evaluated independently in order to decompose the benefit of each ingredient. Finally, we also perform a comparison with the triplet loss [35], trained on exactly the same training data as used for our architecture with the contrastive loss. Results are presented in Figure 4. The results show that positive examples with larger viewpoint variability, and negative examples with higher content variability, both yield a consistent increase in performance. The triplet loss appears to be inferior in our context; we observe oscillation of the error on the validation set from early epochs, which implies over-fitting. In the rest of the paper, we adopt the $m_3$, $N_2$ approach.

Dataset variability. We perform fine-tuning by using a subset of the available 3D models. Results are presented in Figure 5 with 10, 100, and 551 (all available) clusters, while keeping the amount of training data, i.e. training queries, fixed. In the case of 10 and 100 models we use the largest ones, i.e. the ones with the highest number of images. It is better to train with all 3D models due to the higher variability in the training set. Remarkably, a significant increase in performance is achieved even with 10 or 100 models. However, the network is able to over-fit in the case of few clusters. All models are utilized in all other experiments.
Learned projections. The PCA-whitening [40] (PCAw) is shown to be essential in some cases of CNN-based descriptors [14,23,25]. On the other hand, it is shown that on some of the datasets, the performance after PCAw substantially drops compared with the raw descriptors (max pooling on Oxford5k [23]). We perform a comparison of this traditional way of whitening and our learned whitening (Lw), described in Section 3.3. Table 1 shows results without post-processing and with the two different methods of whitening. Our experiments confirm that PCAw often reduces the performance. In contrast, the proposed Lw achieves the best performance in most cases and is never the worst performing method. Compared to the no post-processing baseline, Lw reduces the performance twice for AlexNet, but the drop is negligible compared to the drop observed for PCAw. For VGG, the proposed Lw always outperforms the no post-processing baseline.
Table 1. Performance comparison of CNN vector post-processing: no post-processing, PCA-whitening [40] (PCAw) and our learned whitening (Lw). No dimensionality reduction is performed. Fine-tuned AlexNet produces a 256D vector and fine-tuned VGG a 512D vector. The proposed method consistently performs either the best (18 out of 24 cases) or on par with the best method. On the contrary, PCAw [40] often hurts the performance significantly.

Our unsupervised learning directly optimizes MAC when extracted from full images; however, we further apply the fine-tuned networks to construct the R-MAC representation [25]. It consists of extracting MAC from multiple sub-windows and then aggregating them. Directly optimizing R-MAC during learning is possible and could offer extra improvements, but this is left for future work. Despite the fact that R-MAC offers improvements due to the regional representation, in our experiments it is not always better than MAC, since the latter is optimized during the end-to-end learning. We apply PCAw on R-MAC as in [25], that is, we whiten each region vector first and then aggregate. Performance is significantly higher in this way. In the case of our Lw, we directly whiten the final vector after aggregation, which is also faster to compute.
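For reference, a simplified R-MAC construction is sketched below; the exact region grid of [25] depends on the image aspect ratio, so the uniform grid here is an assumption of this sketch, and per-region whitening is omitted.

```python
import torch
import torch.nn.functional as F

def rmac_descriptor(x: torch.Tensor, levels: int = 3) -> torch.Tensor:
    # x: (K, H, W) activations of the last conv layer for one image.
    # Square regions on a regular grid at several scales are max-pooled,
    # l2-normalized, summed, and l2-normalized again.
    K, H, W = x.shape
    vecs = []
    for l in range(1, levels + 1):
        side = max(1, int(2 * min(H, W) / (l + 1)))       # region side at scale l
        ys = [int(v) for v in torch.linspace(0, H - side, l)]
        xs = [int(v) for v in torch.linspace(0, W - side, l)]
        for i in ys:
            for j in xs:
                r = x[:, i:i + side, j:j + side].amax(dim=(-2, -1))
                vecs.append(F.normalize(r.clamp(min=0), p=2, dim=0))
    return F.normalize(torch.stack(vecs).sum(dim=0), p=2, dim=0)
```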
Dimensionality reduction. We compare dimensionality reduction performed with PCAw [40] and with our Lw. The performance for varying descriptor dimensionality is plotted in Figure 6. The plots suggest that Lw works better in higher dimensionalities, while PCAw works slightly better for the lower ones.
Remarkably, MAC reduced down to 16D outperforms the state-of-the-art BoW-based 128D compact codes [11] on Oxford105k (41.4 vs 45.5). Further results on very short codes can be found in Table 2.
Over-fitting and generalization. In all experiments, all clusters containing any image (not only query landmarks) from the Oxford5k or Paris6k datasets are removed. To evaluate whether the network tends to over-fit to the training data or to generalize, we repeat the training, this time using all 3D reconstructions, including those of Oxford and Paris landmarks. The same number of training queries is used for a fair comparison. We observe a negligible difference in the performance of the network on the Oxford and Paris evaluations, i.e. the difference in mAP was on average +0.3 over all testing datasets. We conclude that the network generalizes well and is relatively insensitive to over-fitting.

Comparison with the state of the art. We extensively compare our results with the state-of-the-art performance on compact image representations and extremely short codes. The results for MAC and R-MAC with the fine-tuned networks are summarized together with previously published results in Table 2. The proposed methods outperform the state of the art on the Paris and Oxford datasets, with and without distractors, for all 16D, 32D, 128D, 256D, and 512D descriptors. On the Holidays dataset, Neural codes [14] win the extreme-short-code category, while off-the-shelf NetVLAD performs best on 256D and higher. No results better than ours were reported for 128D on Holidays.
We additionally combine MAC and R-MAC with a recent localization method for re-ranking [25] to further boost the performance. Our scores compete with state-of-the-art systems based on local features and query expansion, which have much higher memory needs and larger query times.
Conclusions
We addressed the fine-tuning of CNN for image retrieval. The training data are selected by an automated 3D reconstruction system applied on a large unordered photo collection. The proposed method does not require any manual annotation and yet outperforms the state of the art on a number of standard benchmarks for a wide range (16 to 512) of descriptor dimensionality. The achieved results reach the level of the best systems based on local features with spatial matching and query expansion, while being faster and requiring less memory.

Table 2. Performance comparison with the state of the art on compact representations, extreme short codes, and re-ranking/query-expansion methods. Results reported with the use of AlexNet or VGG are marked by (A) or (V), respectively. Use of a fine-tuned network is marked by (f); otherwise the off-the-shelf network is implied. D: dimensionality. Our methods are always accompanied by Lw. | 2016-04-08T19:04:35.000Z | 2016-04-08T00:00:00.000 | {
"year": 2016,
"sha1": "067944fa7e4f5ca08036fa1b98046ce41c133009",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1604.02426",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "fd296542e24eae16582a8a128aa8230180a74efe",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
208461727 | pes2o/s2orc | v3-fos-license | Errors Associated with the Rights of Medication Administration at Hospital Settings
Aims & Objectives: To investigate the associations between nurse-related medication errors by examining: (a) the Rights of medication administration and hospital units, (b) the Rights of medication administration and drug classes, and (c) interactions between hospital units and drug classes regarding the Rights of medication administration. Background: Medication errors are associated with the five Rights of medication administration (right patient, right drug, right dose, right time, and right route), hospital units, and drug classes; however, these factors are not often examined simultaneously. Design & Methods: 1,273 medication error incident files from the risk management departments of five acute-care community hospitals in the southwestern United States were analyzed. Descriptive statistics and the Chi-square test were used in data analysis. Results: Giving medications at the wrong time was the most frequent cause of error in medical surgical units (54.1%) and intensive care units (51.7%). Errors related to cardiovascular drugs were commonly due to wrong dosages (40.2%) and wrong time of administration (40.2%). In addition, errors related to wrong dosages of antimicrobials were strongly associated with errors in intermediate care units (46.4%) and medical surgical units (52.1%), while wrong dosages of cardiovascular drugs were highly correlated with errors in intensive care units (49.0%) and intermediate care units (50.0%). Conclusion: Hospital units and drug classes were found to interact in their associations with the Rights of medication administration, especially for wrong-time and wrong-dose errors. Relevance to clinical practice: To reduce medication errors and improve patient safety, continuing education for nurses regarding basic pharmacology, factors contributing to medication errors such as drug classes and hospital units, and the prevention of medication errors should be a priority. Medication administration policies and guidelines should be continuously updated and enforced.
Introduction
Medication Administration Errors (MAEs) are among the most common medical errors in health care settings. They affect patient safety, mortality rates, length of hospital stays, and related costs [1][2][3]. Serious, preventable medication errors occur in 3.8 million inpatient admissions and 3.3 million outpatient visits every year in the United States, costing approximately $20.6 billion annually [4]. The Institute of Medicine estimated that approximately 7,000 deaths per year in the US were attributed to preventable medication errors [5].
Background
A medication error is defined as "any preventable event that may cause or lead to inappropriate medication use or patient harm while the medication is in the control of the health care professional, patient, or consumer. Such events may be related to prescribing, ordering, dispensing, administering, or monitoring the drug" [6]. Nurses are the largest group of health professionals and have the last opportunity to prevent medication errors [7]. It is reported that approximately 64.6% of nurses have committed MAEs. However, 39.9% of medication errors by nurses are not reported, indicating the incidence of nurse-related medication errors is higher than reported [8].
In hospitals, there are five stages of the medication process: ordering/prescribing, transcribing and verifying, dispensing and delivering, administering, and monitoring and reporting [5]. Nurses are involved in all stages of the medication process except ordering/prescribing. Our study concentrated on the "administering" stage and examined nurse-related medication errors [9]. During the administering stage, specifically, Five Rights of Medication Administration (Five Rights) are emphasized in several nursing guidelines to prevent and reduce nurse-related MAEs [10]. They are right patient, right drug, right dose, right time, and right route. Errors associated with these Five Rights may be grouped differently, from Five Rights to 15 Rights; however, based on the data collected, we added documentation errors and wrong technique, along with the Five Rights, to create a total of Seven Rights in our study [11].
Studies show that nurse-related medication errors often take place during the medication administration process, such as administering at the wrong time, giving the wrong dose, or omitting the medication [12][13][14]. These medication errors usually occur in hospital units and are often associated with drug classes. Medical surgical units, Intensive Care Units (ICU), Intermediate Care Units (IMCU), and Emergency Departments (ED) are among the most error-prone hospital units [15][16][17]. Further, drug classes related to MAEs vary among hospital units, although cardiovascular drugs, antibiotics, electrolytes, analgesics, and anti-diabetics are the most common drugs associated with MAEs [15,18,19].
To find a successful intervention to decrease MAEs, potential causes of MAEs need to be identified. According to Reason's conceptual model, the causes of MAEs in hospitals are a combination of active failures (e.g. errors caused by individuals via unsafe acts, slips and lapses, knowledge- and rule-based mistakes, or violations) and latent conditions (e.g. amount of workload and skill mix, general work environment, communication, and organizational decisions) [20]. According to the Swiss Cheese Model of system accidents, "the system produces failure when a hole in each slice momentarily aligns and permits trajectory of accident opportunity" [20]. The causes of MAEs in hospitals can thus be explained by the alignment of a series of barriers: the latent conditions of error-prone hospital units and drug classes, and the active failures of the Seven Rights. Many studies have identified causes of errors in hospital settings; however, most of them focus solely on a single factor, such as the Rights of Medication Administration (Rights), hospital unit, or drug class [15][16][17][18][19].
Nevertheless, medication errors may well be related to multiple factors. For instance, a specific Right may be associated with a certain drug class or a hospital unit. MAEs in more error-prone hospital units may be associated with a particular drug class that is related to certain Rights. From this standpoint, much of the research on MAEs has important limitations. Moreover, most existing studies have been based on subjective, self-reported surveys with potential inherent reporting bias [21][22]. Others are based on retrospective chart reviews that may affect the validity and reliability of the data [14]. Direct observation is often used in studies and may introduce the Hawthorne effect, whereby nurses being observed may act differently in the presence of an observer [10,12,23]. Studies based on objective incident reports often have very small sample sizes that seriously limit the generalizability of their findings [10,14].
This study, based on medication error incident reports, examined associations between: (a) the Rights and hospital units, (b) the Rights and drug classes, and (c) potential interactions between hospital units and drug class on the Rights.
Data
Data were extracted from a prior study described elsewhere [24]. In that study, the case group was the medication error group, which included medication incidents and the associated nurses. The control group included nurses who did not have medication errors. Medication error incident information was obtained from medication error files kept at the risk management departments of five short-term acute community hospitals in the Southwestern region of the United States [24]. Our study focused only on the medication error case group, which included information on 1,276 medication error incidents. After three records with missing values were excluded, the final number of observations was 1,273. The study was approved by a university's institutional review board and the Western Institutional Review Board.
Measures
This study examined MAEs during the administration stage, which predominantly falls into the nursing scope of practice [7]. We categorized the Seven Rights according to the administration phases. They were reflected by wrong dose (e.g. wrong infusion), wrong time (e.g. wrong frequency, wrong duration of medication administration, omission of dose), documentation error or wrong documentation (e.g. medication given without order), wrong route (e.g. administration of dose in a different form or doses given in the wrong site), and wrong technique (e.g. exclusion or incorrect performance of a procedure ordered by the prescriber immediately before administration of each dose of medication) [22].
The hospital units where MAEs occurred most often were cardiology, the Emergency Department (ED), Intensive Care Units (ICU), Intermediate Intensive Care Units (IMCU), medical-surgical, and maternity. We followed the Anatomical Therapeutic Chemical (ATC) system to classify drugs as cardiovascular, antimicrobial, endocrine, and neurologic [25]. In addition, consequences of MAEs were grouped into four categories, from the least to the most severe: 1) potential to cause harm, 2) error affected the patient but did not cause harm, 3) need to increase patient monitoring but no harm, and 4) temporary harm but treatment needed.
Results of our classification of the data were summarized using descriptive statistics. Associations of the Seven Rights with hospital units and drug classes were examined by bivariate analysis using the Chi-square test. Due to small numbers in some hospital units and drug classes (n < 30), we applied the Chi-square analyses only to the more frequent hospital units, drug classes, and Seven Rights. These included MAEs related to wrong time/dose, documentation errors, the medical-surgical unit, IMCU, and ICU, as well as the cardiovascular, antimicrobial, and endocrine drug classes and electrolytes. Further, we analyzed medication errors with regard to the associations of the Seven Rights with hospital units and with drug classes, respectively.
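To make the analytic step concrete, here is a minimal sketch of a Chi-square test of independence of the kind described above. All counts and labels are invented placeholders for illustration; the real contingency tables would be tabulated from the 1,273 coded incident reports.

```python
# Hypothetical contingency analysis: hospital unit vs. error type.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: hospital units; columns: a subset of the Seven Rights.
# These counts are invented for demonstration only.
units = ["medical-surgical", "IMCU", "ICU"]
errors = ["wrong time", "wrong dose", "documentation error"]
table = np.array([
    [120, 80, 60],   # medical-surgical
    [45,  50, 30],   # IMCU
    [70,  65, 55],   # ICU
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A small p-value would suggest an association between hospital unit
# and error type; `expected` holds the counts under independence.
```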
Our study shows that wrong time, wrong dose, and documentation errors are the most frequent inaccuracies noted during medication administration. Furthermore, associations between the Seven Rights and hospital units exist. Errors from administering a medication at the wrong time tend to happen in ICUs and medical-surgical units. ICUs contain potential risk factors for MAEs: multiple high-alert medications are typically given to patients in ICUs in addition to complicated clinical procedures [26][27], and frequent changes in substances and doses are required in fast-paced ICU care [28]. There are also potential organizational risk factors, including patient-to-nurse ratios [29]. Medical-surgical units have similar potential risk factors in terms of heavy patient loads [18]. Many distractions and time constraints are further risk factors for MAEs in medical-surgical units [18]. Therefore, MAEs caused by wrong time, such as omission of a dose, tend to happen more often in hospital units that demand higher levels of attention.
Our findings also show associations between the Seven Rights and specific drug classes, namely cardiovascular drugs, antimicrobials, endocrine drugs, and electrolytes. Errors with antimicrobials are mainly due to wrong time. During a patient's hospital stay, the dosage of antimicrobials must be adjusted repeatedly by nurses, doctors, or pharmacists based on laboratory test results. Therefore, clear communication among healthcare providers and precise dosage calculations based on updated orders, in compliance with protocols, are critical to avoid antimicrobial-related MAEs [30]. MAEs relating to electrolytes and endocrine drugs are also highly associated with wrong time. Concentrated electrolytes are the most commonly cited source of electrolyte-related MAEs; however, our study showed that omission of doses or medication given at the wrong time is more strongly associated with errors in electrolyte administration. Endocrine drugs such as insulin are widely used in critical and intensive care patients in order to reduce morbidity and mortality [31].
Maintenance of an optimal insulin concentration is a critical task. These findings are further expanded by examining potential interactions with hospital units. With regard to hospital units, electrolyte-related MAEs due to wrong time tend to happen in medical-surgical units. It has been reported that the frequent need for rapid life-saving responses may contribute to wrong-time and documentation errors in medical-surgical units [26]. Our study also showed that MAEs with endocrine drugs due to wrong time tend to happen in ICUs. MAEs involving electrolytes and endocrine drugs may result from organizational and environmental factors rather than individual skills or knowledge [13,20,30,32].
Administration of the wrong dose is prevalent among MAEs. Our study showed that wrong-dose MAEs in ICUs and IMCUs tend to involve cardiovascular drugs. Insufficient knowledge of correct dosage has been identified as a main cause of MAEs with "high alert" medications such as heparin [13,30,32].
Other research also indicates that more than half of nurses did not read syringe markings at eye level when mixing medications [10]. Documentation error is prominent with antimicrobials in ICUs and with cardiovascular drugs in IMCUs and medical-surgical units. Again, frequent changes in the dose and substance of antimicrobials and cardiovascular drugs may affect verbal and written prompts, especially in fast-paced hospital units such as ICUs and medical-surgical units [29].
Our findings indicate that error-prone drugs such as cardiovascular drugs, antimicrobials, electrolytes, and endocrine drugs all need frequent dose adjustment and maintenance of optimal concentration. Therefore, it is critical that these medications be administered at the right time with the right dose in order to reduce MAEs. Our study also showed that hospital units with complex medication administration and complicated clinical procedures, such as ICUs and medical-surgical units, are more prone to MAEs. Therefore, the associations between hospital units and drug classes with regard to the Seven Rights revealed by our study suggest that the amount of workload may be one of the aligned holes that allow MAEs, as stated by the Swiss Cheese Model of system accidents. Without adequate health professionals (e.g., with a high bed-to-nurse ratio), fast-paced hospital units with complex interventions and increased patient admissions may leave nurses unable to administer medication at the right time and may provoke documentation errors [29]. In fact, many studies have identified heavy workload as an important contributor to MAEs [33,34]. Even though our study did not measure the effect of workload on MAEs specifically, our findings may indicate that a high bed-to-nurse ratio is a possible underlying contributing factor to MAEs, which merits further research [9]. For example, since antimicrobials are listed as time-critical medications that should be administered at strict time intervals with frequent dose adjustments, we observed that giving antimicrobials at the wrong time is associated with IMCUs and medical-surgical units [35].
Existing research also reports that low nurse-to-patient ratios in medical-surgical units make the administration of medications on a strict timeline difficult [10]. Therefore, a heavy workload may indeed be one of the underlying causes of Seven Rights-related MAEs that are also associated with drug class and hospital unit.
The study had limitations. First, data were collected without a predefined medication error tracking form; therefore, variations in categorizing medication errors across different hospital systems existed, which might, to a certain extent, affect the accuracy of assigning root causes of medication errors. In addition, the data were obtained from five hospitals in the Southwestern United States, which limits the generalizability of the results. Future research treating the time at which a medication error occurs as a contributing factor to nurse-related MAEs is needed to examine the associations between nurse shifts and MAE incidents. Future research should also investigate relationships between the bed-to-nurse ratio/nurse workload and medication errors.
Implications for Nursing Practice and Policy
Medication administration errors due to wrong timing are sometimes inevitable considering the synchronized drug administration timeline, frequent dose changes with antimicrobials, and certain features of hospital units, such as inadequate staffing [10]. Use of the Electronic Health Record (EHR), which can alert nurses during the medication administration phase to required dosage changes and to drugs outside the acceptable administration window, is one approach to avoid wrong-time errors [36]. Although many hospitals are using EHRs, some may not have the means or capability to alert nurses about drugs outside the acceptable administration window, high-risk drug alerts, or drugs that require dosage changes.
Identifying hospital units where medication administration errors frequently occur provides more specific alerts to nurses for avoiding potential medication errors. Strengthening related training and education should then follow, as Huckels-Baumgart and Manser point out that roots of medication error chains include inattention and lack of training [14]. Nursing education on the drug classes associated with medication errors, with regard to the more frequent Rights, provides guidelines for proper medication administration. For example, educational materials in nursing pharmacology curricula specifically focusing on the factors associated with medication administration errors (e.g., drug class, hospital unit, and combinations of those factors) will help reinforce and stress the issue. It is also recommended to offer more targeted nursing training and continuing education programs that emphasize the knowledge, skills, techniques, and approaches for preventing medication administration errors.
Policies and programs should target hospital units and drug classes to reduce medication administration errors with regard to the Seven Rights. Awareness-improvement training, such as Plan-Do-Study-Act cycles, and educational programs developed from actual reported medication error incidents may enable nurses to reduce preventable medication administration errors and improve patient safety [35].
Since medication administration errors may be caused by combinations of certain latent conditions with regard to hospital units and drug classes, errors related to the Seven Rights may be minimized by alleviating the heavy workload of nurses. Legislation mandating nurse-to-patient staffing ratios and reduction of workloads should be considered. These have been shown to reduce medication administration errors related to human factors such as fatigue from lack of sleep, working too many hours, and time constraints [10,15,37,38].
Conclusion
This study, which collected more accurate medication error information from hospitals' risk management departments, supports previous evidence that medication errors continue to exist in hospital settings and are more common in some units than others. Examining the Seven Rights, hospital units, and drug classes simultaneously enabled us to identify potential factors associated with medication administration errors. Findings from this study may assist hospital administration and staff in developing practical and effective strategies to reduce medication errors and improve patient safety. | 2019-12-01T02:04:17.310Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "f57c4221ffaa2f9c4559810cef9b0db3fb8b9053",
"oa_license": "CCBYSA",
"oa_url": "https://gavinpublishers.com/admin/assets/articles_pdf/1520489488article_pdf125811574.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4fa4ae1574b8c12944f96be772c3b55b59f18b4d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247930971 | pes2o/s2orc | v3-fos-license | Mannan-Binding Lectin via Interaction With Cell Surface Calreticulin Promotes Senescence of Activated Hepatic Stellate Cells to Limit Liver Fibrosis Progression
Background & Aims Liver fibrosis represents a hallmark of most chronic liver diseases (CLD) triggered by recurrent liver injury and subsequent myofibroblast transdifferentiations of resident hepatic stellate cells (HSCs). Mannan-binding lectin (MBL) is potentially involved in hepatic fibrosis in CLD through unclear mechanisms. Therefore, we investigated the crosstalk between MBL and HSCs, and the consequent effects on fibrosis progression. Methods Samples from patients with liver cirrhosis were collected. MBL-deficient (MBL-/-) and wild-type (WT) C57BL/6J mice were used to construct a CCl4-induced liver fibrosis model. Administration of MBL-expressing, liver-specific, adeno-associated virus was performed to restore hepatic MBL expression in MBL-/- mice. The human HSC line LX-2 was used for in vitro experiments. Results MBL levels in patients with liver cirrhosis were correlated with disease severity. In the CCl4-induced liver fibrosis model, MBL-/- mice showed more severe liver fibrosis accompanied by reduced senescent activated HSCs in liver tissue compared with WT mice, which could be inhibited by administering MBL-expressing, liver-specific, adeno-associated virus. Moreover, depleting senescent cells with senolytic treatment could abrogate these differences owing to MBL absence. Furthermore, MBL could interact directly with calreticulin associated with low-density lipoprotein receptor-related protein 1 on the cell surface of HSCs, which further promotes senescence in HSCs by up-regulating the mammalian target of rapamycin/p53/p21 signaling pathway. Conclusions MBL, as a newfound senescence-promoting modulator, and its crosstalk with HSCs in the liver microenvironment are essential for the control of hepatic fibrosis progression, suggesting its potential therapeutic use in treating CLD associated with liver fibrosis.
Here, we discovered that the mannan-binding lectin-hepatic stellate cell interaction via cell surface calreticulin promotes senescence of activated hepatic stellate cells, contributing to the control of hepatic fibrosis progression.
Hepatic fibrogenesis results from a repeated wound-healing response to chronic injuries and subsequent accumulation of excessive extracellular matrix (ECM) that progressively deteriorates liver structure and function. 1 Without proper and prompt treatment, liver fibrosis, as a common pathologic process of most chronic liver diseases (CLDs), can progress into cirrhosis, carcinoma, and liver failure, becoming one of the major public health problems worldwide. 2 Of note, the onset and progression of hepatic fibrosis and its regression process are orchestrated by an integrated network consisting of numerous molecules and cells with profibrotic or antifibrotic function in the local environment, among which hepatic stellate cells (HSCs) are a key piece of the puzzle. 3 Upon liver fibrosis induction, liver injuries and inflammation trigger the activation of HSCs, leading to the transdifferentiation of quiescent, vitamin A-storing HSCs into ECM-secreting, fibrogenic myofibroblasts that are now well established as a central driver of fibrosis. 4 Recent discoveries have shown that activated HSCs with a senescent phenotype might help control and/or reverse liver fibrosis by reducing ECM levels, thus reestablishing the quiescent environment. 5,6 Therefore, how activated HSCs restore the quiescent state through cellular senescence needs to be elucidated, contributing to the development of effective therapies for the treatment of hepatic fibrosis. 7 Although the fate of activated HSCs is regulated by an interdependent network of molecular and cellular components in the liver microenvironment, 8,9 the underlying mechanisms of HSC senescence induction are still unclear.
Mannan (or mannose)-binding lectin (MBL), which belongs to the family of collectins, is a soluble pattern recognition molecule produced primarily by hepatocytes. [10][11][12] MBL can mediate phagocytosis and trigger the complement cascade through the lectin pathway by binding to carbohydrate motifs. 13 In human beings, the MBL genetic system comprises a single functional gene (MBL2) encoding MBL, whereas rodents have 2 Mbl genes (Mbl1 and Mbl2) that encode MBL-A and MBL-C, respectively. 12 In addition, the single human MBL gene is closely related to rodent Mbl2 rather than rodent Mbl1. 14 Moreover, we previously showed the prevalent expression of MBL-C, but not MBL-A, in the liver of wild-type (WT) mice. 15 Therefore, we focused on MBL-C (mice) and MBL2 (human) and use MBL to indicate both of them in the present study. In addition, MBL can exert different biologic functions apart from complement activation, playing crucial roles in tissue homeostasis and various diseases. 10,16,17 Emerging evidence has indicated that MBL is involved in the development of fibrosis in CLD patients, including patients with liver cirrhosis, 18 hepatitis C virus patients with more severe fibrosis, 19 individuals with schistosomiasis-induced liver fibrosis, 20 and patients with HBV-related liver cirrhosis. 21 Our previous study showed that MBL, as an essential component of the liver microenvironment, suppresses tumor development via interaction with local stromal cells. 15 However, the exact function of MBL in liver fibrosis, especially its crosstalk with other cells in the local environment and the consequent effects on fibrosis progression, has not yet been thoroughly investigated.
In this study, we found a notable association between MBL level and severity of liver fibrosis in human beings. Then, we used a mouse model of CCl4-induced liver fibrosis in MBL-deficient mice and the human HSC cell line LX-2, seeking to elucidate the potential role of MBL in liver fibrosis and its underlying mechanisms. Our findings uncovered a previously unrecognized mechanism in which the MBL-HSC interaction promoted HSC senescence and subsequent fibrosis alleviation, providing insight into a therapeutic strategy for CLD associated with liver fibrosis.
MBL Levels Are Increased During Liver Fibrosis and Associated Inversely With Cirrhosis Severity
Because MBL is reportedly involved in hepatic fibrosis, 18 we decided to address the association between MBL and liver fibrosis progression. Examination of the plasma MBL concentration in patients with liver cirrhosis and healthy controls using enzyme-linked immunosorbent assay (ELISA) analysis showed that MBL levels were markedly higher in cirrhosis patients (Figure 1A). Then, we assessed the association between plasma MBL levels and clinical characteristics in these patients. The correlation analysis showed that cirrhosis patients with higher plasma alanine aminotransferase (ALT) and aspartate aminotransferase (AST) activity tend to display lower MBL levels in plasma, indicating an inverse correlation (Figure 1B-E). Plasma MBL levels in these patients also were correlated inversely with Child-Pugh grade (Figure 1F), whereas they were associated positively with the plasma concentrations of albumin (Figure 1B). In addition, we observed significantly lower MBL expression and fewer MBL-positive cells in the liver tissue of patients with Child-Pugh B liver cirrhosis than in those with Child-Pugh A liver cirrhosis by immunohistochemical staining (Figure 1G). Consistently, in an established murine model of CCl4-induced liver fibrosis, the level of MBL protein (Figure 1H and I) and its messenger RNA (mRNA) expression (Figure 1J) were increased in the liver upon fibrogenesis. We observed that MBL distributes throughout the liver and is seen predominantly around the central vein and sinusoid (Figure 1H). Similar results also were observed in the bile duct ligation model of liver fibrosis (Figure 1K-M). Therefore, these data suggest that MBL is involved in the progression of hepatic fibrosis.
MBL Contributes to Control of Liver Fibrosis Progression
Because MBL might have a potential role in the pathogenesis of hepatic fibrosis, we further addressed the exact role of MBL in the ongoing development of liver fibrosis. First, we used MBL-/- mice and WT littermates to investigate the characteristics of liver fibrosis in the CCl4-induced murine model. We observed that ALT and AST levels and lactate dehydrogenase (LDH) activity in serum from MBL-/- mice markedly increased compared with those from WT mice after CCl4 treatment (Figure 2A). The histologic staining of liver sections showed that CCl4-treated MBL-/- mice displayed a larger fibrosis area in liver tissue than WT counterparts (Figure 2B). Furthermore, Sirius Red staining (Figure 2C) and Masson's trichrome staining (Figure 2D), respectively, showed significantly more collagen deposition and fibrous connective tissue hyperplasia in liver tissue of MBL-/- mice compared with that of WT controls. Because liver fibrosis also is characterized by activated HSCs, in which α-smooth muscle actin (α-SMA) is a vital indicator, 22 we subsequently assessed α-SMA levels in the liver. As expected, the expression of α-SMA in liver sections (Figure 2E), as well as intrahepatic α-SMA protein (Figure 2F) and its mRNA expression (Figure 2G), strongly increased in MBL-/- mice compared with those in WT mice upon CCl4-induced fibrogenesis. The quantitative reverse-transcription polymerase chain reaction (PCR) results showed that the mRNA levels of fibrosis-associated genes remarkably increased in isolated HSCs of MBL-/- mice compared with those of WT controls during liver fibrosis (Figure 2H). These results indicate that MBL absence led to a deterioration of liver fibrosis.
Furthermore, we subsequently performed tail-vein administration of liver-specific, MBL-expressing, adeno-associated virus (pAAV-MBL) to restore MBL expression in the liver of MBL-deficient mice as we reported previously, 15 followed by CCl4 injection and further assessment of liver fibrosis. Immunohistochemistry and Western blot analysis of the liver tissue showed that MBL-/- mice lack MBL expression, while MBL expression was restored after pAAV-MBL injection (Figure 3A and B). Consistently, much more fibrous connective tissue hyperplasia and collagen deposition in liver tissue were observed in MBL-/- mice than in WT counterparts, as determined by histologic staining (Figure 3C). Similar to the earlier-described results in mice without pAAV administration, MBL-/- mice with pAAV-control pretreatment showed markedly increased intrahepatic α-SMA expression (Figure 3C). However, pAAV-MBL delivery nearly eliminated the progression of liver fibrosis owing to the absence of MBL, indicated by comparable liver fibrosis features in WT mice and MBL-/- mice (Figure 3C and D). Collectively, these data showed that restoration of hepatic MBL expression relieves the liver fibrosis progression owing to MBL deficiency, implying that MBL contributed to the amelioration of liver fibrosis progression.
MBL Limits Liver Fibrosis Progression via Promoting HSC Senescence
During hepatic fibrosis, some activated HSCs might progressively undergo senescence, becoming less fibrogenic, and thus hold a vital position in controlling fibrosis. 23 Together with the earlier-described results that determined the MBL involvement in the control of hepatic fibrosis, these data raised a potential association between MBL and activated HSC senescence in liver fibrosis. Interestingly, as shown by senescence-associated β-galactosidase (SA-β-Gal) staining, which indicates senescent cells, 24 the number of SA-β-Gal+ cells decreased dramatically in liver sections of MBL-/- mice compared with that of WT mice (Figure 4A). In addition, MBL deficiency strikingly increased the proliferative Ki67+ cells in liver tissue, as determined by immunohistochemistry staining (Figure 4B). Moreover, we found that p21, one of the cellular senescence markers, was expressed by α-SMA+ activated HSCs in fibrotic liver sections, and the p21+α-SMA+ cell frequency notably decreased as a result of MBL deficiency (Figure 4C). In accordance with the histologic staining, the mRNA expression of senescence-related genes, including senescence-associated secretory phenotype genes, decreased significantly in isolated HSCs of MBL-/- mice compared with those of WT controls (Figure 4D). Moreover, hepatic MBL restoration through liver-specific pAAV-MBL delivery in MBL-/- mice eliminated the reduction of the senescent activated HSC frequency (Figure 5A) and of the protein (p53 and p21) expression of senescence-related genes, as well as their mRNA levels, owing to the absence of MBL (Figure 5B and C). These results indicated that MBL expression led to an amelioration of liver fibrosis progression along with an increased frequency of senescent activated HSCs.
Senescent cells can be selectively eliminated by dasatinib and quercetin (D+Q) administration, which belong to a new class of drugs known as "senolytics." 25 Although there are currently no drugs that selectively target senescent HSCs, D+Q, as the most prominent senolytic, shows toxicity toward senescent HSCs and thus can be used to deplete senescent HSCs in the liver. 26 To test whether senescent HSCs are involved in the MBL-mediated amelioration of liver fibrosis progression, we applied senolytic drugs (D+Q) by oral administration along with CCl4 administration to eliminate senescent HSCs in WT mice. To evaluate the eliminating effect of senolytic drugs on senescent cells in the liver, we performed co-staining of α-SMA and other cellular senescence indicators or Ki67 in liver sections. The results show that D+Q treatment significantly reduced the frequency of senescent activated HSCs, whereas it increased the number of Ki67+α-SMA+ cells (Figure 6A). In addition, co-staining of α-SMA and the pro-apoptosis molecule Bax or the anti-apoptosis protein Bcl2 showed that D+Q treatment increased the frequency of α-SMA+Bax+ cells while reducing the presence of α-SMA+Bcl2+ cells, which became comparable between WT and MBL-/- mice (Figure 6B). In addition, the protein expression (p53 and p21) and mRNA levels of representative senescence-related genes, including senescence-associated secretory phenotype genes, in liver tissues were reduced after senolytic administration (Figure 6C and D). Furthermore, WT mice showed notably more liver injury, fibrous connective tissue hyperplasia, and collagen deposition in liver sections after D+Q treatment, which was comparable with D+Q-treated MBL-/- mice (Figure 7). Taken together, these results suggest that HSC senescence might be responsible for the MBL-mediated alleviation of hepatic fibrosis.
MBL Promotes Senescence of HSCs Through the Mammalian Target of Rapamycin/p53/p21 Signaling Pathway
Our pilot immunofluorescence assay showed that MBL colocalized with α-SMA in fibrotic liver sections from mice (Figure 8A) and human beings (Figure 8B), where p53 expression was notably higher in MBL+α-SMA+ activated HSCs than in MBL-negative α-SMA+ cells (Figure 8B). Together with our previous study showing that MBL could interact with HSCs, in which MBL itself was hardly detected, 15 these results prompted us to explore whether MBL could affect the senescence of HSCs. Here, we treated the human HSC line LX-2 with or without the senescence inducer H2O2, 27 which can trigger cellular senescence of HSCs, in the presence or absence of MBL protein. As anticipated, H2O2 did induce the senescence of LX-2 cells, indicated by more SA-β-Gal expression (Figure 9A), more robust cellular growth arrest (Figure 9B), and increased protein expression and mRNA levels of senescence-related molecules (Figure 9C and D). However, MBL treatment significantly promoted H2O2-induced cellular senescence (Figure 9) and synergistically reduced the mRNA levels of fibrosis-associated genes upon H2O2 treatment (Figure 9E). Although MBL treatment alone did not affect the senescence (Figure 9) or viability of LX-2 cells (Figure 9F), these results indicated that MBL promoted HSC senescence.
A previous study showed that p53 is a vital molecule involved in HSC senescence, 28 and mammalian target of rapamycin (mTOR)/p53/p21 is considered one of the classic senescence pathways. 29 Our pilot Western blot analysis showed that phosphatidylinositol-3-kinase (PI3K)/AKT/mTOR signaling was up-regulated in the liver tissues of WT mice compared with MBL-/- mice upon pAAV-control administration, a difference that was abrogated by pAAV-MBL treatment (Figure 10A). Furthermore, we observed that intraperitoneal injection of rapamycin did inhibit mTOR/p53/p21 activation in vivo during fibrosis establishment (Figure 10B and C). This rapamycin administration eliminated the MBL deficiency-mediated differences in hepatic fibrosis degree and HSC senescence (Figure 10C-E). Together with the earlier results on p53 and p21, we proposed that the mTOR/p53/p21 pathway might be involved in MBL-mediated HSC senescence. Consistent with these results, MBL treatment substantially promoted senescence of LX-2 cells, indicated by an increased frequency of SA-β-Gal+ cells, stronger cellular growth arrest, increased protein expression (p53 and p21) and mRNA levels of senescence-related molecules, as well as reduced mRNA levels of fibrosis-associated genes (Figure 11). However, this MBL-induced senescence-promoting effect was abolished by pretreatment with the p53-specific inhibitor pifithrin-α (PFT-α) (Figure 11) or the mTOR suppressor rapamycin (Figure 12) ahead of the exposure to MBL and H2O2, as shown by the comparable frequency of SA-β-Gal+ cells, cellular growth arrest, protein expression, and mRNA levels of senescence-related molecules in MBL-treated and untreated cells. Together, these results implied that MBL promoted senescence of HSCs via the mTOR/p53/p21 pathway.
MBL Promotes HSC Senescence Through Interaction With Cell Surface Calreticulin
According to our previous study, MBL can bind to the T-cell surface via calreticulin (CRT) and subsequently activate downstream pathways. 30 CRT, which is expressed in various cell types, including HSCs, 31 can signal in association with low-density lipoprotein receptor-related protein 1 (LRP1), thus leading to activation of the PI3K/AKT/mTOR pathway. 32,33 Our pilot experiment showed that MBL binds to cell-surface CRT on the human HSC line LX-2 (Figure 13A). In addition, when we incubated primary mouse HSCs with supernatant harvested from stimulated primary mouse hepatocytes, fluorescence staining showed colocalization of MBL and CRT on the cell surface of HSCs, accompanied by up-regulated activation of the mTOR/p53/p21 signaling pathway (Figure 13B-E). To define whether MBL binding to CRT can subsequently regulate downstream mTOR activation and further affect the senescence of HSCs, we used a CRT-blocking antibody to inhibit the potential binding of CRT and MBL. 30 The results showed that the anti-CRT pretreatment completely blocked the MBL-mediated promotion of senescence, as indicated by comparable SA-β-Gal positivity (Figure 14A), cellular growth arrest (Figure 14B), and mRNA levels of senescence-related genes (Figure 14C) in cells treated with MBL or not. Interestingly, we observed that MBL increased the cell-surface CRT-LRP1 interaction while not affecting their surface expression (Figure 14D). As illustrated in Figure 14E, LX-2 cells exposed to MBL and H2O2 showed an increased intracellular level of the catalytic subunit p110 of PI3K and its interaction with LRP1, and an association of LRP1 and Rab8. Moreover, MBL drastically increased the p110 level and Akt phosphorylation, as well as activation of the mTOR/p53/p21 pathway as determined by immunoblotting, which was notably inhibited by anti-CRT pretreatment (Figure 14F). Anti-CRT treatment also abolished the MBL-mediated reduction in mRNA levels of fibrosis-associated genes in LX-2 cells upon MBL treatment (Figure 14G). These results suggest that MBL binds to the membrane CRT of HSCs, promoting HSC senescence.
Discussion
In this study, we have made novel findings toward a better understanding of MBL function in the liver microenvironment. We first discovered that MBL levels increase markedly in the liver upon hepatic fibrogenesis, whereas they decrease as fibrosis progresses. Genetic deficiency of MBL exacerbates liver fibrotic pathologies, mediated by a reduction of HSC senescence, which is abolished by hepatic MBL restoration. One of the underlying mechanisms is that MBL directly interacts with cell-surface CRT, which further signals in association with LRP1, leading to PI3K/Akt activation and the downstream mTOR/p53/p21 pathway, thus promoting cellular senescence of HSCs and relieving liver fibrosis progression. Hence, MBL expression and its association with HSCs control hepatic fibrosis by promoting HSC senescence, implying that MBL might serve as a bona fide regulator of liver fibrosis progression and a potential target for anti-hepatic-fibrosis therapies.
MBL, as a primarily liver-derived soluble opsonin, holds a critical position in innate immunity, mainly through initiating lectin pathway activation, and can also serve multiple modulatory functions apart from complement activation in tissue homeostasis and various diseases. 10,16,17,34 In addition, MBL has been described as an acute-phase protein that increases and plays vital roles in the acute-phase response. 35,36 Local inflammation or injuries in the liver during hepatic fibrosis can induce acute-phase protein expression to maintain homeostasis and tissue repair. 37,38 Therefore, it is not surprising that our current study found that plasma MBL levels were higher in cirrhosis patients than in healthy controls.
However, we observed that plasma MBL levels and MBL expression in liver tissues were correlated inversely with cirrhosis severity. These results are partially supported by a previous study indicating lower MBL levels in patients with more advanced stages of cirrhosis. 18 An early study also showed that coding-mutation homozygosity O/O, associated with lower MBL expression, was related to advanced liver fibrosis in hepatitis C virus patients. 19 This evidence suggests that polymorphism of the MBL gene in patients with different stages of cirrhosis results in differences in maximal MBL expression among these patients. Furthermore, because our current work suggests that hepatic MBL contributes to the amelioration of fibrosis progression, patients with lower MBL expression have less MBL-mediated control of fibrosis deterioration and might suffer from more severe liver cirrhosis. Because our findings suggest that MBL has a potential role in liver fibrosis, we decided to address its exact role. Using MBL-deficient mice, we discovered that genetic depletion of MBL did lead to a notable deterioration of hepatic fibrosis. Given that the liver, particularly hepatocytes, is a key source of MBL, 11,12,39-41 our recent studies have established liver-specific AAV vectors carrying the MBL gene that can restore hepatic MBL expression. 13,15 Accordingly, in the current study, restoration of hepatic MBL expression with pAAV-MBL nearly completely abrogated the liver fibrosis aggravation owing to MBL absence, providing evidence that MBL contributes to the amelioration of hepatic fibrosis.
Located in the liver microenvironment, activated HSCs represent the dominant profibrogenic cell population, contributing most of the ECM-producing myofibroblasts responsible for liver fibrogenesis. [42][43][44] Although still controversial, emerging evidence suggests that senescence of activated HSCs improves hepatic fibrosis by eliminating the major source of ECM. 20,23,24,45 Furthermore, cellular senescence, as a sustained state of cell-cycle arrest, can promote tissue remodeling after injury and plays crucial roles in CLD. 46 Therefore, we proposed that activated HSC senescence might be associated with the MBL-mediated amelioration of hepatic fibrosis progression. Indeed, we showed that the frequency of senescent activated HSCs markedly decreased in MBL-deficient mice compared with WT controls. In addition, the subsequent restoration of hepatic MBL expression with pAAV-MBL eliminated the reduction of senescent HSCs in MBL-deficient mice. Although there are not yet any drugs that selectively eliminate only senescent HSCs, senolytic drugs such as D+Q, a prominent senotherapeutic strategy, 47 can efficiently deplete senescent HSCs in the liver. 26 In the current study, we found that senolytic drugs (D+Q) dramatically reduced senescent activated HSCs in the liver. Most importantly, these senolytic drugs eliminated the differences in liver fibrosis features between WT and MBL-/- mice. Our current work thus suggests that HSC senescence is involved in the MBL-mediated alleviation of hepatic fibrosis progression, although senescence of other cells cannot be excluded. Of note, the senolytics D+Q could exacerbate hepatic fibrosis in WT mice with MBL expression. This result differs from a previous report indicating that the senolytic fisetin reduced cholangiocyte senescence and improved fibrosis in the Mdr2-/- mouse. 48 One possible reason for the different findings and outcomes between these 2 studies is that the mechanisms of the murine fibrosis models differ significantly. The Mdr2-/- mouse model spontaneously develops primary sclerosing cholangitis and progressive secondary biliary fibrosis owing to accumulation of toxic bile acids in hepatocytes and initiation of a profibrogenic cholangiocyte response. 49 Emerging evidence has indicated that CCl4 metabolized in the liver induces hepatocyte apoptosis, leading to parenchymal liver damage and HSC activation, favoring the fibrogenic process. 50 Notably, CCl4-induced hepatic fibrosis starts from the hepatic centrilobular zone 3 and progresses toward the portal tracts of zone 1, opposite to the biliary fibrosis in Mdr2-/- mice. 51 Another reason might be that the distinct senolytics are likely to target different cells and work differently at various time points in these 2 models. Although all senolytic drugs appear to clear senescent cells, each drug's exact roles and mechanisms, especially in different contexts in vivo, are still unclear. 52,53 Even though it has been reported that D+Q shows selective cytotoxicity toward senescent HSCs, 26 the study mentioned earlier indicated that fisetin selectively eliminated senescent cholangiocytes, whereas D+Q targeted both proliferating and senescent cholangiocytes. 48 Furthermore, we performed senolytic administration ahead of CCl4 injection to ensure the depletion of senescent HSCs in the present study, whereas the report mentioned earlier only focused on the senescent cholangiocytes. 48
Therefore, although the exact mechanisms need to be explored further and the involvement of other senescent cells cannot be excluded, it is not surprising that the findings on the roles of these distinct senolytics differ between our present study and the report mentioned above. 48 The fate of activated HSCs is orchestrated by an interdependent network of molecular and cellular components in the liver microenvironment. 8,9,54 However, the underlying mechanisms of HSC senescence induction are still unclear. In this study, we observed MBL+α-SMA+ activated HSCs in fibrotic liver sections, in which the senescence marker p53 was expressed. This finding is partially supported by our previous study showing colocalization of MBL and α-SMA+ activated HSCs in the peritumor region in HCC. 15 Together with our earlier finding that MBL was hardly detected in HSCs but could interact directly with HSCs, 15 these data prompted us to investigate the underlying mechanisms of the interaction between MBL protein and HSCs and the downstream signaling pathway. Intriguingly, when using H2O2 to induce HSC senescence, MBL treatment markedly promoted the senescence of HSCs. Meanwhile, mTOR/p53/p21, a classic senescence pathway, 21,55 was up-regulated by MBL. Nevertheless, this regulatory role was eliminated by selective inhibitors of mTOR or p53, indicating their involvement in MBL-mediated HSC senescence.
Given our previous reports showing that MBL binds to CRT on the T-cell surface, 30 as well as the presence of CRT on the cell surface of LX-2 cells 56 and activated HSCs, 31 we subsequently blocked the CRT-MBL interaction with anti-CRT. As expected, anti-CRT pretreatment completely blocked the MBL-mediated promotion of senescence, the MBL-up-regulated PI3K/Akt pathway, and downstream mTOR/p53/p21 signaling in LX-2 cells. Furthermore, coculture of primary mouse HSCs with hepatocyte supernatant also resulted in CRT-MBL colocalization and mTOR/p53/p21 signaling pathway activation in HSCs, suggesting that hepatocyte-secreted MBL can interact with CRT and up-regulate the mTOR/p53/p21 signaling pathway. Intriguingly, MBL treatment strengthened the interaction of LRP1 and CRT on the cell surface, which might be explained by previous studies showing the cell-surface CRT-MBL interaction 31 and MBL binding to cell-surface LRP1 (CD91). 57 Of note, we discovered that MBL interaction with CRT increased the intracellular level of the catalytic subunit p110 of PI3K bound to LRP1 in association with Rab8. These results are supported by a previous report indicating that Toll-like receptors, which, like MBL, are pattern recognition receptors, can crosstalk with and activate LRP1, which recruits a Rab8a/PI3K complex that further activates Akt/mTOR signaling. 33 In addition, the present results can be explained by the previous observation that cell-surface CRT signals in association with LRP1 by forming a CRT-LRP1-receptor complex. Furthermore, it has been reported that tissue-type plasminogen activator, as a ligand of LRP1, mediated the resolution of CCl4-induced acute liver injury through triggering LRP1-associated signaling in HSCs, which might lead to the regression of activated HSCs in vivo. 58 Thus, our present findings provide new insights into the mechanisms underlying MBL-mediated HSC senescence upon stress.
In summary, our data provide compelling evidence that MBL is essential for controlling liver fibrosis progression, broadening the knowledge of MBL biologic function. In addition, this effect is partially mediated through MBL binding to the cell surface CRT of HSCs, facilitating CRT-LRP1 interaction and further activating the downstream mTOR/p53/p21 signaling pathway that promotes cellular senescence. These findings elucidate a previously unrecognized senescence-promoting role of MBL in the liver microenvironment that contributes to the control of hepatic fibrosis progression and provides a whole new perspective for antifibrotic therapy based on MBL expression.
Animals
All animal experiments in this study were performed in accordance with the guidance of the Welfare and Ethical Committee for Experimental Animal Care of Southern Medical University. WT C57BL/6J mice were purchased from the Animal Center of Southern Medical University (Guangzhou, China). MBL-/- mice on a C57BL/6J background were purchased from the Jackson Laboratory (Bar Harbor, ME). The animals were housed under a 12-hour light/dark cycle in a specific pathogen-free animal facility with controlled temperature (20°C-25°C) and humidity (50% ± 5%). Female 6- to 8-week-old mice were used for all experiments.
Model of Liver Fibrosis
Mouse models of liver fibrosis were established according to a previously reported method with minor modification. 59 Mice were injected intraperitoneally with 25% CCl4 (Macklin, Shanghai, China) at a dose of 1 μL/g body weight 3 times a week for 6 weeks, and were killed 2 days after the last injection of CCl4 for further analysis. To eliminate the senescent cells, the senolytic drugs dasatinib and quercetin (D+Q) were administered orally along with the CCl4 injections.
Bile Duct Ligation
To establish liver fibrosis, mice underwent bile duct ligation or sham surgery under anesthesia as described previously. 60 In brief, after midline laparotomy, the common bile duct was exposed with a wet cotton swab under sterile conditions and ligated with 1 surgical knot. Sham mice underwent a similar surgical procedure except for ligation of the bile duct. All mice were killed after 20 days.
Rapamycin In Vivo Treatment
Rapamycin was applied to inhibit mTOR activation in mice during the establishment of liver fibrosis. Mice received rapamycin (1 mg/kg body weight/day) by intraperitoneal injection daily along with CCl4 injection until they were killed. 61
Patient Samples
Paraffin-embedded, formalin-fixed cirrhotic liver sections were obtained from 6 patients with histologically diagnosed liver cirrhosis confirmed at Nanfang Hospital, Southern Medical University (Guangzhou, China). Serum samples were obtained from age-/sex-matched healthy volunteers and cirrhotic patients. Patient information is presented in Table 1. All of the samples were coded anonymously in accordance with local ethical guidelines, as stipulated by the Declaration of Helsinki, with written informed consent and a protocol approved by the Institutional Review Board of Nanfang Hospital, Southern Medical University.
Cell Culture
LX-2 cells, an immortalized human HSC line, were a gift from Dr Bai XC of Southern Medical University (Guangzhou, China). Cells were cultured in Dulbecco's modified Eagle medium supplemented with 10% fetal bovine serum and 1% penicillin/streptomycin at 37°C, 5% CO2. To observe the effect of MBL on HSC senescence upon liver fibrosis, LX-2 cells were treated with or without 300 μmol/L H2O2 and/or 10 μg/mL MBL protein purified as we described previously. 30 For p53 inhibition, LX-2 cells were pretreated with 10 μmol/L PFT-α ahead of the treatment with H2O2 and/or MBL. 62 For mTOR inhibition, LX-2 cells were exposed to 100 nmol/L rapamycin before the treatment with MBL and/or H2O2. 63
Immunochemistry
Paraffin-embedded, formalin-fixed, 5-μm-thick tissue sections were processed for immunohistochemical staining with the following primary antibodies: mouse anti-mouse MBL antibody, mouse anti-human MBL antibody, rabbit anti-Ki67 antibody, and rabbit anti-α-SMA antibody. The subsequent procedure followed a method described in previous studies. 15 Expression was visualized by diaminobenzidine staining (Beyotime Biotechnology, Shanghai, China). Sections were lightly counterstained with hematoxylin.
For H&E, Masson, and Sirius Red staining, paraffin-embedded, formalin-fixed, 5-μm-thick tissue sections were processed for the indicated staining. Five arbitrarily selected fields of 3 random liver sections from 3 to 5 mice per group at 100× magnification were analyzed and quantified.
Immunofluorescence
Paraffin-embedded, formalin-fixed, 5-μm-thick tissue sections were processed for immunofluorescent staining with the following primary antibodies: mouse anti-mouse MBL antibody, mouse anti-human MBL antibody, rabbit anti-α-SMA antibody, rabbit anti-p21 antibody, rabbit anti-Bax antibody, rabbit anti-Bcl2 antibody, rabbit anti-p-H2A.X antibody, rabbit anti-p16 antibody, and rabbit anti-CRT antibody. The subsequent procedure followed a method described in our previous study. 13 Sections were mounted with a Mowiol-based antifading medium (Beyotime Biotechnology) and analyzed with a laser scanning microscope system (Nikon Eclipse Ni, Tokyo, Japan). The triple immunofluorescent staining of p53, MBL, and α-SMA on liver sections was conducted by Servicebio Company (Wuhan, China).
Quantitative Real-Time PCR
Total RNA was extracted from livers or cells using TRIzol reagent (TransGene Biotech, Beijing, China). Complementary DNA was synthesized from 1 μg mRNA using TransScript All-in-One First-Strand Complementary DNA Synthesis SuperMix (TransGene Biotech) in a total volume of 20 μL. The complementary DNA was then amplified under the following reaction conditions: denaturation at 94°C, annealing at 55°C, and extension at 72°C, for 40 cycles. Real-time PCR was performed on an Eppendorf Realplex PCR system using TransStart Tip Green quantitative reverse-transcription PCR SuperMix (TransGene Biotech). Expression was normalized to the expression of the housekeeping gene β-actin. The primers used were synthesized by BGI Genomics (Shenzhen, China). The primer sequences used in the experiment are shown in Table 2.
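The text specifies only that expression is normalized to the β-actin housekeeping gene; a common way to turn such normalized Ct values into fold changes is the Livak 2^-ΔΔCt method. The sketch below is a generic illustration with invented Ct values, not data or code from this study.

```python
# Relative expression by the Livak 2^-ΔΔCt method (illustrative only).

def relative_expression(ct_target, ct_actin, ct_target_ref, ct_actin_ref):
    """Fold change of a target gene vs. a reference sample,
    each normalized to the beta-actin housekeeping gene."""
    d_ct_sample = ct_target - ct_actin        # ΔCt of the treated sample
    d_ct_ref = ct_target_ref - ct_actin_ref   # ΔCt of the reference sample
    dd_ct = d_ct_sample - d_ct_ref            # ΔΔCt
    return 2.0 ** (-dd_ct)

# Hypothetical example: p21 mRNA in treated vs. untreated LX-2 cells.
print(relative_expression(24.1, 17.3, 26.0, 17.5))  # ~3.2-fold up
```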
Isolation and Culture of Primary Mouse HSCs and Hepatocytes
To extract mouse HSCs for mRNA analysis, the fresh liver was perfused slowly via the inferior vena cava with 30 mL warm phosphate-buffered saline at a rate of 5 mL/min, and then digested with 20 mL liver digest medium (RPMI 1640 with collagenase IV at 0.5 mg/mL and DNase I at 0.1 mg/mL) at a rate of 5 mL/min. After the gall bladder was removed, the livers were carefully excised and passed through a 70-μm cell strainer. Cells were then purified by density gradient centrifugation using discontinuous 30%/70% (vol/vol) Percoll (Sigma, St. Louis, MO) gradients. Cells at the upper interface were collected and used for further analysis by quantitative real-time PCR.
To extract and culture mouse HSCs, primary HSCs and hepatocytes were extracted from mouse livers by collagenase digestion and purified by Percoll as we previously described. 15,64 HSCs and hepatocytes were cultured in Dulbecco's modified Eagle medium supplemented with 10% fetal bovine serum. Activated HSCs were identified by α-SMA expression. To obtain enough hepatocyte-derived MBL, hepatocytes were stimulated with lipopolysaccharide (100 ng/mL) for 8 hours before the medium was replaced with fresh culture medium. The supernatants of hepatocytes were collected the next day. Activated HSCs were incubated with these supernatants (30% of total medium) for 48 hours and collected for further experiments.
Cell Cycle and Apoptosis Assay
The effect of MBL on the cell cycle was assessed using a cell-cycle staining kit (MultiSciences, Hangzhou, China), according to the manufacturer's instructions. LX-2 cells were seeded into 6-well plates overnight and then cultured in media with the indicated concentrations of MBL and/or H2O2 for 48 hours. Subsequently, the cells were collected and resuspended in 1 mL DNA staining solution and 10 μL permeabilization solution for 30 minutes in the dark. Data were acquired on an LSRII/Fortessa flow cytometer (BD Biosciences, Heidelberg, Germany) and analyzed using FlowJo software (Tree Star, Ashland, OR).
The effect of MBL on cell apoptosis was detected using an Annexin-V/7-AAD kit (MultiSciences), according to the manufacturer's instructions. LX-2 cells were seeded into 6-well plates overnight and then cultured in media with the indicated concentrations of MBL and/or H2O2 for 48 hours. Subsequently, the cells were collected and resuspended in 100 μL binding buffer containing 5 μL Annexin-V and 10 μL 7-AAD for 15 minutes in the dark. Data were acquired on an LSRII/Fortessa flow cytometer (BD Biosciences) and analyzed using FlowJo software.
ELISA Assay
To measure serum levels of MBL, ALT, AST, and LDH, serum samples were collected from human patients or mice. The serum levels of MBL in patients and of ALT, AST, and LDH in mice were measured by ELISA. The assay kits for mouse ALT, mouse AST, and mouse lactate dehydrogenase were purchased from Jiancheng Biotech (Nanjing, China), and the kit for human MBL from R&D Systems.
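As a generic illustration of how ELISA plate readings are converted to concentrations (the kit protocols are not detailed in the text), the sketch below fits a four-parameter logistic (4PL) standard curve and inverts it for unknown samples. All optical-density values and standard concentrations are invented placeholders.

```python
# 4PL standard-curve fit for an ELISA readout (illustrative sketch).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: lower asymptote, d: upper asymptote,
    # c: inflection concentration, b: slope factor.
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([0.31, 0.63, 1.25, 2.5, 5.0, 10.0])  # standards, ng/mL
std_od = np.array([0.08, 0.15, 0.29, 0.55, 0.95, 1.40])  # measured OD450

popt, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 1.0, 2.0, 1.8])

def od_to_conc(od, a, b, c, d):
    # Invert the 4PL to recover concentration from a sample OD.
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(od_to_conc(0.70, *popt))  # ng/mL for a hypothetical serum sample
```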
Senescence-Associated β-Galactosidase Staining
The effect of MBL on senescence was detected using a senescence-associated β-galactosidase staining kit (Beyotime Biotechnology) according to the manufacturer's instructions. Cells seeded in 6-well plates or paraffin-embedded, formalin-fixed, 5-μm-thick tissue sections were incubated with the combined staining solution overnight. Five random fields from each section were analyzed at a magnification of 100×, 3 random sections from each mouse were analyzed, and data from 3 to 5 mice per group were used to compare the different treatment groups.
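Per-group counts and measurements like these are then compared statistically (see Statistical Analysis below: 1-way ANOVA followed by Tukey post hoc tests). A minimal sketch of that workflow, with invented placeholder values rather than data from this study:

```python
# One-way ANOVA plus Tukey HSD for three hypothetical groups.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

wt = np.array([12.1, 13.4, 11.8, 12.9, 13.0])       # e.g., WT group
ko = np.array([18.2, 17.5, 19.1, 18.8, 17.9])       # e.g., MBL-/- group
rescue = np.array([13.0, 12.4, 13.8, 12.7, 13.5])   # e.g., pAAV-MBL group

f_stat, p = f_oneway(wt, ko, rescue)
print(f"ANOVA: F = {f_stat:.2f}, p = {p:.4g}")

values = np.concatenate([wt, ko, rescue])
groups = np.array(["WT"] * 5 + ["KO"] * 5 + ["rescue"] * 5)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise comparisons
```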
Statistical Analysis
All values are expressed as means ± SEM. The unpaired Student t test was used for two-group comparisons, and 1-way analysis of variance followed by Tukey post hoc tests was used for multiple-group comparisons. P < .05 was considered significant. Statistics were calculated with GraphPad Prism version 8 (GraphPad Software, San Diego, CA). | 2022-04-04T15:30:06.559Z | 2022-04-01T00:00:00.000 | {
"year": 2022,
"sha1": "054a3db6aa522b1571ea014f8be95f55a3229335",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cmghjournal.org/article/S2352345X22000637/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3e0810b27fd9a15d1a8f8363695ab422a97d3e33",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248392150 | pes2o/s2orc | v3-fos-license | BeAGLE: Benchmark $e$A Generator for LEptoproduction in high energy lepton-nucleus collisions
The upcoming Electron-Ion Collider (EIC) will address several outstanding puzzles in modern nuclear physics. Topics such as the partonic structure of nucleons and nuclei, the origin of their mass and spin, among others, can be understood via the study of high energy electron-proton ($ep$) and electron-nucleus ($e$A) collisions. Achieving the scientific goals of the EIC will require not only a novel electron-hadron collider and detectors capable of performing high-precision measurements, but also dedicated tools to model and interpret the data. To aid in the latter, we present a general-purpose $e$A Monte Carlo (MC) generator - BeAGLE. In this paper, we provide a general description of the models integrated into BeAGLE, applications of BeAGLE in $e$A physics, implications for detector requirements at the EIC, and the tuning of the parameters in BeAGLE based on available experimental data. Specifically, we focus on a selection of model and data comparisons in particle production in both $ep$ and $e$A collisions, where baseline particle distributions provide essential information to characterize the event. In addition, we investigate the collision geometry determination in $e$A collisions, which could be used as an experimental tool for varying the nuclear density.
I. INTRODUCTION
One of the pillars of the Standard Model [1,2] is the theory of Quantum Chromodynamics (QCD), which describes the mechanism for the interactions between quarks and gluons [3]. It is a self-contained fundamental theory of quark and gluon fields that is rich in symmetries [4,5]. However, despite the successes of QCD, many fundamental questions remain open to date, some of which will have to be addressed by a highly anticipated new machine - the Electron-Ion Collider (EIC) [6,7].
The upcoming U.S.-based EIC is being designed to achieve a wide range of center-of-mass energies from 20 to 140 GeV, ion beams from deuterons to heavy nuclei (e.g. lead), high luminosities of 10^33-10^34 cm^-2 s^-1, and highly polarized (70%) electron, proton, and light-ion beams [8]. The EIC will be the world's first dedicated electron-nucleus collider and the first collider to scatter polarized electrons off polarized light ions. The EIC science covers a broad range of topics, from detailed investigations of hadronic structure with unprecedented precision to exploring new regimes of strongly interacting matter [9,10]. The EIC will allow us to investigate the full three-dimensional dynamics of the proton, going well beyond the information about the longitudinal-momentum nuclear structure contained in collinear parton distributions. With the unique capability to reach a wide range of momentum transfer Q^2 and Bjorken-x (x_Bj) values, the EIC can offer the most powerful tool to precisely quantify how the spin of gluons and that of quarks of various flavors contribute to the proton spin. Another frontier of the EIC science is to understand the formation of nuclei and their partonic structure. In particular, the nucleus itself is an unprecedented QCD laboratory, where novel nuclear phenomena can be systematically studied by colliding electrons with different nuclear species [6]. However, the challenge of achieving the entire EIC science program via a single machine and a general-purpose detector is also unprecedented. The design of the Interaction Region (IR) and the integration of a general-purpose collider detector, along with its ancillary detectors over ±40 meters along the beam-lines, require careful planning. This design has to be guided and optimized via simulations of the physics processes and their kinematics to achieve the optimal placement of the detectors, to maximize geometric acceptance, and to aid in the identification of the best technologies. Therefore, a general-purpose eA Monte Carlo (MC) model suitable for both investigating the physics and assessing the impact of the machine design is sorely needed.
The BeAGLE (Benchmark eA Generator for LEptoproduction) general-purpose MC generator simulates eA collisions with the production of exclusive final-state particles, including the fragments from the nuclear remnant breakup process [11]. Prior to the present paper, it has already been used extensively for exploring physics with final-state particles at pseudorapidities > 4.5, e.g., diffractive and spectator-tagging physics, and the associated detector/IR integration requirements for the "far-forward" region (ion-going direction) at the EIC [7]. Key physics topics at the EIC, which are very demanding on far-forward detection, include tagging and vetoing of incoherent Vector Meson (VM) production in ePb collisions to enable studies of gluon imaging in nuclei [12] and tagging of the spectator nucleon in eD scattering to allow for the extraction of free nucleon structure [13], as well as to study Short-Range Correlations [14,15] in the deuteron [16].
The design of the far-forward detectors and the subsequent IR-integration issues are urgent at this time because the EIC accelerator design will soon be settled, and the detector technology choices are happening in parallel, with each effort requiring input from the other. Therefore, in order to maximize the EIC physics output and design an interaction region that is optimized for the aforementioned scientific goals, a reliable MC generator that can describe a wide range of final states in different kinematic regions is needed. In this paper, we significantly extend our focus from studies of exclusive observables [12] in the far-forward region to inclusive particle production in both the forward and central regions based on BeAGLE simulations. Moreover, we compare BeAGLE simulations with available fixed-target eA data to further improve the model, and systematically study the dependence of various observables on the model parameters.
The outline of this paper is as follows. A detailed introduction to BeAGLE is given in Sec. II. In Sec. III, we discuss the validation of the PYTHIA-6 MC model [17] against the HERA leading proton data [18]. Based on the established PYTHIA parameters, we compare BeAGLE simulations with fixed-target µA data from the E665 experiment [19]. In Sec. IV, we present results from a systematic investigation of the collision geometry, determined via the detection of neutrons from the nuclear breakup. In Sec. V, we describe future opportunities and challenges for the BeAGLE model. Finally, a summary is provided in Sec. VI.
II. BEAGLE
BeAGLE is a hybrid model that uses modules from DPMJet [20], PYTHIA-6 [17], PyQM [21], FLUKA [22,23] and LHAPDF5 [24] to describe high-energy lepton-nuclear scattering. Overall steering and optional multi-nucleon scattering (shadowing) are provided in BeAGLE, as well as an improved description of the Fermi momentum distributions of nucleons in nuclei (compared to DPMJet). DPMJet is not designed for light nuclei, so substantial changes had to be made for the case when the nucleus is a deuteron; details are described below. The geometric density distribution of nucleons in the nucleus is provided primarily by PyQM, while the parton distributions within that geometry are taken from the EPS09 nuclear parton distribution functions (nPDFs) [25]. BeAGLE also allows the user to provide Woods-Saxon parameters, including non-spherical terms, to override the default geometric density description. The partonic interactions and the subsequent fragmentation process are carried out by PYTHIA-6. The optional PyQM module implements the Salgado-Wiedemann quenching weights to describe partonic energy loss [26]. Hadron formation and interactions with the nucleus through an intra-nuclear cascade are described by DPMJet. The decay of the excited nuclear remnant is described by FLUKA, including nucleon and light-ion evaporation, nuclear fission, Fermi breakup of the decay fragments, and finally de-excitation by photon emission. See Fig. 1 for an illustration and the User's Guide at https://eic.github.io/software/beagle.html.
Due to the structure of the BeAGLE generator, coherent diffraction is currently not included. Since the primary interaction is modeled by PYTHIA-6 at the nucleon level, for any nuclear beam the target nucleus will break up, or at least be excited, in the final state. Furthermore, for diffractive interactions, the lepton-nucleus cross section is assumed to be A times the lepton-nucleon cross section, rather than calculated from first principles. As observed in ep collisions at HERA, coherent diffraction in DIS was found to make up about 15% of the total inclusive DIS cross section [27], while in nuclei it has been predicted that coherent diffractive processes can be enhanced due to possible gluon-saturation effects at high energy [28]. Measurements of coherent diffraction in nuclei are expected to be one of the golden channels to study non-linear QCD effects [29] at the EIC.
In this framework, the lepton-nucleus collision can be illustrated in several steps as follows:
A. The collision is simulated by selecting a struck nucleon in the nucleus according to a Glauber-type model, where the nucleon-level cross section is weighted by the EPS09 nPDFs, leading to an event at the partonic level; optional gluon radiation by PyQM [30], accounting for nuclear medium effects, is available; finally, the fragmentation/hadronization is performed with the Lund string model provided by PYTHIA-6.
B. Hadrons produced during the previous stage participate in a "formation-zone" Intra-Nuclear Cascade (INC) [31], which produces secondary particles.
C. The breakup of the excited nuclear remnants is treated by the FLUKA model.
A. Hard interactions and Fermi momentum
Initial nucleons are placed in coordinate space according to the Woods-Saxon distribution [32] with intrinsic Fermi momentum; some of them will be struck off the nucleus by the virtual photon emitted by the lepton. The corresponding nucleon-level cross section, σ_{γ*N}(x, Q²), is obtained from PYTHIA-6, where the magnitude of σ_{γ*N}(x, Q²) is parametrized such that the ratio σ_{γ*A}/(A σ_{γ*N}) follows the EPS09 nuclear modification factor R(x, Q²) [25]. This scaling at the cross-section level enables general studies of nuclear effects. For the hard scattering between the virtual photon and the struck nucleon discussed above, three different options for the initial collision geometry, including multiple nucleon scattering and shadowing effects [33-35], are available. The BeAGLE framework allows a user-defined parameter, genShd, to switch between the different modes: i) genShd = 1, only one nucleon is probed by the virtual photon and participates in the primary scattering simulated by PYTHIA-6; ii) genShd = 2, if the impact parameter between the virtual photon and any nucleon is less than a distance D = √(σ_{γ*N}/π), one randomly selected nucleon among these is simulated by PYTHIA-6 for an inelastic interaction, while the other nucleons undergo elastic interactions; iii) genShd = 3 is the same as genShd = 2 except that the order is fixed, such that the first struck nucleon always undergoes inelastic scattering, while the rest scatter elastically.
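For concreteness, the following minimal sketch (our own illustration, not BeAGLE source code) mimics the genShd = 2/3 selection logic described above. The nucleon positions, the cross-section value, and the choice of ordering the "first struck" nucleon along the photon (+z) direction are simplifying assumptions.

import numpy as np

def select_struck_nucleon(positions, b_xy, sigma_gamma_n, gen_shd, rng):
    """Toy genShd selection. positions: (N, 3) nucleon coordinates (fm);
    b_xy: transverse impact point of the virtual photon (fm);
    sigma_gamma_n: nucleon-level cross section (fm^2).
    Returns (inelastic nucleon index, elastic nucleon indices)."""
    d_max = np.sqrt(sigma_gamma_n / np.pi)                  # D = sqrt(sigma/pi)
    r_t = np.linalg.norm(positions[:, :2] - b_xy, axis=1)   # transverse distance
    cand = np.where(r_t < d_max)[0]                         # nucleons inside the disk
    if gen_shd == 1 or len(cand) <= 1:
        # single nucleon participates in the primary scattering
        return (cand[0] if len(cand) else None), np.array([], dtype=int)
    if gen_shd == 2:
        inel = rng.choice(cand)                             # random candidate is inelastic
    else:                                                   # gen_shd == 3
        inel = cand[np.argmin(positions[cand, 2])]          # first along the photon axis
    return inel, cand[cand != inel]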
Due to nuclear binding, nucleons inside of a nucleus have internal momentum, commonly known as Fermi motion [36]. In BeAGLE, we adopt a non-relativistic model of the nucleon spectral function, provided by Ref. [37]. This parametrization applies to all nuclei, ranging from deuterons to heavy nuclei, e.g., lead (Pb). For the case of the deuteron, the parametrization has been extended with the Light-Front formalism by Strikman & Weiss [38]. Details from recent BeAGLE deuteron studies can be found in Refs. [13,16].
The nucleon momentum distribution in the ion rest frame is parametrized as

n(k) = n0(k) + n1(k),    (1)

with separate functional forms for light nuclei (A = 2, 3, 4) and for A > 4; the explicit expressions are given in Ref. [37]. Here k is the internal nucleon momentum, and the parameters A(0) through F(0) entering these forms are listed in Tables A1-A2 of Ref. [37]. Note that n0(k) describes the low-momentum part of the wave function, the Mean-Field region, while n1(k) describes the high-momentum tail, known as the Short-Range Correlation region. Currently, only n0(k) has been implemented in BeAGLE; for the deuteron case, however, n(k) = n0(k), and Short-Range Correlations in the high-momentum tail have been studied in Ref. [16]. In addition, for A > 4, the parametrizations are based on a few typical nuclei, A_typical, e.g., Carbon-12, Oxygen-16, Calcium-40, Iron-56, Lead-208, and above. Any nucleus between them, A_select, will use one of the nearest typical nuclei in mass number, such that A_select < A_typical. Differences in n0(k) for various mass numbers A are generally small for A > 12. BeAGLE currently does not account for the Fermi motion in the DIS cross-section calculations, so kinematic distributions, e.g., x and Q², are unmodified from the PYTHIA-6 generator. Accounting for the Fermi momentum would violate energy-momentum conservation, because the primary interaction simulated by PYTHIA-6 assumes an on-shell nucleon mass: the higher the off-shell mass determined from the nuclear wave function, the larger the violation of energy and momentum. In order to correct for this artifact, the excess energy and momentum are absorbed by the remnant nucleus.
However, in the case of the deuteron (or light nuclei in general) this correction is not appropriate, because there is only one spectator nucleon (or a few spectator nucleons) in the system, and the correction would artificially distort the spectator momentum distribution. Therefore, for the deuteron we leave the spectator unmodified, and the excess energy and momentum are instead absorbed by the outgoing particles from the current fragmentation. With this approach, spectator tagging and related physics topics can be studied with the genuine information from the wave function. No Final-State Interactions (FSI) are present in the BeAGLE generator for lepton-deuteron collisions.
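The sketch below illustrates how an intrinsic nucleon momentum could be sampled from a mean-field-like distribution. The functional form and the numbers a0, b0 are illustrative placeholders only; they are not the Ref. [37] parametrization or parameter values.

import numpy as np

def sample_fermi_momentum(rng, k_max=0.5, a0=1.0, b0=30.0):
    """Rejection-sample a 3-vector k (GeV/c) from a toy mean-field
    distribution n0(k) ~ a0*exp(-b0*k^2); the k^2 factor below is the
    momentum-space phase space for the magnitude |k|."""
    f = lambda k: k**2 * a0 * np.exp(-b0 * k**2)
    f_max = f(np.sqrt(1.0 / b0))          # maximum of k^2*exp(-b0*k^2) at k = 1/sqrt(b0)
    while True:
        k = rng.uniform(0.0, k_max)
        if rng.uniform(0.0, f_max) < f(k):
            direction = rng.normal(size=3)
            return k * direction / np.linalg.norm(direction)

rng = np.random.default_rng(0)
print(sample_fermi_momentum(rng))         # e.g. array of (kx, ky, kz) in GeV/c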
B. Intra-nuclear Cascade
The generated particles from the primary scattering are placed at the struck-nucleon position and transported through the INC following the formation-zone formalism implemented in DPMJet [20]. Each primary particle is assigned a formation time sampled from an exponential distribution with the characteristic time scale τ [40,41], defined in the lab frame as

τ = τ0 E m / (m² + p⊥²),

where E, m and p⊥ are the energy, mass, and transverse momentum of the produced particle, respectively. The parameter τ0 is treated as a free parameter to be determined/tuned from experimental data. These produced particles can induce secondary interactions (a cascade process) if they are formed inside the nucleus; particles with higher energy and smaller transverse mass are more likely to be formed outside and leave the nucleus without a secondary interaction. The value of the τ0 parameter has been systematically extracted from experimental data in our previous publication [12], and in the present work.
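A minimal sketch of this sampling, assuming the lab-frame characteristic scale written above (τ0 in fm/c; E, m, p⊥ in GeV):

import numpy as np

def formation_time(rng, energy, mass, pt, tau0=10.0):
    """Sample a lab-frame formation time (fm/c) from an exponential
    distribution with characteristic scale tau = tau0*E*m/(m^2 + pT^2).
    Larger E or smaller transverse mass gives a longer formation time."""
    tau_char = tau0 * energy * mass / (mass**2 + pt**2)
    return rng.exponential(tau_char)

rng = np.random.default_rng(1)
print(f"{formation_time(rng, energy=10.0, mass=0.14, pt=0.3):.1f} fm/c")  # a 10 GeV pion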
Since forward neutron production from the evaporation process is sensitive to the INC, we use the multiplicity data of neutron emission in µPb collisions from the E665 experiment at Fermilab [39] to tune the τ0 parameter. BeAGLE does not simulate coherent diffractive events, which produce no neutrons in the final state, while the E665 data do include contributions from the coherent diffractive process. In order to properly use the neutron multiplicity to determine the τ0 parameter, a weight f = N_coherent/N_total is needed for the BeAGLE model to account for the coherent contribution in the cross-section data.
Note that the coherent event fraction f was not explicitly determined in the E665 experiment [39]; thus a few assumptions on f were made in order to determine the value of τ0. Figure 2 (left) shows the average neutron multiplicity ⟨N_n⟩ as a function of photon energy ν as measured by the E665 experiment and simulated by BeAGLE with f = 24%, for different values of τ0. A constant fit is performed to the E665 data, where the yellow band shows the statistical uncertainty corresponding to one standard deviation; the ⟨N_n⟩ from E665 is found to be 4.7 ± 0.5. With the assumption of f = 24%, the best value of τ0 is found to be 10 fm. The choice of f is inspired by HERA measurements [27], where one finds a large fraction of diffractive events, contributing about 15% to the total deep inelastic cross section for ep collisions [42,43]. However, theoretical studies, e.g., Ref. [28], indicate that the ratio of diffractive events to the total cross section in eA could be larger than what is observed in ep collisions, due to non-linear QCD effects. In Fig. 2 (right), the average neutron multiplicity as a function of f is presented, where the dotted line with the yellow band represents a match to the E665 data for different assumptions on f, given by (4.7 ± 0.5)/(1 − f). The straight colored lines show a few selected τ0 values with their corresponding ⟨N_n⟩ in BeAGLE. As shown in Fig. 2 (right), for τ0 = 14 fm the corresponding f value is less than that in ep collisions at HERA, while for τ0 = 5 fm or 7 fm the corresponding f needs to be larger than 0.3, which exceeds current theoretical predictions [28,44]. Therefore, we use τ0 = 10 fm as the default setting in the BeAGLE model, while in the following analysis we perform systematic studies using other τ0 values.
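The matching condition is simple enough to tabulate directly; the short sketch below evaluates the target ⟨N_n⟩ that a coherent-free simulation must reproduce for a few assumed values of f (the f values are example inputs, not fitted results):

import numpy as np

# BeAGLE (no coherent events) must reproduce <N_n> = (4.7 +/- 0.5)/(1 - f)
measured_nn, err = 4.7, 0.5
for f in (0.15, 0.24, 0.30):
    target = measured_nn / (1.0 - f)
    print(f"f = {f:.2f}: required <N_n> = {target:.2f} +/- {err / (1.0 - f):.2f}")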
C. Nuclear remnant breakup
After all possible secondary interactions are exhausted, excitation energies of the nuclear remnant can be calculated by summing up the recoil momenta transferred to the remnant by the particles leaving the nuclear potential. The breakup of the nuclear remnant is modeled using fission, the evaporation of nucleons and light nuclei, and photon emissions within the FLUKA machinery [22,23] for a given excitation energy. Since FLUKA is not an open-source program, the BeAGLE event generator has no handle on changing the evaporation process and can only adjust the INC in the previous step.
III. DATA AND MC COMPARISON
In this section, comparisons between experimental data and the BeAGLE MC are presented. We start with the ep case using the PYTHIA-6 model, which is independent of the eA modeling in BeAGLE. The target fragmentation of the leading-proton distribution has been investigated, and a good set of baseline parameters for the nucleon target fragmentation is established; the leading-proton data are from the ZEUS experiment at HERA [45]. These improved PYTHIA-6 parameters are used later in the BeAGLE event generator. After that, we show a comparison of BeAGLE and E665 data [19] for inclusive charged-particle rapidity distributions in both the forward and backward regions for µXe collisions.
A. Comparison between PYTHIA-6 and ep data at ZEUS
Since PYTHIA-6 is used to model the primary interaction in BeAGLE, it is crucial to optimize the parameters used in this stage of the framework. Leading-proton data collected by the ZEUS experiment at HERA [45] were examined in order to optimize the PYTHIA parameters for the fragmentation p_T and intrinsic k_T. Three parameters were investigated in PYTHIA and are detailed in Table I. MSTP(94) controls the energy partitioning in the beam-remnant cluster decay. The default value of 3 uses the regular fragmentation function, while MSTP(94)=2 uses the function P(χ) = (k + 1)(1 − χ)^k, where χ is the light-cone energy fraction taken by the hadron or diquark; the fragmentation function corresponding to MSTP(94)=2 and PARP(97)=6 is therefore P(χ) = 7(1 − χ)^6. PARJ(170) is a parameter which we added to PYTHIA to allow separate control of the Gaussian rms p_T for hadrons in the recoil, which in standard PYTHIA is set to be the same as that for the string fragmentation, PARJ(21). [Fig. 3 caption: (Top) The p_T²-slope, b, of the leading-proton cross section d²σ_LP/dx_L dp_T², as defined by the parametrization A·e^{−b·p_T²} and obtained from a fit to the data in bins of x_L. (Bottom) The single differential cross section of the leading proton normalized to the total DIS cross section, 1/σ_inc · dσ_LP/dx_L. The MC results, represented by lines, are compared to data from Ref. [45].] Figure 3 shows comparisons of the calculated distributions with the measurements of leading protons from the ZEUS experiment [45] for positron-proton scattering with beam energies of E_e = 27.5 GeV and E_p = 820 GeV. In Ref. [45], the semi-inclusive reaction e+p → e+Xp was studied with the ZEUS detector with an integrated luminosity of 12.8 pb⁻¹. The final-state proton, carrying a large fraction of the incoming proton energy but a small transverse momentum, was detected by the ZEUS leading proton spectrometer (LPS). These measurements were carried out in the kinematic range Q² > 3 GeV², 45 < W < 225 GeV, y > 0.03, with the leading proton measured in p_T² < 0.5 GeV², x_L > 0.32, where x_L is the ratio of the longitudinal momentum of the measured proton to the incoming proton beam momentum. Figure 3 (top) shows the p_T²-slope, b, of the cross section d²σ_LP/dx_L dp_T² for leading protons, as defined by the parametrization A·e^{−b·p_T²} and obtained from a fit to the data in bins of x_L. The black points represent the data, while the yellow band is the experimental systematic uncertainty [45]. The other colored lines represent the distributions for different values of PARJ(170) in PYTHIA, corrected for LPS acceptance effects. The distribution of the single differential cross section normalized to the total DIS cross section, 1/σ_inc · dσ_LP/dx_L, is shown in Fig. 3 (bottom), where σ_inc = 223.9 nb. The optimal parameter in this comparison is found to be PARJ(170) = 0.32.
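The MSTP(94)=2 fragmentation function can be sampled in closed form: its cumulative distribution is F(χ) = 1 − (1 − χ)^{k+1}, which inverts to χ = 1 − u^{1/(k+1)} for u uniform in (0,1). The sketch below (our own illustration, not PYTHIA internals) verifies the sampling against the analytic mean ⟨χ⟩ = 1/(k + 2):

import numpy as np

def sample_chi(rng, k=6, n=100_000):
    """Inverse-CDF sampling of P(chi) = (k+1)*(1-chi)^k."""
    u = rng.uniform(size=n)
    return 1.0 - u ** (1.0 / (k + 1))

rng = np.random.default_rng(42)
chi = sample_chi(rng, k=6)   # PARP(97)=6 corresponds to P(chi) = 7*(1-chi)^6
print(f"<chi> = {chi.mean():.4f} (analytic 1/(k+2) = {1/8:.4f})")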
There are two additional parameters that are sensitive to the leading-proton distribution: i) PARJ(21), the width of the transverse momentum distribution in the fragmentation, and ii) PARP(91), the Gaussian width of the intrinsic k_T distribution. We find that the result with PARJ(21) = PARP(91) = 0.32 agrees best with the ZEUS leading-proton data. However, a value of 0.4, tuned to data collected by the HERMES experiment at HERA [46,47], does a better job in the current fragmentation region. It is noted that to describe the European Muon Collaboration (EMC) data [48], higher values of the fragmentation p_T and intrinsic k_T are preferred. We use PARJ(21) = PARP(91) = 0.4 as the default settings for the results presented in this study. The PYTHIA parameters used in this paper which differ from the default values are summarized in Table II in the Appendix.
B. Comparison between BeAGLE and the E665 µXe data

A challenge in validating the BeAGLE generator is that only limited eA collision data are available for comparison. The best available data are measurements of particle production from the E665 experiment [19] at Fermilab. In Ref. [19], the data were collected with the E665 spectrometer, which used the 490 GeV muon beam from the Tevatron at Fermilab. The experiment used a streamer chamber as a vertex detector, providing nearly 4π acceptance for charged particles. Results of charged-hadron production in muon-xenon (µXe) and muon-deuteron (µD) collisions [19] are used for comparison with the BeAGLE model. The general picture of the interaction is that the virtual photon, emitted by the incoming muon, interacts with a parton of a nucleon in the target nucleus. The hadronic center-of-mass frame (cms) is defined as the system formed by the virtual photon and the target nucleon: the struck parton is scattered into the forward direction, while the target remnant travels into the backward direction. Given the limited particle-identification capability of the E665 experiment [19], all positively charged hadrons in the data with x_F(m_π) less than −0.2 are assigned the proton mass, while all other positively and negatively charged hadrons are treated as pions. The variable x_F(m_π) is defined as x_F = 2p*_L/W, with p*_L being the longitudinal momentum of the hadron in the cms frame, computed assuming all particles are pions. In order to properly compare BeAGLE simulations with the results from Ref. [19], the same partial identification of particles is applied to the BeAGLE simulated events. The version of the BeAGLE event generator used here is 1.01.03; the BeAGLE control card is shown in Table III in the Appendix. Figure 4 shows the distributions of W, Q², ν, and x_Bj for BeAGLE compared with the E665 µXe data [19]. The positive muon beam with an energy of 490 GeV is scattered off a Xe target. A set of kinematic cuts is applied to select events: θ > 3.5 mrad, Q² > 1 GeV², 8 < W < 30 GeV, and 0.1 < ν/E_µ < 0.85. The red solid lines represent the generated MC events from BeAGLE, while the E665 data [19], corrected for detector acceptance effects, are shown as black open circles. The ratio between the BeAGLE events and the corrected E665 results is shown at the bottom of each plot. The comparison shows that BeAGLE gives a reasonable description of the kinematics of the E665 data, while large deviations can be seen at small x_Bj and high Q². In Fig. 5, BeAGLE results for charged-particle production are shown for several τ0 values with respect to our default value of τ0 = 10 fm, which was determined from the evaporated neutron multiplicity data (see Fig. 2 (right)). Lower τ0 values in Fig. 2 (right) would suggest a very large fraction of diffractive events in µA collisions. However, the discrepancy in negatively charged particle production needs to be considered for a clear understanding of the τ0 dependence.
In Fig. 5(c) and (d), the average charged-particle multiplicities for positive and negative y* are shown based on BeAGLE simulations; different τ0 parameters in the BeAGLE model and the E665 data are also presented for comparison. The BeAGLE distributions for charged hadrons underestimate the E665 data. However, with a lower τ0 value, the average multiplicity distributions for charged hadrons in the target fragmentation region are improved. The quantitative dependence on τ0 is similar to that of the positively charged hadrons shown in Fig. 5(a), as positively charged hadrons dominate the target fragmentation region.
We find that the BeAGLE model underestimates the multiplicity everywhere, especially for negatively charged particles and in the current fragmentation region, neither of which shows a τ0 dependence. Although the data may suggest a lower τ0 parameter in µA collisions, this comparison also implies that something other than τ0 plays an important role in particle production. Recent results from the H1 experiment at HERA report a measurement of the charged-particle multiplicity distribution [49] over a wide range of DIS kinematics, where the PYTHIA-8 model [50,51] also underestimates the data almost everywhere. While a separate analysis of this subject in ep DIS is highly important, e.g., as a Rivet analysis [52], the analysis in this paper focuses only on parameters sensitive to nuclear effects.
In order to quantitatively understand the difference between the small amount of available experimental data and the BeAGLE model, we investigate particle production in a differential way. In Fig. 6, the normalized cms-rapidity y* distributions for positively and negatively charged hadrons are shown in µXe ((a) and (b)) and µD ((c) and (d)) collisions with 490 GeV muon beams; the selected kinematic phase space is 14 < W < 20 GeV. In µXe events, for positively charged hadrons, no τ0 dependence is found at forward rapidities, while a strong dependence is observed in the backward region. In the E665 y* distribution comparisons with BeAGLE simulations shown in Fig. 6, BeAGLE underestimates the forward particle production and predicts a different peak position for the backward production. Additionally, BeAGLE underestimates the negatively charged particles, and all charged particles in µD, almost everywhere in rapidity except for the very forward and backward regions. For both the µXe and µD systems, similar observations are found in other W ranges; the discrepancies between the data and BeAGLE therefore do not depend on the kinematics. To further isolate the contributions from the primary interaction and the nuclear remnant fragmentation, we study the difference between positively and negatively charged particles, which is more sensitive to effects like the INC. The normalized cms-rapidity y* distributions of the net charge, ρ+(y*) − ρ−(y*), are shown in Fig. 7 for both µXe and µD collisions, where ρ± is defined as

ρ±(y*) = (1/N_ev) dN±/dy*.

Here N_ev is the number of selected events and N± is the number of positively or negatively charged hadrons, respectively. In µXe events, for charged hadrons, there is no τ0 dependence of the ρ± distribution at forward rapidity or in the current fragmentation region, similar to what is found in Fig. 6. However, in the backward region, despite the large τ0 dependence in the BeAGLE model, the peak position of the distribution is found to be stable for all τ0 values, and differs from the E665 data by about half a unit of rapidity. In µD events, the τ0 dependence is hardly visible, while the shift in the peak position in the backward region is even larger than that in µXe events.
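A histogram estimate of this net-charge density is straightforward to build from event-level particle lists; the sketch below is an illustrative analysis snippet (arrays and binning are placeholder inputs):

import numpy as np

def net_charge_density(y_star, charge, event_id, bins):
    """rho_+(y*) - rho_-(y*), with rho_± = (1/N_ev) dN_±/dy*:
    per-event-normalized charged-hadron densities, as defined above."""
    y_star, charge = np.asarray(y_star), np.asarray(charge)
    n_ev = len(np.unique(event_id))          # number of selected events
    width = np.diff(bins)                    # bin widths for dN/dy*
    pos, _ = np.histogram(y_star[charge > 0], bins=bins)
    neg, _ = np.histogram(y_star[charge < 0], bins=bins)
    return (pos - neg) / (n_ev * width)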
Comparisons of the normalized y* distributions between µXe and µD collisions for positively charged hadrons are presented in Fig. 8 for both E665 data and BeAGLE simulations. Here, BeAGLE uses τ0 = 3 fm but shows different assumptions for the final-state nuclei. The discrepancy still exists in the backward region, where the peak positions from BeAGLE sit at larger negative values of y* compared to the E665 data. Since there is no clear description of remnant-nuclei detection in Ref. [19], we try a few different ways to treat the remnant nuclei in the BeAGLE simulation. The red line is the result with all remnant nuclei included. The magenta line denotes a randomly selected 50% of all nuclei. The blue, green, and orange lines represent only nuclei whose mass number A is smaller than 4, a random selection of 25% of all nuclei, and no nuclei, respectively. With different fractions of nuclei included, the net charge density in the region −4 < y* < −2 changes, while the peak position remains the same.
For the comparisons presented above, specifically Figs. 5 to 8, a few things should be noted. First, the particle identification in the E665 experiment assigned either a proton or pion mass in the reconstruction, based on the x_F value. In the absence of a more precise particle identification (PID) method, such as those currently in use (e.g., dE/dx or time-of-flight measurements), this approach might be problematic. Figure 9 shows the y* distributions of π+ mesons (upper) and protons (lower) in µXe collisions using a 490 GeV muon beam from the BeAGLE generator. The red points, labeled "E665 selection", represent the same method as the E665 data from Ref. [19], while the blue curves represent the result based on the true mass from the MC PID, and the magenta curves assume a wrong mass assignment, e.g., the proton mass for pions (top) and the pion mass for protons (bottom). Pions are mostly produced in the hard scattering and dominate the current fragmentation region. A large proportion of protons are generated during the INC process, and their y* distribution is concentrated in the region −3 < y* < −2.5. If the protons were mis-identified as pions, the cms-rapidity y* would be shifted toward more central values of rapidity. Although the data were fully corrected at the particle level, residual mis-identification in the data, and a subsequent discrepancy between the data and BeAGLE in particle composition, are possible. Secondly, the E665 measurement from Ref. [19] did not explicitly describe the details of the experimental detection of remnant nuclei. Although the peak position of µXe − µD shown in Fig. 8 remains the same, the details of remnant-nuclei detection, together with a different particle composition as described above, may cause the peak position of the distribution to change. Finally, the missing coherent diffractive events in the BeAGLE model could be another reason for the observed discrepancy: diffractive DIS events would have a rapidity gap, and their y* distribution would be expected to be shifted more towards the forward than the backward direction.
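The E665-style mass assignment applied to the BeAGLE events can be expressed in a few lines; the snippet below is our illustrative rendering of that rule (inputs are assumed to be per-hadron arrays):

import numpy as np

def e665_mass_assignment(charge, p_long_cms, w):
    """E665-style partial PID: positive hadrons with x_F(m_pi) < -0.2 are
    assigned the proton mass, everything else the pion mass. p_long_cms is
    the hadron's longitudinal cms momentum (computed with the pion-mass
    hypothesis), w the hadronic cms energy W."""
    M_PI, M_P = 0.1396, 0.9383                      # GeV
    x_f = 2.0 * np.asarray(p_long_cms) / w          # x_F = 2 p*_L / W
    is_proton = (np.asarray(charge) > 0) & (x_f < -0.2)
    return np.where(is_proton, M_P, M_PI)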
In addition to Ref. [19], a similar result was reported by the E665 Collaboration in Ref. [53]. That study employed so-called "gray tracks" to enhance proton identification: particles whose momenta are between 200 and 600 MeV/c and whose streamer density, as observed in the streamer-chamber pictures, is clearly higher than that of a minimum-ionizing particle. Unfortunately, the data reported in Ref. [53] were not corrected for experimental inefficiencies, and there is no reliable method to study the impact of such gray tracks in our simulations. In light of these challenges, a truly equivalent comparison between the existing data and BeAGLE cannot be made. Therefore, in order to further understand particle production over a wide range of rapidity, only the EIC can provide more information about target fragmentation in lepton-nucleus collisions. Figure 10 shows the normalized distribution of positively charged particles as a function of pseudorapidity (η) at the top EIC energy, simulated by BeAGLE. The total distribution includes all particle species, depicted by the black curve; other colors indicate the distributions for individual particle species. Almost all pions and kaons are produced during the hard collision, and their pseudorapidities range from −4 to 4, which falls into the acceptance of the expected general-purpose detector of the EIC. Protons are distributed across a wide range of pseudorapidity, from −4 to 10, where three different far-forward proton detectors (B0 tracker, Off-momentum detector, and Roman Pots) can cover a large fraction of the phase space at pseudorapidity > 4.5 [7]. Nuclei are produced in the last step of the BeAGLE processes via evaporation, and they are separated into two kinematic regions: the nuclei distributed within 7 < η < 10 are light nuclei, e.g., deuterons and alpha particles, while the large remnant nuclei are distributed within 10 < η < 15. Detecting these nuclei is a major experimental challenge and is one of the ongoing efforts for the future EIC, hopefully achieved through optimizing the far-forward instrumentation and the 2nd IR design [7].
IV. COLLISION GEOMETRY DETERMINATION IN LEPTON-NUCLEUS INTERACTIONS
In this section, we show an example of how BeAGLE can help optimize measurements with different collision geometries in lepton-nucleus interactions at the future EIC. Precise quantification of nuclear effects in eA collisions requires knowledge of the underlying collision geometry. In fixed-target DIS experiments on nuclei up to now, the collision geometry has only been qualitatively investigated by varying the target nucleus. At the EIC, however, it is possible to characterize the collision geometry event-by-event by studying the nuclear breakup, an idea initially introduced in Ref. [41]. The collision geometry in each event can be linked to the multiplicity of evaporation neutrons at very forward rapidities (> 4.5), measured by the Zero-Degree Calorimeter (ZDC) (see Ref. [7] for details). In the following, we introduce variables that are sensitive to the collision geometry and their correlation with experimental observables, e.g., evaporated neutrons and protons. Compared to Ref. [41], we provide more systematic studies by varying model parameters in the BeAGLE generator to demonstrate the robustness of this measurement. This result provides an important experimental handle for all inclusive and semi-inclusive DIS measurements at the EIC. The collision geometry in DIS reveals the underlying spatial information of the nuclear matter probed by the exchanged virtual photon with respect to the rest of the nuclear target. Depending on the physics process under study, different collision-geometry quantities can be defined. In Ref. [41], the fiducial traveling length and the impact parameter were used as important controls to quantify the effects of parton energy loss and gluon saturation. In this paper, we define the effective interaction length, d, as the distance between the photon-nucleon interaction point and the edge of the nucleus in the direction of the virtual photon, weighted by the nuclear density and normalized by ρ0:

d = (1/ρ0) ∫_{z0}^{∞} ρ(b, z) dz.

Here, z0 is the position of the nucleon involved in the scattering along the photon moving direction, and b is the impact parameter. If multiple nucleons participate in the interaction, we use the effective interaction length averaged over all the involved nucleons. This definition avoids the possible negative region of the fiducial d used in Ref. [41] (if one scattered nucleon fluctuates outside the geometric nuclear radius, the fiducial d becomes negative) and is more directly connected to the amount of nuclear material. These geometric variables are depicted in Fig. 11. In addition, we use the scaled thickness function T(b)/ρ0 as an alternative characterization of the collision geometry,

T(b)/ρ0 = (1/ρ0) ∫_{−∞}^{∞} ρ(b, z) dz,

in units of fm. This quantity can be explicitly studied together with gluon-saturation physics in eA collisions.
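Both quantities reduce to one-dimensional integrals over the nuclear density; a minimal numerical sketch, assuming a spherical Woods-Saxon profile with typical lead parameters (R ≈ 6.62 fm, a ≈ 0.546 fm, ρ0 ≈ 0.17 fm⁻³, placeholder values rather than the BeAGLE defaults), is:

import numpy as np
from scipy.integrate import quad

RHO0, R_A, A_WS = 0.17, 6.62, 0.546        # fm^-3, fm, fm

def rho(r):
    """Woods-Saxon nuclear density."""
    return RHO0 / (1.0 + np.exp((r - R_A) / A_WS))

def effective_length(b, z0, z_max=30.0):
    """d = (1/rho0) * integral_{z0}^{inf} rho(sqrt(b^2 + z^2)) dz, in fm."""
    val, _ = quad(lambda z: rho(np.hypot(b, z)), z0, z_max)
    return val / RHO0

def thickness(b, z_max=30.0):
    """T(b)/rho0 = (1/rho0) * integral over all z of rho(sqrt(b^2 + z^2)) dz, in fm."""
    val, _ = quad(lambda z: rho(np.hypot(b, z)), -z_max, z_max)
    return val / RHO0

print(f"d(b=0, z0=0) = {effective_length(0.0, 0.0):.2f} fm")
print(f"T(b=0)/rho0  = {thickness(0.0):.2f} fm")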
The most abundant final-state products of the nuclear breakup are evaporated protons and neutrons. The left and right panels of Fig. 12 show the distributions of neutrons and protons, respectively, as functions of momentum and scattering angle at two collision energies (18 × 110 GeV and 5 × 50 GeV). The evaporation momenta are close to the beam momentum and the scattering angles are small (a few milliradians): at a beam energy of 50 GeV, the largest scattering angle is about 6 mrad, while at 110 GeV the maximum scattering angle is about three times smaller. In contrast to neutrons, only a few protons are emitted at very small angles, because the protons need to overcome the Coulomb barrier to leave the nucleus. As the number of protons emitted during the nuclear evaporation is significantly lower than that of neutrons, the properties of the nuclear evaporation process are best studied by measuring neutrons.
The neutron multiplicity has been demonstrated to be a tool for accessing the collision-geometry variables [41], inspired by how centrality is determined in heavy-ion collisions [54]. As it is difficult to directly count a large number of neutrons, we use the energy deposition in the ZDC, similar to Ref. [41]. Higher energy deposited in the ZDC (a larger multiplicity of evaporation neutrons) is expected to correspond to more central events. The distribution of the energy deposition in the ZDC, E_n^ZDC (at the generated level), is shown in Fig. 13(a), where the blue area corresponds to events with a centrality of 0-1%, i.e., the top 1% of events, with energy deposition greater than 2.82 TeV. The red area corresponds to events with a centrality of 60-100%, with energy less than 0.44 TeV. In Fig. 13(b), the average traveling distance d is shown as a function of the ZDC energy percentile class. The value of d for minimum-bias (0-100%) events is 4.402 fm. d clearly decreases when going from a 0-1% to a 0-10% centrality selection; the tighter 0-1% selection, however, costs a factor of 10 in statistics. This trend is not obvious in peripheral collisions. For the following analysis, we choose 0-1% as the central-collision class and 60-100% as the most peripheral class.
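Mapping ZDC-energy percentiles onto centrality classes is a one-line percentile calculation; the sketch below illustrates it on a toy E_ZDC spectrum (the gamma-distributed spectrum is a stand-in, not BeAGLE output):

import numpy as np

def centrality_edges(e_zdc, classes=((0, 1), (60, 100))):
    """Centrality c corresponds to the (100 - c)-th percentile of E_ZDC,
    since the highest-energy events are the most central."""
    e_zdc = np.asarray(e_zdc)
    edges = {}
    for lo, hi in classes:
        e_hi = np.percentile(e_zdc, 100 - lo)   # upper energy edge of the class
        e_lo = np.percentile(e_zdc, 100 - hi)   # lower energy edge of the class
        edges[f"{lo}-{hi}%"] = (e_lo, e_hi)
    return edges

rng = np.random.default_rng(1)
toy_spectrum = rng.gamma(2.0, 0.5, size=100_000)   # toy E_ZDC values (TeV)
print(centrality_edges(toy_spectrum))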
The correlations between the deposited energy in the ZDC and the impact parameter b, and between the ZDC energy and the nuclear thickness T(b)/ρ0, are shown in Fig. 14(a) and (c), respectively. With increasing energy, b decreases while T(b)/ρ0 becomes larger. Figures 14(b) and (d) show the b and T(b)/ρ0 distributions in central (0-1%) and peripheral (60-100%) collisions, respectively, normalized by the total number of events. A clear difference between central and peripheral collisions is seen in both b and T(b)/ρ0; by selecting different centrality classes, we thus obtain an experimental handle on the collision geometry.
C. Systematic study of collision geometry
In this subsection, we perform four different systematic tests, shown in Figs. 15 and 16. First, as the centrality is selected via the energy deposition in the ZDC, we take the ZDC energy resolution and the angular acceptance (θ < 5.5 mrad) into account. We assume a ZDC energy resolution of σ_E/E = 100%/√E + 10% and smear the energy of each individual neutron with a Gaussian distribution. Figure 15(a) and Fig. 16(a) illustrate the change in the b and T(b)/ρ0 distributions after detector smearing, respectively. The black points represent central collisions, while the red points depict peripheral collisions. The solid markers show the generated distributions without smearing, while the open markers include detector smearing. The results at the generator level and after detector smearing are almost identical; the small impact of the ZDC energy resolution on the centrality determination does not put stringent requirements on the ZDC performance.
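A minimal sketch of this smearing procedure, assuming the resolution form quoted above with E in GeV (our illustration, not the analysis code):

import numpy as np

def smear_zdc(e_neutrons_gev, rng):
    """Smear each neutron energy with a Gaussian of relative width
    sigma_E/E = 100%/sqrt(E) + 10%, then sum the event's ZDC energy."""
    e = np.asarray(e_neutrons_gev, dtype=float)
    rel_sigma = 1.0 / np.sqrt(e) + 0.10
    return np.sum(rng.normal(e, rel_sigma * e))

rng = np.random.default_rng(7)
print(f"smeared E_ZDC = {smear_zdc([110.0] * 20, rng):.1f} GeV")  # 20 beam-energy neutrons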
Secondly, the default options in this analysis are τ0 = 10 fm and genShd = 3. In order to study the impact of τ0 on centrality, it was lowered to 3 fm. A smaller τ0 means more particles are formed inside the nucleus, which results in more neutrons emitted from the nuclear breakup and consequently a larger energy deposition in the ZDC. Figure 15(b) and Fig. 16(b) show the b and T(b)/ρ0 comparisons for τ0 = 10 and 3 fm in both central and peripheral collisions for the genShd = 3 case, respectively. No significant difference between the τ0 = 10 and 3 fm distributions is observed for peripheral events, while some differences appear for central events. Overall, however, the differences are small, showing a weak dependence on τ0.
Thirdly, the energy of the emitted particles scales with the beam energy. However, for the b distribution, no significant beam-energy dependence is observed in either central or peripheral collisions, as shown in Fig. 15(c). The same behavior is observed for T(b)/ρ0, as summarized in Fig. 16(c). This indicates that there is no beam-energy dependence of the centrality definition. Therefore, although some model parameters are not precisely determined in BeAGLE, we find the correlation between the ZDC energy and the collision geometry to be very stable.
To model nuclear shadowing effects, BeAGLE has three different options implemented, as described in Sec. II A. Studies indicate a very small effect of shadowing on the energy deposition in the ZDC in the BeAGLE framework. Predictions for b and T(b)/ρ0 with the different shadowing options are also studied: Fig. 15(d) and Fig. 16(d) show the comparison of b and T(b)/ρ0 between genShd = 3 and genShd = 1, respectively. In both distributions, no difference is observed between the two shadowing options in central collisions, but some differences are seen in peripheral collisions. The observed differences arise from the low-Q² region. This can be understood from the fact that the photon's effective wavelength scales as λ ∝ 1/Q: at low Q², the photon has a large wavelength and can interact with many nucleons at once, whereas at high Q² the wavelength of the photon is small and fewer nucleons participate in the interaction. No difference as a function of x_Bj is found between the two shadowing options.
V. DISCUSSION
In the previous sections, we compared ep DIS events from the PYTHIA event generator to data from the ZEUS experiment at HERA, as well as µXe and µD collision results from the BeAGLE event generator to E665 data at Fermilab. The results show that we can tune the PYTHIA model to describe target fragmentation in ep collisions, while BeAGLE cannot fully describe the target fragmentation region in eA at E665. Model uncertainties, e.g., in τ0, and insufficient knowledge of the experimental selection in E665 might be responsible for the observed discrepancy. In order to further improve our understanding on the way toward the EIC, currently available Ultra-Peripheral Collision (UPC) data at the Relativistic Heavy-Ion Collider, e.g., the recent data on J/ψ photoproduction in deuteron-gold UPCs [55], and UPC data from the Large Hadron Collider will be extremely valuable, along with tagged target-fragmentation studies at the Continuous Electron Beam Accelerator Facility at Jefferson Lab. These data provide a new pathway for the study, validation, and improvement of the BeAGLE generator.
In addition, BeAGLE currently cannot simulate coherent diffraction in eA due to the construction of the model; this is closely related to the determination of the formation-time parameter τ0. A future direction for BeAGLE development is therefore in this area, where coherent diffraction will provide important insights into the underlying gluon dynamics in the nucleus.
In parallel to this work, there are recent efforts on improving the parton energy-loss model PyQM in a separate study [30], on modifying the DIS kinematics in light nuclei to account for Fermi momentum, on implementing the EMC effect [56-60], and on including short-range correlations using a generalized contact formalism [61-63] for lower-energy scattering. All past studies, the current work, and future studies position BeAGLE as the prime MC tool for studying lepton-nucleus collisions at high energy, particularly towards the upcoming EIC.
VI. SUMMARY
In this work, we provide a comprehensive description of a high-energy lepton-nucleus MC event generator, BeAGLE. We validate the model by comparing simulated observables from BeAGLE to available experimental data. The comparison of the PYTHIA-6 model calculations with the ZEUS experimental data in ep collisions shows that, with refined tunes, PYTHIA describes target fragmentation in lepton-proton collisions well. The BeAGLE event generator describes the E665 lepton-nucleus data for various kinematic variables, but it gives only a fair description of the charged-particle production spectra measured by the E665 experiment. In order to obtain a full understanding of particle production in the current and target fragmentation regions, a future facility for high-energy lepton-nucleus collisions, e.g., the EIC, is required.
Based on the BeAGLE event generator, a systematic investigation of collision-geometry determination using the detection of neutrons from the nuclear breakup is presented. We find that forward neutron production provides a good experimental handle on the effective interaction length and the nuclear thickness. These parameters will be important for quantitative studies of partonic energy loss in the nuclear medium and of non-linear gluon dynamics. Detector requirements for a ZDC are discussed: the energy resolution has a small impact on the centrality determination and thus does not put stringent requirements on the detector, in contrast to studies of spectator tagging [13,16]. In addition, we present the dependence of the collision geometry on shadowing effects, the formation-time parameter τ0, and the beam energy. All systematic variations are found to have a small impact on the determination of the collision geometry, showing that this robust experimental measurement has minimal model dependence. The study reported in this paper provides an important baseline for developing a general-purpose MC event generator for high-energy lepton-nucleus collisions.

APPENDIX

Table II summarizes the PYTHIA parameters used in this paper which differ from the default values.
The meaning of each parameter and its default value can be found in Ref. [17]. Except for the parameters introduced in Sec. III A, the others were tuned to HERMES data [46,47]. Note that MSTP(17)=6 and PARP(166)=0.67597 are not standard PYTHIA-6 parameters: they define an alternative parametrization of R_VMD with respect to the default, where R_VMD is the ratio of the hadronic cross sections of longitudinally to transversely polarized vector mesons, parametrized in terms of C = PARP(165) and B = PARP(166); see Refs. [17,47] for the explicit form.

Table III shows the BeAGLE input control card, including the meaning of each parameter:

• Beam specification: the lepton beam can be "ELECTRON" (or "MUON+").
• TARPAR: the first number is the target nucleus mass number A and the second number is the charge Z. The third number is the n/p handling mode for the Pythia subevent:
  • 0 = sequential n, p. The first events are en, the remaining ep (binomial probability). Useful for getting en and ep cross sections from Pythia.
  • 1 = en-only test mode (not really for physics).
  • 2 = ep-only test mode (not really for physics).
  • 3 = random mix. The events are randomly en or ep. Useful because you can analyze just a subset of the data without bias.
• TAUFOR: the first number is the formation time parameter (τ0) in fm/c for the intra-nuclear cascade; the second number is the number of generations followed (default = 25; 0 means no cascade).
• MOMENTUM: the first number is for the lepton beam (GeV/c), the second for the ion (or p) beam. Both numbers should be entered as positive, but the lepton beam will be multiplied by −1.
• L-TAG: these numbers are cuts: yMin, yMax, Q²Min, Q²Max, thetaMin, thetaMax, where y and Q² (GeV²) are the leptoproduction kinematics and theta (radians) refers to the lepton scattering angle in the laboratory frame.
• PY-INPUT: specifies the file (with an eight-character maximum name!) used as a pythia input file. See instructions at https://eic.github.io/software/beagle.html.
• FERMI: first number: −1 = no Fermi motion at all; 2 = Fermi motion added to the Pythia subevent after the fact. The fifth number is the number of events to print out and, in some cases, to be verbose about.
• PYVECTORS: allowed Pythia vector mesons for diffraction: 0(D) = all, 1 = ρ, 2 = ω, 3 = φ, 4 = J/ψ.
• USERSET: the first number specifies the meaning of the variables User1, User2, User3. The second number specifies the maximum excitation energy in GeV handed to FLUKA (D = 9.0). Note: this should not come into play, but it protects against infinite loops.
• PHOINPUT: any options explained in the PHOJET manual can be used between the "PHOINPUT" and "ENDINPUT" cards, e.g., PROCESS 1 1 1 1 1 1 1 1, followed by ENDINPUT.
• START: the first number specifies the number of events to run. The second number should be 0 (or missing).
• STOP.
"year": 2022,
"sha1": "801ce71f7dbb091c94a3a5d73ec3ae887a059f1e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "801ce71f7dbb091c94a3a5d73ec3ae887a059f1e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Incidence, survival, and associated factors estimation in osteosarcoma patients with lung metastasis: a single-center experience of 11 years in Tianjin, China
Background Osteosarcoma is the most common primary malignant bone tumor. The current study was conducted to describe the general condition of patients with primary osteosarcoma in a single cancer center in Tianjin, China and to investigate the associated factors in osteosarcoma patients with lung metastasis. Methods From February 2009 to October 2020, patients from Tianjin Medical University Cancer Institute and Hospital, China were retrospectively analyzed. The Kaplan–Meier method was used to evaluate the overall survival of osteosarcoma patients. The Cox proportional hazard regression analysis was performed to analyze the prognostic factors of all osteosarcoma patients and those patients with lung metastasis, respectively. Furthermore, risk factors for developing lung metastasis were identified in synchronous lung metastasis (SLM) and metachronous lung metastasis (MLM) patients. Results A total of 203 patients were involved and 150 patients were successfully followed up for survival status. The 5-year survival rate of osteosarcoma was 70.0% and the survival months for patients with SLM and MLM were 33.3 ± 12.6 and 45.8 ± 7.4 months, respectively. The presence of lung metastasis was one of the independent prognostic factors for prognosis of osteosarcoma. In patients with lung metastasis, twenty-one (10.3%) showed lung metastasis at the diagnosis of osteosarcoma and 67 (33%) were diagnosed with lung metastases during the later course. T3 stage (OR = 11.415, 95%CI 1.362–95.677, P = 0.025) and bone metastasis (OR = 6.437, 95%CI 1.69–24.51, P = 0.006) were risk factors of SLM occurrence. Bone metastasis (OR = 1.842, 95%CI 1.053–3.224, P = 0.032), good necrosis (≥ 90%, OR = 0.032, 95%CI 0.050–0.412, P < 0.001), elevated Ki-67 (OR = 2.958, 95%CI 1.098–7.969, P = 0.032) and elevated LDH (OR = 1.791, 95%CI 1.020–3.146, P = 0.043) were proved to be independent risk factors for developing MLM. Conclusion The overall survival, prognostic factors and risk factors for lung metastasis in this single center provided insight about osteosarcoma management. Supplementary Information The online version contains supplementary material available at 10.1186/s12885-023-11024-9.
Background
Osteosarcoma is the most common primary malignant bone tumor in young adults, with a reported incidence of 8-11 per million people per year [1]. Since the introduction of comprehensive treatment combining chemotherapy and surgery, the 5-year overall survival (OS) rate has improved significantly [2,3].
Distant metastasis, especially lung metastasis, has been a serious issue in osteosarcoma management. To facilitate related studies and improve the outcomes of osteosarcoma, four clinical oncology groups in Europe and America collaborated to form the EURAMOS (European and American Osteosarcoma Studies) group [4]. Through international, collaborative randomized controlled trials (RCTs), their first study (EURAMOS-1) recruited a total of 2260 patients with resectable high-grade osteosarcoma across 17 countries from 2005 to 2011 [4]. After a median follow-up of 54 months, the 5-year event-free survival (EFS) and overall survival rates were reported to be 59.0% and 71.0%, respectively [5]. Specifically, patients with lung metastasis had a 2.34-fold higher risk of death compared with those without lung metastasis [5]. In fact, several studies have reported the negative effect of lung metastasis on survival in osteosarcoma [6,7]: the 5-year overall survival is only 20-30% in patients with lung metastasis [2]. A previous study, based on 1,408 osteosarcoma patients in the Surveillance, Epidemiology and End Results (SEER) database, reported a total of 238 patients (16.9%) with lung metastases at diagnosis [8]. Similarly, the latest study showed that around 14% of osteosarcoma patients had lung metastasis at diagnosis and that indeterminate lung nodules can turn into metastatic disease at a median time of 5.3 months [9]. Lung metastasis has thus become a focus of osteosarcoma research in recent years.
In view of the poor prognosis of lung-metastatic patients, computerized tomography (CT) of the chest is recommended as a routine examination for patients with osteosarcoma, especially those with suspicious lung lesions [9,10]. However, the differential diagnosis between benign and malignant lesions, both for small nodules (< 5 mm) and for indeterminate nodules, remains a challenging issue among bone oncology surgeons [9,11,12]. In osteosarcoma patients at high risk of metastasis, PET-CT is recommended for its high sensitivity (90%). Thus, evaluating the risk of metastasis in osteosarcoma, both at diagnosis and during the later course, is important. A previous study reported that large tumor size was associated with higher odds of lung metastasis: patients with a tumor size larger than 371 cm³ had a 69% probability of developing lung metastasis, compared with 34% for those with smaller tumors [13]. Axial location, tumor size > 10 cm, higher N stage, and the presence of bone metastasis were reported to be significant risk factors for lung metastasis in osteosarcoma [8]. These findings are valuable for identifying high-risk patients.
An extensive literature review shows that most studies on osteosarcoma have been performed in Caucasian populations. In China, with the development of the Chinese Society of Clinical Oncology, standardized treatment has been widely introduced and performed. As the first established department of bone and soft tissue sarcoma in China, we have the advantage of treating a large population of osteosarcoma patients with standardized chemotherapy and surgery delivered by the same multidisciplinary team. Here, we summarize our experience over the past ten years. Based on this single-center data, the survival and prognostic factors of patients with osteosarcoma were investigated, the incidences of both synchronous and metachronous lung metastasis were evaluated, and the risk factors for lung metastasis were explored.
Patient selection
This retrospective study was approved by the institutional research ethics committee of Tianjin Medical University Cancer Institute and Hospital (No. bc2021011). Based on medical records from February 2009 to October 2020, patients with histologically diagnosed osteosarcoma were selected and followed by phone/clinic visits until December 2020. The inclusion criteria were as follows: (a) histologically diagnosed primary osteosarcoma; (b) complete basic information; (c) clear evidence regarding lung metastasis. Patients were excluded if their survival status or lung-metastasis status was not available, or if they could not be followed.
Variables used in current study
The following variables were included: age (≤ 18 years, 19-40 years, ≥ 41 years), gender (male, female), tumor site (upper limb, lower limb, spine/pelvis), surgery (no, limb salvage, amputation, unknown), necrosis (Huvos I-II, < 90%; Huvos III-IV, ≥ 90%), alkaline phosphatase (ALP) level (normal; elevated, up to two times the upper limit; elevated, more than two times), lactate dehydrogenase (LDH) level (normal, elevated), bone metastasis (no, yes), and lung metastasis (no, synchronous, metachronous). The T stage and N stage in the present study were defined according to the American Joint Committee on Cancer (AJCC) TNM Staging System for Bone, as listed in supplementary Table 1. Lung metastasis was diagnosed by pathological examination or chest CT according to the standard described by Tsoi KM [9]: for patients who underwent biopsy, the diagnosis was determined from the pathologic findings; otherwise, an increase in a lung nodule's size of more than 25%, or the appearance of a new nodule on follow-up chest CT, was diagnosed as lung metastasis. Synchronous lung metastasis (SLM) was defined as lung metastasis present at the initial osteosarcoma diagnosis, while metachronous lung metastasis (MLM) was defined as the occurrence of lung metastasis during the patient's later course.
Treatment
For high-grade localized osteosarcoma, neoadjuvant chemotherapy combined with surgery and adjuvant chemotherapy was performed according to the NCCN guidelines. The standard first-line neoadjuvant chemotherapy in our department is the cisplatin, doxorubicin, and high-dose methotrexate (MAP) regimen. For patients with a good histologic response to neoadjuvant chemotherapy, wide excision and limb salvage were performed, followed by another four cycles of the same chemotherapy regimen after surgery. For patients with recurrent or refractory disease, the combination of etoposide and ifosfamide (IE) was used; the IE regimen was also given to patients who had previously received the MAP regimen.
Over the 11 years, several kinds of surgery were performed to reconstruct large bony defects, including joint-preserving surgery, tumor-devitalized autografts, and 3D-printed implants. Prosthetic replacement was commonly performed in patients with osteosarcoma of the upper and lower limbs. Tumor-devitalized autografting was performed with the frozen autograft technique in cases with an unsatisfactory margin. 3D-printed implants were used to preserve joint function in children.
Statistics
Quantitative data are described as mean ± standard deviation (SD), and categorical data are presented as number and percentage (N, %). Pearson's chi-square test was used to evaluate differences between categorical variables. Overall survival was defined as the time from the diagnosis of osteosarcoma to death from any cause and was analyzed using the Kaplan-Meier method. Survival differences between groups were tested by the log-rank test, and prognostic factors of osteosarcoma were identified by Cox proportional hazards regression analysis.
For patients with lung metastasis, the time from the diagnosis of lung metastasis to death from any cause was recorded, and related prognostic factors were explored using Cox proportional hazards regression analysis. Further analyses were conducted to explore the risk factors for developing each pattern of lung metastasis (SLM and MLM). For SLM, patients with MLM were treated as having no lung metastasis at the diagnosis of osteosarcoma, and logistic regression analysis was used to identify the risk factors. When exploring the risk factors for MLM, patients with SLM were excluded from the analysis, and Cox proportional hazards regression analysis was performed.
Two-sided P < 0.05 was considered statistically significant. Variables with a P-value < 0.05 in the univariate regression analysis were further analyzed in a multivariate regression analysis. All statistical analyses were performed using SPSS 22.0 (IBM Corporation, NY, USA).
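The univariate-screen-then-multivariate-model procedure described above can be sketched as follows, again with lifelines on simulated data; the covariate names are hypothetical stand-ins for the study variables, and this is not the authors' actual analysis code.

```python
# Sketch of the two-step screening: each candidate variable is tested
# in a univariate Cox model, and variables with P < 0.05 enter one
# multivariate Cox model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "months": rng.exponential(40, n),
    "death": rng.integers(0, 2, n),
    "bone_met": rng.integers(0, 2, n),      # hypothetical covariates
    "ldh_elevated": rng.integers(0, 2, n),
    "age": rng.integers(5, 78, n),
})

selected = []
for var in ["bone_met", "ldh_elevated", "age"]:
    cph = CoxPHFitter().fit(df[["months", "death", var]],
                            duration_col="months", event_col="death")
    if cph.summary.loc[var, "p"] < 0.05:    # univariate screen
        selected.append(var)

if selected:  # multivariate model with the retained variables
    multi = CoxPHFitter().fit(df[["months", "death"] + selected],
                              duration_col="months", event_col="death")
    multi.print_summary()  # hazard ratios and 95% CIs
```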
Characteristics and survival outcome of patients with osteosarcoma
Patients with osteosarcoma were reviewed and followed by phone/clinic, with follow-up times ranging from 2 to 144 months. Eventually, a total of 203 patients were identified with a clear status of lung metastasis; their demographic and clinical characteristics are described in Table 1. The average age at the diagnosis of osteosarcoma was 22.8 ± 14.2 (5-77) years. The histological types were as follows: osteosarcoma NOS (N = 88), conventional osteosarcoma (N = 89), telangiectatic osteosarcoma (N = 7), small cell osteosarcoma (N = 3), low-grade central osteosarcoma (N = 4), parosteal osteosarcoma (N = 8), and periosteal osteosarcoma (N = 4). Since patients with low-grade tumors (low-grade central osteosarcoma, parosteal osteosarcoma, and periosteal osteosarcoma) do not routinely undergo chemotherapy, they were excluded from the chemotherapy variable in Table 1. Overall survival was compared within the different variables, and the statistical results are shown in Table 2.
The prognostic factors of osteosarcoma patients with lung metastasis
The prognostic factors of all patients with active follow-up were identified and illustrated in Fig. 1, and Table 2 indicated that patients with different patterns of lung metastasis presented different survival outcomes. To further explore the differences between groups and identify prognostic factors in osteosarcoma patients with lung metastasis, patients without lung metastasis at the last follow-up were excluded, leaving a total of fifty patients in the present analysis. As shown in supplementary Table 2, no independent prognostic factor was identified.
Risk factors for developing lung metastasis in osteosarcoma
The risk factors for developing SLM are shown in Table 4. For MLM, information on the interval from osteosarcoma diagnosis to lung metastasis was available for 63 patients. The median interval was 11 (2-99) months, and the distribution of MLM is illustrated in Fig. 2. A total of 37 (58.7%) MLM patients were found in the first year after the diagnosis of osteosarcoma, 18 (28.6%) in the second year, and 8 (12.7%) thereafter. Chemotherapy was routinely scheduled for these patients as previously described. Notably, metastasectomy of the pulmonary lesion was performed in five patients; their detailed information is described in supplementary Table 3.
The risk factors for developing MLM are shown in Table 5.
Discussion
In the current study, we summarized our experience with 203 osteosarcoma patients. In this cohort, the 5-year survival rate was 70.0%. This long-term survival reached a promising level and was better than that in our previous study based on SEER data from 2010 to 2016 [14].
Based on the Cox regression analysis, the presence of bone metastasis and lung metastasis was associated with worse survival in osteosarcoma, and the performance of surgery was associated with a better survival outcome. Since the 1970s, the introduction of chemotherapy has significantly improved the survival of osteosarcoma patients. With the subsequent development of innovative surgical techniques, comprehensive treatment by a multidisciplinary team has been recommended [10]. In a recent meta-analysis, patients after limb-salvage surgery achieved a better five-year survival rate than patients after amputation with neoadjuvant chemotherapy [15]. Thus, salvage surgery has become the first choice for eligible patients, as in the current study (salvage 52.2% vs. amputation 32%). Chemotherapy and good tumor necrosis were other important issues in the treatment of osteosarcoma [16]. Chemotherapy response showed a significant correlation with long-term survival in osteosarcoma [17,18]. The EURAMOS-1 study reported that patients with a poor histological response to neoadjuvant chemotherapy had worse survival outcomes after surgery [5].

As previously reported, patients with lung metastasis showed poor survival [7,19,20]. In the present study, the average overall survival of SLM patients, MLM patients, and patients without lung metastasis was 33.3 ± 12.6, 45.8 ± 7.4, and 139.2 ± 2.8 months, respectively. The treatment of pulmonary metastatic lesions had a significant effect on the survival of osteosarcoma patients. For osteosarcoma patients with resectable lung metastasis, the NCCN guidelines recommend wide excision of the primary tumor together with preoperative chemotherapy [10]. Meanwhile, pulmonary metastasectomy should be considered in selected cases. It was reported that patients with fewer lesions, patients with unilateral lung disease, and patients after metastasectomy showed improved survival [21,22]. In our study, most patients with lung metastasis were offered chemotherapy instead of lung surgery. Gemcitabine, docetaxel, and other new agents, including regorafenib [23] and apatinib [24], can be potential second-line choices. With accurate prediction of survival and of the benefit of metastasectomy on lung function, metastasectomy should be encouraged in eligible patients.

Twenty-one (10.3%) patients in the current study showed lung metastasis at the diagnosis of osteosarcoma, which was less than previously reported [8,25]. T3 stage and the presence of bone metastasis were risk factors for synchronous lung metastasis in our study, which was consistent with previous results from SEER [8]. During the median follow-up time of 49 months, 67 patients (33%) were detected with lung metastasis. The average interval from osteosarcoma diagnosis to lung metastasis was 14.0 ± 14.1 months. In accordance with a previous study, most lung metastases occurred within the first two or three years [26]. The proportion of patients developing lung metastasis was 58.7% and 28.6% in the first and second year after osteosarcoma diagnosis, respectively. Thus, lung CT should be scheduled with high frequency in the first two years. In our study, the presence of bone metastasis, a poor necrosis rate, and elevated Ki-67 and LDH were risk factors associated with higher odds of metachronous lung metastasis. Patients with these risk factors should receive closer attention.
A previous study found more lung metastases and bilateral lesions in patients who underwent surgery of the primary tumor alone, compared with those who received surgery plus chemotherapy [6]. Stratifying patients by their risk of lung metastasis could make lung CT surveillance plans more efficient.
Some limitations should be mentioned. Due to the long interval since osteosarcoma diagnosis, some patients were lost to follow-up and could not be reached. The limited size of the included cohort and missing information for some variables introduced uncertainty into the statistical analysis. For example, the assessment of the Huvos necrosis rate after neoadjuvant chemotherapy was unavailable in the majority of patients. Furthermore, the retrospective study design weakens the ability to draw firm conclusions.
Conclusions
In summary, the osteosarcoma patients in our institute were effectively treated, with a 5-year overall survival of 70%. The incidences of synchronous and metachronous lung metastasis were 10.3% and 33%, respectively.
The prognostic factors found in the current study may aid survival prediction. Risk factors for lung metastasis can be used to identify high-risk patients and guide individualized screening. | 2021-11-17T16:25:31.848Z | 2021-11-15T00:00:00.000 | {
"year": 2023,
"sha1": "7be85249e73194439caa748de53ac4bd2a967cfc",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-1071152/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "Springer",
"pdf_hash": "c46b83553c79ece5d54c00bc2227146d712145f0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
17265296 | pes2o/s2orc | v3-fos-license | Dual mTORC1/C2 inhibitors suppress cellular geroconversion (a senescence program).
In proliferating cells, mTOR is active and promotes cell growth. When the cell cycle is arrested, mTOR converts reversible arrest to senescence (geroconversion). Rapamycin and other rapalogs suppress geroconversion, maintaining quiescence instead. Here we showed that ATP-competitive kinase inhibitors (Torin 1 and PP242), which inhibit both mTORC1 and mTORC2, also suppressed geroconversion. Despite inhibiting proliferation (in proliferating cells), mTOR inhibitors preserved re-proliferative potential (RP) in arrested cells. In p21-arrested cells, Torin 1 and PP242 detectably suppressed geroconversion at concentrations as low as 1-3 nM and 10-30 nM, reaching maximal gerosuppression at 30 nM and 300 nM, respectively. Near-maximal gerosuppression coincided with inhibition of p-S6K(T389) and p-S6(S235/236). Dual mTOR inhibitors prevented senescent morphology and hypertrophy. Our study warrants investigation into whether low doses of dual mTOR inhibitors will prolong animal life span and delay age-related diseases. A new class of potential anti-aging drugs can be envisioned.
It is difficult to predict the outcome of life-long administration of dual mTOR inhibitors in animals. On one hand, they inhibit both rapamycin-sensitive and -insensitive functions of mTOR and thus might extend life span more extensively. Yet, the role of rapamycin-insensitive functions in aging is unknown. Furthermore, potential toxicity due to a lack of selectivity toward mTOR may preclude anti-aging effects. Fortunately, a simple cellular model is available to investigate anti-aging activity in cell culture. In proliferating cells, mTOR is active and drives cellular growth. When the cell cycle is forcefully arrested, but mTOR is still active (as in proliferating cells), mTOR drives "senescence" [49,50].
In other words, mTOR drives conversion from reversible cell cycle arrest to senescence (geroconversion), and rapamycin suppresses or decelerates geroconversion.
Senescent cells cannot restart proliferation (re-proliferation), even when the arrest is lifted and cells re-enter the cell cycle. In the presence of active mTOR, p21- or p16-arrested cells lose their re-proliferative potential (RP). In the presence of rapamycin [54,55,62] and everolimus [61], cells maintain RP during cell cycle arrest and can re-proliferate when the arrest is lifted. Also, rapalogs moderately suppress cellular hypertrophy and beta-Gal-positive senescent morphology. Here we tested whether non-rapalog mTOR inhibitors such as Torin 1 and PP242 can suppress geroconversion.
RESULTS

mTOR inhibitors suppress geroconversion
We compared gerosuppressive and anti-proliferative (cytostatic) effects of the dual mTOR inhibitors Torin 1 and PP242 (Fig. 1A). We also tested deforolimus, a rapalog also known as ridaforolimus. To measure cytostatic effects, proliferating HT-p21 cells were treated with these inhibitors (Fig. 1). After a 3-day treatment, drugs were washed out and cells were incubated in fresh medium for another 3-6 days. Colonies were stained with Crystal Violet. mTOR inhibitors decreased the size of colonies but not their number. This is consistent with reversible inhibition of proliferation in the presence of inhibitors (Fig. 1). Cells retained their re-proliferative potential and resumed proliferation when the inhibitors were removed. Therefore, the number of colonies remained the same as in control, but their size became progressively smaller with increasing concentrations of inhibitors (Fig. 1, see "cytostatic" panels).

Figure 1 legend: To determine cytostatic activity, cells were treated with drugs for 3 days, then drugs were washed out and cells were allowed to recover in drug-free medium. Colonies were stained after 5 days of regrowth. To investigate drug effects on geroconversion, HT-p21 cells were treated with IPTG in the presence of serial dilutions of the drugs. After 3-day treatment, drugs were washed out and colonies were allowed to form for 9 days, then stained and counted in triplicates. Data are mean ± SD.
To investigate effects of the drugs on geroconversion, we induced ectopic p21 in HT-p21 cells by treating them with IPTG [62,80,81]. These cells undergo senescence when p21 is turned on by IPTG and lose their re-proliferative potential (RP). When IPTG is washed out, the senescent cells can re-enter the cell cycle but cannot proliferate [63].
Suppression of geroconversion was evaluated by the ability of IPTG-treated HT-p21 cells to resume proliferation after IPTG was removed (Fig. 1, gerosuppression). Treatment with IPTG alone decreased the number of colonies (Fig. 1B; no drug). This is consistent with irreversible senescence in most IPTG-treated cells, such that only a few cells remained quiescent and retained RP. Treatment of IPTG-arrested cells with deforolimus, Torin 1, or PP242 increased the number of colonies. These colony-forming cells were in a quiescent state and retained the potential to re-proliferate. Cells resumed proliferation once IPTG and drugs were removed. In summary, the effects of mTOR inhibitors on proliferation (cytostatic effect) and re-proliferative potential (gerosuppressive effect) were opposite: mTOR inhibitors decreased the size of colonies in the first case but increased the number of colonies in the second (Fig. 1B).
At the concentrations shown in Figure 1B, the gerosuppressive effect of Torin 1 was maximal at the lowest concentration shown (100 nM). At higher concentrations (>300 nM), the gerosuppressive effect of Torin 1 was partially diminished and colonies became smaller. Therefore, we next titrated concentrations down (Fig. 2A). IPTG-arrested HT-p21 cells were treated with a range of concentrations from 1 nM to 3000 nM, according to the experimental schema shown in Fig. 1A. The gerosuppressive effect of Torin 1 was shifted toward lower concentrations compared to that of PP242 (Fig. 2A). Minimal concentrations that suppressed geroconversion were 1 nM and 10 nM for Torin 1 and PP242, respectively. Maximum effects of Torin 1 and PP242 were observed at 30 nM and 300 nM, respectively, indicating that Torin 1 is overall 10 times more potent than PP242. Therefore, 10-fold lower concentrations of Torin 1 than of PP242 could be used to achieve an equipotent effect.

Figure 2 legend: A. Cells plated at low density were treated with IPTG in the presence of serial dilutions of inhibitors. After 3 days, IPTG and inhibitors were washed out and cells were incubated in drug-free medium. Colonies were then stained with Crystal Violet and counted. Data are mean number of colonies per sector of a well ± SD. B. Immunoblot analysis. HT-p21 cells were treated with IPTG in combination with Torin 1 or PP242 for 24 h and lysed. Rapamycin (R) at 500 nM was included as an additional control. Immunoblotting was performed with the indicated antibodies. Note: IPTG-induced p21 is highly expressed in all samples, confirming that the inhibitors do not interfere with p21 induction by IPTG or cell cycle arrest.
Correlation between gerosuppression and mTOR inhibition
We next investigated inhibition of the mTOR pathway by Torin 1 and PP242. Consistent with gerosuppressive effects (Fig. 2A), Torin 1 inhibited phosphorylation of S6 kinase (target of mTORC1) and its downstream target phospho-S6 at concentrations 10 times lower than PP242 (Fig. 2B). The inhibition of the S6K/S6 axis corresponded to concentrations at which preservation of RP (gerosuppression) was observed. The S6K/S6 axis is a major target of rapamycin.
Finally, like rapamycin, Torin 1 and PP242 did not decrease p21 levels, so they did not abrogate IPTG-induced arrest. At equipotent concentrations, both mTOR inhibitors decreased cyclin D1, a marker of senescent hyper-mitogenic cell cycle arrest [63]. Hyper-elevated cyclin D1 is a common marker of the senescent phenotype [63].
mTOR kinase inhibitors prevent senescent morphology
Treatment with IPTG resulted in a large, flat cell morphology and beta-Gal positivity (Fig. 3A). Enlarged cell size was due to hypertrophy, as indicated by an increased amount of protein per cell (Fig. 3B, compare control and IPTG). Torin 1 or PP242 prevented senescent morphology and decreased cell size. As previously shown, rapamycin partially decreased beta-Gal staining and only partially prevented senescent morphology and hypertrophy (Fig. 3). Notably, Torin 1 and PP242 were far more potent than rapamycin in suppressing senescent morphology and hypertrophy.
Dual inhibitors of TORC1/2 prevent etoposide- and doxorubicin-induced senescence in normal human WI38t fibroblasts
To investigate whether the gerosuppressive activity of Torin 1 and PP242 is cell-type specific, we next examined their effect on senescent morphology and RP in WI38t fibroblasts undergoing senescence induced by low concentrations of etoposide or doxorubicin. As shown in Fig. 4A, both etoposide and doxorubicin treatment caused large, flat, beta-gal-positive morphology in WI38t cells. In etoposide- and doxorubicin-treated cells, co-treatment with either Torin 1 or PP242 prevented senescent morphology. Furthermore, when WI38t cells were treated with either senescence-inducing drug (etoposide or doxorubicin) alone, most of the cells lost the ability to re-proliferate after the drugs were removed. Co-treatment with senescence-inducing drugs and either Torin 1 or PP242 preserved RP after removal of the drugs. In other words, treatment of cells with etoposide in the presence of either Torin 1 or PP242 preserved some cells in reversible quiescence instead of irreversible senescence. mTOR inhibitors also prevented senescent morphology of SKBR3 cells undergoing senescence induced by PMA (Fig. 4C), a model of senescence described previously [82].

Figure 4 legend (partial): After 4-day treatment, cells were extensively washed to remove the drugs, allowed to regrow in drug-free medium, and counted after 3 weeks in culture. RP (re-proliferative potential) was calculated by dividing final cell numbers by the initially plated number of cells. Data present mean ± SD from triplicate wells. C. Effect of Torin 1 on senescent morphology of SKBR3 cells undergoing senescence induced by PMA. Cells were pre-treated with Torin 1 (100 nM) for 24 h before addition of 100 nM PMA. After 3-day treatment with PMA, drugs were washed out, cells were incubated in drug-free medium for another 3 days and stained for beta-gal. Bar: 100 µm.

DISCUSSION

Rapamycin and other rapalogs are anti-inflammatory and anti-aging drugs [83,84]. Rapamycin slows down aging, prevents age-related diseases, and extends life span in all species tested, including mice [85-104].
Loss of re-proliferative potential (RP) is a quantitative marker of geroconversion. Non-senescent (quiescent) cells retain RP, meaning that they can re-start proliferation (re-proliferation), when arrest is reversed. In contrast, senescent cells lack RP. We found that in the presence of Torin 1 and PP242, arrested cells retained RP. Like rapalogs, these pan-mTOR inhibitors prevented loss of RP during cell cycle arrest caused by p21 and etoposide. Importantly, mTOR inhibitors suppressed geroconversion despite their intrinsic anti-proliferative effect (in proliferating cells).
The ability of mTOR inhibitors to preserve RP indicates that (at gerosuppressive concentrations) mTOR inhibitors are not harmful in any way; otherwise cells would not resume proliferation and would not form colonies. In fact, at high concentrations, the gerosuppressive effect, measured as preservation of RP, decreased and even disappeared.
Gerosuppression coincided with inhibition of the mTORC1/S6K/S6 axis. The maximal gerosuppression, as measured by RP, was similar for rapalogs and pan-mTOR inhibitors.
PP242 was 10 times less potent but it was also less cytostatic and inhibited mTORC1/S6K/S6 at 10-fold higher concentrations. We conclude that their effects are almost identical at equipotent concentrations. At high concentrations, Torin 1 and PP242 were more efficient than rapamycin in suppressing cellular hypertrophy and senescent morphology. This indicates that morphology and hypertrophy depend in part on rapamycin-insensitive functions of mTOR.
Rapamycin and other rapalogs have been safely used even in daily high doses for many years in patients [108][109][110], so their use as anti-aging drugs (intermittent schedules) is expected to be without significant side effects. Dual mTOR inhibitors have all activities of rapamycin plus other effects and therefore must have more side effects by definition. Still, our study indicates that, at low concentrations, they may be considered as potential anti-aging drugs because they can preserve reproliferation, which would be impossible if the drugs were toxic, given the delicate mechanism of cellular division.
Still, mTOR inhibitors are cytostatic. Therefore, for their clinical development as anti-aging drugs, it would be important to determine balance between cytostatic vs gerosuppressive effects. mTOR inhibitors also vary in their affinity to mTORC1 and mTORC2 complexes, as well as by their off-target effects. Therefore, a larger variety of inhibitors should be further tested at a wider range of concentrations for their effects on cell proliferation vs geroconversion. Optimal gerosuppressive concentrations of various mTOR inhibitors should be selected (Oncoscience, 2015, in press).
Re-proliferative potential (RP)
Cells were plated at low densities and treated with senescence-inducing agents as described in the figure legends. After 3-4-day treatment, drugs were washed out and cells were allowed to regrow in drug-free medium for a few days, as indicated in the figure legends. Colonies were stained with 1% Crystal Violet (Sigma-Aldrich) and counted.
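As a worked example of the RP calculation quoted in the Figure 4 legend (final cell number divided by the initially plated number, summarized as mean ± SD over triplicate wells), here is a minimal sketch with invented counts:

```python
# Minimal sketch of the RP calculation: RP = final cell number /
# initially plated cell number, mean ± SD over triplicate wells.
from statistics import mean, stdev

plated = 10_000  # cells plated per well (assumed)
final_counts = [85_000, 92_000, 78_000]  # hypothetical triplicate wells

rp_values = [final / plated for final in final_counts]
print(f"RP = {mean(rp_values):.2f} ± {stdev(rp_values):.2f}")
```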
Immunoblotting

Secondary anti-mouse and anti-rabbit antibodies were from Cell Signaling.
SA-β-Gal staining
β-gal staining was performed using the Senescence β-Galactosidase Staining Kit (Cell Signaling Technology), according to the manufacturer's protocol. Cells were photographed under a light microscope.
CONFLICTS OF INTEREST
MVB is a consultant at Everon Biosciences, Inc. | 2017-06-10T16:14:47.938Z | 2015-07-27T00:00:00.000 | {
"year": 2015,
"sha1": "62b92d1bdf2b8d59041ab00d825bf923f63f53a5",
"oa_license": "CCBY",
"oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=10968&path[]=4836",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "62b92d1bdf2b8d59041ab00d825bf923f63f53a5",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
232382863 | pes2o/s2orc | v3-fos-license | Smoking Cessation Intention and Its Association with Advice to Quit from Significant Others and Medical Professionals
Few studies have simultaneously considered the effects of significant others' and medical professionals' advice to quit smoking on smoking cessation intention. The present study involved 3841 current adult Korean smokers, divided into four groups: intention to quit within 1 month, within 6 months, someday, and no intention to quit. Multinomial multiple logistic regression analysis was conducted according to smoking cessation intention level, adjusted for potential confounders, including past smoking cessation attempts. Smokers who had been advised to quit smoking by both significant others and medical professionals, significant others only, and medical professionals only were 2.63 (95% confidence interval (CI): 1.62–4.29), 1.84 (95% CI: 1.17–2.89), and 1.44 (95% CI: 0.70–2.94) times more likely to intend to quit within 1 month, respectively, than those who were not advised to quit. The odds ratios of an intention to quit within 6 months were 2.91 (95% CI: 1.87–4.54), 2.49 (95% CI: 1.69–3.68), and 0.94 (95% CI: 0.44–2.05), respectively. To promote smokers' intention to quit, the role of significant others should be considered. Medical professionals' advice to quit smoking remains important, increasing the effects of significant others' advice.
Introduction
The use of tobacco products is among the biggest threats to human health. In 2015, worldwide, the age-standardized prevalence of daily smoking was 25.0% for men and 5.4% for women, and 11.5% of global deaths (6.4 million) were attributable to smoking [1]. This evidence suggests that programs aimed at the prevention and cessation of tobacco consumption should be among the top priorities of national public health policies.
Effective smoking cessation strategies require an understanding of the process that leads to smoking cessation and its determinants. According to the stages of change, a core construct of the transtheoretical model, smoking cessation involves a progression through precontemplation, contemplation, preparation, action, maintenance, and termination [2]. In smoking cessation behavior, these stages tend to present as three distinct states: intention to quit, an attempt to quit, and successful quitting [3]. According to the theory of planned behavior, the most important direct determinant of behavior is behavioral intention [4]. Several studies have shown that intention to quit smoking is a precursor to subsequent quitting attempts or smoking cessation. In fact, intention has been shown to account for 12% of variance in quitting success rates [5], while a strong desire to quit has been associated with a greater likelihood of quitting [6]; moreover, both those who quit and those who relapsed reported a significantly higher baseline intention to quit than persistent smokers did, and smokers' baseline intention to quit positively predicted quitting attempts [7]. Finally, quitting attempts that lasted >1 month were significantly associated with an intention to quit within 2 months [8].
Among the various social influence factors that may affect smoking behaviors, verbal encouragement to quit smoking has been suggested as a type of explicit social influence; an example is someone suggesting that the smoker quit [9]. The most influential people who might recommend quitting tobacco use are significant others (family members, friends, and coworkers) and health professionals. An individual who attempts to change his or her health behavior may be positively influenced by significant others during the change process [10]. For example, the proportion of explained variance in employees' intention to quit smoking increased significantly due to social pressure from their partner and children [11]. Another study revealed that having smoking-related family conflicts was independently associated with an intention to quit either in the next 30 days or thereafter [12]. Finally, Chinese and Vietnamese adult male smokers declared that family encouragement and physician recommendation were the main facilitators in smoking cessation behavior [13].
Medical professionals' advice to quit smoking may motivate smokers to do so and facilitate subsequent smoking cessation. A systemic review of 17 trials has shown that brief advice from physicians increased the rate of quitting 1.7-fold compared to the standard of care [14]. Smokers who received advice to quit smoking from a doctor were 1.9-fold more likely to plan to quit smoking than those who did not visit a doctor in the past 6 months [15]. In a quasi-experimental study, the advice from physicians and nurses increased the likelihood of a progression toward the action stage of the smoking cessation process, while reducing the risk of regression to the previous stage, although the latter relationship was not statistically significant [16].
Although several studies have examined the relationship between smoking cessation intentions and the advice to quit smoking from significant others or medical professionals, studies that simultaneously consider both of these factors are scarce. Such combined advice may have an additive or synergistic effect in triggering the intention to quit smoking. The aim of this study was to examine the relationship between smoking cessation advice from significant others and medical professionals and the intention to quit smoking, using the stages of behavior change as a framework.
Materials and Methods
This study used the 2017 Korea Community Health Survey (KCHS) data from Gyeongsangbuk-do province, one of 17 regional local governments in Korea, covering 10 cities and 13 counties. The KCHS has been conducted nationwide annually since 2008 by the Korea Centers for Disease Control and Prevention to investigate community health and health-related behaviors. The KCHS is conducted in 255 municipalities, where community health centers are located. Participating households are selected using stratified cluster sampling methods; all household members aged ≥ 19 years are included. Approximately 900 people per municipality participate in the survey. Qualified interviewers visit the sampled houses and collect data via face-to-face interviews.
Of 22,164 participants, the present study used data from 3841 current smokers (daily or occasional) who had smoked ≥ 100 cigarettes in their lifetime. The KCHS was exempted from institutional review board review by the Korea Centers for Disease Control and Prevention. Detailed information about the KCHS is available elsewhere [17].
Measures
According to the transtheoretical model, the intention to quit smoking falls into four categories: (1) intention to quit within the next month (preparation stage), and (2) intention to quit within the next 6 months (contemplation stage), with the precontemplation stage subdivided into (3) intention to quit someday but not within the next 6 months, and (4) no intention to quit. There were significant differences in participant characteristics between those who had the intention to quit someday and those who declared having no such intention. As a result, these two groups were treated as separate categories in the present analysis.
The experience of getting advice to quit smoking from others (significant others or medical professionals) was classified into four types: (1) both, (2) significant others only, (3) medical professionals only, and (4) no advice. This experience was assessed with the following questions. (1) "Do you get advice from the people around you to quit or reduce smoking?" The possible responses to this question were: "not at all", "a little", "somewhat", and "always." These responses were recoded into two categories: "yes" for "somewhat" and "always"; "no" for "not at all" and "a little". (2) "In the past year, have you received advice from doctors, dentists, oriental doctors, or nurses to quit smoking?" The possible responses to this question were yes or no.
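A minimal sketch of this recoding, assuming the responses sit in a pandas DataFrame; the column names are hypothetical, while the survey response levels are as quoted above:

```python
# Sketch of the recoding described above, using pandas.
import pandas as pd

df = pd.DataFrame({
    "advice_others": ["not at all", "somewhat", "always", "a little"],
    "advice_medical": ["yes", "no", "yes", "no"],
})

# (1) Collapse the four-level response into yes/no.
df["others_yes"] = df["advice_others"].isin(["somewhat", "always"])
df["medical_yes"] = df["advice_medical"].eq("yes")

# (2) Combine into the four advice types used in the analysis.
def advice_type(row):
    if row.others_yes and row.medical_yes:
        return "both"
    if row.others_yes:
        return "significant others only"
    if row.medical_yes:
        return "medical professionals only"
    return "no advice"

df["advice_type"] = df.apply(advice_type, axis=1)
print(df[["advice_type"]])
```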
Statistical Analysis
The distributions of covariates were calculated across the four stages of intention to quit smoking and were assessed by χ2 tests. Multinomial logistic regression analysis, with the group with no intention to quit ("not at all") used as the reference, was performed to evaluate the association between advice and intention to quit smoking. All analyses were performed using SPSS version 19.0 (IBM, Armonk, NY, USA). P-values of <0.05 were considered statistically significant.
The stages of behavior change consider the intention to quit smoking a prerequisite for attempting to quit; however, in the case of a failed quit attempt, the attempt itself may be a determinant of a continued intention to quit [18]. Accordingly, in the present study, two multivariable analyses, with and without adjustment for previous quit attempts, were performed to assess the association between intention to quit smoking and advice to quit from others, independent of the impact of previous smoking cessation attempts [12].
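A minimal sketch of such a multinomial model, using Python statsmodels on simulated data rather than the authors' SPSS analysis; the predictor names are hypothetical, and the no-intention group (coded 0) serves as the reference outcome:

```python
# Sketch of the multinomial logistic regression described above.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    # 0 = no intention (reference), 1 = someday,
    # 2 = within 6 months, 3 = within 1 month
    "intention": rng.integers(0, 4, n),
    "advice_both": rng.integers(0, 2, n),
    "advice_others_only": rng.integers(0, 2, n),
    "advice_medical_only": rng.integers(0, 2, n),
    "past_attempt": rng.integers(0, 2, n),
})

X = sm.add_constant(df[["advice_both", "advice_others_only",
                        "advice_medical_only", "past_attempt"]])
model = sm.MNLogit(df["intention"], X).fit(disp=False)
print(np.exp(model.params))  # odds ratios vs. the reference outcome
```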
Results
A total of 61.7% of the participants had the intention to quit smoking (within 1 or 6 months, and someday, respectively, 5.3%, 7.4%, and 49.0%). The rate of the intention to quit within 1 month was the highest among those participants who got advice from medical professionals only (7.7%), followed by medical professionals and significant others (7.0%), significant others only (4.6%), and no one (4.2%). Those who had previously attempted to quit smoking were more likely to intend to quit smoking within 1 month, compared to those who did not attempt to quit (1.2%); moreover, those who had attempted to quit smoking within the past year (13.7%) were more likely to intend to quit than those who attempted to quit more than a year before (3.9%). There were significant differences in the rates of intention to quit smoking within 1 month according to smoking frequency/amount, exposure to public anti-smoking campaigns within the past year, and experience of smoking cessation within the past year (Table 1).

Table 2 presents the results of multinomial logistic regression analyses. In the model that did not account for previous attempts at quitting, the adjusted odds ratios (ORs) for the intention to quit within 1 month (OR = 3.01, 95% confidence interval (CI): 1.89-4.79), 6 months (OR = 3.42, 95% CI: 2.22-5.27), and someday (OR = 1.69, 95% CI: 1.37-2.08) were the highest among those who got advice from both significant others and medical professionals. Advice from significant others was also significantly associated with the intention to quit smoking within 1 month (OR = 1.98, 95% CI: 1.28-3.05), 6 months (OR = 2.72, 95% CI: 1.86-3.97), and someday (OR = 1.57, 95% CI: 1.31-1.88). However, advice from medical professionals (OR = 2.14, 95% CI: 1.08-4.24) was significantly associated with the intention to quit smoking only within 1 month. After additional adjustments for previous attempts at quitting, the adjusted ORs for the intention to quit within 1 or 6 months were the highest among the participants who got advice from both significant others and medical professionals, with OR estimates of 2.63 (95% CI: 1.62-4.29) and 2.91 (95% CI: 1.87-4.54), respectively. Of note, advice from significant others was significantly associated with the intention to quit, regardless of previous attempts at quitting. However, a significant association between advice from medical professionals and the intention to quit within 1 month was not present in the additionally adjusted model including previous attempts (OR = 1.44, 95% CI: 0.70-2.94).
The ORs of intention to quit within 1 month among smokers that had attempted to quit within the past year or any previous years compared to those who did not were 28.31 (95% CI: 16.10-49.77) and 4.82 (95% CI: 2.73-8.51), respectively. The ORs of the intention to quit within 6 months were 16.15 (95% CI: 10.41-25.05) in those who had attempted to quit within the past year and 4.38 (95% CI: 2.88-6.66) in those who had done so beforehand ( Table 2).
Discussion
Overall, 61.7% of the study participants reported that they had an intention to quit smoking; however, only 5.3% of them intended to quit within the next month (preparation stage). This rate was very low compared with the results of previous studies: 32.5% among adult smokers in Canada [19] and 36% among Vietnamese male smokers in California [12]. The distribution of the stages of smoking behavior across countries or groups may vary depending on the level of tobacco control policies in each group. The higher rates of intention to quit in Canada and California can be explained as a result of tobacco control policies: (1) Canada was the first country to implement pictorial warning labels on cigarette packs in 2001 and has been among the countries with the largest warning labels [20]. (2) In California, the California Tobacco Control Program (CTCP), one of the longest running comprehensive tobacco control programs in the United States, has been running since 1989 [21]. In addition, the participants that intended to quit smoking within the next 6 months (contemplation stage) constituted 7.4% of the sample; overall, only 12.7% of the study participants could be considered as having an intention to quit smoking. The remaining 49.0% of the sample declared an intention to quit smoking without a specific timeline, which is not equivalent to having an intention to quit according to the stages of change in the transtheoretical model [2]. Nevertheless, any interpretation of smoking cessation intention rates must account for the differences in definitions between studies, for example: just wanting to quit smoking [3]; all cases, including intention to quit within the next month, the next 12 months, and intending to quit but not within the next 12 months [7]; or separately considering the intention to quit in the next 30 days, or later but not within the next 30 days [12].
Compared to those who never attempted to quit, the ORs for the intention to quit smoking within 1 (OR: 28.3 vs. 4.8) or 6 months (OR: 16.2 vs. 4.4) were much higher for those smokers who had attempted to quit within the past year than those who attempted to quit in previous years. These findings are consistent with those of previous studies that have shown that the intention to quit smoking was stronger in the participants that had attempted to quit within the past year than that of their counterparts who had not [12,19]. In addition, even the participants that had attempted to quit more than a year before the present study were 4.8 or 4.4 times more likely to declare an intention to quit smoking within 1 or 6 months, respectively, than were those who had never tried to quit. These findings suggest that those who fail to quit may maintain a strong intention to do so. Overall, these findings suggest that smokers with a history of attempts at quitting, particularly within the past year, should be a priority group for smoking cessation programs by encouraging them to continue trying to quit smoking and providing appropriate smoking cessation interventions.
The advice to cease smoking provided by significant others was associated with the intention to quit within 1 month, 6 months, or someday, after adjusting for various potential confounders. These associations remained even after adjusting for previous attempts at quitting, which was the factor most relevant to the intention to quit, suggesting that the effect of significant others' smoking cessation advice may be independent of other factors. Although a specific person was not defined as a "significant other" in the present study, in the Korean cultural context, these "others" are likely to include family members rather than friends or coworkers, who might regard smoking as a private matter. This presumption may be supported by the results of a prior study showing that social pressure from a partner and children, but not from coworkers, influenced employees' intentions to quit smoking [11]. The family acts as a source of social pressure and support in social influence theory [11,22]; thus, some previous studies have emphasized the role of family in the smoking cessation process [23][24][25]. For example, family interactions related to smoking behaviors, such as the experience of smoking-related family conflicts, had a strong influence on smokers' intentions to quit [12]. Strategies aimed at achieving smoking cessation intention should consider not only smokers but also their families, which may play an important role in inducing smoking cessation.
Meanwhile, smoking cessation advice by medical professionals was associated with the intention to quit within 1 month in the model that did not account for previous attempts at quitting; however, this relationship did not remain significant after adjustment for previous attempts at quitting. These findings are inconsistent with those of previous studies, which have shown that the recommendation from health care professionals to quit smoking triggered smoking cessation intentions or attempts, regardless of whether it affected the likelihood of success [26][27][28]. The Korean government is implementing various anti-smoking policies aimed at reducing smoking rates among men to below 20%. Since 2015, these policies include economic support for smokers intending to quit, for example, varenicline prescriptions and counseling by clinicians [29]. However, the present findings suggest that doctors' recommendations to quit may not, on their own, translate into an intention to quit. Although the effect of smoking cessation advice provided by medical professionals alone was limited in the present study, it may have increased the effectiveness of such recommendations provided by significant others, in particular among participants with the intention to quit smoking within 1 month. Regarding the intention to quit within 1 month, the advice from significant others (OR = 1.8, 95% CI: 1.2-2.9) and medical professionals (OR = 1.4, 95% CI: 0.7-2.9) showed an additive effect (OR = 2.6, 95% CI: 1.6-4.3) when provided concurrently. These findings suggest that smoking cessation policies should incorporate the advice of both medical professionals and the social group of smokers.
This study has some limitations. First, this was a cross-sectional survey, which precludes any meaningful discussions about causality. Second, significant others were not defined as a specific person. Third, this study was based on a sample from a single province in Korea, limiting the generalizability of the present findings. However, covariates likely to affect the examined relationships, as previously reported, were accounted for in the present analysis [3,10,15,19]. In addition, the participants were representative of a region consisting of 23 cities or counties with a population of approximately 2.5 million, which makes this sample likely to be nationally representative. Fourth, there may be inherent limitations of the transtheoretical model related to dividing the stages arbitrarily [30]. Therefore, the results of this study should be interpreted with caution. Finally, the strength of this study is that it simultaneously considered the impact of both significant others' and health care providers' recommendations on smoking cessation as factors affecting smoking cessation intention.
Conclusions
Significant others, such as family members, are important in promoting smokers' intention to quit. Although the impact of medical professionals' advice to quit smoking may be limited, such advice is nonetheless important, as it may compound the effect of advice given by significant others, in particular in the context of setting an intention to quit within 1 month. Moreover, smokers who have tried to quit within the previous year have a much higher willingness to quit than those who did not make such attempts. Overall, smokers require continuous encouragement and support to cease smoking.

Data Availability Statement: After approval of use, publicly available datasets were analyzed in this study. These data can be found at https://chs.cdc.go.kr/chs/rdr/rdrInfoProcessMain.do (accessed on 5 March 2021). | 2021-03-29T05:16:40.384Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "7823c6efe451abb4ce489ab68d9666d6bf7a7783",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/18/6/2899/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7823c6efe451abb4ce489ab68d9666d6bf7a7783",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |